WO2022065120A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2022065120A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual object
information
hand
unit
Prior art date
Application number
PCT/JP2021/033620
Other languages
English (en)
Japanese (ja)
Inventor
京二郎 永野
淳 木村
真 城間
毅 石川
大輔 田島
純輝 井上
Original Assignee
ソニーグループ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2022065120A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • This technology relates to an information processing device, an information processing method, and a program, and in particular to an information processing device, an information processing method, and a program designed to improve operability in the space provided by xR (x-Reality).
  • Japanese Unexamined Patent Publication No. 2014-09296; Japanese Unexamined Patent Publication No. 2019-008798; Japanese Patent Application Laid-Open No. 2003-067107; Japanese Patent No. 5871345
  • The operability of virtual objects may differ from that of the real world, and there is room for improvement in terms of operability.
  • This technology was made in view of such a situation, and it is intended to improve the operability in the space provided by xR (x-Reality).
  • The information processing method of the present technology is an information processing method in which the processing unit of an information processing apparatus controls, based on the attribute of the virtual object to be operated or the operation status of the virtual object, the presentation of information to any one or more of the visual, auditory, and tactile senses of the user who operates the virtual object.
  • According to the present technology, the presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object is controlled based on the attribute of the virtual object to be operated or the operation status of the virtual object.
  • FIG. 1 is a block diagram showing a configuration example of a first embodiment of an information processing apparatus to which the present technology is applied.
  • the information processing device 11 in FIG. 1 uses an HMD (Head Mounted Display) and a controller to provide a user with a space / world generated by xR (x-Reality).
  • xR is a technology that includes VR (Virtual Reality), AR (Augmented Reality), MR (Mixed Reality), SR (Substitution Reality), and the like.
  • VR is a technology that provides users with a space and world that is different from reality.
  • AR is a technology that provides users with information that does not actually exist in the real space (hereinafter referred to as the real space).
  • MR is a technology that provides users with a world that fuses the real space and virtual space of the present and the past.
  • SR is a technology that projects past images in real space to give the illusion that past events are actually happening.
  • the space provided (generated) by xR is called the xR space.
  • The xR space contains at least one of a real object existing in the real space and a virtual object. In the following, when there is no need to distinguish between a real object and a virtual object, it is simply called an object.
  • the HMD is a device equipped with a display that mainly presents images to the user.
  • In this specification, a display that is held on the head is referred to as an HMD regardless of the mounting method.
  • Instead of an HMD, a stationary display such as a PC (personal computer) monitor, or a display of a mobile terminal such as a notebook PC, a smartphone, or a tablet, may be used.
  • the controller is a device for detecting the user's operation, and is not limited to a specific type of controller.
  • Examples of the controller include a controller that the user holds in the hand, a controller that is worn on the user's hand (hand controller), and the like.
  • the information processing device 11 has a sensor unit 21, a control unit 22, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26.
  • the sensor unit 21 includes various sensors such as a camera 31, a gyro sensor 32, an acceleration sensor 33, a direction sensor 34, and a ToF (Time-of-Flight) camera 35.
  • Various sensors of the sensor unit 21 are arranged in any one of a device such as an HMD and a controller worn by the user, a device such as a controller held by the user, an environment in which the user exists, and a user's body.
  • the type of sensor included in the sensor unit 21 is not limited to the type shown in FIG.
  • the sensor unit 21 supplies the sensor information detected by various sensors to the control unit 22.
  • When the sensor unit 21 is distinguished for each device or portion where the sensors are arranged, there are a plurality of sensor units 21, but they are not distinguished in the present embodiment. When there are a plurality of sensor units 21, the types of sensors possessed by each sensor unit 21 may differ.
  • the camera 31 shoots a subject in the shooting range and supplies the image of the subject as sensor information to the control unit 22.
  • the sensor unit 21 arranged in the HMD 73 may include, for example, an outward-facing camera and an inward-facing camera as the camera 31.
  • The outward-facing camera captures the view in front of the HMD.
  • The inward-facing camera placed on the HMD captures the user's eyes.
  • the gyro sensor 32 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 22.
  • the gyro sensor 32 is arranged in, for example, an HMD or a controller.
  • the acceleration sensor 33 detects acceleration in the three orthogonal axes directions, and supplies the detected acceleration to the control unit 22 as sensor information.
  • the acceleration sensor 33 is arranged in the HMD or the controller in combination with the gyro sensor 32, for example.
  • the azimuth sensor 34 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 22.
  • the azimuth sensor 34 is arranged in the HMD or the controller in combination with the gyro sensor 32 and the acceleration sensor 33, for example.
  • the ToF camera 35 detects the distance to the subject (object) and supplies a distance image having the detected distance as a pixel value to the control unit 22 as sensor information.
  • the control unit 22 controls the output to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25 based on the sensor information from the sensor unit 21.
  • the control unit 22 includes a sensor information acquisition unit 41, a position / attitude acquisition unit 42, an object information acquisition unit 43, an intention identification unit 44, a feedback determination unit 45, an output control unit 46, and the like.
  • the sensor information acquisition unit 41 acquires sensor information detected by various sensors of the sensor unit 21.
  • the position / attitude acquisition unit 42 acquires the position and attitude of the HMD and the controller as position / attitude information based on the sensor information acquired by the sensor information acquisition unit 41.
  • the position / posture acquisition unit 42 acquires not only the position and posture of the controller itself, but also the position and posture of the user's hand or finger when the controller attached to the user's hand is used. The positions and postures of the user's hands and fingers can also be acquired based on the image taken by the camera 31.
  • the object information acquisition unit 43 acquires the object information of the object existing in the xR space.
  • the object information may be information that is pre-tagged to the object or may be dynamically extracted from the object in xR space presented to the user.
  • the object information includes intention determination information and feedback information in addition to information regarding the shape, position, and posture of the object in the xR space.
  • the intention determination information is information regarding a determination standard used for determining the presence or absence of an intention in the intention identification unit 44, which will be described later.
  • The feedback information is information about the content of the feedback presented to the user's visual, auditory, and tactile senses in response to contact with the object.
  • The intention identification unit 44 identifies (determines) the presence or absence of the user's intention with respect to the contact between the user's hand and an object, based on the position / attitude information of the controller (including the information regarding the position and posture of the user's hand or finger) acquired by the position / attitude acquisition unit 42 and the object information acquired by the object information acquisition unit 43.
  • The feedback determination unit 45 generates the content of the feedback to the user's visual, auditory, and tactile senses with respect to the contact between the user's hand and the object, based on the determination result of the intention identification unit 44 and the object information acquired by the object information acquisition unit 43.
  • The output control unit 46 generates an image (video), sound, and tactile sensation for presenting the xR space generated by a predetermined program such as an application to the user, and supplies the corresponding output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
  • the output control unit 46 controls the visual, auditory, and tactile information presented to the user.
  • The output control unit 46 reflects the content of the feedback generated by the feedback determination unit 45 when generating the image, sound, and tactile sensation to be presented to the user (the overall flow through the control unit 22 is sketched in the code below).
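  • A minimal Python sketch of this control flow (sensor information → hand position / posture → contacted object → intention determination → feedback determination) is shown below. It is illustrative only: the class names, the spherical contact test, and the speed threshold are assumptions and not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical containers standing in for the information handled by units 41-45.
@dataclass
class HandPose:
    position: tuple          # hand position in the xR space (x, y, z) in meters
    speed: float             # movement speed in m/s

@dataclass
class VirtualObject:
    object_id: str
    position: tuple
    radius: float                                     # simplified spherical extent
    feedback_info: dict = field(default_factory=dict)

def find_contact(hand: HandPose, scene: list) -> Optional[VirtualObject]:
    """Object information acquisition: return the first object the hand overlaps."""
    for obj in scene:
        d = sum((h - o) ** 2 for h, o in zip(hand.position, obj.position)) ** 0.5
        if d <= obj.radius:
            return obj
    return None

def identify_intention(hand: HandPose, obj: VirtualObject) -> bool:
    """Intention identification (placeholder: a single speed-based criterion)."""
    return hand.speed < 1.0          # threshold is an arbitrary assumption

def decide_feedback(intended: bool, obj: VirtualObject) -> Optional[dict]:
    """Feedback determination: normal feedback content only when intended."""
    return obj.feedback_info if intended else None

def control_step(hand: HandPose, scene: list) -> Optional[dict]:
    """One pass of the control loop: contact -> intention -> feedback content."""
    obj = find_contact(hand, scene)
    if obj is None:
        return None
    return decide_feedback(identify_intention(hand, obj), obj)

# A slow, deliberate touch returns the object's normal feedback content.
scene = [VirtualObject("cube", (0.0, 0.0, 0.5), 0.1, {"haptic": "click"})]
print(control_step(HandPose((0.0, 0.0, 0.45), speed=0.3), scene))   # {'haptic': 'click'}
```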
  • the video display unit 23 is a display that displays an image (video) of the output signal supplied from the output control unit 46.
  • the sound presentation unit 24 is a speaker that outputs the sound of the output signal supplied from the output control unit 46.
  • the tactile presentation unit 25 is a presentation unit that presents the tactile sensation of the output signal supplied from the output control unit 46.
  • the oscillator is used as the tactile presentation unit 25, and the tactile sensation is presented to the user by vibration.
  • the tactile presentation unit 25 is not limited to the oscillator, and may be an arbitrary type vibration generator such as an eccentric motor or a linear vibrator.
  • the tactile presentation unit 25 is not limited to the case where vibration is presented as a tactile sensation, and may be a presentation unit that presents an arbitrary tactile sensation such as pressure.
  • FIG. 2 is a block diagram showing a first configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed.
  • the hardware 61 of FIG. 2 has a display side device 71, a sensor side device 72, an HMD 73, a speaker 74, a controller 75, and a sensor 76.
  • the display side device 71 has a CPU (Central Processing Unit) 81, a memory 82, an input / output I / F (interface) 83, and a communication device 84. These components are connected by a bus so that data can be exchanged with each other.
  • the CPU 81 executes a series of instructions included in the program stored in the memory 82.
  • the memory 82 includes RAM (Random Access Memory) and storage.
  • the storage includes non-volatile memory such as ROM (Read Only Memory) and a hard disk device.
  • the memory 82 stores programs and data used in the processing of the CPU 81.
  • the input / output I / F 83 inputs / outputs signals to / from the HMD 73, the speaker 74, and the controller 75.
  • the communication device 84 controls communication with the communication device 94 of the sensor side device 72.
  • Memory 92 includes RAM and storage. Storage includes non-volatile memory such as ROM and hard disk drive. The memory 92 stores programs and data used in the processing of the CPU 91.
  • the input / output I / F 93 inputs / outputs a signal to / from the sensor 76.
  • the communication device 94 controls communication between the display side device 71 and the communication device 84.
  • the HMD 73 is a device having the video display unit 23 of FIG. 1, and is attached to the user's head to present an image (video) of the xR space to the user.
  • the HMD 73 may have a sound presentation unit 24, a camera 31, a gyro sensor 32, an acceleration sensor 33, an orientation sensor 34, and a ToF camera 35 in the sensor unit 21.
  • the speaker 74 is a device provided with the sound presenting unit 24 of FIG. 1, and presents sound to the user.
  • the speaker 74 is arranged, for example, in an environment where a user exists.
  • the controller 75 is a device having the tactile presentation unit 25 of FIG.
  • the controller 75 is attached to or gripped by the user's hand.
  • the controller 75 may have a gyro sensor 32, an acceleration sensor 33, and a direction sensor 34 in the sensor unit 21.
  • the controller 75 may have a control button.
  • the sensor 76 is a device having various sensors in the sensor unit 21 of FIG. 1, and is arranged in an environment where a user exists.
  • FIG. 3 is a block diagram showing a second configuration example of the hardware in which the information processing apparatus 11 of FIG. 1 is constructed.
  • the same parts as those in FIG. 2 are designated by the same reference numerals, and the description thereof will be omitted.
  • the hardware 61 of FIG. 3 has an HMD 73, a speaker 74, a controller 75, a sensor 76, and a display / sensor device 111. Therefore, the hardware 61 of FIG. 3 is common to the case of FIG. 2 in that it has an HMD 73, a speaker 74, a controller 75, and a sensor 76. However, the hardware 61 of FIG. 3 is different from the case of FIG. 2 in that the display / sensor device 111 is provided in place of the display side device 71 and the sensor side device 72 of FIG.
  • the display / sensor device 111 in FIG. 3 is a device in which the display side device 71 and the sensor side device 72 in FIG. 2 are integrated. That is, the display / sensor device 111 of FIG. 3 corresponds to the display side device 71 when the processing of the sensor side device 72 is performed by the display side device 71 in FIG. 2.
  • the display / sensor device 111 of FIG. 3 is common to the display side device 71 of FIG. 2 in that it has a CPU 121, a memory 122, and an input / output I / F 123. However, the display / sensor device 111 of FIG. 3 differs from the display side device 71 of FIG. 2 in that it does not have the communication device 84 and in that the sensor 76 is connected to the input / output I / F 123.
  • FIG. 4 is an external view illustrating the HMD 73.
  • the HMD 73 in FIG. 4 is a glasses-type HMD (AR glass) compatible with AR or MR.
  • the HMD 73 is fixed to the head of the user 131.
  • the HMD 73 has an image display unit 23A for the right eye and an image display unit 23B for the left eye corresponding to the image display unit 23 of FIG.
  • the video display units 23A and 23B are arranged in front of the user 131.
  • the video display units 23A and 23B are, for example, transmissive displays.
  • Virtual objects such as text, figures, or objects having a three-dimensional structure are displayed on the video display units 23A and 23B.
  • the virtual object is superimposed and displayed on the landscape in the real space.
  • Outward facing cameras 132A and 132B corresponding to the camera 31 of the sensor unit 21 of FIG. 1 are provided on the right and left edges of the front surface of the HMD 73.
  • the outward-facing cameras 132A and 132B photograph the front direction of the HMD 73. That is, the outward-facing cameras 132A and 132B capture images of the real space in the directions visually recognized by the user's right eye and left eye. For example, by using the outward-facing cameras 132A and 132B as a stereo camera, the shape, position, and posture of an object existing in the real space (real object) can be recognized.
  • the HMD 73 can be made into a video see-through HMD by superimposing the real-space image taken by the outward-facing cameras 132A and 132B and the virtual object and displaying them on the video display units 23A and 23B.
  • Inward facing cameras 132C and 132D corresponding to the camera 31 of the sensor unit 21 in FIG. 1 are provided on the right and left edges on the back side of the HMD 73.
  • the inward cameras 132C and 132D capture the user's right and left eyes.
  • From the images taken by the inward-facing cameras 132C and 132D, the position of the user's eyeballs, the position of the pupils, the direction of the line of sight, and the like can be recognized.
  • the HMD 73 may have the speaker 74 (or an earphone speaker) of FIG. 2 or a microphone.
  • the HMD 73 is not limited to the case of FIG. 4, and may be an HMD having an appropriate form corresponding to the type of xR space provided by xR (type of xR).
  • FIG. 5 is a diagram illustrating the arrangement of the sensor and the oscillator in the hand controller.
  • the IMUs (Inertial Measurement Units) 151A to 151C and the oscillators 153A to 153C in FIG. 5 are held by, for example, a holding body (not shown) of the controller (hand controller), and when the holding body is attached to the user's hand 141, they are placed at various locations on the hand 141.
  • the hand controller corresponds to the controller 75 in FIG.
  • the IMU 151A and the oscillator 153A are arranged on the back of the hand 141.
  • the IMU 151B and the oscillator 153B are placed on the thumb.
  • the IMU 151C and the oscillator 153C are arranged on the index finger.
  • the IMUs 151A to 151C are, for example, sensor units in which the gyro sensor 32, the acceleration sensor 33, and the directional sensor 34 of the sensor unit 21 of FIG. 1 are integrally packaged.
  • the position and posture of the hand, the position and posture of the thumb, and the position and posture of the index finger can be recognized based on the sensor information of the IMUs 151A to 151C.
  • the oscillators 153A to 153C present a tactile sensation to the user's hands and fingers for contact with the object.
  • the IMUs and oscillators corresponding to the IMUs 151B and 151C and the oscillators 153B and 153C are not limited to the thumb and the index finger, and may be arranged on any one or more of the five fingers.
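  • As a reference, the orientation of a segment carrying one of these IMUs can be estimated by fusing its gyro and accelerometer outputs. The complementary filter below is a generic, widely used technique shown purely for illustration; it is not the estimation method of this disclosure, and the blend factor and sample values are assumptions.

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update of a pitch/roll estimate for a single IMU (e.g. IMU 151A).

    gyro: (wx, wy, wz) angular velocity in rad/s, accel: (ax, ay, az) in m/s^2.
    """
    # Integrate the gyro angular velocity (fast but drifts over time).
    pitch_g = pitch + gyro[0] * dt
    roll_g = roll + gyro[1] * dt
    # Tilt angles from the accelerometer's gravity reference (noisy but drift-free).
    ax, ay, az = accel
    pitch_a = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_a = math.atan2(-ax, az)
    # Blend: trust the gyro for fast motion, the accelerometer to cancel drift.
    return (alpha * pitch_g + (1 - alpha) * pitch_a,
            alpha * roll_g + (1 - alpha) * roll_a)

# One 100 Hz update while the back of the hand tilts slightly about its x axis.
print(complementary_filter(0.0, 0.0, gyro=(0.5, 0.0, 0.0),
                           accel=(0.0, 0.7, 9.77), dt=0.01))
```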
  • FIG. 6 is a flowchart illustrating the processing procedure of the information processing apparatus 11.
  • In step S13, it is assumed that the user moves his / her hand and touches an object. The process proceeds from step S13 to step S14.
  • In step S14, the intention identification unit 44 acquires the object information of the object in contact with the user's hand from the object information acquisition unit 43 of FIG. 1.
  • the intention identification unit 44 determines whether or not the user intends the contact between the user's hand and the object based on the object information (intention determination information). That is, the intention identification unit 44 determines whether or not there is an intention for the contact between the hand and the object.
  • In step S15, the feedback determination unit 45 generates feedback to the user regarding the contact between the user's hand and the object.
  • Feedback on the contact between the user's hand and the object represents the presentation to the user of contact with the object.
  • the presentation to the user is made to any one or more of the user's visual sense, auditory sense, and tactile sense.
  • Generating feedback means generating feedback content for the user's visual, auditory, and tactile sensations.
  • The output control unit 46 presents to the user the image (video), sound, and tactile sensation reflecting the content of the feedback generated by the feedback determination unit 45, via the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
  • the process returns from step S15 to step S11 and is repeated from step S11.
  • The intention identification unit 44 of the information processing apparatus 11 of FIG. 1 determines the presence or absence of the user's intention with respect to the contact with a virtual object (intention presence / absence determination) when the user moves his / her hand and touches the virtual object (or when the virtual object moves and touches the user's hand). At this time, the intention identification unit 44 determines whether or not there is an intention based on the intention determination information acquired by the object information acquisition unit 43.
  • The intention determination information includes information that serves as the determination criteria, described later, used in the intention presence / absence determination. The determination criteria may be set for each virtual object that the user's hand touches, or may be common regardless of the virtual object.
  • the feedback determination unit 45 of the information processing apparatus 11 generates feedback to the user when it is determined by the intention presence / absence determination that there is an intention. The feedback in this case is normal feedback.
  • When it is determined by the intention presence / absence determination that there is no intention, the feedback determination unit 45 does not generate feedback to the user. Alternatively, instead of generating no feedback, the feedback determination unit 45 may generate feedback corresponding to the unintended contact based on the feedback information acquired by the object information acquisition unit 43. The modes of this feedback corresponding to unintended contact will be described later.
  • When a plurality of determination criteria are used, the intention identification unit 44 may determine that there is an intention as a comprehensive determination result when any one of the determination criteria indicates an intention, or may determine the comprehensive result according to which types of determination criteria indicated the presence or absence of an intention. For example, assume that priorities are preset for the plurality of determination criteria. Even if the intention identification unit 44 determines that there is an intention based on a determination criterion of a given priority, it may determine that there is no intention as the comprehensive determination result when a determination criterion with a higher priority indicates that there is no intention (a sketch of such a priority-based combination is given below).
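  • The priority-based combination described above can be pictured with the small sketch below. Each criterion reports intended (True), unintended (False), or no opinion (None), and the criterion with the highest priority that expressed an opinion wins. The priority values and the default are assumptions for illustration.

```python
def comprehensive_intention(criteria_results):
    """criteria_results: list of (priority, result) where result is True (intended),
    False (unintended) or None (criterion not applicable); larger priority wins."""
    decided = [(p, r) for p, r in criteria_results if r is not None]
    if not decided:
        return True                      # no criterion objected: treat as intended
    _, result = max(decided, key=lambda pr: pr[0])
    return result

# The gaze criterion (priority 3) says "unintended" and overrides the hand-speed
# criterion (priority 1) that said "intended".
print(comprehensive_intention([(1, True), (3, False), (2, None)]))   # False
```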
  • The first to ninth determination criteria used in the following intention presence / absence determination will be described in order.
  • An outline of the first to ninth determination criteria is as follows.
  • First criterion: whether or not the contacted object is within the viewing range (angle of view)
  • Second criterion: whether or not the hand is moving at high speed
  • Third criterion: whether or not the object is touched from the back side of the hand
  • Fourth criterion: whether or not another object is held in the hand
  • Fifth criterion: whether or not the line of sight or head ray hits the contacted object
  • Sixth criterion: the position of the contacted object with respect to the user
  • Seventh criterion: the relative distance between the head and the hand
  • Eighth criterion: the degree of opening between the index finger and the thumb
  • Ninth criterion: the relationship with the object being viewed
  • FIG. 7 is a diagram illustrating the first determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • the HMD 73 displays an image (virtual object) of the field of view range (angle of view range) 171 for the user 161.
  • The visual field range 171 may be regarded not as the visual field range of the entire image displayed by the HMD 73 to the user 161, but as a range limited to the central visual field or the like (a range of a predetermined viewing angle) in consideration of general human visual field characteristics.
  • the virtual object 181 exists outside the field of view 171 and the virtual object 182 exists inside the field of view 171.
  • FIG. 7 shows a state in which the user 161 moves the hand 162 from outside the field of view 171 to the virtual object 182 within the field of view 171.
  • The user 161 does not visually recognize virtual objects outside the visual field range 171. Therefore, while the user 161 is moving the hand 162 from outside the visual field range 171 to inside the visual field range 171, the hand 162 may unintentionally come into contact with the virtual object 181 outside the visual field range 171.
  • the intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 comes into contact with the virtual object 181 existing outside the visual field range 171.
  • the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 181.
  • The mode in which the feedback determination unit 45 generates feedback corresponding to unintended contact will be described later (hereinafter, the same applies).
  • the intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 comes into contact with the virtual object 182 existing in the visual field range 171.
  • the feedback determination unit 45 generates the content of normal feedback for the contact (collision) between the hand 162 and the virtual object 182.
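  • A minimal sketch of the first criterion, assuming the viewing range is modeled as a cone of a fixed half-angle around the HMD's forward direction; the 30-degree value and the vector representation are assumptions.

```python
import math

def inside_view(head_pos, head_forward, obj_pos, fov_half_deg=30.0):
    """True if the object lies within the assumed viewing cone of the HMD."""
    to_obj = [o - h for o, h in zip(obj_pos, head_pos)]
    norm = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
    fwd = math.sqrt(sum(c * c for c in head_forward)) or 1e-9
    cos_a = sum(a * b for a, b in zip(to_obj, head_forward)) / (norm * fwd)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a)))) <= fov_half_deg

# An object almost straight ahead counts as intended; one far to the side does not.
print(inside_view((0, 0, 0), (0, 0, 1), (0.1, 0.0, 1.0)))   # True
print(inside_view((0, 0, 0), (0, 0, 1), (1.0, 0.0, 0.2)))   # False
```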
  • FIG. 8 is a diagram illustrating a second determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 181 and a virtual object 182 exist in the space.
  • FIG. 8 shows a situation in which the hand 162 touches the virtual object 181 when the user 161 moves the hand 162 from a position away from the virtual object 181 in order to touch the virtual object 182.
  • In general, the user 161 moves the hand 162 at a somewhat high speed until immediately before touching the virtual object 182, and slows the hand 162 down immediately before touching it. Therefore, when the hand 162 touches a virtual object while moving at a speed above a predetermined threshold, it is considered that the user 161 does not intend to touch that virtual object.
  • The intention identification unit 44 therefore determines that there is no intention when the movement speed of the hand 162 at the moment of contact is at or above the threshold, and determines that there is an intention when the movement speed is less than the threshold. In FIG. 8, for example, when the hand 162 touches the virtual object 182, it is determined that there is an intention. When the intention identification unit 44 determines, in consideration of this determination result, that there is an intention as a comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
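  • A sketch of the second criterion: the hand speed is estimated by finite differences between frames and compared with a threshold. The 0.5 m/s threshold and the frame interval are assumptions.

```python
def hand_speed(prev_pos, cur_pos, dt):
    """Finite-difference speed of the hand between two consecutive frames (m/s)."""
    return sum((c - p) ** 2 for p, c in zip(prev_pos, cur_pos)) ** 0.5 / dt

def intended_by_speed(speed_mps, threshold_mps=0.5):
    """A contact made while the hand still moves fast is treated as unintended."""
    return speed_mps < threshold_mps

v = hand_speed((0.00, 0.0, 0.40), (0.02, 0.0, 0.41), dt=0.011)   # roughly 2 m/s
print(intended_by_speed(v))      # False: the hand is still sweeping toward its target
print(intended_by_speed(0.1))    # True: the hand has slowed down just before touching
```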
  • FIG. 9 is a diagram illustrating a third determination criterion.
  • the user 161 wears the HMD 73 on the head and visually recognizes the space provided by xR.
  • a virtual object 183 exists in the space.
  • Situation 173 shows a situation where the user 161 touches the virtual object 183 from the back side of the hand 162 when the hand 162 is moved.
  • Situation 174 shows a situation where the user 161 touches the virtual object 183 from the palm side of the hand 162 when the hand 162 is moved.
  • In situation 173, where the virtual object 183 is touched from the back side of the hand 162, the intention identification unit 44 determines that there is no intention.
  • the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 183.
  • The intention identification unit 44 determines that there is no intention when the hand 162 of the user 161 is holding the virtual object 184 (when another object is held in the hand).
  • the feedback determination unit 45 does not generate feedback for the contact (collision) between the hand 162 and the virtual object 185.
  • The intention identification unit 44 determines that there is an intention when the hand 162 of the user 161 is not holding the virtual object 184 (when no other object is held in the hand).
  • When the intention identification unit 44 determines that there is an intention as a comprehensive determination result in consideration of this determination result, the feedback determination unit 45 generates normal feedback for the contact (collision) between the hand 162 and the virtual object 185.
  • FIG. 11 is a diagram illustrating a fifth determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 186 and a virtual object 187 exist in the space.
  • The virtual object 187 is hit by the line of sight of the user 161 or by the head ray indicating the front direction of the HMD 73 (the virtual object 187 exists in the line-of-sight direction or the head-ray direction).
  • FIG. 11 shows a state in which the hand 162 comes into contact with the virtual object 186 while the user 161 is moving the hand 162 toward the virtual object 187 in the line-of-sight direction or the head-ray direction. In this case, since the user 161 is not looking at the virtual object 186, it is considered that the user 161 does not intend to touch the virtual object 186.
  • In this case, the intention identification unit 44 determines that there is no intention for the contact between the hand 162 and the virtual object 186.
  • The feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 186.
  • When the hand 162 comes into contact with the virtual object 187 that the line of sight or the head ray hits, the intention identification unit 44 determines that there is an intention. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as a comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 187.
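  • A sketch of the fifth criterion using a standard ray-sphere intersection test, with the contacted object approximated by a bounding sphere; the sphere model and the example coordinates are assumptions.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """True if the gaze or head ray from `origin` along `direction` hits the sphere."""
    d = math.sqrt(sum(c * c for c in direction)) or 1e-9
    u = [c / d for c in direction]                       # unit ray direction
    oc = [c - o for c, o in zip(center, origin)]
    t = sum(a * b for a, b in zip(oc, u))                # projection onto the ray
    closest_sq = sum(c * c for c in oc) - t * t          # squared distance to the ray
    return t >= 0 and closest_sq <= radius * radius

# The object on the gaze ray (like virtual object 187) is hit; an object off to the
# side (like virtual object 186) is not.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.0, 0.0, 1.0), 0.1))   # True
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.5, 0.0, 1.0), 0.1))   # False
```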
  • FIG. 12 is a diagram illustrating the sixth determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 188 and a virtual object 189 exist in the space.
  • The virtual object 188 exists in the area on the right side (right side area 190R) of the area in front of the user 161 (front area 190C).
  • the virtual object 189 exists in the front area 190C with respect to the user 161. It is assumed that the space on the front side of the user 161 is divided into three regions in the left-right direction by the boundary surface along the front direction. In this case, of the three regions, the region having a predetermined width (about shoulder width) in the left-right direction about the central axis of the user 161 is defined as the front region 190C.
  • the region on the right side with respect to the front region 190C is the right region 190R, and the region on the left side is the left region 190L.
  • FIG. 12 shows a situation where, when the user 161 moves the right hand 162R from the right side area 190R toward the virtual object 189 in the front area 190C, the right hand 162R touches the virtual object 188 in the right side area 190R.
  • When the user 161 moves the right hand 162R within the right side area 190R, the virtual object that the user wants to touch is likely to exist in front of the body (front area 190C) or on the left side (left side area 190L). That is, when the user 161 is moving the right hand 162R in the right side area 190R, there is a high possibility that the right hand 162R is being moved toward a virtual object existing in the front area 190C or the left side area 190L. Similarly, when the user 161 is moving the left hand 162L in the left side area 190L, there is a high possibility that the left hand 162L is being moved toward a virtual object existing in the front area 190C or the right side area 190R.
  • Therefore, when the right hand 162R of the user 161 touches the virtual object 188 existing in the right side area 190R (or when the left hand 162L touches a virtual object existing in the left side area 190L), the intention identification unit 44 determines that the contact is unintended. In consideration of this determination result, when the intention identification unit 44 determines that there is no intention as a comprehensive determination result, the feedback determination unit 45 does not generate feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
  • When the right hand 162R of the user 161 comes into contact with a virtual object existing in the front area 190C or the left side area 190L, or when the left hand 162L of the user 161 comes into contact with a virtual object existing in the front area 190C or the right side area 190R (for example, the virtual object 189), the intention identification unit 44 determines that there is an intention.
  • the feedback determination unit 45 generates normal feedback for the contact between the right hand 162R or the left hand 162L and the virtual object.
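  • A sketch of the sixth criterion, with the lateral offset of the contacted object expressed in a user-centered frame (+x toward the user's right) and an assumed shoulder half-width of 0.25 m.

```python
def region_of(obj_x, half_width=0.25):
    """Classify the contacted object into the left, front, or right region."""
    if obj_x > half_width:
        return "right"
    if obj_x < -half_width:
        return "left"
    return "front"

def intended_by_region(hand, obj_x, half_width=0.25):
    """Right hand touching a right-region object (or left hand / left region) is unintended."""
    region = region_of(obj_x, half_width)
    if hand == "right" and region == "right":
        return False
    if hand == "left" and region == "left":
        return False
    return True

print(intended_by_region("right", 0.6))   # False: right hand hits an object in the right region
print(intended_by_region("right", 0.0))   # True: right hand reaches an object in the front region
```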
  • FIG. 13 is a diagram illustrating a seventh determination criterion.
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • a virtual object 191 and a virtual object 192 exist in the space.
  • the virtual object 191 exists at a distance of the short-distance boundary 201 or less with respect to the head of the user 161 (for example, eyes, hereinafter, the same applies).
  • the virtual object 192 exists at a distance farther (larger) than the short-distance boundary 201 and closer (smaller) than the long-distance boundary 202 with respect to the head of the user 161.
  • The long-distance boundary 202 represents a predetermined distance from the head of the user 161, or the position at that distance.
  • the distance of the long-distance boundary 202 is farther (larger) than that of the short-distance boundary 201.
  • The distance of the long-distance boundary 202 is set to a distance beyond which, for reasons similar to those for the short-distance boundary 201, it is estimated that the user will not intend to touch a virtual object.
  • The distances of the short-distance boundary 201 and the long-distance boundary 202 do not have to be measured from the head of the user 161, and may be, for example, measured from the central axis of the body of the user 161.
  • FIG. 13 shows a state in which the hand 162 passes through the virtual object 191 while the user 161 is moving the hand 162 from a distance of the short distance boundary 201 or less to the virtual object 192.
  • Since the virtual object 191 exists at a distance equal to or less than the short-distance boundary 201, it is considered that the user 161 does not intend to touch the virtual object 191.
  • Since the virtual object 192 exists at a distance farther than the short-distance boundary 201 and closer than the long-distance boundary 202, it is considered that the user 161 intends to touch the virtual object 192.
  • When the hand 162 of the user 161 and a virtual object come into contact with each other, the intention identification unit 44 determines that there is no intention if the distance from the head of the user 161 to the contact position is equal to or less than the short-distance boundary 201, or equal to or more than the long-distance boundary 202.
  • The distance from the head of the user 161 to the contact position may be the distance from the head to the hand 162 at the time the hand 162 and the virtual object are in contact, or the distance from the head to the virtual object.
  • In FIG. 13, the contact between the hand 162 and the virtual object 191 is determined by the intention identification unit 44 to be unintended.
  • the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
  • When the hand 162 of the user 161 and a virtual object come into contact with each other, the intention identification unit 44 determines that there is an intention if the distance from the head of the user 161 to the contact position is farther than the short-distance boundary 201 and closer than the long-distance boundary 202. In FIG. 13, the contact between the hand 162 and the virtual object 192 is determined by the intention identification unit 44 to be intended. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as a comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object.
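  • A sketch of the seventh criterion: the contact position must lie within a band between the short-distance and long-distance boundaries measured from the head. The 0.2 m and 0.8 m boundary values are assumptions.

```python
def intended_by_distance(head_pos, contact_pos, near=0.2, far=0.8):
    """Contact is intended only when its distance from the head is between the boundaries."""
    d = sum((c - h) ** 2 for h, c in zip(head_pos, contact_pos)) ** 0.5
    return near < d < far

print(intended_by_distance((0.0, 1.6, 0.0), (0.0, 1.55, 0.10)))   # False: inside the near boundary
print(intended_by_distance((0.0, 1.6, 0.0), (0.0, 1.40, 0.45)))   # True: within the reachable band
```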
  • FIG. 14 is a diagram illustrating the eighth determination criterion.
  • the virtual object 221 exists in a space provided by xR to a user (referred to as user 161) who wears an HMD 73 (not shown) on his head.
  • Situation 211 shows a case where the distance between the thumb and the index finger is smaller than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
  • Situation 212 shows a case where the distance between the thumb and the index finger is larger than the size of the virtual object 221 when the hand 162 of the user 161 comes into contact with the virtual object 221.
  • When the user intends to grasp a virtual object, the distance between the thumb and the index finger becomes larger than the size of the virtual object due to the pre-shaping operation for grasping it.
  • Here, the size of the virtual object refers to the minimum distance between the thumb and the index finger with which the virtual object can be grasped.
  • When the hand 162 of the user 161 comes into contact with the virtual object 221 and the distance between the thumb and the index finger of the hand 162 is smaller than the size of the virtual object 221, the intention identification unit 44 determines that there is no intention. When the intention identification unit 44 determines that there is no intention as a comprehensive determination result in consideration of this determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object 221.
  • When the hand 162 of the user 161 comes into contact with the virtual object 221 and the distance between the thumb and the index finger of the hand 162 is larger than the size of the virtual object 221, the intention identification unit 44 determines that there is an intention. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as a comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the virtual object 221.
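  • A sketch of the eighth criterion: the thumb-index opening at the moment of contact is compared with the graspable size of the object. The positions and sizes in the example are illustrative.

```python
def finger_opening(thumb_pos, index_pos):
    """Distance between the thumb tip and the index fingertip."""
    return sum((t - i) ** 2 for t, i in zip(thumb_pos, index_pos)) ** 0.5

def intended_by_preshaping(thumb_pos, index_pos, object_size):
    """object_size: minimum thumb-index distance needed to grasp the object."""
    return finger_opening(thumb_pos, index_pos) > object_size

# Fingers opened 8 cm for a 5 cm object suggest a deliberate grab; 2 cm does not.
print(intended_by_preshaping((0.00, 0.0, 0.0), (0.08, 0.0, 0.0), 0.05))   # True
print(intended_by_preshaping((0.00, 0.0, 0.0), (0.02, 0.0, 0.0), 0.05))   # False
```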
  • the user 161 wears the HMD 73 on his head and visually recognizes the space provided by xR.
  • In the space, there are a television receiver 231, which is a virtual object; a remote controller 232, which is a virtual object related to the television receiver 231; and a virtual object 233, which is not related to the television receiver 231.
  • The line-of-sight direction of the user 161 or the head-ray direction of the HMD 73 is directed toward the television receiver 231.
  • FIG. 15 shows a state in which the user 161 touches the virtual object 233 when moving the hand 162 to take the remote controller 232 while watching the monitor of the television receiver 231.
  • When the user 161, who has been watching the television receiver 231, reaches out to grasp something, the object to be grasped is highly likely to be the remote controller 232, which is related to the television receiver 231. In this way, when the user 161 moves the hand 162, it is considered that the user 161 intends contact between the hand 162 and a virtual object that is highly related to the virtual object the user was viewing at least immediately before.
  • The intention identification unit 44 acquires the degree of relevance between the contacted virtual object and the virtual object viewed by the user 161.
  • The virtual object viewed by the user 161 means a virtual object currently viewed by the user 161, or a virtual object viewed by the user 161 until immediately before the user 161 moves the hand 162. Whether or not an object is the virtual object viewed by the user 161 can be determined from the line-of-sight direction of the user 161 or the head-ray direction of the HMD 73.
  • The degree of relevance between virtual objects is preset. For example, it may be set in two levels (high relevance or low relevance), or as a numerical value that is larger the higher the relevance.
  • When the virtual object touched by the hand 162 of the user 161 has a low relevance to the virtual object viewed by the user 161, the intention identification unit 44 determines that there is no intention. For example, when the degree of relevance is set as a continuous or stepwise numerical value, the relevance is determined to be low when it is smaller than a predetermined threshold. In FIG. 15, the intention identification unit 44 determines that there is no intention for the contact between the hand 162 of the user 161 and the virtual object 233. When the intention identification unit 44 determines that there is no intention as a comprehensive determination result in consideration of this determination result, the feedback determination unit 45 does not generate feedback for the contact between the hand 162 and the virtual object.
  • When the intention identification unit 44 determines that the virtual object touched by the hand 162 of the user 161 is highly related to the virtual object viewed by the user 161, it determines that there is an intention. For example, when the degree of relevance is set as a continuous or stepwise numerical value, the relevance is determined to be high when it is equal to or greater than a predetermined threshold. In FIG. 15, the intention identification unit 44 determines that the contact between the hand 162 of the user 161 and the remote controller 232 is intended. In consideration of this determination result, when the intention identification unit 44 determines that there is an intention as a comprehensive determination result, the feedback determination unit 45 generates normal feedback for the contact between the hand 162 and the remote controller 232.
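  • A sketch of the ninth criterion with a preset relevance table; the identifiers, scores, and threshold are assumptions chosen to mirror the television / remote controller example.

```python
# Hypothetical preset degrees of relevance between pairs of virtual objects.
RELEVANCE = {
    ("tv_231", "remote_232"): 0.9,
    ("tv_231", "object_233"): 0.1,
}

def intended_by_relevance(viewed_id, touched_id, threshold=0.5):
    """Intended when the touched object is sufficiently related to the viewed object."""
    score = RELEVANCE.get((viewed_id, touched_id),
                          RELEVANCE.get((touched_id, viewed_id), 0.0))
    return score >= threshold

print(intended_by_relevance("tv_231", "remote_232"))   # True: grabbing the remote is expected
print(intended_by_relevance("tv_231", "object_233"))   # False: unrelated virtual object
```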
  • As described above, when the intention identification unit 44 determines that there is no intention, the feedback determination unit 45 does not generate feedback. However, in that case, the feedback determination unit 45 may instead generate feedback corresponding to the unintended contact.
  • As visual feedback corresponding to unintended contact, the feedback determination unit 45 changes the brightness of the contacted virtual object. For example, when the HMD 73 is an optical see-through type AR glass, the feedback determination unit 45 generates feedback that lowers the brightness of the contacted virtual object. When the HMD 73 is a video see-through type AR glass or a smartphone-based AR glass using a smartphone, the feedback determination unit 45 generates feedback that raises the brightness of the contacted virtual object. The content of the generated feedback is reflected in the image generated by the output control unit 46, and the image is supplied to the video display unit 23. As a result, visual feedback for the unintended contact between the user's hand and the virtual object (feedback corresponding to unintended contact) is executed.
  • FIG. 16 is a diagram illustrating feedback corresponding to unintended contact.
  • the state 241 in FIG. 16 represents a virtual object 251 that is visually presented to the user before the user's hand touches it.
  • the state 242 represents the virtual object 251 presented to the user's vision in a predetermined period after the user's hand touches the virtual object 251.
  • the brightness of the virtual object 251 changes from the state 241 to the state 242 when the user's hand touches it.
  • the user can recognize that he / she has touched the virtual object 251 even if he / she does not intend to touch the virtual object 251.
  • there is no effect such as the movement of the virtual object 251 due to contact with the hand.
  • The predetermined period from the state 242 until the virtual object returns to its original brightness (state 243) may be a fixed length of time, or may be the time during which the user's hand is in contact with the virtual object 251.
  • the brightness of the virtual object in the state 242 may be changed according to the speed at which the user's hand passes through the virtual object 251. For example, when the user's hand passes through the virtual object 251 faster than a predetermined speed, the brightness of the virtual object 251 in the state 242 is reduced by about 80% as compared with the state 241. When the user's hand passes through the virtual object 251 slower than a predetermined speed, the brightness of the virtual object 251 in the state 242 is reduced by about 40% as compared with the state 241.
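  • The speed-dependent brightness change can be pictured as follows. The 80% / 40% reductions follow the example above; the speed threshold and the assumed symmetric boost for video see-through displays are illustrative assumptions.

```python
def unintended_brightness(base_brightness, hand_speed_mps,
                          optical_see_through=True, speed_threshold=0.8):
    """Brightness of the contacted object during state 242 (normalized to 0..1)."""
    factor = 0.2 if hand_speed_mps > speed_threshold else 0.6   # -80% fast, -40% slow
    if optical_see_through:
        return base_brightness * factor            # darken on an optical see-through AR glass
    return min(1.0, base_brightness / factor)      # raise brightness on a video see-through display

print(unintended_brightness(1.0, hand_speed_mps=1.5))   # 0.2  (fast pass: about -80%)
print(unintended_brightness(1.0, hand_speed_mps=0.3))   # 0.6  (slow pass: about -40%)
```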
  • The visual feedback corresponding to unintended contact is not limited to changing the brightness of the contacted virtual object, and may instead change an arbitrary element (color, etc.) of the display form of the contacted virtual object.
  • When the hand and a virtual object come into contact with each other and the intention identification unit 44 determines that there is no intention as a comprehensive determination result, the feedback determination unit 45 may generate auditory feedback that presents a sound at the moment the hand and the virtual object come into contact.
  • As a method of presenting the sound, stereophonic sound (3D audio) may be used so that the user can recognize the contacted position from the sound.
  • Similarly, the feedback determination unit 45 may generate tactile feedback that presents vibration to the user's hand at the moment the hand and the virtual object come into contact with each other.
  • The purpose of the feedback corresponding to unintended contact is to make the user aware of the existence of the virtual object. Therefore, in this feedback, for example, the feedback determination unit 45 generates feedback that presents vibration at a lower frequency and for a shorter period than the normal feedback generated when the hand and a virtual object come into contact with each other.
  • The feedback determination unit 45 may also change the tactile sensation according to the speed at which the user's hand passes through the virtual object. For example, when the speed at which the user's hand passes through the virtual object is faster than a predetermined speed, the feedback determination unit 45 generates feedback that presents low-frequency vibration for a short period (a first period). When the speed at which the user's hand passes through the virtual object is slower than the predetermined speed, the feedback determination unit 45 may generate feedback that presents low-frequency vibration for a longer period (a second period longer than the first period).
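  • A sketch of the unintended-contact vibration: a short, low-frequency burst whose duration depends on the pass-through speed. All numeric values (frequencies, durations, threshold) are illustrative assumptions.

```python
def unintended_vibration(hand_speed_mps, speed_threshold=0.8):
    """Return (frequency_hz, duration_s) for the vibrator, e.g. oscillators 153A-153C."""
    if hand_speed_mps > speed_threshold:
        return 80.0, 0.05     # fast pass: low frequency, first (shorter) period
    return 80.0, 0.15         # slow pass: low frequency, second (longer) period

def normal_vibration():
    """Normal feedback uses a higher frequency and a longer presentation time."""
    return 170.0, 0.30

print(unintended_vibration(1.2))   # (80.0, 0.05)
print(unintended_vibration(0.3))   # (80.0, 0.15)
```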
  • In the first embodiment, it is possible to reduce unintended influences on objects, such as an object moving due to an unintended contact. That is, there is a limit to the field of view that the user can see with the HMD 73, and there can be many objects in the xR space. Therefore, the user may unintentionally touch another object while moving his / her hand toward the object he / she wants to touch. At that time, if an object that the user did not intend to touch flies away or makes a sound, the user becomes confused.
  • In the information processing apparatus, since normal feedback is executed only when an intended object is touched, no feedback that confuses the user is provided, and it becomes easy for the user to perform operations in the xR space. In the first embodiment of the information processing apparatus, it is also possible to give some feedback (feedback corresponding to unintended contact) to the user when the user's hand unintentionally touches an object. This makes it possible to inform the user of the presence of an object that is not visible to the user.
  • FIG. 17 is a block diagram showing a configuration example of a second embodiment of the information processing apparatus to which the present technology is applied.
  • the information processing device 301 in FIG. 17 provides the user with the space / world generated by AR by using the HMD and the controller.
  • The second embodiment is applicable, similarly to the information processing apparatus 11 of FIG. 1 in the first embodiment, to an information processing apparatus that provides the user with a space / world generated by xR in general.
  • The space provided (generated) by AR will be referred to as the AR space.
  • In the AR space, real objects and virtual objects coexist in the real space.
  • When there is no need to distinguish between a real object and a virtual object, it is simply called an object.
  • the sensor unit 321 has an inward camera 331, an outward camera 332, a microphone 333, a gyro sensor 334, an acceleration sensor 335, and an azimuth sensor 336.
  • the inward-facing camera 331 corresponds to the inward-facing cameras 132C and 132D in the HMD 73 of FIG. 4.
  • the inward camera 331 photographs the user's eye and supplies the captured image as sensor information to the control unit 322.
  • the gyro sensor 334 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 322.
  • the gyro sensor 334 detects the angular velocity of the AR glass system 311 (HMD 73 as hardware).
  • the acceleration sensor 335 detects acceleration in the three orthogonal axis directions, and supplies the detected acceleration to the control unit 322 as sensor information.
  • the acceleration sensor 335 detects the acceleration of the AR glass system 311 (HMD 73 as hardware).
  • the azimuth sensor 336 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 322.
  • the azimuth sensor 336 detects the azimuth of the AR glass system 311 (HMD 73 as hardware).
  • The control unit 322 controls the output to the display unit 324 (corresponding to the video display unit 23 in FIG. 1), the speaker 325 (one form of the sound presentation unit 24 in FIG. 1), and the vibration presentation unit 354 in the hand controller 312 (corresponding to the tactile presentation unit 25 in FIG. 1), based on the sensor information from the sensor unit 321 and the sensor information, described later, from the hand controller 312.
  • the hand position detection unit 341 detects the position and posture of the user's hand or finger based on the sensor information from the hand controller 312 (for example, corresponding to the hand controller shown in FIG. 5).
  • the positions and postures of the user's hands and fingers are detected based on the sensor signals from the gyro sensor 351, the acceleration sensor 352, and the orientation sensor 353, which will be described later, mounted on the hand controller 312.
  • the position and posture of the user's hand or finger may be detected based on the image taken by the outward camera 332.
  • For example, the hand position detection unit 341 can detect the shape (posture) of the hand controller 312 or a marker provided on the hand controller 312 based on the image from the outward camera 332, and thereby detect the positions and postures of the hand and the fingers.
  • The hand position detection unit 341 may also detect the positions and postures (shapes) of the hand (hand controller 312) and the fingers more accurately by integrating the positions and postures detected based on the image from the outward camera 332 with those detected based on the sensor signals from the gyro sensor 351, the acceleration sensor 352, and the orientation sensor 353 mounted on the hand controller 312.
  • the object detection unit 342 detects object information (object information) representing the position and shape of the object in the AR space.
  • The object detection unit 342 detects the position and shape of a real object based on the image from the outward camera 332 or a distance image (depth information) obtained by a ToF camera (not shown; corresponding to the ToF camera 35 in FIG. 1).
  • The position and shape of the real object may be known in advance, or may be detected based on design information of the environment (building, etc.) in which the user exists or on information acquired from another system.
  • the application execution unit 343 generates an AR space to be provided to the user by executing a program of a predetermined application.
  • The application execution unit 343 detects whether or not the user's hand and any object are approaching / contacting (approaching or in contact with) each other, based on the position and posture of the user's hand detected by the hand position detection unit 341 and the position and shape of the object detected by the object detection unit 342. The application execution unit 343 also detects the object that the hand is approaching or in contact with.
  • In the feedback generation process, the application execution unit 343 generates the content of the feedback to the user's visual, auditory, and tactile senses when the user's hand approaches or touches an object.
  • The output control unit 344 generates an image (video), sound, and tactile sensation for presenting the AR space generated by the application execution unit 343 to the user, and generates output signals to be output by the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively.
  • the output control unit 344 supplies the generated output signal to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312. As a result, the output control unit 344 controls the visual, auditory, and tactile information presented to the user.
  • the speaker 325 is a sound presentation unit that outputs the sound of the output signal supplied from the output control unit 344.
  • the storage unit 326 stores programs and data executed by the control unit 322.
  • the gyro sensor 351 detects the angular velocity around the three orthogonal axes, and supplies the detected angular velocity as sensor information to the control unit 322 of the AR glass system 311.
  • the gyro sensor 351 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and detects the angular velocity of the hand and the finger.
  • the acceleration sensor 352 detects the acceleration in the orthogonal three-axis directions, and supplies the detected acceleration as sensor information to the control unit 322 of the AR glass system 311.
  • the acceleration sensor 352 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and detects the acceleration of the hand and the finger.
  • the azimuth sensor 353 detects the direction of the geomagnetism as the azimuth, and supplies the detected azimuth as sensor information to the control unit 322 of the AR glass system 311.
• The azimuth sensor 353 is arranged on the hand and the fingers as in the hand controller of FIG. 5, and detects the orientation of the hand and the fingers.
• The vibration presentation unit 354 generates vibration according to the output signal supplied from the control unit 322 (output control unit 344) of the AR glass system 311.
  • the vibration presenting unit 354 is arranged on the hand and the finger as in the case of the hand controller of FIG. 5, and gives vibration to the hand and the finger.
  • FIG. 18 is a flowchart illustrating the processing procedure of the information processing apparatus 301.
• In step S31, the application execution unit 343 of FIG. 17 determines, by the approach/contact detection, whether or not the user's hand is approaching or contacting an object (whether or not it is in the approaching/contacting state).
• If it is determined in step S31 that the hand is not in the approaching/contacting state, step S31 is repeated.
• If it is determined in step S31 that the hand is in the approaching/contacting state, the process proceeds from step S31 to step S32.
• In step S32, the application execution unit 343 determines, by the visual detection, whether or not the user can see the object that is approaching or in contact with the user's hand (whether it is in the visible state or the non-visible state).
• If it is determined in step S32 that the object is in the visible state, the process proceeds from step S32 to step S33.
• In step S33, the application execution unit 343 generates, by the feedback generation process, normal (conventional) feedback to the user regarding the approach/contact between the hand and the object.
• This normal feedback has the same meaning as the normal feedback described for the first embodiment of the information processing apparatus.
  • the application execution unit 343 generates feedback that presents a realistic tactile sensation (vibration) to the user's hand according to the movement of the object and the hand.
  • the output control unit 344 generates a tactile sensation (vibration) that reflects the feedback of the content generated by the application execution unit 343.
  • the output control unit 344 supplies the generated tactile sensation (vibration) to the vibration presentation unit 354 of the hand controller 312 as an output signal.
• In this way, normal feedback to the user's tactile sense, corresponding to the visible state, is executed for the approach/contact between the user's hand and the object.
  • the process returns from step S33 to step S31 and is repeated from step S31.
• If it is determined in step S32 that the object is in the non-visible state, the process proceeds from step S32 to step S34.
• In step S34, the application execution unit 343 identifies the object that is approaching or in contact with the user's hand. The process proceeds from step S34 to step S35.
• Note that the object approaching or contacting the user's hand may instead be identified at the same time as it is detected in step S31, by the approach/contact detection, that the user's hand is approaching or contacting an object.
• In step S35, the application execution unit 343 generates, by the feedback generation process, feedback to the user's visual, auditory, and tactile senses according to the properties of the object identified in step S34.
• The output control unit 344 generates an image (video), sound, and tactile sensation reflecting the content of the feedback generated by the application execution unit 343, and supplies them as output signals to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively.
  • the process returns from step S35 to step S31 and is repeated from step S31.
• In the approach/contact detection, the application execution unit 343 of the information processing apparatus 301 in FIG. 17 detects that the user's hand and an object are close to or in contact with each other. In this approach/contact detection, the application execution unit 343 acquires the position and posture of the user's hand detected by the hand position detection unit 341 and the position and shape (object information) of each object in the AR space detected by the object detection unit 342.
• Based on the acquired information, the application execution unit 343 determines, for each object, whether or not the distance between the user's hand and the object (the distance between the closest parts of the region of the user's hand and the region of the object) is equal to or less than a predetermined threshold value. As a result of the determination, when there is an object whose distance from the user's hand is equal to or less than the threshold value (hereinafter referred to as a target object), it is detected that the user's hand and the target object are close to or in contact with each other.
• The state in which the user's hand and the object are close to each other may be regarded as a state in which they are in contact with each other; that is, the approach may be included in the contact.
• In the following, the feedback for the case where the user's hand and the object come into contact with each other is described, but the same form of feedback may also be applied to the approach.
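• As an illustrative aid (not part of the disclosed embodiment itself), the approach/contact detection described above can be sketched as a nearest-point distance test between the hand region and each object region; the point-set representation, the 2 cm threshold, and the function names below are assumptions.

```python
import numpy as np

CONTACT_THRESHOLD_M = 0.02  # illustrative threshold (2 cm); the actual value is a design choice


def closest_distance(hand_points: np.ndarray, object_points: np.ndarray) -> float:
    """Distance between the closest parts of the hand region and an object region,
    with both regions approximated as point sets of shape (N, 3)."""
    diffs = hand_points[:, None, :] - object_points[None, :, :]
    return float(np.min(np.linalg.norm(diffs, axis=-1)))


def detect_target_objects(hand_points, objects):
    """Return the objects whose distance to the hand is at or below the threshold
    (the 'target objects' in the text). `objects` maps an object id to its point set."""
    return [
        obj_id
        for obj_id, obj_points in objects.items()
        if closest_distance(hand_points, obj_points) <= CONTACT_THRESHOLD_M
    ]
```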
• Visual detection: When the application execution unit 343 detects that the user's hand and an object are close to or in contact with each other, it detects, by the visual detection, whether or not the user is looking at the approaching/contacting target object (whether it is in the visible state or the non-visible state). In this visual detection, the application execution unit 343 determines whether it is the visible state or the non-visible state by using the first to fifth detection conditions described later.
• The detection is not limited to using all of the first to fifth detection conditions; any one or more of the first to fifth detection conditions may be used for the visual detection.
• When the application execution unit 343 performs the visual detection using a plurality of detection conditions and any one of those conditions indicates the non-visible state, the comprehensive (final) detection result is taken to be the non-visible state. Alternatively, the comprehensive detection result may be taken to be the visible state when any one of the detection conditions indicates the visible state, or the comprehensive detection result may be determined according to which of the plurality of detection conditions indicate the visible state or the non-visible state. For example, priorities may be preset for the plurality of detection conditions.
• The first detection condition to the fifth detection condition are as follows; each is expressed as a condition for detecting the non-visible state.
• First detection condition: The target object does not exist within the user's field of view (cases such as the user's eyes being closed or the user being blind also apply).
• Second detection condition: The target object does not exist within the field of view of the AR glass system 311 (HMD 73).
• Third detection condition: The target object is in the user's peripheral visual field.
• Fourth detection condition: The target object has never been visually recognized in the user's central visual field.
• Fifth detection condition: The target object is shielded by another object (the target object exists in a box, in hot water, in smoke, or the like).
  • the positions and postures of the user's hand and the target object in the AR space are detected by the finger position detection unit 341 and the object detection unit 342 as information that the application execution unit 343 appropriately refers to when performing visual detection.
• The orientation (head-ray direction) of the AR glass system 311 is detected based on the sensor information from the gyro sensor 334, the acceleration sensor 335, and the azimuth sensor 336.
  • the user's line-of-sight direction is detected by the image of the user's eyes from the inward camera 331.
• In the visual detection using the first detection condition, the application execution unit 343 specifies the user's visual field range in the AR space based on the user's line-of-sight direction.
• When the target object does not exist within the specified visual field range of the user, the application execution unit 343 detects the non-visible state.
• When the target object exists within the visual field range, the application execution unit 343 detects the visible state.
• The non-visible state may also be detected when the user's eyes are closed. Whether or not the user's eyes are closed is detected from the image of the user's eyes captured by the inward camera 331.
• The non-visible state may also be detected, even when the target object exists within the user's visual field range, when the user is blind. Information on whether or not the user is visually impaired can be obtained from another system or the like. When the user is visually impaired, the application execution unit 343 may acquire information on the visual field range in which the user can see stably, and may detect the non-visible state when the target object does not exist within that visual field range.
• In the visual detection using the second detection condition, the application execution unit 343 specifies the visual field range of the AR glass system 311 (HMD 73) in the AR space based on the orientation (head-ray direction) of the AR glass system 311, and detects the non-visible state when the target object does not exist within that visual field range.
• When the target object exists within the visual field range of the AR glass system 311, the application execution unit 343 detects the visible state.
• In the visual detection using the fourth detection condition, the application execution unit 343 records the history of the range of the user's central visual field for a predetermined time. When the position of the target object is not included in any of the recorded central visual field ranges, the application execution unit 343 regards the target object as never having been visually recognized in the user's central visual field and detects the non-visible state. When the position of the target object is included in the recorded range of the central visual field even once (that is, when the target object existed within the central visual field at some time during the predetermined time from the present to the past), the application execution unit 343 detects the visible state.
• In the visual detection using the fifth detection condition, the application execution unit 343 detects whether another object exists between the target object and the user's head.
  • the existence of other objects can be grasped based on the design information of the environment (building, etc.) in which the user exists and the information from other systems.
  • the application execution unit 343 detects that the target object is invisible when another object that shields the target object exists between the target object and the user's head. For example, when the target object is in a box, hot water, or smoke, the application execution unit 343 detects that it is invisible.
  • the application execution unit 343 detects that it is in the visual state when there is no other object that shields the target object between the target object and the user's head.
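• The combination of the first to fifth detection conditions into a comprehensive detection result (non-visible if any used condition indicates the non-visible state, as described above) might be sketched as follows; the boolean flags are assumed to be computed elsewhere, and the class and field names are illustrative.

```python
from dataclasses import dataclass


@dataclass
class VisibilityConditions:
    """Each flag is True when the corresponding condition indicates the NON-visible state."""
    outside_user_fov: bool          # first condition (includes closed eyes / blindness)
    outside_device_fov: bool        # second condition (AR glass / HMD field of view)
    only_in_peripheral_fov: bool    # third condition
    never_in_central_fov: bool      # fourth condition (history over a predetermined time)
    occluded_by_other_object: bool  # fifth condition (box, liquid, smoke, etc.)


def is_non_visible(c: VisibilityConditions) -> bool:
    """Comprehensive detection result: non-visible if any used condition indicates it."""
    return any([
        c.outside_user_fov,
        c.outside_device_fov,
        c.only_in_peripheral_fov,
        c.never_in_central_fov,
        c.occluded_by_other_object,
    ])
```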
• After performing the visual detection using any one or more of the first to fifth detection conditions described above, the application execution unit 343 generates, by the feedback generation process, feedback to the user regarding the approach/contact between the user's hand and the target object.
• In the feedback generation process, the application execution unit 343 generates normal feedback when the visual detection has determined the visible state.
  • the application execution unit 343 presents a realistic tactile sensation (vibration) to the user's hand according to the movement between the target object and the hand.
• When generating the image, sound, and tactile sensation for presenting the AR space to the user, the output control unit 344 generates a tactile sensation (vibration) that reflects the content of the feedback generated by the application execution unit 343.
  • the output control unit 344 supplies the generated image, sound, and tactile sensation as output signals to the display unit 324, the speaker 325, and the vibration presentation unit 354 of the hand controller 312, respectively.
• In this way, normal feedback to the user's tactile sense, corresponding to the visible state, is executed for the approach/contact between the hand and the target object.
  • the usual feedback is not limited to this.
  • FIG. 19 is a diagram illustrating a first embodiment of feedback corresponding to a non-visual state.
• The target object 381 and the target object 382 each represent a target object that is close to or in contact with the user's hand and that the user cannot see.
  • the target object 381 is smaller than the target object 382.
• When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the size of the target object from the object information (properties of the target object) detected by the object detection unit 342. The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) having a frequency corresponding to the acquired size of the target object. For example, the application execution unit 343 generates feedback of a higher-frequency sound or vibration (tactile sensation) as the size of the target object is smaller. In FIG. 19, the target object 381 in the situation 371 is smaller than the target object 382 in the situation 372. The application execution unit 343 therefore makes the frequency of the feedback sound or vibration for the approach/contact between the hand and the target object 381 higher than that for the approach/contact between the hand and the target object 382. Both sound and vibration may be presented to the user.
  • the application execution unit 343 may generate a sound or vibration feedback having an amplitude according to the size of the target object. For example, the application execution unit 343 generates feedback of sound or vibration having a smaller amplitude as the size of the target object is smaller.
  • the user can be made to recognize the size of the target object in the invisible state by the volume of sound or the magnitude of vibration.
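• A minimal sketch of the size-to-frequency mapping described above is given below; the frequency range, the reference size, and the amplitude range are illustrative assumptions, and only the monotonic relationship (smaller object, higher frequency and smaller amplitude) follows the text.

```python
def feedback_for_size(object_size_m: float,
                      min_freq_hz: float = 80.0,
                      max_freq_hz: float = 400.0,
                      max_size_m: float = 0.5) -> tuple[float, float]:
    """Map the target object's size to a sound/vibration frequency and amplitude:
    smaller objects -> higher frequency and smaller amplitude."""
    ratio = min(max(object_size_m / max_size_m, 0.0), 1.0)  # clamp to [0, 1]
    frequency_hz = max_freq_hz - (max_freq_hz - min_freq_hz) * ratio
    amplitude = 0.2 + 0.8 * ratio  # normalized amplitude in [0.2, 1.0]
    return frequency_hz, amplitude
```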
  • FIG. 20 is a diagram illustrating a second modification of the first embodiment of feedback corresponding to the invisible state.
  • the user 411 wears the AR glass 401 representing the AR glass system 311 of FIG. 17 as hardware on the head, and visually recognizes the image 402 of the AR space.
  • the target object 421 and the target object 422 are target objects that have approached or touched the hand 412 of the user 411, respectively, and represent the target objects that the user cannot see.
  • the target object 421 is smaller than the target object 422.
  • the application execution unit 343 generates feedback that presents the image of the figure enlarged or reduced at a magnification according to the size of the target object to the user. For example, the application execution unit 343 generates feedback of an image representing a circular figure having a smaller magnification as the size of the target object is smaller.
• The target object 421 in the situation 391 is smaller than the target object 422 in the situation 392. Therefore, the application execution unit 343 makes the enlargement magnification of the circular figure 403, which is superimposed on the image 402 of the AR space and presented as feedback for the approach/contact between the hand and the target object 421, smaller than the magnification of the circular figure 403 presented as feedback for the approach/contact between the hand and the target object 422.
  • the figure presented by feedback may have a shape other than a circular shape.
  • the size of the target object can be recognized by the feedback to the user's visual, auditory, or tactile sense.
• For the user, such information would be redundant if the target object were visible; however, when the target object is not visible, the information that would otherwise be obtained by visually observing the target object is lacking, so the information is useful.
  • Such useful information can be presented to the user by feedback to the user's visual, auditory, or tactile sensations.
  • the second embodiment is an embodiment for generating feedback for presenting the number of target objects in the invisible state to the user.
  • the waveforms 441 to 443 in FIG. 21 represent the waveforms of sound or vibration (tactile sensation) presented by feedback.
• When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the number of target objects in the non-visible state from the object information (properties of the target objects) detected by the object detection unit 342.
  • the number of target objects means the number of target objects that the user's hands approach and touch at the same time.
  • the application execution unit 343 generates feedback that presents the user with the sound or vibration (tactile sensation) of the waveform of the peak number according to the number of acquired target objects.
  • the presentation time of sound or vibration by feedback is determined to be a fixed time.
  • the application execution unit 343 sets the frequency of the sound or vibration so that the number of peaks of the sound or vibration waveform within the presentation time becomes a number corresponding to the number of target objects. For example, the application execution unit 343 generates feedback of a waveform (frequency) having a smaller number of peaks or vibration as the number of target objects is smaller.
• In FIG. 21, the number of peaks of the sound or vibration waveform presented by the feedback increases in the order of waveform 441, waveform 442, and waveform 443, which presents to the user that the number of target objects increases in that order.
• The application execution unit 343 may also generate feedback that presents a rough number of target objects.
  • FIG. 22 is a diagram illustrating feedback for presenting a rough number of target objects.
• In that case, the application execution unit 343 sets the frequency of the sound or vibration presented to the user, as shown in the waveform of FIG. 22, to a fixed frequency that is easy for humans to perceive.
  • the application execution unit 343 sets the presentation time of the sound or vibration at the fixed frequency to the length corresponding to the number.
  • the application execution unit 343 generates feedback of sound or vibration having a constant frequency and with a presentation time corresponding to the number of target objects.
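• The two count-presentation schemes described above (an exact count via the number of waveform peaks within a fixed presentation time, and a rough count via a fixed frequency with a count-proportional presentation time) might be sketched as follows; the presentation time and the fixed frequency values are illustrative assumptions.

```python
PRESENTATION_TIME_S = 0.5    # fixed presentation time (illustrative)
PERCEPTIBLE_FREQ_HZ = 200.0  # fixed, easily perceivable frequency (illustrative)


def exact_count_feedback(num_targets: int) -> float:
    """Exact count: choose the waveform frequency so that the number of peaks
    within the fixed presentation time equals the number of target objects."""
    return num_targets / PRESENTATION_TIME_S  # peaks per second


def rough_count_feedback(num_targets: int, seconds_per_object: float = 0.1) -> tuple[float, float]:
    """Rough count: keep the frequency fixed and make the presentation time
    proportional to the number of target objects."""
    return PERCEPTIBLE_FREQ_HZ, num_targets * seconds_per_object
```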
  • the number of target objects in the invisible state can be recognized by the feedback to the user's auditory sense or tactile sense.
• For the user, such information would be redundant if the target object were visible; however, when the target object is not visible, the information that would otherwise be obtained by visually observing the target object is lacking, so the information is useful.
  • Such useful information can be presented to the user by feedback to the user's auditory or tactile sensation.
  • the third embodiment is an embodiment for generating feedback for presenting the color (including brightness, pattern, etc.) of the target object in the invisible state to the user.
• When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the color of the target object in the non-visible state from the object information (properties of the target object) detected by the object detection unit 342.
  • the application execution unit 343 generates feedback that presents the sound or vibration (tactile sensation) of the frequency corresponding to the acquired brightness of the target object to the user. For example, the application execution unit 343 generates feedback of high frequency sound or vibration as the brightness of the target object increases.
  • the height of the brightness of the target object in the invisible state is presented to the user by the frequency of the feedback sound or vibration.
• The application execution unit 343 also acquires the complexity of the pattern of the target object in the non-visible state (the height of its spatial frequency) from the object information (properties of the target object) detected by the object detection unit 342.
  • the application execution unit 343 generates feedback that presents the user with a sound or vibration (tactile sensation) having a frequency corresponding to the height of the spatial frequency of the acquired pattern of the target object. For example, the application execution unit 343 generates feedback of high frequency sound or vibration as the spatial frequency of the pattern of the target object is higher. As a result, the more complicated the pattern of the target object is, the higher the frequency of the sound or vibration is presented to the user. The more monotonous the pattern of the target object is, the lower the frequency of sound or vibration is presented to the user.
• In this way, the complexity of the pattern of the target object in the non-visible state is presented to the user by the frequency of the feedback sound or vibration. That is, the color of the target object (information about its color) can be recognized by feedback to the user's auditory or tactile sense. For the user, such information would be redundant if the target object were visible; however, when the target object is not visible, the information that would otherwise be obtained by visually observing the target object is lacking, so the information is useful. Such useful information can be presented to the user by feedback to the user's auditory or tactile sense.
  • the fourth embodiment is an embodiment for generating feedback for presenting the type of the target object in the invisible state to the user.
• When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the type of the target object in the non-visible state from the object information (properties of the target object) detected by the object detection unit 342.
• The application execution unit 343 generates feedback that presents to the user a sound or vibration (tactile sensation) repeated a number of times corresponding to the acquired type of the target object.
  • the number of sounds or vibrations may be the number of peaks of the waveform, or the number of intermittent sounds or vibrations, as in the second embodiment.
  • the number of times according to the type of the target object means the number of times of sound or vibration associated with the type of the target object in advance.
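• The pre-association between object types and the number of sound or vibration repetitions might be represented as a simple lookup table, as sketched below; apart from the pistol and knife values following the example of FIG. 23 described next, the entries and the fallback value are assumptions.

```python
# Illustrative pre-registered association between object types and repetition counts.
PULSES_BY_TYPE = {
    "pistol": 1,
    "knife": 2,
}
DEFAULT_PULSES = 3  # assumed fallback for unregistered types


def pulses_for_object_type(object_type: str) -> int:
    """Number of sound/vibration repetitions associated in advance with the object type."""
    return PULSES_BY_TYPE.get(object_type, DEFAULT_PULSES)
```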
  • FIG. 23 is a diagram illustrating a fourth embodiment of feedback corresponding to the invisible state.
  • the target object example 461 in FIG. 23 represents the case where the target object 471 is a pistol.
  • the target object example 462 represents a case where the target object 472 is a knife.
  • the circle mark 481 and the circle mark 482 at the top of the target object 471 and the target object 472 represent the number of sounds or vibrations associated with the types of the target object 471 and the target object 472, respectively. Since the number of circle marks 481 is one for the target object 471, the number of times of the associated sound or vibration is one. Since the number of circle marks 482 is two for the target object 472, the number of associated sounds or vibrations is two. In this way, the number of sounds or vibrations is associated with the type of target object in advance.
• The user may memorize the number of sounds or vibrations associated with each type of target object.
  • the application execution unit 343 may present images of the circle mark 481 and the circle mark 482 as shown in FIG. 23 in the vicinity of the target object.
• In this way, the type of the target object in the non-visible state is presented to the user by the number of feedback sounds or vibrations. That is, the type of the target object can be recognized by feedback to the user's auditory or tactile sense. For the user, such information would be redundant if the target object were visible; however, when the target object is not visible, the information that would otherwise be obtained by visually observing the target object is lacking, so the information is useful. Such useful information can be presented to the user by feedback to the user's auditory or tactile sense.
  • a fifth embodiment is an embodiment that generates feedback for presenting the orientation of the target object in the invisible state.
• FIGS. 24 and 25 are diagrams illustrating a fifth embodiment of feedback corresponding to the non-visible state.
• In FIG. 24, the target object 501 is arranged laterally with respect to the palm of the hand 412. That is, the axis direction of the target object 501 lies along the palm of the user's hand 412 and is perpendicular to the axis direction of the fingers.
• In FIG. 25, the target object 501 is arranged vertically with respect to the palm of the hand 412. That is, the axis direction of the target object 501 lies along the palm of the user's hand 412 and is parallel to the axis direction of the fingers.
  • the application execution unit 343 changes the position of the oscillator that presents vibration when the user moves the hand 412 among the oscillators of the vibration presentation unit 354 of the hand controller 312 according to the orientation of the detected target object.
  • the oscillators are arranged, for example, at the fingertips of the thumb, the fingertips of the index finger, and the back of the hand as shown in FIG. However, the oscillators may be arranged in more parts.
• For example, when the user moves the hand 412 forward in a situation where the target object 501 is arranged laterally with respect to the palm of the user's hand 412, as in the situation 491 of FIG. 24, the application execution unit 343 changes the position of the oscillator that presents vibration from an oscillator arranged on the fingertip side of the hand 412 to an oscillator arranged on the base side of the fingers.
• Alternatively, the application execution unit 343 may cause only the oscillators arranged on some of the fingers to present the vibration.
  • the change in the position of the oscillator that presents vibration is not limited to the above case. It suffices if the difference in the orientation of the target object in the invisible state and the difference in the movement of the hand can be recognized by the difference in the position of the oscillator that presents the vibration.
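• One possible policy for switching the vibrating oscillators according to the orientation of the target object and the hand movement is sketched below; the oscillator identifiers and the specific selection rule are illustrative assumptions.

```python
# Illustrative oscillator identifiers; oscillators are described as being placed at
# the thumb tip, the index fingertip, and the back of the hand, but more may exist.
FINGERTIP_OSCILLATORS = ["thumb_tip", "index_tip"]
BASE_OSCILLATORS = ["hand_back"]


def select_oscillators(object_lateral_to_palm: bool, hand_moving_forward: bool) -> list[str]:
    """Choose which oscillators present vibration according to the target object's
    orientation relative to the palm and the hand movement (one possible policy)."""
    if object_lateral_to_palm and hand_moving_forward:
        # Shift the vibration from the fingertip side toward the base side of the hand.
        return BASE_OSCILLATORS
    # Otherwise, e.g. when the object lies along the finger axis, vibrate only some fingers.
    return ["index_tip"]
```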
  • a sixth embodiment is an embodiment that generates feedback for showing that the target object in the invisible state required by the user is in the vicinity of the user's hand. For example, a user may want to pick up the necessary tools while keeping an eye on something. In such a case, the user moves his / her hand to the approximate position where the target tool is placed without looking at the target tool. At this time, in the sixth embodiment, when the hand moves near the target tool, the user is presented with the feedback that the target tool is near the hand.
  • FIG. 26 is a diagram illustrating a sixth embodiment of feedback corresponding to the invisible state.
  • the user 411 wears the AR glass 401 representing the AR glass system 311 of FIG. 17 as hardware on the head, and visually recognizes the image of the engine 511 in the AR space.
  • the user 411 is in a state of gazing at the attention point with respect to the engine 511 with the finger of the left hand 412L.
  • the user 411 keeps the state, moves the right hand 412R to the place where the unused yellow sticky note (virtual object) is placed, picks the sticky note, and tries to paste the sticky note in the place to be noted.
  • User 411 repeats the operation and work of pasting a yellow sticky note.
  • the application execution unit 343 detects (predicts) the target object that the user 411 wants to grasp, depending on the operation of the user 411, the fixed process, and the like.
• In this example, the yellow sticky note is detected as the target object that the user 411 wants to grasp.
  • the application execution unit 343 generates feedback that presents vibration to the user's hand.
  • the vibration may be a simple sine wave.
  • the feedback may be the presentation of sound rather than vibration.
  • the application execution unit 343 when the user's right hand 412R approaches the yellow sticky note, the application execution unit 343 generates feedback that presents vibration to the user's right hand 412R or the like.
• The seventh embodiment generates feedback for showing that a target object in the non-visible state that is dangerous to touch (that may cause an accident, disadvantage, or the like) is in the vicinity of the user's hand.
  • FIG. 27 is a diagram illustrating a seventh embodiment of feedback corresponding to the invisible state.
  • the situation 521 shows a case where the target object approaching / contacting the user's hand 412 is a cup (real object) containing a liquid.
  • Situation 522 shows a case where the target object approaching / contacting the user's hand 412 is a heated kettle (real object).
  • Situation 523 shows the case where the target object approaching / contacting the user's hand 412 is a selected image of the credit card to be used.
• When generating feedback corresponding to the non-visible state in the feedback generation process, the application execution unit 343 acquires the type of the target object in the non-visible state from the object information (properties of the target object) detected by the object detection unit 342. The application execution unit 343 then determines whether or not the target object in the non-visible state is dangerous based on the acquired object information.
  • Whether or not the target object is dangerous may be set as object information in advance, or may be determined based on the environment in the real space.
  • the application execution unit 343 generates feedback that presents an image, sound, or vibration (tactile sensation) to the user when the target object in the invisible state is dangerous. This presents, for example, an image, sound, or vibration (tactile sensation) that indicates danger to the user. If the risk level of the target object can be acquired, the application execution unit 343 may generate feedback that presents the user with a sound or vibration having an amplitude corresponding to the risk level of the target object. For example, the higher the risk of the target object, the larger the amplitude of the sound or vibration may be presented to the user. An image of a color or size corresponding to the degree of danger of the target object may be presented to the user. When the target object is a virtual object, the application execution unit 343 may invalidate the operation on the target object.
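• A sketch of the danger feedback policy is shown below; the normalized risk level, the amplitude range, the warning color, and the threshold for invalidating operations on a virtual object are illustrative assumptions.

```python
def danger_feedback(risk_level: float, is_virtual: bool) -> dict:
    """One possible danger feedback policy: the higher the risk, the larger the
    sound/vibration amplitude, and operations on a dangerous virtual object may be
    invalidated. `risk_level` is assumed to be normalized to [0, 1]."""
    risk = min(max(risk_level, 0.0), 1.0)
    return {
        "amplitude": 0.3 + 0.7 * risk,                   # normalized output amplitude
        "warning_color": "red" if risk > 0.5 else "yellow",
        "disable_operation": is_virtual and risk > 0.8,  # invalidate operation on the virtual object
    }
```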
  • the application execution unit 343 may generate feedback that presents an image, sound, or vibration to the user only when the user performs a specific action.
• Alternatively, the application execution unit 343 may generate the following two-step feedback. For example, the application execution unit 343 first generates feedback that presents an image, sound, or vibration to the user as simple primary information in order to inform the user of the existence of a dangerous target object in the non-visible state. After that, when the user shows interest and performs a specific hand action such as "stroking", the application execution unit 343 generates feedback that presents an image, sound, or vibration to the user as further information.
  • action such as "stroking”
• The application execution unit 343 may also cause feedback that presents an image, sound, or vibration not only to the user himself or herself but also to others (to the AR glass systems used by others) via communication by the communication unit 323.
• The application execution unit 343 may also notify a management center from the communication unit 323 via a network or the like so that related parties can be contacted. This makes it possible for people around the user to know that the user is performing a dangerous operation and to provide support.
• According to the seventh embodiment of feedback corresponding to the non-visible state described above, when the user's hand approaches or touches a dangerous object in the non-visible state, a feedback image, sound, or vibration is presented to the user. That is, the user can recognize, through feedback to the visual, auditory, or tactile sense, that his or her hand has approached or touched a dangerous object that cannot be seen.
• For the user, such information would be redundant if the target object were visible; however, when the target object is not visible, the information that would otherwise be obtained by visually observing the target object is lacking, so the information is useful.
  • Such useful information can be presented to the user by feedback to the user's visual, auditory, or tactile sensations.
• As described above, in the xR space provided to the user, when the user's hand and an object come close to or into contact with each other, normal feedback is executed while the user is visually recognizing the object. In the normal feedback, for example, a realistic tactile sensation is presented to the user.
• When the user is not visually recognizing the object, feedback corresponding to the non-visible state is executed. In the feedback corresponding to the non-visible state, information that is easily obtained in the visible state but cannot be obtained in the non-visible state is presented to the user by feedback to the user's visual, auditory, or tactile sense.
• When the target object is not visible, the user often wants to know the properties of the target object. Therefore, in the feedback corresponding to the non-visible state, attributes such as the size, number, color, type, orientation, necessity, and danger of the object that the user cannot see are presented to the user. By switching the feedback in this way between the visible state and the non-visible state when the user's hand and an object come close to or into contact with each other, the feedback to the user serves both as a means of providing a realistic experience in the xR space and as a means of providing useful information to the user.
  • Patent Document 2 Japanese Unexamined Patent Publication No. 2019-008798 proposes to present tactile sensation according to the position visually observed.
  • the technique of Patent Document 2 aims to direct the line of sight to a point of interest by stimulating the tactile sensation of the user when the user is looking away from the place (area of interest) that the user wants to see.
• In the technique of Patent Document 2, however, it is not detected whether or not the user is visually recognizing the target object at hand, and information corresponding to the target object is not presented, as is done in the second embodiment of the present information processing apparatus. Therefore, the technical content of the second embodiment of the present information processing apparatus differs significantly from the technique of Patent Document 2.
  • FIG. 28 is a block diagram showing a configuration example of a third embodiment of the information processing apparatus to which the present technology is applied.
  • the same reference numerals are given to the parts common to the information processing apparatus 11 of FIG. 1, and the description thereof will be omitted.
• The information processing device 601 uses the HMD 73 of FIG. 4 and the controller of FIG. 5 (the controller 75 of FIG. 2) to provide the user with the space/world generated by xR.
  • the information processing device 601 has a sensor unit 21, a control unit 612, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26. Therefore, the information processing device 601 is common to the information processing device 11 of FIG. 1 in that it has a sensor unit 21, a video display unit 23, a sound presentation unit 24, a tactile presentation unit 25, and a storage unit 26. However, the information processing apparatus 601 is different from the information processing apparatus 11 of FIG. 1 in that the control unit 612 is provided instead of the control unit 22 of FIG.
  • the information processing device 601 can be built with the hardware 61 shown in FIGS. 2 and 3.
  • the position / attitude acquisition unit 622 acquires the positions and attitudes of the HMD 73, the speaker 74, and the controller 75 (see FIG. 2) as position / attitude information based on the sensor information acquired by the sensor information acquisition unit 621.
  • the position / posture acquisition unit 622 acquires not only the position and posture of the controller itself, but also the position and posture of the user's hand or finger when the controller attached to the user's hand is used. The positions and postures of the user's hands and fingers can also be acquired based on the image taken by the camera 31.
• The object selection acquisition unit 623 acquires, based on the sensor information obtained from the sensor information acquisition unit 621 and the position/posture information of the hands and fingers obtained from the position/posture acquisition unit 622, whether or not any object existing in the xR space has been selected by the user as an operation target. It is determined that an object selection has been executed, for example, when the user touches the index finger to the thumb or closes the hand. When any object is selected, the object selection acquisition unit 623 identifies the selected object.
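• The selection determination (the index finger touching the thumb while a virtual ray points at an object) might be sketched as follows; the fingertip distance threshold and the function names are illustrative assumptions.

```python
import numpy as np

PINCH_THRESHOLD_M = 0.015  # illustrative distance between thumb tip and index fingertip


def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """Selection gesture: the index finger touches the thumb."""
    return float(np.linalg.norm(thumb_tip - index_tip)) <= PINCH_THRESHOLD_M


def select_object(thumb_tip, index_tip, pointed_object_id):
    """Return the id of the object hit by the virtual ray when the pinch is made,
    or None when no selection is executed."""
    if pointed_object_id is not None and is_pinching(thumb_tip, index_tip):
        return pointed_object_id
    return None
```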
  • the application execution unit 624 creates an xR space to be provided to the user by executing the program of the predetermined application.
  • the application execution unit 624 moves, rotates, and enlarges / reduces the object (object selected as the operation target by the user) specified by the object selection acquisition unit 623 based on the operation by the user's hand movement or the like in the xR space. Etc. are performed.
• The application execution unit 624 generates feedback to the user's vision (image), hearing (sound), and tactile sense according to the object attributes (object information) for the user's operation on the object.
  • the application execution unit 624 has an object information acquisition unit 631 and a selection feedback generation unit 632.
  • the object information acquisition unit 631 acquires the object information tagged with the object selected as the operation target.
  • the object information is tagged in advance for each object existing in the xR space and stored in the storage unit 26.
  • the object information acquisition unit 631 reads the object information tagged with respect to the object selected as the operation target from the storage unit 26.
  • the object information acquisition unit 631 may recognize a real object and extract object information based on the sensor information acquired by the sensor information acquisition unit 621, or may dynamically extract object information from a virtual object. May be good.
  • the selection feedback generation unit 632 has a visual generation unit 641, a sound generation unit 642, and a tactile generation unit 643.
  • the visual generation unit 641 generates an image of the operation line according to the object (attribute) selected as the operation target based on the object information acquired by the object information acquisition unit 631.
  • the operation line is a line connecting the selected object and the hand of the operating user.
  • the operation line can be not only a straight line, but also a dotted line, only the start point and the end point, and a line connecting 3D objects.
  • the sound generation unit 642 generates a sound (sound) according to the selected object (attribute) based on the object information acquired by the object information acquisition unit 631.
• The tactile generation unit 643 generates a tactile sensation according to the selected object (attribute) based on the object information acquired by the object information acquisition unit 631.
• The output control unit 625 generates an image, a sound, and a tactile sensation for presenting the xR space generated by the application execution unit 624 to the user, and outputs them as output signals to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively. As a result, the output control unit 625 controls the visual, auditory, and tactile information presented to the user. When generating the image, sound, and tactile sensation to be presented to the user, the output control unit 625 reflects, as feedback information to the user regarding the operation of the object selected as the operation target, the image, sound, and tactile sensation generated by the visual generation unit 641, the sound generation unit 642, and the tactile generation unit 643 of the application execution unit 624.
  • FIG. 29 is a diagram illustrating the operation of the object.
  • the image of the xR space presented to the user's vision by the HMD 73 includes the user's right hand 661R and the left hand 661L.
  • the virtual object 667 which is a human model, is presented as a 3D object. It is assumed that the user performs an operation (operation) of emitting (virtual) light rays from, for example, the right hand 661R and the left hand 661L, and irradiates the virtual object 667 with the light rays.
  • a rectangular frame 672 representing an individually operable range is displayed on the virtual object 667 irradiated with the light beam.
  • the frame 672 that surrounds the entire virtual object 667 is displayed.
• Connection points 673R and 673L are set on the frame 672.
  • the user's right hand 661R and the connection point 673R are connected by the operation line 674R
  • the user's left hand 661L and the connection point 673L are connected by the operation line 674L.
  • the virtual object 667 is selected as the operation target.
• When the operation lines 674R and 674L are, for example, treated as being made of a highly rigid material, the user can move the right hand 661R and the left hand 661L to perform operations such as moving, rotating, and scaling the virtual object 667. The same operations as with the hands can also be performed using the controller.
• The frame 672 may coincide with the surface of the virtual object 667, and the frame 672 is not shown in the following description.
  • the object to be operated may be a real object instead of a virtual object. However, in the following, the operation target will be described as being a virtual object.
  • FIG. 30 is a flowchart illustrating the processing procedure of the information processing apparatus 601.
• In step S51, the object selection acquisition unit 623 causes a virtual light ray to be emitted from the user's hand in response to a predetermined operation or motion of the user. The user irradiates the virtual object to be operated with the virtual light ray. The process proceeds from step S51 to step S52.
• In step S52, the user operates a hand or the controller to select the virtual object to be operated. That is, as described with reference to FIG. 29, the virtual object is selected, for example, by a pinching motion of the thumb and the index finger.
• The object selection acquisition unit 623 identifies the virtual object selected as the operation target. The process proceeds from step S52 to step S53.
• In step S53, the object information acquisition unit 631 of the control unit 612 acquires the object information of the selected object.
• When a portion of the object is selected, the object information acquisition unit 631 acquires the object information of the selected portion. The process proceeds from step S53 to step S54.
• In step S54, the position/posture acquisition unit 622 acquires the positions and postures of the hands and fingers as position/posture information. The process proceeds from step S54 to step S55.
• In step S55, the selection feedback generation unit 632 of the control unit 612 generates, based on the object information acquired in step S53 and the hand and finger position/posture information acquired in step S54, the image, sound, and tactile sensation presented to the user as feedback for the user's operation. The process proceeds from step S55 to step S56.
• In step S56, the output control unit 625 of the control unit 612 outputs the feedback image, sound, and tactile sensation generated in step S55 to the video display unit 23, the sound presentation unit 24, and the tactile presentation unit 25, respectively.
• The process then returns from step S56 to step S54 and is repeated from step S54 while the selection of the operation target continues.
• When the selection is released, the process returns to step S51 and is repeated from step S51.
  • the selection feedback generation unit 632 of the application execution unit 624 generates feedback to the user of images, sounds, and vibrations such as operation lines for the operation of the virtual object in the xR space. Feedback is generated based on the object information (attribute) of the virtual object selected as the operation target. This gives the user a feeling of operation according to the attributes of the virtual object to be operated.
  • FIG. 31 is a diagram illustrating a virtual object.
  • the portion that can be individually selected as the operation target in this way may be registered in the storage unit 26 as object information in advance, or may be dynamically extracted from the virtual object.
  • the object information acquisition unit 631 of the application execution unit 624 acquires object information for each part of the virtual object selected as the operation target.
  • Object information includes color, hardness, weight, image, sound (sound), size, object elasticity, thickness, hardness / brittleness, material characteristics (glossiness and roughness), heat, and importance. Degree etc. are included.
• When the leaf portion 692 is selected as the operation target, the object information acquisition unit 631 acquires, as the object information (attribute types) of the leaf portion 692, information about, for example, color, hardness, weight, image, sound, and vibration data.
• Regarding the color, the information "green (fresh green)" is acquired. Regarding the hardness, the information "soft" is acquired. Regarding the weight, the information "light" is acquired. As for the image, "image data of leaves" is acquired. As for the sound, "the sound of leaves rubbing against each other" is acquired. As for the vibration data, "the rough vibration of leaves hitting the hand" is acquired.
• As feedback to the user's vision for the operation of the virtual object, the selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 uses the object information of the leaf portion 692 to generate an operation line (object) corresponding to the operation lines 674R and 674L described above.
  • FIG. 32 is a diagram illustrating four forms of an operation line presented to the user when the leaf portion 692 of the virtual object 691 of FIG. 31 is selected as an operation target.
  • the user's right hand 661R and the leaf portion 692 are connected by operation lines 711 to 714 having different forms, respectively.
  • the operation line on the left hand is omitted.
  • both hands and the virtual object to be operated do not necessarily have to be connected by the operation line.
• In the form 701, the operation line 711 is generated as a solid line. Information on the color, hardness, and weight among the object information of the leaf portion 692 is reflected in the generation of the operation line 711. For example, since the color of the leaf portion 692 is green, the operation line 711 is also generated in green. Since the leaf portion 692 is soft and light, the operation line 711 is generated as a soft (curved) line instead of a straight line. The thickness of the operation line 711 may also be changed according to the hardness or weight of the leaf portion 692.
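• One way to derive the form of the operation line from the tagged object information, following the example of the form 701, is sketched below; the thickness values and the exact mapping rule are illustrative assumptions.

```python
def operation_line_style(object_info: dict) -> dict:
    """Derive the visual style of the operation line from the object information:
    soft and light objects give a curved line, hard and heavy objects a straight
    one, and the line takes on the object's color."""
    soft = object_info.get("hardness") == "soft"
    light = object_info.get("weight") == "light"
    return {
        "color": object_info.get("color", "white"),
        "shape": "curved" if (soft and light) else "straight",
        "thickness": 1.0 if light else 3.0,  # thickness may also reflect hardness/weight
    }


print(operation_line_style({"color": "green", "hardness": "soft", "weight": "light"}))
# -> {'color': 'green', 'shape': 'curved', 'thickness': 1.0}
```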
  • the operation line 712 is generated by a dotted line.
  • the operation line 712 differs only in the type of line from the operation line 711 of the form 701.
  • the operation line 712 also reflects the object information of the leaf portion 692 like the operation line 711.
  • the operation line 713 differs only in the form of the line from the operation line 711 of the form 701.
  • the operation line 713 also reflects the object information of the leaf portion 692 like the operation line 711.
  • the operation line 714 is generated by a series of leaves.
  • the operation line 714 among the object information of the leaf portion 692, the information of "leaf image data" related to the image is used. Similar to the operation line 711, the operation line 714 reflects information on the hardness and weight of the leaf portion 692.
  • the operation line may be generated by connecting the leaf animation.
  • the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 generates a sound (sound) using the object information of the leaf portion 692 as feedback to the user's hearing for the operation of the virtual object.
• The sound generation unit 642 generates the sound presented to the user when the leaf portion 692 is moved by the user's operation, using the information about the sound included in the object information of the leaf portion 692 (the sound of leaves rubbing against each other).
• The volume of the sound may be varied according to the movement of the user's hand and the leaf portion 692. For example, when both the user's hand and the leaf portion 692 are stationary, a sound of leaves being slightly rubbed by the wind is generated. When the user moves the hand to the right and the leaf portion 692 also moves to the right at high speed, a sound like a strong wind hitting the leaves is generated.
  • the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 generates a tactile sensation (vibration) using the object information of the leaf portion 692 as feedback to the user's tactile sensation for the operation of the virtual object.
  • the tactile generation unit 643 uses the information related to the vibration data included in the object information of the leaf portion 692 to generate a rough vibration when the leaf hits the hand.
  • the tactile generation unit 643 changes the generated vibration according to the movement between the user's hand and the leaf portion 692. For example, the tactile generation unit 643 generates high-frequency vibrations that are repeated about twice a second when the user's hand and the leaf portion 692 are both stationary or slow. The tactile generation unit 643 generates high-frequency vibrations that are repeated about 10 times per second when the movement between the user's hand and the leaf portion 692 is fast.
  • the vibration frequency reflects information on the weight included in the object information of the leaf portion 692.
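• The dependence of the vibration repetition rate on the movement speed and on the weight information might be sketched as follows; the rates of about 2 and about 10 repetitions per second follow the text, while the speed normalization and the weight factor are assumptions.

```python
def leaf_vibration_rate(hand_speed_m_s: float, is_light: bool) -> float:
    """Repetition rate (bursts per second) of the high-frequency vibration for the
    leaf portion: about 2 per second when the hand is still or slow, rising toward
    about 10 per second as the movement gets faster; a heavier object is assumed
    here to lower the rate."""
    slow_rate, fast_rate, full_speed = 2.0, 10.0, 1.0  # illustrative constants
    ratio = min(max(hand_speed_m_s / full_speed, 0.0), 1.0)
    rate = slow_rate + (fast_rate - slow_rate) * ratio
    return rate if is_light else rate * 0.5
```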
• When the trunk portion 693 is selected as the operation target, the object information acquisition unit 631 acquires, as the object information (attribute types) of the trunk portion 693, information about color, hardness, weight, image, sound, vibration data, and the like, as in the case of the leaf portion 692. For example, regarding the color, the information "brown" is acquired. Regarding the hardness, the information "hard" is acquired. Regarding the weight, the information "heavy" is acquired. As for the image, "image data of a tree trunk" is acquired. As for the sound, "the creaking sound of an elastic tree branch being bent" is acquired. As for the vibration data, "the heavy vibration of an elastic tree branch being bent" is acquired.
  • the selection feedback generation unit 632 (visual generation unit 641) of the application execution unit 624 generates an operation line (object) using the object information of the trunk portion 693 as feedback to the user's vision for the operation of the virtual object.
  • FIG. 33 is a diagram illustrating four forms of an operation line presented to the user when the trunk portion 693 of the virtual object 691 of FIG. 31 is selected as an operation target.
  • the forms 701 to 704 of FIG. 33 correspond to the forms 701 to 704 having the same reference numerals in FIG. 32, the methods of generating the operation lines are the same, and the same reference numerals are given to the operation lines.
• In the form 701, the operation line 711 is generated as a solid line. Information on the color, hardness, and weight among the object information of the trunk portion 693 is reflected in the generation of the operation line 711. For example, since the color of the trunk portion 693 is brown, the operation line 711 is also generated in brown. Since the trunk portion 693 is hard and heavy, the operation line 711 is generated as a straight line.
  • the operation line 712 is generated by a dotted line.
  • the operation line 712 differs only in the type of line from the operation line 711 of the form 701. Similar to the operation line 711, the operation line 712 also reflects the object information of the trunk portion 693.
  • the operation line 713 differs only in the form of the line from the operation line 711 of the form 701. Similar to the operation line 711, the operation line 713 also reflects the object information of the trunk portion 693.
  • the operation line 714 is generated by a series of tree trunks.
  • the operation line 714 among the object information of the trunk portion 693, the information of "image data of the trunk of the tree" regarding the image is used. Similar to the operation line 711, the operation line 714 reflects information on the hardness and weight of the trunk portion 693.
  • the operation line may be generated by connecting the animation of the tree trunk.
  • the selection feedback generation unit 632 (sound generation unit 642) of the application execution unit 624 generates a sound (sound) using the object information of the trunk portion 693 as feedback to the user's hearing for the operation of the virtual object.
• The sound generation unit 642 generates the sound presented to the user when the trunk portion 693 is moved by the user's operation, using the information about the sound included in the object information of the trunk portion 693 (the creaking sound of an elastic tree branch being bent).
• The volume of the sound may be changed according to the movement of the user's hand and the trunk portion 693. For example, when both the user's hand and the trunk portion 693 are stationary, almost no sound is generated, but when the user's hand and the trunk portion 693 are moving, a strong creaking sound is generated.
  • the selection feedback generation unit 632 (tactile generation unit 643) of the application execution unit 624 generates a tactile sensation (vibration) using the object information of the trunk portion 693 as feedback to the user's tactile sensation for the operation of the virtual object.
  • the tactile generation unit 643 uses the information related to the vibration data included in the object information of the trunk portion 693 to generate a heavy vibration by bending an elastic tree branch.
  • the tactile generation unit 643 changes the generated vibration according to the movement of the user's hand and the trunk portion 693. For example, the tactile generation unit 643 generates low-frequency vibrations that are repeated about once per second when the user's hand and the trunk portion 693 are both stationary or slow. The tactile generator 643 generates a low-frequency vibration with a large amplitude when the user's hand and the trunk portion 693 start to move, and the faster the movement, the higher the frequency, and the high frequency repeated about 10 times per second. Generates vibrations. The vibration frequency reflects information about the weight contained in the object information of the trunk portion 693.
  • FIG. 34 is a diagram illustrating a case where the weight information of the object information is changed according to the user's physical strength.
  • the state 721 in FIG. 34 shows a state in which a weak user is operating the leaf portion 692 with the right hand 661R by the operation line 711.
  • the state 722 represents a state in which a strong user is operating the leaf portion 692 with the right hand 661R by the operation line 711.
  • In the state 721, a user with weak strength feels the leaf portion 692 heavier than a strong user does when moving it. Therefore, when generating the feedback, the selection feedback generation unit 632 changes the information regarding the weight included in the object information of the leaf portion 692 in the direction of making it heavier (see the sketch below). As a result, the operation line 711 is generated as a straight line, and an appropriate feeling of operation is expressed for users with weak strength.
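  • One possible way to fold the user's strength into the weight information, as a sketch only; the inverse scaling and the user_strength parameter are assumptions.

```python
def effective_weight(object_weight: float, user_strength: float) -> float:
    """Scale the weight information used for feedback by the user's strength:
    the same object feels heavier to a weaker user, so the weight value that
    drives the operation line (curved vs. straight) is increased for weak users.
    user_strength: 1.0 = average, below 1.0 = weak, above 1.0 = strong."""
    return object_weight / max(user_strength, 1e-3)
```

  • For example, with user_strength = 0.5 a weight of 0.4 becomes 0.8, so the operation line is generated closer to a straight line, which corresponds to the state 721.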
  • The selection feedback generation unit 632 of the application execution unit 624 may change the object information at the moment when the operation target object collides with a virtual or real wall or another object while the user is moving the operation target object.
  • FIG. 35 is a diagram showing a state when the object information is changed when the object to be operated collides.
  • the operation line 711 in the state 731 of FIG. 35 is an operation line generated by the same generation method as the operation line 711 of the form 701 of FIG. 32.
  • The state 731 of FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user is operating the operation line 711 with the right hand 661R to move the leaf portion 692.
  • At that moment, the selection feedback generation unit 632 changes the weight information or the hardness information included in the object information of the leaf portion 692 in the direction of increasing it.
  • As a result, the operation line 711, which was curved before the leaf portion 692 collided, changes to a straight line at the moment of the collision, and the occurrence of the collision is appropriately expressed.
  • the operation line 714 in the state 732 of FIG. 35 is an operation line generated by the same generation method as the operation line 714 of the form 704 of FIG. 32.
  • The state 732 of FIG. 35 represents a case where the leaf portion 692 collides with a wall or the like while the user is operating the operation line 714 with the right hand 661R to move the leaf portion 692.
  • In this case as well, the selection feedback generation unit 632 changes the weight information or the hardness information included in the object information of the leaf portion 692 in the direction of increasing it. As a result, the leaves connected to form the operation line 714 are scattered at the moment of the collision, so that the collision is appropriately expressed (a sketch of this collision handling follows below).
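  • A sketch of the momentary attribute change at collision time, reusing the ObjectInfo fields from the earlier sketch; the boost factors are arbitrary illustrative values.

```python
def on_collision(info, weight_boost: float = 2.0, hardness_boost: float = 2.0):
    """At the moment the operated object hits a virtual or real wall or another
    object, push its weight and hardness information upward so that the
    operation line snaps straight (form 701) or its connected images scatter
    (form 704), which visually expresses the collision."""
    info.weight = min(1.0, info.weight * weight_boost)
    info.hardness = min(1.0, info.hardness * hardness_boost)
    return info
```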
  • FIG. 36 is a diagram showing a state when the object to be operated is placed at a designated position on the surface.
  • FIG. 36 shows a state when the user operates the operation line 711 with the right hand 661R to move the object 741 to be operated in the air and places it at a designated position on a surface such as a desk.
  • While the object 741 is moving in the air, the selection feedback generation unit 632 changes the information regarding the weight in the object information of the object 741 in the direction of making it lighter.
  • As a result, the operation line 711 is generated as a soft, curved line.
  • When the object 741 is placed at the designated position on the surface, the selection feedback generation unit 632 changes the information regarding the weight in the object information of the object 741 in the direction of making it heavier.
  • As a result, the operation line 711 is generated as a straight line. If the operation line 711 is a straight line, it is easier to perform the delicate operation of placing the object 741 at the designated position on the surface (see the sketch below).
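  • The switch between in-air and on-surface weight could be driven by the object's height above the target surface, as in the following sketch; the threshold and scaling factors are assumptions, not the disclosed implementation.

```python
def placement_weight(base_weight: float, height_above_surface: float,
                     near_threshold: float = 0.05) -> float:
    """While the object is carried through the air its weight information is
    reduced (soft, curved line, easy coarse movement); once it comes close to
    the target surface the weight is increased (straight line, easier fine
    placement). height_above_surface is in meters."""
    if height_above_surface > near_threshold:
        return base_weight * 0.5            # in the air: lighter
    return min(1.0, base_weight * 2.0)      # near or on the surface: heavier
```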
  • The selection feedback generation unit 632 may change the information regarding the color in the object information of the object to be operated according to the environment color seen by the user.
  • FIG. 37 is a diagram illustrating a process for making the operation line easier to see.
  • the state 751 in FIG. 37 is a case where the environmental color is dark, and the state 752 is a case where the environmental color is bright.
  • When the environment color is dark (state 751), the selection feedback generation unit 632 changes the information regarding the color in the object information of the object 741 to be operated to a bright color. As a result, the operation line 711 is generated in a bright color, which makes it easy to see.
  • When the environment color is bright (state 752), the selection feedback generation unit 632 changes the information regarding the color in the object information of the object 741 to be operated to a dark color.
  • As a result, the operation line 711 is generated in a dark color, which makes it easier to see against the bright environment (a sketch of this color selection follows below).
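  • A minimal sketch of the environment-dependent color selection, assuming the environment color is available as an average RGB value; the luminance formula and the two example colors are illustrative choices.

```python
def line_color_for_environment(env_rgb: tuple) -> tuple:
    """Pick a bright operation-line color in a dark environment and a dark one
    in a bright environment so that the line stays easy to see.
    env_rgb is the average color seen by the user, each channel in 0..1."""
    luminance = 0.2126 * env_rgb[0] + 0.7152 * env_rgb[1] + 0.0722 * env_rgb[2]
    if luminance < 0.5:
        return (1.0, 1.0, 0.8)   # dark surroundings (state 751): bright line
    return (0.1, 0.1, 0.2)       # bright surroundings (state 752): dark line
```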
  • As described above, in the third embodiment of the information processing apparatus, when the user operates an object in the xR space provided to the user, feedback such as an operation line corresponding to the attribute of the object to be operated is presented to the user. This makes it possible for the user to intuitively and easily grasp what kind of attributes the operated object has and which part of the object is being operated.
  • the series of processes in the information processing device 11, the information processing device 301, or the information processing device 601 described above can be executed by hardware or by software.
  • When the series of processes is executed by software, the programs constituting the software are installed in a computer.
  • the computer includes a computer embedded in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
  • FIG. 38 is a block diagram showing a configuration example of the hardware of a computer that executes the above-mentioned series of processes by a program.
  • In the computer, a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903 are connected to one another by a bus 904.
  • An input / output interface 905 is further connected to the bus 904.
  • An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a drive 910 are connected to the input / output interface 905.
  • the input unit 906 includes a keyboard, a mouse, a microphone, and the like.
  • the output unit 907 includes a display, a speaker, and the like.
  • the storage unit 908 includes a hard disk, a non-volatile memory, and the like.
  • the communication unit 909 includes a network interface and the like.
  • the drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
  • In the computer configured as described above, the CPU 901 loads the program stored in the storage unit 908 into the RAM 903 via the input / output interface 905 and the bus 904 and executes it, whereby the above-mentioned series of processes is performed.
  • The program executed by the computer (CPU 901) can be recorded and provided on the removable medium 911 as package media or the like, for example.
  • the program can also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
  • The program can be installed in the storage unit 908 via the input / output interface 905 by mounting the removable medium 911 in the drive 910. Further, the program can be received by the communication unit 909 via a wired or wireless transmission medium and installed in the storage unit 908. In addition, the program can be installed in the ROM 902 or the storage unit 908 in advance.
  • The program executed by the computer may be a program in which processing is performed in chronological order according to the order described in the present specification, or may be a program in which processing is performed in parallel or at a necessary timing such as when a call is made.
  • This technology can also take the following configurations.
  • (1) An information processing device having a processing unit that controls presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of a virtual object, based on the attribute of the virtual object to be operated or the status of the operation on the virtual object.
  • the information processing apparatus according to (1) wherein the operation of the virtual object includes an operation of bringing the user's hand into contact with the virtual object.
  • the processing unit is The information processing device according to (1) or (2), which controls the presentation based on whether or not the user intends to operate the virtual object.
  • the processing unit is The information processing apparatus according to (3), wherein the information is not presented for the operation of the virtual object when the user does not intend to operate the virtual object.
  • the processing unit is The information processing apparatus according to (3) or (4), wherein different information is presented for the operation of the virtual object depending on whether or not the user intends to operate the virtual object.
  • the processing unit is The information processing device according to (3) or (5), wherein the brightness of the virtual object is changed as the information to the visual sense when the user does not intend to operate the virtual object.
  • the processing unit is When the user does not intend to operate the virtual object, the sound or vibration of the type or amplitude according to the speed of the user's hand is presented as the information to the auditory sense or the tactile sense. The information processing apparatus according to (5) or (6).
  • the processing unit is The information processing apparatus according to any one of (1) to (7), which controls the presentation based on whether or not the virtual object is within the field of view of the user.
  • the processing unit is The information processing apparatus according to any one of (1) to (8), which controls the presentation based on the orientation of the user's hand with respect to the virtual object.
  • the processing unit is The information processing apparatus according to any one of (1) to (9), which controls the presentation based on whether or not the object is held in the user's hand.
  • the processing unit is The information processing apparatus according to any one of (1) to (10), which controls the presentation based on the direction of the user's line of sight or the direction of the head-mounted display worn by the user.
  • the processing unit is The information processing apparatus according to any one of (1) to (11), which controls the presentation based on the positional relationship between the user and the virtual object.
  • (13) The processing unit is The information processing apparatus according to any one of (1) to (12), which controls the presentation based on the state of the user's hand.
  • (14) The processing unit is The information processing apparatus according to any one of (1) to (13), which controls the presentation based on the relationship with the object visually recognized by the user.
  • (15) The processing unit is The information processing apparatus according to (1) or (2), wherein different information is presented for the operation of the virtual object depending on whether or not the user is visually recognizing the virtual object.
  • the processing unit is When the virtual object does not exist within the field of view of the user or the head-mounted display worn by the user, or when the user has his / her eyes closed, the virtual object is not visually recognized.
  • the processing unit is The information processing apparatus according to (15) or (16), wherein the virtual object is not visually recognized when the virtual object is present in the peripheral visual field of the user.
  • the processing unit is The information processing apparatus according to any one of (15) to (17), wherein the virtual object is not visually recognized when the virtual object does not exist in the central visual field of the user during a predetermined time from the present to the past.
  • the processing unit is The information processing apparatus according to any one of (15) to (18), wherein the virtual object is not visually recognized when the object that shields the virtual object exists.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (19), which controls the presentation based on the attribute of the virtual object when the user does not visually recognize the virtual object.
  • (21) The processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (20), which controls the presentation based on the size of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (21), which presents an image corresponding to the size of the virtual object as the information to the visual sense.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (22), which controls the presentation based on the number of the virtual objects when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (23), wherein the sound or vibration of a waveform corresponding to the number of virtual objects is presented as the information to the auditory sense or the tactile sense.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (24), which controls the presentation based on the color of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (25), which presents sound or vibration having a frequency corresponding to the color of the virtual object as the information to the auditory sense or the tactile sense.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (26), which controls the presentation based on the type of the virtual object when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (27), wherein the sound or vibration of the number of times according to the type of the virtual object is presented as the information to the auditory sense or the tactile sense.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (28), which controls the presentation based on the orientation of the virtual object with respect to the user's hand when the user is not visually recognizing the virtual object.
  • the processing unit is The information processing apparatus according to (29), wherein when the orientation of the virtual object is different with respect to the direction in which the hand of the user has moved, different vibrations are presented as the information to the tactile sensation.
  • the processing unit is The information processing apparatus according to any one of (1), (2), and (15) to (30), which controls the presentation based on whether or not the virtual object is a virtual object predicted to be operated by the user when the user does not visually recognize the virtual object.
  • the processing unit is The information processing apparatus according to (1), which presents, as the information to the visual sense, an operation line that connects the virtual object and the hand of the user and is used for operating the virtual object.
  • the processing unit is The information processing apparatus according to (35), wherein the operation line corresponding to the color, hardness, weight, or image as the attribute of the virtual object is presented as the information to the visual sense.
  • the processing unit is The information processing apparatus according to (35) or (36), wherein the operation line is presented as a solid line, a dotted line, a line having only a start point portion and an end point portion, or a series of images, according to the attribute of the virtual object.
  • the processing unit is The information processing apparatus according to any one of (35) to (37), wherein the shape of the operation line is changed according to the hardness or the weight as the attribute of the virtual object.
  • the processing unit is The information processing apparatus according to any one of (35) to (38), wherein the attribute is changed based on the characteristics of the user.
  • the processing unit is The information processing apparatus according to any one of (35) to (39), wherein the attribute is changed when the virtual object collides with another object.
  • the processing unit is The information processing apparatus according to any one of (35) to (40), wherein the attribute is changed depending on whether the virtual object exists in the air or on the object.
  • the processing unit is The information processing apparatus according to any one of (35) to (41), wherein the color of the virtual object as an attribute is changed according to the environment color.
  • (43) The processing unit is The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the sound as the information to the auditory sense based on the sound as the attribute of the virtual object.
  • (44) The processing unit is The information processing apparatus according to (43), wherein the sound as the information to the auditory sense is changed according to the speed of movement of the virtual object.
  • (45) The processing unit is The information processing apparatus according to any one of (1) and (35) to (42), which controls the presentation of the vibration as the information to the tactile sensation based on the vibration as the attribute of the virtual object.
  • the processing unit is The information processing apparatus according to (45), wherein the vibration as the information to the tactile sensation is changed according to the speed of movement of the virtual object.
  • the processing unit of the information processing apparatus having the processing unit controls presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object, based on the attribute of the virtual object to be operated or the status of the operation on the virtual object. An information processing method.
  • A program that causes a computer to control presentation of information to any one or more of the user's visual, auditory, and tactile senses for the operation of the virtual object, based on the attribute of the virtual object to be operated or the status of the operation on the virtual object.

Abstract

The present technology relates to an information processing device, an information processing method, and a program that make it possible to improve operability in a space provided by x-Reality (xR) technology. Presentation of information to any one of a user's visual, auditory, and tactile senses with respect to an operation of a virtual object is controlled on the basis of an attribute of the operated virtual object or the status of the operation with respect to the virtual object.
PCT/JP2021/033620 2020-09-28 2021-09-14 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2022065120A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020161753 2020-09-28
JP2020-161753 2020-09-28

Publications (1)

Publication Number Publication Date
WO2022065120A1 true WO2022065120A1 (fr) 2022-03-31

Family

ID=80845408

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/033620 WO2022065120A1 (fr) 2020-09-28 2021-09-14 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (1)

Country Link
WO (1) WO2022065120A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003067107A (ja) * 2001-08-28 2003-03-07 Foundation For Advancement Of Science & Technology 触覚提示装置
JP2009069918A (ja) * 2007-09-10 2009-04-02 Canon Inc 情報処理装置、情報処理方法
JP2014092906A (ja) * 2012-11-02 2014-05-19 Nippon Hoso Kyokai <Nhk> 触力覚提示装置
WO2020105606A1 (fr) * 2018-11-21 2020-05-28 ソニー株式会社 Dispositif de commande d'affichage, dispositif d'affichage, procédé de commande d'affichage et programme

Similar Documents

Publication Publication Date Title
US9367136B2 (en) Holographic object feedback
US20220121344A1 (en) Methods for interacting with virtual controls and/or an affordance for moving virtual objects in virtual environments
JP7341166B2 (ja) ウェアラブルシステムのためのトランスモード入力融合
CN109804334B (zh) 用于三维空间中虚拟对象的自动放置的系统和方法
EP3332311B1 (fr) Comportement en survol pour interaction d'un regard dans réalité virtuelle
EP3542252B1 (fr) Interaction de main sensible au contexte
EP3427103B1 (fr) Réalité virtuelle
EP3311249B1 (fr) Entrée de données tridimensionnelles d'utilisateur
EP2672880B1 (fr) Détection de regard dans un environnement de mappage tridimensionnel (3d)
TW201214266A (en) Three dimensional user interface effects on a display by using properties of motion
CN113892074A (zh) 用于人工现实系统的手臂凝视驱动的用户界面元素选通
US20190272040A1 (en) Manipulation determination apparatus, manipulation determination method, and, program
WO2019187862A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations, et support d'enregistrement
JP6332652B1 (ja) 表示制御装置、及びプログラム
JP2022535182A (ja) ユーザインターフェース要素をゲーティングするパーソナルアシスタント要素を備えた人工現実システム
JP2022535322A (ja) 人工現実システムのための角を識別するジェスチャー駆動のユーザインターフェース要素ゲーティング
WO2019010337A1 (fr) Interface multi-sélection volumétrique pour la sélection de multiples entités dans un espace 3d
CN108369451B (zh) 信息处理装置、信息处理方法及计算机可读存储介质
WO2022065120A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
CN111240483A (zh) 操作控制方法、头戴式设备及介质
WO2021176861A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations, programme informatique et système de détection de réalité augmentée
JP6922743B2 (ja) 情報処理装置、情報処理方法及びプログラム
WO2024090303A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations
WO2024090299A1 (fr) Dispositif de traitement d'informations et procédé de traitement d'informations
WO2023286316A1 (fr) Dispositif d'entrée, système et procédé de commande

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21872250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21872250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP