WO2021145068A1 - Information processing device and information processing method, computer program, and augmented reality system - Google Patents

Information processing device and information processing method, computer program, and augmented reality system Download PDF

Info

Publication number
WO2021145068A1
WO2021145068A1 PCT/JP2020/043524 JP2020043524W WO2021145068A1 WO 2021145068 A1 WO2021145068 A1 WO 2021145068A1 JP 2020043524 W JP2020043524 W JP 2020043524W WO 2021145068 A1 WO2021145068 A1 WO 2021145068A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
hand
virtual
user
finger
Prior art date
Application number
PCT/JP2020/043524
Other languages
French (fr)
Japanese (ja)
Inventor
石川 毅
木村 淳
山野 郁男
真一 河野
壮一郎 稲谷
Original Assignee
ソニーグループ株式会社
Priority date
Filing date
Publication date
Application filed by ソニーグループ株式会社
Publication of WO2021145068A1 publication Critical patent/WO2021145068A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/36 Control arrangements or circuits characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G 5/37 Details of the operation on graphic patterns
    • G09G 5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns

Definitions

  • The technology disclosed in this specification (hereinafter referred to as "the present disclosure") relates to an information processing device and an information processing method for processing information related to augmented reality, a computer program, and an augmented reality system.
  • VR: virtual reality
  • AR: augmented reality
  • MR: mixed reality
  • VR is a technology that allows virtual space to be perceived as reality.
  • AR is a technology that expands the real space seen by the user by adding, emphasizing, attenuating, or deleting information to the real environment surrounding the user.
  • MR is a technology that displays virtual objects which replace objects in real space, thereby merging the real and the virtual.
  • AR and MR are realized by using, for example, a see-through head-mounted display (hereinafter also referred to as "AR glasses").
  • With AR glasses, virtual objects are superimposed on the real-space scenery that the user observes, specific real objects are emphasized or attenuated, and specific real objects can be made to appear as if they do not exist. Further, an information processing device has been proposed that presents to the user a contact between a real object (such as a user's finger) and a virtual object (see, for example, Patent Document 1).
  • An object of the present disclosure is to provide an information processing device and an information processing method for processing information related to augmented reality, a computer program, and an augmented reality system.
  • The first aspect of the present disclosure is an information processing device comprising: an acquisition unit that acquires the position and posture of the user's hand; and a control unit that controls the display operation of a display device that superimposes and displays a virtual object in real space, wherein the control unit controls the display device so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
  • The control unit controls the display device so as to display the information in the vicinity of the virtual object or the hand, the information including at least one of a gripping method in which the virtual object is pinched between the thumb and one other finger and a gripping method in which the virtual object is grasped with the entire hand.
  • The control unit also controls the display device so as to display the information indicating at least one of a state in which a hand is grasping the virtual object, a position at which the hand grasps the virtual object, and a movement in which a virtual hand grasps the virtual object at the position of the user's hand.
  • The second aspect of the present disclosure is an information processing method having: an acquisition step of acquiring the position and posture of the user's hand; and a control step of controlling the display operation of a display device that superimposes and displays a virtual object in real space, wherein, in the control step, the display device is controlled so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
  • The third aspect of the present disclosure is a computer program written in a computer-readable format so as to cause a computer to function as: an acquisition unit that acquires the position and posture of the user's hand; and a control unit that controls the display operation of a display device that superimposes and displays a virtual object in real space, wherein the control unit controls the display device so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
  • The computer program according to the third aspect of the present disclosure defines a computer program written in a computer-readable format so as to realize predetermined processing on a computer. In other words, by installing the computer program according to the third aspect on a computer, a cooperative action is exerted on the computer, and the same actions and effects as those of the information processing device according to the first aspect of the present disclosure can be obtained.
  • The fourth aspect of the present disclosure is an augmented reality system comprising: a display device that superimposes and displays a virtual object in real space; an acquisition unit that acquires the position and posture of the user's hand; and a control unit that controls the display operation of the display device, wherein the control unit controls the display device so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
  • The term "system" here means a logical assembly of a plurality of devices (or functional modules that realize specific functions), and it does not matter whether each device or functional module is housed in a single enclosure.
  • According to the present disclosure, it is possible to provide an information processing device, an information processing method, a computer program, and an augmented reality system that realize realistic interaction with a virtual object by the user's hand and fingers.
  • FIG. 1 is a diagram showing a functional configuration example of the AR system 100.
  • FIG. 2 is a diagram showing a state in which AR glasses are attached to the user's head.
  • FIG. 3 is a diagram showing a configuration example of the AR system 300.
  • FIG. 4 is a diagram showing a configuration example of the AR system 400.
  • FIG. 5 is a diagram showing an example in which the controller 500 is attached to the user's hand.
  • FIG. 6 is a diagram showing an example of a functional configuration included in the control unit 140.
  • FIG. 7 is a diagram showing how a virtual object is arranged around a user wearing AR glasses on his / her head.
  • FIG. 8 is a diagram for explaining a mechanism for displaying a virtual object so that the AR glass follows the movement of the user's head.
  • FIG. 9 is a diagram showing a state according to the distance between the user's hand and the virtual object.
  • FIG. 10 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 11 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 12 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 13 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 14 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 15 is a diagram showing a display example of a UI that guides a method of grasping a virtual object.
  • FIG. 16 is a diagram showing an example in which a user's hand approaches a virtual object from various directions.
  • FIG. 17 is a diagram showing an example in which the UI that guides the method of grasping the virtual object is switched according to the contact state between the finger and the virtual object.
  • FIG. 18 is a diagram showing an example in which the UI that guides the method of grasping the virtual object is switched according to the contact state between the finger and the virtual object.
  • FIG. 19 is a flowchart showing a processing procedure for presenting a UI that guides a user how to grasp a virtual object.
  • FIG. 20 is a diagram showing a display example of a virtual finger when the user's finger touches the virtual object.
  • FIG. 21 is a diagram showing a display example of a virtual finger when the user's finger approaches the virtual object.
  • FIG. 22 is a diagram showing another display example of the virtual finger when the user's finger approaches the virtual object.
  • FIG. 23 is a flowchart showing a processing procedure for displaying a virtual hand to the user.
  • FIG. 24 is a diagram showing a configuration example of the remote control system 2400.
  • FIG. 25 is a diagram showing an operator approaching a virtual object with his / her hand on the master device 2410 side.
  • FIG. 26 is a diagram showing a state in which the robot 2421 is approaching an object so as to follow the movement of the operator's hand on the slave device 2420 side.
  • FIG. 27 is a flowchart showing a processing procedure for presenting a UI that guides the operator how to grasp the virtual object.
  • In the real world, an object can be held by pinching or grasping it, and the shape of the object changes according to the force applied by the pinching or grasping hand.
  • With a virtual object, however, the hand slips through the object, and it is not possible to hold the object in the same manner as in the real world.
  • an augmented reality system that provides a user interface (UI) in which a finger is thrust into an object in a virtual world and pinched with a fingertip, or a frame provided on the outer circumference of the object is pinched is also conceivable.
  • UI: user interface
  • the method of holding an object through the UI in the virtual world has a large divergence from the method of holding an object in the real world, and the reality is greatly impaired.
  • Since the object in the virtual world does not actually exist, when the object is held by pinching or grasping, the hand slips through the object, and the user cannot obtain a realistic tactile sensation.
  • If an exoskeleton-type force sense presentation device is attached to the hand when holding an object in the virtual world, the movement of the hand can be locked so that the hand does not slip through the object, and a way of holding the object in the virtual world that is similar to the way of holding it in the real world can be realized. However, such a device can be used only by a limited number of users and in limited environments, because the purchase cost of the force sense presentation device is high and a place to install it is required.
  • In the present disclosure, a way of holding an object in the virtual world is therefore realized without using an external device such as a force sense presentation device, by a method that does not deviate from the way an object is held in the real world.
  • FIG. 1 shows an example of a functional configuration of the AR system 100 to which the present disclosure is applied.
  • The illustrated AR system 100 includes a first sensor unit 110 that detects the position of the hand and the shape of the fingers of the user wearing the AR glasses, a second sensor unit 120 mounted on the AR glasses, a display unit 131 that displays virtual objects on the AR glasses, and a control unit 140 that comprehensively controls the operation of the entire AR system 100.
  • the first sensor unit 110 includes a gyro sensor 111, an acceleration sensor 112, and a directional sensor 113.
  • the second sensor unit 120 which is mounted on the AR glass, includes an outward camera 121, an inward camera 122, a microphone 123, a gyro sensor 124, an acceleration sensor 125, and a directional sensor 126.
  • The AR system 100 may further include a speaker 132 that outputs audio signals such as sounds related to a virtual object, a vibration presentation unit 133 that provides feedback by presenting vibration to the back of the user's hand or other body parts, and a communication unit 134 for communicating with the outside of the AR system 100.
  • the control unit 140 may be equipped with a large-scale storage unit 150 including an SSD (Solid State Drive) or the like.
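  • As a rough, non-normative illustration of how the functional configuration in FIG. 1 could be organized in software (the class and field names below are assumptions for illustration only), the sensor units and the control unit might be modeled as follows:

```python
from dataclasses import dataclass, field

# Hypothetical containers mirroring the functional blocks of FIG. 1.
# All names are illustrative assumptions, not part of the disclosure.

@dataclass
class FirstSensorUnit:
    """Worn on the user's hand (gyro 111, acceleration 112, orientation 113)."""
    gyro: tuple = (0.0, 0.0, 0.0)
    acceleration: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0)

@dataclass
class SecondSensorUnit:
    """Mounted on the AR glasses (cameras 121/122, microphone 123, IMU 124-126)."""
    outward_image: object = None
    inward_image: object = None
    audio: object = None
    gyro: tuple = (0.0, 0.0, 0.0)
    acceleration: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0)

@dataclass
class ARSystem:
    """Stand-in for control unit 140: it would read both sensor units and
    drive display unit 131, speaker 132, and vibration unit 133 (omitted)."""
    hand_sensors: FirstSensorUnit = field(default_factory=FirstSensorUnit)
    head_sensors: SecondSensorUnit = field(default_factory=SecondSensorUnit)
```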
  • The AR glasses body is generally an eyeglass-type or goggle-type device that is worn on the user's head; it can superimpose digital information on the field of view of both of the user's eyes or one eye, emphasize or attenuate a specific real object, or delete a specific real object to make it appear as if it does not exist.
  • FIG. 2 shows a state in which AR glasses are attached to the user's head.
  • a display unit 131 for the left eye and a display unit 131 for the right eye are arranged in front of the left and right eyes of the user, respectively.
  • The display unit 131 is transparent or translucent, and displays virtual objects superimposed on the real-space scenery, emphasizes or attenuates a specific real object, or deletes a specific real object to make it appear as if it does not exist.
  • the left and right display units 131 may be independently displayed and driven, for example, to display a parallax image, that is, a virtual object in 3D.
  • an outward camera 121 directed in the user's line-of-sight direction is arranged substantially in the center of the AR glass.
  • the AR system 100 can be composed of two devices, for example, an AR glass worn on the user's head and a controller worn on the user's hand.
  • FIG. 3 shows a configuration example of an AR system 300 including an AR glass 301 and a controller 302.
  • the AR glass 301 includes a control unit 140, a storage unit 150, a second sensor unit 120, a display unit 131, a speaker 132, and a communication unit 134.
  • the controller 302 includes a first sensor unit 110 and a vibration presenting unit 133.
  • Alternatively, the AR system 100 may be composed of three devices: AR glasses worn on the user's head, a controller worn on the user's hand, and an information terminal such as a smartphone or tablet.
  • FIG. 4 shows a configuration example of an AR system 400 including an AR glass 401, a controller 402, and an information terminal 403.
  • the AR glass 401 includes a display unit 131, a speaker 132, and a second sensor unit 120.
  • the controller 402 includes a first sensor unit 110 and a vibration presenting unit 133.
  • the information terminal 403 includes a control unit 140, a storage unit 150, and a communication unit 134.
  • The specific device configuration of the AR system 100 is not limited to those in FIGS. 3 and 4. Further, the AR system 100 may include additional components not shown in the figures.
  • the first sensor unit 110 and the vibration presentation unit 133 are configured as a controller to be worn on the user's hand.
  • the first sensor unit 110 includes a gyro sensor 111, an acceleration sensor 112, and a directional sensor 113.
  • the first sensor unit 110 may be an IMU (Inertial Measurement Unit) including a gyro sensor, an acceleration sensor, and a directional sensor.
  • the vibration presenting unit 133 is configured by arranging electromagnetic type or piezoelectric type vibrators in an array. The sensor signal of the first sensor unit 110 is transferred to the control unit 140.
  • FIG. 5 shows an example in which the controller 500 including the first sensor unit 110 and the vibration presentation unit 133 is attached to the user's hand.
  • IMUs 501, 502, and 503 are attached to the thumb and the proximal phalanx and the middle phalanx of the index finger by bands 511, 512, and 513, respectively.
  • the vibration presenting unit 133 is attached to the back of the hand.
  • the vibration presenting unit 133 may be fixed to the back of the hand with a band (not shown), an adhesive pad, or the like.
  • FIG. 5 shows only one example of the first sensor unit 110; IMUs may be attached to other locations on the thumb and index finger, or to fingers other than the thumb and index finger. The method of fixing the IMUs to the fingers is not limited to bands. Further, although FIG. 5 shows an example in which the first sensor unit 110 and the vibration presentation unit 133 are attached to the right hand, they may be attached to the left hand instead of the right hand, or to both hands.
  • It is assumed that there is a wired or wireless transmission path through which the sensor signals from the first sensor unit 110 (the IMUs 501, 502, and 503 in the example shown in FIG. 5) are transmitted to the control unit 140 and the drive signal for the vibration presentation unit 133 is received from the control unit 140.
  • The control unit 140 can detect the position and posture of the fingers based on the sensor signals of the first sensor unit 110. As shown in FIG. 5, when the IMUs 501, 502, and 503 are attached to the base and middle nodes of the thumb and index finger, the control unit 140 can detect the positions and postures of the thumb and index finger based on the detection signals of the IMUs 501, 502, and 503, respectively.
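  • As one way to picture how a fingertip position could be derived from the IMU orientations, the following is a simplified forward-kinematics sketch; the phalanx lengths, quaternion convention, and function names are assumptions, not taken from the disclosure:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def fingertip_position(base_pos, imu_quats, segment_lengths):
    """Chain the orientations reported by the IMUs on successive phalanges
    (e.g. IMUs 502 and 503 on the index finger) to estimate where the
    fingertip is relative to the hand. Segment lengths are assumed."""
    p = np.asarray(base_pos, dtype=float)
    for q, length in zip(imu_quats, segment_lengths):
        # Each phalanx is assumed to extend along its local +x axis.
        p = p + quat_rotate(q, np.array([length, 0.0, 0.0]))
    return p

# Example: two phalanges of the index finger, 4 cm and 2.5 cm long.
identity = (1.0, 0.0, 0.0, 0.0)
tip = fingertip_position([0.0, 0.0, 0.0], [identity, identity], [0.04, 0.025])
print(tip)  # -> [0.065 0.    0.   ]
```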
  • the second sensor unit 120 is mounted on the AR glass, and includes an outward camera 121, an inward camera 122, a microphone 123, a gyro sensor 124, an acceleration sensor 125, and a directional sensor 126.
  • the outward-facing camera 121 is composed of, for example, an RGB camera, and is installed so as to photograph the outside of the AR glass, that is, the front direction of the user wearing the AR glass.
  • The outward-facing camera 121 can capture the movement of the user's fingers, but it cannot do so when the fingers are hidden behind an obstacle, when the fingertips are hidden by the back of the hand, or when the user's hands are turned behind the body.
  • the outward-facing camera 121 may further include any one of an IR camera including an IR light emitting unit and an IR light receiving unit, and a TOF (Time Of Flight) camera.
  • In this case, a retroreflective material is attached to an object to be captured, such as the back of the hand, and the IR camera emits infrared light and receives the infrared light reflected from the retroreflective material.
  • the image signal captured by the outward camera 121 is transferred to the control unit 140.
  • the inward camera 122 is composed of, for example, an RGB camera, and is installed so as to photograph the inside of the AR glass, specifically, the eyes of a user wearing the AR glass.
  • the line-of-sight direction of the user can be detected based on the captured image of the inward-facing camera 122.
  • the image signal captured by the inward camera 122 is transferred to the control unit 140.
  • the microphone 123 may be a single sound collecting element or a microphone array including a plurality of sound collecting elements.
  • the microphone 123 collects the voice of the user wearing the AR glass and the ambient sound of the user.
  • the audio signal picked up by the microphone 123 is transferred to the control unit 140.
  • the gyro sensor 124, the acceleration sensor 125, and the azimuth sensor 126 may be composed of an IMU.
  • the sensor signals of the gyro sensor 124, the acceleration sensor 125, and the directional sensor 126 are transferred to the control unit 140.
  • the control unit 140 can detect the position and posture of the head of the user wearing the AR glasses based on these sensor signals.
  • the display unit 131 is composed of a transmissive display (glasses lens, etc.) installed in front of both eyes or one eye of the user wearing AR glasses, and is used for displaying a virtual world. Specifically, the display unit 131 expands the real space seen by the user by displaying information (virtual objects) and emphasizing, attenuating, or deleting real objects. The display unit 131 performs a display operation based on a control signal from the control unit 140. Further, the mechanism for see-through display of virtual objects on the display unit 131 is not particularly limited.
  • the speaker 132 is composed of a single sounding element or an array of a plurality of sounding elements, and is installed in, for example, an AR glass.
  • the speaker 132 outputs the sound related to the virtual object displayed on the display unit 131, but other audio signals may be output.
  • the communication unit 134 has a wireless communication function such as Wi-Fi (registered trademark) or Bluetooth (registered trademark).
  • the communication unit 134 mainly performs a communication operation for realizing data exchange between the control unit 140 and an external system (not shown).
  • The control unit 140 is installed in the AR glasses, or is arranged, together with the storage unit 150 and a drive power supply such as a battery, in a device (such as a smartphone) separate from the AR glasses.
  • the control unit 140 executes various programs read from the storage unit 150 to perform various processes.
  • FIG. 6 schematically shows an example of a functional configuration included in the control unit 140.
  • the control unit 140 includes an application execution unit 601, a head position / posture detection unit 602, an output control unit 603, a finger position / posture detection unit 604, and a finger gesture detection unit 605.
  • These functional modules are realized by executing various programs read from the storage unit 150 by the control unit 140.
  • FIG. 6 shows only the minimum necessary functional modules for realizing the present disclosure, and the control unit 140 may further include other functional modules.
  • the application execution unit 601 executes the application program including the AR application under the execution environment provided by the OS.
  • the application execution unit 601 may execute a plurality of application programs in parallel at the same time.
  • AR applications include, for example, video playback and 3D object viewers; they superimpose virtual objects on the field of view of a user wearing AR glasses on the head (see FIG. 2), emphasize or attenuate a specific real object in that field of view, or delete a specific real object to make it appear as if it does not exist.
  • the application execution unit 601 also controls the display operation of the AR application (virtual object) by using the display unit 131. Virtual objects generated by the AR application are arranged all around the user.
  • FIG. 7 schematically shows how a plurality of virtual objects 701, 702, 703, ... are arranged around a user wearing AR glasses on the head.
  • The application execution unit 601 arranges each of the virtual objects 701, 702, 703, ... around the user with reference to the position of the user's head or the position of the center of gravity of the body estimated based on the sensor information from the second sensor unit 120.
  • The head position/posture detection unit 602 detects the position and posture of the user's head based on the sensor signals of the gyro sensor 124, the acceleration sensor 125, and the orientation sensor 126 included in the second sensor unit 120 mounted on the AR glasses, and recognizes the user's line-of-sight direction or visual field range.
  • the output control unit 603 controls the output of the display unit 131, the speaker 132, and the vibration presentation unit 133 based on the execution result of an application program such as an AR application.
  • the output control unit 603 specifies the user's visual field range based on the detection result of the head position / posture detection unit 602 so that the virtual object arranged in the visual field range can be observed by the user through the AR glass. That is, the display operation of the virtual object is controlled by the display unit 131 so as to follow the movement of the user's head.
  • Here, the depth direction of the user's line of sight is defined as the zw axis, the horizontal direction as the yw axis, and the vertical direction as the xw axis, and the origin of the user's reference axes xw, yw, zw is the user's viewpoint position. Roll θz corresponds to movement of the user's head around the zw axis, tilt θy to movement around the yw axis, and pan θx to movement around the xw axis.
  • Based on the sensor signals of the gyro sensor 124, the acceleration sensor 125, and the orientation sensor 126, the head position/posture detection unit 602 detects posture information consisting of the movement of the user's head in each of the roll, tilt, and pan directions (θz, θy, θx) and the translation of the head.
  • the output control unit 603 moves the display angle of view of the display unit 131 in the real space (for example, see FIG. 7) in which the virtual object is arranged so as to follow the posture of the user's head.
  • the image of the virtual object existing at the display angle of view is displayed on the display unit 131.
  • Specifically, the display angle of view is moved so as to cancel the movement of the user's head: the region 802-1 is rotated according to the roll component of the head movement, the region 802-2 is moved according to the tilt component, and the region 802-3 is moved according to the pan component. Since the virtual objects arranged within the display angle of view that follows the position and posture of the user's head are displayed on the display unit 131, the user can observe, through the AR glasses, the real space on which the virtual objects are superimposed.
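  • The view transform that cancels the head movement can be pictured with the following minimal sketch, which assumes the head posture is available as roll, tilt, and pan angles about the zw, yw, and xw axes; the function names and rendering details are illustrative only:

```python
import numpy as np

def rotation_matrix(roll, tilt, pan):
    """Head rotation: roll about zw, tilt about yw, pan about xw (assumed order)."""
    cz, sz = np.cos(roll), np.sin(roll)
    cy, sy = np.cos(tilt), np.sin(tilt)
    cx, sx = np.cos(pan), np.sin(pan)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def world_to_view(point_world, head_roll, head_tilt, head_pan, viewpoint):
    """Transform a virtual object's world position into the display frame.
    Applying the inverse head rotation cancels the head movement, so the
    object stays fixed in the surrounding space while the display angle of
    view follows the head."""
    R = rotation_matrix(head_roll, head_tilt, head_pan)
    return R.T @ (np.asarray(point_world) - np.asarray(viewpoint))

# Example: an object 2 m in front of the viewpoint, head panned 30 degrees.
print(world_to_view([0, 0, 2.0], 0.0, 0.0, np.deg2rad(30), [0, 0, 0]))
```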
  • The functional configuration of the control unit 140 will be described again with reference to FIG. 6.
  • the finger position / posture detection unit 604 detects the position / posture of the user's hand and fingers wearing the AR glass based on the recognition result of the image taken by the outward camera 121 or the detection signal of the first sensor unit 110. .. Further, the finger gesture detection unit 605 detects the gesture of the user's finger wearing the AR glass based on the recognition result of the image taken by the outward camera 121 or the detection signal of the first sensor unit 110.
  • the gesture of the finger referred to here includes the shape of the finger, specifically the angle of the third joint and the second joint of the index finger, and the presence or absence of contact between the thumb and the fingertip of the index finger.
  • The finger position/posture detection unit 604 and the finger gesture detection unit 605 mainly use the finger position and posture information obtained from the first sensor unit 110 (the gyro sensor 111, the acceleration sensor 112, and the orientation sensor 113) attached to the user's hand, together with constraint conditions on the positions and postures that the fingers can take, to detect the posture and gestures of the fingers with higher accuracy.
  • With a method that detects the positions, postures, and gestures of the fingers using the outward-facing camera 121, high-accuracy detection may not be possible because of occlusion or the like; even when the fingers cannot be detected by image recognition from the head, their positions and postures can be detected with high accuracy by using the sensor signals of the first sensor unit 110 attached to the hand.
  • In the AR system 100, a guide on how to grasp a virtual object is presented to the user so that the user holds the virtual object in a manner that does not deviate from the way an object is held in the real world.
  • the finger position / posture detection unit 604 detects the position / posture of the user's hand trying to grasp the virtual object.
  • The application execution unit 601 determines the distance between the user's hand and the virtual object based on the positional relationship between the position and posture of the user's hand detected by the finger position/posture detection unit 604 and the virtual object arranged in real space, and, when the hand approaches the virtual object, performs a process of presenting a guide to the gripping method around the hand.
  • The output control unit 603 then outputs, to the display unit 131 (the AR glasses), a display of a virtual object that guides the gripping method around the hand.
  • FIG. 9 shows three states of “approach”, “contact”, and “entry”.
  • Approach is a state in which the shortest distance between the user's hand and the virtual object is equal to or less than a predetermined value.
  • Contact is a state in which the shortest distance between the user's hand and the virtual object is zero.
  • Entry is a state in which the user's hand is interfering with (sinking into) the region of the virtual object.
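  • A minimal sketch of how these three states could be distinguished from the shortest distance between the hand and the virtual object is shown below; the threshold value and the signed-distance convention are assumptions, not taken from the disclosure:

```python
APPROACH_THRESHOLD = 0.05  # assumed proximity threshold in meters

def hand_object_state(shortest_distance):
    """Classify the relation between the user's hand and a virtual object.
    'shortest_distance' is assumed to be signed: positive outside the
    object, zero on its surface, negative when the hand interferes with
    (enters) the object's region."""
    if shortest_distance < 0.0:
        return "entry"       # hand is interfering with the object's region
    if shortest_distance == 0.0:
        return "contact"     # shortest distance is zero
    if shortest_distance <= APPROACH_THRESHOLD:
        return "approach"    # within the predetermined value
    return "far"

print(hand_object_state(0.03))   # -> "approach"
print(hand_object_state(0.0))    # -> "contact"
print(hand_object_state(-0.01))  # -> "entry"
```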
  • In the present disclosure, when the user's hand approaches a virtual object, a display indicating the gripping method is shown in the vicinity of the virtual object so that the user can understand how to grip it. Guided by the displayed gripping method, the user comes to hold the virtual object in a manner that does not deviate from the way an object is held in the real world, which preserves the sense of reality.
  • the AR system 100 can simplify the processing of the system by limiting the gripping method of the virtual object, but the user may not be able to use an arbitrary gripping method.
  • In the AR system 100, when the user's hand approaches a virtual object, a UI that guides the method of grasping the virtual object is displayed near the virtual object (or near the hand), and information that guides a gripping method that the AR system 100 can handle is shown to the user.
  • The guide to the gripping method is displayed in the form of a UI using the display unit 131, for example, but character information such as a text message or a voice announcement may also be used. Guided by the UI that guides the method of grasping the virtual object, the user can easily grasp it without hesitation.
  • The following types (1) to (3) are assumed as types of guides for how to grasp a virtual object. Of course, other methods may also be used to guide the user on how to grasp the virtual object.
  • the application execution unit 601 that generates the virtual object may also use the display unit 131 to display the UI that guides the method of grasping the virtual object.
  • FIG. 10 shows one display example of a UI that guides a method of grasping a virtual object according to the above guide type (1).
  • In the example shown in FIG. 10, a hand in the state of gripping the outer circumference of the virtual object 1001 is superimposed on the virtual object 1001, indicated by the dotted line in the figure.
  • The position at which the virtual object 1001 is held is set in consideration of the position of the center of gravity of the virtual object 1001. Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 10, can easily grip the outer circumference of the virtual object 1001 without hesitation.
  • FIG. 11 shows another display example of the UI that guides the method of grasping the virtual object according to the above guide type (1).
  • In the example shown in FIG. 11, a virtual mirror 1102 is placed behind the virtual object 1101, and a hand gripping the outer circumference of the mirror image 1103 of the virtual object 1101 is displayed in the mirror 1102.
  • The position at which the virtual object 1101 is held is set in consideration of the position of the center of gravity of the virtual object 1101.
  • The user's fingertips are hidden behind the virtual object 1101 and cannot be seen directly, but in the example shown in FIG. 11, the position of each finger touching the outer circumference can be confirmed through the mirror image 1103. Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 11, gains a deeper understanding of how to use each finger when gripping the outer circumference of the virtual object 1101, and can easily grip it without hesitation.
  • FIG. 12 shows one display example of the UI that guides the method of grasping the virtual object according to the above guide type (2).
  • In the example shown in FIG. 12, the positions 1202 and 1203 at which the user pinches the outer circumference of the virtual object 1201 between the thumb and the index finger (a precision grip) are superimposed on the virtual object 1201, indicated by the dotted lines in the figure.
  • The position at which the virtual object 1201 is held is set in consideration of the position of the center of gravity of the virtual object 1201. Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 12, gains a deeper understanding of the positions at which the outer circumference of the virtual object 1201 is to be pinched between the thumb and the index finger (the precision-grip positions), and can easily grasp it without hesitation.
  • FIG. 13 shows another display example of the UI that guides the method of grasping the virtual object according to the above guide type (2).
  • In the example shown in FIG. 13, the finger positions 1302 to 1306 at which the user grasps the outer circumference of the virtual object 1301 with all fingers (a power grip) are superimposed on the virtual object 1301, indicated by the dotted lines in the figure.
  • The position at which the virtual object 1301 is held is set in consideration of the position of the center of gravity of the virtual object 1301. Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 13, better understands the positions at which the outer circumference of the virtual object 1301 is to be grasped with all fingers (the power-grip positions), and can easily grasp it without hesitation.
  • FIG. 14 shows one display example of the UI that guides the method of grasping the virtual object according to the above guide type (3).
  • In the example shown in FIG. 14, an animation is superimposed on the actual thumb and index finger in which a virtual thumb and index finger, indicated by reference numbers 1402 and 1403 in the figure, are first spread apart and then closed so as to pinch the virtual object 1401 (precision grip it with the fingertips). Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 14, understands more deeply the motion of first spreading the thumb and index finger and then closing them to pinch (precision grip) the outer circumference of the virtual object 1401, and can easily grasp it without hesitation.
  • FIG. 15 shows another display example of the UI that guides the method of grasping the virtual object according to the above guide type (3).
  • In the example shown in FIG. 15, when the user's hand approaches the thick cylindrical virtual object 1501, the movement of a virtual hand grasping the outer circumference of the virtual object 1501 with all fingers (a power grip), indicated by reference number 1502 in the figure, is superimposed on the actual hand.
  • Specifically, an animation in which the wrist is slightly rotated so as to conform to the outer peripheral surface of the virtual object 1501 is superimposed on the actual hand. Therefore, the user, guided by the UI that guides the gripping method shown in FIG. 15, can understand more deeply the operation of rotating the wrist to follow the outer circumference of the virtual object 1501 and then grasping it (power gripping it), and can easily grasp it without hesitation.
  • In the AR system 100, when the user brings a hand close to a virtual object in order to grip it, the user can see the UI guiding the gripping method, as shown in FIGS. 10 to 15, through the AR glasses, and can grip the virtual object according to the guided gripping method.
  • the application execution unit 601 creates and displays a virtual object, and also displays a UI that guides how to hold the virtual object.
  • The application execution unit 601 may acquire the movement of the hand and fingers when the user tries to grasp the virtual object based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, and may evaluate the user's motion of gripping the virtual object.
  • C. Selection of gripping method: There are two basic methods of gripping an object: precision gripping, in which the object is pinched between the thumb and index finger, and power gripping, in which the object is grasped with all fingers (or the whole hand). There are also variations such as intermediate gripping using the side surface of a finger and gripping without using the thumb. Further, in order to grip an object stably with only one hand, it is necessary to sandwich the object between two or more opposing surfaces of the hand; in some cases, multiple fingers are used on one surface.
  • the UI that guides the gripping method guides the user to grip the virtual object by the predetermined gripping method.
  • the UI that guides the gripping method differs depending on the method of gripping the virtual object such as pinching or grasping, and using which finger to pinch or grasp, and further differs depending on the type of guide. Which gripping method is selected to guide the user is determined at the time of designing the AR system 100, and it is assumed that the method is not switched according to each user or the user's operation. Alternatively, it is assumed that the gripping method used for each virtual object is set in advance and does not switch according to each user or the user's operation. For example, the optimum gripping method according to the size and shape of the virtual object is preset for each virtual object.
  • Alternatively, the gripping method for the same virtual object may be dynamically switched according to the user's operation. For example, the gripping method may be dynamically switched for each user or each virtual object using a machine learning model trained to estimate the optimum gripping method based on user attribute information such as the user's personality, habits, age, gender, and physique.
  • The application execution unit 601 may select either the UI that guides a pinching grip or the UI that guides a grasping grip based on the result of determining the gripping method, and present it to the user through the AR glasses.
  • the optimum gripping method may differ depending on the direction in which the user's hand approaches the virtual object.
  • the UI that guides the gripping method may be dynamically switched according to the direction in which the user's hand approaches the virtual object.
  • The application execution unit 601 detects the direction in which the user's hand approaches the virtual object based on the detection result of the finger position/posture detection unit 604, determines the optimum gripping method according to that approach direction, and presents to the user a UI that guides the gripping method selected based on the determination result.
  • For example, the optimum gripping method for the virtual object 1601, which is shaped like a bottle with an elongated neck as shown in FIG. 16, differs depending on the part to be gripped: a grasping grip is suitable for some parts, while a pinching grip is suitable for others, so the optimum gripping method differs according to the direction from which the hand approaches.
  • the application execution unit 601 can determine from which direction the user's hand is approaching the currently displayed virtual object based on the detection result of the finger position / posture detection unit 604. Then, based on the determination result, the application execution unit 601 selects a UI that guides the gripping method suitable for the direction in which the user's hand approaches the virtual object, and presents the UI to the user through the AR glass.
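  • As an illustrative sketch of switching the guide UI according to the approach direction (the direction-to-grip mapping, the region representation, and the helper names are hypothetical, not taken from the disclosure):

```python
import numpy as np

def _unit(v):
    n = np.linalg.norm(v)
    return v / n if n > 1e-6 else v

def select_grip_ui(hand_pos, hand_prev_pos, grip_regions):
    """Pick the guide UI for the region of the virtual object that the hand
    is heading toward. 'grip_regions' maps a region center (e.g. the neck or
    body of a bottle-shaped object) to the grip UI preset for it."""
    direction = np.asarray(hand_pos, dtype=float) - np.asarray(hand_prev_pos, dtype=float)
    norm = np.linalg.norm(direction)
    if norm < 1e-6:
        return None  # hand is not moving; keep the current UI
    direction /= norm
    # Choose the region whose center lies most closely along the approach direction.
    best = max(
        grip_regions.items(),
        key=lambda item: np.dot(direction, _unit(np.asarray(item[0]) - hand_pos)),
    )
    return best[1]  # e.g. "pinch_ui" or "grasp_ui"

# Example with two hypothetical regions of a bottle-shaped virtual object.
regions = {(0.0, 0.3, 0.5): "pinch_ui",   # slender neck
           (0.0, 0.0, 0.5): "grasp_ui"}   # thick body
print(select_grip_ui([0.0, 0.25, 0.2], [0.0, 0.2, 0.1], regions))  # -> pinch_ui
```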
  • the optimum gripping method may differ even for the same virtual object depending on the user's age (infant, elderly person), physical injury, and the user's daily gripping method.
  • the optimum guide type may differ depending on the age (infant, elderly), race, physical injury, and daily gripping method of the user. Therefore, the UI that guides the gripping method for the same virtual object may be switched for each user.
  • For example, even for a virtual object that an able-bodied user can grip stably, a user with an injured finger may not be able to grip it stably if the gripping method requires the use of all fingers. In the case of a user with a missing finger, a UI showing a movement performed by a virtual hand needs to be changed to a UI showing a movement that uses only the available fingers. Moreover, even able-bodied users may have different everyday gripping methods depending on their habits and preferences.
  • the user may manually input information on the user's own user attributes such as age, race, physical injury, daily gripping method, etc. into the AR system 100 by himself / herself.
  • the AR system 100 may be able to acquire information on user attributes as user registration information at the start of use of the AR system 100.
  • the user attribute may be estimated by using the machine learning model from the sensor information of the first sensor unit 110 and the second sensor unit 120. Therefore, the first sensor unit 110 and the second sensor unit 120 may be equipped with sensors other than those shown in FIG. 1, such as a biological sensor.
  • the application execution unit 601 determines the optimum gripping method based on the attribute information of the user, and presents the UI to the user through the AR glass to guide the selected gripping method based on the determination result.
  • The embodiments described above present, through the AR glasses, a UI that guides the gripping method so that the user can understand how to grip a virtual object when the user's hand approaches the virtual object.
  • the present disclosure can be applied not only when the user grips a virtual object, but also when the user grips a real object, that is, an object existing in the real space.
  • The application execution unit 601 can identify a real object in the user's field of view based on the image recognition result of the image captured by the outward camera 121. Further, when the application execution unit 601 detects, based on the detection result of the finger position/posture detection unit 604 and the image recognition result of the user's hand from the image captured by the outward camera 121, that the user's hand has approached the real object, it selects the optimum gripping method and displays, on the AR glasses, a UI that guides how to grip the real object. The application execution unit 601 may also dynamically switch the UI that guides the gripping method based on the direction in which the user's hand is approaching the real object, the user's attributes, and the like.
  • The application execution unit 601 determines the optimum gripping method so that the real object can be gripped while avoiding places that are dangerous to hold, and presents to the user, through the AR glasses, a UI that guides the gripping method selected based on the determination result.
  • Places that are dangerous to hold are places that cannot be gripped stably, or places that are weak and break when trying to grip.
  • the application execution unit 601 estimates a dangerous place to hold the target real object based on the result of object recognition from the image captured by the outward camera 121.
  • a dangerous place to hold may be defined in advance based on an empirical rule or the like, and the application execution unit 601 may determine the optimum gripping method based on the definition.
  • the optimum gripping method may be determined by using a machine learning model learned to estimate a dangerous place to hold based on the recognized object category, size, shape, and the like.
  • E. Guide display trigger: In the AR system 100 according to the present disclosure, when the user's hand approaches a virtual object, a UI indicating how to grasp the virtual object is displayed on the AR glasses. However, the hand may happen to be close to the virtual object even though the user is not reaching out to grasp it. If the UI that guides how to grasp the virtual object is displayed on the AR glasses when the user has no intention of grasping it, the guide UI is unnecessary to the user and gets in the way by obstructing the view.
  • Therefore, the guide UI may be displayed on the AR glasses only when it can be determined that the user intends to grasp the virtual object.
  • the application execution unit 601 can detect, for example, the line-of-sight direction of the user from the captured image of the inward camera 122, and determine whether or not the user is looking at the virtual object.
  • Alternatively, the application execution unit 601 may estimate the degree of the user's interest in the virtual object from the sensor information of the first sensor unit 110 and the second sensor unit 120 using a machine learning model. For this purpose, the first sensor unit 110 and the second sensor unit 120 may be equipped with sensors other than those shown in FIG. 1, such as a biological sensor. Then, when the condition that the user is looking at the target virtual object or is interested in the target virtual object is satisfied and it is detected that the user's hand has approached the virtual object, the application execution unit 601 displays on the AR glasses a UI that guides how to grasp the virtual object.
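  • The trigger condition combining gaze (or estimated interest) with proximity could be sketched as follows; the angle threshold, the interest score, and the function names are assumptions for illustration:

```python
import numpy as np

GAZE_ANGLE_THRESHOLD = np.deg2rad(10)  # assumed: counts as "looking at" the object
APPROACH_THRESHOLD = 0.05              # assumed proximity threshold in meters

def should_show_grip_guide(gaze_dir, eye_pos, object_pos,
                           hand_object_distance, interest_score=0.0):
    """Show the grip-guide UI only when the hand is close to the virtual
    object AND the user is looking at it (angle between the gaze and the
    eye-to-object direction is small) or is estimated to be interested in
    it (e.g. by a machine learning model)."""
    to_object = np.asarray(object_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    to_object /= np.linalg.norm(to_object)
    gaze = np.asarray(gaze_dir, dtype=float) / np.linalg.norm(gaze_dir)
    looking = np.arccos(np.clip(np.dot(gaze, to_object), -1.0, 1.0)) < GAZE_ANGLE_THRESHOLD
    interested = interest_score > 0.5
    near = hand_object_distance <= APPROACH_THRESHOLD
    return near and (looking or interested)

# Example: object straight ahead, gaze about 5 degrees off, hand 3 cm away.
print(should_show_grip_guide([0, 0.087, 0.996], [0, 0, 0], [0, 0, 2.0], 0.03))  # -> True
```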
  • an exoskeleton type force sense presenting device may be attached to the hand to give the user a realistic tactile sensation according to the contact state with the virtual object.
  • However, there are problems with the purchase cost and installation location of the force sense presentation device.
  • Therefore, in the present disclosure, when the user's finger comes into contact with a virtual object, the display of the UI that guides the gripping method of the virtual object is switched so that the user is notified of, or given feedback about, the contact.
  • The application execution unit 601 acquires the movement of the hand and fingers when the user tries to grasp the virtual object based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, and can thereby determine the contact state between the user's fingers and the virtual object.
  • The application execution unit 601 then switches the display of the UI that guides how to grasp the virtual object based on the determination result, and notifies the user of, or gives feedback about, the contact.
  • FIG. 17 shows a display example of a UI that highlights the contact points between the user's thumb and index finger and the virtual object 1701 when the user's hand touches the outer circumference of the cylindrical virtual object 1701.
  • When the application execution unit 601 detects, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, that the user's thumb and index finger are in contact with the virtual object 1701, and detects the contact points 1702 and 1703 between the thumb and index finger and the virtual object 1701, it switches to a UI that highlights the contact points 1702 and 1703, and notifies the user of, or gives feedback about, the contact.
  • the user can accurately pinch the virtual object 1701 at an appropriate distance between the thumb and the index finger (the distance that matches the width of the virtual object 1701).
  • Even when the guide UI is switched as shown in FIG. 17 to notify the user that a finger has touched the virtual object, the virtual object does not actually exist, and the user receives no realistic tactile sensation.
  • The user's fingers may therefore sink further into the virtual object as the gripping operation continues.
  • In the present disclosure, the UI that guides how to grasp the virtual object is accordingly switched gradually according to the degree to which the user's fingers sink into the virtual object, so that the user is notified of, or given feedback about, the degree of sinking.
  • FIG. 18 shows a display example of a UI in which the highlights indicating the contact points between the user's thumb and index finger and the virtual object 1801 are emphasized when, after the user's hand touches the outer circumference of the cylindrical virtual object 1801, the thumb and index finger sink into the virtual object 1801 because the user narrows the distance between them.
  • When the application execution unit 601 detects, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, that the user's thumb and index finger are sinking into the virtual object 1801, it switches to a UI that emphasizes the highlights of the contact points 1802 and 1803 between the thumb and index finger and the virtual object 1801, and notifies the user of, or gives feedback about, the fact that the fingers are sinking into the virtual object 1801.
  • The application execution unit 601 gradually switches to a UI that emphasizes the highlights according to the degree of sinking. Because the highlights shown by the UI that guides the grasping method change gradually, the user can visually understand that the virtual object 1801 is being squeezed excessively between the thumb and index finger, and can correct the movement so as to grasp it correctly.
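  • As an illustrative sketch of grading the highlight according to how deeply a finger sinks into the virtual object (the depth-to-emphasis mapping and the color values are assumptions for illustration):

```python
def highlight_for_penetration(penetration_depth, max_depth=0.02):
    """Map the depth (in meters) by which a fingertip sinks into the virtual
    object to an emphasis level between 0.0 (just touching) and 1.0
    (pressing far too hard), used to gradually emphasize the contact-point
    highlight as in FIG. 18."""
    level = max(0.0, min(penetration_depth / max_depth, 1.0))
    # Blend from a soft yellow at light contact to a saturated red when the
    # finger is sunk deep into the object (RGB values are illustrative).
    r = 1.0
    g = 1.0 - 0.8 * level
    b = 0.2 * (1.0 - level)
    return level, (r, g, b)

for depth in (0.0, 0.005, 0.02):
    print(depth, highlight_for_penetration(depth))
```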
  • As described above, the AR system 100 provides the user with a guide on how to grasp a virtual object so that the virtual object can be held in a manner that does not deviate from the way an object is held in the real world.
  • FIG. 19 shows in the form of a flowchart a processing procedure for presenting a UI that guides a user how to grasp a virtual object in the AR system 100.
  • the application execution unit 601 plays a central role in this processing procedure.
  • the application execution unit 601 acquires the position of the user's hand based on the detection result of the finger position / posture detection unit 604 (step S1901).
  • the application execution unit 601 shall constantly monitor the relative position between the displayed virtual object and the hand of the user who is trying to grasp the virtual object.
  • the application execution unit 601 checks whether the user's hand approaches the virtual object, that is, whether the shortest distance between the user's hand and the virtual object is equal to or less than a predetermined value (step S1902).
  • In step S1902, the application execution unit 601 may check whether the user's hand has approached the virtual object on the condition that the user is looking at the target virtual object or is interested in the target virtual object.
  • When the user's hand approaches the virtual object, the application execution unit 601 determines the UI used to guide the user on how to grasp the virtual object (step S1903).
  • In step S1903, the application execution unit 601 selects the type of guide for grasping the virtual object. The application execution unit 601 then determines, using the selected guide type, a UI for guiding either a gripping method preset for the virtual object or a gripping method selected based on user attributes such as the user's personality, habits, age, gender, and physique. The application execution unit 601 may also select a gripping method according to the direction in which the user's hand approaches the virtual object and determine a UI for guiding that gripping method.
  • The application execution unit 601 then uses the display unit 131 to display, near the virtual object, the UI for guiding the method of grasping the virtual object that the user's hand is approaching (step S1904).
  • Next, the application execution unit 601 acquires the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605 for the movement of the hand and fingers when the user tries to grasp the virtual object, and determines the contact state between the user's fingers and the virtual object (step S1905).
  • When the user's finger touches the virtual object (Yes in step S1906), the application execution unit 601 switches the display of the UI that guides the gripping method of the virtual object, and notifies the user of, or gives feedback about, the contact (step S1907).
  • step S1907 the application execution unit 601 uses the display unit 131 to display, for example, a UI that highlights the contact point between the user's finger and the virtual object. Further, when the user's finger is sunk into the virtual object, the application execution unit 601 gradually switches the highlight display according to the degree of sunk.
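The flow of steps S1901 to S1907 can be summarized in pseudocode. The following Python sketch is an editorial illustration only; names such as get_hand_position(), select_guide_ui(), and APPROACH_DISTANCE are assumptions and do not appear in the disclosure.

```python
APPROACH_DISTANCE = 0.5  # meters; the "predetermined value" used in step S1902 (assumed)

def present_grasp_guide(app, virtual_object, user):
    """Run once per frame by the application execution unit (hypothetical helper)."""
    hand = app.get_hand_position()                            # S1901: finger position / posture detection
    if app.shortest_distance(hand, virtual_object) > APPROACH_DISTANCE:
        return                                                # S1902: the hand is not yet near the object
    ui = app.select_guide_ui(virtual_object, user)            # S1903: guide type and gripping method
    app.display_near(virtual_object, ui)                      # S1904: show the guide UI on display unit 131
    contact = app.contact_state(app.get_finger_motion(),      # S1905: hand and finger movement
                                virtual_object)
    if contact.touching:                                      # S1906: finger touches the virtual object
        app.highlight_contact_points(contact.points)          # S1907: switch the UI to feed back the contact
        if contact.sink_depth > 0:
            app.emphasize_highlight(contact.sink_depth)       # stronger highlight as the fingers sink in
```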
  • Further, in the present disclosure, a virtual finger is displayed when the user's finger touches the virtual object. Specifically, when the user is trying to grasp two opposing faces of a virtual object, a first virtual finger and a second virtual finger are displayed for the first finger approaching one face and the second finger approaching the other face, respectively. At that time, the opening amount of the first virtual finger and the second virtual finger is made wider than the actual opening of the first finger and the second finger, and each virtual finger is displayed so that, when the actual first finger and second finger touch each other, the first virtual finger and the second virtual finger come into contact with the virtual object and enter a gripped state.
  • FIG. 20 shows an example of displaying a virtual finger when the user's finger touches the virtual object.
  • FIG. 20 is a display example in which the user sandwiches and grips two opposing surfaces of the virtual object 2001 between the thumb and the index finger; the position 2002 of the virtual thumb and the position 2003 of the virtual index finger are each shown by dotted lines.
  • the actual thumb and index finger are drawn with solid lines.
  • the opening amount of the virtual thumb 2002 and the virtual index finger 2003 is wider than the opening degree of the actual thumb and index finger. Then, when the actual thumb and index finger come into contact with each other, the virtual thumb 2002 and the virtual index finger 2003 come into contact with the virtual object 2001 and are in a gripped state.
  • the application execution unit 601 can acquire the movements of the thumb and index finger when the user tries to grasp the virtual object 2001 based on the detection results of the finger position posture detection unit 604 and the finger gesture detection unit 605. Then, the application execution unit 601 displays the virtual thumb 2002 and the virtual index finger 2003 for each of the thumb approaching one surface and the index finger approaching the other surface of the virtual object 2001.
  • The application execution unit 601 makes the opening amount of the virtual thumb 2002 and the virtual index finger 2003 wider than the actual opening of the thumb and index finger. Therefore, when the actual thumb and index finger come into contact with each other, the user visually recognizes that the virtual thumb 2002 and the virtual index finger 2003 have come into contact with the virtual object 2001 in the virtual space displayed by the display unit 131 and are in a grasped state. At this time, a contact force acts between the actual thumb and index finger, and the user perceives it as the tactile sensation that his or her thumb and index finger receive from the virtual object 2001.
  • In this way, the application execution unit 601 displays the virtual thumb 2002 and the virtual index finger 2003 at positions near the user's thumb and index finger so as to sandwich the virtual object 2001. Although the thumb and index finger are actually touching each other, the user sees, in the virtual space displayed by the display unit 131 (or seen through the AR glass), an image in which the virtual thumb 2002 and the virtual index finger 2003 sandwich the virtual object 2001, and can therefore obtain the feeling of actually sandwiching and holding the virtual object 2001.
  • When there is an inconsistency between the movement of the body as seen visually and the movement of the body as felt by oneself, the visual information becomes predominant and a pseudo tactile sensation is generated.
  • This illusion, or "visual-tactile interaction," can be used to give the user a tactile sensation when grasping a virtual object.
  • FIG. 21 shows an example of displaying a virtual finger when the user's finger approaches the virtual object.
  • FIG. 21 is a display example in which the user brings his or her hand close to the virtual object 2101 with the intention of sandwiching and grasping the two opposing surfaces of the virtual object 2101 between the thumb and the index finger; the position 2102 of the virtual thumb and the position 2103 of the virtual index finger are each shown by dotted lines.
  • the actual thumb and index finger are drawn with solid lines.
  • the opening amount of the virtual thumb 2102 and the virtual index finger 2103 is wider than the opening degree of the actual thumb and index finger.
  • The application execution unit 601 acquires the position of the user's hand based on the detection result of the finger position / posture detection unit 604, and detects that the user's hand is approaching the virtual object 2101 when the shortest distance between the hand and the virtual object 2101 is equal to or less than a predetermined value. Then, the application execution unit 601 displays the virtual thumb 2102 and the virtual index finger 2103 for the thumb approaching one surface of the virtual object 2101 and the index finger approaching the other surface, respectively.
  • the application execution unit 601 makes the opening amount of the virtual thumb 2102 and the virtual index finger 2103 wider than the opening degree of the actual thumb and index finger. After that, when the actual thumb and index finger come into contact with each other, the virtual thumb 2102 and the virtual index finger 2103 come into contact with the virtual object 2101 in the virtual space displayed by the display unit 131 to be in a gripped state.
  • The application execution unit 601 displays a very natural and smooth motion in which the virtual thumb 2102 and the virtual index finger 2103 try to grasp the virtual object 2101, from the time the user's hand enters the approaching state to the time it shifts to the contacting state. Therefore, guided by the UI displaying the virtual thumb 2102 and the virtual index finger 2103, the user can easily grasp the virtual object 2101 without hesitation.
  • the amount of virtual finger opening displayed when the user's hand approaches the virtual object is set based on the thickness of the virtual object to be grasped.
  • However, when trying to grasp a thick virtual object, if the opening amount of the virtual finger is widened by the full thickness of the virtual object, the virtual finger ends up spread unnaturally wide, which would be impossible in reality. Therefore, the opening amount of the virtual finger displayed when the user's hand approaches the virtual object does not necessarily have to match the thickness of the virtual object to be gripped. Instead, the virtual finger is spread only slightly wider than the actual finger position, and the closing width of the virtual finger is changed by a smaller amount than the closing width of the actual finger.
  • FIG. 22 shows another display example of the virtual finger when the user's finger approaches the virtual object.
  • FIG. 22, similar to the display example shown in FIG. 21, is a display example in which the user brings his or her hand closer to the virtual object 2201 with the intention of sandwiching and grasping the two opposing surfaces of the virtual object 2201 between the thumb and the index finger; the position 2202 of the virtual thumb and the position 2203 of the virtual index finger are each shown by dotted lines.
  • the actual thumb and index finger are drawn with solid lines.
  • the opening amount of the virtual thumb 2202 and the virtual index finger 2203 is wider than the opening degree of the actual thumb and index finger.
  • However, the total d1 + d2 of the difference d1 between the opening position of the virtual thumb 2202 and the actual thumb and the difference d2 between the opening position of the virtual index finger 2203 and the actual index finger is less than the thickness d of the virtual object 2201 to be gripped. That is, d > d1 + d2 holds.
  • The application execution unit 601 acquires the position of the user's hand based on the detection result of the finger position / posture detection unit 604, and detects that the user's hand is approaching the virtual object 2201 when the shortest distance between the hand and the virtual object 2201 is equal to or less than a predetermined value. Then, the application execution unit 601 displays the virtual thumb 2202 and the virtual index finger 2203 for the thumb approaching one surface of the virtual object 2201 and the index finger approaching the other surface, respectively.
  • The application execution unit 601 makes the opening amount of the virtual thumb 2202 and the virtual index finger 2203 wider than the actual opening of the thumb and index finger by d1 + d2, which is smaller than the thickness d of the virtual object 2201.
  • When the actual thumb and index finger then come into contact with each other, the virtual thumb 2202 and the virtual index finger 2203 come into contact with the virtual object 2201 in the virtual space displayed by the display unit 131 and enter a gripped state.
  • The application execution unit 601 displays a very natural and smooth motion in which the virtual thumb 2202 and the virtual index finger 2203 try to grasp the virtual object 2201, from the time the user's hand enters the approaching state to the time it shifts to the contacting state.
  • To achieve this, the application execution unit 601 changes the closing width of the virtual thumb 2202 and the virtual index finger 2203 to be smaller than the closing width of the actual thumb and index finger. For example, when the actual opening between the thumb and the index finger narrows by 1 cm, the opening between the virtual thumb 2202 and the virtual index finger 2203 narrows by only 0.2 cm; that is, the movement is scaled down. This allows the virtual thumb 2202 and the virtual index finger 2203 to be positioned so that they just sandwich the virtual object 2201 when the real thumb and index finger are closed, without having to spread the virtual fingers unnaturally wide.
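The geometry described for FIG. 22 can be captured in a small helper. This is a hedged sketch under the stated constraints (d1 + d2 < d, scaled-down closing); make_virtual_gap_mapper and its arguments are illustrative names, and it assumes the real finger opening at the moment of approach is greater than zero.

```python
def make_virtual_gap_mapper(real_start_gap, d, d1, d2):
    """Return a function mapping the real thumb-index gap to the virtual finger gap.

    real_start_gap: real finger opening when the hand entered the approach state (> 0)
    d:              thickness of the virtual object to be gripped
    d1, d2:         extra opening of the virtual thumb / virtual index finger (d1 + d2 < d)
    """
    virtual_start_gap = real_start_gap + d1 + d2
    # The closing motion is scaled so that the virtual gap equals d exactly when the
    # real gap reaches 0, i.e. the virtual fingers just sandwich the object when the
    # real thumb and index finger touch each other.
    scale = (virtual_start_gap - d) / real_start_gap

    def virtual_gap(real_gap):
        return virtual_start_gap - scale * (real_start_gap - real_gap)

    return virtual_gap
```

With real_start_gap = 5 cm, d = 4.2 cm and d1 = d2 = 0.1 cm, the scale works out to 0.2, matching the 1 cm real / 0.2 cm virtual example given above.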
  • FIG. 23 shows a processing procedure for displaying a virtual hand to the user in the AR system 100 in the form of a flowchart. This processing procedure is mainly executed by, for example, the application execution unit 601 when the user's hand approaches the virtual object.
  • the application execution unit 601 acquires the width of the virtual object being displayed (step S2301).
  • Since the application execution unit 601 generates the virtual object itself, it can acquire the width of the virtual object from the setting information used when generating the virtual object. For a virtual object whose width differs depending on the gripping position, the application execution unit 601 may determine the direction in which the user's hand approaches the virtual object based on the detection result of the finger position / posture detection unit 604 and obtain the width of the virtual object based on that approach direction.
  • the application execution unit 601 acquires the actual finger width (the distance between the thumb and the index finger) used by the user to grasp the virtual object (step S2302).
  • The application execution unit 601 can acquire the actual finger width based on the detection result of the finger gesture detection unit 605. The application execution unit 601 may also acquire the actual finger width based on the image recognition result of the captured image of the outward camera 121.
  • the application execution unit 601 calculates the virtual finger width based on the current finger width (step S2303).
  • Specifically, the application execution unit 601 calculates the width of the virtual fingers so that the virtual fingers are exactly at the positions where they sandwich the virtual object when the actual fingers used by the user to grasp the virtual object are in contact with each other.
  • the application execution unit 601 displays the virtual finger corresponding to each actual finger in the vicinity of the actual finger used by the user to grasp the virtual object by using the display unit 131 (step S2304).
  • In the virtual space displayed by the display unit 131 (or seen through the AR glass), the user sees an image in which the virtual fingers are just sandwiching the virtual object, and can therefore get the feeling of actually holding the virtual object.
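Steps S2301 to S2304 can be sketched as follows, reusing the make_virtual_gap_mapper helper from the earlier sketch. The helper names object_width_toward(), real_pinch_gap(), real_fingertips(), and render_virtual_fingers() are hypothetical stand-ins for the detection and rendering paths described above, not APIs from the disclosure.

```python
def show_virtual_hand(app, virtual_object):
    approach_dir = app.hand_approach_direction()              # from finger position / posture detection
    d = virtual_object.object_width_toward(approach_dir)      # S2301: width along the gripping direction
    real_gap = app.real_pinch_gap()                           # S2302: thumb-index distance (finger gesture
                                                              #        detection or outward-camera recognition)
    gap_map = make_virtual_gap_mapper(real_gap, d,            # S2303: virtual width chosen so the virtual
                                      d1=0.01, d2=0.01)       #        fingers just sandwich the object when
    virtual_gap = gap_map(real_gap)                           #        the real fingers touch (d1, d2 assumed)
    app.render_virtual_fingers(near=app.real_fingertips(),    # S2304: draw each virtual finger near the
                               gap=virtual_gap)               #        corresponding real finger
```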
  • the timing for starting the virtual finger display is the time when the user's hand approaches the virtual object.
  • the display of the virtual finger may be started when the shortest distance between the user's hand and the virtual object falls within a predetermined value.
  • As the predetermined value, for example, 50 cm may be set in advance.
  • As described above, the opening amount of the virtual finger is made wider than the actual finger opening, and when the actual fingers touch each other, the virtual fingers grip the virtual object in the virtual space (see, for example, FIG. 20). At this time, the user perceives the contact force between his or her own fingers as a tactile sensation received from the virtual object. That is, the present disclosure makes use of the "visual-tactile interaction" in which, when there is an inconsistency between the movement of the body as seen visually and the movement of the body as felt by oneself, the visual information becomes predominant and a pseudo tactile sensation occurs, and thereby gives the user a tactile sensation when grasping the virtual object.
  • The user may also adjust the magnitude of the force for grasping the virtual object by closing the actual fingers more strongly (pressing the thumb and index finger hard against each other). For example, when a weight can be set for a virtual object, it is possible to realize an expression in which a heavy virtual object slides out of the fingers if the user merely picks it with a weak gripping force, but can be lifted when the user picks or grasps it with a strong gripping force.
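One way such a weight-dependent expression could be driven is sketched below. The friction coefficient, the force estimate, and the method names are entirely assumed for illustration; the disclosure only states that a weakly gripped heavy object slips and a strongly gripped one can be lifted.

```python
GRAVITY = 9.8     # m/s^2
FRICTION = 0.8    # assumed coefficient relating pinch force to the weight it can hold

def update_grip(virtual_object, pinch_force_newtons):
    """Decide whether the pinched virtual object is lifted or slips out of the fingers."""
    required_force = virtual_object.weight_kg * GRAVITY / FRICTION
    if pinch_force_newtons >= required_force:
        virtual_object.attach_to_hand()          # strong grip: the object follows the fingers
    else:
        virtual_object.slip_out_of_fingers()     # weak grip: the heavy object slides off
```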
  • In a master-slave system, the operator performs operations such as picking or grasping an object that is not at hand by means of a remote robot.
  • The remote control performed by the operator in such a master-slave system can be treated in the same way as the operation in which the user picks or grabs a virtual object through the AR glass in the AR system 100, so the present disclosure can also be applied to it. By displaying on the master side a UI that guides the operator on how to grip the object at the remote location, the operator can be guided by the grip guide and can easily grip the object that is not at hand without hesitation.
  • In the AR system 100, the gripping method is set in advance for each virtual object, or is selected based on user attributes such as the user's personality, habits, age, gender, and physique, and the UI for guiding the gripping method according to a predetermined guide type is determined.
  • In the remote case, by contrast, the object to be gripped is detected in advance at the remote location, and the gripping method of the object and the UI for guiding that gripping method are determined in advance based on the detection result.
  • Thus, when the operator starts remote control on the master side, whether or not an object existing on the slave side is a target of gripping can be determined in a short time, and if it is a target of gripping, the guide can be presented to the operator in a short time.
  • FIG. 24 shows a configuration example of the remote control system 2400 to which the present disclosure is applied.
  • the illustrated remote control system 2400 includes a master device 2410 operated by an operator and a slave device 2420 including a robot 2421 to be remotely controlled.
  • the master device 2410 includes a controller 2411, a display unit 2412, a master control unit 2413, and a communication unit 2414.
  • the controller 2411 is used by the operator to input a command for remotely controlling the robot 2421 on the slave device 2420 side.
  • The controller 2411 is assumed to be a device that is attached to the operator's hand as shown in FIG. 5 and used to input the position and orientation of the operator's fingers and the finger gestures as operation commands for the robot 2421.
  • the controller 2411 may be a camera or the like that captures the operator's hand, and may recognize the position and orientation of the operator's fingers and the gesture of the fingers from the captured image of the hand.
  • the display unit 2412 is composed of, for example, AR glasses, but may be a display device such as a general liquid crystal display.
  • the display unit 2412 displays the virtual object in the real space on which the operator's fingers are projected according to the control by the master control unit 2413.
  • the virtual object referred to here is a virtual object corresponding to a remote real object that the remotely controlled robot 2421 is trying to grasp.
  • the virtual object is displayed at a position where the relative position with respect to the operator's hand coincides with the relative position between the robot 2421 and the object.
  • Further, a UI for guiding the method of grasping the virtual object is displayed near the virtual object (or near the operator's hand) on the display unit 2412.
  • When the master control unit 2413 acquires the position and orientation of the operator's fingers and the finger gestures based on the input signal from the controller 2411, it converts them into an operation command for remotely controlling the robot 2421 and transmits the operation command to the slave device 2420 via the communication unit 2414.
  • the master control unit 2413 receives an image of the operation status of a remote object by the robot 2421 taken by the camera 2422 from the slave device 2420 via the communication unit 2414. Then, the master control unit 2413 controls the display unit 2412 so that the virtual object is displayed in the real space on which the operator's fingers are projected.
  • the virtual object is a virtual object corresponding to a remote real object that the remotely controlled robot 2421 is trying to grasp. The virtual object is placed at a position where the relative position with respect to the operator's hand coincides with the relative position between the robot 2421 and the object.
  • Further, the master control unit 2413 displays the UI for guiding the method of grasping the virtual object near the virtual object (or near the operator's hand).
  • The UI that guides the gripping method of the virtual object corresponds to a UI that guides the gripping method of the actual object by the robot 2421. Therefore, by performing the operation of gripping the virtual object with his or her own fingers while guided by this UI, the operator can input commands for the robot 2421 on the slave device 2420 side to grip the actual object.
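A per-frame master-side cycle consistent with this description might look as follows. The command fields, JSON encoding, and method names (read_finger_pose, send, receive_camera_frame, show_virtual_object, show_grasp_guide) are assumptions for illustration; the disclosure only specifies that finger positions, orientations, and gestures are converted into operation commands and that camera images of the robot are returned from the slave side.

```python
import json

def master_tick(controller, comm, display, virtual_object):
    pose = controller.read_finger_pose()            # finger position / orientation + gestures from controller 2411
    command = {
        "type": "operate",
        "fingertips": pose.fingertip_positions,     # drives the robot hand on the slave device 2420 side
        "gesture": pose.gesture,                    # e.g. "pinch" or "grasp"
    }
    comm.send(json.dumps(command))                  # operation command to the slave device

    frame = comm.receive_camera_frame()             # image of the robot 2421 handling the real object
    display.show_remote_view(frame)                 # optional monitor view for the operator
    display.show_virtual_object(virtual_object,     # virtual object placed so that its position relative to
        relative_to=pose.hand_position)             # the operator's hand matches the robot-to-object relation
    display.show_grasp_guide(virtual_object)        # grip-guide UI near the virtual object
```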
  • the communication unit 2414 is a functional module for interconnecting with the slave device 2420 side.
  • the communication medium between the master device 2410 and the slave device 2420 may be either wired or wireless, and is not limited to a specific communication standard.
  • the slave device 2420 includes a robot 2421, a camera 2422, a slave control unit 2423, and a communication unit 2424.
  • the slave device 2420 is interconnected with the master device 2410 side via the communication unit 2424, receives an operation command of the robot 2421 from the master device 2410, and transmits a captured image of the camera 2422 to the master device 2410.
  • the operation command sent from the master device 2410 is a command for driving the robot 2421 according to the position and orientation of the operator's fingers and the gesture of the fingers.
  • the slave control unit 2423 interprets the operation command received from the master device 2410 and controls the drive of the robot 2421 so that the robot 2421 reproduces the position and orientation of the operator's fingers and the finger gestures.
  • FIG. 25 shows how the operator is approaching the virtual object with his / her hand on the master device 2410 side.
  • FIG. 26 shows how the robot 2421 is approaching an object on the slave device 2420 side so as to follow the movement of the operator's hand.
  • the camera 2422 captures the operation status of the object by the robot 2421.
  • the slave control unit 2423 encodes the captured image of the camera 2422 and controls the communication unit 2424 to transmit the captured image to the master device 2410 in a predetermined transmission format.
  • the display unit 2412 displays the virtual object corresponding to the object in the real space on which the operator's fingers are projected.
  • the virtual object is arranged at a position where the relative position with respect to the operator's hand coincides with the relative position between the robot 2421 and the object.
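The placement rule stated above (the virtual object keeps the same offset from the operator's hand as the real object has from the robot) reduces to simple vector arithmetic. The Vector3 type and the function below are a minimal sketch, not an API from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Vector3:
    x: float
    y: float
    z: float

    def __add__(self, other):
        return Vector3(self.x + other.x, self.y + other.y, self.z + other.z)

    def __sub__(self, other):
        return Vector3(self.x - other.x, self.y - other.y, self.z - other.z)

def place_virtual_object(operator_hand: Vector3, robot_hand: Vector3, real_object: Vector3) -> Vector3:
    offset = real_object - robot_hand    # where the real object sits relative to the robot hand
    return operator_hand + offset        # apply the same offset to the operator's hand
```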
  • FIG. 27 shows a processing procedure in the form of a flowchart on the master device 2410 side for presenting a UI that guides the operator how to grasp the virtual object. This processing procedure is mainly carried out by the master control unit 2413.
  • the master control unit 2413 acquires the position of the operator's hand based on the detection result of the controller 2411 (step S2701).
  • the master control unit 2413 checks whether the operator's hand approaches the virtual object, that is, whether the shortest distance between the operator's hand and the virtual object is equal to or less than a predetermined value (step S2702).
  • the virtual object is placed at a position where the relative position with respect to the operator's hand coincides with the relative position between the robot 2421 and the object. Therefore, when the operator's hand approaches the virtual object, the robot 2421 is approaching the object on the slave device 2420 side.
  • The master control unit 2413 then determines the UI used to guide the operator on how to grasp the virtual object (step S2703).
  • In step S2703, the master control unit 2413 selects the type of guide for grasping the virtual object. The master control unit 2413 then determines, according to the selected guide type, a UI for guiding either a gripping method preset based on the category of the remote object corresponding to the virtual object, or a gripping method selected based on user attributes such as the operator's personality, habits, age, gender, and physique. The master control unit 2413 may also select the gripping method according to the direction in which the operator's hand approaches the virtual object and determine the UI for guiding that gripping method.
  • Next, the master control unit 2413 uses the display unit 2412 to display, near the virtual object that the operator's hand is approaching, the UI for guiding the gripping method of that virtual object (step S2704).
  • the master control unit 2413 acquires the movement of the hand or finger from the controller 2411 when the operator is trying to grasp the virtual object (step S2705), and determines the contact state between the operator's finger and the virtual object.
  • the virtual object is placed at a position where the relative position with respect to the operator's hand coincides with the relative position between the robot 2421 and the object. Therefore, the contact state between the operator's hand and the virtual object is the same as the contact state between the robot 2421 and the real object on the slave device 2420 side.
  • When the operator's finger touches the virtual object (Yes in step S2706), the master control unit 2413 switches the display of the UI that guides the gripping method of the virtual object and notifies or feeds back to the operator that the finger has touched the virtual object, in other words, that the robot 2421 has come into contact with the object (step S2707).
  • In step S2707, the master control unit 2413 uses the display unit 2412 to display, for example, a UI that highlights the contact point between the operator's finger and the virtual object. Further, when the operator's finger has sunk into the virtual object, the master control unit 2413 gradually switches the highlight display according to the degree of sinking.
  • As described above, the UI that guides the gripping method of the virtual object corresponds to a UI that guides the gripping method of the actual object by the robot 2421. Therefore, by performing the operation of grasping the virtual object with his or her fingers while guided by this UI, the operator remotely controls the robot 2421 on the slave device 2420 side and can easily grasp the object that is not at hand without hesitation.
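Steps S2701 to S2707 mirror the AR-side procedure of FIG. 19, with the added implication that contact on the master side means the robot is touching the real object. The condensed sketch below uses the same hypothetical naming conventions as the earlier sketches.

```python
def master_grasp_guide(master, virtual_object):
    """Run once per frame by the master control unit (hypothetical helper)."""
    hand = master.controller_hand_position()                      # S2701: from controller 2411
    if master.shortest_distance(hand, virtual_object) > master.approach_threshold:
        return                                                    # S2702: not close enough yet
    ui = master.select_guide_ui(virtual_object)                   # S2703: guide type + gripping method
    master.display_near(virtual_object, ui)                       # S2704: guide UI on display unit 2412
    contact = master.contact_state(master.finger_motion(),        # S2705: operator finger movement
                                   virtual_object)
    if contact.touching:                                          # S2706: also means robot 2421 touches
        master.highlight_contact_points(contact.points)           # S2707: feed back the contact
        if contact.sink_depth > 0:
            master.emphasize_highlight(contact.sink_depth)
```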
  • Although the present specification has mainly described embodiments in which the present disclosure is applied to an AR system, the gist of the present disclosure is not limited to these.
  • The present disclosure can be similarly applied to a VR system in which a virtual space is perceived as reality, an MR system in which the real and the virtual are intermixed, and a master-slave remote operation system.
  • An information processing device comprising: an acquisition unit that acquires the position and posture of a user's hand; and a control unit that controls the display operation of a display device that superimposes and displays a virtual object in real space, wherein the control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.
  • the acquisition unit acquires the position and posture of the hand based on the sensor information from the sensor attached to the hand, or includes a sensor attached to the hand.
  • the information processing device according to (1) above.
  • the control unit controls the display device so that the information is displayed on either the virtual object or the vicinity of the hand.
  • the information processing device according to any one of (1) and (2) above.
  • the control unit controls the display device so as to display the information including at least one of a gripping method of picking the virtual object with the thumb and one other finger or a gripping method of grasping the virtual object with the whole hand.
  • The information processing device according to any one of (1) to (3) above.
  • the control unit controls the display device to display the information indicating at least one of a state in which the hand is holding the virtual object, a position at which the hand holds the virtual object, and a movement in which a virtual hand holds the virtual object at the position of the hand.
  • the information processing device according to any one of (1) to (4) above.
  • the acquisition unit further acquires the shape of the hand, and
  • the control unit selects the information based on the shape of the hand.
  • the information processing device according to any one of (1) to (5) above.
  • the control unit selects the information based on the direction in which the hand approaches the virtual object.
  • the information processing device according to any one of (1) to (5) above.
  • the control unit selects the information based on the attributes of the user.
  • the information processing device according to any one of (1) to (5) above.
  • the user's attributes include at least one of age, race, physical injury, and daily gripping method.
  • the control unit controls the display of the information based on the state of the user when the hand approaches the virtual object.
  • the information processing device according to any one of (1) to (8) above.
  • the state of the user includes at least one of the line-of-sight direction of the user or the degree of interest of the user in the virtual object.
  • the information processing device according to (9) above.
  • the control unit controls the display of the information based on the contact state between the hand and the virtual object.
  • the information processing device according to any one of (1) to (9) above.
  • the control unit controls the display of the information so as to indicate, at the contact point, that the hand and the virtual object have come into contact with each other.
  • the information processing device according to (10) above.
  • the control unit controls the display of the information so as to indicate, at the contact point, that the hand has sunk into the virtual object.
  • the information processing device according to any one of (10) and (11) above.
  • the control unit controls the display device so as to display the virtual hand when the hand touches the virtual object.
  • the information processing device according to any one of (1) to (12) above.
  • when the user is trying to grasp two opposing surfaces of the virtual object, the control unit controls the display device to display a first virtual finger and a second virtual finger for a first finger approaching one surface and a second finger approaching the other surface, respectively.
  • the information processing device according to (13) above.
  • the control unit makes the opening amount of the first virtual finger and the second virtual finger wider than the actual opening of the first finger and the second finger, and controls the display operation of the display device so that, when the actual first finger and second finger come into contact with each other, the first virtual finger and the second virtual finger come into contact with the virtual object and enter a gripped state.
  • the information processing device according to (14) above.
  • the control unit controls the display device so as to start displaying the virtual hand, in which the opening amount of the virtual finger trying to grasp the virtual object is wider than the actual opening of the finger, when the hand approaches the virtual object.
  • The information processing device according to any one of (13) to (15) above.
  • the control unit controls the opening amount of the virtual finger to be wider than the actual opening of the finger, based on the thickness of the virtual object, when the hand approaches the virtual object. The information processing device according to (16) above.
  • the display device is controlled so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
  • Information processing method
  • An augmented reality system comprising: a display device that superimposes and displays a virtual object in real space; an acquisition unit that acquires the position and posture of a user's hand; and a control unit that controls the display operation of the display device, wherein the control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided is an information processing device for processing information on augmented reality. The information processing device is equipped with an acquisition unit for acquiring the position and posture of a user's hand, and a control unit for controlling the display operation of a display device for displaying a virtual object superimposed on a real space, wherein the control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object. The control unit controls the display device so as to display the information near the hand or on the virtual object.

Description

情報処理装置及び情報処理方法、コンピュータプログラム、並びに拡張現実感システムInformation processing equipment and information processing methods, computer programs, and augmented reality systems
 本明細書で開示する技術(以下、「本開示」とする)は、拡張現実に関する情報を処理する情報処理装置及び情報処理方法、コンピュータプログラム、並びに拡張現実感システムに関する。 The technology disclosed in this specification (hereinafter referred to as "the present disclosure") relates to an information processing device and an information processing method for processing information related to augmented reality, a computer program, and an augmented reality feeling system.
 臨場感のある体験を実現する技術として、仮想現実(Virtual Reality:VR)や拡張現実(Augmented Reality:AR)、MR(Mixed Reality)が普及してきている。VRは仮想空間を現実として知覚させる技術である。また、ARは、ユーザを取り巻く現実環境に情報を付加したり、強調又は減衰、削除したりして、ユーザから見た実空間を拡張させる技術である。また、MRは、例えば、実空間の物体に置き換わる仮想的な物体(以下、「仮想オブジェクト」とも言う)を表示して、現実と仮想を交錯させる技術である。ARやMRは、例えばシースルー型のヘッドマウントディスプレイ(以下、「ARグラス」とも呼ぶ)を用いて実現される。AR技術によれば、ユーザがARグラス越しに観察する実空間の風景に仮想オブジェクトを重畳表示したり、特定の実オブジェクトを強調し又は減衰したり、特定の実オブジェクトを削除してあたかも存在しないように見せたりすることができる。また、現実物体(ユーザの指など)と仮想物体との接触をユーザに提示する情報処理装置について提案がなされている(例えば、特許文献1を参照のこと)。 Virtual reality (VR), augmented reality (AR), and MR (Mixed Reality) are becoming widespread as technologies for realizing realistic experiences. VR is a technology that allows virtual space to be perceived as reality. In addition, AR is a technology that expands the real space seen by the user by adding, emphasizing, attenuating, or deleting information to the real environment surrounding the user. Further, MR is a technology for displaying a virtual object (hereinafter, also referred to as “virtual object”) that replaces an object in real space and interlacing the real and the virtual. AR and MR are realized by using, for example, a see-through type head-mounted display (hereinafter, also referred to as “AR glass”). According to AR technology, virtual objects are superimposed and displayed on the real space landscape that the user observes through AR glasses, specific real objects are emphasized or attenuated, and specific real objects are deleted as if they do not exist. You can make it look like. Further, a proposal has been made for an information processing device that presents a contact between a real object (such as a user's finger) and a virtual object to the user (see, for example, Patent Document 1).
特開2019-40226号公報Japanese Unexamined Patent Publication No. 2019-40226
 本開示の目的は、拡張現実に関する情報を処理する情報処理装置及び情報処理方法、コンピュータプログラム、並びに拡張現実感システムを提供することにある。 An object of the present disclosure is to provide an information processing device and an information processing method for processing information related to augmented reality, a computer program, and an augmented reality feeling system.
 本開示の第1の側面は、
 ユーザの手の位置姿勢を取得する取得部と、
 実空間に仮想オブジェクトを重畳表示する表示装置の表示動作を制御する制御部と、
を具備し、
 前記制御部は、前記手が前記仮想オブジェクトに接近したときに、前記仮想オブジェクトの把持方法に関する情報を表示するように前記表示装置を制御する、
情報処理装置である。
The first aspect of the disclosure is
An acquisition unit that acquires the position and posture of the user's hand,
A control unit that controls the display operation of a display device that superimposes and displays virtual objects in real space,
Equipped with
The control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.
It is an information processing device.
 前記制御部は、前記情報を前記仮想オブジェクト又は前記手の付近のいずれかに、前記仮想オブジェクトを親指とその他の1本の指で摘まむ把持方法又は手全体で掴む把持方法のうち少なくとも1つを含む前記情報を表示するように前記表示装置を制御する。 The control unit is at least one of a gripping method in which the information is held in the vicinity of the virtual object or the hand, and the virtual object is gripped by the thumb and one other finger, or the gripping method in which the virtual object is gripped by the entire hand. The display device is controlled so as to display the information including.
 前記制御部は、前記手が前記仮想オブジェクトを把持している状態、前記手が前記仮想オブジェクトを把持する位置、前記手の位置に仮想の手で前記仮想オブジェクトを把持する動きのうち少なくとも1つを示す前記情報を表示するように前記表示装置を制御する。 The control unit is at least one of a state in which the hand is holding the virtual object, a position where the hand is holding the virtual object, and a movement in which the virtual hand is holding the virtual object at the position of the hand. The display device is controlled so as to display the information indicating the above.
 また、本開示の第2の側面は、
 ユーザの手の位置姿勢を取得する取得ステップと、
 実空間に仮想オブジェクトを重畳表示する表示装置の表示動作を制御する制御ステップと、
を有し、
 前記制御ステップでは、前記手が前記仮想オブジェクトに接近したときに、前記仮想オブジェクトの把持方法に関する情報を表示するように前記表示装置を制御する、
情報処理方法である。
The second aspect of the present disclosure is
The acquisition step to acquire the position and posture of the user's hand,
A control step that controls the display operation of a display device that superimposes and displays virtual objects in real space,
Have,
In the control step, the display device is controlled so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
It is an information processing method.
 また、本開示の第3の側面は、
 ユーザの手の位置姿勢を取得する取得部、
 実空間に仮想オブジェクトを重畳表示する表示装置の表示動作を制御する制御部、
としてコンピュータが機能するようにコンピュータ可読形式で記述され、
 前記制御部は、前記手が前記仮想オブジェクトに接近したときに、前記仮想オブジェクトの把持方法に関する情報を表示するように前記表示装置を制御する、
コンピュータプログラムである。
In addition, the third aspect of the present disclosure is
Acquisition unit that acquires the position and posture of the user's hand,
A control unit that controls the display operation of a display device that superimposes and displays virtual objects in real space.
Written in computer readable format so that the computer works as
The control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.
It is a computer program.
 本開示の第3の側面に係るコンピュータプログラムは、コンピュータ上で所定の処理を実現するようにコンピュータ可読形式で記述されたコンピュータプログラムを定義したものである。換言すれば、本開示の第3の側面に係るコンピュータプログラムをコンピュータにインストールすることによって、コンピュータ上では協働的作用が発揮され、本開示の第1に係る情報処理装置と同様の作用効果を得ることができる。 The computer program according to the third aspect of the present disclosure defines a computer program written in a computer-readable format so as to realize a predetermined process on the computer. In other words, by installing the computer program according to the third aspect of the present disclosure on the computer, a collaborative action is exhibited on the computer, and the same action and effect as the information processing device according to the first aspect of the present disclosure can be obtained. Obtainable.
 また、本開示の第4の側面は、
 実空間に仮想オブジェクトを重畳表示する表示装置と、
 ユーザの手の位置姿勢を取得する取得部と、
 前記表示装置の表示動作を制御する制御部と、
を具備し、
 前記制御部は、前記手が前記仮想オブジェクトに接近したときに、前記仮想オブジェクトの把持方法に関する情報を表示するように前記表示装置を制御する、
拡張現実感システムである。
In addition, the fourth aspect of the present disclosure is
A display device that superimposes and displays virtual objects in real space,
An acquisition unit that acquires the position and posture of the user's hand,
A control unit that controls the display operation of the display device,
Equipped with
The control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.
It is an augmented reality system.
 但し、ここで言う「システム」とは、複数の装置(又は特定の機能を実現する機能モジュール)が論理的に集合した物のことを言い、各装置や機能モジュールが単一の筐体内にあるか否かは特に問わない。 However, the "system" here means a logical assembly of a plurality of devices (or functional modules that realize a specific function), and each device or functional module is in a single housing. It does not matter whether or not it is.
 本開示によれば、仮想オブジェクトに対するユーザの手や指によるリアリティのあるインタラクションを実現する情報処理装置及び情報処理方法、コンピュータプログラム、並びに拡張現実感システムを提供することができる。 According to the present disclosure, it is possible to provide an information processing device and an information processing method, a computer program, and an augmented reality system that realize realistic interaction with a virtual object by a user's hand or finger.
 なお、本明細書に記載された効果は、あくまでも例示であり、本開示によりもたらされる効果はこれに限定されるものではない。また、本開示が、上記の効果以外に、さらに付加的な効果を奏する場合もある。 It should be noted that the effects described in the present specification are merely examples, and the effects brought about by the present disclosure are not limited thereto. In addition to the above effects, the present disclosure may have additional effects.
 本開示のさらに他の目的、特徴や利点は、後述する実施形態や添付する図面に基づくより詳細な説明によって明らかになるであろう。 Still other objectives, features and advantages of the present disclosure will be clarified by more detailed description based on embodiments and accompanying drawings described below.
図1は、ARシステム100の機能的構成例を示した図である。FIG. 1 is a diagram showing a functional configuration example of the AR system 100. 図2は、ユーザの頭部にARグラスを装着した様子を示した図である。FIG. 2 is a diagram showing a state in which AR glasses are attached to the user's head. 図3は、ARシステム300の構成例を示した図である。FIG. 3 is a diagram showing a configuration example of the AR system 300. 図4は、ARシステム400の構成例を示した図である。FIG. 4 is a diagram showing a configuration example of the AR system 400. 図5は、コントローラ500をユーザの手に装着した例を示した図である。FIG. 5 is a diagram showing an example in which the controller 500 is attached to the user's hand. 図6は、制御部140が備える機能的構成例を示した図である。FIG. 6 is a diagram showing an example of a functional configuration included in the control unit 140. 図7は、ARグラスを頭部に装着したユーザの周囲に仮想オブジェクトが配置される様子を示した図である。FIG. 7 is a diagram showing how a virtual object is arranged around a user wearing AR glasses on his / her head. 図8は、ARグラスがユーザの頭部の動きに追従するように仮想オブジェクトを表示させる仕組みを説明するための図である。FIG. 8 is a diagram for explaining a mechanism for displaying a virtual object so that the AR glass follows the movement of the user's head. 図9は、ユーザの手と仮想オブジェクトとの距離に応じた状態を示した図である。FIG. 9 is a diagram showing a state according to the distance between the user's hand and the virtual object. 図10は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 10 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図11は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 11 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図12は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 12 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図13は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 13 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図14は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 14 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図15は、仮想オブジェクトの把持方法をガイドするUIの表示例を示した図である。FIG. 15 is a diagram showing a display example of a UI that guides a method of grasping a virtual object. 図16は、ユーザの手がさまざまな方向から仮想オブジェクトに接近してくる例を示した図である。FIG. 16 is a diagram showing an example in which a user's hand approaches a virtual object from various directions. 図17は、指と仮想オブジェクトとの接触状態に応じて仮想オブジェクトの把持方法をガイドするUIが切り替わる例を示した図である。FIG. 17 is a diagram showing an example in which the UI that guides the method of grasping the virtual object is switched according to the contact state between the finger and the virtual object. 図18は、指と仮想オブジェクトとの接触状態に応じて仮想オブジェクトの把持方法をガイドするUIが切り替わる例を示した図である。FIG. 18 is a diagram showing an example in which the UI that guides the method of grasping the virtual object is switched according to the contact state between the finger and the virtual object. 図19は、ユーザに対して仮想オブジェクトの把持方法をガイドするUIを提示するための処理手順を示したフローチャートである。FIG. 19 is a flowchart showing a processing procedure for presenting a UI that guides a user how to grasp a virtual object. 図20は、ユーザの指が仮想オブジェクトに接触したときの仮想の指の表示例を示した図である。FIG. 20 is a diagram showing a display example of a virtual finger when the user's finger touches the virtual object. 図21は、ユーザの指が仮想オブジェクトに接近したときの仮想の指の表示例を示した図である。FIG. 21 is a diagram showing a display example of a virtual finger when the user's finger approaches the virtual object. 図22は、ユーザの指が仮想オブジェクトに接近したときの仮想の指の他の表示例を示した図である。FIG. 22 is a diagram showing another display example of the virtual finger when the user's finger approaches the virtual object. 図23は、ユーザに仮想的な手を表示するための処理手順を示したフローチャートである。FIG. 
23 is a flowchart showing a processing procedure for displaying a virtual hand to the user. 図24は、遠隔操作システム2400の構成例を示した図である。FIG. 24 is a diagram showing a configuration example of the remote control system 2400. 図25は、マスタ装置2410側でオペレータが自分の手を仮想オブジェクトに接近している様子を示した図である。FIG. 25 is a diagram showing an operator approaching a virtual object with his / her hand on the master device 2410 side. 図26は、スレーブ装置2420側で、ロボット2421がオペレータの手の動きに追従するように物体に接近している様子を示した図である。FIG. 26 is a diagram showing a state in which the robot 2421 is approaching an object so as to follow the movement of the operator's hand on the slave device 2420 side. 図27は、オペレータに対して仮想オブジェクトの把持方法をガイドするUIを提示するための処理手順を示したフローチャートである。FIG. 27 is a flowchart showing a processing procedure for presenting a UI that guides the operator how to grasp the virtual object.
 以下、図面を参照しながら本開示の実施形態について詳細に説明する。 Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings.
 実世界では、摘まむ、掴むといった作法により物体を持つことができ、物体は摘まむ又は掴む手から加わる力によって形状を変化させる。一方、仮想世界では、物体は現実に存在しないため、手が物体をすり抜けてしまうので、実世界と同じ作法で物体を持つことができない。例えば、仮想世界の物体に指を突っ込んで指先で摘まんだり、物体の外周に設けた枠を摘まんだりするユーザインターフェース(UI)を提供する拡張現実感システムも考えられる。しかしながら、仮想世界においてUIを通じて物体を持つ作法は、実世界において物体を持つ作法との乖離が大きく、リアリティが大きく損なわれる。 In the real world, an object can be held by a method of picking or grasping, and the shape of the object is changed by the force applied from the hand to pinch or grasp. On the other hand, in the virtual world, since the object does not actually exist, the hand slips through the object, and it is not possible to hold the object in the same manner as in the real world. For example, an augmented reality system that provides a user interface (UI) in which a finger is thrust into an object in a virtual world and pinched with a fingertip, or a frame provided on the outer circumference of the object is pinched is also conceivable. However, the method of holding an object through the UI in the virtual world has a large divergence from the method of holding an object in the real world, and the reality is greatly impaired.
 また、仮想世界の物体は現実に存在しないため、摘まむ、掴むといった作法により物体を持つ場合に、手が物体をすり抜けてしまい、ユーザはリアリティのある触感が得られない。例えば、手に外骨格型の力覚提示装置を装着して、仮想世界の物体を持つ場合に、手が物体をすり抜けないように手の動きをロックして、実世界において物体を持つ作法に類似する仮想世界の作法を実現する方法も考えられる。しかしながら、力感提示装置の購入コストが高いことや、力覚提示装置を設置する場所が必要であることから、限られたユーザや環境でしか利用できない。 Also, since the object in the virtual world does not actually exist, when the object is held by the method of picking or grasping, the hand slips through the object, and the user cannot obtain a realistic tactile sensation. For example, if you attach an exoskeleton type force sense presentation device to your hand and hold an object in the virtual world, you can lock the movement of your hand so that the hand does not slip through the object, and you can hold the object in the real world. A method of realizing a similar virtual world method is also conceivable. However, it can be used only by a limited number of users and environments because the purchase cost of the force sense presentation device is high and a place where the force sense presentation device is installed is required.
 そこで、本開示では、実世界において物体を持つ作法から乖離しない作法により、力覚提示装置のような外部装置を利用しないで、仮想世界の物体を持つ作法を実現する。 Therefore, in this disclosure, a method of holding an object in the virtual world is realized without using an external device such as a force sense presentation device by a method that does not deviate from the method of holding an object in the real world.
A.システム構成
 図1には、本開示を適用したARシステム100の機能的構成例を示している。図示のARシステム100は、ARグラスを装着したユーザの手の位置検出とユーザの手指の形状を検出する第1のセンサ部110と、ARグラスに搭載される第2のセンサ部120と、ARグラスに仮想オブジェクトを表示する表示部131と、ARシステム100全体の動作を統括的にコントロールする制御部140を備えている。第1のセンサ部110は、ジャイロセンサ111と、加速度センサ112と、方位センサ113を備えている。第2のセンサ部120は、ARグラスに搭載されるが、外向きカメラ121と、内向きカメラ122と、マイク123と、ジャイロセンサ124と、加速度センサ125と、方位センサ126を含んでいる。
A. System Configuration FIG. 1 shows an example of a functional configuration of the AR system 100 to which the present disclosure is applied. The illustrated AR system 100 includes a first sensor unit 110 that detects the position of the user's hand wearing the AR glass and the shape of the user's finger, a second sensor unit 120 mounted on the AR glass, and AR. It includes a display unit 131 that displays virtual objects on the glass, and a control unit 140 that comprehensively controls the operation of the entire AR system 100. The first sensor unit 110 includes a gyro sensor 111, an acceleration sensor 112, and a directional sensor 113. The second sensor unit 120, which is mounted on the AR glass, includes an outward camera 121, an inward camera 122, a microphone 123, a gyro sensor 124, an acceleration sensor 125, and a directional sensor 126.
 また、ARシステム100は、仮想オブジェクトに関わる音声などのオーディオ信号を出力するスピーカー132や、ユーザの手の甲やその他の身体の部位に振動提示によるフィードバックを行う振動提示部133と、ARシステム100が外部と通信を行うための通信部134をさらに備えていてもよい。また、制御部140は、SSD(Solid State Drive)などからなる大規模な記憶部150を装備していてもよい。 Further, the AR system 100 includes a speaker 132 that outputs an audio signal such as a voice related to a virtual object, a vibration presentation unit 133 that provides feedback by vibration presentation to the back of the user's hand or other body parts, and an AR system 100 externally. A communication unit 134 for communicating with the communication unit 134 may be further provided. Further, the control unit 140 may be equipped with a large-scale storage unit 150 including an SSD (Solid State Drive) or the like.
 ARグラス本体は、一般には眼鏡型又はゴーグル型のデバイスであり、ユーザが頭部に装着して利用され、ユーザの両目又は片目の視野にデジタル情報を重畳表示したり、特定の実オブジェクトを強調し又は減衰したり、特定の実オブジェクトを削除してあたかも存在しないように見せたりすることができる。図2には、ユーザの頭部にARグラスを装着した様子を示している。図示のARグラスは、ユーザの左右の眼の前にそれぞれ左眼用の表示部131と右眼用の表示部131が配設されている。表示部131は、透明又は半透明で、実空間の風景に仮想オブジェクトを重畳表示したり、特定の実オブジェクトを強調し又は減衰したり、特定の実オブジェクトを削除してあたかも存在しないように見せたりする。左右の表示部131は、例えば独立して表示駆動され、視差画像すなわち仮想オブジェクトを3D表示するようにしてもよい。また、ARグラスのほぼ中央には、ユーザの視線方向に向けられた外向きカメラ121が配置されている。 The AR glass body is generally a spectacle-type or goggle-type device, which is used by the user by wearing it on the head, superimposing digital information on the visual field of the user's eyes or one eye, or emphasizing a specific real object. It can be degraded or attenuated, or a particular real object can be deleted to make it appear as if it does not exist. FIG. 2 shows a state in which AR glasses are attached to the user's head. In the illustrated AR glass, a display unit 131 for the left eye and a display unit 131 for the right eye are arranged in front of the left and right eyes of the user, respectively. The display unit 131 is transparent or translucent, and displays a virtual object superimposed on a landscape in real space, emphasizes or attenuates a specific real object, or deletes a specific real object to make it appear as if it does not exist. Or something. The left and right display units 131 may be independently displayed and driven, for example, to display a parallax image, that is, a virtual object in 3D. Further, an outward camera 121 directed in the user's line-of-sight direction is arranged substantially in the center of the AR glass.
 ARシステム100は、例えばユーザが頭部に装着するARグラスと、ユーザの手に装着されるコントローラという2つの装置で構成することができる。図3には、ARグラス301とコントローラ302からなるARシステム300の構成例を示している。ARグラス301は、制御部140と、記憶部150と、第2のセンサ部120と、表示部131と、スピーカー132と、通信部134を含んでいる。また、コントローラ302は、第1のセンサ部110と、振動提示部133を含んでいる。 The AR system 100 can be composed of two devices, for example, an AR glass worn on the user's head and a controller worn on the user's hand. FIG. 3 shows a configuration example of an AR system 300 including an AR glass 301 and a controller 302. The AR glass 301 includes a control unit 140, a storage unit 150, a second sensor unit 120, a display unit 131, a speaker 132, and a communication unit 134. Further, the controller 302 includes a first sensor unit 110 and a vibration presenting unit 133.
 他の構成例として、ARシステム100は、ユーザが頭部に装着するARグラスと、ユーザの手に装着されるコントローラと、スマートフォンやタブレットなどの情報端末という3台の装置で構成される。図4には、ARグラス401とコントローラ402と情報端末403からなるARシステム400の構成例を示している。ARグラス401は、表示部131と、スピーカー132と、第2のセンサ部120を含んでいる。コントローラ402は、第1のセンサ部110と、振動提示部133を含んでいる。また、情報端末403は、制御部140と、記憶部150と、通信部134を含んでいる。 As another configuration example, the AR system 100 is composed of three devices: an AR glass worn by the user on the head, a controller worn on the user's hand, and an information terminal such as a smartphone or tablet. FIG. 4 shows a configuration example of an AR system 400 including an AR glass 401, a controller 402, and an information terminal 403. The AR glass 401 includes a display unit 131, a speaker 132, and a second sensor unit 120. The controller 402 includes a first sensor unit 110 and a vibration presenting unit 133. Further, the information terminal 403 includes a control unit 140, a storage unit 150, and a communication unit 134.
 なお、ARシステム100の具体的に装置構成は、図3と図4に限定されるものではない。また、ARシステム100は、図1に示した以外の構成要素をさらに含んでいてもよい。 The specific device configuration of the AR system 100 is not limited to FIGS. 3 and 4. Further, the AR system 100 may further include components other than those shown in FIG.
 図1を参照しながら、ARシステム100の各構成要素について説明する。 Each component of the AR system 100 will be described with reference to FIG.
 図3及び図4にも示したように、第1のセンサ部110と振動提示部133は、ユーザの手に装着するコントローラとして構成される。第1のセンサ部110は、ジャイロセンサ111と、加速度センサ112と、方位センサ113を備えている。第1のセンサ部110は、ジャイロセンサと加速度センサと方位センサを備えたIMU(Inertial Measurement Unit)であってもよい。また、振動提示部133は、電磁型や圧電型の振動子をアレイ状に配置して構成される。第1のセンサ部110のセンサ信号は、制御部140に転送される。 As shown in FIGS. 3 and 4, the first sensor unit 110 and the vibration presentation unit 133 are configured as a controller to be worn on the user's hand. The first sensor unit 110 includes a gyro sensor 111, an acceleration sensor 112, and a directional sensor 113. The first sensor unit 110 may be an IMU (Inertial Measurement Unit) including a gyro sensor, an acceleration sensor, and a directional sensor. Further, the vibration presenting unit 133 is configured by arranging electromagnetic type or piezoelectric type vibrators in an array. The sensor signal of the first sensor unit 110 is transferred to the control unit 140.
 図5には、第1のセンサ部110及び振動提示部133からなるコントローラ500をユーザの手に装着した例を示している。図5に示す例では、親指と、人差し指の基節及び中節の3箇所に、IMU501、502、503がそれぞれバンド511、512、513によって取り付けられている。これによって、親指の姿勢、人差し指の基節及び中節の姿勢(又は、人差し指の第2関節の角度)を計測することができる。また、振動提示部133は、手の甲に取り付けられている。振動提示部133は、バンド(図示しない)又は粘着パッドなどで手の甲に固定されていてもよい。 FIG. 5 shows an example in which the controller 500 including the first sensor unit 110 and the vibration presentation unit 133 is attached to the user's hand. In the example shown in FIG. 5, IMUs 501, 502, and 503 are attached to the thumb and the proximal phalanx and the middle phalanx of the index finger by bands 511, 512, and 513, respectively. Thereby, the posture of the thumb, the posture of the proximal phalanx and the middle phalanx of the index finger (or the angle of the second joint of the index finger) can be measured. Further, the vibration presenting unit 133 is attached to the back of the hand. The vibration presenting unit 133 may be fixed to the back of the hand with a band (not shown), an adhesive pad, or the like.
 但し、図5は第1のセンサ部110の一例を示すものであり、親指と人差し指の別の場所にさらに他のIMUを取り付けてもよいし、親指と人差し指以外の指にもIMUを取り付けてもよい。また、IMUの各指への固定方法はバンドに限定されない。また、図5は右手に第1のセンサ部110及び振動提示部133を取り付けた例を示しているが、右手ではなく左手に取り付けてもよいし、両手に取り付けてもよい。 However, FIG. 5 shows an example of the first sensor unit 110, and another IMU may be attached to another place of the thumb and the index finger, or the IMU may be attached to a finger other than the thumb and the index finger. May be good. Further, the method of fixing the IMU to each finger is not limited to the band. Further, although FIG. 5 shows an example in which the first sensor unit 110 and the vibration presenting unit 133 are attached to the right hand, they may be attached to the left hand instead of the right hand, or may be attached to both hands.
 また、第1のセンサ部110(図5に示す例では、IMU501、502、503)によるセンサ信号を制御部140に送信するとともに、制御部140から振動提示部133の駆動信号を受信するための有線又は無線の伝送路があるものとする。制御部140は、第1のセンサ部110のセンサ信号に基づいて、手指の位置姿勢を検出することができる。図5に示すように、親指と人差し指の基節及び中節の3箇所にIMU501、502、503が取り付けられている場合には、制御部140は、各IMU501、502、503の検出信号に基づいて、親指と人差し指の開き角度、人差し指の第2関節の角度、親指と人差し指の指先の接触の有無など、手指の位置姿勢(又は、手指の形状)並びに手指のジェスチャを認識することができる。 Further, the sensor signal from the first sensor unit 110 (IMU501, 502, 503 in the example shown in FIG. 5) is transmitted to the control unit 140, and the drive signal of the vibration presentation unit 133 is received from the control unit 140. It is assumed that there is a wired or wireless transmission line. The control unit 140 can detect the position and posture of the fingers based on the sensor signal of the first sensor unit 110. As shown in FIG. 5, when the IMUs 501, 502, and 503 are attached to the base and middle nodes of the thumb and index finger, the control unit 140 is based on the detection signals of the IMUs 501, 502, and 503, respectively. Therefore, it is possible to recognize the position and orientation of the fingers (or the shape of the fingers) and the gestures of the fingers, such as the opening angle between the thumb and the index finger, the angle of the second joint of the index finger, and the presence or absence of contact between the thumb and the tip of the index finger.
Referring back to FIG. 1, the description of each component of the AR system 100 continues.
The second sensor unit 120 is mounted on the AR glasses and includes an outward camera 121, an inward camera 122, a microphone 123, a gyro sensor 124, an acceleration sensor 125, and an orientation sensor 126.
The outward camera 121 is composed of, for example, an RGB camera, and is installed so as to capture the area outside the AR glasses, that is, the area in front of the user wearing the AR glasses. The outward camera 121 can capture the user's finger operations, but it cannot do so when the user's fingers are hidden behind an obstacle, when the fingertips are hidden by the back of the hand, or when the user moves the hand behind the body. The outward camera 121 may further include either an IR camera consisting of an IR light-emitting unit and an IR light-receiving unit, or a TOF (Time Of Flight) camera. When an IR camera is used as the outward camera 121, a retroreflective marker is attached to the object to be tracked, such as the back of the hand, and the IR camera emits infrared light and receives the infrared light reflected from the retroreflective marker. The image signal captured by the outward camera 121 is transferred to the control unit 140.
The inward camera 122 is composed of, for example, an RGB camera, and is installed so as to capture the inside of the AR glasses, specifically the eyes of the user wearing the AR glasses. The user's gaze direction can be detected based on the image captured by the inward camera 122. The image signal captured by the inward camera 122 is transferred to the control unit 140.
The microphone 123 may be a single sound-collecting element or a microphone array consisting of a plurality of sound-collecting elements. The microphone 123 picks up the voice of the user wearing the AR glasses and the sounds around the user. The audio signal picked up by the microphone 123 is transferred to the control unit 140.
The gyro sensor 124, the acceleration sensor 125, and the orientation sensor 126 may be configured as an IMU. Their sensor signals are transferred to the control unit 140, which can detect the position and posture of the head of the user wearing the AR glasses based on these signals.
The display unit 131 is composed of a transmissive display (such as spectacle lenses) placed in front of both eyes or one eye of the user wearing the AR glasses, and is used to display the virtual world. Specifically, the display unit 131 augments the real space seen by the user by displaying information (virtual objects) and by emphasizing, attenuating, or deleting real objects. The display unit 131 performs display operations based on control signals from the control unit 140. The mechanism by which the display unit 131 displays virtual objects in a see-through manner is not particularly limited.
The speaker 132 is composed of a single sound-generating element or an array of sound-generating elements, and is installed, for example, on the AR glasses. The speaker 132 outputs, for example, audio related to the virtual objects displayed on the display unit 131, but it may also output other audio signals.
The communication unit 134 has wireless communication functions such as Wi-Fi (registered trademark) and Bluetooth (registered trademark). The communication unit 134 mainly performs communication operations for exchanging data between the control unit 140 and an external system (not shown).
The control unit 140 is installed in the AR glasses, or is arranged, together with the storage unit 150 and a driving power source such as a battery, in a device separate from the AR glasses (such as a smartphone). The control unit 140 executes various programs read from the storage unit 150 to perform various processes.
FIG. 6 schematically shows an example of the functional configuration of the control unit 140. In the illustrated example, the control unit 140 includes an application execution unit 601, a head position/posture detection unit 602, an output control unit 603, a finger position/posture detection unit 604, and a finger gesture detection unit 605. These functional modules are realized by the control unit 140 executing various programs read from the storage unit 150. However, FIG. 6 shows only the minimum functional modules necessary to realize the present disclosure, and the control unit 140 may further include other functional modules.
The application execution unit 601 executes application programs, including the AR application, in the execution environment provided by the OS. The application execution unit 601 may execute a plurality of application programs in parallel. The AR application is, for example, an application such as a video player or a 3D object viewer; it superimposes virtual objects in the field of view of the user wearing the AR glasses on the head (see FIG. 2), emphasizes or attenuates specific real objects, or deletes specific real objects so that they appear not to exist. The application execution unit 601 also controls the display of the AR application (virtual objects) using the display unit 131. The virtual objects generated by the AR application are arranged all around the user. FIG. 7 schematically shows how a plurality of virtual objects 701, 702, 703, ... are arranged in the surroundings 700 of the user wearing the AR glasses on the head. The application execution unit 601 arranges the virtual objects 701, 702, 703, ... around the user with reference to the position of the user's head or the position of the center of gravity of the user's body estimated based on the sensor information from the second sensor unit 120.
The head position/posture detection unit 602 detects the position and posture of the user's head based on the sensor signals of the gyro sensor 124, the acceleration sensor 125, and the orientation sensor 126 included in the second sensor unit 120 mounted on the AR glasses, and further recognizes the user's gaze direction or visual field range.
The output control unit 603 controls the output of the display unit 131, the speaker 132, and the vibration presentation unit 133 based on the execution results of application programs such as the AR application. For example, the output control unit 603 identifies the user's visual field range based on the detection result of the head position/posture detection unit 602, and controls the display of virtual objects by the display unit 131 so that the virtual objects arranged in the visual field range can be observed by the user through the AR glasses, that is, so that the display follows the movement of the user's head.
The mechanism by which the AR glasses display virtual objects so as to follow the movement of the user's head will be described with reference to FIG. 8. In FIG. 8, the depth direction of the user's line of sight is the zw axis, the horizontal direction is the yw axis, and the vertical direction is the xw axis, and the origin of the user's reference frame xwywzw is the user's viewpoint position. Roll θz corresponds to movement of the user's head around the zw axis, tilt θy to movement around the yw axis, and pan θx to movement around the xw axis. The head position/posture detection unit 602 detects posture information consisting of the roll, tilt, and pan movements of the user's head (θz, θy, θx) and the translation of the head, based on the sensor signals of the gyro sensor 124, the acceleration sensor 125, and the orientation sensor 126. The output control unit 603 then moves the display angle of view of the display unit 131 within the real space in which the virtual objects are arranged (see, for example, FIG. 7) so as to follow the posture of the user's head, and displays on the display unit 131 the images of the virtual objects present within that display angle of view. Specifically, the display angle of view is moved so as to cancel the movement of the user's head, for example by rotating a region 802-1 according to the roll component of the head movement, moving a region 802-2 according to the tilt component, and moving a region 802-3 according to the pan component. As a result, the display unit 131 displays the virtual objects arranged within the display angle of view that has moved following the position and posture of the user's head, and the user can observe, through the AR glasses, the real space on which the virtual objects are superimposed.
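The following Python sketch illustrates one way such head-movement cancellation can be expressed: a world-anchored object position is transformed by the inverse head rotation into the display frame and tested against a simple field-of-view limit. The rotation convention, the function names, and the 20-degree half field of view are assumptions made for illustration only, not the specific implementation of the output control unit 603.

```python
import numpy as np

def head_rotation(roll_z, tilt_y, pan_x):
    """Head orientation built from roll (about zw), tilt (about yw), and pan (about xw), in radians."""
    cz, sz = np.cos(roll_z), np.sin(roll_z)
    cy, sy = np.cos(tilt_y), np.sin(tilt_y)
    cx, sx = np.cos(pan_x), np.sin(pan_x)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def to_display_frame(object_pos_world, head_pos_world, R_head):
    """Express a world-anchored object position in the head (display) frame.

    Applying the inverse head rotation is what cancels the head movement,
    so the object appears fixed in the world as seen through the glasses.
    """
    return R_head.T @ (np.asarray(object_pos_world) - np.asarray(head_pos_world))

def in_display_view(p_display, half_fov_deg=20.0):
    """True if the object lies within a simple symmetric field of view about the zw (depth) axis."""
    depth = p_display[2]
    if depth <= 0:  # behind the viewpoint
        return False
    angle = np.degrees(np.arctan2(np.linalg.norm(p_display[:2]), depth))
    return angle <= half_fov_deg

# A virtual object 2 m in front of the user stays in view after a small head pan.
R = head_rotation(0.0, 0.0, np.radians(10.0))
p = to_display_frame([0.0, 0.0, 2.0], [0.0, 0.0, 0.0], R)
print(in_display_view(p))  # True
```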
Referring again to FIG. 6, the description of the functional configuration of the control unit 140 continues.
The finger position/posture detection unit 604 detects the position and posture of the hand and fingers of the user wearing the AR glasses based on the recognition result of the image captured by the outward camera 121 or on the detection signals of the first sensor unit 110. The finger gesture detection unit 605 detects finger gestures of the user wearing the AR glasses based on the recognition result of the image captured by the outward camera 121 or on the detection signals of the first sensor unit 110. Finger gestures here include the shape of the fingers, specifically the angles of the third and second joints of the index finger and whether the tips of the thumb and index finger are in contact.
In the present embodiment, the finger position/posture detection unit 604 and the finger gesture detection unit 605 mainly use the position and posture information from the first sensor unit 110 attached to the user's hand (the gyro sensor 111, the acceleration sensor 112, and the orientation sensor 113) together with constraints on the positions and postures that fingers can take, to detect finger postures and finger gestures with higher accuracy. For example, when the user moves a hand to the back side of the body, such as behind the back or the hips, the fingers cannot be detected by image recognition from the head, but the position and posture of the fingers can still be detected with high accuracy by using the sensor signals of the first sensor unit 110 attached to the hand. In contrast, when the outward camera 121 is used to detect finger positions, postures, and gestures, high-accuracy detection may fail due to occlusion and the like.
B. Presenting a Guide on How to Grasp a Virtual Object
 The AR system 100 according to the present disclosure presents the user with a guide on how to grasp a virtual object, so that the user holds the virtual object in a way that does not deviate from how objects are held in the real world. Specifically, the finger position/posture detection unit 604 detects the position and posture of the user's hand as it tries to grasp the virtual object. Based on the positional relationship between the detected position and posture of the user's hand and the virtual object arranged in the real space, the application execution unit 601 performs processing for presenting a grasping-method guide around the hand according to the distance between the hand and the virtual object. The output control unit 603 then outputs to the display unit 131 (or the AR glasses) a virtual object that guides the grasping method around the hand.
Here, three states are defined according to the distance between the user's hand and the virtual object: "approach", "contact", and "penetration". FIG. 9 shows these three states. "Approach" is the state in which the shortest distance between the user's hand and the virtual object is equal to or less than a predetermined value. "Contact" is the state in which the shortest distance between the user's hand and the virtual object is zero. "Penetration" is the state in which the user's hand intrudes into the region of the virtual object.
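A minimal sketch of this three-state classification, assuming a signed shortest distance (positive outside the object, negative inside) and illustrative threshold values, might look as follows; the enum and function names are hypothetical.

```python
from enum import Enum

class HandObjectState(Enum):
    FAR = "far"
    APPROACH = "approach"
    CONTACT = "contact"
    PENETRATION = "penetration"

def classify_hand_object_state(signed_distance_m, approach_threshold_m=0.10,
                               contact_tolerance_m=0.005):
    """Classify the hand/virtual-object relation from a signed shortest distance.

    signed_distance_m: positive outside the object, zero on its surface,
    negative when the hand intrudes into the object's region.
    The thresholds are illustrative values, not taken from the disclosure.
    """
    if signed_distance_m < -contact_tolerance_m:
        return HandObjectState.PENETRATION
    if abs(signed_distance_m) <= contact_tolerance_m:
        return HandObjectState.CONTACT
    if signed_distance_m <= approach_threshold_m:
        return HandObjectState.APPROACH
    return HandObjectState.FAR

print(classify_hand_object_state(0.20))   # FAR
print(classify_hand_object_state(0.05))   # APPROACH
print(classify_hand_object_state(0.0))    # CONTACT
print(classify_hand_object_state(-0.02))  # PENETRATION
```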
In the AR system 100 according to the present disclosure, when the user's hand approaches a virtual object, a display indicating the grasping method is shown near the virtual object so that the user can understand how to grasp it. Based on the displayed grasping method, the user comes to hold the virtual object in a way that does not deviate from how objects are held in the real world, which preserves the sense of reality.
It is difficult to fully support, for virtual objects, every grasping method that is possible with real objects. For example, even when a virtual object is pinched between the thumb and the index finger, the fingers pass through it, so no pinching force can be applied to the virtual object. Because it is difficult for the user to keep the pinching thumb and index finger at a stable spacing, the user cannot pinch and hold a virtual object in exactly the same manner as in the real world. The AR system 100 can also simplify its processing by limiting the available grasping methods, in which case the user may not be able to use an arbitrary grasping method.
If the system attempts to guide the grasping of virtual objects in exactly the same way as in real space, there are situations that the AR system 100 cannot handle. As a result, when the user wants to grasp a virtual object observed through the AR glasses, it becomes difficult to understand which method to use and whether the virtual object can be grasped at all.
Therefore, in the AR system 100 according to the present disclosure, when the user's hand approaches a virtual object, a UI that guides how to grasp the virtual object is displayed near the virtual object (or near the hand), presenting the user with information that guides the grasping methods the AR system 100 can handle. The grasping-method guide is displayed using the display unit 131, for example in the form of a UI, but text information such as a text message or a voice announcement may be provided as well. Guided by the UI that guides the grasping method, the user can therefore grasp the virtual object easily and without hesitation.
As the types of guides for how to grasp a virtual object, the following (1) to (3) are assumed. Of course, the user may be guided on how to grasp the virtual object by methods other than these.
(1) Showing the state in which a hand is grasping the virtual object.
(2) Showing the positions at which the hand should grasp the virtual object.
(3) Showing, at the position of the hand, a virtual hand performing the grasping motion.
For example, the application execution unit 601 that generates the virtual object may also use the display unit 131 to display the UI that guides how to grasp the virtual object.
FIG. 10 shows one display example of a UI that guides the grasping method according to guide type (1) above. In the example shown in FIG. 10, when the user's hand approaches a cylindrical virtual object 1001, a hand in the state of gripping the outer circumference of the virtual object 1001 is superimposed on the virtual object 1001, indicated by the dotted lines in the figure. When the virtual object 1001 is assumed to be heavy, the gripping position on the virtual object 1001 is set in consideration of the position of its center of gravity. Guided by the UI shown in FIG. 10, the user can therefore grip the outer circumference of the virtual object 1001 easily and without hesitation.
FIG. 11 shows another display example of a UI that guides the grasping method according to guide type (1) above. In the example shown in FIG. 11, when the user's hand approaches a cylindrical virtual object 1101, a virtual mirror 1102 is placed behind the virtual object 1101, and a hand gripping the outer circumference of the mirror image 1103 of the virtual object 1101 is displayed in the mirror 1102. When the virtual object 1101 is assumed to be heavy, the gripping position on the virtual object 1101 is set in consideration of the position of its center of gravity. In the example shown in FIG. 10, the user's fingertips are hidden behind the virtual object and cannot be seen, but in the example shown in FIG. 11, the position of each finger touching the outer circumference of the mirror image 1103 can be confirmed. Guided by the UI shown in FIG. 11, the user can therefore understand more deeply how to use each finger when gripping the outer circumference of the virtual object 1101, and can grip it easily and without hesitation.
FIG. 12 shows one display example of a UI that guides the grasping method according to guide type (2) above. In the example shown in FIG. 12, when the user's hand approaches a thin cylindrical virtual object 1201, the positions 1202 and 1203 at which the thumb and index finger should pinch the outer circumference of the virtual object 1201 (a precision grip with the fingertips) are superimposed on the virtual object 1201, indicated by the dotted lines in the figure. When the virtual object 1201 is assumed to be heavy, the gripping position on the virtual object 1201 is set in consideration of the position of its center of gravity. Guided by the UI shown in FIG. 12, the user can therefore understand more deeply where to pinch the outer circumference of the virtual object 1201 with the thumb and index finger (or where to apply a precision grip), and can grasp it easily and without hesitation.
FIG. 13 shows another display example of a UI that guides the grasping method according to guide type (2) above. In the example shown in FIG. 13, when the user's hand approaches a thick cylindrical virtual object 1301, the positions 1302 to 1306 of each finger when the user grips the outer circumference of the virtual object 1301 with all fingers (a power grip) are superimposed on the virtual object 1301, indicated by the dotted lines in the figure. When the virtual object 1301 is assumed to be heavy, the gripping position on the virtual object 1301 is set in consideration of the position of its center of gravity. Guided by the UI shown in FIG. 13, the user can therefore understand more deeply where to grip the outer circumference of the virtual object 1301 with all fingers (or where to apply a power grip), and can grasp it easily and without hesitation.
FIG. 14 shows one display example of a UI that guides the grasping method according to guide type (3) above. In the example shown in FIG. 14, when the user's hand approaches a thin cylindrical virtual object 1401, an animation in which a virtual thumb and index finger, indicated by reference numbers 1402 and 1403 in the figure, are first spread apart and then closed so as to pinch the virtual object 1401 (a precision grip with the fingertips) is superimposed on the user's real thumb and index finger. Guided by the UI shown in FIG. 14, the user can therefore understand more deeply the motion of spreading the thumb and index finger and then closing them to pinch the outer circumference of the virtual object 1401 (or to apply a precision grip), and can grasp it easily and without hesitation.
FIG. 15 shows another display example of a UI that guides the grasping method according to guide type (3) above. In the example shown in FIG. 15, when the user's hand approaches a thick cylindrical virtual object 1501, the motion of a virtual hand gripping the outer circumference of the virtual object 1501 with all fingers (a power grip), indicated by reference number 1502 in the figure, is superimposed on the user's real hand. In the example shown in FIG. 15, an animation in which the wrist rotates slightly so as to follow the outer circumferential surface of the virtual object 1501 is superimposed on the real hand. Guided by the UI shown in FIG. 15, the user can therefore understand more deeply the motion of rotating the wrist to follow the outer circumference of the virtual object 1501 and then gripping it (or applying a power grip), and can grasp it easily and without hesitation.
According to the AR system 100 of the present disclosure, when the user brings a hand close to a virtual object in order to grasp it, the user can see, through the AR glasses, a UI that guides the grasping method as shown in FIGS. 10 to 15, and grasps the virtual object according to the indicated grasping method.
The application execution unit 601 generates and displays virtual objects, and also displays the UI that guides how to grasp them. The application execution unit 601 may also acquire the movements of the user's hand and fingers when the user tries to grasp a virtual object, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, and evaluate the user's grasping motion.
C. Selection of the Grasping Method
 Methods of grasping an object can be divided into two types: precision grips, in which the object is pinched between the thumb and the index finger, and power grips, in which the object is gripped using all fingers (or the entire hand). There are also further variations of grasping methods, such as intermediate grips that use the sides of the fingers and grips that do not use the thumb. In addition, to grasp an object stably with only one hand, the object must be clamped between two or more opposing surfaces of the hand, and multiple fingers may be used on a single surface.
The UI that guides the grasping method leads the user to grasp the virtual object with a predetermined grasping method. The UI differs depending on how the virtual object is to be grasped, such as whether to pinch or grip and which fingers to use, and also differs depending on the guide type. Which grasping method is selected to guide the user is determined when the AR system 100 is designed, and is assumed not to change per user or according to the user's actions. Alternatively, the grasping method to be used may be set in advance for each virtual object, again without changing per user or according to the user's actions. For example, the optimum grasping method according to the size and shape of each virtual object is set in advance for that virtual object.
However, the grasping method for the same virtual object may also be switched dynamically according to the user's actions. For example, the grasping method may be switched dynamically for each user or each virtual object using a machine learning model trained to estimate the optimum grasping method based on user attribute information such as the user's personality, habits, age, gender, and physique.
For example, it may be possible to acquire the shape of the user's hand based on the detection result of the finger gesture detection unit 605 and determine whether the user is about to pinch or to grip the virtual object. When controlling the display of the UI that guides the grasping method, the application execution unit 601 may select either the UI that guides a pinching grasp or the UI that guides a gripping grasp based on this determination result, and present it to the user through the AR glasses.
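As an illustration of such a selection, the sketch below picks one of two guide UIs from a coarse hand-shape estimate (thumb/index gap and number of extended fingers). The input values, names, and threshold are assumptions of this sketch, not outputs defined by the present disclosure.

```python
def select_grip_guide(thumb_index_gap_m, n_extended_fingers,
                      pinch_gap_threshold_m=0.06):
    """Choose which grasping-method guide UI to show from a coarse hand-shape estimate.

    thumb_index_gap_m and n_extended_fingers are assumed outputs of the finger
    gesture detection (hypothetical preprocessing); the threshold is illustrative.
    """
    if n_extended_fingers <= 2 and thumb_index_gap_m < pinch_gap_threshold_m:
        return "precision_grip_guide"   # thumb/index pinch UI
    return "power_grip_guide"           # whole-hand grip UI

print(select_grip_guide(0.04, 2))  # precision_grip_guide
print(select_grip_guide(0.09, 5))  # power_grip_guide
```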
C-1. Selecting the Grasping Method According to the Approach Direction
 The optimum grasping method may differ depending on the direction from which the user's hand approaches the virtual object. In such cases, the UI that guides the grasping method may be switched dynamically according to the approach direction. The application execution unit 601 detects the direction from which the user's hand approaches the virtual object based on the detection result of the finger position/posture detection unit 604, determines the optimum grasping method according to that direction, and presents the user with the UI guiding the grasping method selected based on the determination result.
For example, for a virtual object 1601 shaped like a bottle with a long, narrow neck, as shown in FIG. 16, the optimum grasping method differs depending on the part to be grasped. When the user's hand approaches the thick body in the lower half of the virtual object 1601, as indicated by reference number 1602, a gripping method is suitable. When the user's hand approaches the long, narrow neck in the upper half, as indicated by reference numbers 1603 and 1604, a pinching method is suitable. Even when the hand approaches the narrow neck, the optimum way of pinching differs between approaching the side of the neck, as indicated by reference number 1603, and approaching the mouth of the bottle from above, as indicated by reference number 1604.
Based on the detection result of the finger position/posture detection unit 604, the application execution unit 601 can determine from which direction the user's hand is approaching the currently displayed virtual object. Based on this determination result, the application execution unit 601 selects the UI that guides the grasping method suited to the direction from which the user's hand approaches the virtual object, and presents it to the user through the AR glasses.
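The bottle example of FIG. 16 could be approximated, for illustration only, by projecting the hand position onto the object's axis and switching the guide by region, as in the hedged sketch below; the region boundaries, names, and coordinates are assumptions and not part of the disclosed system.

```python
import numpy as np

def select_guide_for_bottle(hand_pos, bottle_base, neck_top, neck_start_ratio=0.5):
    """Illustrative region/direction test for a bottle-shaped object (FIG. 16-style example).

    hand_pos, bottle_base, neck_top: 3D points; the bottle axis is assumed to run
    from bottle_base up to neck_top. All thresholds are assumptions of this sketch.
    """
    hand = np.asarray(hand_pos, dtype=float)
    base = np.asarray(bottle_base, dtype=float)
    top = np.asarray(neck_top, dtype=float)
    axis = top - base
    # Height of the hand along the bottle axis, normalized to [0, 1].
    t = np.clip(np.dot(hand - base, axis) / np.dot(axis, axis), 0.0, 1.0)
    if t < neck_start_ratio:
        return "power_grip_guide"          # approaching the thick body (1602)
    if t > 0.95:
        return "pinch_from_above_guide"    # toward the bottle mouth (1604)
    return "pinch_from_side_guide"         # toward the side of the neck (1603)

print(select_guide_for_bottle([0.10, 0.0, 0.10], [0, 0, 0], [0, 0, 0.3]))  # power_grip_guide
print(select_guide_for_bottle([0.05, 0.0, 0.22], [0, 0, 0], [0, 0, 0.3]))  # pinch_from_side_guide
print(select_guide_for_bottle([0.0, 0.0, 0.31], [0, 0, 0], [0, 0, 0.3]))   # pinch_from_above_guide
```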
C-2. Selecting the Grasping Method According to the User
 Even for the same virtual object, the optimum grasping method may differ depending on the user's age (for example, a small child or an elderly person), physical impairments, and the user's everyday way of grasping things. The optimum guide type may also differ depending on the user's age, race, physical impairments, and everyday grasping habits. Therefore, the UI that guides the grasping method for the same virtual object may be switched for each user.
For example, when grasping a cylindrical virtual object, an adult man may be able to hold it stably with a pinching grasp, whereas a small child with short fingers may only be able to hold it stably by firmly gripping it with all fingers.
Similarly, even for a virtual object that an able-bodied user can hold stably with a pinching grasp, a user with injured fingers may only be able to hold it stably by gripping it with all fingers. For a user with a missing finger, when a UI showing a grasping motion with a virtual hand is used, the UI needs to be changed to one showing a motion that uses only the available fingers. Even able-bodied users may differ in their everyday grasping methods depending on their habits and preferences.
The user may manually enter information on the user's own attributes, such as age, race, physical impairments, and everyday grasping method, into the AR system 100. Alternatively, the AR system 100 may acquire information on user attributes as user registration information when the user starts using the AR system 100. User attributes may also be estimated from the sensor information of the first sensor unit 110 and the second sensor unit 120 using a machine learning model. For this purpose, the first sensor unit 110 and the second sensor unit 120 may be equipped with sensors other than those shown in FIG. 1, such as biometric sensors.
The application execution unit 601 then determines the optimum grasping method based on the user's attribute information, and presents the UI guiding the grasping method selected based on the determination result to the user through the AR glasses.
D. Presenting a Guide on How to Grasp a Real Object
 So far, embodiments have been described in which the AR system 100 presents, on the AR glasses, a UI that guides the grasping method so that the user can understand how to grasp a virtual object when the user's hand approaches it. The present disclosure can be applied not only when the user grasps a virtual object but also when the user grasps a real object, that is, an object existing in the real space.
The application execution unit 601 can identify real objects in the user's field of view based on the image recognition result of the image captured by the outward camera 121. When the application execution unit 601 detects that the user's hand has approached a real object, based on the detection result of the finger position/posture detection unit 604 or on the image recognition result of the user's hand in the image captured by the outward camera 121, it selects the optimum grasping method and displays a UI that guides how to grasp that real object on the AR glasses. The application execution unit 601 may also dynamically switch the UI that guides the grasping method based on the direction from which the user's hand approaches the real object, the user's attributes, and the like.
For example, the application execution unit 601 determines the optimum grasping method so that the real object can be grasped while avoiding places that are dangerous to hold, and presents the UI guiding the grasping method selected based on the determination result to the user through the AR glasses. Places that are dangerous to hold include places where the object cannot be gripped stably and places that are too weak and would break if gripped. The application execution unit 601 estimates the places that are dangerous to hold on the target real object based on the result of object recognition from the image captured by the outward camera 121. Dangerous places to hold may be defined in advance for each category of object based on empirical rules and the like, and the application execution unit 601 may determine the optimum grasping method based on these definitions. Alternatively, the optimum grasping method may be determined using a machine learning model trained to estimate dangerous places to hold based on the recognized object's category, size, shape, and the like.
E. Trigger for Displaying the Guide
 In the AR system 100 according to the present disclosure, when the user's hand approaches a virtual object, a UI indicating how to grasp the virtual object is displayed on the AR glasses. However, the user may not have brought the hand close in order to grasp the virtual object; the hand may simply happen to be near it. If a UI guiding the grasping method is displayed on the AR glasses even though the user has no intention of grasping the virtual object, the guide UI is unnecessary for the user and may obstruct the view and become a nuisance.
Therefore, when the user's hand approaches a virtual object, the UI guiding the grasping method may be displayed on the AR glasses only on the condition that the user is looking at the target virtual object, or that the user is interested in the target virtual object.
The application execution unit 601 can, for example, detect the user's gaze direction from the image captured by the inward camera 122 and determine whether the user is looking at the virtual object. Alternatively, the application execution unit 601 may estimate the user's degree of interest in the virtual object from the sensor information of the first sensor unit 110 and the second sensor unit 120 using a machine learning model. For this purpose, the first sensor unit 110 and the second sensor unit 120 may be equipped with sensors other than those shown in FIG. 1, such as biometric sensors. When the condition that the user is looking at, or is interested in, the target virtual object is satisfied and the application execution unit 601 detects that the user's hand is approaching the virtual object, it displays the UI guiding how to grasp that virtual object on the AR glasses.
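A minimal sketch of this gaze-gated trigger, assuming a gaze direction vector, an eye position, and illustrative angle/distance thresholds, is shown below; the helper name and threshold values are hypothetical.

```python
import numpy as np

def should_show_grip_guide(gaze_dir, eye_pos, object_pos, hand_object_distance_m,
                           gaze_angle_threshold_deg=10.0, approach_threshold_m=0.10):
    """Show the guide only when the hand is near the object AND the gaze is on it.

    gaze_dir, eye_pos, object_pos are 3D vectors/points; the thresholds are
    illustrative values chosen for this sketch, not taken from the disclosure.
    """
    gaze = np.asarray(gaze_dir, dtype=float)
    to_object = np.asarray(object_pos, dtype=float) - np.asarray(eye_pos, dtype=float)
    cos_angle = np.dot(gaze, to_object) / (np.linalg.norm(gaze) * np.linalg.norm(to_object))
    gaze_on_object = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= gaze_angle_threshold_deg
    hand_is_near = hand_object_distance_m <= approach_threshold_m
    return gaze_on_object and hand_is_near

# Hand is close and the gaze points almost straight at the object -> show the guide.
print(should_show_grip_guide([0, 0, 1], [0, 0, 0], [0.05, 0.0, 1.0], 0.08))  # True
# Hand is close but the user is looking elsewhere -> do not show it.
print(should_show_grip_guide([1, 0, 0], [0, 0, 0], [0.05, 0.0, 1.0], 0.08))  # False
```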
F. Switching the Guide Display at Contact
 Because a virtual object does not actually exist, even if the user touches it with a pinching or gripping motion, no realistic tactile sensation is obtained and the hand passes through the virtual object.
For example, an exoskeleton-type force feedback device could be worn on the hand to give the user a realistic tactile sensation according to the contact state with the virtual object. However, such a device raises issues of purchase cost and installation space.
Therefore, in the present disclosure, when the user's finger touches the virtual object, the display of the UI that guides the grasping method is switched to notify or feed back to the user that contact has occurred.
The application execution unit 601 can acquire the movements of the user's hand and fingers when the user tries to grasp the virtual object, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, and determine the contact state between the user's fingers and the virtual object. Based on the determination result, the application execution unit 601 switches the display of the UI that guides the grasping method to notify or feed back the contact to the user.
FIG. 17 shows an example in which, when the user's hand touches the outer circumference of a cylindrical virtual object 1701, a UI is displayed that highlights the contact points between the user's thumb and index finger and the virtual object 1701.
When the application execution unit 601 detects, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, that the user's thumb and index finger are in contact with the virtual object 1701 and detects the contact points 1702 and 1703 between these fingers and the virtual object 1701, it switches to a UI that highlights the contact points 1702 and 1703, notifying or feeding back the contact to the user. Because the UI has switched to showing the highlights, the user can accurately clamp the virtual object 1701 between the thumb and index finger at the appropriate spacing (a spacing matching the width of the virtual object 1701).
Even if the guiding UI is switched as shown in FIG. 17 to notify the user that a finger has touched the virtual object, the virtual object does not actually exist, so the user receives no realistic tactile sensation, may simply continue the grasping motion, and the fingers may penetrate further into the virtual object.
Therefore, in the present disclosure, the UI that guides the grasping method is switched in stages according to how far the user's fingers have penetrated into the virtual object, thereby notifying or feeding back the degree of penetration to the user.
FIG. 18 shows an example in which, after the user's hand touches the outer circumference of a cylindrical virtual object 1801 and the user then narrows the gap between the thumb and index finger so that they penetrate into the virtual object 1801, a UI is displayed that emphasizes the highlights indicating the contact points between the user's thumb and index finger and the virtual object 1801.
When the application execution unit 601 detects, based on the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605, that the user's thumb and index finger have penetrated into the virtual object 1801, it switches to a UI that emphasizes the highlights indicating the contact points 1802 and 1803 between the thumb and index finger and the virtual object 1801, notifying or feeding back to the user that the fingers have penetrated into the virtual object 1801. The application execution unit 601 also switches in stages to UIs with stronger highlight emphasis according to the degree of penetration. As the highlights shown by the guiding UI change in stages, the user can visually understand that the thumb and index finger are pressing too hard into the virtual object 1801, and can correct the grasping motion.
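One possible way to stage the highlight emphasis, assuming a scalar penetration depth and illustrative stage boundaries, is sketched below; the resulting level would then be mapped to increasingly strong highlight renderings on the display side.

```python
def highlight_level(penetration_depth_m, stage_depths_m=(0.0, 0.005, 0.015, 0.03)):
    """Map how deep a finger has sunk into the virtual object to a staged highlight level.

    Level 0 means mere contact; higher levels mean stronger highlight emphasis.
    The stage boundaries are illustrative values, not taken from the disclosure.
    """
    level = 0
    for stage, depth in enumerate(stage_depths_m):
        if penetration_depth_m >= depth:
            level = stage
    return level

print(highlight_level(0.0))    # 0: just touching
print(highlight_level(0.01))   # 1: noticeable penetration, stronger highlight
print(highlight_level(0.05))   # 3: deep penetration, strongest emphasis
```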
G. Processing Procedure for Presenting the UI That Guides the Grasping Method
 As described in Sections B to F above, the AR system 100 according to the present disclosure presents the user with a guide on how to grasp a virtual object, so that the user holds the virtual object in a way that does not deviate from how objects are held in the real world.
FIG. 19 shows, in the form of a flowchart, the processing procedure for presenting the user with a UI that guides how to grasp a virtual object in the AR system 100. This processing procedure is carried out mainly by, for example, the application execution unit 601.
First, the application execution unit 601 acquires the position of the user's hand based on the detection result of the finger position/posture detection unit 604 (step S1901). The application execution unit 601 constantly monitors the relative position between the displayed virtual object and the hand of the user who is about to grasp it.
The application execution unit 601 then checks whether the user's hand has approached the virtual object, that is, whether the shortest distance between the user's hand and the virtual object has become equal to or less than a predetermined value (step S1902).
Note that the user may not have brought the hand close in order to grasp the virtual object; the hand may simply happen to be near it. Therefore, in step S1902, the application execution unit 601 may check whether the user's hand has approached the virtual object with the additional condition that the user is looking at, or is interested in, the target virtual object.
When the shortest distance between the user's hand and the virtual object becomes equal to or less than the predetermined value (Yes in step S1902), the application execution unit 601 determines the UI to be used to guide the user on how to grasp the virtual object (step S1903).
In step S1903, the application execution unit 601 selects the type of guide for the grasping method. The application execution unit 601 also determines the UI for guiding, with the selected guide type, either the grasping method set in advance for the virtual object or the grasping method selected based on user attributes such as the user's personality, habits, age, gender, and physique. The application execution unit 601 may also select the grasping method according to the direction from which the user's hand approaches the virtual object and determine the UI that guides that grasping method.
Based on the result determined in step S1903, the application execution unit 601 uses the display unit 131 to display, near the virtual object, the UI that guides how to grasp the virtual object that the user's hand is approaching (step S1904).
After that, the application execution unit 601 acquires the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605 for the movements of the user's hand and fingers as the user tries to grasp the virtual object (step S1905), and determines the contact state between the user's fingers and the virtual object.
When the user's fingers touch the virtual object (Yes in step S1906), the application execution unit 601 switches the display of the UI that guides the grasping method to notify or feed back the contact to the user (step S1907).
In step S1907, the application execution unit 601 uses the display unit 131 to display, for example, a UI that highlights the contact points between the user's fingers and the virtual object. When the user's fingers penetrate into the virtual object, the application execution unit 601 switches the highlight display in stages according to the degree of penetration.
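The overall flow of FIG. 19 can be summarized, purely as a sketch, by a per-frame decision function such as the one below. The `HandSample` structure, the gaze gating, the tolerance value, and the returned action strings are stand-ins invented for this illustration rather than elements of the disclosed procedure.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    distance_to_object_m: float   # shortest hand/object distance (negative = penetration)
    looking_at_object: bool

def grip_guide_step(sample: HandSample, approach_threshold_m=0.10, contact_tol_m=0.005):
    """Return the UI action for one iteration of the guide-presentation loop (FIG. 19 sketch)."""
    # S1901/S1902: hand position acquired upstream; check approach (optionally gaze-gated).
    if not (sample.looking_at_object and sample.distance_to_object_m <= approach_threshold_m):
        return "no_guide"
    # S1903/S1904: decide and show the grasping-method guide UI while only approaching.
    if sample.distance_to_object_m > contact_tol_m:
        return "show_grasp_guide"
    # S1905-S1907: fingers touch or penetrate the object -> switch to contact feedback.
    if sample.distance_to_object_m >= -contact_tol_m:
        return "highlight_contact_points"
    return "emphasize_highlight_by_penetration"

print(grip_guide_step(HandSample(0.30, True)))   # no_guide
print(grip_guide_step(HandSample(0.05, True)))   # show_grasp_guide
print(grip_guide_step(HandSample(0.0, True)))    # highlight_contact_points
print(grip_guide_step(HandSample(-0.01, True)))  # emphasize_highlight_by_penetration
```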
H. Display of a virtual hand at the time of contact
As described in Section F above, the contact state can be notified or fed back to the user by switching the display of the UI that guides the grasping method according to the contact state between the user's fingers and the virtual object. However, since the virtual object does not actually exist, the user still receives no realistic tactile sensation.
Therefore, in the present disclosure, virtual fingers are displayed when the user's fingers touch the virtual object. Specifically, when the user is trying to grasp two opposing faces of a virtual object, a first virtual finger and a second virtual finger are displayed for the first finger approaching one face and the second finger approaching the other face, respectively. At that time, each virtual finger is displayed so that the opening amount between the first and second virtual fingers is wider than the actual opening between the first and second fingers, and so that, when the actual first and second fingers touch each other, the first and second virtual fingers touch the virtual object and enter a grasping state.
FIG. 20 shows a display example of the virtual fingers when the user's fingers touch the virtual object. FIG. 20 is a display example in which the user pinches and grasps two opposing faces of the virtual object 2001 between the thumb and the index finger; the position 2002 of the virtual thumb and the position 2003 of the virtual index finger are indicated by dotted lines, while the actual thumb and index finger are drawn with solid lines. As can be seen from FIG. 20, the opening amount between the virtual thumb 2002 and the virtual index finger 2003 is wider than the actual opening between the thumb and the index finger. Then, when the actual thumb and index finger come into contact with each other, the virtual thumb 2002 and the virtual index finger 2003 come into contact with the virtual object 2001 and enter a grasping state.
The application execution unit 601 can acquire the movements of the thumb and index finger when the user tries to grasp the virtual object 2001 on the basis of the detection results of the finger position/posture detection unit 604 and the finger gesture detection unit 605. The application execution unit 601 then displays the virtual thumb 2002 and the virtual index finger 2003 for the thumb approaching one face of the virtual object 2001 and the index finger approaching the other face, respectively.
At that time, the application execution unit 601 makes the opening amount between the virtual thumb 2002 and the virtual index finger 2003 wider than the actual opening between the thumb and the index finger. Therefore, when the actual thumb and index finger come into contact with each other, the user visually recognizes, in the virtual space displayed by the display unit 131, that the virtual thumb 2002 and the virtual index finger 2003 touch the virtual object 2001 and enter a grasping state. At this moment a contact force acts between the actual thumb and index finger, but the user perceives it as the tactile sensation that the thumb and index finger receive from the virtual object 2001.
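As a purely illustrative sketch of this widened-finger placement (the function name, the simplified coordinate handling, and the example values are assumptions of this sketch), the following Python code offsets each rendered fingertip outward along the pinch axis by half the object thickness, so that the rendered opening exceeds the real opening by one object thickness.

# Sketch: place virtual fingertips so that closing the real fingers corresponds to
# the virtual fingers just grasping the object (cf. FIG. 20).  Illustrative only.

def virtual_fingertips(real_thumb, real_index, object_thickness):
    """real_thumb/real_index: 3D fingertip positions; returns widened virtual tips."""
    # Pinch axis: unit vector from the index tip towards the thumb tip.
    axis = [t - i for t, i in zip(real_thumb, real_index)]
    norm = sum(c * c for c in axis) ** 0.5 or 1.0
    axis = [c / norm for c in axis]

    half = object_thickness / 2.0
    # Push each virtual fingertip outward by half the thickness along the pinch
    # axis, widening the rendered opening by one object thickness.
    virtual_thumb = [t + half * a for t, a in zip(real_thumb, axis)]
    virtual_index = [i - half * a for i, a in zip(real_index, axis)]
    return virtual_thumb, virtual_index

# Real fingers 2 cm apart, object 3 cm thick: the virtual opening becomes 5 cm.
vt, vi = virtual_fingertips((0.0, 0.0, 0.02), (0.0, 0.0, 0.0), 0.03)
print(vt, vi)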
Since the virtual object 2001 does not actually exist, even if the user performs a pinching or grasping gesture on the virtual object 2001, the hand passes through it and the user obtains no realistic tactile sensation. Therefore, the application execution unit 601 displays the virtual thumb 2002 and the virtual index finger 2003 near the user's actual thumb and index finger, at positions where they appear to be just pinching the virtual object 2001. This allows the user to feel the contact between the thumb and index finger. Although the thumb and index finger are actually touching each other, the user sees, in the virtual space displayed by the display unit 131 (or seen through the AR glasses), an image in which the virtual thumb 2002 and the virtual index finger 2003 are just pinching the virtual object 2001, and therefore obtains the sensation of actually pinching and holding the virtual object 2001.
In short, the present disclosure exploits the illusion that arises when there is a mismatch between the body movement seen visually and the body movement felt by oneself, in which the visual information becomes dominant and a pseudo haptic sensation is produced, that is, "visuo-haptic interaction", to give the user the tactile sensation of grasping a virtual object.
Further, as shown in FIG. 20, in order for the user to perceive as a natural and smooth motion the grasping method in which the actual fingers touch each other when the virtual fingers touch the virtual object, the display of the virtual fingers, with an opening wider than the actual finger opening, is started at the point when the user's hand approaches the virtual object.
FIG. 21 shows a display example of the virtual fingers when the user's fingers approach the virtual object. FIG. 21 is a display example in which the user brings the hand close to the virtual object 2101 with the intention of pinching and grasping two opposing faces of the virtual object 2101 between the thumb and the index finger; the position 2102 of the virtual thumb and the position 2103 of the virtual index finger are indicated by dotted lines, while the actual thumb and index finger are drawn with solid lines. As can be seen from FIG. 21, the opening amount between the virtual thumb 2102 and the virtual index finger 2103 is wider than the actual opening between the thumb and the index finger.
The application execution unit 601 acquires the position of the user's hand on the basis of the detection result of the finger position/posture detection unit 604, and detects the state in which the user's hand has approached the virtual object 2101 when the shortest distance between the hand and the virtual object 2101 becomes equal to or less than a predetermined value. The application execution unit 601 then displays the virtual thumb 2102 and the virtual index finger 2103 for the thumb approaching one face of the virtual object 2101 and the index finger approaching the other face, respectively.
At that time, the application execution unit 601 makes the opening amount between the virtual thumb 2102 and the virtual index finger 2103 wider than the actual opening between the thumb and the index finger. After that, when the actual thumb and index finger come into contact with each other, the virtual thumb 2102 and the virtual index finger 2103 come into contact with the virtual object 2101 in the virtual space displayed by the display unit 131 and enter a grasping state. The application execution unit 601 displays a natural and smooth motion in which the virtual thumb 2102 and the virtual index finger 2103 move to grasp the virtual object 2101 during the period from when the user's hand enters the approach state until it shifts to the contact state. Therefore, guided by the UI that displays the virtual thumb 2102 and the virtual index finger 2103, the user can easily grasp the virtual object 2101 without hesitation.
The opening amount of the virtual fingers displayed when the user's hand approaches the virtual object is set on the basis of the thickness of the virtual object to be grasped. By making the virtual finger opening wider than the actual finger opening by the thickness of the virtual object, the virtual fingers end up exactly at the positions sandwiching the virtual object when the actual fingers are closed.
However, when trying to grasp a thick virtual object, widening the virtual finger opening by the full thickness of the object would spread the virtual fingers to an unnatural, physically implausible extent. Therefore, the virtual finger opening displayed when the user's hand approaches the virtual object does not necessarily have to match the thickness of the virtual object to be grasped. The virtual fingers are spread only slightly wider than the actual finger positions, and the closing width of the virtual fingers is changed by a smaller amount than the closing width of the actual fingers.
FIG. 22 shows another display example of the virtual fingers when the user's fingers approach the virtual object. As in the display example shown in FIG. 21, FIG. 22 is a display example in which the user brings the hand close to the virtual object 2201 with the intention of pinching and grasping two opposing faces of the virtual object 2201 between the thumb and the index finger; the position 2202 of the virtual thumb and the position 2203 of the virtual index finger are indicated by dotted lines, while the actual thumb and index finger are drawn with solid lines. As can be seen from FIG. 22, the opening amount between the virtual thumb 2202 and the virtual index finger 2203 is wider than the actual opening between the thumb and the index finger. However, the sum d1 + d2 of the difference d1 in opening between the virtual thumb 2202 and the actual thumb and the difference d2 in opening between the virtual index finger 2203 and the actual index finger is smaller than the thickness d of the virtual object 2201 to be grasped; that is, d > d1 + d2 holds.
The application execution unit 601 acquires the position of the user's hand on the basis of the detection result of the finger position/posture detection unit 604, and detects the state in which the user's hand has approached the virtual object 2201 when the shortest distance between the hand and the virtual object 2201 becomes equal to or less than a predetermined value. The application execution unit 601 then displays the virtual thumb 2202 and the virtual index finger 2203 for the thumb approaching one face of the virtual object 2201 and the index finger approaching the other face, respectively.
At that time, the application execution unit 601 makes the opening amount between the virtual thumb 2202 and the virtual index finger 2203 wider than the actual opening between the thumb and the index finger, but keeps the widening amount (d1 + d2) smaller than the thickness d of the virtual object 2201. After that, when the actual thumb and index finger come into contact with each other, the virtual thumb 2202 and the virtual index finger 2203 come into contact with the virtual object 2201 in the virtual space displayed by the display unit 131 and enter a grasping state. The application execution unit 601 displays a natural and smooth motion in which the virtual thumb 2202 and the virtual index finger 2203 move to grasp the virtual object 2201 during the period from when the user's hand enters the approach state until it shifts to the contact state. At this time, the application execution unit 601 changes the closing width of the virtual thumb 2202 and the virtual index finger 2203 by a smaller amount than the closing width of the actual thumb and index finger. For example, the movement is performed at a different scale, such as narrowing the opening between the virtual thumb 2202 and the virtual index finger 2203 by 0.2 cm when the actual opening between the thumb and the index finger narrows by 1 cm. As a result, the virtual thumb 2202 and the virtual index finger 2203 can be placed in positions that just sandwich the virtual object 2201 when the actual thumb and index finger close, without having to spread them to an implausibly unnatural extent.
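A minimal Python sketch of this scaled mapping is shown below, assuming a simple linear gain chosen so that the displayed opening equals the object thickness d exactly when the real fingers meet; the names and numbers are illustrative and not taken from the disclosure.

# Sketch of the scaled virtual-finger opening of FIG. 22 (illustrative names/values).
# The virtual opening starts slightly wider than the real one (by d1 + d2 < d) and
# closes more slowly, so that it equals the object thickness d when the real
# opening reaches zero.

def virtual_opening(real_opening, initial_real_opening, extra_widening, thickness):
    """Map the current real finger opening to the displayed virtual opening."""
    initial_virtual = initial_real_opening + extra_widening        # at approach time
    if initial_real_opening <= 0.0:
        return thickness
    # Linear gain chosen so that real_opening == 0  ->  virtual == thickness.
    gain = (initial_virtual - thickness) / initial_real_opening
    return thickness + gain * real_opening

# Example: fingers start 5 cm apart, widened by d1 + d2 = 1 cm, object is 3 cm thick.
for real in (0.05, 0.04, 0.02, 0.0):
    print(round(virtual_opening(real, 0.05, 0.01, 0.03), 4))
# 0.06, 0.054, 0.042, 0.03 -> the virtual fingers close more slowly than the real ones.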
I. Processing procedure for displaying a virtual hand
As described in Section H above, in the AR system 100 according to the present disclosure, when the user's fingers touch a virtual object, displaying virtual fingers whose opening is wider than the actual finger opening makes it possible to give the user the tactile sensation of the virtual object by exploiting visuo-haptic interaction.
FIG. 23 shows, in the form of a flowchart, a processing procedure for displaying a virtual hand to the user in the AR system 100. This processing procedure is carried out mainly by, for example, the application execution unit 601 when the user's hand approaches a virtual object.
The application execution unit 601 acquires the width of the virtual object being displayed (step S2301).
In the present embodiment, since the application execution unit 601 generates the virtual object itself, it can acquire the width of the virtual object on the basis of, for example, the setting information used when the virtual object was generated. In the case of a virtual object whose width differs depending on the position at which it is grasped, the application execution unit 601 may determine the direction in which the user's hand approaches the virtual object on the basis of the detection result of the finger position/posture detection unit 604, and acquire the width of the virtual object on the basis of that approach direction.
The application execution unit 601 also acquires the actual finger width (the distance between the thumb and the index finger) that the user uses to grasp the virtual object (step S2302).
The application execution unit 601 can acquire the actual finger width on the basis of the detection result of the finger gesture detection unit 605. The application execution unit 601 may also acquire the actual finger width on the basis of the image recognition result of the image captured by the outward camera 121.
Next, the application execution unit 601 calculates the virtual finger width on the basis of the current finger width (step S2303).
The application execution unit 601 calculates the virtual finger width so that, in the state where the actual fingers the user uses to grasp the virtual object are touching each other, the virtual fingers are exactly at the positions sandwiching the virtual object.
Then, the application execution unit 601 uses the display unit 131 to display, near the actual fingers the user uses to grasp the virtual object, a virtual finger corresponding to each actual finger (step S2304).
In the virtual space displayed by the display unit 131 (or seen through the AR glasses), the user sees an image in which the virtual fingers are just sandwiching the virtual object, and can therefore obtain the sensation of actually pinching and holding the virtual object.
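For illustration, the sketch below strings the four steps S2301 to S2304 together in Python; the helper names on the assumed app object and the optional spread cap are inventions of this sketch, not part of the disclosure.

# Sketch of the FIG. 23 procedure (S2301-S2304); names are illustrative assumptions.

def virtual_finger_width(object_width_along_grip, real_thumb_index_gap,
                         max_extra_spread=None):
    """S2303: width to render between the virtual thumb and index fingertips.

    With no cap, the virtual opening is simply the real opening plus the object
    width, so that closed real fingers correspond to virtual fingers resting on
    the two opposing faces.  An optional cap limits the extra spread for thick
    objects (the scaled display of FIG. 22 then covers the remainder).
    """
    extra = object_width_along_grip
    if max_extra_spread is not None:
        extra = min(extra, max_extra_spread)
    return real_thumb_index_gap + extra

def display_virtual_hand(app):
    # Illustrative wiring of the four steps on an assumed application object.
    width = app.object_width_along(app.hand_approach_direction())  # S2301
    real_gap = app.thumb_index_gap()                               # S2302
    virtual_gap = virtual_finger_width(width, real_gap)            # S2303
    app.render_virtual_fingers(virtual_gap)                        # S2304

print(virtual_finger_width(0.03, 0.05))                         # 0.08
print(virtual_finger_width(0.10, 0.05, max_extra_spread=0.02))  # 0.07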
J. Display timing of the virtual hand
In the above description, the display of the virtual fingers is started at the point when the user's hand approaches the virtual object. In the AR system 100, the display of the virtual fingers may be configured to start when the shortest distance between the user's hand and the virtual object falls within a predetermined value. The predetermined value may be set in advance to, for example, 50 cm.
K. Adjusting the force with which a virtual object is grasped
In the present disclosure, the opening of the virtual fingers is made wider than the actual finger opening, so that when the actual fingers touch each other, the virtual fingers are just grasping the virtual object in the virtual space (see, for example, FIG. 20). At this time, the user perceives the contact force between the fingers as a tactile sensation received from the virtual object. That is, according to the present disclosure, the tactile sensation of grasping a virtual object can be given to the user by exploiting "visuo-haptic interaction", the illusion in which, when a mismatch arises between the body movement seen visually and the body movement felt by oneself, the visual information becomes dominant and a pseudo haptic sensation is produced.
Furthermore, the magnitude of the force with which the virtual object is grasped may be adjusted by the user closing the actual fingers more strongly (pressing the thumb and index finger firmly against each other). For example, when a weight can be set for the virtual object, it is possible to realize expressions such as the virtual object slipping out of the fingers if the user merely pinches a heavy virtual object with a weak grasping force, while the virtual object can be lifted when it is pinched or grasped with a strong grasping force.
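One conceivable way to decide between "slips" and "lifted" is sketched below, assuming a simple friction-style rule with made-up coefficients; the disclosure does not specify how the grip force would be estimated or compared, so every name and number here is an assumption.

# Sketch: deciding whether a grasped virtual object is lifted or slips, based on
# an assumed friction-style rule (coefficient and names are illustrative).

GRAVITY = 9.81          # m/s^2
FRICTION_COEFF = 0.8    # assumed virtual "skin vs object" friction coefficient

def can_lift(grip_force_newtons, object_mass_kg):
    """True if the pinch force provides enough friction to hold the object's weight."""
    # Two contact faces (thumb and index finger) each contribute friction.
    max_friction = 2.0 * FRICTION_COEFF * grip_force_newtons
    return max_friction >= object_mass_kg * GRAVITY

print(can_lift(grip_force_newtons=2.0, object_mass_kg=0.5))  # False: the object slips
print(can_lift(grip_force_newtons=5.0, object_mass_kg=0.5))  # True: the object is lifted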
L. Application to remote operation
For example, a master-slave system is known in which an operator operates a controller on the master side to drive a robot at the output end on the slave side and perform remote work. Remote work has been introduced in various industries, such as remote surgery and remote construction.
In a master-slave system, the operator is expected to perform operations such as pinching or grasping an object that is not at hand with a remote robot. The remote operation performed by the operator in the master-slave system can be treated in the same way as the operation in which the user pinches or grasps a virtual object with the fingers through the AR glasses in the AR system 100, and the present disclosure can therefore be applied. Accordingly, the master side displays to the operator a UI that guides how to grasp the remote object, so that the operator, guided by the grasping-method guide, can easily grasp the object that is not at hand without hesitation.
In the master-slave system, the target of grasping is not a virtual object but a real object placed at a remote location. In the AR system 100, a grasping method is set in advance for each virtual object, or a grasping method is selected on the basis of user attributes such as the user's personality, habits, age, gender, and physique, and a UI for guiding the grasping method according to a predetermined guide type is determined. In the master-slave system, on the other hand, the object to be grasped at the remote location is detected in advance, and the grasping method for that object and the UI that guides it are determined in advance on the basis of the detection result.
Therefore, when the operator starts remote operation on the master side, whether the object present at the slave side is a target of grasping can be judged in a short time, and if it is, a UI that guides how to grasp that object can be presented to the operator in a short time. Guided by the grasping-method guide, the operator can easily grasp the object that is not at hand without hesitation.
FIG. 24 shows a configuration example of a remote operation system 2400 to which the present disclosure is applied. The illustrated remote operation system 2400 includes a master device 2410 operated by the operator and a slave device 2420 including a robot 2421 to be remotely operated.
The master device 2410 includes a controller 2411, a display unit 2412, a master control unit 2413, and a communication unit 2414.
The controller 2411 is used by the operator to input commands for remotely operating the robot 2421 on the slave device 2420 side. In the present embodiment, the controller 2411 is assumed to be a device that is worn on the operator's hand as shown in FIG. 5 and inputs the position and posture of the operator's fingers and finger gestures as operation commands for the robot 2421. However, the controller 2411 may instead be, for example, a camera that photographs the operator's hand, and the position and posture of the operator's fingers and the finger gestures may be recognized from the captured image of the hand.
The display unit 2412 is composed of, for example, AR glasses, but may be a display device such as a general liquid crystal display. Under the control of the master control unit 2413, the display unit 2412 displays a virtual object in the real space in which the operator's fingers appear. The virtual object referred to here corresponds to the remote real object that the remotely operated robot 2421 is trying to grasp. The virtual object is displayed at a location where its position relative to the operator's hand matches the position of the object relative to the robot 2421. When the robot 2421 approaches the object on the slave device 2420 side in accordance with the operator's operation of the controller 2411, the display unit 2412 displays a UI that guides how to grasp the virtual object near the virtual object (or near the operator's hand).
When the master control unit 2413 acquires the position and posture of the operator's fingers and the finger gestures on the basis of the input signal from the controller 2411, it converts them into operation commands for remotely operating the robot 2421 and transmits the operation commands to the slave device 2420 via the communication unit 2414.
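As a rough illustration of this conversion step, the sketch below packs the finger pose and gesture into a command message; the field names and the JSON encoding are assumptions of this sketch and not the format used by the disclosure.

# Sketch: converting the operator's finger pose/gesture into an operation command
# to be sent to the slave side.  Field names and the JSON encoding are assumptions.
import json
from dataclasses import dataclass

@dataclass
class HandState:
    fingertip_positions: dict   # e.g. {"thumb": (x, y, z), "index": (x, y, z)}
    gesture: str                # e.g. "pinch", "open", "grip"

def to_operation_command(hand: HandState) -> bytes:
    """Encode the hand state as a command the slave control unit can interpret."""
    command = {
        "type": "follow_hand",
        "fingertips": {k: list(v) for k, v in hand.fingertip_positions.items()},
        "gesture": hand.gesture,
    }
    return json.dumps(command).encode("utf-8")

cmd = to_operation_command(HandState(
    fingertip_positions={"thumb": (0.10, 1.00, 0.42), "index": (0.10, 1.00, 0.40)},
    gesture="pinch"))
print(cmd)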
The master control unit 2413 also receives, from the slave device 2420 via the communication unit 2414, images captured by the camera 2422 of the robot 2421 operating the remote object. The master control unit 2413 then controls the display unit 2412 so that a virtual object is displayed in the real space in which the operator's fingers appear. The virtual object corresponds to the remote real object that the remotely operated robot 2421 is trying to grasp, and is placed at a location where its position relative to the operator's hand matches the position of the object relative to the robot 2421.
Further, when the robot 2421 approaches the object on the slave device 2420 side in accordance with the operator's operation of the controller 2411, the master control unit 2413 displays a UI that guides how to grasp the virtual object near the virtual object (or near the operator's hand). The UI that guides how to grasp the virtual object corresponds to a UI that guides how the robot 2421 grasps the real object. Therefore, by performing the motion of grasping the virtual object with the fingers while being guided by this UI, the operator can input the operation commands for the robot 2421 to grasp the real object on the slave device 2420 side.
The communication unit 2414 is a functional module for interconnection with the slave device 2420 side. The communication medium between the master device 2410 and the slave device 2420 may be either wired or wireless and is not limited to a specific communication standard.
The slave device 2420 includes the robot 2421, a camera 2422, a slave control unit 2423, and a communication unit 2424. The slave device 2420 is interconnected with the master device 2410 side via the communication unit 2424, receives operation commands for the robot 2421 from the master device 2410, and transmits images captured by the camera 2422 to the master device 2410.
The operation commands sent from the master device 2410 are commands for driving the robot 2421 in accordance with the position and posture of the operator's fingers and the finger gestures. The slave control unit 2423 interprets the operation commands received from the master device 2410 and controls the driving of the robot 2421 so that the robot 2421 reproduces the position and posture of the operator's fingers and the finger gestures. FIG. 25 shows the operator bringing a hand close to the virtual object on the master device 2410 side. FIG. 26 shows the robot 2421 approaching the object on the slave device 2420 side so as to follow the movement of the operator's hand.
The camera 2422 captures the state of the robot 2421 operating the object. The slave control unit 2423 encodes the images captured by the camera 2422 and controls the communication unit 2424 to transmit them to the master device 2410 in a predetermined transmission format. As described above, on the master device 2410 side, the display unit 2412 displays the virtual object corresponding to the real object in the real space in which the operator's fingers appear. The virtual object is placed at a location where its position relative to the operator's hand matches the position of the object relative to the robot 2421.
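A minimal sketch of this placement rule follows, assuming for simplicity that the master-side and slave-side coordinate frames share the same orientation; the function and variable names are illustrative.

# Sketch: place the master-side virtual object so that its position relative to the
# operator's hand equals the remote object's position relative to the robot hand.
# Frames are simplified to a shared orientation; names and values are illustrative.

def virtual_object_position(operator_hand_pos, robot_hand_pos, remote_object_pos):
    """All arguments are (x, y, z) tuples in metres."""
    # Offset of the remote object as seen from the robot hand...
    offset = [o - r for o, r in zip(remote_object_pos, robot_hand_pos)]
    # ...applied to the operator's hand on the master side.
    return tuple(h + d for h, d in zip(operator_hand_pos, offset))

# Robot hand 0.3 m away from the object along x: the virtual object appears 0.3 m
# away from the operator's hand in the same direction.
print(virtual_object_position((0.1, 1.0, 0.4), (2.0, 0.5, 0.0), (2.3, 0.5, 0.0)))
# -> (0.4, 1.0, 0.4)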
FIG. 27 shows, in the form of a flowchart, a processing procedure on the master device 2410 side for presenting to the operator a UI that guides how to grasp the virtual object. This processing procedure is carried out mainly by the master control unit 2413.
First, the master control unit 2413 acquires the position of the operator's hand on the basis of the detection result of the controller 2411 (step S2701).
Then, the master control unit 2413 checks whether the operator's hand has approached the virtual object, that is, whether the shortest distance between the operator's hand and the virtual object has become equal to or less than a predetermined value (step S2702).
The virtual object is placed at a location where its position relative to the operator's hand matches the position of the object relative to the robot 2421. Therefore, when the operator's hand approaches the virtual object, the robot 2421 is approaching the object on the slave device 2420 side.
When the shortest distance between the operator's hand and the virtual object becomes equal to or less than the predetermined value (Yes in step S2702), the master control unit 2413 determines the UI to be used for guiding the operator on how to grasp the virtual object (step S2703).
In step S2703, the master control unit 2413 selects the type of guide for the grasping method of the virtual object. The master control unit 2413 also determines a UI that guides, according to the selected guide type, a grasping method set in advance on the basis of, for example, the category of the remote object corresponding to the virtual object, or a grasping method selected on the basis of user attributes such as the operator's personality, habits, age, gender, and physique. The master control unit 2413 may also select the grasping method according to the direction in which the operator's hand approaches the virtual object and determine a UI that guides that grasping method.
Then, on the basis of the result determined in step S2703, the master control unit 2413 uses the display unit 2412 to display, near the virtual object, a UI that guides the method of grasping the virtual object that the operator's hand is approaching (step S2704).
After that, the master control unit 2413 acquires the movements of the operator's hand and fingers from the controller 2411 while the operator is trying to grasp the virtual object (step S2705), and determines the contact state between the operator's fingers and the virtual object.
The virtual object is placed at a location where its position relative to the operator's hand matches the position of the object relative to the robot 2421. Therefore, the contact state between the operator's hand and the virtual object is the same as the contact state between the robot 2421 and the real object on the slave device 2420 side.
Then, when a finger of the operator touches the virtual object (Yes in step S2706), the master control unit 2413 switches the display of the UI that guides the grasping method of the virtual object, thereby notifying the operator of, or feeding back, the fact that the virtual object has been touched, in other words, that the robot 2421 has come into contact with the object (step S2707).
In step S2707, the master control unit 2413 uses the display unit 2412 to display, for example, a UI that highlights the contact point between the operator's finger and the virtual object. Furthermore, when the operator's finger sinks into the virtual object, the master control unit 2413 switches the highlight display in stages according to the degree of penetration.
The UI that guides how to grasp the virtual object corresponds to a UI that guides how the robot 2421 grasps the real object. Therefore, by performing the motion of grasping the virtual object with the fingers while being guided by this UI, the operator can remotely operate the robot 2421 on the slave device 2420 side and easily grasp the object that is not at hand without hesitation.
The present disclosure has been described above in detail with reference to specific embodiments. However, it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present disclosure.
Although this specification has mainly described embodiments in which the present disclosure is applied to an AR system, the gist of the present disclosure is not limited to this. For example, the present disclosure can similarly be applied to a VR system that causes a virtual space to be perceived as reality, an MR system that mixes reality and virtuality, and a master-slave remote operation system.
In short, the present disclosure has been described in the form of examples, and the contents of this specification should not be interpreted restrictively. In order to determine the gist of the present disclosure, the claims should be taken into consideration.
Note that the present disclosure can also adopt the following configurations.
(1) An information processing device including: an acquisition unit that acquires the position and posture of a user's hand; and a control unit that controls the display operation of a display device that superimposes and displays a virtual object on a real space, in which the control unit controls the display device so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
(2) The information processing device according to (1) above, in which the acquisition unit acquires the position and posture of the hand on the basis of sensor information from a sensor attached to the hand, or includes a sensor attached to the hand.
(3) The information processing device according to either (1) or (2) above, in which the control unit controls the display device so as to display the information either on the virtual object or near the hand.
(4) The information processing device according to any one of (1) to (3) above, in which the control unit controls the display device so as to display the information including at least one of a grasping method of pinching the virtual object between the thumb and one other finger or a grasping method of grasping the virtual object with the whole hand.
(5) The information processing device according to any one of (1) to (4) above, in which the control unit controls the display device so as to display the information indicating at least one of a state in which the hand is grasping the virtual object, a position at which the hand grasps the virtual object, or a motion of a virtual hand grasping the virtual object at the position of the hand.
(6) The information processing device according to any one of (1) to (5) above, in which the acquisition unit further acquires the shape of the hand, and the control unit selects the information on the basis of the shape of the hand.
(7) The information processing device according to any one of (1) to (5) above, in which the control unit selects the information on the basis of the direction in which the hand approaches the virtual object.
(8) The information processing device according to any one of (1) to (5) above, in which the control unit selects the information on the basis of attributes of the user.
(8-1) The information processing device according to (8) above, in which the attributes of the user include at least one of age, race, physical impairment, or everyday grasping habits.
(9) The information processing device according to any one of (1) to (8) above, in which the control unit controls the display of the information on the basis of the state of the user when the hand approaches the virtual object.
(9-1) The information processing device according to (9) above, in which the state of the user includes at least one of the gaze direction of the user or the degree of interest of the user in the virtual object.
(10) The information processing device according to any one of (1) to (9) above, in which the control unit controls the display of the information on the basis of the contact state between the hand and the virtual object.
(11) The information processing device according to (10) above, in which the control unit controls the display of the information so as to indicate at the contact point that the hand and the virtual object have come into contact.
(12) The information processing device according to either (10) or (11) above, in which the control unit controls the display of the information so as to indicate at the contact point that the hand has sunk into the virtual object.
(13) The information processing device according to any one of (1) to (12) above, in which the control unit controls the display device so as to display a virtual hand when the hand touches the virtual object.
(14) The information processing device according to (13) above, in which, when the user is trying to grasp two opposing faces of the virtual object, the control unit controls the display device so as to display a first virtual finger and a second virtual finger for a first finger approaching one face and a second finger approaching the other face, respectively.
(15) The information processing device according to (14) above, in which the control unit controls the display operation of the display device so that the opening amount between the first virtual finger and the second virtual finger is made wider than the actual opening between the first finger and the second finger, and so that the first virtual finger and the second virtual finger come into contact with the virtual object and enter a grasping state when the actual first finger and second finger come into contact with each other.
(16) The information processing device according to any one of (13) to (15) above, in which the control unit performs control so that, when the hand approaches the virtual object, the display of the virtual hand is started with the opening of the virtual fingers that are to grasp the virtual object made wider than the actual finger opening.
(17) The information processing device according to (16) above, in which the control unit performs control so that, when the hand approaches the virtual object, the opening of the virtual fingers is made wider than the actual finger opening on the basis of the thickness of the virtual object.
(18) An information processing method including: an acquisition step of acquiring the position and posture of a user's hand; and a control step of controlling the display operation of a display device that superimposes and displays a virtual object on a real space, in which, in the control step, the display device is controlled so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
(19) A computer program written in a computer-readable format so as to cause a computer to function as: an acquisition unit that acquires the position and posture of a user's hand; and a control unit that controls the display operation of a display device that superimposes and displays a virtual object on a real space, in which the control unit controls the display device so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
(20) An augmented reality system including: a display device that superimposes and displays a virtual object on a real space; an acquisition unit that acquires the position and posture of a user's hand; and a control unit that controls the display operation of the display device, in which the control unit controls the display device so as to display information on a method of grasping the virtual object when the hand approaches the virtual object.
100…AR system, 110…first sensor unit
111…gyro sensor, 112…acceleration sensor, 113…orientation sensor
120…second sensor unit, 121…outward-facing camera
122…inward-facing camera, 123…microphone, 124…gyro sensor
125…acceleration sensor, 126…orientation sensor
131…display unit, 132…speaker, 133…vibration presentation unit
134…communication unit, 140…control unit, 150…storage unit
300…AR system, 301…AR glasses, 302…controller
400…AR system, 401…AR glasses, 402…controller
403…information terminal
500…controller, 501, 502, 503…IMU
511, 512, 513…bands
601…application execution unit, 602…head position/posture detection unit
603…output control unit, 604…finger position/posture detection unit
605…finger gesture detection unit
2400…remote operation system, 2410…master device
2411…controller, 2412…display unit, 2413…master control unit
2414…communication unit, 2420…slave device, 2421…robot
2422…camera, 2423…slave control unit, 2424…communication unit

Claims (20)

  1.  ユーザの手の位置姿勢を取得する取得部と、
     実空間に仮想オブジェクトを重畳表示する表示装置の表示動作を制御する制御部と、
    を具備し、
     前記制御部は、前記手が前記仮想オブジェクトに接近したときに、前記仮想オブジェクトの把持方法に関する情報を表示するように前記表示装置を制御する、
    情報処理装置。
    An acquisition unit that acquires the position and posture of the user's hand,
    A control unit that controls the display operation of a display device that superimposes and displays virtual objects in real space,
    Equipped with
    The control unit controls the display device so as to display information on how to hold the virtual object when the hand approaches the virtual object.
    Information processing device.
  2.  前記取得部は、前記手に取り付けられたセンサからのセンサ情報に基づいて前記手の位置姿勢を取得し、又は、前記手に取り付けられたセンサを備える、
    請求項1に記載の情報処理装置。
    The acquisition unit acquires the position and posture of the hand based on the sensor information from the sensor attached to the hand, or includes a sensor attached to the hand.
    The information processing device according to claim 1.
  3.  前記制御部は、前記情報を前記仮想オブジェクト又は前記手の付近のいずれかに表示するように前記表示装置を制御する、
    請求項1に記載の情報処理装置。
    The control unit controls the display device so that the information is displayed on either the virtual object or the vicinity of the hand.
    The information processing device according to claim 1.
  4.  前記制御部は、前記仮想オブジェクトを親指とその他の1本の指で摘まむ把持方法又は手全体で掴む把持方法のうち少なくとも1つを含む前記情報を表示するように前記表示装置を制御する、
    請求項1に記載の情報処理装置。
    The control unit controls the display device to display the information including at least one of a gripping method of picking the virtual object with the thumb and one other finger or a gripping method of gripping the virtual object with the whole hand.
    The information processing device according to claim 1.
  5.  前記制御部は、前記手が前記仮想オブジェクトを把持している状態、前記手が前記仮想オブジェクトを把持する位置、前記手の位置に仮想の手で前記仮想オブジェクトを把持する動きのうち少なくとも1つを示す前記情報を表示するように前記表示装置を制御する、
    請求項1に記載の情報処理装置。
    The control unit is at least one of a state in which the hand is holding the virtual object, a position where the hand is holding the virtual object, and a movement in which the virtual hand is holding the virtual object at the position of the hand. Control the display device to display the information indicating
    The information processing device according to claim 1.
  6.  前記取得部は前記手の形状をさらに取得し、
     前記制御部は、前記手の形状に基づいて前記情報を選択する、
    請求項1に記載の情報処理装置。
    The acquisition unit further acquires the shape of the hand and
    The control unit selects the information based on the shape of the hand.
    The information processing device according to claim 1.
  7.  前記制御部は、前記手が前記仮想オブジェクトに接近してくる方向に基づいて前記情報を選択する、
    請求項1に記載の情報処理装置。
    The control unit selects the information based on the direction in which the hand approaches the virtual object.
    The information processing device according to claim 1.
  8.  前記制御部は、前記ユーザの属性に基づいて前記情報を選択する、
    請求項1に記載の情報処理装置。
    The control unit selects the information based on the attributes of the user.
    The information processing device according to claim 1.
  9.  前記制御部は、前記手が前記仮想オブジェクトに接近したときの前記ユーザの状態に基づいて、前記情報の表示を制御する、
    請求項1に記載の情報処理装置。
    The control unit controls the display of the information based on the state of the user when the hand approaches the virtual object.
    The information processing device according to claim 1.
  10.  前記制御部は、前記手と前記仮想オブジェクトとの接触状態に基づいて前記情報の表示を制御する、
    請求項1に記載の情報処理装置。
    The control unit controls the display of the information based on the contact state between the hand and the virtual object.
    The information processing device according to claim 1.
  11.  前記制御部は、前記手と前記仮想オブジェクトが接触したことを接触点に示すように前記情報の表示を制御する、
    請求項10に記載の情報処理装置。
    The control unit controls the display of the information so that the contact point indicates that the hand and the virtual object have come into contact with each other.
    The information processing device according to claim 10.
  12.  前記制御部は、前記手が前記仮想オブジェクトの中にめり込んだことを前記接触点に示すように前記情報の表示を制御する、
    請求項10に記載の情報処理装置。
    The control unit controls the display of the information so as to indicate to the contact point that the hand has sunk into the virtual object.
    The information processing device according to claim 10.
  13.  前記制御部は、前記手が前記仮想オブジェクトに接触したときに、仮想の手を表示するように前記表示装置を制御する、
    請求項1に記載の情報処理装置。
    The control unit controls the display device so as to display the virtual hand when the hand touches the virtual object.
    The information processing device according to claim 1.
  14.  前記制御部は、ユーザが前記仮想オブジェクトの相対する2面を把持しようとしているときに、一方の面に接近している第1の指と他方の面に接近している第2の指の各々に対して、第1の仮想の指及び第2の仮想の指を表示するように前記表示装置を制御する、
    請求項13に記載の情報処理装置。
    When the user is trying to grasp the two opposing surfaces of the virtual object, the control unit has a first finger approaching one surface and a second finger approaching the other surface, respectively. The display device is controlled so as to display the first virtual finger and the second virtual finger.
    The information processing device according to claim 13.
  15.  前記制御部は、前記第1の仮想の指と前記第第2の仮想の指の開き量を、実際の第1の指と第2の指の開き具合より広くして、実際の第1の指と第2の指が接触した際に、前記第1の仮想の指と前記第2の仮想の指が仮想オブジェクトに接触して把持状態となるように、前記表示装置の表示動作を制御する、
    請求項14に記載の情報処理装置。
    The control unit makes the opening amount of the first virtual finger and the second virtual finger wider than the actual opening degree of the first finger and the second finger, so that the actual first finger is opened. The display operation of the display device is controlled so that when the finger and the second finger come into contact with each other, the first virtual finger and the second virtual finger come into contact with the virtual object and are in a gripped state. ,
    The information processing device according to claim 14.
  16.  When the hand approaches the virtual object, the control unit performs control so as to start displaying the virtual hand with the opening amount of the virtual fingers that are about to grasp the virtual object made wider than the actual opening of the fingers.
    The information processing device according to claim 13.
  17.  When the hand approaches the virtual object, the control unit performs control so that the opening amount of the virtual fingers is made wider than the actual opening of the fingers, based on the thickness of the virtual object.
    The information processing device according to claim 16.
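    Claims 14 to 17 describe widening the opening of the virtual fingers relative to the real fingers. A minimal sketch, assuming the thumb/index pair is simply spread apart by the thickness of the virtual object along the line joining the fingertips; the function name, the fallback axis, and the example values are illustrative, not taken from the disclosure.

```python
import numpy as np

# Hypothetical widening of the virtual thumb/index opening (claims 14-17).

def widened_fingertips(real_thumb, real_index, object_thickness):
    """Map real thumb/index tip positions to virtual tip positions (all in metres)."""
    thumb = np.asarray(real_thumb, dtype=float)
    index = np.asarray(real_index, dtype=float)

    midpoint = 0.5 * (thumb + index)
    real_opening = float(np.linalg.norm(index - thumb))
    if real_opening < 1e-6:
        # Real fingertips already touching: split the virtual pair along a fixed axis.
        direction = np.array([0.0, 0.0, 1.0])
    else:
        direction = (index - thumb) / real_opening

    # Claim 17: widen the opening by the thickness of the virtual object.
    virtual_opening = real_opening + object_thickness
    virtual_thumb = midpoint - 0.5 * virtual_opening * direction
    virtual_index = midpoint + 0.5 * virtual_opening * direction
    return virtual_thumb, virtual_index, virtual_opening


if __name__ == "__main__":
    vt, vi, opening = widened_fingertips(
        real_thumb=[0.0, 0.0, 0.0],
        real_index=[0.0, 0.0, 0.0],   # real fingertips touching
        object_thickness=0.02,        # 2 cm thick virtual object
    )
    print(vt, vi, opening)
```

    With this mapping, when the real fingertips meet, the virtual fingertips are separated by exactly the object's thickness, so they appear to rest on its two opposing faces and the object reads as gripped, which is the effect claim 15 describes.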
  18.  An acquisition step of acquiring the position and posture of a user's hand, and
     a control step of controlling the display operation of a display device that superimposes and displays a virtual object in real space,
     wherein, in the control step, the display device is controlled so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
    An information processing method.
  19.  An acquisition unit that acquires the position and posture of a user's hand, and
     a control unit that controls the display operation of a display device that superimposes and displays a virtual object in real space,
     described in a computer-readable format so as to cause a computer to function as the above units,
     wherein the control unit controls the display device so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
    A computer program.
  20.  A display device that superimposes and displays a virtual object in real space,
     an acquisition unit that acquires the position and posture of a user's hand, and
     a control unit that controls the display operation of the display device,
     wherein the control unit controls the display device so as to display information on how to grasp the virtual object when the hand approaches the virtual object.
    An augmented reality system.
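    Read together, the independent claims 18 to 20 amount to a per-frame loop: acquire the hand pose, detect that the hand has come near a virtual object, and have the display overlay information about how to grasp it. The sketch below is a minimal illustration under assumed interfaces; the ConsoleDisplay stand-in, the 10 cm threshold, and the grasp_hint field are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical per-frame grasp-assist loop (claims 18-20).

@dataclass
class VirtualObject:
    name: str
    position: np.ndarray        # world position of the object's centre
    grasp_hint: str             # e.g. "pinch the rim with thumb and index finger"

class ConsoleDisplay:
    """Stand-in for the AR display device: just prints the hint."""
    def show_hint(self, name, hint):
        print(f"[{name}] {hint}")
    def clear_hint(self):
        pass

class GraspAssistController:
    def __init__(self, display, approach_threshold=0.10):
        self.display = display                      # anything with show_hint()/clear_hint()
        self.approach_threshold = approach_threshold

    def update(self, hand_position, virtual_objects):
        """Called once per frame with the tracked hand position (acquisition step)."""
        for obj in virtual_objects:
            distance = float(np.linalg.norm(hand_position - obj.position))
            if distance <= self.approach_threshold:
                # Control step: show how to grasp the object the hand is approaching.
                self.display.show_hint(obj.name, obj.grasp_hint)
                return obj
        self.display.clear_hint()
        return None

if __name__ == "__main__":
    mug = VirtualObject("mug", np.array([0.0, 0.0, 0.5]), "wrap fingers around the handle")
    controller = GraspAssistController(ConsoleDisplay())
    controller.update(np.array([0.0, 0.05, 0.45]), [mug])   # hand ~7 cm away -> hint shown
```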
PCT/JP2020/043524 2020-01-17 2020-11-20 Information processing device and information processing method, computer program, and augmented reality system WO2021145068A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020006415 2020-01-17
JP2020-006415 2020-01-17

Publications (1)

Publication Number Publication Date
WO2021145068A1 true WO2021145068A1 (en) 2021-07-22

Family

ID=76864186

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/043524 WO2021145068A1 (en) 2020-01-17 2020-11-20 Information processing device and information processing method, computer program, and augmented reality system

Country Status (1)

Country Link
WO (1) WO2021145068A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009104249A (en) * 2007-10-19 2009-05-14 Canon Inc Image processing apparatus, and image processing method
JP2014106543A (en) * 2012-11-22 2014-06-09 Canon Inc Image processor, image processing method and program
JP2018014119A (en) * 2014-08-22 2018-01-25 株式会社ソニー・インタラクティブエンタテインメント Glove interface object and method
JP2018014110A (en) * 2017-08-01 2018-01-25 株式会社コロプラ Method for providing virtual space, method for providing virtual experience, program and recording medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JEFF: "Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality", YOUTUBE, 17 October 2019 (2019-10-17), XP054982275, Retrieved from the Internet <URL:https://www.youtube.com/watch?v=ZYMjhKMpNXk> [retrieved on 20201210] *
QIAN, JING ET AL.: "Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality", October 2019 (2019-10-01), XP058442412, Retrieved from the Internet <URL:https://dl.acm.org/doi/pdf/10.1145/3332165.3347904> [retrieved on 20201210] *
SUZUKI, SOTA ET AL.: "An Examination of Grasping a Stereoscopic Virtual Object Using Pseudo-haptics", ITE TECHNICAL REPORT, vol. 39, no. 39, 29 October 2015 (2015-10-29), pages 25 - 28 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7481783B1 2023-01-05 2024-05-13 Diver-X Co., Ltd. Position detection device and information processing system
WO2024147180A1 (en) * 2023-01-05 2024-07-11 Diver-X株式会社 Position detection device and information processing system

Similar Documents

Publication Publication Date Title
JP7504180B2 (en) Transmodal Input Fusion for Wearable Systems
JP7411133B2 (en) Keyboards for virtual reality display systems, augmented reality display systems, and mixed reality display systems
EP3549109B1 (en) Virtual user input controls in a mixed reality environment
JP7190434B2 (en) Automatic control of wearable display devices based on external conditions
EP3425481B1 (en) Control device
CN114341779A (en) System, method, and interface for performing input based on neuromuscular control
KR101548156B1 (en) A wireless exoskeleton haptic interface device for simultaneously delivering tactile and joint resistance and the method for comprising the same
JP7239916B2 (en) Remote control system, information processing method, and program
WO2017085974A1 (en) Information processing apparatus
JP6507827B2 (en) Display system
CN113892075A (en) Corner recognition gesture-driven user interface element gating for artificial reality systems
KR20190059726A (en) Method for processing interaction between object and user of virtual reality environment
WO2021192589A1 (en) Information processing device, information processing method, computer program, and augmented reality sensing system
WO2021145068A1 (en) Information processing device and information processing method, computer program, and augmented reality system
WO2021145067A1 (en) Information processing apparatus, information processing method, computer program, and augmented reality sense system
Hauser et al. Analysis and perspectives on the ANA Avatar XPRIZE competition
US20230325002A1 (en) Techniques for neuromuscular-signal-based detection of in-air hand gestures for text production and modification, and systems, wearable devices, and methods for using these techniques
US20230359422A1 (en) Techniques for using in-air hand gestures detected via a wrist-wearable device to operate a camera of another device, and wearable devices and systems for performing those techniques
WO2021176861A1 (en) Information processing device and information processing method, computer program, and augmented reality sensing system
JP2024048680A (en) Control device, control method, and program
Chacón-Quesada et al. Augmented reality control of smart wheelchair using eye-gaze–enabled selection of affordances
WO2024090303A1 (en) Information processing device and information processing method
JP7513564B2 (en) System, information processing method and information processing program
WO2023281819A1 (en) Information processing device for determining retention of object
WO2023196671A1 (en) Techniques for neuromuscular-signal-based detection of in-air hand gestures for text production and modification, and systems, wearable devices, and methods for using these techniques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914592

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914592

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP