WO2023021757A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2023021757A1
Authority
WO
WIPO (PCT)
Prior art keywords
information processing
reliability
detection
resolution
pointing
Prior art date
Application number
PCT/JP2022/010540
Other languages
English (en)
Japanese (ja)
Inventor
京二郎 永野
毅 石川
真 城間
大輔 田島
Original Assignee
Sony Group Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Group Corporation
Publication of WO2023021757A1

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • the present technology relates to an information processing device, an information processing method, and a program applicable to building virtual spaces such as VR (Virtual Reality) spaces and AR (Augmented Reality) spaces.
  • Patent Literature 1 discloses a technique for constructing a virtual space in which a user interacts with a virtual object, aiming at improving the interaction between the user and the virtual object and improving the virtual experience provided to the user.
  • the purpose of the present technology is to provide an information processing device, an information processing method, and a program capable of realizing a high-quality virtual experience.
  • In order to achieve the above purpose, an information processing apparatus according to an embodiment of the present technology includes a detection unit and a resolution control unit.
  • the detection unit detects an operation parameter corresponding to an operation by the operation object based on a recognition result of the operation object.
  • the resolution control unit controls detection resolution of the operation parameter based on operation-related information related to the operation by the operation object.
  • In this information processing apparatus, the detection resolution used when detecting the operation parameter corresponding to the operation by the operation object is controlled based on the operation-related information. This makes it possible to realize a high-quality virtual experience.
  • the operation by the operation object may be a pointing operation.
  • the detection unit may detect a pointing position indicated by the operation object as the operation parameter.
  • the operation-related information may include at least one of a distance between the operation object and a target object to be pointed, a size of the target object, or reliability of the recognition result.
  • The resolution control unit may calculate accuracy of the pointing operation based on the operation-related information, and may control the detection resolution so that the detection resolution decreases as the accuracy of the pointing operation decreases.
  • The calculation of the accuracy by the resolution control unit may include at least one of calculating the accuracy so that the accuracy decreases as the distance between the operation object and the target object increases, or calculating the accuracy so that the accuracy decreases as the size of the target object decreases.
  • the information processing device may further include a display control unit that displays a notification image for notifying the detected pointing position.
  • The notification image may be at least one of a pointer image displayed at the pointing position, a linear image extending from the operation object to the pointing position, or an emphasized image that emphasizes the target object on which the pointing position overlaps.
  • the display control unit may change the display mode of the notification image according to the detection resolution set by the resolution control unit.
  • The display control unit may display a detection mode notification image for reporting the detection mode of the pointing position according to the detection resolution set by the resolution control unit.
  • the target object may be a virtual object or a real object.
  • the operation by the operation object may be a holding operation on the virtual object.
  • the detection unit may detect, as the operation parameter, an orientation of the virtual object held by the holding operation.
  • the operation-related information may include reliability of the recognition result.
  • the resolution control unit may control the detection resolution so that the detection resolution decreases as the reliability of the recognition result decreases.
  • the information processing device may further include a reliability determination unit that calculates the reliability of the recognition result.
  • The calculation of the reliability by the reliability determination unit may include at least one of calculating the reliability so that the reliability decreases as the detection range of the sensor with respect to the part of the operation object holding the virtual object narrows, calculating the reliability so that the reliability decreases as the speed of movement of the operation object increases, or calculating the reliability so that the reliability decreases as the size of the virtual object decreases.
  • The reliability determination unit may calculate the reliability based on at least one of a set reliability preset according to the type of sensor that senses the operation object, or a sensing reliability output by the sensor.
  • the information processing device may further include a display control unit that displays the virtual object.
  • the display control unit may display a detection state notification image for reporting the detection state of the posture according to the detection resolution set by the resolution control unit.
  • the resolution control unit may control the detection resolution based on an instruction from a user.
  • the manipulation object may be one or more fingers of the user.
  • the resolution control unit may select and set one detection resolution from among a plurality of predetermined detection resolutions with different levels.
  • An information processing method according to an embodiment of the present technology is an information processing method executed by a computer system, and includes detecting an operation parameter corresponding to an operation by an operation object based on a recognition result of the operation object, and controlling detection resolution of the operation parameter based on operation-related information related to the operation by the operation object.
  • A program according to an embodiment of the present technology causes a computer system to execute the following steps: detecting an operation parameter corresponding to an operation by an operation object based on a recognition result of the operation object; and controlling detection resolution of the operation parameter based on operation-related information related to the operation by the operation object.
  • FIG. 1 is a schematic diagram for explaining an overview of an AR providing system according to a first embodiment
  • FIG. 2 is a perspective view showing an appearance example of an HMD;
  • FIG. 3 is a block diagram showing a functional configuration example of the HMD;
  • FIG. 4 is a schematic diagram showing a detection state of a user's hand by an outward camera
  • FIG. 5 is a schematic diagram for explaining detection resolution of a pointing position;
  • FIG. 6 is a schematic diagram for explaining detection resolution of a pointing position;
  • FIG. 7 is a schematic diagram for explaining detection resolution of a pointing position;
  • FIG. 8 is a flow chart showing an example of resolution control processing according to the present embodiment;
  • FIG. 9 is a schematic diagram for explaining each step shown in FIG. 8;
  • FIG. 10 is a schematic diagram for explaining each step shown in FIG. 8;
  • FIG. 11 is a schematic diagram for explaining display of a notification image according to detection resolution;
  • FIG. 12 is a schematic diagram for explaining an outline of an AR providing system according to a second embodiment;
  • FIG. 13 is a schematic diagram showing a detection state of a user's hand by an outward camera;
  • FIG. 14 is a schematic diagram for explaining the detection resolution of the posture (extending direction) of a virtual object (pen);
  • FIG. 15 is a flow chart showing an example of resolution control processing according to the present embodiment;
  • FIG. 16 is a schematic diagram showing a display example of a detection mode notification image;
  • FIG. 17 is a schematic diagram showing an example of a wearable controller;
  • FIG. 18 is a block diagram showing a hardware configuration example of a computer applicable to the present technology.
  • FIG. 1 is a schematic diagram for explaining an outline of an AR providing system according to the first embodiment of the present technology.
  • the AR providing system 1 corresponds to an embodiment of an information processing system according to the present technology.
  • the AR providing system 1 includes an HMD (Head Mounted Display) 2 .
  • the HMD 2 is worn on the head of the user 3 and used.
  • the HMD 2 is a spectacle-type device with a transmissive display, and is also called AR glasses.
  • the HMD 2 reproduces the virtual content for the user 3 . This makes it possible to provide an AR space (virtual space) to the user 3 using the HMD 2 .
  • the user 3 can experience various AR worlds.
  • Playback of virtual content includes displaying virtual objects so as to be superimposed on the real world. Also, playing back virtual content includes outputting virtual sound (virtual voice). In addition, smell, tactile sensation, etc. may be virtually provided to the user 3 .
  • the user 3 can use his/her hands to perform various operations. That is, the user 3 can perform various operations using his or her hand as an operation object.
  • the hand includes each finger.
  • a pointing operation can be performed using the index finger 5 as an operation object.
  • the user 3 extends the index finger 5 toward the position to be pointed.
  • the HMD 2 performs recognition processing on the index finger 5 of the user 3, and detects the pointing position P pointed by the index finger 5 based on the recognition result.
  • the HMD 2 can detect the pointing position P according to the pointing operation.
  • the present technology can also be applied when a pointing operation is performed using a pointing stick, a ruler, or the like.
  • the HMD 2 displays a notification image 6 for notifying the user 3 of the detected pointing position P.
  • In this embodiment, a linear image (virtual object) extending from the tip of the index finger 5 toward the pointing position P is displayed as the notification image 6. Although a linear image made up of broken lines is displayed in this embodiment, the present technology is not limited to this.
  • An image consisting of a solid line extending continuously from the tip of the index finger 5 to the pointing position P, or an image tapering toward the pointing position P may be displayed as the notification image 6 .
  • The notification image 6 may be an image in which the color, shape, and the like are appropriately adjusted, for example, an image expressing an emitted beam.
  • the user 3 can easily confirm the position he or she is pointing to. Also, the user 3 can change the pointing position P by changing the extending direction of the index finger 5 .
  • a pointer image may be displayed at the pointing position P as the notification image 6 for notifying the pointing position P to the user 3 .
  • Any shape and color pointer image may be used, such as a mark such as a circle or a star, or an image of an arrow or a finger. By visually recognizing the pointer image, the user 3 can easily confirm the position he or she is pointing to.
  • an emphasized image that emphasizes the target object to be pointed may be displayed. That is, an emphasized image that emphasizes the target object on which the pointing position P overlaps may be displayed as the notification image 6 .
  • pointing operations can be performed on both virtual objects and real objects.
  • the virtual object on which the pointing position P overlaps is emphasized by changing its color, shape, or the like. Any image including an expression that emphasizes the virtual object is included in the emphasized image as the notification image 6 .
  • a predetermined color, text image, or the like is superimposed on the real object on which the pointing position P overlaps. Any image (including color display) superimposed to emphasize the real object is included in the enhanced image as the notification image 6 .
  • the user 3 can recognize that the pointing position P exists within the area of the target object.
  • any image may be displayed as the notification image 6 .
  • the AR world that the user 3 can experience is not limited, and various AR worlds can be experienced.
  • As a virtual object, it is possible to display arbitrary virtual images such as CG (Computer Graphics) of characters, photographs, text, and the like.
  • As the virtual sound, it is possible to output arbitrary sounds such as the voice of a character, the sound of a siren, the sound effect of closing a door, and the like.
  • the HMD 2 functions as an information processing device according to the present technology.
  • a pointing operation corresponds to an embodiment of an operation using an operation object.
  • the detected pointing position P corresponds to an embodiment of operation parameters according to the operation by the operation object.
  • FIG. 2 is a perspective view showing an appearance example of the HMD 2.
  • As shown in FIG. 2, the HMD 2 has a frame 8, a left-eye lens 9a and a right-eye lens 9b, a left-eye display 10a and a right-eye display 10b, a left-eye camera 11a and a right-eye camera 11b, and an outward facing camera 12.
  • the frame 8 has a spectacle-like shape and has a rim portion 13 and a temple portion 14 .
  • the rim portion 13 is a portion arranged in front of the left and right eyes of the user 3, and supports the left eye lens 9a and the right eye lens 9b, respectively.
  • the temple portion 14 extends rearward from both ends of the rim portion 13 toward both ears of the user 3, and the tips thereof are worn on both ears.
  • the rim portion 13 and the temple portion 14 are made of materials such as synthetic resin and metal.
  • the left eye lens 9a and the right eye lens 9b are arranged in front of the left and right eyes of the user 3, respectively, so as to cover at least part of the user's 3 field of view.
  • each lens is designed to correct the vision of user 3 .
  • it is not limited to this, and a so-called non-prescription lens may be used.
  • the left-eye display 10a and the right-eye display 10b are transmissive displays and are arranged to cover partial areas of the left-eye and right-eye lenses 9a and 9b, respectively. That is, the left-eye and right-eye displays 10a and 10b are arranged in front of the left and right eyes of the user 3, respectively. Left-eye and right-eye images and the like are displayed on the left-eye and right-eye displays 10a and 10b, respectively.
  • the user 3 wearing the HMD 2 can visually recognize the actual scenery and at the same time, can visually recognize the images displayed on the respective displays 10a and 10b. This enables the user 3 to experience augmented reality (AR) and the like.
  • For example, a virtual object is displayed on each of the displays 10a and 10b.
  • As the left-eye and right-eye displays 10a and 10b, for example, a transmissive organic EL display or an LCD (Liquid Crystal Display) is used.
  • the specific configurations of the left-eye and right-eye displays 10a and 10b are not limited. Any type of transmissive display may be used as appropriate.
  • the left-eye camera 11a and the right-eye camera 11b are appropriately installed on the frame 8 so as to photograph the left and right eyes of the user 3 .
  • line-of-sight information about the line of sight of the user 3 can be detected based on left-eye and right-eye images captured by the left-eye and right-eye cameras 11a and 11b.
  • As the left-eye and right-eye cameras 11a and 11b, for example, digital cameras equipped with image sensors such as CMOS (Complementary Metal-Oxide Semiconductor) sensors or CCD (Charge Coupled Device) sensors are used.
  • an infrared camera equipped with an infrared illumination such as an infrared LED may be used.
  • the left eye lens 9a and the right eye lens 9b may both be referred to as the lens 9, and the left eye display 10a and the right eye display 10b may both be referred to as the transmissive display 10.
  • the left-eye camera 11a and the right-eye camera 11b may both be referred to as the inward facing camera 11 in some cases.
  • the transmissive display 10 corresponds to the display section.
  • the outward facing camera 12 is arranged in the center of the frame 8 (rim portion 13) facing outward (the side opposite to the user 3).
  • the outward facing camera 12 can photograph the real space included in the field of view of the user 3 . Therefore, the outward facing camera 12 can generate a photographed image in which the real space is photographed.
  • the outward camera 12 captures an image of a range that is on the front side as viewed from the user 3 and that includes the display area of the transmissive display 10 . That is, the real space is captured so as to include a range that can be seen through the display area when viewed from the user 3 .
  • a digital camera having an image sensor such as a CMOS sensor or a CCD sensor is used as the outward facing camera 12 .
  • the range that the user 3 can see through the display area is the range in which the virtual object can be superimposed on the real world.
  • the range is defined as the effective field of view of the user 3 .
  • the effective field of view can also be said to be an angle of view in which a virtual object can be displayed.
  • FIG. 3 is a block diagram showing a functional configuration example of the HMD 2. As shown in FIG. 3, the HMD 2 further includes a speaker 16, a vibrating section 17, a communication unit 18, a connector 19, an operation button 20, a sensor unit 21, a storage unit 22, and a controller 23.
  • a speaker 16 is provided at a predetermined position on the frame 8 .
  • the configuration of the speaker 16 is not limited, and for example, a speaker 16 capable of outputting stereo sound, monaural sound, or the like may be used as appropriate.
  • the vibrating section 17 is provided inside the frame 8 and generates vibration.
  • any vibration motor or the like capable of generating notification vibration or the like is used as the vibrating section 17 .
  • a tactile sensation can be presented to the user 3 by driving the vibrating section 17 .
  • the vibrating section 17 functions as an embodiment of a tactile sense presenting section.
  • the communication unit 18 is a module for performing network communication, short-range wireless communication, etc. with other devices.
  • a wireless LAN module such as WiFi and a communication module such as Bluetooth (registered trademark) are provided.
  • the connector 19 is a terminal for connection with other devices.
  • terminals such as USB (Universal Serial Bus) and HDMI (registered trademark) (High-Definition Multimedia Interface) are provided.
  • the operation button 20 is provided at a predetermined position on the frame 8, for example. With the operation button 20, it is possible to perform various functions of the HMD 2, such as power ON/OFF operations, image display and audio output functions, and network communication functions.
  • the sensor unit 21 has a 9-axis sensor 24 , a GPS 25 , a ranging sensor 26 and a microphone 27 .
  • the 9-axis sensor 24 includes a 3-axis acceleration sensor, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the 9-axis sensor 24 can detect the acceleration, angular velocity, and orientation of the HMD 2 along 3 axes.
  • an IMU (Inertial Measurement Unit) sensor having an arbitrary configuration may be used.
  • The GPS 25 acquires information on the current position of the HMD 2.
  • The detection results of the 9-axis sensor 24 and the GPS 25 are used, for example, to detect the posture and position of the user 3 (HMD 2), the movement of the user 3, and the like. These sensors are provided at predetermined positions on the frame 8, for example.
  • The ranging sensor 26 can acquire three-dimensional information (distance to a detection target). Examples include LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), laser ranging sensors, stereo cameras, ToF (Time of Flight) sensors, ultrasonic sensors, structured light ranging sensors, and the like. Also, a sensor having both the functions of an image sensor and a ranging sensor may be used.
  • the ranging sensor 26 is installed, for example, with the front side of the user 3 as the detection direction. That is, it is installed so that the distance can be measured with respect to the real space included in the field of view of the user 3 .
  • the distance measurement sensor 26 may be installed so as to be able to measure the distance around the entire 360 degrees around the user 3 .
  • the microphone 27 detects sound information around the user 3 .
  • the voice or the like uttered by the user 3 is appropriately detected.
  • the user 3 can enjoy the AR experience while making a voice call, and perform operation input of the HMD 2 using voice input.
  • the type of sensor provided as the sensor unit 21 is not limited, and any sensor may be provided.
  • a temperature sensor, humidity sensor, or the like that can measure the temperature and humidity of the environment in which the HMD 2 is used may be provided.
  • a biosensor capable of detecting biometric information of the user 3 may be provided.
  • As the biosensors, for example, electroencephalogram sensors, myoelectric sensors, pulse sensors, perspiration sensors, temperature sensors, blood flow sensors, body motion sensors, and the like are provided.
  • It is also possible to regard the inward facing camera 11 and the outward facing camera 12 as part of the sensor unit 21.
  • each sensor of the sensor unit 21 , the inward facing camera 11 and the outward facing camera 12 function as one embodiment of a sensor that senses the operation object 4 .
  • the storage unit 22 is a storage device such as a non-volatile memory, and for example, an HDD (Hard Disk Drive), an SSD (Solid State Drive), or the like is used. In addition, any computer-readable non-transitory storage medium may be used.
  • a control program for controlling the overall operation of the HMD 2 is stored in the storage unit 22 .
  • the storage unit 22 also stores various information related to AR applications that provide an AR space. For example, various information and data related to the present technology, such as content data such as virtual objects and virtual sounds, are stored.
  • the storage unit 22 also stores resolution meta information, required reliability information, and the like. These pieces of information are information used to control the detection resolution of the operation parameters according to the operation, and will be described in detail later.
  • the method of installing the control program, content data, etc. in the HMD 2 is not limited.
  • an application program for building an AR space is installed on the HMD 2 from a content providing server on the network.
  • Of course, it is not limited to such a form.
  • a controller 23 controls the operation of each block of the HMD 2 .
  • the controller 23 has hardware circuits necessary for a computer, such as a CPU and memory (RAM, ROM). Various processes are executed by the CPU executing a program according to the present technology stored in the storage unit 22 or the memory.
  • a device such as a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit) may be used.
  • The CPU of the controller 23 executes a program (for example, an application program) according to the present technology, whereby an information acquisition unit 28, an object recognition unit 29, an operation parameter detection unit 30, a resolution control unit 31, a distance/size acquisition unit 32, a reliability determination unit 33, a notification control unit 34, an AR reproduction control unit 35, and a transmission control unit 36 are realized as functional blocks.
  • These functional blocks execute the information processing method according to the present embodiment.
  • dedicated hardware such as an IC (integrated circuit) may be used as appropriate.
  • the information acquisition unit 28 can, for example, acquire various information from each unit of the HMD 2 and output it to each functional block of the controller 23 .
  • the information acquisition unit 28 can acquire image information captured by the inward facing camera 11 and the outward facing camera 12, detection results (sensor information) of each sensor of the sensor unit 21, and the like.
  • the information acquisition unit 28 can also acquire various information received from other devices via the communication unit 18 .
  • The object recognition unit 29 recognizes an operation object by executing recognition processing on the image information captured by the inward-facing camera 11 and the outward-facing camera 12 and the detection results of each sensor of the sensor unit 21, and can output the recognition result (recognition information).
  • In this embodiment, the recognition result of the hand (index finger 5) of the user 3 used as the operation object is output.
  • the recognition result by the object recognition unit 29 includes arbitrary information such as the position of the real object (including the operation object), the state of the real object, and the movement of the real object. For example, it is possible to output various information as recognition results, such as the amount of activity of the real object, the distance of the real object from a predetermined position, the posture of the real object, and whether or not there is an input operation by the real object.
  • As the recognition processing, for example, a region (real object region) where it is determined that a real object exists is extracted from two-dimensional image data, three-dimensional depth image data, point cloud data, or the like.
  • a predetermined recognition algorithm is executed with data of the extracted real object region as input, and recognition results are output.
  • the processing is not limited to such processing, and the recognition algorithm may be executed with the entirety of two-dimensional image data, three-dimensional depth image data, etc. as input, and the recognition result regarding the real object may be output.
  • recognition processing using, for example, a rule-based algorithm is executed.
  • For example, recognition information can be generated by performing processing such as matching the data of the real object region with a model image of the real object, or by specifying a position in the data of the real object region using a marker image or the like. Alternatively, the recognition information can be generated by referring to table information from the data of the real object region.
  • any recognition process using rule-based algorithms may be employed.
  • recognition processing using a machine learning algorithm may be executed as the recognition processing. For example, any machine learning algorithm using DNN (Deep Neural Network) or the like can be used.
  • a learning data set is generated by setting a label of recognition information to be acquired to the data of the real object area for learning.
  • a program incorporating learned parameters is generated as a trained model.
  • the trained model outputs a recognition result with respect to the input of the data of the real object area. For example, it is possible to specify the three-dimensional position of each feature point in the real object by inputting the three-dimensional information of the real object area.
  • For example, when the real object is the whole or a part of a body, skeleton estimation can be executed. Skeleton estimation is also called bone estimation or skeletal estimation. Any other algorithm for performing the recognition processing may be used. Note that machine learning algorithms may be applied to any of the processes within this disclosure.
  • the object recognition unit 29 defines a coordinate system for the space within the effective visual field on which the virtual object can be superimposed.
  • For example, coordinate values (for example, XYZ coordinate values) based on an absolute coordinate system (world coordinate system) may be used. Alternatively, coordinate values (for example, xyz coordinate values or uvd coordinate values) based on a relative coordinate system with a predetermined point as a reference (origin) may be used. The origin serving as the reference may be set arbitrarily.
  • the object recognition unit 29 appropriately uses a prescribed coordinate system to obtain information such as the position and orientation of a real object existing within the effective field of view.
  • any other method may be used as a method of defining position information.
  • the object recognition unit 29 may perform self-position estimation of the user 3 (HMD 2).
  • The self-position includes the position and posture of the HMD 2.
  • As the self-position, it is possible to calculate the position of the HMD 2 and posture information such as which direction the HMD 2 faces.
  • the self-position of the HMD 2 is calculated, for example, based on the detection result from the sensor unit 21 and the captured images by the inward facing camera 11 and the outward facing camera 12 .
  • position coordinates in a three-dimensional coordinate system (XYZ coordinate system) defined by the object recognition unit 29 are calculated as the self-position of the HMD 2 .
  • Further, the pitch angle, roll angle, and yaw angle of a predetermined reference axis extending in front of the user 3 (HMD 2) are calculated, where the X axis is the pitch axis, the Y axis is the roll axis, and the Z axis is the yaw axis.
  • the algorithm for estimating the self-position of the HMD 2 is also not limited, and any algorithm such as SLAM (Simultaneous Localization and Mapping) may be used. In addition, any machine learning algorithm or the like may be used. Peripheral three-dimensional coordinates may be defined based on the estimated self-position of the user 3 (HMD 2).
  • a self-position estimation unit may be configured as a functional block different from the object recognition unit 29 .
  • the object recognition unit 29 performs recognition processing on the hand of the user 3 .
  • Various information about the hand is then acquired.
  • Hands include fingers.
  • For example, the positions of the right and left hands (including the positional relationship with each other), the postures of the right and left hands (including the orientation of the hands), the movements of the right and left hands (including the speed of movement), the presence or absence of operations using the right and left hands, and the like can be acquired as the recognition result.
  • the object recognition unit 29 can determine arbitrary input operations such as pointing operations, holding operations, touch operations, drag operations, scroll operations, and pinch operations.
  • For example, the object recognition unit 29 can determine gestures such as "rock" (the hand closed), "scissors" (only the index and middle fingers extended), "paper" (the hand open), and "pistol" (only the index finger and thumb extended). Also, for each of the thumb, index finger, middle finger, ring finger, and little finger, information such as the direction in which the pad of the finger faces, whether each joint of the finger is extended or bent, and, if bent, at what angle can also be obtained. In addition, when each finger is extended, it is possible to acquire the extending direction of the finger.
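  • For illustration, such rule-based gesture determination could be sketched in Python as follows; the representation of the recognition result as a dictionary of extended fingers and the function name are assumptions.

        def classify_gesture(extended):
            # 'extended' is a hypothetical mapping of finger name -> True if that finger
            # is recognized as extended, e.g. {'thumb': False, 'index': True, ...}.
            fingers = ('thumb', 'index', 'middle', 'ring', 'little')
            ext = {f: bool(extended.get(f, False)) for f in fingers}
            if not any(ext.values()):
                return 'rock'       # hand closed
            if all(ext.values()):
                return 'paper'      # hand open
            if ext['index'] and ext['middle'] and not (ext['thumb'] or ext['ring'] or ext['little']):
                return 'scissors'   # only index and middle fingers extended
            if ext['index'] and ext['thumb'] and not (ext['middle'] or ext['ring'] or ext['little']):
                return 'pistol'     # only index finger and thumb extended
            return 'other'
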
  • the operation parameter detection unit 30 detects operation parameters corresponding to operations performed by the operation object 4 .
  • the pointing position P corresponding to the pointing operation with the index finger 5 is detected as the operation parameter.
  • the pointing position P is detected based on the recognition result by the object recognition section 29 .
  • the pointing position P is detected based on three-dimensional coordinates defined by the object recognition section 29 .
  • a specific algorithm for calculating the pointing position P based on the recognition result of the hand of the user 3 is not limited, and any algorithm may be used. Of course, any machine learning algorithm or the like may be used.
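  • As one concrete illustration of such an algorithm, the pointing position P could be obtained by casting a ray from the recognized fingertip along the recognized extending direction of the index finger 5 and intersecting it with a plane containing the target; in the Python sketch below, all inputs are assumed to be expressed in the three-dimensional coordinates defined by the object recognition section 29, and the function name is an assumption.

        import numpy as np

        def detect_pointing_position(fingertip, finger_dir, plane_point, plane_normal):
            # Ray-plane intersection: follow the finger's extending direction from the
            # fingertip until it hits the plane containing the target object.
            origin = np.asarray(fingertip, dtype=float)
            d = np.asarray(finger_dir, dtype=float)
            d = d / np.linalg.norm(d)                      # unit pointing direction
            n = np.asarray(plane_normal, dtype=float)
            denom = float(np.dot(n, d))
            if abs(denom) < 1e-6:
                return None                                # ray is parallel to the plane
            t = float(np.dot(n, np.asarray(plane_point, dtype=float) - origin)) / denom
            if t < 0:
                return None                                # the plane is behind the fingertip
            return origin + t * d                          # pointing position P
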
  • the resolution control unit 31 will be explained later.
  • the distance/size acquisition unit 32 acquires the distance between the operation object and the target object to be pointed.
  • the distance between the index finger 5 used for the pointing operation and the target object is acquired.
  • the distance is calculated based on the three-dimensional coordinates defined by the object recognition section 29, for example.
  • When the target object is a virtual object, the distance is calculated based on the coordinate values of the index finger 5 (coordinate values of the tip, the center of gravity, etc.) and the coordinate values of the virtual object displayed in the virtual space (coordinate values of the center of gravity, the front side portion, etc.). When the target object is a real object, the distance is calculated based on the coordinate values of the index finger 5 (coordinate values of the tip, the center of gravity, etc.) and the coordinate values of the real object (coordinate values of the center of gravity, the front side portion, etc.).
  • any algorithm or the like may be used.
  • the distance/size obtaining unit 32 obtains the size of the target object.
  • The size is calculated based on the three-dimensional coordinates defined by the object recognition section 29, for example.
  • When the target object is a virtual object, the size of the virtual object in the virtual space is calculated. When the target object is a real object, the size of the real object is calculated. Any other algorithm may be used.
  • the reliability determination section 33 determines the reliability of the recognition result by the object recognition section 29 .
  • the degree of reliability can be determined based on the size of the detection range of the outward facing camera 12, the ranging sensor 26, or the like. More specifically, the reliability is calculated so that the reliability decreases as the detection range for the part for calculating the pointing position P narrows.
  • FIGS. 4A and 4B are schematic diagrams showing detection states of the hand of the user 3 by the outward facing camera 12.
  • FIGS. 4A and 4B schematically show the state of the hand of the user 3 photographed by the outward facing camera 12.
  • In FIG. 4A, the first joint of the index finger 5, which is the operation object 4, is sufficiently captured. That is, the first joint of the index finger 5 is sufficiently sensed.
  • In FIG. 4B, the back of the hand exists between the first joint of the index finger 5 and the outward facing camera 12, and so-called self-occlusion occurs. Therefore, the detection range of the outward facing camera 12 with respect to the first joint of the index finger 5 is narrow.
  • In the detection state shown in FIG. 4A, the extending direction of the index finger 5 can be detected with high accuracy, so the reliability of the recognition result is high.
  • In the detection state shown in FIG. 4B, the detection accuracy of the extending direction of the index finger 5 is low, so the reliability of the recognition result is low.
  • For example, the display area in the captured image can be used as a parameter representing how wide or narrow the detection range is. The larger the area in which the first joint of the index finger 5 appears, the higher the calculated reliability of the recognition result; the smaller that area, the lower the calculated reliability.
  • The parameter is not limited to the display area in the captured image, and the reliability may be determined based on the area that can be detected (sensed) with respect to the first joint. It is also possible to calculate the reliability based on the ratio of the detected area to the entire area of the first joint of the index finger 5.
  • a region having a predetermined shape such as a circular shape is defined with a predetermined size centering on the tip of the index finger 5 .
  • a detectable area may be calculated for the defined region. Then, the reliability may be calculated according to the area. It is also possible to calculate the reliability based on the ratio of the detected area to the total area of the defined region. Of course, the reliability may be calculated based on the recognition result that self-occlusion is occurring, the amount of occlusion, or the like.
  • It is also possible to determine the reliability based on the speed of movement of the user 3. For example, if the user 3 is moving fast, the reliability is calculated to be low. For example, a calculation such as reliability = 1/(moving speed of the user 3) may be performed. Also, a value of 1/(moving speed of the user 3) may be used as a coefficient for calculating the reliability.
  • the reliability may be set in advance according to the type of sensor that senses the operation object 4 .
  • The reliability of the recognition result may be calculated based on this preset reliability (hereinafter referred to as the set reliability).
  • For example, when developing an AR application for realizing an AR space, a developer sets the set reliability for each sensor that senses the operation object 4. For example, a high set reliability is set for a high-performance sensor, and a low set reliability is set for a low-performance sensor. Alternatively, the set reliability may be appropriately set according to the sensing method.
  • The set reliability may also be set for each device, such as an HMD equipped with a sensor. For example, it is possible to set the set reliability to one level for a certain HMD and to a different level for another HMD.
  • For example, the set reliability corresponding to the sensor type is embedded as meta information.
  • the reliability determination unit 33 calculates the reliability of the recognition result based on the embedded meta information.
  • the set reliability may be used as a coefficient for calculating the reliability.
  • a sensor that senses the operation object 4 may calculate and output the reliability of the detection result (sensing result).
  • the reliability calculated by the sensor is hereinafter referred to as sensing reliability.
  • the reliability determination unit 33 calculates the reliability of the recognition result based on the sensing reliability issued by the sensor itself.
  • the sensing reliability may be used as a coefficient for calculating reliability. Any algorithm may be used as the algorithm for calculating the sensing reliability by the sensor.
  • the sensor itself may calculate the reliability based on the occurrence of self-occlusion as illustrated in FIG. 4, the amount of occlusion, or the like.
  • the extending direction of the index finger 5 may be output as a result of sensing by the sensor. That is, there may be a case where the sensor itself can output the recognition result output by the object recognition unit 29 as the sensing result. At that time, the sensing reliability may also be output together.
  • any method may be adopted as a method of determining the reliability of the recognition result.
  • the plurality of determination methods described above may be combined.
  • the object recognition unit 29 may also function as the reliability determination unit 33 and output the reliability together with the recognition result.
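  • As a rough illustration of how the cues described above could be combined into a single reliability value, the following Python sketch multiplies a visible-area ratio, a speed factor, the set reliability, and the sensing reliability; the multiplicative combination, the clamping, and the default values are assumptions.

        def recognition_reliability(visible_area, total_area, moving_speed,
                                    set_reliability=1.0, sensing_reliability=1.0):
            # visible_area / total_area: how much of the relevant part (e.g. the first
            # joint of the index finger 5) is actually sensed (self-occlusion lowers this).
            area_ratio = visible_area / total_area if total_area > 0 else 0.0
            # reliability ~ 1 / (moving speed): fast movement lowers the reliability.
            speed_factor = 1.0 / max(moving_speed, 1.0)
            # set_reliability: preset per sensor type; sensing_reliability: reported by the sensor.
            r = area_ratio * speed_factor * set_reliability * sensing_reliability
            return max(0.0, min(1.0, r))
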
  • the notification control unit 34 notifies the user 3 of various information by controlling the operation of each device in the HMD 2 .
  • information can be notified to the user 3 by tactile presentation, virtual image display, sound output, or the like.
  • the notification control unit 34 controls the left-eye and right-eye displays 10a and 10b, thereby performing notification of information by displaying a virtual image.
  • the notification control unit 34 controls the speaker 16 to notify information by outputting sound.
  • any method may be adopted as a method of notifying the user 3 of information.
  • the HMD 2 may be equipped with a light source device (illumination device) such as an LED, and lighting of the device may be controlled.
  • the AR playback control unit 35 controls playback of virtual content for the user 3 . For example, how the virtual object moves, how the virtual sound is heard, etc. are determined according to the view of the AR world. Then, the virtual object is displayed on the transmissive display 10 so that the determined content is realized. Also, a virtual sound is output from the speaker 16 .
  • the display position of the virtual object is calculated based on the three-dimensional coordinates defined by the object recognition unit 29, for example.
  • the calculated display position (three-dimensional coordinates) is converted into two-dimensional coordinates (display coordinates on the transmissive display 10) by projective transformation or the like.
  • a virtual object is displayed at the transformed display coordinates. This realizes an AR space in which a virtual object exists at a desired position in the real space.
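  • A minimal sketch of such a projective transformation, assuming a simple pinhole-style model with hypothetical view and intrinsic parameters for the transmissive display 10, might look like the following in Python.

        import numpy as np

        def world_to_display(point_world, view_matrix, intrinsics):
            # view_matrix: assumed 4x4 transform from the world coordinate system into the
            # display (camera) frame; intrinsics: assumed (fx, fy, cx, cy) projection parameters.
            p = view_matrix @ np.append(np.asarray(point_world, dtype=float), 1.0)
            x, y, z = p[:3]
            if z <= 0:
                return None                 # behind the display area, cannot be drawn
            fx, fy, cx, cy = intrinsics
            return (fx * x / z + cx, fy * y / z + cy)   # two-dimensional display coordinates
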
  • Regarding the output of the virtual sound, for example, the position where the virtual sound is generated (the position of the virtual sound source) is calculated based on the three-dimensional coordinates defined by the object recognition unit 29.
  • By controlling the speaker 16 and adjusting the localization of the sound an AR space is realized in which the virtual sound can be heard from a desired position (a desired direction) in the real space.
  • a specific algorithm for playing virtual content is not limited, and arbitrary control may be performed.
  • the notification image 6 shown in FIG. 1 is displayed as a virtual object by the AR playback control unit 35.
  • the display of the notification image 6 is controlled by the AR reproduction control section 35 based on the detection resolution of the pointing position P set by the resolution control section 31 . This point will be described in detail later.
  • the transmission control unit 36 can transmit various information to other devices via the communication unit 18.
  • For example, it is possible to transmit, to other devices, information stored in the storage unit 22, information acquired by the information acquisition unit 28, the recognition result acquired by the object recognition unit 29, the display position of the virtual object calculated by the AR playback control unit 35, and the like.
  • the operation parameter detection unit 30 corresponds to one embodiment of the detection unit according to the present technology.
  • the resolution control unit 31 corresponds to one embodiment of the resolution control unit according to the present technology.
  • the reliability determination unit 33 corresponds to an embodiment of the reliability determination unit according to the present technology.
  • the AR playback control unit 35 corresponds to an embodiment of the display control unit according to the present technology.
  • [Pointing position detection resolution] FIGS. 5 to 7 are schematic diagrams for explaining the detection resolution of the pointing position P.
  • the object recognition unit 29 performs recognition processing on the index finger 5 used as the operation object 4 . Then, the pointing position P is detected based on the recognition result of the index finger 5 .
  • the detection resolution of the operation parameter is a parameter that represents the fineness of change in the operation parameter according to the change of the operation object 4 .
  • The detection resolution of the pointing position P is a parameter representing the fineness of change in the pointing position P according to changes in the posture and position of the index finger 5. The higher the detection resolution, the finer the change in the pointing position P when the posture of the index finger 5 is changed. The lower the detection resolution, the coarser the change in the pointing position P when the posture of the index finger 5 is changed.
  • The detection resolution defines how the pointing position P is detected in response to a change in the posture of the index finger 5 or the like. Therefore, the detection resolution can also be said to be the resolution of adjustment of the pointing position P according to changes in the posture of the index finger 5 or the like. Also, the pointing position P changes in accordance with the amount of change in the posture of the index finger 5 or the like. The detection resolution can therefore also be said to be a parameter that defines how much the pointing position P is changed according to a predetermined amount of change in the posture of the index finger 5 or the like. When the amount of change in the pointing position P corresponding to the predetermined amount of change in the posture of the index finger 5 is small, the detection resolution is high. When the amount of change in the pointing position P corresponding to the predetermined amount of change in the posture of the index finger 5 is large, the detection resolution is low.
  • Conversely, it is also possible to fix the amount of change in the pointing position P and adjust the amount of change in the posture of the index finger 5 required to change the pointing position P.
  • In this case, the detection resolution is a parameter that defines the amount of change in the posture of the index finger 5 required to change the pointing position P. When the amount of change in the posture of the index finger 5 required to change the pointing position P is small, the pointing position P changes by slightly moving the index finger 5, so the pointing position P changes finely. Therefore, in this case, the detection resolution is high.
  • In this way, the detection resolution of the pointing position P can be controlled.
  • It is also possible to regard the control of the detection resolution as control of the correspondence width (control width) of each amount of change in the correspondence relationship between the change in the posture of the index finger 5 and the change in the pointing position P, or as control of the reference used when quantizing.
  • the hand of the user 3 is rotated from the state H1 in which the pointing operation is performed to the state H2 in which the index finger 5 is directed downward. Further, the hand of the user 3 is rotated from the state H1 to the state H3 in which the index finger 5 is directed upward.
  • the detected pointing position P also changes according to such rotation of the hand of the user 3 .
  • pointing position P1 is detected for state H1
  • pointing position P2 is detected for state H2.
  • The pointing position P3 is detected for the state H3.
  • As shown in FIG. 5A, the pointing position P is detected in eight stages between the pointing position P1 and the pointing position P2 corresponding to the change in the posture (orientation) of the index finger 5 from the state H1 to the state H2.
  • Further, the pointing position P is detected in eight stages between the pointing position P1 and the pointing position P3 corresponding to the change in the orientation of the index finger 5 from the state H1 to the state H3. Therefore, the pointing position P is detected in 16 stages between the pointing position P2 and the pointing position P3 with respect to the change in the orientation of the index finger 5 from the state H2 to the state H3.
  • On the other hand, in the example shown in FIG. 5B, the pointing position P is detected in four stages between the pointing position P1 and the pointing position P2 in response to the change in the posture (orientation) of the index finger 5 from the state H1 to the state H2. Further, the pointing position P is detected in four stages between the pointing position P1 and the pointing position P3 corresponding to the change in the orientation of the index finger 5 from the state H1 to the state H3. Therefore, the pointing position P is detected in eight stages between the pointing position P2 and the pointing position P3 with respect to the change in the orientation of the index finger 5 from the state H2 to the state H3. By changing the orientation of the index finger 5, the user 3 can change the pointing position P with the fineness shown in FIG. 5B.
  • The amount of change in the pointing position P with respect to the change in the orientation of the index finger 5 is smaller in FIG. 5A. Therefore, the detection resolution of the pointing position P is higher in FIG. 5A. In this way, by controlling the amount of change (the number of steps) of the pointing position P according to the change in the orientation of the index finger 5 used as the operation object 4, it is possible to adjust the detection resolution of the pointing position P.
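  • One simple way to realize this step-count control is to quantize the orientation of the index finger 5 into a predetermined number of steps between two end positions; in the following Python sketch, the representation of the orientation as a single angle and all parameter names are assumptions.

        def quantize_pointing(p_low, p_high, finger_angle, angle_low, angle_high, num_steps):
            # num_steps corresponds to the number of stages between pointing positions
            # p_low (e.g. P2) and p_high (e.g. P3); a larger num_steps means a higher
            # detection resolution (16 stages in FIG. 5A, 8 stages in FIG. 5B).
            if num_steps < 2:
                return p_low
            ratio = (finger_angle - angle_low) / (angle_high - angle_low)
            ratio = max(0.0, min(1.0, ratio))            # clamp to the controlled range
            step = round(ratio * (num_steps - 1))        # quantize to one of num_steps levels
            return p_low + (p_high - p_low) * step / (num_steps - 1)
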
  • In FIG. 6, from the state H1 in which the pointing operation is performed, the entire hand is moved downward, and the state transitions to the state H2. Further, the entire hand is moved upward from the state H1, and the state transitions to the state H3.
  • the detected pointing position P also changes according to such vertical movement of the index finger 5 of the user 3 .
  • a pointing position P1 is detected for state H1
  • a pointing position P2 is detected for state H2.
  • The pointing position P3 is detected for the state H3.
  • As shown in FIG. 6A, the pointing position P is detected in six stages between the pointing position P1 and the pointing position P2 corresponding to the downward position change of the index finger 5 from the state H1 to the state H2.
  • Further, the pointing position P is detected in six stages between the pointing position P1 and the pointing position P3 corresponding to the upward position change of the index finger 5 from the state H1 to the state H3. Therefore, the pointing position P is detected in 12 stages between the pointing position P2 and the pointing position P3 with respect to the change in the position of the index finger 5 from the state H2 to the state H3.
  • On the other hand, in the example shown in FIG. 6B, the pointing position P is detected in three stages between the pointing position P1 and the pointing position P2 in response to the downward position change of the index finger 5 from the state H1 to the state H2.
  • Further, the pointing position P is detected in three stages between the pointing position P1 and the pointing position P3 corresponding to the upward position change of the index finger 5 from the state H1 to the state H3. Therefore, the pointing position P is detected in six stages between the pointing position P2 and the pointing position P3 with respect to the change in the position of the index finger 5 from the state H2 to the state H3.
  • the amount of change in the pointing position P with respect to the change in the position of the index finger 5 is smaller in FIG. 6A. Therefore, the detection resolution of the pointing position P is higher in FIG. 6A.
  • In this way, by controlling the amount of change (the number of steps) of the pointing position P in accordance with the change in the position of the index finger 5 used as the operation object 4, it is possible to adjust the detection resolution of the pointing position P.
  • In the example shown in FIG. 7, the pointing position P changes by the same amount of change as in the example shown in FIG. 6A with respect to the vertical movement of the index finger 5.
  • the amount of change in the position of the index finger 5 required to change the pointing position P is larger than in the example shown in FIG. 6A. That is, in the example shown in FIG. 7, the index finger 5 has to be moved to a greater extent in order to change the pointing position P compared to the example shown in FIG. 6A.
  • the detection resolution of the pointing position P is higher in FIG. 6A.
  • In this way, the detection resolution of the pointing position P can also be adjusted by controlling the amount of change in the position of the index finger 5 required to change the pointing position P.
  • control of detection resolution can also be said to be control of spatial scaling.
  • the spatial scaling of the corresponding amounts of change is controlled. This makes it possible to control the correspondence between the amount of change in the position of the index finger 5 and the amount of change in the pointing position P, and to control the detection resolution.
  • In addition, various methods may be adopted as the method of controlling the detection resolution of the pointing position P. Of course, the controls of the detection resolution described above may be appropriately combined.
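  • The spatial-scaling view described above can likewise be sketched as a simple gain applied to the displacement of the index finger 5; the function and the gain parameter below are assumptions for illustration.

        def scaled_pointing(p_ref, finger_pos, finger_ref, gain):
            # A displacement of the index finger 5 from a reference position is mapped to a
            # displacement of the pointing position P with a controllable gain. A smaller gain
            # means the finger must be moved farther to change the pointing position by the
            # same amount (cf. FIG. 7 compared with FIG. 6A).
            return p_ref + gain * (finger_pos - finger_ref)
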
  • the resolution control unit 31 shown in FIG. 3 can control the detection resolution of the operation parameter.
  • the resolution control unit 31 controls detection resolution of operation parameters based on operation-related information related to operations by the operation object 4 .
  • the detection resolution of the pointing position P is controlled by the resolution control section 31 .
  • At least one of the distance between the operation object and the target object to be pointed, the size of the target object, and the reliability of the recognition result is used as the operation-related information. That is, at least one of the distance between the index finger 5 and the target object, the size of the target object, or the reliability of the recognition result of the index finger 5 is used as operation-related information.
  • any combination of these parameters may be used.
  • FIG. 8 is a flowchart showing an example of resolution control processing according to the present embodiment.
  • 9 and 10 are schematic diagrams for explaining each step shown in FIG.
  • the processing shown in FIG. 8 is processing executed by the HMD 2 of the user 3 .
  • the processing shown in FIG. 8 is repeated at a predetermined frame rate, for example. Of course, it is not limited to execution for each frame.
  • the case where the target object to be pointed is the virtual object 38 displayed by the HMD 2 is taken as an example.
  • the HMD 2 displays three virtual objects 38a to 38c labeled "List1" to "List3".
  • the user 3 can select a desired "List” from these three virtual objects 38a to 38c by pointing with the index finger 5.
  • The reference detection resolution is set by default, and the detection resolution is controlled according to the flowchart of FIG. 8.
  • a relatively high detection resolution is set as the reference detection resolution.
  • it is not limited to such a setting.
  • It is determined whether the pointing position P detected by the operation parameter detection unit 30 overlaps the virtual object 38 (step 101).
  • the pointing position P overlaps the virtual object 38b of "List2".
  • the virtual object 38b of "List2" is the target object to be pointed.
  • the distance/size acquisition unit 32 acquires the distance between the index finger 5 and the virtual object 38b of "List2" as operation-related information. Further, the distance/size acquisition unit 32 acquires the size of the virtual object 38b of "List2" as the operation-related information.
  • The size acquired here is used as a parameter that affects selection of the target object by the pointing operation. Therefore, a size that affects whether the pointing position P can be kept on the target object is typically used.
  • the size (height) in the vertical direction is acquired as the size of the target object.
  • the size (width) in the left-right direction is acquired as the size of the target object when vertically long target objects are arranged.
  • the diameter or area is acquired as the size of the target object.
  • the method of defining the size of the target object is not limited.
  • the accuracy of the pointing operation is calculated by the resolution control unit 31 based on the operation-related information (step 103).
  • The pointing accuracy is a parameter that indicates how accurately a pointing operation can be performed on the target object. For example, when performing a pointing operation using the index finger 5, pointing can be performed with high accuracy if the distance to the virtual object 38 serving as the target object is short, or if the size of the virtual object 38 is large. On the other hand, if the distance to the virtual object 38 is long, or if the size of the virtual object 38 is small, the pointing operation becomes difficult and the accuracy becomes low.
  • the pointing accuracy is calculated so that the accuracy decreases as the reliability of the recognition result decreases.
  • a specific calculation formula or the like is not limited. Further, calculation of accuracy using distance L and size S may be combined with calculation of accuracy using reliability.
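  • As a hedged illustration of the accuracy calculation described above, the pointing accuracy could be derived from the distance L, the size S, and the reliability so that it decreases with larger distance, smaller size, and lower reliability. The formula, constants, and function name below are assumptions; the disclosure does not fix a specific calculation formula.

```python
# Hypothetical accuracy model: larger distance L lowers accuracy, larger
# target size S raises it, and low recognition reliability lowers it.
# The functional form and constants are assumptions for illustration.

def pointing_accuracy(distance_l: float, size_s: float,
                      reliability: float) -> float:
    """Return an accuracy score in [0, 1]."""
    if distance_l <= 0 or size_s <= 0:
        raise ValueError("distance and size must be positive")
    # Angular size of the target as seen from the finger: grows with S,
    # shrinks with L. Normalized by an arbitrary reference angle (0.2 rad).
    angular = size_s / distance_l
    geometric = min(1.0, angular / 0.2)
    return geometric * max(0.0, min(1.0, reliability))


if __name__ == "__main__":
    print(pointing_accuracy(distance_l=0.5, size_s=0.10, reliability=0.9))  # near, large target
    print(pointing_accuracy(distance_l=2.0, size_s=0.03, reliability=0.6))  # far, small target
```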
  • The relationship between the pointing accuracy calculated in step 103 and the detection resolution of the pointing position P will now be described.
  • the detected pointing position P may fluctuate.
  • the notification image 6 shown in FIG. 1 etc. is displayed as if it were shaking.
  • The higher the detection resolution of the pointing position P, the higher the possibility that the pointing position P will shake. Therefore, from the viewpoint of suppressing the shaking of the pointing position P, it is effective to lower the detection resolution.
  • the virtual objects 38a to 38c are relatively large in size, so the pointing accuracy is relatively high.
  • Even if the pointing position P shakes, it is possible to keep the pointing position P within the same virtual object 38. Therefore, the user 3 can accurately select a desired "List".
  • the detection resolution of the pointing position P is set lower than in FIG. 10B. This makes it possible to suppress the shaking of the pointing position P as shown in FIG. 10B.
  • Although the pointing position P changes less finely, the user 3 can accurately select a desired "List" by moving the index finger 5.
  • the detection resolution is controlled so that the detection resolution decreases as the pointing accuracy decreases.
  • Here, the expression "B decreases as A decreases" includes any state in which B becomes lower in some manner in response to A becoming lower in some manner.
  • For example, B may decrease continuously as A decreases continuously.
  • B may decrease stepwise as A decreases continuously.
  • B may decrease continuously as A decreases in stages.
  • B may be lowered stepwise as A is lowered stepwise.
  • In the case of stepwise lowering, the steps of A and B may be synchronized with each other, or may be entirely unrelated.
  • Next, it is determined whether or not the pointing accuracy is equal to or less than a predetermined threshold (step 104).
  • the predetermined threshold value is stored in the storage unit 22 as the virtual object required accuracy value. If the pointing accuracy is not equal to or less than the threshold (No in step 104), the detection resolution of the pointing position P is maintained unchanged (step 105). If the pointing accuracy is equal to or less than the threshold (Yes in step 104), the detection resolution of the pointing position P is set low (step 106). That is, a detection resolution lower than the reference detection resolution is set.
  • the virtual object required accuracy value (threshold value) is determined in advance. Also, how low the detection resolution is to be set in step 106 is also determined in advance and stored as resolution meta-information.
  • the virtual object required accuracy value (threshold value) and the resolution meta information may be appropriately set so that the pointing operation illustrated in FIG. 10 can be executed with high accuracy.
  • a developer may set a required virtual object accuracy value (threshold) and resolution meta information.
  • the resolution control unit 31 can select and set one detection resolution from a plurality of preset detection resolutions. For example, it is possible to switch from the reference detection resolution to a detection resolution of a lower level, and then switch to a detection resolution of a lower level. It is also possible to perform a process of switching back to the standard detection resolution after switching to a low detection resolution once. In any case, it becomes possible to control the detection resolution stepwise, and it becomes possible to perform the pointing operation properly. Of course, an appropriate detection resolution may be calculated and set in real time based on the operation-related information. Algorithms for calculating proper detection resolution are not limited, and for example, machine learning algorithms may be used.
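  • The flow of steps 104 to 106 together with the stepwise selection of a preset detection resolution could be sketched as follows. The class name, the preset levels, and the threshold value are assumptions introduced only to make the flow concrete.

```python
# Hedged sketch of steps 104-106: compare the pointing accuracy with the
# required accuracy value (threshold) and pick one of several preset
# detection resolutions. Names and numeric values are assumptions.

PRESET_RESOLUTIONS = [1.0, 0.5, 0.25]   # reference level, low, lower (relative)

class ResolutionController:
    def __init__(self, required_accuracy: float = 0.6):
        self.required_accuracy = required_accuracy   # virtual object required accuracy value
        self.level = 0                                # index into PRESET_RESOLUTIONS

    def update(self, accuracy: float) -> float:
        if accuracy <= self.required_accuracy:
            # Step 106: switch to a lower detection resolution (if any remain).
            self.level = min(self.level + 1, len(PRESET_RESOLUTIONS) - 1)
        else:
            # Step 105: keep (or restore) the reference detection resolution.
            self.level = 0
        return PRESET_RESOLUTIONS[self.level]


if __name__ == "__main__":
    ctrl = ResolutionController()
    for acc in (0.9, 0.55, 0.4, 0.8):
        print(acc, "->", ctrl.update(acc))
```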
  • FIG. 11 is a schematic diagram for explaining display of the notification image 6 according to the detection resolution.
  • the display mode of the notification image 6 may be changed according to the detection resolution set by the resolution control section 31.
  • In FIG. 11A, it is assumed that a high detection resolution as illustrated in FIG. 5A is set. In this case, the notification image 6 is displayed thick. In FIG. 11B, for example, assume that a low detection resolution as illustrated in FIG. 5B is set for the virtual object 38. In this case, the notification image 6 is displayed thin.
  • By visually recognizing the notification image 6, the user 3 can recognize that the detection resolution has been switched and what the current detection resolution level is.
  • the result is a high-quality virtual experience.
  • Any image display may be employed to change the display mode of the notification image 6 according to the set detection resolution.
  • a change in the shape or color (including lightness) of the notification image 6, an expression such as blinking, a highlight display, or the like may be used as appropriate.
  • the shape and color of the pointer image displayed at the pointing position P are changed.
  • a virtual object may be superimposed on the index finger 5, which is an operation object.
  • a process of superimposing a predetermined color on the index finger 5 can also be adopted.
  • the user 3 may be notified of detection resolution switching, level, etc. by voice from the speaker 16 or virtual text images such as "high”, “medium”, and "low".
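  • One possible way to vary the display mode of the notification image 6 with the set detection resolution is a simple mapping from a resolution level to thickness, color, and blinking, in line with the thick/thin display of FIG. 11. The levels and style values below are assumptions for illustration only.

```python
# Hypothetical mapping from detection-resolution level to the display mode
# of the notification image 6 (thickness, color, blinking). Any visual
# encoding could be used; these values are assumptions.

def notification_style(resolution_level: str) -> dict:
    styles = {
        "high":   {"thickness_px": 6, "color": "#00c853", "blink": False},
        "medium": {"thickness_px": 4, "color": "#ffd600", "blink": False},
        "low":    {"thickness_px": 2, "color": "#ff6d00", "blink": True},
    }
    return styles[resolution_level]


if __name__ == "__main__":
    print(notification_style("high"))
    print(notification_style("low"))
```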
  • a detection mode notification image for reporting the detection mode of the pointing position P according to the detection resolution set by the resolution control unit 31 may be displayed.
  • the detection mode notification image is an image for notifying the user 3 of how finely the pointing position P is detected with the current detection resolution.
  • As in FIG. 5A, assume that the hand (index finger 5) of the user 3 is in the state H1.
  • a notification image 6 extending from the index finger 5 to the pointing position P1 is displayed on the HMD 2 .
  • the notification image 6 displayed corresponding to another state such as the state H2 or the state H3 may be displayed as the detection state notification image.
  • eight notification images 6 corresponding to other states are displayed below and above the notification image 6 corresponding to the state H1.
  • the 16 notification images 6 corresponding to other states may be displayed in a display mode different from that of the notification image 6 corresponding to state H1.
  • 16 notification images 6 corresponding to other states can be displayed lightly.
  • The notification image 6 from before the change may remain displayed in a faint form.
  • a notification image 6 can also be used as a detection mode notification image.
  • By displaying the detection mode notification image, the user 3 can grasp the current detection resolution, and the operability of the pointing operation is improved. When the detection resolution is switched, the user 3 can easily grasp how the pointing position P is detected at the switched detection resolution. A detection mode notification image corresponding to the detection resolution before switching may also be displayed, which allows the user 3 to understand how the pointing position P was detected before switching. The detection mode notification images before and after switching may be displayed so that they can be switched between.
  • the detection resolution may be controlled by the resolution control section 31 based on an instruction from the user 3 .
  • the detection resolution may be switched based on voice input, gesture input, or the like from the user 3 .
  • an instruction to lower the detection resolution is input.
  • the resolution control section 31 sets the detection resolution to a low value based on the instruction from the user 3 .
  • an instruction to increase the detection resolution may be input.
  • An instruction to switch the detection resolution may also be input with reference to the detection mode notification image.
  • a specific method for inputting an instruction, such as voice input or gesture input, is not limited and may be set arbitrarily.
  • the case where the target object to be pointed is the virtual object 38 is taken as an example.
  • the present technology is not limited to this, and can be applied even when a real object is selected.
  • the pointing accuracy is calculated based on the distance to the real object, the size of the real object, the reliability of the recognition result, and the like.
  • the pointing accuracy is compared with the required accuracy value (threshold), and the detection resolution is controlled based on the comparison result. As a result, an appropriate pointing operation is realized, and a high-quality virtual experience is realized.
  • the detection resolution is controlled based on the operation related information regarding the detection of the operation parameter according to the operation by the operation object. This makes it possible to realize a high-quality virtual experience.
  • the detection resolution of the pointing position P according to the pointing operation with the index finger 5 is appropriately controlled. As a result, the pointing operation can be properly performed, and a high-quality virtual experience is realized.
  • As described above, the detection resolution of the pointing position P can be changed according to the distance between the hand (finger) and the virtual object, the size of the virtual object, and the reliability of the recognition result. As the distance increases and the virtual object becomes smaller, it becomes more difficult for the user 3 to finely adjust the pointing position P; however, since the detection resolution is adjusted accordingly, the cost (time and mistakes) of selecting the virtual object can be sufficiently suppressed.
  • one or more fingers of the user 3 are used as the operation object 4 , and a holding operation can be performed on the virtual object 40 .
  • a pen is held as a virtual object 40 by a thumb 41 and an index finger 42 (hereinafter, the pen may be referred to as pen 40 using the same reference numerals).
  • the type of the virtual object 40 to be held is not limited, and may be set arbitrarily.
  • the present technology can be applied to an AR application such as holding a dart and throwing it at a board.
  • the operation parameter detection unit 30 detects the orientation of the virtual object 40 held by the holding operation as an operation parameter according to the holding operation.
  • the position at which the pen 40 is held and the extending direction of the pen 40 are detected according to the respective positions and orientations of the thumb 41 and index finger 42 .
  • Detection of the held position and extension direction of the pen 40 is included in detection of the posture of the virtual object 40 .
  • detection of any other parameter for defining the pose of pen 40 may be performed as pose detection.
  • a specific algorithm for calculating the orientation of the pen 40 based on the recognition result of the hand (finger) of the user 3 is not limited, and any algorithm may be used. Of course, any machine learning algorithm or the like may be used.
  • the reliability determination section 33 determines the reliability of the recognition result by the object recognition section 29 .
  • the determination processing described in the first embodiment may be executed.
  • the reliability is calculated such that the reliability decreases as the detection range of the sensor for the portion holding the virtual object 40 narrows.
  • FIGS. 13A and 13B are schematic diagrams showing detection states of the hand of the user 3 by the outward-facing camera 12.
  • In FIG. 13A, the portion holding the pen 40 is sufficiently imaged; therefore, the reliability is high.
  • In FIG. 13B, the back of the hand lies between the portion holding the pen 40 and the outward-facing camera 12, and so-called self-occlusion occurs. The detection range of the outward-facing camera 12 for the portion holding the pen 40 is therefore narrow, and the reliability is low.
  • the reliability may be calculated based on the size of the detection range of the first joint of the finger holding the pen 40 .
  • a region having a predetermined shape may be defined centering on the portion where the pen 40 is held, and the detectable area of the region may be calculated. Then, the reliability may be calculated according to the area.
  • the reliability may be calculated such that the reliability decreases as the moving speed of the finger holding the pen 40 increases. Further, the reliability may be calculated so that the reliability decreases as the moving speed of the user 3 increases.
  • the reliability may be set in advance according to the type of sensor that senses the operation object 4 .
  • The reliability of the recognition result may be calculated based on the reliability set in advance (hereinafter referred to as the set reliability). Further, the reliability of the detection result (sensing result), i.e., the sensing reliability, may be calculated and output by the sensor that senses the operation object 4.
  • the reliability may be calculated based on the size of the held virtual object 40 .
  • the reliability may be calculated such that the smaller the size of the held virtual object 40, the lower the reliability.
  • the size of the held virtual object 40 may be used as a factor.
  • any method may be adopted as a method of determining the reliability of the recognition result.
  • the plurality of determination methods described above may be combined.
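  • As one hedged way of combining the determination methods above, each factor (detection range of the holding portion, movement speed, size of the held virtual object, set reliability of the sensor) could be mapped to a score and the scores multiplied. The factor functions, cutoffs, and the use of a product are assumptions, not the method fixed by this disclosure.

```python
# Hedged sketch of a combined reliability score. Each factor is mapped to
# [0, 1] and the factors are multiplied; thresholds and the product form
# are assumptions for illustration.

def recognition_reliability(visible_area_ratio: float,
                            finger_speed_mps: float,
                            object_size_m: float,
                            sensor_set_reliability: float = 1.0) -> float:
    # Narrower detection range of the holding portion -> lower reliability.
    occlusion_factor = max(0.0, min(1.0, visible_area_ratio))
    # Faster finger movement -> lower reliability (1.0 m/s assumed as cutoff).
    speed_factor = max(0.0, 1.0 - finger_speed_mps / 1.0)
    # Smaller held virtual object -> lower reliability (5 cm assumed reference).
    size_factor = min(1.0, object_size_m / 0.05)
    return occlusion_factor * speed_factor * size_factor * sensor_set_reliability


if __name__ == "__main__":
    # Well-imaged, slow hand, large object vs. self-occluded, fast hand, small object.
    print(recognition_reliability(0.9, 0.1, 0.15, 0.95))
    print(recognition_reliability(0.3, 0.6, 0.02, 0.95))
```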
  • FIG. 14 is a schematic diagram for explaining the detection resolution of the posture (extending direction) of the virtual object (pen) 40.
  • the detection resolution of the orientation of the pen 40 is a parameter representing the fineness of change in the orientation of the pen 40 in response to changes in the orientations and positions of the thumb 41 and index finger 42 .
  • The higher the detection resolution, the finer the change in the posture of the pen 40 when the postures of the thumb 41 and the index finger 42 are changed.
  • The lower the detection resolution, the coarser the change in the posture of the pen 40 when the postures of the thumb 41 and the index finger 42 are changed.
  • the detected orientation of the pen 40 also changes according to such finger movement.
  • the posture of the pen 40 is detected in three stages downward and three stages upward, corresponding to the movement of the finger. Therefore, the posture of the pen 40 is detected in six stages including the top and bottom.
  • the user 3 can change the posture of the pen 40 with the precision shown in FIG. 14A.
  • the orientation of the pen 40 is detected in one stage downward and one stage upward, corresponding to the movement of the finger. Therefore, the orientation of the pen 40 is detected in two stages including the upper and lower sides.
  • In this case, the user 3 can change the posture of the pen 40 only with the precision shown in FIG. 14B.
  • In FIG. 14A, the amount of change in the posture of the pen 40 with respect to the movement of the thumb 41 and the index finger 42 is smaller. Therefore, the detection resolution of the posture of the pen 40 is higher in FIG. 14A.
  • By controlling the amount of change (the number of steps) of the posture of the pen 40 in accordance with the movement of the thumb 41 and the index finger 42, or the amount of movement of the thumb 41 and the index finger 42 required to change the posture of the pen 40, it is possible to control the detection resolution of the posture of the pen 40 (a minimal sketch follows below).
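  • The sketch referred to above: the number of steps of the pen orientation can be pictured as quantizing a continuously estimated pitch angle into a given number of stages, where more stages correspond to a higher detection resolution of the posture. The angle range, function name, and the restriction to pitch are assumptions for illustration.

```python
import math

# Hedged sketch: quantize the estimated pitch of the held pen 40 into a
# fixed number of stages per direction. More stages = finer posture changes;
# fewer stages = coarser posture changes. All values are assumptions.

def quantized_pitch(raw_pitch_rad: float, stages: int,
                    max_pitch_rad: float = math.radians(45)) -> float:
    """Snap a raw pitch estimate to one of `stages` levels per direction."""
    clipped = max(-max_pitch_rad, min(max_pitch_rad, raw_pitch_rad))
    step = max_pitch_rad / stages
    return round(clipped / step) * step


if __name__ == "__main__":
    raw = math.radians(17)
    print(math.degrees(quantized_pitch(raw, stages=3)))  # finer: 3 steps up/down (FIG. 14A-like)
    print(math.degrees(quantized_pitch(raw, stages=1)))  # coarser: 1 step up/down (FIG. 14B-like)
```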
  • the resolution control unit 31 controls the detection resolution of the orientation of the pen 40 .
  • In this embodiment, the reliability of the recognition result is used as the operation-related information. That is, the detection resolution of the orientation of the pen 40 is controlled based on the reliability of the recognition results of the thumb 41 and the index finger 42.
  • FIG. 15 is a flowchart showing an example of resolution control processing according to the present embodiment.
  • the processing shown in FIG. 15 is processing executed by the HMD 2 of the user 3 .
  • the processing shown in FIG. 15 is repeated at a predetermined frame rate, for example. Of course, it is not limited to execution for each frame.
  • the object recognition unit 29 monitors whether the virtual object 40 is held (step 201). In this embodiment, the pen 40 is held by the thumb 41 and the index finger 42 .
  • the reliability determination unit 33 calculates the reliability of the recognition results of the thumb 41 and the index finger 42 (step 202).
  • the detected posture of the pen 40 may fluctuate.
  • the virtually displayed pen 40 also trembles, making it difficult to hold the pen 40 in a desired posture.
  • the detection resolution of the orientation of the pen 40 is controlled so that the detection resolution decreases as the reliability of the recognition result decreases. Specifically, it is determined whether or not the reliability of the recognition result is equal to or less than a predetermined threshold (step 203). The predetermined threshold is stored in the storage unit 22 as the required reliability. If the reliability of the recognition result is not equal to or lower than the threshold (No in step 203), the detection resolution of the orientation of the pen 40 is maintained without change (step 204). If the reliability of the recognition result is equal to or less than the threshold (Yes in step 203), the detection resolution of the orientation of the pen 40 is set low (step 205).
  • The required reliability (threshold value) is determined in advance. How low the detection resolution is to be set in step 205 is also determined in advance and stored as resolution meta-information. For example, the required reliability (threshold) and the resolution meta-information may be set appropriately so that the AR application can be enjoyed. Of course, the approaches to controlling the detection resolution described in the first embodiment may also be adopted.
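  • Steps 203 to 205 amount to comparing the reliability of the recognition result with the required reliability and lowering the detection resolution of the pen orientation when the threshold is not exceeded. The snippet below is a minimal sketch; the threshold and resolution values stand in for the required reliability and the resolution meta-information and are assumptions.

```python
# Hedged sketch of steps 203-205 for the holding operation. The required
# reliability and the two resolution values are assumptions corresponding
# to the required reliability and the resolution meta-information.

REQUIRED_RELIABILITY = 0.5      # assumed threshold (step 203)
REFERENCE_RESOLUTION = 1.0      # assumed reference orientation resolution
LOWERED_RESOLUTION = 0.25       # assumed lowered orientation resolution

def orientation_resolution(recognition_reliability: float) -> float:
    if recognition_reliability <= REQUIRED_RELIABILITY:
        return LOWERED_RESOLUTION       # step 205: set the resolution low
    return REFERENCE_RESOLUTION         # step 204: keep the resolution


if __name__ == "__main__":
    for r in (0.8, 0.5, 0.3):
        print(r, "->", orientation_resolution(r))
```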
  • the display mode of the held pen 40 may be varied according to the detection resolution set by the resolution control section 31 . Also, the user 3 may be notified of detection resolution switching, level, etc. by voice or virtual text image.
  • a detection mode notification image for reporting the detection mode of the posture of the pen 40 according to the detection resolution set by the resolution control unit 31 may be displayed.
  • For example, the change in the posture of the pen 40 according to a higher detection resolution (shown by the two-dot chain line) may be displayed superimposed on the change in the posture of the pen 40 according to the current detection resolution (shown by the solid line).
  • The changes in the posture of the pen 40 according to the higher detection resolution are displayed more faintly.
  • the user 3 can grasp the change in the posture of the pen 40 when the detection resolution is switched. It is also possible to compare with changes in the current posture of the pen 40 .
  • any image may be displayed as the detection mode notification image.
  • Detection resolution may be controllable based on instructions from user 3 .
  • various contents described in the first embodiment can also be applied to the second embodiment.
  • the detection resolution of the orientation of the virtual object is appropriately controlled according to the holding operation of the virtual object. This makes it possible to suppress unintended shaking (trembling) of the virtual object, realizing a high-quality virtual experience. For example, it is possible to enjoy the AR application that allows the player to play darts.
  • FIG. 17 is a schematic diagram showing an example of a wearable controller.
  • FIG. 17A is a schematic diagram showing the appearance of the palm side of the wearable controller.
  • FIG. 17B is a schematic diagram showing the appearance of the back side of the wearable controller.
  • the wearable controller 44 is configured as a so-called palm vest type device, and is used by being worn on the hand of the user 3 .
  • Various devices such as a camera, an 11-axis sensor, a GPS, a distance measuring sensor, a microphone, an IR sensor, and an optical marker are mounted at predetermined positions on the wearable controller 44.
  • The cameras are arranged on the palm side and the back side of the hand so that the fingers can be photographed. Hand recognition processing for the user 3 can be executed based on the images of the fingers captured by the cameras, the detection results of the sensors (sensor information), the sensing results of IR light reflected by the optical markers, and the like. This makes it possible to obtain various information such as the position, posture, and movement of the hand and each finger.
  • It is also possible to detect input operations such as touch operations and to determine gestures made using the hands.
  • the user 3 can use his/her hands to perform various gesture inputs and operations on virtual objects.
  • a plurality of vibrators are mounted at predetermined positions of the wearable controller 44 as tactile sense presentation units. By driving the vibrator, it is possible to present various patterns of tactile sensations to the hand of the user 3 .
  • the specific configuration of the vibrator is not limited, and any configuration may be adopted.
  • The wearable controller 44 can notify the user 3 of various information. For example, it is possible to provide tactile feedback by driving the plurality of vibrators. Of course, by installing a display unit and speakers, it is also possible to provide visual feedback and sound feedback.
  • processing according to the present technology may be performed by another computer (server device or the like) communicably connected to the wearable controller 44 .
  • the other computer functions as an embodiment of the information processing apparatus according to the present technology.
  • the wearable controller 44 may perform holding determination and release determination.
  • the wearable controller 44 functions as an embodiment of an information processing device according to the present technology.
  • the wearable controller 44 and another computer may cooperate to realize an information processing apparatus according to the present technology and execute the information processing method according to the present technology.
  • As one embodiment of the information processing system according to the present technology, it is also possible to configure a VR providing system.
  • user 3 wears an immersive HMD 2 configured to cover user 3's field of view.
  • In this case, the user 3 operates a corresponding virtual object, which moves in correspondence with the movement of the user's own hand (fingers), by moving his or her own hand (fingers).
  • the corresponding virtual object may be a model image of one's hand (fingers).
  • the image is not limited to this, and may be a virtual image of a hand (fingers) of a character or robot, or a tool such as a crane or tongs.
  • a corresponding virtual object functions as an embodiment of an operation object according to the present technology.
  • various techniques described above may be implemented.
  • any device may be used to realize the virtual space.
  • the virtual space is not limited to devices such as HMDs and projectors, and may be realized using smart phones, tablet terminals, PCs (Personal Computers), and the like.
  • the pointing operation and the holding operation on the virtual object are given as examples of the operation by the operation object.
  • the present technology can be applied to arbitrary operations and calculation of operation parameters according to the operations. That is, the operation performed by the user 3 is not limited for application of the present technology. This technology can be applied to arbitrary operations that can be executed in the AR world.
  • The operation object is not limited to hands, fingers, and the like. Canes, pointers, walking sticks, chopsticks, tweezers, cranes, tongs, whole hands, whole arms, feet, toes, and so on may also be used; any object that can perform a pointing operation or a holding operation on a virtual object can be used as an operation object.
  • The representation of the virtual content may be changed according to the set detection resolution. For example, assume that a relatively high detection resolution is set when a holding operation is performed on a virtual object. In this case, visual interpolation processing may be performed so that the held virtual object moves smoothly. A sound that expresses how the virtual object is moving smoothly, such as "Shoot", may also be output. When presenting a tactile sensation by vibration, the vibration may be controlled to be strong at the beginning of the movement of the virtual object, weak during the movement, and strong at the end of the movement. On the other hand, when a relatively low detection resolution is set, such visual interpolation processing is not executed, a single sound such as "beep" is played, and the vibration-based tactile presentation is controlled such that the virtual object simply vibrates when it moves. A minimal sketch of such resolution-dependent feedback selection follows below.
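  • The sketch referred to above: the resolution-dependent representation (smooth interpolation, a movement sound, and a strong-weak-strong vibration for high resolution; no interpolation, a single sound, and a simple vibration for low resolution) can be organized as a small feedback table. The class, field names, and pattern values are assumptions for illustration only.

```python
# Hedged sketch: choose visual / audio / haptic feedback for a moving
# virtual object depending on the currently set detection resolution.
# Pattern contents are assumptions inspired by the description above.

from dataclasses import dataclass
from typing import List

@dataclass
class FeedbackPlan:
    interpolate_motion: bool        # smooth visual interpolation of the held object
    sound: str                      # sound played while/when the object moves
    vibration_envelope: List[float] # relative vibration strength over the movement

def feedback_for_resolution(high_resolution: bool) -> FeedbackPlan:
    if high_resolution:
        return FeedbackPlan(True, "whoosh", [1.0, 0.3, 1.0])   # strong-weak-strong
    return FeedbackPlan(False, "beep", [1.0])                   # single pulse on movement


if __name__ == "__main__":
    print(feedback_for_resolution(True))
    print(feedback_for_resolution(False))
```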
  • In the embodiments above, the HMD 2 functions as an embodiment of the information processing apparatus according to the present technology.
  • However, an arbitrary computer such as a PC connected to the HMD 2 via a network or the like may realize an embodiment of the information processing apparatus according to the present technology and execute the information processing method according to the present technology.
  • an embodiment of the information processing apparatus according to the present technology may be realized and the information processing method according to the present technology may be executed by cooperation between the HMD 2 and a computer on the network.
  • part or all of each functional block realized by the controller 23 shown in FIG. 3 may be realized by another computer connected to the HMD 2.
  • FIG. 18 is a block diagram showing a hardware configuration example of a computer 60 applicable to the present technology.
  • the computer 60 includes a CPU 61, a ROM (Read Only Memory) 62, a RAM 63, an input/output interface 65, and a bus 64 connecting them together.
  • a display unit 66, an input unit 67, a storage unit 68, a communication unit 69, a drive unit 70, and the like are connected to the input/output interface 65.
  • the display unit 66 is a display device using liquid crystal, EL, or the like, for example.
  • the input unit 67 is, for example, a keyboard, pointing device, touch panel, or other operating device.
  • When the input unit 67 includes a touch panel, the touch panel can be integrated with the display unit 66.
  • the storage unit 68 is a non-volatile storage device such as an HDD, flash memory, or other solid-state memory.
  • the drive unit 70 is a device capable of driving a removable recording medium 71 such as an optical recording medium or a magnetic recording tape.
  • the communication unit 69 is a modem, router, or other communication equipment for communicating with other devices that can be connected to a LAN, WAN, or the like.
  • the communication unit 69 may use either wired or wireless communication.
  • the communication unit 69 is often used separately from the computer 60 .
  • Information processing by the computer 60 having the hardware configuration as described above is realized by cooperation of software stored in the storage unit 68 or the ROM 62 or the like and the hardware resources of the computer 60 .
  • the information processing method according to the present technology is realized by loading a program constituting software stored in the ROM 62 or the like into the RAM 63 and executing the program.
  • The program is installed in the computer 60 via the recording medium 71, for example.
  • the program may be installed on the computer 60 via a global network or the like.
  • any computer-readable non-transitory storage medium may be used.
  • An information processing method and a program according to the present technology may be executed by a plurality of computers communicably connected via a network or the like to construct an information processing apparatus according to the present technology. That is, the information processing method and program according to the present technology can be executed not only in a computer system configured by a single computer, but also in a computer system in which a plurality of computers work together.
  • a system means a set of multiple components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and a single device housing a plurality of modules within a single housing, are both systems.
  • Execution of the information processing method and the program according to the present technology by the computer system includes, for example, both the case where the detection of operation parameters according to an operation, the control of the detection resolution, the display control, the determination of reliability, the notification control, the playback control of virtual content, and the like are executed by a single computer, and the case where each process is executed by different computers. Execution of each process by a predetermined computer includes causing another computer to execute part or all of the process and acquiring the result. That is, the information processing method and the program according to the present technology can also be applied to a cloud computing configuration in which a plurality of devices share and jointly process one function via a network.
  • In the present disclosure, expressions such as "greater than A" and "less than A" encompass both the concept that includes the case of being equal to A and the concept that does not include the case of being equal to A. For example, "greater than A" is not limited to the case that excludes being equal to A, and also includes "equal to or greater than A". Similarly, "less than A" is not limited to the case that excludes being equal to A, and also includes "equal to or less than A". When implementing the present technology, specific settings and the like may be appropriately adopted from the concepts included in "greater than A" and "less than A" so that the effects described above are exhibited.
  • the present technology can also adopt the following configuration.
  • a detection unit that detects an operation parameter corresponding to an operation by the operation object based on a recognition result of the operation object; and a resolution control unit that controls detection resolution of the operation parameter based on operation-related information related to the operation by the operation object.
  • the operation by the operation object is a pointing operation;
  • the detection unit detects, as the operation parameter, a pointing position indicated by the operation object,
  • the information processing apparatus wherein the operation-related information includes at least one of a distance between the operation object and a target object to be pointed, a size of the target object, or a reliability of the recognition result.
  • The information processing device calculates the accuracy of the pointing operation based on the operation-related information, and controls the detection resolution so that the detection resolution decreases as the accuracy of the pointing operation decreases.
  • Calculation of the accuracy by the resolution control unit includes at least one of: calculating the accuracy so that the accuracy decreases as the distance between the operation object and the target object increases; calculating the accuracy so that the accuracy decreases as the size of the target object decreases; or calculating the accuracy so that the accuracy decreases as the reliability of the recognition result decreases.
  • The information processing device further comprising a display control unit that displays a notification image for notifying the detected pointing position.
  • the notification image is at least one of a pointer image displayed at the pointing position, a linear image extending from the operation object to the pointing position, and an enhanced image that emphasizes the target object on which the pointing position overlaps.
  • (7) The information processing device according to (5) or (6), wherein the display control section changes the display mode of the notification image according to the detection resolution set by the resolution control section.
  • the information processing device according to any one of (2) to (7), The information processing apparatus, wherein the display control section displays a detection mode notification image for reporting a detection mode of the pointing position according to the detection resolution set by the resolution control section.
  • the information processing device according to any one of (2) to (8), The information processing apparatus, wherein the target object is a virtual object or a real object.
  • the information processing device according to (1), the operation by the operation object is a holding operation on a virtual object; The detection unit detects, as the operation parameter, an orientation of the virtual object held by the holding operation, The information processing apparatus, wherein the operation-related information includes reliability of the recognition result.
  • the information processing device according to (10), The information processing apparatus, wherein the resolution control unit controls the detection resolution such that the detection resolution decreases as the reliability of the recognition result decreases.
  • The information processing device according to (10) or (11), further comprising: a reliability determination unit that calculates the reliability of the recognition result. Calculation of the reliability by the reliability determination unit includes at least one of: calculating the reliability so that the reliability decreases as the detection range of the sensor for the portion of the operation object holding the virtual object narrows; calculating the reliability so that the reliability decreases as the speed of movement of the operation object increases; or calculating the reliability so that the reliability decreases as the size of the virtual object decreases.
  • The information processing device calculates the reliability based on at least one of a preset reliability set according to the type of sensor that senses the operation object, or a sensing reliability output by the sensor.
  • the information processing device according to any one of (10) to (13), further comprising: An information processing apparatus comprising a display control unit that displays the virtual object.
  • the information processing device according to (14), The information processing apparatus, wherein the display control unit displays a detection state notification image for reporting the detection state of the posture according to the detection resolution set by the resolution control unit.
  • the information processing device according to any one of (1) to (15), The information processing apparatus, wherein the resolution control unit controls the detection resolution based on an instruction from a user.
  • the information processing device according to any one of (1) to (16), The information processing apparatus, wherein the operation object is one or more fingers of a user.
  • the resolution control unit selects and sets one detection resolution from among a plurality of predetermined detection resolutions having different levels.
  • An information processing method wherein a computer system controls detection resolution of the operation parameter based on operation-related information related to the operation by the operation object.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An information processing device according to an embodiment of the present invention is equipped with a detection unit and a resolution control unit. The detection unit detects an operation parameter corresponding to an operation by an operation object on the basis of a recognition result for the operation object. The resolution control unit controls a detection resolution for the operation parameter on the basis of operation-related information relating to the operation by the operation object. This makes it possible to properly perform pointing operations and the like and to realize high-quality virtual experiences.
PCT/JP2022/010540 2021-08-20 2022-03-10 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2023021757A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021134587 2021-08-20
JP2021-134587 2021-08-20

Publications (1)

Publication Number Publication Date
WO2023021757A1 true WO2023021757A1 (fr) 2023-02-23

Family

ID=85240383

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/010540 WO2023021757A1 (fr) 2021-08-20 2022-03-10 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (1)

Country Link
WO (1) WO2023021757A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62189580A (ja) * 1986-02-17 1987-08-19 Nec Corp カ−ソル表示装置
JP2012137989A (ja) * 2010-12-27 2012-07-19 Sony Computer Entertainment Inc ジェスチャ操作入力処理装置およびジェスチャ操作入力処理方法
JP2012221250A (ja) * 2011-04-08 2012-11-12 Sony Corp 画像処理装置、表示制御方法及びプログラム
WO2015025874A1 (fr) * 2013-08-20 2015-02-26 株式会社ソニー・コンピュータエンタテインメント Dispositif de commande d'emplacement de curseur, procédé de commande d'emplacement de curseur, programme et support de stockage d'informations
JP2015176451A (ja) * 2014-03-17 2015-10-05 京セラドキュメントソリューションズ株式会社 ポインティング制御装置およびポインティング制御プログラム
WO2020171098A1 (fr) * 2019-02-19 2020-08-27 株式会社Nttドコモ Dispositif d'affichage d'informations utilisant une ligne de visée et des gestes

Similar Documents

Publication Publication Date Title
US11181986B2 (en) Context-sensitive hand interaction
EP3411777B1 (fr) Procédé de suivi de mouvement d'objet avec dispositif distant pour système de réalité mixte, système de réalité mixte et support non transitoire lisible par ordinateur
JP6244593B1 (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
KR102194164B1 (ko) 홀로그램 객체 피드백
US10359863B2 (en) Dragging virtual elements of an augmented and/or virtual reality environment
US10313481B2 (en) Information processing method and system for executing the information method
JP6392911B2 (ja) 情報処理方法、コンピュータ、および当該情報処理方法をコンピュータに実行させるためのプログラム
EP3196734B1 (fr) Dispositif de commande, procédé de commande et programme
US10515481B2 (en) Method for assisting movement in virtual space and system executing the method
JP2022184958A (ja) アニメーション制作システム
JP2018124981A (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
JP6368404B1 (ja) 情報処理方法、プログラム及びコンピュータ
JP2019168962A (ja) プログラム、情報処理装置、及び情報処理方法
JP2018028765A (ja) 仮想空間を提供する方法、プログラム、および記録媒体
WO2023021757A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP6278546B1 (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
JP2019020836A (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
JP2019036239A (ja) 情報処理方法、情報処理プログラム、情報処理システム及び情報処理装置
JP2018190196A (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
JP6403843B1 (ja) 情報処理方法、情報処理プログラム及び情報処理装置
US20240281072A1 (en) Information processing apparatus, information processing method, and program
JP2019016358A (ja) 情報処理方法、プログラム及びコンピュータ
JP2018190397A (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
JP2018206353A (ja) 情報処理方法、装置、および当該情報処理方法をコンピュータに実行させるためのプログラム
WO2021131950A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22858082

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22858082

Country of ref document: EP

Kind code of ref document: A1