WO2021106552A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2021106552A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
information processing
processing device
information
virtual
Application number
PCT/JP2020/041895
Other languages
English (en)
Japanese (ja)
Inventor
達雄 藤原
Original Assignee
ソニーグループ株式会社
Application filed by ソニーグループ株式会社
Publication of WO2021106552A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 - Details of the operation on graphic patterns
    • G09G5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns

Definitions

  • This technology relates to information processing devices, information processing methods, and programs applicable to display control such as AR (Augmented Reality).
  • The mobile device described in Patent Document 1 can display at least a part of a predetermined image displayed on a display device. By moving the mobile device back and forth with respect to the display device, the image displayed on the mobile device is enlarged or reduced. The image displayed on the mobile device can also be moved by moving the mobile device left and right. As a result, information can be provided to the user efficiently (paragraphs [0022] and [0030] of Patent Document 1, FIG. 6, and the like).
  • The purpose of the present technology is to provide an information processing device, an information processing method, and a program capable of providing a high-quality viewing experience.
  • In order to achieve the above purpose, the information processing device according to one embodiment of the present technology includes a display control unit.
  • The display control unit controls, based on target object information including the position and shape of a target object and operating object information including the position of an operating object, the superimposed display of a virtual object corresponding to the shape of the target object in response to an operation performed on the target object by the operating object.
  • In this information processing device, the superimposed display of the virtual object corresponding to the shape of the target object is controlled, in response to the operation performed on the target object by the operating object, based on the target object information including the position and shape of the target object and the operating object information including the position of the operating object. This makes it possible to provide a high-quality viewing experience.
  • the display control unit may superimpose and display the virtual object along the surface of the target object.
  • the display control unit may control the superimposed display so that the virtual object is virtually drawn on the surface of the target object.
  • the display control unit may control the superimposed display so that the virtual object is virtually arranged along the surface of the target object.
  • Each of the position of the target object and the position of the operating object may include depth information detected by the depth sensor.
  • the depth sensor may include at least a TOF (Time of Flight) camera.
  • the target object information may include a shadow state of the target object.
  • the display control unit may control the brightness of the virtual object based on the shadow condition.
  • the display control unit may control the brightness of the virtual object so that the shadow state of the target object is reflected in the virtual object.
  • the information processing device may further include an acquisition unit that acquires the target object information and the operation object information.
  • the acquisition unit may be able to estimate the shadow state of the target object based on the lighting state of the target object and the shape of the target object.
  • the lighting condition may include the position of a light source that projects light onto the target object and the brightness of the light source.
  • the display control unit may superimpose and display the virtual object on the target object image including the target object.
  • the target object may be at least one of the face and the head.
  • the operating object may be a finger.
  • the target object and the operation object may be different parts of the same user.
  • the target object information may include brightness.
  • the display control unit may display the virtual object on the target object when the operating object comes into contact with the target object.
  • the display control unit may control the superimposed display of the virtual object according to the shape of the target object based on the position where the operation object operates on the target object.
  • The information processing method according to one embodiment of the present technology is an information processing method executed by a computer system, and includes controlling, based on target object information including the position and shape of a target object and operating object information including the position of an operating object, the superimposed display of a virtual object corresponding to the shape of the target object in response to an operation performed on the target object by the operating object.
  • The program according to one embodiment of the present technology causes a computer system to execute a step of controlling, based on target object information including the position and shape of a target object and operating object information including the position of an operating object, the superimposed display of a virtual object corresponding to the shape of the target object in response to an operation performed on the target object by the operating object.
  • FIG. 1 is a schematic diagram for explaining an outline of a display control system according to the present technology.
  • The display control system 100 according to the present technology can control, based on target object information 6 including the position and shape of a target object 1 and operating object information 7 including the position of an operating object 2, the superimposed display of a virtual object 4 corresponding to the shape of the target object 1 in response to an operation performed on the target object 1 by the operating object 2.
  • With the display control system 100, it is possible to provide a high-quality viewing experience. For example, as shown in FIG. 1, the user 3 takes a picture (selfie) of himself or herself using the user terminal 5. By using the display control system 100, the user 3 can superimpose and display the virtual object 4 with high quality on his or her own face or the like being photographed.
  • the target object 1 is an object on which the virtual object 4 is superimposed and displayed.
  • the face of the user 3 is set as the target object 1.
  • the target object 1 and the operation object 2 are realized by different parts of the same user 3.
  • A part of the body of the user 3 or the entire body (the user 3 himself or herself) may be set as the target object 1, or a plurality of parts such as both hands and feet may be set as the target object 1.
  • a person other than the user 3 may be set as the target object 1.
  • the target object 1 may be other than a person.
  • it may be a living thing such as a cat or a dog, or an object such as a doll, a mug, or a folding fan.
  • the operation object 2 is an object that executes an operation on the target object 1.
  • the finger of the user 3 is set as the operating object 2.
  • any object such as a pen or a brush may be set as the operation object 2.
  • a plurality of operating objects 2 may be set.
  • the index finger, the middle finger, and the ring finger may be set as the operating object 2.
  • Examples of the operation on the target object 1 include contact (touch) with the target object 1, tracing (slide) the surface of the target object 1, and the like.
  • operations on the target object 1 using various operation objects 2 may be executed.
  • an arbitrary operation for controlling the virtual object 4 is also included in the operation on the target object 1.
  • For example, the operating object 2 may be slid while touching the virtual object 4 so that the virtual object 4 follows the operating object 2.
  • the operation on the target object 1 is not limited to this.
  • The superimposed display of the virtual object 4 may be controlled according to which operating object 2 is used, for example, an operation on the target object 1 being executed with the index finger and an operation on the virtual object 4 being executed with the middle finger.
  • In the present embodiment, a CG (Computer Graphics) image such as the crocodile illustrated in FIG. 1 is used as the virtual object 4. The present technology is not limited to this, and any virtual object may be displayed.
  • For example, when a picture drawn by the user 3 is input to the user terminal 5, that picture may be output as the virtual object 4. Further, the virtual object 4 may be, for example, a still image or a moving image.
  • "displaying a virtual object superimposed on a target object” includes displaying a specific virtual object in a specific space including the target object. It also includes displaying virtual objects at specific locations.
  • an arbitrary display in which a virtual object is superimposed and displayed on a target object image (photographed image) including the target object 1 is included.
  • The target object information 6 includes various information such as the position and shape of the target object 1.
  • As the position of the target object 1, a coordinate value (for example, an XYZ coordinate value) defined by an absolute coordinate system (world coordinate system) may be used. Alternatively, a coordinate value (for example, an xyz coordinate value or a uvd coordinate value) defined by a relative coordinate system with a predetermined point as a reference (origin) may be used. When a relative coordinate system is used, the reference origin may be set arbitrarily.
  • In the present embodiment, the depth information of the face of the user 3, measured with the inward camera 13 of the user terminal 5 as a reference, is set as the position of the target object 1.
  • the target object information 6 also includes the orientation and posture of the user 3's face.
  • Depth information is the distance from the origin to the measurement object.
  • the depths of the target object 1 and the operating object 2 with the position of the inward camera 13 as a reference are the depth information.
  • the depth information in each pixel of the target object 1 and the operating object 2 is acquired by the inward camera 13.
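  • As a rough illustration of the kind of processing implied here, the following sketch back-projects a per-pixel depth map into camera-referenced 3D points. It is only an example under assumed conditions: the pinhole intrinsics fx, fy, cx, cy and the function name are hypothetical and not part of the disclosure.

```python
import numpy as np

def depth_to_points(depth_map, fx, fy, cx, cy):
    """Back-project a per-pixel depth map (in metres) into camera-space XYZ.

    depth_map: (H, W) array of distances measured from the inward camera.
    fx, fy, cx, cy: assumed pinhole intrinsics of the depth camera.
    Returns an (H, W, 3) array; pixels with no depth (0) become NaN.
    """
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.astype(np.float32)
    z[z == 0] = np.nan                 # mark missing measurements
    x = (u - cx) * z / fx              # pixel column -> camera X
    y = (v - cy) * z / fy              # pixel row    -> camera Y
    return np.stack([x, y, z], axis=-1)
```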
  • a 3D shape of the face surface of the user 3 is set.
  • the 3D shape is information indicating the shape of an object such as size and curvature. For example, when the target object 1 is a face, the size and curvature of the nose, chin, cheeks, eyes, orbits, etc. are used as the 3D shape.
  • the target object information 6 includes the posture and the shadow state of the target object 1.
  • As the posture of the target object 1, for example, when the target object 1 is a face, information such as which direction the face is facing and how much it is tilted, relative to the posture of a face directly facing the inward camera 13, is set.
  • the shadow situation is information about a shadow in the target object 1.
  • the brightness and the brightness information distribution at each position (pixel) of the target object 1 are included in the shadow situation. For example, when the target object 1 is a face, the brightness of the shadow region generated by the nose, hair, or the like is detected to be low.
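  • A minimal sketch of how such a shadow situation (a per-pixel brightness distribution with low-brightness shadow regions) could be extracted from the RGB frame is shown below; the relative threshold and the function name are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def estimate_shadow_mask(rgb, face_mask, rel_threshold=0.6):
    """Mark face pixels whose luminance falls well below the face average.

    rgb: (H, W, 3) uint8 frame from the RGB camera.
    face_mask: (H, W) boolean mask of the target object (the face).
    rel_threshold: fraction of the mean face luminance below which a pixel
        is treated as shadow (an illustrative value).
    """
    luma = rgb @ np.array([0.299, 0.587, 0.114])     # per-pixel brightness
    mean_face_luma = luma[face_mask].mean()
    return face_mask & (luma < rel_threshold * mean_face_luma)
```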
  • Information including the position of the operating object 2 is set as the operating object information 7.
  • As with the target object 1, the position of the operating object 2 may be expressed by a coordinate value (for example, an XYZ coordinate value) defined by an absolute coordinate system (world coordinate system), or by a coordinate value (for example, an xyz coordinate value or a uvd coordinate value) defined by a relative coordinate system with a predetermined point as a reference (origin). The reference origin may be set arbitrarily.
  • The position of the operating object 2 is defined in the same coordinate system as the coordinate system that defines the position of the target object 1 included in the target object information 6.
  • In the present embodiment, the depth information of the finger of the user 3, measured with the inward camera 13 of the user terminal 5 as a reference, is set as the operating object information 7.
  • the operating object information 7 includes the posture and brightness of the operating object 2.
  • As the posture of the operating object 2, for example, when the operating object 2 is a finger, information such as which direction the tip of the finger is facing and how much it is tilted is set.
  • the display control system 100 includes a user terminal 5 and an information processing device 10.
  • the information processing device 10 is realized by the user terminal 5 itself. Of course, it is not limited to such a configuration.
  • the user terminal 5 is a terminal used by the user 3.
  • a mobile terminal such as a smartphone or a tablet terminal is used as the user terminal 5.
  • the user terminal 5 has an inward-facing camera 13, an outward-facing camera 14, and a touch panel 15.
  • the inward-facing camera 13 and the outward-facing camera 14 are imaging devices capable of capturing peripheral images.
  • the inward-facing camera 13 is a camera mounted on the same surface as the surface on which the touch panel 15 is mounted.
  • the outward-facing camera 14 is a camera mounted on a surface opposite to the surface on which the touch panel 15 is mounted.
  • the inward-facing camera 13 and the outward-facing camera 14 include a TOF (Time of Flight) camera capable of measuring the distance (depth information) to the target object 1 and the operating object 2.
  • imaging devices such as stereo cameras, digital cameras, monocular cameras, infrared cameras, polarized cameras, and other cameras are used.
  • sensor devices such as laser distance measuring sensors, contact sensors, ultrasonic sensors, LiDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging), and sonar may be used.
  • the inward-facing camera 13 and the outward-facing camera 14 include an RGB camera for acquiring the brightness of the target object 1.
  • the touch panel 15 functions as a display unit and can display various images and GUIs. In the present embodiment, the inside of the angle of view taken by the inward-facing camera 13 or the outward-facing camera 14 is displayed in real time. Further, the touch panel 15 can accept the touch operation of the user 3. The user 3 can input a predetermined instruction or the like via the touch panel 15.
  • FIG. 1 schematically shows how the user 3 photographed by the inward camera 13 is displayed on the touch panel 15.
  • the user 3 can execute the superimposed display of the virtual object 4 according to the shape of the face according to the operation on the face (target object 1) by the finger (operation object 2).
  • the virtual object 4 of the crocodile is superimposed and displayed at the place where the finger touches.
  • the virtual object 4 of the crocodile is controlled to have a shape corresponding to the surface of the face of the user 3.
  • the information processing device 10 has hardware necessary for configuring a computer, such as a processor such as a CPU, GPU, and DSP, a memory such as ROM and RAM, and a storage device such as an HDD (see FIG. 9).
  • the information processing method according to the present technology is executed when the CPU loads and executes the program according to the present technology recorded in advance in the ROM or the like into the RAM.
  • the information processing device 10 can be realized by an arbitrary computer such as a PC (Personal Computer).
  • hardware such as FPGA and ASIC may be used.
  • the display control unit as a functional block is configured by the CPU executing a predetermined program.
  • the program is installed in the information processing apparatus 10 via, for example, various recording media. Alternatively, the program may be installed via the Internet or the like.
  • the type of recording medium on which the program is recorded is not limited, and any computer-readable recording medium may be used. For example, any non-transient storage medium readable by a computer may be used.
  • the information processing device 10 acquires the target object information 6 and the operation object information 7.
  • The acquisition of the target object information 6 and the operating object information 7 includes both receiving the target object information 6 and the operating object information 7 transmitted from the outside, and the information processing device 10 itself generating the target object information 6 and the operating object information 7.
  • the information processing device 10 illustrated in FIG. 1 controls the superimposed display of the virtual object 4 based on the target object information 6 and the operation object information 7.
  • FIG. 2 is a block diagram showing a functional configuration example of the user terminal 5.
  • the user terminal 5 includes a speaker 11, a microphone 12, an inward camera 13, an outward camera 14, a touch panel 15, an operation button 16, a lighting unit 17, a sensor unit 18, a communication unit 19, a storage unit 20, and a controller 21.
  • the speaker 11 can output audio.
  • the speaker 11 outputs, for example, voice guidance, an alarm sound, and the like.
  • the microphone 12 is used for making a call, inputting a voice instruction, collecting surrounding sounds, and the like.
  • the operation button 16 is provided to perform an operation different from the operation via the touch panel 15, such as an operation of turning the power on / off.
  • the lighting unit 17 has a light source such as an LED (Light Emitting Diode) or an LD (Laser Diode), and can output light. For example, by turning on the illumination unit 17, it is possible to illuminate the target object 1 like a light. It is also possible for the lighting unit 17 to notify the reception of an e-mail or the like.
  • the sensor unit 18 can detect the surrounding situation, the state of the user terminal 5, the state of the user 3, and the like.
  • As the sensor unit 18, a 9-axis sensor, a GPS, a biological sensor, and the like are mounted.
  • the 9-axis sensor includes a 3-axis accelerometer, a 3-axis gyro sensor, and a 3-axis compass sensor.
  • the 9-axis sensor can detect acceleration, angular velocity, and direction in the three axes of the user terminal 5.
  • GPS acquires information on the current position of the user terminal 5.
  • the biosensor acquires the biometric information of the user.
  • For example, a temperature sensor capable of measuring body temperature, a heart rate sensor capable of measuring heart rate, a perspiration sensor capable of measuring the amount of perspiration, and the like are provided.
  • the type of sensor provided as the sensor unit 18 is not limited, and any sensor may be provided.
  • a temperature sensor, a humidity sensor, or the like capable of measuring the temperature, humidity, etc. of the environment in which the user terminal 5 is used may be provided.
  • the microphone 12, the inward-facing camera 13, and the outward-facing camera 14 can be regarded as a part of the sensor unit 18.
  • the communication unit 19 is a module for executing network communication, short-range wireless communication, infrared communication, etc. with other devices.
  • a wireless LAN module such as WiFi and a communication module such as Bluetooth (registered trademark) are provided. Further, any infrared communication module may be used.
  • the storage unit 20 is a non-volatile storage device, and for example, an HDD (Hard Disk Drive) or the like is used.
  • the storage unit 20 stores a control program for controlling the overall operation of the user terminal 5. Further, the storage unit 20 stores data used for performing various processes, data generated by various processes, and the like. The method of installing the control program or the like on the user terminal 5 is not limited.
  • the controller 21 controls the operation of each block of the user terminal 5.
  • the controller 21 has a hardware configuration necessary for a computer such as a CPU and a memory (RAM, ROM). Various processes are executed when the CPU loads the control program or the like stored in the storage unit 20 into the RAM and executes it. In this embodiment, the controller 21 functions as an information processing device.
  • a device such as a PLD (Programmable Logic Device) such as an FPGA (Field Programmable Gate Array) or another device such as an ASIC (Application Specific Integrated Circuit) may be used.
  • In the present embodiment, the CPU of the controller 21 executes the program according to the present technology, whereby the information acquisition unit 31, the shape estimation unit 32, the action detection unit 33, the lighting estimation unit 34, and the display control unit 35 are realized as functional blocks, and the information processing method according to the present embodiment is executed.
  • In order to realize each functional block, dedicated hardware such as an IC (integrated circuit) may be used as appropriate.
  • the controller 21 functions as an information processing device according to the present technology.
  • the information acquisition unit 31 acquires the sensing result output from the outside.
  • the depth information of the target object 1 and the depth information of the operating object 2 acquired from the inward camera 13 are acquired.
  • various information may be acquired from the speaker 11, the microphone 12, the outward-facing camera 14, the touch panel 15, the operation buttons 16, and the sensor unit 18.
  • an instruction to change the virtual object 4 by the user 3 may be obtained from the microphone 12.
  • the shape estimation unit 32 estimates the shapes of the target object 1 and the operation object 2.
  • the position, 3D shape, and posture of the target object 1 are estimated based on the depth information of the target object 1 acquired from the information acquisition unit 31.
  • the position and posture of the operating object 2 are estimated based on the depth information of the operating object 2 acquired from the information acquisition unit 31.
  • the shape estimation unit 32 estimates the depth information for a region having low brightness obtained from the RGB camera. For example, in a black region such as the black hair of the user 3, the IR (Infra-Red) reflection is weak, so that the depth information may not be obtained accurately. In this case, the shape estimation unit 32 estimates the depth information of the black region based on the depth information of the face.
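  • One possible way to realize this kind of fill-in, sketched under the assumption that the invalid pixels are marked in a mask, is to borrow the median of nearby valid face depth; the window size and the function name are illustrative.

```python
import numpy as np

def fill_black_region_depth(depth_map, black_mask, face_mask, window=15):
    """Fill depth pixels invalidated by weak IR reflection (e.g. black hair)
    with the median of nearby valid face depth (one illustrative strategy)."""
    filled = depth_map.copy()
    h, w = depth_map.shape
    for y, x in zip(*np.nonzero(black_mask)):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        patch = depth_map[y0:y1, x0:x1]
        valid = face_mask[y0:y1, x0:x1] & (patch > 0)
        if valid.any():
            filled[y, x] = np.median(patch[valid])   # borrow nearby face depth
    return filled
```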
  • the shape estimation unit 32 can estimate the shadow state of the target object based on the illumination state of the target object 1 and the shape of the target object 1.
  • the position (region) of the shadow generated on the target object 1 is estimated from the position and brightness of the light source that projects light onto the target object 1 and the shape of the target object 1.
  • the illumination situation is a situation relating to a light source that projects light onto the target object 1.
  • the position of the light source that projects light onto the target object 1 and the brightness of the light source are included.
  • The light source projects light onto the target object 1, producing a region onto which the light is projected (a region with high brightness) and a region onto which the light is not projected (a region with low brightness).
  • Even when the entire face (target object 1) of the user 3 is a high-brightness region or a low-brightness region, an object that projects light may be used as the light source.
  • Any technique (algorithm or the like) for estimating the target object information 6, the manipulated object information 7, and the depth information of the black region may be adopted.
  • an arbitrary machine learning algorithm using DNN (Deep Neural Network) or the like may be used.
  • For example, AI (artificial intelligence) or the like that performs deep learning may be used.
  • a learning unit and an identification unit are constructed to estimate the target object information 6, the manipulated object information 7, and the depth information of the black region.
  • the learning unit performs machine learning based on the input information (learning data) and outputs the learning result.
  • the identification unit identifies (determines, predicts, etc.) the input information based on the input information and the learning result.
  • a neural network or deep learning is used as a learning method in the learning unit.
  • a neural network is a model that imitates a human brain neural circuit, and is composed of three types of layers: an input layer, an intermediate layer (hidden layer), and an output layer.
  • Deep learning is a model that uses a multi-layered neural network, and it is possible to learn complex patterns hidden in a large amount of data by repeating characteristic learning in each layer. Deep learning is used, for example, to identify objects in images and words in sounds.
  • a convolutional neural network (CNN) used for recognizing images and moving images is used.
  • a neurochip / neuromorphic chip incorporating the concept of a neural network can be used.
  • Machine learning problem settings include supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, reverse reinforcement learning, active learning, and transfer learning.
  • supervised learning features are learned based on given labeled learning data (teacher data). This makes it possible to derive labels for unknown data.
  • unsupervised learning a large amount of unlabeled learning data is analyzed to extract features, and clustering is performed based on the extracted features. This makes it possible to analyze trends and predict the future based on a huge amount of unknown data.
  • Semi-supervised learning is a mixture of supervised learning and unsupervised learning: after features are learned by supervised learning, a huge amount of training data is given by unsupervised learning, and learning is repeated while features are calculated automatically.
  • Reinforcement learning also deals with the problem of observing the current state of an agent in an environment and deciding what action to take. Agents learn rewards from the environment by choosing actions and learn how to get the most rewards through a series of actions. In this way, by learning the optimum solution in a certain environment, it is possible to reproduce human judgment and to make a computer acquire judgment that exceeds human judgment. It is also possible to generate virtual sensing data by machine learning. For example, it is possible to predict another sensing data from one sensing data and use it as input information, such as generating position information from the input image information. It is also possible to generate different sensing data from a plurality of sensing data. It is also possible to predict the required information and generate predetermined information from the sensing data.
  • an arbitrary learning algorithm or the like different from machine learning may be used.
  • By using such a learning algorithm, it becomes possible to improve the estimation accuracy of the target object information 6, the operating object information 7, and the depth information of the black region.
  • The application of a learning algorithm may be executed for any process in the present disclosure.
  • the action detection unit 33 detects the operation of the operation object 2 on the target object 1. In the present embodiment, it is detected whether or not the finger touches the face based on the depth information of the face which is the target object 1 and the depth information of the finger which is the operating object 2. Further, for example, it is detected whether the operation object 2 has executed a predetermined operation on the virtual object 4. Further, the action detection unit 33 detects the position where the operation object 2 operates the target object 1. For example, based on the 3D shape and posture of the face, it is detected whether the finger touches the cheek. By supplying the information detected by the action detection unit 33 to the display control unit 35, the superimposed display of the virtual object 4 is controlled.
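  • The touch decision described above can be pictured with the following sketch, which compares the depth of the face at the fingertip pixel with the depth of the fingertip itself; the 1 cm threshold and the function signature are assumptions for illustration only.

```python
TOUCH_THRESHOLD_M = 0.01   # 1 cm; illustrative value, not from the disclosure

def detect_touch(face_depth, fingertip_uv, fingertip_depth):
    """Decide whether the finger (operating object) touches the face (target
    object) and report where, so the display control knows where to draw.

    face_depth: (H, W) depth map covering the face.
    fingertip_uv: (u, v) pixel position of the fingertip.
    fingertip_depth: measured depth of the fingertip at that pixel.
    """
    u, v = fingertip_uv
    touched = abs(float(face_depth[v, u]) - float(fingertip_depth)) <= TOUCH_THRESHOLD_M
    return touched, fingertip_uv
```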
  • the method of detecting the operation of the target object 1 by the operation object 2 is not limited, and any technique (algorithm or the like) may be adopted.
  • the illumination estimation unit 34 estimates the illumination condition around the target object 1.
  • the lighting state related to the illumination of the target object 1 at the time of shooting is estimated based on the 3D shape of the target object 1 and the brightness of the target object 1.
  • the method of estimating the lighting condition is not limited.
  • the light source may be estimated by object recognition or the like.
  • For example, an object around which the brightness attenuates in a predetermined direction (radially or concentrically) may be treated as a light source.
  • the display control unit 35 controls the superimposed display of the virtual object 4.
  • the virtual object 4 is superimposed and displayed along the surface of the target object 1.
  • the superimposed display of the virtual object 4 according to the shape of the face is controlled based on the position where the operation on the face is performed by the finger.
  • Here, "along the surface" means a state in which, when the virtual object 4 is superimposed and displayed on the face of the user 3, the virtual object 4 is projected and deformed onto the surface (3D shape) of the face of the user 3. That is, the superimposed display is controlled so that the virtual object 4 appears to be virtually drawn on the surface of the target object 1. Further, the display control unit 35 controls the brightness of the virtual object 4 based on the shadow situation.
  • the brightness of the virtual object 4 is controlled based on the shadow area (the area with low brightness) actually existing in the target object 1. That is, the brightness of the virtual object 4 is controlled so that the shadow state of the target object 1 is reflected in the virtual object 4. Further, in the present embodiment, the brightness of the virtual object 4 is controlled based on the lighting condition estimated by the lighting estimation unit 34 and the shape of the target object 1. That is, the virtual object 4 is superposed on the target object in a shaded state. Further, the display control unit 35 controls the superimposed display so that the virtual object 4 is virtually arranged along the surface of the target object 1.
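  • As a concrete, simplified picture of superimposing the virtual object with the target object's shadow reflected in it, the sketch below blends an RGBA sticker into the camera frame and darkens it inside the detected shadow region. The full projection and deformation onto the 3D face surface is omitted here; the shadow gain and all names are assumptions.

```python
import numpy as np

def composite_with_shadow(frame, sticker_rgba, top_left, shadow_mask, shadow_gain=0.5):
    """Alpha-blend a virtual object into the frame, lowering its brightness
    where it overlaps the target object's shadow region."""
    y0, x0 = top_left
    h, w = sticker_rgba.shape[:2]
    region = frame[y0:y0 + h, x0:x0 + w].astype(np.float32)
    rgb = sticker_rgba[..., :3].astype(np.float32)
    alpha = sticker_rgba[..., 3:4].astype(np.float32) / 255.0
    # Reflect the real shadow of the face in the virtual object's brightness.
    shade = np.where(shadow_mask[y0:y0 + h, x0:x0 + w, None], shadow_gain, 1.0)
    blended = (1.0 - alpha) * region + alpha * rgb * shade
    frame[y0:y0 + h, x0:x0 + w] = blended.astype(np.uint8)
    return frame
```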
  • In the present embodiment, the display control unit 35 corresponds to a display control unit that controls, based on target object information including the position and shape of a target object and operating object information including the position of an operating object, the superimposed display of a virtual object corresponding to the shape of the target object in response to an operation performed on the target object by the operating object.
  • the illumination estimation unit 34 corresponds to the illumination estimation unit that estimates the illumination status related to the illumination at the time of shooting the target object based on the target object information.
  • the information acquisition unit 31 and the shape estimation unit 32 function as acquisition units for acquiring the target object information and the operation object information.
  • FIG. 3 is a flowchart showing a basic execution example of the control of the superimposed display.
  • the target object information 6 is acquired by the information acquisition unit 31 and the shape estimation unit 32 (step 101).
  • the operation object information 7 is acquired by the information acquisition unit 31 and the shape estimation unit 32 (step 102).
  • the action detection unit 33 detects the operation of the operation object 2 on the target object 1 (step 103).
  • the virtual object 4 is superimposed and displayed by the display control unit 35 (step 104).
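  • The overall flow of steps 101 to 104 can be summarized as a frame loop like the sketch below. All callables are injected, and their concrete implementations (corresponding to the information acquisition / shape estimation, action detection, and display control blocks) are assumed rather than taken from the disclosure.

```python
import itertools

def run_display_control(read_frame, acquire_target_info, acquire_operating_info,
                        detect_operation, superimpose, show, frames=None):
    """One illustrative control loop mirroring FIG. 3 (steps 101-104)."""
    for _ in (range(frames) if frames is not None else itertools.count()):
        rgb, depth = read_frame()                                  # sensing result
        target_info = acquire_target_info(rgb, depth)              # step 101
        operating_info = acquire_operating_info(rgb, depth)        # step 102
        operation = detect_operation(target_info, operating_info)  # step 103
        if operation is not None:
            rgb = superimpose(rgb, target_info, operation)         # step 104
        show(rgb)
```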
  • FIG. 4 is a flowchart showing a specific execution example of the control of the superimposed display.
  • the face 1 of the user 3 is set as the target object.
  • the finger 2 of the user 3 is set as the operation object.
  • FIG. 5 is a schematic diagram showing a superimposed display of the virtual object 4. As shown in FIG. 5, the user 3 is displayed on the touch panel 15 of the user terminal 5.
  • FIG. 5 is a simplified illustration of the state in which the user 3 is taking a selfie, as shown in FIG. 1.
  • The user 3 takes a picture of himself or herself using the inward camera 13 of the user terminal 5. As shown in FIG. 5, the camera-through image (live view) is displayed on the touch panel 15 of the user terminal 5 (step 201).
  • the depth information of the face 1 is acquired based on the sensing result acquired by the inward camera 13 by the information acquisition unit 31 (step 202).
  • the shape estimation unit 32 estimates the position, 3D shape, and posture of the face 1 based on the depth information of the face 1 (step 203).
  • Step 202 and step 203 correspond to the acquisition of the target object information in step 101 shown in FIG.
  • Step 204 corresponds to the acquisition of the operating object information in step 102 shown in FIG.
  • the action detection unit 33 detects that the finger 2 touches the face 1 of the user 3 (step 205).
  • the difference between the depth information of the face 1 and the depth information of the finger 2 is equal to or less than a predetermined threshold value, it is determined that the finger 2 touches the face 1.
  • Step 205 corresponds to the detection of the operation on the target object by the operation object in step 103 shown in FIG.
  • the display control unit 35 displays the virtual object 4 at the place where the finger 2 touches the face of the user 3 (step 206).
  • the virtual object 4 is virtually superimposed and displayed to confirm how the virtual object 4 is drawn with respect to the face 1 of the user 3. That is, an image diagram for confirming the display position of the virtual object 4, the size and angle of the virtual object 4, and the like is displayed on the face 1 of the user 3.
  • It is possible to perform various controls such as enlargement, reduction, rotation, and movement of the virtual object 4 displayed in step 206.
  • the lighting estimation unit 34 estimates the lighting condition based on the 3D shape (surface shape) of the face 1 and the brightness of the face 1 (step 207). As shown in FIG. 5, in the present embodiment, the low-luminance region 40 of the face 1 is acquired as a shadow region by the RGB camera. The illumination estimation unit 34 estimates the position of the light source 41 based on the position of the region 40 generated in the shape of the face 1. For example, when the region 40 is generated in the lower left of the face, it is estimated that the light source 41 is on the opposite side of the region 40 with respect to the face 1.
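  • A simple heuristic consistent with this description is to place the light source on the side opposite the shadow region relative to the face, as in the following sketch (2D image-plane direction only; the function and its inputs are assumptions).

```python
import numpy as np

def estimate_light_direction(shadow_mask, face_mask):
    """Return a unit 2D direction pointing from the shadow region towards the
    assumed light source side (opposite the shadow, relative to the face)."""
    shadow_yx = np.argwhere(shadow_mask)
    if len(shadow_yx) == 0:
        return None                          # no shadow detected -> unknown
    face_center = np.argwhere(face_mask).mean(axis=0)
    shadow_center = shadow_yx.mean(axis=0)
    direction = face_center - shadow_center  # points away from the shadow
    return direction / np.linalg.norm(direction)
```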
  • the display control unit 35 adds a shadow along the shape of the face 1 to the virtual object 4 based on the lighting condition and displays it (step 208). As shown in FIG. 5, a shadow is added to the virtual object 4 in the area 40. In the present embodiment, a virtual shadow corresponding to the brightness of the shadow of the face 1 is given to the virtual object 4. It is possible to perform various controls such as enlargement, reduction, rotation, and movement of the virtual object 4 displayed in step 208. Further, when various controls are executed, the superimposed display is controlled for the virtual object 4. For example, when the virtual object 4 is moved to the mouth 42 of the user 3, the virtual object 4 is displayed along the shape of the mouth 42.
  • a shadow may be expressed by controlling the brightness of the virtual object 4 to be low. Further, the shadow may be expressed by superimposing a shadow layer corresponding to the brightness of the face 1 on the virtual object 4. Steps 206 to 208 correspond to step 104 shown in FIG.
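  • The "shadow layer" alternative mentioned above could look like the following sketch, which derives a multiplicative layer from the face's own luminance under the virtual object and applies it; the normalization is an illustrative choice.

```python
import numpy as np

def apply_shadow_layer(sticker_rgb, face_luma_patch):
    """Darken the virtual object using the brightness of the face area it
    covers, instead of directly lowering its brightness by a fixed amount."""
    peak = max(float(face_luma_patch.max()), 1.0)
    layer = face_luma_patch.astype(np.float32) / peak     # 0..1 shadow layer
    shaded = sticker_rgb.astype(np.float32) * layer[..., None]
    return shaded.astype(np.uint8)
```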
  • As described above, in the information processing device 10 according to the present embodiment, the superimposed display of the virtual object is controlled according to the shape of the target object, in response to the operation performed on the target object by the operating object, based on the target object information including the position and shape of the target object and the operating object information including the position of the operating object. This makes it possible to provide a high-quality viewing experience.
  • depth information of the face and fingers is acquired using the TOF sensor.
  • the operation performed by the finger on the face is detected based on the depth information of the face and the finger, and the superimposed display of the virtual object on the face is controlled. This makes it possible to provide a high-quality viewing experience. In addition, three-dimensional processing and adjustment of virtual objects becomes possible.
  • In the above, the virtual object 4 is superimposed and displayed when the finger, which is the operating object, comes into contact with the face, which is the target object. The present technology is not limited to this, and the control of the superimposed display may be switched according to the contact position of the operating object.
  • FIG. 6 is a flowchart showing an execution example of switching the processing of the superimposed display.
  • FIG. 7 is a schematic diagram showing another example of the superimposed display of virtual objects.
  • the illustration of the user terminal and the left hand holding the user terminal is omitted.
  • the illustration of the right hand 51 of the user 50 is simplified.
  • In this example, the target object is set as the face 52, the operating object is set as the right hand 51, and the virtual object is set as the star 53. Since steps 201 to 208 in FIG. 6 are the same as those in FIG. 4, their description will be omitted or simplified.
  • the user 50 is wearing a black hat 54.
  • the depth information of the area of the black hat 54 may not be accurately obtained.
  • the region of the hat 54 (region with low brightness) is estimated based on the brightness (brightness information distribution) obtained from the RGB camera (step 301).
  • the shape estimation unit 32 estimates the depth information of the region of the hat 54 (step 302).
  • the depth information of the region of the hat 54 is estimated based on the depth information of the face 52 of the user 50 who wears the hat 54.
  • For example, the depth information of the area of the hat 54 may be estimated from the depth information of the face 52 in consideration of the size of the hat 54.
  • the display control unit 35 switches the control of the virtual object according to the position where the right hand 51 is operated (step 303).
  • the action detection unit 33 detects that the right hand 51 touches the face (step 205)
  • the processes of steps 206 to 208 are executed as shown in FIGS. 4 and 5.
  • the action detection unit 33 detects that the right hand 51 touches the area of the hat 54 (step 304).
  • When the difference between the depth information in the area of the hat 54 and the depth information of the right hand 51 is equal to or less than a predetermined threshold value, it is determined that the right hand 51 touches the area of the hat 54.
  • the display control unit 35 controls the superimposed display of the virtual object in the area of the hat 54 (step 305).
  • the star 53 is displayed at the position touched by the right hand 51.
  • the size of the star 53 is controlled based on the depth information of the portion touched by the right hand 51. That is, the star 53 is superimposed and displayed along the surface of the face 52 (hat 54).
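  • To keep the star looking attached to the surface, its on-screen size can be scaled with the depth at the touched point, roughly as below; all constants are illustrative assumptions.

```python
def sticker_size_from_depth(touch_depth_m, reference_depth_m=0.4,
                            base_size_px=120, min_px=24, max_px=240):
    """Scale the star's pixel size inversely with the depth of the touched
    point: closer surfaces get a larger sticker, farther ones a smaller one."""
    size = base_size_px * (reference_depth_m / max(touch_depth_m, 1e-3))
    return int(min(max(size, min_px), max_px))
```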
  • the action detection unit 33 detects an operation in which the right hand 51 pinches the star 53.
  • the display control unit 35 controls the superimposed display of the corresponding star 53. For example, when the right hand 51 moves while pinching the star 53, the star 53 may be moved so as to follow the movement of the right hand 51. In addition to this, the user 50 can execute arbitrary control on the virtual object.
  • For example, it is possible to make the star 53 emit light, change the color of the star 53, control the size of the star 53, and the like.
  • the types of virtual objects that are superimposed and displayed are not limited. For example, a predetermined character or pattern may be superimposed and displayed around the hat 54. Further, these virtual objects may be shaded based on the lighting conditions. Further, a new shadow may be added to the virtual object superimposed on the face of the user 50 by using the superimposed virtual object as a light source.
  • In the above, the superimposed display of the crocodile and the stars as virtual objects was controlled. The present technology is not limited to this, and the superimposed display may be controlled using, as the virtual object, a part of the body such as the user's hair or an object such as a hat that the user can wear.
  • the size and posture of the virtual object may be controlled based on the size and posture of the user's head (target object information).
  • a plurality of the same virtual objects such as a star 53 are displayed.
  • any number of virtual objects may be superimposed and displayed.
  • makeup such as lipstick and eyeliner may be superimposed and displayed on the user's face as a virtual object.
  • a specific makeup combination may be set as one virtual object.
  • the virtual object is given a virtual shadow according to the shape and shadow of the target object.
  • the brightness of the virtual object may be controlled to be high according to the brightness of the region.
  • the virtual object is shaded based on the lighting conditions around the target object.
  • the brightness of the virtual object may be controlled based on the brightness in each area of the face without being limited to this.
  • the superimposed display of the virtual object is controlled by the user taking a picture of the user's face (target object) with the inward camera 13.
  • The present technology is not limited to this, and another user may be photographed by the outward-facing camera 14 and the superimposed display of the virtual object may be controlled.
  • For example, while checking another user captured by the outward-facing camera 14 on the touch panel 15, the user can touch the face of the other user with a finger to superimpose and display the virtual object on the other user.
  • a smartphone or the like was used as the user terminal 5.
  • The present technology is not limited to this, and a transmissive HMD (Head Mounted Display), that is, AR glasses, may be used as the user terminal 5.
  • FIG. 8 is a perspective view showing the appearance of the HMD according to another embodiment.
  • the HMD 60 is a glasses-type device equipped with a transmissive display, and is worn on the user's head for use.
  • the HMD 60 includes a frame 61, left and right lenses 62a and 62b, a left-eye display 61a and a right-eye display 61b, a left-eye camera 63a and a right-eye camera 63b.
  • a controller, a sensor unit, and the like substantially the same as those shown in FIG. 2 are configured inside the frame 61 or at a predetermined position.
  • the left and right lenses 62a and 62b are arranged in front of the user's left eye and right eye, respectively.
  • the left-eye and right-eye displays 61a and 61b are provided on the left and right lenses 62a and 62b, respectively, so as to cover the user's field of view.
  • the left-eye and right-eye displays 61a and 61b are transmissive displays, and images for the left eye and the right eye are displayed, respectively.
  • the user wearing the HMD 60 can visually recognize the actual scenery and at the same time visually recognize the image displayed on each display. This allows the user to experience augmented reality (AR) and the like.
  • a dimming element (not shown) or the like may be provided on the outside of the left-eye and right-eye displays 61a and 61b (the side opposite to the user's eye).
  • the dimming element is an element capable of adjusting the amount of light transmitted through the element.
  • By providing the dimming element for example, it is possible to regulate the actual scenery seen by the user through each display and emphasize the image displayed on each display so that the user can see it. This allows the user to experience virtual reality (VR) and the like.
  • the left-eye and right-eye displays 61a and 61b for example, a transmissive organic EL display, an LCD (Liquid Crystal Display, liquid crystal display element) display, or the like is used.
  • the dimming element for example, a dimming glass, a dimming sheet, a liquid crystal shutter, or the like whose transmittance can be electrically controlled is used.
  • the left and right lenses 62a and 62b and the left-eye and right-eye displays 61a and 61b realize a virtual object display mechanism.
  • the left eye and right eye cameras 63a and 63b are provided at arbitrary positions where the user's left eye and right eye can be imaged.
  • the superposed position of the virtual object may be controlled based on the left eye and right eye images taken by the left eye and right eye cameras 63a and 63b.
  • each of the left and right lenses 62a and 62b is movable with respect to the frame 61, and is moved by a drive mechanism. Further, the frame 61 itself is also configured to be movable, and the holding force can be changed. By appropriately changing the positions and inclinations of the left and right lenses 62a and 62b, it is possible to realize a high-quality viewing experience as in the above embodiment.
  • the information processing device 10 is realized by the user terminal 5 itself. Not limited to this, the information processing device 10 and the user terminal 5 may be communicably connected via wire or wireless. Further, the connection form between each device is not limited, and for example, wireless LAN communication such as WiFi and short-range wireless communication such as Bluetooth (registered trademark) may be used.
  • FIG. 9 is a block diagram showing a hardware configuration example of the information processing device 10.
  • the information processing device 10 includes a CPU 71, a ROM 72, a RAM 73, an input / output interface 75, and a bus 74 that connects them to each other.
  • a display unit 76, an input unit 77, a storage unit 78, a communication unit 79, a drive unit 80, and the like are connected to the input / output interface 75.
  • the display unit 76 is a display device using, for example, a liquid crystal or an EL.
  • the input unit 77 is, for example, a keyboard, a pointing device, a touch panel, or other operation device. When the input unit 77 includes a touch panel, the touch panel can be integrated with the display unit 76.
  • the storage unit 78 is a non-volatile storage device, for example, an HDD, a flash memory, or other solid-state memory.
  • the drive unit 80 is a device capable of driving a removable recording medium 81 such as an optical recording medium or a magnetic recording tape.
  • the communication unit 79 is a modem, router, or other communication device for communicating with another device that can be connected to a LAN, WAN, or the like.
  • the communication unit 79 may communicate using either wire or wireless.
  • the communication unit 79 is often used separately from the information processing device 10. In the present embodiment, the communication unit 79 enables communication with other devices via the network.
  • Information processing by the information processing device 10 having the hardware configuration as described above is realized by the cooperation between the software stored in the storage unit 78 or the ROM 72 or the like and the hardware resources of the information processing device 10.
  • the information processing method according to the present technology is realized by loading the program constituting the software stored in the ROM 72 or the like into the RAM 73 and executing the program.
  • The program is installed in the information processing device 10 via, for example, the recording medium 81.
  • the program may be installed in the information processing apparatus 10 via a global network or the like.
  • any non-transient storage medium that can be read by a computer may be used.
  • The information processing device according to the present technology may be constructed, and the information processing method and program according to the present technology may be executed, by linking the computer mounted on the communication terminal with another computer capable of communicating via a network or the like.
  • the information processing apparatus, information processing method, and program according to the present technology can be executed not only in a computer system composed of a single computer but also in a computer system in which a plurality of computers operate in conjunction with each other.
  • the system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether or not all the components are in the same housing. Therefore, a plurality of devices housed in separate housings and connected via a network, and one device in which a plurality of modules are housed in one housing are both systems.
  • The execution of the information processing device, information processing method, and program according to the present technology by a computer system includes both the case where, for example, the acquisition of target object information, the estimation of the lighting situation, and the control of the superimposed display are executed by a single computer, and the case where each process is executed by a different computer. Further, the execution of each process by a predetermined computer includes causing another computer to execute a part or all of the processes and acquiring the results.
  • That is, the information processing device, information processing method, and program according to the present technology can also be applied to a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
  • the effects described in this disclosure are merely examples and are not limited, and other effects may be obtained.
  • the description of the plurality of effects described above does not necessarily mean that those effects are exerted at the same time. It means that at least one of the above-mentioned effects can be obtained depending on the conditions and the like, and of course, there is a possibility that an effect not described in the present disclosure may be exhibited.
  • The present technology can also adopt the following configurations.
  • (1) An information processing device including a display control unit that controls, on the basis of target object information including the position and shape of a target object and operating object information including the position of an operating object, superimposed display of a virtual object according to the shape of the target object in response to an operation performed on the target object by the operating object.
  • (2) The information processing device in which the display control unit superimposes and displays the virtual object along the surface of the target object.
  • (3) The information processing device in which the display control unit controls the superimposed display so that the virtual object is virtually drawn on the surface of the target object.
  • (4) The information processing device according to (2) or (3), in which the display control unit controls the superimposed display so that the virtual object is virtually arranged along the surface of the target object.
  • (5) The information processing device according to any one of (1) to (4), in which each of the position of the target object and the position of the operating object includes depth information detected by a depth sensor.
  • (6) The information processing device in which the depth sensor includes at least a TOF (Time of Flight) camera.
  • (7) The information processing device in which the target object information includes the shadow state of the target object, and the display control unit controls the brightness of the virtual object on the basis of the shadow state.
  • (8) The information processing device according to (7), in which the display control unit controls the brightness of the virtual object so that the shadow state of the target object is reflected in the virtual object.
  • (9) The information processing device according to (7) or (8), further including an acquisition unit that acquires the target object information and the operating object information, in which the acquisition unit is capable of estimating the shadow state of the target object on the basis of the lighting state of the target object and the shape of the target object.
  • (10) The information processing device in which the lighting state includes the position of a light source that projects light onto the target object and the brightness of the light source.
  • (11) The information processing device in which the display control unit superimposes and displays the virtual object on a captured image including the target object.
  • (12) The information processing device according to any one of (1) to (11), in which the target object is at least one of a face or a head.
  • (13) The information processing device according to any one of (1) to (12), in which the operating object is a finger.
  • (14) The information processing device according to any one of (1) to (13), in which the target object information includes brightness.
  • (15) The information processing device in which the display control unit displays the virtual object on the target object when the operating object comes into contact with the target object.
  • (16) The information processing device according to any one of (1) to (15), in which the display control unit controls the superimposed display of the virtual object according to the shape of the target object on the basis of the position at which the operating object operates on the target object.
  • (17) A program that causes a computer system to execute a step of controlling, on the basis of target object information including the position and shape of a target object and operating object information including the position of an operating object, superimposed display of a virtual object according to the shape of the target object in response to an operation performed on the target object by the operating object.
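A minimal sketch, in Python with NumPy, of one way the behaviour in configurations (1), (5), (6), (15) and (16) above could be realized: the positions of the target object and the operating object are taken from depth data (for example, a TOF camera), contact is inferred from their depth difference, and the virtual object is anchored tangent to the target surface at the operated point. The function names, the camera-intrinsics layout and the 1 cm contact threshold are illustrative assumptions rather than anything specified in the disclosure.

```python
import numpy as np

CONTACT_THRESHOLD_M = 0.01  # assumption: fingertip within ~1 cm of the surface counts as a touch


def backproject(u, v, z, fx, fy, cx, cy):
    """Pixel (u, v) with depth z (metres) -> 3-D point in camera coordinates."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])


def surface_normal(surface_depth, u, v, fx, fy, cx, cy):
    """Rough normal of the target object's surface at pixel (u, v)."""
    p = backproject(u, v, surface_depth[v, u], fx, fy, cx, cy)
    px = backproject(u + 1, v, surface_depth[v, u + 1], fx, fy, cx, cy)
    py = backproject(u, v + 1, surface_depth[v + 1, u], fx, fy, cx, cy)
    n = np.cross(px - p, py - p)
    return n / (np.linalg.norm(n) + 1e-9)


def place_virtual_object(surface_depth, fingertip, intrinsics, overlay):
    """Anchor `overlay` to the target surface at the point operated by the fingertip.

    surface_depth : HxW depth of the target object's surface (shape estimate)
    fingertip     : (u, v, z) pixel position and depth of the operating object
    """
    fx, fy, cx, cy = intrinsics
    u, v, z = fingertip
    if abs(z - surface_depth[v, u]) > CONTACT_THRESHOLD_M:
        return None  # operating object is not touching the target object

    anchor = backproject(u, v, surface_depth[v, u], fx, fy, cx, cy)
    normal = surface_normal(surface_depth, u, v, fx, fy, cx, cy)
    # A renderer (not shown) would draw `overlay` at `anchor`, with its plane
    # tangent to the surface, so the virtual object follows the target's shape
    # at the operated position.
    return {"overlay": overlay, "position": anchor, "normal": normal}
```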
  • … Target object; 2 … Operating object; 5 … User terminal; 6 … Target object information; 7 … Operating object information; 10 … Information processing device; 31 … Information acquisition unit; 32 … Shape estimation unit; 34 … Lighting estimation unit; 35 … Display control unit; 100 … Display control system
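As a companion sketch for configurations (7) to (10), the following shows one way an estimated light-source position and brightness, as attributed to the lighting estimation unit (34) and display control unit (35) in the reference list above, might be used to scale the virtual object's brightness so that the shadow state of the target surface is reflected in the overlay. A single point light and a Lambertian surface are assumed; the names and the ambient floor are illustrative.

```python
import numpy as np


def shading_factor(point, normal, light_pos, light_intensity, ambient=0.2):
    """Approximate how brightly the target surface is lit at `point`.

    Assumes one point light with an estimated position and intensity and a
    Lambertian surface; returns a factor between `ambient` and 1.
    """
    to_light = light_pos - point
    to_light = to_light / (np.linalg.norm(to_light) + 1e-9)
    lambert = max(0.0, float(np.dot(normal, to_light)))
    return float(np.clip(ambient + light_intensity * lambert, ambient, 1.0))


def shade_overlay(overlay_rgb, point, normal, light_pos, light_intensity):
    """Darken the virtual object where the target surface is in shadow."""
    k = shading_factor(point, normal, light_pos, light_intensity)
    return np.clip(overlay_rgb.astype(np.float32) * k, 0, 255).astype(np.uint8)
```

Scaling the overlay by the same shading factor as the underlying surface is what keeps a virtual drawing from appearing to glow inside a real shadow.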

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Position Input By Displaying (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

According to one embodiment, the present invention relates to an information processing device provided with a display control unit. On the basis of target object information that includes the position and shape of a target object and operating object information that includes the position of an operating object, the display control unit controls the superimposed display of a virtual object according to the shape of the target object in response to an operation performed on the target object by the operating object. With this configuration, it is possible to provide a high-quality viewing experience.
PCT/JP2020/041895 2019-11-29 2020-11-10 Dispositif de traitement d'informations, procédé de traitement d'informations et programme WO2021106552A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019216826A JP2021086511A (ja) 2019-11-29 2019-11-29 情報処理装置、情報処理方法、及びプログラム
JP2019-216826 2019-11-29

Publications (1)

Publication Number Publication Date
WO2021106552A1 (fr)

Family

ID=76088334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/041895 WO2021106552A1 (fr) 2019-11-29 2020-11-10 Dispositif de traitement d'informations, procédé de traitement d'informations et programme

Country Status (2)

Country Link
JP (1) JP2021086511A (fr)
WO (1) WO2021106552A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013520729A (ja) * 2010-02-22 2013-06-06 ナイキ インターナショナル リミテッド 拡張現実設計システム
JP2013196388A (ja) * 2012-03-19 2013-09-30 Bs-Tbs Inc 画像処理装置、画像処理方法、および画像処理プログラム
JP2016218547A (ja) * 2015-05-15 2016-12-22 セイコーエプソン株式会社 頭部装着型表示装置、頭部装着型表示装置を制御する方法、コンピュータープログラム
WO2019093156A1 (fr) * 2017-11-10 2019-05-16 ソニーセミコンダクタソリューションズ株式会社 Dispositif de traitement d'affichage, procédé de traitement d'affichage et programme
WO2019111465A1 (fr) * 2017-12-04 2019-06-13 ソニー株式会社 Dispositif de traitement d'informations, procédé de traitement d'informations, et support d'enregistrement
WO2019150430A1 (fr) * 2018-01-30 2019-08-08 株式会社ソニー・インタラクティブエンタテインメント Dispositif de traitement d'informations


Also Published As

Publication number Publication date
JP2021086511A (ja) 2021-06-03

Similar Documents

Publication Publication Date Title
EP3807710B1 (fr) Affichage à réalité augmentée avec fonctionnalité de modulation d'image
US11487366B2 (en) Multi-modal hand location and orientation for avatar movement
US20240005808A1 (en) Individual viewing in a shared space
US11755122B2 (en) Hand gesture-based emojis
WO2019177870A1 (fr) Animation de mouvements faciaux d'avatar virtuel
US20230034657A1 (en) Modes of user interaction
AU2021290132C1 (en) Presenting avatars in three-dimensional environments
EP3759542A1 (fr) Alignement de balayage de tête à l'aide d'un enregistrement oculaire
US11579693B2 (en) Systems, methods, and graphical user interfaces for updating display of a device relative to a user's body
WO2021106552A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
WO2021131950A1 (fr) Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JP7502292B2 (ja) アバタ移動のためのマルチモードの手の場所および配向
US20230171484A1 (en) Devices, methods, and graphical user interfaces for generating and displaying a representation of a user
Bhaskaran et al. Immersive User Experiences: Trends and Challenges of Using XR Technologies
KR20240091224A (ko) 사용자의 표현을 생성 및 디스플레이하기 위한 디바이스들, 방법들, 및 그래픽 사용자 인터페이스들
CN116868152A (zh) 用于在三维环境中呈现化身的界面
WO2024054433A2 (fr) Dispositifs, procédés et interfaces utilisateur graphiques pour commander des avatars dans des environnements tridimensionnels
CN117642775A (zh) 用于确定对象的保持的信息处理装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20892882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20892882

Country of ref document: EP

Kind code of ref document: A1