WO2022021631A1 - Interaction control method, terminal device, and storage medium - Google Patents
Interaction control method, terminal device, and storage medium
- Publication number
- WO2022021631A1 (PCT/CN2020/123470)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- limb
- image data
- user
- interactive control
- terminal device
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Definitions
- the present application relates to the field of virtual reality technology, and in particular, to an interactive control method, a terminal device, and a computer-readable storage medium.
- An HMD (Head-Mounted Display)
- the main purpose of the present application is to provide an interactive control method, a terminal device, and a computer-readable storage medium, aiming to improve the convenience of the HMD human-computer interaction solution.
- the present application provides an interactive control method, the interactive control method includes the following steps:
- a three-dimensional image of the user's limb is rendered in a display screen, so as to perform interactive control based on the display screen containing the three-dimensional image.
- before the step of rendering a three-dimensional image of the user's limb in a display screen according to the three-dimensional model, so as to perform interactive control based on the display screen containing the three-dimensional image, the method further includes:
- the three-dimensional model is determined based on the limb feature information, the limb image data, and the limb depth data.
- the method further includes:
- the limb image data is stored in association with the three-dimensional model as historical limb image data.
- before the step of acquiring the limb feature information of the user and the limb depth data of the user, the method further includes:
- the step of acquiring the limb feature information of the user and the limb depth data of the user includes:
- the limb depth data of the user is collected by the limb depth data collection device.
- the step of acquiring the limb feature information according to the identity information includes:
- the step of obtaining user identity information includes:
- the identity information is determined according to the bone voiceprint feature.
- before the step of acquiring the bone conduction signal collected by the bone conduction sensor in the terminal device and acquiring the bone voiceprint feature corresponding to the bone conduction signal, the method further includes:
- the audio playback device in the terminal device is controlled to play the preset audio file.
- the step of determining the three-dimensional model according to the historical limb image data includes:
- the three-dimensional model associated with the historical limb image data having the greatest similarity to the limb image data is acquired.
- the preset condition includes that the similarity between the limb image data and the historical limb image data is greater than a preset similarity.
- the present application also provides a terminal device, the terminal device including a memory, a processor, and an interactive control program stored in the memory and executable on the processor; when the interactive control program is executed by the processor, the steps of the above-mentioned interactive control method are implemented.
- the terminal device is a head-mounted display;
- the head-mounted display includes an audio playback device and a bone conduction sensor, wherein the audio playback device and the bone conduction sensor are arranged on the head-mounted display in different locations.
- the present application also provides a computer-readable storage medium, where an interactive control program is stored on the computer-readable storage medium, and when the interactive control program is executed by a processor, the steps of the above-mentioned interactive control method are realized.
- the user's limb image data is first acquired, and then the similarity between the limb image data and historical limb image data is determined, where the historical limb image data is limb image data saved by the terminal device. When the similarity satisfies a preset condition, a three-dimensional model corresponding to the user's limb is determined according to the historical limb image data, and according to the three-dimensional model, a three-dimensional image of the user's limb is rendered in the display screen, so as to perform interactive control based on the display screen containing the three-dimensional image.
- the dependence of the terminal device on the peripheral device is reduced.
- it also avoids the defect of poor convenience that exists when the user performs interactive control of the terminal device through a peripheral device.
- the effects of reducing the dependence of the terminal device on the peripherals and improving the convenience of the interactive control scheme of the terminal device are achieved.
- FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution according to an embodiment of the present application
- FIG. 2 is a schematic flowchart of an embodiment of an interactive control method of the present application
- FIG. 3 is a schematic flowchart of another embodiment of the interactive control method of the present application.
- FIG. 4 is a schematic flowchart of an implementation manner in another embodiment of the interactive control method of the present application.
- FIG. 5 is a schematic flowchart of still another embodiment of the interactive control method of the present application.
- an embodiment of the present application proposes an interactive control method, the main solution of which is:
- a three-dimensional image of the user's limb is rendered in a display screen, so as to perform interactive control based on the display screen containing the three-dimensional image.
- the dependence of the terminal device on the peripheral device is reduced.
- the defect of poor convenience that exists when the user performs interactive control of the terminal device through a peripheral device is also avoided.
- the effects of reducing the dependence of the terminal device on the peripherals and improving the convenience of the interactive control scheme of the terminal device are achieved.
- FIG. 1 is a schematic structural diagram of a terminal of a hardware operating environment involved in the solution of the embodiment of the present application.
- the terminal in this embodiment of the present application may be a terminal device such as an HMD.
- the terminal may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
- the communication bus 1002 is used to realize the connection and communication between these components.
- the user interface 1003 may include a display screen (Display) and input units such as keys; optionally, the user interface 1003 may also include a wireless interface.
- the network interface 1004 may optionally include a wireless interface (eg, a WI-FI interface).
- the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
- the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; it may include more or fewer components than shown, combine some components, or have a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module and an interactive control program.
- the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server; the processor 1001 can be used to call the interactive control program stored in the memory 1005, and perform the following operations:
- a three-dimensional image of the user's limb is rendered in a display screen, so as to perform interactive control based on the display screen containing the three-dimensional image.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the three-dimensional model is determined based on the limb feature information, the limb image data, and the limb depth data.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the limb image data is stored in association with the three-dimensional model as historical limb image data.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the step of acquiring the limb feature information of the user and the limb depth data of the user includes:
- the limb depth data of the user is collected by the limb depth data collection device.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the identity information is determined according to the bone voiceprint feature.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the audio playback device in the terminal device is controlled to play the preset audio file.
- processor 1001 can call the interactive control program stored in the memory 1005, and also perform the following operations:
- the three-dimensional model associated with the historical limb image data with the largest similarity between the limb image data is acquired.
- the interactive control method includes the following steps:
- Step S10: acquiring the limb image data of the user;
- Step S20: determining the similarity between the limb image data and historical limb image data, wherein the historical limb image data is limb image data saved by the terminal device;
- Step S30: when the similarity meets a preset condition, determining a three-dimensional model corresponding to the user's limb according to the historical limb image data;
- Step S40: rendering a three-dimensional image of the user's limb in a display screen according to the three-dimensional model, so as to perform interactive control based on the display screen containing the three-dimensional image.
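- As an informal illustration of steps S10–S40 (together with the S50–S70 fallback described in the later embodiments), the Python sketch below shows the reuse-or-rebuild control flow. It is an assumption for illustration only: `camera`, `tof`, and `render` stand in for the acquisition and display devices, `similarity` and `build_3d_model` are sketched further below in this description, and the threshold value is invented.

```python
from dataclasses import dataclass

@dataclass
class HistoryEntry:
    image: "np.ndarray"  # a saved (historical) limb image frame
    model: object        # the 3D model stored in association with it

SIMILARITY_THRESHOLD = 0.90  # the "preset similarity" (assumed value)

def interactive_control_step(history, camera, tof, user_features, render):
    """One pass of the S10-S40 flow; the helper devices are hypothetical stubs."""
    frame = camera.capture()                                 # S10: limb image data
    # S20: similarity against every frame saved by the terminal device
    best = max(history, key=lambda h: similarity(frame, h.image), default=None)
    if best is not None and similarity(frame, best.image) > SIMILARITY_THRESHOLD:
        model = best.model                                   # S30: reuse the cached model
    else:
        depth = tof.capture()                                # S50: limb depth data (TOF)
        model = build_3d_model(user_features, frame, depth)  # S60: remodel
        history.append(HistoryEntry(frame, model))           # S70: save the association
    render(model)                                            # S40: render for interaction
```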
- the user can also interact with the terminal device through devices such as a remote control or a handle, so as to modify the display content of the terminal device according to the interactive control instructions.
- since the user wearing the HMD cannot see the area outside the display area, when a user interacts through a remote control or a handle it often happens that the wrong button is pressed, that the wireless remote control is not aimed at the controlled device so that interaction becomes impossible, or that the wired handle's connector comes loose so that control is lost.
- the remote control/handle control method also makes the existing interactive control scheme highly dependent on external devices.
- the connection and debugging of external equipment and HMD equipment requires certain professional knowledge, and the debugging steps are cumbersome.
- the terminal device is provided with a collection device, and the collection device includes an image collection device for collecting image data of a user's limbs and a limb depth data collection device for collecting limb depth data.
- the image acquisition device may be a camera, which is used to capture a video or picture including the user's limb;
- the limb depth data acquisition device may be a TOF (Time of Flight) camera, which is used to acquire the depth data of the limb relative to the terminal device.
- the terminal device first obtains the image data of the user's limb through the acquisition device. Since interacting through different gestures is a more convenient control method for the terminal device, the limb generally refers to the user's hand.
- the Hamming distance between the current image data and the historical limb image data is calculated, and then the similarity between the limb image data and the historical limb image data is determined according to the Hamming distance.
- a hash sequence may be calculated for the currently collected limb image data and for the historical limb image data, and the Hamming distance may be calculated from the hash sequences, so as to determine whether remodeling is required according to the Hamming distance. When the state of the user's limb changes only slightly, there is no need to perform 3D modeling again, and the modeling information of the historical limb image data can be used directly.
- the similarity is calculated using the following formula:

  $$D_h^M = \sum_{i=1}^{M}\left|H_{temple}(i) - H_{object}(i)\right|$$

  where $H_{temple}(i)$ represents the $i$-th bit of the hash sequence of the historical limb image data frame, $H_{object}(i)$ represents the $i$-th bit of the hash sequence of the window to be matched corresponding to the current limb image data frame, $M$ is the length of the hash sequence, and $D_h^M$ is the similarity factor (the Hamming distance: the smaller its value, the higher the similarity).
- the preset condition may be set such that the similarity between the limb image data and the historical limb image data is greater than the preset similarity.
- the preset condition may also be set as: there exists historical limb image data whose similarity with the current image is greater than a preset threshold.
- when the similarity between a historical limb image data frame and the current limb image data is the largest (that is, the Hamming distance is the smallest), the 3D model associated with that historical limb image data is used as the 3D model corresponding to the current limb image data, and the limb is rendered in the display screen according to the three-dimensional model.
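- The hash comparison above can be made concrete with a short sketch. The patent does not state which hash it uses, so the average hash (aHash) below is an assumed, illustrative choice; `similarity` normalizes the Hamming distance $D_h^M$ into $[0, 1]$ so that it can be tested against a preset similarity.

```python
import numpy as np

def average_hash(gray, hash_size=8):
    """Downsample a grayscale frame into hash_size x hash_size block means and
    threshold at the mean, yielding an M = hash_size^2 bit hash sequence."""
    h, w = gray.shape
    blocks = gray[:h - h % hash_size, :w - w % hash_size].reshape(
        hash_size, h // hash_size, hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming_distance(h_temple, h_object):
    """D_h^M = sum_i |H_temple(i) - H_object(i)|: the number of differing bits."""
    return int(np.sum(h_temple != h_object))

def similarity(frame_a, frame_b, hash_size=8):
    """Map the Hamming distance to [0, 1]; 1.0 means identical hash sequences."""
    d = hamming_distance(average_hash(frame_a, hash_size),
                         average_hash(frame_b, hash_size))
    return 1.0 - d / (hash_size * hash_size)
```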
- the current display picture of the terminal device is further rendered according to the three-dimensional model, so that the user's limb is rendered into the display picture, generating a display picture containing the user's limb.
- the three-dimensional model of the user's limb may be superimposed at a predetermined position in the display screen, so that the user can select, through the limb, among the preset options in the screen.
- in the display screen containing the user's limb, the interactive control instruction currently issued by the user may be determined based on the posture of the limb (when the limb is a hand, the posture of the limb may be a gesture) and/or the position of the limb relative to other displayed content in the display screen, so as to perform interactive control. For example, in a selection interface, the option selected by the user is determined by the relative position of the limb in the screen, as in the hit-test sketch below.
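- For the selection-interface example, determining the chosen option from the limb's relative position amounts to a simple hit test. A minimal sketch, with hypothetical option rectangles:

```python
def select_option(limb_screen_pos, option_rects):
    """Return the option whose rectangle contains the rendered limb's screen
    position, or None if the limb is over no option (hypothetical layout)."""
    x, y = limb_screen_pos
    for name, (left, top, right, bottom) in option_rects.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

# e.g. select_option((420, 310), {"confirm": (380, 280, 460, 340)}) -> "confirm"
```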
- in this embodiment, the limb image data of the user is obtained first, and then the similarity between the limb image data and historical limb image data is determined, wherein the historical limb image data is limb image data saved by the terminal device. When the similarity satisfies a preset condition, a three-dimensional model corresponding to the user's limb is determined according to the historical limb image data, and a three-dimensional image of the user's limb is rendered on the display screen according to the three-dimensional model, so as to perform interactive control based on the display screen containing the three-dimensional image. Since the user can realize interactive control of the terminal device through his own limbs, the dependence of the terminal device on peripheral devices is reduced.
- the method further includes:
- Step S50: when the similarity does not meet the preset condition, acquiring the limb feature information of the user and the limb depth data of the user;
- Step S60: determining the three-dimensional model based on the limb feature information, the limb image data, and the limb depth data.
- the terminal device first acquires the limb feature information of the user, and acquires the limb depth data of the user through the acquisition device.
- when acquiring the limb feature information, the identity information of the user currently using the terminal device may be obtained first, and then, according to the identity information, it is determined whether the limb feature information corresponding to the user is stored in the local database or the cloud database. If so, the limb feature information stored in the local database or cloud database is read directly; otherwise, a prompt message is output to prompt the user to input the limb feature information, and the limb feature information currently input by the user is received, so as to achieve the purpose of obtaining the user's limb feature information.
- a 3D (three-dimensional) model of the user's limb is established based on the limb feature information, the limb image data, and the limb depth data. It can be understood that, when the 3D model of the user's limb is established, since the hand feature information entered in advance by the user can be loaded directly, the time required for 3D modeling by the terminal device can be reduced.
- the depth-of-field information corresponding to the limb image data (i.e., the limb depth data)
- the three-dimensional information of the limb is determined according to the limb depth data and the limb image data, and a three-dimensional model of the limb is constructed in combination with the limb feature information of the limb.
- when the image data is acquired, since the image data is planar data, only a two-dimensional model of the limb can be obtained; then, based on the TOF data (limb depth data) and the limb feature data, a 3D model of the limb is established on the basis of the two-dimensional model.
- the limb feature information is used as the basic data for constructing the three-dimensional model, so that the similarity between the three-dimensional model and the real limb of the user is improved.
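- The reconstruction algorithm itself is not disclosed; as one standard way to combine planar image data with TOF depth, the sketch below back-projects segmented limb pixels through a pinhole camera model and then refines the result with the pre-entered limb features. The intrinsics values and the `segment_limb` / `fit_limb_template` helpers are hypothetical.

```python
import numpy as np

def backproject_limb(depth, mask, fx, fy, cx, cy):
    """Lift the pixels of a 2D limb mask into a 3D point cloud using the TOF
    depth map and pinhole intrinsics (fx, fy: focal lengths; cx, cy: center)."""
    v, u = np.nonzero(mask)          # pixel coordinates of the segmented limb
    z = depth[v, u]                  # limb depth at those pixels
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

def build_3d_model(limb_features, frame, depth):
    """Hypothetical builder: 2D model from the image data, lifted to 3D with
    the depth data, refined with the limb feature information (stub helpers)."""
    mask = segment_limb(frame)       # two-dimensional model of the limb
    cloud = backproject_limb(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    return fit_limb_template(limb_features, cloud)
```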
- the three-dimensional image of the user's limb is rendered in the display screen according to the three-dimensional model, so as to perform interactive control based on the display screen containing the three-dimensional image.
- the method further includes:
- Step S70: storing the limb image data in association with the three-dimensional model.
- the limb image data may be saved in the database as historical limb image data, and the three-dimensional model generated according to the limb image data of the current frame is stored in association with the limb image data of the current frame.
- the limb image data may be saved as historical limb image data in the persist partition of the local database of the terminal device.
- the database may be either a local database or a cloud database, which is not limited in this embodiment.
- the number of buffers can be determined according to requirements.
- the size of the database for caching the limb image data and the three-dimensional models can be set; when the storage space occupied by the stored limb image data and the associated three-dimensional models is larger than the preset size, the limb image data and associated three-dimensional model that were saved in the database first are deleted, so as to ensure that the storage space occupied by the limb image data and the associated three-dimensional models saved in the database stays below a predetermined value.
- the number of frames of limb image data stored in the database can also be preset, and when the number of frames of limb image data stored is greater than the preset value, the image frame and its associated 3D model first stored in the database are deleted.
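- The frame-count eviction just described is essentially a first-in-first-out cache. A minimal sketch, assuming a fixed maximum number of cached frames (the preset value itself is not given):

```python
from collections import deque

class LimbModelCache:
    """FIFO cache of (limb image, 3D model) associations: when the frame count
    exceeds max_frames, the entry that was saved first is deleted."""
    def __init__(self, max_frames=32):            # preset value (assumed)
        self._entries = deque(maxlen=max_frames)  # deque evicts the oldest item

    def store(self, image, model):
        self._entries.append((image, model))      # save the association

    def best_match(self, frame, similarity_fn):
        """Return (similarity, model) of the closest historical frame, or None."""
        if not self._entries:
            return None
        return max(((similarity_fn(frame, img), mdl) for img, mdl in self._entries),
                   key=lambda t: t[0])
```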
- since the limb image data can be stored in association with the corresponding three-dimensional model, when the range of the user's limb movement is small there is no need to re-establish the three-dimensional model, which improves the generation speed of the three-dimensional limb model and avoids the waste of computing resources caused by repeated modeling. Thus, the effects of improving the data processing speed of the terminal device and reducing the system overhead of the terminal device are achieved.
- in this embodiment, the user's limb feature information, as well as the user's limb image data and limb depth data, are obtained first; then, based on the limb feature information, the limb image data, and the limb depth data, a three-dimensional model corresponding to the limb of the user is determined; finally, the limb is rendered in a display screen according to the three-dimensional model, so as to perform interactive control based on the display screen containing the limb. Since the user can realize interactive control of the terminal device through his own limbs, the dependence of the terminal device on peripheral devices is reduced. At the same time, the defect of poor convenience that exists when the user performs interactive control of the terminal device through a peripheral device is avoided. Thus, the effects of reducing the dependence of the terminal device on peripherals and improving the convenience of the interactive control scheme of the terminal device are achieved.
- the method further includes:
- Step S70: obtaining user identity information.
- the terminal device is further provided with a biometric feature collection device, wherein the biometric feature collection device may be configured as a bone voiceprint feature collection device, a face feature collection device, a pupil feature collection device, a fingerprint feature collection device, and/or a voiceprint feature collection device.
- biometric features such as the user's bone voiceprint feature, face feature, pupil feature, fingerprint feature, and/or voiceprint feature are collected through the biometric feature collection device.
- user identification is performed according to the biometric feature, and user identity information is determined according to the user identification result.
- the terminal device is provided with a bone voiceprint feature acquisition device, wherein the bone voiceprint acquisition device is provided with an audio playback device and a bone conduction sensor.
- the bone conduction sensor fits against the user's head during the use of the terminal device.
- the bone conduction sensor and the audio playback device are arranged at different positions of the terminal device.
- the bone conduction sensor is in contact with a first position of the user's head, and the audio playback device may be set to be in contact with a second position of the user's head, so that when the terminal device plays specific audio data through the audio playback device, the audio is transmitted through the user's skull to the location of the bone conduction sensor, and the bone conduction sensor receives the audio signal transmitted through the skull. Since each user's skull is different, the transmission of the audio signal differs as well, so the bone voiceprint features received by the bone conduction sensor also differ between users. In this way, during user identification, different users can be distinguished according to the bone voiceprint feature.
- the terminal device may determine whether the local database and/or the cloud database stores limb feature information associated with the identity information. If so, the limb feature information saved in the local database and/or the cloud database is read as the limb feature information of the user; otherwise, a prompt message is output to prompt the user to input the limb feature information, and the limb feature information currently entered by the user is received.
- the biometric feature of the user can also be obtained, the identity information corresponding to the user is constructed according to the biometric feature, and then the identity information and the limb feature information are associated and saved to a local database and/or a cloud database.
- the user may be prompted by voice on how to input the limb feature information. For example, when the limb is a hand, the user can be prompted by voice to bend and rotate the hand, so that the front and back of the hand are scanned and photographed to obtain complete limb feature information of the hand.
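- Putting the bone-voiceprint steps together, the sketch below shows one way the identification-then-enrollment flow could look. Every name in it (`extract_bone_voiceprint`, `match_identity`, `scan_limb_features`, the audio file name, the recording length) is a hypothetical placeholder rather than something disclosed here.

```python
def identify_and_load_features(audio_out, bone_sensor, local_db, cloud_db):
    """Hypothetical bone-voiceprint identity flow: play the preset audio,
    read the skull-conducted response, match the enrolled voiceprint, then
    fetch (or enroll) the user's limb feature information."""
    audio_out.play("preset_audio.wav")            # controlled audio playback
    signal = bone_sensor.record(seconds=2.0)      # signal transmitted via the skull
    voiceprint = extract_bone_voiceprint(signal)  # bone voiceprint feature (stub)
    identity = match_identity(voiceprint)         # skulls differ between users
    features = local_db.get(identity) or cloud_db.get(identity)
    if features is None:                          # no stored limb features yet
        print("Please bend and rotate your hand so both sides can be scanned")
        features = scan_limb_features()           # receive the entered features
        local_db[identity] = features             # save the association
    return features
```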
- the user's limb feature information can be obtained through the user identity information, thereby avoiding the need for the user to input feature information each time the device is used, thus achieving the effect of simplifying the use steps of the terminal device.
- an embodiment of the present application also proposes a terminal device; the terminal device includes a memory, a processor, and an interactive control program stored on the memory and executable on the processor, and when the interactive control program is executed by the processor, the steps of the interactive control method described in the above embodiments are implemented.
- the terminal device is a head-mounted display;
- the head-mounted display includes an audio playback device and a bone conduction sensor, wherein the audio playback device and the bone conduction sensor are arranged on the head-mounted display in different locations.
- an embodiment of the present application also proposes a computer-readable storage medium, where an interactive control program is stored on the computer-readable storage medium, and when the interactive control program is executed by a processor, the steps of the interactive control method described in the above embodiments are implemented.
- a software module can be placed in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- Computer Security & Cryptography (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Computer Hardware Design (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An interaction control method, comprising the following steps: obtaining limb image data of a user (S10); determining a similarity between the limb image data and historical limb image data, the historical limb image data being limb image data stored by a terminal device (S20); when the similarity satisfies a preset condition, determining, according to the historical limb image data, a three-dimensional model corresponding to a limb of the user (S30); and, according to the three-dimensional model, rendering a three-dimensional image of the user's limb in a display picture, so as to perform interaction control on the basis of the display picture containing the three-dimensional image (S40). The method improves the convenience of an HMD human-computer interaction solution.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010734378.XA CN111930231B (zh) | 2020-07-27 | 2020-07-27 | 交互控制方法、终端设备及存储介质 |
CN202010734378.X | 2020-07-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022021631A1 | 2022-02-03 |
Family
ID=73315386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/123470 WO2022021631A1 (fr) | 2020-07-27 | 2020-10-24 | Procédé de commande d'interaction, dispositif terminal, et support de stockage |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111930231B (fr) |
WO (1) | WO2022021631A1 (fr) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113268626B (zh) * | 2021-05-26 | 2024-04-26 | 中国人民武装警察部队特种警察学院 | 数据处理方法、装置、电子设备及存储介质 |
CN113476837B (zh) * | 2021-07-01 | 2024-06-04 | 网易(杭州)网络有限公司 | 画质展示方法、装置、设备和存储介质 |
CN114863005A (zh) * | 2022-04-19 | 2022-08-05 | 佛山虎牙虎信科技有限公司 | 一种肢体特效的渲染方法、装置、存储介质和设备 |
CN117133281B (zh) * | 2023-01-16 | 2024-06-28 | 荣耀终端有限公司 | 语音识别方法和电子设备 |
CN118097796B (zh) * | 2024-04-28 | 2024-08-09 | 中国人民解放军联勤保障部队第九六四医院 | 一种基于视觉识别的姿态检测分析系统及方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104656890A (zh) * | 2014-12-10 | 2015-05-27 | 杭州凌手科技有限公司 | 虚拟现实智能投影手势互动一体机及互动实现方法 |
CN109359514A (zh) * | 2018-08-30 | 2019-02-19 | 浙江工业大学 | 一种面向deskVR的手势跟踪识别联合策略方法 |
CN110221690A (zh) * | 2019-05-13 | 2019-09-10 | Oppo广东移动通信有限公司 | 基于ar场景的手势交互方法及装置、存储介质、通信终端 |
CN110335342A (zh) * | 2019-06-12 | 2019-10-15 | 清华大学 | 一种用于沉浸式模拟器的手部模型实时生成方法 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6694163B1 (en) * | 1994-10-27 | 2004-02-17 | Wake Forest University Health Sciences | Method and system for producing interactive, three-dimensional renderings of selected body organs having hollow lumens to enable simulated movement through the lumen |
CN102789313B (zh) * | 2012-03-19 | 2015-05-13 | 苏州触达信息技术有限公司 | 一种用户交互系统和方法 |
CN103440677A (zh) * | 2013-07-30 | 2013-12-11 | 四川大学 | 一种基于Kinect体感设备的多视点自由立体交互系统 |
CN105589553A (zh) * | 2014-09-23 | 2016-05-18 | 上海影创信息科技有限公司 | 一种智能设备的手势控制方法和系统 |
WO2016097732A1 (fr) * | 2014-12-16 | 2016-06-23 | Metail Limited | Procédés permettant de générer un modèle de corps virtuel en 3d d'une personne combiné à une image de vêtement 3d, ainsi que dispositifs, systèmes et produits-programmes d'ordinateur associés |
CN104915978B (zh) * | 2015-06-18 | 2018-04-03 | 天津大学 | 基于体感相机Kinect的真实感动画生成方法 |
CN106296805B (zh) * | 2016-06-06 | 2019-02-26 | 厦门铭微科技有限公司 | 一种基于实时反馈的增强现实人体定位导航方法及装置 |
US10565719B2 (en) * | 2017-10-13 | 2020-02-18 | Microsoft Technology Licensing, Llc | Floor detection in virtual and augmented reality devices using stereo images |
CN109085966B (zh) * | 2018-06-15 | 2020-09-08 | 广东康云多维视觉智能科技有限公司 | 一种基于云计算的三维展示系统及方法 |
CN108985262A (zh) * | 2018-08-06 | 2018-12-11 | 百度在线网络技术(北京)有限公司 | 肢体运动指导方法、装置、服务器及存储介质 |
CN109671141B (zh) * | 2018-11-21 | 2023-04-18 | 深圳市腾讯信息技术有限公司 | 图像的渲染方法和装置、存储介质、电子装置 |
CN109660899B (zh) * | 2018-12-28 | 2020-06-05 | 广东思派康电子科技有限公司 | 计算机可读存储介质和应用该介质的骨声纹检测耳机 |
CN110236895A (zh) * | 2019-05-10 | 2019-09-17 | 苏州米特希赛尔人工智能有限公司 | Ai盲人导航眼镜 |
2020
- 2020-07-27 CN CN202010734378.XA patent/CN111930231B/zh active Active
- 2020-10-24 WO PCT/CN2020/123470 patent/WO2022021631A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104656890A (zh) * | 2014-12-10 | 2015-05-27 | 杭州凌手科技有限公司 | 虚拟现实智能投影手势互动一体机及互动实现方法 |
CN109359514A (zh) * | 2018-08-30 | 2019-02-19 | 浙江工业大学 | 一种面向deskVR的手势跟踪识别联合策略方法 |
CN110221690A (zh) * | 2019-05-13 | 2019-09-10 | Oppo广东移动通信有限公司 | 基于ar场景的手势交互方法及装置、存储介质、通信终端 |
CN110335342A (zh) * | 2019-06-12 | 2019-10-15 | 清华大学 | 一种用于沉浸式模拟器的手部模型实时生成方法 |
Also Published As
Publication number | Publication date |
---|---|
CN111930231A (zh) | 2020-11-13 |
CN111930231B (zh) | 2022-02-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022021631A1 (fr) | Procédé de commande d'interaction, dispositif terminal, et support de stockage | |
WO2018219120A1 (fr) | Procédé d'affichage d'image, procédé et dispositif de traitement d'image, terminal et serveur | |
US20210248763A1 (en) | Three-dimensional object reconstruction method and apparatus | |
WO2019034142A1 (fr) | Procédé et dispositif d'affichage d'image tridimensionnelle, terminal et support d'informations | |
US20210336784A1 (en) | Extended reality authentication | |
WO2019184499A1 (fr) | Procédé et dispositif d'appel vidéo et support de stockage informatique | |
WO2019148968A1 (fr) | Terminal mobile, procédé de déverrouillage facial et produit associé | |
WO2021104247A1 (fr) | Procédé d'affichage d'image et dispositif électronique | |
US10635180B2 (en) | Remote control of a desktop application via a mobile device | |
WO2022062808A1 (fr) | Procédé et dispositif de génération de portraits | |
US10147240B2 (en) | Product image processing method, and apparatus and system thereof | |
JP2016181018A (ja) | 情報処理システムおよび情報処理方法 | |
WO2022237116A1 (fr) | Procédé et appareil de traitement d'image | |
EP4136622A1 (fr) | Application d'améliorations par maquillage numérique stockées à des visages reconnus dans des images numériques | |
CN112449098A (zh) | 一种拍摄方法、装置、终端及存储介质 | |
US20150215530A1 (en) | Universal capture | |
CN113253829B (zh) | 眼球跟踪校准方法及相关产品 | |
US20230316612A1 (en) | Terminal apparatus, operating method of terminal apparatus, and non-transitory computer readable medium | |
US11671254B2 (en) | Extended reality authentication | |
CN109284002A (zh) | 一种用户距离估算方法、装置、设备及存储介质 | |
US20240096043A1 (en) | Display method, apparatus, electronic device and storage medium for a virtual input device | |
WO2017120767A1 (fr) | Procédé et appareil de prédiction d'attitude de la tête | |
US20230247383A1 (en) | Information processing apparatus, operating method of information processing apparatus, and non-transitory computer readable medium | |
WO2023160072A1 (fr) | Procédé et appareil d'interaction homme-ordinateur dans scène de réalité augmentée (ar), et dispositif électronique | |
US20230368475A1 (en) | Multi-Device Content Handoff Based on Source Device Position |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20947501; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20947501; Country of ref document: EP; Kind code of ref document: A1