WO2018006481A1 - Somatosensory operation method and device for a mobile terminal - Google Patents

Somatosensory operation method and device for a mobile terminal

Info

Publication number
WO2018006481A1
Authority
WO
WIPO (PCT)
Prior art keywords
somatosensory
mobile terminal
somatosensory operation
stream information
action
Prior art date
Application number
PCT/CN2016/096407
Other languages
English (en)
Chinese (zh)
Inventor
黄云晓
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2018006481A1 publication Critical patent/WO2018006481A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to the field of mobile terminal intelligence, and in particular to a somatosensory operation method and device for a mobile terminal.
  • the two mainstream somatosensory cameras, the Kinect and the LeTV somatosensory camera, are based on three main components: a color camera, an infrared camera, and an infrared emitter.
  • somatosensory and gesture control is realized algorithmically from the depth data stream collected by the infrared emitter and the infrared camera together with the color video stream collected by the color camera.
  • however, this approach requires purchasing a dedicated somatosensory camera, and the user would have to carry that separate camera at all times to perform somatosensory operation anytime and anywhere, so the user experience is poor.
  • the embodiment of the invention provides a somatosensory operation method and device for a mobile terminal, which solve the problem that a mobile terminal cannot perform somatosensory operation anytime and anywhere, resulting in a poor user experience.
  • a somatosensory operation method for a mobile terminal is provided.
  • the mobile terminal is provided with a color camera, an infrared camera, and an infrared emitter.
  • color video stream information collected by the color camera and depth data stream information collected by the infrared camera are acquired; a currently input somatosensory operation is determined according to the color video stream information and the depth data stream information; and the operation action corresponding to the currently input somatosensory operation is determined and executed according to the correspondence between somatosensory operations and preset operation actions.
  • the step of obtaining the depth data stream information collected by the infrared camera includes: acquiring reference speckle image information formed in a specific space by the infrared emitter; acquiring speckle image information collected by the infrared camera when a specified action occurs in the specific space; and performing a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the specific space.
  • the step of determining the currently input somatosensory operation comprises: superimposing the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data; and determining the three-dimensional model data as the currently input somatosensory operation.
  • the step of determining the three-dimensional model data as the currently input somatosensory operation comprises: extracting joint points from the three-dimensional model data to construct a skeleton model; and determining the currently input somatosensory operation according to changes in the skeleton model.
  • the step of determining and executing the operation action corresponding to the currently input somatosensory operation according to the correspondence between somatosensory operations and preset operation actions comprises: detecting whether the currently input somatosensory operation is the same as the somatosensory operation corresponding to a specific operation action; and if they are the same, determining that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and performing the specific operation action.
  • the step of performing the specific operation action includes: calculating the spatial coordinates of the left or right hand in the currently input somatosensory operation; mapping the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relationship between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal; and mapping the specific operation action occurring at the spatial coordinates onto the display screen coordinates and executing it.
  • a somatosensory operation device for a mobile terminal where the mobile terminal is provided with a color camera, an infrared camera, and an infrared emitter; the somatosensory operation device includes:
  • an obtaining module configured to obtain color video stream information collected by the color camera and depth data stream information collected by the infrared camera;
  • a first processing module configured to determine a currently input somatosensory operation according to the color video stream information and the depth data stream information; and
  • the second processing module is configured to determine and execute an operation action corresponding to the currently input somatosensory operation according to the correspondence between the somatosensory operation and the preset operation action.
  • the obtaining module includes:
  • a first acquiring unit configured to acquire reference speckle image information formed in a specific space by the infrared emitter;
  • a second acquiring unit configured to acquire speckle image information collected by the infrared camera when a specified action occurs in the specific space; and
  • a first operation unit configured to perform a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the specific space.
  • the first processing module includes:
  • a second operation unit configured to perform a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data; and
  • a first processing unit configured to determine the three-dimensional model data as the currently input somatosensory operation.
  • the first processing unit includes:
  • an extraction subunit configured to extract joint points from the three-dimensional model data to construct a skeleton model; and
  • a first processing subunit configured to determine the currently input somatosensory operation based on changes in the skeleton model.
  • the second processing module includes:
  • a detecting unit configured to detect whether the currently input somatosensory operation is the same as the somatosensory operation corresponding to the specific operation action
  • a second processing unit configured to, when it is detected that the currently input somatosensory operation is the same as the somatosensory operation corresponding to the specific operation action, determine that the operation action corresponding to the currently input somatosensory operation is the specific operation action and perform the specific operation action.
  • the second processing unit includes:
  • a calculation subunit configured to calculate the spatial coordinates of the left or right hand in the currently input somatosensory operation;
  • a mapping subunit configured to map the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relationship between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal; and
  • a second processing subunit configured to map the specific operation action occurring at the spatial coordinates onto the display screen coordinates and execute it.
  • a storage medium is also provided.
  • the storage medium is arranged to store program code for performing the following steps: acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera; determining a currently input somatosensory operation according to the color video stream information and the depth data stream information; and determining and executing the operation action corresponding to the currently input somatosensory operation according to the correspondence between somatosensory operations and preset operation actions.
  • in the embodiments of the invention, the corresponding operation action is performed without holding any sensing props, so that stereoscopic operation of the mobile terminal can be realized anytime and anywhere; the operation is simple, and the user experience is greatly improved.
  • FIG. 1 is a schematic structural view of a mobile terminal of the present invention
  • FIG. 2 is a schematic flow chart of the somatosensory operation method for a mobile terminal according to the present invention;
  • FIG. 3 is a schematic flow chart of the somatosensory operation method according to Example 1 of the first embodiment of the present invention;
  • FIG. 4 is a schematic flow chart of the somatosensory operation method according to Example 2 of the first embodiment of the present invention;
  • FIG. 5 is a schematic structural view of the somatosensory operation device of the mobile terminal according to the present invention.
  • an embodiment of the present invention provides a method for operating a body of a mobile terminal.
  • the mobile terminal is provided with a color camera 1, an infrared emitter 2, and an infrared camera 3.
  • the infrared emitter 2 and the infrared camera 3 can be combined in the same way as a flash is combined with the color camera 1.
  • the infrared emitter 2 emits an infrared laser to "illuminate" the target, and the infrared camera 3 collects the specially shaped speckle spots that the infrared laser forms on the target.
  • the infrared emitter 2 is driven by instantaneous high-voltage pulses provided by a dedicated chip (e.g., a BOOST power chip).
  • the infrared camera 3 can be connected to the baseband chip of the mobile phone via the MIPI or I2C bus for data communication.
  • the somatosensory operation method of this embodiment includes:
  • Step 201 Acquire color video stream information collected by the color camera and depth data stream information collected by the infrared camera.
  • a somatosensory mode can be set in the mobile terminal. When somatosensory-related software (such as a somatosensory game, smart-TV somatosensory camera emulation, or a virtual touch screen) is opened, the somatosensory mode is turned on in the background, the color camera is automatically activated, and the infrared emitter and infrared camera enter the working state.
  • the process of acquiring color video stream information by the color camera is similar to the process of capturing a photo or video by using a color camera in the prior art, and therefore will not be described in detail.
  • this embodiment introduces the process in which the mobile terminal collects depth data stream information by using the infrared emitter and the infrared camera.
  • the collection of depth data stream information is based on optical coding technology, which illuminates and thereby encodes the space to be measured with a structured light source; laser speckle is one such optical code.
  • laser speckle consists of the random diffraction spots formed when a laser strikes a rough object or passes through frosted glass.
  • these speckles are highly random, and their pattern changes with distance, meaning that the speckle patterns at any two places in the space differ. Once such structured light is projected into the space, the entire space is marked; when an object is placed in the space, its location can be determined simply by reading the speckle pattern on its surface.
  • the step of acquiring the depth data stream information collected by the infrared camera specifically includes: acquiring reference speckle image information formed in a specific space by the infrared emitter; acquiring speckle image information collected by the infrared camera when a specified action occurs in the specific space; and performing a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the specific space.
  • the speckle image information projected by the laser speckle at a series of distances in the specific space is recorded in advance.
  • suppose, for example, that the user activity space lies within 1 to 4 meters of the camera.
  • taking a reference plane every 10 cm, 30 speckle images can be pre-recorded, which together constitute the reference speckle image information for the specific space.
  • when the specific space needs to be measured, a speckle image of the scene under test is captured and cross-correlated in turn with each of the 30 saved reference images, yielding 30 correlation images; wherever an object exists in the space, a peak appears in the corresponding correlation image.
  • stacking these peaks layer by layer yields the three-dimensional shape of the entire scene, from which the three-dimensional depth data stream information of the specific space is obtained (a sketch of this computation follows below).
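  • The following Python sketch illustrates this reference-plane scheme. It is a minimal illustration, not the patent's implementation: the test image is cross-correlated against each saved reference plane in a local window, and each pixel keeps the depth of its best-matching plane. The window size and the normalized-correlation form are assumptions.

```python
import numpy as np

def depth_from_speckle(test_img, reference_imgs, depths_cm, win=15):
    """Estimate a per-pixel depth map by cross-correlating a captured
    speckle image against pre-recorded reference-plane speckle images.

    test_img       -- 2-D array: speckle image of the scene under test
    reference_imgs -- sequence of 2-D arrays, one per reference plane
    depths_cm      -- depth of each reference plane, in centimeters
    win            -- local correlation window size (an assumption)
    """
    h, w = test_img.shape
    pad = win // 2
    t = np.pad(test_img.astype(float), pad, mode="reflect")
    best_corr = np.full((h, w), -np.inf)
    depth_map = np.zeros((h, w))

    for ref, d in zip(reference_imgs, depths_cm):
        r = np.pad(ref.astype(float), pad, mode="reflect")
        corr = np.zeros((h, w))
        # Normalized zero-mean cross-correlation in a local window;
        # a peak means the surface at this pixel sits near depth d.
        for y in range(h):
            for x in range(w):
                tw = t[y:y + win, x:x + win] - t[y:y + win, x:x + win].mean()
                rw = r[y:y + win, x:x + win] - r[y:y + win, x:x + win].mean()
                denom = np.sqrt((tw * tw).sum() * (rw * rw).sum()) + 1e-9
                corr[y, x] = (tw * rw).sum() / denom
        better = corr > best_corr
        best_corr[better] = corr[better]
        depth_map[better] = d

    return depth_map  # one frame of the "depth data stream"
```

  In practice such a computation would run in optimized native code or dedicated hardware; the nested loops here favor clarity over speed.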
  • Step 202 Determine a currently input somatosensory operation according to the color video stream information and the depth data stream information.
  • a two-dimensional color video stream is combined with three-dimensional depth data stream information to create three-dimensional model data.
  • the color video stream information and the depth data stream information are superimposed to obtain corresponding three-dimensional model data; and the three-dimensional model data is determined as the currently input somatosensory operation.
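  • As an illustration of this superposition, the sketch below back-projects a registered color/depth frame pair into a colored point cloud, one plausible form of the three-dimensional model data. The pinhole intrinsics are placeholder values and the two frames are assumed to be pre-registered; neither detail is specified by the patent.

```python
import numpy as np

def rgbd_to_point_cloud(color, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Fuse a registered color frame and depth frame into a colored 3-D
    point cloud (one plausible form of "three-dimensional model data").

    color -- (H, W, 3) uint8 frame from the color camera
    depth -- (H, W) float frame from the infrared pair, in meters
    fx, fy, cx, cy -- pinhole intrinsics; placeholders here, real values
    would come from camera calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    z = depth
    # Back-project every pixel through the pinhole camera model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, 3)
    valid = points[:, 2] > 0  # discard pixels with no depth reading
    return points[valid], colors[valid]
```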
  • determining the three-dimensional model data as the currently input somatosensory operation comprises: extracting joint points from the three-dimensional model data to construct a skeleton model; and determining the currently input somatosensory operation according to changes in the skeleton model. That is, after the acquired two-dimensional color video stream is superimposed with the three-dimensional depth data stream information to obtain the user's three-dimensional model data, the model is simplified into a skeleton model composed of dozens of joint points. The skeleton model, or the gesture captured from it, is then compared with a reference model or reference gesture pre-recorded into the system to determine the currently input somatosensory operation (see the matching sketch below).
  • the user's skeleton model can be stored; on the next use, the user's identity can be confirmed by face recognition and the corresponding model data extracted directly.
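  • A minimal sketch of this skeleton-against-reference comparison follows. The joint count, root-joint convention, and distance threshold are all assumptions, since the patent specifies only "dozens of joint points" compared against pre-recorded models or gestures.

```python
import numpy as np

N_JOINTS = 20  # "dozens of joint points"; the exact count is an assumption

def normalize(skel):
    # Center on the root joint (index 0, assumed to be the hip) and
    # scale by the skeleton's extent, for translation/scale invariance.
    centered = skel - skel[0]
    scale = np.linalg.norm(centered, axis=1).max() + 1e-9
    return centered / scale

def match_gesture(skeleton, references, threshold=0.15):
    """Return the name of the pre-recorded pose closest to the current
    skeleton, or None if nothing is close enough.

    skeleton   -- (N_JOINTS, 3) array of joint coordinates
    references -- dict: gesture name -> (N_JOINTS, 3) reference pose
    threshold  -- maximum mean joint distance to accept (an assumption)
    """
    cur = normalize(skeleton)
    best_name, best_dist = None, float("inf")
    for name, ref in references.items():
        dist = np.linalg.norm(cur - normalize(ref), axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Usage: reference poses would come from the stored per-user model data.
refs = {"swipe_right": np.random.rand(N_JOINTS, 3)}  # placeholder pose
print(match_gesture(np.random.rand(N_JOINTS, 3), refs))
```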
  • Step 203 Determine and execute an operation action corresponding to the currently input somatosensory operation according to the correspondence between the somatosensory operation and the preset operation action.
  • it is detected whether the currently input somatosensory operation is the same as the somatosensory operation corresponding to a specific operation action; if they are the same, the operation action corresponding to the currently input somatosensory operation is determined to be the specific operation action, and the specific operation action is performed. For example, it is detected whether the currently input somatosensory operation is the same as the somatosensory operation corresponding to a sliding operation, and if so, the sliding operation is performed.
  • the step of performing the specific operation action includes: calculating the spatial coordinates of the left or right hand in the currently input somatosensory operation; mapping the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relationship between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal; and mapping the specific operation action occurring at the spatial coordinates onto the display screen coordinates and executing it (a mapping sketch follows below).
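  • One plausible form of such a mapping relationship is a clamped linear map from the hand's position in the sensed interaction volume to screen pixels, sketched below. The interaction-volume bounds and the portrait 1080x1920 screen are illustrative assumptions; the patent only states that a mapping relationship exists.

```python
def space_to_screen(hand_xyz, screen_w=1080, screen_h=1920,
                    x_range=(-0.5, 0.5), y_range=(-0.9, 0.9)):
    """Map a hand position in camera space (meters) to display pixels.

    The volume bounds and screen resolution are assumptions made for
    illustration, not values taken from the patent.
    """
    x, y, _z = hand_xyz
    # Clamp into the interaction volume, then scale linearly to pixels.
    u = (min(max(x, x_range[0]), x_range[1]) - x_range[0]) / (x_range[1] - x_range[0])
    v = (min(max(y, y_range[0]), y_range[1]) - y_range[0]) / (y_range[1] - y_range[0])
    return int(u * (screen_w - 1)), int((1 - v) * (screen_h - 1))

# Example: a hand right of center and above center maps to mid-screen pixels.
print(space_to_screen((0.2, 0.3, 1.5)))
```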
  • specifically, this refers to: determining the relative distance and coordinates between the finger and the spatial background; letting the user know the relative coordinates of the finger by displaying a transparent virtual finger or other identifier on the screen of the mobile terminal; and letting the user judge the relative distance between the finger and the virtual screen by the shadow of that virtual finger or identifier, so that touch screen commands are sent directly by operating the finger on the virtual screen, realizing virtual touch screen operation.
  • in this embodiment, the color camera, the infrared emitter, and the infrared camera are integrated into the mobile terminal, and the currently input somatosensory operation is determined according to the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera, so that the corresponding operation action is performed. Stereoscopic operation of the mobile terminal can thus be realized anytime and anywhere without hand-held sensing props; the operation is simple, and the user experience is greatly improved.
  • the somatosensory operation method of the embodiment of the present invention is further described below in conjunction with two application scenarios: a somatosensory control application and a virtual touch screen.
  • Example 1 As shown in FIG. 3, the somatosensory operation method for the somatosensory control application specifically includes the following steps:
  • Step 301 Turn on the somatosensory function, and activate the color camera, the infrared emitter, and the infrared camera. When somatosensory-related software (such as a somatosensory game, smart-TV somatosensory camera emulation, or a virtual touch screen) is opened, the somatosensory function is turned on in the background and the somatosensory-related hardware enters the working state.
  • Step 302 Collect color video stream information and depth data stream information.
  • Color video stream information is collected by a color camera
  • depth data stream information is collected by the infrared emitter and the infrared camera.
  • Step 303 Establish three-dimensional model data according to the color video stream information and the depth data stream information. A two-dimensional color video stream is combined with a three-dimensional depth data stream to create three-dimensional model data.
  • Step 304 Establish a skeleton model according to the three-dimensional model data.
  • the collected human body model is simplified into a skeleton model composed of dozens of joint points. The user's model data can be stored, and the next time, after the user's identity is confirmed by means of face recognition, the model data of that specific user can be extracted directly.
  • Step 305 Output operation information according to the comparison between the skeleton model and the preset model. The skeleton model, or the captured gesture, is compared with the models or gestures pre-recorded into the system; if the requirements are met, the corresponding data or operations are output, finally realizing somatosensory human-computer interaction.
  • in this way, the mobile terminal can be connected to a smart TV or smart set-top box through the home network and function as a dedicated somatosensory camera, realizing functions such as gesture remote control and somatosensory games.
  • Example 2 As shown in FIG. 4, the somatosensory operation method for the virtual touch screen specifically includes the following steps:
  • Step 401 Turn on the somatosensory function, and start the color camera, the infrared emitter, and the infrared camera. When somatosensory-related software (such as a somatosensory game, smart-TV somatosensory camera emulation, or a virtual touch screen) is opened, the somatosensory function is turned on in the background and the somatosensory-related hardware enters the working state.
  • Step 402 Collect color video stream information and depth data stream information.
  • Color video stream information is collected by a color camera
  • depth data stream information is collected by the infrared emitter and the infrared camera.
  • Step 403 Establish three-dimensional model data according to the color video stream information and the depth data stream information.
  • a two-dimensional color video stream is combined with a three-dimensional depth data stream to create three-dimensional model data.
  • Step 404 Calculate the relative position and coordinates of the finger with respect to the specific spatial background according to the three-dimensional model data, and determine the virtual touch screen instruction. The relative distance and coordinates between the finger and the background are determined; a transparent virtual finger or other identifier displayed on the screen lets the user know the finger's relative coordinates, and the shadow of the virtual finger or identifier lets the user judge the relative distance between the finger and the virtual screen.
  • Step 405 Send a virtual touch screen instruction to implement a virtual touch screen operation.
  • the virtual touch screen operation is realized by the finger's operation on the virtual screen directly triggering the corresponding touch screen command.
  • in this way, operation commands can be input using somatosensory imaging technology by capturing actions or gestures of the finger against a specific spatial background such as a table top, a wall, or a thigh (see the sketch below).
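  • The sketch below illustrates one way such a virtual touch screen loop could work under the embodiment's description: the fingertip is mapped to screen coordinates, and a touch command fires when the finger comes within a small tolerance of the physical background. The 1 cm tolerance and the send_touch dispatcher are assumptions, not details from the patent.

```python
def detect_virtual_tap(finger_depth_m, background_depth_m, touch_eps_m=0.01):
    """True when the fingertip is within touch_eps_m of the physical
    background (table top, wall, thigh) serving as the virtual screen.
    The 1 cm contact tolerance is an assumption."""
    return (background_depth_m - finger_depth_m) < touch_eps_m

def run_virtual_touch(frames, to_screen, send_touch):
    """Emit touch-screen commands from tracked fingertip positions.

    frames     -- iterable of (finger_xyz, background_depth_m) pairs
                  derived from the three-dimensional model data
    to_screen  -- maps (x, y, z) to pixel coordinates, e.g. the
                  space_to_screen sketch shown earlier
    send_touch -- hypothetical dispatcher that injects a touch event;
                  not an API named by the patent
    """
    for finger_xyz, bg_depth in frames:
        u, v = to_screen(finger_xyz)
        if detect_virtual_tap(finger_xyz[2], bg_depth):
            send_touch(u, v)
```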
  • an embodiment of the present invention provides a somatosensory operation device for a mobile terminal.
  • the mobile terminal is provided with a color camera, an infrared camera and an infrared emitter; the somatosensory operation device comprises:
  • the obtaining module 51 is configured to obtain color video stream information collected by the color camera and depth data stream information collected by the infrared camera;
  • the first processing module 52 is configured to determine a currently input somatosensory operation according to the color video stream information and the depth data stream information;
  • the second processing module 53 is configured to determine and execute an operation action corresponding to the currently input somatosensory operation according to the correspondence between the somatosensory operation and the preset operation action.
  • the obtaining module 51 includes:
  • a first acquiring unit configured to acquire reference speckle image information formed in a specific space by the infrared emitter;
  • a second acquiring unit configured to acquire speckle image information collected by the infrared camera when a specified action occurs in the specific space; and
  • a first operation unit configured to perform a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the specific space.
  • the first processing module 52 includes:
  • a second operation unit configured to perform a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data; and
  • a first processing unit configured to determine the three-dimensional model data as the currently input somatosensory operation.
  • the first processing unit includes:
  • an extraction subunit configured to extract joint points from the three-dimensional model data to construct a skeleton model; and
  • a first processing subunit configured to determine the currently input somatosensory operation based on changes in the skeleton model.
  • the second processing module 53 includes:
  • a detecting unit configured to detect whether the currently input somatosensory operation is the same as the somatosensory operation corresponding to the specific operation action; and
  • a second processing unit configured to, when it is detected that the currently input somatosensory operation is the same as the somatosensory operation corresponding to the specific operation action, determine that the operation action corresponding to the currently input somatosensory operation is the specific operation action and perform the specific operation action.
  • the second processing unit includes:
  • a calculation subunit configured to calculate the spatial coordinates of the left or right hand in the currently input somatosensory operation;
  • a mapping subunit configured to map the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relationship between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal; and
  • a second processing subunit configured to map the specific operation action occurring at the spatial coordinates onto the display screen coordinates and execute it.
  • this device corresponds to the somatosensory operation method of the mobile terminal described above; all implementation manners in the foregoing method embodiments are applicable to this device embodiment, and the same technical effects can be achieved.
  • Embodiments of the present invention also provide a storage medium.
  • the foregoing storage medium may be configured to store program code for performing the following steps: acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera; determining a currently input somatosensory operation according to the color video stream information and the depth data stream information; and determining and executing the operation action corresponding to the currently input somatosensory operation according to the correspondence between somatosensory operations and preset operation actions.
  • the foregoing storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, and a magnetic memory.
  • the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, and in some cases the steps shown or described may be performed in an order different from the order herein; alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module.
  • the invention is not limited to any specific combination of hardware and software.
  • in the embodiments of the present invention, the color camera, the infrared emitter, and the infrared camera are integrated into the mobile terminal, and the currently input somatosensory operation is determined according to the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera, so that the corresponding operation action is performed. Stereoscopic operation of the mobile terminal can be realized anytime and anywhere without hand-held sensing props; the operation is simple, and the user experience is greatly improved.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Position Input By Displaying (AREA)

Abstract

Disclosed are a somatosensory (motion-sensing) operation method and device for a mobile terminal. A color camera, an infrared camera, and an infrared emitter are arranged on the mobile terminal. The method comprises: acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera (201); determining a currently input somatosensory operation according to the color video stream information and the depth data stream information (202); and determining and executing, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation (203). By integrating a color camera, an infrared emitter, and an infrared camera into the mobile terminal and determining the currently input somatosensory operation from the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera so as to execute the corresponding operation action, the method and device enable stereoscopic operation of the mobile terminal anytime and anywhere without any hand-held sensing element; operation is convenient, and the user experience is considerably improved.
PCT/CN2016/096407 2016-07-04 2016-08-23 Somatosensory operation method and device for a mobile terminal WO2018006481A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610517335.X 2016-07-04
CN201610517335.XA CN107577334A (zh) 2016-07-04 Somatosensory operation method and device for a mobile terminal

Publications (1)

Publication Number Publication Date
WO2018006481A1 (fr) 2018-01-11

Family

ID=60901650

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096407 WO2018006481A1 (fr) 2016-07-04 2016-08-23 Somatosensory operation method and device for a mobile terminal

Country Status (2)

Country Link
CN (1) CN107577334A (fr)
WO (1) WO2018006481A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308015A (zh) * 2020-11-18 2021-02-02 盐城鸿石智能科技有限公司 Novel depth recovery scheme based on 3D structured light

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640175A (zh) * 2018-06-21 2020-09-08 华为技术有限公司 Object modeling and motion method, apparatus, and device
CN110559645B (zh) 2019-07-18 2021-08-17 荣耀终端有限公司 Application running method and electronic device
CN112102934A (zh) * 2020-09-16 2020-12-18 南通市第一人民医院 Scoring method and system for standardized nurse training assessment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110134112A1 (en) * 2009-12-08 2011-06-09 Electronics And Telecommunications Research Institute Mobile terminal having gesture recognition function and interface system using the same
CN203950270U (zh) * 2014-01-22 2014-11-19 南京信息工程大学 Somatosensory recognition device and human-computer interaction system for controlling mouse and keyboard operations through it
CN104252231A (zh) * 2014-09-23 2014-12-31 河南省辉耀网络技术有限公司 Camera-based somatosensory recognition system and method
WO2015053451A1 (fr) * 2013-10-10 2015-04-16 Lg Electronics Inc. Mobile terminal and operating method therefor
CN204481940U (zh) * 2015-04-07 2015-07-15 北京市商汤科技开发有限公司 Mobile terminal with binocular cameras for photographing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101917685B1 (ko) * 2012-03-21 2018-11-13 엘지전자 주식회사 Mobile terminal and control method thereof
CN202870727U (zh) * 2012-10-24 2013-04-10 上海威镜信息科技有限公司 Display unit device with a motion capture module
CN103914129A (zh) * 2013-01-04 2014-07-09 云联(北京)信息技术有限公司 Human-computer interaction system and method


Also Published As

Publication number Publication date
CN107577334A (zh) 2018-01-12

Similar Documents

Publication Publication Date Title
US20210350631A1 (en) Wearable augmented reality devices with object detection and tracking
US10437347B2 (en) Integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
US9524021B2 (en) Imaging surround system for touch-free display control
JP6259545B2 (ja) System and method for inputting gestures in a 3D scene
KR101453815B1 (ko) Method and apparatus for providing an interface that recognizes motion in consideration of the user's viewpoint
CN102959616B (zh) Interactive reality augmentation for natural interaction
US20110187820A1 Depth camera compatibility
WO2016064435A1 (fr) System and method for immersive and interactive multimedia generation
JP2015526927A (ja) Context-driven adjustment of camera parameters
CN105760106A (zh) Smart home device interaction method and apparatus
US20140009384A1 Methods and systems for determining location of handheld device within 3d environment
WO2018006481A1 (fr) Somatosensory operation method and device for a mobile terminal
JP2017534940A (ja) System and method for reproducing objects in a 3D scene
JP2013165366A (ja) Image processing device, image processing method, and program
WO2011097049A2 (fr) Depth camera compatibility
CN107346172B (zh) Motion sensing method and device
WO2017084319A1 (fr) Gesture recognition method and virtual reality display output device
CN107145822B (zh) Method and system for calibrating user somatosensory interaction off-axis from a depth camera
WO2015093130A1 (fr) Information processing device, information processing method, and program
CN115439171A (zh) Commodity information display method and apparatus, and electronic device
CN112073640B (zh) Method, apparatus, and system for acquiring the pose for panoramic information collection
CN210691314U (zh) Access control system and login device based on liveness detection
CN109426336A (zh) Virtual-reality-assisted model selection device
JP2013004001A (ja) Display control device, display control method, and program
KR101036107B1 (ko) Augmented reality implementation system using unique identification information

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16907985

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16907985

Country of ref document: EP

Kind code of ref document: A1