WO2017113674A1 - Method, system, and smart device for implementing somatosensory control based on a smart device - Google Patents
Method, system, and smart device for implementing somatosensory control based on a smart device
- Publication number
- WO2017113674A1 (PCT/CN2016/088314, CN2016088314W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- motion trajectory
- camera
- image
- contour
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/013—Force feedback applied to a game
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20116—Active contour; Active surface; Snakes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- the present invention relates to the field of computer vision technology, and in particular, to a method, system, and smart device for implementing somatosensory control based on an intelligent device.
- with the development of computer vision technology, somatosensory games have gradually entered people's lives.
- compared with traditional gamepad or joystick control, somatosensory game consoles use a somatosensory camera to sense human body movements to operate games, such as the Kinect for the Xbox 360 somatosensory game console produced by Microsoft Corporation.
- the Kinect uses three somatosensory cameras to capture human body movements and converts them into operation commands to control the game, so that people get a better operating experience when playing games and can exercise while playing.
- the technical problem to be solved by the present invention is that the high price of somatosensory cameras hinders the application of somatosensory technology in people's lives.
- an embodiment of the present invention provides a method for implementing somatosensory control based on a smart device, the smart device having a camera, the method comprising: collecting user image data; acquiring an image contour of the user according to the image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- the feature length comprises a hand contour length/width, a leg contour length/width or a head contour length/width.
- between collecting the user image data and acquiring the image contour of the user, the method further comprises: separating the user image from the foreground and the background.
- between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the method further includes: correcting the second motion trajectory according to the distances from the parts of the user's body to the camera measured by a ranging module.
- the ranging module is an infrared ranging module or a laser ranging module.
- An embodiment of the present invention further provides a system for implementing somatosensory control based on a smart device, the smart device having a camera, the system comprising: an acquisition unit for collecting user image data; an image contour acquisition unit for acquiring an image contour of the user according to the user image data; a first motion trajectory unit configured to acquire a first motion trajectory of the user on the imaging plane according to the image contour; a second motion trajectory unit configured to acquire a second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera; and a somatosensory data unit configured to generate somatosensory data according to the first motion trajectory and the second motion trajectory.
- the system further comprises: a separation unit configured to separate the user image from the foreground and the background between the collection of the user image data by the acquisition unit and the acquisition of the image contour of the user by the image contour acquisition unit according to the user image data.
- the system further comprises: a correction unit configured to correct the second motion trajectory, according to the distances from the parts of the user's body to the camera measured by the ranging module, between the acquisition by the second motion trajectory unit of the second motion trajectory of the user in the direction perpendicular to the imaging plane according to the change in feature length on the image contour and/or the change in the focal length of the camera, and the generation by the somatosensory data unit of the somatosensory data according to the first motion trajectory and the second motion trajectory.
- the embodiment of the present invention further provides a smart device, including: a camera for collecting user image data; and a processor for acquiring the image contour of the user according to the user image data, acquiring the first motion trajectory of the user on the imaging plane according to the image contour, acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to the change in feature length on the image contour and/or the change in the focal length of the camera, and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- the processor is further configured to receive the distances from the parts of the user's body to the camera measured by an external ranging module, and to correct the second motion trajectory according to the distances.
- the embodiment of the present invention discloses a system for implementing a somatosensory control based on a smart device, the smart device having a camera, wherein the system comprises:
- one or more processors;
- a memory;
- one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- in the system, the user image is separated from the foreground and the background between collecting the user image data and acquiring the image contour of the user according to the user image data.
- between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane based on a change in feature length on the image contour and/or a change in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the second motion trajectory is corrected according to the distances from the parts of the user's body to the camera measured by the ranging module.
- the method, system, and smart device for implementing somatosensory control based on a smart device use only the camera on a smart device such as a smartphone to acquire user image data, and obtain from that image data the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane, thereby obtaining the user's motion trajectory in three-dimensional space to generate somatosensory data; the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
- FIG. 1 is a schematic diagram of an application scenario for implementing somatosensory control based on a smart device according to an embodiment of the invention
- FIG. 2 shows a flow chart of a method for implementing somatosensory control based on a smart device according to an embodiment of the present invention
- FIG. 3 is a schematic diagram of a system for implementing somatosensory control based on a smart device according to an embodiment of the present invention
- FIG. 4 is a schematic diagram of a system for implementing somatosensory control based on a smart device with a processor according to an embodiment of the present invention
- FIG. 5 is a schematic diagram of a system for implementing somatosensory control based on a smart device with two processors according to an embodiment of the present invention.
- the method for implementing somatosensory control based on a smart device requires a smart device with a camera; the smart device can be a smartphone, a tablet computer, a notebook computer, etc.
- the user needs to keep a certain distance from the camera of the smart device, so that the camera can collect image data of the whole body of the user.
- some somatosensory controls require only hand motion control, in which case the camera may capture image data of the user's hand only.
- an embodiment of the present invention provides a method for implementing a somatosensory control based on a smart device, where the smart device has a camera, and the method includes the following steps:
- S1. Collect user image data. As shown in Figure 1, the camera captures image data of the user on the imaging plane, i.e., the x-y plane.
- S2. Separate the user image from the foreground and the background. This step is optional; any existing image separation method can be used to separate the user image from the foreground and the background, which reduces interference from the foreground and background images and reduces the computational load of the processor's later processing.
- the feature length may be the hand contour length/width, the leg contour length/width, the head contour length/width, etc.; for example, when the hand contour is detected to become longer or wider, the hand can be judged to be moving toward the camera, and when the hand contour becomes shorter or narrower, the hand can be judged to be moving away from the camera, so the change of each body part in the z direction can be judged.
- the camera continuously adjusts its focal length while capturing the user image to keep the image sharp.
- from the change in the camera's focal length, it can therefore be judged whether the user is moving toward or away from the camera.
- either cue alone can determine the user's motion trajectory in the direction perpendicular to the imaging plane.
- a comprehensive judgment can also be made from the two cues together to obtain a more accurate result.
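The focal-length cue described above can be sketched as a simple per-frame direction classifier. This is illustrative only and not part of the disclosed embodiments: how the autofocus focal length is read out, and the sign of its relation to subject distance, are device-dependent assumptions.

```python
def depth_direction_from_focus(focal_lengths):
    """Classify per-frame motion along z from autofocus focal-length samples.

    Returns +1 when a reading suggests the user moved toward the camera,
    -1 for away, and 0 for no change.  The sign convention is an assumption;
    a real camera API may expose focus distance rather than focal length.
    """
    directions = []
    for prev, cur in zip(focal_lengths, focal_lengths[1:]):
        if cur < prev:
            directions.append(+1)   # assumed: smaller reading => user closer
        elif cur > prev:
            directions.append(-1)   # assumed: larger reading => user farther
        else:
            directions.append(0)
    return directions
```

In practice this coarse direction signal would be fused with the feature-length cue, as the description suggests, rather than used alone.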
- S6. Generate somatosensory data according to the first motion trajectory and the second motion trajectory.
- combining the first motion trajectory on the imaging plane and the second motion trajectory in the direction perpendicular to the imaging plane yields the user's motion trajectory in three-dimensional space, from which the somatosensory data can be obtained; inputting the somatosensory data into a smart TV or computer with somatosensory functionality makes somatosensory games playable.
- the method for implementing somatosensory control based on a smart device uses only the camera on a smart device such as a smartphone to acquire user image data, and obtains from that image data the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane, thereby obtaining the user's motion trajectory in three-dimensional space to generate somatosensory data; the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
- the ranging module may be an infrared ranging module or a laser ranging module, and may be connected to the smart device, such as a smartphone, by wire or wirelessly to transmit the measured distances to the smart device; the smart device acquires the distances from the parts of the user's body to the camera measured by the ranging module, corrects the second motion trajectory according to the obtained distances, and finally generates more accurate somatosensory data according to the first motion trajectory and the corrected second motion trajectory.
- the embodiment of the invention further provides a system for implementing somatosensory control based on a smart device, the smart device having a camera, the system comprising:
- the collecting unit 1 is configured to collect user image data
- An image contour acquiring unit 3 configured to acquire an image contour of the user according to the user image data
- a first motion trajectory unit 4 configured to acquire the first motion trajectory of the user on the imaging plane according to the image contour
- a second motion trajectory unit 5 configured to acquire the second motion trajectory of the user in the direction perpendicular to the imaging plane according to the change in feature length on the image contour and/or the change in the focal length of the camera; preferably, the feature length includes a hand contour length/width, a leg contour length/width, or a head contour length/width;
- the somatosensory data unit 7 is configured to generate somatosensory data according to the first motion trajectory and the second motion trajectory.
- the system for implementing somatosensory control based on a smart device uses only the camera on a smart device such as a smartphone to acquire user image data, and obtains from that image data the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane, thereby obtaining the user's motion trajectory in three-dimensional space to generate somatosensory data; the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
- the system for implementing somatosensory control based on the smart device further includes: a separation unit 2 configured to separate the user image from the foreground and the background between the collection of the user image data by the acquisition unit 1 and the acquisition of the user's image contour by the image contour acquisition unit 3 according to the user image data.
- the system for implementing somatosensory control based on the smart device further comprises: a correction unit 6 configured to correct the second motion trajectory, according to the distances from the parts of the user's body to the camera measured by the ranging module, between the acquisition by the second motion trajectory unit 5 of the user's second motion trajectory in the direction perpendicular to the imaging plane according to the change in feature length on the image contour and/or the change in the focal length of the camera, and the generation by the somatosensory data unit 7 of the somatosensory data according to the first motion trajectory and the second motion trajectory.
- the ranging module is an infrared ranging module or a laser ranging module.
- the embodiment of the present invention further provides a smart device, which may be a smart phone, a tablet computer, a notebook computer, etc., and includes:
- a processor configured to acquire the image contour of the user according to the user image data, acquire the first motion trajectory of the user on the imaging plane according to the image contour, acquire the second motion trajectory of the user in the direction perpendicular to the imaging plane according to the change in feature length on the image contour and/or the change in the focal length of the camera, and generate somatosensory data according to the first motion trajectory and the second motion trajectory.
- the smart device of the embodiment of the present invention can obtain the first motion trajectory of the user on the imaging plane and the second motion trajectory in the direction perpendicular to the imaging plane, thereby obtaining the motion trajectory of the user in the three-dimensional space.
- Generates somatosensory data allowing users to experience somatosensory technology without the need for additional equipment, which is conducive to the promotion and application of somatosensory technology.
- the processor is further configured to receive the distances from the parts of the user's body to the camera measured by the external ranging module, and to correct the second motion trajectory according to the distances.
- the embodiment discloses a system for implementing somatosensory control based on a smart device, the smart device having a camera, wherein the system includes: one or more processors 200; a memory 100; and one or more programs, the one or more programs being stored in the memory 100 and, when executed by the one or more processors 200, performing the following operations: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- as shown in FIG. 4, one processor 200 may be included, and as shown in FIG. 5, two processors 200 may be included.
- the user image is separated from the foreground and the background between acquiring the user image data and acquiring the image contour of the user according to the user image data.
- in the system of the present embodiment, preferably, between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to a change in feature length on the image contour and/or a change in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the second motion trajectory is corrected according to the distances from the parts of the user's body to the camera measured by the ranging module.
- embodiments of the invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
- the computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Studio Devices (AREA)
Abstract
A method, a system, and a smart device for implementing somatosensory control based on a smart device, wherein the method comprises: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
Description
This application claims priority to Chinese Patent Application No. 201511034014.6, entitled "Method, System, and Smart Device for Implementing Somatosensory Control Based on a Smart Device", filed with the Chinese Patent Office on December 31, 2015, the entire contents of which are incorporated herein by reference.
The present invention relates to the field of computer vision technology, and in particular to a method, a system, and a smart device for implementing somatosensory control based on a smart device.
With the development of computer vision technology, somatosensory games have gradually entered people's lives. Compared with traditional gamepad or joystick control, somatosensory game consoles sense human body movements through a somatosensory camera to operate games. For example, the Kinect for the Xbox 360 somatosensory game console produced by Microsoft Corporation uses three somatosensory cameras to capture human body movements and converts them into operation commands to control the game, so that players get a better operating experience and exercise their bodies while playing.
However, somatosensory cameras are still relatively expensive, which prevents some users from experiencing somatosensory games, whereas smart devices such as smartphones and tablet computers are extremely widespread. If the camera on a smartphone could serve as a somatosensory camera, the application of somatosensory technologies such as somatosensory games in people's lives would be greatly promoted.
Summary of the Invention
The technical problem to be solved by the present invention is that the high price of somatosensory cameras hinders the application of somatosensory technology in people's lives.
To this end, an embodiment of the present invention provides a method for implementing somatosensory control based on a smart device, the smart device having a camera, the method comprising: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
Preferably, the feature length includes a hand contour length/width, a leg contour length/width, or a head contour length/width.
Preferably, between collecting the user image data and acquiring the image contour of the user according to the user image data, the method further comprises: separating the user image from the foreground and the background.
Preferably, between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the method further comprises: correcting the second motion trajectory according to the distances from the parts of the user's body to the camera measured by a ranging module.
Preferably, the ranging module is an infrared ranging module or a laser ranging module.
An embodiment of the present invention further provides a system for implementing somatosensory control based on a smart device, the smart device having a camera, the system comprising: an acquisition unit for collecting user image data; an image contour acquisition unit for acquiring an image contour of the user according to the user image data; a first motion trajectory unit for acquiring a first motion trajectory of the user on the imaging plane according to the image contour; a second motion trajectory unit for acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and a somatosensory data unit for generating somatosensory data according to the first motion trajectory and the second motion trajectory.
Preferably, the system further comprises: a separation unit for separating the user image from the foreground and the background between the collection of the user image data by the acquisition unit and the acquisition of the user's image contour by the image contour acquisition unit according to the user image data.
Preferably, the system further comprises: a correction unit for correcting the second motion trajectory, according to the distances from the parts of the user's body to the camera measured by a ranging module, between the acquisition by the second motion trajectory unit of the user's second motion trajectory in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and the generation by the somatosensory data unit of the somatosensory data according to the first motion trajectory and the second motion trajectory.
An embodiment of the present invention further provides a smart device, comprising: a camera for collecting user image data; and a processor for acquiring the image contour of the user according to the user image data, acquiring the first motion trajectory of the user on the imaging plane according to the image contour, acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
Preferably, the processor is further configured to receive the distances from the parts of the user's body to the camera measured by an external ranging module, and to correct the second motion trajectory according to the distances.
This embodiment discloses a system for implementing somatosensory control based on a smart device, the smart device having a camera, wherein the system comprises:
one or more processors;
a memory; and
one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
collecting user image data;
acquiring an image contour of the user according to the user image data;
acquiring a first motion trajectory of the user on the imaging plane according to the image contour;
acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera;
generating somatosensory data according to the first motion trajectory and the second motion trajectory.
In the system, the user image is separated from the foreground and the background between collecting the user image data and acquiring the image contour of the user according to the user image data.
In the system, between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the second motion trajectory is corrected according to the distances from the parts of the user's body to the camera measured by the ranging module.
According to the method, system, and smart device for implementing somatosensory control based on a smart device of the embodiments of the present invention, only the camera on a smart device such as a smartphone is used to acquire user image data, from which the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane are obtained. The user's motion trajectory in three-dimensional space is thereby obtained to generate somatosensory data, so the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
The features and advantages of the present invention will be understood more clearly with reference to the accompanying drawings, which are schematic and should not be construed as limiting the invention in any way. In the drawings:
FIG. 1 is a schematic diagram of an application scenario of implementing somatosensory control based on a smart device according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for implementing somatosensory control based on a smart device according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a system for implementing somatosensory control based on a smart device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a system for implementing somatosensory control based on a smart device with one processor according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a system for implementing somatosensory control based on a smart device with two processors according to an embodiment of the present invention.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in FIG. 1, the method for implementing somatosensory control based on a smart device according to an embodiment of the present invention requires a smart device with a camera; the smart device may be a smartphone, a tablet computer, a notebook computer, or the like. Preferably, the user keeps a certain distance from the camera of the smart device so that the camera can capture image data of the user's whole body. Of course, some somatosensory controls require only hand movements, in which case the camera may capture image data of the user's hand only.
As shown in FIG. 2, an embodiment of the present invention provides a method for implementing somatosensory control based on a smart device, the smart device having a camera, the method comprising the following steps:
S1. Collect user image data. As shown in FIG. 1, the camera captures image data of the user on the imaging plane, i.e., the x-y plane.
S2. Separate the user image from the foreground and the background. This step is optional; any existing image separation method can be used to separate the user image from the foreground and the background, which reduces interference from the foreground and background images and reduces the computational load of the processor's later processing.
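Step S2 leaves the separation method open ("any existing image separation method"). As a minimal illustrative sketch, not the method the patent mandates, the user can be separated by frame differencing against a static background reference; a production system would more likely use an off-the-shelf subtractor such as OpenCV's `createBackgroundSubtractorMOG2`. The function name and threshold below are assumptions.

```python
import numpy as np

def separate_user(frame, background, thresh=30):
    """Zero out pixels close to a static background model.

    frame, background: H x W x 3 uint8 images; `background` is a reference
    frame captured without the user.  `thresh` is an assumed per-channel
    difference threshold.
    """
    # Compute per-pixel difference in a signed type to avoid uint8 wrap-around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff.max(axis=-1) > thresh           # True where the pixel changed
    return frame * mask[..., None].astype(frame.dtype)
```

The boolean mask produced here is also a convenient input to the contour-extraction step that follows.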
S3. Acquire the image contour of the user according to the user image data. For somatosensory control, only the motion trajectory of the user's body is needed, so there is no need to attend to other details of the user's body image; extracting the image contour reduces the computational load of the processor's later processing.
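Step S3 does not fix a contour algorithm either. As one illustrative sketch (assuming a binary user mask from the separation step; real systems would typically call `cv2.findContours`), the contour can be read off as the foreground pixels that have at least one background 4-neighbour:

```python
import numpy as np

def contour_of_mask(mask):
    """Return a boolean image marking foreground pixels that touch background.

    `mask` is a 2-D boolean array, True where the user is (assumed output
    of the separation step).
    """
    m = np.asarray(mask, dtype=bool)
    p = np.pad(m, 1, constant_values=False)
    # Interior pixels have all four direct neighbours set; the contour is
    # the foreground minus the interior.
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return m & ~interior
```

Working on this thin boundary instead of the full silhouette is exactly the computational saving the step describes.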
S4. Acquire the first motion trajectory of the user on the imaging plane according to the image contour. Since images are captured in real time, the first motion trajectory of the user on the x-y plane can easily be obtained from the changes between successive frames.
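The frame-to-frame change in S4 can be tracked, for example, by following the centroid of the user's mask across frames; centroid tracking is one simple choice and is an assumption here, not something the patent prescribes.

```python
import numpy as np

def planar_trajectory(masks):
    """First motion trajectory: the (x, y) centroid of each frame's mask.

    masks: sequence of 2-D boolean arrays, one per frame, each assumed to
    contain at least one True pixel.
    """
    trajectory = []
    for m in masks:
        ys, xs = np.nonzero(np.asarray(m, dtype=bool))
        trajectory.append((float(xs.mean()), float(ys.mean())))
    return trajectory
```

Consecutive trajectory samples then give the user's displacement on the x-y plane directly.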
S5. Acquire the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera. The closer the user is to the camera, the larger the generated image. Therefore, when the user moves toward the camera, the image gradually enlarges, so a gradual increase in feature length on the image contour indicates motion toward the camera; when the user moves away from the camera, the image gradually shrinks, so a gradual decrease in feature length indicates motion away from the camera. The feature length may be a hand contour length/width, a leg contour length/width, a head contour length/width, and so on. For example, when the hand contour is detected to become longer or wider, the hand can be judged to be moving toward the camera; when it becomes shorter or narrower, the hand can be judged to be moving away, and the change of each body part in the z direction can be determined in this way. Meanwhile, when the user moves in the direction perpendicular to the imaging plane, i.e., the z direction, the camera continuously adjusts its focal length while capturing the user image to keep the image sharp, so whether the user is moving toward or away from the camera can also be judged from the change in the camera's focal length. Either of the two cues alone can determine the user's motion trajectory in the direction perpendicular to the imaging plane; of course, to obtain a more accurate result, the two can also be combined for a comprehensive judgment.
S6. Generate somatosensory data according to the first motion trajectory and the second motion trajectory. Combining the first motion trajectory on the imaging plane with the second motion trajectory in the direction perpendicular to the imaging plane yields the user's motion trajectory in three-dimensional space, from which the somatosensory data can be obtained; inputting the somatosensory data into a smart TV or computer with somatosensory functionality allows the user to experience somatosensory games.
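Fusing the two trajectories from S4 and S5 into per-frame 3-D samples, as S6 describes, is then straightforward. What counts as a control event on top of the 3-D trajectory is not specified by the patent; the `is_push` threshold below is purely an illustrative assumption.

```python
def somatosensory_data(xy_trajectory, z_trajectory):
    """Fuse planar (x, y) samples and depth z samples into a 3-D trajectory."""
    return [(x, y, z) for (x, y), z in zip(xy_trajectory, z_trajectory)]

def is_push(trajectory, min_advance=0.3):
    """Illustrative gesture test: did the tracked part move at least
    `min_advance` metres toward the camera over the window?  The threshold
    and the gesture definition are assumptions, not part of the patent."""
    return trajectory[0][2] - trajectory[-1][2] >= min_advance
```

Downstream, such events would be translated into the operation commands consumed by the somatosensory game.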
According to the method for implementing somatosensory control based on a smart device of the embodiment of the present invention, only the camera on a smart device such as a smartphone is used to acquire user image data, from which the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane are obtained. The user's motion trajectory in three-dimensional space is thereby obtained to generate somatosensory data, so the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
Since the user's second motion trajectory in the direction perpendicular to the imaging plane is inferred from changes in feature length on the image contour and/or changes in the focal length of the camera, it may not satisfy applications requiring finer control, so it is necessary to correct the second motion trajectory. To this end, a ranging module is added to obtain the user's distance from the camera in the z direction more accurately. The ranging module may be an infrared ranging module or a laser ranging module, and may be connected to the smart device, such as a smartphone, by wire or wirelessly to transmit the measured distances to the smart device. The smart device acquires the distances from the parts of the user's body to the camera measured by the ranging module, corrects the second motion trajectory according to the obtained distances, and finally generates more accurate somatosensory data according to the first motion trajectory and the corrected second motion trajectory.
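The patent does not specify the correction rule that maps the ranging-module distances onto the second motion trajectory. One minimal assumption is a weighted blend of the image-based estimate toward the measured distance:

```python
def correct_depth(estimated, measured, trust=0.8):
    """Blend image-based z estimates toward ranging-module measurements.

    estimated, measured: per-frame distances (same length); `trust` is the
    weight given to the measurement and is an illustrative parameter, not
    something the patent defines.
    """
    return [(1.0 - trust) * e + trust * m for e, m in zip(estimated, measured)]
```

With `trust=1.0` this degenerates to using the ranging module outright; intermediate values keep the smoother image-based signal while anchoring it to the measurements.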
An embodiment of the present invention further provides a system for implementing somatosensory control based on a smart device, the smart device having a camera, the system comprising:
an acquisition unit 1 for collecting user image data;
an image contour acquisition unit 3 for acquiring the image contour of the user according to the user image data;
a first motion trajectory unit 4 for acquiring the first motion trajectory of the user on the imaging plane according to the image contour;
a second motion trajectory unit 5 for acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, wherein preferably the feature length includes a hand contour length/width, a leg contour length/width, or a head contour length/width; and
a somatosensory data unit 7 for generating somatosensory data according to the first motion trajectory and the second motion trajectory.
According to the system for implementing somatosensory control based on a smart device of the embodiment of the present invention, only the camera on a smart device such as a smartphone is used to acquire user image data, from which the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane are obtained. The user's motion trajectory in three-dimensional space is thereby obtained to generate somatosensory data, so the user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
Preferably, the above system further includes: a separation unit 2 for separating the user image from the foreground and the background between the collection of the user image data by the acquisition unit 1 and the acquisition of the user's image contour by the image contour acquisition unit 3 according to the user image data. This reduces interference from the foreground and background images and reduces the computational load of later processing.
Preferably, the above system further includes: a correction unit 6 for correcting the second motion trajectory, according to the distances from the parts of the user's body to the camera measured by the ranging module, between the acquisition by the second motion trajectory unit 5 of the user's second motion trajectory in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and the generation by the somatosensory data unit 7 of the somatosensory data according to the first motion trajectory and the second motion trajectory. Preferably, the ranging module is an infrared ranging module or a laser ranging module. More accurate somatosensory data can thus be obtained, meeting the needs of applications requiring finer control.
An embodiment of the present invention further provides a smart device, which may be a smartphone, a tablet computer, a notebook computer, or the like, comprising:
a camera for collecting user image data; and
a processor for acquiring the image contour of the user according to the user image data, acquiring the first motion trajectory of the user on the imaging plane according to the image contour, acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
Thus, the smart device of the embodiment of the present invention alone suffices to obtain the user's first motion trajectory on the imaging plane and second motion trajectory in the direction perpendicular to the imaging plane, thereby obtaining the user's motion trajectory in three-dimensional space to generate somatosensory data. The user can experience somatosensory technology without additional equipment, which is conducive to the popularization and application of somatosensory technology.
Preferably, the processor is further configured to receive the distances from the parts of the user's body to the camera measured by an external ranging module, and to correct the second motion trajectory according to the distances. More accurate somatosensory data can thus be obtained, meeting the needs of applications requiring finer control.
This embodiment discloses a system for implementing somatosensory control based on a smart device, the smart device having a camera, wherein the system comprises: one or more processors 200; a memory 100; and one or more programs, the one or more programs being stored in the memory 100 and, when executed by the one or more processors 200, performing the following operations: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory. Specifically, one processor 200 may be included as shown in FIG. 4, or two processors 200 may be included as shown in FIG. 5.
In the system of this embodiment, preferably, the user image is separated from the foreground and the background between collecting the user image data and acquiring the image contour of the user according to the user image data.
In the system of this embodiment, preferably, between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the second motion trajectory is corrected according to the distances from the parts of the user's body to the camera measured by the ranging module.
Those skilled in the art should also understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, those skilled in the art can make various modifications and variations without departing from the spirit and scope of the present invention, and all such modifications and variations fall within the scope defined by the appended claims.
Claims (13)
- A method for implementing somatosensory control based on a smart device, the smart device having a camera, characterized in that the method comprises: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- The method according to claim 1, characterized in that the feature length includes a hand contour length/width, a leg contour length/width, or a head contour length/width.
- The method according to claim 1, characterized in that between collecting the user image data and acquiring the image contour of the user according to the user image data, the method further comprises: separating the user image from the foreground and the background.
- The method according to any one of claims 1 to 3, characterized in that between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the method further comprises: correcting the second motion trajectory according to the distances from the parts of the user's body to the camera measured by a ranging module.
- The method according to claim 4, characterized in that the ranging module is an infrared ranging module or a laser ranging module.
- A system for implementing somatosensory control based on a smart device, the smart device having a camera, characterized in that the system comprises: an acquisition unit for collecting user image data; an image contour acquisition unit for acquiring an image contour of the user according to the user image data; a first motion trajectory unit for acquiring a first motion trajectory of the user on the imaging plane according to the image contour; a second motion trajectory unit for acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and a somatosensory data unit for generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- The system according to claim 6, characterized by further comprising: a separation unit for separating the user image from the foreground and the background between the collection of the user image data by the acquisition unit and the acquisition of the user's image contour by the image contour acquisition unit according to the user image data.
- The system according to claim 6 or 7, characterized by further comprising: a correction unit for correcting the second motion trajectory, according to the distances from the parts of the user's body to the camera measured by a ranging module, between the acquisition by the second motion trajectory unit of the user's second motion trajectory in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and the generation by the somatosensory data unit of the somatosensory data according to the first motion trajectory and the second motion trajectory.
- A smart device, characterized by comprising: a camera for collecting user image data; and a processor for acquiring an image contour of the user according to the user image data, acquiring a first motion trajectory of the user on the imaging plane according to the image contour, acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera, and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- The smart device according to claim 9, characterized in that the processor is further configured to receive the distances from the parts of the user's body to the camera measured by an external ranging module, and to correct the second motion trajectory according to the distances.
- A system for implementing somatosensory control based on a smart device, the smart device having a camera, characterized in that the system comprises: one or more processors; a memory; and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations: collecting user image data; acquiring an image contour of the user according to the user image data; acquiring a first motion trajectory of the user on the imaging plane according to the image contour; acquiring a second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera; and generating somatosensory data according to the first motion trajectory and the second motion trajectory.
- The system according to claim 11, characterized in that the user image is separated from the foreground and the background between collecting the user image data and acquiring the image contour of the user according to the user image data.
- The system according to claim 11 or 12, characterized in that between acquiring the second motion trajectory of the user in the direction perpendicular to the imaging plane according to changes in feature length on the image contour and/or changes in the focal length of the camera and generating the somatosensory data according to the first motion trajectory and the second motion trajectory, the second motion trajectory is corrected according to the distances from the parts of the user's body to the camera measured by the ranging module.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16763711.5A EP3206188A4 (en) | 2015-12-31 | 2016-07-04 | Method and system for realizing motion-sensing control based on intelligent device, and intelligent device |
JP2016570245A JP2018507448A (ja) | 2015-12-31 | 2016-07-04 | スマートデバイスに基づく体感制御の実現方法、システム及びスマートデバイス |
US15/243,966 US20170193668A1 (en) | 2015-12-31 | 2016-08-23 | Intelligent Equipment-Based Motion Sensing Control Method, Electronic Device and Intelligent Equipment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201511034014.6A | 2015-12-31 | 2015-12-31 | Method and system for realizing motion-sensing control based on intelligent device, and intelligent device |
CN201511034014.6 | 2015-12-31 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/243,966 Continuation US20170193668A1 (en) | 2015-12-31 | 2016-08-23 | Intelligent Equipment-Based Motion Sensing Control Method, Electronic Device and Intelligent Equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017113674A1 (zh) | 2017-07-06 |
Family
ID=57002309
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/088314 WO2017113674A1 (zh) | 2015-12-31 | 2016-07-04 | Method and system for realizing motion-sensing control based on intelligent device, and intelligent device |
Country Status (5)
Country | Link |
---|---|
US (1) | US20170193668A1 (zh) |
EP (1) | EP3206188A4 (zh) |
JP (1) | JP2018507448A (zh) |
CN (1) | CN105894533A (zh) |
WO (1) | WO2017113674A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106547357B (zh) * | 2016-11-22 | 2018-06-29 | 包磊 | Communication processing method and device for motion-sensing data |
CN107590823B (zh) * | 2017-07-21 | 2021-02-23 | 昆山国显光电有限公司 | Method and device for capturing three-dimensional form |
CN109064776A (zh) * | 2018-09-26 | 2018-12-21 | 广东省交通规划设计研究院股份有限公司 | Early warning method and system, computer device, and storage medium |
KR20210099988A (ko) | 2020-02-05 | 2021-08-13 | 삼성전자주식회사 | Method and apparatus for meta-learning of a neural network, and method and apparatus for learning class vectors of a neural network |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102074018A (zh) * | 2010-12-22 | 2011-05-25 | Tcl集团股份有限公司 | Contour tracking method based on depth information |
CN102226880A (zh) * | 2011-06-03 | 2011-10-26 | 北京新岸线网络技术有限公司 | Virtual-reality-based motion-sensing operation method and system |
CN102350057A (zh) * | 2011-10-21 | 2012-02-15 | 上海魔迅信息科技有限公司 | System and method for realizing motion-sensing game control based on a television set-top box |
WO2012128399A1 (en) * | 2011-03-21 | 2012-09-27 | Lg Electronics Inc. | Display device and method of controlling the same |
CN103345301A (zh) * | 2013-06-18 | 2013-10-09 | 华为技术有限公司 | Depth information acquisition method and device |
CN103679124A (zh) * | 2012-09-17 | 2014-03-26 | 原相科技股份有限公司 | Gesture recognition system and method |
CN105138111A (zh) * | 2015-07-09 | 2015-12-09 | 中山大学 | Motion-sensing interaction method and system based on a single camera |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6374225B1 (en) * | 1998-10-09 | 2002-04-16 | Enounce, Incorporated | Method and apparatus to prepare listener-interest-filtered works |
JP2002041038A (ja) * | 2000-07-31 | 2002-02-08 | Taito Corp | Virtual musical instrument playing device |
JP2006107060A (ja) * | 2004-10-04 | 2006-04-20 | Sharp Corp | Entry/exit detection device |
US11325029B2 (en) * | 2007-09-14 | 2022-05-10 | National Institute Of Advanced Industrial Science And Technology | Virtual reality environment generating apparatus and controller apparatus |
JP5520463B2 (ja) * | 2008-09-04 | 2014-06-11 | 株式会社ソニー・コンピュータエンタテインメント | Image processing device, object tracking device, and image processing method |
US8964298B2 (en) * | 2010-02-28 | 2015-02-24 | Microsoft Corporation | Video display modification based on sensor input for a see-through near-to-eye display |
US8576253B2 (en) * | 2010-04-27 | 2013-11-05 | Microsoft Corporation | Grasp simulation of a virtual object |
JP4650961B2 (ja) * | 2010-04-29 | 2011-03-16 | 株式会社バンダイナムコゲームス | Game device |
JP5438601B2 (ja) * | 2010-06-15 | 2014-03-12 | 日本放送協会 | Human motion determination device and program therefor |
US8475367B1 (en) * | 2011-01-09 | 2013-07-02 | Fitbit, Inc. | Biometric monitoring device having a body weight sensor, and methods of operating same |
US9734304B2 (en) * | 2011-12-02 | 2017-08-15 | Lumiradx Uk Ltd | Versatile sensors with data fusion functionality |
CN103577793B (zh) * | 2012-07-27 | 2017-04-05 | 中兴通讯股份有限公司 | Gesture recognition method and device |
CA2825635A1 (en) * | 2012-08-28 | 2014-02-28 | Solink Corporation | Transaction verification system |
WO2014159726A1 (en) * | 2013-03-13 | 2014-10-02 | Mecommerce, Inc. | Determining dimension of target object in an image using reference object |
CN108537628B (zh) * | 2013-08-22 | 2022-02-01 | 贝斯普客公司 | Method and system for creating customized products |
US20150058427A1 (en) * | 2013-08-23 | 2015-02-26 | Jean Rene' Grignon | Limited Area Temporary Instantaneous Network |
KR102233728B1 (ko) * | 2013-10-31 | 2021-03-30 | 삼성전자주식회사 | Method and apparatus for controlling an electronic device, and computer-readable recording medium |
KR20160065920A (ko) * | 2013-11-29 | 2016-06-09 | 인텔 코포레이션 | Camera control using face detection |
JP2015158745A (ja) * | 2014-02-21 | 2015-09-03 | 日本電信電話株式会社 | Action classifier generation device, action recognition device, and program |
US9916010B2 (en) * | 2014-05-16 | 2018-03-13 | Visa International Service Association | Gesture recognition cloud command platform, system, method, and apparatus |
US9922236B2 (en) * | 2014-09-17 | 2018-03-20 | Toyota Motor Engineering & Manufacturing North America, Inc. | Wearable eyeglasses for providing social and environmental awareness |
US9612722B2 (en) * | 2014-10-31 | 2017-04-04 | Microsoft Technology Licensing, Llc | Facilitating interaction between users and their environments using sounds |
US10213688B2 (en) * | 2015-08-26 | 2019-02-26 | Warner Bros. Entertainment, Inc. | Social and procedural effects for computer-generated environments |
- 2015-12-31 CN CN201511034014.6A patent/CN105894533A/zh active Pending
- 2016-07-04 WO PCT/CN2016/088314 patent/WO2017113674A1/zh active Application Filing
- 2016-07-04 EP EP16763711.5A patent/EP3206188A4/en not_active Withdrawn
- 2016-07-04 JP JP2016570245A patent/JP2018507448A/ja active Pending
- 2016-08-23 US US15/243,966 patent/US20170193668A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
CN105894533A (zh) | 2016-08-24 |
US20170193668A1 (en) | 2017-07-06 |
JP2018507448A (ja) | 2018-03-15 |
EP3206188A1 (en) | 2017-08-16 |
EP3206188A4 (en) | 2017-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10674142B2 (en) | Optimized object scanning using sensor fusion | |
JP7457082B2 (ja) | Reactive video generation method and generation program | |
US10659769B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US10755438B2 (en) | Robust head pose estimation with a depth camera | |
WO2019120032A1 (zh) | Model construction method, photographing method, device, storage medium, and terminal | |
CN103310186B (zh) | Method and portable terminal for correcting the gaze direction of a user in an image | |
US20160048964A1 (en) | Scene analysis for improved eye tracking | |
US9761013B2 (en) | Information notification apparatus that notifies information of motion of a subject | |
US20150077520A1 (en) | Information processor and information processing method | |
US20170316582A1 (en) | Robust Head Pose Estimation with a Depth Camera | |
KR101718837B1 (ko) | Application control method, apparatus, and electronic equipment | |
KR20170031733A (ko) | Techniques for adjusting the perspective of a captured image for display | |
WO2017113674A1 (zh) | Method and system for realizing motion-sensing control based on intelligent device, and intelligent device | |
JP2015526927A (ja) | Context-driven adjustment of camera parameters | |
US20150109528A1 (en) | Apparatus and method for providing motion haptic effect using video analysis | |
US20100145232A1 (en) | Methods and apparatuses for correcting sport postures captured by a digital image processing apparatus | |
WO2022174594A1 (zh) | Multi-camera-based bare-hand tracking and display method, device, and system | |
US20150379333A1 (en) | Three-Dimensional Motion Analysis System | |
KR20170078176A (ko) | Apparatus for providing a motion-recognition-based game, method therefor, and computer-readable recording medium storing the method | |
CN203630822U (zh) | Stage interaction integrated system combining virtual images with a real scene | |
US20210133985A1 (en) | Method, system, and computer-accessible recording medium for motion recognition based on an atomic pose | |
US10291845B2 (en) | Method, apparatus, and computer program product for personalized depth of field omnidirectional video | |
KR20200135998A (ko) | Position and posture detection method and device, electronic apparatus, and storage medium | |
KR102147930B1 (ko) | Pose recognition method and apparatus | |
KR101414362B1 (ko) | Method and apparatus for a spatial bezel interface based on image recognition | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
REEP | Request for entry into the european phase | Ref document number: 2016763711; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2016763711; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2016570245; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |