WO2017063530A1 - Motion information recognition method and system - Google Patents

Motion information recognition method and system

Info

Publication number
WO2017063530A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
action
change trend
joint
identified
Prior art date
Application number
PCT/CN2016/101638
Other languages
English (en)
French (fr)
Inventor
王鑫
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2017063530A1 publication Critical patent/WO2017063530A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training

Definitions

  • the present application relates to the field of computer vision technology, and in particular, to a method and system for identifying motion information.
  • In the field of computer vision technology, motion information recognition (such as human motion recognition) is a newly emerging but important branch, whose main purpose is to let a computer recognize the action that an action object is currently performing. Since a computer does not have human-like high-level understanding ability, using a computer for motion information recognition is a very challenging task.
  • In the prior art, after the actions of an action object are captured with a device such as a camera, action feature information is extracted in time series from the generated action images, and the action feature information is matched against an action database to recognize the human action.
  • Specifically, the generated action images are trained with algorithms such as hidden Markov models and AdaBoost to obtain the action feature information, which is matched against the action database, thereby identifying the action performed by the action object.
  • In the prior art, motion information recognition is mostly based on complex algorithms with a large amount of computation, resulting in poor timeliness of motion information recognition.
  • the purpose of the embodiments of the present application is to provide an action information identification method and system for improving the timeliness of motion information recognition.
  • An action information identification method provided by an embodiment of the present application includes:
  • receiving joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
  • calculating joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
  • obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data;
  • querying the pre-stored action database for action information that matches the change trend information.
  • An action information recognition system comprising:
  • An acquiring unit configured to receive joint point coordinate data of the object to be recognized captured by the motion capture device within a preset duration
  • a first processing unit configured to calculate, according to joint point coordinate data of the object to be identified, joint angle data of the object to be identified
  • a second processing unit configured to obtain, according to the joint angle data, change trend information of the object to be identified within the preset duration
  • a matching unit configured to query, from the pre-stored action database, action information that matches the change trend information.
  • The action information identification method and system provided by the embodiments of the present application recognize the action of the object to be identified through the change trend information of its joint angle data within a preset duration. Hence the server only needs to calculate one piece of feature information, namely the joint angle data of the object to be identified, which reduces the amount of computation and improves the timeliness of motion information recognition.
  • FIG. 1 is a flowchart of a motion information recognition method provided in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of the joint points of a human body provided in the present application;
  • FIG. 3 is a schematic diagram of the three-dimensional coordinates of the left elbow joint point, the left shoulder joint point, and the left wrist joint point provided in the present application;
  • FIG. 4 is a schematic diagram of the 20 joint angles provided in the present application;
  • FIG. 5 is a specific flowchart of step S110 in FIG. 1;
  • FIG. 6 is a schematic diagram of the change trend information provided in the present application;
  • FIG. 7 is a specific flowchart of step S130 in FIG. 1;
  • FIG. 8 is a schematic diagram of the matching process provided in the present application;
  • FIG. 9 is another specific flowchart of step S130 in FIG. 1;
  • FIG. 10 is a block diagram of a motion information recognition system provided in an embodiment of the present application.
  • FIG. 1 is a flowchart of a method for identifying motion information provided in an embodiment of the present application.
  • the action information identifying method includes the following steps:
  • the server receives joint point coordinate data of the object to be recognized captured by the motion capture device within a preset duration.
  • The motion capture device may be a somatosensory interaction terminal, such as Microsoft's Kinect, Nintendo's Wii, Sony's PS Move, or ASUS's Xtion.
  • the motion capture device can acquire joint point coordinate data of the object to be identified in real time.
  • the joint point coordinate data may include three-dimensional coordinate data of a joint point.
  • The preset duration may be an empirically set value, for example 10 seconds.
  • Taking Microsoft's Kinect as an example, FIG. 2 is a schematic diagram of the joint points of the human body. The Kinect can acquire the three-dimensional coordinates of 20 joint points of the human body.
  • The 20 joint points are: right hand joint point, right wrist joint point, right elbow joint point, right shoulder joint point, head joint point, shoulder-center joint point, left shoulder joint point, left elbow joint point, left wrist joint point, left hand joint point, spine joint point, hip-center joint point, right hip joint point, right knee joint point, right ankle joint point, right foot joint point, left hip joint point, left knee joint point, left ankle joint point, and left foot joint point.
  • For example, the server receives the joint point data of the object to be identified as shown in FIG. 3: the three-dimensional coordinates of the left elbow joint point E are (E_x, E_y, E_z), the three-dimensional coordinates of the left shoulder joint point S are (S_x, S_y, S_z), and the three-dimensional coordinates of the left wrist joint point W are (W_x, W_y, W_z).
  • S110 Calculate joint angle data of the object to be identified according to joint point coordinate data of the object to be identified;
  • As shown in FIG. 4, 20 joint angles can be constructed from the 20 joint points of the human body in FIG. 2. They are: the right wrist joint angle (right hand joint point - right wrist joint point - right elbow joint point), which for convenience of recording is defined as the first angle 3A; likewise, the second angle 3B (right wrist joint point - right elbow joint point - right shoulder joint point); the third angle 3C (right elbow joint point - right shoulder joint point - shoulder-center joint point); the fourth angle 3D (head joint point - shoulder-center joint point - right shoulder joint point); the fifth angle 3E (head joint point - shoulder-center joint point - spine joint point); the sixth angle 3F (head joint point - shoulder-center joint point - left shoulder joint point); the seventh angle 3G (right shoulder joint point - shoulder-center joint point - left shoulder joint point); the eighth angle 3H (shoulder-center joint point - left shoulder joint point - left elbow joint point); the ninth angle 3J (left shoulder joint point - left elbow joint point - left wrist joint point); the tenth angle 3K (left elbow joint point - left wrist joint point - left hand joint point); the eleventh angle 3L (shoulder-center joint point - spine joint point - hip-center joint point); the twelfth angle 3M (spine joint point - hip-center joint point - right hip joint point); the thirteenth angle 3N (hip-center joint point - right hip joint point - right knee joint point); the fourteenth angle 3P (right hip joint point - right knee joint point - right ankle joint point); the fifteenth angle 3Q (right knee joint point - right ankle joint point - right foot joint point); the sixteenth angle 3R (right hip joint point - hip-center joint point - left hip joint point); the seventeenth angle 3S (spine joint point - hip-center joint point - left hip joint point); the eighteenth angle 3T (hip-center joint point - left hip joint point - left knee joint point); the nineteenth angle 3U (left hip joint point - left knee joint point - left ankle joint point); and the twentieth angle 3V (left knee joint point - left ankle joint point - left foot joint point).
  • the step S110 may include steps S111, S112, and S113, as shown in FIG. 5:
  • S111 Determine a first vector and a second vector corresponding to the target joint of the object to be identified according to the joint coordinate data of the object to be identified;
  • In a three-dimensional coordinate system, any two non-coincident coordinate points, for example A(x_1, y_1, z_1) and B(x_2, y_2, z_2), form a vector given by the following formula (1):
  • $$\vec{AB} = (x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1) \tag{1}$$
  • where x_1, y_1, z_1 are the three-dimensional coordinates of coordinate point A, and x_2, y_2, z_2 are the three-dimensional coordinates of coordinate point B.
  • Following the example given in step S100, in the ninth joint angle (left shoulder joint point - left elbow joint point - left wrist joint point), the three-dimensional coordinates of the left elbow joint point E are (E_x, E_y, E_z), the three-dimensional coordinates of the left shoulder joint point S are (S_x, S_y, S_z), and the three-dimensional coordinates of the left wrist joint point W are (W_x, W_y, W_z).
  • Using formula (1) above, the server obtains the following first vector and second vector:
  • $$\vec{ES} = (S_x - E_x,\ S_y - E_y,\ S_z - E_z), \qquad \vec{EW} = (W_x - E_x,\ W_y - E_y,\ W_z - E_z)$$
  • where \(\vec{ES}\) is the first vector, from the left elbow joint point to the left shoulder joint point, and \(\vec{EW}\) is the second vector, from the left elbow joint point to the left wrist joint point.
  • S112 Perform modulo calculation on the first vector and the second vector to obtain the modulus values corresponding to the first and second vectors;
  • The server performs the modulo calculation on the first vector \(\vec{ES}\) using the following formula (2), obtaining the modulus value corresponding to the first vector:
  • $$|\vec{ES}| = \sqrt{(S_x - E_x)^2 + (S_y - E_y)^2 + (S_z - E_z)^2} \tag{2}$$
  • The server performs the modulo calculation on the second vector \(\vec{EW}\) using the following formula (3), obtaining the modulus value corresponding to the second vector:
  • $$|\vec{EW}| = \sqrt{(W_x - E_x)^2 + (W_y - E_y)^2 + (W_z - E_z)^2} \tag{3}$$
  • S113 Calculate the joint angle data of the target joint according to the first and second vectors and the modulus values corresponding to the first and second vectors;
  • The server can obtain the joint angle data of the target joint using the following formula (4):
  • $$\theta = \arccos\left(\frac{\vec{ES} \cdot \vec{EW}}{|\vec{ES}|\,|\vec{EW}|}\right) \tag{4}$$
  • where θ is the joint angle data.
  • S120 Obtain, according to the joint angle data, change trend information of the object to be identified within the preset duration;
  • the change trend information includes a curve of joint angle variation.
  • step S120 may include the following steps:
  • the change trend information is obtained according to the sequence of the sampling time and the corresponding joint angle data at each sampling time.
  • When collecting the joint point coordinate data of the object to be identified, the motion capture device samples at a certain frequency, so each sampled set of joint point coordinate data within the preset duration corresponds to one sampling moment.
  • Suppose the motion capture device samples 10 groups of joint point coordinate data of the object to be identified within the preset duration; correspondingly, 10 sets of joint angle data of the object to be identified can be calculated through the step S110. Further, the sampling moments within the preset duration corresponding to the 10 sets of joint angle data can be acquired.
  • The 10 joint angle data, arranged in the order of the sampling moments t (t0, t1, ..., t9), yield the change trend information shown in FIG. 6.
  • In FIG. 6, the abscissa represents the time t, the ordinate represents the joint angle θ, and the black dots represent the joint angles.
  • The 10 joint angle data in FIG. 6 form an angle change curve in the coordinate system.
  • the method further includes:
  • The action information to be stored is mapped to the change trend information corresponding to that action information and stored in the action database; the change trend information is a set of several pieces of angle information.
  • For example, suppose the action information to be stored is a hand-raising action.
  • The server may map the hand-raising action to the change trend information corresponding to the hand-raising action, so that the action information in the mapping relationship can be obtained from the change trend information.
  • The server may further store the change trend information forming the mapping relationship in the action database, for matching against objects to be identified.
  • the server queries the pre-stored action database for action information that matches the change trend information
  • the pre-stored action database stores pre-stored change trend information.
  • the pre-stored change trend information is associated with action information.
  • The pre-stored change trend information represents joint point data collected by the same motion capture device (for example, Microsoft's Kinect, Nintendo's Wii, Sony's PS Move, or ASUS's Xtion) as that of the object to be identified.
  • the step S130 may include steps S131, S132, and S133, as shown in FIG. 7:
  • S131 Perform similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference value;
  • Since the change trend information and the change trend information pre-stored in the action database are both constructed in the same coordinate system, in which the abscissa represents the time t and the ordinate represents the joint angle θ, the similarity between the change trend information and the pre-stored change trend information in the action database can be calculated in the same coordinate system.
  • Suppose the change trend information is curve A and the pre-stored change trend information in the action database is curve B. As shown in FIG. 8, a_1 and a_2 are respectively the minimum point and the maximum point of the sample angle curve A, and b_1 and b_2 are respectively the minimum point and the maximum point of the angle curve B of the target joint point.
  • The coordinates of a_1 are (t_1, θ_1), the coordinates of a_2 are (t_2, θ_2), the coordinates of b_1 are (t_3, θ_3), and the coordinates of b_2 are (t_4, θ_4).
  • The first distance from the maximum point a_2 of curve A to the maximum point b_2 of curve B can be calculated according to the following formula (5):
  • $$d_1(A, B) = \sqrt{(t_2 - t_4)^2 + (\theta_2 - \theta_4)^2} \tag{5}$$
  • The second distance from the minimum point a_1 of curve A to the minimum point b_1 of curve B can be calculated according to the following formula (6):
  • $$d_2(A, B) = \sqrt{(t_1 - t_3)^2 + (\theta_1 - \theta_3)^2} \tag{6}$$
  • The difference value is the absolute value obtained by subtracting the second distance d_2(A, B) from the first distance d_1(A, B), that is, |d_1(A, B) - d_2(A, B)| (formula (7)).
  • S132 Determine whether the difference is less than a preset threshold
  • The preset threshold may be an empirically set value ε.
  • the server queries the action information of the object to be identified as the action information associated with the pre-stored change trend information.
  • The action of the object to be identified is recognized through the change trend information of its joint angle data within a preset duration, so the server only needs to calculate one piece of feature information, namely the joint angle data of the object to be identified, which reduces the amount of computation and improves the timeliness of motion information recognition.
  • the step S130 may include steps S134, S135, and S136, as shown in FIG. 9:
  • S134 Perform similarity calculation on the change trend information and each change trend information pre-stored in the action database to obtain each difference value;
  • This step is similar to the process of calculating the difference in the step S131.
  • The difference is that in this step the server calculates the similarity between the change trend information and each piece of change trend information pre-stored in the action database, and obtains the corresponding difference values.
  • S135 Determine whether a minimum difference among the differences is less than a preset threshold
  • This step is different from the step S132 in that it is first necessary to obtain a minimum difference among the differences, and then determine whether the minimum difference is less than a preset threshold.
  • Through this embodiment, among all the pre-stored change trend information in the action database, the pre-stored change trend information closest to the change trend information of the object to be identified can be obtained. In this way, the action information of the object to be identified can be recognized more accurately.
  • FIG. 10 is a schematic block diagram of an action information identification system provided in an embodiment of the present application.
  • the action information identification system includes:
  • the acquiring unit 200 is configured to receive joint point coordinate data of the object to be recognized captured by the motion capture device within a preset duration;
  • the first processing unit 210 is configured to calculate joint angle data of the object to be identified according to joint point coordinate data of the object to be identified;
  • a second processing unit 220 configured to obtain, according to the joint angle data, change trend information of the object to be identified within the preset duration
  • the matching unit 230 is configured to query, from the pre-stored action database, action information that matches the change trend information.
  • Preferably, after the second processing unit, the system further includes:
  • a storage unit configured to map each piece of action information to be stored to the change trend information corresponding to that action information and store it in the action database; the change trend information is a set of several pieces of angle information.
  • the first processing unit specifically includes:
  • a first processing subunit configured to determine, according to the joint coordinate data of the object to be identified, a first vector and a second vector corresponding to the target joint of the object to be identified;
  • a second processing sub-unit configured to perform modulo calculation on the first vector and the second vector to obtain a modulus value corresponding to the first and second vectors
  • a third processing subunit configured to calculate joint angle data of the target joint according to the first and second vectors and a modulus corresponding to the first and second vectors.
  • the change trend information includes a joint angle change curve
  • the second processing unit specifically includes:
  • a fourth processing subunit configured to acquire a sampling moment within the preset duration corresponding to the joint angle data
  • the fifth processing sub-unit is configured to obtain the change trend information according to the sequence of the sampling moments and the corresponding joint angle data at each sampling moment.
  • the matching unit specifically includes:
  • a first matching sub-unit configured to perform similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference
  • a second matching subunit configured to determine whether the difference is less than a preset threshold
  • a third matching subunit configured to: when the difference is less than the preset threshold, query the action information of the object to be identified as the action information associated with the pre-stored change trend information (the sample information).
  • the matching unit specifically includes:
  • a fourth matching subunit configured to perform similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database to obtain respective difference values;
  • a fifth matching subunit configured to determine whether a minimum difference among the differences is less than a preset threshold
  • a sixth matching subunit configured to: when the minimum difference among the differences is less than the preset threshold, query the action information of the object to be identified as the action information associated with the change trend information corresponding to the minimum difference.
  • the controller can be implemented in any suitable manner, for example, the controller can take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (eg, software or firmware) executable by the (micro)processor.
  • examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of the memory.
  • by logically programming the method steps, the controller can also implement the same functions in the form of logic gates, switches, ASICs, programmable logic controllers, embedded microcontrollers, and the like.
  • Such a controller can therefore be considered a hardware component, and the means for implementing various functions included therein can also be considered as a structure within the hardware component.
  • a device for implementing various functions can be regarded both as a software module implementing the method and as a structure within a hardware component.
  • the system, device, module or unit illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product having a certain function.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology.
  • the information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carrier waves.
  • embodiments of the present application can be provided as a method, system, or computer program product.
  • the present application can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment in combination of software and hardware.
  • the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the application can be described in the general context of computer-executable instructions executed by a computer, such as a program module.
  • program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
  • the present application can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are connected through a communication network.
  • program modules can be located in both local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A motion information recognition method and system. The method includes: receiving joint coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device (S100); calculating joint angle data of the object to be recognized according to the joint coordinate data of the object to be recognized (S110); obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data (S120); and querying a pre-stored action database for action information matching the change trend information (S130). The method can improve the timeliness of motion information recognition.

Description

Motion information recognition method and system
This application claims priority to Chinese Patent Application No. 201510671365.1, filed on October 15, 2015 and entitled "Motion Information Recognition Method and System", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer vision technology, and in particular to a motion information recognition method and system.
Background
In the field of computer vision technology, motion information recognition (for example, human motion recognition) is a newly emerging but important branch, whose main purpose is to enable a computer to recognize the action that an action object is currently performing. Since a computer itself does not possess human-like high-level understanding ability, using a computer for motion information recognition is a highly challenging task.
Motion information recognition has very broad application prospects; for example, it can play an important role in human-computer interaction, video conferencing, video retrieval, autonomous patient monitoring, intelligent security surveillance, and other scenarios. Research on motion information recognition is therefore highly necessary.
In the prior art, after the actions of an action object are captured with a device such as a camera, action feature information is generally extracted from the generated action images in time series, and the action feature information is matched against an action database, thereby recognizing the human action. Specifically, the generated action images are trained with algorithms such as hidden Markov models or AdaBoost to obtain the action feature information, which is then matched against the action database, thereby identifying the action performed by the action object.
In the course of implementing the present application, the inventors found that the prior art has at least the following problem:
In the prior art, motion information recognition is mostly based on complex algorithms with a large amount of computation, resulting in poor timeliness of motion information recognition.
Summary
The purpose of the embodiments of the present application is to provide a motion information recognition method and system for improving the timeliness of motion information recognition.
To solve the above technical problem, an embodiment of the present application provides a motion information recognition method, including:
receiving joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
calculating joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data;
querying a pre-stored action database for action information matching the change trend information.
A motion information recognition system, including:
an acquiring unit configured to receive joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
a first processing unit configured to calculate joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
a second processing unit configured to obtain, according to the joint angle data, change trend information of the object to be recognized within the preset duration;
a matching unit configured to query a pre-stored action database for action information matching the change trend information.
As can be seen from the above technical solutions, the motion information recognition method and system provided by the embodiments of the present application recognize the action of the object to be recognized through the change trend information of its joint angle data within a preset duration. In this way, the server only needs to calculate one piece of feature information, namely the joint angle data of the object to be recognized, which reduces the amount of computation and improves the timeliness of motion information recognition.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart of a motion information recognition method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of the joint points of a human body provided in the present application;
FIG. 3 is a schematic diagram of the three-dimensional coordinates of the left elbow joint point, the left shoulder joint point, and the left wrist joint point provided in the present application;
FIG. 4 is a schematic diagram of the 20 joint angles provided in the present application;
FIG. 5 is a specific flowchart of step S110 in FIG. 1;
FIG. 6 is a schematic diagram of the change trend information provided in the present application;
FIG. 7 is a specific flowchart of step S130 in FIG. 1;
FIG. 8 is a schematic diagram of the matching process provided in the present application;
FIG. 9 is another specific flowchart of step S130 in FIG. 1;
FIG. 10 is a block diagram of a motion information recognition system provided in an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
FIG. 1 is a flowchart of a motion information recognition method provided in an embodiment of the present application. In this embodiment, the motion information recognition method includes the following steps:
S100: receiving joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
The server receives the joint point coordinate data of the object to be recognized, captured within the preset duration and sent by the motion capture device.
The motion capture device may be a somatosensory interaction terminal, for example Microsoft's Kinect, Nintendo's Wii, Sony's PS Move, or ASUS's Xtion.
The motion capture device can acquire the joint point coordinate data of the object to be recognized in real time.
The joint point coordinate data may include three-dimensional coordinate data of joint points.
The preset duration may be an empirically set value, for example 10 seconds.
Taking Microsoft's Kinect as an example, FIG. 2 is a schematic diagram of the joint points of a human body. Using the Kinect, the three-dimensional coordinates of 20 joint points of the human body can be acquired. The 20 joint points are: right hand joint point, right wrist joint point, right elbow joint point, right shoulder joint point, head joint point, shoulder-center joint point, left shoulder joint point, left elbow joint point, left wrist joint point, left hand joint point, spine joint point, hip-center joint point, right hip joint point, right knee joint point, right ankle joint point, right foot joint point, left hip joint point, left knee joint point, left ankle joint point, and left foot joint point.
For example, the server receives the joint point data of the object to be recognized as shown in FIG. 3: the three-dimensional coordinates of the left elbow joint point E are (E_x, E_y, E_z), the three-dimensional coordinates of the left shoulder joint point S are (S_x, S_y, S_z), and the three-dimensional coordinates of the left wrist joint point W are (W_x, W_y, W_z).
S110: calculating joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
As shown in FIG. 4, 20 joint angles can be constructed from the 20 human joint points in FIG. 2. They are: the right wrist joint angle (right hand joint point - right wrist joint point - right elbow joint point), which for convenience of recording is defined as the first angle 3A; likewise, the second angle 3B (right wrist joint point - right elbow joint point - right shoulder joint point); the third angle 3C (right elbow joint point - right shoulder joint point - shoulder-center joint point); the fourth angle 3D (head joint point - shoulder-center joint point - right shoulder joint point); the fifth angle 3E (head joint point - shoulder-center joint point - spine joint point); the sixth angle 3F (head joint point - shoulder-center joint point - left shoulder joint point); the seventh angle 3G (right shoulder joint point - shoulder-center joint point - left shoulder joint point); the eighth angle 3H (shoulder-center joint point - left shoulder joint point - left elbow joint point); the ninth angle 3J (left shoulder joint point - left elbow joint point - left wrist joint point); the tenth angle 3K (left elbow joint point - left wrist joint point - left hand joint point); the eleventh angle 3L (shoulder-center joint point - spine joint point - hip-center joint point); the twelfth angle 3M (spine joint point - hip-center joint point - right hip joint point); the thirteenth angle 3N (hip-center joint point - right hip joint point - right knee joint point); the fourteenth angle 3P (right hip joint point - right knee joint point - right ankle joint point); the fifteenth angle 3Q (right knee joint point - right ankle joint point - right foot joint point); the sixteenth angle 3R (right hip joint point - hip-center joint point - left hip joint point); the seventeenth angle 3S (spine joint point - hip-center joint point - left hip joint point); the eighteenth angle 3T (hip-center joint point - left hip joint point - left knee joint point); the nineteenth angle 3U (left hip joint point - left knee joint point - left ankle joint point); and the twentieth angle 3V (left knee joint point - left ankle joint point - left foot joint point).
Specifically, step S110 may include steps S111, S112, and S113, as shown in FIG. 5:
S111: determining a first vector and a second vector corresponding to a target joint of the object to be recognized according to the joint coordinate data of the object to be recognized;
In a three-dimensional space coordinate system, any two non-coincident three-dimensional coordinate points, for example coordinate point A(x_1, y_1, z_1) and coordinate point B(x_2, y_2, z_2), form a vector given by the following formula (1):

$$\vec{AB} = (x_2 - x_1,\ y_2 - y_1,\ z_2 - z_1) \tag{1}$$

where x_1, y_1, z_1 are the three-dimensional coordinates of coordinate point A, and x_2, y_2, z_2 are the three-dimensional coordinates of coordinate point B.
Continuing the example given in step S100, as shown in FIG. 3, in the ninth joint angle (left shoulder joint point - left elbow joint point - left wrist joint point), the three-dimensional coordinates of the left elbow joint point E are (E_x, E_y, E_z), the three-dimensional coordinates of the left shoulder joint point S are (S_x, S_y, S_z), and the three-dimensional coordinates of the left wrist joint point W are (W_x, W_y, W_z). Using formula (1) above, the server obtains the following first vector and second vector:

$$\vec{ES} = (S_x - E_x,\ S_y - E_y,\ S_z - E_z)$$

where \(\vec{ES}\) is the first vector, from the left elbow joint point to the left shoulder joint point;

$$\vec{EW} = (W_x - E_x,\ W_y - E_y,\ W_z - E_z)$$

where \(\vec{EW}\) is the second vector, from the left elbow joint point to the left wrist joint point.
S112: performing modulo calculation on the first vector and the second vector to obtain modulus values corresponding to the first and second vectors;
Continuing the example given in step S111, the server performs the modulo calculation on the first vector \(\vec{ES}\) using the following formula (2), obtaining the modulus value corresponding to the first vector:

$$|\vec{ES}| = \sqrt{(S_x - E_x)^2 + (S_y - E_y)^2 + (S_z - E_z)^2} \tag{2}$$

where \(|\vec{ES}|\) is the modulus value corresponding to the first vector \(\vec{ES}\).
The server performs the modulo calculation on the second vector \(\vec{EW}\) using the following formula (3), obtaining the modulus value corresponding to the second vector:

$$|\vec{EW}| = \sqrt{(W_x - E_x)^2 + (W_y - E_y)^2 + (W_z - E_z)^2} \tag{3}$$

where \(|\vec{EW}|\) is the modulus value corresponding to the second vector \(\vec{EW}\).
S113: calculating the joint angle data of the target joint according to the first and second vectors and the modulus values corresponding to the first and second vectors;
Continuing the example given in step S112, the server can obtain the joint angle data of the target joint using the following formula (4):

$$\theta = \arccos\left(\frac{\vec{ES} \cdot \vec{EW}}{|\vec{ES}|\,|\vec{EW}|}\right) \tag{4}$$

where θ is the joint angle data.
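To make the calculation of steps S111 to S113 concrete, the following is a minimal sketch of formulas (1) to (4) in Python. It is an illustration rather than part of the patent text: the function name joint_angle, the NumPy dependency, and the sample coordinates are all assumptions.

```python
import numpy as np

def joint_angle(vertex, p1, p2):
    """Joint angle (in degrees) at `vertex` formed with points p1 and p2.

    Follows formulas (1)-(4): build the two vectors starting at the
    vertex joint point, take their moduli, then apply arccos to the
    normalized dot product.
    """
    v1 = np.asarray(p1, dtype=float) - np.asarray(vertex, dtype=float)  # first vector, formula (1)
    v2 = np.asarray(p2, dtype=float) - np.asarray(vertex, dtype=float)  # second vector, formula (1)
    m1, m2 = np.linalg.norm(v1), np.linalg.norm(v2)  # modulus values, formulas (2) and (3)
    cos_theta = np.dot(v1, v2) / (m1 * m2)           # formula (4)
    # clip guards against rounding slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Ninth angle 3J: left shoulder (S) - left elbow (E) - left wrist (W),
# with the elbow E as the vertex; the coordinates are made up.
E, S, W = (0.1, 1.2, 2.0), (0.0, 1.5, 2.0), (0.3, 1.0, 1.9)
theta_9 = joint_angle(E, S, W)
```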
S120: obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data;
The change trend information includes a joint angle change curve.
Specifically, step S120 may include the following steps:
acquiring the sampling moments within the preset duration corresponding to the joint angle data;
obtaining the change trend information according to the order of the sampling moments and the joint angle data corresponding to each sampling moment.
When collecting the joint point coordinate data of the object to be recognized, the motion capture device samples at a certain frequency, so each sampled set of joint point coordinate data within the preset duration corresponds to one sampling moment.
Suppose the motion capture device samples 10 groups of joint point coordinate data of the object to be recognized within the preset duration; correspondingly, 10 sets of joint angle data of the object to be recognized can be calculated through step S110. Further, the sampling moments within the preset duration corresponding to the 10 sets of joint angle data can be acquired. For example, arranging the 10 joint angle data in the order of the sampling moments t (t0, t1, ..., t9) yields the change trend information shown schematically in FIG. 6, in which the abscissa represents the time t, the ordinate represents the joint angle θ, and the black dots represent the joint angles. The 10 joint angle data in FIG. 6 form an angle change curve in the coordinate system.
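As an illustration of step S120, the sketch below (again an assumption-laden example, not the patent's own code) turns 10 sampled frames of the three joint points into a (t, θ) angle change curve, reusing the joint_angle helper from the previous sketch.

```python
# One entry per sampling moment t0..t9; each frame holds the three
# joint point coordinates needed for the ninth angle 3J. The motion
# here is synthetic, standing in for real motion capture output.
frames = [
    {"t": 0.1 * i,
     "E": (0.1, 1.2 + 0.02 * i, 2.0),
     "S": (0.0, 1.5, 2.0),
     "W": (0.3, 1.0 + 0.05 * i, 1.9)}
    for i in range(10)
]

# Change trend information: the joint angle arranged in sampling order.
trend_curve = [(f["t"], joint_angle(f["E"], f["S"], f["W"])) for f in frames]
```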
In another embodiment of the present application, after step S120, the method further includes:
mapping the action information to be stored to the change trend information corresponding to that action information and storing it in the action database; the change trend information is a set of several pieces of angle information.
For a piece of action information to be stored, such as a hand-raising action, the server may map the hand-raising action to the change trend information corresponding to the hand-raising action, so that the action information in the mapping relationship can be obtained from the change trend information. The server may further store the change trend information forming the mapping relationship in the action database, for matching against objects to be recognized.
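An action database of the kind described here can be pictured as a plain mapping from action labels to their pre-stored change trend information; the structure and values below are only an illustration of the mapping relationship, not a format prescribed by the application.

```python
# Each entry maps an action label to its pre-stored change trend
# information: (t, theta) samples for one joint angle, in time order.
action_database = {
    "raise_hand": [(0.0, 20.0), (0.3, 85.0), (0.6, 160.0), (0.9, 150.0)],
    "wave":       [(0.0, 90.0), (0.3, 140.0), (0.6, 90.0), (0.9, 140.0)],
}
```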
S130: querying the pre-stored action database for action information matching the change trend information;
The server queries the pre-stored action database for action information matching the change trend information.
The pre-stored action database stores pre-stored change trend information. The pre-stored change trend information is associated with action information. The pre-stored change trend information represents joint point data collected by the same motion capture device (for example, Microsoft's Kinect, Nintendo's Wii, Sony's PS Move, or ASUS's Xtion) as that of the object to be recognized.
Specifically, step S130 may include steps S131, S132, and S133, as shown in FIG. 7:
S131: performing similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference value;
Since the change trend information and the change trend information pre-stored in the action database are both constructed in the same coordinate system, in which the abscissa represents the time t and the ordinate represents the joint angle θ, the similarity calculation between the change trend information and the pre-stored change trend information in the action database can be performed in the same coordinate system.
Suppose the change trend information is curve A and the change trend information pre-stored in the action database is curve B.
The similarity calculation is as follows:
(A1) acquiring the extreme points of curve A and curve B;
(A2) calculating a first distance d_1(A, B) from the maximum point of curve A to the maximum point of curve B;
(A3) calculating a second distance d_2(A, B) from the minimum point of curve A to the minimum point of curve B;
(A4) obtaining the difference value according to the first distance and the second distance.
As shown in FIG. 8, a_1 and a_2 are respectively the minimum point and the maximum point of the sample angle curve A, and b_1 and b_2 are respectively the minimum point and the maximum point of the angle curve B of the target joint point. Suppose the coordinates of a_1 are (t_1, θ_1), the coordinates of a_2 are (t_2, θ_2), the coordinates of b_1 are (t_3, θ_3), and the coordinates of b_2 are (t_4, θ_4).
The first distance from the maximum point a_2 of curve A to the maximum point b_2 of curve B can be calculated according to the following formula (5):

$$d_1(A, B) = \sqrt{(t_2 - t_4)^2 + (\theta_2 - \theta_4)^2} \tag{5}$$

The second distance from the minimum point a_1 of curve A to the minimum point b_1 of curve B can be calculated according to the following formula (6):

$$d_2(A, B) = \sqrt{(t_1 - t_3)^2 + (\theta_1 - \theta_3)^2} \tag{6}$$

The difference value can then be obtained according to the following formula (7):

$$\left|\, d_1(A, B) - d_2(A, B) \,\right| \tag{7}$$

That is, the difference value is the absolute value of the first distance d_1(A, B) minus the second distance d_2(A, B).
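The extremum-based similarity of steps (A1) to (A4) can be sketched as follows; curve_difference is an assumed helper name, and each curve is a list of (t, θ) samples like the trend_curve built earlier.

```python
import math

def curve_difference(curve_a, curve_b):
    """Difference value between two angle curves per formulas (5)-(7).

    The extreme points are taken as the samples with the smallest and
    largest angle (A1); d1 is the distance between the two maximum
    points (A2), d2 the distance between the two minimum points (A3),
    and the difference value is |d1 - d2| (A4).
    """
    a_min = min(curve_a, key=lambda p: p[1])
    a_max = max(curve_a, key=lambda p: p[1])
    b_min = min(curve_b, key=lambda p: p[1])
    b_max = max(curve_b, key=lambda p: p[1])
    d1 = math.dist(a_max, b_max)  # formula (5)
    d2 = math.dist(a_min, b_min)  # formula (6)
    return abs(d1 - d2)           # formula (7)
```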
S132: determining whether the difference value is less than a preset threshold;
The preset threshold may be an empirically set value ε.
S133: if so, finding that the action information of the object to be recognized is the action information associated with the pre-stored change trend information;
When the difference value satisfies the preset threshold, the server finds that the action information of the object to be recognized is the action information associated with the pre-stored change trend information.
The action of the object to be recognized is recognized through the change trend information of its joint angle data within the preset duration. In this way, the server only needs to calculate one piece of feature information, namely the joint angle data of the object to be recognized, which reduces the amount of computation and improves the timeliness of motion information recognition.
In yet another embodiment of the present application, step S130 may include steps S134, S135, and S136, as shown in FIG. 9:
S134: performing similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database to obtain respective difference values;
This step is similar to the difference calculation process in step S131; the difference is that in this step the server performs the similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database, and obtains the corresponding difference values.
S135: determining whether the minimum difference value among the difference values is less than a preset threshold;
This step differs from step S132 in that it is first necessary to obtain the minimum difference value among the difference values, and then determine whether that minimum difference value is less than the preset threshold.
S136: if so, finding that the action information of the object to be recognized is the action information associated with the change trend information corresponding to the minimum difference value;
Through this embodiment, among all the pre-stored change trend information in the action database, the pre-stored change trend information closest to the change trend information of the object to be recognized can be obtained. In this way, the action information of the object to be recognized can be recognized more accurately.
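Steps S134 to S136 then reduce to a minimum search over the database. The sketch below reuses curve_difference and action_database from the earlier sketches, with epsilon standing in for the preset threshold ε; the default value is arbitrary.

```python
def match_action(trend_curve, action_database, epsilon=10.0):
    """Return the action label whose pre-stored curve is closest to
    trend_curve (S134, S135), or None when even the minimum difference
    value is not below the preset threshold (S136)."""
    differences = {
        label: curve_difference(trend_curve, stored)
        for label, stored in action_database.items()
    }
    best = min(differences, key=differences.get)
    return best if differences[best] < epsilon else None
```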
FIG. 10 is a block diagram of a motion information recognition system provided in an embodiment of the present application. In this embodiment, the motion information recognition system includes:
an acquiring unit 200 configured to receive joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
a first processing unit 210 configured to calculate joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
a second processing unit 220 configured to obtain, according to the joint angle data, change trend information of the object to be recognized within the preset duration;
a matching unit 230 configured to query a pre-stored action database for action information matching the change trend information.
Preferably, after the second processing unit, the system further includes:
a storage unit configured to map each piece of action information to be stored to the change trend information corresponding to that action information and store it in the action database; the change trend information is a set of several pieces of angle information.
Preferably, the first processing unit specifically includes:
a first processing subunit configured to determine, according to the joint coordinate data of the object to be recognized, a first vector and a second vector corresponding to a target joint of the object to be recognized;
a second processing subunit configured to perform modulo calculation on the first vector and the second vector to obtain modulus values corresponding to the first and second vectors;
a third processing subunit configured to calculate the joint angle data of the target joint according to the first and second vectors and the modulus values corresponding to the first and second vectors.
Preferably, in the second processing unit, the change trend information includes a joint angle change curve, and the second processing unit specifically includes:
a fourth processing subunit configured to acquire the sampling moments within the preset duration corresponding to the joint angle data;
a fifth processing subunit configured to obtain the change trend information according to the order of the sampling moments and the joint angle data corresponding to each sampling moment.
Preferably, the matching unit specifically includes:
a first matching subunit configured to perform similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference value;
a second matching subunit configured to determine whether the difference value is less than a preset threshold;
a third matching subunit configured to, when the difference value is less than the preset threshold, find that the action information of the object to be recognized is the action information associated with the pre-stored change trend information (the sample information).
Preferably, the matching unit specifically includes:
a fourth matching subunit configured to perform similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database to obtain respective difference values;
a fifth matching subunit configured to determine whether the minimum difference value among the difference values is less than a preset threshold;
a sixth matching subunit configured to, when the minimum difference value among the difference values is less than the preset threshold, find that the action information of the object to be recognized is the action information associated with the change trend information corresponding to the minimum difference value.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement to a method flow). However, with the development of technology, many of today's improvements to method flows can already be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a programmable logic device (PLD) (such as a field programmable gate array (FPGA)) is such an integrated circuit whose logic functions are determined by the user through programming the device. A designer programs a digital system to "integrate" it onto a piece of PLD, without needing to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, nowadays, instead of manually making integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, the most commonly used are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained simply by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory. Those skilled in the art also know that, apart from implementing the controller purely as computer readable program code, it is entirely possible to logically program the method steps so that the controller implements the same functions in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component. Or even, the means for implementing various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module, or unit illustrated in the above embodiments may specifically be implemented by a computer chip or entity, or by a product having a certain function.
For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that the embodiments of the present invention can be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be stored in a computer readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information can be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
Those skilled in the art should understand that the embodiments of the present application can be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer usable program code.
The present application can be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present application can also be practiced in distributed computing environments in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively simple, and the relevant parts may refer to the partial description of the method embodiment.
The above descriptions are merely embodiments of the present application and are not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (12)

  1. A motion information recognition method, characterized by comprising:
    receiving joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
    calculating joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
    obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data; and
    querying a pre-stored action database for action information matching the change trend information.
  2. The method according to claim 1, characterized in that, before the querying a pre-stored action database for action information matching the change trend information, the method further comprises:
    mapping each piece of action information to be stored to the change trend information corresponding to that action information and storing it in the action database, wherein the change trend information is a set of several pieces of angle information.
  3. The method according to claim 1, characterized in that the calculating joint angle data of the object to be recognized according to the joint coordinate data of the object to be recognized specifically comprises:
    determining a first vector and a second vector corresponding to a target joint of the object to be recognized according to the joint coordinate data of the object to be recognized;
    performing modulo calculation on the first vector and the second vector to obtain modulus values corresponding to the first and second vectors; and
    calculating the joint angle data of the target joint according to the first and second vectors and the modulus values corresponding to the first and second vectors.
  4. The method according to claim 1, characterized in that the obtaining change trend information of the object to be recognized within the preset duration according to the joint angle data specifically comprises:
    acquiring the sampling moments within the preset duration corresponding to the joint angle data; and
    obtaining the change trend information according to the order of the sampling moments and the joint angle data corresponding to each sampling moment.
  5. The method according to claim 1, characterized in that the querying a pre-stored action database for action information matching the change trend information specifically comprises:
    performing similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference value;
    determining whether the difference value is less than a preset threshold; and
    if so, finding that the action information of the object to be recognized is the action information associated with the pre-stored change trend information.
  6. The method according to claim 1, characterized in that the querying a pre-stored action database for action information matching the change trend information specifically comprises:
    performing similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database to obtain respective difference values;
    determining whether the minimum difference value among the difference values is less than a preset threshold; and
    if so, finding that the action information of the object to be recognized is the action information associated with the change trend information corresponding to the minimum difference value.
  7. A motion information recognition system, characterized by comprising:
    an acquiring unit configured to receive joint point coordinate data of an object to be recognized, captured within a preset duration and sent by a motion capture device;
    a first processing unit configured to calculate joint angle data of the object to be recognized according to the joint point coordinate data of the object to be recognized;
    a second processing unit configured to obtain, according to the joint angle data, change trend information of the object to be recognized within the preset duration; and
    a matching unit configured to query a pre-stored action database for action information matching the change trend information.
  8. The system according to claim 7, characterized in that, after the second processing unit, the system further comprises:
    a storage unit configured to map each piece of action information to be stored to the change trend information corresponding to that action information and store it in the action database, wherein the change trend information is a set of several pieces of angle information.
  9. The system according to claim 7, characterized in that the first processing unit specifically comprises:
    a first processing subunit configured to determine, according to the joint coordinate data of the object to be recognized, a first vector and a second vector corresponding to a target joint of the object to be recognized;
    a second processing subunit configured to perform modulo calculation on the first vector and the second vector to obtain modulus values corresponding to the first and second vectors; and
    a third processing subunit configured to calculate the joint angle data of the target joint according to the first and second vectors and the modulus values corresponding to the first and second vectors.
  10. The system according to claim 7, characterized in that the second processing unit specifically comprises:
    a fourth processing subunit configured to acquire the sampling moments within the preset duration corresponding to the joint angle data; and
    a fifth processing subunit configured to obtain the change trend information according to the order of the sampling moments and the joint angle data corresponding to each sampling moment.
  11. The system according to claim 7, characterized in that the matching unit specifically comprises:
    a first matching subunit configured to perform similarity calculation on the change trend information and the change trend information pre-stored in the action database to obtain a difference value;
    a second matching subunit configured to determine whether the difference value is less than a preset threshold; and
    a third matching subunit configured to, when the difference value is less than the preset threshold, find that the action information of the object to be recognized is the action information associated with the pre-stored change trend information (the sample information).
  12. The system according to claim 7, characterized in that the matching unit specifically comprises:
    a fourth matching subunit configured to perform similarity calculation on the change trend information and each piece of change trend information pre-stored in the action database to obtain respective difference values;
    a fifth matching subunit configured to determine whether the minimum difference value among the difference values is less than a preset threshold; and
    a sixth matching subunit configured to, when the minimum difference value among the difference values is less than the preset threshold, find that the action information of the object to be recognized is the action information associated with the change trend information corresponding to the minimum difference value.
PCT/CN2016/101638 2015-10-15 2016-10-10 Motion information recognition method and system WO2017063530A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510671365.1A CN106599762A (zh) 2015-10-15 2015-10-15 Motion information recognition method and system
CN201510671365.1 2015-10-15

Publications (1)

Publication Number Publication Date
WO2017063530A1 true WO2017063530A1 (zh) 2017-04-20

Family

ID=58517809

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/101638 WO2017063530A1 (zh) 2015-10-15 2016-10-10 Motion information recognition method and system

Country Status (2)

Country Link
CN (1) CN106599762A (zh)
WO (1) WO2017063530A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308437A (zh) * 2017-07-28 2019-02-05 上海形趣信息科技有限公司 动作识别纠错方法、电子设备、存储介质
CN111027473A (zh) * 2019-12-09 2020-04-17 山东省科学院自动化研究所 一种基于人体关节运动实时预测的目标识别方法及系统
CN111898571A (zh) * 2020-08-05 2020-11-06 北京华捷艾米科技有限公司 动作识别系统及方法
CN112435731A (zh) * 2020-12-16 2021-03-02 成都翡铭科技有限公司 一种判断实时姿势是否满足预设规则的方法
CN112487964A (zh) * 2020-11-27 2021-03-12 深圳市维海德技术股份有限公司 姿态检测识别方法、设备及计算机可读存储介质
CN115510927A (zh) * 2021-06-03 2022-12-23 中国移动通信集团四川有限公司 故障检测方法、装置及设备

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107520843A (zh) * 2017-08-22 2017-12-29 南京野兽达达网络科技有限公司 一种类人多自由度机器人的动作训练方法
CN108664119B (zh) * 2017-10-31 2020-11-03 中国农业大学 一种配置体感动作与虚拟操作间映射关系的方法及装置
CN108227930A (zh) * 2018-01-18 2018-06-29 四川斐讯信息技术有限公司 一种可穿戴设备的手势控制方法及可穿戴设备
CN109325456B (zh) * 2018-09-29 2020-05-12 佳都新太科技股份有限公司 目标识别方法、装置、目标识别设备及存储介质
CN109598190A (zh) * 2018-10-23 2019-04-09 深圳壹账通智能科技有限公司 用于动作识别的方法、装置、计算机设备及存储介质
CN111460868A (zh) * 2019-01-22 2020-07-28 上海形趣信息科技有限公司 动作识别纠错方法、系统、电子设备、存储介质
CN110458940B (zh) * 2019-07-24 2023-02-28 兰州未来新影文化科技集团有限责任公司 动作捕捉的处理方法和处理装置
TWI710972B (zh) * 2019-11-01 2020-11-21 緯創資通股份有限公司 基於原子姿勢的動作辨識方法及其系統與電腦可讀取記錄媒體
CN114190928B (zh) * 2021-12-27 2022-07-08 清华大学 险态工况下驾驶行为的识别方法、装置和计算机设备

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399637A (zh) * 2013-07-31 2013-11-20 西北师范大学 基于kinect人体骨骼跟踪控制的智能机器人人机交互方法
CN104038738A (zh) * 2014-06-04 2014-09-10 东北大学 一种提取人体关节点坐标的智能监控系统及方法
CN104899561A (zh) * 2015-05-27 2015-09-09 华南理工大学 一种并行化的人体行为识别方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100583127C (zh) * 2008-01-14 2010-01-20 浙江大学 一种基于模板匹配的视点无关的人体动作识别方法
CN102855470B (zh) * 2012-07-31 2015-04-08 中国科学院自动化研究所 基于深度图像的人体姿态估计方法
CN103020648B (zh) * 2013-01-09 2016-04-13 艾迪普(北京)文化科技股份有限公司 一种动作类型识别方法、节目播出方法及装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399637A (zh) * 2013-07-31 2013-11-20 西北师范大学 基于kinect人体骨骼跟踪控制的智能机器人人机交互方法
CN104038738A (zh) * 2014-06-04 2014-09-10 东北大学 一种提取人体关节点坐标的智能监控系统及方法
CN104899561A (zh) * 2015-05-27 2015-09-09 华南理工大学 一种并行化的人体行为识别方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, FEI ET AL.: "Human Action Recognition Method Based on Depth Images", vol. 40, no. 8, 31 August 2014 (2014-08-31), pages 168-172, ISSN: 1000-3428 *
TIAN, GUOHUI ET AL.: "A Novel Human Activity Recognition Method Using Joint Points Information", vol. 36, no. 3, 31 May 2014 (2014-05-31), pages 285-291, ISSN: 1002-0446 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308437A (zh) * 2017-07-28 2019-02-05 上海形趣信息科技有限公司 动作识别纠错方法、电子设备、存储介质
CN111027473A (zh) * 2019-12-09 2020-04-17 山东省科学院自动化研究所 一种基于人体关节运动实时预测的目标识别方法及系统
CN111027473B (zh) * 2019-12-09 2023-05-26 山东省科学院自动化研究所 一种基于人体关节运动实时预测的目标识别方法及系统
CN111898571A (zh) * 2020-08-05 2020-11-06 北京华捷艾米科技有限公司 动作识别系统及方法
CN112487964A (zh) * 2020-11-27 2021-03-12 深圳市维海德技术股份有限公司 姿态检测识别方法、设备及计算机可读存储介质
CN112487964B (zh) * 2020-11-27 2023-08-01 深圳市维海德技术股份有限公司 姿态检测识别方法、设备及计算机可读存储介质
CN112435731A (zh) * 2020-12-16 2021-03-02 成都翡铭科技有限公司 一种判断实时姿势是否满足预设规则的方法
CN112435731B (zh) * 2020-12-16 2024-03-19 成都翡铭科技有限公司 一种判断实时姿势是否满足预设规则的方法
CN115510927A (zh) * 2021-06-03 2022-12-23 中国移动通信集团四川有限公司 故障检测方法、装置及设备
CN115510927B (zh) * 2021-06-03 2024-04-12 中国移动通信集团四川有限公司 故障检测方法、装置及设备

Also Published As

Publication number Publication date
CN106599762A (zh) 2017-04-26

Similar Documents

Publication Publication Date Title
WO2017063530A1 (zh) Motion information recognition method and system
Moon et al. Multiple kinect sensor fusion for human skeleton tracking using Kalman filtering
Kanade et al. First-person vision
WO2021082753A1 (zh) Method, apparatus, device, and storage medium for predicting protein structure information
US10108270B2 (en) Real-time 3D gesture recognition and tracking system for mobile devices
CN109325456B (zh) 目标识别方法、装置、目标识别设备及存储介质
WO2015186436A1 (ja) Image processing device, image processing method, and image processing program
JP2018505457A (ja) アイトラッキングシステムのための改良されたキャリブレーション
WO2019015645A1 (zh) Image processing method and apparatus
US9734435B2 (en) Recognition of hand poses by classification using discrete values
WO2019057197A1 (zh) Visual tracking method and apparatus for a moving target, electronic device, and storage medium
KR101559502B1 (ko) Non-contact input interface method using real-time hand pose recognition, and recording medium
Shu et al. Multi-modal feature constraint based tightly coupled monocular visual-lidar odometry and mapping
Shao et al. A new descriptor for multiple 3D motion trajectories recognition
WO2021056450A1 (zh) Image template updating method, device, and storage medium
Xu et al. Human action recognition based on Kinect and PSO-SVM by representing 3D skeletons as points in lie group
CN110738650A (zh) Infectious disease infection recognition method, terminal device, and storage medium
US20150185851A1 (en) Device Interaction with Self-Referential Gestures
Alcantarilla et al. Learning visibility of landmarks for vision-based localization
KR101706864B1 (ko) Real-time finger and hand gesture recognition using a motion sensing input device
US20190383937A1 (en) SWITCHING AMONG DISPARATE SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM) METHODS IN VIRTUAL, AUGMENTED, AND MIXED REALITY (xR) APPLICATIONS
US10852138B2 (en) Scalabale simultaneous localization and mapping (SLAM) in virtual, augmented, and mixed reality (xR) applications
KR101870542B1 (ko) Motion recognition method and apparatus
Shah et al. Gesture recognition technique: a review
Song et al. Real-time 3D hand tracking from depth images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16854909

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16854909

Country of ref document: EP

Kind code of ref document: A1