WO2017206005A1 - A multi-person pose recognition system based on optical flow detection and a body part model - Google Patents

A multi-person pose recognition system based on optical flow detection and a body part model Download PDF

Info

Publication number
WO2017206005A1
WO2017206005A1, PCT/CN2016/083861, CN2016083861W
Authority
WO
WIPO (PCT)
Prior art keywords
human body
image
optical flow
body part
model
Prior art date
Application number
PCT/CN2016/083861
Other languages
English (en)
French (fr)
Inventor
宫文娟
宫法明
朱朋海
李翛然
张雪娜
窦瑞华
崔佳
赵国梁
陈彤
Original Assignee
中国石油大学(华东)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国石油大学(华东)
Priority to PCT/CN2016/083861 priority Critical patent/WO2017206005A1/zh
Publication of WO2017206005A1 publication Critical patent/WO2017206005A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition

Definitions

  • The invention relates to the fields of computer image processing, optical flow detection, human pose recognition and moving-object tracking, and in particular to a multi-person pose recognition system based on optical flow detection and a body part model.
  • Human target recognition and tracking is an important research direction in pattern recognition, image processing and artificial intelligence, and one of the more active branches of computer vision.
  • Moving-human recognition and tracking based on video images accurately recognizes human targets in a complex, changing environment and tracks them in real time according to the target features and the correlation between video frames.
  • The significance of video-based moving human target recognition and tracking shows not only in scientific research but also in everyday life, industry and national defense; practical products have spread from military to civilian settings, such as supermarkets, shopping malls, stations, banks, road traffic, long-distance coaches and city buses.
  • Human target recognition and tracking technology can automatically raise alarms for abnormal events in special areas that require safety monitoring.
  • Offshore work platforms, for example, are far from land and have complex working environments; applying human target recognition and tracking there can effectively protect the safety of the operators and the safe operation of the platform.
  • The invention proposes an algorithm for quickly and accurately recognizing human poses, and a human pose recognition system developed from that algorithm.
  • The invention takes surveillance video as its data source, computes the flow field with a robust gradient algorithm to obtain the image's optical flow information, computes template responses for each extracted motion region, performs human body localization and pose estimation through message passing and back-transmission, and achieves multi-person pose recognition, people counting and person tracking.
  • The purpose of the invention is to obtain video information through a camera and automatically identify the human targets in the video, yielding information such as people-flow statistics, person tracking and the spatial distribution of personnel; this is of great significance for the intelligent management of surveillance systems.
  • The purpose of the invention is achieved by the following steps. Step 1: acquire the video information to be detected. Step 2: initialize the required parameters and assign them the corresponding values. Step 3: obtain the sub-functions of the data constraint and the spatial consistency constraint, and from them the total objective function. Step 4: judge whether the objective function is continuous; if it is, perform robust estimation directly; otherwise, add a line process to the objective function to make it continuous and then perform robust estimation. Step 5: iteratively optimize the objective function. Step 6: set an optical flow threshold and crop the valid motion regions. Step 7: perform sliding-window detection on each motion region and compute the responses of the body part models. Step 8: compute, by message passing, the response of each image position to the human body model. Step 9: locate the body parts of each detected person by back-transmission. Step 10: compute the correspondences between motion regions, achieving multi-person pose recognition, people counting and person tracking.
  • In step 1, the video information to be detected is acquired by the monitoring device; it includes basic information such as the length of the video, the shooting time, and the color type and picture size of the current image frame. The video is then fed into the system, either through the human-machine interface or automatically, for further processing.
  • In step 2, the parameters required by the algorithm are obtained from the input video information, such as the model selection, error size, filter operator and the number of pyramid levels.
  • In step 3, the sub-function of the data constraint is obtained.
  • I(x, y, t) = I(x + uδt, y + vδt, t + δt), where (u, v) is the state of the point (x, y) at time t, i.e. its horizontal and vertical velocity, and δt is small.
  • The objective function of the data constraint is E_D(u) = Σ_{(x,y)∈R} ρ(I_x u + I_y v + I_t), where I_x denotes the partial derivative of I with respect to x and R is the target region.
  • In step 4, it is judged whether the objective function is continuous; if so, robust estimation is performed directly; otherwise a line process is added to the objective function to make it continuous before robust estimation is performed.
  • The role of the robust statistic is to find the parameters and optimize the fitted model.
  • For an m×m image, a grid S is defined in which (i(s), j(s)) are the pixel coordinates of site s.
  • For a data set d = {d_0, d_1, …, d_s}, s ∈ S, the goal is to find the parameter values u that minimize the residual Σ_{s∈S} ρ(d_s − u_s, σ_s), where σ_s is a scale parameter and ρ is our robust estimator.
  • When the measurement error is normally distributed, finding the optimal estimator becomes the least-squares problem of minimizing Σ_{s∈S} (d_s − u_s)².
  • The regularized objective function is E(u) = Σ_{s∈S} [ρ_1(d_s − u_s, σ_1) + λ Σ_{n∈G(s)} ρ_2(u_s − u_n, σ_2)], where G(s) denotes the neighbours of s to the east, south, west and north on the grid, σ_1 and σ_2 are scale parameters, ρ_1 and ρ_2 may take different values, and the ρ_i are robust estimators.
  • A line process can be applied to the data term and the spatial term to reject outliers and make the objective function continuous.
  • After adding a binary line process l_{s,n}, the new objective function is E(u, l), where α_s and β_s are constants controlling the smoothing term and the penalty term, respectively.
  • P(x) is the penalty term, and d, l ≥ 0.
  • The outlier process is z(x).
  • In step 5, the objective function is iteratively optimized.
  • The objective function E(u, v) from step 4 may be non-convex.
  • The SOR (successive over-relaxation) algorithm can be used to find a local minimum, and the final value is then obtained by iteration; during the iteration an over-relaxation parameter ω, 0 < ω < 2, is used to correct the results.
  • The iteration formula is u_s^(n+1) = u_s^(n) − ω (∂E/∂u_s) / T(u_s), where T(u_s) is an upper bound on the second partial derivative of E with respect to u_s.
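The SOR update above can be demonstrated on a small convex surrogate: the sketch below minimizes a 1-D quadratic energy E(u) = Σ(u_s − d_s)² + λΣ(u_s − u_{s+1})², standing in for the patent's (possibly non-convex) objective. The surrogate energy and all names are illustrative assumptions; only the update rule u_s ← u_s − ω·(∂E/∂u_s)/T(u_s) mirrors the text.

```python
def sor_step(u, d, lam=1.0, omega=1.5):
    # One SOR sweep over a 1-D grid for the quadratic surrogate
    # E(u) = sum (u_s - d_s)^2 + lam * sum (u_s - u_{s+1})^2.
    # Each site is corrected by omega * gradient / T(u_s), where
    # T(u_s) is the (here exact) second partial derivative of E.
    n = len(u)
    for s in range(n):
        grad = 2.0 * (u[s] - d[s])
        t = 2.0
        if s > 0:
            grad += 2.0 * lam * (u[s] - u[s - 1])
            t += 2.0 * lam
        if s < n - 1:
            grad += 2.0 * lam * (u[s] - u[s + 1])
            t += 2.0 * lam
        u[s] -= omega * grad / t
    return u

def sor_solve(d, lam=1.0, omega=1.5, iters=200):
    # Iterate SOR sweeps from a zero initial estimate until (practically)
    # converged; 0 < omega < 2 guarantees convergence for this convex case.
    u = [0.0] * len(d)
    for _ in range(iters):
        sor_step(u, d, lam, omega)
    return u
```

With λ = 0 the minimizer is simply u = d; with λ > 0 the solution is a smoothed version of the data, which is the behaviour the spatial term is meant to enforce.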
  • In step 6, the optical flow field computed by the steps above describes the motion in the image frame. According to the characteristics of the specific motion scene, an optical flow threshold is set, each valid motion region is extracted, and features such as the pixel list and bounding box of each region are collected.
  • In step 7, the image is convolved with the trained models of body-part shapes and of the relative positions between body parts, and the response S(p_i) of each sliding-window position in the image to the body-part shape is computed.
  • In step 8, the main joints of the human body, such as the head, neck, shoulders, elbows, hands, torso, hips, knees and feet, are taken as the nodes of a tree structure, and the limbs of the human body as the connections between the nodes.
  • The head serves as the root node of the tree and the hands and feet as its leaf nodes, so that the resulting tree structure represents a hinged model of the human body.
  • The body-part mixture model uses 5 to 6 templates to learn the shape and relative position of each body part; each template learns the shape features of the body part and its relative-position features. Using the body-part mixture model, the response of each body part is passed by message passing to the root node of the human tree model (the node corresponding to the head), giving the response value S(t) of each image position to the whole human body model.
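The upward message passing described above can be sketched as max-sum inference on a part tree, assuming a discrete set of candidate positions per part. The tree layout, the scores and the pairwise compatibility function below are invented for the example; they are not the patent's trained model.

```python
def root_response(tree, part_scores, pairwise):
    # Max-sum message passing on a body-part tree rooted at 'head'.
    #   tree:        dict parent -> list of children (part ids)
    #   part_scores: dict part -> list of scores, one per candidate position
    #   pairwise:    function (parent_pos, child_pos) -> compatibility score
    # Returns, for each root position, the best total response S(t): each
    # child contributes its best (own score + compatibility) to its parent.
    def message(part):
        scores = list(part_scores[part])
        for child in tree.get(part, []):
            child_msg = message(child)
            for p in range(len(scores)):
                scores[p] += max(child_msg[c] + pairwise(p, c)
                                 for c in range(len(child_msg)))
        return scores
    return message('head')
```

Because the model is a tree, each part's message is computed once, so the whole response is obtained in one bottom-up pass rather than by exhaustive search over joint configurations.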
  • In step 9, based on the result of the human body localization, messages are passed back from the root node to the leaf nodes of the tree-structured human model, determining in turn the specific template number of the mixture model used by each body part; each body part is thereby located, yielding an estimate of the whole human pose.
  • In step 10, according to the principle that the same moving object moves continuously across two adjacent frames, the correspondences between moving objects are computed: first the overlap size ratio of the motion regions in consecutive frames and their distance D are computed; considering both factors, the association Corr = D^(1−ratio) of the moving objects in the two adjacent frames is calculated, where D represents the movement feature of the moving object and ratio its continuity feature. When an object moves quickly and there is no overlap region, i.e. ratio = 0, Corr is still valid and equals D.
  • The association values between all pairs of motion regions in the two adjacent images are computed and sorted, and the valid correspondences are obtained in order, achieving multi-person pose recognition and person tracking.
  • The multi-person pose recognition and tracking system of the invention starts from the image information of every frame of the video to be detected. It compares the changes of the optical flow vectors across consecutive frames, sets up the objective function from the data constraint and the spatial consistency constraint, optimizes the parameters by robust estimation, and then iterates with the iteration formula while correcting with the over-relaxation parameter. Finally, the optical flow information obtained by the iteration is used to extract the motion regions; sliding-window detection is performed on the extracted regions, the similarity between the image and the body-part shape templates is compared, the human body is modelled with a tree structure, and the responses of the image to the individual body parts are aggregated by message passing to locate the human body in the image. The body parts are then located by back-transmission to estimate the human pose, and the correspondences between motion regions are computed to achieve multi-person pose recognition, people counting and person tracking.
  • FIG. 1 is the overall flow chart of the multi-person pose recognition system of the invention.
  • FIG. 2 is a schematic diagram of the multi-person pose recognition system of the invention.
  • FIG. 3 is a flow chart of the robustness-based optical flow algorithm of the invention.
  • FIG. 4 shows the single-person pose recognition results on an offshore oil-exploration platform according to the invention.
  • FIG. 5 shows the multi-person recognition results on an offshore oil-exploration platform according to the invention.
  • The single-person pose recognition system of the invention comprises: video information acquisition; parameter initialization; data constraint; spatial consistency constraint; robust estimation; conversion from line process to robust estimator; conversion from robust estimator to line process; computation of the optical flow field; the image's responses to the individual body parts; aggregated response computation based on message passing; body-part localization based on back-transmission; and single-person pose recognition.
  • The multi-person pose recognition system of the invention comprises single-person pose recognition, computation of correspondences, and multi-person pose recognition.
  • The robust optical flow algorithm mainly comprises the following steps: first the required parameters are initialized; the objective function is then set up from the data constraint and the spatial consistency constraint; it is judged whether the objective function is continuous: if so, robust estimation is performed directly, and if not, a line process is first added to make it continuous and robust estimation is performed afterwards; the parameters are then optimized according to the robust estimation; finally, the obtained flow field is iterated and corrected with the over-relaxation parameter to give the final optical flow estimate.
  • The single-person pose recognition results of the invention on an offshore oil-exploration platform mainly include the original image, the optical flow vector map, the color optical flow map, the optical flow map, the human bounding box, the recognition maps before and after correction, an enlarged view of the recognition region, and the detection maps of the individual body parts.
  • The multi-person recognition results of the invention on an offshore oil-exploration platform mainly include the original image, the optical flow vector map, the color optical flow map, the optical flow map, and the multi-person recognition map.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A human pose recognition algorithm, comprising the following steps: obtaining the optical flow vectors of the images in a video, deriving the regions of moving objects in the images, and performing human pose recognition on the moving-object regions with a body-part-based method. The algorithm is based on the optical flow vectors of the image sequence: it detects the changes of the optical flow vectors between image frames using data and spatial constraints, optimizes the parameters with robust estimation, and iterates and corrects to obtain the optical flow information. It then performs sliding-window detection on the extracted motion regions, compares the similarity between the image and the body-part shape templates, models the human body with a tree structure, and aggregates the responses of the image to each body part by message passing to locate the human body in the image; it locates the body parts by back-transmission to estimate the human pose; and it computes correspondences between motion regions to achieve multi-person pose recognition, people counting and person tracking.

Description

A multi-person pose recognition system based on optical flow detection and a body part model
Technical Field
The invention relates to the fields of computer image processing, optical flow detection, human pose recognition and moving-object tracking, and in particular to a multi-person pose recognition system based on optical flow detection and a body part model.
Background Art
Human target recognition and tracking is an important research direction in pattern recognition, image processing and artificial intelligence, and one of the more active branches of computer vision. Video-based moving-human recognition and tracking accurately recognizes human targets in a complex, changing environment and tracks them in real time according to the target features and the correlation between video frames. Its significance shows not only in scientific research but also in everyday life, industry and national defense; practical products have spread from military to civilian settings, such as supermarkets, shopping malls, stations, banks, road traffic, long-distance coaches, city buses and other places equipped with surveillance systems.
Human target recognition and tracking technology can automatically raise alarms for abnormal events in special areas that require safety monitoring. Offshore work platforms, for example, are far from land and have complex working environments; applying this technology there can effectively protect the safety of the operators and the safe operation of the platform.
Summary of the Invention
The invention proposes an algorithm for quickly and accurately recognizing human poses, and a human pose recognition system developed from that algorithm. The invention takes surveillance video as its data source, computes the flow field with a robust gradient algorithm to obtain the image's optical flow information, computes template responses for each extracted motion region, performs human body localization and pose estimation through message passing and back-transmission, and achieves multi-person pose recognition, people counting and person tracking.
The purpose of the invention is to obtain video information through a camera and automatically identify the human targets in the video, yielding information such as people-flow statistics, person tracking and the spatial distribution of personnel; this is of great significance for the intelligent management of surveillance systems.
The purpose of the invention is achieved by the following steps. Step 1: acquire the video information to be detected. Step 2: initialize the required parameters and assign them the corresponding values. Step 3: obtain the sub-functions of the data constraint and the spatial consistency constraint, and from them the total objective function. Step 4: judge whether the objective function is continuous; if it is, perform robust estimation directly; otherwise, add a line process to the objective function to make it continuous and then perform robust estimation. Step 5: iteratively optimize the objective function. Step 6: set an optical flow threshold and crop the valid motion regions. Step 7: perform sliding-window detection on each motion region and compute the responses of the body part models. Step 8: compute, by message passing, the response value of each image position to the human body model. Step 9: locate the body parts of each detected person by back-transmission. Step 10: compute the correspondences between the motion regions, achieving multi-person pose recognition, people counting and person tracking.
The specific technical solution of the invention is implemented as follows.
In step 1, the video information to be detected is acquired by the monitoring device; it includes basic information such as the length of the video, the shooting time, and the color type and picture size of the current image frame. The video is then fed into the system, either through the human-machine interface or automatically, for further processing.
In step 2, the parameters required by the algorithm are obtained from the input video information, such as the model selection, error size, filter operator and the number of pyramid levels.
In step 3, the sub-function of the data constraint is obtained. I(x, y, t) = I(x + uδt, y + vδt, t + δt), where (u, v) is the state of the point (x, y) at time t, i.e. its horizontal and vertical velocity, and δt is small. The objective function of the data constraint is
E_D(u) = Σ_{(x,y)∈R} ρ(I_x u + I_y v + I_t),
where I_x denotes the partial derivative of I with respect to x and R is the target region.
The sub-function of the spatial consistency constraint is then obtained. When the target region R is very small, the solution for u = (u, v) must be further constrained by adding spatial coherence. The objective function of the spatial coherence is
E_S(u) = Σ_{(x,y)∈R} [ρ(u_x) + ρ(u_y) + ρ(v_x) + ρ(v_y)],
where u_x denotes the partial derivative of u with respect to x. Combining the data constraint and the spatial consistency constraint, the total objective function is E(u) = E_D(u) + λE_S(u).
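To make the combined objective E(u) = E_D(u) + λE_S(u) concrete, here is a hedged sketch that evaluates a spatial smoothness term on a small flow grid and adds it to a precomputed data term. The quadratic ρ and all function names are simplifying assumptions; the patent's ρ is a robust estimator.

```python
def spatial_term(u, v):
    # E_S: sum of squared flow differences between 4-neighbours on the
    # grid (quadratic rho shown for simplicity; a robust rho could be
    # substituted). u and v are 2-D lists of flow components.
    h, w = len(u), len(u[0])
    e = 0.0
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    e += (u[i][j] - u[ni][nj]) ** 2
                    e += (v[i][j] - v[ni][nj]) ** 2
    return e

def total_energy(E_D, u, v, lam=0.5):
    # E(u) = E_D(u) + lambda * E_S(u), the combined objective.
    return E_D + lam * spatial_term(u, v)
```

A spatially constant flow field makes E_S vanish, so the total energy reduces to the data term alone; λ trades off data fidelity against smoothness.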
In step 4, it is judged whether the objective function is continuous; if so, robust estimation is performed directly; otherwise a line process is added to the objective function to make it continuous before robust estimation is performed. The role of the robust statistic is to find the parameters and optimize the fitted model. For an m×m image we define a grid
S = {s | 0 ≤ i(s), j(s) ≤ m − 1},
where (i(s), j(s)) are the pixel coordinates of site s. In a fitted model, for a data set d = {d_0, d_1, …, d_s}, s ∈ S, the goal is to find the parameter values u that minimize the residual
min_u Σ_{s∈S} ρ(d_s − u_s, σ_s),
where σ_s is a scale parameter and ρ is our robust estimator. When the measurement error is normally distributed, finding the optimal estimator becomes the least-squares problem of minimizing Σ_{s∈S} (d_s − u_s)².
Once the robust estimator ρ has been found, the objective function of the optical flow can be refined again. The regularized objective function is
E(u) = Σ_{s∈S} [ρ_1(d_s − u_s, σ_1) + λ Σ_{n∈G(s)} ρ_2(u_s − u_n, σ_2)],
where G(s) denotes the neighbours of s to the east, south, west and north on the grid, σ_1 and σ_2 are scale parameters, ρ_1 and ρ_2 may take different values, and the ρ_i are robust estimators.
A line process is added to the spatial constraint function; the line process can be applied to the data term and the spatial term to reject outliers and turn the objective function into a continuous function. After adding a binary line process l_{s,n}, the new objective function is E(u, l):
[formula image PCTCN2016083861-appb-000010]
where α_s and β_s are constants controlling the smoothing term and the penalty term, respectively. Adding a further process d_s yields the new objective function E(u, l, d):
[formula image PCTCN2016083861-appb-000011]
The conversion from the line process to robustness consists in adding the function ρ to the objective function; the new objective function becomes
[formula image PCTCN2016083861-appb-000012]
The conversion from robustness to the line process consists in adding an outlier process function z(x); the objective function E(u, d, l) is
[formula image PCTCN2016083861-appb-000013]
where P(x) is the penalty term and d, l ≥ 0. The outlier process z(x) is
[formula image PCTCN2016083861-appb-000014]
In step 5, the objective function is iteratively optimized. The objective function E(u, v) from step 4 may be non-convex. The SOR (successive over-relaxation) algorithm can be used to find a local minimum, and the final value is then obtained by iteration; during the iteration an over-relaxation parameter ω, 0 < ω < 2, is used to correct the results. The iteration formula is
u_s^(n+1) = u_s^(n) − ω (∂E/∂u_s) / T(u_s),
where T(u_s) is an upper bound on the second partial derivative of E with respect to u_s:
[formula image PCTCN2016083861-appb-000019]
In step 6, the optical flow field computed by the steps above describes the motion in the image frame. According to the characteristics of the specific motion scene, an optical flow threshold is set, each valid motion region is extracted, and features such as the pixel list and bounding box of each region are collected.
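Step 6 can be sketched as thresholding the flow magnitude and grouping the surviving pixels into 4-connected regions, returning each region's pixel list and bounding box. This is a minimal stand-alone illustration; the threshold value, the connectivity and the returned features are scene-specific choices, not fixed by the patent.

```python
def extract_motion_regions(u, v, threshold):
    # Threshold the flow magnitude, then flood-fill 4-connected regions.
    # Returns a list of {'pixels': [(y, x), ...], 'bbox': (y0, x0, y1, x1)}.
    h, w = len(u), len(u[0])
    active = [[(u[i][j] ** 2 + v[i][j] ** 2) ** 0.5 > threshold
               for j in range(w)] for i in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = []
    for i in range(h):
        for j in range(w):
            if active[i][j] and not seen[i][j]:
                stack, pixels = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and active[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in pixels]
                xs = [p[1] for p in pixels]
                regions.append({'pixels': pixels,
                                'bbox': (min(ys), min(xs), max(ys), max(xs))})
    return regions
```

Each returned region then feeds the sliding-window detection of step 7; the bounding box bounds the search area and the pixel list supports per-region statistics.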
In step 7, the image is convolved with the trained models of human body-part shapes and of the relative positions between body parts, and the response S(p_i) of each sliding-window position in the image to the body-part shape is computed:
[formula image PCTCN2016083861-appb-000020]
In step 8, the main joints of the human body, such as the head, neck, shoulders, elbows, hands, torso, hips, knees and feet, are taken as the nodes of a tree structure, and the limbs of the human body as the connections between the nodes; the head is the root node of the tree and the hands and feet are its leaf nodes, so that the resulting tree structure represents a hinged model of the human body.
The body-part mixture model uses 5 to 6 templates to learn the shape and relative position of each body part; each template learns the shape features of the body part and its relative-position features. Using the body-part mixture model, the response of each body part is passed by message passing to the root node of the human tree model (the node corresponding to the head), giving the response value S(t) of each image position to the whole human body model:
[formula image PCTCN2016083861-appb-000021]
In step 9, based on the result of the human body localization, messages are passed back from the root node to the leaf nodes of the tree-structured human model, determining in turn the specific template number of the mixture model used by each body part; each body part is thereby located, yielding an estimate of the whole human pose.
In step 10, according to the principle that the same moving object moves continuously across two adjacent frames, the correspondences between moving objects are computed: first the overlap size ratio of the motion regions in consecutive frames and their distance D are computed; considering both factors, the association Corr = D^(1−ratio) of the moving objects in the two adjacent frames is calculated, where D represents the movement feature of the moving object and ratio its continuity feature. When an object moves quickly between adjacent frames and there is no overlap region, i.e. ratio = 0, Corr is still valid and equals D.
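The association of step 10 can be read as Corr = D^(1−ratio), which reduces to D when the regions do not overlap (ratio = 0), matching the behaviour stated in the text. The published formula image may combine D and ratio differently, so this reading, like the helper names and the normalization of the overlap, is an assumption.

```python
def overlap_ratio(a, b):
    # Intersection area of two bounding boxes (y0, x0, y1, x1),
    # normalised by the smaller box's area (normalisation is an assumption).
    y0, x0 = max(a[0], b[0]), max(a[1], b[1])
    y1, x1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y1 - y0) * max(0, x1 - x0)
    area = min((a[2] - a[0]) * (a[3] - a[1]), (b[2] - b[0]) * (b[3] - b[1]))
    return inter / area if area else 0.0

def centroid_distance(a, b):
    # D: Euclidean distance between box centroids (the movement feature).
    ay, ax = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    by, bx = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ay - by) ** 2 + (ax - bx) ** 2) ** 0.5

def corr(a, b):
    # Corr = D ** (1 - ratio): equals D when there is no overlap (ratio = 0).
    return centroid_distance(a, b) ** (1.0 - overlap_ratio(a, b))
```

Sorting the Corr values over all region pairs of two adjacent frames and taking them in order yields the valid correspondences, as the following paragraph describes.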
The association values between all pairs of motion regions in the two adjacent images are computed and sorted, and the valid correspondences are obtained in order, achieving multi-person pose recognition and person tracking.
The multi-person pose recognition and tracking system of the invention starts from the image information of every frame of the video to be detected. It compares the changes of the optical flow vectors across consecutive frames, sets up the objective function from the data constraint and the spatial consistency constraint, optimizes the parameters by robust estimation, and then iterates with the iteration formula while correcting with the over-relaxation parameter. Finally, the optical flow information obtained by the iteration is used to extract the motion regions; sliding-window detection is performed on the extracted regions, the similarity between the image and the body-part shape templates is compared, the human body is modelled with a tree structure, and the responses of the image to the individual body parts are aggregated by message passing to locate the human body in the image. The body parts are then located by back-transmission to estimate the human pose, and the correspondences between motion regions are computed to achieve multi-person pose recognition, people counting and person tracking.
Brief Description of the Drawings
Figure 1 is the overall flow chart of the multi-person pose recognition system of the invention;
Figure 2 is a schematic diagram of the multi-person pose recognition system of the invention;
Figure 3 is a flow chart of the robustness-based optical flow algorithm of the invention;
Figure 4 shows the single-person recognition results on an offshore oil-exploration platform according to the invention;
Figure 5 shows the multi-person recognition results on an offshore oil-exploration platform according to the invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the accompanying drawings.
As shown in Figure 1, the single-person pose recognition system of the invention comprises: video information acquisition; parameter initialization; data constraint; spatial consistency constraint; robust estimation; conversion from line process to robust estimator; conversion from robust estimator to line process; computation of the optical flow field; the image's responses to the individual body parts; aggregated response computation based on message passing; body-part localization based on back-transmission; and single-person pose recognition.
As shown in Figure 2, the multi-person pose recognition system of the invention comprises single-person pose recognition, computation of correspondences, and multi-person pose recognition.
As shown in Figure 3, the robustness-based optical flow algorithm of the invention mainly comprises the following steps: first the required parameters are initialized; the objective function is then set up from the data constraint and the spatial consistency constraint; it is judged whether the objective function is continuous: if so, robust estimation is performed directly, and if not, a line process is first added to make it continuous and robust estimation is performed afterwards; the parameters are then optimized according to the robust estimation; finally, the obtained flow field is iterated and corrected with the over-relaxation parameter to give the final optical flow estimate.
As shown in Figure 4, the single-person pose recognition results of the invention on an offshore oil-exploration platform mainly include the original image, the optical flow vector map, the color optical flow map, the optical flow map, the human bounding box, the recognition maps before and after correction, an enlarged view of the recognition region, and the detection maps of the individual body parts.
As shown in Figure 5, the multi-person recognition results of the invention on an offshore oil-exploration platform mainly include the original image, the optical flow vector map, the color optical flow map, the optical flow map, and the multi-person recognition map.

Claims (11)

  1. A human pose recognition system, characterized in that human poses are recognized from changes in optical flow information, the system comprising:
    step 1: acquiring the video information to be detected;
    step 2: initializing the parameters of the optical flow detection algorithm;
    step 3: obtaining the objective function by combining a data constraint with a spatial consistency constraint;
    step 4: performing robust estimation on continuous and non-continuous objective functions respectively, so as to optimize the parameters;
    step 5: iterating the local solution with a robust gradient method to compute the optical flow field;
    step 6: setting a threshold, cropping the valid motion regions, and extracting each moving object;
    step 7: performing sliding-window detection on each motion region and computing the response of the current image frame to the body part models;
    step 8: modelling the human body with a tree structure and computing, by message passing, the image's response to the whole human body model;
    step 9: locating the body parts of each detected person by back-transmission, to estimate the human pose;
    step 10: computing the correspondences between the motion regions, achieving multi-person pose recognition, people counting and person tracking.
  2. The system of claim 1, wherein the video information to be detected is acquired by a monitoring device and input into the multi-functional human detection system.
  3. The system of claim 1, wherein the required parameters are initialized and assigned, including model selection, error size, filter operator, and determination of the pyramid level.
  4. The system of claim 1, wherein the data constraint derives from observation of object surfaces: in an image, although the position of a surface region may change, the image intensity does not change over time; the spatial consistency constraint reflects the spatial relations of object surfaces: within a certain range, neighbouring surface regions are likely to belong to the same object surface in the image; sub-functions are obtained from the two constraints respectively and combined into the objective function.
  5. The system of claim 1, wherein robust estimation is used to optimize the model parameters, with two goals:
    (1) fitting most of the data;
    (2) removing outliers from the data.
  6. The system of claim 1, wherein the optical flow information obtained from the preceding data and spatial consistency constraints is iteratively optimized by the robust gradient method to compute the final optical flow field.
  7. The system of claim 1, wherein a valid optical flow threshold is set and each motion region is cropped from the background, detecting and distinguishing the moving objects.
  8. The system of claim 1, wherein sliding-window detection is performed on each motion region, the image is convolved with the learned body-part templates, and the response of the current image frame to the body part models is computed.
  9. The system of claim 1, wherein a hinged human body model is built with a tree structure, and the responses of the body parts are passed from the leaf nodes of the tree to the root node to obtain the image's response to the whole human body model.
  10. The system of claim 1, wherein, in the tree structure, each body part is located by back-transmitting the aggregated message responses, estimating the human pose in each motion region.
  11. The system of claim 1, wherein the correspondences between the motion regions are computed from the spatial continuity of the moving objects, achieving multi-person pose recognition, people counting and person tracking.
PCT/CN2016/083861 2016-05-30 2016-05-30 Multi-person pose recognition system based on optical flow detection and body part model WO2017206005A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/083861 WO2017206005A1 (zh) 2016-05-30 2016-05-30 Multi-person pose recognition system based on optical flow detection and body part model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/083861 WO2017206005A1 (zh) 2016-05-30 2016-05-30 Multi-person pose recognition system based on optical flow detection and body part model

Publications (1)

Publication Number Publication Date
WO2017206005A1 true WO2017206005A1 (zh) 2017-12-07

Family

ID=60478354

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083861 WO2017206005A1 (zh) 2016-05-30 2016-05-30 Multi-person pose recognition system based on optical flow detection and body part model

Country Status (1)

Country Link
WO (1) WO2017206005A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271527A (zh) * 2008-02-25 2008-09-24 北京理工大学 一种基于运动场局部统计特征分析的异常行为检测方法
US7706571B2 (en) * 2004-10-13 2010-04-27 Sarnoff Corporation Flexible layer tracking with weak online appearance model
CN103049758A (zh) * 2012-12-10 2013-04-17 北京工业大学 融合步态光流图和头肩均值形状的远距离身份验证方法
CN104268520A (zh) * 2014-09-22 2015-01-07 天津理工大学 一种基于深度运动轨迹的人体动作识别方法
CN104951793A (zh) * 2015-05-14 2015-09-30 西南科技大学 一种基于stdf特征的人体行为识别算法
CN105260718A (zh) * 2015-10-13 2016-01-20 暨南大学 一种基于光流场的步态识别方法

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830145A (zh) * 2018-05-04 2018-11-16 深圳技术大学(筹) 一种基于深度神经网络的人数统计方法及存储介质
CN109285182A (zh) * 2018-09-29 2019-01-29 北京三快在线科技有限公司 模型生成方法、装置、电子设备和计算机可读存储介质
CN109522850A (zh) * 2018-11-22 2019-03-26 中山大学 一种基于小样本学习的动作相似度评估方法
CN109522850B (zh) * 2018-11-22 2023-03-10 中山大学 一种基于小样本学习的动作相似度评估方法
CN109711334B (zh) * 2018-12-26 2021-02-05 浙江捷尚视觉科技股份有限公司 一种基于时空光流场的atm尾随事件检测方法
CN109711334A (zh) * 2018-12-26 2019-05-03 浙江捷尚视觉科技股份有限公司 一种基于时空光流场的atm尾随事件检测方法
CN111414797A (zh) * 2019-01-07 2020-07-14 一元精灵有限公司 用于基于来自移动终端的视频的姿态序列的系统和方法
CN111414797B (zh) * 2019-01-07 2023-05-23 一元精灵有限公司 用于估计对象的姿势和姿态信息的系统和方法
CN109829435A (zh) * 2019-01-31 2019-05-31 深圳市商汤科技有限公司 一种视频图像处理方法、装置及计算机可读介质
CN111568305A (zh) * 2019-02-18 2020-08-25 北京奇虎科技有限公司 处理扫地机器人重定位的方法、装置及电子设备
CN111568305B (zh) * 2019-02-18 2023-02-17 北京奇虎科技有限公司 处理扫地机器人重定位的方法、装置及电子设备
CN109934183A (zh) * 2019-03-18 2019-06-25 北京市商汤科技开发有限公司 图像处理方法及装置、检测设备及存储介质
CN111428546A (zh) * 2019-04-11 2020-07-17 杭州海康威视数字技术股份有限公司 图像中人体标记方法、装置、电子设备及存储介质
CN111428546B (zh) * 2019-04-11 2023-10-13 杭州海康威视数字技术股份有限公司 图像中人体标记方法、装置、电子设备及存储介质
CN111626199B (zh) * 2020-05-27 2023-08-08 多伦科技股份有限公司 面向大型多人车厢场景的异常行为分析方法
CN111626199A (zh) * 2020-05-27 2020-09-04 多伦科技股份有限公司 面向大型多人车厢场景的异常行为分析方法
CN112085003B (zh) * 2020-09-24 2024-04-05 湖北科技学院 公共场所异常行为自动识别方法及装置、摄像机设备
CN112085003A (zh) * 2020-09-24 2020-12-15 湖北科技学院 公共场所异常行为自动识别方法及装置、摄像机设备
CN112232296A (zh) * 2020-11-09 2021-01-15 北京爱笔科技有限公司 一种超参数探索方法及装置
CN113143257A (zh) * 2021-02-09 2021-07-23 国体智慧体育技术创新中心(北京)有限公司 基于个体运动行为层次模型的泛化应用系统及方法
CN113143257B (zh) * 2021-02-09 2023-01-17 国体智慧体育技术创新中心(北京)有限公司 基于个体运动行为层次模型的泛化应用系统及方法
CN113255429A (zh) * 2021-03-19 2021-08-13 青岛根尖智能科技有限公司 一种视频中人体姿态估计与跟踪方法及系统
CN113221832B (zh) * 2021-05-31 2023-07-11 常州纺织服装职业技术学院 基于三维人体数据的人体识别方法和系统
CN113221832A (zh) * 2021-05-31 2021-08-06 常州纺织服装职业技术学院 基于三维人体数据的人体识别方法和系统
CN113436302A (zh) * 2021-06-08 2021-09-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种人脸动画合成方法及系统
CN113436302B (zh) * 2021-06-08 2024-02-13 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种人脸动画合成方法及系统
CN113822181A (zh) * 2021-09-08 2021-12-21 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种基于肢体活跃度的行为心理异常检测方法
CN113822181B (zh) * 2021-09-08 2024-05-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) 一种基于肢体活跃度的行为心理异常检测方法

Similar Documents

Publication Publication Date Title
WO2017206005A1 (zh) Multi-person pose recognition system based on optical flow detection and body part model
Dai et al. Rgb-d slam in dynamic environments using point correlations
Yan et al. Unconstrained fashion landmark detection via hierarchical recurrent transformer networks
US7706571B2 (en) Flexible layer tracking with weak online appearance model
CN109558879A (zh) 一种基于点线特征的视觉slam方法和装置
Yin et al. Dynam-SLAM: An accurate, robust stereo visual-inertial SLAM method in dynamic environments
Wang et al. Point cloud and visual feature-based tracking method for an augmented reality-aided mechanical assembly system
GB2584400A (en) Processing captured images
AU2020300067B2 (en) Layered motion representation and extraction in monocular still camera videos
CN112966628A (zh) 一种基于图卷积神经网络的视角自适应多目标摔倒检测方法
Huang et al. Non-local weighted regularization for optical flow estimation
Bourdis et al. Camera pose estimation using visual servoing for aerial video change detection
Yu et al. Drso-slam: A dynamic rgb-d slam algorithm for indoor dynamic scenes
Tang et al. Fmd stereo slam: Fusing mvg and direct formulation towards accurate and fast stereo slam
Hu et al. Recovery of upper body poses in static images based on joints detection
Migniot et al. 3d human tracking in a top view using depth information recorded by the xtion pro-live camera
Dong et al. Standard and event cameras fusion for feature tracking
Girisha et al. Tracking humans using novel optical flow algorithm for surveillance videos
CN114581503A (zh) 煤矿井下环境建模方法及系统
Liu et al. Visual odometry algorithm based on deep learning
Henning et al. Bodyslam++: Fast and tightly-coupled visual-inertial camera and human motion tracking
Bahadori et al. Human body detection in the robocup rescue scenario
Li et al. Pedestrian target tracking algorithm based on improved RANSAC and KCF
CN112990060B (zh) 一种关节点分类和关节点推理的人体姿态估计分析方法
Zhu et al. 3D Human Motion Posture Tracking Method Using Multilabel Transfer Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16903393

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16903393

Country of ref document: EP

Kind code of ref document: A1