CN114333235A - Human body multi-feature fusion falling detection method - Google Patents

Info

Publication number
CN114333235A
Authority
CN
China
Prior art keywords
human body
image
gradient
frame image
target
Prior art date
Legal status
Withdrawn
Application number
CN202111593285.0A
Other languages
Chinese (zh)
Inventor
李胜
朱佳伟
史玉华
潘玥
李津津
Current Assignee
Anhui Institute of Information Engineering
Original Assignee
Anhui Institute of Information Engineering
Priority date
Filing date
Publication date
Application filed by Anhui Institute of Information Engineering
Priority to CN202111593285.0A
Publication of CN114333235A


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a human body multi-feature fusion fall detection method, belonging to the technical field of fall early warning, comprising the following steps: first, an image sequence is acquired from surveillance video, the input images are preprocessed, and noise is removed with a median filtering algorithm; second, the moving human body appearing in the surveillance video is detected with the PC-GMM moving-object detection algorithm, and morphological erosion, dilation, opening and closing operations are used to eliminate small discontinuities between images, yielding a smoother human-body target. The method draws on three classes of vision-based detection: a shape-feature algorithm, which detects fall events from changes in body posture; a head-trajectory algorithm, which judges fall behavior from the motion track of the head; and a motion-feature algorithm, which detects falls mainly from spatial and temporal information.

Description

Human body multi-feature fusion fall detection method

Technical Field

The present invention relates to the technical field of fall early warning, and in particular to a fall detection method based on the fusion of multiple human-body features.

Background

Falls are one of the leading causes of injury among older adults, and their frequency increases with age and frailty level [1]. Roughly 30% of people over 65 fall each year, and among those over 79 the proportion reaches 40%; 20% to 30% of elderly fallers are injured. Many elderly people cannot stand up by themselves after a fall, and the lack of timely rescue and treatment can lead to serious complications and even death.

With the rapid expansion of the elderly population, China has gradually entered the stage of population aging. Surveys found that in 2015 people over 65 accounted for 16.2% of China's total population, more than 220 million people, and by 2025 their number is predicted to exceed 300 million. The number of "empty-nest families" [2] has grown rapidly to 120 million; sociologists define an "empty-nest family" as one with no children or whose children live elsewhere. As physical function declines, the elderly may fall through loss of balance, suffering fractures or other serious injuries, and with no one at home to care for them they may not receive timely rescue and treatment. Even elderly people lucky enough to survive a fall may still need medical assistance to stay alive, and the event can cause great psychological trauma.

An intelligent fall detection system can be defined as an assistive device whose main function is to raise an alarm the moment an elderly person falls. This reduces the harm suffered after a fall to a certain extent: it eases the fear that follows a fall and allows timely assistance. In real fall events, the fall itself and the fear of falling interact and amplify the resulting harm; a fall breeds fear of falling, and that fear in turn increases the risk of further falls. Against this background, intelligent fall detection systems are of great significance in both academic research and practical application.

Depending on the sensors and detection approach, existing fall detection divides into three technologies: wearable-sensor detection, ambient-sensor detection and computer-vision detection. Wearable-sensor detection relies on accelerometers, gyroscopes and similar sensors attached to the chest, wrist and waist; the sensors collect the body's motion information, and changes in the data are used to distinguish the motion state and whether a fall has occurred. The main difference between ambient-sensor and wearable-sensor detection is that the former tries to collect, from the monitored environment, the audio and vibration signals produced during a fall and then classifies them by signal analysis. Computer-vision detection has become a research hotspot in the fall detection field. Vision-based methods in turn divide into three approaches according to their detection characteristics: shape-feature detection algorithms, head-trajectory detection algorithms and motion-feature detection algorithms. Shape-feature methods detect fall events from changes in body posture; head-trajectory methods judge fall behavior by tracking the motion trajectory of the head; and motion-feature methods detect falls mainly from spatial and temporal information.

Summary of the Invention

The object of the present invention is to provide a fall detection method based on the fusion of multiple human-body features, which detects the moving human target from surveillance video and judges the fall state by fusing several complementary body features.

To achieve the above object, the present invention provides the following technical solution: a fall detection method based on human body multi-feature fusion, comprising the following steps:

Step 1: acquire an image sequence from the surveillance video, preprocess the input images, and remove image noise with a median filtering algorithm;

Step 2: detect the moving human body appearing in the surveillance video with the PC-GMM moving-object detection algorithm, and use morphological erosion, dilation, opening and closing operations to eliminate small discontinuities, so as to obtain a smoother human-body target;

Step 3: with a shadow detection algorithm that fuses YUV chromaticity and gradient features, check whether the moving human body obtained in step 2 contains a shadow region; if so, remove the shadow to obtain a more accurate human-body target;

Step 4: compare the similarity and the width and height changes between the detection results of two consecutive frames, and use a tracking-activation decision mechanism to judge whether the moving human body needs to be tracked; if so, obtain the body's tracked position with a tracking algorithm that combines PC-GMM and KCF, otherwise jump directly to step 5;

Step 5: judge whether the human body is in a fall state by fusing several features, including the body's height-to-width ratio, effective area ratio, centroid-height change and center-of-gravity slope angle; when the fall detection system determines that the person has fallen, it issues a warning and automatically calls for help.

Further, step 2 comprises the following sub-steps:

S201: compute the mean, variance and weight of the background-model parameters of the input image with the Gaussian-mixture-model background-subtraction method;

S202: compute the offset d_t between adjacent input frames by the phase-correlation method. The offsets in the x and y directions are first obtained from the normalized cross-power spectrum of the two frames,

    F1(u,v) · F2*(u,v) / |F1(u,v) · F2*(u,v)| = e^(−j2π(u·x0 + v·y0)),

whose inverse Fourier transform peaks at the displacement (x0, y0);

S203: accumulate the offsets d_t until the accumulated sum exceeds a set threshold d_T; record the frame at that moment as frame n−t, and take frame n−t as the previous frame of the current frame n to be detected;

S204: update the background-model parameters of frame n−t: at each position (x′, y′)_object where frame n−t detected the moving target, replace the mean, variance and weight with the mean, variance and weight of the same position in frame n−1, i.e.

    μ_{n−t}(x′, y′)_object = μ_{n−1}(x′, y′),
    σ²_{n−t}(x′, y′)_object = σ²_{n−1}(x′, y′),
    ω_{n−t}(x′, y′)_object = ω_{n−1}(x′, y′);

S205: rebuild the background model of the frame n to be detected from the updated background model of frame n−t, and thereby detect the position of the moving target in the image;

S206: apply morphological processing to the detected moving target, removing small discontinuous parts and smoothing the target contour.

Further, in step 2, F1(u,v) denotes the Fourier transform of the previous frame and F2*(u,v) the complex conjugate of the Fourier transform of the following frame; x0 and y0 are the offsets in the x and y directions respectively, and the offset between adjacent frames is computed as

    d_t = √(x0² + y0²).

Further, in step 2, (x′, y′)_object denotes the pixel coordinates at which frame n−t detected the moving target.

Further, step 3 comprises the following sub-steps:

S301: in the YUV color space, use luminance and chrominance information to screen candidate shadow pixels; to cover all shadow pixels, the foreground threshold is set to a smaller value and the background threshold to a larger value;

S302: among the shadow pixels extracted above, find the connected components, each of which corresponds to a different candidate shadow region;

S303: compute the gradient magnitude and direction of each pixel in the shadow region:

    |∇f(x, y)| = √(G_x² + G_y²),  θ_xy = arctan(G_y / G_x);

S304: compute the gradient-direction difference between the background and foreground images; the gradient directions can be converted into an angular distance and the difference computed as

    Δθ(x, y) = arccos( (G_x^F·G_x^B + G_y^F·G_y^B) / ( √((G_x^F)² + (G_y^F)²) · √((G_x^B)² + (G_y^B)²) ) );

S305: from the gradient-direction differences obtained in S304, compute the gradient-direction correlation

    c = (1/n) · Σ_{i=1}^{n} 1(Δθ_i < τ).

Further, in step 3, |∇f(x, y)| and θ_xy denote the gradient magnitude and direction of pixel (x, y) respectively. Considering the influence of noise, a gradient threshold is set and only shadow pixels whose gradient exceeds it are retained; edge pixels are given larger weights because they carry the image's edge features.

Further, in step 3, G_x^F, G_y^F and G_x^B, G_y^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient-direction difference.

Further, in step 3, n denotes the total number of pixels in the candidate region, and 1(Δθ < τ) equals 1 when the gradient-direction difference Δθ is smaller than the threshold τ and 0 otherwise; when the ratio c exceeds a certain threshold, the candidate region is judged to be a shadow region, otherwise it is judged to be a foreground region.

Further, in step 3, G_x^F, G_y^F and G_x^B, G_y^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient-direction difference.

Further, step 4 comprises the following sub-steps:

S401: detect the moving target with the PC-GMM moving-object detection algorithm;

S402: compute the similarity and the height and width changes between the detection result of the current frame and that of the previous frame, and judge whether to track the target with the tracking algorithm; if tracking is needed, go to S403, otherwise jump back to step 1;

S403: compute the APCE value of the current frame and the historical average (APCE)_average over the previous 5 frames,

    APCE = |F_max − F_min|² / mean_{w,h}( (F_{w,h} − F_min)² );

S404: compare the two values. When the APCE value is smaller than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range; when the APCE value is larger than (APCE)_average, the target region obtained by the tracking algorithm is differenced against the Gaussian background model to extract the binary image of the moving target.

The present invention provides a fall detection method based on human body multi-feature fusion, with the following beneficial effects:

The method combines a shape-feature detection algorithm, a head-trajectory detection algorithm and a motion-feature detection algorithm: the shape-feature method detects fall events from changes in body posture, the head-trajectory method judges falls by tracking the motion trajectory of the head, and the motion-feature method detects falls mainly from spatial and temporal information.

Brief Description of the Drawings

Figure 1 is a schematic flowchart of the moving-object detection method based on the phase-correlation Gaussian mixture model (PC-GMM).

Figure 2 is a schematic flowchart of the shadow removal method based on the fusion of chrominance and gradient features.

Figure 3 is a schematic flowchart of the target tracking method based on the combination of PC-GMM and KCF.

Figure 4 is a schematic flowchart of the fall detection method based on human body multi-feature fusion.

Detailed Description of the Embodiments

The present invention provides a technical solution. Referring to Figures 1-4, a fall detection method based on human body multi-feature fusion comprises the following steps:

Step 1: acquire an image sequence from the surveillance video, preprocess the input images, and remove image noise with a median filtering algorithm;
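The median-filter preprocessing of step 1 can be sketched in pure NumPy; this is an illustrative implementation, since the patent does not specify a window size (a 3×3 window is assumed):

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter; borders are handled by reflection padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

# A single salt-noise spike is replaced by the neighborhood median.
noisy = np.array([[10, 10, 10],
                  [10, 255, 10],
                  [10, 10, 10]], dtype=np.uint8)
print(median_filter(noisy)[1, 1])  # 10
```

The median, unlike a mean filter, discards isolated outliers entirely, which is why it suits the impulse noise typical of surveillance footage.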

Step 2: detect the moving human body appearing in the surveillance video with the PC-GMM moving-object detection algorithm, and use morphological erosion, dilation, opening and closing operations to eliminate small discontinuities, so as to obtain a smoother human-body target;

Step 3: with a shadow detection algorithm that fuses YUV chromaticity and gradient features, check whether the moving human body obtained in step 2 contains a shadow region; if so, remove the shadow to obtain a more accurate human-body target;

Step 4: compare the similarity and the width and height changes between the detection results of two consecutive frames, and use a tracking-activation decision mechanism to judge whether the moving human body needs to be tracked; if so, obtain the body's tracked position with a tracking algorithm that combines PC-GMM and KCF, otherwise jump directly to step 5;

Step 5: judge whether the human body is in a fall state by fusing several features, including the body's height-to-width ratio, effective area ratio, centroid-height change and center-of-gravity slope angle; when the fall detection system determines that the person has fallen, it issues a warning and automatically calls for help.
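A minimal sketch of the step-5 judgment, assuming a binary silhouette mask from the earlier stages. The thresholds and the combination rule here are illustrative only, since the patent gives no numeric values:

```python
import numpy as np

def fall_features(mask):
    """Features named in step 5, from a binary silhouette mask:
    height-to-width ratio, effective area ratio, centroid height."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = h / w                  # bounding-box height / width
    area_ratio = len(ys) / (h * w)  # foreground pixels / bounding-box area
    centroid_y = ys.mean()          # image rows grow downward
    return aspect, area_ratio, centroid_y

def is_fall(aspect, prev_centroid_y, centroid_y,
            aspect_thresh=1.0, drop_thresh=5.0):
    """Illustrative fusion rule: silhouette wider than tall AND a sharp
    downward jump of the centroid between frames."""
    return aspect < aspect_thresh and (centroid_y - prev_centroid_y) > drop_thresh

standing = np.zeros((20, 20), dtype=np.uint8)
standing[2:12, 8:11] = 1            # tall, narrow silhouette
lying = np.zeros((20, 20), dtype=np.uint8)
lying[14:17, 4:14] = 1              # short, wide silhouette, lower in frame

a1, _, c1 = fall_features(standing)
a2, _, c2 = fall_features(lying)
print(is_fall(a2, c1, c2))  # True
```

A real implementation would also fuse the center-of-gravity slope angle and smooth the decision over several frames before alarming.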

Specifically, step 2 comprises the following sub-steps:

S201: compute the mean, variance and weight of the background-model parameters of the input image with the Gaussian-mixture-model background-subtraction method;

S202: compute the offset d_t between adjacent input frames by the phase-correlation method. The offsets in the x and y directions are first obtained from the normalized cross-power spectrum of the two frames,

    F1(u,v) · F2*(u,v) / |F1(u,v) · F2*(u,v)| = e^(−j2π(u·x0 + v·y0)),

whose inverse Fourier transform peaks at the displacement (x0, y0);

S203: accumulate the offsets d_t until the accumulated sum exceeds a set threshold d_T; record the frame at that moment as frame n−t, and take frame n−t as the previous frame of the current frame n to be detected;

S204: update the background-model parameters of frame n−t: at each position (x′, y′)_object where frame n−t detected the moving target, replace the mean, variance and weight with the mean, variance and weight of the same position in frame n−1, i.e.

    μ_{n−t}(x′, y′)_object = μ_{n−1}(x′, y′),
    σ²_{n−t}(x′, y′)_object = σ²_{n−1}(x′, y′),
    ω_{n−t}(x′, y′)_object = ω_{n−1}(x′, y′);

S205: rebuild the background model of the frame n to be detected from the updated background model of frame n−t, and thereby detect the position of the moving target in the image;

S206: apply morphological processing to the detected moving target, removing small discontinuous parts and smoothing the target contour.
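The morphological cleanup of S206 can be illustrated with a small pure-NumPy sketch; a 3×3 square structuring element and the border-handling conventions are assumptions made for the example:

```python
import numpy as np

def dilate(mask, k=3):
    pad = k // 2
    p = np.pad(mask, pad)  # zero padding: nothing grows in from the border
    h, w = mask.shape
    return np.array([[p[y:y + k, x:x + k].max() for x in range(w)]
                     for y in range(h)])

def erode(mask, k=3):
    pad = k // 2
    p = np.pad(mask, pad, constant_values=1)  # one padding: border pixels survive
    h, w = mask.shape
    return np.array([[p[y:y + k, x:x + k].min() for x in range(w)]
                     for y in range(h)])

def opening(mask):  # erosion then dilation: removes small specks
    return dilate(erode(mask))

def closing(mask):  # dilation then erosion: fills small holes
    return erode(dilate(mask))

blob = np.ones((5, 5), dtype=np.uint8)
blob[2, 2] = 0                      # one-pixel hole in the silhouette
print(closing(blob)[2, 2])          # 1: the hole is filled

speck = np.zeros((5, 5), dtype=np.uint8)
speck[2, 2] = 1                     # isolated noise pixel
print(opening(speck).sum())         # 0: the speck is removed
```

Opening suppresses the small false-foreground fragments left by background subtraction, while closing bridges the "small discontinuities" inside the body silhouette that the text describes.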

Specifically, in step 2, F1(u,v) denotes the Fourier transform of the previous frame and F2*(u,v) the complex conjugate of the Fourier transform of the following frame; x0 and y0 are the offsets in the x and y directions respectively, and the offset between adjacent frames is computed as

    d_t = √(x0² + y0²).
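The phase-correlation offset of S202 and the d_t computation can be sketched with NumPy FFTs. Sign conventions for the cross-power spectrum vary; here it is formed so that the correlation peak lands at the shift of the second frame relative to the first:

```python
import numpy as np

def frame_offset(f1, f2):
    """Phase correlation: the inverse FFT of the normalized cross-power
    spectrum peaks at the translation (x0, y0); d_t = sqrt(x0^2 + y0^2)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # normalize; avoid division by zero
    corr = np.fft.ifft2(cross).real
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = f1.shape                          # map wrap-around indices to
    if y0 > h // 2: y0 -= h                  # negative shifts
    if x0 > w // 2: x0 -= w
    return int(x0), int(y0), float(np.hypot(x0, y0))

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))  # shift down 3, right 5
print(frame_offset(frame1, frame2))  # x0 = 5, y0 = 3, d_t = sqrt(34)
```

Because the spectrum is normalized to pure phase, the peak location depends only on the translation, which is what makes the accumulated d_t a usable measure of inter-frame motion.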

Specifically, in step 2, (x′, y′)_object denotes the pixel coordinates at which frame n−t detected the moving target.

Specifically, step 3 comprises the following sub-steps:

S301: in the YUV color space, use luminance and chrominance information to screen candidate shadow pixels; to cover all shadow pixels, the foreground threshold is set to a smaller value and the background threshold to a larger value;

S302: among the shadow pixels extracted above, find the connected components, each of which corresponds to a different candidate shadow region;

S303: compute the gradient magnitude and direction of each pixel in the shadow region:

    |∇f(x, y)| = √(G_x² + G_y²),  θ_xy = arctan(G_y / G_x);
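The per-pixel gradient magnitude and direction of S303 can be computed, for instance, with central differences; `arctan2` is used here instead of a plain arctan to keep the full angular range (an implementation choice, not specified by the patent):

```python
import numpy as np

def gradient(img):
    """Gradient magnitude |grad f| = sqrt(Gx^2 + Gy^2) and direction
    theta = arctan2(Gy, Gx), via central differences."""
    gy, gx = np.gradient(img.astype(float))  # axis 0 = rows (y), axis 1 = cols (x)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

ramp = np.tile(np.arange(5.0), (5, 1))  # brightness increases left to right
mag, ang = gradient(ramp)
print(mag[2, 2], ang[2, 2])  # 1.0 0.0
```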

S304: compute the gradient-direction difference between the background and foreground images; the gradient directions can be converted into an angular distance and the difference computed as

    Δθ(x, y) = arccos( (G_x^F·G_x^B + G_y^F·G_y^B) / ( √((G_x^F)² + (G_y^F)²) · √((G_x^B)² + (G_y^B)²) ) );

S305: from the gradient-direction differences obtained in S304, compute the gradient-direction correlation

    c = (1/n) · Σ_{i=1}^{n} 1(Δθ_i < τ).

Specifically, in step 3, |∇f(x, y)| and θ_xy denote the gradient magnitude and direction of pixel (x, y) respectively. Considering the influence of noise, a gradient threshold is set and only shadow pixels whose gradient exceeds it are retained; edge pixels are given larger weights because they carry the image's edge features.

Specifically, in step 3, G_x^F, G_y^F and G_x^B, G_y^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient-direction difference.

Specifically, in step 3, n denotes the total number of pixels in the candidate region, and 1(Δθ < τ) equals 1 when the gradient-direction difference Δθ is smaller than the threshold τ and 0 otherwise; when the ratio c exceeds a certain threshold, the candidate region is judged to be a shadow region, otherwise it is judged to be a foreground region.

Specifically, in step 3, G_x^F, G_y^F and G_x^B, G_y^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient-direction difference.

Specifically, step 4 comprises the following sub-steps:

S401: detect the moving target with the PC-GMM moving-object detection algorithm;

S402: compute the similarity and the height and width changes between the detection result of the current frame and that of the previous frame, and judge whether to track the target with the tracking algorithm; if tracking is needed, go to S403, otherwise jump back to step 1;

S403: compute the APCE value of the current frame and the historical average (APCE)_average over the previous 5 frames,

    APCE = |F_max − F_min|² / mean_{w,h}( (F_{w,h} − F_min)² );

S404: compare the two values. When the APCE value is smaller than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range; when the APCE value is larger than (APCE)_average, the target region obtained by the tracking algorithm is differenced against the Gaussian background model to extract the binary image of the moving target.
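The APCE criterion of S403 and the loss rule of S404 can be illustrated as below; APCE (average peak-to-correlation energy) measures how sharply peaked a KCF response map is. The response maps here are synthetic, and the 5-frame history follows the text:

```python
import numpy as np

def apce(response):
    """APCE = |F_max - F_min|^2 / mean((F_{w,h} - F_min)^2)."""
    fmin = response.min()
    return float((response.max() - fmin) ** 2 / np.mean((response - fmin) ** 2))

def target_lost(current_apce, apce_history, pcgmm_found):
    """S404 rule: target judged out of view when the current APCE is below
    the historical average AND PC-GMM detection finds no target."""
    return current_apce < float(np.mean(apce_history)) and not pcgmm_found

sharp = np.zeros((5, 5)); sharp[2, 2] = 1.0        # confident, single peak
diffuse = np.linspace(0.0, 1.0, 25).reshape(5, 5)  # spread-out response
print(apce(sharp) > apce(diffuse))  # True: a sharp peak scores higher
print(target_lost(apce(diffuse), [apce(sharp)] * 5, pcgmm_found=False))  # True
```

A high APCE means a single dominant correlation peak (confident tracking); a response that flattens out relative to its own history signals occlusion or loss, which is when S404 falls back to the detector.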

The method of the embodiment was tested and analyzed and compared with the prior art, yielding the following data:

|            | Prediction accuracy | Anti-interference ability | Early-warning efficiency |
|------------|---------------------|---------------------------|--------------------------|
| Embodiment | Higher              | Higher                    | Higher                   |
| Prior art  | Lower               | Lower                     | Lower                    |

From the table above it can be seen that the embodiment of the human body multi-feature fusion fall detection method combines a shape-feature detection algorithm, a head-trajectory detection algorithm and a motion-feature detection algorithm: the shape-feature method detects fall events from changes in body posture, the head-trajectory method judges falls by tracking the motion trajectory of the head, and the motion-feature method detects falls mainly from spatial and temporal information.

The invention provides a human body multi-feature fusion fall detection method comprising the following steps.

Step 1. Acquire an image sequence from the surveillance video, preprocess the input image, and remove noise in the image with a median filtering algorithm.

Step 2. Detect the moving human body appearing in the surveillance video with the PC-GMM moving-object detection algorithm, and eliminate small discontinuities between images using morphological erosion, dilation, opening, and closing operations, so as to obtain a smoother human body target.

S201. Compute the background-model parameters (mean, variance, and weight) of the input image with the Gaussian-mixture-model background-difference method.

S202. Compute the offset d_t between adjacent input frames by the phase-correlation method. The offset is first obtained from the expression

Figure RE-GDA0003538701840000121

which yields the offsets of the adjacent frame images in the x and y directions.

S203. Accumulate the offsets d_t until the accumulated sum exceeds the set threshold d_T; record the image at this moment as frame n−t, and take frame n−t as the previous frame of the current frame n to be detected.

S204. Update the background-model parameters of frame n−t: the mean, variance, and weight at the positions where the moving target was detected in frame n−t are replaced by the mean, variance, and weight at the same positions in frame n−1. The parameter-update expression is as follows:

Figure RE-GDA0003538701840000131
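The S204 update amounts to copying the per-pixel Gaussian statistics at moving-target positions of frame n−t from frame n−1. A minimal NumPy sketch of that copy, with invented array names and a single-Gaussian-per-pixel simplification (the PC-GMM of the patent keeps a mixture per pixel):

```python
import numpy as np

H, W = 4, 4  # tiny frame for illustration

# Per-pixel background statistics (one Gaussian per pixel for brevity).
mean_nt   = np.full((H, W), 100.0)   # background model of frame n-t
var_nt    = np.full((H, W), 25.0)
weight_nt = np.full((H, W), 1.0)

mean_n1   = np.full((H, W), 120.0)   # statistics of frame n-1
var_n1    = np.full((H, W), 30.0)
weight_n1 = np.full((H, W), 0.9)

# Binary mask of pixels where frame n-t detected the moving target,
# i.e. the (x', y')_object coordinates.
target_mask = np.zeros((H, W), dtype=bool)
target_mask[1:3, 1:3] = True

# S204: overwrite mean/variance/weight only at the target positions.
mean_nt[target_mask]   = mean_n1[target_mask]
var_nt[target_mask]    = var_n1[target_mask]
weight_nt[target_mask] = weight_n1[target_mask]

print(mean_nt[1, 1], mean_nt[0, 0])  # 120.0 100.0
```

Only the pixels covered by the detected target are refreshed; the rest of the background model is left untouched, which is what lets S205 rebuild a clean model for frame n.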

S205. Reconstruct the background model of frame n (the frame to be detected) from the updated background model of frame n−t, thereby detecting the position of the moving target in the image.

S206. Apply morphological processing to the detected moving target to remove small discontinuous parts and smooth the moving-target contour.

Here F_1(u,v) denotes the Fourier transform of the adjacent previous frame, F_2*(u,v) denotes the complex conjugate of the Fourier transform of the adjacent next frame, and x_0, y_0 are the offsets in the x and y directions, respectively. The offset d_t between adjacent frame images is computed as

Figure RE-GDA0003538701840000132
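The phase-correlation step (S202) can be sketched with NumPy's FFT: the normalized cross-power spectrum built from F_1(u,v) and F_2*(u,v) inverse-transforms to an impulse at the inter-frame offset (x_0, y_0). A minimal sketch on a synthetic circular shift; the patent's d_t accumulation and threshold d_T are not reproduced here.

```python
import numpy as np

def phase_correlation_offset(frame1, frame2):
    """Estimate the (dy, dx) translation such that frame2 is frame1
    shifted by (dy, dx), via the normalized cross-power spectrum."""
    F1 = np.fft.fft2(frame1)
    F2 = np.fft.fft2(frame2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real         # impulse at the offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if dy > frame1.shape[0] // 2: dy -= frame1.shape[0]
    if dx > frame1.shape[1] // 2: dx -= frame1.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
prev_frame = rng.random((64, 64))
next_frame = np.roll(prev_frame, shift=(3, -5), axis=(0, 1))  # known shift

print(phase_correlation_offset(prev_frame, next_frame))  # (3, -5)
```

Because the correlation surface is an impulse for a pure translation, the argmax recovers the shift exactly on this synthetic pair; real frames give a blurred peak whose location still estimates d_t.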
(x', y') object represents the pixel coordinates of the moving target detected by the nt frame image. Step 3: Through the shadow detection algorithm fused with YUV chromaticity and gradient features, detect whether the moving body obtained in step 2 has shadows If there is a shadow area, eliminate the shadow to obtain a more accurate human target. S301. Use the luminance and chromaticity information to filter out candidate shadow pixels in the YUV color space. In order to cover all shadow pixels, the foreground threshold needs to be set. is a smaller value, the background threshold is set to a larger value, S302, find the part of the pixel connection from the shadow pixels extracted above, each part corresponds to a different candidate shadow area, S303, calculate the gradient of the shadow area pixels and direction, use the following expression to calculate the gradient and direction of the pixel point, the expression is

Figure RE-GDA0003538701840000133
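The per-pixel gradient of S303 presumably follows the standard finite-difference form m(x,y) = √(g_x² + g_y²), θ(x,y) = arctan2(g_y, g_x); the exact expression appears only as an image above, so this NumPy sketch assumes that standard form:

```python
import numpy as np

def gradient_mag_dir(img):
    """Per-pixel gradient magnitude and direction (radians) of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))   # central differences
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return magnitude, direction

# A horizontal ramp: intensity grows along x, so the gradient points along
# +x (direction 0) with constant magnitude 1.
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
mag, ang = gradient_mag_dir(ramp)
print(mag[4, 4], ang[4, 4])   # 1.0 0.0
```

In practice a Sobel operator is a common drop-in for `np.gradient` here; either gives the magnitude/direction pair that S304–S305 compare between foreground and background.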

S304. Compute the gradient-direction difference between the background and foreground images; the gradient directions can be converted into angular distances to compute the difference:

Figure RE-GDA0003538701840000141

S305. Obtain the gradient-direction difference from the expression in S304, and compute the gradient-direction correlation:

Figure RE-GDA0003538701840000142

Figure RE-GDA0003538701840000143

and θ_xy denote the gradient magnitude and direction of pixel (x, y), respectively. Considering the influence of noise, a gradient threshold is set and only the shadow pixels whose gradient exceeds it are retained; edge pixels are given larger weights because these pixels carry the edge features of the image.

Figure RE-GDA0003538701840000144

and

Figure RE-GDA0003538701840000145

denote the gradient values of pixels in the x and y directions in the foreground and background images, respectively;

Figure RE-GDA0003538701840000148

denotes the gradient-direction difference, and n denotes the total number of pixels in the candidate region;

Figure RE-GDA0003538701840000147

takes the value 1 when the gradient-direction difference

Figure RE-GDA0003538701840000148

is less than the threshold τ, and 0 otherwise. When the ratio c is greater than a certain threshold, the candidate region is judged to be a shadow region; otherwise it is judged to be a foreground region.
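The correlation measure above is, in effect, the fraction c of candidate-region pixels whose foreground/background gradient-direction difference stays below τ. A small sketch under that reading; the threshold values and the angular-distance wrap are illustrative assumptions:

```python
import numpy as np

def direction_correlation(theta_fg, theta_bg, tau=np.pi / 6):
    """Fraction c of pixels whose angular gradient-direction difference
    (wrapped to [0, pi]) is below the threshold tau."""
    diff = np.abs(theta_fg - theta_bg) % (2 * np.pi)
    diff = np.minimum(diff, 2 * np.pi - diff)     # angular distance
    return np.count_nonzero(diff < tau) / diff.size

rng = np.random.default_rng(2)
bg_dir = rng.uniform(-np.pi, np.pi, size=(10, 10))

# A shadow darkens the background but keeps its texture, so gradient
# directions barely change; a real object replaces the texture entirely.
shadow_like = bg_dir + rng.normal(0.0, 0.05, size=bg_dir.shape)
object_like = rng.uniform(-np.pi, np.pi, size=bg_dir.shape)

c_shadow = direction_correlation(shadow_like, bg_dir)
c_object = direction_correlation(object_like, bg_dir)
print(c_shadow, c_object)
```

A high c marks the candidate region as shadow (texture preserved); a low c marks it as true foreground, matching the S305 decision rule.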
Figure RE-GDA0003538701840000149

and

Figure RE-GDA00035387018400001410

denote the gradient values of pixels in the x and y directions in the foreground and background images, respectively;

Figure RE-GDA00035387018400001411

denotes the gradient-direction difference.

Step 4. Compare the similarity and the width and height changes between the detection results of the two consecutive frames, and use the tracking activation mechanism to decide whether the moving human body needs to be tracked. If the mechanism decides that tracking is needed, obtain the human body tracking position with a tracking algorithm combining PC-GMM and KCF; otherwise, jump directly to Step 5.

S401. Detect the moving target with the PC-GMM moving-object detection algorithm.

S402. Compute the similarity between the detection results of the current frame and the previous frame, together with the changes in height and width, to decide whether to track the target with the tracking algorithm. If the target needs to be tracked, execute Step 3; otherwise, jump back to Step 1.

S403. Compute the APCE value of the current frame and the historical average (APCE)_average over the previous 5 frames:

Figure RE-GDA00035387018400001412

S404. Compare the two values. If the APCE value is less than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range. If the APCE value is greater than (APCE)_average, the target region obtained by the tracking algorithm is differenced against the Gaussian background model to extract a binary image of the moving target.

Step 5. Judge whether the human body is in a falling state by fusing multiple features such as the human body aspect ratio, the effective area ratio, the centroid-height change, and the center-of-gravity slope angle. When the fall detection system determines that a person is in a falling state, the system issues a warning and automatically seeks help.
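Step 5's decision can be sketched by extracting some of the listed features from the binary silhouette and voting on them. The thresholds, the voting rule, and the omission of the center-of-gravity slope angle are illustrative simplifications, not the patent's calibrated values:

```python
import numpy as np

def fall_features(mask):
    """Aspect ratio, effective area ratio, and normalized centroid height of
    a binary human silhouette (mask: 2-D bool array, row 0 = top of image)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect_ratio = h / w                      # height / width
    area_ratio = mask.sum() / (h * w)         # silhouette vs. bounding box
    centroid_y = ys.mean() / mask.shape[0]    # 0 = top, 1 = bottom
    return aspect_ratio, area_ratio, centroid_y

def is_fallen(mask, prev_centroid_y=0.5):
    ar, _, cy = fall_features(mask)
    wide_posture = ar < 1.0                   # lying: wider than tall
    centroid_drop = cy - prev_centroid_y > 0.2
    return bool(wide_posture and centroid_drop)  # simple 2-of-2 vote

fallen = np.zeros((100, 100), dtype=bool)
fallen[80:95, 20:80] = True                   # low, wide blob near the floor
standing = np.zeros((100, 100), dtype=bool)
standing[20:90, 45:55] = True                 # tall, narrow blob

print(is_fallen(fallen), is_fallen(standing))  # True False
```

Fusing several weak cues this way is what makes the method robust: a crouch changes the aspect ratio but not the centroid drop, so a single feature alone does not trigger the alarm.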

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the invention; the scope of the present invention is defined by the appended claims and their equivalents.

Claims (10)

1. A human body multi-feature fusion fall detection method is characterized by comprising the following steps:
s1, acquiring an image sequence from the monitoring video, preprocessing an input image, and removing noise in the image by using a median filtering algorithm;
s2, detecting a moving human body appearing in the monitoring video through a PC-GMM moving object detection algorithm, and eliminating tiny discontinuities between images by utilizing morphological erosion, dilation, opening and closing operations, so as to obtain a smoother human body target;
s3, detecting whether the moving human body obtained in step S2 has a shadow area through a shadow detection algorithm fusing YUV chrominance and gradient features, and, if a shadow area exists, eliminating the shadow to obtain a more accurate human body target;
s4, comparing the similarity and the width and height changes of the detection results of two consecutive frames, activating the tracking judgment mechanism, and judging whether the moving human body needs to be tracked; if the judgment mechanism decides that tracking is needed, acquiring the human body tracking position by a tracking algorithm combining PC-GMM and KCF, and otherwise jumping directly to step S5;
s5, judging whether the human body is in a falling state by fusing multiple features such as the height-width ratio of the human body, the effective area ratio, the height change of the centroid, and the slope angle of the center of gravity; when the fall detection system determines that the human body is in a falling state, the system gives an alarm and automatically seeks help.
2. The human body multi-feature fusion fall detection method as claimed in claim 1, comprising the steps of: according to the operation step in S2,
s201, calculating a background model parameter mean value, a variance and a weight of an input image by using a mixed Gaussian model background difference method;
s202, calculating the offset d_t between adjacent input frame images by a phase correlation method; the offset is first obtained according to the expression:

Figure RE-FDA0003538701830000021

which yields the offsets of the adjacent frame images in the x and y directions;
s203, accumulating the offsets d_t until the accumulated sum is larger than a set threshold d_T; recording the image at this moment as the frame n-t image, and taking the frame n-t image as the previous frame of the current frame n to be detected;
s204, updating the background model parameters of the n-t frame image, and updating the mean value, the variance and the weight of the position of the motion target detected by the n-t frame image into the mean value, the variance and the weight of the same position of the n-1 frame image, wherein the parameter updating expression is as follows:
Figure RE-FDA0003538701830000022
s205, reconstructing the background model of the nth frame image to be detected from the updated background model of the frame n-t image, thereby detecting the position of the moving target in the image;
and S206, performing morphological processing on the detected moving target, removing fine discontinuous parts, and smoothing the contour of the moving target.
3. A human body multi-feature fusion fall detection method as claimed in claim 2, comprising the steps of: according to the operation procedure in S2, F_1(u, v) denotes the Fourier transform of the adjacent previous frame image, F_2*(u, v) denotes the conjugate of the Fourier transform of the adjacent next frame image, and x_0, y_0 are the offsets in the x and y directions, respectively; the offset d_t between adjacent frame images is computed as

Figure RE-FDA0003538701830000023
4. A human body multi-feature fusion fall detection method as claimed in claim 3, comprising the steps of: according to the operation in S2, (x', y')_object denotes the pixel coordinates of the moving target detected in the frame n-t image.
5. The human body multi-feature fusion fall detection method as claimed in claim 4, comprising the steps of: according to the operation step in S3,
s301, screening out candidate shadow pixel points in a YUV color space by utilizing brightness and chrominance information, wherein in order to cover all shadow pixels, a foreground threshold value needs to be set to be a small value, and a background threshold value needs to be set to be a large value;
s302, searching for connected pixel components among the shadow pixels extracted above, each component corresponding to a different candidate shadow area;
s303, calculating the gradient and the direction of the pixel point in the shadow area, and calculating the gradient and the direction of the pixel point by using the following expression formula
Figure RE-FDA0003538701830000031
S304, calculating the gradient direction difference between the background image and the foreground image, the gradient directions being converted into angular distances to calculate the difference, wherein the expression is
Figure RE-FDA0003538701830000032
S305, obtaining the gradient direction difference from the expression in S304, and calculating the gradient direction correlation, wherein the expression is
Figure RE-FDA0003538701830000033
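The YUV screening of S301 rests on the observation that a cast shadow lowers luminance Y while leaving chrominance U and V nearly unchanged. A hedged sketch of that test; the threshold names and values (alpha_low, alpha_high for the Y ratio, tau_c for chrominance) are invented for illustration:

```python
import numpy as np

def candidate_shadow_mask(y_fg, y_bg, u_fg, u_bg, v_fg, v_bg,
                          alpha_low=0.4, alpha_high=0.9, tau_c=10.0):
    """Candidate shadow pixels: darkened luminance, similar chrominance.
    All inputs are same-shaped float arrays in the YUV color space."""
    ratio = y_fg / np.maximum(y_bg, 1e-6)
    darker = (alpha_low < ratio) & (ratio < alpha_high)
    chroma_ok = (np.abs(u_fg - u_bg) < tau_c) & (np.abs(v_fg - v_bg) < tau_c)
    return darker & chroma_ok

y_bg = np.full((4, 4), 200.0); u_bg = np.full((4, 4), 110.0); v_bg = np.full((4, 4), 120.0)
y_fg = y_bg.copy(); u_fg = u_bg.copy(); v_fg = v_bg.copy()
y_fg[0, 0] = 120.0                       # shadow: darker, same chroma
y_fg[1, 1] = 120.0; u_fg[1, 1] = 160.0   # object: darker but chroma changed

mask = candidate_shadow_mask(y_fg, y_bg, u_fg, u_bg, v_fg, v_bg)
print(mask[0, 0], mask[1, 1])            # True False
```

Loosening alpha_low and tightening alpha_high, as the claim suggests (small foreground threshold, large background threshold), widens the candidate set so that the gradient test of S303–S305 can do the final discrimination.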
6. The human body multi-feature fusion fall detection method as claimed in claim 5, comprising the steps of: according to the operation procedure in S3,

Figure RE-FDA0003538701830000034

and θ_xy denote the gradient magnitude and direction of the pixel (x, y), respectively; considering the influence of noise, a gradient threshold is set and the shadow pixels larger than the gradient threshold are retained, and edge pixels are given larger weights because these pixels contain the edge features of the image.
7. The human body multi-feature fusion fall detection method as claimed in claim 6, comprising the steps of: according to the operation procedure in S3,

Figure RE-FDA0003538701830000041

and

Figure RE-FDA0003538701830000042

Figure RE-FDA0003538701830000043

denote the gradient values of the pixels in the x and y directions in the foreground image and the background image, respectively, and

Figure RE-FDA0003538701830000044

denotes the gradient direction difference.
8. The human body multi-feature fusion fall detection method as claimed in claim 7, comprising the steps of: according to the operation step in S3, n represents the total number of pixels in the candidate region,

Figure RE-FDA0003538701830000045

denotes the gradient direction difference, and

Figure RE-FDA0003538701830000046

takes the value 1 when the gradient direction difference is less than the threshold τ and 0 otherwise; when the ratio c is larger than a certain threshold, the candidate area is considered a shadow area, and otherwise a foreground area.
9. The human body multi-feature fusion fall detection method as claimed in claim 8, comprising the steps of: according to the operation procedure in S3,

Figure RE-FDA0003538701830000047

and

Figure RE-FDA0003538701830000048

Figure RE-FDA0003538701830000049

denote the gradient values of the pixels in the x and y directions in the foreground image and the background image, respectively, and

Figure RE-FDA00035387018300000410

denotes the gradient direction difference.
10. The human body multi-feature fusion fall detection method as claimed in claim 9, comprising the steps of: according to the operation step in S4,
s401, detecting a moving target by utilizing a PC-GMM moving target detection algorithm;
s402, calculating the similarity between the current frame image detection result and the previous frame image detection result, together with the changes in height and width, and judging whether to track the target with the tracking algorithm; if it is determined that the target needs to be tracked, executing step S3, otherwise returning to step S1;
s403, calculating the APCE value of the current frame image and the historical average (APCE)_average of the previous 5 frame images:
Figure RE-FDA0003538701830000051
S404, comparing the two values: when the APCE value is smaller than (APCE)_average and the PC-GMM detects no target, the target is judged to have moved out of the monitoring range of the camera; when the APCE value is larger than (APCE)_average, carrying out a difference operation on the target area obtained by the tracking algorithm and the Gaussian background model, and extracting a binary image of the moving target.
CN202111593285.0A 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method Withdrawn CN114333235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111593285.0A CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111593285.0A CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Publications (1)

Publication Number Publication Date
CN114333235A true CN114333235A (en) 2022-04-12

Family

ID=81054258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111593285.0A Withdrawn CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Country Status (1)

Country Link
CN (1) CN114333235A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205981A (en) * 2022-09-08 2022-10-18 深圳市维海德技术股份有限公司 Standing posture detection method and device, electronic equipment and readable storage medium
CN115205981B (en) * 2022-09-08 2023-01-31 深圳市维海德技术股份有限公司 Standing posture detection method and device, electronic equipment and readable storage medium
CN117671799A (en) * 2023-12-15 2024-03-08 武汉星巡智能科技有限公司 Human body falling detection method, device, equipment and medium combining depth measurement
CN118314688A (en) * 2024-04-15 2024-07-09 河北省胸科医院 A dynamic tracking and state analysis method and clinical nursing system
CN118658207A (en) * 2024-07-16 2024-09-17 广州星瞳信息科技有限责任公司 Intelligent fall detection method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN114333235A (en) Human body multi-feature fusion falling detection method
Wang et al. Automatic fall detection of human in video using combination of features
CN110781733B (en) Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
CN107657244B (en) A multi-camera-based human fall behavior detection system and its detection method
CN104866841B (en) A kind of human body target is run behavioral value method
CN102073851A (en) Method and system for automatically identifying urban traffic accident
CN106204640A (en) A kind of moving object detection system and method
JP2012053756A (en) Image processor and image processing method
CN114842397A (en) Real-time old man falling detection method based on anomaly detection
CN115116127A (en) A fall detection method based on computer vision and artificial intelligence
CN114469076B (en) Identity-feature-fused fall identification method and system for solitary old people
CN108509938A (en) A kind of fall detection method based on video monitoring
CN111275910A (en) Method and system for detecting border crossing behavior of escalator based on Gaussian mixture model
CN104036250A (en) Video pedestrian detecting and tracking method
CN108805021A (en) The real-time individual tumble behavioral value alarm method of feature based operator
TWI493510B (en) Falling down detection method
JP2020109644A (en) Fall detection method, fall detection apparatus, and electronic device
CN117037272B (en) Method and system for monitoring fall of old people
Ali et al. Human fall detection
CN110765925B (en) Method for detecting carrying object and identifying gait based on improved twin neural network
Dorgham et al. Improved elderly fall detection by surveillance video using real-time human motion analysis
Khraief et al. Vision-based fall detection for elderly people using body parts movement and shape analysis
CN111310689A (en) Method for recognizing human body behaviors in potential information fusion home security system
Lee et al. Automated abnormal behavior detection for ubiquitous healthcare application in daytime and nighttime
De et al. Fall detection approach based on combined two-channel body activity classification for innovative indoor environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220412