CN114333235A - Human body multi-feature fusion falling detection method - Google Patents
Human body multi-feature fusion fall detection method
- Publication number
- CN114333235A (application CN202111593285.0A)
- Authority
- CN
- China
- Prior art keywords
- human body
- image
- gradient
- frame image
- target
- Prior art date: 2021-12-23
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Image Analysis (AREA)
Abstract
Description
TECHNICAL FIELD
The present invention relates to the technical field of fall early warning, and in particular to a fall detection method based on the fusion of multiple human body features.
BACKGROUND ART
Falls are one of the leading causes of injury among older adults, and fall frequency increases with age and frailty level [1]. Roughly 30% of people over 65 fall each year, and the proportion rises to as much as 40% among those over 79. Between 20% and 30% of elderly people who fall are injured, and many cannot stand up on their own afterwards; without timely rescue and treatment, a fall can lead to serious complications or even death.
As the elderly population expands rapidly, China is entering a stage of population aging. Surveys found that in 2015 people over 65 accounted for 16.2% of China's total population, more than 220 million people, and the figure is projected to exceed 300 million by 2025. The number of people in "empty-nest families" [2], defined by sociologists as households with no children or whose children live elsewhere, has grown rapidly to about 120 million. With weakened physical function, elderly people may lose balance and fall, suffering fractures or other serious injuries; with no one at home to care for them, they may not receive timely help and treatment. Even those lucky enough to survive a fall may still need medical assistance to stay alive, and the event can inflict severe psychological trauma.
An intelligent fall detection system can be defined as an assistive device whose main function is to raise an alarm the moment an elderly person falls. This reduces the harm suffered after a fall, eases the fear the fall produces, and allows help to arrive promptly. In real fall events, falls and the fear of falling reinforce each other: a fall breeds fear of falling, and that fear in turn raises the risk of further falls, amplifying the resulting harm. Against this background, intelligent fall detection is of great significance both for academic research and for practical applications.
Existing fall detection approaches divide into three categories according to sensor and detection method: wearable-sensor detection, ambient-sensor detection, and computer-vision detection. Wearable-sensor methods attach accelerometers, gyroscopes and similar sensors to the chest, wrist and waist, collect the body's motion data, and infer from changes in the data whether a fall has occurred. Ambient-sensor methods differ from wearable ones mainly in that they collect the audio and vibration signals produced during a fall from the surrounding environment and then classify them through signal analysis. Computer-vision detection has become a research hotspot in the field. Vision-based fall detection in turn comprises three kinds of methods: shape-feature algorithms, head-trajectory algorithms and motion-feature algorithms. Shape-feature methods detect fall events from changes in human posture; head-trajectory methods judge falls by tracking the motion of the head; motion-feature methods detect falls mainly from spatial and temporal information.
SUMMARY OF THE INVENTION
The purpose of the present invention is to provide a human body multi-feature fusion fall detection method. By combining hardware switching with self-checking of level values, the method can first quickly determine whether interference is present, improving the convenience of operation; second, backup radio-frequency components provide design redundancy; and finally, repeated self-checks improve the convenience of installation.
To achieve the above effects, the present invention provides the following technical solution: a human body multi-feature fusion fall detection method, comprising the following steps:

Step 1: Acquire an image sequence from the surveillance video, preprocess the input images, and remove image noise with a median filter;

Step 2: Detect the moving human body appearing in the surveillance video with the PC-GMM (phase correlation-Gaussian mixture model) moving-target detection algorithm, and use morphological erosion, dilation, opening and closing to remove small discontinuities between images and obtain a smoother human target (Steps 1 and 2 are sketched in the code after this list);

Step 3: Using a shadow detection algorithm that fuses YUV chrominance and gradient features, check whether the moving human body obtained in Step 2 contains a shadow region; if it does, remove the shadow to obtain a more accurate human target;

Step 4: Compare the similarity and the width and height changes between the detection results of the previous and current frames, and use the tracking-enable decision mechanism to judge whether the moving human body needs to be tracked. If tracking is required, obtain the human body's position with a tracking algorithm combining PC-GMM and KCF (kernelized correlation filters); otherwise, jump directly to Step 5;

Step 5: Judge whether the human body is in a fallen state by fusing multiple features such as the height-to-width ratio, effective area ratio, change in centroid height, and slope angle of the center of gravity. When the fall detection system determines that a person has fallen, it issues a warning and automatically seeks help.
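The following is a minimal sketch of the preprocessing and morphological cleanup of Steps 1 and 2, assuming OpenCV. The kernel and filter sizes are illustrative, and OpenCV's MOG2 subtractor (a plain Gaussian mixture model) stands in for the patent's PC-GMM detector, for which no off-the-shelf implementation exists:

```python
import cv2
import numpy as np

# MOG2 is a stand-in for the patent's PC-GMM detector (assumption);
# kernel and filter sizes are illustrative.
bg_model = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def extract_human_mask(frame: np.ndarray) -> np.ndarray:
    denoised = cv2.medianBlur(frame, 5)                 # Step 1: median filter
    fg = bg_model.apply(denoised)                       # Step 2: foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # dilation then erosion
    return fg
```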
Further, according to the operations in Step 2, the method includes the following steps:

S201: Compute the mean, variance and weight parameters of the input image's background model with the Gaussian-mixture-model background subtraction method;

S202: Compute the offset d_t between adjacent input frames by the phase correlation method. The offsets of adjacent frames in the x and y directions are first obtained from the normalized cross-power spectrum F1(u,v)F2*(u,v) / |F1(u,v)F2*(u,v)| = e^(-j2π(u·x0 + v·y0)), whose inverse Fourier transform peaks at (x0, y0);

S203: Accumulate the offsets d_t until the running sum exceeds a set threshold d_T; record the frame at that point as frame n-t, and take frame n-t as the previous frame of the current frame n to be detected;

S204: Update the background model parameters of frame n-t, replacing the mean, variance and weight at the positions where frame n-t detected the moving target with the mean, variance and weight at the same positions in frame n-1, i.e. (μ, σ², ω)_{n-t}(x', y')_object = (μ, σ², ω)_{n-1}(x', y');

S205: Rebuild the background model of frame n, the frame to be detected, from the updated background model of frame n-t, and thereby detect the position of the moving target in the image;

S206: Apply morphological processing to the detected moving target to remove small discontinuities and smooth its contour.
Further, according to the operations in Step 2: F1(u,v) denotes the Fourier transform of the previous of the two adjacent frames, and F2*(u,v) denotes the conjugate of the Fourier transform of the following frame; x0 and y0 are the offsets in the x and y directions respectively, and the offset between adjacent frames is computed as d_t = √(x0² + y0²).
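A minimal NumPy sketch of this phase-correlation offset, following the standard formulation above; the function name and the small epsilon guard are illustrative:

```python
import numpy as np

def frame_offset(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    """Phase-correlation offset d_t between two grayscale frames (sketch)."""
    F1 = np.fft.fft2(prev_gray)
    F2 = np.fft.fft2(curr_gray)
    cross = F1 * np.conj(F2)                  # F1(u,v) * F2*(u,v)
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross)                # impulse at the translation
    py, px = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = prev_gray.shape
    y0 = py if py <= h // 2 else py - h       # wrap peak into signed offsets
    x0 = px if px <= w // 2 else px - w
    return float(np.hypot(x0, y0))            # d_t = sqrt(x0^2 + y0^2)
```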
Further, according to the operations in Step 2: (x', y')_object denotes the pixel coordinates at which frame n-t detected the moving target.
Further, according to the operations in Step 3, the method includes the following steps:

S301: Screen candidate shadow pixels in the YUV color space using luminance and chrominance information; to cover all shadow pixels, the foreground threshold is set to a relatively small value and the background threshold to a relatively large value;

S302: Find the connected components among the shadow pixels extracted above; each component corresponds to a different candidate shadow region;

S303: Compute the gradient magnitude and direction of each pixel in the shadow region as G_xy = √(Gx² + Gy²) and θ_xy = arctan(Gy / Gx), where Gx and Gy are the pixel's gradients in the x and y directions;

S304: Compute the gradient direction difference between the background and foreground images; the gradient directions can be converted to an angular distance, Δθ = arctan((Gy^F·Gx^B - Gx^F·Gy^B) / (Gx^F·Gx^B + Gy^F·Gy^B));

S305: From the difference Δθ obtained in S304, compute the gradient direction correlation c = (1/n) Σ H(Δθ < τ), where the sum runs over the n pixels of the candidate region and H(·) equals 1 when its argument holds and 0 otherwise.
Further, according to the operations in Step 3: G_xy and θ_xy denote the gradient magnitude and direction of pixel (x, y). To allow for noise, a gradient threshold is set and only shadow pixels whose gradient exceeds it are retained; edge pixels are given larger weights because they carry the image's edge features.

Further, according to the operations in Step 3: Gx^F, Gy^F and Gx^B, Gy^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient direction difference.

Further, according to the operations in Step 3: n denotes the total number of pixels in the candidate region, and the indicator H(Δθ < τ) equals 1 when the gradient direction difference is below the threshold τ and 0 otherwise; when the ratio c exceeds a set threshold, the candidate region is judged to be a shadow region, otherwise it is judged to be foreground.

Further, according to the operations in Step 3: Gx^F, Gy^F and Gx^B, Gy^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient direction difference.
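A minimal sketch of the S303-S305 shadow test for one candidate region, assuming OpenCV and NumPy; the gradient threshold, the angle threshold τ and the function name are assumptions, and the edge-pixel weighting described above is omitted for brevity:

```python
import cv2
import numpy as np

def shadow_region_score(fg_patch: np.ndarray, bg_patch: np.ndarray,
                        grad_thresh: float = 10.0,
                        tau: float = np.pi / 8) -> float:
    """Gradient direction correlation c for one candidate shadow region.

    fg_patch and bg_patch are the same grayscale region cut from the
    current (foreground) image and from the background image.
    """
    gx_f = cv2.Sobel(fg_patch, cv2.CV_64F, 1, 0)
    gy_f = cv2.Sobel(fg_patch, cv2.CV_64F, 0, 1)
    gx_b = cv2.Sobel(bg_patch, cv2.CV_64F, 1, 0)
    gy_b = cv2.Sobel(bg_patch, cv2.CV_64F, 0, 1)

    # S303: keep pixels whose gradient magnitude exceeds the noise threshold
    keep = np.hypot(gx_f, gy_f) > grad_thresh

    # S304: angular distance between foreground and background gradients
    dtheta = np.arctan2(gx_f * gy_b - gy_f * gx_b,
                        gx_f * gx_b + gy_f * gy_b)

    # S305: fraction of retained pixels whose direction difference is below tau
    n = int(keep.sum())
    return float((np.abs(dtheta[keep]) < tau).sum() / n) if n else 0.0
```

Per S305, when the returned ratio c exceeds a chosen threshold (for example 0.7, an assumed value), the candidate region is treated as shadow and removed from the foreground mask.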
Further, according to the operations in Step 4, the method includes the following steps:

S401: Detect the moving target with the PC-GMM moving-target detection algorithm;

S402: Compute the similarity and the height and width changes between the current frame's detection result and the previous frame's, and judge whether to track the target with the tracking algorithm; if tracking is required, proceed to S403, otherwise jump back to Step 1;

S403: Compute the APCE (average peak-to-correlation energy) value of the current frame and the historical average (APCE)_average over the previous 5 frames;

S404: Compare the two values. When the APCE value is less than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range; when the APCE value is greater than (APCE)_average, take the target region returned by the tracking algorithm, difference it against the Gaussian background model, and extract the binary image of the moving target.
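The patent does not spell out the APCE formula; the sketch below uses the standard definition from the correlation-filter tracking literature, APCE = |F_max - F_min|² / mean((F - F_min)²), applied to the KCF response map:

```python
import numpy as np

def apce(response: np.ndarray) -> float:
    """Average peak-to-correlation energy of a tracker response map."""
    f_max, f_min = response.max(), response.min()
    return float((f_max - f_min) ** 2 /
                 (np.mean((response - f_min) ** 2) + 1e-12))

# S403-S404 confidence check (history length 5 per the patent; names assumed):
# if apce(resp) < np.mean(apce_history[-5:]) and not pcgmm_found:
#     the target is judged to have left the monitoring range
```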
The present invention provides a human body multi-feature fusion fall detection method with the following beneficial effects:

In this human body multi-feature fusion fall detection method, vision-based detection comprises shape-feature, head-trajectory and motion-feature algorithms: the shape-feature method detects fall events from changes in human posture, the head-trajectory method judges falls by tracking the motion trajectory of the human head, and the motion-feature method detects falls mainly through spatial and temporal information.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic flowchart of the moving-target detection method based on the phase correlation-Gaussian mixture model (PC-GMM).

Figure 2 is a schematic flowchart of the shadow removal method based on the fusion of chrominance and gradient features.

Figure 3 is a schematic flowchart of the target tracking method combining PC-GMM and KCF.

Figure 4 is a schematic flowchart of the fall detection method based on human body multi-feature fusion.
DETAILED DESCRIPTION OF THE EMBODIMENTS
The present invention provides a technical solution (see Figures 1-4): a human body multi-feature fusion fall detection method, comprising the following steps:
Step 1: Acquire an image sequence from the surveillance video, preprocess the input images, and remove image noise with a median filter;

Step 2: Detect the moving human body appearing in the surveillance video with the PC-GMM moving-target detection algorithm, and use morphological erosion, dilation, opening and closing to remove small discontinuities between images and obtain a smoother human target;

Step 3: Using the shadow detection algorithm that fuses YUV chrominance and gradient features, check whether the moving human body obtained in Step 2 contains a shadow region; if it does, remove the shadow to obtain a more accurate human target;

Step 4: Compare the similarity and the width and height changes between the detection results of the previous and current frames, and use the tracking-enable decision mechanism to judge whether the moving human body needs to be tracked; if tracking is required, obtain the human body's position with the tracking algorithm combining PC-GMM and KCF, otherwise jump directly to Step 5;

Step 5: Judge whether the human body is in a fallen state by fusing multiple features such as the height-to-width ratio, effective area ratio, change in centroid height, and slope angle of the center of gravity (a sketch of this multi-feature decision follows this list); when the fall detection system determines that a person has fallen, it issues a warning and automatically seeks help.
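A minimal sketch of the Step 5 features computed from the binary human silhouette, assuming OpenCV and NumPy. The patent names the features but fixes neither their exact definitions nor their thresholds; here the center-of-gravity slope angle is approximated by the orientation of a fitted ellipse, and the fusion rule in the trailing comment is illustrative only:

```python
import cv2
import numpy as np

def fall_features(mask: np.ndarray):
    """Step 5 features from a binary silhouette (non-empty mask assumed)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack((xs, ys)).astype(np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    aspect = h / w                              # height-to-width ratio
    area_ratio = len(xs) / (w * h)              # effective area ratio
    centroid_y = float(ys.mean())               # centroid height (image rows)
    # Body-axis tilt via ellipse fit (needs at least 5 silhouette points).
    angle = cv2.fitEllipse(pts.astype(np.float32))[2]
    return aspect, area_ratio, centroid_y, angle

# Illustrative fusion rule (all thresholds assumed, not from the patent):
# flag a fall when aspect drops below 1, the centroid falls rapidly between
# frames, and the body axis tilts toward horizontal.
```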
Specifically, according to the operations in Step 2, the method includes the following steps:

S201: Compute the mean, variance and weight parameters of the input image's background model with the Gaussian-mixture-model background subtraction method;

S202: Compute the offset d_t between adjacent input frames by the phase correlation method. The offsets of adjacent frames in the x and y directions are first obtained from the normalized cross-power spectrum F1(u,v)F2*(u,v) / |F1(u,v)F2*(u,v)| = e^(-j2π(u·x0 + v·y0)), whose inverse Fourier transform peaks at (x0, y0);

S203: Accumulate the offsets d_t until the running sum exceeds a set threshold d_T; record the frame at that point as frame n-t, and take frame n-t as the previous frame of the current frame n to be detected;

S204: Update the background model parameters of frame n-t, replacing the mean, variance and weight at the positions where frame n-t detected the moving target with the mean, variance and weight at the same positions in frame n-1, i.e. (μ, σ², ω)_{n-t}(x', y')_object = (μ, σ², ω)_{n-1}(x', y');

S205: Rebuild the background model of frame n, the frame to be detected, from the updated background model of frame n-t, and thereby detect the position of the moving target in the image;

S206: Apply morphological processing to the detected moving target to remove small discontinuities and smooth its contour.
Specifically, according to the operations in Step 2: F1(u,v) denotes the Fourier transform of the previous of the two adjacent frames, and F2*(u,v) denotes the conjugate of the Fourier transform of the following frame; x0 and y0 are the offsets in the x and y directions respectively, and the offset between adjacent frames is computed as d_t = √(x0² + y0²).

Specifically, according to the operations in Step 2: (x', y')_object denotes the pixel coordinates at which frame n-t detected the moving target.
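A sketch of the update trigger of S202-S205: the phase-correlation offsets d_t are accumulated, and the background model is refreshed only once the accumulated shift exceeds d_T. It reuses frame_offset() from the sketch above; the value of d_T, the state dictionary and the refresh via learningRate=1.0 are assumptions, not the patent's exact update rule:

```python
import cv2
import numpy as np

D_T = 8.0  # assumed accumulation threshold d_T

def detect_with_pc_trigger(prev_gray: np.ndarray, curr_gray: np.ndarray,
                           bg_model, state: dict) -> np.ndarray:
    """PC-GMM style detection; bg_model is an OpenCV MOG2 subtractor."""
    state["offset_sum"] = state.get("offset_sum", 0.0) \
        + frame_offset(prev_gray, curr_gray)    # S202-S203: accumulate d_t
    if state["offset_sum"] > D_T:               # threshold d_T crossed
        # S204-S205: re-seed the model so the current frame n is detected
        # against a background aligned with the recorded frame n-t.
        bg_model.apply(curr_gray, learningRate=1.0)
        state["offset_sum"] = 0.0
    return bg_model.apply(curr_gray)            # binary foreground of frame n
```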
Specifically, according to the operations in Step 3, the method includes the following steps:

S301: Screen candidate shadow pixels in the YUV color space using luminance and chrominance information; to cover all shadow pixels, the foreground threshold is set to a relatively small value and the background threshold to a relatively large value;

S302: Find the connected components among the shadow pixels extracted above; each component corresponds to a different candidate shadow region;

S303: Compute the gradient magnitude and direction of each pixel in the shadow region as G_xy = √(Gx² + Gy²) and θ_xy = arctan(Gy / Gx), where Gx and Gy are the pixel's gradients in the x and y directions;

S304: Compute the gradient direction difference between the background and foreground images; the gradient directions can be converted to an angular distance, Δθ = arctan((Gy^F·Gx^B - Gx^F·Gy^B) / (Gx^F·Gx^B + Gy^F·Gy^B));

S305: From the difference Δθ obtained in S304, compute the gradient direction correlation c = (1/n) Σ H(Δθ < τ), where the sum runs over the n pixels of the candidate region and H(·) equals 1 when its argument holds and 0 otherwise.
Specifically, according to the operations in Step 3: G_xy and θ_xy denote the gradient magnitude and direction of pixel (x, y). To allow for noise, a gradient threshold is set and only shadow pixels whose gradient exceeds it are retained; edge pixels are given larger weights because they carry the image's edge features.

Specifically, according to the operations in Step 3: Gx^F, Gy^F and Gx^B, Gy^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient direction difference.

Specifically, according to the operations in Step 3: n denotes the total number of pixels in the candidate region, and the indicator H(Δθ < τ) equals 1 when the gradient direction difference is below the threshold τ and 0 otherwise; when the ratio c exceeds a set threshold, the candidate region is judged to be a shadow region, otherwise it is judged to be foreground.

Specifically, according to the operations in Step 3: Gx^F, Gy^F and Gx^B, Gy^B denote the gradient values of a pixel in the x and y directions in the foreground and background images respectively, and Δθ denotes the gradient direction difference.
Specifically, according to the operations in Step 4, the method includes the following steps:

S401: Detect the moving target with the PC-GMM moving-target detection algorithm;

S402: Compute the similarity and the height and width changes between the current frame's detection result and the previous frame's, and judge whether to track the target with the tracking algorithm; if tracking is required, proceed to S403, otherwise jump back to Step 1;

S403: Compute the APCE value of the current frame and the historical average (APCE)_average over the previous 5 frames;

S404: Compare the two values. When the APCE value is less than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range; when the APCE value is greater than (APCE)_average, take the target region returned by the tracking algorithm, difference it against the Gaussian background model, and extract the binary image of the moving target.
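A minimal sketch of the S402 tracking-enable decision over consecutive detections; the patent fixes neither the similarity metric nor the thresholds, so the IoU measure and the numeric values here are assumptions:

```python
def need_tracking(prev_box, curr_box,
                  sim_thresh: float = 0.5, size_thresh: float = 0.3) -> bool:
    """Enable the PC-GMM + KCF tracker when the detection drifts or deforms.

    Boxes are (x, y, w, h); similarity is measured as IoU (assumption).
    """
    ax, ay, aw, ah = prev_box
    bx, by, bw, bh = curr_box
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    iou = inter / (aw * ah + bw * bh - inter + 1e-12)
    dw = abs(bw - aw) / aw                      # relative width change
    dh = abs(bh - ah) / ah                      # relative height change
    return iou < sim_thresh or dw > size_thresh or dh > size_thresh
```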
The method of the embodiment was tested and analyzed and compared against the prior art, yielding the following data:
From the table data above it can be concluded that, when the embodiment is implemented, the human body multi-feature fusion fall detection method of the present invention employs shape-feature, head-trajectory and motion-feature detection algorithms: the shape-feature method detects fall events from changes in human posture, the head-trajectory algorithm judges falls by tracking the motion trajectory of the human head, and the algorithm based on human motion features detects falls mainly through spatial and temporal information.
The present invention provides a human body multi-feature fusion fall detection method comprising the following steps.

Step 1: Acquire an image sequence from the surveillance video, preprocess the input images, and remove image noise with a median filter.

Step 2: Detect the moving human body appearing in the surveillance video with the PC-GMM moving-target detection algorithm, and use morphological erosion, dilation, opening and closing to remove small discontinuities and obtain a smoother human target. S201: compute the mean, variance and weight of the input image's background model with the Gaussian-mixture-model background subtraction method. S202: compute the offset d_t between adjacent input frames by phase correlation, first obtaining the offsets of adjacent frames in the x and y directions. S203: accumulate the offsets d_t until the running sum exceeds the set threshold d_T, record the frame at that point as frame n-t, and take frame n-t as the previous frame of the current frame n to be detected. S204: update the background model parameters of frame n-t, replacing the mean, variance and weight at the positions where frame n-t detected the moving target with those at the same positions in frame n-1. S205: rebuild the background model of frame n from the updated background model of frame n-t, and thereby detect the position of the moving target in the image. S206: apply morphological processing to the detected moving target to remove small discontinuities and smooth its contour. Here F1(u,v) denotes the Fourier transform of the previous of two adjacent frames, F2*(u,v) the conjugate of the Fourier transform of the following frame, x0 and y0 the offsets in the x and y directions, d_t = √(x0² + y0²) the offset between adjacent frames, and (x', y')_object the pixel coordinates at which frame n-t detected the moving target.

Step 3: Using the shadow detection algorithm that fuses YUV chrominance and gradient features, check whether the moving human body obtained in Step 2 contains a shadow region, and if so remove the shadow to obtain a more accurate human target. S301: screen candidate shadow pixels in the YUV color space using luminance and chrominance information, setting the foreground threshold to a relatively small value and the background threshold to a relatively large value so as to cover all shadow pixels. S302: find the connected components among the extracted shadow pixels, each corresponding to a different candidate shadow region. S303: compute the gradient magnitude G_xy and direction θ_xy of each pixel in the shadow region. S304: compute the gradient direction difference Δθ between the background and foreground images as an angular distance. S305: from Δθ, compute the gradient direction correlation c. To allow for noise, a gradient threshold is set, only shadow pixels above it are retained, and edge pixels receive larger weights because they carry the image's edge features. Gx^F, Gy^F and Gx^B, Gy^B denote the x- and y-direction gradient values of a pixel in the foreground and background images respectively; n denotes the total number of pixels in the candidate region, and the indicator equals 1 when Δθ is below the threshold τ and 0 otherwise; when the ratio c exceeds a set threshold, the candidate region is judged to be a shadow region, otherwise foreground.

Step 4: Compare the similarity and the width and height changes between the detection results of the previous and current frames, and use the tracking-enable decision mechanism to judge whether the moving human body needs to be tracked; if tracking is required, obtain the human body's position with the tracking algorithm combining PC-GMM and KCF, otherwise jump directly to Step 5. S401: detect the moving target with the PC-GMM moving-target detection algorithm. S402: compute the similarity and the height and width changes between the current and previous frames' detection results to decide whether to track the target, proceeding to S403 if tracking is required and jumping back to Step 1 otherwise. S403: compute the APCE value of the current frame and the historical average (APCE)_average over the previous 5 frames. S404: compare the two values; when the APCE value is less than (APCE)_average and PC-GMM detects no target, the target is judged to have moved out of the camera's monitoring range, and when the APCE value is greater than (APCE)_average, the target region returned by the tracking algorithm is differenced against the Gaussian background model to extract the binary image of the moving target.

Step 5: Judge whether the human body is in a fallen state by fusing multiple features such as the height-to-width ratio, effective area ratio, change in centroid height, and slope angle of the center of gravity; when the fall detection system determines that a person has fallen, it issues a warning and automatically seeks help.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111593285.0A | 2021-12-23 | 2021-12-23 | Human body multi-feature fusion falling detection method
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202111593285.0A | 2021-12-23 | 2021-12-23 | Human body multi-feature fusion falling detection method
Publications (1)
Publication Number | Publication Date
---|---
CN114333235A | 2022-04-12
Family
ID=81054258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202111593285.0A | Human body multi-feature fusion falling detection method | 2021-12-23 | 2021-12-23
Country Status (1)
Country | Link
---|---
CN | CN114333235A
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205981A (en) * | 2022-09-08 | 2022-10-18 | 深圳市维海德技术股份有限公司 | Standing posture detection method and device, electronic equipment and readable storage medium |
CN115205981B (en) * | 2022-09-08 | 2023-01-31 | 深圳市维海德技术股份有限公司 | Standing posture detection method and device, electronic equipment and readable storage medium |
CN117671799A (en) * | 2023-12-15 | 2024-03-08 | 武汉星巡智能科技有限公司 | Human body falling detection method, device, equipment and medium combining depth measurement |
CN118314688A (en) * | 2024-04-15 | 2024-07-09 | 河北省胸科医院 | A dynamic tracking and state analysis method and clinical nursing system |
CN118658207A (en) * | 2024-07-16 | 2024-09-17 | 广州星瞳信息科技有限责任公司 | Intelligent fall detection method, device, equipment and medium |
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 2022-04-12