CN114333235A - Human body multi-feature fusion falling detection method - Google Patents

Human body multi-feature fusion falling detection method Download PDF

Info

Publication number
CN114333235A
CN114333235A (application CN202111593285.0A)
Authority
CN
China
Prior art keywords
human body
image
gradient
frame image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111593285.0A
Other languages
Chinese (zh)
Inventor
李胜
朱佳伟
史玉华
潘玥
李津津
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Institute of Information Engineering
Original Assignee
Anhui Institute of Information Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Institute of Information Engineering filed Critical Anhui Institute of Information Engineering
Priority to CN202111593285.0A priority Critical patent/CN114333235A/en
Publication of CN114333235A publication Critical patent/CN114333235A/en
Withdrawn legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a human body multi-feature fusion fall detection method, which belongs to the technical field of fall early warning and comprises the following steps: step one, obtaining an image sequence from a surveillance video, preprocessing the input images, and removing noise in the images with a median filtering algorithm; step two, detecting the moving human body appearing in the surveillance video with a PC-GMM moving object detection algorithm, and eliminating small gaps between images by morphological erosion, dilation, opening and closing operations so as to obtain a smoother human body target. The method combines a shape feature detection algorithm, a head trajectory detection algorithm and a motion feature detection algorithm: the shape feature detection method detects a fall event from changes in human posture, the head-trajectory-based detection algorithm judges fall behaviour from the motion trajectory of the tracked human head, and the human-motion-feature-based detection algorithm mainly detects falls through spatial and temporal information.

Description

Human body multi-feature fusion falling detection method
Technical Field
The invention relates to the technical field of fall early warning, in particular to a fall detection method with human body multi-feature fusion.
Background
Falls are one of the main causes of injury among the elderly, and their frequency rises with age and frailty level [1]. About 30 percent of people over 65 fall each year, and the proportion reaches as high as 40 percent for those over 79; 20 to 30 percent of the elderly who fall are injured. Many elderly people cannot stand up by themselves after a fall, and serious complications or even death can result if they are not rescued and treated in time.
With the rapid growth of its elderly population, China is gradually entering the stage of population ageing. According to surveys, people over 65 accounted for 16.2 percent of the total population in 2015, exceeding 220 million; the number of people over 65 is predicted to exceed 300 million, and the number of "empty-nest families" [2] is rising rapidly, reaching 120 million. Sociologists define an "empty-nest family" as one without children or whose children do not live with the parents. The bodily functions of the elderly are weakened, and a loss of balance can lead to a fall, a fracture or even more severe injury. With no one at home to care for them, they cannot obtain timely rescue and treatment; even those lucky enough to survive a fall may still need medical assistance to sustain life, and great psychological trauma can be caused.
An intelligent fall detection system can be defined as an assistive device whose main function is to raise an alarm immediately when an elderly person falls. This can reduce, to a certain extent, the injury suffered after a fall and also lessen the fear that the fall causes, since assistance can be provided promptly. In real fall incidents, the injury suffered after a fall and the fear of falling reinforce each other: a fall breeds fear of falling, and that fear in turn increases the risk of falling again. Against this background, intelligent fall detection systems are of great significance both in academic research and in practical applications.
According to the sensors and detection modes used, existing fall detection relies on the following three technologies. Wearable-sensor detection attaches sensors such as accelerometers and gyroscopes to the chest, wrist and waist, collects motion information of the human body, distinguishes the motion state from changes in the data, and judges whether a fall has occurred. Fall detection based on environmental sensors differs from wearable-sensor detection mainly in that it collects the audio and vibration signals produced during a fall from the monitored environment and analyses those signal data. Fall detection based on computer vision has since become a research hotspot in the field; according to the features used, vision-based fall detection generally follows three approaches: shape feature detection algorithms, head trajectory detection algorithms and motion feature detection algorithms. The shape feature detection method detects fall incidents from changes in human posture, the head-trajectory-based detection algorithm judges fall behaviour from the motion trajectory of the tracked human head, and the human-motion-feature-based detection algorithm mainly detects falls through spatial and temporal information.
Disclosure of Invention
The invention aims to provide a human body multi-feature fusion fall detection method. The method combines a shape feature detection algorithm, a head trajectory detection algorithm and a motion feature detection algorithm: the shape feature detection method detects fall events from changes in human posture, the head-trajectory-based detection algorithm judges fall behaviour from the motion trajectory of the tracked human head, and the human-motion-feature-based detection algorithm mainly detects falls through spatial and temporal information.
In order to achieve the above effects, the present invention provides the following technical solutions: a human body multi-feature fusion fall detection method comprises the following steps:
step one, acquiring an image sequence from the surveillance video, preprocessing the input images, and removing noise in the images with a median filtering algorithm;
step two, detecting the moving human body appearing in the surveillance video through a PC-GMM moving object detection algorithm, and eliminating small gaps between images by morphological erosion, dilation, opening and closing operations, so as to obtain a smoother human body target;
step three, detecting, through a shadow detection algorithm fusing YUV chrominance and gradient features, whether the moving human body obtained in step two contains a shadow area, and if so, eliminating the shadow to obtain a more accurate human body target;
step four, comparing the similarity and the width and height changes of the detection results of the two adjacent frame images, starting a judgment mechanism based on target tracking to decide whether the moving human body needs to be tracked; if the judgment mechanism decides that tracking is needed, acquiring the human body tracking position with a tracking algorithm combining PC-GMM and KCF, and otherwise jumping directly to step five;
step five, judging whether the human body is in a falling state by fusing features such as the aspect ratio of the human body, the effective area ratio, the change of centroid height and the centroid inclination angle; when the fall detection system determines that the human body is in a falling state, the system gives an alarm and automatically seeks help.
Further, the method comprises the following steps: according to the operation step in S2,
s201, calculating a background model parameter mean value, a variance and a weight of an input image by using a mixed Gaussian model background difference method;
S202, calculating the offset d_t between adjacent input frame images by the phase correlation method; the offsets x0 and y0 of the adjacent frame images in the x and y directions are first obtained from the normalised cross-power spectrum F1(u, v)·F2*(u, v) / |F1(u, v)·F2*(u, v)|, whose inverse Fourier transform peaks at (x0, y0);
S203, accumulating the offsets d_t until the accumulated sum is larger than a set threshold d_T; the image at that moment is recorded as the (n-t)-th frame image and taken as the previous frame of the current n-th frame image to be detected;
S204, updating the background model parameters of the (n-t)-th frame image: the mean, variance and weight at the positions of the moving target detected in the (n-t)-th frame image are updated to the mean, variance and weight at the same positions of the (n-1)-th frame image, the parameter update expression being as follows:
[parameter update expression, shown as an image in the original publication]
S205, reconstructing the background model of the n-th frame image to be detected from the updated background model of the (n-t)-th frame image, thereby detecting the position of the moving target in the image;
and S206, performing morphological processing on the detected moving target, removing fine discontinuous parts, and smoothing the contour of the moving target.
Further, the method comprises the following steps: according to the operating procedure in step two, F1(u, v) denotes the Fourier transform of the adjacent previous frame image, F2*(u, v) denotes the conjugate of the Fourier transform of the adjacent subsequent frame image, and x0 and y0 denote the offsets in the x and y directions, respectively; the offset d_t between adjacent frame images is calculated as
d_t = √(x0² + y0²).
Further, the method comprises the following steps: according to the operation in step two, (x', y')_object denotes the pixel coordinates of the moving target detected in the (n-t)-th frame image.
Further, the method comprises the following steps: according to the operation steps in step three,
s301, screening out candidate shadow pixel points in a YUV color space by utilizing brightness and chrominance information, wherein in order to cover all shadow pixels, a foreground threshold value needs to be set to be a small value, and a background threshold value needs to be set to be a large value;
S302, searching for connected pixel regions among the shadow pixels extracted above, each region corresponding to a different candidate shadow area;
S303, calculating the gradient magnitude and direction of the pixel points in the shadow areas, using the following expression:
[gradient magnitude and direction expression, shown as an image in the original publication]
S304, calculating the difference of gradient directions between the background image and the foreground image by converting the gradient directions into angular distances, with the expression:
[gradient direction difference expression, shown as an image in the original publication]
S305, solving the gradient direction difference according to the formula in S304, and calculating the gradient direction correlation, with the expression:
[gradient direction correlation expression, shown as an image in the original publication]
Further, the method comprises the following steps: according to the operation steps in step three, the gradient magnitude and the gradient direction θ_xy of the pixel point (x, y) are as defined above; considering the influence of noise, a gradient threshold is set and only shadow pixel points whose gradient is larger than the threshold are retained, and edge pixel points are given a larger weight because such pixel points carry the edge features of the image.
Further, the method comprises the following steps: according to the operation steps in step three, the symbols in the above expressions denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference between them.
Further, the method comprises the following steps: according to the operation steps in step three, n denotes the total number of pixels in the candidate area; for each pixel an indicator takes the value 1 when its gradient direction difference is below the threshold τ and 0 otherwise, and when the resulting ratio c is larger than a certain threshold the candidate area is considered a shadow area, otherwise a foreground area.
Further, the method comprises the following steps: according to the operation procedure in S3, the remaining symbols denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference.
Further, the method comprises the following steps: according to the operation steps in the fourth step,
s401, detecting a moving target by utilizing a PC-GMM moving target detection algorithm;
S402, calculating the similarity and the height and width changes between the detection result of the current frame image and that of the previous frame image, and judging with the tracking algorithm whether the target should be tracked; if it is judged that the target needs to be tracked, executing step S3, otherwise returning to step one;
S403, calculating the APCE value of the current frame image and the historical average (APCE)_average of the previous 5 frame images:
[APCE expression, shown as an image in the original publication]
S404, comparing the two values: when the APCE value is smaller than (APCE)_average and the PC-GMM detects no target, the target is judged to have moved out of the monitoring range of the camera; when the APCE value is larger than (APCE)_average, a difference operation is performed between the target area obtained by the tracking algorithm and the Gaussian background model, and the binary image of the moving target is extracted.
The invention provides a human body multi-feature fusion falling detection method, which has the following beneficial effects:
the human body multi-feature fusion falling detection method comprises a shape feature detection algorithm, a head track detection algorithm and a motion feature detection algorithm, wherein the shape feature detection method detects falling incidents according to changes of human body postures, the head track-based detection algorithm judges falling behaviors according to motion tracks of tracked human heads, and the human body motion feature-based detection algorithm mainly detects falling through space information and time information.
Drawings
Fig. 1 shows a schematic flow chart of a moving object detection method based on a phase correlation-gaussian mixture model.
FIG. 2 shows a flow diagram of a shadow elimination method based on the fusion of chrominance and gradient features.
FIG. 3 shows a flow chart of a target tracking method based on the combination of PC-GMM and KCF.
Fig. 4 shows a schematic flow chart of a fall detection method based on human body multi-feature fusion.
Detailed Description
The invention provides the following technical solution: referring to fig. 1-4, a human body multi-feature fusion fall detection method includes the following steps:
step one, acquiring an image sequence from the surveillance video, preprocessing the input images, and removing noise in the images with a median filtering algorithm;
step two, detecting the moving human body appearing in the surveillance video through a PC-GMM moving object detection algorithm, and eliminating small gaps between images by morphological erosion, dilation, opening and closing operations, so as to obtain a smoother human body target;
step three, detecting, through a shadow detection algorithm fusing YUV chrominance and gradient features, whether the moving human body obtained in step two contains a shadow area, and if so, eliminating the shadow to obtain a more accurate human body target;
step four, comparing the similarity and the width and height changes of the detection results of the two adjacent frame images, starting a judgment mechanism based on target tracking to decide whether the moving human body needs to be tracked; if the judgment mechanism decides that tracking is needed, acquiring the human body tracking position with a tracking algorithm combining PC-GMM and KCF, and otherwise jumping directly to step five;
and step five, judging whether the human body is in a falling state by fusing features such as the aspect ratio of the human body, the effective area ratio, the change of centroid height and the centroid inclination angle; when the fall detection system determines that the human body is in a falling state, the system gives an alarm and automatically seeks help.
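By way of illustration only (not part of the original disclosure), the following Python/OpenCV sketch shows one way the preprocessing of step one and the morphological clean-up of step two could be realised; the filter aperture and kernel size are assumed values.

```python
import cv2

def preprocess(frame_bgr):
    """Step one: convert to grayscale and apply a median filter to suppress noise."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.medianBlur(gray, 5)  # 5x5 median filter (aperture size assumed)

def smooth_mask(fg_mask):
    """Step two post-processing: erosion, dilation, opening and closing remove small
    gaps and yield a smoother human-body silhouette."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    m = cv2.erode(fg_mask, kernel)
    m = cv2.dilate(m, kernel)
    m = cv2.morphologyEx(m, cv2.MORPH_OPEN, kernel)
    m = cv2.morphologyEx(m, cv2.MORPH_CLOSE, kernel)
    return m
```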
Specifically, the method comprises the following steps: according to the operation step in S2,
s201, calculating a background model parameter mean value, a variance and a weight of an input image by using a mixed Gaussian model background difference method;
S202, calculating the offset d_t between adjacent input frame images by the phase correlation method; the offsets x0 and y0 of the adjacent frame images in the x and y directions are first obtained from the normalised cross-power spectrum F1(u, v)·F2*(u, v) / |F1(u, v)·F2*(u, v)|, whose inverse Fourier transform peaks at (x0, y0);
S203, accumulating the offsets d_t until the accumulated sum is larger than a set threshold d_T; the image at that moment is recorded as the (n-t)-th frame image and taken as the previous frame of the current n-th frame image to be detected;
S204, updating the background model parameters of the (n-t)-th frame image: the mean, variance and weight at the positions of the moving target detected in the (n-t)-th frame image are updated to the mean, variance and weight at the same positions of the (n-1)-th frame image, the parameter update expression being as follows:
[parameter update expression, shown as an image in the original publication]
S205, reconstructing the background model of the n-th frame image to be detected from the updated background model of the (n-t)-th frame image, thereby detecting the position of the moving target in the image;
and S206, performing morphological processing on the detected moving target, removing fine discontinuous parts, and smoothing the contour of the moving target.
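By way of illustration (not the patent's own implementation), the sketch below shows one possible reading of S201-S206: the accumulated phase-correlation offset decides when the Gaussian-mixture background model is refreshed before the moving target is extracted. OpenCV's MOG2 subtractor stands in for the patent's mixture model, a simple learning-rate switch stands in for the parameter update expression (which the publication gives only as an image), and the threshold d_T and all numeric parameters are assumptions.

```python
import cv2
import numpy as np

gmm = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
d_T = 8.0          # accumulated-offset threshold (assumed)
accumulated = 0.0
prev_gray = None

def detect_moving_target(gray):
    """Returns a cleaned foreground mask of the moving human body."""
    global prev_gray, accumulated
    if prev_gray is not None:
        # S202: phase correlation gives the (x0, y0) shift between adjacent frames.
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(gray))
        accumulated += float(np.hypot(dx, dy))      # S203: accumulate d_t
    prev_gray = gray
    refresh = accumulated > d_T                     # S203: threshold d_T reached
    if refresh:
        accumulated = 0.0
    # S204/S205 (simplified): a faster learning rate stands in for re-initialising
    # the background-model parameters when the accumulated offset exceeds the threshold.
    fg = gmm.apply(gray, learningRate=0.05 if refresh else 0.002)
    # S206: morphological clean-up of the detected target.
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
    return fg
```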
Specifically, the method comprises the following steps: according to the operating procedure in step two, F1(u, v) denotes the Fourier transform of the adjacent previous frame image, F2*(u, v) denotes the conjugate of the Fourier transform of the adjacent subsequent frame image, and x0 and y0 denote the offsets in the x and y directions, respectively; the offset d_t between adjacent frame images is calculated as
d_t = √(x0² + y0²).
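A minimal NumPy sketch of the phase-correlation offset described above, under the assumption that d_t is taken as the Euclidean norm of the recovered shift (x0, y0):

```python
import numpy as np

def phase_offset(prev_gray, cur_gray):
    """Peak of the inverse FFT of the normalised cross-power spectrum
    F1(u,v)·F2*(u,v)/|F1(u,v)·F2*(u,v)| gives the shift (x0, y0) between the frames."""
    F1 = np.fft.fft2(prev_gray.astype(np.float64))
    F2 = np.fft.fft2(cur_gray.astype(np.float64))
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12                  # normalise, avoid division by zero
    corr = np.real(np.fft.ifft2(cross))
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if y0 > h // 2:                                  # wrap large indices to negative shifts
        y0 -= h
    if x0 > w // 2:
        x0 -= w
    return (x0, y0), float(np.hypot(x0, y0))         # offsets and d_t (assumed norm)
```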
Specifically, the method comprises the following steps: according to the operation in S2, (x', y')_object denotes the pixel coordinates of the moving target detected in the (n-t)-th frame image.
Specifically, the method comprises the following steps: according to the operation steps in step three,
s301, screening out candidate shadow pixel points in a YUV color space by utilizing brightness and chrominance information, wherein in order to cover all shadow pixels, a foreground threshold value needs to be set to be a small value, and a background threshold value needs to be set to be a large value;
S302, searching for connected pixel regions among the shadow pixels extracted above, each region corresponding to a different candidate shadow area;
S303, calculating the gradient magnitude and direction of the pixel points in the shadow areas, using the following expression:
[gradient magnitude and direction expression, shown as an image in the original publication]
S304, calculating the difference of gradient directions between the background image and the foreground image by converting the gradient directions into angular distances, with the expression:
[gradient direction difference expression, shown as an image in the original publication]
S305, solving the gradient direction difference according to the formula in S304, and calculating the gradient direction correlation, with the expression:
[gradient direction correlation expression, shown as an image in the original publication]
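An illustrative sketch of S301-S305 (not the patent's exact formulas, which appear only as images): candidate shadow pixels are screened in YUV space, connected into regions, and each region is kept as shadow when the gradient directions of foreground and background stay strongly correlated. The luminance/chroma thresholds, the angular threshold τ and the correlation threshold are assumed values, and Sobel derivatives stand in for the gradient expression.

```python
import cv2
import numpy as np

def shadow_mask(frame_bgr, background_bgr, fg_mask,
                y_low=0.4, y_high=0.95, uv_tol=12, tau=np.pi / 6, corr_thresh=0.6):
    fg_yuv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    bg_yuv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)

    # S301: shadow candidates darken the background luminance but keep similar chroma.
    ratio = fg_yuv[..., 0] / (bg_yuv[..., 0] + 1e-6)
    chroma_ok = (np.abs(fg_yuv[..., 1] - bg_yuv[..., 1]) < uv_tol) & \
                (np.abs(fg_yuv[..., 2] - bg_yuv[..., 2]) < uv_tol)
    candidate = (fg_mask > 0) & (ratio > y_low) & (ratio < y_high) & chroma_ok

    # S303/S304: gradient direction of foreground and background, angular distance.
    def grad_dir(yuv):
        gx = cv2.Sobel(yuv[..., 0], cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(yuv[..., 0], cv2.CV_32F, 0, 1)
        return np.arctan2(gy, gx)
    dtheta = np.abs(grad_dir(fg_yuv) - grad_dir(bg_yuv))
    dtheta = np.minimum(dtheta, 2 * np.pi - dtheta)

    # S302/S305: connected candidate regions; the correlation c is the fraction of
    # pixels whose direction difference stays below tau.
    num_labels, labels = cv2.connectedComponents(candidate.astype(np.uint8))
    shadow = np.zeros_like(fg_mask)
    for lbl in range(1, num_labels):
        region = labels == lbl
        c = float(np.mean(dtheta[region] < tau))
        if c > corr_thresh:
            shadow[region] = 255
    return shadow
```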
Specifically, the method comprises the following steps: according to the operation steps in step three, the gradient magnitude and the gradient direction θ_xy of the pixel point (x, y) are as defined above; considering the influence of noise, a gradient threshold is set and only shadow pixel points whose gradient is larger than the threshold are retained, and edge pixel points are given a larger weight because such pixel points carry the edge features of the image.
Specifically, the method comprises the following steps: according to the operation steps in step three, the symbols in the above expressions denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference between them.
Specifically, the method comprises the following steps: according to the operation steps in step three, n denotes the total number of pixels in the candidate area; for each pixel an indicator takes the value 1 when its gradient direction difference is below the threshold τ and 0 otherwise, and when the resulting ratio c is larger than a certain threshold the candidate area is considered a shadow area, otherwise a foreground area.
Specifically, the method comprises the following steps: according to the operation steps in step three, the remaining symbols denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference.
Specifically, the method comprises the following steps: according to the operation steps in the fourth step,
s401, detecting a moving target by utilizing a PC-GMM moving target detection algorithm;
S402, calculating the similarity and the height and width changes between the detection result of the current frame image and that of the previous frame image, and judging with the tracking algorithm whether the target should be tracked; if it is judged that the target needs to be tracked, executing step S3, otherwise returning to step one;
S403, calculating the APCE value of the current frame image and the historical average (APCE)_average of the previous 5 frame images:
[APCE expression, shown as an image in the original publication]
S404, comparing the two values: when the APCE value is smaller than (APCE)_average and the PC-GMM detects no target, the target is judged to have moved out of the monitoring range of the camera; when the APCE value is larger than (APCE)_average, a difference operation is performed between the target area obtained by the tracking algorithm and the Gaussian background model, and the binary image of the moving target is extracted.
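An illustrative sketch of the S401-S404 tracking decision. The KCF tracker is the one shipped with opencv-contrib, and the APCE value is computed with the commonly used peak-to-correlation-energy definition; both are assumptions, since the publication gives its APCE expression only as an image.

```python
import cv2
import numpy as np

def apce(response):
    """APCE = (Fmax - Fmin)^2 / mean((F - Fmin)^2) over a correlation response map
    (commonly used definition, assumed here)."""
    f_max, f_min = float(response.max()), float(response.min())
    return (f_max - f_min) ** 2 / (float(np.mean((response - f_min) ** 2)) + 1e-12)

def target_left_scene(apce_value, apce_history, pcgmm_found_target):
    """S404: the target is judged to have left the camera view when the current APCE
    falls below the average of the previous 5 frames and PC-GMM also sees no target."""
    apce_avg = float(np.mean(apce_history[-5:])) if apce_history else apce_value
    return apce_value < apce_avg and not pcgmm_found_target

# S401/S402: a KCF tracker (opencv-contrib-python; the factory name may differ between
# OpenCV versions) is initialised on the detected human-body bounding box.
tracker = cv2.TrackerKCF_create()
```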
The method of the examples was performed for detection analysis and compared to the prior art to yield the following data:
              Prediction accuracy    Anti-interference capability    Early-warning efficiency
Example       Higher                 Higher                          Higher
Prior art     Lower                  Lower                           Lower
As can be seen from the table data, the embodiment adopts the human body multi-feature fusion fall detection method, combining a shape feature detection algorithm, a head trajectory detection algorithm and a motion feature detection algorithm: the shape feature detection method detects fall incidents from changes in human posture, the head-trajectory-based detection algorithm judges fall behaviour from the motion trajectory of the tracked human head, and the human-motion-feature-based detection algorithm mainly detects falls through spatial and temporal information.
The invention provides a human body multi-feature fusion fall detection method, which comprises the following steps: step one, obtaining an image sequence from the surveillance video, preprocessing the input images, and removing noise in the images with a median filtering algorithm; step two, detecting the moving human body appearing in the surveillance video through the PC-GMM moving object detection algorithm, and eliminating small gaps between images by morphological erosion, dilation, opening and closing operations so as to obtain a smoother human body target; S201, calculating the mean, variance and weight of the background model parameters of the input image with the Gaussian mixture model background difference method; S202, calculating the offset d_t between adjacent input frame images by the phase correlation method, the offsets x0 and y0 of the adjacent frame images in the x and y directions being obtained first from the normalised cross-power spectrum F1(u, v)·F2*(u, v) / |F1(u, v)·F2*(u, v)|, whose inverse Fourier transform peaks at (x0, y0); S203, accumulating the offsets d_t until the accumulated sum is larger than the set threshold d_T, recording the image at that moment as the (n-t)-th frame image, and taking it as the previous frame of the current n-th frame image to be detected; S204, updating the background model parameters of the (n-t)-th frame image, the mean, variance and weight at the positions of the moving target detected in the (n-t)-th frame image being updated to the mean, variance and weight at the same positions of the (n-1)-th frame image, with the following parameter update expression:
[parameter update expression, shown as an image in the original publication]
S205, reconstructing the background model of the n-th frame image to be detected from the updated background model of the (n-t)-th frame image, thereby detecting the position of the moving target in the image; S206, performing morphological processing on the detected moving target, removing fine discontinuous parts, and smoothing the contour of the moving target; here F1(u, v) denotes the Fourier transform of the adjacent previous frame image, F2*(u, v) denotes the conjugate of the Fourier transform of the adjacent subsequent frame image, x0 and y0 denote the offsets in the x and y directions, respectively, the offset d_t between adjacent frame images is calculated as d_t = √(x0² + y0²), and (x', y')_object denotes the pixel coordinates of the moving target detected in the (n-t)-th frame image; step three, detecting, through a shadow detection algorithm fusing YUV chrominance and gradient features, whether the moving human body obtained in step two contains a shadow area, and if so, eliminating the shadow to obtain a more accurate human body target; S301, screening out candidate shadow pixel points in the YUV colour space using luminance and chrominance information, the foreground threshold being set to a smaller value and the background threshold to a larger value in order to cover all shadow pixels; S302, searching for connected pixel regions among the shadow pixels extracted above, each region corresponding to a different candidate shadow area; S303, calculating the gradient magnitude and direction of the pixel points in the shadow areas using the following expression:
[gradient magnitude and direction expression, shown as an image in the original publication]
S304, calculating the difference of gradient directions between the background image and the foreground image by converting the gradient directions into angular distances, with the expression:
[gradient direction difference expression, shown as an image in the original publication]
S305, solving the gradient direction difference according to the formula in S304, and calculating the gradient direction correlation, with the expression:
[gradient direction correlation expression, shown as an image in the original publication]
in the above expressions, the gradient magnitude and the gradient direction θ_xy of the pixel point (x, y) are as defined, a gradient threshold being set in consideration of noise, only shadow pixel points whose gradient exceeds the threshold being retained, and edge pixel points being given a larger weight because such pixel points carry the edge features of the image; the remaining symbols denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference; n denotes the total number of pixels in the candidate area, the indicator for each pixel taking the value 1 when its gradient direction difference is below the threshold τ and 0 otherwise, and the candidate area being considered a shadow area when the resulting ratio c is greater than a certain threshold, otherwise a foreground area; step four, comparing the similarity and the width and height changes of the detection results of the two adjacent frame images, starting a judgment mechanism based on target tracking to decide whether the moving human body needs to be tracked, acquiring the human body tracking position with the tracking algorithm combining PC-GMM and KCF if tracking is needed, and otherwise jumping directly to step five; S401, detecting the moving target with the PC-GMM moving target detection algorithm; S402, calculating the similarity and the height and width changes between the detection result of the current frame image and that of the previous frame image, judging with the tracking algorithm whether the target should be tracked, executing step three if so, and otherwise jumping back to step one; S403, calculating the APCE value of the current frame image and the historical average (APCE)_average of the previous 5 frame images:
[APCE expression, shown as an image in the original publication]
S404, comparing the two values: when the APCE value is smaller than (APCE)_average and the PC-GMM detects no target, the target is judged to have moved out of the monitoring range of the camera; when the APCE value is larger than (APCE)_average, performing a difference operation between the target area obtained by the tracking algorithm and the Gaussian background model, and extracting the binary image of the moving target; step five, judging whether the human body is in a falling state by fusing features such as the aspect ratio of the human body, the effective area ratio, the change of centroid height and the centroid inclination angle, and when the fall detection system determines that the human body is in a falling state, the system gives an alarm and automatically seeks help.
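An illustrative sketch of the step-five feature fusion. The principal-axis orientation of the silhouette stands in for the centroid inclination angle, and every threshold is an assumed value; the patent fuses the same features but does not spell out a numerical decision rule here.

```python
import cv2
import numpy as np

def fall_features(body_mask):
    """Features of a binary human-body mask (uint8, non-zero = body): bounding-box
    aspect ratio, effective area ratio, centroid row and principal-axis angle."""
    x, y, w, h = cv2.boundingRect(body_mask)
    area = cv2.countNonZero(body_mask)
    m = cv2.moments(body_mask, binaryImage=True)
    if m["m00"] == 0:
        raise ValueError("empty body mask")
    cy = m["m01"] / m["m00"]                          # centroid row (larger = lower)
    # principal-axis angle from central second-order moments:
    # ~90 deg for an upright body, ~0 deg for a body lying on the ground
    angle = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return h / float(w), area / float(w * h), cy, abs(float(np.degrees(angle)))

def is_fall(curr, prev, ar_thresh=1.0, drop_thresh=30.0, angle_thresh=45.0):
    """Fuse the features: flag a fall when the body becomes wider than tall, the
    centroid drops sharply between frames and the principal axis is near horizontal
    (all thresholds assumed; the effective area ratio can be used as a further gate)."""
    aspect_ratio, _area_ratio, cy, angle = curr
    _, _, prev_cy, _ = prev
    return aspect_ratio < ar_thresh and (cy - prev_cy) > drop_thresh and angle < angle_thresh
```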
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A human body multi-feature fusion fall detection method is characterized by comprising the following steps:
S1, acquiring an image sequence from the monitoring video, preprocessing an input image, and removing noise in the image by using a median filtering algorithm;
S2, detecting the moving human body appearing in the surveillance video through a PC-GMM moving object detection algorithm, and eliminating small gaps between images by morphological erosion, dilation, opening and closing operations, so as to obtain a smoother human body target;
S3, detecting, through a shadow detection algorithm fusing YUV chrominance and gradient features, whether the moving human body obtained in S2 contains a shadow area, and if so, eliminating the shadow to obtain a more accurate human body target;
S4, comparing the similarity and the width and height changes of the detection results of the two adjacent frame images, starting a judgment mechanism based on target tracking to decide whether the moving human body needs to be tracked; if the judgment mechanism decides that tracking is needed, acquiring the human body tracking position with a tracking algorithm combining PC-GMM and KCF, and otherwise jumping directly to S5;
S5, judging whether the human body is in a falling state by fusing features such as the aspect ratio of the human body, the effective area ratio, the change of centroid height and the centroid inclination angle; when the fall detection system determines that the human body is in a falling state, the system gives an alarm and automatically seeks help.
2. The human body multi-feature fusion fall detection method as claimed in claim 1, comprising the steps of: according to the operation step in S2,
s201, calculating a background model parameter mean value, a variance and a weight of an input image by using a mixed Gaussian model background difference method;
S202, calculating the offset d_t between adjacent input frame images by the phase correlation method; the offsets x0 and y0 of the adjacent frame images in the x and y directions are first obtained from the normalised cross-power spectrum F1(u, v)·F2*(u, v) / |F1(u, v)·F2*(u, v)|, whose inverse Fourier transform peaks at (x0, y0);
S203, accumulating the offsets d_t until the accumulated sum is larger than a set threshold d_T; the image at that moment is recorded as the (n-t)-th frame image and taken as the previous frame of the current n-th frame image to be detected;
S204, updating the background model parameters of the (n-t)-th frame image: the mean, variance and weight at the positions of the moving target detected in the (n-t)-th frame image are updated to the mean, variance and weight at the same positions of the (n-1)-th frame image, the parameter update expression being as follows:
[parameter update expression, shown as an image in the original publication]
S205, reconstructing the background model of the n-th frame image to be detected from the updated background model of the (n-t)-th frame image, thereby detecting the position of the moving target in the image;
and S206, performing morphological processing on the detected moving target, removing fine discontinuous parts, and smoothing the contour of the moving target.
3. A human body multi-feature fusion fall detection method as claimed in claim 2, comprising the steps of: according to the operation procedure in S2, F1(u, v) denotes the Fourier transform of the adjacent previous frame image, F2*(u, v) denotes the conjugate of the Fourier transform of the adjacent subsequent frame image, and x0 and y0 denote the offsets in the x and y directions, respectively; the offset d_t between adjacent frame images is calculated as
d_t = √(x0² + y0²).
4. A human body multi-feature fusion fall detection method as claimed in claim 3, comprising the steps of: according to the operation in S2, (x', y')_object denotes the pixel coordinates of the moving target detected in the (n-t)-th frame image.
5. The human body multi-feature fusion fall detection method as claimed in claim 4, comprising the steps of: according to the operation step in S3,
s301, screening out candidate shadow pixel points in a YUV color space by utilizing brightness and chrominance information, wherein in order to cover all shadow pixels, a foreground threshold value needs to be set to be a small value, and a background threshold value needs to be set to be a large value;
S302, searching for connected pixel regions among the shadow pixels extracted above, each region corresponding to a different candidate shadow area;
S303, calculating the gradient magnitude and direction of the pixel points in the shadow areas, using the following expression:
[gradient magnitude and direction expression, shown as an image in the original publication]
S304, calculating the difference of gradient directions between the background image and the foreground image by converting the gradient directions into angular distances, with the expression:
[gradient direction difference expression, shown as an image in the original publication]
S305, solving the gradient direction difference according to the formula in S304, and calculating the gradient direction correlation, with the expression:
[gradient direction correlation expression, shown as an image in the original publication]
6. The human body multi-feature fusion fall detection method as claimed in claim 5, comprising the steps of: according to the operation procedure in S3, the gradient magnitude and the gradient direction θ_xy of the pixel point (x, y) are as defined above; considering the influence of noise, a gradient threshold is set and only shadow pixel points whose gradient is larger than the threshold are retained, and edge pixel points are given a larger weight because such pixel points carry the edge features of the image.
7. The human body multi-feature fusion fall detection method as claimed in claim 6, comprising the steps of: according to the operation procedure in S3, the symbols in the above expressions denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference between them.
8. The human body multi-feature fusion fall detection method as claimed in claim 7, comprising the steps of: according to the operation step in S3, n denotes the total number of pixels in the candidate area; for each pixel an indicator takes the value 1 when its gradient direction difference is below the threshold τ and 0 otherwise, and when the resulting ratio c is larger than a certain threshold the candidate area is considered a shadow area, otherwise a foreground area.
9. The human body multi-feature fusion fall detection method as claimed in claim 8, comprising the steps of: according to the operation procedure in S3, the remaining symbols denote the gradient values of the pixel points in the x and y directions in the foreground image and the background image, respectively, and the gradient direction difference.
10. The human body multi-feature fusion fall detection method as claimed in claim 9, comprising the steps of: according to the operation step in S4,
s401, detecting a moving target by utilizing a PC-GMM moving target detection algorithm;
S402, calculating the similarity and the height and width changes between the detection result of the current frame image and that of the previous frame image, and judging with the tracking algorithm whether the target should be tracked; if it is judged that the target needs to be tracked, executing step S3, otherwise returning to step S1;
S403, calculating the APCE value of the current frame image and the historical average (APCE)_average of the previous 5 frame images:
[APCE expression, shown as an image in the original publication]
S404, comparing the two values: when the APCE value is smaller than (APCE)_average and the PC-GMM detects no target, the target is judged to have moved out of the monitoring range of the camera; when the APCE value is larger than (APCE)_average, a difference operation is performed between the target area obtained by the tracking algorithm and the Gaussian background model, and the binary image of the moving target is extracted.
CN202111593285.0A 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method Withdrawn CN114333235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111593285.0A CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111593285.0A CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Publications (1)

Publication Number Publication Date
CN114333235A true CN114333235A (en) 2022-04-12

Family

ID=81054258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111593285.0A Withdrawn CN114333235A (en) 2021-12-23 2021-12-23 Human body multi-feature fusion falling detection method

Country Status (1)

Country Link
CN (1) CN114333235A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205981A (en) * 2022-09-08 2022-10-18 深圳市维海德技术股份有限公司 Standing posture detection method and device, electronic equipment and readable storage medium
CN115205981B (en) * 2022-09-08 2023-01-31 深圳市维海德技术股份有限公司 Standing posture detection method and device, electronic equipment and readable storage medium
CN117671799A (en) * 2023-12-15 2024-03-08 武汉星巡智能科技有限公司 Human body falling detection method, device, equipment and medium combining depth measurement

Similar Documents

Publication Publication Date Title
CN110119676B (en) Driver fatigue detection method based on neural network
CN104794731B (en) Multi-target detection tracking for ball machine control strategy
CN114333235A (en) Human body multi-feature fusion falling detection method
CN111275910B (en) Method and system for detecting border crossing behavior of escalator based on Gaussian mixture model
CN103279737B (en) A kind of behavioral value method of fighting based on space-time interest points
WO2022001961A1 (en) Detection method, detection device and detection system for moving target thrown from height
CN106204640A (en) A kind of moving object detection system and method
Ekinci et al. Silhouette based human motion detection and analysis for real-time automated video surveillance
CN110781733B (en) Image duplicate removal method, storage medium, network equipment and intelligent monitoring system
CN105139425A (en) People counting method and device
Liu et al. Moving object detection and tracking based on background subtraction
JP5106302B2 (en) Moving object tracking device
CN103996045B (en) A kind of smog recognition methods of the various features fusion based on video
CN107066968A (en) The vehicle-mounted pedestrian detection method of convergence strategy based on target recognition and tracking
Tang et al. Multiple-kernel adaptive segmentation and tracking (MAST) for robust object tracking
CN114926859B (en) Pedestrian multi-target tracking method in dense scene combining head tracking
CN110472614A (en) A kind of recognition methods for behavior of falling in a swoon
Li et al. Automatic passenger counting system for bus based on RGB-D video
JP2011035571A (en) Suspicious behavior detection method and suspicious behavior detector
CN103049748A (en) Behavior-monitoring method and behavior-monitoring system
Hariri et al. Vision based smart in-car camera system for driver yawning detection
Ekinci et al. Background estimation based people detection and tracking for video surveillance
CN110502995B (en) Driver yawning detection method based on fine facial action recognition
Sarker et al. Illegal trash thrower detection based on HOGSVM for a real-time monitoring system
JP5132509B2 (en) Moving object tracking device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20220412)