CN110706255A - Fall detection method based on self-adaptive following - Google Patents
- Publication number
- CN110706255A (application CN201910911328.1A)
- Authority
- CN
- China
- Prior art keywords
- human body
- following
- detection
- distance
- falling
- Prior art date
- Legal status
- Withdrawn
Classifications
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251: Analysis of motion using feature-based methods involving models
- G06T7/60: Analysis of geometric attributes
- G06V40/23: Recognition of whole body movements, e.g. for sport training
- G06T2207/30196: Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a fall detection method based on self-adaptive following. The method comprises the following steps: S1, build a robot platform based on ROS; S2, generate three-dimensional depth data of the space with an RGBD depth camera, the robot detecting the human body in front and following it at a fixed distance; S3, estimate the posture of the human body in the RGB image with a neural network model and extract the skeleton joint point coordinates of key parts of the human body; S4, judge the posture of the human body with a fall detection algorithm according to the spatial position and spatial motion of the human body joint points, the system raising an alarm when the posture meets the fall judgment conditions. The invention can follow and monitor the human body in real time, estimate the human posture, and quickly and accurately detect a fall. Camera images are collected, analyzed and judged, the judgment result is sent to a remote terminal in real time, and human fall events are thus monitored in real time.
Description
Technical Field
The invention relates to the fields of machine vision and robotics, and in particular to a fall detection method based on adaptive following.
Background
As the population ages, the number of elderly people living alone increases day by day, and falls are one of the major problems troubling the elderly. Falls among the elderly are frequent and often serious, and have become a major medical and social problem of our time. According to a World Health Organization study, 28% to 35% of people over 65 suffer a fall every year, a figure that rises to 42% among people over 70. WHO data also show that more than 50% of hospitalizations of the elderly are caused by falls, and that falls account for about 40% of unnatural deaths among the elderly. Automatically detecting an accidental fall of an elderly person living alone and sending out alarm information therefore has important practical significance.
Current fall detection techniques can be divided into three categories: fall detection based on wearable sensors, fall detection based on environmental sensors, and vision-based fall detection. Wearable devices have the drawbacks of being inconvenient to wear, having a high false-alarm rate, and requiring frequent recharging. Environmental-sensor approaches use infrared, sound, vibration and other sensors to judge the occurrence of a fall event, but their detection accuracy is generally low and installation is complex. Vision-based fall detection extracts and recognizes human body contour and motion features from one or more cameras, but is easily affected by illumination intensity and viewing angle, so its recognition rate is low.
Disclosure of Invention
In order to solve the technical problem, the invention provides a fall detection method based on adaptive following.
In order to solve the technical problems, the invention adopts the following technical scheme:
and S1, building the ROS-based robot platform.
And S2, generating three-dimensional depth data of the space by using the RGBD depth camera, and detecting the front human body and following the front human body at a certain distance by using the robot.
S3, estimating the posture of the human body in the RGB image by using the neural network model, and extracting the bone joint point coordinates of key parts of the human body, wherein the key parts comprise 25 joint points of a nose, a neck, a left eye, a right eye, a left ear, a right ear, a left shoulder, a right shoulder, a left elbow, a right elbow, a left wrist, a right wrist, a left hip, a hip midpoint, a right hip, a left knee, a right knee, a left ankle, a right ankle, a left big toe, a right big toe, a left small toe, a right small toe, a left heel and a right heel.
And S4, judging the posture of the human body by using a falling detection algorithm according to the space position and the space motion amount of the joint point of the human body, and giving an alarm by the system when the posture of the human body meets the falling judgment condition.
In step S3, the fall detection model further establishes an image coordinate system:
The upper left corner of the RGB image acquired by the camera is taken as the coordinate origin (0, 0), with the positive x axis pointing horizontally to the right and the positive y axis pointing vertically downward.
The acquisition time t of the current frame is combined with the image coordinates (x, y) to form a continuous video spatio-temporal coordinate system (x, y, t), and the human skeleton feature point information is associated with time to form a spatio-temporal data set of skeleton feature points.
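For illustration, the spatio-temporal data set of skeleton feature points can be accumulated as follows. This is a minimal sketch, assuming the pose estimator delivers per-frame joint coordinates keyed by the joint indices of FIG. 2; the function and argument names are hypothetical.

```python
from collections import defaultdict

def build_spacetime_dataset(frames):
    """Accumulate per-joint (x, y, t) samples from a frame sequence.

    `frames` is an iterable of (t, keypoints) pairs, where `keypoints`
    maps a joint index i (0..24) to its (x, y) pixel coordinates in the
    image coordinate system (origin at top-left, x right, y down).
    """
    dataset = defaultdict(list)  # joint index -> list of (x, y, t)
    for t, keypoints in frames:
        for i, (x, y) in keypoints.items():
            dataset[i].append((x, y, t))
    return dataset
```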
The fall detection algorithm uses the following judgment conditions for a fall:
First detection feature: v_spinemid > v_T, where v_spinemid is the descent speed of the human body center point and v_T is a preset speed threshold. Second detection feature: H_base < H_T with t_now − t_first ≤ T, where H_base is the distance between the midpoint of the two hips and the ground, H_T is a preset distance threshold, t_now is the current frame time, t_first is the frame time at which the first detection feature was detected, and T is a preset time length. If both the first detection feature and the second detection feature are satisfied, the human body is judged to have fallen.
The fall detection algorithm comprises the following steps (a code sketch follows the list):
Step one: calculate the descent speed v_spinemid of the human body center point; if the descent speed exceeds the threshold v_T, mark the first detection feature as detected.
Step two: once the first detection feature is detected, start the second detection and calculate the distance H_base between the midpoint of the two hips and the ground.
Step three: if the hip joints are not detected, or the distance H_base between the hip midpoint and the ground is lower than H_T, judge that the human body has fallen.
Step four: if, within time T from the first-feature mark, H_base never falls below H_T, or the human body center point rises, cancel the first-detection-feature mark and return to step one.
Here v_spinemid is calculated as
v_spinemid = (y_c(t_1) − y_c(t_0)) / (t_1 − t_0),
where y_i(t) is the y-axis coordinate of human skeleton node i at frame time t, c denotes the body center point, t_0 is the previous frame time and t_1 is the current frame time. Since the y axis points downward, a positive value indicates descent.
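A per-frame helper for this difference quotient might look as follows (a hypothetical sketch, assuming frame times in seconds and y in pixels):

```python
def descent_speed(y_prev, y_curr, t_prev, t_curr):
    """Descent speed of a skeleton point between two frames.

    y grows downward in the image coordinate system, so a positive
    result means the point is moving toward the ground.
    """
    return (y_curr - y_prev) / (t_curr - t_prev)
```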
To calculate H_base, the ground equation Ax + By + C = 0 is first obtained, where A, B and C are the parameters of the ground equation; the distance H_base between the midpoint of the two hips and the ground then follows from the point-to-line distance formula:
H_base = |A·x_m + B·y_m + C| / √(A² + B²),
where (x_m, y_m) are the coordinates of the hip midpoint.
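Assuming the ground has been fitted as the line Ax + By + C = 0 in the same coordinate system as the hip midpoint, a sketch of the computation (the helper name is hypothetical):

```python
import math

def hip_to_ground_distance(x_m, y_m, A, B, C):
    """Distance from the hip midpoint (x_m, y_m) to the fitted ground
    line Ax + By + C = 0, via the standard point-to-line formula."""
    return abs(A * x_m + B * y_m + C) / math.hypot(A, B)
```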
The invention can follow and monitor the human body in real time, estimate the human posture, and quickly and accurately detect a human fall; through the collection, analysis and judgment of camera images, the judgment result is sent to a remote terminal in real time, and human fall events are monitored in real time.
Drawings
FIG. 1 is a schematic view of a robot working node;
FIG. 2 is a human skeletal feature point diagram;
FIG. 3 is a schematic diagram of a human body posture estimation neural network;
FIG. 4 is a fall detection algorithm flow chart.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
the overall workflow of the robot platform is shown in the attached figure 1. The method comprises the steps of starting a main core system, a camera drive and a bottom motion drive, and then starting a human body following node, a human body falling detection node and an alarm processing node. The camera drives to publish the depth image and the RGB image topics, and the bottom motion drive waits for other nodes to publish control information. The human body following node subscribes to the topic of the depth image, judges the distance of an object in a three-dimensional space, and if the human body appears in a certain distance in front of the depth camera, releases motor control information and controls the robot to follow the human body. The human body falling detection node subscribes to RGB image topics, and human body skeleton feature points are obtained by utilizing a neural network model. According to the spatial position and the spatial motion amount of the human body joint point, judging the human body posture by using a falling detection algorithm according to the figure 4, according with falling judgment conditions, and releasing alarm topic information by the nodes. And the alarm processing node subscribes an alarm topic, processes alarm information and sends the alarm information to the remote terminal.
In the human fall detection node, the neural network model extracts the coordinates of the human skeleton feature points in the RGB image; the key parts comprise 25 joint points: nose, neck, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, hip midpoint, right hip, left knee, right knee, left ankle, right ankle, left big toe, right big toe, left small toe, right small toe, left heel and right heel. As shown in FIG. 2, the obtained human skeleton information consists of the 25 joint points P_i(t) = {(x_i(t), y_i(t)) | i = 0, 1, 2, …, 24}, with the labels nose 0, neck 1, left eye 16, right eye 15, left ear 18, right ear 17, left shoulder 5, right shoulder 2, left elbow 6, right elbow 3, left wrist 7, right wrist 4, left hip 12, hip midpoint 8, right hip 9, left knee 13, right knee 10, left ankle 14, right ankle 11, left big toe 19, right big toe 22, left small toe 20, right small toe 23, left heel 21, right heel 24.
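For reference, the 25 labels above can be written as an index-to-name map (an illustrative convenience, not part of the patent text):

```python
# BODY_25-style joint indices, matching the labels listed above.
JOINT_LABELS = {
    0: "nose", 1: "neck", 2: "right_shoulder", 3: "right_elbow",
    4: "right_wrist", 5: "left_shoulder", 6: "left_elbow", 7: "left_wrist",
    8: "hip_midpoint", 9: "right_hip", 10: "right_knee", 11: "right_ankle",
    12: "left_hip", 13: "left_knee", 14: "left_ankle", 15: "right_eye",
    16: "left_eye", 17: "right_ear", 18: "left_ear", 19: "left_big_toe",
    20: "left_small_toe", 21: "left_heel", 22: "right_big_toe",
    23: "right_small_toe", 24: "right_heel",
}
```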
The human skeleton information is detected by a neural network model, shown schematically in FIG. 3. The network first converts the input picture into an image feature map F through the first 10 layers of a VGG-19 network, and then splits into two branches that respectively predict, at every image location, the confidence of each key point and the part affinity vectors. Here S is the confidence map network and L is the affinity vector field network; at the first stage:
S^1 = ρ^1(F)
L^1 = φ^1(F)
The loss for both networks is the mean squared error between the predicted maps and the ground-truth maps. Consistent with the OpenPose formulation (with the unlabeled-region weighting omitted here), at stage t:
f_S^t = Σ_j Σ_p ‖S_j^t(p) − S_j*(p)‖², f_L^t = Σ_c Σ_p ‖L_c^t(p) − L_c*(p)‖²,
where S_j* is the ground-truth confidence map of key point j, L_c* is the ground-truth affinity field of limb c, and p ranges over image locations.
The overall process is thus: an original picture is input; basic feature extraction through the VGG backbone yields the feature map F; at stage 1, prediction proceeds through the two branches. The first branch predicts the key points, following the classical CPM approach, and a second branch for the PAF skeletal point orientation is added on top of it. Subsequent stages are similar, taking the previous predictions together with F as input, and yield the final network outputs S and L.
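As an illustration of this two-branch, multi-stage layout, a PyTorch sketch follows. The channel counts (26 confidence maps, 52 affinity channels) and the two-layer stand-in for the first 10 VGG-19 layers are assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

def branch(in_ch, out_ch):
    """A small convolutional branch ending in a 1x1 prediction layer."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(128, out_ch, 1),
    )

class TwoBranchStage(nn.Module):
    """One stage: rho predicts confidence maps S, phi predicts fields L."""
    def __init__(self, in_ch, n_maps=26, n_pafs=52):
        super().__init__()
        self.rho = branch(in_ch, n_maps)   # S^t = rho^t(...)
        self.phi = branch(in_ch, n_pafs)   # L^t = phi^t(...)

    def forward(self, x):
        return self.rho(x), self.phi(x)

class PoseNet(nn.Module):
    def __init__(self, feat_ch=128, n_maps=26, n_pafs=52, n_stages=3):
        super().__init__()
        # Stand-in for the VGG-19 feature extractor producing F.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        stages = [TwoBranchStage(feat_ch, n_maps, n_pafs)]
        for _ in range(n_stages - 1):
            # Later stages also see the previous S and L predictions.
            stages.append(TwoBranchStage(feat_ch + n_maps + n_pafs,
                                         n_maps, n_pafs))
        self.stages = nn.ModuleList(stages)

    def forward(self, img):
        F = self.backbone(img)
        x = F
        for stage in self.stages:
            S, L = stage(x)
            x = torch.cat([F, S, L], dim=1)
        return S, L
```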
Claims (7)
1. A fall detection method based on adaptive following is characterized by comprising the following steps:
step one, building a robot platform based on ROS;
step two, generating three-dimensional depth data of a space with an RGBD depth camera, the robot detecting the human body in front and following it at a fixed distance;
step three, estimating the posture of the human body in the RGB image with a neural network model, and extracting the skeleton joint point coordinates of key parts of the human body;
step four, judging the posture of the human body with a fall detection algorithm according to the spatial position and spatial motion of the human body joint points, and raising an alarm if the posture meets the fall judgment conditions.
2. The fall detection method based on adaptive following according to claim 1, characterized in that the robot detects the human body in front and follows it at a fixed distance, specifically: subscribing to the depth image topic, judging the distance of objects in three-dimensional space, and, if a human body appears within a certain distance in front of the depth camera, publishing motor control information and controlling the robot to follow the human body.
3. The fall detection method based on adaptive following according to claim 1, characterized in that the posture of the human body in the RGB image is estimated with a neural network model, specifically: subscribing to the RGB image topic and obtaining the human skeleton feature points with the neural network model.
4. The fall detection method based on adaptive following according to claim 1, characterized in that the fall detection algorithm uses the following judgment conditions for a fall: first detection feature: v_spinemid > v_T, where v_spinemid is the descent speed of the human body center point and v_T is a preset speed threshold; second detection feature: H_base < H_T with t_now − t_first ≤ T, where H_base is the distance between the midpoint of the two hips and the ground, H_T is a preset distance threshold, t_now is the current frame time, t_first is the frame time at which the first detection feature was detected, and T is a preset time length; if both the first detection feature and the second detection feature are satisfied, the human body is judged to have fallen.
5. The fall detection method based on adaptive following according to claim 4, characterized in that the fall detection algorithm comprises the following steps:
step one: calculating the descent speed v_spinemid of the human body center point, and marking the first detection feature as detected if the descent speed exceeds the threshold v_T;
step two: once the first detection feature is detected, starting the second detection and calculating the distance H_base between the midpoint of the two hips and the ground;
step three: if the hip joints are not detected, or the distance H_base between the hip midpoint and the ground is lower than H_T, judging that the human body has fallen;
step four: if, within time T from the first-feature mark, H_base never falls below H_T, or the human body center point rises, cancelling the first-detection-feature mark and returning to step one.
6. The fall detection method based on adaptive following according to claim 5, characterized in that the descent speed v_spinemid of the human body center point is calculated as v_spinemid = (y_c(t_1) − y_c(t_0)) / (t_1 − t_0), where y_i(t) is the y-axis coordinate of human skeleton node i at frame time t, c denotes the center point, t_0 is the previous frame time and t_1 is the current frame time.
7. The fall detection method based on adaptive following according to claim 5, characterized in that the distance H_base between the midpoint of the two hips and the ground is calculated by first obtaining the ground equation Ax + By + C = 0, where A, B and C are the parameters of the ground equation, and then computing H_base = |A·x_m + B·y_m + C| / √(A² + B²), where (x_m, y_m) are the coordinates of the hip midpoint.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201910911328.1A | 2019-09-25 | 2019-09-25 | Fall detection method based on self-adaptive following
Publications (1)
Publication Number | Publication Date |
---|---|
CN110706255A | 2020-01-17
Family
ID=69196353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910911328.1A | Fall detection method based on self-adaptive following (CN110706255A, withdrawn) | 2019-09-25 | 2019-09-25
Country Status (1)
Country | Link |
---|---|
CN | CN110706255A
- 2019-09-25: CN application CN201910911328.1A filed; published as CN110706255A (not active, withdrawn)
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111274954A (en) * | 2020-01-20 | 2020-06-12 | 河北工业大学 | Embedded platform real-time falling detection method based on improved attitude estimation algorithm |
CN111460908A (en) * | 2020-03-05 | 2020-07-28 | 中国地质大学(武汉) | Human body tumbling identification method and system based on OpenPose |
CN111460908B (en) * | 2020-03-05 | 2023-09-01 | 中国地质大学(武汉) | Human body fall recognition method and system based on OpenPose |
CN111767812A (en) * | 2020-06-18 | 2020-10-13 | 浙江大华技术股份有限公司 | Fall detection method, fall detection device and storage device |
CN111767812B (en) * | 2020-06-18 | 2023-04-21 | 浙江大华技术股份有限公司 | Fall detection method, fall detection device and storage device |
CN112270807A (en) * | 2020-10-29 | 2021-01-26 | 怀化学院 | Old man early warning system that tumbles |
CN112633059A (en) * | 2020-11-12 | 2021-04-09 | 泰州职业技术学院 | Falling remote monitoring system based on LabVIEW and MATLAB |
CN112633059B (en) * | 2020-11-12 | 2023-10-20 | 泰州职业技术学院 | Fall remote monitoring system based on LabVIEW and MATLAB |
CN112418096A (en) * | 2020-11-24 | 2021-02-26 | 京东数科海益信息科技有限公司 | Method and device for detecting falling and robot |
CN112784676A (en) * | 2020-12-04 | 2021-05-11 | 中国科学院深圳先进技术研究院 | Image processing method, robot, and computer-readable storage medium |
CN112837406B (en) * | 2021-01-11 | 2023-03-14 | 聚好看科技股份有限公司 | Three-dimensional reconstruction method, device and system |
CN112837406A (en) * | 2021-01-11 | 2021-05-25 | 聚好看科技股份有限公司 | Three-dimensional reconstruction method, device and system |
CN113408390A (en) * | 2021-06-11 | 2021-09-17 | 广东工业大学 | Human behavior real-time identification method, system, device and storage medium |
CN113591722A (en) * | 2021-08-02 | 2021-11-02 | 山东大学 | Target person following control method and system of mobile robot |
CN113591722B (en) * | 2021-08-02 | 2023-09-12 | 山东大学 | Target person following control method and system for mobile robot |
CN113807197A (en) * | 2021-08-24 | 2021-12-17 | 苏州爱可尔智能科技有限公司 | Fall detection method and device |
CN114170685A (en) * | 2021-12-06 | 2022-03-11 | 南京美基森信息技术有限公司 | RGBD image-based detection method for falling behavior of pedestrian riding escalator |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200117 |