CN115909503B - Fall detection method and system based on key points of human body


Info

Publication number: CN115909503B (granted publication of application CN202211665527.7A; application publication CN115909503A)
Authority: CN (China)
Prior art keywords: frame, target, detection, key point, human body
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 刘振锋, 张春阳, 梁延研
Assignee (current and original): Zhuhai Digital Power Technology Co ltd
Priority and filing date: 2022-12-23; grant publication date: 2023-09-29

Abstract

The application discloses a fall detection method and system based on human body key points. The method comprises the following steps: S1, acquiring video information; S2, extracting features from each frame of the video information; S3, performing human body key point detection on the extracted features to obtain the unassociated human body key points of all targets, and simultaneously performing target detection on the extracted features to obtain a detection frame for each target; S4, matching the detection frame information of the current target with the historical detection frame information; S5, repeatedly executing step S4 until all targets of the current frame are matched, and judging whether the subject has fallen according to the key point coordinates. Based on the two steps of target matching and key point coordinate judgment, the method can accurately detect whether a subject has fallen and can be efficiently deployed in practical application scenarios.

Description

Fall detection method and system based on key points of human body
Technical Field
The application relates to the technical field of fall detection, in particular to a fall detection method and system based on key points of a human body.
Background
Falls readily cause safety problems, especially for vulnerable groups such as the elderly, children, pregnant women, and patients; a fall may even be life-threatening. Detecting falling behavior helps to give timely warnings, assist the fallen person promptly, and safeguard their health. Current fall detection approaches fall into two main categories: wearable devices and visual detection. Wearable devices rely mainly on physical sensors, including acceleration sensors, gyroscope sensors, infrared sensors, and the like. Such devices are convenient to wear, but their battery life is limited and users may forget to wear them. Current vision-based fall detection mainly relies on cameras to capture data, which is then transmitted back to a server for analysis. Although this approach achieves higher accuracy, it suffers from drawbacks such as high bandwidth consumption and transmission delay. In addition, there are methods that detect falls from the position of the human head; such methods are easily confused by motions such as bending over or exercising, and their accuracy is low.
Disclosure of Invention
The application aims to overcome the above defects in the prior art and to provide a fall detection method and system based on human body key points that can accurately detect whether a fall has occurred.
The aim of the application is achieved by the following technical scheme:
A fall detection method based on human body key points comprises the following steps:
S1, acquiring video information;
S2, extracting features from each frame of the video information;
S3, detecting human body key points on the extracted features to obtain unassociated human body key points of all targets; simultaneously, carrying out target detection on the extracted features to obtain a detection frame of each target;
S4, matching the detection frame information of the current target with the historical detection frame information;
S5, repeatedly executing step S4 until all targets in the current frame are matched, and judging whether the subject has fallen according to the key point coordinates.
Preferably, matching the detection frame information of the current target with the historical detection frame information in step S4 includes: calculating the distance D_center between the center point of the current target's detection frame in the current frame and the center point of the corresponding target's detection frame in the previous frame; judging whether the distance D_center is smaller than a first preset threshold; if yes, associating the ID of the current target of the current frame with the ID of the corresponding target of the previous frame, until all targets of the current frame are matched.
Preferably, the distance D_center between the center point of the current target's detection frame in the current frame and the center point of the corresponding target's detection frame in the previous frame is calculated by the formula:
D_center = sqrt((x_c - x_c_prev)^2 + (y_c - y_c_prev)^2)
wherein x_c and y_c are respectively the x and y coordinates of the center point of the human body detection frame in the current frame, and x_c_prev and y_c_prev are respectively the x and y coordinates of the center point of the human body detection frame in the previous frame.
Preferably, if the distance D_center is not smaller than the first preset threshold for any target of the previous frame, a new ID is allocated to the current target of the current frame, indicating a new target.
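For illustration, this matching step can be sketched as follows in Python; it is a minimal sketch assuming the Euclidean center distance defined above, and the function names and threshold value are hypothetical rather than taken from the patent.

import math

def box_center(box):
    # box = [x, y, h, w], where (x, y) is the lower-left corner (per the patent)
    x, y, h, w = box
    return (x + w / 2.0, y + h / 2.0)

def match_or_assign_id(cur_box, prev_targets, next_id, first_thresh=50.0):
    # Associate the current detection with the first previous target whose
    # box center lies within the first preset threshold; otherwise treat it
    # as a new target and allocate a new ID.
    x_c, y_c = box_center(cur_box)
    for tid, prev_box in prev_targets.items():
        x_c_prev, y_c_prev = box_center(prev_box)
        d_center = math.hypot(x_c - x_c_prev, y_c - y_c_prev)
        if d_center < first_thresh:
            return tid, next_id          # reuse the matched target's ID
    return next_id, next_id + 1          # new target: allocate a new ID

Repeating this per detection until every target of the current frame has an ID implements the loop described above.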
Preferably, judging whether the subject has fallen according to the key point coordinates in step S5 includes:
S51, judging whether the current target whose ID association has been completed has already fallen; if not, executing step S52;
S52, respectively calculating the distances in the horizontal and vertical directions between the first part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a second preset threshold and a third preset threshold, respectively; if yes, executing step S53;
S53, respectively calculating the distances in the horizontal and vertical directions between the second part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a fourth preset threshold and a fifth preset threshold, respectively; if yes, executing step S54;
S54, respectively calculating the distances in the horizontal and vertical directions between the third part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a sixth preset threshold and a seventh preset threshold, respectively; if yes, executing step S55; wherein, in the vertical direction, the first part is higher than the second part, and the second part is higher than the third part.
S55, the fall flag of the current target is set to True.
Preferably, the first part is the nose, and the second part includes at least one of the left shoulder, right shoulder, left hip, and right hip; the third part includes at least one of the left knee and right knee.
Preferably, in step S52, if it is judged that the distances in the horizontal and vertical directions between the first part key point of the current target and the corresponding part key point of the previous frame are not greater than the second preset threshold and the third preset threshold, respectively, the fall flag of the current target is set to False.
A fall detection system based on human body key points comprises: a video acquisition module and a neural computing module; the neural computing module comprises: a target detection unit, a target matching unit, and a fall detection judging unit. The video acquisition module is used for acquiring video information; the target detection unit is used for extracting features from each frame of the video information, detecting human body key points on the extracted features to obtain the unassociated human body key points of all targets, and simultaneously performing target detection on the extracted features to obtain a detection frame of each target; the target matching unit is used for matching the detection frame information of the current target with the historical detection frame information; the fall detection judging unit is used for judging whether the subject has fallen according to the key point coordinates.
Preferably, the target detection unit comprises a backbone network MobileNetV2, a human body key point detection head and a target detection head which are sequentially connected; the backbone network MobileNetV2 is configured to perform feature extraction on each frame of video information; the human body key point detection head is used for detecting human body key points of the extracted features to obtain unassociated human body key points of all targets; the target detection head is used for carrying out target detection on the extracted characteristics to obtain a detection frame of each target.
Preferably, the human body key point detection head is a 1×1 convolution layer with 2048 input channels and 17 output channels, producing a confidence heat map of shape [17, height, width], where the 17 channels correspond to 17 human body key points; the target detection head outputs a detection frame of a human body with parameters [x, y, h, w], wherein x and y are the coordinates of the lower-left corner of the detection frame, h is the height, and w is the width.
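As a rough sketch of this detection unit under stated assumptions: torchvision's MobileNetV2 feature extractor outputs 1280 channels, whereas the patent specifies 2048 input channels for the 1×1 keypoint head, so a 1×1 adapter convolution is interposed here as an assumption; the internals of the target detection head are likewise illustrative, since the patent only describes its output format.

import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2  # torchvision >= 0.13 assumed

class FallDetectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features   # 1280-channel features
        self.adapter = nn.Conv2d(1280, 2048, kernel_size=1)   # assumed bridge to 2048 channels
        self.kpt_head = nn.Conv2d(2048, 17, kernel_size=1)    # 17 keypoint confidence heat maps
        self.box_head = nn.Conv2d(2048, 4, kernel_size=1)     # [x, y, h, w] regression maps (illustrative)

    def forward(self, frame):
        feat = self.adapter(self.backbone(frame))
        heatmaps = self.kpt_head(feat)   # shape [N, 17, height, width]
        boxes = self.box_head(feat)      # detection-frame parameters per location
        return heatmaps, boxes

For example, FallDetectionNet()(torch.randn(1, 3, 224, 224)) returns the 17-channel heat maps and the box maps for one frame.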
Compared with the prior art, the application has the following advantages:
according to the application, the human body key points and the targets are detected by the extracted features, the unassociated human body key points and detection frames of all the targets are respectively obtained, the detection frame information of the current target is matched with the history detection frame information first until all the targets of the current frame are matched, and whether the targets fall down is judged according to the obtained key point coordinates, so that whether the targets fall down can be accurately detected based on the two steps of target matching and key point coordinate judgment, and the detection method is efficiently deployed in an actual application scene.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application. In the drawings:
fig. 1 is a block diagram of a fall detection system based on key points of a human body according to the present application.
Fig. 2 is a flow chart of a fall detection method based on key points of a human body according to the present application.
Fig. 3 is a schematic flow chart of determining whether an object falls according to coordinates of key points according to the present application.
Detailed Description
The application is further described below with reference to the drawings and examples.
Fig. 1 is a block diagram of a fall detection system based on key points of a human body according to the present application. As shown in fig. 1, the fall detection system based on human body key points comprises: a video acquisition module and a neural computing module; the neural computing module comprises: a target detection unit, a target matching unit, and a fall detection judging unit. The video acquisition module is used for acquiring video information so that it can be read by the neural computing module; the target detection unit is used for extracting features from each frame of the video information, detecting human body key points on the extracted features to obtain the unassociated human body key points of all targets, and simultaneously performing target detection on the extracted features to obtain a detection frame of each target; the target matching unit is used for matching the detection frame information of the current target with the historical detection frame information; the fall detection judging unit is used for judging whether the subject has fallen according to the key point coordinates.
In this embodiment, the target detection unit includes a backbone network MobileNetV2, a human body key point detection head, and a target detection head that are sequentially connected; the backbone network MobileNetV2 is configured to perform feature extraction on each frame of video information; the human body key point detection head is used for detecting human body key points of the extracted features to obtain unassociated human body key points of all targets; the target detection head is used for carrying out target detection on the extracted characteristics to obtain a detection frame of each target.
Still further, the video acquisition module includes a camera. Video information captured by the camera is read in the form of image frames and transmitted to the backbone network MobileNetV2 for feature extraction, and the extracted features are input into two detection heads: 1. the human body key point detection head, which obtains the unassociated human body key points of all targets; 2. the target detection head, which obtains the detection frame of each human body. Then, the target matching module matches the detection frame information of the current target with the historical detection frame information, and whether the subject has fallen is judged according to the key point coordinates. If the subject falls, a fall alert is sent to the server or other devices via a wireless communication module. Here, "unassociated" means that the key points of all targets have been detected, but it is not yet known which target object each key point belongs to. The detection frame obtained in the next step is used to frame each human body, and the key points framed by a detection frame are divided into one group, i.e., they belong to that target human body.
In this embodiment, the human body key point detection head is a 1×1 convolution layer with 2048 input channels and 17 output channels, producing a confidence heat map of shape [17, height, width], where the 17 channels correspond to 17 human body key points; the target detection head outputs a detection frame of a human body with parameters [x, y, h, w], wherein x and y are the coordinates of the lower-left corner of the detection frame, h is the height, and w is the width. The unassociated human body key points of all targets obtained by the human body key point detection head are grouped by the detection frames obtained by the target detection head: the human body key points inside each human body detection frame are divided into the same group and assigned the same ID. The 17 output key points are: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle.
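The grouping of unassociated key points by detection frames can be sketched as follows; the point-in-box test and the data layout are assumptions for illustration, using the [x, y, h, w] lower-left-corner convention stated above.

def group_keypoints(boxes, keypoints):
    # boxes: list of [x, y, h, w]; keypoints: list of (kx, ky, name) tuples.
    # A key point is assigned to the first detection frame that contains it;
    # each resulting group shares one target ID (the group index here).
    groups = [{} for _ in boxes]
    for kx, ky, name in keypoints:
        for i, (x, y, h, w) in enumerate(boxes):
            if x <= kx <= x + w and y <= ky <= y + h:
                groups[i][name] = (kx, ky)
                break
    return groups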
Deployment of the neural network on the neural computing module: the neural network model trained on the dataset (a .pth file) is converted into an .onnx file, and the 32-bit model is then quantized into an 8-bit model by a quantization tool. Finally, the model is loaded onto the mobile device.
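A deployment sketch follows, assuming PyTorch for the .pth-to-.onnx conversion and onnxruntime's dynamic quantization as the unnamed quantization tool; the file names, input size, and the FallDetectionNet class from the sketch above are illustrative.

import torch
from onnxruntime.quantization import quantize_dynamic, QuantType

model = FallDetectionNet().eval()
model.load_state_dict(torch.load("fall_net.pth", map_location="cpu"))  # trained weights
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "fall_net.onnx", opset_version=12)     # .pth -> .onnx

# Quantize the 32-bit float model to 8-bit integer weights.
quantize_dynamic("fall_net.onnx", "fall_net_int8.onnx", weight_type=QuantType.QInt8)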
Fig. 2 is a flow chart of a fall detection method based on key points of a human body according to the present application. As shown in fig. 2, a fall detection method based on human body key points, implemented on the above fall detection system at the mobile end, comprises:
S1, acquiring video information;
S2, extracting features from each frame of the video information;
S3, detecting human body key points on the extracted features to obtain unassociated human body key points of all targets; simultaneously, carrying out target detection on the extracted features to obtain a detection frame of each target; here the target is the human body. Each human body (target) carries 4 attributes: ID, new key points, historical key points, and a fall flag (a minimal data structure for these is sketched after these steps).
S4, matching the detection frame information of the current target with the historical detection frame information;
S5, repeatedly executing step S4 until all targets in the current frame are matched, and judging whether the subject has fallen according to the key point coordinates. Steps S2-S5 are all completed on the mobile terminal; the fall judgment result obtained from the key point coordinates is then sent to the server side.
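One way to carry the four per-target attributes named in step S3 above (ID, new key points, historical key points, fall flag), as a minimal sketch; the field names are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, Tuple

Point = Tuple[float, float]

@dataclass
class Target:
    id: int
    keypoints: Dict[str, Point] = field(default_factory=dict)          # new key points
    history_keypoints: Dict[str, Point] = field(default_factory=dict)  # key points of the previous frame
    fall_flag: bool = False                                            # fall marker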
In this embodiment, matching the detection frame information of the current target with the historical detection frame information in step S4 includes:
calculating the distance D_center between the center point of the current target's detection frame in the current frame and the center point of the corresponding target's detection frame in the previous frame; judging whether the distance D_center is smaller than a first preset threshold; if yes, associating the ID of the current target of the current frame with the ID of the corresponding target of the previous frame, until all targets of the current frame are matched. The distance D_center is calculated by the formula:
D_center = sqrt((x_c - x_c_prev)^2 + (y_c - y_c_prev)^2)
wherein x_c and y_c are respectively the x and y coordinates of the center point of the human body detection frame in the current frame, and x_c_prev and y_c_prev are respectively the x and y coordinates of the center point of the human body detection frame in the previous frame. If D_center is smaller than the threshold, the IDs of the two targets are associated; otherwise, the center points of the other human body detection frames of the previous frame are compared in turn.
In another embodiment, if the distance D_center is not smaller than the first preset threshold for any target of the previous frame, a new ID is allocated to the current target of the current frame, indicating a new target.
Fig. 3 is a schematic flow chart of determining whether a subject has fallen according to the key point coordinates according to the present application. As shown in fig. 3, judging whether the subject has fallen according to the key point coordinates in step S5 includes:
S51, judging whether the current target whose ID association has been completed has already fallen (i.e., judging whether its fall flag is True or False; a target that fell in a previous frame has its fall flag set to True); if not, executing step S52. If the fall flag is True, the key point coordinates of the current frame are updated into the historical key points.
in this embodiment, before step S51, the method further includes: s51, judging whether the target of the current frame is a newly appeared ID, if so, not performing operation; if not, step S51 is performed.
S52, respectively calculating the distances between the first position key point of the current target and the corresponding position key point of the previous frame in the horizontal direction and the vertical direction; judging whether the distances between the first position key point of the current target and the corresponding position key point of the previous frame in the horizontal direction and the vertical direction are respectively larger than a second preset threshold value and a third preset threshold value; if yes, go to step S53; specifically, calculating nose key point P of current frame 1 With the nose key point P of the previous frame 1_prev Distance D in x direction of (2) x_top Distance D in y direction y_top If D x_topx_top And D is y_topy_top Step S53 is performed; otherwise the fall flag is set to False.
In another embodiment, in step S52, if it is determined that the distances between the first location key point of the current target and the location key point corresponding to the previous frame in the horizontal direction and the vertical direction are not greater than the second preset threshold and the third preset threshold, respectively, the fall flag of the current target is set to False.
S53, respectively calculating the distances between the second part key point of the current target and the corresponding part key point of the previous frame in the horizontal direction and the vertical direction; judging whether the distances between the second part key point of the current target and the corresponding part key point of the previous frame in the horizontal direction and the vertical direction are respectively larger than a fourth preset threshold value and a fifth preset threshold value; if yes, go to step S54; specifically, the center point P of the left shoulder, the right shoulder, the left hip and the right hip of the current frame is calculated 2 Center point P of left shoulder, right shoulder, left hip and right hip of previous frame 2_prev Distance D in x direction of (2) x_mid Distance D in y direction y_mid If D x_midx_mid And D is y_mid >D y_mid Enter intoLine step S54; otherwise the fall flag is set to False.
S54, respectively calculating the distances between the third position key point of the current target and the corresponding position key point of the previous frame in the horizontal direction and the vertical direction; judging whether the distances between the third position key point of the current target and the corresponding position key point of the previous frame in the horizontal direction and the vertical direction are respectively larger than a sixth preset threshold value and a seventh preset threshold value, if so, executing the step S55; wherein, in the vertical direction, the first part is higher than the second part, and the second part is higher than the third part. In particular to calculate the center point P of the left knee and the right knee of the current frame 3 Center point P with left and right knees of previous frame 3_prev Distance D in x direction of (2) x_bot Distance D in y direction y_bot If D x_botx_bot And D is y_boty_bot Setting its fall flag to True; otherwise the fall flag is set to False.
S55, the fall flag of the current target is set to True. If a falling situation occurs, a falling signal can be sent to a server side for alarming.
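The S51-S55 cascade of this embodiment can be sketched as follows, using the concrete parts named above (nose; center of shoulders and hips; center of knees); the six threshold values are illustrative, since the patent does not disclose them.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def judge_fall(cur, prev, thresholds=(20, 40, 15, 30, 10, 20)):
    # cur/prev: dicts mapping key point names to (x, y) for the current and
    # previous frame. Returns True only if all three part checks exceed
    # their horizontal and vertical thresholds (steps S52-S54).
    p2_cur = midpoint(midpoint(cur["l_shoulder"], cur["r_shoulder"]),
                      midpoint(cur["l_hip"], cur["r_hip"]))
    p2_prev = midpoint(midpoint(prev["l_shoulder"], prev["r_shoulder"]),
                       midpoint(prev["l_hip"], prev["r_hip"]))
    p3_cur = midpoint(cur["l_knee"], cur["r_knee"])
    p3_prev = midpoint(prev["l_knee"], prev["r_knee"])
    checks = [
        (cur["nose"], prev["nose"], thresholds[0], thresholds[1]),  # S52: first part
        (p2_cur, p2_prev, thresholds[2], thresholds[3]),            # S53: second part
        (p3_cur, p3_prev, thresholds[4], thresholds[5]),            # S54: third part
    ]
    for (cx, cy), (px, py), th_x, th_y in checks:
        if not (abs(cx - px) > th_x and abs(cy - py) > th_y):
            return False    # any failed check leaves the fall flag False
    return True             # S55: set the fall flag to True

A caller would first apply the S51 pre-checks (skip newly appeared IDs and targets already flagged) and then update target.fall_flag with the returned value.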
Step S5 is followed by: judging whether the subject has fallen according to the key point coordinates, obtaining the fall information, and then sending the fall information to the server side. The fall detection event judgment is completed on the mobile terminal device.
The method can be applied to places where falls easily occur, such as stairways and escalators, as well as to places with many elderly people or patients, such as nursing homes and hospitals. It has the advantages of low bandwidth requirements, strong real-time performance, and a wide application range.
The above embodiments are preferred examples of the present application, but the present application is not limited thereto; any other modification or equivalent substitution made without departing from the technical solution of the present application is included in the scope of the present application.

Claims (7)

1. A fall detection method based on human body key points, characterized by comprising the following steps:
S1, acquiring video information;
S2, extracting features from each frame of the video information;
S3, detecting human body key points on the extracted features to obtain unassociated human body key points of all targets; simultaneously, carrying out target detection on the extracted features to obtain a detection frame of each target;
S4, matching the detection frame information of the current target with the historical detection frame information;
S5, repeatedly executing step S4 until all targets in the current frame are matched, and judging whether the subject has fallen according to the key point coordinates;
wherein matching the detection frame information of the current target with the historical detection frame information in step S4 comprises:
calculating the distance D_center between the center point of the current target's detection frame in the current frame and the center point of the corresponding target's detection frame in the previous frame; judging whether the distance D_center is smaller than a first preset threshold; if yes, associating the ID of the current target of the current frame with the ID of the corresponding target of the previous frame, until all targets of the current frame are matched;
the distance D_center is calculated by the formula:
D_center = sqrt((x_c - x_c_prev)^2 + (y_c - y_c_prev)^2)
wherein x_c and y_c are respectively the x and y coordinates of the center point of the human body detection frame in the current frame, and x_c_prev and y_c_prev are respectively the x and y coordinates of the center point of the human body detection frame in the previous frame;
and judging whether the subject has fallen according to the key point coordinates in step S5 comprises:
S51, judging whether the current target whose ID association has been completed has already fallen; if not, executing step S52;
S52, respectively calculating the distances in the horizontal and vertical directions between the first part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a second preset threshold and a third preset threshold, respectively; if yes, executing step S53;
S53, respectively calculating the distances in the horizontal and vertical directions between the second part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a fourth preset threshold and a fifth preset threshold, respectively; if yes, executing step S54;
S54, respectively calculating the distances in the horizontal and vertical directions between the third part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a sixth preset threshold and a seventh preset threshold, respectively; if yes, executing step S55; wherein, in the vertical direction, the first part is higher than the second part, and the second part is higher than the third part;
S55, the fall flag of the current target is set to True.
2. The fall detection method based on human body key points according to claim 1, wherein if the distance D_center is not smaller than the first preset threshold for any target of the previous frame, a new ID is allocated to the current target of the current frame, indicating a new target.
3. The fall detection method based on human body key points according to claim 1, wherein the first part is the nose, and the second part comprises at least one of a left shoulder, a right shoulder, a left hip, and a right hip; the third part comprises at least one of a left knee and a right knee.
4. The fall detection method based on human body key points according to claim 1, wherein in step S52, if it is judged that the distances in the horizontal and vertical directions between the first part key point of the current target and the corresponding part key point of the previous frame are not greater than the second preset threshold and the third preset threshold, respectively, the fall flag of the current target is set to False.
5. A fall detection system based on human body key points, characterized by comprising: a video acquisition module and a neural computing module;
wherein the neural computing module comprises: a target detection unit, a target matching unit, and a fall detection judging unit;
the video acquisition module is used for acquiring video information;
the target detection unit is used for extracting the characteristics of each frame of the video information; detecting human body key points of the extracted features to obtain unassociated human body key points of all targets; simultaneously, carrying out target detection on the extracted features to obtain a detection frame of each target;
the target matching unit is used for matching the detection frame information of the current target with the historical detection frame information;
the falling detection judging unit is used for judging whether the object falls according to the key point coordinates;
wherein matching the detection frame information of the current target with the historical detection frame information comprises:
calculating the distance D_center between the center point of the current target's detection frame in the current frame and the center point of the corresponding target's detection frame in the previous frame; judging whether the distance D_center is smaller than a first preset threshold; if yes, associating the ID of the current target of the current frame with the ID of the corresponding target of the previous frame, until all targets of the current frame are matched;
the distance D_center is calculated by the formula:
D_center = sqrt((x_c - x_c_prev)^2 + (y_c - y_c_prev)^2)
wherein x_c and y_c are respectively the x and y coordinates of the center point of the human body detection frame in the current frame, and x_c_prev and y_c_prev are respectively the x and y coordinates of the center point of the human body detection frame in the previous frame;
and judging whether the subject has fallen according to the key point coordinates comprises:
S51, judging whether the current target whose ID association has been completed has already fallen; if not, executing step S52;
S52, respectively calculating the distances in the horizontal and vertical directions between the first part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a second preset threshold and a third preset threshold, respectively; if yes, executing step S53;
S53, respectively calculating the distances in the horizontal and vertical directions between the second part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a fourth preset threshold and a fifth preset threshold, respectively; if yes, executing step S54;
S54, respectively calculating the distances in the horizontal and vertical directions between the third part key point of the current target and the corresponding part key point of the previous frame; judging whether these distances are greater than a sixth preset threshold and a seventh preset threshold, respectively; if yes, executing step S55; wherein, in the vertical direction, the first part is higher than the second part, and the second part is higher than the third part;
S55, the fall flag of the current target is set to True.
6. The human body key point-based fall detection system according to claim 5, wherein the target detection unit comprises a backbone network MobileNetV2, a human body key point detection head, and a target detection head connected in sequence;
the backbone network MobileNetV2 is configured to perform feature extraction on each frame of video information;
the human body key point detection head is used for detecting human body key points of the extracted features to obtain unassociated human body key points of all targets;
the target detection head is used for carrying out target detection on the extracted characteristics to obtain a detection frame of each target.
7. The fall detection system based on human body key points according to claim 6, wherein the human body key point detection head is a 1×1 convolution layer with 2048 input channels and 17 output channels, producing a confidence heat map of shape [17, height, width], where the 17 channels correspond to 17 human body key points; the target detection head outputs a detection frame of a human body with parameters [x, y, h, w], wherein x and y are the coordinates of the lower-left corner of the detection frame, h is the height, and w is the width.
CN202211665527.7A 2022-12-23 2022-12-23 Fall detection method and system based on key points of human body Active CN115909503B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211665527.7A CN115909503B (en) 2022-12-23 2022-12-23 Fall detection method and system based on key points of human body


Publications (2)

Publication Number Publication Date
CN115909503A CN115909503A (en) 2023-04-04
CN115909503B true CN115909503B (en) 2023-09-29

Family

ID=86488180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211665527.7A Active CN115909503B (en) 2022-12-23 2022-12-23 Fall detection method and system based on key points of human body

Country Status (1)

Country Link
CN (1) CN115909503B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6297822B2 (en) * 2013-11-19 2018-03-20 ルネサスエレクトロニクス株式会社 Detection device, detection system, and detection method
US11879960B2 (en) * 2020-02-13 2024-01-23 Masimo Corporation System and method for monitoring clinical activities
WO2021186655A1 (en) * 2020-03-19 2021-09-23 株式会社日立製作所 Fall risk evaluation system
CN112528850A (en) * 2020-12-11 2021-03-19 北京百度网讯科技有限公司 Human body recognition method, device, equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011016782A1 (en) * 2009-08-05 2011-02-10 Agency For Science, Technology And Research Condition detection methods and condition detection devices
CN111274954A (en) * 2020-01-20 2020-06-12 河北工业大学 Embedded platform real-time falling detection method based on improved attitude estimation algorithm
CN111401296A (en) * 2020-04-02 2020-07-10 浙江大华技术股份有限公司 Behavior analysis method, equipment and device
CN111798483A (en) * 2020-06-28 2020-10-20 浙江大华技术股份有限公司 Anti-blocking pedestrian tracking method and device and storage medium
CN111814767A (en) * 2020-09-02 2020-10-23 科大讯飞(苏州)科技有限公司 Fall detection method and device, electronic equipment and storage medium
CN114373142A (en) * 2021-11-22 2022-04-19 闽江学院 Pedestrian falling detection method based on deep learning
CN114818788A (en) * 2022-04-07 2022-07-29 北京邮电大学 Tracking target state identification method and device based on millimeter wave perception
CN115273227A (en) * 2022-07-14 2022-11-01 上海海事大学 Crew falling detection method and device based on improved Blazepos-LSTM

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Indoor Fall Behavior Detection Based on Video Surveillance; 李冬冬; China Master's Theses Full-text Database, Social Sciences II, No. 1; pp. H123-257 *

Also Published As

Publication number Publication date
CN115909503A (en) 2023-04-04

Similar Documents

Publication Publication Date Title
CN109919132B (en) Pedestrian falling identification method based on skeleton detection
CN112215185B (en) System and method for detecting falling behavior from monitoring video
CN112287759A (en) Tumble detection method based on key points
CN110738154A (en) pedestrian falling detection method based on human body posture estimation
US20220180534A1 (en) Pedestrian tracking method, computing device, pedestrian tracking system and storage medium
CN112149512A (en) Helmet wearing identification method based on two-stage deep learning
CN111241913A (en) Method, device and system for detecting falling of personnel
CN111062303A (en) Image processing method, system and computer storage medium
CN108958482B (en) Similarity action recognition device and method based on convolutional neural network
CN110598536A (en) Falling detection method and system based on human skeleton motion model
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm
CN107122711A (en) A kind of night vision video gait recognition method based on angle radial transformation and barycenter
US20220366570A1 (en) Object tracking device and object tracking method
CN115082825A (en) Video-based real-time human body falling detection and alarm method and device
CN112270381A (en) People flow detection method based on deep learning
CN114469076A (en) Identity feature fused old solitary people falling identification method and system
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
CN114325573A (en) Method for rapidly detecting identity and position information of operation and maintenance personnel of transformer substation
CN115909503B (en) Fall detection method and system based on key points of human body
CN115841497B (en) Boundary detection method and escalator area intrusion detection method and system
CN117158955A (en) User safety intelligent monitoring method based on wearable monitoring equipment
CN115731563A (en) Method for identifying falling of remote monitoring personnel
CN115410113A (en) Fall detection method and device based on computer vision and storage medium
WO2022126668A1 (en) Method for pedestrian identification in public places and human flow statistics system
CN115330751A (en) Bolt detection and positioning method based on YOLOv5 and Realsense

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant