CN109389041B - Fall detection method based on joint point characteristics - Google Patents

Fall detection method based on joint point characteristics

Info

Publication number
CN109389041B
CN109389041B CN201811044571.XA CN201811044571A CN 109389041 B
Authority
CN
China
Prior art keywords
image
joint point
cut
video
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811044571.XA
Other languages
Chinese (zh)
Other versions
CN109389041A (en)
Inventor
刘宁钟
袁鹏泰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201811044571.XA priority Critical patent/CN109389041B/en
Publication of CN109389041A publication Critical patent/CN109389041A/en
Application granted granted Critical
Publication of CN109389041B publication Critical patent/CN109389041B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fall detection method based on joint point characteristics, which comprises the following steps: S1, processing the obtained video frames through the yolo target detection algorithm and intercepting the area containing a person; S2, extracting the joint points in the person region through the openpose algorithm to obtain joint point information; S3, classifying the obtained joint points through an SVM classifier; S4, extracting the most important category from each frame of image; S5, filtering and simplifying the category sequence; S6, judging whether a falling event occurs in the video. The invention effectively solves the problem that different people have different body shapes, because only the information of the human joint points is considered and the external contour of the person is not required; when several people are present in the video at a certain moment, only the stage with the largest weight value is extracted as the representative of that moment, which effectively avoids the influence of other irrelevant stages and improves the identification accuracy when several people are present in the video.

Description

Fall detection method based on joint point characteristics
Technical Field
The present invention relates to a fall detection method, and more particularly, to a fall detection method based on joint extraction and an SVM classifier.
Background
The problem of population aging is becoming more serious worldwide; the number of elderly people (over 60 years of age) is expected to exceed 2 billion by 2050, so the safety of the elderly is an increasingly important issue. The most threatening safety problem for the elderly is the accidental fall. The elderly fall for various reasons, including heart attacks, collisions and slippery floors. A fall can cause problems such as hip fracture, traumatic brain injury and limb fracture, and can even lead to death if the fall is not discovered in time. One survey in the United States shows that approximately 2.5 million elderly people enter hospital emergency departments each year as a result of falls, and about one-sixth of them die because they did not receive treatment in time.
Existing fall detection algorithms fall mainly into three categories: algorithms based on wearable devices, algorithms based on ambient sensors, and algorithms based on computer vision. Although detection algorithms based on wearable devices are flexible and simple to implement, the device must be worn by the user for long periods, which greatly inconveniences daily activities. Fall detection algorithms based on ambient sensors are relatively expensive to deploy and are restricted to the area in which the sensors are installed. Detection algorithms based on computer vision require nothing to be worn, do not interfere with the user's daily activities, do not require sensors to be installed in a fixed environment, are therefore not limited to a particular site, and can be applied wherever a camera is installed. Computer-vision-based algorithms therefore have many advantages and broad application prospects. The fall detection method provided by the invention is such a computer-vision-based method.
Fall detection methods based on computer vision face many challenges, such as extracting the person from a complex background, coping with differences in body shape between people, and the sharp drop in detection accuracy when several people are present in the scene.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings in the prior art, and provides a falling detection method based on joint point characteristics.
In order to achieve the purpose, the invention adopts the following technical scheme:
a falling detection method based on joint point extraction comprises the following steps:
step 1: respectively processing each frame of image of the obtained video through a target detection algorithm, and intercepting an area containing a person;
step 2: extracting the joint points in the area intercepted in the step 1 to obtain joint point information, and normalizing the joint point information;
step 3: classifying the joint point information of each frame of image to obtain a category sequence;
step 4: simplifying the category sequence obtained in step 3, representing several consecutive identical categories by one category;
step 5: judging, from the simplified category sequence obtained in step 4, whether a fall event occurs in the video.
Preferably, in step 1, the target detection algorithm is a yolo algorithm.
Preferably, in the step 1, the specific steps of intercepting the human-containing region are as follows: expanding and cutting an area where a person exists, and representing the area where the human body is located as (x, y, w, h), wherein x is an abscissa of the upper left corner of the area, y is an ordinate of the upper left corner of the area, w is the width of the area, and h is the height of the area; the cutting formula is as follows:
x_cut=x*0.9 (1)
y_cut=y*0.9 (2)
w_cut=x_cut+w*1.2<image.cols?w*1.2:image.cols-x_cut (3)
h_cut=y_cut+h*1.2<image.rows?h*1.2:image.rows-y_cut (4)
wherein x_cut is the abscissa of the upper left corner after cutting, y_cut is the ordinate of the upper left corner after cutting, w_cut is the width of the image after cutting, h_cut is the height of the image after cutting, image.cols is the width of the image and image.rows is the height of the image; formula (3) and formula (4) prevent the intercepted area from going out of range.
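As an illustrative, non-authoritative sketch of formulas (1)-(4): the top-left corner of the detected person box is moved 10% closer to the image origin and the width and height are enlarged by 20%, then the region is clipped to the image border. The helper name crop_person_region and the use of an OpenCV/NumPy image array are assumptions for illustration, not part of the patent.

def crop_person_region(image, box):
    # image: H x W (x 3) array, e.g. as returned by cv2.imread; box = (x, y, w, h) of the detected person.
    x, y, w, h = box
    rows, cols = image.shape[0], image.shape[1]      # image.rows and image.cols in the patent's notation
    x_cut = x * 0.9                                  # formula (1)
    y_cut = y * 0.9                                  # formula (2)
    # formulas (3) and (4): enlarge by 20%, but never run past the image border
    w_cut = w * 1.2 if x_cut + w * 1.2 < cols else cols - x_cut
    h_cut = h * 1.2 if y_cut + h * 1.2 < rows else rows - y_cut
    x0, y0 = int(x_cut), int(y_cut)
    return image[y0:y0 + int(h_cut), x0:x0 + int(w_cut)]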
Preferably, when several people exist in a single frame of image and two of them are close enough together, different cutting areas may contain the same two people, which leads to the problem of repeated identification; for this problem, the present invention adopts the following formula:
overlap(S1, S2) = area(S1 ∩ S2) / area(S1 ∪ S2)   (5)
wherein S1 and S2 are the two regions after cutting; when formula (5) is greater than the threshold, only the region with the larger area of the two is retained.
Preferably, in step 2, the joint point information of the person in the image is extracted by the openpose algorithm.
Preferably, the joint point information obtained in step 2 is normalized, and the formula is as follows:
[Formulas (6) and (7): normalization of each joint point coordinate with respect to the reference joint point; in the original publication these formulas appear only as images.]
wherein (x_nm, y_nm) is the normalized joint point coordinate, and (x_base, y_base) is the coordinate of the reference joint point; when the reference joint point cannot be recognized, all the joint point coordinates are set to 0.
Preferably, in step 3, the joint point information is classified into 5 categories by an SVM classifier: the normal stage p_normal, the fall stage p_falling, the lying stage p_lay, the other stages p_others, and the case in which all joint points are 0; the normal stage is the state of normal upright walking or normal sitting of the person; the lying stage is the state in which the whole body of the person lies on the ground; the fall stage is the process of transition of the person from the normal stage to the lying stage; and the other stages are all stages other than the above 3 stages.
Preferably, in step 3, different weight values are assigned to the different stages according to their importance; if several persons exist in a video frame, the single stage with the highest weight value is extracted as the representative stage of the joint point information of that frame.
Preferably, a sliding window is selected, the category sequence obtained in the step 3 is filtered, and the step 4 is performed after the abnormal category is removed.
Preferably, in step 5, whether a fall event occurs in the video is determined by checking whether the simplified category sequence obtained in step 4 contains consecutive fall and lying states; if it does, a fall event has occurred in the video, otherwise no fall event has occurred in the video during that period.
Advantageous effects: 1. the method based on joint point characteristics effectively solves the problem that different people have different body shapes, because only the information of the human joint points is considered and the external contour information of the person is not required;
2. when several people exist in the video at a certain moment, i.e. several fall stages exist in the video at that moment, only the stage with the largest weight value is extracted as the representative of that moment, which effectively avoids the influence of the other, irrelevant stages and further improves the identification accuracy when several people are present in the video;
3. the accuracy of joint point extraction is improved by combining the yolo algorithm and the openpose algorithm.
Drawings
Fig. 1 is an algorithm flowchart of a fall detection method based on joint point characteristics according to the present invention.
FIG. 2 is a schematic diagram of the joint points extracted in the present invention, wherein 0-17 represent the joint points at different positions of the human body, respectively, 0-nose, 1-neck, 2-right shoulder joint, 3-right elbow joint, 4-right wrist joint, 5-left shoulder joint, 6-left elbow joint, 7-left wrist joint, 8-right hip joint, 9-right knee joint, 10-right ankle joint, 11-left hip joint, 12-left knee joint, 13-left ankle joint, 14-right eye, 15-left eye, 16-right ear, and 17-left ear.
Detailed Description
The present invention will be further explained with reference to examples.
Step 1: acquiring video data;
step 2: taking 1 frame of image every 5 frames for processing;
Step 3: processing the image acquired in step 2 with the yolo (You Only Look Once) algorithm and identifying whether a person exists in the frame; if so, cutting out the area where the person is located and entering step 4, otherwise returning to step 2;
Step 4: processing the image cut out in step 3 with the openpose algorithm, extracting the joint point information of the person in the image, and normalizing the joint point information;
Step 5: classifying the joint point information acquired in step 4 with an SVM classifier;
Step 6: extracting the most critical category from the categories obtained in step 5 and storing it; repeating steps 2 to 6 until the number of stored categories reaches a preset value (200 is used here), then sending the category data to step 7;
Step 7: selecting a sliding window, filtering the category sequence obtained in step 6, and removing falsely detected categories;
Step 8: simplifying the category sequence obtained in step 7, collapsing several consecutive identical categories into one category;
Step 9: judging whether the category data obtained in step 8 contain consecutive fall and lying stages; if so, a fall event occurred in the video, otherwise no fall event occurred in the video during this period.
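The following is a minimal, non-authoritative sketch in Python of how steps 1-9 could be chained together. The helper functions detect_person, crop_person_region, extract_joints, normalize_joints, classify_stage, mode_filter, simplify and contains_fall are illustrative names for the operations described in this section (sketches for most of them are given further below); they are not functions defined by the patent, and OpenCV (cv2) is assumed only for reading the video. For brevity a single person per frame is assumed here; with several persons the key category of each frame would be selected by weight as described for step 6.

import cv2

def detect_fall(video_path, svm, batch_size=200, frame_step=5):
    # Sketch of steps 1-9: returns True as soon as a fall event is detected in the video.
    cap = cv2.VideoCapture(video_path)
    categories, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        if index % frame_step != 0:                        # step 2: take 1 frame out of every 5
            continue
        box = detect_person(frame)                         # step 3: yolo person detection (placeholder)
        if box is None:
            continue
        person = crop_person_region(frame, box)            # step 3: expanded cut, formulas (1)-(4)
        joints = normalize_joints(extract_joints(person))  # step 4: openpose extraction + normalization
        categories.append(classify_stage(svm, joints))     # steps 5-6: SVM stage of this frame
        if len(categories) == batch_size:                  # step 6: 200 stored categories
            filtered = mode_filter(categories, window=5)   # step 7: sliding-window filtering
            if contains_fall(simplify(filtered)):          # steps 8-9: simplify and decide
                cap.release()
                return True
            categories = []
    cap.release()
    return False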
For step 3: because openpose may mistakenly recognize joint points in areas where no person is present, the method first uses the yolo algorithm to detect and locate the people in the image, and only the regions containing people are sent to the openpose algorithm for joint point extraction.
The step 3 detection of whether people exist in the image with the yolo algorithm proceeds as follows: the yolo algorithm first divides the image into 7 × 7 cells; each cell is responsible for predicting the category and position of any target whose centre point falls inside it; in this way the category and position of every target in the image are predicted by the neural network. All object categories identified by the yolo algorithm are then examined: if one of the categories is "person", a person exists in the image, otherwise no person is present. When a person is present in the image, the position of the "person" in the image is recorded.
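As a small, hedged illustration of the "keep only person detections" step, assuming the detector output is available as a list of (label, confidence, box) tuples; this data layout and the confidence cut-off are assumptions for illustration, not something the patent prescribes.

def person_boxes(detections, min_confidence=0.5):
    # Keep the bounding boxes of detections labelled "person".
    return [box for label, confidence, box in detections
            if label == "person" and confidence >= min_confidence]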
The image cutting method in step 3 is as follows: the area where the human body exists is expanded and cut. The area is expressed as (x, y, w, h), where x is the abscissa of its upper left corner, y is the ordinate of its upper left corner, w is its width and h is its height. The cutting formula is as follows:
x_cut=x*0.9 (1)
y_cut=y*0.9 (2)
w_cut=x_cut+w*1.2<image.cols?w*1.2:image.cols-x_cut (3)
h_cut=y_cut+h*1.2<image.rows?h*1.2:image.rows-y_cut (4)
wherein image.cols in formula (3) is the width of the image and image.rows in formula (4) is the height of the image; formula (3) and formula (4) prevent the intercepted area from going out of range.
In step 3, since an extended cut is used, when several persons are present and two of them are close enough together, different cut regions may contain the same two persons, causing the problem of repeated recognition. The invention follows the idea of IoU (Intersection over Union) to solve this problem, and the formula is as follows:
overlap(S1, S2) = area(S1 ∩ S2) / area(S1 ∪ S2)   (5)
wherein S1 and S2 are the two cut regions; when formula (5) is greater than a certain threshold, only the region with the larger area of the two is retained. Experimental tests show that the effect is best when the threshold is 85%.
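A non-authoritative sketch of the overlap test of formula (5) applied to axis-aligned (x, y, w, h) boxes, retaining only the larger region when the overlap exceeds the 85% threshold; the helper names and the box representation are assumptions for illustration.

def iou(box_a, box_b):
    # Intersection over union of two (x, y, w, h) boxes, in the spirit of formula (5).
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def remove_duplicate_regions(boxes, threshold=0.85):
    # Keep the larger region and drop the smaller one whenever two cut regions overlap too much.
    kept = []
    for box in sorted(boxes, key=lambda b: b[2] * b[3], reverse=True):   # largest area first
        if all(iou(box, other) <= threshold for other in kept):
            kept.append(box)
    return kept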
The openpose algorithm extracts the joint points in step 4 as follows: the openpose algorithm uses a network structure with two branches, one branch predicting the positions of the joint points (only predicting where joint points exist, without knowing which joint point belongs to which person) and the other branch predicting the positional relationships between the joint points. The network thus predicts the joint point positions and the degree of association between the body's joint points simultaneously, and finally the two predictions are matched with the Hungarian algorithm to obtain the joint point positions of each human body (i.e. where the shoulder joint points are, where the elbow joint points are, and so on).
For the normalization method in step 4, the joint point 1 in fig. 2 is selected as the reference point for normalization, and the normalization formula is as follows:
[Formulas (6) and (7): normalization of each joint point coordinate with respect to the reference joint point; in the original publication these formulas appear only as images.]
wherein (x_nm, y_nm) are the normalized joint point coordinates and (x_base, y_base) are the coordinates of the reference joint point (i.e. joint point 1). When the reference joint point cannot be recognized, all the joint point coordinates are set to 0.
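A minimal sketch of the normalization step, under the assumption that formulas (6) and (7) (reproduced only as images in the original publication) express each joint point relative to the reference joint point 1 (the neck); the exact scaling used by the patent may therefore differ from this sketch.

def normalize_joints(joints, base_index=1):
    # joints: one (x, y) pair or None per joint point, indexed as in fig. 2.
    base = joints[base_index]
    if base is None:                      # reference joint not recognized: all coordinates set to 0
        return [(0.0, 0.0)] * len(joints)
    x_base, y_base = base
    normalized = []
    for point in joints:
        if point is None:
            normalized.append((0.0, 0.0))
        else:
            x, y = point
            normalized.append((x - x_base, y - y_base))
    return normalized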
In step 5 the invention distinguishes 5 categories in total, namely the 4 stages of a fall plus the case in which all joint points are 0. The 4 stages of a fall are the normal stage, the fall stage, the lying stage and the other stages. The normal stage is the state of normal upright walking or normal sitting of the person; the lying stage is the state in which the whole body of the person lies on the ground; the fall stage is the process of transition of the person from the normal stage to the lying stage; the other stages are all stages other than the remaining 3, such as the sitting-down stage and the standing-up stage. At the same time, the invention assigns different weight values to the different stages according to their importance. The normal stage is denoted p_normal, the fall stage p_falling, the lying stage p_lay, the other stages p_others, and the stage in which all joint points are 0 is denoted p_zero. The importance of the stages is ordered p_falling > p_lay > p_others > p_normal > p_zero, so the invention assigns the following weights to the stages: ω_falling = 4, ω_lay = 3, ω_others = 2, ω_normal = 1, ω_zero = 0.
The classification procedure of the SVM classifier in step 5 is as follows: first, an SVM classifier is trained on the Le2i fall detection data set, with the classification categories as described in the previous paragraph; in the detection stage, the joint point information extracted by the openpose algorithm is sent to the trained SVM classifier, which yields the fall stage of the person corresponding to the input joint points.
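A non-authoritative sketch of the training and prediction steps using scikit-learn; the linear kernel and the use of the 36 normalized coordinates (18 joint points x 2) flattened into one feature vector are assumptions for illustration, since the patent does not specify the kernel or the exact feature layout.

import numpy as np
from sklearn.svm import SVC

STAGES = ["normal", "falling", "lay", "others", "zero"]

def train_stage_classifier(features, labels):
    # features: array of shape (n_samples, 36) with normalized joint coordinates; labels: indices into STAGES.
    classifier = SVC(kernel="linear")
    classifier.fit(features, labels)
    return classifier

def classify_stage(classifier, joints):
    # Flatten the 18 normalized (x, y) joint points of one frame and predict its fall stage.
    vector = np.asarray(joints, dtype=float).reshape(1, -1)
    return STAGES[int(classifier.predict(vector)[0])]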
In step 6, if several persons exist in the image, several sets of joint points are extracted in step 4 and several categories are identified in step 5, so several categories may be identified for the same frame of image, which affects the final recognition result. Since the invention needs to detect whether a fall event occurs in the video, only the single category that is most critical for identifying falls needs to be extracted. The invention extracts the key category according to the weight values given in the previous paragraph; if the category sequence identified from the ith frame of image is {p_falling, p_lay, p_others, p_normal, p_zero}, the extraction process is as follows:
category(frame_i) = the identified category with the largest weight ω; for the example above, {p_falling, p_lay, p_others, p_normal, p_zero} → p_falling
wherein frame_i denotes the ith frame of image in the video; in this way the most critical category is extracted from each frame of image.
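The weight-based extraction can be sketched as follows; the dictionary of weights comes directly from the values given above, while the function name pick_key_category is an illustrative assumption.

WEIGHTS = {"falling": 4, "lay": 3, "others": 2, "normal": 1, "zero": 0}

def pick_key_category(frame_categories):
    # Return the single category with the highest weight among all persons detected in one frame.
    return max(frame_categories, key=lambda category: WEIGHTS[category])

For the example in the text, pick_key_category(["falling", "lay", "others", "normal", "zero"]) returns "falling".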
In step 7 the invention selects a sliding window and filters the category sequence identified from the frame images inside the window. Since the SVM classification in step 5 does not reach 100% recognition accuracy, false detections are unavoidable; and since the categories in a video are continuous, the method adopts a sliding-window strategy and fills the whole window with the mode of the categories recognized inside it, which effectively removes the spikes inside the window. The sliding window size used by the invention is 5, so if the recognized categories contained in the window are {p_falling, p_falling, p_others, p_falling, p_falling}, the filtering process is as follows:
{p_falling, p_falling, p_others, p_falling, p_falling} → {p_falling, p_falling, p_falling, p_falling, p_falling}
wherein the p_others entry is a false detection point that the filtering operation effectively removes.
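A sketch of the window filtering with window size 5; a non-overlapping window stride and the helper name mode_filter are assumptions for illustration, since the text only states that the whole window is filled with the mode of the categories recognized inside it.

from collections import Counter

def mode_filter(categories, window=5):
    # Replace every window of categories by its most common value, removing isolated false detections.
    filtered = list(categories)
    for start in range(0, len(filtered) - window + 1, window):
        block = filtered[start:start + window]
        mode = Counter(block).most_common(1)[0][0]
        filtered[start:start + window] = [mode] * window
    return filtered

For the example above, mode_filter(["falling", "falling", "others", "falling", "falling"]) returns five "falling" entries.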
In step 8 the category sequence with the spikes removed is obtained from step 7, so the sequence entering step 8 consists of runs of consecutive identical categories; each such run can be represented by a single category, which yields a simplified recognition category sequence. The invention also removes the zero points from the category sequence (the zero point is the category p_zero; no matter between which of the remaining categories p_zero appears, removing it has no negative effect on the final recognition result and instead improves the recognition accuracy). This greatly simplifies the judgement, in the next step, of whether a fall event exists in the video, and further improves the calculation efficiency and accuracy. Suppose the category sequence obtained from step 7 is {p_normal … p_normal, p_zero … p_zero, p_normal … p_normal, p_falling … p_falling, p_lay … p_lay}; then the sequence is simplified as follows:
{p_normal … p_normal, p_zero … p_zero, p_normal … p_normal, p_falling … p_falling, p_lay … p_lay} → {p_normal, p_zero, p_normal, p_falling, p_lay} → {p_normal, p_normal, p_falling, p_lay} (after removing the zero point)
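A brief sketch of the simplification of step 8, collapsing runs of identical categories and removing the p_zero category; the helper name simplify is an illustrative assumption.

from itertools import groupby

def simplify(categories):
    # Collapse consecutive identical categories into a single entry and drop the "zero" category.
    collapsed = [key for key, _ in groupby(categories)]
    return [category for category in collapsed if category != "zero"]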
in step 9, the present invention determines the category sequence obtained in step 8, if there are consecutive p in the categoryfallingAnd playAnd judging whether a falling event occurs in the video according to the category, otherwise, judging whether no falling occurs. I.e., if the sequence is { p }normal,pfalling,playIf the sequence is { p }, a fall event occurs in the videonormal,pothers,playAnd then no fall event occurs in the video.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A falling detection method based on joint point extraction is characterized by comprising the following steps:
step 1: respectively processing each frame of image of the obtained video through a target detection algorithm, and intercepting an area containing a person;
step 2: extracting the joint points in the area intercepted in the step 1 to obtain joint point information, and normalizing the joint point information; the normalization formula is as follows:
[Formulas (1) and (2): normalization of each joint point coordinate with respect to the reference joint point; in the original publication these formulas appear only as images.]
wherein (x_nm, y_nm) is the normalized joint point coordinate and (x_base, y_base) is the coordinate of the reference joint point; when the reference joint point cannot be identified, all joint point coordinates are set to 0;
step 3: classifying the joint point information of each frame of image to obtain a category sequence; different weight values are given to the different stages according to their importance, and if several persons exist in the video frame, the stage with the highest weight value is extracted as the representative stage of the joint point information of that frame;
step 4: simplifying the category sequence obtained in step 3, representing several consecutive identical categories by one category;
step 5: judging, from the simplified category sequence obtained in step 4, whether a fall event occurs in the video.
2. A fall detection method based on joint point extraction as claimed in claim 1, wherein in step 1, the target detection algorithm is yolo algorithm.
3. The method for detecting falls based on joint point extraction as claimed in claim 1, wherein in step 1, the specific step of intercepting the person-containing region is as follows: expanding and cutting an area where a person exists, and representing the area where the human body is located as (x, y, w, h), wherein x is an abscissa of the upper left corner of the area, y is an ordinate of the upper left corner of the area, w is the width of the area, and h is the height of the area; the cutting formula is as follows:
x_cut=x*0.9 (3)
y_cut=y*0.9 (4)
w_cut=x_cut+w*1.2<image.cols?w*1.2:image.cols-x_cut (5)
h_cut=y_cut+h*1.2<image.rows?h*1.2:image.rows-y_cut (6)
wherein x_cut is the abscissa of the upper left corner after cutting, y_cut is the ordinate of the upper left corner after cutting, w_cut is the width of the image after cutting, h_cut is the height of the image after cutting, image.cols is the width of the image and image.rows is the height of the image; formula (5) and formula (6) prevent the intercepted area from going out of range.
4. The method as claimed in claim 3, wherein, when several people exist in a single frame of image and two of them are close enough together, different cutting areas may contain the same two people, causing repeated recognition; this problem is solved by the following formula:
overlap(S1, S2) = area(S1 ∩ S2) / area(S1 ∪ S2)   (7)
wherein S1 and S2 are the two regions after cutting; when formula (7) is greater than the threshold, only the region with the larger area of the two is retained.
5. The method as claimed in claim 1, wherein in step 2, the joint point information of the human body in the image is extracted by the openpose algorithm.
6. The method for fall detection based on joint point extraction as claimed in claim 1, wherein in step 3, an SVM classifier is used to classify the joint point information into 5 categories: the normal stage p_normal, the fall stage p_falling, the lying stage p_lay, the other stages p_others, and the case in which all joint points are 0; the normal stage is the state of normal upright walking or normal sitting of the person; the lying stage is the state in which the whole body of the person lies on the ground; the fall stage is the process of transition of the person from the normal stage to the lying stage; and the other stages are all stages other than the above 3 stages.
7. The method as claimed in claim 6, wherein a sliding window is selected, the category sequence obtained in step 3 is filtered, and step 4 is performed after the abnormal categories have been removed.
8. The method for detecting falls based on joint point extraction as claimed in claim 6, wherein in step 5, it is determined whether a fall event occurs in the video by checking whether the simplified category sequence obtained in step 4 contains consecutive fall and lying states; if it does, a fall event has occurred in the video, otherwise no fall event has occurred in the video during that period.
CN201811044571.XA 2018-09-07 2018-09-07 Fall detection method based on joint point characteristics Active CN109389041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811044571.XA CN109389041B (en) 2018-09-07 2018-09-07 Fall detection method based on joint point characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811044571.XA CN109389041B (en) 2018-09-07 2018-09-07 Fall detection method based on joint point characteristics

Publications (2)

Publication Number Publication Date
CN109389041A CN109389041A (en) 2019-02-26
CN109389041B true CN109389041B (en) 2020-12-01

Family

ID=65418626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811044571.XA Active CN109389041B (en) 2018-09-07 2018-09-07 Fall detection method based on joint point characteristics

Country Status (1)

Country Link
CN (1) CN109389041B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109919132B (en) * 2019-03-22 2021-04-23 广东省智能制造研究所 Pedestrian falling identification method based on skeleton detection
TWI704499B (en) * 2019-07-25 2020-09-11 和碩聯合科技股份有限公司 Method and device for joint point detection
CN110604597B (en) * 2019-09-09 2020-10-27 李胜利 Method for intelligently acquiring fetal cardiac cycle images based on ultrasonic four-cavity cardiac section
CN110738154A (en) * 2019-10-08 2020-01-31 南京熊猫电子股份有限公司 pedestrian falling detection method based on human body posture estimation
CN111144263B (en) * 2019-12-20 2023-10-13 山东大学 Construction worker high-falling accident early warning method and device
CN111428703B (en) * 2020-06-15 2020-09-08 西南交通大学 Method for detecting pit leaning behavior of electric power operation and inspection personnel
CN112861686B (en) * 2021-02-01 2022-08-30 内蒙古大学 SVM-based image target detection method
CN113392751A (en) * 2021-06-10 2021-09-14 北京华捷艾米科技有限公司 Tumbling detection method based on human body skeleton nodes and related device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
US20140276238A1 (en) * 2013-03-15 2014-09-18 Ivan Osorio Method, system and apparatus for fall detection
CN104850846B (en) * 2015-06-02 2018-08-24 深圳大学 A kind of Human bodys' response method and identifying system based on deep neural network
CN105279483B (en) * 2015-09-28 2018-08-21 华中科技大学 A kind of tumble behavior real-time detection method based on depth image
CN106056035A (en) * 2016-04-06 2016-10-26 南京华捷艾米软件科技有限公司 Motion-sensing technology based kindergarten intelligent monitoring method
KR101760327B1 (en) * 2016-04-18 2017-07-21 조선대학교산학협력단 Fall detection method using camera
CN107301370B (en) * 2017-05-08 2020-10-16 上海大学 Kinect three-dimensional skeleton model-based limb action identification method
CN107220604A (en) * 2017-05-18 2017-09-29 清华大学深圳研究生院 A kind of fall detection method based on video

Also Published As

Publication number Publication date
CN109389041A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN109389041B (en) Fall detection method based on joint point characteristics
CN111383421B (en) Privacy protection fall detection method and system
CN110287825B (en) Tumble action detection method based on key skeleton point trajectory analysis
Nghiem et al. Head detection using kinect camera and its application to fall detection
CN108509897A (en) A kind of human posture recognition method and system
CN113111865B (en) Fall behavior detection method and system based on deep learning
CN112801000B (en) Household old man falling detection method and system based on multi-feature fusion
CN112327288B (en) Radar human body action recognition method, radar human body action recognition device, electronic equipment and storage medium
Debard et al. Camera based fall detection using multiple features validated with real life video
CN114782874A (en) Anti-epidemic protection article wearing behavior standard detection method based on human body posture
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm
Hasib et al. Vision-based human posture classification and fall detection using convolutional neural network
Ngo et al. Study on fall detection based on intelligent video analysis
Taghvaei et al. Image-based fall detection and classification of a user with a walking support system
Kavya et al. Fall detection system for elderly people using vision-based analysis
Yuwono et al. Fall detection using a Gaussian distribution of clustered knowledge, augmented radial basis neural-network, and multilayer perceptron
KR102423934B1 (en) Smart human search integrated solution through face recognition and multiple object tracking technology of similar clothes color
Al Nahian et al. Social group optimized machine-learning based elderly fall detection approach using interdisciplinary time-series features
CN109800686A A kind of driver's smoking detection method based on active infrared image
KR101766467B1 (en) Alarming apparatus and methd for event occurrence, and providing method of event occurrence determination model
Soni et al. Single Camera based Real Time Framework for Automated Fall Detection
CN114359831A (en) Risk omen reasoning-oriented intelligent identification system and method for worker side-falling
Yoshihara et al. Automatic feature point detection using deep convolutional networks for quantitative evaluation of facial paralysis
Chong et al. Visual based fall detection with reduced complexity horprasert segmentation using superpixel
CN113342166A (en) Cervical vertebra movement identification method and system based on earphone for protecting privacy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant