CN112686211A - Fall detection method and device based on attitude estimation - Google Patents


Info

Publication number: CN112686211A
Application number: CN202110099051.4A
Authority: CN (China)
Legal status: Pending
Prior art keywords: human body, coordinates, preset, key point, image data
Other languages: Chinese (zh)
Inventors: 陈京荣, 李伟彤
Original and current assignee: Guangdong University of Technology
Application filed by Guangdong University of Technology

Classifications

  • Image Analysis (AREA)

Abstract

The invention discloses a fall detection method and device based on posture estimation. The method comprises the steps of: receiving image data collected by a preset image collection device; extracting the coordinates of a human body circumscribed rectangular frame from the image data; inputting the coordinates of the circumscribed rectangular frame into a preset model to obtain a plurality of human body key point coordinates; and performing posture classification on the human body key point coordinates to detect falling behavior. This solves the technical problems that existing detection schemes are not accurate enough in detecting falling behavior and are prone to missed detections.

Description

Fall detection method and device based on posture estimation
Technical Field
The invention relates to the technical field of detection, and in particular to a fall detection method and device based on posture estimation.
Background
In today's society, population aging is an increasingly serious problem. China currently has about 209 million elderly people, the most of any country in the world. Adult children often do not have enough time to take care of their parents, and problems such as the difficulty and cost of hospitalization and strained medical resources limit elderly people's access to public hospital beds. Aging is becoming a difficult problem for individuals, families and governments. Home care can partially solve these problems, but it requires tracking the at-home state of elderly people living alone in real time; fall monitoring in particular is needed, because falls rank first among the causes of death of elderly people over 65.
Traditional monitoring systems mainly include video monitoring, acceleration-sensor monitoring and acoustic-sensor monitoring. However, traditional video monitoring is used mainly for recording, while sensor-based fall monitoring relies on wearable devices, so the user must wear the device at all times, which is very inconvenient in practice. Indoor environments are relatively complex: human activity is easily occluded and strongly affected by lighting, so traditional image-processing methods have limitations; and fall detection based on depth images depends on special hardware such as the Kinect, which is costly and limited in working distance. As a result, traditional monitoring systems are not accurate enough in detecting falls, and missed detections easily occur.
Disclosure of Invention
The invention provides a fall detection method and device based on posture estimation, which solve the technical problems that existing detection schemes are not accurate enough in detecting falls and are prone to missed detections.
The invention provides a fall detection method based on posture estimation, comprising the following steps:
receiving image data acquired by a preset image acquisition device;
extracting coordinates of a human body circumscribed rectangular frame from the image data;
inputting the coordinates of the human body circumscribed rectangle frame into a preset model to obtain a plurality of human body key point coordinates;
and carrying out posture classification on the human body key point coordinates, and detecting the falling behavior.
Optionally, the step of extracting coordinates of a human body bounding rectangle frame from the image data includes:
training a preset neural network model through a preset human body posture data set to obtain a trained neural network model;
and inputting the image data into the trained neural network model, and extracting coordinates of a human body external rectangular frame.
Optionally, the step of inputting the coordinates of the human body circumscribed rectangle frame into a preset model to obtain the coordinates of a plurality of human body key points includes:
inputting the coordinates of the human body circumscribed rectangle frame into a preset model, and acquiring a plurality of initial human body key point coordinates and an initial confidence corresponding to each initial human body key point coordinate;
sequentially judging whether the initial confidence corresponding to each initial human body key point coordinate is less than or equal to a preset threshold value;
if so, deleting that initial human body key point coordinate;
and determining the initial human body key point coordinates whose initial confidence is greater than the preset threshold value as human body key point coordinates.
Optionally, the step of performing posture classification on the human body key point coordinates and detecting a falling behavior includes:
acquiring human body key point coordinates corresponding to the image data of a preset number of frames, and generating a coordinate sequence;
inputting the coordinate sequence into a preset Conv-LSTM network for posture classification, and detecting a falling behavior based on a posture classification result; wherein, an attention mechanism is added in the preset Conv-LSTM network.
Optionally, the body posture data set comprises body walking data, jumping data, lying down data and falling data.
Optionally, the method further comprises:
when the falling behavior is detected, an alarm signal is sent out.
The invention also provides a fall detection device based on posture estimation, comprising:
the receiving module is used for receiving image data collected by a preset image collecting device;
the extraction module is used for extracting coordinates of a human body external rectangular frame from the image data;
the acquisition module is used for inputting the coordinates of the human body circumscribed rectangular frame into a preset model and acquiring a plurality of human body key point coordinates;
and the detection module is used for carrying out posture classification on the human body key point coordinates and detecting the falling behavior.
Optionally, the extraction module includes:
the training submodule is used for training the preset neural network model through a preset human body posture data set to obtain a trained neural network model;
and the extraction submodule is used for inputting the image data into the trained neural network model and extracting the coordinates of the human body external rectangular frame.
The invention also provides an electronic device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the posture estimation based fall detection method of any of the above in accordance with instructions in the program code.
The invention also provides a computer readable storage medium for storing program code for performing a method of posture estimation based fall detection as described in any of the above.
According to the above technical scheme, the invention has the following advantages: the invention receives image data collected by a preset image collection device; extracts the coordinates of the human body circumscribed rectangular frame from the image data; inputs those coordinates into a preset model to obtain a plurality of human body key point coordinates; and performs posture classification on the key point coordinates to detect falling behavior. This solves the technical problems that existing detection schemes are not accurate enough in detecting falling behavior and are prone to missed detections.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart illustrating steps of a fall detection method based on posture estimation according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a fall detection method based on posture estimation according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of YOLOv3 according to an embodiment of the present invention;
FIG. 4 is a block network diagram provided by an embodiment of the present invention;
FIG. 5 is a structural diagram of Conv-LSTM provided in an embodiment of the present invention;
fig. 6 is a block diagram of a fall detection apparatus based on posture estimation according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a fall detection method and device based on posture estimation, which are used to solve the technical problems that existing detection schemes are not accurate enough in detecting falls and are prone to missed detections.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a fall detection method based on posture estimation according to an embodiment of the present invention.
The invention provides a fall detection method based on posture estimation, comprising the following steps:
step 101, receiving image data collected by a preset image collecting device;
in the embodiment of the present invention, the image capturing device may be a camera or the like.
In a specific implementation, the embodiment of the invention can collect indoor image data by installing a camera in an indoor area.
In one example, the camera may have a human body tracking function, so that image data containing human body information is collected as much as possible and the situation where no relevant images are captured when an accident happens to an elderly person is avoided.
102, extracting coordinates of a human body external rectangular frame from image data;
the human body external rectangular frame is a rectangular frame formed by connecting multiple points of the external contour of a human body. The set of coordinates of all the points is the coordinates of the circumscribed rectangular frame of the human body.
In the embodiment of the invention, after the image data is acquired, the position of the human body can be extracted from the image data so as to acquire the coordinates of the circumscribed rectangular frame of the human body in the image data.
103, inputting coordinates of a human body circumscribed rectangle frame into a preset model to obtain a plurality of human body key point coordinates;
in the embodiment of the invention, the acquired image data is input into the preset model, and a plurality of human body key point coordinates can be acquired.
In one example, the human body key point coordinates may include 17 points, such as the nose, left and right eyes, left and right ears, left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees, and left and right ankles.
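As an illustration, these 17 points can be indexed as in the widely used COCO keypoint ordering; the list below is an assumption for illustration, since the patent names the body parts but does not fix an index order:

```python
# Hypothetical 17-keypoint ordering (COCO-style). The patent lists these
# body parts but does not mandate a specific index order.
KEYPOINT_NAMES = [
    "nose",
    "left_eye", "right_eye",
    "left_ear", "right_ear",
    "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow",
    "left_wrist", "right_wrist",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]
```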
And 104, carrying out posture classification on the coordinates of the key points of the human body, and detecting the falling behavior.
In the embodiment of the invention, after the coordinates of the key points of the human body are obtained, the posture can be classified through the neural network model, so that whether the falling behavior exists in the image data is detected.
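Taken together, steps 101 to 104 can be sketched end to end as follows (a sketch with placeholder callables; the function names are hypothetical and not part of the patent):

```python
def detect_fall(frame, detect_box, estimate_keypoints, classify_pose):
    """Run the four steps of the method on one image frame.

    The three callables stand in for the detector, the key point model
    and the posture classifier described in the embodiments.
    """
    box = detect_box(frame)              # step 102: circumscribed rectangle
    keypoints = estimate_keypoints(box)  # step 103: key point coordinates
    label = classify_pose(keypoints)     # step 104: posture classification
    return label == "fall"

# Toy stand-ins, just to exercise the control flow:
result = detect_fall(
    frame=None,
    detect_box=lambda f: (0, 0, 10, 10),
    estimate_keypoints=lambda b: [(1, 2)] * 17,
    classify_pose=lambda k: "fall",
)
```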
The invention receives image data collected by a preset image collection device; extracts the coordinates of the human body circumscribed rectangular frame from the image data; inputs those coordinates into a preset model to obtain a plurality of human body key point coordinates; and performs posture classification on the key point coordinates to detect falling behavior. This solves the technical problems that existing detection schemes are not accurate enough in detecting falling behavior and are prone to missed detections.
Referring to fig. 2, fig. 2 is a flowchart illustrating a fall detection method based on posture estimation according to another embodiment of the present invention. The method specifically comprises the following steps:
step 201, receiving image data collected by a preset image collecting device;
step 202, extracting coordinates of a human body external rectangular frame from image data;
in this embodiment of the present invention, step 202 may include:
training a preset neural network model through a preset human body posture data set to obtain a trained neural network model;
inputting the image data into the trained neural network model, and extracting the coordinates of the external rectangular frame of the human body.
In a specific implementation, the YOLOv3 neural network model can be used as the base network and improved by introducing separable convolutions, yielding the preset neural network model. The network structure of YOLOv3 is shown in fig. 3. The two Convolutional modules plus one Residual module in the box on the left side of fig. 3 are collectively referred to as a Darknet module. Following the design of MobileNetV2, the basic Darknet module of fig. 3 is replaced with the block network shown in fig. 4. The left side of fig. 4 shows the network structure when the stride is 1, and the right side when the stride is 2; ReLU6 is the ReLU function with its maximum output value limited to 6. This greatly reduces the number of parameters and improves the detection speed of the network while maintaining accuracy.
It should be noted that MobileNetV2 introduces depthwise separable convolution, which splits a standard convolution into two steps, a depthwise convolution followed by a pointwise (1×1) convolution, effectively reducing the amount of computation compared with conventional convolution.
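The saving can be checked with a quick parameter count (a sketch; the 3 × 3 kernel and the 128 → 256 channel counts are illustrative values, not taken from the patent):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k kernel per (input, output) channel pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise: one k x k kernel per input channel;
    # pointwise: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

# Example: 3x3 kernels, 128 -> 256 channels (illustrative values only).
standard = conv_params(3, 128, 256)             # 294912 parameters
separable = separable_conv_params(3, 128, 256)  # 33920 parameters
```

Here the separable variant uses less than an eighth of the parameters of the standard convolution.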
The preset neural network model is trained with the preset human body posture data set. Inputting image data into the trained model yields the regression box of the human body in each frame of image data, that is, the detected coordinates of the human body circumscribed rectangular frame.
It should be noted that the body posture data set may include posture data such as body walking data, jumping data, lying data, and falling data.
Step 203, inputting coordinates of a human body circumscribed rectangle frame into a preset model, and acquiring coordinates of a plurality of human body key points;
in this embodiment of the present invention, step 203 may include:
inputting coordinates of a human body circumscribed rectangle frame into a preset model, and acquiring a plurality of initial human body key point coordinates and an initial confidence corresponding to each initial human body key point coordinate;
sequentially judging whether the initial confidence corresponding to each initial human body key point coordinate is less than or equal to a preset threshold value;
if so, deleting that initial human body key point coordinate;
and determining the initial human body key point coordinates whose initial confidence is greater than the preset threshold value as the human body key point coordinates.
In specific implementation, the preset model may be an HRNet model, and when the external rectangular frame coordinates of the human body are obtained, the external rectangular frame coordinates of the human body may be input into the human body posture estimation network HRNet model, so as to obtain 17 initial human body key point coordinates and corresponding initial confidence levels.
The threshold is set to 0.1. When the initial confidence of an initial human body key point coordinate is less than or equal to 0.1, that coordinate is set to 0, which is equivalent to treating the point as not detected. The remaining initial human body key point coordinates, whose initial confidence is greater than 0.1, are determined to be the human body key point coordinates.
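The thresholding described above can be sketched as follows (a minimal illustration; the array shapes and the function name are assumptions, not the patent's interface):

```python
import numpy as np

def filter_keypoints(coords, conf, threshold=0.1):
    """Zero out key points whose confidence is less than or equal to threshold.

    coords: (N, 2) array of (x, y) key point coordinates.
    conf:   (N,) array of per-point confidences.
    """
    coords = np.asarray(coords, dtype=float).copy()
    low = np.asarray(conf) <= threshold
    coords[low] = 0.0  # treat low-confidence points as "not detected"
    return coords

# Example: the second point falls below the 0.1 threshold and is zeroed.
out = filter_keypoints([[10.0, 20.0], [30.0, 40.0]], [0.9, 0.05])
```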
Step 204, carrying out posture classification on the coordinates of the key points of the human body, and detecting falling behaviors;
in this embodiment of the present invention, step 204 may include:
acquiring human body key point coordinates corresponding to image data of a preset number of frames, and generating a coordinate sequence;
inputting the coordinate sequence into a preset Conv-LSTM network for posture classification, and detecting a falling behavior based on a posture classification result; wherein, an attention mechanism is added in the preset Conv-LSTM network.
In a specific implementation, the Conv-LSTM neural network model can be used as the base network, with an attention mechanism added. Using the 17 key point coordinates obtained above, the data of every 10 frames is taken as a new coordinate sequence with dimensions 17 × 2 × 10, representing the 17 key points in 10 frames of images together with the x-axis and y-axis coordinates of each key point. The sequence is input into the Conv-LSTM network for posture classification to detect falling behavior.
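Assembling the per-frame key points into the 17 × 2 × 10 sequence can be sketched as follows (the stacking axis order is an assumption chosen to match the dimensions stated above):

```python
import numpy as np

def build_sequence(frames):
    """Stack 10 frames of (17, 2) key points into one (17, 2, 10) sequence."""
    assert len(frames) == 10
    # Stack along a new trailing axis: (17, 2) per frame -> (17, 2, 10).
    return np.stack([np.asarray(f) for f in frames], axis=-1)

seq = build_sequence([np.zeros((17, 2)) for _ in range(10)])
```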
The Conv-LSTM network according to the embodiment of the present invention may be as shown in fig. 5, where h(t-1) and h(t) represent the output of each ConvLSTM cell, C(t-1) and C(t) represent the hidden cell state, o(t) is the output of the output gate, f(t) is the output of the forget gate, and i(t) is the output of the input gate. Wxi, Wxf and Wxo are the weight matrices for x(t); Whi, Whf and Who are the weight matrices for h(t-1); and Wpeepi, Wpeepf and Wpeepo are the peephole weight matrices for the cell state. tanh is the activation function, σ is the sigmoid function, * denotes convolution, ⊙ denotes the element-wise (Hadamard) product, and BN denotes Batch Normalization. The gates are calculated as follows:
i(t) = σ(Wxi * x(t) + Whi * h(t-1) + Wpeepi ⊙ C(t-1) + bi)
f(t) = σ(Wxf * x(t) + Whf * h(t-1) + Wpeepf ⊙ C(t-1) + bf)
C(t) = f(t) ⊙ C(t-1) + i(t) ⊙ tanh(Wxc * x(t) + Whc * h(t-1) + bc)
o(t) = σ(Wxo * x(t) + Who * h(t-1) + Wpeepo ⊙ C(t) + bo)
h(t) = o(t) ⊙ tanh(C(t))
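The gate updates can be sketched numerically; for brevity, this sketch replaces the convolutions with plain matrix products and keeps the element-wise peephole terms, preserving the structure of the equations (an illustration, not the patent's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x, h_prev, c_prev, W, b):
    # W holds the weight matrices named in the equations; convolutions are
    # replaced by matrix products in this simplified sketch.
    i = sigmoid(W["xi"] @ x + W["hi"] @ h_prev + W["peepi"] * c_prev + b["i"])
    f = sigmoid(W["xf"] @ x + W["hf"] @ h_prev + W["peepf"] * c_prev + b["f"])
    c = f * c_prev + i * np.tanh(W["xc"] @ x + W["hc"] @ h_prev + b["c"])
    o = sigmoid(W["xo"] @ x + W["ho"] @ h_prev + W["peepo"] * c + b["o"])
    h = o * np.tanh(c)
    return h, c

# Tiny example with 2-dimensional states and all-zero weights:
d = 2
W = {k: np.zeros((d, d)) for k in ["xi", "hi", "xf", "hf", "xc", "hc", "xo", "ho"]}
W.update({k: np.zeros(d) for k in ["peepi", "peepf", "peepo"]})
b = {k: np.zeros(d) for k in ["i", "f", "c", "o"]}
h, c = peephole_lstm_step(np.ones(d), np.zeros(d), np.zeros(d), W, b)
```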
it should be noted that the core of Conv-LSTM is the same as LSTM, and the output of the previous layer is used as the input of the next layer. The difference lies in that after the Conv-LSTM is added with convolution operation, not only the time sequence relation can be obtained, but also the spatial features can be extracted like a convolution layer, so that the Conv-LSTM can simultaneously extract the temporal features and the spatial features (space-time features), and the switching between the states is also changed into convolution operation.
In addition, an attention mechanism module is added after the Conv-LSTM network. Let the output of the Conv-LSTM be H = {h(1), h(2), ..., h(n-1), h(n)}, where n is the number of time steps of the Conv-LSTM network. The attention weight coefficient at and the output O of each time step are calculated by the following equations:
ut = tanh(Wt h(t))
at = exp(ut) / Σt exp(ut)
O = Σt at h(t)
After the attention mechanism is added, the accuracy of the network improves: the attention distribution coefficients act as weight parameters that concentrate on the parts of the input sequence considered important, so introducing the attention mechanism effectively improves network accuracy.
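The attention weighting can be sketched as follows (a minimal illustration in which each time step receives one scalar score, an assumption made for simplicity):

```python
import numpy as np

def attention_pool(H, W_t):
    """Weight each time step's output h_t and sum them.

    H:   (n, d) outputs of the Conv-LSTM, one row per time step.
    W_t: (d,) score vector producing one scalar score per time step.
    """
    u = np.tanh(H @ W_t)             # scores u_t, shape (n,)
    a = np.exp(u) / np.exp(u).sum()  # softmax attention weights a_t
    return a @ H                     # O = sum over t of a_t * h_t

H = np.array([[1.0, 0.0], [0.0, 1.0]])
O = attention_pool(H, np.zeros(2))
# With all-zero scores the weights are equal, so O is the mean of the rows.
```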
And step 205, sending out an alarm signal when the falling behavior is detected.
In the embodiment of the invention, if the falling behavior is detected, an alarm signal can be sent to a preset terminal of the user.
The invention receives image data collected by a preset image collection device; extracts the coordinates of the human body circumscribed rectangular frame from the image data; inputs those coordinates into a preset model to obtain a plurality of human body key point coordinates; and performs posture classification on the key point coordinates to detect falling behavior. This solves the technical problems that existing detection schemes are not accurate enough in detecting falling behavior and are prone to missed detections.
Referring to fig. 6, fig. 6 is a block diagram of a fall detection apparatus based on posture estimation according to an embodiment of the present invention.
The embodiment of the invention provides a fall detection device based on posture estimation, comprising:
the receiving module 601 is configured to receive image data acquired by a preset image acquisition device;
an extracting module 602, configured to extract coordinates of a human body circumscribed rectangle frame from the image data;
an obtaining module 603, configured to input coordinates of a human body circumscribed rectangle frame into a preset model, and obtain coordinates of a plurality of human body key points;
and the detection module 604 is used for carrying out posture classification on the coordinates of the key points of the human body and detecting the falling behavior.
In this embodiment of the present invention, the extraction module 602 includes:
the training submodule is used for training the preset neural network model through a preset human body posture data set to obtain a trained neural network model;
and the extraction submodule is used for inputting the image data into the trained neural network model and extracting the coordinates of the human body external rectangular frame.
In this embodiment of the present invention, the obtaining module 603 includes:
the initial human body key point coordinate and initial confidence obtaining submodule is used for inputting the coordinates of the human body circumscribed rectangular frame into a preset model and obtaining a plurality of initial human body key point coordinates and an initial confidence corresponding to each initial human body key point coordinate;
the judging submodule is used for sequentially judging whether the initial confidence corresponding to each initial human body key point coordinate is less than or equal to a preset threshold value;
the deleting submodule is used for deleting, if so, that initial human body key point coordinate;
and the human body key point coordinate determining submodule is used for determining the initial human body key point coordinates whose initial confidence is greater than the preset threshold value as the human body key point coordinates.
In an embodiment of the present invention, the detecting module 604 includes:
the coordinate sequence generation submodule is used for acquiring human body key point coordinates corresponding to image data of a preset number of frames and generating a coordinate sequence;
the detection submodule is used for inputting the coordinate sequence into a preset Conv-LSTM network for posture classification and detecting a falling behavior based on a posture classification result; wherein, an attention mechanism is added in the preset Conv-LSTM network.
In an embodiment of the invention, the body posture data set comprises body walking data, jumping data, lying down data and falling data.
In the embodiment of the present invention, the method further includes:
and the alarm module is used for sending out an alarm signal when detecting the falling behavior.
An embodiment of the present invention further provides an electronic device, where the device includes a processor and a memory:
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is configured to execute the posture estimation based fall detection method of an embodiment of the invention according to instructions in the program code.
Embodiments of the present invention also provide a computer-readable storage medium for storing program codes for executing the fall detection method based on posture estimation of the embodiments of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for fall detection based on pose estimation, comprising:
receiving image data acquired by a preset image acquisition device;
extracting coordinates of a human body circumscribed rectangular frame from the image data;
inputting the coordinates of the human body circumscribed rectangle frame into a preset model to obtain a plurality of human body key point coordinates;
and performing posture classification on the human body key point coordinates, and detecting a falling behavior.
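The four claimed steps can be sketched as a minimal pipeline. The detector, key point estimator, and classification rule below are hypothetical stand-ins for illustration only; the patent does not specify these particular functions or the toy spread-based rule:

```python
# Hypothetical stand-ins for the components named in claim 1.
def detect_person_box(image):
    """Stand-in for the human body circumscribed rectangular frame extractor.
    A real system would run a trained detector; here we return a fixed box."""
    return (40, 20, 200, 380)  # (x1, y1, x2, y2)

def estimate_keypoints(image, box):
    """Stand-in for the preset key point model; returns (x, y, confidence)."""
    return [(120, 60, 0.90), (110, 150, 0.80), (115, 300, 0.85)]

def classify_posture(keypoints):
    """Toy rule (not the patent's classifier): a body whose key points spread
    more horizontally than vertically is treated as fallen."""
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return "fallen" if (max(xs) - min(xs)) > (max(ys) - min(ys)) else "upright"

def detect_fall(image):
    """Claim 1's pipeline: box -> key points -> posture -> fall decision."""
    box = detect_person_box(image)
    keypoints = estimate_keypoints(image, box)
    return classify_posture(keypoints) == "fallen"
```

With the dummy key points above (vertical spread 240 px, horizontal spread 10 px), `detect_fall` classifies the posture as upright and returns `False`.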
2. The method of claim 1, wherein the step of extracting coordinates of a bounding rectangle of the human body from the image data comprises:
training a preset neural network model through a preset human body posture data set to obtain a trained neural network model;
and inputting the image data into the trained neural network model, and extracting coordinates of a human body circumscribed rectangular frame.
3. The method according to claim 2, wherein the step of inputting the coordinates of the circumscribed rectangle frame of the human body into a preset model to obtain the coordinates of a plurality of key points of the human body comprises:
inputting the coordinates of the human body circumscribed rectangle frame into a preset model, and acquiring a plurality of initial human body key point coordinates and an initial confidence corresponding to each initial human body key point coordinate;
sequentially judging whether the initial confidence corresponding to each initial human body key point coordinate is less than or equal to a preset threshold value;
if so, deleting the initial human body key point coordinates whose initial confidence is less than or equal to the preset threshold value;
and determining the initial human body key point coordinates whose initial confidence is greater than the preset threshold value as human body key point coordinates.
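The confidence-thresholding step of claim 3 amounts to a simple filter over (x, y, confidence) triples. This is a minimal sketch; the tuple layout and the example threshold of 0.5 are assumptions, not values fixed by the patent:

```python
def filter_keypoints(keypoints, threshold=0.5):
    """Keep only key points whose initial confidence exceeds the preset
    threshold; points at or below the threshold are deleted (claim 3).

    keypoints: iterable of (x, y, confidence) triples.
    Returns the retained (x, y) coordinates.
    """
    return [(x, y) for (x, y, conf) in keypoints if conf > threshold]

raw = [(120, 80, 0.91), (130, 95, 0.42), (118, 140, 0.77)]
kept = filter_keypoints(raw, threshold=0.5)  # the 0.42 point is dropped
```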
4. The method of claim 3, wherein the step of gesture classification of the human body key point coordinates and fall detection comprises:
acquiring human body key point coordinates corresponding to the image data of a preset number of frames, and generating a coordinate sequence;
inputting the coordinate sequence into a preset Conv-LSTM network for posture classification, and detecting a falling behavior based on a posture classification result; wherein, an attention mechanism is added in the preset Conv-LSTM network.
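Claim 4 buffers the key point coordinates of a preset number of frames into a sequence before classification. The sliding-window buffer and the softmax-style pooling below are illustrative stand-ins: the sequence length of 16 is assumed, and `attention_pool` is a toy scalar analogue of an attention mechanism, not the patent's Conv-LSTM network:

```python
import math
from collections import deque

SEQ_LEN = 16  # preset number of frames; 16 is an assumed example value

class CoordinateSequence:
    """Sliding window of per-frame key point coordinates (claim 4's sequence)."""
    def __init__(self, seq_len=SEQ_LEN):
        self.frames = deque(maxlen=seq_len)

    def push(self, keypoints):
        """Append one frame's key point coordinates; oldest frame is evicted
        automatically once the window is full."""
        self.frames.append(keypoints)

    def ready(self):
        """True once a full preset-length sequence is available."""
        return len(self.frames) == self.frames.maxlen

    def as_sequence(self):
        """The coordinate sequence that would be fed to the classifier."""
        return list(self.frames)

def attention_pool(scores):
    """Softmax-weighted average of per-frame scores: a scalar toy analogue of
    attention-weighted pooling, standing in for the network's attention."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return sum((e / total) * s for e, s in zip(exps, scores))
```

In use, frames are pushed as they arrive; once `ready()` is true, `as_sequence()` yields the fixed-length input and per-frame fall scores can be pooled with `attention_pool` before the final decision.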
5. The method of claim 2, wherein the body posture data set comprises body walking data, jumping data, lying down data, and falling data.
6. The method of claim 4, further comprising:
when the falling behavior is detected, an alarm signal is sent out.
7. A pose estimation based fall detection apparatus, comprising:
the receiving module is used for receiving image data collected by a preset image collecting device;
the extraction module is used for extracting coordinates of a human body circumscribed rectangular frame from the image data;
the acquisition module is used for inputting the coordinates of the human body circumscribed rectangular frame into a preset model and acquiring a plurality of human body key point coordinates;
and the detection module is used for carrying out posture classification on the human body key point coordinates and detecting the falling behavior.
8. The apparatus of claim 7, wherein the extraction module comprises:
the training submodule is used for training a preset neural network model through a preset human body posture data set to obtain a trained neural network model;
and the extraction submodule is used for inputting the image data into the trained neural network model and extracting the coordinates of the human body circumscribed rectangular frame.
9. An electronic device, comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the pose estimation based fall detection method of any one of claims 1-6 according to instructions in the program code.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store program code for performing the pose estimation based fall detection method of any one of claims 1-6.
CN202110099051.4A 2021-01-25 2021-01-25 Fall detection method and device based on attitude estimation Pending CN112686211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110099051.4A CN112686211A (en) 2021-01-25 2021-01-25 Fall detection method and device based on attitude estimation

Publications (1)

Publication Number Publication Date
CN112686211A true CN112686211A (en) 2021-04-20

Family

ID=75459126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110099051.4A Pending CN112686211A (en) 2021-01-25 2021-01-25 Fall detection method and device based on attitude estimation

Country Status (1)

Country Link
CN (1) CN112686211A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784280A (en) * 2019-01-18 2019-05-21 江南大学 Human bodys' response method based on Bi-LSTM-Attention model
CN109919132A (en) * 2019-03-22 2019-06-21 广东省智能制造研究所 A kind of pedestrian's tumble recognition methods based on skeleton detection
CN112215185A (en) * 2020-10-21 2021-01-12 成都信息工程大学 System and method for detecting falling behavior from monitoring video

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591683A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Attitude estimation method, attitude estimation device, electronic equipment and storage medium
CN114627427A (en) * 2022-05-18 2022-06-14 齐鲁工业大学 Fall detection method, system, storage medium and equipment based on spatio-temporal information
WO2024036825A1 (en) * 2022-08-16 2024-02-22 深圳先进技术研究院 Attitude processing method, apparatus and system, and storage medium
CN117423138A (en) * 2023-12-19 2024-01-19 四川泓宝润业工程技术有限公司 Human body falling detection method, device and system based on multi-branch structure
CN117423138B (en) * 2023-12-19 2024-03-15 四川泓宝润业工程技术有限公司 Human body falling detection method, device and system based on multi-branch structure

Similar Documents

Publication Publication Date Title
CN111666857B (en) Human behavior recognition method, device and storage medium based on environment semantic understanding
Feng et al. Spatio-temporal fall event detection in complex scenes using attention guided LSTM
Lu et al. Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data
CN112686211A (en) Fall detection method and device based on attitude estimation
US11074436B1 (en) Method and apparatus for face recognition
CN110458061B (en) Method for identifying old people falling down and accompanying robot
Charfi et al. Optimized spatio-temporal descriptors for real-time fall detection: comparison of support vector machine and Adaboost-based classification
EP2924543B1 (en) Action based activity determination system and method
JP7185805B2 (en) Fall risk assessment system
Fan et al. Fall detection via human posture representation and support vector machine
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
Mastorakis et al. Fall detection without people: A simulation approach tackling video data scarcity
CN112560723A (en) Fall detection method and system based on form recognition and speed estimation
Iazzi et al. Fall detection based on posture analysis and support vector machine
Yao et al. A fall detection method based on a joint motion map using double convolutional neural networks
Zambanini et al. Detecting falls at homes using a network of low-resolution cameras
Lu et al. Visual guided deep learning scheme for fall detection
An et al. VFP290k: A large-scale benchmark dataset for vision-based fallen person detection
Liu et al. Automatic fall risk detection based on imbalanced data
Hung et al. Fall detection with two cameras based on occupied area
CN112380946B (en) Fall detection method and device based on end-side AI chip
Dai Vision-based 3d human motion analysis for fall detection and bed-exiting
Suarez et al. AFAR: a real-time vision-based activity monitoring and fall detection framework using 1D convolutional neural networks
Sharma et al. Automatic human activity recognition in video using background modeling and spatio-temporal template matching based technique
Mulyono et al. Design and Implementation of Real-time Object Detection for Blind using Convolutional Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination