CN112998697B - Tumble injury degree prediction method and system based on skeleton data and terminal - Google Patents


Info

Publication number: CN112998697B (application CN202110198589.0A)
Authority: CN (China)
Prior art keywords: injury, falling, LSTM network, data, evaluation
Legal status: Expired - Fee Related
Other versions: CN112998697A (Chinese, zh)
Inventor: 刘晞
Current and original assignee: University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China; application granted; publication of CN112998697A and of CN112998697B

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116: Determining posture transitions
    • A61B5/1117: Fall detection
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device


Abstract

The application discloses a fall injury degree prediction method, system and terminal based on skeleton data. Skeleton data are collected during the fall, and each part of the body skeleton is represented as a vector; a fall injury evaluation model is established in which a first ST-LSTM network detects the part injured in the fall and a second ST-LSTM network evaluates the injury degree; the processed data are input into the fall injury evaluation model to obtain the key injured parts and their injury degree evaluation results. The injury evaluation model effectively extracts the spatio-temporal features of joints and body parts and applies an attention mechanism in each layer, so that the influence of different fall modes on the injured body parts, and of different parts on the fall injury degree, is better distinguished, effectively improving the evaluation accuracy. The method is applicable to evaluating the injury degree of every high-risk body part during a fall, making simultaneous evaluation of the injury degree of multiple parts possible.

Description

Tumble injury degree prediction method and system based on skeleton data and terminal
Technical Field
The application relates to the technical field of fall injury evaluation, and in particular to a fall injury degree prediction method, system and terminal based on skeleton data.
Background
A human fall may cause physical injury and, in severe cases, even death. Relevant studies show that the three body parts most often injured in fall events are the head, the hip and the knee. The severity of a fall injury is related to factors such as fall height, velocity, kinetic energy and the acceleration of the impact location. Evaluating the injury degree caused by a fall allows effective measures to be taken during emergency treatment and provides a reference for subsequent nursing.
At present, injury severity scales are mainly used to assess fall injuries clinically, such as head injury criteria, head injury models and the Abbreviated Injury Scale, but these mainly target injuries from traffic accidents, sports or pedestrian accidents. Some studies use simulation experiments: a finite element model built from computed tomography data of the human body is used for stress analysis of the vulnerable part and evaluation of the injury degree likely to be caused in the corresponding fall scenario; for example, Majumder used a three-dimensional pelvis-femur-soft-tissue finite element model to evaluate the influence of backward falls on pelvic injury.
However, traditional scale assessment is based on subjective judgment: the result is strongly influenced by the assessor, raising problems of subjective variability and limited generality. Finite element simulation, for its part, targets a specific body part and evaluates the injury degree of that single part only.
Disclosure of Invention
To solve these technical problems, the following technical solutions are provided:
In a first aspect, an embodiment of the present application provides a fall injury degree prediction method based on skeleton data. The method includes: collecting skeleton data during the fall, the collection time being set according to the fall duration; representing each part of the body skeleton as a vector; establishing a fall injury evaluation model comprising a first ST-LSTM network, which detects the part injured in the fall, and a second ST-LSTM network, which evaluates the injury degree; and inputting the processed data into the fall injury evaluation model to obtain the key injured parts and their injury degree evaluation results.
With this implementation, the injury evaluation model effectively extracts the spatio-temporal features of joints and body parts and applies an attention mechanism in each layer, so that the influence of different fall modes on the injured body parts, and of different parts on the fall injury degree, is better distinguished, effectively improving the evaluation accuracy. The method is applicable to evaluating the injury degree of every high-risk body part during a fall, making simultaneous evaluation of the injury degree of multiple parts possible.
With reference to the first aspect, in a first possible implementation manner, collecting skeleton data during the fall includes: collecting human skeleton node sequence data at a preset frequency within the collection time; and frame-sampling the sequence by dividing it into a preset number of equal-length segments and randomly selecting one frame from each segment, obtaining a training sample.
With reference to the first aspect, in a second possible implementation manner, detecting the injured part with the first ST-LSTM network includes: aggregating the information of all nodes; normalising the information amount to obtain attention weights for the hidden states of different joints at different times; forming a weighted sum over each column with the attention weights to obtain weighted feature representations of the joints; inputting the weighted output state into a fully connected layer to obtain a feature vector; and computing the network prediction, i.e. the probability distribution over injured parts, from the classification values output by the fully connected layer.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner, evaluating the injury degree with the second ST-LSTM network includes: taking the hidden representation and output label probability distribution of the first ST-LSTM network as the input of the second ST-LSTM network; applying attention in the second ST-LSTM network and evaluating the amount of input information at each spatio-temporal step from the context; obtaining the attention weight probability vector and its weighted output representation; mapping the weighted output of the second ST-LSTM network onto the class label vector through a fully connected network to obtain a feature vector, and predicting the fall injury degree with a softmax classifier; and outputting the result with the maximum probability as the injury grade evaluation result.
With reference to the first aspect, in a fourth possible implementation manner, the method further includes training and optimising the fall injury evaluation model.
With reference to the fourth possible implementation manner of the first aspect, in a fifth possible implementation manner, training and optimising the fall injury evaluation model includes: jointly training the two classifier layers on the training samples, with a negative log-likelihood function as the loss expressing the error between the model predictions and the sample ground truth; and completing the optimisation by minimising the loss function through back-propagation.
In a second aspect, an embodiment of the present application provides a fall injury degree prediction system based on skeleton data. The system includes: a data acquisition module for collecting skeleton data during the fall, the acquisition time being set according to the fall duration; a data processing module for representing each part of the body skeleton as a vector; a model establishing module for establishing a fall injury evaluation model comprising a first ST-LSTM network, which detects the part injured in the fall, and a second ST-LSTM network, which evaluates the injury degree; and an evaluation module for inputting the processed data into the fall injury evaluation model and obtaining the key injured parts and their injury degree evaluation results.
With reference to the second aspect, in a first possible implementation manner, the data acquisition module includes: an acquisition unit for collecting human skeleton node sequence data at the preset frequency within the acquisition time; and a first acquisition unit for frame-sampling the sequence, dividing it into a preset number of equal-length segments and randomly selecting one frame from each segment to obtain a training sample.
With reference to the second aspect, in a second possible implementation manner, the first ST-LSTM network includes: an aggregation unit for aggregating the information of all nodes; a second acquisition unit for normalising the information amount to obtain attention weights for the hidden states of different joints at different times; a third acquisition unit for forming a weighted sum over each column with the attention weights to obtain weighted feature representations of the joints; a fourth acquisition unit for inputting the weighted output state into a fully connected layer to obtain a feature vector; and a calculating unit for computing the network prediction, i.e. the probability distribution over injured parts, from the classification values output by the fully connected layer.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner, the second ST-LSTM network includes: a fifth obtaining unit for taking the hidden representation and output label probability distribution of the first ST-LSTM network as the input of the second ST-LSTM network; a weight evaluation unit for applying attention in the second ST-LSTM network and evaluating the amount of input information at each spatio-temporal step from the context; a sixth obtaining unit for obtaining the attention weight probability vector and its weighted output representation; a prediction unit for mapping the weighted output of the second ST-LSTM network onto the class label vector through a fully connected network to obtain a feature vector and predicting the fall injury degree with a softmax classifier; and an output unit for outputting the result with the maximum probability as the injury grade evaluation result.
In a third aspect, an embodiment of the present application provides a terminal including a processor and a memory storing processor-executable instructions. The processor executes the fall injury degree prediction method of the first aspect or of any of its possible implementation manners, and evaluates the injury grade of the user's fall.
Drawings
Fig. 1 is a schematic flowchart of a fall injury degree prediction method based on skeleton data according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a three-dimensional skeleton and a schematic diagram of a three-axis coordinate system according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a damage assessment model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an ST-LSTM network provided by an embodiment of the present application;
fig. 5 is a schematic diagram of a fall injury degree prediction system based on skeleton data according to an embodiment of the present application;
fig. 6 is a schematic diagram of a terminal according to an embodiment of the present application.
Detailed Description
The present embodiment is described below with reference to the accompanying drawings and the detailed description.
The embodiment of the application provides a fall injury degree prediction method based on human skeleton data, which evaluates the high-risk parts and their injury severity when a fall accident occurs. It is applicable to the various falls occurring in daily activity scenarios and provides a reference for emergency treatment and follow-up nursing.
Referring to fig. 1, the method includes:
and S101, collecting skeleton data in the falling process, wherein the collecting time is set according to the falling duration.
Human skeleton node data are gathered based on Kinect v2 sensor to this application, and sampling frequency is 30 Hz. Since the duration of the fall process does not exceed 2 seconds, the length of the data sequence is set to 2 seconds, and the bone node data of 60 frames of images is obtained. The method comprises the steps of carrying out frame sampling on collected human body bone node sequence data, dividing an input sequence into 20 equal-length sections, randomly selecting one frame from each equal-length section to obtain 20 frames of time sequence data to serve as training samples.
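The sparse frame sampling just described (20 equal-length segments, one random frame each) can be sketched as follows; function and variable names are illustrative, not from the patent:

```python
import random

def sample_frames(sequence, num_segments=20):
    """Split a frame sequence into equal-length segments and draw one
    random frame from each, giving a fixed-length training sample."""
    seg_len = len(sequence) // num_segments
    sampled = []
    for i in range(num_segments):
        start = i * seg_len
        # the last segment absorbs any remainder frames
        end = start + seg_len if i < num_segments - 1 else len(sequence)
        sampled.append(sequence[random.randrange(start, end)])
    return sampled

frames = list(range(60))  # 2 s of skeleton frames at 30 Hz
sample = sample_frames(frames, num_segments=20)
```

Randomising within each segment gives different 20-frame views of the same fall across training epochs.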
The label of a training sample, i.e. the real injury degree caused by the fall, is graded by the inter-frame velocity of the joints at the high-risk parts, computed as

v = ΔS / Δt  (1)

where Δt is the inter-frame interval and ΔS is the Euclidean distance between the positions of a joint in two consecutive frames:

ΔS = √((x_n^p − x_{n−1}^p)² + (y_n^p − y_{n−1}^p)² + (z_n^p − z_{n−1}^p)²)  (2)

where (x_n^p, y_n^p, z_n^p) and (x_{n−1}^p, y_{n−1}^p, z_{n−1}^p) are the three-dimensional coordinates of joint p in frames n and n−1, respectively.
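The velocity label above is a simple finite difference; a minimal sketch, with `dt` defaulting to the 30 Hz inter-frame interval and names of our choosing:

```python
import math

def interframe_velocity(p_prev, p_curr, dt=1 / 30):
    """v = ΔS / Δt, where ΔS is the 3-D Euclidean distance between the
    positions of one joint in two consecutive frames."""
    return math.dist(p_prev, p_curr) / dt

# a joint that moves 0.1 m between frames at 30 Hz travels at 3 m/s
v = interframe_velocity((0.0, 0.0, 0.0), (0.1, 0.0, 0.0))
```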
The method mainly considers injury degree evaluation for the three high-risk parts, the head, the hip and the knee, each divided into four grades, as shown in Table 1.
TABLE 1: injury grades for head, hip and knee
[Table 1 is an image in the original: four injury grades for each of the head, hip and knee.]
S102: each part of the body skeleton is represented as a vector.
In this embodiment, p denotes joint-point coordinates and e denotes an edge vector connecting two adjacent joint points. As shown in Fig. 2, p_{rsh} and p_{lsh} denote the right and left shoulder joint coordinates, p_{lhjc} and p_{lkjc} the left hip and left knee joint coordinates, and p_{rhjc} and p_{rkjc} the right hip and right knee joint coordinates. The vectors of the body parts are represented as follows:
the body torso vector is represented as:
eboby=pclav-pcasi (3)
wherein p isclav,pcasiRespectively representing the coordinates of the middle point of the left and right shoulder joints and the coordinates of the middle point of the left and right hip joints, calculated as follows:
Figure GDA0003548843840000064
Figure GDA0003548843840000065
the shoulder vector is:
esh=prsh-plsh (6)
left leg bone vector representation:
elleg=plhjc-plkjc (7)
Right leg bone vector representation:
erleg=prhjc-prkjc (8)
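A minimal sketch of the body part vectors of Eqs. (3)-(8), assuming the joints are supplied as 3-D coordinates keyed by the abbreviations used in the text (the dict keys are our own shorthand):

```python
import numpy as np

def body_vectors(joints):
    """Compute the part vectors of Eqs. (3)-(8) from 3-D joint coordinates.
    Keys rsh/lsh, rhjc/lhjc, rkjc/lkjc stand for the right/left shoulder,
    hip-joint and knee-joint points named in the text."""
    p = {k: np.asarray(v, dtype=float) for k, v in joints.items()}
    p_clav = (p["rsh"] + p["lsh"]) / 2    # shoulder midpoint, Eq. (4)
    p_casi = (p["rhjc"] + p["lhjc"]) / 2  # hip midpoint, Eq. (5)
    return {
        "body": p_clav - p_casi,          # torso vector, Eq. (3)
        "sh": p["rsh"] - p["lsh"],        # shoulder vector, Eq. (6)
        "lleg": p["lhjc"] - p["lkjc"],    # left leg vector, Eq. (7)
        "rleg": p["rhjc"] - p["rkjc"],    # right leg vector, Eq. (8)
    }
```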
s103, establishing a falling injury evaluation model, wherein the injury evaluation model comprises a first ST-LSTM network and a second ST-LSTM network.
As shown in Fig. 3, this embodiment employs a two-layer ST-LSTM (spatio-temporal long short-term memory) structure that predicts the injured part from human skeleton data and evaluates the injury degree of the high-risk parts. The whole network consists of two ST-LSTM layers, each with an attention mechanism. The first ST-LSTM layer encodes the skeleton sequence, with a self-attention mechanism added to screen the information of different joints, and detects the part injured in the fall; its result is input into the second ST-LSTM layer. The second layer applies attention to the input at all spatio-temporal steps to generate a weighted attention representation of the fall action sequence and evaluates the injury degree.
In the ST-LSTM model, the skeletal joints and body parts within a frame are arranged in a chain and passed along the spatial direction, while corresponding joints in different frames are passed in sequence along the temporal direction, as shown in Fig. 4. This embodiment uses M joint coordinates and N body part vectors, so the skeleton data for a frame form an (M+N)×3 matrix; adding the time dimension T and folding the node coordinates gives a T×3(M+N) matrix, the inter-node vectors being represented consistently with the nodes.
Each ST-LSTM unit receives a new input x_{j,t} (joint or body part vector j at frame t), the hidden representation of the same joint or body vector at the previous time step (h_{j,t−1}) and the hidden representation of the previous joint in the same time step (h_{j−1,t}). The unit contains an input gate i_{j,t}, an output gate o_{j,t} and two forget gates, one per context channel: f^T_{j,t} for the temporal domain and f^S_{j,t} for the spatial domain. The ST-LSTM equations are

(i_{j,t}, f^S_{j,t}, f^T_{j,t}, o_{j,t}, u_{j,t}) = (σ, σ, σ, σ, tanh)(W [x_{j,t}; h_{j−1,t}; h_{j,t−1}])  (9)

c_{j,t} = i_{j,t} ⊙ u_{j,t} + f^S_{j,t} ⊙ c_{j−1,t} + f^T_{j,t} ⊙ c_{j,t−1}  (10)

h_{j,t} = o_{j,t} ⊙ tanh(c_{j,t})  (11)

where c_{j,t} and h_{j,t} are the cell state and hidden representation at spatio-temporal step (j,t), u_{j,t} is the modulated input, σ is the sigmoid activation function, W is the affine-transformation parameter, ⊙ denotes element-wise multiplication and tanh the hyperbolic tangent.
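A minimal NumPy sketch of one ST-LSTM update (Eqs. 9-11). Packing all five gate pre-activations into a single weight matrix `W`, and the gate ordering within it, are our assumptions; the patent only gives the affine form:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def st_lstm_step(x, h_space, c_space, h_time, c_time, W, b):
    """One ST-LSTM cell update. The cell sees the current input x, the
    hidden/cell state of the previous joint in the same frame (spatial
    channel) and of the same joint in the previous frame (temporal
    channel); W maps the concatenated context to the five gates."""
    d = h_space.shape[0]
    z = W @ np.concatenate([x, h_space, h_time]) + b
    i = sigmoid(z[0:d])            # input gate
    f_s = sigmoid(z[d:2 * d])      # spatial forget gate
    f_t = sigmoid(z[2 * d:3 * d])  # temporal forget gate
    o = sigmoid(z[3 * d:4 * d])    # output gate
    u = np.tanh(z[4 * d:5 * d])    # modulated input
    c = i * u + f_s * c_space + f_t * c_time  # two-channel cell update
    h = o * np.tanh(c)                        # hidden representation
    return h, c
```

The two forget gates let the cell weigh the spatial neighbour and the temporal predecessor independently, which is the defining difference from a plain LSTM cell.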
The first ST-LSTM layer pre-judges the injured part. Its input at spatio-temporal step (j,t) is the three-dimensional coordinate of joint j at frame t. Since different joints carry different amounts of information, an attention network adaptively attends to the key nodes; the hidden state h_{j,t} contains both spatial-structure information and temporal dynamics, which helps guide the selection of key nodes. First the information of all nodes is aggregated:

q_{j,t} = W_{e1} Σ_k h_{k,t}  (12)

where W_{e1} is a parameter matrix. The information amount is normalised with a sigmoid to obtain the attention weights of the hidden states of different joints at different times:

α_{j,t} = Sigmoid(U_S tanh(W_h h_{j,t} + W_q q_{j,t} + b_u) + b_s)  (13)

where W_h, W_q and U_S are learnable parameter matrices and b_s, b_u are biases. The attention weights form a weighted sum over each column, giving the weighted feature representation of the joints:

h̃_t = Σ_j α_{j,t} h_{j,t}  (14)
output state after weighting
Figure GDA0003548843840000092
Inputting the data into a full connection layer to obtain a characteristic vector b ═ b1,b2,b3}. Calculating the predicted value of the network, namely the probability distribution of the injured part by using the classification value output by the full connection layer by using softmax:
Figure GDA0003548843840000093
final result output by softmax layer
Figure GDA0003548843840000094
Representing the probability of input data being calculated and predicted as the m-th part, and using
Figure GDA0003548843840000095
Represents the probability distribution of injury at three different sites predicted at spatio-temporal step (j, t).
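The first layer's joint-level attention can be sketched for a single frame as below. Taking a plain sum over joints as the aggregation step is our assumption (the text only says node information is aggregated through W_{e1}), and all shapes are illustrative:

```python
import numpy as np

def joint_attention(H, W_e1, W_h, W_q, u_s, b_u, b_s):
    """H is a (num_joints, d) matrix of hidden states at one frame.
    q aggregates all node information; alpha are sigmoid-normalised
    per-joint attention weights; returns the weighted feature sum."""
    q = W_e1 @ H.sum(axis=0)                      # aggregate node info
    e = np.tanh(H @ W_h.T + W_q @ q + b_u) @ u_s + b_s
    alpha = 1.0 / (1.0 + np.exp(-e))              # per-joint weights
    h_w = (alpha[:, None] * H).sum(axis=0)        # weighted representation
    return h_w, alpha
```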
The second ST-LSTM layer combines the injured-part information obtained by the first layer and applies an attention mechanism, learning the correlation between the fall part and the fall injury degree and evaluating that degree.
The hidden representation and output label probability distribution obtained by the first ST-LSTM layer serve as the input of the second layer:

x^{(2)}_{j,t} = [h_{j,t}; ŷ_{j,t}]  (16)

Attention is applied in the second ST-LSTM layer, and the amount of input information at each spatio-temporal step is evaluated from the context. The hidden state H_{j,t} output by the second layer is scored as

p_{j,t} = tanh(H_{j,t})  (17)

and the attention weight probability vector β_{j,t} is computed with the softmax function:

β_{j,t} = exp(p_{j,t}) / Σ_{j,t} exp(p_{j,t})  (18)

The resulting weighted output is

O = Σ_{j,t} β_{j,t} ⊙ H_{j,t}  (19)

The weighted output O of the second ST-LSTM layer is mapped through a fully connected network onto the class label vector, giving the feature vector y = {y_1, y_2, y_3, y_4}, and the fall injury degree is predicted with a softmax classifier:

ŷ_c = exp(y_c) / Σ_{k=1}^{4} exp(y_k)  (20)
and finally outputting the result with the maximum probability as the injury grade evaluation result.
The embodiment of the application also trains and optimises the fall injury evaluation model. Specifically, the two classifier layers are trained jointly on the training samples, with a negative log-likelihood function as the loss expressing the error between the model predictions and the sample ground truth:
the objective function of the first layer ST-LSTM is expressed as:
Figure GDA0003548843840000104
where K is the number of samples, M is the number of injury site categories (M is 3 in this case), B represents the output vector,
Bimindicates whether the real wound site of the ith sample is m,
Figure GDA0003548843840000105
the probability that the damage site of the ith sample is predicted to be m by the representation model.
The loss function of the second ST-LSTM layer is

L(Y) = − Σ_{i=1}^{K} Σ_{j=1}^{C} Y_{ij} log Ŷ_{ij}  (22)

where K is the number of samples, C is the number of injury grades (here C = 4), Y is the output vector, Y_{ij} indicates whether the true injury grade of the i-th sample is j, and Ŷ_{ij} is the probability that the model predicts the injury grade of the i-th sample as j.
The overall model loss function is:
Q=L(B)+L(Y)
and (5) completing model training by back propagation to minimize a loss function.
S104: input the processed data into the fall injury evaluation model and obtain the key injured parts and their injury degree evaluation results.
Skeleton data of the moving human body are collected in real time; the maximum inter-frame joint velocity and the body part vectors during the fall are computed and input into the fall injury evaluation model trained in S103, yielding the key injured parts and their injury degree evaluation results.
This embodiment provides a fall injury degree prediction method based on human skeleton data. The network consists of two ST-LSTM layers and effectively extracts the spatio-temporal features of each joint and body part. The first layer expresses, through an attention mechanism, the influence of different fall modes on the part where the injury occurs and pre-judges the key injured part; the second layer applies an attention mechanism combining the part distribution information obtained by the first layer, learns the correlation between the fall part and the injury degree, and evaluates the injury degree of the high-risk parts.
Corresponding to the fall injury degree prediction method provided in the foregoing embodiment, the present application also provides an embodiment of a fall injury degree prediction system based on human skeleton data. Referring to fig. 5, the system 20 includes a data acquisition module 201, a data processing module 202, a model building module 203 and an evaluation module 204.
The data acquisition module 201 collects skeleton data during the fall, the acquisition time being set according to the fall duration. The data processing module 202 represents each part of the body skeleton as a vector. The model building module 203 establishes the fall injury evaluation model, which comprises a first ST-LSTM network for detecting the part injured in the fall and a second ST-LSTM network for evaluating the injury degree. The evaluation module 204 inputs the processed data into the trained fall injury evaluation model and obtains the key injured parts and their injury degree evaluation results.
Further, the data acquisition module 201 includes an acquisition unit and a first acquisition unit. The acquisition unit collects human skeleton node sequence data at the preset frequency within the acquisition time. The first acquisition unit frame-samples the sequence, divides it into a preset number of equal-length segments and randomly selects one frame from each segment to obtain a training sample.
The fall injury assessment model built by the model building module 203 includes a first ST-LSTM network and a second ST-LSTM network.
The first ST-LSTM network comprises: an aggregation unit for aggregating the information of all nodes; a second acquisition unit for normalising the information amount to obtain attention weights for the hidden states of different joints at different times; a third acquisition unit for forming a weighted sum over each column with the attention weights to obtain weighted feature representations of the joints; a fourth acquisition unit for inputting the weighted output state into a fully connected layer to obtain a feature vector; and a calculating unit for computing the network prediction, i.e. the probability distribution over injured parts, from the classification values output by the fully connected layer.
The second ST-LSTM network comprises: a fifth obtaining unit for taking the hidden representation and output label probability distribution of the first ST-LSTM network as the input of the second ST-LSTM network; a weight evaluation unit for applying attention in the second ST-LSTM network and evaluating the amount of input information at each spatio-temporal step from the context; a sixth obtaining unit for obtaining the attention weight probability vector and its weighted output representation; a prediction unit for mapping the weighted output of the second ST-LSTM network onto the class label vector through a fully connected network to obtain a feature vector and predicting the fall injury degree with a softmax classifier; and an output unit for outputting the result with the maximum probability as the injury grade evaluation result.
An embodiment of the present application further provides a terminal. Referring to fig. 6, the terminal 30 includes a processor 301, a memory 302, and a communication interface 303.
In fig. 6, the processor 301, the memory 302, and the communication interface 303 may be connected to each other by a bus; buses may be classified into address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean that there is only one bus or one type of bus.
The processor 301 generally controls the overall functions of the terminal 30, for example starting the terminal 30 and, after startup, acquiring skeleton data during the fall, with the acquisition time set according to the fall duration; representing each part of the body skeleton as a vector; establishing a fall injury evaluation model comprising a first ST-LSTM network for detecting the part injured when a person falls and a second ST-LSTM network for evaluating the injury degree; and inputting the processed data into the fall injury evaluation model to obtain the key injured part and its injury-degree evaluation result.
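The two-stage control flow the processor executes can be summarized in a short sketch; `first_net` and `second_net` are hypothetical callables standing in for the trained ST-LSTM networks, not APIs defined by the patent:

```python
import numpy as np

def assess_fall(frames, first_net, second_net):
    """Stage 1 yields the injured-part probability distribution and the
    hidden representation; stage 2 consumes both and grades the injury.
    Returns the argmax injured-part index and injury-grade index."""
    part_probs, hidden = first_net(frames)        # stage 1: where the injury is
    level_probs = second_net(hidden, part_probs)  # stage 2: how severe it is
    return int(np.argmax(part_probs)), int(np.argmax(level_probs))
```

Passing the first network's label distribution into the second matches the cascaded design described above, where severity is conditioned on the detected injured part.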
The processor 301 may be a general-purpose processor, such as a Central Processing Unit (CPU), a Network Processor (NP), or a combination of a CPU and an NP. The processor may also be a microcontroller unit (MCU). The processor may also include a hardware chip; the hardware chip may be an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof, and the PLD may be a Complex Programmable Logic Device (CPLD), a Field Programmable Gate Array (FPGA), or the like.
The memory 302 is configured to store computer-executable instructions to support the operation of the terminal 30. The memory 302 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
After the terminal 30 is started, the processor 301 and the memory 302 are powered on, and the processor 301 reads and executes the computer executable instructions stored in the memory 302 to complete all or part of the steps in the embodiment of the fall injury degree prediction method based on skeleton data.
The communication interface 303 is used by the terminal 30 to transmit data, for example to communicate with the Kinect v2 sensor. The communication interface 303 includes a wired communication interface and may also include a wireless communication interface; the wired communication interface may include a USB interface, a Micro USB interface, and an Ethernet interface, and the wireless communication interface may be a WLAN interface, a cellular network communication interface, a combination thereof, or the like.
In an exemplary embodiment, the terminal 30 provided by the embodiments of the present application further includes a power supply component that provides power to the various components of the terminal 30. The power components may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 30.
The terminal 30 further includes a communication component configured to facilitate wired or wireless communication between the terminal 30 and other devices. The terminal 30 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. The communication component receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. The communication component may also include a Near Field Communication (NFC) module to facilitate short-range communication; the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 30 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Of course, the description is not limited to the above examples; technical features not described in the present application may be implemented by or using the prior art and are not repeated here. The above embodiments and drawings are intended only to illustrate the technical solutions of the present application, not to limit it, and the application has been described in detail with reference to preferred embodiments only. Those skilled in the art should understand that changes, modifications, additions, or substitutions made without departing from the spirit of the present application also fall within its scope of protection, which is defined by the claims.

Claims (2)

1. A system for predicting a fall injury level based on skeletal data, the system comprising:
the data acquisition module is used for acquiring skeleton data in the falling process, wherein the acquisition time is set according to the falling duration;
the data processing module is used for carrying out vector representation on each part of the skeleton of the body;
the model establishing module is used for establishing a falling injury evaluation model, wherein the injury evaluation model comprises a first ST-LSTM network and a second ST-LSTM network, the first ST-LSTM network is used for detecting an injured part when a person falls, and the second ST-LSTM network is used for evaluating the injury degree;
The first ST-LSTM network comprises:
the aggregation unit is used for aggregating all the node information;
the second acquisition unit is used for normalizing the information quantity to obtain attention weights of hidden state information of different joint points at different time points;
the third acquisition unit is used for carrying out weighted summation on each column by using the attention weight to obtain weighted characteristic representations of different joint points;
the fourth acquisition unit is used for inputting the weighted output state to the full connection layer to acquire a feature vector;
the calculation unit is used for calculating a predicted value of the network, namely probability distribution of the injured part, from the classification numerical values output by the full connection layer;
the second ST-LSTM network comprises:
a fifth obtaining unit, configured to use the hidden representation and the output label probability distribution obtained by the first ST-LSTM network as input of the second ST-LSTM network;
a weight evaluation unit for applying attention to the second ST-LSTM network and evaluating an input information amount of the second ST-LSTM network at each spatiotemporal step through context information;
a sixth obtaining unit for obtaining an attention weight probability vector and a weighted output representation thereof;
the prediction unit is used for mapping the weighted output of the second ST-LSTM network to a class label vector through a full-connection network to obtain a characteristic vector, and predicting the falling injury degree through a softmax classifier;
The output unit is used for outputting the result with the maximum probability as the injury grade evaluation result;
and the evaluation module is used for inputting the processed data into the falling injury evaluation model and acquiring the injury key part and the injury degree evaluation result thereof.
2. The system of claim 1, wherein the data acquisition module comprises:
the acquisition unit is used for acquiring human body bone node sequence data within the acquisition time according to a preset frequency;
the first acquisition unit is used for carrying out frame sampling on the human body skeleton node sequence data, dividing the input sequence into equal-length sections with preset quantity, and randomly selecting one frame from each equal-length section to acquire a training sample.
CN202110198589.0A 2021-02-22 2021-02-22 Tumble injury degree prediction method and system based on skeleton data and terminal Expired - Fee Related CN112998697B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110198589.0A CN112998697B (en) 2021-02-22 2021-02-22 Tumble injury degree prediction method and system based on skeleton data and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110198589.0A CN112998697B (en) 2021-02-22 2021-02-22 Tumble injury degree prediction method and system based on skeleton data and terminal

Publications (2)

Publication Number Publication Date
CN112998697A CN112998697A (en) 2021-06-22
CN112998697B true CN112998697B (en) 2022-06-14

Family

ID=76406446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110198589.0A Expired - Fee Related CN112998697B (en) 2021-02-22 2021-02-22 Tumble injury degree prediction method and system based on skeleton data and terminal

Country Status (1)

Country Link
CN (1) CN112998697B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114627427B (en) * 2022-05-18 2022-09-23 齐鲁工业大学 Fall detection method, system, storage medium and equipment based on spatio-temporal information
CN116453648B (en) * 2023-06-09 2023-09-05 华侨大学 Rehabilitation exercise quality assessment system based on contrast learning

Citations (10)

Publication number Priority date Publication date Assignee Title
CN108378830A (en) * 2018-03-09 2018-08-10 芜湖博高光电科技股份有限公司 It is a kind of to monitor the non-contact vital sign survey meter fallen down
CN108549841A (en) * 2018-03-21 2018-09-18 南京邮电大学 A kind of recognition methods of the Falls Among Old People behavior based on deep learning
CN109820515A (en) * 2019-03-01 2019-05-31 中南大学 The method of more sensing fall detections on TensorFlow platform based on LSTM neural network
CN109920208A (en) * 2019-01-31 2019-06-21 深圳绿米联创科技有限公司 Tumble prediction technique, device, electronic equipment and system
KR20190143543A (en) * 2018-06-14 2019-12-31 (주)밸류파인더스 Methode for Performance Improvement of Portfolio Asset Allocation Using Recurrent Reinforcement Learning
CN110633736A (en) * 2019-08-27 2019-12-31 电子科技大学 Human body falling detection method based on multi-source heterogeneous data fusion
CN110647812A (en) * 2019-08-19 2020-01-03 平安科技(深圳)有限公司 Tumble behavior detection processing method and device, computer equipment and storage medium
CN110659677A (en) * 2019-09-10 2020-01-07 电子科技大学 Human body falling detection method based on movable sensor combination equipment
CN110659595A (en) * 2019-09-10 2020-01-07 电子科技大学 Tumble type and injury part detection method based on feature classification
CN111582095A (en) * 2020-04-27 2020-08-25 西安交通大学 Light-weight rapid detection method for abnormal behaviors of pedestrians

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US8280682B2 (en) * 2000-12-15 2012-10-02 Tvipr, Llc Device for monitoring movement of shipped goods
TWI410235B (en) * 2010-04-21 2013-10-01 Univ Nat Chiao Tung Apparatus for identifying falls and activities of daily living
CN103308069B (en) * 2013-06-04 2015-06-24 电子科技大学 Falling-down detection device and method
US10242443B2 (en) * 2016-11-23 2019-03-26 General Electric Company Deep learning medical systems and methods for medical procedures
CN109394229A (en) * 2018-11-22 2019-03-01 九牧厨卫股份有限公司 A kind of fall detection method, apparatus and system
CN109635721B (en) * 2018-12-10 2020-06-30 山东大学 Video human body falling detection method and system based on track weighted depth convolution order pooling descriptor
US11410540B2 (en) * 2019-08-01 2022-08-09 Fuji Xerox Co., Ltd. System and method for event prevention and prediction
CN111274954B (en) * 2020-01-20 2022-03-15 河北工业大学 Embedded platform real-time falling detection method based on improved attitude estimation algorithm


Non-Patent Citations (4)

Title
Comprehensive evaluation of skeleton features-based fall detection from Microsoft Kinect v2; Mona Saleh Alzahrani et al.; Signal, Image and Video Processing; 2019-05-15; pp. 1431-1439 *
Hierarchical LSTM Framework for Long-Term Sea Surface Temperature Forecasting; Xi Liu, Tyler Wilson, Pang-Ning Tan, Lifeng Luo; 2019 IEEE International Conference on Data Science and Advanced Analytics; 2020-01-23 *
Human Daily Activity Recognition for Healthcare Using Wearable and Visual Sensing Data; Xi Li, Lei Liu, Steven J. Simske, Jerry Liu; 2016 IEEE International Conference on Healthcare Informatics; 2016-12-08; pp. 24-31 *
Research on Human Fall Behavior Based on a Hybrid CNN-LSTM Model (基于CNN和LSTM混合模型的人体跌倒行为研究); She Xiangyang, Su Xuewei; Application Research of Computers (计算机应用研究); 2019-12-31; Vol. 36, No. 12; pp. 3857-3868 *

Also Published As

Publication number Publication date
CN112998697A (en) 2021-06-22

Similar Documents

Publication Publication Date Title
Nguyen et al. Trends in human activity recognition with focus on machine learning and power requirements
Wang et al. A data fusion-based hybrid sensory system for older people’s daily activity and daily routine recognition
CN112998697B (en) Tumble injury degree prediction method and system based on skeleton data and terminal
CN110210563A (en) The study of pattern pulse data space time information and recognition methods based on Spike cube SNN
CN111626116B (en) Video semantic analysis method based on fusion of multi-attention mechanism and Graph
CN112990211A (en) Neural network training method, image processing method and device
CN110232412B (en) Human gait prediction method based on multi-mode deep learning
CN110135476A (en) A kind of detection method of personal safety equipment, device, equipment and system
WO2017197375A1 (en) System and methods for facilitating pattern recognition
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
Wang et al. Risk assessment for musculoskeletal disorders based on the characteristics of work posture
CN110659677A (en) Human body falling detection method based on movable sensor combination equipment
CN112986492A (en) Method and device for establishing gas concentration prediction model
Khatiwada et al. Automated human activity recognition by colliding bodies optimization-based optimal feature selection with recurrent neural network
CN116343284A (en) Attention mechanism-based multi-feature outdoor environment emotion recognition method
CN113988263A (en) Knowledge distillation-based space-time prediction method in industrial Internet of things edge equipment
Kale et al. Human posture recognition using artificial neural networks
Gao et al. Logic-enhanced adaptive network-based fuzzy classifier for fall recognition in rehabilitation
CN114418183B (en) Livestock and poultry health sign big data internet of things detection system
Lin et al. Adaptive multi-modal fusion framework for activity monitoring of people with mobility disability
CN114417242B (en) Big data detection system for livestock and poultry activity information
CN114913585A (en) Household old man falling detection method integrating facial expressions
CN116263949A (en) Weight measurement method, device, equipment and storage medium
Islamadina et al. Performance of deep learning benchmark models on thermal imagery of pain through facial expressions
Khatiwada et al. Automated human activity recognition by colliding bodies optimization (CBO)-based optimal feature selection with rnn

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220614