CN109543762B - Multi-feature fusion gesture recognition system and method - Google Patents

Multi-feature fusion gesture recognition system and method Download PDF

Info

Publication number
CN109543762B
CN109543762B CN201811431810.7A
Authority
CN
China
Prior art keywords
human body
node
foot
force
chest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811431810.7A
Other languages
Chinese (zh)
Other versions
CN109543762A (en)
Inventor
洪榛
洪淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201811431810.7A priority Critical patent/CN109543762B/en
Publication of CN109543762A publication Critical patent/CN109543762A/en
Application granted granted Critical
Publication of CN109543762B publication Critical patent/CN109543762B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a multi-feature fusion gesture recognition system and method. The system comprises a management terminal, a cloud server, a wireless network and human body nodes; the human body nodes comprise a chest node, a foot node L and a foot node R; the foot node L comprises a second single chip microcomputer, a second 2.4G module, a second power supply module and a first force-sensitive sensor group, and the foot node R comprises a third single chip microcomputer, a second air pressure sensor, a third 2.4G module, a third power supply module and a second force-sensitive sensor group. The invention detects posture changes of the upper body and the feet through the combined acceleration, the posture angles and the height difference percentage, monitors the weight and center-of-gravity changes of the human body in combination with the sole pressure characteristics, and recognizes the posture using parameters obtained by cloud-computing training. It can effectively recognize daily behavior postures, the results can be queried on the terminal, and it has wide application prospects.

Description

Multi-feature fusion gesture recognition system and method
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a multi-feature fusion gesture recognition system and method.
Background
With the development of sensor technology and internet of things technology, gesture recognition is more and more widely applied. In the field of medical health, the device can be mainly used for detecting abnormal behaviors such as human falling and the like and daily behaviors, reducing the damage of falling to weak groups such as the old and the like, and helping normal people reduce or correct bad living habits such as sedentary and long-standing; the method can also be applied to the VR game industry, and the experience of the game is greatly enhanced through the gesture recognition of the player.
Existing gesture recognition technology relies mainly on cameras and wearable devices, with the following problems:
(1) a camera analyzes collected images, so the user's personal privacy is difficult to protect;
(2) a camera is sensitive to light, and in a dark environment only an infrared camera can be relied on, which is costly;
(3) current wearable devices rely mainly on an acceleration sensor or a force-sensitive sensor alone, so the features are limited and the misjudgment rate is high.
Disclosure of Invention
In order to overcome the defects of the existing gesture recognition system, the invention provides a multi-feature fusion gesture recognition system and a method, and aims to solve the problems that the gesture detection technology in the prior art is limited by environment, simple in function, high in misjudgment rate and the like.
In order to achieve the above object, the present invention has the following configurations:
the multi-feature fusion gesture recognition system comprises a management terminal, a cloud server, a wireless network and human body nodes; wherein the human body nodes comprise a chest node, a foot node L and a foot node R; the chest node comprises a first single chip microcomputer, a 9-axis sensor, a first air pressure sensor, a Wi-Fi module, a first 2.4G module and a first power module; the foot node L comprises a second single chip microcomputer, a second 2.4G module, a second power supply module and a first force-sensitive sensor group positioned in the interlayer of the left insole; the foot node R comprises a third single chip microcomputer, a second air pressure sensor, a third 2.4G module, a third power supply module and a second force-sensitive sensor group positioned in a right insole interlayer;
the Wi-Fi module is communicated with the cloud server through a wireless network, the 9-axis sensor, the first air pressure sensor, the Wi-Fi module and the first 2.4G module are all connected with the first single chip microcomputer, and the first power supply module is used for supplying power to the chest node;
the first force-sensitive sensor group and the second 2.4G module are connected with the second single chip microcomputer, and the second power supply module is used for supplying power to the foot node L;
the second air pressure sensor, the third 2.4G module and the second force-sensitive sensor group are all connected with the third single chip microcomputer, and the third power supply module is used for supplying power to the foot node R.
Optionally, the foot node L is located in a cavity of the left heel of a shoe, a left insole is located above the foot node L, and the first force-sensitive sensor group is located in an interlayer of the left insole.
Optionally, the foot node R is located in a cavity of a right heel, a right insole is located above the foot node R, and the second force-sensitive sensor group is located in an interlayer of the right insole.
Optionally, the first force-sensitive sensor group and the second force-sensitive sensor group each consist of 8 force-sensitive sensors, whose voltage outputs are Li (i∈[1,8]) and Ri (i∈[1,8]), respectively. In the first force-sensitive sensor group, force-sensitive sensor L1 is located at the first phalanx of the left foot, force-sensitive sensors L2, L3 and L4 at the metatarsophalangeal joint of the left foot, force-sensitive sensors L5 and L6 on the lateral side of the left foot, and force-sensitive sensors L7 and L8 at the heel; in the second force-sensitive sensor group, force-sensitive sensor R1 is located at the first phalanx of the right foot, force-sensitive sensors R2, R3 and R4 at the metatarsophalangeal joint of the right foot, force-sensitive sensors R5 and R6 on the lateral side of the right foot, and force-sensitive sensors R7 and R8 at the heel.
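As a reading aid, the per-node data described above can be organized as in the following Python sketch; the class names, field names, and the dictionary constant are illustrative assumptions and do not appear in the patent.

from dataclasses import dataclass
from typing import List, Optional, Tuple

# Illustrative mapping from sensor index to the anatomical position described
# above for the left insole; the right insole layout is symmetric.
LEFT_SENSOR_POSITIONS = {
    1: "first phalanx",
    2: "metatarsophalangeal joint",
    3: "metatarsophalangeal joint",
    4: "metatarsophalangeal joint",
    5: "lateral side of the foot",
    6: "lateral side of the foot",
    7: "heel",
    8: "heel",
}

@dataclass
class FootPacket:
    """Data reported by a foot node: voltages L1..L8 (or R1..R8) and, for the
    right foot node only, the second air pressure sensor reading P2."""
    voltages: List[float]                  # 8 force-sensitive sensor outputs
    foot_pressure: Optional[float] = None  # P2 (right foot node only)

@dataclass
class ChestPacket:
    """Data sampled at the chest node from the 9-axis and air pressure sensors."""
    angles: Tuple[float, float, float]   # three-axis angles (x, y, z)
    accel: Tuple[float, float, float]    # three-axis acceleration (ax, ay, az)
    chest_pressure: float                # P1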
The embodiment of the invention also provides a multi-feature fusion gesture recognition method, which comprises the following steps:
(1) informing a human body node to acquire user posture parameters by adopting a management terminal;
(2) a chest node in the human body nodes informs a foot node L and a foot node R of collecting data, the foot node L collects voltage data of a first force-sensitive sensor group, the foot node R collects voltage data of a second force-sensitive sensor group and data of a second air pressure sensor, and the chest node collects three-axis angles of 9-axis sensors, three-axis acceleration data and data of the first air pressure sensor;
(3) calculating the inclination angle between the human body and the horizontal plane, the human body combined acceleration, the chest-foot height difference percentage and the unit area stress of each point provided with a second force-sensitive sensor group according to the data collected by the human body nodes, judging the current state, if the current state is a training or updating state, sending the calculation result and the labels of the human body posture categories corresponding to the calculation result to a cloud server, continuing the step (4), and if the current state is an identification state, continuing the step (5);
(4) the cloud server calculates the division indexes and division values of different human posture categories according to the calculation result of the step (3) and the human posture category labels corresponding to the calculation result;
(5) judging the human body posture category according to the calculation result of step (3) and the division indexes and division values of the different human body posture categories.
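The five steps can be summarized by the control-flow sketch below. It is only a skeleton: the four callables stand in for the node and cloud operations that the optional steps elaborate, and all function and key names are assumptions.

from typing import Any, Callable, Mapping, Optional

def recognition_cycle(mode: str,
                      collect: Callable[[], Mapping[str, Any]],
                      preprocess: Callable[[Mapping[str, Any]], Mapping[str, float]],
                      train_on_cloud: Callable[[Mapping[str, float], str], None],
                      classify: Callable[[Mapping[str, float]], str]) -> Optional[str]:
    """One pass through steps (1)-(5); the callables stand in for the node and
    cloud operations detailed in the optional steps below."""
    raw = collect()                      # step (2): chest node gathers its own and foot-node data
    features = preprocess(raw)           # step (3): BTA, ha, HP, plantar pressure features
    if mode in ("train", "update"):      # training/updating: labelled features go to the cloud
        train_on_cloud(features, raw["posture_label"])   # step (4) runs on the cloud server
        return None
    return classify(features)            # step (5): apply the division indexes and values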
Optionally, the step (2) includes the steps of:
(2-1) the chest node respectively sends commands to the foot node L and the foot node R through a first 2.4G module;
(2-2) after the foot node L receives the command of the chest node through the second 2.4G module, it sends the collected voltage data L1~L8 of the first force-sensitive sensor group to the chest node through the second 2.4G module;
(2-3) after the foot node R receives the command of the chest node through the third 2.4G module, it sends the collected voltage data R1~R8 of the second force-sensitive sensor group and the data P2 of the second air pressure sensor to the chest node through the third 2.4G module;
(2-4) the chest node receives the data of the foot node L and the foot node R respectively, and simultaneously acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor and the data P1 of the first air pressure sensor.
Optionally, the step (3) includes the steps of:
(3-1) calculating the inclination angle BTA of the human body with the horizontal plane according to the following formula:
Figure BDA0001882793870000031
(3-2) calculating the human body resultant acceleration ha according to the following formula:
ha = √(ax² + ay² + az²)
(3-3) calculating the percent difference in thoracic-foot height, HP, according to the following formula:
HP = 44330·((P2/P0)^(1/5.255) - (P1/P0)^(1/5.255))/H0
where P0 is the standard atmospheric pressure and H0 is the height of the user;
(3-4) calculating the unit area stress LPai, RPai at each point of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
LPai=0.2/(ln(Li)-1.17)-0.2
RPai=0.2/(ln(Ri)-1.17)-0.2
(3-5) calculating the switching values LPaDi, RPaDi at each point of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
LPaDi=ε(LPai-ρ)
RPaDi=ε(RPai-ρ)
where ρ is a preset sole pressure threshold;
(3-6) calculating the comprehensive switching-value outputs LPaDSUM and RPaDSUM of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
Figure BDA0001882793870000033
Figure BDA0001882793870000041
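The preprocessing formulas of steps (3-2) to (3-6) can be exercised with the short numerical sketch below. It assumes that ε(·) is the unit step function and that the comprehensive outputs LPaDSUM and RPaDSUM are the plain sums of the eight per-point switching values (the corresponding formula images are not reproduced in this text), uses the sole pressure threshold ρ = 0.45 N/cm² given later in the detailed description, and feeds in placeholder sample values rather than measured data.

import math

P0 = 101325.0          # standard atmospheric pressure, Pa (assumed unit)

def resultant_accel(ax: float, ay: float, az: float) -> float:
    """Step (3-2): human body resultant acceleration ha (m/s^2)."""
    return math.sqrt(ax**2 + ay**2 + az**2)

def chest_foot_height_percent(p1: float, p2: float, height_m: float) -> float:
    """Step (3-3): chest-foot height difference percentage HP.
    p1: chest barometer reading, p2: right-foot barometer reading (Pa)."""
    return 44330.0 * ((p2 / P0) ** (1 / 5.255) - (p1 / P0) ** (1 / 5.255)) / height_m

def unit_area_stress(voltage: float) -> float:
    """Step (3-4): per-point unit-area stress from a force-sensitive voltage."""
    return 0.2 / (math.log(voltage) - 1.17) - 0.2

def switching_outputs(voltages, rho: float = 0.45):
    """Steps (3-5)/(3-6): per-point switching values (unit-step assumption) and
    their comprehensive output, assumed here to be the plain sum."""
    stresses = [unit_area_stress(v) for v in voltages]
    switched = [1 if s >= rho else 0 for s in stresses]
    return switched, sum(switched)

if __name__ == "__main__":
    # Placeholder sample values, not measured data.
    print(resultant_accel(0.3, 0.2, 9.8))
    print(chest_foot_height_percent(101307.0, 101325.0, 1.70))
    print(switching_outputs([3.3, 2.9, 3.1, 3.4, 2.5, 2.7, 3.6, 3.5]))

Run as written, the script prints a resultant acceleration of about 9.8 m/s² for a near-stationary posture, a chest-foot height percentage of about 0.88 for the sample pressures, and a switching-value sum of 4 for the sample voltages.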
Optionally, the step (4) includes the steps of:
(4-1) the cloud server calculates, respectively: the optimal division value HP1 of the chest-foot height difference percentage HP between the walking and sitting postures; the optimal division value HP2 of the chest-foot height difference percentage HP between the squatting and picking-up postures; the optimal division value θ1 of the inclination angle BTA between the human body and the horizontal plane for the squatting and sitting postures; and the optimal division value θ2 of the inclination angle BTA between the human body and the horizontal plane for the squatting and picking-up postures;
(4-2) the determination of the human body posture parameters is completed, and the cloud server returns the calculated division indexes and optimal division values to the chest node.
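The text does not specify how the cloud server searches for these optimal division values, so the following is only a plausible sketch under stated assumptions: for each pair of posture classes the server scans the midpoints between sorted labelled feature values and keeps the threshold with the highest separation accuracy. All names and the sample HP values are illustrative.

from typing import Iterable

def best_division_value(values_a: Iterable[float], values_b: Iterable[float]) -> float:
    """Return the threshold that best separates class A (expected below the
    threshold) from class B (expected above), by exhaustive midpoint search.
    A stand-in for the cloud-side computation of HP1, HP2, theta1, theta2."""
    a, b = sorted(values_a), sorted(values_b)
    candidates = sorted(set(a + b))
    midpoints = [(x + y) / 2 for x, y in zip(candidates, candidates[1:])]
    def accuracy(t: float) -> float:
        correct = sum(v < t for v in a) + sum(v >= t for v in b)
        return correct / (len(a) + len(b))
    return max(midpoints, key=accuracy)

if __name__ == "__main__":
    # Placeholder labelled HP samples: sitting tends to give a lower chest-foot
    # height percentage than walking, so HP1 should land between the clusters.
    sitting_hp = [0.55, 0.58, 0.60, 0.62]
    walking_hp = [0.90, 0.92, 0.95, 0.97]
    print(f"illustrative HP1 division value: {best_division_value(sitting_hp, walking_hp):.2f}")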
Optionally, the step (5) includes the steps of:
(5-1) firstly, detecting the motion degree of the human body according to the human body combined acceleration ha, and detecting the inclination degree of the upper body of the human body and the pressure of the sole of a foot by combining the inclination angle BTA of the human body and the horizontal plane;
if |ha| ≥ 15 m/s², or |ha| ≤ 5 m/s² and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, a fall is determined and step (5-2) continues; if 5 m/s² < |ha| < 15 m/s², step (5-3) is carried out;
(5-2) if |ax| < 5 m/s² and x ≤ 0°, the person falls forward; if |ax| < 5 m/s² and x > 0°, the person falls backward; if |ay| < 5 m/s² and y > 0°, the person falls to the left; if |ay| < 5 m/s² and y ≤ 0°, the person falls to the right;
(5-3) detecting whether the human body shows a descending behavior according to the chest-foot height difference percentage HP: if HP ≤ HP1, sitting, squatting or picking up is determined and (5-4) continues; otherwise walking or standing is determined and (5-6) is carried out;
(5-4) the chest-foot height difference percentage HP and the inclination angle BTA between the human body and the horizontal plane are considered together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, a sitting posture is determined; if θ2 ≤ BTA < θ1 and HP < HP2, squatting is determined; if BTA < θ2, HP ≤ HP1, and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, picking up is determined and (5-5) continues to determine the specific type of picking up;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, picking up forward is determined; if LPaDSUM ≥ ω1 and RPaDSUM = 0, picking up to the left is determined; if LPaDSUM = 0 and RPaDSUM ≥ ω1, picking up to the right is determined;
(5-6) calculating step frequency according to the variation cycle of the pressure intensity of the sole, and if the step frequency is extremely small, judging that the user stands; if the step frequency accords with the walking rule of the human body, the walking is judged.
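Steps (5-1) to (5-6) form a fixed decision tree over the fused features, sketched below. The thresholds HP1, HP2, θ1, θ2 come from step (4-1); ω1 is treated simply as a given threshold since it is not defined elsewhere in this text; the next-moment condition of (5-1) and the step-frequency test of (5-6) are collapsed into single boolean inputs, so this is an illustration of the rule structure rather than a complete implementation.

def classify_posture(ha, bta, hp, lpad_sum, rpad_sum, x, y, ax, ay,
                     hp1, hp2, theta1, theta2, omega1,
                     post_fall_drop=False, cadence_normal=False):
    """Decision rules of steps (5-1)-(5-6). post_fall_drop stands in for the
    next-moment condition HP < HP2, BTA < theta2, LPaDSUM < omega1 and
    RPaDSUM < omega1; cadence_normal stands in for the step-frequency test."""
    # (5-1) degree of motion from the resultant acceleration (m/s^2)
    if abs(ha) >= 15 or (abs(ha) <= 5 and post_fall_drop):
        # (5-2) fall direction from per-axis acceleration and tilt angles (degrees)
        if abs(ax) < 5:
            return "fall forward" if x <= 0 else "fall backward"
        if abs(ay) < 5:
            return "fall left" if y > 0 else "fall right"
        return "fall"
    if not (5 < abs(ha) < 15):
        return "unknown"
    # (5-3) descending behaviour from the chest-foot height difference percentage
    if hp <= hp1:
        # (5-4) sitting / squatting / picking up from HP, BTA and the plantar switching sums
        if bta >= theta1 and hp2 <= hp <= hp1:
            return "sitting"
        if theta2 <= bta < theta1 and hp < hp2:
            return "squatting"
        if bta < theta2 and (lpad_sum >= omega1 or rpad_sum >= omega1):
            # (5-5) direction of the picking-up action
            if lpad_sum >= omega1 and rpad_sum >= omega1:
                return "pick up forward"
            if lpad_sum >= omega1 and rpad_sum == 0:
                return "pick up left"
            if lpad_sum == 0 and rpad_sum >= omega1:
                return "pick up right"
            return "pick up"
        return "unknown"
    # (5-6) walking vs standing from the plantar-pressure variation cycle
    return "walking" if cadence_normal else "standing"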
Optionally, the method further comprises the steps of:
(6) if the gesture recognition result is a fall, or unhealthy behaviors such as prolonged standing or prolonged sitting are detected, family members or the user are reminded through short messages or voice;
(7) and the chest node uploads the recognized posture result to the cloud server, and the management terminal displays a data curve and the recognized posture result according to the data of the cloud server.
The multi-feature fusion gesture recognition system and method provided by the invention have the following beneficial effects:
the invention designs a multi-feature fusion gesture recognition system by utilizing a sensor technology and an Internet of things technology; modeling analysis is carried out on the normal and abnormal behavior gestures of the human body by utilizing the pressure, height, angle and acceleration characteristics, and a multi-characteristic fusion gesture recognition method is designed; the invention can effectively identify the human body posture, and realize the alarm reminding of the abnormal posture and the real-time display of the daily posture data.
Drawings
FIG. 1 is a schematic diagram of a multi-feature fusion gesture recognition system in accordance with an embodiment of the present invention;
FIG. 2 is a schematic view of the placement of the first and second force-sensitive sensor groups in the insoles according to an embodiment of the present invention;
FIG. 3 is a flow chart of a multi-feature fusion gesture recognition method according to an embodiment of the invention;
reference numerals in the drawings: the management terminal 100, the cloud server 200, the wireless network 300, the human body node 400, the chest node 410, the foot node L420, the foot node R430, the first single chip microcomputer 411, the 9-axis sensor 412, the first air pressure sensor 413, the Wi-Fi module 414, the first 2.4G module 415, the first power module 416, the second single chip microcomputer 421, the second 2.4G module 422, the second power module 423, the first force-sensitive sensor group 424, the third single chip microcomputer 431, the second air pressure sensor 432, the third 2.4G module 433, the third power module 434, and the second force-sensitive sensor group 435.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to fig. 1 to 3:
as shown in fig. 1, to solve the technical problem in the prior art, an embodiment of the present invention provides a multi-feature fusion gesture recognition system. The system includes a management terminal 100, a cloud server 200, a wireless network 300, and a human body node 400. Wherein body node 400 includes a chest node 410, a foot node L420, and a foot node R430. The chest node 410 includes a first single chip microcomputer 411, a 9-axis sensor 412, a first air pressure sensor 413, a Wi-Fi module 414, a first 2.4G module 415, and a first power module 416. The Wi-Fi module 414 is in communication with the cloud server 200 through the wireless network 300, the 9-axis sensor 412, the first air pressure sensor 413, the Wi-Fi module 414 and the first 2.4G module 415 are all connected with the first single chip microcomputer 411, and the first power supply module 416 is used for supplying power to the chest node 410. The foot node L420 comprises a second single chip microcomputer 421, a second 2.4G module 422, a second power supply module 423 and a first force-sensitive sensor group 424 positioned in the interlayer of the left insole. The foot node L420 is positioned in the cavity of the left heel, a left insole is arranged above the foot node L, and the first force-sensitive sensor group 424 is positioned in the interlayer of the left insole; the first force-sensitive sensor group 424 and the second 2.4G module 422 are both connected with the second single chip 421, and the second power supply module 423 is used for supplying power to the foot node L420. The foot node R430 includes a third single chip microcomputer 431, a second air pressure sensor 432, a third 2.4G module 433, a third power module 434 and a second force sensor group 435 located in the right insole layer. The foot node R430 is positioned in a cavity of the right heel, a right insole is arranged above the foot node R, and the second force-sensitive sensor group 435 is positioned in the interlayer of the right insole; the second air pressure sensor 432, the third 2.4G module 433 and the second force sensor group 435 are all connected with a third single chip microcomputer 431, and the third power supply module 434 is used for supplying power to a foot node R430.
As shown in FIG. 2, which is a schematic view of the layout of the first force-sensitive sensor group 424 and the second force-sensitive sensor group 435 in the insoles, each group consists of 8 force-sensitive sensors whose voltage outputs are Li (i∈[1,8]) and Ri (i∈[1,8]), respectively. Taking the first force-sensitive sensor group 424 as an example, L1 is located at the first phalanx, L2, L3 and L4 at the metatarsophalangeal joint, L5 and L6 on the lateral side of the foot, and L7 and L8 at the heel.
As shown in fig. 3, an embodiment of the present invention further provides a multi-feature fusion gesture recognition method, including the following steps:
Step (1): the management terminal 100 notifies the human body node 400 to train or update the user posture parameters and marks the specific posture, such as walking, standing or sitting; then step (2) is executed;
step (2) human body node 400 data acquisition and communication:
(2-1) the chest node 410 sends commands to the foot node L 420 and the foot node R 430, respectively, through the first 2.4G module 415;
(2-2) after the foot node L 420 receives the command of the chest node through the second 2.4G module 422, it sends the collected voltage data L1~L8 of the first force-sensitive sensor group 424 to the chest node 410 through the second 2.4G module 422;
(2-3) after the foot node R 430 receives the command of the chest node 410 through the third 2.4G module 433, it sends the collected voltage data R1~R8 of the second force-sensitive sensor group 435 and the data P2 of the second air pressure sensor 432 to the chest node 410 through the third 2.4G module 433;
(2-4) the chest node 410 receives the data of the foot node L 420 and the foot node R 430 respectively, and simultaneously acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor 412 and the data P1 of the first air pressure sensor 413.
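Steps (2-1) to (2-4) amount to a poll-and-reply exchange over the 2.4G links. The sketch below simulates that exchange with direct method calls in place of the radio; the class and field names are assumptions and no particular radio or microcontroller API is implied.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class FootReply:
    voltages: List[float]                  # L1..L8 or R1..R8
    foot_pressure: Optional[float] = None  # P2, reported by foot node R only

class FootNode:
    """Stand-in for a foot node: samples its sensors when a command arrives."""
    def __init__(self, read_voltages: Callable[[], List[float]],
                 read_pressure: Optional[Callable[[], float]] = None):
        self._read_voltages = read_voltages
        self._read_pressure = read_pressure
    def on_command(self) -> FootReply:     # steps (2-2)/(2-3)
        p2 = self._read_pressure() if self._read_pressure else None
        return FootReply(self._read_voltages(), p2)

@dataclass
class ChestSample:
    angles: Tuple[float, float, float]     # (x, y, z)
    accel: Tuple[float, float, float]      # (ax, ay, az)
    p1: float                              # chest air pressure
    left: FootReply
    right: FootReply

def collect_once(read_chest: Callable[[], Tuple[tuple, tuple, float]],
                 foot_l: FootNode, foot_r: FootNode) -> ChestSample:
    """Steps (2-1) and (2-4): the chest node polls both foot nodes and then
    samples its own 9-axis and air pressure sensors."""
    left, right = foot_l.on_command(), foot_r.on_command()
    angles, accel, p1 = read_chest()
    return ChestSample(angles, accel, p1, left, right)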
Step (3): data preprocessing proceeds as follows; after the processing is finished, the processing results and the original data are uploaded to the cloud server 200; if in the parameter training or updating state, step (4) is executed next; if in the daily use (recognition) state, step (5) is executed:
(3-1) calculating the inclination angle BTA of the human body with the horizontal plane according to the following formula:
Figure BDA0001882793870000061
(3-2) calculating the human body resultant acceleration ha according to the following formula:
ha = √(ax² + ay² + az²)
(3-3) calculating the percent difference in thoracic-foot height, HP, according to the following formula:
HP = 44330·((P2/P0)^(1/5.255) - (P1/P0)^(1/5.255))/H0
where P0 is the standard atmospheric pressure and H0 is the height of the user.
(3-4) calculating the unit area stress LPai, RPai at each point of the first force-sensitive sensor group 424 and the second force-sensitive sensor group 435 according to the following formulas:
LPai=0.2/(ln(Li)-1.17)-0.2
RPai=0.2/(ln(Ri)-1.17)-0.2
(3-5) calculating the switching values LPaDi, RPaDi at each point of the first force-sensitive sensor group 424 and the second force-sensitive sensor group 435 according to the following formulas:
LPaDi=ε(LPai-ρ)
RPaDi=ε(RPai-ρ)
where ρ is the sole pressure threshold, taken as 0.45 N/cm²;
(3-6) calculating the comprehensive switching-value outputs LPaDSUM and RPaDSUM of the first force-sensitive sensor group 424 and the second force-sensitive sensor group 435 according to the following formulas:
Figure BDA0001882793870000072
Figure BDA0001882793870000073
Step (4), the cloud server 200 processes:
(4-1) the cloud server 200 calculates: the optimal division value HP1 of HP between walking and sitting; the optimal division value HP2 of HP between squatting and picking up; the optimal division value θ1 of BTA between squatting and sitting; and the optimal division value θ2 of BTA between squatting and picking up;
(4-2) the determination of the human body posture parameters is completed, and the cloud server 200 returns the parameters to the chest node 410;
and (5) gesture recognition:
(5-1) firstly, the degree of motion of the human body is detected according to ha, and the inclination degree of the upper body and the sole pressure are detected in combination with BTA; if |ha| ≥ 15 m/s², or |ha| ≤ 5 m/s² and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, a fall is determined and step (5-2) continues; if 5 m/s² < |ha| < 15 m/s², step (5-3) is carried out;
(5-2) if |ax| < 5 m/s² and x ≤ 0°, the person falls forward; if |ax| < 5 m/s² and x > 0°, the person falls backward; if |ay| < 5 m/s² and y > 0°, the person falls to the left; if |ay| < 5 m/s² and y ≤ 0°, the person falls to the right;
(5-3) detecting whether the human body shows a descending behavior according to HP: if HP ≤ HP1, sitting, squatting or picking up is determined and (5-4) continues; otherwise walking or standing is determined and (5-6) is carried out;
(5-4) HP and BTA are considered together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, a sitting posture is determined; if θ2 ≤ BTA < θ1 and HP < HP2, squatting is determined; if BTA < θ2, HP ≤ HP1, and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, picking up is determined and (5-5) continues to determine the specific type of picking up;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, picking up forward is determined; if LPaDSUM ≥ ω1 and RPaDSUM = 0, picking up to the left is determined; if LPaDSUM = 0 and RPaDSUM ≥ ω1, picking up to the right is determined;
(5-6) the step frequency is calculated according to the variation cycle of the sole pressure: if the step frequency is extremely small, standing is determined; if the step frequency accords with the human walking rule, walking is determined (a sketch of this step-frequency estimation follows step (5-7) below);
(5-7) after the gesture recognition is finished, performing the step (6) and the step (7);
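Step (5-6) distinguishes walking from standing via the variation cycle of the plantar pressure, which in practice reduces to estimating step frequency from a pressure time series. The sketch below counts upward threshold crossings at an assumed sampling rate; the sampling rate, threshold, and walking-cadence band are illustrative values, not figures from the patent.

def step_frequency(pressure_series, sample_rate_hz: float, threshold: float) -> float:
    """Estimate step frequency (Hz) from a plantar-pressure time series by
    counting upward crossings of the threshold (roughly one crossing per step)."""
    crossings = sum(
        1 for prev, cur in zip(pressure_series, pressure_series[1:])
        if prev < threshold <= cur
    )
    duration_s = len(pressure_series) / sample_rate_hz
    return crossings / duration_s if duration_s > 0 else 0.0

def walking_or_standing(pressure_series, sample_rate_hz=50.0,
                        threshold=0.45, cadence_band=(0.5, 3.0)) -> str:
    """Step (5-6): near-zero step frequency -> standing; a cadence within a
    normal human walking band -> walking. Band limits are assumptions."""
    f = step_frequency(pressure_series, sample_rate_hz, threshold)
    if f < cadence_band[0]:
        return "standing"
    if cadence_band[0] <= f <= cadence_band[1]:
        return "walking"
    return "unknown"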
Step (6), alarm reminding: if the posture recognition result is a fall, or unhealthy behaviors such as prolonged standing or prolonged sitting are detected, family members or the user are reminded;
Step (7), display on the management terminal 100: the chest node 410 uploads the recognized posture result to the cloud server 200, and the management terminal 100 displays relevant information such as the data curves and the current posture according to the data from the cloud server 200.
By adopting the multi-feature fusion gesture recognition system and method, the daily behavior postures of the body can be effectively recognized, historical records can be queried on the terminal, and an alarm reminder can be issued when a fall occurs or unhealthy postures such as prolonged sitting or prolonged standing are detected. The posture change of the upper body is detected through the inclination angle, the resultant acceleration and the posture angles, the vertical distance change between the upper body and the feet is analyzed through the chest-foot height difference percentage, the weight and center-of-gravity changes of the human body are monitored in combination with the sole pressure characteristics, and the posture is finally recognized using the parameters obtained by cloud-computing training. The accuracy is improved, the privacy of the user is protected, the application range is not limited by the scene, and the method has wide application prospects in the medical health industry and the game industry.
In this specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (4)

1. A multi-feature fusion gesture recognition method is characterized by comprising the following steps:
(1) informing a human body node to acquire user posture parameters by adopting a management terminal;
(2) a chest node in the human body nodes informs a foot node L and a foot node R of collecting data, the foot node L collects voltage data of a first force-sensitive sensor group, the foot node R collects voltage data of a second force-sensitive sensor group and data of a second air pressure sensor, and the chest node collects three-axis angles of 9-axis sensors, three-axis acceleration data and data of the first air pressure sensor;
(3) calculating the inclination angle between the human body and the horizontal plane, the human body combined acceleration, the chest-foot height difference percentage and the unit area stress of each point provided with a second force-sensitive sensor group according to the data collected by the human body nodes, judging the current state, if the current state is a training or updating state, sending the calculation result and the labels of the human body posture categories corresponding to the calculation result to a cloud server, continuing the step (4), and if the current state is an identification state, continuing the step (5);
(4) the cloud server calculates the division indexes and division values of different human posture categories according to the calculation result of the step (3) and the human posture category labels corresponding to the calculation result;
(5) judging the human body posture category according to the calculation result of the step (3) and the division indexes and the division values of different human body posture categories;
the step (2) comprises the following steps:
(2-1) the chest node respectively sends commands to the foot node L and the foot node R through a first 2.4G module;
(2-2) after the foot node L receives the command of the chest node through the second 2.4G module, it sends the collected voltage data L1~L8 of the first force-sensitive sensor group to the chest node through the second 2.4G module;
(2-3) after the foot node R receives the command of the chest node through the third 2.4G module, it sends the collected voltage data R1~R8 of the second force-sensitive sensor group and the data P2 of the second air pressure sensor to the chest node through the third 2.4G module;
(2-4) the chest node receives the data of the foot node L and the foot node R respectively, and simultaneously acquires the three-axis angles (x, y, z) and three-axis acceleration data (ax, ay, az) of the 9-axis sensor and the data P1 of the first air pressure sensor;
The step (3) comprises the following steps:
(3-1) calculating the inclination angle BTA of the human body with the horizontal plane according to the following formula:
Figure FDA0002911855900000011
(3-2) calculating the human body resultant acceleration ha according to the following formula:
ha = √(ax² + ay² + az²)
(3-3) calculating the percent difference in thoracic-foot height, HP, according to the following formula:
HP = 44330·((P2/P0)^(1/5.255) - (P1/P0)^(1/5.255))/H0
where P0 is the standard atmospheric pressure and H0 is the height of the user;
(3-4) calculating the unit area stress LPai, RPai at each point of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
LPai=0.2/(ln(Li)-1.17)-0.2
RPai=0.2/(ln(Ri)-1.17)-0.2
(3-5) calculating the switching values LPaDi, RPaDi at each point of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
LPaDi=ε(LPai-ρ)
RPaDi=ε(RPai-ρ)
where ρ is a preset sole pressure threshold;
(3-6) calculating the comprehensive switching-value outputs LPaDSUM and RPaDSUM of the first force-sensitive sensor group and the second force-sensitive sensor group according to the following formulas:
Figure FDA0002911855900000022
Figure FDA0002911855900000023
2. The multi-feature fusion gesture recognition method according to claim 1, wherein the step (4) comprises the steps of:
(4-1) the cloud server calculates, respectively: the optimal division value HP1 of the chest-foot height difference percentage HP between the walking and sitting postures; the optimal division value HP2 of the chest-foot height difference percentage HP between the squatting and picking-up postures; the optimal division value θ1 of the inclination angle BTA between the human body and the horizontal plane for the squatting and sitting postures; and the optimal division value θ2 of the inclination angle BTA between the human body and the horizontal plane for the squatting and picking-up postures;
(4-2) the determination of the human body posture parameters is completed, and the cloud server returns the calculated division indexes and optimal division values to the chest node.
3. The multi-feature fusion gesture recognition method according to claim 2, wherein the step (5) comprises the steps of:
(5-1) firstly, detecting the motion degree of the human body according to the human body combined acceleration ha, and detecting the inclination degree of the upper body of the human body and the pressure of the sole of a foot by combining the inclination angle BTA of the human body and the horizontal plane;
if |ha| ≥ 15 m/s², or |ha| ≤ 5 m/s² and at the next moment HP < HP2, BTA < θ2, LPaDSUM < ω1 and RPaDSUM < ω1, a fall is determined and step (5-2) continues; if 5 m/s² < |ha| < 15 m/s², step (5-3) is carried out;
(5-2) if |ax| < 5 m/s² and x ≤ 0°, the person falls forward; if |ax| < 5 m/s² and x > 0°, the person falls backward; if |ay| < 5 m/s² and y > 0°, the person falls to the left; if |ay| < 5 m/s² and y ≤ 0°, the person falls to the right;
(5-3) detecting whether the human body shows a descending behavior according to the chest-foot height difference percentage HP: if HP ≤ HP1, sitting, squatting or picking up is determined and (5-4) continues; otherwise walking or standing is determined and (5-6) is carried out;
(5-4) the chest-foot height difference percentage HP and the inclination angle BTA between the human body and the horizontal plane are considered together: if BTA ≥ θ1 and HP2 ≤ HP ≤ HP1, a sitting posture is determined; if θ2 ≤ BTA < θ1 and HP < HP2, squatting is determined; if BTA < θ2, HP ≤ HP1, and LPaDSUM ≥ ω1 or RPaDSUM ≥ ω1, picking up is determined and (5-5) continues to determine the specific type of picking up;
(5-5) if LPaDSUM ≥ ω1 and RPaDSUM ≥ ω1, picking up forward is determined; if LPaDSUM ≥ ω1 and RPaDSUM = 0, picking up to the left is determined; if LPaDSUM = 0 and RPaDSUM ≥ ω1, picking up to the right is determined;
(5-6) calculating step frequency according to the variation cycle of the pressure intensity of the sole, and if the step frequency is extremely small, judging that the user stands; if the step frequency accords with the walking rule of the human body, the walking is judged.
4. The multi-feature fusion gesture recognition method of claim 3, further comprising the steps of:
(6) if the posture recognition result is a fall, or an unhealthy behavior such as prolonged standing or prolonged sitting is detected, the family or the user is reminded through a short message or voice;
(7) and the chest node uploads the recognized posture result to the cloud server, and the management terminal displays a data curve and the recognized posture result according to the data of the cloud server.
CN201811431810.7A 2018-11-28 2018-11-28 Multi-feature fusion gesture recognition system and method Active CN109543762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811431810.7A CN109543762B (en) 2018-11-28 2018-11-28 Multi-feature fusion gesture recognition system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811431810.7A CN109543762B (en) 2018-11-28 2018-11-28 Multi-feature fusion gesture recognition system and method

Publications (2)

Publication Number Publication Date
CN109543762A CN109543762A (en) 2019-03-29
CN109543762B true CN109543762B (en) 2021-04-06

Family

ID=65851912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811431810.7A Active CN109543762B (en) 2018-11-28 2018-11-28 Multi-feature fusion gesture recognition system and method

Country Status (1)

Country Link
CN (1) CN109543762B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427890B (en) * 2019-08-05 2021-05-11 华侨大学 Multi-person attitude estimation method based on deep cascade network and centroid differentiation coding
CN113686256B (en) * 2021-08-19 2024-05-31 广州市偶家科技有限公司 Intelligent shoe and squatting action recognition method
CN116250830A (en) * 2023-02-22 2023-06-13 武汉易师宝信息技术有限公司 Human body posture judging and identifying system, device and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003236002A (en) * 2002-02-20 2003-08-26 Honda Motor Co Ltd Method and apparatus for protecting body
JP2006158431A (en) * 2004-12-02 2006-06-22 Kaoru Uchida Fall prevention training auxiliary device
CN103076619A (en) * 2012-12-27 2013-05-01 山东大学 System and method for performing indoor and outdoor 3D (Three-Dimensional) seamless positioning and gesture measuring on fire man
CN106448057A (en) * 2016-10-27 2017-02-22 浙江理工大学 Multisensor fusion based fall detection system and method
CN106887115A (en) * 2017-01-20 2017-06-23 安徽大学 Old people falling monitoring device and falling risk assessment method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003236002A (en) * 2002-02-20 2003-08-26 Honda Motor Co Ltd Method and apparatus for protecting body
JP2006158431A (en) * 2004-12-02 2006-06-22 Kaoru Uchida Fall prevention training auxiliary device
CN103076619A (en) * 2012-12-27 2013-05-01 山东大学 System and method for performing indoor and outdoor 3D (Three-Dimensional) seamless positioning and gesture measuring on fire man
CN106448057A (en) * 2016-10-27 2017-02-22 浙江理工大学 Multisensor fusion based fall detection system and method
CN106887115A (en) * 2017-01-20 2017-06-23 安徽大学 Old people falling monitoring device and falling risk assessment method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Real-time Action Recognition and Fall Detection Based on Smartphone; Yunkun Ning et al.; 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; 2018-07-21; pp. 4418-4422 *
Research on fall detection algorithms for the elderly based on multi-sensor fusion; 屠碧琪; China Masters' Theses Full-text Database, Information Science and Technology; 2018-01-15 (No. 01); main text, page 16, paragraph 2 and page 46, section 5.1 *
Research progress of fall detection systems; 郑娱 et al.; Chinese Journal of Medical Physics; 2014-07-31; Vol. 31, No. 4; page 5073, paragraph 4 *

Also Published As

Publication number Publication date
CN109543762A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
CN104146712B (en) Wearable plantar pressure detection device and plantar pressure detection and attitude prediction method
CN109543762B (en) Multi-feature fusion gesture recognition system and method
CN110706255A (en) Fall detection method based on self-adaptive following
US11047706B2 (en) Pedometer with accelerometer and foot motion distinguishing method
CN106887115B (en) Old people falling monitoring device and falling risk assessment method
Hegde et al. The pediatric SmartShoe: wearable sensor system for ambulatory monitoring of physical activity and gait
US11318035B2 (en) Instrumented orthotic
CN105795571B (en) A kind of data collecting system and method for ectoskeleton pressure footwear
US20150005910A1 (en) Motion information processing apparatus and method
US20110246123A1 (en) Personal status monitoring
Li et al. Pre-impact fall detection based on a modified zero moment point criterion using data from Kinect sensors
CN103211599A (en) Method and device for monitoring tumble
CN109171734A (en) Human body behavioural analysis cloud management system based on Fusion
CN108334827B (en) Gait identity authentication method based on intelligent shoe and intelligent shoe
CN110946585A (en) Fall detection system and method based on data fusion and BP neural network
Jatesiktat et al. An elderly fall detection using a wrist-worn accelerometer and barometer
CN112617806B (en) Intelligent device for walking gesture training
CN112115827A (en) Falling behavior identification method based on human body posture dynamic characteristics
CN103632133B (en) Human gesture recognition method
CN114469074A (en) Fall early warning method, system, equipment and computer storage medium
CN115346272A (en) Real-time tumble detection method based on depth image sequence
CN108958478B (en) Pedal action recognition and evaluation method in virtual assembly operation
CN109730660B (en) Infant wearing equipment and user side
CN114973048A (en) Method and device for correcting rehabilitation action, electronic equipment and readable medium
Wang et al. Comparison of four machine learning algorithms for a pre-impact fall detection system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant