CN113297919A - Rehabilitation action detection method, device, equipment and medium based on posture recognition - Google Patents


Info

Publication number
CN113297919A
CN113297919A (application CN202110474993.6A)
Authority
CN
China
Prior art keywords
joint point
frame
picture
rehabilitation
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110474993.6A
Other languages
Chinese (zh)
Inventor
周霆
阮宏洋
徐文婷
Current Assignee
Shanghai Xiaopeng Technology Co ltd
Original Assignee
Shanghai Xiaopeng Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xiaopeng Technology Co ltd filed Critical Shanghai Xiaopeng Technology Co ltd
Priority to CN202110474993.6A priority Critical patent/CN113297919A/en
Publication of CN113297919A publication Critical patent/CN113297919A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising


Abstract

The invention discloses a rehabilitation action detection method, device, equipment and medium based on posture recognition. The method comprises the following steps: inputting the acquired rehabilitation action pictures into a preset HigherHRNet model to obtain the joint point positions in each frame of picture; calculating the joint point distance between consecutive frames and the change angle of the limbs according to the joint point positions in each frame; calculating the moving speed of the joint points according to the joint point distance between consecutive frames; and obtaining the detected rehabilitation action accuracy according to the joint point positions in each frame, the joint point distance between consecutive frames, the moving speed of the joint points, the change angle of the limbs, and preset characteristic importance parameters. The detection method of the embodiments of the disclosure focuses on the continuity of the rehabilitation action in the video and detects over multiple frames, which improves detection accuracy. It can also adapt to a variety of actions, since the judgment standard can be changed simply by presetting the action characteristic importance parameters.

Description

Rehabilitation action detection method, device, equipment and medium based on posture recognition
Technical Field
The invention relates to the technical field of posture recognition, and in particular to a rehabilitation action detection method, device, equipment and medium based on posture recognition.
Background
The medical criterion for judging whether a patient is successfully cured is often summarized as: successful healing = 50% surgical success + 50% rehabilitation success. As the understanding of disease treatment has become more comprehensive, rehabilitation training plays an increasingly important role in the overall treatment process. First, rehabilitation training visibly restores the patient's physical strength: simple skeletal muscle exercise and aerobic training restore bodily functions and improve the patient's resistance. Second, the training process builds the patient's confidence in overcoming the disease, which has a certain therapeutic effect both physiologically and psychologically. Third, training has a marked promoting effect on specific joints and motor functions; especially after operations on the hands, joints, spine and the like, early post-operative functional training is very important for restoring the function of the operated organ. Therefore, post-operative rehabilitation training that starts early and proceeds step by step under a doctor's guidance can achieve a good rehabilitation effect.
Existing joint point recognition is mainly applied to posture recognition of conventional actions. For rehabilitation actions, existing posture recognition mainly compares the joint points recognized in a single-frame image with a single-frame standard action to obtain the standard degree of the action. This technology first ignores that rehabilitation training is a continuous process: the course of the action is also important and cannot be neglected. Second, there are hundreds of different rehabilitation actions. Faced with these, the prior art must first classify which kind of action is being performed and then compare it with the standard action of that kind to obtain a standard degree. On the one hand, classification over hundreds of classes can produce incorrect classification and wrong guidance; on the other hand, too many classes make classification take too long to meet real-time requirements.
Disclosure of Invention
The embodiments of the disclosure provide a rehabilitation action detection method, device, equipment and medium based on posture recognition. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is not intended to identify key or critical elements or to delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present disclosure provides a rehabilitation motion detection method based on gesture recognition, including:
inputting the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture;
calculating the joint point distance between the continuous frames and the change angle of the limbs according to the joint point position in each frame of picture;
calculating the moving speed of the joint point according to the joint point distance between the continuous frames;
and obtaining the accuracy of the detected rehabilitation action according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter.
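As a rough end-to-end illustration of the four steps above, the sketch below uses only the standard library. `pose_model` is a hypothetical stand-in for the preset HigherHRNet model (any callable mapping a frame to a list of (x, y) joint positions); the frame rate and the final weighted-sum scoring form are assumptions, not the patent's verbatim formulas, and the angle feature is omitted for brevity:

```python
import math

def detect_rehab_action(frames, alphas, pose_model, fps=20.0):
    """Toy sketch of the four claimed steps (names are illustrative).

    frames: iterable of video frames; pose_model(frame) -> [(x, y), ...]
    alphas: preset characteristic importance parameters (assumed weights).
    """
    joints = [pose_model(f) for f in frames]               # step 1: joint positions
    n, n_joints = len(joints), len(joints[0])
    # step 2: per-joint distance accumulated over consecutive frames
    dists = [sum(math.dist(joints[k][i], joints[k + 1][i])
                 for k in range(n - 1))
             for i in range(n_joints)]
    # step 3: moving speed, V_i = D_i / (N / fps)
    speeds = [d * fps / n for d in dists]
    # step 4: weighted combination of motion features (assumed scoring form)
    features = [sum(dists), sum(speeds)]
    return sum(a * f for a, f in zip(alphas, features))
```

A dummy `pose_model` such as `lambda f: f` (frames already given as joint lists) is enough to exercise the pipeline.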
In an optional embodiment, before inputting the acquired rehabilitation action picture into the preset HigherHRNet model, the method further includes:
identifying one or more rehabilitation training users by a face recognition method;
when a rehabilitation training user is successfully identified and located, shooting the video image of the user performing rehabilitation training.
In an alternative embodiment, calculating the joint distance between successive frames from the joint position in each frame of the picture comprises:
taking consecutive multi-frame pictures as one sample, and calculating the inter-frame joint point distance within the sample according to the following formulas:

D_i = Σ_{n=1}^{N−1} ‖P_i^{n+1} − P_i^{n}‖

D = Σ_{i=1}^{I} D_i

where I denotes the number of joint points (13 in total), N denotes the number of frames in one sample, D denotes the inter-frame distance over all joint points, P_i^{n} denotes the position of joint point i in the n-th frame picture of the sample, and D_i denotes the inter-frame distance of a particular joint point.
In an alternative embodiment, calculating the change angle of the limbs according to the joint point positions in each frame of picture comprises:
obtaining the left and right forearm slopes, left and right upper-arm slopes, left and right calf slopes, and left and right thigh slopes from the joint points connecting the four limbs;
and calculating the angles between the left and right forearms and upper arms, and the angles between the left and right calves and thighs, from those slopes.
In an alternative embodiment, calculating the angles between the left and right forearms and upper arms and between the left and right calves and thighs from the slopes comprises:

θ1 = arctan |(l1 − l3) / (1 + l1·l3)|
θ2 = arctan |(l2 − l4) / (1 + l2·l4)|
θ3 = arctan |(l5 − l7) / (1 + l5·l7)|
θ4 = arctan |(l6 − l8) / (1 + l6·l8)|

where l1 denotes the left forearm slope, l2 the right forearm slope, l3 the left upper-arm slope, l4 the right upper-arm slope, l5 the left calf slope, l6 the right calf slope, l7 the left thigh slope, and l8 the right thigh slope; θ1 denotes the angle between the left forearm and left upper arm, θ2 the angle between the right forearm and right upper arm, θ3 the angle between the left calf and left thigh, and θ4 the angle between the right calf and right thigh.
In an alternative embodiment, calculating the moving speed of the joint based on the joint distance between successive frames comprises:
calculating the moving speed of the joint point according to the time occupied by the N frames of pictures in the sample and the joint point distance between the frames, wherein the specific formula is as follows:
V_i = D_i / (N / fps) = D_i · fps / N

where V_i denotes the moving speed of the i-th joint point, D_i denotes the inter-frame distance of the i-th joint point, N denotes the number of frames in one sample, and fps denotes the number of frames transmitted per second.
In an optional embodiment, the obtaining the detected rehabilitation action accuracy according to the joint point position in each frame of picture, the joint point distance between consecutive frames, the moving speed of the joint point, the change angle of the limbs and the preset feature importance parameter includes:
[formula image not reproduced in the source text: S is computed from the joint point positions, the inter-frame joint point distances, the joint point moving speeds and the limb change angles, weighted by the parameters α_e]

where α_e (e = 1, 2, 3, 4) denotes the preset characteristic importance parameter of each rehabilitation action, S denotes the detected rehabilitation action accuracy, P_i^f denotes the joint point position in the f-th frame picture, D_i denotes the inter-frame joint point distance, V_i denotes the moving speed of the joint point, and θ_(i,j) denotes the change angle of the limbs.
In a second aspect, an embodiment of the present disclosure provides a rehabilitation motion detection apparatus based on gesture recognition, including:
the input module is used for inputting the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture;
the first calculation module is used for calculating the joint point distance between the continuous frames and the change angle of the limbs according to the joint point position in each frame of picture;
the second calculation module is used for calculating the moving speed of the joint point according to the joint point distance between the continuous frames;
and the detection module is used for obtaining the detected rehabilitation action accuracy according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter.
In a third aspect, the disclosed embodiments provide a rehabilitation motion detection device based on gesture recognition, including a processor and a memory storing program instructions, where the processor is configured to execute the rehabilitation motion detection method based on gesture recognition provided by the above embodiments when executing the program instructions.
In a fourth aspect, the disclosed embodiments provide a computer-readable medium, on which computer-readable instructions are stored, where the computer-readable instructions are executable by a processor to implement a rehabilitation motion detection method based on gesture recognition provided in the foregoing embodiments.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects:
The rehabilitation action detection method based on posture recognition provided by the embodiments of the disclosure focuses not only on single-frame posture recognition but also on the course of the rehabilitation action. In addition, the method can adapt to a variety of actions: the judgment standard can be changed simply by presetting the action characteristic importance coefficients, which improves the adaptability of the method. The method can also recognize multi-person rehabilitation training, distinguishing target patients from non-target patients, and the HigherHRNet network structure makes joint point localization more accurate.
The method automatically detects the standard degree of the rehabilitation action, so that patients can adjust their movements. More standard rehabilitation actions lead to better recovery and also reduce the burden on rehabilitation doctors.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method of gesture recognition based rehabilitation motion detection, according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a HigherHRNet network architecture in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating a three-dimensional body coordinate system according to an exemplary embodiment;
fig. 4 is a schematic structural diagram illustrating a rehabilitation motion detection device based on gesture recognition according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a configuration of a gesture recognition based rehabilitation motion detection device according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating a computer storage medium in accordance with an exemplary embodiment.
Detailed Description
The following description and the drawings sufficiently illustrate specific embodiments of the invention to enable those skilled in the art to practice them.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of systems and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific case. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified. "And/or" describes an association relationship between associated objects, covering three cases: for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The rehabilitation motion detection method based on gesture recognition according to the embodiment of the present application will be described in detail with reference to fig. 1 to 3.
Referring to fig. 1, the method specifically includes the following steps:
s101, inputting the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture.
In a possible implementation manner, before performing step S101, the method further includes recognizing one or more rehabilitation training users by using a face recognition method, and when the rehabilitation training users are successfully recognized and located, capturing video images of the rehabilitation training users.
In an exemplary scene, a mobile terminal such as a tablet computer or a mobile phone shoots the user's rehabilitation training video. A face recognition system installed in the terminal can recognize one or more rehabilitation training users from the video. Once a rehabilitation training user is successfully recognized and located by the terminal, the user is prompted by voice to begin the rehabilitation actions, and the terminal starts continuous video capture, where the video FPS is not lower than 20 and the resolution may be 1080 x 1060.
Further, posture recognition is carried out according to the shot rehabilitation training video, and joint point positions in each frame of picture are obtained.
Specifically, the shot rehabilitation training video is input into the preset HigherHRNet model, and for each frame of rehabilitation action picture transmitted in real time, the joint point positions P_i^f are calculated, where f indexes the frame picture and i indexes the joint point.
The advantage of the HigherHRNet model is that it can accurately identify the joint points of multiple persons, with high joint point localization precision. In the embodiment of the disclosure, the HigherHRNet model combines quadratic interpolation and deconvolution to generate a more precise heat map, so that joint point localization is ultimately more accurate; the specific network structure is shown in fig. 2.
Referring to fig. 2, the highherhrnet model may include a stem module, a first-stage network module, a second-stage network module, a third-stage network module, a deconvolution module, and a feature refinement module, which are connected in sequence.
The rehabilitation action picture is input into the stem module for preprocessing, which extracts a first feature map of the picture and reduces its resolution to 1/4 of the original image. The stem module may comprise two convolution layers with identical kernels; for example, 3 x 3 convolution kernels (64 channels) may be used, with a stride of 2 and a padding parameter of 1 for the convolution operation.
The first-level network module may be configured to reduce the width of the first feature map to a predetermined value C. Specifically, the first-level network module may include two residual units and one convolution layer, where each residual unit is formed from bottleneck layers with a width (number of channels) of 64, and the 3 x 3 convolution layer reduces the width of the first feature map to C. For example, C may be set to 32.
The second level network module may include one or two resolution blocks, each of which may include two 3x3 convolutional layers, and three residual units.
The third-level network module comprises three resolution modules and three residual units, where each resolution module comprises two 3 x 3 convolutions, and the convolution widths are C, 2C and 4C respectively.
The deconvolution module takes the feature map from HigherHRNet and the predicted heat map as input and generates a new feature map with twice the resolution of the input feature map. In this embodiment, deconvolution efficiently generates a high-quality, high-resolution feature map.
The feature refining module can comprise 4 residual error units to refine the output feature map, and finally obtain and output joint point positions in one frame of rehabilitation action picture.
In addition, the network module connected between the third-level network module and the deconvolution module in fig. 2 aggregates the feature maps, and "1/4" in fig. 2 indicates the resolution of a module's output feature map relative to the size of the original image.
According to the steps, the position information of the joint points in each frame of picture can be accurately obtained.
S102, calculating the joint point distance between the continuous frames and the change angle of the limbs according to the joint point position in each frame of picture.
In an alternative embodiment, calculating the inter-frame joint point distance from the joint point positions in each frame of picture comprises: taking consecutive multi-frame pictures as one sample and calculating the inter-frame joint point distance within the sample according to the following formulas, where N is the number of frames in one sample and can be set by the skilled person, generally not greater than 10 and not less than 5:

D_i = Σ_{n=1}^{N−1} ‖P_i^{n+1} − P_i^{n}‖

D = Σ_{i=1}^{I} D_i

where I denotes the number of joint points (13 in total), N denotes the number of frames in one sample, D denotes the inter-frame distance over all joint points, P_i^{n} denotes the position of joint point i in the n-th frame picture of the sample, and D_i denotes the inter-frame distance of a particular joint point.
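A minimal sketch of this inter-frame distance computation, using only the standard library (function and variable names are illustrative, not from the patent):

```python
import math

def joint_distances(frames):
    """Accumulate the frame-to-frame distance of each joint over one sample.

    frames: list of N frames, each a list of (x, y) joint positions
    (13 joints in the embodiment). Returns (per_joint, total), where
    per_joint[i] is D_i summed over consecutive frame pairs and
    total is D = sum over all joints of D_i.
    """
    n_joints = len(frames[0])
    per_joint = [0.0] * n_joints
    for prev, curr in zip(frames, frames[1:]):
        for i in range(n_joints):
            per_joint[i] += math.dist(prev[i], curr[i])
    return per_joint, sum(per_joint)
```

With a 13-joint pose sequence, `per_joint` gives each D_i and `total` the overall D.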
Further, calculating the change angle of the limbs according to the joint point positions in each frame of picture comprises: obtaining the left and right forearm slopes, left and right upper-arm slopes, left and right calf slopes, and left and right thigh slopes from the joint points connecting the four limbs; and calculating the angles between the left and right forearms and upper arms, and between the left and right calves and thighs, from those slopes.
Since the movement of the four limbs of the human body is three-dimensional, a three-dimensional coordinate system must first be set before the movement angles can be obtained: the direction facing the human face is taken as the y-axis, the direction on the ground perpendicular to the y-axis as the x-axis, and the vertical line from the head through the intersection of the x-axis and y-axis as the z-axis. Fig. 3 is a schematic diagram of this three-dimensional human body coordinate system.
Specifically, the embodiment of the present disclosure uses 13 joint points: the head, the left and right shoulders, the left and right elbows, the left and right wrists, the left and right hips, the left and right knees, and the left and right ankles. The left and right forearm slopes l1, l2 can be obtained from the joint points connecting the wrist and the elbow; the left and right upper-arm slopes l3, l4 from the joint points connecting the elbow and the shoulder; the left and right calf slopes l5, l6 from the joint points connecting the ankle and the knee; and the left and right thigh slopes l7, l8 from the joint points connecting the knee and the hip.
Further, calculating the angles between the left and right forearms and upper arms and between the left and right calves and thighs from the slopes comprises:

θ1 = arctan |(l1 − l3) / (1 + l1·l3)|
θ2 = arctan |(l2 − l4) / (1 + l2·l4)|
θ3 = arctan |(l5 − l7) / (1 + l5·l7)|
θ4 = arctan |(l6 − l8) / (1 + l6·l8)|

where l1 denotes the left forearm slope, l2 the right forearm slope, l3 the left upper-arm slope, l4 the right upper-arm slope, l5 the left calf slope, l6 the right calf slope, l7 the left thigh slope, and l8 the right thigh slope; θ1 denotes the angle between the left forearm and left upper arm, θ2 the angle between the right forearm and right upper arm, θ3 the angle between the left calf and left thigh, and θ4 the angle between the right calf and right thigh.
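The angle-between-slopes computation can be sketched as follows. This is a hedged reconstruction of the image formula: vertical limbs (infinite slope) and exactly perpendicular segments, where 1 + l_a·l_b = 0, are not handled here:

```python
import math

def limb_angle(l_a, l_b):
    """Angle (degrees) between two limb segments with slopes l_a and l_b,
    via tan(theta) = |(l_a - l_b) / (1 + l_a * l_b)|.

    Raises ZeroDivisionError when the segments are perpendicular
    (1 + l_a * l_b == 0); a full implementation would special-case this.
    """
    return math.degrees(math.atan(abs((l_a - l_b) / (1 + l_a * l_b))))
```

For example, a forearm of slope 1 against an upper arm of slope 0 gives 45 degrees.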
S103, calculating the moving speed of the joint point according to the joint point distance between the continuous frames.
In an alternative embodiment, calculating the moving speed of the joint based on the joint distance between successive frames comprises: calculating the moving speed of the joint point according to the time occupied by the N frames of pictures in the sample and the joint point distance between the frames, wherein the specific formula is as follows:
V_i = D_i / (N / fps) = D_i · fps / N

where V_i denotes the moving speed of the i-th joint point, D_i denotes the inter-frame distance of the i-th joint point, N denotes the number of frames in one sample, and fps denotes the number of frames transmitted per second.
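In code, this speed formula is a one-liner (names are illustrative):

```python
def joint_speed(d_i, n_frames, fps):
    """V_i = D_i / (N / fps): distance covered by joint i over the sample,
    divided by the time spanned by its N frames."""
    return d_i * fps / n_frames
```

For example, a joint that travels 10 pixels over a 5-frame sample captured at 20 fps moves at 40 pixels per second.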
S104, according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter, the detected rehabilitation action accuracy is obtained.
In the prior art, there are hundreds of different rehabilitation actions. Faced with these, the prior art must first classify which class an action belongs to and then compare it with the standard action of that class to obtain a standard degree. On the one hand, classification over hundreds of classes can produce incorrect classification and wrong guidance; on the other hand, too many classes make classification take too long to meet real-time requirements, so a single action standard cannot adapt to a variety of actions.
Therefore, the embodiment of the present disclosure uses the joint point positions in each frame of picture, the inter-frame joint point distance, the moving speed of the joint points, and the change angle of the limbs as motion features, and different importance parameters can be preset for different motion features. For example, the rehabilitation action after knee surgery requires the calf and thigh to bend to 90°, and the action speed cannot be too slow. Different action feature importance parameters can thus be set for different rehabilitation actions.
Further, the detected rehabilitation action accuracy is obtained according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter, and comprises the following steps:
Figure BDA0003046684330000091
wherein α_e (e = 1, 2, 3, 4) represents the preset feature importance parameter of each rehabilitation action, S represents the detected rehabilitation action accuracy, Figure BDA0003046684330000092 represents the joint point position in the f-th frame picture, Figure BDA0003046684330000093 represents the inter-frame distance of the joint points, V_I represents the moving speed of the joint points, and θ_(i,j) represents the change angle of the limbs.
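The exact combining formula is given only in the unreproduced formula image, so the following is a speculative sketch under the assumption that the four motion-feature sub-scores (position, inter-frame distance, speed, limb angle) are each normalized to [0, 1] and combined as a weighted mean with the importance parameters α_1..α_4; the function name and normalization are assumptions:

```python
import numpy as np

def action_accuracy(feature_scores, alphas):
    """Hypothetical weighted combination of the four motion-feature
    sub-scores. alphas are the preset feature importance parameters
    alpha_e (e = 1..4); each sub-score is assumed in [0, 1]."""
    s = np.asarray(feature_scores, dtype=float)
    a = np.asarray(alphas, dtype=float)
    return float(np.dot(a, s) / a.sum())   # weighted mean, stays in [0, 1]

# Knee-surgery example: the angle feature is weighted most heavily
scores = [0.9, 0.8, 1.0, 0.7]   # position, distance, speed, angle
alphas = [1.0, 1.0, 1.0, 3.0]
print(action_accuracy(scores, alphas))     # -> 0.8
```

Raising one α_e makes deviations in that feature dominate the score, which is how the judgment standard changes per rehabilitation action without retraining a classifier.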
With this step, the detection method can adapt to a variety of different actions: the judgment standard can be changed simply by presetting different feature importance coefficients, which improves the adaptability of the method.
Optionally, after the detected rehabilitation action accuracy is obtained, a voice prompt can be issued in real time by voice broadcasting, and the user can adjust the rehabilitation action according to the prompt information, improving the rehabilitation training effect.
Optionally, after the detected rehabilitation action accuracy is obtained, the accuracy information can be sent to the doctor end and the user end, so that the doctor can conveniently follow the patient's training condition.
The rehabilitation action detection method based on posture recognition provided by the embodiment of the present disclosure not only emphasizes single-frame posture recognition but also extracts the rehabilitation action process, improving detection accuracy. In addition, the embodiment can adapt to a variety of actions: the judgment standard can be changed simply by presetting feature importance coefficients, which improves the adaptability of the method.
The embodiment of the present disclosure further provides a rehabilitation motion detection apparatus based on gesture recognition, which is used for executing the rehabilitation motion detection method based on gesture recognition of the above embodiment, as shown in fig. 4, the apparatus includes:
the input module 401 is configured to input the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture;
a first calculating module 402, configured to calculate joint distances between consecutive frames and change angles of limbs according to the joint positions in each frame of the image;
a second calculating module 403, configured to calculate a moving speed of the joint point according to the joint point distance between consecutive frames;
the detection module 404 is configured to obtain the detected rehabilitation action accuracy according to the joint point position in each frame of the picture, the joint point distance between consecutive frames, the moving speed of the joint point, the change angle of the limbs, and a preset feature importance parameter.
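The module pipeline of fig. 4 can be sketched end-to-end for its two calculating modules; the function name, array shapes (N frames × joints × 2 coordinates), and pixel units below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def pipeline_features(joint_positions, fps):
    """Illustrative sketch of the first and second calculating
    modules: derive inter-frame joint distances and moving speeds
    from per-frame joint positions of shape (N, num_joints, 2)."""
    p = np.asarray(joint_positions, dtype=float)
    n = p.shape[0]
    step = np.linalg.norm(np.diff(p, axis=0), axis=2)  # movement per frame pair
    dist = step.sum(axis=0)                            # D_i per joint
    speed = dist * fps / n                             # V_i = D_i * fps / N
    return dist, speed

# Two joints over 3 frames: joint 0 moves 1 px per frame, joint 1 is still
pts = np.array([[[0, 0], [5, 5]],
                [[1, 0], [5, 5]],
                [[2, 0], [5, 5]]], dtype=float)
dist, speed = pipeline_features(pts, fps=30)
print(dist)    # [2. 0.]
print(speed)   # [20.  0.]
```

The detection module would then combine these features with the joint positions, limb angles and importance parameters to produce the accuracy score.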
It should be noted that, when the rehabilitation motion detection device based on gesture recognition provided in the above embodiment executes the rehabilitation motion detection method based on gesture recognition, the above division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. In addition, the rehabilitation motion detection device based on posture recognition provided by the above embodiment and the rehabilitation motion detection method based on posture recognition belong to the same concept, and details of the implementation process are found in the method embodiment and are not described herein again.
The embodiment of the present disclosure further provides an electronic device corresponding to the rehabilitation motion detection method based on gesture recognition provided in the foregoing embodiment, so as to execute the rehabilitation motion detection method based on gesture recognition.
Please refer to fig. 5, which illustrates a schematic diagram of an electronic device according to some embodiments of the present application. As shown in fig. 5, the electronic apparatus includes: the processor 500, the memory 501, the bus 502 and the communication interface 503, wherein the processor 500, the communication interface 503 and the memory 501 are connected through the bus 502; the memory 501 stores a computer program that can be executed on the processor 500, and the processor 500 executes the rehabilitation motion detection method based on gesture recognition provided by any one of the foregoing embodiments of the present application when executing the computer program.
The memory 501 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile memory, such as at least one disk memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 503 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, and the like.
Bus 502 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 501 is used for storing a program, and the processor 500 executes the program after receiving an execution instruction, and the rehabilitation motion detection method based on gesture recognition disclosed in any of the foregoing embodiments of the present application may be applied to the processor 500, or implemented by the processor 500.
The processor 500 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 500. The processor 500 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed thereby. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory 501, and the processor 500 reads the information in the memory 501 and completes the steps of the method in combination with its hardware.
The electronic device provided by the embodiment of the application and the rehabilitation motion detection method based on gesture recognition provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic device.
Referring to fig. 6, the computer readable storage medium is an optical disc 600, on which a computer program (i.e., a program product) is stored, and when the computer program is executed by a processor, the computer program performs the rehabilitation motion detection method based on gesture recognition provided in any of the embodiments.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
The computer-readable storage medium provided by the above-mentioned embodiment of the present application and the rehabilitation motion detection method based on gesture recognition provided by the embodiment of the present application have the same inventive concept, and have the same beneficial effects as the method adopted, run or implemented by the application program stored in the computer-readable storage medium.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only show some embodiments of the present invention and are described in relative detail, but they should not be construed as limiting the scope of the present invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, all of which fall within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A rehabilitation motion detection method based on posture recognition is characterized by comprising the following steps:
inputting the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture;
calculating the joint point distance between the continuous frames and the change angle of the limbs according to the joint point position in each frame of picture;
calculating the moving speed of the joint point according to the joint point distance between the continuous frames;
and obtaining the accuracy of the detected rehabilitation action according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter.
2. The method according to claim 1, wherein before inputting the acquired rehabilitation motion picture into the preset HigherHRNet model, the method further comprises:
identifying one or more rehabilitation training users by adopting a face identification method;
and when the rehabilitation training user is successfully identified and positioned, shooting a video image of the user for rehabilitation training.
3. The method of claim 1, wherein calculating the joint distance between successive frames according to the joint position in each frame of the picture comprises:
taking continuous multi-frame pictures as a sample, calculating the joint point distance between frames in the sample according to the following formula,
Figure FDA0003046684320000011
Figure FDA0003046684320000012
wherein I represents the number of joint points (13 joint points in total), N represents the number of frames in one sample, Figure FDA0003046684320000013 represents the inter-frame distance over all the joint points, Figure FDA0003046684320000014 represents the position of each joint point in the n-th frame picture of the sample, and Figure FDA0003046684320000015 represents the inter-frame distance of a specific joint point.
4. The method of claim 1, wherein calculating the angle of change of the limbs according to the position of the joint point in each frame of the picture comprises:
respectively obtaining left and right forearm slopes, left and right big arm slopes, left and right shank slopes and left and right thigh slopes according to joint points connecting four limbs;
and calculating the angles between the left and right small arms and the left and right large arms and the angles between the left and right shanks and the left and right thighs according to the slopes of the left and right small arms, the slopes of the left and right large arms, the slopes of the left and right shanks and the slopes of the left and right thighs.
5. The method of claim 4, wherein calculating the left and right forearm to left and right forearm and calf to left and right thigh angles from the left and right forearm slope, left and right calf slope, and left and right thigh slope comprises:
Figure FDA0003046684320000021
Figure FDA0003046684320000022
wherein l_1 represents the slope of the left forearm, l_2 the slope of the right forearm, l_3 the slope of the left big arm, l_4 the slope of the right big arm, l_5 the slope of the left shank, l_6 the slope of the right shank, l_7 the slope of the left thigh, and l_8 the slope of the right thigh; θ_1 represents the angle between the left forearm and the left big arm, θ_2 the angle between the right forearm and the right big arm, θ_3 the angle between the left shank and the left thigh, and θ_4 the angle between the right shank and the right thigh.
6. The method of claim 3, wherein calculating the velocity of movement of the articulation point based on the articulation point distance between the successive frames comprises:
calculating the moving speed of the joint point according to the time occupied by the N frames of pictures in the sample and the joint point distance between the frames, wherein the specific formula is as follows:
Figure FDA0003046684320000023
wherein V_i represents the moving speed of the i-th joint point, D_i represents the inter-frame distance of the i-th joint point, N represents the number of frames in one sample, and fps represents the number of frames of the picture transmitted per second.
7. The method of claim 1, wherein obtaining the detected rehabilitation action accuracy according to the joint point position in each frame of picture, the joint point distance between the consecutive frames, the moving speed of the joint point, the change angle of the limbs and the preset feature importance parameter comprises:
Figure FDA0003046684320000031
wherein α_e (e = 1, 2, 3, 4) represents the preset feature importance parameter of each rehabilitation action, S represents the detected rehabilitation action accuracy, Figure FDA0003046684320000032 represents the joint point position in the f-th frame picture, Figure FDA0003046684320000033 represents the inter-frame distance of the joint points, V_I represents the moving speed of the joint points, and θ_(i,j) represents the change angle of the limbs.
8. A rehabilitation motion detection device based on posture recognition is characterized by comprising:
the input module is used for inputting the acquired rehabilitation action picture into a preset HigherHRNet model to obtain the joint point position in each frame of picture;
the first calculation module is used for calculating the joint point distance between the continuous frames and the change angle of the limbs according to the joint point position in each frame of picture;
the second calculation module is used for calculating the moving speed of the joint point according to the joint point distance between the continuous frames;
and the detection module is used for obtaining the detected rehabilitation action accuracy according to the joint point position in each frame of picture, the joint point distance between the continuous frames, the moving speed of the joint point, the change angle of the limbs and the preset characteristic importance parameter.
9. A rehabilitation motion detection device based on gesture recognition, characterized by comprising a processor and a memory storing program instructions, the processor being configured to perform the rehabilitation motion detection method based on gesture recognition according to any one of claims 1 to 7 when executing the program instructions.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement a method of gesture recognition based rehab motion detection as claimed in any of claims 1 to 7.
CN202110474993.6A 2021-04-29 2021-04-29 Rehabilitation action detection method, device, equipment and medium based on posture recognition Pending CN113297919A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110474993.6A CN113297919A (en) 2021-04-29 2021-04-29 Rehabilitation action detection method, device, equipment and medium based on posture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110474993.6A CN113297919A (en) 2021-04-29 2021-04-29 Rehabilitation action detection method, device, equipment and medium based on posture recognition

Publications (1)

Publication Number Publication Date
CN113297919A true CN113297919A (en) 2021-08-24

Family

ID=77320614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110474993.6A Pending CN113297919A (en) 2021-04-29 2021-04-29 Rehabilitation action detection method, device, equipment and medium based on posture recognition

Country Status (1)

Country Link
CN (1) CN113297919A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114052726A (en) * 2021-11-25 2022-02-18 湖南中科助英智能科技研究院有限公司 Thermal infrared human body gait recognition method and device in dark environment
CN117409930A (en) * 2023-12-13 2024-01-16 江西为易科技有限公司 Medical rehabilitation data processing method and system based on AI technology
CN117409930B (en) * 2023-12-13 2024-02-13 江西为易科技有限公司 Medical rehabilitation data processing method and system based on AI technology

Similar Documents

Publication Publication Date Title
CN110969114A (en) Human body action function detection system, detection method and detector
US20210315486A1 (en) System and Method for Automatic Evaluation of Gait Using Single or Multi-Camera Recordings
CN107909060A (en) Gymnasium body-building action identification method and device based on deep learning
CN109753891A (en) Football player's orientation calibration method and system based on human body critical point detection
Chaudhari et al. Yog-guru: Real-time yoga pose correction system using deep learning methods
CN111476097A (en) Human body posture assessment method and device, computer equipment and storage medium
CN113297919A (en) Rehabilitation action detection method, device, equipment and medium based on posture recognition
JP2021064367A (en) Motion recognition method, motion recognition device, and electronic device
US20220207921A1 (en) Motion recognition method, storage medium, and information processing device
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
US20220057856A1 (en) Method and system for providing real-time virtual feedback
CN112568898A (en) Method, device and equipment for automatically evaluating injury risk and correcting motion of human body motion based on visual image
CN111368787A (en) Video processing method and device, equipment and computer readable storage medium
CN114495169A (en) Training data processing method, device and equipment for human body posture recognition
US20220222975A1 (en) Motion recognition method, non-transitory computer-readable recording medium and information processing apparatus
CN115035037A (en) Limb rehabilitation training method and system based on image processing and multi-feature fusion
CN114973048A (en) Method and device for correcting rehabilitation action, electronic equipment and readable medium
Solongontuya et al. Novel side pose classification model of stretching gestures using three-layer LSTM
CN112102451A (en) Common camera-based wearable virtual live broadcast method and equipment
Kishore et al. Smart yoga instructor for guiding and correcting yoga postures in real time
CN116343325A (en) Intelligent auxiliary system for household body building
CN113842622B (en) Motion teaching method, device, system, electronic equipment and storage medium
CN115346640A (en) Intelligent monitoring method and system for closed-loop feedback of functional rehabilitation training
CN115006822A (en) Intelligent fitness mirror control system
CN112861606A (en) Virtual reality hand motion recognition and training method based on skeleton animation tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination