CN115530814A - Child motion rehabilitation training method based on visual posture detection and computer deep learning - Google Patents

Info

Publication number
CN115530814A
CN115530814A (application CN202211288552.8A)
Authority
CN
China
Prior art keywords
rehabilitation
rehabilitation training
training
children
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211288552.8A
Other languages
Chinese (zh)
Inventor
郑朋飞
陈修宁
庄汉杰
郭若宜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Childrens Hospital of Nanjing Medical University
Original Assignee
Nanjing Childrens Hospital of Nanjing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Childrens Hospital of Nanjing Medical University
Priority to CN202211288552.8A
Publication of CN115530814A
Legal status: Pending (current)

Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
                        • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
                            • A61B 5/1116: Determining posture transitions
                            • A61B 5/1126: Measuring movement using a particular sensing technique
                                • A61B 5/1128: Measuring movement using a particular sensing technique using image analysis
                    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
                        • A61B 5/7235: Details of waveform analysis
                            • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
                                • A61B 5/7267: Classification involving training the classification device
        • A63: SPORTS; GAMES; AMUSEMENTS
            • A63B: APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
                • A63B 71/00: Games or sports accessories not covered in groups A63B1/00 - A63B69/00
                    • A63B 71/06: Indicating or scoring devices for games or players, or for other sports activities
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00: Image analysis
                    • G06T 7/70: Determining position or orientation of objects or cameras
                        • G06T 7/73: Determining position or orientation using feature-based methods
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00: Arrangements for image or video recognition or understanding
                    • G06V 10/70: Arrangements using pattern recognition or machine learning
                        • G06V 10/764: Arrangements using classification, e.g. of video objects
                        • G06V 10/82: Arrangements using neural networks
                • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
                        • G06V 40/23: Recognition of whole body movements, e.g. for sport training
        • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
                    • G16H 10/60: ICT for patient-specific data, e.g. for electronic patient records
                • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
                    • G16H 50/70: ICT for mining of medical data, e.g. analysing previous cases of other patients


Abstract

A child motion rehabilitation training method based on visual posture detection and computer deep learning. Through a deep-learning human posture detection algorithm, the method performs posture discrimination based on angle and distance calculation and machine learning, together with posture analysis based on state sequences, and completes the formulation of an individualized rehabilitation training scheme. It reduces dependence on rehabilitation therapists, enables standardized quantification of the rehabilitation training effect, and improves rehabilitation quality. By integrating the game and animation elements that children enjoy, it raises children's engagement and efficiently achieves the goal of pediatric motor rehabilitation.

Description

Child motion rehabilitation training method based on visual posture detection and computer deep learning
Technical Field
The invention relates to the field of exercise rehabilitation methods, in particular to a child exercise rehabilitation training method based on visual posture detection and computer deep learning.
Background
Traditional pediatric motor rehabilitation training is usually administered by a professional rehabilitation therapist: following a rehabilitation training scheme, the child must complete specific exercises to promote motor recovery. Such treatment, however, has four problems: (1) high dependence on rehabilitation therapists; (2) difficulty in quantifying the rehabilitation training effect in a standardized way; (3) low engagement, leading to poor compliance in children and poor rehabilitation outcomes; (4) the need for a dedicated rehabilitation facility, which is limited by venue and wastes time.
With technological progress, the fusion of disciplines such as image processing, pattern recognition, and artificial intelligence has attracted wide attention and holds great application potential in pediatric motor rehabilitation. However, a motor rehabilitation training system that can provide a personalized rehabilitation training scheme by recognizing human body postures is still lacking.
Disclosure of Invention
The invention aims to provide a child motion rehabilitation training method based on visual posture detection and computer deep learning that improves rehabilitation efficiency and effectiveness.
A child motion rehabilitation training method based on visual posture detection and computer deep learning comprises the following steps:
step 1, start the camera, select the patient client, enter the patient's basic information, and select a training scene and display content;
step 2, determine and enter a specific rehabilitation program task via search;
step 3, present the body part to be trained fully within the image acquisition frame on the screen so that it can be recognized; preprocess the captured picture with a deep-learning human posture detection algorithm and input it into a posture detection model built on deep learning;
step 4, using posture discrimination based on angle and distance calculation and machine learning, together with posture analysis based on state sequences, complete the actions following the textual step prompts on the left of the screen and the simulated human animation prompts;
step 5, the patient selects a date and the corresponding training schedule in the rehabilitation training prescription and starts training;
step 6, after each rehabilitation training session, the training video and results are saved automatically, and the patient chooses whether to upload them to the doctor client for subsequent confirmation.
Further, in step 1, the display content is realized by an OpenGL graphics rendering technology.
Further, in step 2, a rehabilitation program task corresponding to a part or a disease is obtained by searching for the part needing rehabilitation training or searching for the disease name.
Furthermore, in step 3, once the normalized coordinates are recognized and output, the angle, distance, and range-of-motion parameters are displayed in real time; the positions of the human joints are computed in real time, and the position changes are mapped onto the corresponding key points of the model to achieve the model-following effect.
Further, the posture detection model takes an RGB image as input and outputs the coordinates of 33 human body key points. A regression method combining heat maps and offsets is used, with the heat maps and offsets needed only during the training stage. The pyramid-style feature extraction structure markedly improves prediction quality: an encoder-decoder network predicts the heat map of every joint point, while a separate encoder branch regresses the coordinates of all joint points. At inference time the heat-map branch is discarded and only the regression part is kept, which greatly increases inference speed and allows real-time operation without loss of accuracy. The normalized key point coordinates are then input into an action classification model, which outputs a confidence for each category. A logistic regression model built with a convolutional neural network handles the fact that the input coordinate points are discrete and cannot be represented linearly; the lightweight classification model runs faster than real time, and the specific landmark points displayed on the screen update promptly as the body position changes.
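The heat-map-plus-offset scheme described above can be illustrated at decode time: a joint's coordinate is the argmax cell of its heat map, refined by the predicted x/y offsets and rescaled by the grid stride. The grid shape, stride, and values below are illustrative only, not taken from the patent:

```python
def decode_keypoint(heatmap, offset_x, offset_y, stride):
    """Return the pixel coordinate of one joint from its heat map + offsets."""
    best, by, bx = -1.0, 0, 0
    for y, row in enumerate(heatmap):           # find the argmax cell
        for x, v in enumerate(row):
            if v > best:
                best, by, bx = v, y, x
    # Sub-cell refinement: argmax cell plus its predicted offset, rescaled.
    px = (bx + offset_x[by][bx]) * stride
    py = (by + offset_y[by][bx]) * stride
    return px, py, best                         # x, y in pixels, confidence

heat = [[0.1, 0.2],
        [0.3, 0.9]]                             # toy 2x2 grid; peak at (1, 1)
offx = [[0.0, 0.0], [0.0, 0.25]]
offy = [[0.0, 0.0], [0.0, -0.25]]
print(decode_keypoint(heat, offx, offy, stride=8))  # → (10.0, 6.0, 0.9)
```

At inference only a regression branch would run, as the text notes; this decode step is what the heat-map branch supervises during training.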
Further, in step 4, the posture discrimination strategies, which vary with the requirements of the selected rehabilitation program, include:
identifying by a specific angle: for the rehabilitation of joints, judging the effectiveness of postures by calculating the angle formed by joint points;
identifying by a specific distance: judging the posture according to the distance between the feature points selected by the part;
detecting by a machine learning algorithm: the method comprises the steps of collecting expected attitude samples, applying a machine learning algorithm to the samples and outputting a classification model, and inputting collected images to the model when in use so as to obtain a classification judgment result.
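The patent does not name a specific machine learning algorithm for this strategy; as an illustrative stand-in, the sketch below fits a nearest-centroid classifier to flattened, normalized key-point vectors and classifies a new pose sample. The labels and coordinates are invented for the example:

```python
import math

def centroid(samples):
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labelled):
    # labelled: {pose label: list of flattened normalized key-point vectors}
    return {lab: centroid(vecs) for lab, vecs in labelled.items()}

def classify(model, vec):
    # Nearest centroid in Euclidean distance wins.
    return min(model, key=lambda lab: math.dist(model[lab], vec))

model = train({
    "arm_raised":  [[0.5, 0.1, 0.5, 0.4], [0.5, 0.2, 0.5, 0.4]],
    "arm_lowered": [[0.5, 0.8, 0.5, 0.4], [0.5, 0.9, 0.5, 0.4]],
})
print(classify(model, [0.5, 0.15, 0.5, 0.4]))  # → arm_raised
```

A production system would use a stronger classifier, but the flow is the same: collect expected posture samples, fit a model, then feed captured key points to it for a classification judgment.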
Further, in step 4, the detection method based on the state sequence includes the following steps:
step 4-1, divide the specific action into several position-related states;
step 4-2, select a suitable threshold or discrimination model for each state;
step 4-3, use the discrimination method of step 4-2 to assign the output topological points to states, and store the results in a sequence list that records the most recent state sequence;
step 4-4, analyze the sequence list as required, and judge the motion result from the state transitions.
Further, in step 6, the patient chooses whether to share the training video and training record with the doctor; one-to-one online communication is supported.
The invention achieves the following beneficial effects: (1) through a deep-learning human posture detection algorithm, it realizes posture discrimination based on angle and distance calculation and machine learning, together with posture analysis based on state sequences, and completes the formulation of an individualized rehabilitation training scheme; (2) it reduces dependence on rehabilitation therapists, enables standardized quantification of the rehabilitation training effect, and improves rehabilitation quality; (3) it integrates the game and animation elements that children enjoy, raising children's engagement and efficiently achieving the goal of pediatric motor rehabilitation.
Drawings
Fig. 1 is a flowchart of a child exercise rehabilitation training method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of human body posture detection for deep learning according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of gesture detection constructed based on deep learning in the embodiment of the present invention.
Fig. 4 is a schematic view of an elbow joint rehabilitation process according to an embodiment of the invention.
FIG. 5 is a schematic diagram of Mask-RCNN according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a feature pyramid FPN according to an embodiment of the present disclosure.
FIG. 7 is a schematic diagram of a suggestion box in MASK model training according to an embodiment of the present invention.
FIG. 8 is a diagram of a real box in MASK model training according to an embodiment of the present invention.
Fig. 9 is a schematic diagram illustrating key joint point identification according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings.
A child motion rehabilitation training method based on visual posture detection and computer deep learning comprises the following steps:
Step 1: start the process, turn on the computer or mobile phone camera, select the patient client, enter the patient's basic information, and select a training scene, avatar, clothing, equipment, and the like (e.g. Ultraman, Kung Fu Panda, flower fairy); display is rendered with OpenGL graphics rendering technology.
Step 2: search by the body part needing rehabilitation training, or by disease name, and enter the specific rehabilitation program task.
Step 3: place the body part to be trained inside the image acquisition frame on the screen. By moving the computer or mobile phone camera, or by the patient adjusting position and distance, the rehabilitation part is presented fully within the acquisition frame, and the software automatically identifies the specific joints or body-surface landmarks. A deep-learning human posture detection algorithm is adopted: pictures are captured by the camera in real time, preprocessed, and input into a posture detection model built on deep learning. Human body topological key points are preset in the detection model; after processing the input image, it outputs the normalized coordinates of the topological joint points, and the specific landmark points are displayed accurately on the screen and update promptly as the body position changes. The software displays parameters such as angle, distance, and range of motion in real time, computes the positions of the human joints in real time, and maps the position changes onto the corresponding key points of the model to achieve the model-following effect.
The deep-learning human posture detection algorithm adopts the Mask R-CNN segmentation algorithm; see FIGS. 5-8. In FIG. 5, Mask R-CNN uses ResNet-101 as the backbone feature extraction network, corresponding to the CNN part of the figure. After feature extraction, the feature pyramid structure is constructed from the feature layers whose length and width have been compressed two, three, four, and five times. In FIG. 6, the feature pyramid FPN is constructed to realize multi-scale feature fusion: in Mask R-CNN, the backbone outputs C2, C3, C4, and C5 (length and width compressed two, three, four, and five times, respectively) are taken out to build the feature pyramid structure.
The extracted P2, P3, P4, P5, and P6 serve as the effective feature layers of the RPN network. The RPN suggestion-box network operates on these layers and decodes the prior (anchor) boxes to obtain suggestion boxes. When obtaining suggestion boxes, the effective feature layers P2-P6 all share the same RPN suggestion-box network, which predicts the prior-box adjustment parameters and whether each prior box contains an object.
Then, first, a 3 × 3 convolution with 512 channels is applied. Next, two convolutions are applied in parallel: one with (number of prior boxes × 4) channels, predicting the adjustment of each suggestion box at every grid point of the effective feature layer, and one with (number of prior boxes × 2) channels, predicting whether each suggestion box at every grid point contains an object. This is equivalent to dividing the whole image into a grid and establishing 3 prior boxes from each grid center. When the input images differ in size, the number of suggestion boxes changes accordingly. Mask R-CNN then obtains several shared feature layers through the backbone feature extraction network, and the suggestion boxes are used to crop these shared feature layers.
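The arithmetic above, a grid per effective feature layer with 3 prior boxes per grid center and output heads of (priors × 4) and (priors × 2) channels, can be illustrated numerically. The per-level strides (4, 8, 16, 32, 64 for P2-P6) are typical FPN values assumed here, not stated in the text:

```python
def rpn_output_shapes(img_h, img_w, strides=(4, 8, 16, 32, 64), anchors=3):
    """Per-level grid sizes, prior-box counts, and RPN head channel counts."""
    levels = []
    for s in strides:
        h, w = img_h // s, img_w // s          # grid this level divides the image into
        levels.append({
            "grid": (h, w),
            "num_anchors": h * w * anchors,    # candidate boxes on this level
            "reg_channels": anchors * 4,       # the "priors x 4" adjustment head
            "cls_channels": anchors * 2,       # the "priors x 2" objectness head
        })
    return levels

levels = rpn_output_shapes(256, 256)
print(sum(l["num_anchors"] for l in levels))   # → 16368 prior boxes over P2-P6
```

Changing the input size changes the per-level grids and therefore the number of suggestion boxes, exactly as the description notes.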
Then, each characteristic point in the image is extracted, and further optimization is carried out until an accurate range is determined.
Referring to FIGS. 7-8, the Mask model is trained as follows:
1. Training of the suggestion-box network: compute the loss function with respect to the prediction result of the Mask R-CNN suggestion-box network. The picture is input into the current suggestion-box network to obtain the suggestion-box results; at the same time, encoding is needed, which converts the position information of the real (ground-truth) boxes into the format of the suggestion-box prediction results. For training, the prior box corresponding to each real box of each picture must be found, and the suggestion-box prediction that would yield such a real box is determined, providing the feedback signal.
2. Classifier model training: compute the degree of coincidence (overlap) of all suggestion boxes with the real boxes and screen them; if the coincidence of a suggestion box with some real box is greater than 0.5, the suggestion box is a positive sample, and if less than 0.5, a negative sample.
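The screening rule above, coincidence degree greater than 0.5 makes a suggestion box a positive sample, reduces to an intersection-over-union computation. A minimal sketch, assuming corner-format (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union (coincidence degree) of two corner-format boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_proposal(suggestion, real_box, thresh=0.5):
    # Coincidence degree > 0.5 with a real box => positive sample.
    return "positive" if iou(suggestion, real_box) > thresh else "negative"

print(label_proposal((0, 0, 10, 10), (1, 1, 11, 11)))   # IoU ≈ 0.68 → positive
print(label_proposal((0, 0, 10, 10), (5, 5, 15, 15)))   # IoU ≈ 0.14 → negative
```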
3. Mask model training: when the suggestion-box network crops the shared feature layer needed by the mask model, the crop differs from the real box, so the position of the cropping box relative to the real box must be computed to obtain correct semantic segmentation information.
Step 4: using posture discrimination based on angle and distance calculation and machine learning, together with posture analysis based on state sequences, complete the actions following the textual step prompts on the left of the screen and the simulated human animation prompts.
Depending on the requirements of the selected rehabilitation program, the posture discrimination strategies generally include the following:
(1) Identifying by a specific angle: for example, elbow joint flexion and extension training focuses on joint points at the shoulder, elbow and wrist, and the effectiveness of the posture is judged by calculating the angle formed by the joint points.
(2) Identifying by a specific distance: for example, the motion of touching the face by the hand can be determined based on the distance between the hand and the face feature point.
Specifically, referring to fig. 9, assume the person exercises with the sagittal plane facing the screen. When the knee is flexed, several key joint points (the three points in the figure) are identified on the screen. The angle formed by connecting the hip joint point and the ankle joint point to the knee joint point is then the flexion angle of the knee joint, and other angles are measured similarly. For rehabilitation training, the action is qualified only when it reaches the target range.
(3) Detecting by a machine learning algorithm: and acquiring an expected attitude sample, applying a machine learning algorithm to the sample and outputting a classification model, and inputting an acquired image into the model when the model is used to obtain a classification judgment result.
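Strategy (1) above, angle-based discrimination, can be sketched as follows: the flexion angle at a joint is the angle between the vectors from that joint to its two neighbouring key points, and the posture is valid when the angle falls inside a target range. The 2D coordinates and the target range below are illustrative:

```python
import math

def joint_angle(p_a, joint, p_b):
    """Angle in degrees at `joint` between vectors joint->p_a and joint->p_b."""
    v1 = (p_a[0] - joint[0], p_a[1] - joint[1])
    v2 = (p_b[0] - joint[0], p_b[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def pose_valid(angle, lo, hi):
    # The posture counts as effective only inside the target range.
    return lo <= angle <= hi

# Hip, knee, and ankle key points forming a right angle at the knee.
a = joint_angle((0.0, 0.0), (0.0, 1.0), (1.0, 1.0))
print(round(a), pose_valid(a, 85, 95))  # → 90 True
```

The same helper serves elbow flexion-extension by substituting shoulder, elbow, and wrist key points.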
To analyze and judge the posture dynamically and improve discrimination accuracy, a state-sequence-based detection method is adopted, comprising the following steps:
(1) Firstly, dividing specific actions into a plurality of position-related states;
(2) Selecting a suitable threshold or discriminant model for each state;
(3) Performing state discrimination and division on the output topological points by using the discrimination method in the step (2), and storing division results into a sequence list, wherein the most recent state sequence is recorded in the sequence list;
(4) And performing sequence analysis on the sequence list according to requirements, and judging a motion result according to state transition.
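The four steps above can be sketched for elbow flexion-extension: each frame's joint angle is binned into a state, the most recent state transitions are kept in a sequence list, and a flexed-to-extended transition completes one repetition. The thresholds here are illustrative assumptions:

```python
from collections import deque

def to_state(angle):
    # Step (2): illustrative thresholds dividing the motion into states.
    if angle < 60:
        return "flexed"
    if angle > 150:
        return "extended"
    return "mid"

def count_reps(angles, history=50):
    seq, prev = deque(maxlen=history), None    # step (3): the sequence list
    for a in angles:
        s = to_state(a)
        if s != prev:                          # record state *transitions* only
            seq.append(s)
            prev = s
    # Step (4): a repetition completes when "extended" follows a "flexed".
    reps, armed = 0, False
    for s in seq:
        if s == "flexed":
            armed = True
        elif s == "extended" and armed:
            reps += 1
            armed = False
    return reps

frames = [170, 120, 40, 45, 100, 165, 160, 50, 155]   # per-frame elbow angles
print(count_reps(frames))  # → 2
```

Judging motion from state transitions rather than single frames makes the check robust to jitter in the per-frame key-point estimates.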
An example: take elbow joint flexion-extension training. First, the computer measures and records the patient's initial elbow range of motion, e.g. 30-90-95 degrees. A final training target is then entered, e.g. 0-90-135 degrees, based on the normal elbow range of motion or the doctor's orders. Finally, the total rehabilitation training period, frequency, and other parameters are set according to the doctor's orders; a personalized rehabilitation training prescription is generated with one click by the computer software, or a training prescription is generated automatically from a preset optimal rehabilitation plan and adjusted later according to actual conditions.
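One hypothetical way to turn the measured initial range (e.g. 30-90-95 degrees for extension-neutral-flexion) and the target (0-90-135 degrees) into a session-by-session prescription is simple linear progression. The patent leaves the actual plan to software presets or the doctor's orders, so this schedule is only an assumption:

```python
def prescription(initial, target, sessions):
    """Linearly interpolate (extension, neutral, flexion) targets per session."""
    plan = []
    for k in range(1, sessions + 1):
        t = k / sessions
        plan.append(tuple(round(i + (g - i) * t)
                          for i, g in zip(initial, target)))
    return plan

plan = prescription((30, 90, 95), (0, 90, 135), sessions=4)
print(plan)  # the final session reaches the target (0, 90, 135)
```

In practice the schedule would be adjusted by the therapist against the patient's actual progress, as the description allows.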
Step 5: the patient selects a date and the corresponding training schedule in the rehabilitation training prescription and starts training. If, for example, the training target is an elbow range of motion of 25-90-100 degrees, the patient completes the action following the animated dummy on the left of the screen, and the session is checked off once the range of motion reaches 25-90-100 degrees more than 10 times. If the patient needs multi-dimensional rehabilitation training of one joint, or motion rehabilitation training of several different joints, the system automatically jumps to the next course.
Step 6: after each rehabilitation training session, the training videos and results are saved automatically; the patient can send the rehabilitation training results and videos directly to the doctor in charge, who views them at the doctor client and communicates with the patient through real-time software.
In the above process, 2D/3D graphics rendering and model binding are achieved as follows: in step 1, a virtual training scene and equipment are selected, with OpenGL graphics rendering used for display; the model's key points are bound to the human joint points, and in steps 3 and 4 the positions of the human joint points are computed in real time, with the position changes mapped onto the corresponding key points of the model to achieve the model-following effect.
In step 6, the training video and the training record of the patient can be shared with the doctor, and meanwhile, one-to-one online communication is supported.
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiment, but equivalent modifications or changes made by those skilled in the art according to the present disclosure should be included in the scope of the present invention as set forth in the appended claims.

Claims (8)

1. A child motion rehabilitation training method based on visual posture detection and computer deep learning, characterized in that the method comprises the following steps:
step 1, starting the camera, selecting the patient client, entering the patient's basic information, and selecting a training scene and display content;
step 2, determining and entering a specific rehabilitation program task via search;
step 3, presenting the body part to be trained fully within the image acquisition frame on the screen for recognition, preprocessing the captured picture with a deep-learning human posture detection algorithm, and inputting the preprocessed picture into a posture detection model built on deep learning;
step 4, using posture discrimination based on angle and distance calculation and machine learning, together with posture analysis based on state sequences, completing the actions following the textual step prompts on the left of the screen and the simulated human animation prompts;
step 5, the patient selecting a date and the corresponding training schedule in the rehabilitation training prescription and starting training;
step 6, after each rehabilitation training session, automatically saving the training video and results, with the patient choosing whether to upload them to the doctor client for subsequent confirmation.
2. The child motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 1, the display content is realized by OpenGL graphics rendering technology.
3. The child motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 2, the body part needing rehabilitation training, or the disease name, is searched, and the rehabilitation program task corresponding to that part or disease is obtained.
4. The child motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 3, once the normalized coordinates are recognized and output, the angle, distance, and range-of-motion parameters are displayed in real time; the positions of the human joints are computed in real time, and the position changes are mapped onto the corresponding key points of the model to achieve the model-following effect.
5. The child motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 3, the posture detection model takes an RGB image as input and outputs the coordinates of 33 human body key points; a regression method combining heat maps and offsets is used, with the heat maps and offsets needed only during the training stage; the pyramid-style feature extraction structure markedly improves prediction quality, an encoder-decoder network predicts the heat map of every joint point, and a separate encoder branch regresses the coordinates of all joints; at inference time the heat-map branch is discarded and only the regression part is kept, which increases inference speed and allows real-time operation without loss of accuracy; the normalized key point coordinates are input into an action classification model that outputs a confidence for each category; a logistic regression model built with a convolutional neural network handles input coordinate points that are discrete and cannot be represented linearly; the lightweight classification model runs faster than real time, and the specific landmark points displayed on the screen update promptly as the body position changes.
6. The children's motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 4, different posture judgment strategies are applied according to the needs of the selected rehabilitation program, comprising:
identification by a specific angle: for joint rehabilitation, the validity of the posture is judged by calculating the angle formed by the joint points;
identification by a specific distance: the posture is judged according to the distance between the feature points selected for the body part;
detection by a machine learning algorithm: samples of the expected posture are collected, a machine learning algorithm is applied to the samples to output a classification model, and at use time the captured images are input to the model to obtain the classification result.
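The third strategy above can be illustrated with a deliberately simple stand-in for the learned classifier: a nearest-centroid model fit on flattened key-point samples per expected posture. The class and label names are hypothetical, and the patent's actual model may differ:

```python
import numpy as np

class PoseTemplateClassifier:
    """Nearest-centroid stand-in for a learned posture classifier:
    fit on flattened key-point samples per expected posture, then
    label new frames by the closest class centroid."""

    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        # One centroid per posture class
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return [self.labels_[i] for i in d.argmin(axis=1)]

# Illustrative 2-feature samples for two expected postures
samples = [[0.10, 0.20], [0.12, 0.22], [0.80, 0.90], [0.82, 0.88]]
labels = ["raise", "raise", "lower", "lower"]
clf = PoseTemplateClassifier().fit(samples, labels)
pred = clf.predict([[0.11, 0.21]])
```

In practice each sample would be the full flattened key-point vector of one captured frame rather than two features.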
7. The children's motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 4, the detection method based on a state sequence comprises the following steps:
step 4-1, dividing the specific action into several position-related states;
step 4-2, selecting a suitable threshold or discriminant model for each state;
step 4-3, discriminating and assigning states to the output topological points using the angle and distance discrimination methods, and storing the results in a sequence list that records the most recent state sequence;
step 4-4, performing sequence analysis on the sequence list as required, and judging the motion result from the state transitions.
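Steps 4-1 through 4-4 can be sketched as a small repetition counter; the state names, thresholds and angle values below are illustrative assumptions, not the patent's parameters:

```python
from collections import deque

class StateSequenceDetector:
    """Step 4-1: divide the action into 'down' and 'up' states.
    Step 4-2: pick an angle threshold for each state.
    Step 4-3: keep the most recent states in a bounded sequence list.
    Step 4-4: judge the motion result from the down -> up transition."""

    def __init__(self, down_thresh=90.0, up_thresh=160.0, maxlen=50):
        self.down_thresh = down_thresh
        self.up_thresh = up_thresh
        self.states = deque(maxlen=maxlen)  # recent state sequence
        self.reps = 0

    def update(self, angle):
        # Threshold-based state discrimination for one frame
        if angle <= self.down_thresh:
            state = "down"
        elif angle >= self.up_thresh:
            state = "up"
        else:
            return self.reps  # intermediate frame, no state change recorded
        if not self.states or self.states[-1] != state:
            # A down -> up transition completes one repetition
            if self.states and self.states[-1] == "down" and state == "up":
                self.reps += 1
            self.states.append(state)
        return self.reps

# One squat-like cycle of illustrative joint angles
det = StateSequenceDetector()
for frame_angle in [170, 150, 85, 120, 165]:
    det.update(frame_angle)
```

Feeding the per-frame joint angle from the posture detector into `update` yields a running repetition count for the rehabilitation task.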
8. The children's motion rehabilitation training method based on visual posture detection and computer deep learning of claim 1, characterized in that: in step 6, the patient chooses whether to share the training videos and training records with the doctor, and one-to-one online communication is supported.
CN202211288552.8A 2022-10-20 2022-10-20 Child motion rehabilitation training method based on visual posture detection and computer deep learning Pending CN115530814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211288552.8A CN115530814A (en) 2022-10-20 2022-10-20 Child motion rehabilitation training method based on visual posture detection and computer deep learning


Publications (1)

Publication Number Publication Date
CN115530814A true CN115530814A (en) 2022-12-30

Family

ID=84735133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211288552.8A Pending CN115530814A (en) 2022-10-20 2022-10-20 Child motion rehabilitation training method based on visual posture detection and computer deep learning

Country Status (1)

Country Link
CN (1) CN115530814A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116661609A (en) * 2023-07-27 2023-08-29 之江实验室 Cognitive rehabilitation training method and device, storage medium and electronic equipment
CN116661609B (en) * 2023-07-27 2024-03-01 之江实验室 Cognitive rehabilitation training method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN111126272B (en) Posture acquisition method, and training method and device of key point coordinate positioning model
CN106650687B (en) Posture correction method based on depth information and skeleton information
CN108764120B (en) Human body standard action evaluation method
CN112184705B (en) Human body acupuncture point identification, positioning and application system based on computer vision technology
JP2021524113A (en) Image processing methods and equipment, imaging equipment, and storage media
Chaudhari et al. Yog-guru: Real-time yoga pose correction system using deep learning methods
CN110728220A (en) Gymnastics auxiliary training method based on human body action skeleton information
CN113362452A (en) Hand gesture three-dimensional reconstruction method and device and storage medium
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
CN114998983A (en) Limb rehabilitation method based on augmented reality technology and posture recognition technology
CN117671738B (en) Human body posture recognition system based on artificial intelligence
CN113947809A (en) Dance action visual analysis system based on standard video
Liu Aerobics posture recognition based on neural network and sensors
WO2020147791A1 (en) Image processing method and device, image apparatus, and storage medium
CN115530814A (en) Child motion rehabilitation training method based on visual posture detection and computer deep learning
Li Application of IoT-enabled computing technology for designing sports technical action characteristic model
CN111312363B (en) Double-hand coordination enhancement system based on virtual reality
Qianwen Application of motion capture technology based on wearable motion sensor devices in dance body motion recognition
CN111310655A (en) Human body action recognition method and system based on key frame and combined attention model
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
CN113569775B (en) Mobile terminal real-time 3D human motion capturing method and system based on monocular RGB input, electronic equipment and storage medium
CN114120371A (en) System and method for diagram recognition and action correction
Yang et al. Wushu movement evaluation method based on Kinect
Liang et al. Interactive Experience Design of Traditional Dance in New Media Era Based on Action Detection
CN117423166B (en) Motion recognition method and system according to human body posture image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination