CN116612857A - Motion monitoring system, method and storage medium for shoulder and neck rehabilitation - Google Patents

Motion monitoring system, method and storage medium for shoulder and neck rehabilitation

Info

Publication number
CN116612857A
Authority
CN
China
Prior art keywords
motion
user
information
training
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310627991.5A
Other languages
Chinese (zh)
Inventor
董皓
李景阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Allin Technology Co ltd
Original Assignee
Beijing Allin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Allin Technology Co ltd
Priority to CN202310627991.5A
Publication of CN116612857A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 - ICT specially adapted for therapies or health-improving plans, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/94 - Hardware or software architectures specially adapted for image or video understanding
    • G06V10/945 - User interactive design; Environments; Toolboxes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A motion monitoring system for shoulder and neck rehabilitation comprises a memory and a processor, the processor comprising at least: an acquisition module configured to acquire image information; a joint point identification module configured to identify joint points, corresponding joint point coordinates, and predicted value information in the image information based on a trained visual model; a human body determination module configured to determine whether the image information contains complete human body information or key joint point information related to a training exercise to be performed; an action guidance module configured to obtain training exercise information; an action determination module configured to determine whether a training exercise of the user meets an expected requirement; and an output module configured to output related prompts. The application also relates to a motion monitoring method and a storage medium for implementing the method. The application can optimize the computational complexity of monitoring and feeding back motion, and can feed back shoulder and neck rehabilitation motion accurately and in a timely manner.

Description

Motion monitoring system, method and storage medium for shoulder and neck rehabilitation
Technical Field
The application relates to a motion monitoring system, a method and a storage medium for shoulder and neck rehabilitation.
Background
In the prior art, after a patient undergoes a shoulder and neck operation, the patient needs to perform certain exercises every day to restore the mobility of the shoulder and neck and return to normal activities and life. However, when exercising on their own, patients cannot determine whether their movements are standard, whether the exercise reaches the required standard, or whether the purpose of the shoulder and neck exercise is being achieved. Thus, there is a need for a motion monitoring system that provides monitoring, guidance and similar prompts to a patient during training.
However, existing motion monitoring and feedback systems for guiding a user through shoulder and neck rehabilitation have various significant problems. On the one hand, the motion monitoring products for shoulder and neck rehabilitation currently on the market are basically exercise machines or rehabilitation devices, which can only constrain the user's actions through their mechanical and physical construction, but cannot monitor nonstandard actions and feed them back to the user. On the other hand, there are also some motion monitoring products for shoulder and neck rehabilitation based on wearable sensors, but such products require relatively complex wearing and calibration procedures before motion monitoring can begin, which is a significant burden for a user who needs to perform shoulder and neck rehabilitation. Further, while some products monitor motion based on real-time video acquisition, most of them monitor and give feedback by comparing the monitored motion with a standard motion, in which case the computational process is relatively complex and the computing power requirements are relatively high. In addition, most rehabilitation devices on the market target cervical vertebra rehabilitation, and their monitoring of shoulder movement is not ideal.
Thus, there is a need for a motion monitoring system for shoulder and neck rehabilitation that, on the one hand, optimizes the computational complexity of monitoring and feeding back the user's motion and, on the other hand, enables accurate and timely feedback on the shoulder and neck rehabilitation motion.
Disclosure of Invention
According to a first aspect of the present application, there is provided a motion monitoring system for shoulder and neck rehabilitation, wherein it comprises a memory; and a processor, the processor comprising at least: an acquisition module configured to acquire image information of a user; a joint point identification module configured to identify, based on a trained visual model, the joint points of the user in the image information acquired by the acquisition module, the coordinates of the corresponding joint points in the image information, and predicted value information of the identified joint points; a human body determination module configured to determine, based on the predicted value information of the joint points identified by the joint point identification module, whether the acquired image information contains complete human body information or key joint point information related to a training exercise to be performed; an action guidance module configured to acquire training exercise information for guiding the user through shoulder and neck rehabilitation based on an action or a series of actions selected by the user; an action determination module configured to determine whether a training motion performed by the user according to the action guidance module meets an expected requirement; and an output module configured to output related prompts based on the determination result of the human body determination module and/or the action determination module, and/or based on the training exercise information of the action guidance module.
Further, the action determination module is further configured to determine whether a shooting angle for the image information is correct.
Further, the processor also includes a particular joint exclusion module configured to receive particular joint exclusion information manually entered by the user.
Further, the action determination module determines whether the training motion currently performed by the user meets the expected requirement based on the angle and/or coordinate positional relationship between the joint points, among those identified by the joint point identification module, that are related to the training action when the user performs the training motion.
Further, for a prone position training motion, when the action determination module determines that the shooting angle is incorrect, the action determination module performs coordinate processing based on the Pythagorean theorem, using the angle of the connecting line between the shoulder joint and the hip joint with respect to the Y axis and the difference between the corresponding X-axis coordinates of the shoulder joint and the hip joint, and calculates the shooting angle such that the transformed coordinates correspond to the coordinates of image information taken at a correct shooting angle, wherein the Y axis and the X axis are parallel to or coincident with the vertical direction and the lateral direction, respectively.
Further, the trained visual model performs recognition training for the joints of the human body based on pre-acquired images and/or image streams and/or video data covering all poses of various persons, and these poses include prone position data.
Further, the motion monitoring system also includes a human-machine interaction interface configured to receive initial information entered by the user, the initial information including specific joint exclusion information and training action selection information of the user.
According to another aspect of the present application, there is provided a motion monitoring method for shoulder and neck rehabilitation, wherein the motion monitoring method comprises at least the steps of: starting a motion monitoring system for shoulder and neck rehabilitation; selecting a training exercise by a user; prompting the user to assume a correct initial position; acquiring image information of the user in real time; identifying, based on a trained visual model, the joint points of the user in the acquired image information, the corresponding joint point coordinates, and predicted value information of the identified joint points; making a human body determination based on the predicted value information of the identified joint points to determine whether a complete human body is detected in the image information or whether key joint point information related to the training action to be performed is contained; if it is determined that a complete human body is detected or the key joint points are contained in the image information, prompting the user to change orientation and/or to perform the action according to the training action information; determining whether the action executed by the user meets the requirement based on the image information acquired while the user performs the action; and, if the action does not meet the requirement, prompting the user to correct the action.
Further, for the prone position training action, the method further includes a determination process of determining whether the shooting angle is correct.
According to another aspect of the present application, a storage medium storing instructions implementing a motion monitoring method for shoulder and neck rehabilitation is provided.
By means of the motion monitoring system, motion monitoring method and storage medium for shoulder and neck rehabilitation according to the present application, the computational complexity of monitoring and feeding back the user's motion can be optimized on the one hand, and the shoulder and neck rehabilitation motion can be fed back accurately and in a timely manner on the other hand.
Drawings
The above and other advantages of the application will now be described with reference to the accompanying drawings, which are for illustrative purposes only, wherein:
FIG. 1 shows a schematic block diagram of a motion monitoring system for shoulder and neck rehabilitation according to the present application;
FIG. 2 shows a flow chart of a motion monitoring method for shoulder and neck rehabilitation in accordance with a preferred embodiment of the present application;
FIG. 3 shows an overall structural diagram of a motion monitoring system for shoulder and neck rehabilitation according to a preferred embodiment of the present application.
Detailed Description
In the embodiments of the present application, three mutually perpendicular directions are used for the description. Specifically, the vertical direction Y or Y axis coincides with the direction of gravity. The longitudinal direction Z or Z axis is parallel to the horizontal ground and perpendicular to the vertical direction Y. The transverse direction X or X axis is also parallel to the horizontal ground and perpendicular to both the vertical direction Y and the longitudinal direction Z, so that the plane containing the transverse direction X and the longitudinal direction Z is parallel to the horizontal ground. Further, the transverse direction X may also be defined more specifically as the horizontal direction displayed in the image.
Fig. 1 shows a schematic block diagram of a motion monitoring system 1 for shoulder and neck rehabilitation according to the application.
A motion monitoring system for shoulder and neck rehabilitation according to an embodiment of the present application comprises at least a memory 14 and a processor 12, the processor 12 comprising at least: an acquisition module 1202, a joint point identification module 1204, a human body determination module 1206, an action guidance module 1208, an action determination module 1210 and an output module 1212. As will be appreciated by those skilled in the art, in the context of the present application, this division into modules implementing the functions of the processor is merely exemplary.
The acquisition module 1202 is configured to acquire image information of a user. As an example, in an embodiment of the application, the acquisition module 1202 of the motion monitoring system 1 for shoulder and neck rehabilitation may be an optical acquisition device such as a camera. The image information of the user obtained by the acquisition module 1202 is passed to the other modules in the processor for further processing. In an embodiment of the application, the acquisition module is configured to acquire image information of the user in real time.
The joint point identification module 1204 is configured to identify, based on the trained visual model, the joint points of the user in the image information acquired by the acquisition module 1202, the corresponding joint point coordinates in the image information, and predicted value information of the identified joint points. By way of example, the joint points identified by the joint point identification module 1204 include 17 joint points such as the eyes, ears, nose, shoulders, elbows, wrists, hips, knees and ankles, but this is merely exemplary, and one skilled in the art can add, remove or transform one or more of these joint points as appropriate without departing from the scope of the application. Further, in embodiments of the present application, the trained visual model performs recognition training of the joints of the human body based, for example, on pre-acquired images and/or image streams and/or video data covering all poses of various persons. In an embodiment of the application, the poses of the images and/or image streams and/or video data used for training the visual model include prone position data. In an embodiment of the present application, the predicted value information of an identified joint point represents the probability value produced by the visual model for that joint point. It should be understood that, in the trained visual model, an identified joint point in the acquired image information actually corresponds to the pixel (pixel coordinate) in the image that has the highest probability of being that particular joint point; therefore, in the embodiment of the present application, the probability associated with the pixel (pixel coordinate) of the identified joint point is taken as the predicted value information (probability information) of the joint point. As will be appreciated by one of ordinary skill in the art, within the scope of the present application, the joint point identification module 1204 may further be configured to identify, in real time and based on the trained visual model, the joint points of the user in the image information acquired by the acquisition module 1202, the corresponding joint point coordinates in the image information, and the predicted value information of the identified joint points. Further, in embodiments of the present application, the visual model is pre-trained using, for example, TensorFlow.
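As an illustration of this identification step, the following sketch loads a publicly available 17-keypoint pose model and extracts, for a single frame, the joint point coordinates and the per-joint predicted values (confidence scores) described above. The patent does not name a concrete model; MoveNet from TensorFlow Hub is used purely as an assumed stand-in because it is a TensorFlow model that outputs exactly the 17 COCO joint points listed above, and the model URL and tensor layout below belong to that library, not to the patent.

```python
import tensorflow as tf
import tensorflow_hub as hub

# A minimal sketch, assuming a MoveNet-style single-pose model from TensorFlow Hub.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

# Acquire one frame of image information (a file here; a camera stream in the system).
image = tf.image.decode_jpeg(tf.io.read_file("frame.jpg"))
inputs = tf.image.resize_with_pad(tf.expand_dims(image, axis=0), 192, 192)
inputs = tf.cast(inputs, tf.int32)

# Output shape [1, 1, 17, 3]: for each of the 17 joint points, normalized
# (y, x) coordinates plus a confidence score, i.e. the "predicted value
# information" of the identified joint point.
keypoints = movenet(inputs)["output_0"][0, 0].numpy()
coords, scores = keypoints[:, :2], keypoints[:, 2]
```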
The human body determination module 1206 is configured to determine whether the acquired image information contains complete human body information based on the predicted value information of the joint points identified by the joint point identification module 1204. Specifically, since each joint point identified by the joint point identification module 1204 carries predicted value information (probability information) indicating how likely the pixel (pixel coordinate) corresponding to the identified joint point really is that joint point, it is possible to determine whether the acquired image information contains complete human body information based on these predicted values. In particular and without limitation, it may be determined whether the predicted value of every joint point identified by the joint point identification module 1204 is greater than a predetermined threshold, for example greater than 0.5, indicating that all of the identified joint points are likely to be real human joint points, so that the image information can be determined to contain complete human body information. Conversely, it may be determined whether the predicted value of every identified joint point is less than a predetermined threshold, for example less than 0.2, indicating that none of the identified joint points is likely to be a real human joint point, so that the image information can be determined to contain no human body information at all. As another example, independent comprehensive calculations, such as summation, logarithms and absolute values, or minimum value decisions, may be performed on the predicted values of the identified joint points to determine whether the image information contains complete human body information. As an example of summation, the predicted values associated with the identified joint points may be summed and the sum compared with a predetermined value to determine whether complete human body information is contained in the acquired image information. As another example, instead of such an independent calculation, the comprehensive calculation may be performed using a predetermined calculation method integrated into TensorFlow or another similar visual training framework; the invocation and principle of such predetermined calculation methods are well known in the art and are not described in detail here.
As an example of the minimum value decision in the independent comprehensive calculation described above, the minimum value among the predicted values associated with the identified joint points may be compared with a predetermined value to determine whether complete human body information is contained in the acquired image information. For example, when the minimum predicted value is less than 0.1, at least one identified joint point is highly unlikely to be a real joint point, and it may be determined that the acquired image information does not contain complete human body information. Of course, the predicted values smaller than the predetermined value may further be counted, and the determination that the acquired image information does not contain complete human body information may be made when the minimum value is below the predetermined value and the number of predicted values below the predetermined value is below a certain total number.
Alternatively, the human body determination module 1206 is configured to determine, based on the predicted value information of the joint points identified by the joint point identification module 1204, whether the acquired image information contains key joint point information related to the shoulder and neck rehabilitation exercise or training action to be performed. The movements or actions of shoulder and neck rehabilitation will be described in detail later. A shoulder and neck training exercise or training action may be relevant only to specific joint points, so the motion monitoring system 1 does not need to keep monitoring other joint points that are irrelevant to the exercise or action; it only needs to effectively cover the key joint points related to the shoulder and neck rehabilitation exercise or action to be performed. This can significantly reduce the burden on the monitoring system 1 when monitoring exercises or training actions and allows it to adapt to images/image streams with different fields of view or viewing angles. The determination logic of this module is sketched below.
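The human body determination described above can be summarized in a short sketch. The 0.5 and 0.2 thresholds and the summation fallback follow the examples in the description; the exact combination rule is a design choice of the implementer, and the function names and key-joint parameters are illustrative only.

```python
COCO_JOINTS = ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
               "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
               "left_wrist", "right_wrist", "left_hip", "right_hip",
               "left_knee", "right_knee", "left_ankle", "right_ankle"]

def complete_human_detected(scores, hi=0.5, lo=0.2):
    """Human body determination from the predicted values of all 17 joint points."""
    if all(s > hi for s in scores):
        return True           # every joint plausibly present: complete human body
    if all(s < lo for s in scores):
        return False          # no joint plausible: no human body in the image
    # Independent comprehensive calculation, e.g. summing the predicted values
    # and comparing the sum against a predetermined value.
    return sum(scores) > hi * len(scores)

def key_joints_detected(scores, required, threshold=0.5):
    """Key-joint variant: only the joints relevant to the selected exercise are checked."""
    index = {name: i for i, name in enumerate(COCO_JOINTS)}
    return all(scores[index[name]] > threshold for name in required)

# For example, a shoulder action might only require one arm's joints:
# key_joints_detected(scores, ["left_shoulder", "left_elbow", "left_wrist"])
```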
Alternatively and optionally, in embodiments of the application, the processor 12 may also include a specific joint exclusion module 1214, which may receive specific joint exclusion information manually entered by a user. For users who are missing certain limbs or organs, the specific joint exclusion information acquired by the specific joint exclusion module 1214 may be applied to the acquired image information before it undergoes the various processing steps, for example by the joint point identification module 1204. Accordingly, in this case, the trained visual model used by the joint point identification module 1204 also performs recognition training of the joints of the human body based, for example, on pre-acquired images and/or image streams and/or video data covering all poses of various persons, including poses in which specific joint points are absent.
The action guidance module 1208 is configured to acquire training exercise information for guiding the user through shoulder and neck rehabilitation based on the action or series of actions selected by the user. As an example, movement information for performing shoulder and neck rehabilitation is stored in the memory 14, and the related selected data is acquired by the action guidance module 1208 based on the user's selection. For example, when a user selects an action, the action guidance module 1208 obtains the training name and acquires the related training exercise information. The training exercise information includes, for example, the distance and orientation of the user relative to the motion monitoring system 1, the placement of the motion monitoring system 1, the action key points the user needs to pay attention to, and various timing information related to the training exercise. For example, the training exercise information includes information instructing the user to face the camera and stand about 3 meters from the cell phone, the start time of the training exercise, the duration of the training exercise, and so on.
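The training exercise information retrieved by the action guidance module might be organized as in the following sketch. The field names are hypothetical, since the patent only enumerates the kinds of information stored (distance, orientation, action key points and timing).

```python
from dataclasses import dataclass, field

@dataclass
class TrainingExerciseInfo:
    # Illustrative structure for the selected data retrieved from the memory 14.
    name: str                   # training name selected by the user
    orientation_hint: str       # e.g. "face the camera"
    distance_hint: str          # e.g. "about 3 meters from the cell phone"
    key_points: list = field(default_factory=list)  # action key points to attend to
    start_time_s: float = 0.0   # start time of the training action
    duration_s: float = 30.0    # duration of the training action
```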
The action determination module 1210 is configured to determine whether the training motion performed by the user according to the action guidance module 1208 meets the expected requirements. In an embodiment of the present application, the action determination module is configured to determine whether the training motion currently performed by the user meets the expected requirement based on the positional relationship between the joint points, among those identified by the joint point identification module 1204, that are related to the training action when the user performs the training motion. Further, in an embodiment of the present application, the action determination module 1210 determines whether the training motion currently performed by the user meets the expected requirement based on the angular and/or coordinate positional relationship between those joint points related to the training action. Of course, as will be appreciated by those skilled in the art, the criteria described above are merely exemplary; for example, within the scope of the present application, the action determination module 1210 may also be configured to determine whether the training motion currently performed by the user meets the expected requirement based on the change, or the rate or magnitude of change, of the positional relationship between the identified joint points across the image frames acquired while the user performs the training motion. By way of example and not limitation, the action determination module 1210 is configured to determine the positional relationship between the joint points related to the training action based on the connecting lines between the related joint points.
Although specific functions of the action determination module 1210 are described herein, in an embodiment of the present application the action determination module 1210 is further configured to determine whether the shooting angle of the image information is correct. Specifically, the action determination module 1210 is further configured to determine whether the shooting angle is correct based on the coordinate relationship of selected ones of the identified joint points. For example, for a user in the prone position, when the Y coordinates of the shoulder joint and the hip joint on the side close to the ground are substantially the same (i.e., the connecting line between the shoulder joint and the hip joint is substantially horizontal, for example with an angular deviation of less than 30°), the action determination module determines that the shooting angle is correct and performs no additional processing on the coordinates of the image information; otherwise, it determines that the shooting angle is incorrect. When the action determination module determines that the shooting angle is incorrect, on the one hand a prompt to adjust the angle may be output via the output module 1212; on the other hand, the action determination module 1210 may perform coordinate processing based on the Pythagorean theorem, specifically based on the angle of the connecting line between the shoulder joint and the hip joint with respect to the Y axis and the difference between the corresponding X-axis coordinates of the shoulder joint and the hip joint, and calculate the corresponding shooting angle such that the transformed coordinates correspond to the coordinates of image information taken at a correct shooting angle. The action determination module 1210 can then determine, in the subsequent action determination process, whether the action meets the requirement based on the shooting angle and the correspondingly processed coordinates.
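A sketch of this correction follows. The patent derives the transform from the Pythagorean theorem using the shoulder-hip angle and the X-coordinate difference; the mathematically equivalent atan2/rotation form is used here as an interpretation, and the 30° tolerance is the example value from the description.

```python
import math

def correct_prone_coordinates(points, shoulder, hip, tol_deg=30.0):
    """points: {joint_name: (x, y)} image coordinates; shoulder and hip are the
    ground-side joints. Returns (tilt in degrees, coordinates transformed so that
    they correspond to image information taken at a correct shooting angle)."""
    dx, dy = hip[0] - shoulder[0], hip[1] - shoulder[1]
    tilt = math.degrees(math.atan2(dy, dx))  # tilt of the shoulder-hip line vs. horizontal
    if abs(tilt) < tol_deg:
        return tilt, points                  # shooting angle correct: no extra processing
    # Rotate every joint coordinate about the shoulder so the trunk line becomes
    # horizontal, matching the coordinates of a correctly angled image.
    a = math.radians(-tilt)
    c, s = math.cos(a), math.sin(a)
    corrected = {name: (shoulder[0] + (x - shoulder[0]) * c - (y - shoulder[1]) * s,
                        shoulder[1] + (x - shoulder[0]) * s + (y - shoulder[1]) * c)
                 for name, (x, y) in points.items()}
    return tilt, corrected
```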
The output module 1212 is configured to output related prompts for the training action based on the determination results of the human body determination module 1206 and/or the action determination module 1210 and based on the training exercise information of the action guidance module 1208. As an example, the output module 1212 can be configured to output a negative determination made by the human body determination module 1206 and/or the action determination module 1210, or to feed information related to the training exercise back to the user as audio and/or video or a combination thereof, so that the user can make the corresponding improvements and meet the various requirements. Advantageously, the output module 1212 is preferably in the form of a screen, such as a liquid crystal display or an organic light emitting diode display. It is contemplated that, by way of example and not limitation, the output device may also be output hardware such as a voice broadcast device or a projection device, or a combination thereof.
Optionally, in an embodiment of the present application, the motion monitoring system 1 of the present application further comprises a human-machine interaction interface, wherein the human-machine interaction interface is configured to receive initial information input by a user, the initial information including, for example, specific joint exclusion information and training action selection information of the user. The human-machine interaction interface is configured to receive the initial information in any form the user can provide (e.g., voice input, text input, image recognition). By way of example and not limitation, the human-machine interaction interface may be implemented as any hardware that can receive initial information entered by a user, such as a keyboard, mouse, touch screen, joystick or microphone, or a combination thereof.
In the field of shoulder and neck rehabilitation, as an example, training actions can generally be divided into standing posture training actions and prone posture training actions. Accordingly, the operation flow of the motion monitoring system 1 for shoulder and neck rehabilitation described in the present application will be illustrated below for a standing posture training action and a prone posture training action, respectively; it should be noted, however, that the present application is not limited by the training action or series of training actions actually selected by the user. The division into standing and prone training actions is merely exemplary, as one skilled in the art will appreciate that other types of training actions may exist, such as sitting posture training actions. Still further, although the embodiments of the present application are described in terms of a motion monitoring system 1 for shoulder and neck rehabilitation exercise, the scope of the present application is not limited thereto; based on the ideas and design of the present application, the motion monitoring system may also be used for rehabilitation exercise monitoring of other body parts, or even in settings such as monitoring whether an athlete's actions are standard.
Now, the specific operation of the motion monitoring system 1 of the present application for standing posture training actions will be described in detail. After the user starts the motion monitoring system 1, the user inputs specific joint exclusion information (optionally) and selects a training exercise via the human-machine interaction interface, and the action guidance module 1208 then prompts the user, via the output module 1212, to face the camera and stand about 3 meters away from the cell phone. The acquisition module 1202 starts acquiring image information of the user in real time, and the joint point identification module 1204 starts identifying, based on the trained visual model, the joint points of the user in the image information acquired by the acquisition module 1202, the corresponding joint point coordinates in the image information, and the predicted value information of the identified joint points. A certain time interval after the user selects the training exercise, or in real time, the human body determination module 1206 starts to make the human body determination based on the predicted value information of the joint points identified by the joint point identification module. When the human body determination module 1206 determines that a complete human body has not been detected, or that the key joint points related to the training action to be performed have not been detected, a prompt that the human body determination failed is output via the output module 1212, the user is prompted to adjust, and the human body determination process is repeated until it succeeds. When the human body determination succeeds, the action guidance module 1208 prompts the user, via the output module 1212, to change orientation (e.g., from facing the camera to another required orientation) and/or to perform the action according to the training action information.
For the standing posture training motion, the training motion may include an initial action, for example both arms hanging naturally and close to the thighs. In this case, the shoulder, elbow and wrist joint coordinates determined by the joint point identification module 1204 based on the image information acquired by the acquisition module 1202 have substantially equal X coordinates but considerably different Y coordinates. The action determination module 1210 therefore determines the angle between the connecting line of the shoulder joint and wrist joint coordinates and the Y axis/X axis; for this action, the angle with respect to the Y axis/X axis should be substantially 0°/90°. Taking the Y axis as an example, when the included angle is <=10°, the action determination module 1210 determines that the initial action meets the requirement, and the action guidance module then outputs the next action via the output module 1212, for example raising both hands above the head. When the included angle is >10°, the action determination module 1210 determines that the initial action does not meet the requirement and outputs, via the output module 1212, a prompt that the hands should hang naturally, close to the legs. A sketch of this test follows.
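This initial-action test reduces to a short angle check; the 10° threshold is the example value given above, and the helper names are illustrative.

```python
import math

def angle_to_y_axis(p1, p2):
    """Angle in degrees between the p1->p2 connecting line and the vertical Y axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def initial_action_ok(shoulder, wrist, max_deg=10.0):
    """Arms hanging naturally: the shoulder-wrist line is within 10 degrees of vertical."""
    return angle_to_y_axis(shoulder, wrist) <= max_deg
```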
As an example, when the user performs the next action, the user's two arms may be required to move consistently. The action determination module 1210 is configured to determine, based on the connecting lines between the shoulder joint and wrist joint coordinates, whether the two arms remain parallel throughout the action; when they do not remain parallel, the action determination module 1210 determines that the action does not meet the requirement and outputs, via the output module 1212, a prompt that the action is nonstandard and that the arms must remain parallel.
As an example, when the user needs to keep an arm straight while performing the next action, the action determination module 1210 is configured to determine whether the angle between the shoulder-elbow line and the elbow-wrist line is <10° or >170°; when the angle is not within these ranges, the action determination module 1210 determines that the action does not meet the requirement and outputs a corresponding prompt via the output module 1212.
As an example, when the user performs the next action and is required to hold a certain posture (e.g., keep the arm horizontal) for a certain time, the action determination module 1210 is configured to determine whether the arm is rising or falling based on the change in the angle of the shoulder joint to wrist joint connecting line with respect to a specific coordinate axis (e.g., the X, Y or Z axis) between the image information acquired by the acquisition module 1202 at two moments. For example, for a standard action, if the angle of the shoulder-wrist line with respect to the Y axis at the earlier moment is smaller than the angle at the current moment and the difference is >5°, this indicates that the user's arm is sagging and the action requirement is not met, and a prompt that the action is nonstandard is output via the output module 1212. Of course, for some actions, when the action meets the requirement, the action determination module 1210 may prompt the user via the output module 1212 to hold the action. While the posture is held, the determination of whether the two arms are straight and whether they are parallel is consistent with the processes described above; these checks are sketched together below.
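The parallelism, straight-arm and hold checks described in the preceding paragraphs can be sketched as follows. The 10°/170° straight-arm range and the 5° sag threshold are the example values from the description; the 10° parallelism tolerance is an assumption, since no numeric value is given for it.

```python
import math

def segment_angle(p1, p2):
    """Direction of the p1->p2 connecting line, in degrees relative to the X axis."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def angular_diff(a, b):
    """Smallest absolute difference between two directions, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def arms_parallel(l_shoulder, l_wrist, r_shoulder, r_wrist, tol_deg=10.0):
    """Two arms parallel: the left and right shoulder-wrist lines share a direction."""
    return angular_diff(segment_angle(l_shoulder, l_wrist),
                        segment_angle(r_shoulder, r_wrist)) <= tol_deg

def arm_straight(shoulder, elbow, wrist):
    """Straight arm: upper-arm and forearm directions differ by <10 or >170 degrees."""
    d = angular_diff(segment_angle(shoulder, elbow), segment_angle(elbow, wrist))
    return d < 10.0 or d > 170.0

def arm_sagging(prev_angle_deg, curr_angle_deg, tol_deg=5.0):
    """Hold check between two frames: the shoulder-wrist angle with respect to
    the Y axis growing by more than 5 degrees indicates the arm is drooping."""
    return curr_angle_deg - prev_angle_deg > tol_deg
```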
Further, the specific operation of the motion monitoring system 1 of the present application for prone position training exercises will now be described in detail. When the user starts the motion monitoring system 1, the user inputs specific joint exclusion information (optionally) and selects a training exercise via the human-machine interaction interface, and the action guidance module 1208 then prompts the user, via the output module, to lie down with the arm facing the camera, and to place the cell phone at an angle that captures the whole body as far as possible, not shooting from the head or the feet. The acquisition module 1202 starts acquiring image information of the user in real time, and the joint point identification module 1204 starts identifying, based on the trained visual model, the joint points of the user in the image information acquired by the acquisition module 1202, the corresponding joint point coordinates in the image information, and the predicted value information of the identified joint points. A certain time interval after the user selects the training exercise, or in real time, the human body determination module 1206 starts to make the human body determination based on the predicted value information of the joint points identified by the joint point identification module 1204. When the human body determination module 1206 determines that a complete human body has not been detected, or that the key joint points related to the training action to be performed have not been detected, a prompt that the human body determination failed is output via the output module 1212, the user is prompted to adjust, and the human body determination process is repeated until it succeeds. When the human body determination succeeds, the action guidance module 1208 prompts the user, via the output module 1212, to change orientation and/or to perform the action according to the training action information.
It should be noted that, for a prone position training motion, the shooting angle of the acquisition module 1202 needs to be determined before the action guidance module 1208 outputs the training action information; the shooting angle is determined by the action determination module 1210 as described above and is not repeated here. Of course, performing this shooting angle determination via the action determination module 1210 is not mandatory but exemplary. In the embodiments below regarding prone position training actions, the action determination module 1210 is assumed to have already determined the shooting angle of the acquisition module 1202 and appropriately transformed the coordinates of the image information, so that the action determination module 1210 can make action determinations based on a normally placed XYZ coordinate system.
As an example, for a prone position training motion, the training motion may include an initial action; for example, the initial action of the prone dumbbell shoulder supination training requires the affected arm to be parallel to the Y axis, so the action determination module 1210 makes the determination based on whether the connecting line between the elbow joint and the wrist joint is parallel to the Y axis. This is similar to the initial action case for the standing posture training motion described above and is not repeated here.
As an example, when the user performs the next action and is required, for example, to hold a certain posture (e.g., keep the forearm upright) for a certain time, the action determination module 1210 is configured to determine whether the arm is rising or falling based on the change in the angle of the elbow-wrist connecting line and/or the shoulder-elbow connecting line of the affected side with respect to a specific coordinate axis (e.g., the X, Y or Z axis) between the image information acquired by the acquisition module 1202 at two moments. The determination process is similar to that for keeping the arm horizontal in the standing posture training motion described above and is not repeated here. When the action does not meet the criteria, the action determination module 1210 outputs a prompt via the output module 1212 that the action does not meet the requirement.
Although the specific processes of the motion monitoring system 1 for shoulder and neck rehabilitation of the present application are described in detail above for the standing posture training actions and the prone posture training actions, respectively, these processes are merely exemplary and not limiting. Those skilled in the art can make various suitable modifications based on the disclosure of the present application without departing from the scope of the application.
Advantageously, the memory 14 of the motion monitoring system 1 for shoulder and neck rehabilitation within the scope of the present application may comprise hardware storage capable of storing data, for example a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or a magnetic or optical disk. Further, the memory according to the present application may comprise software storage such as a database or cloud storage. Further, the memory may also store any software program implementing the functions of the motion monitoring system 1 for shoulder and neck rehabilitation of the present application.
Fig. 2 shows a flow chart of a motion monitoring method for shoulder and neck rehabilitation according to a preferred embodiment of the application. In an embodiment of the application, a motion monitoring method for shoulder and neck rehabilitation comprises the following steps.
At step 200, a motion monitoring system for shoulder and neck rehabilitation is activated.
At step 202, a training exercise is selected by the user. Step 202 may also include receiving information input by the user, such as specific joint exclusion information.
At step 204, the user is prompted to assume the correct initial position. For example, the user is prompted to face the camera, stand about 3 meters from the cell phone, and so on.
At step 206, image information of the user is acquired in real time.
At step 208, the user's node points, corresponding node point coordinates, and predicted value information for the identified node points in the acquired image information are identified based on the trained visual model.
At step 210, a human body decision is made based on the predicted value information of the identified joint points to determine whether a complete human body is detected in the image information or whether a critical joint point related to the training action to be performed is detected.
At step 212, when the human body determination succeeds, the user is prompted to change orientation and/or to perform the action according to the training action information.
At step 214, it is determined whether the action performed by the user meets the requirements based on the acquired image information.
At step 216, the user is prompted to revise the action when the action does not meet the requirements.
For prone position training actions, the method may also include a determination of whether the shooting angle is correct; see, for example, the related description of the action determination module above. The steps above can be summarized as the loop sketched below.
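Putting the steps together, the method maps naturally onto a monitoring loop. The `system` object below bundles hypothetical callables for the modules described above; it is not an interface defined by the patent, only a sketch of how steps 200 to 216 connect.

```python
def monitoring_loop(system):
    """A minimal sketch of steps 202-216 under the stated assumptions."""
    exercise = system.select_training_exercise()            # step 202
    system.prompt(exercise.orientation_hint)                # step 204
    while system.running:
        frame = system.acquire_frame()                      # step 206
        joints, scores = system.identify_joints(frame)      # step 208
        if not system.human_body_ok(scores, exercise):      # step 210
            system.prompt("Whole body or key joints not detected; please adjust")
            continue
        system.prompt_next_action(exercise)                 # step 212
        if not system.action_meets_requirement(joints, exercise):   # step 214
            system.prompt("Action nonstandard; please correct it")  # step 216
```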
As shown in FIG. 3, which shows an overall structural diagram of a motion monitoring system 1 for shoulder and neck rehabilitation according to an embodiment of the present application, the motion monitoring system for shoulder and neck rehabilitation, based on the same inventive concept, generally comprises at least the following components: a processor 301, a memory 302, a communication interface 303 and a bus 304. The processor 301, the memory 302 and the communication interface 303 communicate with each other through the bus 304; the communication interface 303 is used for the information interaction of the motion monitoring system for shoulder and neck rehabilitation and for information transmission with other software or hardware; and the processor 301 is configured to invoke a computer program in the memory 302 which, when executed, implements the procedure performed by the motion monitoring system for shoulder and neck rehabilitation as described previously herein.
Based on the same inventive concept, a further embodiment of the present application provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the procedure performed by the motion monitoring system 1 for shoulder and neck rehabilitation as described above, which is not repeated here.
Further, the logic instructions in the memory described above may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art or in part, may be embodied in a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the procedures performed by the motion monitoring system for shoulder and neck rehabilitation according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The working principle and beneficial effects of the computer program stored on the computer-readable storage medium provided by the embodiment of the present application are similar to those of the motion monitoring system for shoulder and neck rehabilitation provided by the above embodiments; for details, refer to the description of the above embodiments, which is not repeated here.
The apparatus embodiments described above are merely illustrative, wherein the components illustrated as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiments of the application. Those of ordinary skill in the art can understand and implement the present application without undue effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware. Based on such understanding, the foregoing technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a software product which may be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the various embodiments or portions of the embodiments.
It should also be understood that various modifications may be made according to specific requirements. For example, custom hardware may also be used, and/or particular elements may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed motion monitoring system and the processes performed thereby may be implemented, using logic and algorithms in accordance with the present disclosure, by programming hardware (e.g., programmable logic circuits including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or a hardware programming language such as Verilog, VHDL or C++.
It should also be appreciated that the process performed by the motion monitoring system for shoulder and neck rehabilitation described above may be implemented in a server-client mode. For example, a client may receive data entered by a user and send the data to a server. The client may also receive data input by the user, perform part of the processing performed by the motion monitoring system for shoulder and neck rehabilitation, and send the processed data to the server. The server may receive data from the client, perform another part of the procedure performed by the motion monitoring system for shoulder and neck rehabilitation, and return the result of the execution to the client. The client may receive the result of the execution from the server and may present it to the user, for example through the output module.
It should also be appreciated that the components of the motion monitoring system for shoulder and neck rehabilitation may be distributed over a network. For example, some processing may be performed using one processor while other processing is performed by another processor remote from the first. Other components of the motion monitoring system for shoulder and neck rehabilitation may be similarly distributed. In this way, the motion monitoring system for shoulder and neck rehabilitation may be interpreted as a distributed computing system that performs processing at multiple locations.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems and apparatus are merely exemplary embodiments or examples, and that the scope of the present application is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced by equivalent elements. Furthermore, the steps may be performed in a different order than described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements appearing after the present disclosure.

Claims (10)

1. A motion monitoring system for shoulder and neck rehabilitation, wherein it comprises a memory; and
a processor, the processor comprising at least:
an acquisition module configured to acquire image information of a user;
a joint point identification module configured to identify a joint point of the user in the image information acquired by the acquisition module, coordinates of a corresponding joint point in the image information, and predicted value information of the identified joint point based on a trained visual model;
a human body determination module configured to determine whether or not the acquired image information contains complete human body information or key joint point information related to a training exercise to be performed, based on the predicted value information of the joint point identified by the joint point identification module;
an action guidance module configured to obtain training exercise information for guiding the user for shoulder and neck rehabilitation based on the action or series of actions selected by the user;
an action determination module configured to determine whether the training motion performed by the user according to the action guidance module meets an expected requirement; and
an output module configured to output a related prompt based on a determination result of the human body determination module and/or the action determination module, and/or based on the training exercise information of the action guidance module.
2. The motion monitoring system of claim 1, wherein the action determination module is further configured to determine whether a shooting angle for the image information is correct.
3. The motion monitoring system of claim 1, wherein the processor further comprises a particular joint exclusion module configured to receive particular joint exclusion information manually entered by the user.
4. The motion monitoring system of claim 1, wherein the action determination module determines whether the training motion currently performed by the user meets an expected requirement based on the angular and/or coordinate positional relationship between the joint points, among those identified by the joint point identification module, that are related to the training action as the user performs the training motion.
5. The motion monitoring system according to claim 2, wherein, for a prone position training motion, when the action determination module determines that the shooting angle is incorrect, the action determination module performs coordinate processing based on the Pythagorean theorem, using the angle of the connecting line between a shoulder joint and a hip joint with respect to a Y axis and the difference between the corresponding X-axis coordinates of the shoulder joint and the hip joint, and calculates the shooting angle such that the transformed coordinates correspond to the coordinates of the image information at a correct shooting angle, wherein the Y axis and the X axis are parallel to or coincident with a vertical direction and a lateral direction, respectively.
6. The motion monitoring system according to claim 1, wherein the trained visual model performs recognition training for the joints of a human body based on pre-acquired images and/or image streams and/or video data covering all poses of various persons, and the poses of the images and/or image streams and/or video data include prone position data.
7. The motion monitoring system of claim 1, further comprising a human-machine interaction interface configured to receive initial information entered by the user, the initial information including specific joint exclusion information and training action selection information of the user.
8. A motion monitoring method for shoulder and neck rehabilitation, wherein the motion monitoring method comprises at least the steps of:
starting a motion monitoring system for shoulder and neck rehabilitation;
selecting a training exercise by a user;
prompting the user to be in a correct initial position;
acquiring image information of the user in real time;
identifying, based on a trained visual model, the joint points of the user in the acquired image information, the corresponding joint point coordinates, and predicted value information of the identified joint points;
making a human body determination based on the predicted value information of the identified joint points to determine whether a complete human body is detected in the image information or whether key joint point information related to the training exercise to be performed is contained;
if it is determined that a complete human body is detected in the image information, prompting the user to change orientation and/or to perform the action according to the training action information;
determining whether the action executed by the user meets the requirement based on the image information acquired while the user performs the action; and
if the action does not meet the requirement, prompting the user to correct the action.
9. The motion monitoring method according to claim 8, wherein, for a prone position training action, the method further comprises a determination process of determining whether a shooting angle is correct.
10. A storage medium storing instructions which, when executed, implement the method of claim 8 or 9.
CN202310627991.5A, filed 2023-05-31 (priority date 2023-05-31), published as CN116612857A (en), status Pending: Motion monitoring system, method and storage medium for shoulder and neck rehabilitation

Priority Applications (1)

Application Number: CN202310627991.5A
Priority Date: 2023-05-31
Filing Date: 2023-05-31
Title: Motion monitoring system, method and storage medium for shoulder and neck rehabilitation

Publications (1)

Publication Number: CN116612857A
Publication Date: 2023-08-18

Family

ID=87685180

Family Applications (1)

Application Number: CN202310627991.5A
Title: Motion monitoring system, method and storage medium for shoulder and neck rehabilitation
Priority Date: 2023-05-31
Filing Date: 2023-05-31
Status: Pending

Country Status (1)

Country Link
CN (1) CN116612857A (en)


Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination