CN115153517A - Testing method, device, equipment and storage medium for a timed up-and-go walking test - Google Patents

Testing method, device, equipment and storage medium for a timed up-and-go walking test

Info

Publication number
CN115153517A
Authority
CN
China
Prior art keywords: depth data, test, test action, frame, motion
Prior art date
Legal status
Granted
Application number
CN202210844714.5A
Other languages
Chinese (zh)
Other versions
CN115153517B (en)
Inventor
朱文成 (Zhu Wencheng)
冯振 (Feng Zhen)
Current Assignee
Beijing Zhongke Ruiyi Information Technology Co., Ltd.
Original Assignee
Beijing Zhongke Ruiyi Information Technology Co., Ltd.
Application filed by Beijing Zhongke Ruiyi Information Technology Co., Ltd.
Priority to CN202210844714.5A
Publication of CN115153517A
Application granted
Publication of CN115153517B
Legal status: Active


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1121: Determining geometric values, e.g. centre of rotation or angular range of movement

Abstract

The application discloses a testing method, device, equipment and storage medium for the timed up-and-go (TUG) walking test, and relates to the technical field of intelligent medical treatment. The implementation comprises the following steps: for each test action in the timed up-and-go test, acquiring the joint points of the subject's motion at the starting moment of that test action; and calculating the motion parameter value of the test action from the joint points at its starting moment. For the test actions in the timed up-and-go test, the method objectively provides standardized, quantitative motion parameter values, supplies an objective and quantitative data basis for clinical diagnosis, and helps improve the objectivity and accuracy of clinical diagnosis. Moreover, because the subject's joint point information and the motion parameter values of the test actions are recorded, the test process of the timed up-and-go test has a data record and can therefore be traced back.

Description

Testing method, device, equipment and storage medium for a timed up-and-go walking test
Technical Field
The application relates to the technical field of data processing, in particular to the field of intelligent medical treatment, and specifically to a testing method and device, electronic equipment and a storage medium for the timed up-and-go walking test.
Background
The Timed Up and Go test (TUG) is a reliable, economical, safe and time-saving method for assessing overall functional mobility. The subject is first required to sit on a chair, then stand up, walk straight ahead 3 meters at normal speed, turn around, walk the 3 meters back to the front of the chair, and then turn and sit down, while the whole sequence is timed.
At present, when the timed up-and-go test is used clinically to evaluate a subject's overall functional mobility, the only quantitative evaluation parameter available is the total time the subject takes to complete the test actions. The evaluation therefore depends heavily on the subjective judgment and experience of doctors, which ultimately causes deviations between the diagnoses of different doctors, and the evaluation process cannot be traced back.
Disclosure of Invention
Aiming at the problems of subjectivity and non-traceability of the timed up-and-go walking test in the prior art, the invention provides a testing method and device for the timed up-and-go walking test, electronic equipment and a storage medium.
According to a first aspect, there is provided a testing method for a timed up-and-go walking test, comprising:

for each test action in the timed up-and-go test, acquiring the joint points of the subject's motion at the starting moment of that test action;

and calculating the motion parameter value of the test action according to the joint points at its starting moment.
According to a second aspect, there is provided a testing apparatus for a timed up-and-go walking test, comprising:

a joint point acquisition unit configured to acquire, for each test action in the timed up-and-go test, the joint points of the subject's motion at the starting moment of that test action;

and a parameter calculation unit configured to calculate the motion parameter value of the test action according to the joint points at its starting moment.
According to a third aspect, there is provided an electronic device comprising: one or more processors; and a storage device storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the testing method for the timed up-and-go walking test.

According to a fourth aspect, there is provided a computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the method of any embodiment of the testing method for the timed up-and-go walking test.
According to the embodiments of the present application, after the joint points of the subject's motion at the starting moment of each test action are acquired, the motion parameter values of the test action can be calculated from the joint points at that moment. For the test actions in the timed up-and-go test, standardized, quantitative motion parameter values are thus provided objectively, supplying an objective and quantitative data basis for clinical diagnosis and helping to improve its objectivity and accuracy. Moreover, based on the subject's joint point information and the motion parameter values of the test actions, the test process of the timed up-and-go test has a data record and can therefore be traced back.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a testing method for the timed up-and-go walking test according to the present application;
FIG. 3 is a schematic diagram of the exercise process of the timed up-and-go walking test according to an embodiment of the present application;
FIG. 4 is a schematic view of human joint points according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a data acquisition device according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an application scenario of the testing method for the timed up-and-go walking test according to the present application;
FIG. 7 is a flow diagram of yet another embodiment of the testing method for the timed up-and-go walking test according to the present application;
FIG. 8 is a schematic diagram of one embodiment of a testing apparatus for the timed up-and-go walking test according to the present application;
FIG. 9 is a block diagram of an electronic device for implementing the testing method for the timed up-and-go walking test according to an embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application to assist in understanding, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the testing method or the testing apparatus for the timed up-and-go walking test of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as a video application, a live application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.
Here, the terminal apparatuses 101, 102 and 103 may be hardware or software. When they are hardware, they may be electronic devices that receive the depth information containing joint points collected by a vision device (including but not limited to depth sensors and depth cameras) and then calculate the motion parameter values of the test actions; such devices include but are not limited to smart phones, tablet computers, laptop computers and desktop computers. A terminal may also be an electronic device with an integrated vision device. When the terminal apparatuses 101, 102 and 103 are software, they can be installed in the electronic apparatuses listed above and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server may receive depth information including the joint points acquired by the vision device, further calculate motion parameter values of the test actions, and feed back a processing result (for example, the motion parameter values) to the terminal device.
It should be noted that the testing method for the timed up-and-go walking test provided in the embodiments of the present application may be executed by the server 105 or by the terminal devices 101, 102 and 103; accordingly, the testing apparatus for the timed up-and-go walking test may be disposed in the server 105 or in the terminal devices 101, 102 and 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a testing method for the timed up-and-go walking test according to the present application is shown. The testing method comprises the following steps:
step 201, for each test action in the timed-rise walking test, acquiring a joint point of the movement of the subject at the starting time of each test action.
Step 202, calculating motion parameter values of the test action according to the joint points at the starting time of the test action.
In this embodiment, the execution body of the testing method (for example, the server or a terminal device shown in fig. 1) acquires the joint points of the subject's motion at the starting moment of each test action and can then calculate the motion parameter value of the test action from the joint points at that moment.
In the method provided by the above embodiment of the present application, after the joint points of the subject's motion at the starting moment of each test action are acquired, the motion parameter values of the test action can be calculated from those joint points. For the test actions in the timed up-and-go test, the method objectively provides standardized, quantitative motion parameter values, supplying an objective and quantitative data basis for clinical diagnosis and giving clinicians uniform, comparable reference parameters; even different doctors obtain the same standardized, quantitative diagnostic parameters to assist their diagnosis, which helps improve the objectivity and accuracy of clinical diagnosis. Moreover, based on the subject's joint point information and the motion parameter values of the test actions, the test process of the timed up-and-go test has a data record and can be traced back. In addition, the image data acquired synchronously with the depth information over the full test period can be retained, so that the subject's condition can be evaluated longitudinally by combining the image data and the depth information.
In some optional implementations of this embodiment, a test action is an action to be performed by the subject in the timed up-and-go test. As shown in fig. 3, the subject is first required to sit on a chair, then stand up, walk straight ahead 3 meters at normal speed, turn around, walk the 3 meters back to the front of the chair, and turn and sit down. The test actions therefore include, but are not limited to: sitting still, standing up, walking, turning and sitting down.
In some optional implementations of this embodiment, in order to acquire the joint points of the subject's motion at the starting moment of each test action, depth information may be collected in time order for each test action over the full period of the timed up-and-go test, each test action corresponding to multiple frames of depth data. Because the different test actions are timed synchronously, the depth data acquired within the time period of a test action are the frames corresponding to that action; the joint points of the subject's motion at the starting moment of the action can therefore be acquired from the first frame of depth data in that time period.
In some optional implementations of the present embodiment, in order to acquire the moving joint points of the subject at the starting moment of each test action more accurately, the present embodiment also provides a method for doing so, comprising:
determining the type of test action corresponding to each frame of depth data from the depth information of the full period of the timed up-and-go walking test; for example, the types of test actions include, but are not limited to, sitting still, standing up, walking, turning and sitting down, and each frame of depth data corresponds to one type of test action;
for each test action, the frames of depth data corresponding to the same type of test action form one motion state; in time order, the first frame of depth data of that motion state is determined as the frame at the starting moment of that type of test action, and the joint points of the subject's motion are acquired from this first frame. For example, for the stand-up type, all frames of depth data of the stand-up test action form one motion state, which may be called the standing-up state; in the time order of the frames, the first frame of the standing-up state is determined as the frame at the starting moment of the stand-up test action, and the joint points of the subject's motion are acquired from that frame.
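A minimal Python sketch of this grouping, assuming the per-frame action types are already available as a label sequence, might look as follows (the function name and return format are illustrative assumptions, not part of the patent):

```python
from itertools import groupby

def first_frames_per_state(labels):
    """Group consecutive frames with the same action type into motion states
    and return (action_type, first_frame_index) for each state, in time order."""
    starts, i = [], 0
    for action_type, run in groupby(labels):
        run_len = len(list(run))
        starts.append((action_type, i))  # first frame of this motion state
        i += run_len
    return starts

# e.g. labels = [0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 2, 2, 4, 4]
# -> [(0, 0), (1, 3), (2, 5), (3, 8), (2, 10), (4, 12)]
```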
In some optional implementations of this embodiment, the frames of depth data corresponding to the same type of test action form one motion state, so each type of test action corresponds to one motion state, in one-to-one correspondence. Based on the types of test actions, the motion states include, but are not limited to, the sitting-still, standing-up, walking, turning and sitting-down states; over the full period of the timed up-and-go test, the motion states may therefore occur, in test order, as sitting still, standing up, walking, turning and sitting down.
In some optional implementations of this embodiment, while determining the type of test action corresponding to each frame of depth data, image data for the full test period may be collected synchronously with the depth information; for the image data and depth data at the same moment, the type of test action of the image frame at that moment is taken as the type of test action corresponding to the depth frame at that moment.
In some optional implementations of the present embodiment, in order to determine the type of test action corresponding to each frame of depth data more accurately, this embodiment also provides a method for determining the type based on motion features, comprising:

acquiring the joint points of the subject's motion from each frame of depth data;

extracting the motion features of each joint point;

and determining the type of test action corresponding to each frame of depth data according to the motion features of the joint points in that frame.
In some optional implementations of this embodiment, the joint points of the subject's motion may be any motion-related joint points in the frame data; for example, as shown in fig. 4 (where the dots represent joint points), they include, but are not limited to, the waist joint point, the left and right hip joint points, the left and right knee joint points, the left and right toe joint points, and the left and right ankle joint points.

In some optional implementations of this embodiment, the motion feature of a joint point may be any feature capable of representing its motion, including but not limited to the velocity feature of each joint point, the angle feature of the waist joint point, the relative position of the waist joint point with respect to the vision device, and the distance between the waist joint point and each foot.

In some optional implementations of the embodiment, the motion features may be extracted directly by corresponding sensing devices; for example, angle features of a joint point can be extracted by a gyroscope, and velocity features by an accelerometer.
In some optional implementations of the present embodiment, in order to extract the motion features of each joint point accurately and thereby improve the accuracy of the timed up-and-go test, this embodiment also provides a method for extracting motion features based on spatial coordinates, comprising:

collecting the spatial coordinates of each joint point. Specifically, the depth information containing the joint points may be collected by a vision device (e.g., a depth sensor or a depth camera), and the spatial coordinates of a joint point are determined with the vision device as reference. For example, as shown in fig. 5, taking a depth camera as the vision device, the x axis of the spatial coordinates may be the horizontal direction of the depth camera, the y axis its vertical direction, and the z axis its depth direction. In addition, image data for the test period may be acquired synchronously by an image acquisition device (e.g., a camera); as shown in fig. 5, the image data is acquired, for example, by a color (RGB) camera, and a certain angle may exist between the depth camera and the color camera.

The motion features of the joint points are then determined from their spatial coordinates; for example, the features may be calculated from the spatial coordinates.
In some optional implementations of the present embodiment, when determining the motion features of a joint point from its spatial coordinates, in order to improve the accuracy of the features while balancing computation and efficiency, this embodiment provides the following method. For a motion feature related to spatial position, i.e., one such as velocity that cannot be determined from the joint point's coordinates in a single frame of depth data, the feature in the current frame may be determined from the joint point's coordinates in two frames: either the current frame together with the N-th frame before or after it, or the M-th frame before the current frame together with the L-th frame after it, where N, M and L are positive integers.

For example, taking M = L = 2, the two frames are the second frame before and the second frame after the current frame, and the position-related motion feature of the joint point at the moment of the current frame is determined from its spatial coordinates in those two frames.
In some optional implementations of the embodiment, each motion feature may be calculated by the following formulas. Taking the velocity feature of the pelvis (waist) joint point as an example, let i denote the current frame of depth data; the velocity feature v_i of the pelvic joint point at the moment corresponding to the i-th frame is:

$$v_i=\frac{\sqrt{(x_{i+2}-x_{i-2})^2+(y_{i+2}-y_{i-2})^2+(z_{i+2}-z_{i-2})^2}}{\Delta t}$$

where x_{i+2}, y_{i+2}, z_{i+2} are the three-axis coordinates of the joint point in the second frame of depth data after the i-th frame, x_{i-2}, y_{i-2}, z_{i-2} are its three-axis coordinates in the second frame before the i-th frame, and Δt is the time spanned by the five frames of depth data. The velocity features of the other joint points are calculated in the same way as for the pelvis.
Taking the angle feature of the waist joint point as an example, the angle feature θ_i of the waist joint point at the moment corresponding to the i-th frame of depth data is:

$$\theta_i=\arccos\!\left(\frac{l_1\cdot l_2}{\lVert l_1\rVert\,\lVert l_2\rVert}\right)$$

where l_1 = (HipLeft.x_i − KneeLeft.x_i, HipLeft.y_i − KneeLeft.y_i, HipLeft.z_i − KneeLeft.z_i), l_2 is the Y axis, HipLeft is the left hip joint point, KneeLeft is the left knee joint point, and x_i, z_i are the X-axis and Z-axis coordinates at the moment of the i-th frame of depth data.
The relative position feature of the waist joint point with respect to the vision device in the i-th frame of depth data is d_i:

$$d_i=\mathrm{Pelvis}.z_i$$
The distance feature between the waist joint point and the left foot in the i-th frame of depth data is Dis_LeftFoot_i, and that between the waist joint point and the right foot is Dis_RightFoot_i:

$$\mathrm{Dis\_LeftFoot}_i=\sqrt{(\mathrm{Pelvis}.x_i-\mathrm{FootLeft}.x_i)^2+(\mathrm{Pelvis}.y_i-\mathrm{FootLeft}.y_i)^2+(\mathrm{Pelvis}.z_i-\mathrm{FootLeft}.z_i)^2}$$

$$\mathrm{Dis\_RightFoot}_i=\sqrt{(\mathrm{Pelvis}.x_i-\mathrm{FootRight}.x_i)^2+(\mathrm{Pelvis}.y_i-\mathrm{FootRight}.y_i)^2+(\mathrm{Pelvis}.z_i-\mathrm{FootRight}.z_i)^2}$$

where FootRight is the right foot joint point and FootLeft is the left foot joint point.
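To make the feature definitions above concrete, the following is a minimal Python sketch of how they could be computed from per-frame joint coordinates. It is an illustration only: the dictionary-based frame representation, the joint names, and the frame_dt parameter are assumptions, not part of the patent.

```python
import numpy as np

# Each frame is assumed to be a dict mapping a joint name to its (x, y, z)
# camera-space coordinates, e.g. frame["Pelvis"] == (0.01, 0.83, 2.41).

def velocity(frames, i, joint="Pelvis", frame_dt=1 / 30):
    """Speed of a joint at frame i, using frames i-2 and i+2 (a five-frame span)."""
    p_prev = np.asarray(frames[i - 2][joint])
    p_next = np.asarray(frames[i + 2][joint])
    dt = 4 * frame_dt  # interval from frame i-2 to i+2; the text above calls
                       # this "the time of five frames of depth data"
    return np.linalg.norm(p_next - p_prev) / dt

def waist_angle(frame):
    """Angle between the left-knee-to-left-hip vector and the vertical Y axis."""
    l1 = np.asarray(frame["HipLeft"]) - np.asarray(frame["KneeLeft"])
    l2 = np.array([0.0, 1.0, 0.0])  # Y axis of the camera coordinate system
    cos_t = np.dot(l1, l2) / (np.linalg.norm(l1) * np.linalg.norm(l2))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))

def relative_depth(frame):
    """Position of the waist relative to the vision device (depth, z axis)."""
    return frame["Pelvis"][2]

def foot_distance(frame, side="FootLeft"):
    """Euclidean distance between the waist joint point and one foot."""
    return np.linalg.norm(np.asarray(frame["Pelvis"]) - np.asarray(frame[side]))
```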
In some optional implementations of this embodiment, in order to determine the type of test action corresponding to each frame of depth data more conveniently, efficiently and accurately from the motion features, this embodiment proposes that the motion feature values of the joint points in each frame of depth data be assembled into a vector; the vector is input into a machine learning classifier, which outputs a classification result representing the type of test action corresponding to that frame.
In some optional implementations of the embodiment, the machine learning classifier may be an existing classifier, including but not limited to a support vector machine (SVM), a random forest (RF), or a long short-term memory model (LSTM).
Sequence models such as the long short-term memory model are time-series models and can automatically combine the relationships between features at earlier and later moments to obtain better classification results.
The support vector machine is a generalized linear classifier that classifies data by supervised learning; its decision boundary is the maximum-margin hyperplane solved from the training samples. Owing to the kernel method, it can perform nonlinear classification of high-dimensional data, and a classifier with an RBF kernel can be used to classify the types of test actions.

The random forest is a majority-voting ensemble classifier: base classifiers are trained on randomly sampled features and randomly sampled examples and are then combined.

The long short-term memory model is a time-series classifier. It realizes long- and short-term memory through a gating mechanism, can automatically select what to remember and what to forget, and effectively mitigates vanishing gradients, so it has a strong capability for processing long sequences.
In some optional implementations of this embodiment, when training the machine learning classifier, image data for the full test period may be collected synchronously with the depth information. For the image data and depth data at the same moment, the motion feature values of the joint points in that depth frame are extracted and assembled into a vector, the type of test action in the image frame at that moment is used as the vector's label, and the vector together with its label serves as a training sample for the classifier.
In some optional implementations of this embodiment, the classification result output by the machine learning classifier may take any form that represents and distinguishes the different types of test actions, for example text, characters or numbers. Taking numbers as an example, sitting still may be denoted by "0", so the sitting-still motion state is a sequence of "0" labels; standing up by "1", so the standing-up state is a sequence of "1" labels; walking by "2", so the walking state is a sequence of "2" labels; turning by "3", so the turning state is a sequence of "3" labels; sitting down by "4", so the sitting-down state is a sequence of "4" labels; and so on.
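Purely as an illustration of this classification step, a per-frame classifier could be trained with scikit-learn as sketched below. The RBF-kernel SVM follows the options named above, while the file names, data shapes and variable names are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

# X: one row per depth frame; each row is the vector of per-joint motion
# features (velocities, waist angle, relative depth, foot distances, ...).
# y: per-frame labels: 0 sit still, 1 stand up, 2 walk, 3 turn, 4 sit down.
X_train = np.load("features_train.npy")  # assumed file, shape (n_frames, n_features)
y_train = np.load("labels_train.npy")    # assumed file, shape (n_frames,)

clf = SVC(kernel="rbf")  # RBF-kernel SVM, one of the classifiers named above
clf.fit(X_train, y_train)

X_test = np.load("features_test.npy")
labels = clf.predict(X_test)  # per-frame action types for a new recording
```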
In some optional implementations of this embodiment, in order to avoid the influence of data jumps and further improve classification accuracy, this embodiment proposes that after the machine learning classifier determines the type of test action for each frame of depth data, the classification results be corrected and smoothed, which resolves anomalies in the result for an individual frame. For example, for the current frame of depth data, the types of test action corresponding to at least one frame before it, to the current frame itself, and to at least one frame after it are counted, and the most frequent type is determined as the type corresponding to the current frame.

For example, counting over 5 frames of depth data: the types corresponding to the two frames before the current frame, the current frame itself, and the two frames after it are counted, and the most frequent type is taken as the type of the current frame. In other words, over these five frames the type of test action is decided by majority vote, and the winning type is assigned to the current frame.
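A minimal sketch of this majority-vote smoothing, assuming the per-frame labels are held in a Python list, might look as follows (the function name and the handling of the sequence edges are illustrative assumptions):

```python
from collections import Counter

def smooth_labels(labels, half_window=2):
    """Majority vote over a 5-frame window (2 before, current, 2 after).
    Frames too close to either edge are left unchanged."""
    smoothed = list(labels)
    for i in range(half_window, len(labels) - half_window):
        window = labels[i - half_window : i + half_window + 1]
        smoothed[i] = Counter(window).most_common(1)[0][0]
    return smoothed
```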
In some optional implementations of this embodiment, in order to eliminate erroneous predictions of the motion state and thereby ensure the accuracy of the division into motion states over the full period of the timed up-and-go test, this embodiment proposes that after the frames of the same type of test action form one motion state, the full test period be divided in time order into several different motion states. For the type of test action corresponding to the current frame of depth data, the motion states corresponding to the types of at least two consecutive frames of depth data are matched, in time order, against the motion state rules; if the matching fails, the type of the current frame is adjusted according to the rules, where the at least two consecutive frames include the current frame.

For example, the at least two consecutive frames may be the current frame together with at least one frame before it, the current frame together with at least one frame after it, or the current frame together with at least one frame before it and at least one frame after it.
In some optional implementations of this embodiment, the motion state rules may be rules ensuring that, in the course of the timed up-and-go test, the motion states correspond one-to-one to the test actions in time order; for example, they include, but are not limited to, the following rules (a sketch of such a rule check follows the list):
1. the sitting-still state is not adjacent to the walking, turning or sitting-down state;
2. the standing-up state is not adjacent to the sitting-still, turning or sitting-down state;
3. the walking state is not adjacent to the sitting-still or standing-up state;
4. the turning state is not adjacent to the sitting-still or standing-up state;
5. the sitting-down state is not adjacent to the standing-up, walking or turning state.
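The following sketch shows one way such rules could be checked and enforced over the label sequence. The ALLOWED_NEXT mapping is a literal reading of rules 1-5 above, and, like the fallback of reusing the previous frame's label, is an illustrative assumption:

```python
# Literal reading of rules 1-5: ALLOWED_NEXT[s] is the set of states that may
# directly follow state s (0 sit still, 1 stand up, 2 walk, 3 turn, 4 sit down).
ALLOWED_NEXT = {0: {0, 1}, 1: {1, 2}, 2: {2, 3, 4}, 3: {2, 3, 4}, 4: {0, 4}}

def enforce_rules(labels):
    """Replace a label that breaks the adjacency rules with the previous
    frame's label (one simple correction strategy among several possible)."""
    fixed = list(labels)
    for i in range(1, len(fixed)):
        if fixed[i] not in ALLOWED_NEXT[fixed[i - 1]]:
            fixed[i] = fixed[i - 1]
    return fixed
```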
In some optional implementations of this embodiment, after the full period of the timed up-and-go test has been divided in time order into the motion states corresponding to the test actions, template matching may be performed to extract the frame at the starting moment of each test action, as follows:
standing start point (i.e., frame data at the start of the standing test operation): k is standup =i,if label i ==1and label i-1 ==0
Walk start point (i.e., frame data at the start of the walk test operation): k gait =i,if label i ==2and label i-1 ==1
or label i ==3and label i-1 ==1
Turn-around start point (i.e., frame data at the start of turn-around test operation): k turnaround =i,if label i ==3and label i-1 ==2
Sit-down start point (i.e., frame data of start time of sit-down test action): k is sitdown =i,if label i ==3and label i-1 ==4
or label i ==1and label i-1 ==4
End point of sitting (i.e., frame data at the start of sitting test action): k sitstill =i,if label i ==4and label i-1 ==0
Wherein, 0 represents a still sitting position state, 1 represents a standing up state, 2 represents a walking state, 3 represents a turning state, 4 represents a sitting down state, and i represents the depth data of the current frame.
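As an illustration, this template matching could be implemented over the smoothed label sequence as follows; the transition conditions are transcribed from the templates as written above, while the function and key names are assumptions:

```python
def find_start_points(labels):
    """Key frames at the starts of the test actions, using the transition
    templates above (0 sit still, 1 stand up, 2 walk, 3 turn, 4 sit down).
    setdefault keeps the first frame matching each template."""
    keys = {}
    for i in range(1, len(labels)):
        prev, cur = labels[i - 1], labels[i]
        if cur == 1 and prev == 0:
            keys.setdefault("standup", i)
        elif cur in (2, 3) and prev == 1:
            keys.setdefault("gait", i)
        elif cur == 3 and prev == 2:
            keys.setdefault("turnaround", i)
        elif cur in (3, 1) and prev == 4:
            keys.setdefault("sitdown", i)
        elif cur == 4 and prev == 0:
            keys.setdefault("sitstill", i)
    return keys
```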
In some optional implementations of the present embodiment, after the joint points at the starting moment of each test action have been obtained, in order to improve the accuracy and validity of the motion parameter values, the values may be calculated as follows:

acquiring the spatial coordinates of each joint point at the starting moment of each test action;

calculating the motion parameter value of the earlier test action from the spatial coordinates of the joint points at the starting moments of two adjacent test actions. The starting moment of the later action is also the ending moment of the earlier one, i.e., the joint points at the start of the later action serve as the joint points at the end of the earlier action, so the motion parameter value of the earlier action can be calculated accurately by combining the joint-point coordinates at the starting moments of the two adjacent actions.
In some optional implementations of the present embodiment, in order to improve the accuracy of the test, quantitative and diversified motion parameter values are provided; for example, the motion parameter values of the test actions include, but are not limited to: the speed of the stand-up test action, the speed of the sit-down test action, the trunk sway angle during the stand-up action, the speed of the walking test action, and the time required for the turning test action.
In some optional implementations of the present embodiment, the motion parameter values may be calculated on the following principles:

1. Stand-up speed (m/s): distance/time between the stand-up start point and the walking start point. Specifically, the distance can be calculated from the spatial coordinates of a given joint point at the stand-up start moment and at the walking start moment, and the time is the duration between those two moments.

2. Trunk sway at stand-up (°): the maximum included angle between the trunk and the Y axis of the coordinate system during the standing-up process.

3. Sit-down speed (m/s): distance/time between the sit-down start point and the sit-down end point.

4. Walking speed (m/s): distance/time between the walking start point and the turning start point.

5. Turn-around time (s): the time between the turning start point and the walking start point.
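The following sketch illustrates principles 1 and 4 above, computing the stand-up and walking speeds from the key frames; the use of the pelvis joint point for distances and the frame_dt parameter are assumptions for illustration:

```python
import numpy as np

def motion_parameters(frames, keys, frame_dt=1 / 30):
    """Stand-up and walking speed from key-frame indices (see find_start_points).
    Distances use the pelvis joint point; frame_dt is an assumed frame period."""
    pelvis = lambda i: np.asarray(frames[i]["Pelvis"])
    seconds = lambda i, j: (j - i) * frame_dt
    meters = lambda i, j: float(np.linalg.norm(pelvis(j) - pelvis(i)))

    standup, gait, turn = keys["standup"], keys["gait"], keys["turnaround"]
    return {
        # principle 1: distance/time between stand-up start and walking start
        "standup_speed_mps": meters(standup, gait) / seconds(standup, gait),
        # principle 4: distance/time between walking start and turning start
        "walking_speed_mps": meters(gait, turn) / seconds(gait, turn),
    }
```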
In some optional implementations of the present embodiment, with continued reference to fig. 6, fig. 6 is a schematic diagram of an application scenario of the testing method for the timed up-and-go walking test according to this embodiment. In the scenario of fig. 6, for each test action in the test, the execution body 601 acquires the joint points 602 of the subject's motion at the starting moment of that action, and then calculates the motion parameter value 603 of the action from the joint points 602 at its starting moment.
With further reference to FIG. 7, a flow 700 of yet another embodiment of a testing method for the timed up-and-go walking test is illustrated. The process 700 includes the following steps:

Step 701: during the timed up-and-go test, collect joint-point information of the moving human body with a vision device such as a depth sensor, obtaining the spatial coordinates of the joint points.

Step 702: extract the motion features of the joint points from their spatial coordinates.

Step 703: assemble the motion features of the joint points of each frame of depth data into a vector, input the vector into a machine learning classifier, and let the classifier output a classification result representing the type of test action corresponding to that frame. The frames corresponding to the same type of test action form one motion state, which can be represented as a label sequence of identical classification results, thereby dividing the full period of the test into several motion states.

Step 704: post-process the label sequence formed by the classifier's outputs to obtain an accurate label sequence; the post-processing includes deciding the classification of the current frame by majority vote over 5 frames of depth data and adjusting the classification of the current frame according to the motion state rules.

Step 705: from the post-processed label sequence, find the key frame of each test action over the full test period by template matching, a key frame being the frame of depth data at the starting moment of that test action.

Step 706: quantitatively calculate the motion parameter values of the test actions in the timed up-and-go test from the spatial coordinates of the joint points in the key frames.
With further reference to fig. 8, as an implementation of the method shown in the above figures, the present application provides an embodiment of a testing apparatus for the timed up-and-go walking test. This apparatus embodiment corresponds to the method embodiment shown in fig. 2 and, besides the features described below, may include the same or corresponding features and effects as that method embodiment. The apparatus can be applied to various electronic devices.
As shown in fig. 8, the testing apparatus 800 for the timed up-and-go walking test includes: a joint point acquisition unit 801 and a parameter calculation unit 802. The joint point acquisition unit 801 is configured to acquire, for each test action in the test, the joint points of the subject's motion at the starting moment of that action; the parameter calculation unit 802 is configured to calculate the motion parameter value of the test action from the joint points at its starting moment.
In this embodiment, for the detailed processing and technical effects of the joint point acquisition unit 801 and the parameter calculation unit 802 of the testing apparatus 800, reference may be made to the descriptions of steps 201 and 202 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the joint point acquisition unit 801 includes: a type determination module configured to determine, from the depth information of the full period of the timed up-and-go test, the type of test action corresponding to each frame of depth data; and a joint point acquisition module configured to, for each test action, form one motion state from the frames of depth data corresponding to the same type of test action, determine in time order the first frame of that motion state as the frame at the starting moment of that type of test action, and acquire the subject's moving joint points from that first frame.
In some optional implementations of this embodiment, the type determining module includes:
a joint point acquisition sub-module configured to acquire the joint points of the subject's motion from each frame of depth data;

a motion feature extraction sub-module configured to extract the motion features of each joint point;

and a type determination sub-module configured to determine the type of test action corresponding to each frame of depth data according to the motion features of the joint points in that frame.
In some optional implementations of this embodiment, the motion feature extraction sub-module is specifically configured to acquire the spatial coordinates of each joint point and to determine the motion features of the joint points from those coordinates.
In some optional implementations of this embodiment, the motion feature extraction sub-module is further configured to determine, for a motion feature related to spatial position, the feature of the joint point in the current frame of depth data from its spatial coordinates in two frames of depth data: either one of the two frames is the current frame and the other is the N-th frame before or after it, or one is the M-th frame before the current frame and the other is the L-th frame after it, where N, M and L are positive integers.
In some optional implementations of this embodiment, the type determination sub-module is specifically configured to assemble the motion feature values of the joint points of each frame of depth data into a vector, input the vector into a machine learning classifier, and obtain from the classifier a classification result representing the type of test action corresponding to that frame.
In some optional implementations of this embodiment, the apparatus further includes:

a type counting unit configured, after the type of test action corresponding to each frame of depth data has been determined, to count, for the current frame of depth data, the types of test action corresponding to at least one frame before it, to the current frame itself, and to at least one frame after it;

and a type adjusting unit configured to determine the most frequent type of test action as the type corresponding to the current frame of depth data.
In some optional implementations of this embodiment, the apparatus further includes:

a motion state dividing unit configured to divide the full period of the timed up-and-go test in time order into several different motion states after the frames of depth data corresponding to the same type of test action form one motion state;

a motion state matching unit configured, for the type of test action corresponding to the current frame of depth data, to match the motion states corresponding to the types of at least two consecutive frames of depth data against the motion state rules in time order;

and a motion state adjusting unit configured to adjust the type of test action corresponding to the current frame according to the motion state rules when the matching fails, wherein the at least two consecutive frames include the current frame.
In some optional implementations of this embodiment, the parameter calculation unit is specifically configured to acquire the spatial coordinates of each joint point at the starting moment of each test action, and to calculate the motion parameter value of the earlier test action from the spatial coordinates of the joint points at the starting moments of two adjacent test actions, the joint points at the starting moment of the later action being taken as the joint points at the ending moment of the earlier action.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 9 is a block diagram of an electronic device for the testing method of the timed up-and-go walking test according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant as examples only and are not meant to limit the implementations of the present application described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, a memory 902, and interfaces for connecting the components, including high-speed and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information for a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used with multiple memories, as desired. Likewise, multiple electronic devices may be connected, each providing part of the necessary operations (e.g., as a server array, a group of blade servers, or a multiprocessor system). Fig. 9 illustrates an example with one processor 901.
The memory 902 is a non-transitory computer-readable storage medium as provided herein. The memory stores instructions executable by at least one processor to cause the at least one processor to perform the testing method for the timed up-and-go walking test provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform this testing method.

As a non-transitory computer-readable storage medium, the memory 902 may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the testing method for the timed up-and-go walking test in the embodiments of the present application (for example, the joint point acquisition unit 801 and the parameter calculation unit 802 shown in fig. 8). By running the non-transitory software programs, instructions and modules stored in the memory 902, the processor 901 executes the various functional applications and data processing of the server, i.e., implements the testing method of the above method embodiments.

The memory 902 may include a program storage area and a data storage area; the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 902 may include high-speed random access memory and non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901 and connected over a network to the electronic device. Examples of such networks include, but are not limited to, the Internet, enterprise/medical facility intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the testing method of the timed up-and-go walking test may further include an input device 903 and an output device 904. The processor 901, memory 902, input device 903 and output device 904 may be connected by a bus or in other ways; fig. 9 illustrates connection by a bus.

The input device 903 may receive input numeric or character information and generate key-signal input related to the user settings and function control of the electronic device, and may be, for example, a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball or joystick, or a sensor that can capture human motion information and/or physiological information. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibration motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen or a head-mounted display (HMD).
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a joint point acquisition unit and a parameter calculation unit. The names of these units do not, in some cases, limit the units themselves; for example, the parameter calculation unit may also be described as a "unit for calculating motion parameters".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: for each test action in the timing standing walking test, acquire the joint points of the subject's motion at the starting moment of the test action; and calculate the motion parameter value of the test action according to the joint points at the starting moment of the test action.
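As a purely illustrative, non-limiting sketch of these two steps, the following Python fragment segments per-frame action labels into test actions and derives a motion parameter value from the joint points at adjacent starting moments. The helper names, the pelvis joint, the frame rate, and the per-frame input layout are assumptions introduced for illustration; they are not the interfaces of the claimed apparatus.

    # Minimal sketch (assumed helpers and data layout, not the claimed
    # apparatus): step 1 finds the starting moment of each test action,
    # step 2 derives a motion parameter value from adjacent actions.
    import numpy as np

    FPS = 30.0  # assumed depth-camera frame rate

    def segment_actions(frame_types):
        """Group consecutive frames of the same action type into
        (action_type, start_frame_index) pairs, in time order."""
        return [(t, i) for i, t in enumerate(frame_types)
                if i == 0 or t != frame_types[i - 1]]

    def action_parameters(frame_types, joints_per_frame):
        """joints_per_frame[i]: dict mapping joint name -> np.array([x, y, z]).
        The starting moment of the next action doubles as the ending
        moment of the current one; the last action is omitted here."""
        actions = segment_actions(frame_types)
        params = {}
        for (act, start), (_, nxt) in zip(actions, actions[1:]):
            duration = (nxt - start) / FPS
            displacement = float(np.linalg.norm(
                joints_per_frame[nxt]["pelvis"] - joints_per_frame[start]["pelvis"]))
            params[act] = {"duration_s": duration,
                           "speed_m_per_s": displacement / duration}
        return params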
The above description is only a preferred embodiment of the application and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention disclosed herein is not limited to the particular combination of features described above, and also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but are not limited to) features with similar functions disclosed in the present application.

Claims (13)

1. A testing method for a timed-rise walk test, the method comprising:
for each test action in the timed-rise walk test, acquiring the joint points of the subject's motion at the starting moment of the test action; and
calculating a motion parameter value of the test action according to the joint points at the starting moment of the test action.
2. The method of claim 1, wherein acquiring the joint points of the subject's motion at the starting moment of each test action comprises:
determining the type of test action corresponding to each frame of depth data according to the depth information of the full period of the timed-rise walk test; and
for each test action, forming a motion state from the frames of depth data that correspond to test actions of the same type, determining, in time order, the first frame of depth data of the motion state as the frame data of the starting moment of the test action of that type, and obtaining the joint points of the subject's motion from that first frame of depth data.
3. The method of claim 2, wherein determining the type of test action corresponding to each frame of depth data comprises:
obtaining the joint points of the subject's motion from each frame of depth data;
extracting the motion features of each joint point; and
determining the type of test action corresponding to each frame of depth data according to the motion features of the joint points in that frame.
4. The method of claim 3, wherein extracting the motion features of each joint point comprises:
collecting the spatial coordinates of each joint point; and
determining the motion features of the joint point according to its spatial coordinates.
5. The method of claim 4, wherein determining the motion features of the joint point according to its spatial coordinates comprises:
for motion features related to spatial position, determining the motion features of a joint point in the current frame of depth data according to the spatial coordinates of that joint point in two frames of depth data, wherein either one of the two frames is the current frame and the other is the Nth frame before or after the current frame, or one of the two frames is the Mth frame before the current frame and the other is the Lth frame after the current frame, N, M, and L being positive integers.
6. The method of claim 3, wherein determining the type of test action corresponding to each frame of depth data according to the motion features of the joint points in that frame comprises:
combining the values of the motion features of the joint points of each frame of depth data into a vector; and
inputting the vector into a machine learning classifier, the classifier outputting a classification result that represents the type of test action corresponding to that frame of depth data.
7. The method of claim 6, wherein, after determining the type of test action corresponding to each frame of depth data, the method further comprises:
for the current frame of depth data, counting the types of test action corresponding to at least one frame of depth data before the current frame, to the current frame itself, and to at least one frame of depth data after the current frame; and
determining the most frequent type as the type of test action corresponding to the current frame of depth data.
8. The method of claim 2, wherein, after the frames of depth data corresponding to test actions of the same type form a motion state, the method further comprises:
dividing the full period of the timed-rise walk test into a plurality of different motion states in time order;
for the type of test action corresponding to the current frame of depth data, matching, in time order, the motion states corresponding to the types of test action of at least two consecutive frames of depth data against a motion state rule; and
when the matching fails, adjusting the type of test action corresponding to the current frame of depth data according to the motion state rule, wherein the at least two consecutive frames of depth data include the current frame of depth data.
9. The method according to any one of claims 1 to 8, wherein calculating the motion parameter value of the test action according to the joint points at the starting moment of the test action comprises:
acquiring the spatial coordinates of the joint points at the starting moment of each test action; and
calculating the motion parameter value of the earlier test action according to the spatial coordinates of the joint points at the starting moments of two adjacent test actions, the joint points at the starting moment of the later test action being taken as the joint points at the ending moment of the earlier test action.
10. The method of any one of claims 1 to 8, wherein the motion parameter values of the test actions comprise: the speed of the standing-up test action, the speed of the sitting-down test action, the angle of trunk sway during the standing-up test action, the speed of the walking test action, and the time required for the turning test action.
11. A testing device for a timed-rise walk test, the device comprising:
a joint point acquisition unit configured to acquire, for each test action in the timed-rise walk test, the joint points of the subject's motion at the starting moment of the test action; and
a parameter calculation unit configured to calculate a motion parameter value of the test action according to the joint points at the starting moment of the test action.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 10.
13. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, carries out the method according to any one of claims 1 to 10.
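The following fragments are purely illustrative, non-limiting sketches of several of the claimed steps; all helper names, parameter values, and library choices are assumptions introduced for illustration, not elements of the claims. First, a sketch of the frame-differencing feature of claim 5: a joint's motion feature in the current frame is derived from its spatial coordinates a few frames before and after, falling back to a one-sided difference at the boundaries of the recording.

    # Sketch of claim 5 (the window size n and frame rate are assumed):
    # velocity of one joint in frame i from coordinates taken n frames
    # before and n frames after that frame.
    import numpy as np

    def joint_velocity(coords, i, n=5, fps=30.0):
        """coords: (num_frames, 3) array of one joint's spatial coordinates.
        Uses a central difference when possible, otherwise one-sided."""
        m = min(n, i)                      # frames available before i
        l = min(n, len(coords) - 1 - i)    # frames available after i
        if m + l == 0:
            return np.zeros(3)             # single-frame recording
        return (coords[i + l] - coords[i - m]) / ((m + l) / fps)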
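Next, a sketch of claims 6 and 7 taken together: the per-frame feature vectors are classified by a machine learning classifier, and the per-frame labels are then smoothed by counting the types in a window of frames before and after the current frame and keeping the most frequent one. The choice of a random forest and the window size are assumptions; the claims do not prescribe a particular classifier.

    # Sketch of claims 6-7 (classifier choice and window size assumed):
    # per-frame classification followed by majority-vote smoothing.
    from collections import Counter
    from sklearn.ensemble import RandomForestClassifier

    def classify_frames(train_X, train_y, feature_vectors):
        """feature_vectors: one motion-feature vector per depth frame."""
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(train_X, train_y)  # offline training on labelled frames
        return list(clf.predict(feature_vectors))

    def smooth_labels(labels, half_window=2):
        """Claim 7: count the action types of frames before, at, and after
        the current frame, and keep the most frequent type."""
        smoothed = []
        for i in range(len(labels)):
            lo = max(0, i - half_window)
            hi = min(len(labels), i + half_window + 1)
            smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
        return smoothed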
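Finally, a sketch of claim 8's rule matching: the sequence of recognised motion states is checked against an assumed legal ordering of the timed-rise walk test (sit, stand up, walk out, turn, walk back, turn around, sit down), and a label whose transition violates the rule is adjusted. The state names and the repair strategy are assumptions for illustration.

    # Sketch of claim 8 (state order and repair strategy assumed):
    # validate consecutive motion states against a motion state rule.
    STATE_ORDER = ["sit", "stand_up", "walk_out", "turn", "walk_back",
                   "turn_around", "sit_down"]
    NEXT = {s: STATE_ORDER[i + 1] for i, s in enumerate(STATE_ORDER[:-1])}

    def enforce_rule(labels):
        """A state may persist or advance to its successor; any other
        transition is treated as a misclassification and the previous
        state is kept instead."""
        fixed = [labels[0]]
        for cur in labels[1:]:
            prev = fixed[-1]
            fixed.append(cur if cur in (prev, NEXT.get(prev)) else prev)
        return fixed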
CN202210844714.5A 2022-07-18 2022-07-18 Testing method, device, equipment and storage medium for timing, standing and walking test Active CN115153517B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210844714.5A CN115153517B (en) 2022-07-18 2022-07-18 Testing method, device, equipment and storage medium for timing, standing and walking test

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210844714.5A CN115153517B (en) 2022-07-18 2022-07-18 Testing method, device, equipment and storage medium for timing, standing and walking test

Publications (2)

Publication Number Publication Date
CN115153517A (en) 2022-10-11
CN115153517B (en) 2023-03-28

Family

ID=83494137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210844714.5A Active CN115153517B (en) 2022-07-18 2022-07-18 Testing method, device, equipment and storage medium for timing, standing and walking test

Country Status (1)

Country Link
CN (1) CN115153517B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080031512A1 (en) * 2006-03-09 2008-02-07 Lars Mundermann Markerless motion capture system
CN103230664A (en) * 2013-04-17 2013-08-07 南通大学 Upper limb movement rehabilitation training system and method based on Kinect sensor
US20160370854A1 (en) * 2015-06-16 2016-12-22 Wilson Steele Method and System for Analyzing a Movement of a Person
JP2017080200A (en) * 2015-10-29 2017-05-18 キヤノンマーケティングジャパン株式会社 Information processing device, information processing method and program
US20170344919A1 (en) * 2016-05-24 2017-11-30 Lumo BodyTech, Inc System and method for ergonomic monitoring in an industrial environment
JP2019012453A (en) * 2017-06-30 2019-01-24 キヤノンマーケティングジャパン株式会社 Information processing device, control method therefor, and program
US20190224528A1 (en) * 2018-01-22 2019-07-25 K-Motion Interactive, Inc. Method and System for Human Motion Analysis and Instruction
CN112472074A (en) * 2020-11-27 2021-03-12 吉林农业科技学院 Sitting gait data acquisition and analysis system based on acceleration sensor
US20210110146A1 (en) * 2019-10-15 2021-04-15 Fujitsu Limited Action recognition method and apparatus and electronic equipment
CN114267088A (en) * 2022-03-02 2022-04-01 北京中科睿医信息科技有限公司 Gait information processing method and device and electronic equipment
WO2022070651A1 * 2020-09-30 2022-04-07 Mizuno Corporation Information processing device and information processing method
CN114532986A (en) * 2022-02-09 2022-05-27 北京中科睿医信息科技有限公司 Human body balance measurement method and system based on three-dimensional space motion capture
CN114663913A (en) * 2022-02-28 2022-06-24 电子科技大学 Human body gait parameter extraction method based on Kinect

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHIA-YEH HSIEH et al.: "Automatic Subtask Segmentation Approach of the Timed Up and Go Test for Mobility Assessment System Using Wearable Sensors" *
OKKO LOHMANN et al.: "Skeleton Timed Up and Go" *
FENG Zhen et al.: "Teaching reform of the curriculum system of the welding technology and automation major" (in Chinese) *
ZHANG Xushu et al.: "Motion photography of the human lower limbs and determination of their kinematic data" (in Chinese) *

Also Published As

Publication number Publication date
CN115153517B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
US10898755B2 (en) Method for providing posture guide and apparatus thereof
US20220335851A1 (en) Identification and analysis of movement using sensor devices
US20210049353A1 (en) Ai-based physical function assessment system
Rocha et al. System for automatic gait analysis based on a single RGB-D camera
Westlund et al. Motion Tracker: Camera-based monitoring of bodily movements using motion silhouettes
JP2011123411A (en) Motion analyzer and motion analyzing method
Savoie et al. Automation of the timed-up-and-go test using a conventional video camera
Wei et al. Real-time limb motion tracking with a single imu sensor for physical therapy exercises
Du et al. RETRACTED: Research on the intelligent model of progress in physical education training based on motion sensor
Wei et al. Using sensors and deep learning to enable on-demand balance evaluation for effective physical therapy
Cimorelli et al. Portable in-clinic video-based gait analysis: validation study on prosthetic users
Ciklacandir et al. A comparison of the performances of video-based and imu sensor-based motion capture systems on joint angles
CN115153517B (en) Testing method, device, equipment and storage medium for timing, standing and walking test
Romeo et al. Video based mobility monitoring of elderly people using deep learning models
Huang et al. Image-recognition-based system for precise hand function evaluation
CN116543455A (en) Method, equipment and medium for establishing parkinsonism gait damage assessment model and using same
Abd Shattar et al. Experimental Setup for Markerless Motion Capture and Landmarks Detection using OpenPose During Dynamic Gait Index Measurement
Martínez-Zarzuela et al. VIDIMU. Multimodal video and IMU kinematic dataset on daily life activities using affordable devices
CN115299934B (en) Method, device, equipment and medium for determining test action
CN114863567B (en) Method and device for determining gait information
Jackson et al. Computer-assisted approaches for measuring, segmenting, and analyzing functional upper extremity movement: a narrative review of the current state, limitations, and future directions
Cimorelli et al. Validation of portable in-clinic video-based gait analysis for prosthesis users
Hu et al. Effective evaluation of HGcnMLP method for markerless 3D pose estimation of musculoskeletal diseases patients based on smartphone monocular video
Lau et al. Cost-benefit analysis reference framework for human motion capture and analysis systems
CN115857678B (en) Eye movement testing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant