WO2023163104A1 - Joint angle learning estimation system, joint angle learning system, joint angle estimation device, joint angle learning method, and computer program - Google Patents

Joint angle learning estimation system, joint angle learning system, joint angle estimation device, joint angle learning method, and computer program

Info

Publication number
WO2023163104A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
angle
acceleration
joint
angular velocity
Application number
PCT/JP2023/006719
Other languages
French (fr)
Japanese (ja)
Inventor
在勲 李 (Jae Hoon Lee)
ツィゲ タデッセ アレマヨゥ (Tsige Tadesse Alemayoh)
伸吾 岡本 (Shingo Okamoto)
Original Assignee
国立大学法人愛媛大学 (Ehime University)
Application filed by 国立大学法人愛媛大学 (Ehime University)
Publication of WO2023163104A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb

Definitions

  • the present invention relates to technology for estimating joint angles of individuals such as humans.
  • The identified movements are used, for example, for patient diagnosis in the field of healthcare, for checking athletes' form in the field of sports, for reproducing a user's movements with an avatar in the field of virtual reality, and for reproducing a user's movements with a character in the field of animation.
  • a marker is attached to a plurality of parts of the human body one by one, and movement is specified by photographing the trajectory of each marker during walking with a video camera.
  • a sensor such as an IMU (Inertial Measurement Unit) is attached instead of the marker, and movement is specified based on the acceleration and angular velocity measured by each sensor.
  • A wearing device having relatively rotatable arm portions, a waist attachment portion, and an angle sensor for detecting the relative rotation angle between the two is attached to the human body to identify the angles of the joints of the human body.
  • An individual angle learning and estimating system includes: a first inertial sensor attached to a predetermined part of a learning target individual to obtain the acceleration and angular velocity of that part; acquisition means for acquiring the angles of the joints of the learning target individual as correct angles; learning means for generating a learned model by performing machine learning using the acceleration and angular velocity of the predetermined part of the learning target individual as input data and the correct angles as correct data; a second inertial sensor attached to a predetermined part of an inference target individual to obtain the acceleration and angular velocity of that part; and estimating means for estimating the joint angles of the inference target individual by inputting the acceleration and angular velocity of the predetermined part into the learned model.
  • the predetermined part is, for example, one of the waist, left thigh, right thigh, left shin, right shin, left leg, and right leg.
  • a plurality of predetermined parts may be provided. For example, accelerations and angular velocities at three points, the waist, left leg, and right leg, may be obtained.
  • the joint angle can be specified with less burden on the subject than before.
  • Attaching a second inertial sensor to any one of the waist, left thigh, right thigh, left shin, right shin, left leg, and right leg, calculating the acceleration and angular velocity, and estimating the joint angles of the inference target individual allows the joint angles to be specified more accurately.
  • A plurality of second inertial sensors may be attached. In particular, attaching second inertial sensors to the waist, left leg, and right leg can provide more accurate results.
  • FIG. 1 is a diagram showing an example of the overall configuration of a joint angle estimation system.
  • FIG. 2 is a diagram showing an example of the hardware configuration of a learning inference device.
  • FIG. 3 is a diagram showing an example of the functional configuration of a learning inference device.
  • FIG. 4 is a flowchart for explaining an example of the flow of data set preparation processing.
  • FIG. 5 is a diagram showing examples of mounting positions of a plurality of inertial sensors.
  • FIG. 6 is a diagram showing an example of changes in acceleration and angular velocity in the moving coordinate system of the inertial sensor on the waist and an example of changes in the angles of four joints of the subject.
  • FIG. 7 is a diagram illustrating an example of the flow of machine learning processing.
  • FIG. 8 is a diagram showing an example of a neural network.
  • FIG. 9 is a diagram showing an example of a trained neural network.
  • FIG. 10 is a flowchart for explaining an example of the overall processing flow of a joint angle estimation program.
  • FIG. 11 is a diagram showing examples of MAE in each of the first to third patterns.
  • FIG. 12 is a diagram showing an example of MAE when inertial sensors are fixed to respective parts.
  • FIG. 13 is a diagram illustrating an example of a method of controlling a prosthesis.
  • FIG. 1 is a diagram showing an example of the overall configuration of a joint angle estimation system 1.
  • FIG. 2 is a diagram showing an example of the hardware configuration of the learning inference device 2.
  • FIG. 3 is a diagram showing an example of the functional configuration of the learning inference device 2.
  • The joint angle estimation system 1 shown in FIG. 1 is a system for estimating joint angles while an inference target person 1A is walking, using AI (Artificial Intelligence) technology. It comprises the learning inference device 2, inertial sensors 3, and the like.
  • the learning inference device 2 and each inertial sensor 3 are connected wirelessly.
  • the learning inference device 2 generates a learned model by machine learning, and estimates the angles of the joints of the inference target person 1A when the inference target person 1A is walking based on the learned model.
  • a case where a personal computer is used as the learning inference device 2 will be described below as an example.
  • The learning inference device 2 includes a main processor 20, a RAM (Random Access Memory) 21, a ROM (Read Only Memory) 22, an auxiliary storage device 23, a network interface 24, a serial interface 25, a wireless communication device 26, a display 27, a keyboard 28, a pointing device 29, and the like.
  • In the ROM 22 or the auxiliary storage device 23, an operating system as well as computer programs such as a joint angle estimation program 40 and a sensor application 41 are installed.
  • The joint angle estimation program 40 realizes functions such as the shift window value extraction unit 401, the joint angle calculation unit 402, the data set registration unit 403, the data set storage unit 404, the machine learning unit 405, the learned model storage unit 406, the joint angle inference unit 407, and the inference result output unit 408.
  • the RAM 21 is the main memory of the learning inference device 2.
  • The RAM 21 is appropriately loaded with computer programs such as the joint angle estimation program 40 and the sensor application 41.
  • The main processor 20 executes the computer programs loaded into the RAM 21. A GPU (Graphics Processing Unit) or a CPU (Central Processing Unit) is used as the main processor 20.
  • The network interface 24 communicates with other devices using protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol). A NIC (Network Interface Card) or a Wi-Fi (Wireless Fidelity) board is used as the network interface 24.
  • The serial interface 25 communicates with peripheral devices by a serial communication method. A board conforming to standards such as USB (Universal Serial Bus) is used as the serial interface 25.
  • The wireless communication device 26 communicates with peripheral devices by short-range wireless. A board conforming to standards such as Bluetooth is used as the wireless communication device 26.
  • The serial interface 25 and the wireless communication device 26 are used in particular to communicate with the inertial sensors 3, and are selectively used according to the standard of each inertial sensor 3.
  • the display 27 displays a screen for inputting commands or data, a screen showing results of operations by the main processor 20, or the like.
  • the keyboard 28 and pointing device 29 are input devices for the operator to enter commands or data.
  • The inertial sensor 3 is a device that measures the acceleration and angular velocity of each of three axes (X-axis, Y-axis, Z-axis) and is generally called an "IMU (Inertial Measurement Unit)" or an "inertial measurement unit".
  • A commercially available IMU is used as the inertial sensor 3.
  • An IMU provided by the MTw Awinda system of Xsens in the Netherlands will be described below as an example.
  • MTw Awinda has the following functions. Multiple IMUs (up to 30) can measure acceleration and angular velocity while maintaining a synchronization accuracy of 1/100,000 second using a proprietary protocol. As will be described later, in this embodiment the joint angle is calculated or estimated every 1/100 second, so these IMU measurement results can be used in near-perfect synchronization. Each IMU can then transmit its measurement results to the personal computer by ZigBee.
  • MTw Awinda provides receiver and manager software for personal computers.
  • the receiver is a ZigBee standard USB device for receiving data transmitted from the IMU.
  • Manager software is a computer program for changing IMU settings and properties.
  • A receiver is connected to the serial interface 25, and the manager software is installed as the sensor application 41 in the auxiliary storage device 23.
  • FIG. 4 is a flowchart illustrating an example of the flow of data set preparation processing.
  • FIG. 5 is a diagram showing examples of mounting positions of the plurality of inertial sensors 31-37.
  • FIG. 6 is a diagram showing an example of changes in acceleration and angular velocity in the movement coordinate system of inertial sensor 31 on the waist and an example of changes in angles of four joints of subject 1B.
  • The shift window value extraction unit 401, the joint angle calculation unit 402, and the data set registration unit 403 (see FIG. 3) of the learning inference device 2 execute, according to the procedure shown in FIG. 4, a process of collecting information from a plurality of subjects 1B (1B1, 1B2, ...) of various sexes, ages, weights, and heights and generating a data set for machine learning. A case where information is collected from subject 1B1 will be described below as an example.
  • the operator inputs age, sex, weight, and height to the learning inference device 2 via the keyboard 28 or pointing device 29 as attributes of the subject 1B1.
  • the dataset registration unit 403 acquires an attribute vector C indicating these attributes (#101 in FIG. 4).
  • Hereinafter, the attribute vector C of the v-th subject 1B will be referred to as "attribute vector Cv".
  • the operator collects data on the state of each of the waist, left and right thighs, left and right shins, and left and right legs while the subject 1B1 is walking as follows.
  • The operator prepares seven inertial sensors 3, one waist belt, two thigh belts, two shin belts, and two leg belts. These belts are dedicated to MTw Awinda; after initialization is performed with all seven inertial sensors 3 aligned in the same posture, each inertial sensor 3 can be attached to a belt in a predetermined posture. At the time of this initialization, the attitude angles of all the inertial sensors take a fixed attitude on the spatial (fixed) coordinate system, and in general the attitude is made consistent with the spatial coordinate system.
  • After one inertial sensor 3 was attached to each of these belts, subject 1B1 attached the belts to the segments (parts) of his waist, left and right thighs, left and right shins, and left and right legs. Thereby, as shown in FIG. 5, an inertial sensor 3 is fixed at a specific position of each segment.
  • Hereinafter, the inertial sensors 3 fixed to the segments of the waist, left thigh, right thigh, left shin, right shin, left leg, and right leg are referred to as "inertial sensor 31", "inertial sensor 32", ..., "inertial sensor 37", respectively.
  • The inertial sensors 31, 32, ..., 37 are given identification numbers in advance. In this embodiment, it is assumed that the inertial sensors 31, 32, ..., 37 are given the identification numbers "1", "2", ..., "7", respectively. It can be said that the identification numbers "1", "2", ..., "7" identify the waist, left thigh, right thigh, left shin, right shin, left leg, and right leg, respectively.
  • Subject 1B1 stands straight with the left hip joint, the right hip joint, the left knee joint, and the right knee joint extended so that the respective angles are zero degrees. It is preferable to stand with the back of the head, back, buttocks, and calves against a vertical wall. This state of standing straight is the reference state.
  • the operator uses the sensor application 41 to calibrate the inertial sensors 31 to 37 when the subject 1B1 is in the reference state.
  • Attitude is represented by quaternions, but may also be represented by Euler angles. A case where a posture is represented by a quaternion will be described below.
  • the traveling direction (frontal direction) of the subject 1B1 can be identified by the positions and postures of these seven segments, but it may also be identified by making the subject 1B1 walk a few steps.
  • the position and posture in the reference state of each of the seven segments such as the waist are set in the learning inference device 2.
  • Subject 1B1 then continues walking for about 10 minutes.
  • The inertial sensors 31 to 37 measure the acceleration and angular velocity of each of the three axes at a sampling rate of 100 Hz during walking, and calculate the quaternion of the posture of the inertial sensor itself. Through the inertial sensors 31 to 37 and the sensor application 41, the acceleration, angular velocity, and quaternion of the waist, left thigh, right thigh, left shin, right shin, left leg, and right leg at each time (every 1/100 second) are obtained. The accelerations and angular velocities are input to the shift window value extraction unit 401 (see FIG. 3) as values in the moving coordinate system (local coordinate system) of each inertial sensor 3, and the quaternions are input to the joint angle calculation unit 402.
  • Hereinafter, the accelerations at time t along the three axes of the moving coordinate system of the segment whose identification number is "s" are described as "acceleration ax_s_t", "acceleration ay_s_t", and "acceleration az_s_t"; the angular velocities at time t are described as "angular velocity ωx_s_t", "angular velocity ωy_s_t", and "angular velocity ωz_s_t"; and the quaternion is described as "quaternion Qs_t".
  • The time of the m-th measurement is represented as "m-1". Therefore, the times of the 1st, 2nd, 3rd, ... measurements are "0", "1", "2", ..., respectively.
  • While subject 1B1 walks, the acceleration and angular velocity of each of the seven segments, such as the waist, change.
  • the angles of the joints of subject 1B1 also change.
  • the hip acceleration and angular velocity change as shown in FIG. 6(A).
  • the angles of the left hip joint, right hip joint, left knee joint, and right knee joint change as shown in FIG. 6(B). Note that these angles are calculated by the joint angle calculator 402 as described later.
  • In the learning inference device 2, when the acceleration and angular velocity of each segment are input to the shift window value extraction unit 401 and the quaternion of each segment is input to the joint angle calculation unit 402 (#102 in FIG. 4), the following processing is performed.
  • The shift window value extraction unit 401 extracts the values belonging to the shift windows 4A (4A1, 4A2, ...) as follows (#103 to #106).
  • the u-th shift window 4A is a window whose range is from time "20(u-1)" to time "99+20(u-1)". Therefore, the first shift window 4A1 is a window covering time 0 to 99 as shown in FIG. 6(A).
  • the second shift window 4A2 is a window that covers a range shifted 0.2 seconds to the right from the shift window 4A1, that is, the range from time 20 to 119.
  • the third and subsequent shift windows 4A are windows in a range shifted rightward from the previous shift window 4A by 0.2 seconds.
  • the shift window value extraction unit 401 sets the first shift window 4A as the current window (#103, #104). Accelerations and angular velocities that fall within the current window are extracted from the accelerations and angular velocities received from the inertial sensor 31 (#105). That is, first, the acceleration and angular velocity that fall within the shift window 4A1 are extracted.
  • the extracted acceleration and angular velocity are represented by a 100x6 matrix.
  • For example, the acceleration and angular velocity extracted from shift window 4A1 are represented by the matrix
$$B_1 = \begin{pmatrix} ax_{1\_0} & ay_{1\_0} & az_{1\_0} & \omega x_{1\_0} & \omega y_{1\_0} & \omega z_{1\_0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ ax_{1\_99} & ay_{1\_99} & az_{1\_99} & \omega x_{1\_99} & \omega y_{1\_99} & \omega z_{1\_99} \end{pmatrix}$$
  • the extracted matrix is hereinafter referred to as "motion matrix B".
  • The motion matrix B for the u-th time period (shift window 4A) is described as "motion matrix Bu", and the motion matrix B for the u-th time period of the v-th subject 1B is described as "motion matrix Bv_u".
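A minimal sketch of this shift-window extraction (#103 to #109), assuming the sensor stream is held as a NumPy array of shape (number of samples, 6) with one (ax, ay, az, ωx, ωy, ωz) row per 1/100-second step; the function name is illustrative.

```python
import numpy as np

def extract_motion_matrices(stream: np.ndarray,
                            width: int = 100,   # 1-second window at 100 Hz
                            shift: int = 20     # 0.2-second shift
                            ) -> list[np.ndarray]:
    """Return the motion matrices B1, B2, ... (each width x 6)."""
    matrices = []
    start = 0
    # Stop when a full 100 x 6 window can no longer be formed (No in #108).
    while start + width <= len(stream):
        matrices.append(stream[start:start + width])
        start += shift
    return matrices
```

With a width of 100 samples and a shift of 20 samples, consecutive motion matrices overlap by 0.8 seconds, matching the windows 4A1, 4A2, ... described above.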
  • the joint angle calculator 402 calculates the angle at the last time of the current window, that is, the angle at the right end of each of the left hip joint, right hip joint, left knee joint, and right knee joint (#106).
  • The joint angle calculator 402 calculates, as the angle of the left hip joint, the angle formed by the segments of the waist and the left thigh based on their quaternions at the last time of the current window. For example, the angle of the left hip joint at the last time of the first time period is calculated based on quaternions Q1_99 and Q2_99. Similarly, as the angle of the right hip joint, the angle formed by both segments is calculated based on the quaternions of the waist and the right thigh at the last time of the current window.
  • the joint angle calculator 402 calculates the angle formed by the segments of the left thigh and the left shin based on the quaternion at the last time of the current window, as the angle of the left knee joint.
  • the angle between the segments of the right thigh and the right shin is calculated based on the quaternion at the last time of the current window.
  • angle ⁇ HL The calculated angles of the left hip joint, right hip joint, left knee joint, and right knee joint are hereinafter referred to as “angle ⁇ HL”, “angle ⁇ HR”, “angle ⁇ KL”, and “angle ⁇ KR”.
  • angle ⁇ HL_u the angle of the u-th time period
  • angle ⁇ HR_u the angle of the u-th time period
  • angle ⁇ KL_u the angle of the u-th time period of v-th subject 1B
  • angle ⁇ HL_v_u each angle of the u-th time period of v-th subject 1B is described as “angle ⁇ HL_v_u”, “angle ⁇ HR_v_u”, “angle ⁇ KL_v_u”, and “angle ⁇ KR_v_u”.
  • the data set registration unit 403 stores the attribute vector C acquired in step #101, the motion matrix Bu extracted by the shift window value extraction unit 401 in step #105, the angle ⁇ HL_u calculated by the joint angle calculation unit 402 in step #106, A data set consisting of the angle ⁇ HR_u, the angle ⁇ KL_u, and the angle ⁇ KR_u is stored as the data set 50 in the data set storage unit 404 (#107).
  • Subsequently, the shift window value extraction unit 401, the joint angle calculation unit 402, and the data set registration unit 403 shift the current window to the right by 0.2 seconds (Yes in #108, #109) and repeat the processing of steps #104 through #107. However, if 100 × 6 accelerations and angular velocities can no longer be obtained after the shift, that is, if there is no continuation (No in #108), the collection of data sets from subject 1B1 ends.
  • the shift window value extraction unit 401, the joint angle calculation unit 402, and the data set registration unit 403 collect the data sets 50 from each subject 1B other than the subject 1B1 by the same method, and store them in the data set storage unit 404. (Yes in #110, #101 to #109).
  • FIG. 7 is a diagram illustrating an example of the flow of machine learning processing.
  • FIG. 8 is a diagram showing an example of the neural network 60.
  • The machine learning unit 405 (see FIG. 3) performs machine learning based on the data sets 50 stored in the data set storage unit 404 to generate a trained model for inferring the angles of the left hip joint, the right hip joint, the left knee joint, and the right knee joint. The machine learning method will be described below with reference to FIGS. 7 and 8.
  • the machine learning phase mainly includes a training step, a verification step, and a test step, as shown in FIG.
  • A neural network 60 as shown in FIG. 8 is prepared in advance in the machine learning unit 405.
  • The neural network 60 is composed of a first network 61, a second network 62, and a third network 63.
  • The first network 61 is an LSTM (Long Short-term Memory) neural network that calculates and outputs a feature matrix indicating the features of the motion matrix B extracted with the shift window 4A.
  • The third network 63 is a network consisting of one or more connected layers and outputs a row vector indicating the characteristics of the attribute vector C. In this embodiment, it is composed of one connected layer 63A.
  • The connected layer 63A has 32 × 4 weighting factors and 32 bias factors; when an attribute vector C is input, it outputs a 32 × 1 row vector indicating the characteristics of the attribute vector C.
  • the second network 62 is a network consisting of a plurality of fully-connected layers, and in this embodiment is composed of four fully-connected layers 62A-62D.
  • the feature matrix output from the first network 61 is expanded one-dimensionally (256 ⁇ 1) and input to the fully connected layer 62A.
  • Output values of all units of the fully connected layer 62A are input to each unit (node) of the fully connected layer 62B.
  • the output values (a 128 ⁇ 1 row vector) from fully connected layer 62B are merged with the output values from connected layer 63A into a 160 ⁇ 1 row vector. All values of the 160 ⁇ 1 row vector are then input to each unit of the fully connected layer 62C.
  • That is, the output values of all units of the connected layer 63A and the output values of all units of the fully connected layer 62B are input to each unit of the fully connected layer 62C. Further, the output values of all units of the fully connected layer 62C are input to each unit of the fully connected layer 62D.
  • the angles ⁇ HL, ⁇ HR, ⁇ KL, and ⁇ KR of the left hip joint, right hip joint, left knee joint, and right knee joint are output from each of the four units of the fully connected layer 62D.
  • a part of the dataset 50 stored in the dataset storage unit 404 is selected as training data and used for learning of the neural network 60.
  • A portion of the remaining data sets 50 is selected as validation data and used for tuning the neural network 60.
  • The rest are selected as test data and used for the final evaluation of the neural network 60.
  • 87% of the dataset 50 is selected as training data, 12% as validation data, and 1% as test data.
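A minimal sketch of this split; the random shuffling is an assumption, since the text does not say how the data sets are partitioned.

```python
import numpy as np

def split_datasets(datasets: list) -> tuple[list, list, list]:
    """Split into 87% training, 12% validation, and 1% test data."""
    idx = np.random.permutation(len(datasets))
    n_train = int(0.87 * len(datasets))
    n_val = int(0.12 * len(datasets))
    train = [datasets[i] for i in idx[:n_train]]
    val = [datasets[i] for i in idx[n_train:n_train + n_val]]
    test = [datasets[i] for i in idx[n_train + n_val:]]
    return train, val, test
```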
  • the machine learning unit 405 repeatedly executes the training step and the verification step alternately multiple times.
  • the neural network 60 is trained based on the data set 50 selected as training data.
  • the verification may be performed by the holdout method, the cross-validation method, or any other known method.
  • the machine learning unit 405 trains the neural network 60 as follows, for example, using a certain data set 50a.
  • The motion matrix B included in the data set 50a is input to the first network 61 as input data (explanatory variable data), and the attribute vector C is input to the third network 63 as input data. Then, the arithmetic processing of each layer of the first network 61, the second network 62, and the third network 63 is performed.
  • As a result, the angles θHL, θHR, θKL, and θKR of the left hip joint, right hip joint, left knee joint, and right knee joint are calculated and output from the fully connected layer 62D.
  • The machine learning unit 405 then adjusts each parameter (e.g., weighting factor or bias value) of the first network 61, the second network 62, and the third network 63 using the angles θHL, θHR, θKL, and θKR included in the data set 50a as teacher data (correct data). That is, each parameter is adjusted so that the difference between each calculated angle and the corresponding teacher angle becomes smaller.
  • the machine learning unit 405 trains the neural network 60 using the other data set 50 selected as training data in a similar manner.
  • the machine learning unit 405 verifies the output values (estimated values) of the trained neural network 60 using the data set 50 selected as verification data, and tunes the hyperparameters of this neural network 60.
  • In the second training step, the machine learning unit 405 trains the neural network 60 tuned in the first verification step by the method described above; in the second verification step, it tunes the neural network 60 trained in the second training step in the manner described above. From the third time onward, the training step and the verification step are processed in the same way.
  • a neural network 60 trained and tuned by repeating such a set of training steps and verification steps (that is, epochs) is hereinafter referred to as a "learned neural network 67". Then, in the test step, the final evaluation of the trained neural network 67 is performed using the data set 50 selected as test data.
  • If the final evaluation shows a certain accuracy, the machine learning unit 405 stores the trained neural network 67 as a trained model in the trained model storage unit 406. If it does not show a certain accuracy, each step can be continued with new data sets 50 until a certain accuracy is obtained. Alternatively, the width of the shift window 4A may be changed, the data sets 50 may be collected again, and the machine learning may be redone.
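A minimal training-step sketch for the procedure above; the mean squared error loss and the Adam optimizer are assumptions, since the text says only that the parameters are adjusted so that the differences between the calculated and teacher angles become smaller.

```python
import torch

def train_epoch(model, loader, optimizer):
    """One pass over the data sets 50 selected as training data."""
    loss_fn = torch.nn.MSELoss()
    model.train()
    for motion, attrs, teacher_angles in loader:
        optimizer.zero_grad()
        pred_angles = model(motion, attrs)          # thetaHL..thetaKR
        loss = loss_fn(pred_angles, teacher_angles)
        loss.backward()                             # adjust weights and biases
        optimizer.step()

# Usage sketch (each epoch alternates a training and a verification step):
# model = JointAngleNet()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# for epoch in range(num_epochs):
#     train_epoch(model, train_loader, optimizer)
#     validate_and_tune(model, val_loader)   # hypothetical verification step
```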
  • FIG. 9 is a diagram showing an example of the trained neural network 67.
  • The angles of the left hip joint, the right hip joint, the left knee joint, and the right knee joint while the inference target person 1A is walking can be estimated mainly with the trained neural network 67 and a single inertial sensor 3.
  • the inertial sensor 3 will be referred to as an "inertial sensor 38" to distinguish it from the inertial sensors 31-37. Also, it is assumed that the inertial sensor 38 is given "8" as an identification number.
  • the inertial sensor 38 is attached to the waist belt in a predetermined posture.
  • The inference target person 1A wears this belt so that the inertial sensor 38 is arranged at a specific position on his or her waist.
  • the inertial sensor 38 is fixed at a specific position on the waist of the inference target person 1A. This specific position is the same as the specific position during the preparation of the dataset.
  • the inference target person 1A inputs his or her own age, gender, weight, and height attributes to the learning inference device 2.
  • the vector representing the input attribute is described as "attribute vector C'”.
  • The inertial sensor 38 measures the acceleration and angular velocity of each of the three axes at a sampling rate of 100 Hz. Then, the inertial sensor 38 or the sensor application 41 obtains the hip acceleration and angular velocity at each time (every 1/100 second) in the moving coordinate system and inputs them to the joint angle inferring unit 407 (see FIG. 3).
  • the joint angle inference unit 407 performs inference processing. The procedure of this processing will be described below with reference to FIG.
  • the joint angle inferring unit 407 inputs the motion matrix B′ representing the hip acceleration and angular velocity at the time when the joint angle is to be inferred and the hip acceleration and angular velocity for the most recent second to the first network 61 of the trained neural network 67.
  • the attribute vector C' is input to the third network 63.
  • The motion matrix B' is a 100 × 6 matrix; with the identification number "8" of the inertial sensor 38, the motion matrix B' representing the acceleration and angular velocity at time t together with those of the most recent second is
$$B'_t = \begin{pmatrix} ax_{8\_{t-99}} & ay_{8\_{t-99}} & az_{8\_{t-99}} & \omega x_{8\_{t-99}} & \omega y_{8\_{t-99}} & \omega z_{8\_{t-99}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ ax_{8\_t} & ay_{8\_t} & az_{8\_t} & \omega x_{8\_t} & \omega y_{8\_t} & \omega z_{8\_t} \end{pmatrix}$$
  • The joint angle inference unit 407 obtains the angles θHL_t, θHR_t, θKL_t, and θKR_t from the fully connected layer 62D by performing the calculations of the layers constituting the first network 61, the second network 62, and the third network 63. These angles are the angles of the left hip joint, right hip joint, left knee joint, and right knee joint of the inference target person 1A at time t.
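The inference step can be sketched as follows: the most recent one second (100 samples at 100 Hz) of waist acceleration and angular velocity is kept in a rolling buffer, and the trained model is run at each time t once the buffer is full. The buffer handling and names are illustrative.

```python
import collections
import torch

buffer = collections.deque(maxlen=100)   # rolling 1-second window at 100 Hz

def on_sample(model, sample, attrs):
    """sample: (ax, ay, az, wx, wy, wz) at time t; attrs: attribute vector C'."""
    buffer.append(sample)
    if len(buffer) < 100:
        return None                       # not enough history yet
    motion = torch.tensor([list(buffer)], dtype=torch.float32)  # (1, 100, 6)
    attrs_t = torch.tensor([attrs], dtype=torch.float32)        # (1, 4)
    with torch.no_grad():
        angles = model(motion, attrs_t)[0]
    return angles   # thetaHL_t, thetaHR_t, thetaKL_t, thetaKR_t
```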
  • the inference result output unit 408 outputs the angles ⁇ HL_t, ⁇ HR_t, ⁇ KL_t, and ⁇ KR_t obtained by the joint angle inference unit 407 at each time t. For example, each angle is displayed on the display 27 as a line graph. Alternatively, table data indicating each angle is generated and transmitted to an external device.
  • the inference result output unit 408 generates an animation of a pedestrian in which the left hip joint, right hip joint, left knee joint, and right knee joint change to angles ⁇ HL_t, ⁇ HR_t, ⁇ KL_t, and ⁇ KR_t, respectively, as follows, It may be displayed on the display 27 .
  • Specifically, the inference result output unit 408 prepares a three-dimensional model of the pedestrian's human body. While the pedestrian is walking, the angles θHL_t, θHR_t, θKL_t, and θKR_t are obtained at each time t by the inertial sensor 38 and the like, and the three-dimensional model is changed so that its joints form these angles. Then, an image of the human body is generated by rendering the changed three-dimensional model and displayed on the display 27. This reproduces the animation.
  • the angles ⁇ HL_t, ⁇ HR_t, ⁇ KL_t, and ⁇ KR_t are obtained at a sampling rate of 100 Hz. may be changed.
  • the inference result output unit 408 may calculate the center of gravity of the inference target person 1A at each time t.
  • the center of gravity may be calculated by a known method. For example, since the software of the MTw Awinda system described above has a function of calculating the center of gravity, the software may be used to calculate the center of gravity.
  • the center of gravity may be calculated using AI technology, for example, as follows.
  • Various walking postures of a human having an average body shape are captured in advance, and the angle and center of gravity of each part (left hip joint, right hip joint, left knee joint, and right knee joint) in each posture are obtained.
  • a learned model for the center of gravity is generated in advance by performing machine learning using the angle of each part as explanatory variable data and the center of gravity as teacher data.
  • the inference result output unit 408 calculates the center of gravity at each time t by inputting the angles ⁇ HL_t, ⁇ HR_t, ⁇ KL_t, and ⁇ KR_t into this learned model while the inference target person 1A is walking.
  • a human with an average body shape is made to take various postures in advance, the posture is measured by motion capture, and the center of gravity in each posture is obtained.
  • a learned model for the center of gravity is generated in advance by performing machine learning using each posture as explanatory variable data and the center of gravity as teacher data.
  • the inference result output unit 408 calculates the center of gravity at each time t by inputting the posture of the three-dimensional model for the animation described above into this learned model.
  • The center of gravity is expressed as position coordinates on a reference coordinate system based on the foot on the ground. However, when both feet are on the ground, the foot that landed first is used as the reference. Also, an animation of the pedestrian may be displayed such that the position of the center of gravity is indicated by a marker such as a point.
  • the inference result output unit 408 may further detect and output variations based on changes in the center of gravity and posture during walking.
  • FIG. 10 is a flowchart for explaining an example of the overall processing flow of the joint angle estimation program 40.
  • the learning inference device 2 executes AI construction processing and joint angle inference processing based on the joint angle estimation program 40 according to the procedure shown in FIG.
  • The learning inference device 2 generates and stores data sets 50 in the data set preparation phase (#11 in FIG. 10). These processing methods are as described with reference to FIG. 4. A trained neural network 67 is then generated and stored by performing machine learning based on the data sets 50 (#12, #13). These processing methods are as described with reference to FIGS. 7 and 8.
  • When the learning inference device 2 acquires the hip acceleration, angular velocity, and attributes of the inference target person 1A (#14, #15), it estimates, based on this information and the trained neural network 67, the angles θHL_t, θHR_t, θKL_t, and θKR_t of the left hip joint, right hip joint, left knee joint, and right knee joint while the inference target person 1A is walking (#16) and outputs them (#17). These processing methods are as described with reference to FIG. 9. Furthermore, an animation showing the state of the inference target person 1A may be displayed based on the inference result, or the center of gravity may be calculated and its position shown along with the animation.
  • According to this embodiment, the data required to generate a trained model for estimating the four joint angles can be obtained using seven inertial sensors 3 of the same type as the only sensors.
  • Moreover, only one inertial sensor 3 needs to be attached to the inference target person 1A during inference. Therefore, it is possible to specify the joint angles while reducing the burden on the inference target person 1A more than before.
  • FIG. 11 is a diagram showing examples of MAE in each of the first to third patterns.
  • FIG. 12 is a diagram showing an example of MAE when the inertial sensor 38 is fixed to each part.
  • FIG. 13 is a diagram showing an example of a control method for the robotic prosthesis 71.
  • In this embodiment, the learning inference device 2 performs machine learning using, of the seven segments, only the hip acceleration and angular velocity as input data. However, the accelerations and angular velocities of a plurality of segments may be used as input data for machine learning.
  • For example, instead of the motion matrix B shown in FIG. 8, the following 100 × 18 matrix, in which the data for the waist, left leg, and right leg (identification numbers "1", "6", and "7") are arranged side by side in the column direction, may be used as input data:
$$B = \begin{pmatrix} ax_{1\_0} & \cdots & \omega z_{1\_0} & ax_{6\_0} & \cdots & \omega z_{6\_0} & ax_{7\_0} & \cdots & \omega z_{7\_0} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ ax_{1\_99} & \cdots & \omega z_{1\_99} & ax_{6\_99} & \cdots & \omega z_{6\_99} & ax_{7\_99} & \cdots & \omega z_{7\_99} \end{pmatrix}$$
  • In this case, the inertial sensor 38 is not only attached to the specific position on the waist of the inference target person 1A; one inertial sensor 3 is also attached to each specific position of the left leg and right leg, and the learning inference device 2 obtains the acceleration and angular velocity of the waist, left leg, and right leg with these three inertial sensors 3. By inputting the 100 × 18 matrix of these accelerations and angular velocities into the first network 61 (see FIG. 9) of the trained neural network 67, the angle of each of the four joints of the inference target person 1A is estimated.
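A minimal sketch of forming the 100 × 18 input from the three 100 × 6 windows, assuming NumPy arrays; the segments are simply arranged side by side in the column direction as described above.

```python
import numpy as np

def combine_segments(waist: np.ndarray,
                     left_leg: np.ndarray,
                     right_leg: np.ndarray) -> np.ndarray:
    """Arrange the waist, left-leg, and right-leg windows column-wise."""
    assert waist.shape == left_leg.shape == right_leg.shape == (100, 6)
    return np.hstack([waist, left_leg, right_leg])   # shape (100, 18)
```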
  • In this embodiment, the learning inference device 2 uses the four attributes of age, gender, weight, and height as input data for machine learning, but only some of them may be used. Alternatively, other attributes may be used, or machine learning may be performed without using the attribute vector C at all. In this case, the third network 63 is not provided in the neural network 60 (see FIG. 8), and each unit of the fully connected layer 62D of the second network 62 receives only the output values from the fully connected layer 62C.
  • In this embodiment, an LSTM neural network is used as the first network 61 (see FIGS. 8 and 9), but other forms of neural networks may be used. For example, a CNN (Convolutional Neural Network), a BLSTM (Bidirectional LSTM), or a WNN (Wavelet Neural Network) may be used.
  • In the case of a CNN, for example, the motion matrix B is converted into a grayscale image in machine learning and input to the first network 61. Likewise, the motion matrix B' is converted into a grayscale image in inference and input to the first network 61. Alternatively, a line-graph image such as that shown in the shift window 4A of FIG. 6(A) may be used.
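A minimal sketch of the grayscale conversion; the min-max scaling to 8-bit intensities is an assumption, since the text does not specify how the matrix values are mapped to pixels.

```python
import numpy as np

def to_grayscale(motion: np.ndarray) -> np.ndarray:
    """Convert a 100 x 6 motion matrix to a 100 x 6 grayscale image."""
    lo, hi = motion.min(), motion.max()
    scaled = (motion - lo) / (hi - lo + 1e-9)   # normalize to [0, 1]
    return (scaled * 255).astype(np.uint8)      # 8-bit intensities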
  • the accuracy of inference is examined based on the MAE (Mean Absolute Error) of inference for each of the following three patterns (1) to (3).
  • The first pattern is a pattern in which, for machine learning, the inertial sensors 3 acquire data on the waist, left and right thighs, left and right shins, and left and right legs, and, for inference, waist data is acquired by the inertial sensor 38. The attribute vector C is not used.
  • The second pattern is a modification of the first pattern that uses the attribute vector (age, gender, weight, and height) in both machine learning and inference.
  • the third pattern is a modification of the first pattern in which inertial sensors 3 on the left and right feet are added to perform inference.
  • In the third pattern, the results shown in FIG. 11(C) were obtained. According to these results, all MAEs are lower than in the case of the first pattern.
  • In this embodiment, the learning inference device 2 performs machine learning using the hip acceleration and angular velocity as input data, but machine learning may be performed using the acceleration and angular velocity of any one of the other segments as input data.
  • For example, when the acceleration and angular velocity of the left thigh (identification number "2") are used, instead of the motion matrix B shown in FIG. 8, the following 100 × 6 matrix may be used as input data:
$$B = \begin{pmatrix} ax_{2\_0} & ay_{2\_0} & az_{2\_0} & \omega x_{2\_0} & \omega y_{2\_0} & \omega z_{2\_0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ ax_{2\_99} & ay_{2\_99} & az_{2\_99} & \omega x_{2\_99} & \omega y_{2\_99} & \omega z_{2\_99} \end{pmatrix}$$
  • In inference, the inertial sensor 38 is fixed at a specific position of the left thigh instead of the specific position of the waist of the inference target person 1A, and the learning inference device 2 obtains the acceleration and angular velocity of the left thigh with the inertial sensor 38. By inputting these into the first network 61 (see FIG. 9) of the trained neural network 67, the angle of each of the four joints of the inference target person 1A is estimated.
  • FIG. 12 shows the MAE when machine learning and inference are performed by BLSTM with the inertial sensors 38 fixed to the waist, thighs, shins, and feet, respectively.
  • the MAE corresponding to the waist is the same as the MAE corresponding to BLSTM among the MAEs shown in FIG. 11(A).
  • According to FIG. 12, fixing the inertial sensor 38 to the thigh, shin, or foot provides better inference than fixing it to the waist, and the best results are obtained with the shin. However, since fixing the sensor to the waist is easier than fixing it to the shin, the fixing position may be selected according to the environment or purpose.
  • The learning inference device 2 may set the actual angle of the left hip joint at the time of calibration as a reference angle, and register the difference between the reference angle and the actual angle of the left hip joint during walking as the angle of the left hip joint in the data set 50. The same applies to the right hip joint, left knee joint, and right knee joint.
  • the learning inference device 2 generated a versatile trained neural network 67 by performing machine learning using the data set 50 collected from multiple subjects 1B.
  • the trained neural network 67 dedicated to this subject 1B may be generated by collecting the data set 50 from one specific subject 1B and performing machine learning.
  • In this case, there is no need to provide the third network 63 in the neural network 60, and only the output values from the fully connected layer 62C are input to each unit of the fully connected layer 62D of the second network 62.
  • A neural network may also be generated to infer the angles of the left and right ankle joints.
  • In this case, the learning inference device 2 acquires the angle of the left ankle joint as correct data based on the quaternions of the left shin and the left leg, acquires the angle of the right ankle joint as correct data based on the quaternions of the right shin and the right leg, and performs machine learning.
  • A neural network may also be generated to infer the angle at which the legs open in the front-back direction (the angle between the two legs).
  • In this case, the learning inference device 2 acquires, as correct data, the front-back leg-opening angle based on the quaternions of the left and right thighs, and performs machine learning.
  • a neural network may be generated to infer the anteroposterior angles of the left and right shoulder joints.
  • In this case, one inertial sensor 3 is fixed to each of the left and right upper arms; the angle of the left shoulder joint is acquired as correct data based on the quaternions of the waist and the left upper arm, the angle of the right shoulder joint is acquired as correct data based on the quaternions of the waist and the right upper arm, and machine learning is performed.
  • The learning inference device 2 can be used in various situations such as medical care, sports, and entertainment.
  • For example, the learning inference device 2 can be used for patient rehabilitation as follows.
  • The staff of the medical institution fixes the inertial sensor 38 to a specific position on the patient's waist and inputs the patient's age, gender, weight, and height attribute values to the learning inference device 2.
  • When the patient walks, the acceleration and angular velocity of the waist at each time (for example, every 1/100 second) in the moving coordinate system are obtained and input to the learning inference device 2.
  • Then, the learning inference device 2 estimates (infers) the angles θHL_t, θHR_t, θKL_t, and θKR_t of the patient's left hip joint, right hip joint, left knee joint, and right knee joint at each time t. Furthermore, the center of gravity is calculated. An animation is then displayed showing how the patient walks, and changes in the position of the center of gravity are also reproduced. Furthermore, variations in posture and center of gravity during walking are inspected and output.
  • the staff advises the patient on the preferred way to walk while watching the animation and variations in posture and center of gravity together with the patient.
  • The learning inference device 2 may also display an animation representing an ideal gait next to the animation of the patient.
  • a unique ID is assigned to each patient, and the patient's information (name, address, contact information, face photo, chart, information on artificial limbs or prosthetic limbs, rehabilitation history, signature of consent to disclaimer, or Code information for preventing falsification of such information, etc.) may be associated with the ID and managed in the learning inference device 2 .
  • the staff can support rehabilitation more efficiently and reliably than before.
  • the angle inferred by the learning inference device 2 may be used to control the prosthetic leg.
  • The control method will be described taking as an example a case where the inference target person 1A2 uses the robot prosthetic leg 71 for the left leg.
  • the robot prosthetic leg 71 corresponds to the area below the left knee of a human to the toe of the left foot, and has a joint in a portion corresponding to the ankle.
  • the joint has approximately the same range of motion as the ankle joint of a human left leg, and is extended and bent by a motor and a controller that controls the motor.
  • The inertial sensor 38 and the robot prosthetic leg 71 are fixed in advance at respective predetermined positions on the inference target person 1A2, and the age, sex, weight, and height attributes of the inference target person 1A2 are input to the learning inference device 2.
  • When the inference target person 1A2 walks, the acceleration and angular velocity of the waist at each time in the moving coordinate system are obtained and input to the learning inference device 2.
  • The learning inference device 2 estimates the angle of each joint at each time t by the method described above and transmits the estimated angle to the robot prosthetic leg 71 (see FIG. 13).
  • The controller of the robot prosthetic leg 71 controls the motor based on the transmitted angle (command angle). For example, the motor is controlled so that the joint of the robot prosthetic leg 71 forms the commanded angle. By driving the joint of the robot prosthetic leg 71 in this way, natural walking becomes possible.
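The command path on the prosthesis side can be sketched as follows; the controller interface and the proportional drive are assumptions, since the text says only that the controller drives the motor so that the joint forms the commanded angle.

```python
class AnkleController:
    """Hypothetical interface to the motor and angle sensor of the joint."""
    def __init__(self):
        self.angle_deg = 0.0                     # current joint angle (stub)

    def read_joint_angle(self) -> float:
        return self.angle_deg

    def set_motor_velocity(self, velocity_deg_per_s: float, dt: float = 0.01):
        # Stub dynamics: integrate the commanded velocity over one 10 ms tick.
        self.angle_deg += velocity_deg_per_s * dt

K_P = 2.0   # proportional gain (illustrative)

def control_step(controller: AnkleController, command_angle_deg: float):
    """Drive the motor so the joint approaches the commanded angle."""
    error = command_angle_deg - controller.read_joint_angle()
    controller.set_motor_velocity(K_P * error)
```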
  • In this embodiment, as input data paired with a joint angle (correct data), the learning inference device 2 performed machine learning using the acceleration and angular velocity at each time in the most recent one second including the time when the joint angle was formed. However, machine learning may be performed using the acceleration and angular velocity at each time in the time period immediately after, or in the time periods both immediately before and immediately after.
  • For example, a 100 × 6 matrix of the acceleration and angular velocity from time t to time t+99 may be used as input data.
  • the acceleration and angular velocity at each time in the immediately following time zone are similarly input to the learned neural network 67 during inference.
  • the learning inference device 2 uses a shift window 4A with a time width of 1 second and a shift width of 0.2 seconds to acquire the data set 50.
  • a shift window with a duration of 1.2 seconds and a shift width of 0.5 seconds may be used.
  • In this case, a matrix of acceleration and angular velocity corresponding to the time width is used as the motion matrix B'. That is, when the time width of the shift window 4A is 1.2 seconds, a 120 × 6 matrix is used.
  • In this embodiment, the inertial sensors 31 and 38 are attached to a dedicated belt in a predetermined posture, and the belt is worn at a specific position on the waist of the subject 1B or the inference target person 1A, so that the inertial sensors 31 and 38 are fixed in a specific posture.
  • However, the inertial sensors 31 and 38 may be fixed slightly shifted or tilted from the specific position. Therefore, in each of machine learning and inference, calibration may be performed so that the positions and orientations of the inertial sensors 31 and 38 in the reference state are regarded as the initial positions.
  • the learning inference device 2 acquires the angles of the left hip joint, the right hip joint, the left knee joint, and the right knee joint based on the positions and orientations of the segments obtained by the inertial sensors 31-35.
  • a marker may be attached to each segment, the position and orientation of each segment may be specified by photographing the marker with one or more cameras, and each angle may be calculated. That is, it may be calculated by so-called optical motion capture.
  • one or more depth cameras may identify the three-dimensional position of each segment and calculate each angle.
  • each angle may be obtained by a mounting tool as described in Patent Document 2.
  • In this embodiment, a neural network for inferring joint angles during walking is generated, but neural networks may also be generated to infer angles during other actions such as running or skiing.
  • In this embodiment, a neural network for inferring the angles of human joints is generated as the trained neural network 67, but a neural network may also be generated to infer the joint angles of individuals other than humans that have bodies and limbs, such as dogs, cats, and horses.
  • each function shown in FIG. 3 is provided in the learning inference device 2, but may be distributed and provided in a plurality of devices.
  • For example, the shift window value extraction unit 401, the joint angle calculation unit 402, the data set registration unit 403, and the data set storage unit 404 are provided in a first computer; the machine learning unit 405 is provided in a second computer; and the trained model storage unit 406, the joint angle inference unit 407, and the inference result output unit 408 are provided in a third computer.
  • a first computer, a second computer, and a third computer are connected by a communication line such as the Internet, a LAN (Local Area Network), or a public line.
  • the first computer, the second computer, and the third computer perform the following processes based on the first computer program, the second computer program, and the third computer program, respectively.
  • The first computer transmits the data sets 50 stored in the data set storage unit 404 to the second computer along with a machine learning command. The second computer generates a trained neural network 67 by performing machine learning using the received data sets 50 and transmits it to the third computer. The third computer stores the trained neural network 67 in the trained model storage unit 406. The trained neural network 67 is then used to estimate the four joint angles of the inference target person 1A.
  • The overall configuration of the joint angle estimation system 1 and the learning inference device 2, the configuration of each part, the contents of the processing, the order of the processing, the configuration of the data sets, the range of the shift window 4A, and the like can be changed as appropriate in line with the spirit of the present invention.
  • Reference signs: 1 joint angle estimation system; 1B subject (learning target individual); 2 learning inference device (individual angle learning system, individual angle estimation device); 31 inertial sensor (first inertial sensor); 37 inertial sensor (second inertial sensor); 401 shift window value extraction unit (first acquisition unit); 402 joint angle calculation unit (acquisition means, second acquisition means); 405 machine learning unit (learning means); 407 joint angle inference unit (inference means); 50 data set (input data, correct data); 67 trained neural network (trained model)

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Dentistry (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

[Problem] To identify the angles of a subject's joints while reducing the load on the subject more than before. [Solution] This system is provided with: an inertial sensor 31 that is attached to a predetermined part of a subject 1B, who is the target of learning, in order to determine the acceleration and angular velocity of the predetermined part; inertial sensors 32, 33 that are attached to the left and right thighs of the subject 1B in order to determine the accelerations and angular velocities of the left and right thighs; and a learning estimation device 2. The learning estimation device 2 calculates, as correct angles, the angles formed between the predetermined part of the subject 1B and each of the left and right thighs on the basis of the determined accelerations and angular velocities, and generates a learned model by performing machine learning using the acceleration and angular velocity of the predetermined part of the subject 1B as input data and the correct angles as correct data. Subsequently, the acceleration and angular velocity of a predetermined part of an estimation target person 1A are input into the learned model to estimate the angles formed between the predetermined part of the estimation target person 1A and each of his or her left and right thighs.

Description

Joint angle learning estimation system, joint angle learning system, joint angle estimation device, joint angle learning method, and computer program
The present invention relates to technology for estimating the joint angles of individuals such as humans.
Technology that identifies human movements during walking is used in various fields such as healthcare, sports, virtual reality, and animation. The identified movements are used, for example, for patient diagnosis in the field of healthcare, for checking athletes' form in the field of sports, for reproducing a user's movements with an avatar in the field of virtual reality, and for reproducing a user's movements with a character in the field of animation.
The following methods are conventionally known for identifying movements during human walking. Markers are attached to a plurality of parts of the human body, and movement is identified by photographing the trajectory of each marker during walking with a video camera. Alternatively, sensors such as IMUs (Inertial Measurement Units) are attached instead of the markers, and movement is identified based on the acceleration and angular velocity measured by each sensor. Once the positions and postures of segments such as the hips, thighs, and shins are determined by these methods, the angles of the joints can be determined. For example, according to the method described in Patent Document 1, one inertial sensor is attached to each of the forearm and the upper arm, and the angle of the elbow joint is calculated based on the angular velocity and acceleration measured by each of these inertial sensors.
In addition, the following method has been proposed for acquiring the angles of joints during human walking. According to the method described in Patent Document 2, a wearing device having relatively rotatable arm portions, a waist attachment portion, and an angle sensor for detecting the relative rotation angle between the two is attached to the human body to identify the angles of the joints of the human body.
Patent Document 1: Japanese Patent Publication No. 2021-503340. Patent Document 2: JP 2018-134724 A.
As described above, several methods exist for identifying the angles of joints during human walking. However, each method requires attaching many sensors to the human body, wearing a large-scale attachment, or attaching many markers and shooting with a video camera. Thus, the conventional methods impose a burden on the subject whose joint angles are to be identified.
In view of these problems, an object of the present invention is to identify joint angles while reducing the burden on the subject compared with conventional methods.
An individual angle learning and estimation system according to one aspect of the present invention includes: a first inertial sensor attached to a predetermined part of a learning target individual in order to obtain the acceleration and angular velocity of that part; acquisition means for acquiring the joint angles of the learning target individual as correct angles; learning means for generating a learned model by performing machine learning using the acceleration and angular velocity of the predetermined part of the learning target individual as input data and the correct angles as correct data; a second inertial sensor attached to a predetermined part of an inference target individual in order to obtain the acceleration and angular velocity of that part; and estimation means for estimating the joint angles of the inference target individual by inputting the acceleration and angular velocity of the predetermined part of the inference target individual into the learned model.
The predetermined part is, for example, any of the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot. There may be a plurality of predetermined parts; for example, the accelerations and angular velocities of three locations, namely the waist, left foot, and right foot, may be obtained.
According to the present invention, joint angles can be identified with less burden on the subject than before.
Attaching the second inertial sensor to any of the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot, obtaining the acceleration and angular velocity, and estimating the joint angles of the inference target individual allows the joint angles to be identified more reliably. A plurality of second inertial sensors may be attached. In particular, attaching second inertial sensors to the waist, left foot, and right foot gives more accurate results.
FIG. 1 is a diagram showing an example of the overall configuration of a joint angle estimation system.
FIG. 2 is a diagram showing an example of the hardware configuration of a learning and inference device.
FIG. 3 is a diagram showing an example of the functional configuration of the learning and inference device.
FIG. 4 is a flowchart explaining an example of the flow of data set preparation processing.
FIG. 5 is a diagram showing an example of the mounting positions of a plurality of inertial sensors.
FIG. 6 is a diagram showing an example of changes in acceleration and angular velocity in the moving coordinate system of the waist inertial sensor and an example of changes in the angles of four joints of a subject.
FIG. 7 is a diagram showing an example of the flow of machine learning processing.
FIG. 8 is a diagram showing an example of a neural network.
FIG. 9 is a diagram showing an example of a trained neural network.
FIG. 10 is a flowchart explaining an example of the overall processing flow of a joint angle estimation program.
FIG. 11 is a diagram showing examples of MAE for each of the first to third patterns.
FIG. 12 is a diagram showing examples of MAE when an inertial sensor is fixed to each part.
FIG. 13 is a diagram showing an example of a method of controlling a robotic prosthetic leg.
[1. Overall system configuration]
FIG. 1 is a diagram showing an example of the overall configuration of the joint angle estimation system 1. FIG. 2 is a diagram showing an example of the hardware configuration of the learning and inference device 2. FIG. 3 is a diagram showing an example of the functional configuration of the learning and inference device 2.
The joint angle estimation system 1 shown in FIG. 1 is a system that uses AI (Artificial Intelligence) technology to estimate the joint angles of an inference target person 1A while walking, and is composed of a learning and inference device 2, a plurality of inertial sensors 3, and the like. The learning and inference device 2 and each inertial sensor 3 are connected wirelessly.
The learning and inference device 2 generates a learned model by machine learning and, based on the learned model and other information, estimates the joint angles of the inference target person 1A while the inference target person 1A is walking. The following description takes as an example the case where a personal computer is used as the learning and inference device 2.
As shown in FIG. 2, the learning and inference device 2 is composed of a main processor 20, a RAM (Random Access Memory) 21, a ROM (Read Only Memory) 22, an auxiliary storage device 23, a network interface 24, a serial interface 25, a wireless communication device 26, a display 27, a keyboard 28, a pointing device 29, and the like.
In the ROM 22 or the auxiliary storage device 23, computer programs such as a joint angle estimation program 40 and a sensor application 41 are installed in addition to the operating system.
The joint angle estimation program 40 implements functions such as the shift window value extraction unit 401, joint angle calculation unit 402, data set registration unit 403, data set storage unit 404, machine learning unit 405, learned model storage unit 406, joint angle inference unit 407, and inference result output unit 408 shown in FIG. 3.
The RAM 21 is the main memory of the learning and inference device 2. Computer programs such as the joint angle estimation program 40 and the sensor application 41 are loaded into the RAM 21 as appropriate.
The main processor 20 executes the computer programs loaded into the RAM 21. A GPU (Graphics Processing Unit), a CPU (Central Processing Unit), or the like is used as the main processor 20.
The network interface 24 communicates with other devices using protocols such as TCP/IP (Transmission Control Protocol/Internet Protocol). A NIC (Network Interface Card) or a Wi-Fi communication device is used as the network interface 24.
The serial interface 25 communicates with peripheral devices by a serial communication method. A board conforming to a standard such as USB (Universal Serial Bus) is used as the serial interface 25.
The wireless communication device 26 communicates with peripheral devices by short-range wireless communication. A board conforming to a standard such as Bluetooth is used as the wireless communication device 26.
The serial interface 25 and the wireless communication device 26 are used in particular to communicate with the inertial sensors 3, and are used selectively according to the standard of the inertial sensor 3.
The display 27 displays screens for entering commands or data, screens showing the results of computation by the main processor 20, and the like.
The keyboard 28 and the pointing device 29 are input devices with which the operator enters commands, data, and the like.
The inertial sensor 3 is a device that measures acceleration and angular velocity along each of three axes (X, Y, and Z) and is generally called an "IMU (Inertial Measurement Unit)" or "inertial measurement device". A commercially available IMU is used as the inertial sensor 3. The following description takes as an example the case where the IMUs provided with the MTw Awinda system from Xsens (Netherlands) are used.
MTw Awinda has the following capabilities. Multiple IMUs (up to 30) can measure acceleration and angular velocity while maintaining a synchronization accuracy of 1/100,000 of a second using a proprietary protocol. As described later, in this embodiment the joint angles are calculated or estimated every 1/100 of a second, so the measurement results of these IMUs can be used in almost exact synchronization. Each IMU can transmit its measurement results to a personal computer by ZigBee.
MTw Awinda also provides a receiver and manager software for personal computers. The receiver is a ZigBee-standard USB device for receiving the data transmitted from the IMUs. The manager software is a computer program for changing the settings and properties of the IMUs. The receiver is connected to the serial interface 25, and the manager software is installed in the auxiliary storage device 23 as the sensor application 41.
The processing of each part of the learning and inference device 2 shown in FIG. 3 and the processing of the inertial sensors 3 shown in FIG. 1 are described below, divided broadly into a data set preparation phase, a machine learning phase, and an inference phase.
[2. Data set preparation]
FIG. 4 is a flowchart explaining an example of the flow of the data set preparation processing. FIG. 5 is a diagram showing an example of the mounting positions of the inertial sensors 31 to 37. FIG. 6 is a diagram showing an example of changes in acceleration and angular velocity in the moving coordinate system of the waist inertial sensor 31 and an example of changes in the angles of four joints of subject 1B.
The shift window value extraction unit 401, joint angle calculation unit 402, and data set registration unit 403 of the learning and inference device 2 (see FIG. 3) collect information from a plurality of subjects 1B (1B1, 1B2, ...) of various sexes, ages, weights, and heights, based on operations by the operator of the joint angle estimation system 1, and generate data sets for machine learning according to the procedure shown in FIG. 4. The following describes, as an example, the case where information is collected from one subject 1B1.
The operator enters the age, sex, weight, and height of subject 1B1 as attributes into the learning and inference device 2 using the keyboard 28 or the pointing device 29. The data set registration unit 403 acquires an attribute vector C indicating these attributes (#101 in FIG. 4). Hereinafter, the attribute vector C of the v-th subject 1B is written "attribute vector Cv".
The operator further collects data on the state of the waist, left and right thighs, left and right shins, and left and right feet while subject 1B1 is walking, as follows.
The operator prepares seven inertial sensors 3, one waist belt, two thigh belts, two shin belts, and two foot belts. These belts are dedicated MTw Awinda accessories; after all seven inertial sensors 3 are first initialized while held in the same posture, each inertial sensor 3 is attached to a belt in a predetermined posture. During this initialization, the attitude angles of all the inertial sensors take on a common attitude in the spatial (fixed) coordinate system, generally one that coincides with the spatial coordinate system.
With one inertial sensor 3 attached to each belt, subject 1B1 wears the belts on his or her own waist, left and right thighs, left and right shins, and left and right feet (segments) in accordance with the MTw Awinda manual. As a result, an inertial sensor 3 is fixed at a specific position on each segment, as shown in FIG. 5.
Hereinafter, the inertial sensors 3 fixed to the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot segments are referred to as "inertial sensor 31", "inertial sensor 32", ..., "inertial sensor 37", respectively.
The inertial sensors 31, 32, ..., 37 are given identification numbers in advance. In this embodiment, the inertial sensors 31, 32, ..., 37 are given the identification numbers "1", "2", ..., "7", respectively. The identification numbers "1" through "7" can thus be said to identify the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot, respectively.
Once these inertial sensors 3 are fixed at the specific positions of the segments, subject 1B1 stands up straight so that the angles of the left hip joint, right hip joint, left knee joint, and right knee joint are each zero degrees. It is desirable to stand with the back of the head, back, buttocks, and calves against a vertical wall. This upright posture is the reference state.
The operator then calibrates the inertial sensors 31 to 37 using the sensor application 41 while subject 1B1 is in the reference state. This determines the position and posture, in the reference state, of each of the seven segments, namely the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot, and makes it possible to obtain the three-axis acceleration and angular velocity of each of these seven segments from the measurement results of the inertial sensors 31 to 37. Posture is represented by a quaternion, although it may also be represented by Euler angles; the following description assumes that posture is represented by a quaternion. The traveling direction (frontal direction) of subject 1B1 can be identified from the positions and postures of the seven segments, but it may instead be identified by having subject 1B1 walk a few steps.
After calibration is completed, the position and posture of each of the seven segments (the waist and so on) in the reference state are set in the learning and inference device 2. Subject 1B1 then continues walking for about ten minutes.
During walking, the inertial sensors 31 to 37 measure the acceleration and angular velocity along each of the three axes at a sampling rate of 100 Hz and calculate the quaternion of the sensor's own posture. Through the inertial sensors 31 to 37 and the sensor application 41, the acceleration, angular velocity, and quaternion of the waist, left thigh, right thigh, left shin, right shin, left foot, and right foot are obtained at each time (every 1/100 of a second). The accelerations and angular velocities are input to the shift window value extraction unit 401 (see FIG. 3) as values in each inertial sensor's own moving coordinate system (local moving coordinate system), and the quaternions are input to the joint angle calculation unit 402.
Hereinafter, the accelerations at time t along the three axes of the moving coordinate system of the segment with identification number s are written "acceleration ax_s_t", "acceleration ay_s_t", and "acceleration az_s_t"; the angular velocities at time t are written "angular velocity ωx_s_t", "angular velocity ωy_s_t", and "angular velocity ωz_s_t"; and the quaternion is written "quaternion Qs_t". The time of the m-th measurement is expressed as "m-1"; therefore, the times of the 1st, 2nd, 3rd, ... measurements are "0", "1", "2", and so on.
While subject 1B1 walks, the acceleration and angular velocity of each of the seven segments, such as the waist, change, as do the angles of the subject's joints. For example, the waist acceleration and angular velocity change as shown in FIG. 6(A), and the angles of the left hip joint, right hip joint, left knee joint, and right knee joint change as shown in FIG. 6(B). These angles are calculated by the joint angle calculation unit 402, as described later.
In the learning and inference device 2, when the acceleration and angular velocity of each segment are input to the shift window value extraction unit 401 and the quaternion of each segment is input to the joint angle calculation unit 402 (#102 in FIG. 4), the following processing is performed.
The shift window value extraction unit 401 extracts, from the input waist accelerations and angular velocities, the values belonging to each of the shift windows 4A (4A1, 4A2, ...) covering a plurality of time periods, as follows (#103 to #106).
The u-th shift window 4A covers the range from time 20(u-1) to time 99+20(u-1). Therefore, the first shift window 4A1 covers times 0 to 99, as shown in FIG. 6(A). The second shift window 4A2 covers the range shifted 0.2 seconds to the right of shift window 4A1, namely times 20 to 119. The third and subsequent shift windows 4A are likewise each shifted 0.2 seconds to the right of the preceding shift window 4A.
The shift window value extraction unit 401 sets the first shift window 4A as the current window (#103, #104) and extracts, from the accelerations and angular velocities received from the inertial sensor 31, those that fall within the current window (#105). That is, the accelerations and angular velocities falling within shift window 4A1 are extracted first.
The extracted accelerations and angular velocities are represented by a 100×6 matrix. For example, the accelerations and angular velocities extracted from shift window 4A1 are represented by the matrix (writing $a_{x,s,t}$ for acceleration ax_s_t and $\omega_{x,s,t}$ for angular velocity ωx_s_t, with segment 1 being the waist)

$$B_1 = \begin{bmatrix} a_{x,1,0} & a_{y,1,0} & a_{z,1,0} & \omega_{x,1,0} & \omega_{y,1,0} & \omega_{z,1,0} \\ a_{x,1,1} & a_{y,1,1} & a_{z,1,1} & \omega_{x,1,1} & \omega_{y,1,1} & \omega_{z,1,1} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ a_{x,1,99} & a_{y,1,99} & a_{z,1,99} & \omega_{x,1,99} & \omega_{y,1,99} & \omega_{z,1,99} \end{bmatrix}$$

Hereinafter, the extracted matrix is referred to as "motion matrix B". In particular, the motion matrix B for the u-th time period (shift window 4A) is written "motion matrix Bu", and the motion matrix B for the u-th time period of the v-th subject 1B is written "motion matrix Bv_u".
The joint angle calculation unit 402 calculates the angle of each of the left hip joint, right hip joint, left knee joint, and right knee joint at the last time of the current window, that is, at its right edge, as follows (#106).
It can be said that the angle of the left hip joint is the angle between the waist and the left thigh, and the angle of the right hip joint is the angle between the waist and the right thigh. The joint angle calculation unit 402 therefore calculates, as the angle of the left hip joint, the angle formed by the waist and left thigh segments based on their quaternions at the last time of the current window. For example, the angle of the left hip joint at the last time of the first time period is calculated based on quaternions Q1_99 and Q2_99. Similarly, as the angle of the right hip joint, the angle formed by the waist and right thigh segments is calculated based on their quaternions at the last time of the current window.
Likewise, it can be said that the angle of the left knee joint is the angle between the left thigh and the left shin, and the angle of the right knee joint is the angle between the right thigh and the right shin. The joint angle calculation unit 402 therefore calculates, as the angle of the left knee joint, the angle formed by the left thigh and left shin segments based on their quaternions at the last time of the current window, and, as the angle of the right knee joint, the angle formed by the right thigh and right shin segments based on their quaternions at the last time of the current window.
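The text does not spell out how the angle formed by two segments is computed from their quaternions. One common approach, sketched below under stated assumptions ((w, x, y, z) unit quaternions, and the total relative-rotation angle used as the joint angle; an implementation could instead project onto the flexion/extension axis), is to take the angle of the relative rotation between the two orientations.

```python
import numpy as np

def quat_conj(q):
    """Conjugate (inverse for a unit quaternion), (w, x, y, z) convention."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def joint_angle_deg(q_parent, q_child):
    """Angle of the relative rotation q_parent^-1 * q_child, in degrees."""
    q_rel = quat_mul(quat_conj(q_parent), q_child)
    w = min(abs(q_rel[0]), 1.0)  # clamp against rounding error
    return np.degrees(2.0 * np.arccos(w))
```

For example, joint_angle_deg(Q1_99, Q2_99) would give the left hip joint angle at the last time of the first window under these assumptions.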
Hereinafter, the calculated angles of the left hip joint, right hip joint, left knee joint, and right knee joint are written "angle θHL", "angle θHR", "angle θKL", and "angle θKR". In particular, the angles for the u-th time period are written "angle θHL_u", "angle θHR_u", "angle θKL_u", and "angle θKR_u", and the angles for the u-th time period of the v-th subject 1B are written "angle θHL_v_u", "angle θHR_v_u", "angle θKL_v_u", and "angle θKR_v_u".
The data set registration unit 403 stores in the data set storage unit 404, as a data set 50, the combination of the attribute vector C acquired in step #101, the motion matrix Bu extracted by the shift window value extraction unit 401 in step #105, and the angles θHL_u, θHR_u, θKL_u, and θKR_u calculated by the joint angle calculation unit 402 in step #106 (#107).
The shift window value extraction unit 401, joint angle calculation unit 402, and data set registration unit 403 then shift the current window 0.2 seconds to the right (Yes in #108, #109) and perform the processing of steps #104 to #107 again. However, if 100×6 accelerations and angular velocities can no longer be assembled after the shift, that is, if there is no continuation (No in #108), the collection of data sets from subject 1B1 ends.
Further, the shift window value extraction unit 401, joint angle calculation unit 402, and data set registration unit 403 collect data sets 50 from each subject 1B other than subject 1B1 by the same method and store them in the data set storage unit 404 (Yes in #110, #101 to #109).
Since the shift window 4A shifts by 0.2 seconds at a time, about 3,000 shift windows 4A appear over the ten minutes of walking per subject 1B. About 3,000 data sets 50 are therefore obtained from one subject 1B, and with, for example, 200 subjects 1B, about 600,000 data sets 50 are obtained.
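These counts follow directly from the window arithmetic (ignoring the few windows lost at the end of each recording):

$$\frac{10 \times 60\ \mathrm{s}}{0.2\ \mathrm{s/window}} \approx 3000\ \text{windows per subject}, \qquad 3000 \times 200 = 600{,}000\ \text{data sets}.$$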
[3. Machine learning]
FIG. 7 is a diagram showing an example of the flow of the machine learning processing. FIG. 8 is a diagram showing an example of the neural network 60.
The machine learning unit 405 (see FIG. 3) performs machine learning based on the data sets 50 stored in the data set storage unit 404, thereby generating a learned model for estimating the angles of the left hip joint, right hip joint, left knee joint, and right knee joint. The machine learning method is described below with reference to FIGS. 7 and 8.
As shown in FIG. 7, the machine learning phase mainly includes a training step, a validation step, and a test step.
A neural network 60 such as that shown in FIG. 8 is prepared in advance in the machine learning unit 405. The neural network 60 is composed of a first network 61, a second network 62, and a third network 63.
The first network 61 is an LSTM (Long Short-Term Memory) neural network that calculates and outputs a feature matrix representing the features of the motion matrix B extracted through the shift window 4A.
The third network 63 is a network consisting of one or more connected layers and outputs a vector representing the features of the attribute vector C. In this embodiment it consists of connected layer 63A. The connected layer 63A has 32×4 weight coefficients and 32 bias coefficients; when the attribute vector C is input, it outputs a 32×1 vector representing the features of the attribute vector C.
The second network 62 is a network consisting of a plurality of fully connected layers; in this embodiment it consists of four fully connected layers 62A to 62D. The feature matrix output from the first network 61 is flattened into one dimension (256×1) and input to fully connected layer 62A. The output values of all units of fully connected layer 62A are input to each unit (node) of fully connected layer 62B. The output of fully connected layer 62B (a 128×1 vector) is merged with the output of connected layer 63A to form a 160×1 vector, and all values of this 160×1 vector are input to each unit of fully connected layer 62C. That is, the output values of all units of connected layer 63A and of all units of fully connected layer 62B are input to each unit of fully connected layer 62C. Further, the output values of all units of fully connected layer 62C are input to each unit of fully connected layer 62D. The four units of fully connected layer 62D then output the angles ψHL, ψHR, ψKL, and ψKR of the left hip joint, right hip joint, left knee joint, and right knee joint, respectively.
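A minimal PyTorch sketch of this architecture is given below. The LSTM hidden size, the width of fully connected layer 62A, the width of layer 62C, and the use of the final LSTM hidden state as the flattened 256×1 feature are all assumptions; the text specifies only the 256, 128, 32, 160, and 4 dimensions called out above.

```python
import torch
import torch.nn as nn

class JointAngleNet(nn.Module):
    def __init__(self):
        super().__init__()
        # First network (61): LSTM over the 100x6 motion matrix B.
        self.lstm = nn.LSTM(input_size=6, hidden_size=256, batch_first=True)
        # Third network (63): connected layer 63A, 4 attributes -> 32 features.
        self.attr_fc = nn.Linear(4, 32)
        # Second network (62): fully connected layers 62A-62D.
        self.fc_a = nn.Linear(256, 256)      # 62A (output width assumed)
        self.fc_b = nn.Linear(256, 128)      # 62B -> 128x1 vector
        self.fc_c = nn.Linear(128 + 32, 64)  # 62C takes the merged 160x1 vector
        self.fc_d = nn.Linear(64, 4)         # 62D -> psi_HL, psi_HR, psi_KL, psi_KR

    def forward(self, motion, attrs):
        # motion: (batch, 100, 6) matrix B; attrs: (batch, 4) vector C
        _, (h_n, _) = self.lstm(motion)          # final hidden state, (1, batch, 256)
        feat = torch.relu(self.fc_a(h_n.squeeze(0)))
        feat = torch.relu(self.fc_b(feat))
        attr_feat = torch.relu(self.attr_fc(attrs))
        merged = torch.cat([feat, attr_feat], dim=1)  # 160 features
        return self.fc_d(torch.relu(self.fc_c(merged)))
```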
Some of the data sets 50 stored in the data set storage unit 404 are selected as training data and used to train the neural network 60. Some of the remaining data sets 50 are selected as validation data and used to tune the neural network 60, and the rest are selected as test data and used for the final evaluation of the neural network 60. For example, 87% of the data sets 50 are selected as training data, 12% as validation data, and 1% as test data.
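For illustration, the split could be implemented as below; shuffling before the split is an assumption, since the text does not state how the three subsets are drawn.

```python
import random

def split_dataset(datasets, seed=0):
    """Split into ~87% training, ~12% validation, ~1% test data."""
    items = list(datasets)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * 0.87)
    n_val = int(len(items) * 0.12)
    return (items[:n_train],                 # training data
            items[n_train:n_train + n_val],  # validation data
            items[n_train + n_val:])         # test data
```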
As shown in FIG. 7, the machine learning unit 405 repeats the training step and the validation step alternately a plurality of times. In the training step, the neural network 60 is trained based on the data sets 50 selected as training data. The validation may be performed by the holdout method, by cross-validation, or by any other known method.
The machine learning unit 405 trains the neural network 60 using a given data set 50a, for example, as follows. The motion matrix B included in the data set 50a is input to the first network 61 as input data (explanatory variable data), the attribute vector C is input to the third network 63 as input data, and the computation of each layer of the first network 61, second network 62, and third network 63 is carried out. As a result, the angles ψHL, ψHR, ψKL, and ψKR of the left hip joint, right hip joint, left knee joint, and right knee joint are calculated and output from fully connected layer 62D.
The machine learning unit 405 then uses the angles θHL, θHR, θKL, and θKR included in the data set 50a as teacher data (correct data) and adjusts the parameters (for example, the weight coefficients and bias values) of the first network 61, second network 62, and third network 63. That is, each parameter is adjusted so that the difference between angle ψHL and angle θHL, the difference between angle ψHR and angle θHR, the difference between angle ψKL and angle θKL, and the difference between angle ψKR and angle θKR become small.
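Reusing the JointAngleNet sketch above, one training step could look as follows. The optimizer and the L1 (mean absolute difference) loss are assumptions, since the text says only that the parameters are adjusted so that the four differences become small.

```python
import torch

model = JointAngleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.L1Loss()  # mean absolute difference over the four joints

def train_step(motion, attrs, target_angles):
    """motion: (batch, 100, 6); attrs: (batch, 4);
    target_angles: (batch, 4) = theta_HL, theta_HR, theta_KL, theta_KR."""
    optimizer.zero_grad()
    pred = model(motion, attrs)          # psi_HL, psi_HR, psi_KL, psi_KR
    loss = loss_fn(pred, target_angles)
    loss.backward()                      # backpropagate the angle errors
    optimizer.step()                     # adjust weights and biases
    return loss.item()
```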
The machine learning unit 405 trains the neural network 60 in the same manner using the other data sets 50 selected as training data.
In the validation step, the machine learning unit 405 verifies the output values (estimates) of the trained neural network 60 using the data sets 50 selected as validation data, and tunes the hyperparameters of the neural network 60.
In the second training step, the machine learning unit 405 trains, by the method described above, the neural network 60 tuned in the first validation step; in the second validation step, it tunes the neural network 60 trained in the second training step in the same way. The training and validation steps are processed in the same manner from the third round onward.
Hereinafter, the neural network 60 trained and tuned by repeating such a set of training and validation steps (that is, epochs) a plurality of times is referred to as the "trained neural network 67". In the test step, the final evaluation of the trained neural network 67 is performed using the data sets 50 selected as test data.
If the trained neural network 67 has a certain level of accuracy, the machine learning unit 405 stores it in the learned model storage unit 406 as the learned model. If it does not, further new data sets 50 may be prepared and the steps continued until the required accuracy is obtained; alternatively, the width of the shift window 4A may be changed, the data sets 50 collected again, and the machine learning redone.
[4. Inference]
FIG. 9 is a diagram showing an example of the trained neural network 67.
When machine learning is completed, the angles of the left hip joint, right hip joint, left knee joint, and right knee joint of the inference target person 1A while walking can be estimated mainly with the trained neural network 67 and a single inertial sensor 3. Hereinafter, this inertial sensor 3 is referred to as "inertial sensor 38" to distinguish it from the inertial sensors 31 to 37, and it is assumed that the inertial sensor 38 is given "8" as its identification number.
As with the inertial sensors 31 to 37, the inertial sensor 38 is attached to a belt, here the waist belt, in a predetermined posture. The inference target person 1A wears this belt so that the inertial sensor 38 is placed at a specific position on his or her own waist; the inertial sensor 38 is thereby fixed at that specific position. This specific position is the same as the specific position used during data set preparation.
The inference target person 1A enters his or her own age, sex, weight, and height attributes into the learning and inference device 2. Hereinafter, the vector representing the entered attributes is written "attribute vector C'".
When the inference target person 1A starts walking, the inertial sensor 38 measures the acceleration and angular velocity along each of the three axes at a sampling rate of 100 Hz. The waist acceleration and angular velocity at each time (every 1/100 of a second) in the moving coordinate system are then obtained through the inertial sensor 38 or the sensor application 41 and input to the joint angle inference unit 407 (see FIG. 3).
The joint angle inference unit 407 then performs the inference processing. The procedure of this processing is described below with reference to FIG. 9.
The joint angle inference unit 407 inputs to the first network 61 of the trained neural network 67 a motion matrix B' representing the waist acceleration and angular velocity at the time for which the joint angles are to be inferred together with those of the immediately preceding one second, and inputs the attribute vector C' to the third network 63. The motion matrix B' is a 100×6 matrix; the motion matrix B' representing the acceleration and angular velocity at time t and during the immediately preceding one second is (segment 8 being the inertial sensor 38 on the waist)

$$B' = \begin{bmatrix} a_{x,8,t-99} & a_{y,8,t-99} & a_{z,8,t-99} & \omega_{x,8,t-99} & \omega_{y,8,t-99} & \omega_{z,8,t-99} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ a_{x,8,t} & a_{y,8,t} & a_{z,8,t} & \omega_{x,8,t} & \omega_{y,8,t} & \omega_{z,8,t} \end{bmatrix}$$
The joint angle inference unit 407 then performs the computations of the layers constituting the first network 61, second network 62, and third network 63, thereby obtaining the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t from fully connected layer 62D. These are the angles of the left hip joint, right hip joint, left knee joint, and right knee joint of the inference target person 1A at time t.
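Under the same assumptions as the JointAngleNet sketch in Section 3, the per-time-step inference could look like this:

```python
import torch

def infer_angles(model, recent_samples, attrs):
    """recent_samples: (100, 6) array B' whose newest row is time t;
    attrs: (4,) attribute vector C' (age, sex, weight, height)."""
    model.eval()
    with torch.no_grad():
        motion = torch.as_tensor(recent_samples, dtype=torch.float32).unsqueeze(0)
        attr = torch.as_tensor(attrs, dtype=torch.float32).unsqueeze(0)
        psi = model(motion, attr).squeeze(0)
    return psi  # psi_HL_t, psi_HR_t, psi_KL_t, psi_KR_t
```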
The inference result output unit 408 outputs the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t obtained by the joint angle inference unit 407 for each time t. For example, each angle is displayed on the display 27 as a line graph, or table data indicating each angle is generated and transmitted to an external device.
Alternatively, the inference result output unit 408 may generate an animation of a pedestrian whose left hip joint, right hip joint, left knee joint, and right knee joint change to the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t, respectively, for example as follows, and display it on the display 27.
The inference result output unit 408 prepares a three-dimensional model of the pedestrian's human body. While the pedestrian walks, the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t are obtained at each time t via the inertial sensor 38 and the like, so the three-dimensional model is deformed so that the left hip joint, right hip joint, left knee joint, and right knee joint take those angles. An image of the human body is then generated by rendering the deformed three-dimensional model and displayed on the display 27, thereby reproducing the animation.
Although the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t are obtained via the inertial sensor 38 at a sampling rate of 100 Hz, they may be thinned out to match the refresh rate of the display 27 when deforming the three-dimensional model.
The inference result output unit 408 may also calculate the center of gravity of the inference target person 1A at each time t. The center of gravity may be calculated by a known method; for example, since the software of the MTw Awinda system described above has a function for calculating the center of gravity, that software may be used.
Alternatively, the center of gravity may be calculated by AI technology, for example as follows. Various walking postures of a human with an average build are captured in advance, and the angle of each joint (left hip joint, right hip joint, left knee joint, and right knee joint) and the center of gravity in each posture are obtained. A learned model for the center of gravity is generated in advance by performing machine learning using the joint angles as explanatory variable data and the center of gravity as teacher data.
The inference result output unit 408 then calculates the center of gravity at each time t by inputting the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t into this learned model while the inference target person 1A is walking.
Alternatively, a human with an average build may be made to take various postures in advance, the postures measured by motion capture, and the center of gravity in each posture obtained. A learned model for the center of gravity is then generated in advance by performing machine learning using each posture as explanatory variable data and the center of gravity as teacher data.
In this case, the inference result output unit 408 calculates the center of gravity at each time t by inputting the posture of the three-dimensional model used for the animation described above into this learned model.
The center of gravity is defined as the value of the position coordinates, in the reference coordinate system, of the foot that is on the ground. When both feet are on the ground, the foot that landed first is used as the reference. The pedestrian animation may also be displayed so that the position of the center of gravity is indicated by a marker such as a dot.
The inference result output unit 408 may further detect and output the variability of the center of gravity and of the posture based on their respective changes during walking.
[5. Overall processing flow and effects of this embodiment]
FIG. 10 is a flowchart explaining an example of the overall processing flow of the joint angle estimation program 40.
Next, the overall processing flow of the learning and inference device 2 is described with reference to the flowchart.
The learning and inference device 2 executes the AI construction processing and the joint angle inference processing based on the joint angle estimation program 40, according to the procedure shown in FIG. 10.
In the data set preparation phase, the learning and inference device 2 generates and stores the data sets 50 (#11 in FIG. 10); the method is as described with reference to FIG. 4. By performing machine learning based on the data sets 50, it generates and stores the trained neural network 67 (#12, #13); the method is as described with reference to FIGS. 7 and 8.
When the learning and inference device 2 acquires the waist acceleration, angular velocity, and attributes of the inference target person 1A (#14, #15), it estimates the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t of the left hip joint, right hip joint, left knee joint, and right knee joint of the inference target person 1A while walking, based on this information and the trained neural network 67 (#16), and outputs them (#17); the method is as described with reference to FIG. 9. An animation showing the state of the inference target person 1A may further be displayed based on the inference results, or the center of gravity may be calculated and its position shown together with the animation.
According to this embodiment, the data necessary to generate a learned model for estimating the four joint angles can be acquired by using seven inertial sensors 3 of the same type. This makes it possible to generate the learned model more efficiently than before and to identify the joint angles of an individual such as a human while walking. Moreover, only one inertial sensor 3 needs to be attached to the inference target person 1A during inference. The joint angles can therefore be identified with less burden on the inference target person 1A than before.
[6. Modifications]
FIG. 11 is a diagram showing examples of MAE for each of the first to third patterns. FIG. 12 is a diagram showing examples of MAE when the inertial sensor 38 is fixed to each part. FIG. 13 is a diagram showing an example of a method of controlling the robotic prosthetic leg 71.
In this embodiment, the learning and inference device 2 performed machine learning using only the waist acceleration and angular velocity among the seven segments as input data, but the accelerations and angular velocities of the left foot and the right foot may additionally be used as input data. In that case, instead of the motion matrix B shown in FIG. 8, a 100×18 matrix in which the waist, left-foot, and right-foot data are arranged side by side in the column direction is used as input data (with $a_{x,s,t}$ corresponding to ax_s_t, and segments 1, 6, and 7 being the waist, left foot, and right foot):

$$\begin{bmatrix} a_{x,1,0} & \cdots & \omega_{z,1,0} & a_{x,6,0} & \cdots & \omega_{z,6,0} & a_{x,7,0} & \cdots & \omega_{z,7,0} \\ \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ a_{x,1,99} & \cdots & \omega_{z,1,99} & a_{x,6,99} & \cdots & \omega_{z,6,99} & a_{x,7,99} & \cdots & \omega_{z,7,99} \end{bmatrix}$$
During inference, in addition to attaching the inertial sensor 38 at the specific position on the waist of the inference target person 1A, one inertial sensor 3 is attached at a specific position on each of the left foot and the right foot, and the learning and inference device 2 obtains the accelerations and angular velocities of the waist, left foot, and right foot from these three inertial sensors 3. By inputting the 100×18 matrix of these accelerations and angular velocities to the first network 61 of the trained neural network 67 (see FIG. 9), the angles of the four joints of the inference target person 1A are estimated, as sketched below.
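A one-line sketch of the column-wise stacking (the function name is assumed):

```python
import numpy as np

def stack_segments(waist, left_foot, right_foot):
    """Each argument is a (100, 6) motion matrix; returns the (100, 18) input."""
    return np.concatenate([waist, left_foot, right_foot], axis=1)
```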
In this embodiment, the learning and inference device 2 used the four attributes of age, sex, weight, and height as input data for machine learning, but any one to three of these attributes may be used, or other attributes may be used. Machine learning may also be performed without using the attribute vector C. In that case, the third network 63 is not provided in the neural network 60 (see FIG. 8), and each unit of fully connected layer 62D of the second network 62 receives only the output values from fully connected layer 62C.
In this embodiment, an LSTM neural network was used as the first network 61 (see FIGS. 8 and 9), but other types of neural network may be used, for example a CNN (Convolutional Neural Network), a BLSTM (Bidirectional LSTM), or a WNN (Wavelet Neural Network). When a CNN is used, the motion matrix B is converted into a grayscale image and input to the first network 61 during machine learning; likewise, the motion matrix B' is converted into a grayscale image and input to the first network 61 during inference. Instead of a grayscale image, an image of a line graph such as that shown inside the shift window 4A in FIG. 6(A) may be used.
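A hedged sketch of the grayscale conversion follows; min-max scaling to 8-bit pixels is an assumption, as the text says only that the matrix is converted into a grayscale image.

```python
import numpy as np

def to_grayscale(motion: np.ndarray) -> np.ndarray:
    """Convert a 100x6 motion matrix into a 100x6 grayscale 'image'."""
    lo, hi = motion.min(), motion.max()
    scaled = (motion - lo) / (hi - lo + 1e-12)  # guard against a flat matrix
    return (scaled * 255.0).astype(np.uint8)
```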
Here, the inference accuracy is examined based on the MAE (Mean Absolute Error) of inference for each of the following three patterns (1) to (3).
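For reference, for N inferred angles $\psi_i$ with reference angles $\theta_i$, the MAE is

$$\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| \psi_i - \theta_i \right|,$$

that is, the average absolute error of the inferred joint angles.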
(1) First pattern
In the first pattern, data on the waist, left and right thighs, left and right shins, and left and right feet are acquired with the seven inertial sensors 3 (31 to 37) for machine learning, and waist data are acquired with the inertial sensor 38 for inference. The attribute vector C is not used.
Experiments with the first pattern using WNN, LSTM, BLSTM, and CNN neural networks yielded the results shown in FIG. 11(A). According to these results, which of the four networks achieves the better MAE differs depending on the inference target, but the MAE generally falls within the range of 6 to 9. Overall, however, BLSTM appears to give the highest accuracy.
(2) Second pattern
The second pattern is the first pattern modified to use the attribute vector (age, sex, weight, and height) in both machine learning and inference. Experiments with the second pattern using each of the four neural networks yielded the results shown in FIG. 11(B). According to these results, the total MAE is lower than in the first pattern, showing that a certain benefit is obtained by using the attribute vector.
(3) Third pattern
The third pattern is the first pattern modified to add the inertial sensors 3 on the left and right feet for inference. Experiments with the third pattern using each of the four neural networks yielded the results shown in FIG. 11(C). According to these results, every MAE is lower than in the first pattern.
These experiments show that accuracy improves as more types of data are used, but preparing for inference also takes longer as more types of data are used. The pattern to use should therefore be chosen according to the environment or purpose.
In this embodiment, the learning and inference device 2 performed machine learning using the waist acceleration and angular velocity as input data, but the acceleration and angular velocity of any one of the left thigh, right thigh, left shin, right shin, left foot, and right foot may be used as input data instead. For example, when the acceleration and angular velocity of the left thigh are used as input data, a 100×6 matrix such as the following (segment 2 being the left thigh, with $a_{x,2,t}$ corresponding to ax_2_t) is used as input data instead of the motion matrix B shown in FIG. 8:

$$\begin{bmatrix} a_{x,2,0} & a_{y,2,0} & a_{z,2,0} & \omega_{x,2,0} & \omega_{y,2,0} & \omega_{z,2,0} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ a_{x,2,99} & a_{y,2,99} & a_{z,2,99} & \omega_{x,2,99} & \omega_{y,2,99} & \omega_{z,2,99} \end{bmatrix}$$
During inference, the inertial sensor 38 is fixed at a specific position on the left thigh of the inference target person 1A instead of on the waist, and the learning and inference device 2 obtains the acceleration and angular velocity of the left thigh from this inertial sensor 38. By inputting the 100×6 matrix of these accelerations and angular velocities to the first network 61 of the trained neural network 67 (see FIG. 9), the angles of the four joints of the inference target person 1A are estimated.
 Fig. 12 shows the MAE obtained when the inertial sensor 38 is fixed to the waist, thigh, shin, and foot, respectively, and machine learning and inference are performed with BLSTM. Note that, among the MAEs shown in Fig. 12, the MAE for the waist is the same as the MAE for BLSTM shown in Fig. 11(A).
 According to the MAEs shown in Fig. 12, fixing the inertial sensor 38 to the thigh, shin, or foot allows better inference than fixing it to the waist, with the shin giving the best results. However, since the sensor is easier to fix to the waist than to the shin, the mounting location can be chosen according to the environment or purpose.
 Incidentally, it may be difficult for the subject 1B to stand so that the angle of his or her left hip joint is exactly zero degrees. The learning inference device 2 may therefore set the actual angle of the left hip joint at the time of calibration as a reference angle and record, in the data set 50, the difference between the reference angle and the actual angle of the left hip joint during walking as the left hip joint angle. The same applies to the right hip joint, the left knee joint, and the right knee joint.
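 Expressed as code, this baseline correction is a simple subtraction (a sketch; the function and variable names are illustrative only):

```python
def relative_joint_angle(raw_angle_deg, reference_angle_deg):
    """Joint angle expressed relative to the calibration pose.

    reference_angle_deg is the actual angle measured while the subject
    stands in the calibration pose (not necessarily zero degrees).
    """
    return raw_angle_deg - reference_angle_deg

# e.g. left hip: 3.2 deg at calibration, 27.5 deg mid-stride
# -> 24.3 deg is recorded in the data set
angle_for_dataset = relative_joint_angle(27.5, 3.2)
```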
 In this embodiment, the learning inference device 2 generated a general-purpose trained neural network 67 by performing machine learning using data sets 50 collected from a plurality of subjects 1B. However, a trained neural network 67 dedicated to one specific subject 1B may be generated by collecting the data set 50 from that subject alone and performing machine learning. In this case as well, there is no need to provide the third network 63 in the neural network 60, and only the output values from the fully connected layer 62C are input to each unit of the fully connected layer 62D of the second network 62.
 In this embodiment, the case of generating a neural network (trained model) for inferring the hip joint angles and knee joint angles has been described as an example, but according to the present invention, neural networks for inferring the angles of other joints can also be generated.
 For example, a neural network for inferring the angles of the left and right ankle joints may be generated. In this case, in the data set preparation phase, the learning inference device 2 acquires the angle of the left ankle joint as correct data based on the quaternions of the left shin and the left foot, acquires the angle of the right ankle joint as correct data based on the quaternions of the right shin and the right foot, and then performs machine learning.
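 One plausible way to derive such a correct angle from two segment quaternions is to take the angle of their relative rotation, as sketched below (an assumption: the document does not spell out the formula, and an anatomical flexion angle would additionally project this rotation onto the joint's flexion axis):

```python
import numpy as np

def relative_rotation_angle_deg(q_parent, q_child):
    """Angle of the rotation between two unit quaternions (w, x, y, z).

    For the left ankle, q_parent and q_child would be the quaternions of
    the left shin and the left foot.
    """
    q_parent = np.asarray(q_parent, dtype=float)
    q_child = np.asarray(q_child, dtype=float)
    # For unit quaternions, the scalar part of q_parent^-1 * q_child equals
    # their dot product, which is cos(theta / 2); abs() handles the fact
    # that q and -q represent the same rotation.
    cos_half = abs(float(np.dot(q_parent, q_child)))
    return np.degrees(2.0 * np.arccos(np.clip(cos_half, 0.0, 1.0)))
```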
 Alternatively, a neural network for inferring the angle of leg spread in the front-back direction (the angle between the two legs) may be generated. In this case, in the data set preparation phase, the learning inference device 2 acquires the front-back leg-spread angle as correct data based on the quaternions of the left thigh and the right thigh, and then performs machine learning.
 Alternatively, a neural network for inferring the front-back angles of the left and right shoulder joints may be generated. In this case, one inertial sensor 3 is fixed to each of the left and right upper arms; the angle of the left shoulder joint is acquired as correct data based on the quaternions of the waist and the left upper arm, the angle of the right shoulder joint is acquired as correct data based on the quaternions of the waist and the right upper arm, and machine learning is then performed.
 The learning inference device 2 can be used in a variety of settings, such as medicine, sports, and entertainment. For example, it may be used for patient rehabilitation as follows.
 A staff member of a medical institution fixes the inertial sensor 38 at a specific position on the patient's waist and enters the patient's age, sex, weight, and height attribute values into the learning inference device 2. When the patient starts walking, the acceleration and angular velocity of the waist at each time (for example, every 1/100 of a second) in the moving coordinate system are obtained and input to the learning inference device 2.
 The learning inference device 2 then estimates (infers) the angles ψHL_t, ψHR_t, ψKL_t, and ψKR_t of the patient's left hip joint, right hip joint, left knee joint, and right knee joint at each time t by the method illustrated in Fig. 9, and further calculates the center of gravity. It then displays an animation of the patient walking, in which changes in the position of the center of gravity are also reproduced. In addition, it analyzes the variation in posture and in the center of gravity during walking and outputs the results.
 While viewing the animation and the variations in posture and center of gravity together with the patient, the staff member advises the patient on a preferable way of walking. The learning inference device 2 may further display an animation representing an ideal gait next to the animation of the patient.
 A unique ID may be assigned to each patient, and the patient's information (name, address, contact details, facial photograph, medical record, information on orthotic devices such as artificial limbs or prosthetic legs, rehabilitation history, a signed consent to a disclaimer, code information for preventing falsification of such information, and so on) may be associated with the ID and managed in the learning inference device 2. By referring to such information, the staff can support rehabilitation more efficiently and reliably than before.
 The angles inferred by the learning inference device 2 may also be used to control a prosthetic leg. Here, as shown in Fig. 13, the control method is described taking as an example the case where the inference target person 1A2 uses a robot prosthetic leg 71 for the left leg.
 The robot prosthetic leg 71 corresponds to the portion of a human leg from just below the left knee to the toes of the left foot, and has a joint at the portion corresponding to the ankle. The joint has approximately the same range of motion as the ankle joint of a human left leg, and is extended and flexed by a motor and a controller that controls the motor.
 In advance, the inertial sensor 38 and the robot prosthetic leg 71 are fixed at their respective predetermined positions on the inference target person 1A2, and the age, sex, weight, and height attributes of the inference target person 1A2 are input to the learning inference device 2.
 When the inference target person 1A2 starts walking, the acceleration and angular velocity of the waist at each time in the moving coordinate system are obtained and input to the learning inference device 2. The learning inference device 2 estimates the angle of each joint at each time t by the method illustrated in Fig. 9 and transmits the angle of the left ankle joint to the controller of the robot prosthetic leg 71.
 The controller of the robot prosthetic leg 71 then controls the motor based on the transmitted angle (command angle), for example so that the joint of the robot prosthetic leg 71 assumes the command angle. By driving the joint of the robot prosthetic leg 71 in this way, natural walking becomes possible.
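 A minimal command loop for this scheme might look as follows (purely illustrative: `estimator` and `controller` are assumed interfaces standing in for the learning inference device 2 and the motor controller of the robot prosthetic leg 71, which the document does not specify as APIs):

```python
import time

def ankle_command_loop(estimator, controller, period_s=0.01):
    """Send the inferred left ankle angle to the prosthesis each cycle."""
    while True:
        command_angle = estimator.latest_left_ankle_angle()  # degrees, inferred
        controller.set_target_angle(command_angle)  # motor servoes to this angle
        time.sleep(period_s)  # matches the 100 Hz sampling of the embodiment
```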
 In this embodiment, as shown in Figs. 6(A) and 6(B), the learning inference device 2 performed machine learning using, as the input data paired with a joint angle (correct data), the acceleration and angular velocity at each time during the most recent one second including the time at which that joint angle was formed. However, machine learning may instead be performed using the acceleration and angular velocity at each time in the time period immediately after, or in both the time periods immediately before and immediately after.
 That is, for example, when the time at which the joint angle was formed is T, a matrix of the following form may be used as input data, where a(t) and ω(t) denote the acceleration and angular velocity of the predetermined part at time t, sampled every 1/100 of a second over the one second immediately after T:

$$
\begin{pmatrix}
a_x(T) & a_y(T) & a_z(T) & \omega_x(T) & \omega_y(T) & \omega_z(T)\\
a_x(T+0.01) & a_y(T+0.01) & a_z(T+0.01) & \omega_x(T+0.01) & \omega_y(T+0.01) & \omega_z(T+0.01)\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
a_x(T+0.99) & a_y(T+0.99) & a_z(T+0.99) & \omega_x(T+0.99) & \omega_y(T+0.99) & \omega_z(T+0.99)
\end{pmatrix}
$$

 In this case, the acceleration and angular velocity at each time in the immediately following time period are likewise input to the trained neural network 67 during inference.
 Alternatively, a matrix of the following form, straddling time T with α rows before it and β rows from it onward, may be used as input data, where α + β = 100:

$$
\begin{pmatrix}
a_x(T-0.01\alpha) & \cdots & \omega_z(T-0.01\alpha)\\
\vdots & & \vdots\\
a_x(T) & \cdots & \omega_z(T)\\
\vdots & & \vdots\\
a_x(T+0.01(\beta-1)) & \cdots & \omega_z(T+0.01(\beta-1))
\end{pmatrix},
\qquad \alpha+\beta=100
$$

 In this case, the acceleration and angular velocity at each time in both the immediately preceding and the immediately following time periods are likewise input to the trained neural network 67 during inference.
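 The three windowing variants (before, after, and straddling) can be expressed with a single slicing helper, sketched here for the 100-row window of the embodiment (the half-open indexing convention and array layout are assumptions for illustration):

```python
def window_around(samples, t_index, alpha, beta):
    """Rows [t_index - alpha, t_index + beta) of a (num_samples, 6) array.

    alpha + beta must equal 100; alpha=100, beta=0 gives the 'immediately
    before' variant, alpha=0, beta=100 the 'immediately after' variant,
    and any other split a window straddling time T.
    """
    assert alpha + beta == 100
    assert t_index - alpha >= 0 and t_index + beta <= len(samples)
    return samples[t_index - alpha : t_index + beta]
```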
 In this embodiment, as shown in Fig. 6, the learning inference device 2 used, as the shift window 4A for acquiring the data set 50, a shift window with a time width of 1 second and a shift width of 0.2 seconds, but shift windows with other time widths or shift widths may be used. For example, a shift window with a time width of 1.2 seconds and a shift width of 0.5 seconds may be used. In that case, a matrix of accelerations and angular velocities whose number of rows corresponds to the time width is used as the motion matrix B' in inference; that is, when the time width of the shift window 4A is 1.2 seconds, a 120×6 matrix is used.
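 Cutting a recorded stream into such shift windows could be done as in this sketch (window and shift lengths are given in rows; the 1 s / 0.2 s setting of the embodiment corresponds to 100 and 20 rows at 100 Hz):

```python
def shift_windows(samples, window_rows=100, shift_rows=20):
    """Overlapping training windows over a (num_samples, 6) IMU recording.

    window_rows=120, shift_rows=50 reproduces the 1.2 s / 0.5 s variant,
    yielding 120x6 input matrices.
    """
    return [samples[start:start + window_rows]
            for start in range(0, len(samples) - window_rows + 1, shift_rows)]
```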
 In this embodiment, the inertial sensors 31 and 38 are attached in a predetermined orientation to a dedicated belt, and the belt is worn at a specific position on the waist of the subject 1B or the inference target person 1A, so that the inertial sensors 31 and 38 are fixed at the specific position in a specific orientation. However, the inertial sensors 31 and 38 may end up fixed slightly displaced or tilted from the specific position. Therefore, in both machine learning and inference, calibration may be performed so that the position and orientation of the inertial sensors 31 and 38 in a reference state are regarded as the initial position.
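 One common way to realize such a calibration is to estimate the rotation between the sensor's measured gravity direction in the reference state and the direction a perfectly mounted sensor would report, as in the sketch below (an assumption: the document only says that calibration may be performed, not how; note that gravity alignment alone does not correct heading about the vertical axis):

```python
import numpy as np

def mounting_correction(measured_gravity, nominal_gravity):
    """Rotation matrix aligning a tilted sensor with its nominal orientation.

    measured_gravity: mean accelerometer reading while the wearer stands
    still in the reference state; nominal_gravity: the reading expected
    from a perfectly mounted sensor. Uses the minimal rotation between
    the two directions (Rodrigues' formula).
    """
    u = measured_gravity / np.linalg.norm(measured_gravity)
    v = nominal_gravity / np.linalg.norm(nominal_gravity)
    axis = np.cross(u, v)
    s, c = np.linalg.norm(axis), float(np.dot(u, v))
    if s < 1e-9:  # already aligned; an antiparallel reading needs special handling
        return np.eye(3)
    k = axis / s
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```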
 In this embodiment, the learning inference device 2 acquired the angles of the left hip joint, right hip joint, left knee joint, and right knee joint based on the position and orientation of each segment obtained by the inertial sensors 31 to 35, but they may be acquired in other ways. For example, a marker may be attached to each segment, the position and orientation of each segment may be identified by photographing the markers with one or more cameras, and each angle may be calculated, that is, by so-called optical motion capture. Alternatively, the three-dimensional position of each segment may be identified with one or more depth cameras and each angle calculated. Alternatively, each angle may be obtained with a wearing device such as that described in Patent Document 2.
 In this embodiment, a neural network for inferring joint angles during walking was generated as the trained neural network 67, but neural networks for inferring the angles during other activities, such as jumping, running, climbing stairs, descending stairs, pedaling a bicycle, or skiing, can also be generated.
 In this embodiment, a neural network for inferring the angles of human joints was generated as the trained neural network 67, but for any animal having a torso and limbs, neural networks for inferring the joint angles of non-human individuals, such as dogs, cats, and horses, can also be generated.
 In this embodiment, the functions shown in Fig. 3 were all provided in the learning inference device 2, but they may be distributed over a plurality of devices. For example, the shift window value extraction unit 401, the joint angle calculation unit 402, the data set registration unit 403, and the data set storage unit 404 may be provided in a first computer; the machine learning unit 405 in a second computer; and the trained model storage unit 406, the joint angle inference unit 407, and the inference result output unit 408 in a third computer. The first, second, and third computers are connected by a communication line such as the Internet, a LAN (Local Area Network), or a public line, and perform the following processing based on a first, a second, and a third computer program, respectively.
 The first computer transmits the data set 50 stored in the data set storage unit 404 to the second computer together with a machine learning command. The second computer then generates the trained neural network 67 by performing machine learning using the received data set 50 and transmits it to the third computer. The third computer stores the trained neural network 67 in the trained model storage unit 406 and uses it to estimate the four joint angles of the inference target person 1A.
 In addition, the overall configuration and the configuration of each part of the joint angle estimation system 1 and the learning inference device 2, the content and order of the processing, the configuration of the data sets, the range of the shift window 4A, and the like can be modified as appropriate within the spirit of the present invention.
1 Joint angle estimation system
1B Subject (learning target individual)
2 Learning inference device (individual angle learning system, individual angle estimation device)
31 Inertial sensor (first inertial sensor)
37 Inertial sensor (second inertial sensor)
401 Shift window value extraction unit (first acquisition means)
402 Joint angle calculation unit (acquisition means, second acquisition means)
405 Machine learning unit (learning means)
407 Joint angle inference unit (estimation means)
50 Data set (input data, correct data)
67 Trained neural network (trained model)

Claims (12)

  1.  A joint angle learning estimation system comprising:
      a first inertial sensor attached to a predetermined part of a learning target individual to obtain the acceleration and angular velocity of the predetermined part;
      acquisition means for acquiring an angle of a joint of the learning target individual as a correct angle;
      learning means for generating a trained model by performing machine learning using the acceleration and angular velocity of the predetermined part of the learning target individual as input data and the correct angle as correct data;
      a second inertial sensor attached to a predetermined part of an estimation target individual to obtain the acceleration and angular velocity of the predetermined part; and
      estimation means for estimating an angle of a joint of the estimation target individual by inputting the acceleration and angular velocity of the predetermined part of the estimation target individual into the trained model.
  2.  A joint angle learning system comprising:
      first acquisition means for acquiring the acceleration and angular velocity of a predetermined part of a learning target individual;
      second acquisition means for acquiring an angle of a joint of the learning target individual as a correct angle; and
      learning means for generating a trained model by performing machine learning using the acceleration and angular velocity of the predetermined part as input data and the correct angle as correct data.
  3.  The joint angle learning system according to claim 2, wherein
      the learning means generates the trained model by performing the machine learning using, as the input data, the acceleration and angular velocity of the predetermined part in a time period of a predetermined length immediately before or immediately after the time at which the correct angle was formed.
  4.  The joint angle learning system according to claim 2 or claim 3, wherein
      the second acquisition means acquires the correct angle by calculating the angle formed between the predetermined part of the learning target individual and each of a plurality of predetermined segments among the segments constituting the limbs of the learning target individual, based on the acceleration and angular velocity of the predetermined part and the acceleration and angular velocity of each of the plurality of predetermined segments.
  5.  The joint angle learning system according to claim 4, wherein
      the plurality of predetermined segments include a left thigh and a right thigh,
      the second acquisition means acquires, as the correct angles, a left hip joint angle formed between the predetermined part and the left thigh and a right hip joint angle formed between the predetermined part and the right thigh, and
      the learning means generates the trained model by performing the machine learning using the left hip joint angle and the right hip joint angle as the correct data.
  6.  The joint angle learning system according to claim 5, wherein
      the plurality of predetermined segments further include a left shin and a right shin,
      the second acquisition means further acquires, as the correct angles, a left knee joint angle formed between the left thigh and the left shin and a right knee joint angle formed between the right thigh and the right shin, based on the acceleration and angular velocity of the left thigh and the acceleration and angular velocity of the right thigh, and
      the learning means generates the trained model by performing the machine learning further using the left knee joint angle and the right knee joint angle as the correct data.
  7.  The joint angle learning system according to any one of claims 2 to 6, wherein
      the learning means generates the trained model by performing the machine learning further using the age, sex, weight, or height of the learning target individual as the input data.
  8.  The joint angle learning system according to any one of claims 2 to 7, wherein
      the predetermined part is any one of a torso, a waist, a left thigh, a right thigh, a left shin, a right shin, a left foot, and a right foot.
  9.  A joint angle estimation device comprising:
      estimation means for estimating an angle of a joint of an estimation target individual by inputting the acceleration and angular velocity of a predetermined part of the estimation target individual into a trained model generated by the joint angle learning system according to any one of claims 2 to 7.
  10.  A joint angle learning method comprising:
      acquiring the acceleration and angular velocity of a predetermined part of a learning target individual using a first inertial sensor attached to the predetermined part;
      acquiring an angle of a joint of the learning target individual as a correct angle; and
      generating a trained model by causing a computer to perform machine learning using the acceleration and angular velocity of the predetermined part of the learning target individual as input data and the correct angle as correct data.
  11.  A computer program causing a computer to execute:
      a first acquisition process of acquiring the acceleration and angular velocity of a predetermined part of a learning target individual;
      a second acquisition process of acquiring an angle of a joint of the learning target individual as a correct angle; and
      a learning process of generating a trained model by causing a machine learning system to perform machine learning using the acceleration and angular velocity of the predetermined part as input data and the correct angle as correct data.
  12.  A computer program causing a computer to execute:
      a process of acquiring the acceleration and angular velocity of a predetermined part of an estimation target individual; and
      a process of estimating an angle of a joint of the estimation target individual by inputting the acquired acceleration and angular velocity of the predetermined part into a trained model generated by the joint angle learning system according to any one of claims 2 to 7.
PCT/JP2023/006719 2022-02-28 2023-02-24 Joint angle learning estimation system, joint angle learning system, joint angle estimation device, joint angle learning method, and computer program WO2023163104A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-029326 2022-02-28
JP2022029326 2022-02-28

Publications (1)

Publication Number Publication Date
WO2023163104A1 (en) 2023-08-31

Family

ID=87766142

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/006719 WO2023163104A1 (en) 2022-02-28 2023-02-24 Joint angle learning estimation system, joint angle learning system, joint angle estimation device, joint angle learning method, and computer program

Country Status (1)

Country Link
WO (1) WO2023163104A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010125287A (en) * 2008-12-01 2010-06-10 Gifu Univ Digital joint angle estimating device
JP2014033739A (en) * 2012-08-07 2014-02-24 Nippon Telegr & Teleph Corp <Ntt> Gait measuring apparatus, method and program
JP2014208257A (en) * 2014-06-11 2014-11-06 国立大学法人東北大学 Gait analysis system
JP2020157127A (en) * 2020-06-30 2020-10-01 ミネベアミツミ株式会社 Biological state determination device and biological state determination method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23758211

Country of ref document: EP

Kind code of ref document: A1