WO2023027046A1 - Program, Information Processing Device, and Information Processing Method - Google Patents
Program, Information Processing Device, and Information Processing Method
- Publication number
- WO2023027046A1 (PCT/JP2022/031632)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/22—Ergometry; Measuring muscular strength or the force of a muscular blow
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B69/00—Training appliances or apparatus for special sports
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
Definitions
- the present disclosure relates to a program, an information processing device, and an information processing method.
- Aerobic exercise plays a central role in, for example, diet and exercise therapy in cardiac rehabilitation.
- Fitness biking, jogging, walking, swimming, aerobics dancing, and the like are known as exercise events corresponding to aerobic exercise.
- fitness bikes have advantages such as being installable even in limited space at home and placing less burden on the knees.
- Fitness bike users can perform exercise similar to cycling by pedaling with their legs. The number of rotations of the user's legs is one index for evaluating the user's exercise load.
- Patent Document 1 describes changing the content of an image displayed on an HMD (Head Mounted Display) based on information based on the operator's rotation operation of the pedal section.
- a magnetic detection element in a pedal device detects rotation of a pedal portion per unit time and outputs the detection result to an information processing device.
- The technology of Patent Document 1 is premised on application to a pedal device that includes means for detecting rotation, such as a magnetic detection element, and means for outputting the detection result to an information processing device. In other words, Patent Document 1 does not consider how to obtain information on the number of rotations of the user's legs for an ordinary fitness bike that lacks such means.
- the purpose of the present disclosure is to estimate the number of rotations of human legs under various circumstances.
- a program causes a computer to function as means for acquiring a user moving image showing a user during exercise and means for estimating the number of rotations of the user's legs based on the user moving image.
- FIG. 1 is a block diagram showing the configuration of an information processing system according to the embodiment.
- FIG. 2 is a block diagram showing the configuration of the client device of the embodiment.
- FIG. 3 is a block diagram showing the configuration of the server of the embodiment.
- FIG. 4 is an explanatory diagram of the outline of the embodiment.
- FIG. 5 is a diagram showing the data structure of the teacher data set of the embodiment.
- FIG. 6 is a flowchart of information processing according to the embodiment.
- FIG. 7 is a diagram showing an example of a screen displayed in the information processing of the embodiment.
- FIG. 8 is a diagram showing the data structure of the teacher data set of Modification 1.
- FIG. 1 is a block diagram showing the configuration of the information processing system of this embodiment.
- the information processing system 1 includes a client device 10 and a server 30 .
- the client device 10 and server 30 are connected via a network (for example, the Internet or an intranet) NW.
- the client device 10 is an example of an information processing device that transmits requests to the server 30 .
- the client device 10 is, for example, a smart phone, a tablet terminal, or a personal computer.
- the server 30 is an example of an information processing device that provides the client device 10 with a response in response to a request sent from the client device 10 .
- the server 30 is, for example, a web server.
- FIG. 2 is a block diagram showing the configuration of the client device of this embodiment.
- the client device 10 includes a storage device 11, a processor 12, an input/output interface 13, and a communication interface 14.
- Client device 10 is connected to display 15 , camera 16 and depth sensor 17 .
- the storage device 11 is configured to store programs and data.
- the storage device 11 is, for example, a combination of ROM (Read Only Memory), RAM (Random Access Memory), and storage (eg, flash memory or hard disk).
- Programs include, for example, the following:
  - OS (Operating System) program
  - Program of an application that executes information processing (for example, a web browser, rehabilitation application, or fitness application)
- The data includes, for example, the following:
  - Databases referenced in information processing
  - Data obtained by executing information processing (that is, execution results of information processing)
- the processor 12 is a computer that implements the functions of the client device 10 by activating programs stored in the storage device 11 .
- Processor 12 is, for example, at least one of the following:
  - CPU (Central Processing Unit)
  - GPU (Graphics Processing Unit)
  - ASIC (Application Specific Integrated Circuit)
  - FPGA (Field Programmable Gate Array)
- the input/output interface 13 is configured to acquire information (e.g., user instructions, images, sounds) from input devices connected to the client device 10, and to output information (e.g., images, sounds, commands) to output devices connected to the client device 10.
- the input device is, for example, camera 16, depth sensor 17, microphone, keyboard, pointing device, touch panel, sensor, or a combination thereof.
- Output devices are, for example, display 15, speakers, or a combination thereof.
- Communication interface 14 is configured to control communication between client device 10 and an external device (eg, server 30).
- communication interface 14 may include a module for communication with server 30 (eg, a WiFi module, a mobile communication module, or a combination thereof).
- the display 15 is configured to display images (still images or moving images).
- the display 15 is, for example, a liquid crystal display or an organic EL display.
- the camera 16 is configured to take pictures and generate image signals.
- the depth sensor 17 is, for example, LIDAR (Light Detection And Ranging).
- the depth sensor 17 is configured to measure the distance (depth) from the depth sensor 17 to surrounding objects (eg, a user).
- FIG. 3 is a block diagram showing the configuration of the server of this embodiment.
- the server 30 includes a storage device 31, a processor 32, an input/output interface 33, and a communication interface 34.
- the storage device 31 is configured to store programs and data.
- Storage device 31 is, for example, a combination of ROM, RAM, and storage.
- Programs include, for example, the following:
  - OS program
  - Application program that executes information processing
- The data includes, for example, the following:
  - Databases referenced in information processing
  - Execution results of information processing
- the processor 32 is a computer that implements the functions of the server 30 by activating programs stored in the storage device 31 .
- Processor 32 is, for example, at least one of the following:
  - CPU
  - GPU
  - ASIC
  - FPGA
- the input/output interface 33 is configured to obtain information (eg, user instructions) from input devices connected to the server 30 and output information to output devices connected to the server 30 .
- Input devices are, for example, keyboards, pointing devices, touch panels, or combinations thereof.
- An output device is, for example, a display.
- the communication interface 34 is configured to control communication between the server 30 and an external device (eg, client device 10).
- FIG. 4 is an explanatory diagram of the outline of this embodiment.
- the camera 16 of the client device 10 captures the appearance (for example, the whole body) of the user US1 during exercise.
- Although the example of FIG. 4 shows the user US1 performing a pedaling exercise (e.g., on a fitness bike, ergometer, or bicycle), the user US1 may perform any exercise that involves leg rotation (i.e., periodic movement), whether aerobic or anaerobic.
- the camera 16 captures the appearance of the user US1 from the front or obliquely in front.
- the depth sensor 17 measures the distance (depth) from the depth sensor 17 to each part of the user US1. Note that three-dimensional video data can also be generated by combining, for example, the (two-dimensional) moving image data generated by the camera 16 with the depth data generated by the depth sensor 17.
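- As an illustration of how two-dimensional keypoints and depth data could be combined into three-dimensional coordinates, the following sketch back-projects a pixel through a pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are assumed values for illustration, not values from the disclosure:

```python
# Sketch: back-projecting a 2D skeleton keypoint to 3D using its depth.
# The pinhole intrinsics below (fx, fy, cx, cy) are assumed values.

def backproject(u, v, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Convert a pixel keypoint (u, v) and its depth in meters to camera-space XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a knee keypoint detected at pixel (400, 300), 1.5 m from the sensor.
knee_3d = backproject(400, 300, 1.5)
```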
- the client device 10 at least refers to the video data acquired from the camera 16 and analyzes the user's skeleton during exercise.
- the client device 10 may further refer to depth data acquired from the depth sensor 17 in order to better analyze the user's skeleton during exercise.
- the client device 10 transmits to the server 30 data related to the skeleton of the user US1 during exercise (hereinafter referred to as “user skeleton data”) based on the analysis results of the moving image data (or moving image data and depth data).
- the server 30 estimates the number of leg rotations of the user US1 by applying the learned model LM1 (an example of an "estimation model") to the acquired user skeleton data.
- the server 30 transmits an estimation result (for example, a numerical value indicating the number of leg rotations of the user US1 per unit time) to the client device 10 .
- the information processing system 1 estimates the number of rotations of the legs of the user US1 based on the moving image (or moving image and depth) of the user US1 during exercise. Therefore, according to the information processing system 1, even if the user US1 exercises using training equipment that includes neither means for detecting the number of leg rotations nor means for outputting a detection result, the number of rotations of the user's legs can be estimated. In other words, the number of rotations of human legs can be estimated under various conditions.
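- The disclosure performs this estimation with a trained model; purely to illustrate why a leg-keypoint trajectory carries rotation information, the following sketch counts rotations geometrically, as the unwrapped winding angle of the trajectory around its center (the function name and synthetic data are illustrative assumptions):

```python
import math

def count_rotations(xs, ys):
    """Rotation count of a keypoint (e.g., an ankle) that traces a roughly
    circular path while pedaling: the cumulative unwrapped angle around the
    path's center, divided by 2*pi."""
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    total = 0.0
    prev = math.atan2(ys[0] - cy, xs[0] - cx)
    for x, y in zip(xs[1:], ys[1:]):
        ang = math.atan2(y - cy, x - cx)
        d = ang - prev
        # Unwrap jumps across the -pi / +pi boundary.
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = ang
    return abs(total) / (2 * math.pi)

# Synthetic check: a keypoint completing 3 full circles, 40 samples per revolution.
ts = [i * 2 * math.pi / 40 for i in range(121)]
xs = [math.cos(t) for t in ts]
ys = [math.sin(t) for t in ts]
```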
- FIG. 5 is a diagram showing the data structure of the teacher data set of this embodiment.
- the teacher data set includes multiple teacher data.
- Teacher data is used to train or evaluate a target model.
- Teacher data includes a sample ID, input data, and correct answer data.
- a sample ID is information that identifies teacher data.
- Input data is the data that is input to the target model during training or evaluation.
- the input data correspond to the examples used when training or evaluating the target model.
- the input data includes skeletal data of the subject.
- the skeletal data of the subject is data (for example, feature values) relating to the skeletal structure of the subject during exercise.
- the subject may be the same person as the user whose leg rotation speed is estimated during operation of the information processing system 1, or may be a different person.
- Making the subject the same person as the user allows the target model to learn the user's individual characteristics, which may improve estimation accuracy.
- allowing the subject to be a different person from the user has the advantage of facilitating enrichment of the teacher data set.
- the subjects may be composed of multiple people including the user or multiple people not including the user.
- Skeletal data includes, for example, data on the velocity or acceleration of each part of the subject (which may include data on changes in the muscle parts used by the subject or data on sway of the subject's body).
- At least part of the skeletal data can be obtained by analyzing the skeletal structure of the exercising subject with reference to the subject video data (or subject video data and subject depth data).
- For skeleton analysis, skeleton detection algorithms such as Vision in the iOS® 14 SDK are available.
- skeletal data for the teacher data set can be obtained, for example, by having the subject perform exercise while wearing motion sensors on each part of the subject.
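- A minimal sketch of deriving the velocity and acceleration features mentioned above from per-frame keypoint positions by finite differences (the frame rate and sample values are assumptions):

```python
def finite_diff(series, dt):
    """Velocity and acceleration of one keypoint coordinate by finite
    differences -- a simple stand-in for the 'velocity or acceleration of
    each part' features in the skeletal data."""
    vel = [(b - a) / dt for a, b in zip(series, series[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    return vel, acc

# Assumed 30 fps footage, so dt = 1/30 s between frames.
positions = [0.0, 0.1, 0.3, 0.6]  # e.g., an ankle y-coordinate in meters
vel, acc = finite_diff(positions, 1 / 30)
```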
- Subject video data is data related to subject videos showing subjects in motion.
- a subject moving image is typically a moving image of the subject captured so that at least the subject's lower half of the body (specifically, the subject's legs) is included in the capturing range.
- Subject video data can be acquired, for example, by capturing the appearance (e.g., the whole body) of the subject during exercise from the front or obliquely in front (e.g., 45 degrees to the front) with a camera (for example, a camera mounted on a smartphone).
- Subject depth data is data on the distance (depth) from the depth sensor to each part (typically legs) of the subject during exercise.
- the subject depth data can be obtained by operating the depth sensor when capturing the subject moving image.
- the correct data is data corresponding to the correct answer for the corresponding input data (example).
- the target model is trained (supervised learning) to produce an output that is closer to the correct data for the input data.
- the correct answer data includes at least one of an evaluation index for the number of rotations of the legs and an index that serves as material for determining the evaluation index.
- the evaluation index for the number of leg rotations may include at least one of the following:
  - Cumulative number of rotations
  - Number of rotations per unit time (that is, rotation speed)
  - Time derivative of the rotation speed (that is, rotational acceleration)
- the leg rotation number index may be any index for quantitatively grasping the leg rotation (that is, periodic movement), and is not limited to the indices exemplified here.
- the leg rotation index may also include an index that can be calculated from the above indices, such as distance traveled (the product of the cumulative number of rotations and the distance traveled per pedal rotation) or exercise load.
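- The evaluation indices listed above could be derived from per-window rotation counts along the following lines (the distance per pedal rotation is an assumed, equipment-dependent figure):

```python
def evaluation_indices(rotation_counts, window_s, meters_per_rotation=5.0):
    """Derive the indices above from rotations counted in consecutive
    windows of `window_s` seconds. `meters_per_rotation` is an assumed,
    equipment-dependent figure."""
    cumulative = sum(rotation_counts)                               # cumulative rotations
    rpm = [c * 60.0 / window_s for c in rotation_counts]            # rotation speed (rpm)
    rot_accel = [(b - a) / window_s for a, b in zip(rpm, rpm[1:])]  # rpm change per second
    distance_m = cumulative * meters_per_rotation                   # distance traveled
    return cumulative, rpm, rot_accel, distance_m

cum, rpm, accel, dist = evaluation_indices([5, 6, 7], window_s=5)
```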
- the exercise load amount is an index for quantitatively evaluating the load of the exercise.
- Exercise load can be expressed numerically using at least one of the following:
  - Energy (calorie) consumption
  - Oxygen consumption
  - Heart rate
- Correct data can be obtained, for example, by actually measuring the number of rotations of the subject's legs with an appropriate sensor (for example, a cadence sensor) when recording the subject video.
- the correct data can also be obtained by having the subject exercise with a motion sensor (e.g., an acceleration sensor) attached to the leg and estimating the number of rotations of the subject's legs from the sensing results using a predetermined algorithm or a trained model. Alternatively, correct data may be provided by a person who measures the number of rotations of the subject's legs while viewing the subject video.
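- As a sketch of one such predetermined algorithm (not necessarily the one the disclosure contemplates), correct-answer rotation counts could be derived from a leg-mounted acceleration trace by counting one signal peak per pedal cycle:

```python
import math

def count_peaks(signal, threshold):
    """Count local maxima above a threshold: one peak per pedal cycle in a
    leg-mounted acceleration trace (the threshold would be tuned per sensor)."""
    peaks = 0
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if cur > threshold and cur > prev and cur >= nxt:
            peaks += 1
    return peaks

# Synthetic acceleration trace containing 4 pedal cycles.
trace = [math.sin(2 * math.pi * i / 25) for i in range(100)]
n_rotations = count_peaks(trace, threshold=0.5)
```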
- the estimation model used by the server 30 corresponds to a trained model created by supervised learning using the teacher data set (FIG. 5), or a derived model or distilled model of the trained model.
- FIG. 6 is a flow chart of information processing in this embodiment.
- FIG. 7 is a diagram showing an example of a screen displayed during information processing according to this embodiment.
- Information processing is started, for example, when one of the following start conditions is satisfied:
  - The information processing was called by another process.
  - The user performed an operation to call up the information processing.
  - The client device 10 entered a predetermined state (for example, a predetermined application was activated).
  - A specified date and time arrived.
  - A predetermined time elapsed after a predetermined event.
- the client device 10 performs sensing (S110). Specifically, the client device 10 enables the operation of the camera 16 to start shooting a video of the user exercising (hereinafter referred to as “user video”).
- a user moving image is typically a moving image of the user such that at least the user's lower half of the body (more specifically, the user's legs) is included in the shooting range.
- the client device 10 starts measuring the distance from the depth sensor 17 to each part of the user during exercise (hereinafter referred to as "user depth").
- the client device 10 executes data acquisition (S111). Specifically, the client device 10 acquires sensing results generated by various sensors enabled in step S110. For example, the client device 10 acquires user video data from the camera 16 and acquires user depth data from the depth sensor 17 .
- the client device 10 executes the request (S112). Specifically, the client device 10 refers to the data acquired in step S111 and generates a request. The client device 10 transmits the generated request to the server 30 .
- the request can include, for example, at least one of the following:
  - Data acquired in step S111 (for example, user video data or user depth data)
  - Data obtained by processing the data acquired in step S111
  - User skeleton data obtained by analyzing the user moving image data (or user moving image data and user depth data) acquired in step S111
- the server 30 performs an estimation regarding the number of leg rotations (S130). Specifically, the server 30 acquires the input data for the estimation model based on the request acquired from the client device 10. As with the teacher data, the input data includes user skeleton data. The server 30 estimates the number of rotations of the user's legs by applying the estimation model to the input data. As an example, the server 30 estimates at least one evaluation index relating to the number of rotations of the user's legs.
- the server 30 executes a response (S131). Specifically, the server 30 generates a response based on the estimation result in step S130.
- the server 30 transmits the generated response to the client device 10 .
- the response can include at least one of the following:
  - Data corresponding to the result of the estimation regarding the number of leg rotations
  - Data obtained by processing the result of the estimation regarding the number of leg rotations (for example, screen data to be displayed on the display 15 of the client device 10, or data referenced to generate the screen)
- the client device 10 executes information presentation (S113) after step S131. Specifically, the client device 10 causes the display 15 to display information based on the response acquired from the server 30 (that is, the result of estimating the number of rotations of the user's legs). However, the information may be presented, instead of or in addition to the user, to the user's instructor (e.g., medical personnel or a trainer) on the terminal the instructor uses. Alternatively, the information may be presented as content that enhances the user's exercise experience (e.g., scenery or video game footage controlled depending on the result of the estimation of the number of leg rotations). Such content may be presented via the display of an external device such as an HMD, or another output device, instead of the display 15.
- the client device 10 causes the display 15 to display a screen P10 (FIG. 7).
- Screen P10 includes a display object A10 and an operation object B10.
- the operation object B10 receives an operation of designating an evaluation index relating to the number of revolutions of the leg to be displayed on the display object A10.
- the operation object B10 corresponds to a check box.
- the display object A10 displays changes over time in the results of estimating the evaluation index.
- the display object A10 displays a graph showing, at five-second intervals, the change over time in the estimated rotation speed (rpm), the evaluation index specified in the operation object B10.
- the display object A10 may display graphs showing temporal changes in the estimation results for a plurality of evaluation indices superimposed on one another, or may display the graphs separately.
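- The five-second rotation-speed series graphed by the display object A10 could be computed from rotation-event timestamps along these lines (a sketch under assumed data shapes):

```python
def rpm_series(rotation_times, window_s=5.0):
    """Bucket rotation-event timestamps (seconds) into consecutive windows and
    convert each window's count to rotations per minute -- the kind of series
    the display object A10 graphs at five-second intervals."""
    if not rotation_times:
        return []
    n_windows = int(max(rotation_times) // window_s) + 1
    counts = [0] * n_windows
    for t in rotation_times:
        counts[int(t // window_s)] += 1
    return [c * 60.0 / window_s for c in counts]

# One rotation per second for 10 seconds -> 60 rpm in each 5-second window.
series = rpm_series([float(i) for i in range(10)])
```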
- After step S113, the client device 10 ends the information processing (FIG. 6). However, when estimating the number of rotations of the user's legs in real time while the user is exercising, the client device 10 may return to data acquisition (S111) after step S113.
- the information processing system 1 of the embodiment estimates the number of rotations of the user's legs based on the moving image of the user during exercise.
- Therefore, even if the user exercises using training equipment that includes neither means for detecting the number of leg rotations nor means for outputting a detection result, the number of rotations of the user's legs can be estimated. In other words, the number of rotations of human legs can be estimated under various conditions.
- the information processing system 1 may estimate the number of rotations of the user's legs by applying an estimation model to the input data based on the moving image of the user during exercise. This enables a statistical estimation of the number of rotations of the user's legs to be made in a short period of time.
- the estimation model may correspond to a trained model created by supervised learning using the aforementioned teacher data set (FIG. 5), or a derived or distilled model of the trained model. Thereby, an estimation model can be constructed efficiently.
- the input data to which the estimation model is applied may include data regarding the user's skeleton during exercise. This makes it possible to improve the accuracy of the estimation model.
- the input data to which the estimation model is applied may include data (that is, user depth data) regarding the depth from the reference point (that is, depth sensor 17) to each part of the user when the user moving image was captured. This makes it possible to improve the accuracy of the estimation model.
- the information processing system 1 may estimate at least one of the accumulated number of rotations of the user's legs, the rotational speed, the rotational acceleration, or the running distance converted from the accumulated number of rotations of the legs. This allows the user's leg rotation rate (which may include real-time rotation rate) to be properly evaluated.
- the user video may be a video of the user captured so that at least the user's lower half of the body (preferably the user's legs) is included in the capturing range. This makes it possible to improve the accuracy of the estimation model.
- the user video may be a video of the user pedaling. This makes it possible to improve the accuracy of the estimation model.
- the information processing system 1 may present information based on the result of the estimation regarding the number of rotations of the user's legs. This allows the user or the user's instructor to be informed about the number of rotations of the user's legs, or allows content (e.g., scenery or video game footage) to be controlled to enhance the user's exercise experience.
- the information processing system 1 may present an evaluation index of the number of rotations of the user's legs. This allows the receiver of the information to appropriately grasp the number of rotations of the user's legs.
- the information processing system 1 may present information regarding changes over time in the evaluation index of the number of rotations of the user's legs. This allows the recipient of the information to grasp the temporal change in the number of rotations of the user's legs.
- Modification 1 will be described. Modification 1 is an example of modifying the input data for the estimation model.
- Modification 1 is an example of estimating the number of rotations of the user's legs by applying an estimation model to input data based on both the user's moving image and the user's health condition.
- Health conditions include at least one of the following:
  - Age, gender, height, weight, body fat percentage, muscle mass, bone density, history of present illness, medical history, oral medication history, surgical history, lifestyle history (e.g., smoking history, drinking history, activities of daily living (ADL), frailty score, etc.)
  - Family history
  - Respiratory function test results
  - Test results other than respiratory function tests (e.g., blood tests, urine tests, electrocardiogram (including Holter electrocardiogram), echocardiography, X-ray, CT (including cardiac morphology CT and coronary artery CT), MRI, nuclear medicine, and PET examinations)
  - Data acquired during cardiac rehabilitation (including the Borg index)
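- As a sketch of how such health-condition data might be turned into estimation-model input alongside the skeletal features, under an assumed field set and encoding:

```python
def encode_health(record):
    """Turn a health-condition record into a numeric feature vector that could
    be concatenated with skeletal features as estimation-model input.
    The field set and encoding are illustrative assumptions only."""
    return [
        float(record["age"]),
        1.0 if record["sex"] == "F" else 0.0,  # gender as a binary flag
        float(record["height_cm"]),
        float(record["weight_kg"]),
        float(record.get("borg_index", 0)),    # from cardiac rehabilitation data, if any
    ]

features = encode_health({"age": 67, "sex": "F", "height_cm": 158, "weight_kg": 52})
```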
- FIG. 8 is a diagram showing the data structure of the teacher data set of Modification 1.
- the teacher data set of Modification 1 includes a plurality of teacher data.
- Teacher data is used to train or evaluate a target model.
- Teacher data includes a sample ID, input data, and correct answer data.
- the sample ID and correct answer data are as described in this embodiment.
- Input data is the data that is input to the target model during training or evaluation.
- the input data correspond to the examples used when training or evaluating the target model.
- the input data includes the subject's skeletal data (i.e., relatively dynamic data) and data on the subject's health condition (i.e., relatively static data).
- the subject's skeletal data are as described in the embodiment.
- Data on the health status of subjects can be obtained in various ways.
- the data regarding the subject's health condition may be obtained during the subject's exercise, or at any timing before or after exercise (including at rest).
- Data on the subject's health condition may be obtained based on a report from the subject or the attending physician, may be obtained by extracting information linked to the subject in a medical information system, or may be obtained via the subject's app (e.g., a healthcare app).
- In Modification 1, the estimation model used by the server 30 corresponds to a trained model created by supervised learning using the training data set (FIG. 8), or to a derivative or distilled model of that trained model.
- The information processing of Modification 1 will be described with reference to FIG. 6. The client device 10 performs sensing (S110) in the same manner as in FIG. 6.
- The client device 10 executes data acquisition (S111). Specifically, the client device 10 acquires the sensing results generated by the various sensors enabled in step S110. For example, the client device 10 acquires user video data from the camera 16 and user depth data from the depth sensor 17.
- The client device 10 also acquires data regarding the user's health condition (hereinafter, "user health data"). The client device 10 may acquire the user health data based on an operation (report) by the user or the attending physician, by extracting information linked to the user in a medical information system, or via an app used by the user (e.g., a healthcare app).
- The client device 10 may acquire the user health data at a timing different from step S111 (for example, before step S110, at the same timing as step S110, or after step S111).
- The client device 10 executes the request (S112). Specifically, the client device 10 generates a request with reference to the data acquired in step S111 and transmits the generated request to the server 30.
- The request can include, for example, at least one of the following:
  - The data acquired in step S111 (for example, user video data, user depth data, or user health data)
  - Data obtained by processing the data acquired in step S111
  - User skeleton data obtained by analyzing the user video data (or the user video data and the user depth data) acquired in step S111
- The server 30 performs estimation regarding the number of leg rotations (S130). Specifically, the server 30 acquires input data for the estimation model based on the request received from the client device 10. As with the training data, the input data includes user skeleton data and user health data. The server 30 performs estimation regarding the number of rotations of the user's legs by applying the estimation model to the input data. As an example, the server 30 estimates at least one evaluation index relating to the number of rotations of the user's legs.
- After step S130, the server 30 executes a response (S131) as in FIG. 6.
- After step S131, the client device 10 executes information presentation (S113) as in FIG. 6.
- As described above, the information processing system 1 of Modification 1 performs estimation regarding the number of rotations of the user's legs by applying the estimation model to input data based on both the user video and the user's health condition. This enables highly accurate estimation that further takes the user's health condition into account. For example, a reasonable estimate can be made even when the user's health condition differs from that of the subjects from whom the training data was derived.
- The storage device 11 may be connected to the client device 10 via the network NW. The display 15 may be built into the client device 10. The storage device 31 may be connected to the server 30 via the network NW.
- The information processing of the embodiment and Modification 1 can also be implemented by a stand-alone computer. For example, the client device 10 alone may estimate the number of leg rotations using the estimation model.
- Each step of the above information processing can be executed by either the client device 10 or the server 30. For example, the server 30 may acquire the user skeleton data by analyzing the user video (or the user video and the user depth data).
- An example of capturing the user video with the camera 16 of the client device 10 has been shown; however, the user video may be captured with a camera other than the camera 16. Likewise, an example of measuring the user depth with the depth sensor 17 of the client device 10 has been shown; the user depth may be measured with a depth sensor other than the depth sensor 17.
- The information processing system 1 of the embodiment and Modification 1 can also be applied to a video game in which the progress of the game is controlled according to the player's body movements (for example, the number of leg rotations). For example, the information processing system 1 may estimate the number of rotations of the user's legs during game play and, according to the result of the estimation, determine at least one of the following:
  - The quality (e.g., difficulty) or quantity of video-game tasks (e.g., stages, missions, quests) given to the user
  - The quality (e.g., type) or quantity of video-game rewards (e.g., in-game currency, items, bonuses) given to the user
- A microphone mounted on or connected to the client device 10 may pick up sound waves produced by the user while the user video is being captured (that is, while the user is exercising) and generate sound data. The sound data, together with the user skeleton data, may constitute input data for the estimation model. The sounds produced by the user include, for example, at least one of the following:
  - Sound waves produced by the rotation of the user's legs (e.g., sounds produced by the pedals or by a drive unit connected to the pedals)
  - Sounds accompanying the user's breathing or vocalization
- Acceleration data can also be used as part of the input data for the estimation model. Alternatively, the user's skeleton may be analyzed with reference to the acceleration data. The acceleration data can be obtained, for example, by having the user carry or wear the client device 10, or a wearable device equipped with an acceleration sensor, while the user video is being captured (that is, while the user is exercising).
- Leg rotation is not limited to circular motion such as pedaling and may include periodic motion in general, such as stepping. In that case, the number of leg rotations can be read as a stepping count or step count, as appropriate.
- In Modification 1, an example of applying a single estimation model to input data based on health conditions was shown. However, it is also possible to build a plurality of estimation models, each based on (at least part of) the subjects' health conditions. In this case, (at least part of) the user's health condition may be referenced to select an estimation model. In this further variation, the input data for the estimation model may be data not based on the user's health condition, or data based on both the user's health condition and the user video.
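A minimal sketch of this further variation, assuming a made-up age-band keying scheme (the patent does not specify how the models would be partitioned by health condition):

```python
def select_estimation_model(models, user_health):
    """Select one of several estimation models using (part of) the user's
    health condition; here, an illustrative age band serves as the key."""
    age = user_health.get("age")
    if age is None:
        return models["default"]
    band = "under65" if age < 65 else "65plus"
    return models.get(band, models["default"])

# The model objects are stand-in strings; in practice they would be trained models.
models = {"under65": "model_a", "65plus": "model_b", "default": "model_c"}
```

Any health attribute (or combination of attributes) from the list above could serve as the selection key in place of age.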
- 1: information processing system, 10: client device, 11: storage device, 12: processor, 13: input/output interface, 14: communication interface, 15: display, 16: camera, 17: depth sensor, 30: server, 31: storage device, 32: processor, 33: input/output interface, 34: communication interface
Abstract
Description
The configuration of the information processing system will be described. FIG. 1 is a block diagram showing the configuration of the information processing system of the present embodiment.
The client device 10 and the server 30 are connected via a network NW (for example, the Internet or an intranet).
The configuration of the client device will be described. FIG. 2 is a block diagram showing the configuration of the client device of the present embodiment.
- The program of an OS (operating system)
- Programs of applications that execute information processing (for example, a web browser, a rehabilitation app, or a fitness app)
- Databases referenced in the information processing
- Data obtained by executing the information processing (that is, the execution results of the information processing)
- CPU (Central Processing Unit)
- GPU (Graphics Processing Unit)
- ASIC (Application Specific Integrated Circuit)
- FPGA (Field Programmable Gate Array)
The input devices are, for example, the camera 16, the depth sensor 17, a microphone, a keyboard, a pointing device, a touch panel, sensors, or a combination thereof.
The output devices are, for example, the display 15, a speaker, or a combination thereof.
Specifically, the communication interface 14 can include a module for communication with the server 30 (for example, a WiFi module, a mobile communication module, or a combination thereof).
The configuration of the server will be described. FIG. 3 is a block diagram showing the configuration of the server of the present embodiment.
- The OS program
- Programs of applications that execute information processing
- Databases referenced in the information processing
- Execution results of the information processing
- CPU
- GPU
- ASIC
- FPGA
The input devices are, for example, a keyboard, a pointing device, a touch panel, or a combination thereof.
The output device is, for example, a display.
An overview of the present embodiment will be described. FIG. 4 is an explanatory diagram of the overview of the present embodiment.
The training data set of the present embodiment will be described. FIG. 5 is a diagram showing the data structure of the training data set of the present embodiment.
- Cumulative number of rotations
- Number of rotations per unit time (that is, rotation speed)
- Time derivative of the rotation speed (that is, rotational acceleration)
However, the leg rotation index may be any index for quantitatively grasping leg rotation (that is, periodic motion) and is not limited to the indices exemplified here. The leg rotation indices may also include indices that can be calculated from the above indices, such as the traveled distance (the product of the cumulative number of rotations (cadence) and the distance traveled per pedal revolution) and the exercise load.
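The derived indices mentioned above follow directly from the base indices; a sketch (function names and units are illustrative, not taken from the patent):

```python
def rotation_speed_rpm(rotation_times_s):
    """Rotation speed (rotations per minute) from timestamps, in seconds,
    of successive completed revolutions."""
    if len(rotation_times_s) < 2:
        return 0.0
    span_s = rotation_times_s[-1] - rotation_times_s[0]
    return 60.0 * (len(rotation_times_s) - 1) / span_s

def traveled_distance_m(cumulative_rotations, metres_per_revolution):
    """Traveled distance: cumulative rotations times distance per pedal revolution."""
    return cumulative_rotations * metres_per_revolution

def rotational_acceleration(rpm_series, dt_s):
    """Rotational acceleration as the finite-difference time derivative of a
    rotation-speed series sampled every dt_s seconds (rpm per second)."""
    return [(b - a) / dt_s for a, b in zip(rpm_series, rpm_series[1:])]
```

For example, revolutions completed at 0, 1, 2, and 3 seconds give a rotation speed of 60 rpm, and 100 cumulative rotations at 2.5 m per revolution give a traveled distance of 250 m.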
- Energy (calorie) expenditure
- Oxygen consumption
- Heart rate
The estimation model used by the server 30 corresponds to a trained model created by supervised learning using the training data set (FIG. 5), or to a derivative or distilled model of that trained model.
The information processing of the present embodiment will be described. FIG. 6 is a flowchart of the information processing of the present embodiment. FIG. 7 is a diagram showing an example of a screen displayed in the information processing of the present embodiment.
- The information processing was called by another process.
- The user performed an operation to invoke the information processing.
- The client device 10 entered a predetermined state (for example, a predetermined app was launched).
- A predetermined date and time arrived.
- A predetermined time elapsed since a predetermined event.
Specifically, the client device 10 starts capturing a video of the exercising user (hereinafter, "user video") by enabling the operation of the camera 16. The user video is typically a video of the user captured such that at least the user's lower body (specifically, the user's legs) is within the shooting range.
Specifically, the client device 10 acquires the sensing results generated by the various sensors enabled in step S110. For example, the client device 10 acquires user video data from the camera 16 and user depth data from the depth sensor 17.
Specifically, the client device 10 generates a request with reference to the data acquired in step S111, and transmits the generated request to the server 30. The request can include, for example, at least one of the following:
- The data acquired in step S111 (for example, user video data or user depth data)
- Data obtained by processing the data acquired in step S111
- User skeleton data obtained by analyzing the user video data (or the user video data and the user depth data) acquired in step S111
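A sketch of assembling such a request (step S112); the JSON field names are hypothetical, since the patent does not define a wire format:

```python
import json

def build_request(user_video=None, user_depth=None, user_skeleton=None):
    """Assemble a request containing at least one of the listed data items."""
    payload = {k: v for k, v in {
        "user_video": user_video,
        "user_depth": user_depth,
        "user_skeleton": user_skeleton,
    }.items() if v is not None}
    if not payload:
        raise ValueError("request must include at least one data item")
    return json.dumps(payload)

# A request carrying only skeleton data obtained by client-side analysis.
req = build_request(user_skeleton=[[0.1, 0.2, 0.9]])
```

Which of the three items the client sends determines how much analysis the server 30 must perform on its side.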
Specifically, the server 30 acquires input data for the estimation model based on the request received from the client device 10. As with the training data, the input data includes user skeleton data. The server 30 performs estimation regarding the number of rotations of the user's legs by applying the estimation model to the input data. As an example, the server 30 estimates at least one evaluation index relating to the number of rotations of the user's legs.
Specifically, the server 30 generates a response based on the result of the estimation in step S130 and transmits the generated response to the client device 10. As an example, the response can include at least one of the following:
- Data corresponding to the result of the estimation regarding the number of leg rotations
- Data obtained by processing the result of the estimation regarding the number of leg rotations (for example, data of a screen to be displayed on the display 15 of the client device 10, or data referenced to generate that screen)
Specifically, the client device 10 causes the display 15 to display information based on the response received from the server 30 (that is, the result of the estimation regarding the number of rotations of the user's legs).
However, the information may be presented, instead of or in addition to the user, to the user's instructor (for example, a medical professional or a trainer) on a terminal used by that instructor. Alternatively, content that stages the user's exercise experience (for example, scenery or video-game footage controlled according to the result of the estimation regarding the number of leg rotations) may be presented as the information. Such content may be presented via the display of an external device such as an HMD, or via another output device, instead of the display 15.
The operation object B10 accepts an operation for designating the evaluation index regarding the number of leg rotations to be displayed in the display object A10. In the example of FIG. 7, the operation object B10 corresponds to checkboxes.
The display object A10 displays the change over time in the result of estimating the evaluation index. In the example of FIG. 7, the display object A10 displays a graph showing the change over time in the result of estimating, every five seconds, the rotation speed (rpm), which is the evaluation index designated in the operation object B10.
When a plurality of evaluation indices are designated in the operation object B10, the display object A10 may display graphs showing the changes over time in the results of estimating the plurality of evaluation indices in superimposed form, or may display those graphs individually.
As described above, the information processing system 1 of the embodiment performs estimation regarding the number of rotations of the user's legs based on a video of the exercising user. Thus, even when the user exercises on training equipment that has no means of detecting the number of leg rotations or no means of outputting the detection result, the number of rotations of the user's legs can be estimated. In other words, estimation regarding the number of rotations of human legs can be performed under diverse circumstances.
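For intuition only, here is a crude non-learned stand-in for such estimation: counting pedaling cycles as threshold crossings of a knee-height signal extracted from skeleton data. The patent itself applies a trained estimation model, not this heuristic, and the signal below is a synthetic stand-in:

```python
def count_cycles(knee_y, threshold=0.0):
    """Count upward crossings of a threshold in a knee-height time series."""
    crossings = 0
    for prev, cur in zip(knee_y, knee_y[1:]):
        if prev < threshold <= cur:
            crossings += 1
    return crossings

def cadence_rpm(knee_y, fps):
    """Rotations per minute from the cycle count over the clip duration."""
    duration_min = (len(knee_y) - 1) / fps / 60.0
    return count_cycles(knee_y) / duration_min if duration_min > 0 else 0.0

# Square-wave stand-in for a knee trajectory: 30 cycles over 60 frames at 2 fps.
knee = [-1.0, 1.0] * 30 + [-1.0]
```

A learned model can outperform such a heuristic precisely in the diverse circumstances the embodiment targets (varying camera angles, equipment, and users), which motivates the supervised-learning approach above.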
Modification 1 will be described. Modification 1 is an example in which the input data for the estimation model is modified.
Claims (13)
- A program causing a computer to function as:
means for acquiring a user video showing a user who is exercising; and
means for performing, based on the user video, estimation regarding the number of rotations of the legs of the user.
- The program according to claim 1, wherein the means for performing estimation regarding the number of leg rotations performs the estimation regarding the number of rotations of the user's legs by applying an estimation model to input data based on the user video.
- The program according to claim 2, wherein the estimation model corresponds to a trained model created by supervised learning using a training data set that includes input data including data on subject videos showing exercising subjects, and ground-truth data associated with each piece of the input data, or to a derivative or distilled model of the trained model.
- The program according to claim 2 or 3, wherein the input data to which the estimation model is applied includes data regarding the user's skeleton.
- The program according to any one of claims 2 to 4, wherein the input data to which the estimation model is applied is further based on data regarding the depth from a reference point to each part of the user.
- The program according to any one of claims 1 to 5, wherein the means for performing estimation regarding the number of leg rotations estimates at least one of a cumulative number of leg rotations, a rotation speed, a rotational acceleration, or a traveled distance converted from the cumulative number of leg rotations.
- The program according to any one of claims 1 to 6, wherein the user video is a video of the user captured such that at least the user's lower body is within the shooting range.
- The program according to any one of claims 1 to 7, wherein the user video is a video of the user pedaling.
- The program according to any one of claims 1 to 8, further causing the computer to function as means for presenting information based on a result of the estimation regarding the number of rotations of the user's legs.
- The program according to claim 9, wherein the means for performing estimation regarding the number of leg rotations estimates an evaluation index regarding the number of leg rotations, and the means for presenting presents the evaluation index.
- The program according to claim 10, wherein the means for presenting presents a change over time in the evaluation index.
- An information processing apparatus comprising:
means for acquiring a user video showing a user who is exercising; and
means for performing, based on the user video, estimation regarding the number of rotations of the legs of the user.
- A method comprising, by a computer:
acquiring a user video showing a user who is exercising; and
performing, based on the user video, estimation regarding the number of rotations of the legs of the user.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023509558A JP7411945B2 (ja) | 2021-08-26 | 2022-08-22 | プログラム、情報処理装置、および情報処理方法 |
JP2023212332A JP2024025826A (ja) | 2021-08-26 | 2023-12-15 | プログラム、情報処理装置、および情報処理方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021137960 | 2021-08-26 | ||
JP2021-137960 | 2021-08-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023027046A1 true WO2023027046A1 (ja) | 2023-03-02 |
Family
ID=85322763
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/031632 WO2023027046A1 (ja) | 2021-08-26 | 2022-08-22 | プログラム、情報処理装置、および情報処理方法 |
Country Status (2)
Country | Link |
---|---|
JP (2) | JP7411945B2 (ja) |
WO (1) | WO2023027046A1 (ja) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001017565A (ja) * | 1999-07-08 | 2001-01-23 | Erugotekku:Kk | シミュレーション・システム |
JP2007500547A (ja) * | 2003-07-31 | 2007-01-18 | ファースト プリンシプルズ インコーポレイテッド | パフォーマンスを改善する方法および装置 |
CN108114405A (zh) * | 2017-12-20 | 2018-06-05 | 中国科学院合肥物质科学研究院 | 基于3d深度摄像头和柔性力敏传感器的跑步机自适应系统 |
JP2019025134A (ja) * | 2017-08-01 | 2019-02-21 | 株式会社大武ルート工業 | 動作推定装置及び動作推定プログラム |
JP2019071963A (ja) * | 2017-10-12 | 2019-05-16 | 大日本印刷株式会社 | トレーニング装置及びプログラム |
WO2020218368A1 (ja) * | 2019-04-26 | 2020-10-29 | 塁 佐藤 | 運動用設備 |
WO2021132426A1 (ja) * | 2019-12-26 | 2021-07-01 | 国立大学法人東京大学 | スマートトレッドミル |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201520980A (zh) | 2013-11-26 | 2015-06-01 | Nat Univ Chung Cheng | 可即時估算踏頻之視頻裝置 |
JP7057959B2 (ja) | 2016-08-09 | 2022-04-21 | 住友ゴム工業株式会社 | 動作解析装置 |
-
2022
- 2022-08-22 WO PCT/JP2022/031632 patent/WO2023027046A1/ja active Application Filing
- 2022-08-22 JP JP2023509558A patent/JP7411945B2/ja active Active
-
2023
- 2023-12-15 JP JP2023212332A patent/JP2024025826A/ja active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001017565A (ja) * | 1999-07-08 | 2001-01-23 | Erugotekku:Kk | シミュレーション・システム |
JP2007500547A (ja) * | 2003-07-31 | 2007-01-18 | ファースト プリンシプルズ インコーポレイテッド | パフォーマンスを改善する方法および装置 |
JP2019025134A (ja) * | 2017-08-01 | 2019-02-21 | 株式会社大武ルート工業 | 動作推定装置及び動作推定プログラム |
JP2019071963A (ja) * | 2017-10-12 | 2019-05-16 | 大日本印刷株式会社 | トレーニング装置及びプログラム |
CN108114405A (zh) * | 2017-12-20 | 2018-06-05 | 中国科学院合肥物质科学研究院 | 基于3d深度摄像头和柔性力敏传感器的跑步机自适应系统 |
WO2020218368A1 (ja) * | 2019-04-26 | 2020-10-29 | 塁 佐藤 | 運動用設備 |
WO2021132426A1 (ja) * | 2019-12-26 | 2021-07-01 | 国立大学法人東京大学 | スマートトレッドミル |
Also Published As
Publication number | Publication date |
---|---|
JPWO2023027046A1 (ja) | 2023-03-02 |
JP7411945B2 (ja) | 2024-01-12 |
JP2024025826A (ja) | 2024-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11037369B2 (en) | Virtual or augmented reality rehabilitation | |
KR100772497B1 (ko) | 골프 클리닉 시스템 및 그것의 운용방법 | |
US9364714B2 (en) | Fuzzy logic-based evaluation and feedback of exercise performance | |
Schönauer et al. | Full body interaction for serious games in motor rehabilitation | |
Bergamini et al. | Wheelchair propulsion biomechanics in junior basketball players: A method for the evaluation of the efficacy of a specific training program | |
TWI396572B (zh) | 健身系統 | |
CN111477297A (zh) | 个人计算设备 | |
JP2005095570A (ja) | 画像表示システム、画像表示装置、画像表示方法 | |
Nunes et al. | Motivating people to perform better in exergames: Competition in virtual environments | |
JP2018503413A (ja) | 心肺適応能評価 | |
Annaswamy et al. | Using biometric technology for telehealth and telerehabilitation | |
KR20160138682A (ko) | 복합 생체신호를 이용한 능동형 스피닝 트레이닝 시스템 | |
Rahman et al. | Modeling therapy rehabilitation sessions using non-invasive serious games | |
WO2014133920A2 (en) | Using a true representation of effort for fitness | |
WO2023027046A1 (ja) | プログラム、情報処理装置、および情報処理方法 | |
JP2019024579A (ja) | リハビリテーション支援システム、リハビリテーション支援方法及びプログラム | |
Karkar et al. | KinFit: A factual aerobic sport game with stimulation support | |
Georgiadis et al. | A remote rehabilitation training system using Virtual Reality | |
Eichhorn et al. | Development of an Exergame for individual rehabilitation of patients with cardiovascular diseases | |
KR102556863B1 (ko) | 사용자 맞춤형 운동 훈련 방법 및 시스템 | |
WO2023026785A1 (ja) | プログラム、情報処理装置、および情報処理方法 | |
JP7333537B2 (ja) | プログラム、情報処理装置、および情報処理方法 | |
JP7303595B1 (ja) | プログラム、情報処理装置、および情報処理方法 | |
Chartomatsidis et al. | Development and evaluation of a motion-based exercise game for balance improvement | |
Morando et al. | Biophysical and motion features extraction for an effective home-based rehabilitation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2023509558 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22861334 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022861334 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022861334 Country of ref document: EP Effective date: 20240326 |