WO2023047621A1 - Information processing system, information processing method, and program - Google Patents
Information processing system, information processing method, and program
- Publication number
- WO2023047621A1, PCT application PCT/JP2022/006339 (JP2022006339W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- target
- analysis
- processing system
- motion
- Prior art date
Links
- 230000010365 information processing Effects 0.000 title claims abstract description 95
- 238000003672 processing method Methods 0.000 title claims description 9
- 238000004458 analytical method Methods 0.000 claims abstract description 355
- 230000033001 locomotion Effects 0.000 claims abstract description 313
- 238000000605 extraction Methods 0.000 claims description 62
- 239000000284 extract Substances 0.000 claims description 55
- 238000011156 evaluation Methods 0.000 claims description 45
- 230000009471 action Effects 0.000 claims description 34
- 238000000034 method Methods 0.000 claims description 27
- 238000012549 training Methods 0.000 claims description 22
- 230000008859 change Effects 0.000 claims description 15
- 230000007704 transition Effects 0.000 claims description 8
- 239000000463 material Substances 0.000 claims description 5
- 230000004044 response Effects 0.000 claims description 4
- 230000001144 postural effect Effects 0.000 abstract 2
- 230000036544 posture Effects 0.000 description 110
- 238000012545 processing Methods 0.000 description 50
- 230000000694 effects Effects 0.000 description 26
- 208000024891 symptom Diseases 0.000 description 24
- 238000004422 calculation algorithm Methods 0.000 description 21
- 210000002683 foot Anatomy 0.000 description 16
- 238000007405 data analysis Methods 0.000 description 14
- 230000036541 health Effects 0.000 description 13
- 230000006399 behavior Effects 0.000 description 10
- 238000003745 diagnosis Methods 0.000 description 10
- 238000010586 diagram Methods 0.000 description 10
- 238000004891 communication Methods 0.000 description 9
- 238000007781 pre-processing Methods 0.000 description 8
- 238000004364 calculation method Methods 0.000 description 7
- 210000001624 hip Anatomy 0.000 description 6
- 238000010801 machine learning Methods 0.000 description 6
- 210000003423 ankle Anatomy 0.000 description 5
- 238000013528 artificial neural network Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 238000013135 deep learning Methods 0.000 description 4
- 238000001514 detection method Methods 0.000 description 4
- 238000003384 imaging method Methods 0.000 description 4
- 230000006872 improvement Effects 0.000 description 4
- 238000013077 scoring method Methods 0.000 description 4
- 230000000386 athletic effect Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000010191 image analysis Methods 0.000 description 3
- 210000001930 leg bone Anatomy 0.000 description 3
- 230000003542 behavioural effect Effects 0.000 description 2
- 201000010099 disease Diseases 0.000 description 2
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 2
- 229940079593 drug Drugs 0.000 description 2
- 239000003814 drug Substances 0.000 description 2
- 210000001513 elbow Anatomy 0.000 description 2
- 210000003127 knee Anatomy 0.000 description 2
- 210000002414 leg Anatomy 0.000 description 2
- 210000003205 muscle Anatomy 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000002360 preparation method Methods 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 210000002832 shoulder Anatomy 0.000 description 2
- 210000000707 wrist Anatomy 0.000 description 2
- 208000028752 abnormal posture Diseases 0.000 description 1
- 230000001133 acceleration Effects 0.000 description 1
- 210000000577 adipose tissue Anatomy 0.000 description 1
- 230000017531 blood circulation Effects 0.000 description 1
- 210000000988 bone and bone Anatomy 0.000 description 1
- 238000007635 classification algorithm Methods 0.000 description 1
- 230000009194 climbing Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 235000013402 health food Nutrition 0.000 description 1
- 210000001981 hip bone Anatomy 0.000 description 1
- 210000001596 intra-abdominal fat Anatomy 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000002483 medication Methods 0.000 description 1
- 230000004060 metabolic process Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000001151 other effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 238000005728 strengthening Methods 0.000 description 1
- 210000000689 upper leg Anatomy 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B69/00—Training appliances or apparatus for special sports
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present invention relates to an information processing system, an information processing method, and a program.
- Pose estimation technology extracts multiple keypoints from an image of a target person or object (if the target is a human, feature points representing the shoulders, elbows, wrists, hips, knees, ankles, and so on) and estimates the pose of the target based on the relative positions of the extracted keypoints. Pose estimation technology is expected to be applied in a wide range of fields such as learning support in sports, healthcare, automated driving, and danger prediction.
- A series of actions of the target can be regarded as a combination of a plurality of characteristic actions (phases). If analysis is performed for each phase, the series of actions can be analyzed accurately. However, conventional methods do not classify motions by phase, so the entire series of actions cannot be evaluated accurately.
- the present disclosure proposes an information processing system, an information processing method, and a program capable of accurately evaluating a series of actions as a whole while appropriately extracting key phases.
- According to the present disclosure, an information processing system is provided that includes a state machine that detects a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data, and a motion analysis unit that analyzes the motion of the target for each phase. Further, according to the present disclosure, there are provided an information processing method in which the information processing of the information processing system is executed by a computer, and a program for causing a computer to realize the information processing of the information processing system.
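- The following Python sketch illustrates, in a minimal way, how a state machine of this kind might walk through per-frame posture information and detect phases in order. It is not the claimed implementation: the phase names, entry conditions, thresholds, and input format are hypothetical assumptions loosely modeled on a ball-kicking motion.

```python
# Hypothetical sketch: a state machine that walks through per-frame posture
# information and records the frame index at which each phase begins.
# Phase names, entry conditions, and the input format are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class Posture:
    # Minimal stand-in for posture information: one joint angle (degrees) and
    # the distance between a reference keypoint and a tracked object such as a ball.
    knee_angle: float
    hip_ball_distance: float


# Ordered phases and the (assumed) condition that signals entry into each one.
PHASES: List[Tuple[str, Callable[[Posture], bool]]] = [
    ("plant_foot", lambda p: p.hip_ball_distance < 0.6),
    ("ball_contact", lambda p: p.hip_ball_distance < 0.3),
    ("follow_through", lambda p: p.hip_ball_distance > 0.5 and p.knee_angle > 150),
]


def detect_phases(frames: List[Posture]) -> Dict[str, int]:
    """Return the first frame index at which each phase's entry condition holds,
    scanning the phases strictly in order: a later phase can only start after
    the previous one has already been detected."""
    detected: Dict[str, int] = {}
    phase_idx = 0
    for i, posture in enumerate(frames):
        if phase_idx >= len(PHASES):
            break
        name, condition = PHASES[phase_idx]
        if condition(posture):
            detected[name] = i
            phase_idx += 1
    return detected


if __name__ == "__main__":
    # Toy trajectory: the ball approaches the hips, contact occurs, then they separate.
    trajectory = [Posture(120, 1.0), Posture(130, 0.55), Posture(140, 0.25),
                  Posture(160, 0.7), Posture(165, 1.2)]
    print(detect_phases(trajectory))
```

- In a real system the entry conditions would come from the motion analysis algorithm AL rather than being hard-coded as above.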
- FIG. 10 is a diagram showing another example of a notification mode of the analysis information; a further figure shows variations of the system configuration.
- FIG. 1 is a diagram showing an example of a motion analysis service.
- the motion analysis service is a service that analyzes the motion of the target TG based on the moving image data MD and presents appropriate intervention information VI.
- Motion analysis services can be applied to a wide range of fields, such as learning support in sports, healthcare, automated driving, and danger prediction.
- the motions to be analyzed are defined appropriately according to the field to which the motion analysis service is applied and the purpose of the analysis.
- the motion analysis service is implemented by the information processing system 1 as shown in FIG.
- the information processing system 1 has a client terminal 100 , a motion analysis server 200 , a trainer terminal 300 , a family terminal 400 and a service provider server 500 .
- the client terminal 100, motion analysis server 200, trainer terminal 300, family terminal 400 and service provider server 500 are connected via a network NW (see FIG. 9).
- Although the information processing system 1 includes the trainer terminal 300, the family terminal 400, and the service provider server 500 in the example of FIG. 1, these are not essential.
- the client terminal 100 is an information terminal such as a smartphone, tablet terminal, and laptop computer.
- the client terminal 100 is owned by a client who has requested motion analysis of the target TG.
- a client is, for example, a target TG or a family FM of the target TG.
- the client terminal 100 transmits to the motion analysis server 200 the moving image data MD showing the motion of the target TG during sports or fitness activities.
- the motion analysis server 200 analyzes the motion of the target TG based on the moving image data MD.
- a series of actions of the target is captured as a combination of a plurality of characteristic actions arranged along the time axis.
- the motion analysis server 200 extracts individual characteristic motions as phases. Boundaries between phases are defined based on predetermined indices.
- the motion analysis server 200 evaluates a series of motions by performing motion analysis for each phase.
- the motion analysis server 200 generates analysis information MAI indicating the evaluation result and transmits it to the client terminal 100, trainer terminal 300 and family terminal 400.
- the target TG, trainer and family FM can grasp the operating state of the target TG based on the transmitted analysis information MAI.
- the analysis information MAI includes evaluation results based on comparison with model actions.
- the trainer diagnoses the target TG based on the analysis information MAI received by the trainer terminal 300 .
- the trainer transmits diagnostic information indicating diagnostic results to the motion analysis server 200 via the trainer terminal 300 .
- the motion analysis server 200 transmits diagnostic information to the service provider server 500 together with the analysis information MAI.
- the service provider server 500 extracts product sales information such as training equipment suitable for the target TG from the product sales database based on the analysis information MAI and/or diagnostic information, and transmits the extracted product sales information to the motion analysis server 200 .
- the motion analysis server 200 generates intervention information VI for the target TG based on the analysis information MAI, diagnostic information, and product sales information, and transmits it to the client terminal 100 .
- the intervention information VI includes diagnostic results of the target TG, authentication of athletic ability, various suggestions for improving movement, and product sales information.
- FIG. 2 is a block diagram showing an example of the functional configuration of the information processing system 1.
- the client terminal 100 has a sensor unit 110 , an input device 120 and a display device 170 .
- the sensor unit 110 tracks the activity of the target TG and collects the amount of activity and exercise data of the target TG.
- the movement data includes moving image data MD showing the movement of the target TG.
- the input device 120 includes various input devices capable of inputting inquiry data for health screening.
- the display device 170 displays various determination results (analysis information MAI) and intervention information VI obtained by the motion analysis of the target TG.
- the sensor unit 110 includes a fitness tracker, a camera 160, a GPS (Global Positioning System), an acceleration sensor, and a gyro sensor.
- Input devices 120 include touch panels, keyboards, mice, eye-tracking devices, and voice input devices.
- the display device 170 includes an LCD (Liquid Crystal Display) or an OLED (Organic Light Emitting Diode).
- the client terminal 100 transmits the target TG's vital data, exercise data, and interview data to the motion analysis server 200 .
- the motion analysis server 200 analyzes the motion of the target TG based on various data acquired from the client terminal 100 .
- the motion analysis server 200 has an activity calculation unit 210 , an evaluation unit 220 , an intervention information generation unit 230 and a storage device 290 .
- the activity calculation unit 210 calculates the activity information of the target TG based on the sensor data and interview data.
- the activity information includes various information indicating the activity of the target TG, such as the amount of activity of the target TG (number of steps, heart rate, calories burned, etc.) and exercise status.
- the activity calculation unit 210 includes a sensor data analysis unit 211, a feature amount extraction unit 212, and an interview data analysis unit 213.
- In the example described above, the medical interview data analysis unit 213 is included in the activity calculation unit 210, but the medical interview data analysis unit 213 is not necessarily required when the service is used only for sports coaching. In that case, medical interview data need not be input.
- the sensor data analysis unit 211 detects the amount of activity of the target TG based on the sensing result of the fitness sensor.
- the sensor data analysis unit 211 analyzes the moving image data MD showing the movement of the target TG, and extracts the posture information HPI (see FIG. 6) of the target TG.
- the feature amount extraction unit 212 extracts feature amounts based on the indices stored in the index database 295 from the posture information HPI.
- the index database 295 stores indices for motion analysis for each type of exercise.
- the index includes various information for motion analysis.
- Each exercise event is associated with one or more determination items that are targets of motion analysis.
- the storage device 290 (indicator database 295) stores, for each determination item, video shooting conditions, feature amount definition information, and a motion analysis algorithm AL as indices for motion analysis.
- learning support in sports uses different judgment items and motion analysis indicators for each sport.
- In learning support for soccer, for example, dribbling, shooting, heading, and the like are defined as basic motions.
- In the index database 295, each of these basic motions is defined as a determination item.
- the moving image data MD is acquired for each determination item, and motion analysis is also performed for each determination item.
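- As a rough illustration of what an index entry of this kind could look like, the sketch below represents one determination item together with its shooting conditions, feature definitions, and an algorithm identifier. All field names and example values are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch of an index database entry for one determination item.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DeterminationItemIndex:
    sport: str                           # exercise event, e.g. "soccer"
    item: str                            # determination item, e.g. "shoot"
    shooting_conditions: Dict[str, str]  # e.g. required camera direction, subject position
    feature_definitions: List[str]       # feature quantities to extract from posture info
    analysis_algorithm: str              # identifier of the motion analysis algorithm AL


INDEX_DB: List[DeterminationItemIndex] = [
    DeterminationItemIndex(
        sport="soccer",
        item="shoot",
        shooting_conditions={"direction": "side", "subject_position": "center of frame"},
        feature_definitions=["knee_angle", "hip_ball_distance", "ankle_speed"],
        analysis_algorithm="soccer_shoot_v1",
    ),
]


def lookup_index(sport: str, item: str) -> DeterminationItemIndex:
    """Fetch the motion-analysis index for a given determination item."""
    for entry in INDEX_DB:
        if entry.sport == sport and entry.item == item:
            return entry
    raise KeyError(f"no index registered for {sport}/{item}")


if __name__ == "__main__":
    print(lookup_index("soccer", "shoot").feature_definitions)
```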
- the interview data analysis unit 213 diagnoses lifestyle habits and body conditions (weight, body fat, visceral fat, basal metabolism, muscle mass, blood flow, skin age, etc.) based on the interview data.
- the medical interview data can include information such as disease status, medication history, pain site, and site desired to be improved.
- the diagnosis result is used when judging whether or not exercise is possible.
- The medical interview data analysis unit 213 determines whether or not it is appropriate for the target TG to exercise based on the diagnosis result. If the target TG is overburdened or it is determined that exercise is not appropriate for the target TG, the medical interview data analysis unit 213 notifies the client terminal 100 of an alert prompting the target TG to stop exercising or to consult a doctor.
- the interview data analysis unit 213 may make an inquiry to the trainer terminal 300 instead of notifying the client terminal 100 of the alert.
- the evaluation unit 220 analyzes the motion of the target TG based on the feature amount extracted by the feature amount extraction unit 212 and the motion analysis algorithm AL stored in the index database 295 . For example, the evaluation unit 220 classifies a series of motions of the target TG recorded in the moving image data MD into a plurality of phases based on the motion analysis algorithm AL, and analyzes the motions for each phase. A method for detecting the boundaries of the phases and a method for evaluating the motion of each phase are defined in the motion analysis algorithm AL. The evaluation unit 220 generates analysis information MAI indicating the evaluation result of the series of actions.
- the intervention information generation unit 230 generates intervention information VI for the target TG based on the analysis information MAI.
- the intervention information VI includes information (judgment information) that serves as judgment material for prompting the target TG to improve its motion, or a training plan for the target TG.
- the intervention information generator 230 extracts one or more symptoms of the target TG from the analysis information MAI, and determines a training plan based on the priority determined for each symptom and the severity of each symptom.
- the intervention information generation unit 230 can also determine a training plan by referring to information such as interview data in addition to the analysis information MAI.
- Symptoms include peculiarities of the target TG's movement compared with the model movement. For example, when throwing a ball, the correct form is to swing the arm down after pushing the elbow forward, but some people throw the ball straight out, as in shot put, without bringing the elbow forward. By categorizing such forms, symptoms can be classified. Symptom definitions and symptom classification algorithms are defined in the index database 295. If multiple symptoms are detected in the target TG, the intervention information generator 230 presents, for example, a training plan based on the highest-priority symptom.
- the priority of symptoms and the training plan for each symptom are stored in the solution database 294.
- One or more training plans are associated with each symptom in the solution database 294 .
- the intervention information generator 230 can present another training plan linked to the symptom based on the progress of improvement of the symptom.
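- A minimal sketch of how a training plan might be chosen from a solution database by symptom priority and severity, and advanced as improvement progresses, is shown below. The priority table, severity handling, and plan names are assumptions for illustration only.

```python
# Hypothetical sketch: pick a training plan for the highest-priority detected symptom.
# Priorities, severities, and plan names are illustrative assumptions.
from typing import Dict, List, Tuple

# Lower number = higher priority (assumed convention).
SYMPTOM_PRIORITY: Dict[str, int] = {"elbow_not_forward": 1, "late_weight_shift": 2}

# Solution database: one or more training plans per symptom.
SOLUTION_DB: Dict[str, List[str]] = {
    "elbow_not_forward": ["wall throw drill", "towel snap drill"],
    "late_weight_shift": ["step-through throw drill"],
}


def select_training_plan(detected: List[Tuple[str, float]], progress: int = 0) -> str:
    """detected: list of (symptom, severity in [0, 1]).
    Rank by priority first, then by severity, and return a plan for the top symptom.
    `progress` selects a later plan linked to the same symptom as improvement advances."""
    if not detected:
        return "no plan required"
    ranked = sorted(detected, key=lambda s: (SYMPTOM_PRIORITY.get(s[0], 99), -s[1]))
    top_symptom = ranked[0][0]
    plans = SOLUTION_DB.get(top_symptom, ["generic form drill"])
    return plans[min(progress, len(plans) - 1)]


if __name__ == "__main__":
    symptoms = [("late_weight_shift", 0.9), ("elbow_not_forward", 0.4)]
    print(select_training_plan(symptoms))              # plan for the higher-priority symptom
    print(select_training_plan(symptoms, progress=1))  # next plan as the symptom improves
```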
- the intervention information generator 230 determines the current level of athletic ability of the target TG based on the analysis information MAI. Level information is used for comparison with other athletes of similar athletic ability, age or sporting history, and for accreditation of skill improvement. For example, the intervention information generator 230 generates authentication information for authenticating the current level of the target TG.
- the storage device 290 can have a personal information database 291 , an anonymized sensing information database 292 , an intervention information database 293 , a solution database 294 and an index database 295 .
- the personal information database 291 stores information about the target TG individual, such as the target TG's age, height, weight, medical history, and medications taken.
- personal information is stored on the motion analysis server 200 as a database.
- the personal information of the target TG may be stored in the client terminal 100.
- the anonymized sensing information database 292 stores the past sensing data of the target TG used by the activity calculation unit 210.
- Past sensing data is stored as anonymized data in association with anonymously processed information such as age, gender, and disease.
- the intervention information database 293 stores the intervention information VI generated by the intervention information generator 230 in association with the activity information of the target TG.
- the solution database 294 stores solutions for each sport type used by the intervention information generation unit 230 .
- contents such as advice, educational contents, and training/exercise programs are stored for each event.
- Each content may be stored in association with an evaluation result or diagnosis result. This provides appropriate advice and content according to the state of the target TG.
- the index database 295 stores indices used by the evaluation unit 220 .
- the index database 295 includes definition information of feature quantities and motion analysis algorithms AL using the feature quantities as indicators for motion analysis.
- the motion analysis algorithm AL may be based on a specific threshold, or may be based on a learning model on which machine learning has been performed.
- the trainer terminal 300 is an information terminal such as a smart phone, tablet terminal, notebook computer, or desktop computer.
- the trainer terminal 300 has an evaluation section 310 and a diagnostic information database 390 .
- the trainer terminal 300 receives and displays the target TG's health information, analysis information MAI, symptom information, and the like transmitted from the motion analysis server 200 .
- The evaluation unit 310 diagnoses the current exercise state of the target TG based on the information input by the trainer, the health information and analysis information MAI of the target TG received from the motion analysis server 200, and the information stored in the diagnostic information database 390, and transmits the diagnosis result and advice corresponding to the diagnosis result to the motion analysis server 200.
- the diagnosis result and advice corresponding to the diagnosis result may be directly transmitted to the client terminal 100 possessed by the target TG without going through the motion analysis server 200 .
- the diagnostic information database 390 stores diagnostic information on which the target TG was diagnosed in the past.
- the trainer can understand the target TG's health condition and behavioral changes even if they live far away from the target TG. This allows the trainer to remotely diagnose the current state of exercise of the target TG and provide the diagnosis results.
- the family terminal 400 is an information terminal such as a smart phone, tablet terminal, notebook computer, and desktop computer.
- the family terminal 400 receives and displays the target TG's activity information and analysis information MAI transmitted from the motion analysis server 200 .
- the target TG's family FM can know the target TG's activity state and behavior change even if they live far away from the target TG.
- the service provider server 500 has a sales database 591.
- the product sales database 591 stores product sales information PSI such as health food suitable for each health information and analysis information MAI.
- the service provider server 500 receives the target TG's health information, analysis information MAI, symptom information, and the like transmitted from the motion analysis server 200 .
- the service provider server 500 searches the product sales database 591 for product sales information corresponding to the received health information of the target TG, analysis information MAI, and information on symptoms.
- the service provider server 500 transmits the retrieved product sales information to the motion analysis server 200 .
- The service provider can recommend product sales information, such as training equipment, suited to the target TG's health condition and behavioral changes, and provide it to the target TG. It is desirable that the target TG's health information, analysis information MAI, and symptom information received by the trainer terminal 300 and the service provider server 500 be anonymized.
- In the example described above, the motion analysis server 200 calculates the activity information and provides the analysis information MAI and the intervention information VI.
- the server that calculates activity information and the server that evaluates actions and provides analysis information MAI and intervention information VI may be configured separately.
- Although the service provider server 500 has been described as a server different from the motion analysis server 200, the motion analysis server 200 and the service provider server 500 may be integrated.
- the motion analysis server 200 includes various databases.
- a database server including various databases may be provided separately.
- each database in storage device 290 may be managed by a server different from motion analysis server 200 .
- FIG. 3 is a flowchart showing an outline of motion analysis processing.
- the client terminal 100 acquires sensor data and interview data from the sensor unit 110 and the input device 120 (step S1).
- the client terminal 100 transmits the acquired sensor data and interview data to the motion analysis server 200 (step S2).
- the medical interview data analysis unit 213 extracts the health information of the target TG from the medical interview data (step S3).
- the sensor data analysis unit 211 extracts the amount of activity of the target TG and the posture information HPI of the target TG during exercise from the sensor data (step S4).
- the feature quantity extraction unit 212 extracts from the index database 295 motion analysis indices (feature quantity definition information, motion analysis algorithm AL) corresponding to the type of exercise serving as a judgment item.
- the feature amount extraction unit 212 extracts feature amounts from the posture information HPI based on the feature amount definition information (step S5).
- the evaluation unit 220 analyzes the motion of the target TG by applying the extracted feature amount data to the motion analysis algorithm AL. Motion analysis is performed for each phase of motion.
- the evaluation unit 220 classifies the series of motions of the target TG into a plurality of phases based on the motion analysis algorithm AL. In the case of fitness, the evaluation unit 220 may analyze the motion in consideration of the activity amount information.
- the evaluation unit 220 evaluates a series of motions of the target TG based on the analysis results of motions for each phase, and generates analysis information MAI indicating the evaluation results (step S6).
- the intervention information generation unit 230 acquires diagnostic information and product sales information related to the analysis information MAI from the trainer terminal 300 and the service provider server 500 .
- the intervention information generation unit 230 generates intervention information VI for intervening in the target TG based on the analysis information MAI, diagnosis information, symptom-related information, and product sales information (step S7).
- the intervention information generator 230 transmits the generated intervention information VI to the client terminal 100 (step S8).
- the client terminal 100 displays the intervention information VI on the display device 170 to make the target TG recognize the exercise status (step S9). This prompts the target TG to change its behavior.
- FIG. 4 is a flowchart illustrating an example of moving image acquisition processing.
- the client terminal 100 recognizes a person (target TG) whose motion is to be analyzed.
- the target TG may be recognized as a person in the center of the field of view of the camera 160, or the target TG may be authenticated by account information, face authentication, fingerprint authentication, or the like.
- <Step SA2: Preparation for shooting>
- the client terminal 100 determines whether or not exercise is possible based on the interview data. When the client terminal 100 determines that exercise is possible, the client terminal 100 determines determination items and shooting conditions in preparation for shooting.
- Judgment items are selected by the target TG or trainer.
- the set training items may be determined as determination items.
- The determination item is determined based on user input information (a selection made by the target TG), for example.
- the client terminal 100 extracts the shooting conditions associated with the judgment items from the index database 295 and notifies the target TG of them using audio and video.
- the imaging conditions include the reference for the positional relationship between the target TG and the camera 160, the position of the target TG within the angle of view (for example, the coordinates of both shoulders, the position of the center line of the skeleton), and the like.
- If the client terminal 100 determines that the shooting position of the camera 160 does not satisfy the above criteria, the client terminal 100 notifies the target TG using audio or video.
- The determination of whether or not the shooting position satisfies the above criteria may be made by another analysis device such as the motion analysis server 200. Part of the determination (e.g., pose estimation only) may be performed by the client terminal 100 and the rest by other analysis equipment. Further, when the positional relationship between the target TG and the camera 160 is detected using a ToF sensor or the like, the image of the camera 160 may be corrected based on the detected positional relationship so that the above-described criteria are satisfied.
- the client terminal 100 can detect its own horizontality using a gyro sensor or the like, and can notify the target TG when it is tilted from the horizontal.
- When analyzing the motion of the target TG, it may be necessary, depending on the determination item, to accurately know in which direction and by how much the posture of the target TG is tilted from the vertical direction. In this case, the target TG is asked to adjust the horizontality of the client terminal 100 as preparation before shooting.
- If the client terminal 100 determines by image analysis that the target TG cannot be separated from the background with high accuracy, it can notify the target TG.
- Depending on the shooting position and lighting conditions, the target TG may not be accurately separable from the background by image analysis. If another person is present in the background of the target TG, the target TG and the other person may not be analyzed separately. When the target TG cannot be separated from the background, the posture information HPI of the target TG cannot be extracted with high accuracy. Therefore, the target TG is notified and asked to adjust the shooting position and lighting conditions.
- <Step SA3: Acquire moving image>
- the client terminal 100 shoots a moving image.
- the client terminal 100 may shoot a video for assessment before shooting a video regarding a judgment item.
- a video for assessment means a video showing basic actions such as standing up, walking, climbing stairs, and getting up, which is acquired to analyze the exercise capacity of the target TG.
- the assessment video is used together with the interview data as a judgment material for analyzing the health condition of the target TG.
- Instructions to start and end video recording can be input by voice.
- the poses at the start and end of the motion may be detected by image analysis, and when these poses are detected, processing for starting and ending moving image shooting may be automatically performed.
- <Step SA4: Preprocessing for motion analysis> After the video regarding the determination item is captured, the client terminal 100 performs preprocessing for motion analysis by the motion analysis server 200 as necessary.
- When analyzing the motion of the target TG using the moving image data MD, the number of frame images that actually need to be analyzed is not large (for example, 1 to 10 per phase). If all the frame images included in the moving image data MD were analyzed by the high-performance motion analysis server 200, the analysis cost would increase. Therefore, as preprocessing for motion analysis, the client terminal 100 extracts a specific action scene (hereinafter referred to as a specific scene) that is expected to include the important frame images representing the motion of a phase. A specific scene is extracted corresponding to each phase. The client terminal 100 transmits only the frame images of the specific scenes to the motion analysis server 200.
- the client terminal 100 analyzes the moving image data MD acquired in the low image quality mode (for example, resolution of 368 pixels ⁇ 368 pixels per frame) and predicts the reception timing of a specific scene.
- the client terminal 100 switches the acquisition mode of the moving image data MD from the low image quality mode to the high image quality mode (for example, a resolution of 640 pixels × 480 pixels per frame) in accordance with the predicted timing, and transmits the acquired high-quality frame images to the motion analysis server 200.
- the client terminal 100 transmits frame images of scenes other than the specific scene to the motion analysis server 200 with low image quality.
- the features of the specific scene to be extracted are specified in the motion analysis algorithm AL.
- the specific scene is detected based on the contour information of the target TG, the posture information LPI (see FIG. 5), and the positional relationship between the specific object OB used by the target TG and the target TG.
- the specific object OB is, for example, a soccer ball in the case of soccer, and a golf club and a golf ball in the case of golf.
- For example, the timing at which the pivot foot is planted, the timing at which the kicking foot passes the ball, the timing at which the ball is kicked away, and the timing after a specified number of seconds has elapsed are each specified as phases to be analyzed.
- the phase determination conditions are defined based on, for example, specific joint angles and relative positions between the ball, which is the object OB, and specific body feature points (key points).
- the client terminal 100 extracts a plurality of specific scenes, each including one of the above phases (i) to (iv), from the moving image data MD. Even the low-performance client terminal 100 can perform the detection of a specific scene at high speed. Since only the frame images included in the specific scenes are subject to motion analysis, the cost of analysis by the motion analysis server 200 can be reduced.
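- The sketch below shows one possible way to organize the resolution switching described above, sending high-quality frames only around predicted specific-scene timings. The frame sizes match the examples given in the text, but the prediction input and the `margin` parameter are hypothetical.

```python
# Hypothetical sketch: capture frames in a low-quality mode, and switch to a
# high-quality mode only around predicted specific-scene timings.
# The scene predictor input and the margin value are illustrative placeholders.
from typing import Iterable, List, Tuple

LOW_RES = (368, 368)   # resolution used for rough scene prediction (example from the text)
HIGH_RES = (640, 480)  # resolution sent to the motion analysis server (example from the text)


def frames_to_send(num_frames: int,
                   predicted_scene_frames: Iterable[int],
                   margin: int = 3) -> List[Tuple[int, Tuple[int, int]]]:
    """Return (frame_index, resolution) pairs: high resolution within `margin`
    frames of a predicted specific scene, low resolution elsewhere."""
    scene_set = set(predicted_scene_frames)
    plan = []
    for i in range(num_frames):
        near_scene = any(abs(i - s) <= margin for s in scene_set)
        plan.append((i, HIGH_RES if near_scene else LOW_RES))
    return plan


if __name__ == "__main__":
    # Suppose the low-quality analysis predicted specific scenes around frames 40 and 90.
    plan = frames_to_send(num_frames=120, predicted_scene_frames=[40, 90])
    high_quality = [i for i, res in plan if res == HIGH_RES]
    print(high_quality)  # frames captured/sent in the high image quality mode
```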
- <<Preprocessing Flow>> FIGS. 5 to 7 are diagrams for explaining specific examples of the preprocessing. The flow of FIG. 7 will be described below with reference to FIGS. 5 and 6.
- the client terminal 100 shoots a video of the target TG (step SD1).
- the moving image data MD is composed of a plurality of frame images FI arranged in chronological order.
- a moving image includes a specific scene to be analyzed and scenes before and after the specific scene.
- the client terminal 100 extracts one or more frame images FI (specific frame images SFI) representing specific scenes from the moving image data MD (step SD2). Determination of the specific scene is performed based on the motion of the target TG, for example.
- The motion of the target TG is determined based on, for example, the posture information LPI of the target TG (information indicating the result of low-accuracy posture estimation by the first analysis model 143).
- the above completes the preprocessing for extracting the target for high-precision posture estimation.
- the extracted frame images are subject to motion analysis by the motion analysis server 200 .
- the motion analysis server 200 extracts posture information HPI of the target TG for each frame image SFI from one or more extracted specific frame images SFI (step SD3).
- the pose information HPI of the target TG is extracted only from one or more specific frame images SFI using, for example, a high-precision high-complexity second analysis model 297 (see FIG. 9).
- the motion analysis server 200 extracts posture information HPI indicating the motion timing of each phase from among the extracted one or more posture information HPI (information indicating the highly accurate posture estimation result by the second analysis model 297). Thereby, a plurality of phases included in a series of operations are detected.
- the motion analysis server 200 analyzes the motion of the target TG for each phase using the posture information HPI indicating the motion timing of each phase (step SD4).
- the client terminal 100 receives the analysis information MAI from the motion analysis server 200 and notifies it to the target TG (step SD5).
- low-precision posture estimation is performed by the client terminal 100 and high-precision posture estimation is performed by the motion analysis server 200, but the sharing of posture estimation is not limited to this.
- the client terminal 100 may perform all posture estimation (low-precision posture estimation and high-precision posture estimation), or the motion analysis server 200 may perform all posture estimation. In either case, the advantageous effects of fast detection of specific scenes and reduction of the total computational load of high-precision pose estimation are obtained.
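- The division of labor described above can be summarized as a two-tier pipeline: a cheap pose model over every frame to find candidate frames, then an expensive model only on those frames. The sketch below illustrates this idea with stand-in functions; they are not the first and second analysis models themselves.

```python
# Hypothetical sketch of two-tier pose estimation: a lightweight pass over all
# frames selects candidate frames; a heavyweight pass runs only on those frames.
# Both "models" here are stand-in functions, not the actual analysis models.
from typing import Callable, Dict, List


def light_pose(frame: Dict) -> float:
    # Stand-in for low-accuracy, low-cost pose estimation (first-tier model).
    # Returns a single rough motion score derived from the frame.
    return frame["motion_energy"]


def heavy_pose(frame: Dict) -> Dict[str, float]:
    # Stand-in for high-accuracy, high-cost pose estimation (second-tier model).
    return {"knee_angle": frame["motion_energy"] * 90.0}


def analyze(frames: List[Dict], selector: Callable[[float], bool]) -> Dict[int, Dict[str, float]]:
    """Run the light model on every frame, keep frames the selector flags as
    belonging to a specific scene, and run the heavy model only on those."""
    candidates = [i for i, f in enumerate(frames) if selector(light_pose(f))]
    return {i: heavy_pose(frames[i]) for i in candidates}


if __name__ == "__main__":
    video = [{"motion_energy": e} for e in (0.1, 0.2, 0.9, 1.0, 0.3)]
    # Only frames with a high rough motion score are analyzed at full precision.
    print(analyze(video, selector=lambda score: score > 0.8))
```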
- FIG. 8 is a flowchart showing an example of analysis/evaluation processing.
- <Step SB1: Posture estimation> A plurality of images of specific scenes extracted from the moving image data MD are transmitted from the client terminal 100 to the motion analysis server 200.
- The motion analysis server 200 performs posture analysis for each specific scene. Posture analysis is performed using known pose estimation techniques. For example, the motion analysis server 200 uses a deep learning technique to extract a plurality of keypoints KP (feature points representing the shoulders, elbows, wrists, hips, knees, ankles, and so on; see FIG. 10) from the image of the target TG. The motion analysis server 200 estimates the posture of the target TG based on the relative positions of the extracted keypoints KP.
- the motion analysis server 200 extracts posture information HPI of the target TG from each frame image included in the specific scene.
- the posture information HPI means information indicating the position (coordinates) of each keypoint KP and the positional relationship (joint angles, etc.) between the keypoints KP.
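- For example, a joint angle of the kind included in the posture information HPI can be computed from three keypoint coordinates with a standard vector-angle calculation, as in the sketch below; the keypoint names in the example are assumptions.

```python
# Sketch: compute a joint angle (e.g. the knee angle at hip-knee-ankle) from
# three 2D keypoint coordinates. Keypoint names are illustrative assumptions.
import math
from typing import Tuple

Point = Tuple[float, float]


def joint_angle(a: Point, b: Point, c: Point) -> float:
    """Angle at vertex b (in degrees) formed by the points a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0.0:
        return 0.0
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))


if __name__ == "__main__":
    hip, knee, ankle = (0.0, 0.0), (0.0, -1.0), (0.5, -1.8)
    print(round(joint_angle(hip, knee, ankle), 1))  # knee angle in degrees
```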
- the motion analysis server 200 is an information processing device with higher performance than the client terminal 100 . Therefore, the posture information HPI is extracted with higher accuracy than when posture analysis is performed by the client terminal 100 . Using highly accurate posture information HPI also increases the accuracy of motion analysis.
- the motion analysis algorithm AL defines definition information on how to define the posture of each phase.
- The posture is defined based on, for example, the positional relationship (angle, distance, etc.) between keypoints KP and the manner of movement of a specific keypoint KP (moving direction, moving speed, change in moving speed, etc.).
- the posture may be defined based on the positional relationship with a specific object OB (such as a ball) used by the target TG.
- Multiple postures may be defined in one phase. By analyzing multiple postures, it is possible to analyze the transition of postures occurring within the same phase. For example, in a golf swing, the transition of the posture during the swing may be evaluated. Looking only at the start and end of the phase, it is not possible to know what kind of swing was performed during that time. By analyzing one or more frame images SFI between the start and end of the phase, it is possible to grasp the transition of posture within the same phase. This allows you to check whether the correct operation was performed.
- the motion analysis server 200 extracts one or more frame images SFI defined in the definition information from one or more frame images SFI included in the specific scene. As a result, one or more orientations associated with the same phase defined in the definition information are detected.
- As a posture determination method, a determination method based on a threshold value or a determination method based on machine learning such as deep learning may be used.
- <Step SB3: Evaluation> <<Categorization>>
- One or more evaluation items are defined for each phase in the motion analysis algorithm AL.
- Individual evaluation items and scoring criteria are set by trainers and coaches. When standard evaluation items and scoring criteria are known, known evaluation items and scoring criteria may be used as they are. For example, the Barthel Index (BI) and the Functional Independence Measure (FIM) are commonly used as methods for evaluating ADL (Activities of Daily Living). By using known evaluation items and scoring criteria, the state of the target TG is better understood.
- the motion analysis server 200 extracts the posture information HPI from the frame image SFI representing the motion of the phase.
- the motion analysis server 200 scores the extracted posture information HPI for each evaluation item. Scoring may be performed on individual posture information HPI, or may be performed on average posture information HPI over a plurality of frames.
- As a scoring method, a scoring method based on a threshold may be used, or a scoring method based on machine learning such as deep learning may be used.
- the timing of grading may be in real time, or may be after moving image shooting.
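- As a simple illustration of threshold-based scoring, the sketch below scores a single hypothetical evaluation item (a knee angle averaged over the frames of a phase). The target range and point values are assumptions, not values from the disclosure.

```python
# Hypothetical sketch: threshold-based scoring of one evaluation item for a phase.
# The evaluation item, target range, and point values are illustrative assumptions.
from typing import List


def score_knee_angle(angles: List[float], low: float = 140.0, high: float = 165.0) -> int:
    """Average the knee angle over the frames of a phase and score it:
    2 points inside the target range, 1 point within 10 degrees of it, 0 otherwise."""
    if not angles:
        return 0
    avg = sum(angles) / len(angles)
    if low <= avg <= high:
        return 2
    if low - 10.0 <= avg <= high + 10.0:
        return 1
    return 0


if __name__ == "__main__":
    # Knee angles extracted from the frames of one phase (toy values).
    print(score_knee_angle([150.2, 155.7, 158.1]))  # -> 2
    print(score_knee_angle([131.0, 135.0]))         # -> 1
```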
- <Step SB5: Classification of symptoms>
- the motion analysis server 200 detects features of the motion of the target TG based on the scoring results of each phase.
- the motion analysis server 200 classifies the symptoms of the target TG based on motion characteristics.
- As a classification method, a classification method based on a threshold may be used, or a classification method based on machine learning such as deep learning may be used.
- the motion analysis algorithm AL defines, for example, a plurality of classification items set by trainers and coaches.
- the motion analysis server 200 evaluates a series of motions of the target TG based on the scoring results of each evaluation item and the symptom classification results.
- The motion analysis server 200 can compare the evaluation result of the target TG with the evaluation results of others (a model person or other team members) or with past evaluation results of the target TG, and notify the target TG of the comparison result.
- As a comparison method, there is a method of displaying a plurality of skeleton images to be compared in an overlay display or a side-by-side display. In this case, it is preferable to match the sizes of the skeleton images.
- the motion analysis server 200 generates analysis information MAI indicating evaluation results of a series of motions, and reports to the target TG, family FM, and the like.
- the analysis information MAI includes various types of information for supporting the target TG, such as the target TG's current situation (scoring results, symptom classification results), symptom progression, advice, and recommended training plans. The timing of the report can be set arbitrarily.
- FIG. 9 is a diagram illustrating an example of a functional configuration related to analysis/intervention processing.
- the client terminal 100 has a processing device 130 , a storage device 140 and a communication device 150 .
- the processing device 130 has a moving image acquisition section 131 , a shooting condition determination section 132 , a scene extraction section 133 and an output section 134 .
- the moving image acquisition unit 131 acquires the moving image data MD of the target TG captured by the camera 160 .
- a moving image includes a plurality of specific scenes corresponding to each phase.
- the scene extraction unit 133 acquires the moving image data MD from the moving image acquisition unit 131.
- the scene extraction unit 133 extracts one or more frame images SFI representing specific scenes for each phase from the moving image data MD.
- the number of frame images SFI to be extracted is, for example, 1 or more and 10 or less.
- the scene extraction unit 133 determines a specific scene based on the action of the target TG.
- the scene extraction unit 133 compares the motion characteristics of the target TG with the scene information 142 to determine a specific scene.
- the scene extraction unit 133 detects switching to a specific scene based on the posture analysis result of the frame image group before the specific scene.
- the scene extraction unit 133 extracts one or more frame images FI having a resolution higher than that of the frame image group, which are acquired in response to switching to the specific scene, as one or more specific frame images SFI representing the specific scene.
- In the scene information 142, a plurality of specific scenes corresponding to the respective phases and determination conditions for determining each specific scene are defined in association with each other.
- the definition information of the specific scenes and the method of determining the specific scenes are specified in the motion analysis algorithm AL.
- the client terminal 100 extracts the definition information of the specific scene and the determination method of the specific scene from the index database 295 and stores them as the scene information 142 in the storage device 140 .
- the scene extraction unit 133 extracts the posture information LPI of the target TG using, for example, the first analysis model 143 obtained by machine learning.
- the first analysis model 143 is, for example, an analysis model whose posture estimation accuracy is lower than that of the analysis model (second analysis model 297) used when the motion analysis server 200 extracts the posture information HPI.
- the scene extraction unit 133 detects switching to a specific scene based on a change in the posture of the target TG estimated from the extracted posture information LPI.
- the moving image data MD contains information on a series of actions including multiple specific scenes that occur in chronological order.
- The scene extraction unit 133 determines which specific scene is occurring at each point while considering the context before and after it in the flow of the action. For example, in the action of shooting a soccer ball, the specific scene corresponding to the above phase (i) is determined first, and then the specific scenes corresponding to the phases (ii), (iii), and (iv) are determined in order from the moving image data following (i). Each specific scene is determined based on the body motion assumed for that specific scene.
- For example, the scene extraction unit 133 detects switching to a specific scene based on the motion of the target TG when the target TG and a specific object OB (such as a ball in the case of soccer) are in a predetermined positional relationship, or based on a change in the positional relationship between the target TG and the specific object OB.
- the specific scene can be determined with higher accuracy than when the specific scene is determined based only on the relative positional relationship between the skeletons.
- For example, a singular region, in which the pivot foot is assumed to move little when it is stepped on, is defined based on the relative positional relationship with the ball.
- The singular region is defined as an image region having a radius of A × r (where r is the radius of the ball and A is a number greater than 1) from the center of the ball, for example.
- the scene extraction unit 133 extracts a frame image in which the distance between the pivot foot and the ball is within a threshold value as a reference frame image.
- the scene extracting unit 133 extracts N frame images FI up to the reference frame image (N is an integer equal to or greater than 1) from the frame image FI that is (N ⁇ 1) frames before the reference frame image.
- the scene extracting unit 133 extracts a skeleton region in which the skeleton of the ankle of the target TG fits for each of the N frame images FI.
- the scene extracting unit 133 extracts a skeletal motion region that accommodates all of the N skeletal regions.
- the scene extraction unit 133 determines that the pivot foot has been stepped on when the size of the skeletal motion region is within the threshold and the skeletal motion region is included in the singular region.
- the scene extraction unit 133 extracts one or more frame images FI indicating the timing at which the pivot foot is stepped on from the moving image data MD.
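- The plant-foot check described above (a singular region of radius A × r around the ball, a window of N frames, and a bound on how far the ankle region moves) might be sketched as follows; the threshold values are hypothetical.

```python
# Hypothetical sketch of the pivot-foot check: over the N frames leading up to a
# reference frame, the ankle keypoint must stay inside a singular region of
# radius A*r around the ball and move only a little.
# Threshold values (region_factor, max_motion) are illustrative assumptions.
import math
from typing import List, Tuple

Point = Tuple[float, float]


def pivot_foot_planted(ankle_positions: List[Point],
                       ball_center: Point,
                       ball_radius: float,
                       region_factor: float = 2.5,
                       max_motion: float = 0.1) -> bool:
    """ankle_positions: ankle keypoint in each of the N frames up to the reference frame."""
    region_radius = region_factor * ball_radius  # singular region: radius A*r around the ball
    # Every ankle position must lie inside the singular region.
    for x, y in ankle_positions:
        if math.hypot(x - ball_center[0], y - ball_center[1]) > region_radius:
            return False
    # The box containing all ankle positions (the skeletal motion region) must be small.
    xs = [p[0] for p in ankle_positions]
    ys = [p[1] for p in ankle_positions]
    return max(xs) - min(xs) <= max_motion and max(ys) - min(ys) <= max_motion


if __name__ == "__main__":
    ankles = [(0.30, 0.02), (0.31, 0.03), (0.30, 0.04)]
    print(pivot_foot_planted(ankles, ball_center=(0.0, 0.0), ball_radius=0.15))  # True
```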
- the scene extraction unit 133 proceeds to extract the frame image FI of the specific scene corresponding to (ii) above.
- the scene extraction unit 133 determines, for example, the timing at which the extension of the foot detected as the pivot foot passes the ball as the specific scene corresponding to (ii) above.
- the determination of the specific scene corresponding to (ii) above is performed on the moving image data MD after the specific scene corresponding to (i) above.
- the specific scene corresponding to (ii) above occurs immediately after the specific scene corresponding to (i) above. Therefore, if there is a scene in which the extension of the foot detected as the pivot foot passes through the ball within a predetermined time immediately after the specific scene corresponding to (i) above, that scene corresponds to (ii) above.
- the scene extraction unit 133 determines the scene as a specific scene corresponding to (ii) above, and extracts one or more frame images FI representing the specific scene from the moving image data MD.
- the scene extraction unit 133 proceeds to extract the frame image FI of the specific scene corresponding to (iii) above.
- For example, the scene extraction unit 133 determines, as the specific scene corresponding to (iii) above, the timing at which the distance between the center of the hips and the center of the ball stops narrowing and then widens at a speed greater than the speed at which it was narrowing.
- For example, the scene extraction unit 133 calculates the distance between the center of the hip bone and the center of the ball in each frame image FI, normalizes the inter-frame difference of the distance by the diameter of the ball, and determines, based on the normalized difference, that the mode of change in the distance has reversed. The scene extraction unit 133 determines the scene immediately before the mode of change in the distance reverses as the specific scene corresponding to (iii) above.
- the determination of the specific scene corresponding to (iii) above is performed on the moving image data MD after the specific scene corresponding to (ii) above.
- the specific scene corresponding to (iii) above occurs immediately after the specific scene corresponding to (ii) above. Therefore, if the above-described change in distance occurs within a predetermined period of time immediately after the specific scene corresponding to (ii) above, there is a high possibility that the scene is the specific scene corresponding to (iii) above. Therefore, the scene extraction unit 133 determines the scene as a specific scene corresponding to (iii) above, and extracts one or more frame images FI representing the specific scene from the moving image data MD.
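- One way to detect the narrow-then-widen reversal of the hip-to-ball distance, with the inter-frame difference normalized by the ball diameter as described above, is sketched below. The specific reversal criterion (a sign change in the normalized difference) is an assumption for illustration.

```python
# Hypothetical sketch: detect the frame at which the hip-to-ball distance stops
# narrowing and starts widening, using per-frame differences normalized by the
# ball diameter. The sign-change criterion is an illustrative assumption.
from typing import List, Optional


def reversal_frame(distances: List[float], ball_diameter: float) -> Optional[int]:
    """Return the index of the last frame before the distance starts increasing."""
    diffs = [(distances[i + 1] - distances[i]) / ball_diameter
             for i in range(len(distances) - 1)]
    for i in range(1, len(diffs)):
        if diffs[i - 1] < 0 <= diffs[i]:  # was narrowing, now widening
            return i  # frame immediately before the reversal takes effect
    return None


if __name__ == "__main__":
    # Hip-to-ball distances per frame (toy values): approach, impact, then the ball flies away.
    d = [1.00, 0.70, 0.45, 0.30, 0.65, 1.20]
    print(reversal_frame(d, ball_diameter=0.22))  # -> 3 (the frame just before separation)
```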
- the scene extraction unit 133 proceeds to extract the frame image FI of the specific scene corresponding to (iv) above.
- the frame image FI of the specific scene corresponding to (iv) above is used to analyze the posture after shooting.
- the specific scene corresponding to (iv) above is defined as a scene after a predetermined time has elapsed after the specific scene corresponding to (iii) above.
- The time it takes to reach a posture suitable for analysis depends on the manner in which the ball is struck and on the speed of the motion. Therefore, how much time after the shot should be determined as the specific scene corresponding to (iv) above differs for each target TG. In consideration of such individual differences, the scene extraction unit 133 determines, for example, the timing at which a frame time equal to a predetermined multiple of the number of frames from the specific scene corresponding to (ii) above to the specific scene corresponding to (iii) above has elapsed, as the specific scene corresponding to (iv) above.
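- The frame-count based timing for phase (iv) amounts to simple arithmetic, sketched below with an assumed multiplier.

```python
# Hypothetical sketch: estimate the frame for phase (iv) as a fixed multiple of
# the frame count between phases (ii) and (iii). The multiplier is an assumption.
def phase_iv_frame(frame_ii: int, frame_iii: int, multiplier: float = 2.0) -> int:
    """Frame index for phase (iv): phase (iii) plus `multiplier` times the
    (ii)->(iii) frame count, so faster motions are re-checked sooner."""
    span = frame_iii - frame_ii
    return frame_iii + round(multiplier * span)


if __name__ == "__main__":
    # If (ii) was at frame 40 and (iii) at frame 46, look for (iv) around frame 58.
    print(phase_iv_frame(40, 46))
```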
- the accuracy of posture estimation changes depending on the scale of the neural network used in the analysis model.
- When a large-scale neural network is used, many keypoints KP are extracted from the image data, and various motions of the target TG are estimated with high accuracy. Even if information is missing due to occlusion or the like, the keypoints KP of the target TG are extracted with high accuracy.
- Methods of increasing the scale of a neural network include a method of increasing feature maps (channels) and a method of deepening layers. Either method increases the processing amount of the convolution operation and decreases the calculation speed. There is a trade-off between attitude estimation accuracy and calculation speed.
- the scene extraction unit 133 extracts the posture information LPI of the target TG from all the frame images FI that make up the moving image data MD using, for example, the first analysis model 143, which has a small neural network scale and low precision and low computational complexity. If only the specific scenes of the target TG are to be determined, it is sufficient to grasp the rough motion of the target TG. Even if there is a lack of information due to occlusion or the like, the characteristics of the motion can be grasped from the rough change in posture. Therefore, even if the first analysis model 143 with low accuracy and low computational complexity is used, the action scenes of the target TG can be determined. When the first analysis model 143 is used, the processing amount of the convolution operation for each frame image FI is small, so even if the moving image data MD is large, rapid processing is possible.
- Data of one or more frame images SFI representing a specific scene are transmitted to the motion analysis server 200 via the communication device 150 .
- the motion analysis server 200 uses the received one or more frame images SFI to perform motion analysis of the phase corresponding to the specific scene.
- the output unit 134 receives the evaluation result (analysis information MAI) based on the motion analysis from the motion analysis server 200 via the communication device 150 .
- the output unit 134 notifies the target TG of the received analysis information MAI. Notifications are made, for example, by a combination of text, graphics and sound.
- the shooting condition determination unit 132 determines the shooting direction of the target TG when acquiring the moving image data MD, based on the type of motion (determination item) to be subjected to motion analysis.
- the index database 295 defines one or more photographing directions in which photographing should be performed for each determination item.
- The shooting direction is determined from the viewpoint of how easily the motion can be grasped. For example, based on the characteristics of the motion to be analyzed, the shooting direction is determined as the frontal direction of the target TG (perpendicular to the frontal plane), the lateral direction (perpendicular to the sagittal plane), or both the frontal and lateral directions.
- the imaging condition determination unit 132 notifies the target TG of the imaging direction defined in the index database 295 .
- in some cases, the posture information HPI of the target TG in both the front direction and the side direction can be extracted from a single piece of moving image data MD. In such cases, there is no need to shoot front and side images separately.
- the storage device 140 stores, for example, shooting condition information 141, scene information 142, a first analysis model 143 and a program 144.
- the shooting condition information 141 includes information on shooting conditions defined in the motion analysis algorithm AL.
- the client terminal 100 extracts information about the shooting conditions from the index database 295 and stores it as shooting condition information 141 in the storage device 140 .
- the shooting condition information 141 and the scene information 142 may be downloaded from the index database 295 or installed in the client terminal 100 from the beginning.
- the program 144 is a program that causes a computer to execute information processing of the client terminal 100 .
- the processing device 130 performs various processes according to the program 144 .
- the storage device 140 may be used as a work area that temporarily stores the processing results of the processing device 130 .
- Storage device 140 includes, for example, any non-transitory storage media such as semiconductor storage media and magnetic storage media.
- the storage device 140 includes, for example, an optical disk, a magneto-optical disk, or a flash memory.
- the program 144 is stored, for example, in a non-transitory computer-readable storage medium.
- the processing device 130 is, for example, a computer configured with a processor and memory.
- the memory of the processing device 130 includes RAM (Random Access Memory) and ROM (Read Only Memory).
- by executing the program 144, the processing device 130 functions as a video acquisition unit 131, a shooting condition determination unit 132, a scene extraction unit 133, and an output unit 134.
- the motion analysis server 200 has a processing device 250 , a storage device 290 and a communication device 260 .
- the processing device 250 has a posture information extraction unit 214 , a state machine 221 and a motion analysis unit 222 .
- Posture information extraction section 214 is included in sensor data analysis section 211 .
- State machine 221 and motion analysis unit 222 are included in evaluation unit 220 .
- the posture information extraction unit 214 acquires one or more frame images SFI representing a specific scene transmitted from the client terminal 100 via the communication device 260 .
- the posture information extraction unit 214 uses the second analysis model 297 obtained by machine learning to extract posture information HPI of the target TG for each frame image SFI from one or more frame images SFI representing a specific scene.
- the second analysis model 297 is an analysis model with higher posture estimation accuracy than the analysis model (the first analysis model 143) used when the scene extraction unit 133 determines a specific scene.
- the posture information extraction unit 214 extracts the posture information HPI of the target TG from the one or more specific frame images SFI using, for example, the second analysis model 297, whose neural network scale is large and whose accuracy and computational complexity are high. Only the one or more specific frame images SFI selected from the plurality of frame images FI forming the moving image data MD are subjected to the posture estimation processing by the posture information extraction unit 214. Therefore, even if the convolution processing per frame image SFI is large, rapid processing is possible.
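Restricting the expensive model to the selected frames might look like the following sketch, where `accurate_pose_model` is a placeholder for the second analysis model 297 and `frames` is any indexable frame container; none of these names come from the source.

```python
def extract_hpi_for_selected(frames, selected_indices, accurate_pose_model):
    """Second pass: apply the large, high-accuracy model only to the
    specific frame images SFI chosen by scene extraction, and return
    posture information HPI keyed by frame index."""
    return {idx: accurate_pose_model(frames[idx]) for idx in selected_indices}

# Example usage (hypothetical): hpi = extract_hpi_for_selected(frames, [120, 134, 151], model)
```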
- the state machine 221 detects a plurality of phases included in a series of operations of the target TG based on the posture information HPI of the target TG. For example, the state machine 221 matches features contained in the target TG's pose information HPI with the phase information 298 .
- in the phase information 298, a plurality of phases to be analyzed and judgment conditions for judging each phase are defined in association with each other.
- the definition information of each phase and the method of determining each phase (the method of detecting phase boundaries) are specified in the motion analysis algorithm AL.
- the phase information 298 indicates various information about the phases defined in the motion analysis algorithm AL (definition information and determination method for each phase, etc.).
- the state machine 221 extracts one or more posture information HPIs according to the collation result from among the one or more posture information HPIs extracted by the posture information extraction unit 214 .
- One or more pieces of posture information HPI extracted based on the collation result indicate the posture of the target TG in the phase defined in the phase information 298, respectively.
- the state machine 221 detects multiple phases included in the series of motions based on the posture information HPI acquired from multiple directions. As a result, a plurality of phases are detected while compensating for blind spot information.
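One way to picture the state machine is as an ordered list of phases, each paired with a predicate over the posture information. The sketch below is a simplification with hypothetical phase names and judgment conditions; the actual definitions and determination methods come from the phase information 298 and the motion analysis algorithm AL.

```python
from typing import Callable, Dict, List, Tuple

Pose = Dict[str, Tuple[float, float]]  # keypoint name -> (x, y)

class PhaseStateMachine:
    """Walks through the expected phases in order and records the first
    frame whose posture satisfies each phase's judgment condition."""

    def __init__(self, phases: List[Tuple[str, Callable[[Pose], bool]]]):
        # phases: list of (phase_name, condition(pose) -> bool), in order
        self.phases = phases

    def detect(self, hpi_by_frame: Dict[int, Pose]) -> Dict[str, int]:
        detected = {}
        phase_idx = 0
        for frame_idx in sorted(hpi_by_frame):
            if phase_idx >= len(self.phases):
                break
            name, condition = self.phases[phase_idx]
            if condition(hpi_by_frame[frame_idx]):
                detected[name] = frame_idx
                phase_idx += 1  # advance to the next expected phase
        return detected

# Hypothetical condition: wrist raised above the shoulder (image y grows downward).
def wrist_above_shoulder(pose: Pose) -> bool:
    return pose["right_wrist"][1] < pose["right_shoulder"][1]
```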
- the motion analysis unit 222 acquires the posture information HPI of the target TG in the specific scene extracted by the posture information extraction unit 214 (the posture information extracted for each frame image SFI from the one or more frame images SFI included in the specific scene).
- the motion analysis unit 222 extracts one or more frame images SFI representing the phases detected by the state machine 221 from one or more frame images SFI included in the specific scene as analysis targets.
- the motion analysis unit 222 extracts one or more frame images SFI to be analyzed for each phase based on the phase detection results obtained from the state machine 221 .
- the motion analysis unit 222 analyzes the posture information HPI to be analyzed for each phase, and generates analysis information MAI indicating the evaluation result of a series of motions.
- the motion analysis method (definition of scoring items, scoring method, etc.) is specified in the motion analysis algorithm AL.
- the motion analysis unit 222 performs motion analysis based on the motion analysis algorithm AL acquired from the index database 295 .
- the motion analysis unit 222 scores the motion of each phase based on one or more scoring items set for each phase, and generates analysis information MAI based on the scoring results of each phase.
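A compact sketch of per-phase scoring and aggregation follows. The scoring items and the simple averaging are assumptions for illustration; the actual items, criteria, and weighting are defined in the motion analysis algorithm AL and the index database 295.

```python
from statistics import mean

def score_phase(pose, scoring_items):
    """scoring_items: list of (item_name, scorer) where scorer(pose)
    returns a 0-100 score for that scoring item."""
    return {name: scorer(pose) for name, scorer in scoring_items}

def build_analysis_info(phase_poses, items_per_phase):
    """phase_poses: {phase_name: pose}; items_per_phase: {phase_name: items}.
    Returns per-phase scores plus an overall evaluation of the series."""
    per_phase = {
        phase: score_phase(pose, items_per_phase[phase])
        for phase, pose in phase_poses.items()
    }
    overall = mean(
        mean(scores.values()) for scores in per_phase.values() if scores
    )
    return {"per_phase": per_phase, "overall_score": round(overall, 1)}
```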
- Motion analysis unit 222 transmits analysis information MAI to client terminal 100 , trainer terminal 300 , family terminal 400 and service provider server 500 via communication device 260 .
- the storage device 290 stores role model information 296 , second analysis model 297 , phase information 298 and program 299 .
- the program 299 is a program that causes a computer to execute information processing of the motion analysis server 200 .
- the processing device 250 performs various processes according to programs 299 stored in the storage device 290 .
- the storage device 290 may be used as a work area that temporarily stores the processing results of the processing device 250 .
- Storage device 290 includes, for example, any non-transitory storage media such as semiconductor storage media and magnetic storage media.
- the storage device 290 includes, for example, an optical disk, magneto-optical disk, or flash memory.
- the program 299 is stored, for example, in a non-transitory computer-readable storage medium.
- the processing device 250 is, for example, a computer configured with a processor and memory.
- the memory of processing unit 250 includes RAM and ROM.
- Processing device 250 functions as sensor data analysis unit 211 , evaluation unit 220 , posture information extraction unit 214 , state machine 221 and motion analysis unit 222 by executing program 299 .
- the service provider server 500 has a processing device 510 , a storage device 590 and a communication device 520 .
- the processing device 510 has an information acquisition unit 511 and a sales information generation unit 512 .
- the information acquisition unit 511 acquires the analysis information MAI via the communication device 520 .
- the product sales information generation unit 512 extracts from the product sales database 591 information on a product group suitable for the target TG's exercise status based on the analysis information MAI acquired from the information acquisition unit 511 .
- the product sales information generation unit 512 generates product sales information based on the extracted product group information, and transmits the product sales information to the motion analysis server 200 via the communication device 520 .
- the motion analysis server 200 generates intervention information VI using the analysis information MAI and product sales information, and transmits the intervention information VI to the client terminal 100 .
- the storage device 590 stores a sales database 591 and a program 592.
- the program 592 is a program that causes a computer to execute information processing of the processing device 510 .
- the processing device 510 functions as an information acquisition unit 511 and a product sales information generation unit 512 by executing a program 592 .
- the configurations of storage device 590 and processing device 510 are similar to storage device 290 and processing device 250 of motion analysis server 200 .
- FIG. 10 is a diagram showing an example of analysis information MAI.
- the motion analysis unit 222 generates analysis information MAI based on the analysis results for each phase.
- the output unit 134 displays the analysis information MAI on the display device 170 together with the image of the moving image data MD that is the analysis target. For example, the output unit 134 suspends the movement of the target TG for each phase and displays the analysis information MAI together with the still image IM of the target TG in the phase.
- the output unit 134 notifies the first analysis information MAI1 and the second analysis information MAI2 as the analysis information MAI.
- the first analysis information MAI1 includes, for each phase, information indicating a comparison between the motion of the target TG and the motion of a specific person RM (for example, a professional athlete) serving as a model for the motion.
- the second analysis information MAI2 includes information indicating a guideline for bringing the motion of the target TG closer to the motion of the target specific person RM. In the example of FIG. 10, the comment "The kicking leg is being raised high" and the evaluation score "86 points" are shown as the second analysis information MAI2. The evaluation points indicate the degree of achievement based on preset criteria.
- the motion analysis algorithm AL defines motion information to be compared in motion analysis.
- the role model information 296 indicates information about the comparison target defined in the motion analysis algorithm AL (information on the motion of the specific person RM, etc.).
- the analysis information MAI may include information indicating the transition of the scoring results of each phase for each scoring item from the past to the present.
- the first analysis information MAI1 includes, for example, skeleton information SI of the target TG and reference skeleton information RSI (skeletal information of the specific person RM) serving as a reference for comparison in each phase.
- the reference skeleton information RSI is generated using, for example, skeleton information obtained by correcting the skeleton information of the specific person RM in each phase based on the physical difference between the target TG and the specific person RM.
- Reference skeleton information RSI in each phase is included in role model information 296 .
- the scale of the reference skeleton information RSI is set as follows. First, one or more bones suitable for comparing the physiques of the specific person RM and the target TG are defined. For example, in the example of FIG. 10, the spine and leg bones are defined as the reference for comparison.
- the motion analysis unit 222 detects the lengths of the spine and leg bones at the timing when the postures of the specific person RM and the target TG are aligned.
- the motion analysis unit 222 calculates the ratio of the sums of the lengths of the spine and leg bones as the ratio of the body sizes of the specific person RM and the target TG, and changes the scale of the skeleton of the specific person RM based on this ratio. This facilitates comparison with the specific person RM and makes it easier to understand how the target TG should move.
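A sketch of this rescaling, under assumed keypoint names and an assumed bone set (spine and leg segments), is shown below; only the ratio-and-rescale logic follows the text, and everything else is hypothetical.

```python
import numpy as np

def bone_length(p, q):
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def body_size_ratio(target_kp, model_kp,
                    bones=(("neck", "pelvis"), ("hip", "knee"), ("knee", "ankle"))):
    """Ratio of target to model body size, from the summed lengths of the
    comparison bones. Keypoints are dicts: name -> (x, y)."""
    target_sum = sum(bone_length(target_kp[a], target_kp[b]) for a, b in bones)
    model_sum = sum(bone_length(model_kp[a], model_kp[b]) for a, b in bones)
    return target_sum / model_sum

def rescale_model_skeleton(model_kp, ratio, origin_joint="pelvis"):
    """Scale the model skeleton about a reference joint so its size matches
    the target before overlaying it as reference skeleton information RSI."""
    origin = np.asarray(model_kp[origin_joint], dtype=float)
    return {name: tuple(origin + ratio * (np.asarray(p, float) - origin))
            for name, p in model_kp.items()}
```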
- as the reference skeleton information RSI, for example, first reference skeleton information RSI1, second reference skeleton information RSI2, and third reference skeleton information RSI3 are displayed.
- the first reference skeleton information RSI1 is skeleton information of the behavior of the specific person RM serving as a model.
- the second reference skeleton information RSI2 is skeleton information of a motion at a specific level below the model (for example, a level of 80 points when the model is 100 points).
- the first reference skeleton information RSI1 and the second reference skeleton information RSI2 are model skeleton information at the timing when the waist position matches that of the target TG.
- the third reference skeleton information RSI3 is model skeleton information at the timing when the position of the pivot foot matches the target TG.
- the third reference skeleton information RSI3, for example, is displayed at all times, together with the skeleton information SI of the target TG, in conjunction with the movement of the target TG during the period of the series of actions from stepping on the pivot foot to immediately after impact.
- the third reference skeleton information RSI3 is used to compare a series of motions from stepping on the pivot foot to immediately after impact with the target TG. Therefore, unlike the first reference skeleton information RSI1 and the second reference skeleton information RSI2, the third reference skeleton information RSI3 indicates the skeleton information of the whole body.
- the time required for the series of actions differs between the specific person RM and the target TG. Therefore, a timing that is effective for comparison (for example, the impact timing or the stepping timing) is defined, and the third reference skeleton information RSI3 is superimposed on the target TG so that the defined timings match.
- the stepping timings are matched in this example, but which timing should be aligned is set appropriately according to the purpose of the lesson.
- the output unit 134 offsets and displays the position of the third reference skeleton information RSI3, for example, so that the position of the ankle of the target TG and the position of the ankle of the specific person RM match at a defined timing. This makes it easier to understand how much the stepping positions of the target TG and the specific person RM are different.
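The offset described here amounts to translating the reference skeleton so that a chosen anchor joint coincides with the target's at the defined timing. The sketch below assumes hypothetical joint names; only the translation logic follows the text.

```python
import numpy as np

def align_skeleton_to_target(reference_kp, target_kp, anchor_joint="left_ankle"):
    """Translate the reference skeleton so that its anchor joint (e.g. the
    pivot-foot ankle) coincides with the target's at the defined timing."""
    offset = (np.asarray(target_kp[anchor_joint], float)
              - np.asarray(reference_kp[anchor_joint], float))
    return {name: tuple(np.asarray(p, float) + offset)
            for name, p in reference_kp.items()}
```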
- the output unit 134 selectively displays, as the skeleton information SI of the target TG, the first reference skeleton information RSI1, and the second reference skeleton information RSI2, the skeleton information corresponding to the part of the target TG to be analyzed in the phase.
- the skeleton information of the waist and legs is selectively displayed as the skeleton information SI, the first reference skeleton information RSI1, and the second reference skeleton information RSI2.
- the first reference skeleton information RSI1 and the second reference skeleton information RSI2 may be displayed at all times in conjunction with the movement of the target TG during the series of actions. However, in order to clarify the comparison with the specific person RM, the first reference skeleton information RSI1 and the second reference skeleton information RSI2 may instead be displayed only at the timing when the movement of the target TG diverges from that of the specific person RM.
- in that case, the output unit 134 displays the skeleton information SI of the target TG, the first reference skeleton information RSI1, and the second reference skeleton information RSI2 at the timing when a difference exceeding the allowable standard occurs between the skeleton information SI of the target TG and the first reference skeleton information RSI1.
- the output unit 134 highlights the skeleton of the target TG where the difference between the skeleton information SI of the target TG and the first reference skeleton information RSI1 exceeds the allowable standard.
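A simple per-joint check against the allowable standard could look like the following sketch; the tolerance value and coordinate convention are assumptions, as the source only speaks of an "allowable standard".

```python
import numpy as np

def joints_to_highlight(target_kp, reference_kp, tolerance=0.05):
    """Return the joints whose deviation from the reference skeleton
    exceeds the allowable standard (tolerance in normalized coordinates)."""
    flagged = []
    for name in target_kp.keys() & reference_kp.keys():
        d = np.linalg.norm(np.asarray(target_kp[name], float)
                           - np.asarray(reference_kp[name], float))
        if d > tolerance:
            flagged.append(name)
    return flagged
```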
- the time required for a series of actions differs between the specific person RM and the target TG. Therefore, effective timings for comparison are defined as phases, and the first reference skeleton information RSI1 is superimposed on the target TG so that the defined phases match. This facilitates comparison with the specific person RM and makes it easier to understand how the target TG should operate.
- FIG. 11 is a diagram illustrating an example of a notification mode of analysis information MAI.
- the analysis information MAI is displayed superimposed on the frame image indicating the operation timing of each phase.
- the display device 170 pauses the reproduction of the analysis moving image data AMD at the operation timing of each phase. Then, the display device 170 displays a still image IM in which the analysis information MAI is superimposed on the frame image of each phase.
- the reproduction of the analysis moving image data AMD is paused for each phase, and the analysis information MAI of the corresponding phase is notified.
- the moving image data MD may be reproduced in slow motion so that the posture of the target TG can be easily confirmed. At this time, slow-motion playback may be applied only to the section from the first phase to the last phase, and the images before and after that section may be played back at normal playback speed.
- FIG. 11 shows an example in which three phases A1 to A3 are set.
- Phase A1 is the timing of stepping on the pivot foot.
- Phase A2 is the timing of impact.
- Phase A3 is the timing immediately after the impact (a specified few seconds after the impact).
- the display device 170 reproduces the moving image of the target TG based on the reproduction operation on the client terminal 100 .
- the display device 170 pauses the reproduction of the moving image data MD at the timing when the phase A1 is reproduced.
- the display device 170 displays a still image IM (first still image IM1) in which the analysis information MAI of the motion of the target TG in phase A1 is superimposed on the frame image FI in phase A1.
- the display device 170 starts playing the moving image from phase A1 onward when a preset time elapses from the playing operation on the client terminal 100 or the start of displaying the first still image IM1.
- the display device 170 pauses the reproduction of the moving image data MD at the timing when phase A2 is reproduced. Then, the display device 170 displays a still image IM (second still image IM2) in which the analysis information MAI of the motion of the target TG in phase A2 is superimposed on the frame image FI in phase A2. After that, the display device 170 starts playing the moving image from phase A2 onward when a preset time elapses from the playing operation on the client terminal 100 or the start of displaying the second still image IM2.
- the display device 170 pauses the reproduction of the moving image data MD at the timing when phase A3 is reproduced. Then, the display device 170 displays a still image IM (third still image IM3) in which the analysis information MAI of the motion of the target TG in phase A3 is superimposed on the frame image FI in phase A3. After that, the display device 170 starts playing the moving image from phase A3 onward when a preset time has passed since the client terminal 100 was operated to play or the third still image IM3 started to be displayed.
- the display device 170 can pause the movement of the target TG in each phase and display the analysis information MAI in each phase together with the still image IM of the target TG in each phase.
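The phase-paused playback can be pictured as the following sketch, in which `show` is a placeholder for the actual rendering call and the pause duration stands in for the preset time mentioned above; it is an illustration, not the client terminal's actual playback implementation.

```python
import time

def play_with_phase_pauses(frames, phase_frames, show, pause_seconds=3.0):
    """Reproduce the frames in order, pausing at each phase frame so the
    still image IM with overlaid analysis information MAI can be read."""
    phase_set = set(phase_frames)
    for idx, frame in enumerate(frames):
        show(frame)
        if idx in phase_set:
            time.sleep(pause_seconds)  # hold the still image for this phase
```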
- after the last phase, the display device 170 reproduces the remaining moving image to the end.
- FIG. 12 is a diagram showing another example of a notification mode of analysis information MAI.
- FIG. 12 shows an example in which the motion analysis service is applied to golf learning support.
- FIG. 12 shows an example in which six phases are set. For example, the timing of the backswing, the timing of the downswing, the timing immediately before impact, the timing of impact, the timing immediately after impact, and the timing of follow-through are each set as a phase to be analyzed.
- the moving image data MD is paused at the timing when the operation of each phase is reproduced, and the analysis information MAI is superimposed and displayed.
- in the example of FIG. 12, the analysis information MAI of past phases continues to be displayed on the display device 170 without being erased. The reference skeleton information RSI that serves as a model is not displayed.
- however, the display mode of the analysis information MAI is not limited to this. For example, after the analysis information MAI of a certain phase has been notified, it may be erased until the next phase is displayed, and the analysis information MAI of all the phases may be collectively redisplayed when the last phase is reproduced, when the analysis information MAI of all the phases has been notified, or when the moving image has been reproduced to the end.
- the first analysis information MAI1 is presented as information indicating comparison with others.
- the first analysis information MAI1 may include information indicating a comparison with past target TG operations.
- the first analysis information MAI1 can include skeleton information SI of the current target TG and skeleton information SI of the past target TG that serves as a reference for comparison.
- the output unit 134 displays the skeleton information SI of the current target TG and the skeleton information SI of the past target TG at the timing when a difference exceeding the allowable standard occurs between the skeleton information SI of the current target TG and the reference skeleton information RSI indicating the motion of the specific person RM. The output unit 134 highlights the skeleton of the target TG where the difference between the skeleton information SI of the current target TG and the reference skeleton information RSI exceeds the allowable standard.
- the notification method of the analysis information MAI is not limited to this.
- the client terminal 100 may generate new moving image data (modified moving image data) incorporating the analysis information MAI, and the generated modified moving image data may be reproduced on the display device 170 .
- the analysis information MAI is written into the frame images indicating each phase of the corrected moving image data. In the corrected moving image data, the display is adjusted so that the movement of the target TG is stopped for each phase, the still image IM of the target TG including the analysis information MAI is displayed for a predetermined time, and reproduction of the subsequent video then resumes up to the next phase.
- the modified video data may be generated by the motion analysis unit 222.
- the motion analysis unit 222 can transmit the generated modified video data to the client terminal 100, the trainer terminal 300, the family terminal 400 and the service provider server 500 together with the analysis information MAI or instead of the analysis information MAI. .
- FIG. 13 is a diagram showing variations of the system configuration.
- the information processing system 1A on the upper side of FIG. 13 has a configuration in which the sensor unit 110 is built into the client terminal 100, as in FIG.
- in the other configuration shown on the lower side of FIG. 13, the sensor unit 110 is provided as a device independent of the client terminal 100.
- the sensor data detected by the sensor unit 110 is temporarily stored in the client terminal 100 and then transmitted to the motion analysis server 200 .
- the device owned by the service provider is the server (service provider server 500).
- the device owned by the service provider does not necessarily have to be a server, and may be information terminals such as smart phones, tablet terminals, notebook computers, and desktop computers.
- the information processing system 1 has a state machine 221 and a motion analysis section 222 .
- the state machine 221 detects a plurality of phases included in a series of motions of the target TG based on the posture information HPI of the target TG extracted from the moving image data MD.
- the motion analysis unit 222 analyzes the motion of the target TG for each phase using the posture information HPI.
- the processing of the information processing system 1 is executed by a computer.
- the program of the present disclosure causes a computer to implement the processing of the information processing system 1 .
- the information processing system 1 has a scene extraction unit 133 and a posture information extraction unit 214 .
- the scene extraction unit 133 extracts one or more specific frame images SFI representing specific scenes corresponding to each phase from the moving image data MD.
- the posture information extraction unit 214 extracts posture information HPI of the target TG from the extracted one or more specific frame images SFI.
- posture information HPI is extracted only from frame images FI of specific scenes that require analysis (specific frame images SFI).
- the moving image data MD before and after the specific scene do not contribute to motion analysis. Omitting image processing of data regions that do not contribute to motion analysis reduces the time and cost required for motion analysis.
- the scene extraction unit 133 detects switching to a specific scene based on the posture analysis results of the frame image group before the specific scene.
- the scene extraction unit 133 extracts one or more frame images FI having a higher resolution than the frame image group, which are acquired in response to switching to the specific scene, as one or more specific frame images SFI.
- the reception timing of the specific scene is predicted based on the moving image data MD acquired in the low image quality mode.
- the acquisition mode of the moving image data MD is switched from the low image quality mode to the high image quality mode in accordance with the predicted timing.
- the posture information HPI of the target TG is extracted from the moving image data MD acquired in the high image quality mode. Therefore, the posture information HPI to be analyzed can be extracted with high accuracy while specifying the specific scene with a low processing load.
- the scene extraction unit 133 detects switching to the specific scene based on the motion of the target TG when the target TG and the specific object OB are in a predetermined positional relationship, or based on a change in the positional relationship between the target TG and the specific object OB.
- the specific scene is detected with higher accuracy than when the specific scene is detected based only on the relative positional relationship between the skeletons.
- the scene extraction unit 133 extracts the posture information LPI of the target TG using an analysis model (the first analysis model 143) whose posture estimation accuracy is lower than that of the analysis model (the second analysis model 297) used by the posture information extraction unit 214.
- the scene extraction unit 133 detects switching to a specific scene based on a change in posture of the target TG estimated from the extracted posture information LPI.
- the simple first analysis model 143 is used to quickly estimate the motion of the target TG at low cost. Accurate motion analysis is not required if only specific scenes are detected. By varying the pose estimation accuracy of the first analysis model 143 used for determining a specific scene and the second analysis model 297 used for detailed motion analysis, low-cost and efficient motion analysis can be performed.
- the information processing system 1 has an imaging condition determination unit 132 .
- the shooting condition determination unit 132 determines the shooting direction of the target TG when acquiring the moving image data MD, based on the type of motion targeted for motion analysis.
- moving image data MD suitable for motion analysis can be easily obtained.
- the state machine 221 detects multiple phases included in a series of motions based on posture information HPI obtained from multiple directions.
- the motion analysis unit 222 scores the motion of each phase based on one or more scoring items set for each phase.
- the motion analysis unit 222 generates analysis information MAI indicating evaluation results of a series of motions based on the scoring results of each phase.
- the information processing system 1 has an output unit 134 .
- the output unit 134 suspends the movement of the target TG for each phase and displays the analysis information MAI together with the still image IM of the target TG in the phase.
- the analysis results are provided in a manner linked to the playback scene of the video. Therefore, the operation of the targeted target TG and its analysis result can be efficiently grasped.
- the analysis information MAI includes information indicating comparison with the target operation.
- the analysis information MAI includes skeleton information SI of the target TG and reference skeleton information RSI that serves as a reference for comparison.
- the output unit selectively displays skeleton information corresponding to the part of the target TG to be analyzed in the phase as skeleton information SI and reference skeleton information RSI of the target TG.
- the output unit displays the skeleton information SI of the target TG and the reference skeleton information RSI at the timing when a difference exceeding the allowable standard occurs between the skeleton information SI of the target TG and the reference skeleton information RSI.
- the output unit highlights the skeleton of the target TG in the portion where the difference between the skeleton information SI of the target TG and the reference skeleton information RSI exceeds the allowable standard.
- the analysis information MAI includes information indicating a guideline for bringing the motion of the target TG closer to the target motion.
- the analysis information MAI includes information indicating comparison with past target TG operations.
- the analysis information MAI includes the skeleton information SI of the current target TG and the skeleton information SI of the past target TG that serves as a reference for comparison.
- the output unit 134 displays the skeleton information SI of the current target TG and the skeleton information SI of the past target TG at the timing when a difference exceeding the allowable standard occurs between the skeleton information SI of the current target TG and the reference skeleton information RSI indicating the target motion.
- the timing at which there is a deviation from the target motion can be easily grasped.
- the output unit 134 highlights the skeleton of the target TG where the difference between the skeleton information SI of the current target TG and the reference skeleton information RSI exceeds the allowable standard.
- the analysis information MAI includes information indicating the transition of the scoring results for each scoring item in each phase from the past to the present.
- the information processing system 1 has an intervention information generator 230 .
- the intervention information generator 230 generates intervention information VI for the target TG based on the analysis information MAI.
- the intervention information VI includes judgment information that serves as judgment material for encouraging the target TG to improve its motion, or a training plan for the target TG.
- the intervention information generation unit 230 generates authentication information for authenticating the current level of the target TG.
- the level of the target TG is objectively grasped based on the authentication information.
- the state machine 221 detects multiple phases based on the determination method for each phase stored in the index database 295 .
- the motion analysis unit 222 analyzes the motion of the target TG for each phase based on the scoring items and scoring criteria for each phase stored in the index database 295 .
- the index database 295 stores, for each determination item, one or more information out of moving image shooting conditions, phase definitions, specific scenes to be analyzed, scoring items, and scoring criteria as indicators for motion analysis.
- the judgment item is associated with the type of motion targeted for motion analysis.
- the motion analysis unit 222 transmits the evaluation result of the series of motions to a terminal or server possessed by an interventionist (trainer, family member, service provider, etc.) who intervenes in the target TG. This configuration allows for precise analysis and intervention.
- the present technology can also adopt the following configuration.
- (1) An information processing system having: a state machine that detects a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and a motion analysis unit that analyzes the motion of the target for each phase using the posture information.
- (2) The information processing system according to (1) above, having: a scene extraction unit that extracts, from the moving image data, for each phase, one or more specific frame images representing a specific scene corresponding to the phase; and a posture information extraction unit that extracts posture information of the target from the one or more specific frame images.
- (3) The information processing system according to (2) above, wherein the scene extraction unit detects a switch to the specific scene based on a posture analysis result of a group of frame images preceding the specific scene, and extracts, as the one or more specific frame images, one or more frame images that are acquired in response to the switch to the specific scene and that have a higher resolution than the group of frame images.
- (4) The information processing system according to (3) above, wherein the scene extraction unit detects the switch to the specific scene based on an action of the target when the target and a specific object are in a predetermined positional relationship, or based on a change in the positional relationship between the target and the specific object.
- (5) The information processing system according to (3) or (4) above, wherein the scene extraction unit extracts the posture information of the target using an analysis model whose posture estimation accuracy is lower than that of the analysis model used in the posture information extraction unit, and detects the switch to the specific scene based on a change in the posture of the target estimated from the extracted posture information.
- (6) The information processing system according to any one of (1) to (5) above, having a shooting condition determination unit that determines a shooting direction of the target when acquiring the moving image data, based on the type of motion targeted for motion analysis.
- (7) The information processing system according to (6) above, wherein the state machine detects the plurality of phases included in the series of motions based on the posture information acquired from a plurality of directions.
- (8) The information processing system according to any one of (1) to (7) above, wherein the motion analysis unit scores the motion of each phase based on one or more scoring items set for each phase, and generates analysis information indicating an evaluation result of the series of motions based on the scoring results of the phases.
- (9) The information processing system according to (8) above, having an output unit that pauses the movement of the target for each phase and displays the analysis information together with a still image of the target in the phase.
- (10) The information processing system according to (9) above, wherein the analysis information includes information indicating a comparison with a target motion.
- (11) The information processing system according to (10) above, wherein the analysis information includes skeleton information of the target and reference skeleton information serving as a reference for the comparison.
- (12) The information processing system according to (11) above, wherein the output unit selectively displays, as the skeleton information of the target and the reference skeleton information, skeleton information corresponding to the part of the target to be analyzed in the phase.
- (13) The information processing system according to (11) or (12) above, wherein the output unit displays the skeleton information of the target and the reference skeleton information at a timing when a difference exceeding an allowable standard occurs between the skeleton information of the target and the reference skeleton information.
- (14) The information processing system according to any one of (11) to (13) above, wherein the output unit highlights the skeleton of the target in a portion where the skeleton information of the target and the reference skeleton information differ beyond an allowable standard.
- (15) The information processing system according to any one of (10) to (14) above, wherein the analysis information includes information indicating a guideline for bringing the motion of the target closer to the target motion.
- (16) The information processing system according to any one of (9) to (15) above, wherein the analysis information includes information indicating a comparison with a past motion of the target.
- (17) The information processing system according to (16) above, wherein the analysis information includes current skeleton information of the target and past skeleton information of the target serving as a reference for the comparison.
- (18) The information processing system according to (17) above, wherein the output unit displays the current skeleton information of the target and the past skeleton information of the target at a timing when a difference exceeding an allowable standard occurs between the current skeleton information of the target and reference skeleton information indicating a target motion.
- (19) The information processing system according to (18) above, wherein the output unit highlights the skeleton of the target in a portion where the current skeleton information of the target and the reference skeleton information differ beyond an allowable standard.
- (20) The information processing system according to any one of (8) to (19) above, wherein the analysis information includes information indicating the transition of the scoring results for each scoring item of each phase from the past to the present.
- (21) The information processing system according to any one of (8) to (20) above, having an intervention information generation unit that generates intervention information for the target based on the analysis information.
- (22) The information processing system according to (21) above, wherein the intervention information includes judgment information serving as material for prompting the target to improve the motion, or a training plan for the target.
- (23) The information processing system according to (21) or (22) above, wherein the intervention information generation unit generates authentication information that certifies the current level of the target.
- (24) The information processing system according to any one of (1) to (23) above, wherein the state machine detects the plurality of phases based on a determination method for each phase stored in an index database.
- (25) The information processing system according to (24) above, wherein the motion analysis unit analyzes the motion of the target for each phase based on scoring items and scoring criteria for each phase stored in the index database.
- (26) The information processing system according to (24) or (25) above, wherein the index database stores, for each determination item, one or more pieces of information among shooting conditions of the moving image, definitions of the phases, specific scenes to be analyzed, scoring items, and scoring criteria, as indices for motion analysis.
- (27) The information processing system according to (26) above, wherein the determination item is associated with the type of exercise targeted for motion analysis.
- (28) The information processing system according to any one of (1) to (27) above, wherein the motion analysis unit transmits the evaluation result of the series of motions to a terminal or a server owned by an interventionist who intervenes in the target.
- (29) A computer-implemented information processing method comprising: detecting a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and analyzing the motion of the target for each phase using the posture information.
- (30) A program that causes a computer to implement: detecting a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and analyzing the motion of the target for each phase using the posture information.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physical Education & Sports Medicine (AREA)
- Tourism & Hospitality (AREA)
- Educational Administration (AREA)
- Primary Health Care (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- General Business, Economics & Management (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Educational Technology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
[2. Configuration of Information Processing System]
[3. Information Processing Method]
[3-1. Outline of Motion Analysis Processing]
[3-2. Moving Image Acquisition Processing]
[3-3. Analysis and Evaluation Processing]
[4. Functional Configuration Related to Analysis and Intervention Processing]
[5. Specific Example of Motion Analysis]
[6. Variations of System Configuration]
[7. Effects]
FIG. 1 is a diagram showing an example of the motion analysis service.
FIG. 2 is a block diagram showing an example of the functional configuration of the information processing system 1.
[3-1. Outline of Motion Analysis Processing]
FIG. 3 is a flowchart showing an outline of the motion analysis processing.
FIG. 4 is a flowchart showing an example of the moving image acquisition processing.
The client terminal 100 recognizes the person to be subjected to motion analysis (the target TG). The target TG may be recognized as the person at the center of the field of view of the camera 160, or the target TG may be authenticated by account information, face authentication, fingerprint authentication, or the like.
The client terminal 100 determines whether exercise is possible based on interview data. When the client terminal 100 determines that exercise is possible, it decides the determination items and the shooting conditions in preparation for shooting.
Once the shooting conditions have been optimized, the client terminal 100 shoots the moving image. Before shooting the moving image related to the determination items, the client terminal 100 may shoot an assessment moving image. The assessment moving image means a moving image, acquired to analyze the athletic ability of the target TG, showing basic movements such as standing up, walking, going up and down stairs, and getting up. The assessment moving image is used, together with the interview data, as material for judging the health condition of the target TG.
When the moving image related to the determination items has been shot, the client terminal 100 performs, as necessary, preprocessing for performing motion analysis on the motion analysis server 200.
FIGS. 5 to 7 are diagrams explaining specific examples of the preprocessing. The flow of FIG. 7 is described below with reference to FIGS. 5 and 6.
FIG. 8 is a flowchart showing an example of the analysis and evaluation processing.
Video of a plurality of specific scenes, extracted from the moving image data MD, is transmitted from the client terminal 100 to the motion analysis server 200. The motion analysis server 200 performs posture analysis for each specific scene. The posture analysis is performed using a known posture estimation technique. For example, using a deep learning method, the motion analysis server 200 extracts a plurality of keypoints KP (feature points indicating the shoulders, elbows, wrists, waist, knees, ankles, and the like: see FIG. 10) from an image of the target TG. The motion analysis server 200 estimates the posture of the target TG based on the relative positions of the extracted keypoints KP.
The motion analysis algorithm AL prescribes definition information on how the posture of each phase should be defined. The posture is defined based on, for example, the positional relationships between keypoints KP (angles, distances, and the like) and the manner in which specific keypoints KP move (movement direction, movement speed, how the movement speed changes, and the like). The posture may also be defined based on the positional relationship with a specific object OB (a ball or the like) used by the target TG.
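As an illustration of a keypoint-based posture definition, the sketch below computes the angle at a joint from three keypoints and uses it in a hypothetical condition; the joint names and the threshold are assumptions, not definitions taken from the motion analysis algorithm AL.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, e.g. the knee
    angle from hip-knee-ankle. Keypoints are (x, y) tuples."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("degenerate keypoint configuration")
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# Hypothetical posture condition: the knee is judged "extended" above 160 degrees.
def knee_extended(hip, knee, ankle, threshold_deg=160.0):
    return joint_angle(hip, knee, ankle) >= threshold_deg
```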
<<Division into items>>
The motion analysis algorithm AL prescribes one or more evaluation items for each phase. The individual evaluation items and scoring criteria are set by a trainer, a coach, or the like. When standard evaluation items and scoring criteria are known, the known evaluation items and scoring criteria may be used as they are. For example, the Barthel Index (BI) and the Functional Independence Measure (FIM) are commonly used as methods for evaluating ADL (Activities of Daily Living). By using known evaluation items and scoring criteria, the condition of the target TG can be grasped more accurately.
The motion analysis server 200 extracts posture information HPI from the frame images SFI showing the motion of the phase. The motion analysis server 200 scores the extracted posture information HPI for each evaluation item. The scoring may be performed on individual pieces of posture information HPI, or on average posture information HPI spanning a plurality of frames. As the scoring method, a threshold-based scoring method or a scoring method based on machine learning such as deep learning may be used. The scoring may be performed in real time or after the moving image has been shot.
When the analysis of all phases is complete (step SB4: Yes), the motion analysis server 200 detects the features of the motion of the target TG based on the scoring results of the phases. The motion analysis server 200 classifies the symptoms of the target TG based on the motion features. As the classification method, a threshold-based classification method or a classification method based on machine learning such as deep learning may be used. The motion analysis algorithm AL prescribes, for example, a plurality of classification items set by a trainer or a coach.
The motion analysis server 200 evaluates the series of motions of the target TG based on the scoring results of the evaluation items and the classification results of the symptoms. The motion analysis server 200 can compare the evaluation result of the target TG with the evaluation results of others (a person serving as a model, other sports club members) or with past evaluation results of the target TG, and notify the target TG of the comparison result. Examples of the comparison method include overlaying or displaying side by side the skeleton images to be compared. In this case, the sizes of the skeleton images are preferably matched.
The motion analysis server 200 generates analysis information MAI indicating the evaluation result of the series of motions and reports it to the target TG, the family FM, and the like. The analysis information MAI includes various information for supporting the target TG, such as the current status of the target TG (scoring results, symptom classification results), the transition of symptoms, advice, and a recommended training plan. The timing of the report can be set arbitrarily.
FIG. 9 is a diagram showing an example of the functional configuration related to the analysis and intervention processing.
The client terminal 100 has a processing device 130, a storage device 140, and a communication device 150. The processing device 130 has a video acquisition unit 131, a shooting condition determination unit 132, a scene extraction unit 133, and an output unit 134.
The motion analysis server 200 has a processing device 250, a storage device 290, and a communication device 260. The processing device 250 has a posture information extraction unit 214, a state machine 221, and a motion analysis unit 222. The posture information extraction unit 214 is included in the sensor data analysis unit 211. The state machine 221 and the motion analysis unit 222 are included in the evaluation unit 220.
The service provider server 500 has a processing device 510, a storage device 590, and a communication device 520. The processing device 510 has an information acquisition unit 511 and a product sales information generation unit 512. The information acquisition unit 511 acquires the analysis information MAI via the communication device 520. The product sales information generation unit 512 extracts, from the product sales database 591, information on a product group suited to the exercise status of the target TG based on the analysis information MAI acquired from the information acquisition unit 511. The product sales information generation unit 512 generates product sales information based on the extracted product group information and transmits it to the motion analysis server 200 via the communication device 520. The motion analysis server 200 generates intervention information VI using the analysis information MAI, the product sales information, and the like, and transmits it to the client terminal 100.
FIG. 10 is a diagram showing an example of the analysis information MAI.
FIG. 11 is a diagram showing an example of a notification mode of the analysis information MAI.
FIG. 12 is a diagram showing another example of a notification mode of the analysis information MAI. FIG. 12 shows an example in which the motion analysis service is applied to golf learning support.
In the example of FIG. 11, the first analysis information MAI1 is presented as information indicating a comparison with another person. However, the first analysis information MAI1 may include information indicating a comparison with a past motion of the target TG. For example, the first analysis information MAI1 can include the skeleton information SI of the current target TG and the skeleton information SI of the past target TG serving as a reference for the comparison.
FIG. 13 is a diagram showing variations of the system configuration.
The information processing system 1 has the state machine 221 and the motion analysis unit 222. The state machine 221 detects a plurality of phases included in a series of motions of the target TG based on the posture information HPI of the target TG extracted from the moving image data MD. The motion analysis unit 222 analyzes the motion of the target TG for each phase using the posture information HPI. In the information processing method of the present disclosure, the processing of the information processing system 1 is executed by a computer. The program of the present disclosure causes a computer to implement the processing of the information processing system 1.
132 Shooting condition determination unit
133 Scene extraction unit
134 Output unit
143 First analysis model
214 Posture information extraction unit
221 State machine
222 Motion analysis unit
230 Intervention information generation unit
297 Second analysis model
FI, SFI Frame image
IM Still image
LPI, HPI Posture information
MAI Analysis information
MD Moving image data
OB Object
RSI Reference skeleton information
SI Skeleton information
TG Target
VI Intervention information
Claims (30)
- 1. An information processing system comprising: a state machine that detects a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and a motion analysis unit that analyzes the motion of the target for each phase using the posture information.
- 2. The information processing system according to claim 1, further comprising: a scene extraction unit that extracts, from the moving image data, for each phase, one or more specific frame images representing a specific scene corresponding to the phase; and a posture information extraction unit that extracts posture information of the target for each frame image from the one or more specific frame images.
- 3. The information processing system according to claim 2, wherein the scene extraction unit detects a switch to the specific scene based on a posture analysis result of a group of frame images preceding the specific scene, and extracts, as the one or more specific frame images, one or more frame images that are acquired in response to the switch to the specific scene and that have a higher resolution than the group of frame images.
- 4. The information processing system according to claim 3, wherein the scene extraction unit detects the switch to the specific scene based on an action of the target when the target and a specific object are in a predetermined positional relationship, or based on a change in the positional relationship between the target and the specific object.
- 5. The information processing system according to claim 3, wherein the scene extraction unit extracts the posture information of the target using an analysis model whose posture estimation accuracy is lower than that of the analysis model used in the posture information extraction unit, and detects the switch to the specific scene based on a change in the posture of the target estimated from the extracted posture information.
- 6. The information processing system according to claim 1, further comprising a shooting condition determination unit that determines a shooting direction of the target when acquiring the moving image data, based on the type of motion targeted for motion analysis.
- 7. The information processing system according to claim 6, wherein the state machine detects the plurality of phases included in the series of motions based on the posture information acquired from a plurality of directions.
- 8. The information processing system according to claim 1, wherein the motion analysis unit scores the motion of each phase based on one or more scoring items set for each phase, and generates analysis information indicating an evaluation result of the series of motions based on the scoring results of the phases.
- 9. The information processing system according to claim 8, further comprising an output unit that pauses the movement of the target for each phase and displays the analysis information together with a still image of the target in the phase.
- 10. The information processing system according to claim 9, wherein the analysis information includes information indicating a comparison with a target motion.
- 11. The information processing system according to claim 10, wherein the analysis information includes skeleton information of the target and reference skeleton information serving as a reference for the comparison.
- 12. The information processing system according to claim 11, wherein the output unit selectively displays, as the skeleton information of the target and the reference skeleton information, skeleton information corresponding to the part of the target to be analyzed in the phase.
- 13. The information processing system according to claim 11, wherein the output unit displays the skeleton information of the target and the reference skeleton information at a timing when a difference exceeding an allowable standard occurs between the skeleton information of the target and the reference skeleton information.
- 14. The information processing system according to claim 11, wherein the output unit highlights the skeleton of the target in a portion where the skeleton information of the target and the reference skeleton information differ beyond an allowable standard.
- 15. The information processing system according to claim 10, wherein the analysis information includes information indicating a guideline for bringing the motion of the target closer to the target motion.
- 16. The information processing system according to claim 9, wherein the analysis information includes information indicating a comparison with a past motion of the target.
- 17. The information processing system according to claim 16, wherein the analysis information includes current skeleton information of the target and past skeleton information of the target serving as a reference for the comparison.
- 18. The information processing system according to claim 17, wherein the output unit displays the current skeleton information of the target and the past skeleton information of the target at a timing when a difference exceeding an allowable standard occurs between the current skeleton information of the target and reference skeleton information indicating a target motion.
- 19. The information processing system according to claim 18, wherein the output unit highlights the skeleton of the target in a portion where the current skeleton information of the target and the reference skeleton information differ beyond an allowable standard.
- 20. The information processing system according to claim 8, wherein the analysis information includes information indicating the transition of the scoring results for each scoring item of each phase from the past to the present.
- 21. The information processing system according to claim 8, further comprising an intervention information generation unit that generates intervention information for the target based on the analysis information.
- 22. The information processing system according to claim 21, wherein the intervention information includes judgment information serving as material for prompting the target to improve the motion, or a training plan for the target.
- 23. The information processing system according to claim 21, wherein the intervention information generation unit generates authentication information that certifies the current level of the target.
- 24. The information processing system according to claim 1, wherein the state machine detects the plurality of phases based on a determination method for each phase stored in an index database.
- 25. The information processing system according to claim 24, wherein the motion analysis unit analyzes the motion of the target for each phase based on scoring items and scoring criteria for each phase stored in the index database.
- 26. The information processing system according to claim 24, wherein the index database stores, for each determination item, one or more pieces of information among shooting conditions of the moving image, definitions of the phases, specific scenes to be analyzed, scoring items, and scoring criteria, as indices for motion analysis.
- 27. The information processing system according to claim 26, wherein the determination item is associated with the type of exercise targeted for motion analysis.
- 28. The information processing system according to claim 1, wherein the motion analysis unit transmits the evaluation result of the series of motions to a terminal or a server owned by an interventionist who intervenes in the target.
- 29. A computer-implemented information processing method comprising: detecting a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and analyzing the motion of the target for each phase using the posture information.
- 30. A program that causes a computer to implement: detecting a plurality of phases included in a series of motions of a target based on posture information of the target extracted from moving image data; and analyzing the motion of the target for each phase using the posture information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023549328A JPWO2023047621A1 (ja) | 2021-09-24 | 2022-02-17 | |
CN202280062841.8A CN117999578A (zh) | 2021-09-24 | 2022-02-17 | 信息处理系统、信息处理方法和程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-155854 | 2021-09-24 | ||
JP2021155854 | 2021-09-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023047621A1 true WO2023047621A1 (ja) | 2023-03-30 |
Family
ID=85720318
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/006339 WO2023047621A1 (ja) | 2021-09-24 | 2022-02-17 | 情報処理システム、情報処理方法およびプログラム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2023047621A1 (ja) |
CN (1) | CN117999578A (ja) |
WO (1) | WO2023047621A1 (ja) |
-
2022
- 2022-02-17 WO PCT/JP2022/006339 patent/WO2023047621A1/ja active Application Filing
- 2022-02-17 CN CN202280062841.8A patent/CN117999578A/zh active Pending
- 2022-02-17 JP JP2023549328A patent/JPWO2023047621A1/ja active Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4166087B2 (ja) | 2000-11-17 | 2008-10-15 | バイオトニクス インコーポレイテッド | 自動生体力学解析ならびに姿勢偏差の検出および矯正のためのシステムおよび方法 |
JP4594157B2 (ja) | 2005-04-22 | 2010-12-08 | 日本電信電話株式会社 | 運動支援システムとその利用者端末装置及び運動支援プログラム |
JP5547968B2 (ja) | 2007-02-14 | 2014-07-16 | コーニンクレッカ フィリップス エヌ ヴェ | 身体運動を指導及び監督するフィードバック装置及びその作動方法 |
JP6045139B2 (ja) | 2011-12-01 | 2016-12-14 | キヤノン株式会社 | 映像生成装置、映像生成方法及びプログラム |
JP2015150226A (ja) * | 2014-02-14 | 2015-08-24 | 日本電信電話株式会社 | 習熟度評価方法、およびプログラム |
JP6289165B2 (ja) | 2014-02-27 | 2018-03-07 | キヤノンメディカルシステムズ株式会社 | リハビリテーション支援装置 |
WO2016056449A1 (ja) * | 2014-10-10 | 2016-04-14 | 富士通株式会社 | スキル判定プログラム、スキル判定方法、スキル判定装置およびサーバ |
JP6447609B2 (ja) | 2015-10-29 | 2019-01-09 | キヤノンマーケティングジャパン株式会社 | 情報処理装置、情報処理方法、プログラム |
JP2019025348A (ja) * | 2018-10-03 | 2019-02-21 | 住友ゴム工業株式会社 | ゴルフスイングの分析システム、プログラム及び方法 |
JP2020141806A (ja) | 2019-03-05 | 2020-09-10 | 株式会社Sportip | 運動評価システム |
JP2021049319A (ja) | 2019-09-20 | 2021-04-01 | パナソニックIpマネジメント株式会社 | リハビリ動作評価方法及びリハビリ動作評価装置 |
JP2021076887A (ja) * | 2019-11-05 | 2021-05-20 | テンソル・コンサルティング株式会社 | 動作分析システム、動作分析方法、および動作分析プログラム |
JP2021124748A (ja) * | 2020-01-31 | 2021-08-30 | Kddi株式会社 | 映像変換方法、装置およびプログラム |
Non-Patent Citations (1)
Title |
---|
AYUMI MATSUMOTO, DAN MIKAMI, HARUMI KAWAMURA, AKIRA KOJIMA: "Moter learning support system by the feedback of proficiency -- Proficiency evaluation method using the variation between trials --", IEICE TECHNICAL REPORT, MVE, vol. 113, no. 470 (MVE2013-106), 27 February 2014 (2014-02-27), JP, pages 217 - 222, XP009544975 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2023047621A1 (ja) | 2023-03-30 |
CN117999578A (zh) | 2024-05-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101936532B1 (ko) | 기억 매체, 스킬 판정 방법 및 스킬 판정 장치 | |
KR100772497B1 (ko) | 골프 클리닉 시스템 및 그것의 운용방법 | |
US20170100637A1 (en) | Fitness training guidance system and method thereof | |
KR101975056B1 (ko) | 사용자 맞춤형 트레이닝 시스템 및 이의 트레이닝 서비스 제공 방법 | |
US20120231840A1 (en) | Providing information regarding sports movements | |
US11798318B2 (en) | Detection of kinetic events and mechanical variables from uncalibrated video | |
US20160372002A1 (en) | Advice generation method, advice generation program, advice generation system and advice generation device | |
US11615648B2 (en) | Practice drill-related features using quantitative, biomechanical-based analysis | |
Yasser et al. | Smart coaching: Enhancing weightlifting and preventing injuries | |
TW202402231A (zh) | 智能步態分析儀 | |
WO2023047621A1 (ja) | 情報処理システム、情報処理方法およびプログラム | |
CN116328279A (zh) | 一种基于视觉人体姿势估计的实时辅助训练方法及设备 | |
Gharasuie et al. | Performance monitoring for exercise movements using mobile cameras | |
Wessa et al. | Can pose classification be used to teach Kickboxing? | |
WO2023153453A1 (ja) | リハビリテーション支援システム、情報処理方法およびプログラム | |
Malawski et al. | Automatic analysis of techniques and body motion patterns in sport | |
Hung et al. | A HRNet-based Rehabilitation Monitoring System | |
JP2023115876A (ja) | リハビリテーション支援システム、情報処理方法およびプログラム | |
KR20240013019A (ko) | 골프 트레이닝 인터페이스 제공 장치 및 이를 이용한 골프 트레이닝 방법 | |
CN118015710A (zh) | 一种智能体育运动的识别方法和装置 | |
JP2024032585A (ja) | 運動指導システム、運動指導方法、およびプログラム | |
Madake et al. | Vision-Based Squat Correctness System | |
Ngô et al. | AI-based solution for exercise posture correction | |
WO2024127412A1 (en) | System for fitness assessment | |
CN118172833A (zh) | 一种对羽毛球运动中运动损伤进行筛查的方法、装置及设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22872380 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023549328 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280062841.8 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022872380 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022872380 Country of ref document: EP Effective date: 20240424 |