TWI823561B - Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method - Google Patents

Info

Publication number: TWI823561B
Application number: TW111134592A
Authority: TW (Taiwan)
Prior art keywords: data, training, user, coordinate system, skeleton
Other languages: Chinese (zh)
Other versions: TW202317234A
Inventors: 柯宏憲, 陳恒殷, 楊鎮在
Original assignee: 財團法人工業技術研究院
Application filed by 財團法人工業技術研究院
Priority to CN202211326934.5A (published as CN116058827A)
Priority to US17/975,628 (published as US20230140585A1)
Publication of TW202317234A
Application granted
Publication of TWI823561B

Abstract

A multiple sensor-fusing based interactive training system includes a posture sensor, a sensing module, an arithmetic module and a display module. The posture sensor is configured to sense posture data and myoelectric data related to a training motion of a user. The sensing module is configured to output limb torque data based on the posture data, and to output muscle group activation time data based on the myoelectric data. The arithmetic module is configured to convert the limb torque data and the muscle group activation time data into a torque-skeleton coordinate system and a muscle strength eigenvalue-skeleton coordinate system based on a skeleton coordinate system, perform a fusion calculation on the two, calculate evaluation data for the training motion based on the result of the fusion calculation, and determine, based on the evaluation data, that the training motion corresponds to one of a plurality of known sporting motions. The display module is configured to display the evaluation data and the sporting motion.

Description

Multi-modal perception collaborative training system and multi-modal perception collaborative training method

The present disclosure relates to a training system, and in particular to a multi-modal perception collaborative training system and a multi-modal perception collaborative training method.

More and more people now exercise regularly, and gyms of all sizes can be found almost everywhere. A gym contains a wide variety of fitness equipment, and many users start using the machines after reading only the brief instructions printed on them, without guidance from a coach. Cases of sports injuries caused by improper use of fitness equipment are therefore common.

Even a user who has received coaching may, when training alone without a coach present, suffer a sports injury because the posture is incorrect, the muscle groups being used do not match the training motion, or the muscle groups fire in the wrong order during the training motion.

In addition, when an athlete trains, a coach or bystander can only make a preliminary judgment about injury risk from the athlete's movement posture. There is no way to quantify training effectiveness into an indicator that the coach and athlete could use to discuss improvements.

The multi-modal perception collaborative training system provided by the present disclosure includes a posture sensor, a sensing module, a computing module and a display module. The posture sensor includes an inertial sensor and myoelectric sensors: the inertial sensor senses a plurality of posture data related to the user's training motion, and the myoelectric sensors sense a plurality of myoelectric data related to the user's training motion. The sensing module outputs limb torque data based on the posture data, and outputs muscle group activation time data based on the myoelectric data. The computing module converts the limb torque data and the muscle group activation time data into torque-skeleton coordinates and muscle strength eigenvalue-skeleton coordinates according to the skeleton coordinates, performs a fusion calculation on the two, calculates evaluation data for the training motion based on the result of the fusion calculation, and determines, based on the evaluation data, that the training motion corresponds to one of a plurality of known sporting motions. The display module displays the evaluation data and the known sporting motion.

The multi-modal perception collaborative training method provided by the present disclosure includes: sensing a plurality of posture data related to the user's training motion through the inertial sensor of a posture sensor, and sensing a plurality of myoelectric data related to the user's training motion through the myoelectric sensor of the posture sensor; outputting a plurality of limb torque data based on the posture data, and outputting a plurality of muscle group activation time data based on the myoelectric data; converting the limb torque data into a torque-skeleton coordinate system according to a skeleton coordinate system; converting the muscle group activation time data into a muscle strength eigenvalue-skeleton coordinate system according to the skeleton coordinate system; performing a fusion calculation on the torque-skeleton coordinate system and the muscle strength eigenvalue-skeleton coordinate system; calculating evaluation data for the training motion based on the result of the fusion calculation; determining, based on the evaluation data, that the training motion corresponds to one of a plurality of known sporting motions; and displaying the evaluation data and the known sporting motion.
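To make the claimed flow concrete, the following is a minimal, runnable sketch of how the stages might chain together in code. It is illustrative only: the synthetic signals, the identity rotation standing in for the skeleton coordinate system, the 0.5 torque scaling and the mean-based score are all placeholders, not the patent's actual computations.

```python
import numpy as np

# --- stand-ins for the two sensing streams (synthetic data) ---
rng = np.random.default_rng(0)
posture = rng.normal(size=(200, 3))          # inertial samples, N x 3
emg = np.abs(rng.normal(size=200))           # rectified EMG samples

# --- sensing module: torque data and an activation-time estimate ---
limb_torque = posture * 0.5                  # placeholder "limb torque data"
activation_idx = int(np.argmax(emg > 0.8 * emg.max()))

# --- computing module: express torque in the skeleton frame, fuse, score ---
R = np.eye(3)                                # skeleton-frame rotation (placeholder)
torque_skel = limb_torque @ R.T              # "torque-skeleton coordinate system"
strength_feat = np.full((len(torque_skel), 1), activation_idx, dtype=float)

fused = np.hstack([torque_skel, strength_feat])   # fusion input
evaluation = fused.mean(axis=0)              # toy "evaluation data"
print(evaluation)
```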

Some embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. Where the same reference numerals appear in different drawings, they denote the same or similar elements. These embodiments are only part of the present disclosure and do not cover all of its possible implementations.

FIG. 1 is an architectural diagram of a multi-modal perception collaborative training system 1 according to an embodiment of the present disclosure. Referring to FIG. 1, the multi-modal perception collaborative training system 1 includes a posture sensor 10, a sensing module 20, a computing module 30 and a display module 40. The system 1 senses data related to the user's training motion through the posture sensor 10, processes the sensed data to determine which of the sporting motions built into the system the user is currently performing, and displays in real time data the user can refer to while training. This feedback helps the user judge whether the training posture is correct, whether the main muscle groups being used match the training motion, and whether the muscle groups fire in the correct order, among other things.

The posture sensor 10 includes an inertial sensor 110, a myoelectric sensor 120a and a myoelectric sensor 120b. The inertial sensor 110 senses a plurality of posture data related to the user's training motion, and may be placed on the user's torso or limbs depending on the training motion. For example, when the user is running, the inertial sensor 110 may be placed at the user's waist, on the outside of a leg, on a shoe, and so on. Posture data related to running include cadence, stride length, vertical oscillation, body lean angle, ground contact time of each foot, and the movement trajectory of the feet; these data relate to running economy and allow the user's running posture to be examined effectively. In practice, the inertial sensor 110 is a dynamic sensor such as an accelerometer (G-sensor), an angular velocity sensor, a gyroscope or a cadence sensor, but is not limited thereto.
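The disclosure does not specify how metrics such as cadence are derived from the raw inertial samples; the sketch below assumes one common approach, counting dominant peaks in vertical acceleration as foot strikes. The sampling rate and the synthetic signal are invented for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
vertical_acc = np.sin(2 * np.pi * 2.8 * t)   # synthetic ~2.8 steps/s signal

# Each dominant peak in vertical acceleration is treated as one foot strike.
peaks, _ = find_peaks(vertical_acc, height=0.5, distance=fs * 0.25)
cadence_spm = len(peaks) / t[-1] * 60        # steps per minute
print(f"estimated cadence: {cadence_spm:.0f} steps/min")
```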

The myoelectric sensors 120a and 120b sense a plurality of myoelectric data related to the user's training motion. Depending on the training motion, they may be attached to or worn over the user's core muscle groups or the relevant muscle groups, such as the left and right thighs, the left and right calves, the left and right arms, the back muscles on both sides or the chest muscles on both sides, to collect EMG data from the muscles. In practice, the sensors that collect the myoelectric data may be contact or non-contact myoelectric sensors, which are not described further here.

Referring again to FIG. 1, the sensing module 20 is coupled to the posture sensor 10, outputs a plurality of limb torque data based on the posture data, and outputs a plurality of muscle group activation time data based on the myoelectric data. In detail, the sensing module 20 performs a spatial coordinate conversion on the posture data sensed by the inertial sensor 110 and then outputs the limb torque data. In addition, the sensing module 20 performs dynamic electromyography (EMG) processing on the myoelectric data sensed by the myoelectric sensors 120a and 120b and then outputs the muscle group activation time data.
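The dynamic EMG processing that yields the activation times is not detailed in the disclosure. A conventional stand-in is a rectify, envelope and threshold scheme, sketched below with an assumed rest period and an assumed onset threshold of five standard deviations above baseline.

```python
import numpy as np

def activation_time(emg, fs, k=5.0, win=0.05):
    """Estimate activation onset (s): first instant the smoothed EMG envelope
    exceeds the resting baseline mean by k standard deviations."""
    rectified = np.abs(emg - emg.mean())
    w = max(1, int(win * fs))
    envelope = np.convolve(rectified, np.ones(w) / w, mode="same")
    baseline = envelope[: int(0.5 * fs)]          # assume the first 0.5 s is rest
    threshold = baseline.mean() + k * baseline.std()
    return int(np.argmax(envelope > threshold)) / fs

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
emg = 0.02 * np.random.randn(t.size)                      # resting noise
emg[t > 1.2] += 0.5 * np.random.randn((t > 1.2).sum())    # burst after 1.2 s
print(f"estimated onset: {activation_time(emg, fs):.2f} s")
```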

The computing module 30 is coupled to the sensing module 20 and converts the limb torque data and the muscle group activation time data into torque-skeleton coordinates and muscle strength eigenvalue-skeleton coordinates according to the skeleton coordinate system, performs a fusion calculation on the two, calculates evaluation data for the training motion based on the result of the fusion calculation, and determines, based on the evaluation data, that the training motion corresponds to one of a plurality of known sporting motions. The details are explained below. In practice, the computing module 30 may be a central processing unit (CPU), a digital signal processor (DSP), one or more microprocessors, one or more microprocessors combined with a DSP core, a controller, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), any other kind of integrated circuit, a state machine, a processor based on the Advanced RISC Machine (ARM) architecture, or the like, but is not limited thereto.

The display module 40 is coupled to the computing module 30 and displays the evaluation data and the known sporting motion. In one embodiment, the display module 40 presents the evaluation data and the known sporting motion as text, icons or charts. In another embodiment, the display module 40 may further combine sound and video with the text, icons or charts. In practice, the display module 40 may be an electronic device with a display function such as a monitor, tablet or personal computer, or a display device on a treadmill, but is not limited thereto.

FIG. 2 is a schematic diagram of the fusion calculation performed by the computing module 30 of the multi-modal perception collaborative training system 1 according to an embodiment of the present disclosure. Referring to FIGS. 1 and 2 together, the skeleton coordinate system 31 is input into the computing module 30, and the limb torque data and the muscle group activation time data are input into the computing module 30 synchronously and successively. The computing module 30 performs a coordinate system conversion 301 on the limb torque data according to the skeleton coordinate system 31, converting them into a force-skeleton coordinate system; it then performs a conversion 302 from the force-skeleton coordinate system into a moment-skeleton coordinate system. At the same time, the computing module 30 performs the coordinate system conversion 301 on the muscle group activation time data according to the skeleton coordinate system 31, converting them into a muscle force activation time-skeleton coordinate system; it then performs a conversion 303 from the muscle force activation time-skeleton coordinate system into a muscle strength eigenvalue-skeleton coordinate system.
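The two torque-side conversions can be read as standard rigid-body operations: rotating a sensor-frame force into the skeleton frame (conversion 301), then taking the cross product with a lever arm to obtain the joint moment, tau = r x F (conversion 302). The sketch below illustrates this with invented values for the rotation matrix R and the lever arm r.

```python
import numpy as np

R = np.array([[0.0, -1.0, 0.0],     # sensor-to-skeleton rotation (assumed)
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
force_sensor = np.array([10.0, 0.0, 2.0])   # N, in the sensor frame
r = np.array([0.0, 0.30, 0.0])              # m, joint-to-application-point lever arm

force_skel = R @ force_sensor               # force-skeleton coordinate system
moment_skel = np.cross(r, force_skel)       # moment-skeleton coordinate system
print(force_skel, moment_skel)
```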

Next, the computing module 30 continuously performs a fusion calculation 304 on the moment-skeleton coordinate system and the muscle strength eigenvalue-skeleton coordinate system, calculates evaluation data for the user's training motion based on the result of the fusion calculation 304, and outputs the evaluation data to the display module 40.

In detail, the evaluation data quantifies the user's exercise effectiveness after the computing module 30 has converted and fused the limb torque data and the muscle group activation time data. In one embodiment, the computing module 30 uses K-means clustering (KMC) to perform the fusion calculation 304 on the moment-skeleton coordinate system and the muscle strength eigenvalue-skeleton coordinate system.
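Since the embodiment names K-means clustering as the fusion calculation 304, the sketch below shows one plausible reading: per-frame torque features and muscle-strength eigenvalue features are stacked into a joint feature matrix and clustered, and the cluster assignments summarize the motion. The feature dimensions, cluster count and readout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
torque_feat = rng.normal(size=(200, 3))     # moment-skeleton features per frame
strength_feat = rng.normal(size=(200, 2))   # muscle-strength eigenvalue features

fused = np.hstack([torque_feat, strength_feat])   # joint feature space
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(fused)

# One simple "evaluation" readout: how frames distribute over movement phases.
phase_histogram = np.bincount(km.labels_, minlength=4) / len(fused)
print(phase_histogram)
```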

As shown in FIG. 1, the computing module 30 determines, based on the evaluation data, that the user's training motion corresponds to one of the known sporting motions, and outputs the determined sporting motion to the display module 40. The display module 40 displays the evaluation data and the determined sporting motion, so the user can check whether the training posture is correct, whether the main muscle groups being used match the training motion, and whether the muscle groups fire in the correct order, among other things.

In one embodiment, the user may link the multi-modal perception collaborative training system 1 to social data and upload the evaluation data and the known sporting motion to a social networking or training website. On such a website, the user may interact with other users performing the same known sporting motion by comparing evaluation data, or hold online discussions with a remote coach based on the uploaded evaluation data.

In one embodiment, if the training motion the user is performing matches the known sporting motion determined by the computing module, the user's posture data and myoelectric data exhibit what that known sporting motion should exhibit. The user can then confirm that the training posture is correct, that the main muscle groups being used match the training motion, and that the muscle groups fire in the correct order.

In another embodiment, if the training motion the user is performing differs from the known sporting motion determined by the computing module, the user's posture data and myoelectric data may not fully exhibit what that known sporting motion should exhibit. The user can then further check or adjust the training posture, the main muscle groups being used, or the firing order of the muscle groups. Alternatively, the user may check whether the inertial sensor 110 and the myoelectric sensors 120a and 120b are placed in the correct positions.

FIG. 3 is a block diagram of the multi-modal perception collaborative training system 1 according to an embodiment of the present disclosure. Referring to FIG. 3, when the training motion performed by the user requires no training equipment, the posture sensor 10 includes a body inertial sensor 110a, a myoelectric sensor 120a and a myoelectric sensor 120b. The body inertial sensor 110a corresponds to the inertial sensor 110 placed on the user's torso or limbs described above, and the myoelectric sensors 120a and 120b correspond to the myoelectric sensors attached to or worn over the user's core or relevant muscle groups described above; they are not described again here.

In one embodiment, when the training motion requires no training equipment, the body inertial sensor 110a outputs a plurality of posture data to the sensing module 20. The sensing module 20 performs a spatial coordinate conversion 21 on the posture data and outputs a plurality of limb torque data. In addition, the sensing module 20 performs dynamic EMG processing 22 on the myoelectric data sensed by the myoelectric sensors 120a and 120b and outputs a plurality of muscle group activation time data.

Since the posture sensor 10 includes only the body inertial sensor 110a, the skeleton coordinate system input to the computing module 30 is the human skeleton coordinate system 31a. The limb torque data and the muscle group activation time data are input to the computing module 30 synchronously and successively; the computing module 30 converts and fuses them according to the human skeleton coordinate system 31a, calculates the evaluation data, and outputs the evaluation data to the display module 40.

In one embodiment, when only the body inertial sensor 110a is placed on the user's body, the human skeleton coordinate system 31a corresponds to the user's skeleton, including the bones and muscles of the human body. The user's skeleton can be acquired through the photography device 32.

FIG. 4 is a block diagram of the multi-modal perception collaborative training system 1 according to another embodiment of the present disclosure. Referring to FIG. 4, when the training motion requires training equipment, the posture sensor 10 includes a body inertial sensor 110a, an equipment inertial sensor 110b, a myoelectric sensor 120a and a myoelectric sensor 120b. Depending on the training motion, the equipment inertial sensor 110b is placed on the training equipment used, such as a bat or a club, to sense a plurality of posture data related to the user's training motion. The body inertial sensor 110a and the myoelectric sensors 120a and 120b are as described above and are not described again here.

In one embodiment, when the training motion requires training equipment, both the body inertial sensor 110a and the equipment inertial sensor 110b output posture data to the sensing module 20. The sensing module 20 performs the spatial coordinate conversion 21 on the posture data sensed by both sensors and outputs a plurality of limb torque data. In addition, the sensing module 20 performs dynamic EMG processing 22 on the myoelectric data sensed by the myoelectric sensors 120a and 120b and outputs a plurality of muscle group activation time data.

Since the posture sensor 10 includes both the body inertial sensor 110a and the equipment inertial sensor 110b, the skeleton coordinate system input to the computing module 30 is the human/equipment skeleton coordinate system 31b. The limb torque data and the muscle group activation time data are input to the computing module 30 synchronously and successively; the computing module 30 converts and fuses them according to the human/equipment skeleton coordinate system 31b, calculates the evaluation data, and outputs the evaluation data to the display module 40.

In one embodiment, when the body inertial sensor 110a is placed on the user's body and the equipment inertial sensor 110b is placed on the training equipment, the human/equipment skeleton coordinate system 31b corresponds not only to the user's body skeleton, including the bones and muscles of the human body, but also to the equipment skeleton of the training equipment. Both the user's body skeleton and the equipment skeleton can be acquired through the photography device 32.

FIG. 5 is a block diagram of the multi-modal perception collaborative training system 1 updating training data and error data according to an embodiment of the present disclosure. Referring to FIGS. 1 and 5 together, the posture sensor 10 includes at least two sensors placed on the user, each of which may be an inertial sensor, a myoelectric sensor, or a combination thereof. As shown in FIG. 5, the posture sensor 10 includes sensors 151, 152, 153 and 154, each of which may individually be an inertial sensor or a myoelectric sensor. In other words, the sensors 151 to 154 may all be inertial sensors, may all be myoelectric sensors, or some may be inertial sensors while the rest are myoelectric sensors.

Continuing with FIG. 5, the posture sensor 10 outputs a plurality of sensing data to the sensing module 20, where each sensing datum corresponds to whether the sensor that produced it is an inertial sensor or a myoelectric sensor. The sensing module 20 outputs a plurality of limb torque data and/or a plurality of muscle group activation time data based on the sensing data, and the computing module 30 converts and fuses the limb torque data and the muscle group activation time data.

In one embodiment, the multi-modal perception collaborative training system 1 further includes a motion simulation model module 50, a training data database 60 and a motion model database 70. The motion simulation model module 50 is coupled to the computing module 30. The training data database 60 is coupled to the computing module 30 and the motion simulation model module 50, and contains training data corresponding to a plurality of known motion models as well as error data, where the error data is used to determine whether the user's training motion is an erroneous or dangerous motion. The motion model database 70 is coupled to the motion simulation model module 50 and contains a plurality of known motion models, which are pre-established based on four or more inertial or myoelectric sensors. In practice, the motion simulation model module 50 may be a microprocessor or an embedded controller, and the training data database 60 and the motion model database 70 may be storage media such as memory or hard disks; the present disclosure is not limited thereto.

The computing module 30 determines the user's exercise context, such as running, aerobic exercise or core training without equipment, based on the number of posture sensors the user is using. After determining the exercise context, the computing module 30 performs motion model matching with the motion simulation model module 50 based on the context; the motion simulation model module 50 selects one of the known motion models from the motion model database 70 based on the context, the purpose being to find the motion model corresponding to the training motion the user is performing.

After the motion simulation model module 50 selects from the motion model database 70 the known motion model corresponding to the training motion the user is performing, it reads from the training data database 60 the training data corresponding to the selected model and returns them to the computing module 30. The computing module 30 compares the evaluation data with the training data corresponding to the selected known motion model to compute the similarity between the user's training motion and the selected model.

When the similarity is greater than or equal to a similarity threshold (for example, 0.5), the computing module 30 determines that the user's training motion conforms to the selected known motion model, and stores the evaluation data in the training data database 60 to update the training data corresponding to the selected model, building up an AI model. At the same time, the computing module 30 outputs the evaluation data and the known sporting motion corresponding to the selected model to the display module 40, which displays them.

Conversely, when the similarity is less than the similarity threshold (for example, 0.5), the computing module 30 determines that the user's training motion conforms to none of the known motion models in the motion model database 70. The computing module 30 then stores the evaluation data in the training data database 60 to update the error data, and outputs the evaluation data together with an erroneous-motion message to the display module 40, which displays them. The erroneous-motion message reminds the user to further check or adjust the training posture, the main muscle groups being used, or the firing order of the muscle groups, or to check whether the posture sensor 10 is placed in the correct position.
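The disclosure states only that the evaluation data is compared against the stored training data and the result checked against a threshold of, for example, 0.5; the similarity measure itself is not specified. The sketch below assumes cosine similarity as one plausible choice and mirrors the two branches described above; the vectors are invented values.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

evaluation = np.array([0.8, 0.1, 0.3, 0.6])       # user's evaluation data
model_training = np.array([0.7, 0.2, 0.4, 0.5])   # stored data for the model

sim = cosine_similarity(evaluation, model_training)
SIM_THRESHOLD = 0.5
if sim >= SIM_THRESHOLD:
    print(f"matches selected model (similarity {sim:.2f}); update training data")
else:
    print(f"no known model matched (similarity {sim:.2f}); store as error data")
```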

FIG. 6 is a schematic diagram of using the inertial sensor 110 (for example, an accelerometer) to determine displacement in the multi-modal perception collaborative training system according to an embodiment of the present disclosure; FIG. 7 is a block diagram of the same. Referring first to FIG. 6, the inertial sensor 110 is fixed on a sensing carrier 16, which is strapped or worn on the user's body, limbs or clothing. During exercise, the inertial sensor 110 may shift because the user's body shakes or the limbs swing. Once a relative offset d arises between the inertial sensor 110 and the user's body or limbs, the accuracy of the posture data is affected.

Referring to FIGS. 6 and 7 together, the inertial sensor 110 has an offset sensing unit 111. When the sensing carrier 16 holding the inertial sensor 110 sits flush against the user's body or limbs, the sensing data measured by the inertial sensor 110 is the acceleration A1, which the offset sensing unit 111 sets as a standard reference value. When one side of the sensing carrier 16 shifts by a relative offset d from the user's body or limbs, the measured sensing data is the acceleration A2. Once A2 deviates from A1 by an error e, the offset sensing unit 111 senses the offset data corresponding to A2 and outputs it to the sensing module 20.

The sensing module 20 outputs limb torque data to the computing module 30 based on the posture data and the offset data. The computing module 30 compares the evaluation data with the training data corresponding to the selected known motion model and determines whether the relative offset d between the sensing carrier 16 and the user's body exceeds an offset threshold. When d does not exceed the threshold, the carrier's shift does not yet affect the accuracy of the posture data, and the multi-modal perception collaborative training system 1 keeps operating. Conversely, when d exceeds the threshold, the shift has affected the accuracy of the posture data; the computing module 30 then outputs an abnormal signal to the display module 40, which displays a sensor-placement-abnormal message.
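A minimal sketch of this drift check follows, assuming the offset sensing unit 111 keeps the flush-mounted reference acceleration A1 and flags the placement once the error between the current reading A2 and A1 exceeds a threshold; the readings, units and threshold value are illustrative.

```python
import numpy as np

A1 = np.array([0.1, 9.8, 0.2])         # reference acceleration (carrier flush)
OFFSET_THRESHOLD = 1.5                 # allowed error magnitude (assumed units)

def check_offset(A2):
    e = np.linalg.norm(A2 - A1)        # error e between current and reference
    if e > OFFSET_THRESHOLD:
        return "sensor placement abnormal"   # display module warns the user
    return "ok"

print(check_offset(np.array([0.2, 9.7, 0.3])))   # small drift -> ok
print(check_offset(np.array([2.5, 8.0, 1.9])))   # large drift -> warning
```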

When the user is using exercise equipment, force sensors may additionally be placed on the equipment to sense the user's force output while performing the training motion and to detect whether the force output of the left and right sides of the user's body is balanced. Once the two sides become unbalanced, the multi-modal perception collaborative training system can issue a warning to remind the user of the imbalance.

FIG. 8A is a schematic diagram of using the posture sensor and the force sensor 80 to calculate left/right balance in the multi-modal perception collaborative training system according to an embodiment of the present disclosure. As shown in FIG. 8A, the user must push the plate of the fitness equipment with both feet at the same time. The system further includes a force sensor 80, placed on the training equipment and coupled to the sensing module, to sense a plurality of mechanical data (for example, pressure sensing signals) corresponding to the user's training motion. When the user pushes the plate with both feet, the force sensor 80 senses mechanical data that may correspond to the total pressure applied by both feet on the plate, or separately to the pressure applied by the left foot and by the right foot, depending on the placement or number of force sensors 80.

Continuing with FIG. 8A, when the user is using exercise equipment, the myoelectric sensors 120a and 120b are placed on the left and right halves of the user's body respectively, and the equipment inertial sensor 110b is placed on the exercise equipment. The myoelectric sensors 120a and 120b sense a plurality of left-side myoelectric data (for example, left thigh muscle signals) and right-side myoelectric data (for example, right thigh muscle signals) corresponding to the user's training motion, and the equipment inertial sensor 110b senses a plurality of posture data (for example, acceleration signals) related to the training motion.

FIG. 8B is another schematic diagram of using the posture sensor and the force sensor 80 to calculate left/right balance in the multi-modal perception collaborative training system according to an embodiment of the present disclosure. As shown in FIG. 8B, when running, the feet strike the ground alternately, with only one foot on the ground at a time. The balance of force output between the two legs therefore affects running effectiveness and even running safety, so left/right balance must be detected for each leg separately.

The myoelectric sensors 120a and 120c are placed on the user's left thigh and left calf respectively, the myoelectric sensors 120b and 120d on the right thigh and right calf respectively, and the body inertial sensor 110a on the back of the user's waist. The myoelectric sensors 120a and 120c sense a plurality of left-side myoelectric data (for example, left thigh and calf muscle signals), the myoelectric sensors 120b and 120d sense a plurality of right-side myoelectric data (for example, right thigh and calf muscle signals), and the body inertial sensor 110a senses a plurality of posture data related to the training motion (for example, changes in posture amplitude such as stride length, cadence, vertical oscillation, body lean angle, ground contact time of each foot, and the movement trajectories of the feet).

As shown in FIG. 8B, when the user uses the training equipment of FIG. 8B (a treadmill), the feet step on the treadmill belt alternately. The force sensor 80 is placed on the treadmill belt and coupled to the sensing module, to sense a plurality of mechanical data (for example, pressure sensing signals) corresponding to the user's training motion. Since the runner never has both feet on the belt at the same time, the force sensor 80 can sense mechanical data (for example, cadence pressure sensing signals) that correspond separately to the user's left foot and right foot.

FIG. 9 is a block diagram of using the posture sensor and the force sensor 80 to calculate left/right balance in the multi-modal perception collaborative training system according to an embodiment of the present disclosure. The sensing module 20 outputs a plurality of pressure data based on the mechanical data and a plurality of limb torque data based on the posture data. At the same time, it outputs a plurality of left-side muscle group activation time data and a plurality of right-side muscle group activation time data from the left-side and right-side myoelectric data respectively. The computing module 30 calculates a left-side output value from the pressure data, the limb torque data and the left-side muscle group activation time data, and a right-side output value from the pressure data, the limb torque data and the right-side muscle group activation time data. The computing module 30 then performs another fusion calculation 361 on the left-side and right-side output data, and calculates from its result the left/right balance corresponding to the user's training motion.

When the left/right balance is less than or equal to a balance threshold, the computing module 30 determines that the force output of the left and right halves of the user's body is balanced, and continues calculating the left/right balance from the results of the further fusion calculations. Conversely, when the left/right balance exceeds the balance threshold, the computing module 30 determines that the two halves are out of balance, and the display module 40 displays the evaluation data together with an imbalance message to remind the user that the force output of the two sides of the body is unbalanced.
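The per-side output values and the balance index are not given explicit formulas in the disclosure. The sketch below assumes a weighted combination of pressure, torque and activation delay per side, and a normalized asymmetry index compared against a 10% threshold; all weights and numbers are invented for illustration.

```python
def side_output(pressure, torque, activation_delay, w=(0.5, 0.4, 0.1)):
    # Weighted combination of the three per-side measurements (assumed form).
    return w[0] * pressure + w[1] * torque - w[2] * activation_delay

left = side_output(pressure=420.0, torque=55.0, activation_delay=0.12)
right = side_output(pressure=385.0, torque=49.0, activation_delay=0.15)

# Asymmetry index: 0 = perfectly balanced, grows with asymmetry.
balance = abs(left - right) / max(left, right)
BALANCE_THRESHOLD = 0.10
status = "balanced" if balance <= BALANCE_THRESHOLD else "imbalanced: warn user"
print(f"balance index {balance:.2f} -> {status}")
```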

As shown in FIG. 1, in one embodiment the multi-modal perception collaborative training system 1 further includes a physiological information sensor 90, coupled to the sensing module 20, to sense a plurality of the user's physiological data during the training motion, such as body temperature, heart rate, respiration, skin moisture and sweat, and to transmit these physiological data to the computing module 30. Based on these data, the computing module 30 can monitor the user's physiological condition during training and decide whether to issue a warning signal reminding the user to stop. In practice, the physiological information sensor 90 may capture the above physiological values by contact or without contact; the present disclosure is not limited thereto.
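The warning decision can be as simple as fixed physiological limits; the sketch below assumes illustrative heart-rate and body-temperature thresholds, which the disclosure does not specify.

```python
# A toy guard over the physiological data stream; the 185 bpm and 39.0 C
# limits are assumed values, not from the patent.
def should_stop(heart_rate_bpm, body_temp_c):
    """Return True if the user should be warned to stop training."""
    return heart_rate_bpm > 185 or body_temp_c > 39.0

print(should_stop(170, 37.2))   # False -> keep training
print(should_stop(192, 37.2))   # True  -> issue warning signal
```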

FIG. 10 is a schematic diagram of motion recognition using the body inertial sensor 110a and the myoelectric sensor 120a in the multi-modal perception collaborative training system according to an embodiment of the present disclosure. Referring to FIG. 10, in one embodiment the user can use the system for motion recognition: the body inertial sensor 110a is arranged on a glove, and the myoelectric sensor 120a on a wrist strap.

When the user wants to throw a dart, the glove carrying the body inertial sensor 110a is put on the throwing hand, and the wrist strap carrying the myoelectric sensor 120a is fastened to the throwing wrist. The body inertial sensor 110a senses a plurality of posture data related to the hand motion during the throw, and the myoelectric sensor 120a senses a plurality of related myoelectric data.

In addition, the photography device 32 captures images of the user's posture while throwing. When throwing a dart, the hand first rises and draws back, then throws the dart forward, so the posture images contain the movement trajectory of the user's hand. The user's body may also use rotation to power the throw, so the posture images contain the rotation trajectory of the user's body as well.

Next, referring to FIGS. 3 and 10 together, the body inertial sensor 110a outputs posture data to the sensing module 20, which performs the spatial coordinate conversion 21 on them and outputs a plurality of limb torque data. In addition, the sensing module 20 performs dynamic EMG processing 22 on the myoelectric data sensed by the myoelectric sensor 120a and outputs a plurality of muscle group activation time data.

Since the posture sensor 10 includes both the body inertial sensor 110a and the myoelectric sensor 120a, the skeleton coordinate system input to the computing module 30 is the human skeleton coordinate system 31a. The limb torque data and the muscle group activation time data are input to the computing module 30 synchronously and successively; the computing module 30 converts and fuses them according to the human skeleton coordinate system 31a, calculates the evaluation data, and outputs the evaluation data to the display module 40.

Furthermore, in one embodiment the computing module 30 can output the posture images of the throw to the display module 40 (for example, a mobile device), on which the user can watch them, even in slow motion, to examine the hand's movement trajectory and the body's rotation trajectory. The computing module 30 can also be combined with an AI motion analysis module to analyze the posture images of the throw and, together with the evaluation data, quantify the exercise effectiveness and offer the user suggestions for adjusting the hand motion and body rotation.

FIG. 11 is a flowchart of a multi-modal perception collaborative training method 2 according to an embodiment of the present disclosure. As shown in FIG. 11, the multi-modal perception collaborative training method 2 includes steps S310 to S380.

In step S310, a plurality of posture data related to the user's training motion are sensed through at least one inertial sensor of a plurality of posture sensors, and a plurality of myoelectric data related to the training motion are sensed through at least one myoelectric sensor of the posture sensors. In step S320, a plurality of limb torque data are output based on the posture data, and a plurality of muscle group activation time data are output based on the myoelectric data.

In step S330, the limb torque data are converted into a torque-skeleton coordinate system according to the skeleton coordinate system. In step S340, the muscle group activation time data are converted into a muscle strength eigenvalue-skeleton coordinate system according to the skeleton coordinate system. The present disclosure does not limit the execution order of steps S330 and S340; they may also be performed simultaneously. In step S350, a fusion calculation is performed on the torque-skeleton coordinate system and the muscle strength eigenvalue-skeleton coordinate system, using K-means clustering (KMC). In step S360, evaluation data are calculated for the training motion based on the result of the fusion calculation.

In step S370, it is determined based on the evaluation data that the training motion corresponds to one of a plurality of known sporting motions. In step S380, the evaluation data and the known sporting motion are displayed.

FIG. 12 is a flowchart of calculating the evaluation data in the multi-modal perception collaborative training method according to an embodiment of the present disclosure. As shown in FIG. 12, in step S321, a coordinate system conversion is performed on the limb torque data and the muscle group activation time data according to the skeleton coordinate system. In step S330, the limb torque data are converted into a force-skeleton coordinate system according to the skeleton coordinate system; then, in step S331, the force-skeleton coordinate system is converted into a moment-skeleton coordinate system. In step S340, the muscle group activation time data are converted into a muscle force activation time-skeleton coordinate system according to the skeleton coordinate system; then, in step S341, the muscle force activation time-skeleton coordinate system is converted into a muscle strength eigenvalue-skeleton coordinate system. Note that steps S330 to S331 and steps S340 to S341 are two separate flows, which may be performed simultaneously or separately.

Once the conversions according to the skeleton coordinate system are complete, a fusion calculation is performed on the moment-skeleton coordinate system and the muscle strength eigenvalue-skeleton coordinate system in step S350. Then, in step S360, evaluation data are calculated for the training motion based on the result of the fusion calculation.

In an embodiment, when the inertial sensor is disposed on the body of the user, the skeleton coordinate system corresponds to the skeleton of the user, and the skeleton of the user can be obtained through a camera device. In an embodiment, when the inertial sensor is disposed on training equipment, the skeleton coordinate system further corresponds to the skeleton of the training equipment, which can likewise be obtained through the camera device.

In an embodiment, the multi-modal perception collaborative training method further includes determining a movement context of the user based on the number of posture sensors used by the user, and selecting one of a plurality of known motion models based on the movement context.
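A minimal sketch of this selection step follows; the mapping from sensor count to movement context and the model names are hypothetical, since the disclosure does not enumerate them.

    # Hypothetical contexts and model names; not taken from the disclosure.
    MOTION_MODELS = {
        "machine": ["lat_pulldown", "leg_press"],
        "free_weight": ["bench_press", "deadlift", "squat"],
    }

    def select_context(sensor_count: int) -> str:
        # Assumed rule: few worn sensors imply machine-guided training,
        # more sensors imply free-weight training.
        return "machine" if sensor_count <= 2 else "free_weight"

    context = select_context(sensor_count=4)
    print(context, MOTION_MODELS[context])   # candidate known motion models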

In an embodiment, the multi-modal perception collaborative training method further includes comparing the evaluation data with the training data corresponding to the selected known motion model to calculate a similarity between the training motion of the user and the selected known motion model, as sketched below. When the similarity is greater than or equal to a similarity threshold, the training motion of the user is determined to conform to the selected known motion model, the training data corresponding to the selected known motion model are updated with the evaluation data, and the evaluation data and the known sport motion are displayed. Conversely, when the similarity is less than the similarity threshold, the training motion of the user is determined not to conform to any of the known motion models, the evaluation data are recorded as error data, and the evaluation data and an incorrect-motion message are displayed.
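The comparison can be pictured as a vector-similarity test. In the sketch below, cosine similarity and the 0.85 threshold are illustrative assumptions; the disclosure does not fix a similarity measure or a threshold value.

    import numpy as np

    SIMILARITY_THRESHOLD = 0.85   # assumed value

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def judge(evaluation: np.ndarray, reference: np.ndarray) -> str:
        sim = cosine_similarity(evaluation, reference)
        if sim >= SIMILARITY_THRESHOLD:
            # Conforms to the selected known motion model: the evaluation
            # data would be fed back to update that model's training data.
            return f"match ({sim:.2f}): update training data"
        # Below threshold: record as error data and warn the user.
        return f"no match ({sim:.2f}): record error data, show error message"

    print(judge(np.array([0.9, 0.7, 0.4]), np.array([1.0, 0.8, 0.5])))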

In an embodiment, the inertial sensor has an offset sensing unit and is disposed on the body of the user, and the multi-modal perception collaborative training method further includes sensing a plurality of offset data when a relative offset is generated between the inertial sensor and the body of the user, and outputting the limb torque data according to the posture data and the offset data. The evaluation data are compared with the training data corresponding to the selected known motion model to determine whether the relative offset between the inertial sensor and the body of the user exceeds an offset threshold. When the relative offset is greater than the offset threshold, a sensor placement abnormality message is displayed.
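A minimal sketch of the offset check follows; the millimetre units and the 15 mm threshold are assumptions made for illustration only.

    import numpy as np

    OFFSET_THRESHOLD_MM = 15.0   # assumed value

    def sensor_slipped(offsets_mm: np.ndarray) -> bool:
        # True when the worn inertial sensor has shifted far enough from its
        # original position that a placement-abnormality message is shown.
        return float(np.max(np.abs(offsets_mm))) > OFFSET_THRESHOLD_MM

    samples = np.array([2.1, 3.4, 9.8, 18.2, 16.5])   # offsets over one rep
    if sensor_slipped(samples):
        print("sensor placement abnormal: please re-attach the sensor")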

In an embodiment, the multi-modal perception collaborative training method further includes sensing a plurality of mechanical data corresponding to the training motion of the user through force sensors disposed on the training equipment, sensing the posture data corresponding to the training motion of the user through at least one inertial sensor disposed on the training equipment, and sensing a plurality of left-half myoelectric data and a plurality of right-half myoelectric data corresponding to the training motion of the user through at least two myoelectric sensors respectively disposed on the left half and the right half of the body of the user. A plurality of pressure data are output according to the mechanical data, the limb torque data are output according to the posture data, and a plurality of left-half muscle group activation time data and a plurality of right-half muscle group activation time data are output according to the left-half myoelectric data and the right-half myoelectric data, respectively. A left-half output value is calculated according to the pressure data, the limb torque data, and the left-half muscle group activation time data, and a right-half output value is calculated according to the pressure data, the limb torque data, and the right-half muscle group activation time data. Another fusion calculation is performed on the left-half output data and the right-half output data, and a left-right balance degree corresponding to the training motion of the user is calculated according to the result of the other fusion calculation, as sketched below.
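As a rough sketch under stated assumptions, the per-side output values can be fused and compared as follows; the weighting formula, the units, and the 10% balance threshold are illustrative, since the disclosure does not give a formula.

    import numpy as np

    def side_output(pressure: np.ndarray, torque: np.ndarray,
                    activation_time: np.ndarray) -> float:
        # Assumed fusion: weight pressure and torque by how long that side's
        # muscle groups were active during the repetition.
        return float(np.sum((pressure + torque) * activation_time))

    left = side_output(np.array([220.0, 180.0]),    # N per force sensor
                       np.array([14.0, 9.0]),       # N*m per joint
                       np.array([0.8, 0.6]))        # s of activation
    right = side_output(np.array([250.0, 205.0]),
                        np.array([16.0, 11.0]),
                        np.array([0.9, 0.7]))

    # Assumed balance degree: relative left/right difference, 0 = balanced.
    balance = abs(left - right) / max(left, right)
    BALANCE_THRESHOLD = 0.10
    print(f"balance {balance:.2f}:",
          "balanced" if balance <= BALANCE_THRESHOLD else "unbalanced, warn user")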

In an embodiment, when the left-right balance degree is less than or equal to a balance threshold, the left half and the right half of the body of the user are determined to be exerting force in balance, and the left-right balance degree corresponding to the training motion of the user continues to be calculated according to the result of the other fusion calculation. Conversely, when the left-right balance degree is greater than the balance threshold, the left half and the right half of the body of the user are determined to be exerting force out of balance, and the evaluation data and an imbalance message are displayed.

In an embodiment, the multi-modal perception collaborative training method further includes sensing a plurality of physiological data of the user, such as body temperature, heart rate, respiration, skin moisture, and sweat while the training motion is performed, monitoring the physiological condition of the user during the training motion based on the physiological data, and determining whether to issue a warning signal reminding the user to stop the training motion.
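A minimal sketch of such a monitor follows; the physiological limits shown are hypothetical and would in practice come from the user's own profile rather than fixed constants.

    # Hypothetical safety limits; the disclosure does not fix any values.
    LIMITS = {"heart_rate_bpm": 180, "body_temp_c": 38.5}

    def should_stop(reading: dict) -> bool:
        # Issue a warning when any monitored quantity exceeds its limit.
        return any(reading.get(key, 0) > limit for key, limit in LIMITS.items())

    sample = {"heart_rate_bpm": 186, "body_temp_c": 37.4}
    if should_stop(sample):
        print("warning: stop the training motion")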

In summary, the multi-modal perception collaborative training system and multi-modal perception collaborative training method provided by the present disclosure allow a gym user, even without a coach's guidance, to know whether the posture used on the fitness equipment is correct, whether the muscle groups engaged match the training motion, and whether the muscle groups fire in the correct order during the training motion, thereby avoiding sports injuries. The disclosure can also quantify training effectiveness into an indicator that coaches and athletes can use to discuss ways to improve.

1: multi-modal perception collaborative training system
10: posture sensor
110: inertial sensor
110a: body inertial sensor
110b: equipment inertial sensor
111: offset sensing unit
120a, 120b, 120c, 120d: myoelectric sensor
151, 152, 153, 154: sensor
16: sensing carrier
2: multi-modal perception collaborative training method
20: sensing module
21: spatial coordinate conversion
22: dynamic EMG processing
30: computing module
301: coordinate system conversion
302, 303: conversion
304: fusion calculation
31: skeleton coordinate system
31a: human-body skeleton coordinate system
31b: human-body/equipment skeleton coordinate system
32: camera device
361: another fusion calculation
40: display module
50: motion simulation model module
60: training data database
70: motion model database
80: force sensor
90: physiological information sensor
A1, A2: acceleration
d: relative offset
e: error
S310~S380, S321, S330~S331, S340~S341: steps

FIG. 1 is an architectural diagram of a multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 2 is a schematic diagram of the computing module of the multi-modal perception collaborative training system performing a fusion calculation according to an embodiment of the present disclosure.
FIG. 3 is a block diagram of a multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 4 is a block diagram of a multi-modal perception collaborative training system according to another embodiment of the present disclosure.
FIG. 5 is a block diagram of the multi-modal perception collaborative training system updating training data and error data according to an embodiment of the present disclosure.
FIG. 6 is a schematic diagram of using an inertial sensor to determine a degree of offset in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 7 is a block diagram of using an inertial sensor to determine a degree of offset in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 8A is a schematic diagram of using posture sensors and force sensors to calculate a left-right balance degree in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 8B is another schematic diagram of using posture sensors and force sensors to calculate a left-right balance degree in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 9 is a block diagram of using posture sensors and force sensors to calculate a left-right balance degree in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 10 is a schematic diagram of motion recognition using body inertial sensors and myoelectric sensors in the multi-modal perception collaborative training system according to an embodiment of the present disclosure.
FIG. 11 is a flowchart of a multi-modal perception collaborative training method according to an embodiment of the present disclosure.
FIG. 12 is a flowchart of calculating evaluation data in the multi-modal perception collaborative training method according to an embodiment of the present disclosure.

Claims (20)

1. A multi-modal perception collaborative training system, comprising: a plurality of posture sensors, comprising: at least one inertial sensor for sensing a plurality of posture data related to a training motion of a user; and at least one myoelectric sensor for sensing a plurality of myoelectric data related to the training motion of the user; a sensing module, coupled to the posture sensors, for outputting a plurality of limb torque data according to the posture data and outputting a plurality of muscle group activation time data according to the myoelectric data; a computing module, coupled to the sensing module, configured to perform the following steps: converting the limb torque data into a moment-skeleton coordinate system according to a skeleton coordinate system; converting the muscle group activation time data into a muscle-strength eigenvalue-skeleton coordinate system according to the skeleton coordinate system; performing a fusion calculation on the moment-skeleton coordinate system and the muscle-strength eigenvalue-skeleton coordinate system; calculating evaluation data for the training motion according to a result of the fusion calculation; and determining, according to the evaluation data, that the training motion corresponds to one of a plurality of known sport motions; and a display module, coupled to the computing module, for displaying the evaluation data and the known sport motion.

2. The multi-modal perception collaborative training system as claimed in claim 1, wherein the computing module is further configured to perform the following steps: converting the limb torque data into a force-skeleton coordinate system according to the skeleton coordinate system, and then converting the force-skeleton coordinate system into the moment-skeleton coordinate system; and converting the muscle group activation time data into a muscle-strength activation time-skeleton coordinate system according to the skeleton coordinate system, and then converting the muscle-strength activation time-skeleton coordinate system into the muscle-strength eigenvalue-skeleton coordinate system.

3. The multi-modal perception collaborative training system as claimed in claim 1, wherein when the at least one inertial sensor is disposed on the body of the user, the skeleton coordinate system corresponds to a body skeleton of the user, and the body skeleton of the user is obtainable through a camera device.

4. The multi-modal perception collaborative training system as claimed in claim 3, wherein when the at least one inertial sensor is disposed on training equipment, the skeleton coordinate system further corresponds to an equipment skeleton of the training equipment, and the equipment skeleton of the training equipment is obtainable through the camera device.

5. The multi-modal perception collaborative training system as claimed in claim 1, further comprising: a motion model database comprising a plurality of known motion models; and a motion simulation model module coupled to the motion model database and the computing module; wherein the computing module determines a movement context of the user based on the number of the posture sensors used by the user; and wherein the computing module is paired with the motion simulation model module based on the movement context, and the motion simulation model module selects one of the known motion models from the motion model database based on the movement context.

6. The multi-modal perception collaborative training system as claimed in claim 5, further comprising: a training data database coupled to the computing module, the training data database comprising training data and error data corresponding to each of the known motion models; wherein the computing module compares the evaluation data with the training data corresponding to the selected known motion model to calculate a similarity between the training motion of the user and the selected known motion model.

7. The multi-modal perception collaborative training system as claimed in claim 6, wherein when the similarity is greater than or equal to a similarity threshold, the computing module determines that the training motion of the user conforms to the selected known motion model and stores the evaluation data in the training data database to update the training data corresponding to the selected known motion model, and the display module displays the evaluation data and the known sport motion; and when the similarity is less than the similarity threshold, the computing module determines that the training motion of the user does not conform to the known motion models and stores the evaluation data in the training data database to update the error data, and the display module displays the evaluation data and an incorrect-motion message.

8. The multi-modal perception collaborative training system as claimed in claim 6, wherein the at least one inertial sensor has an offset sensing unit and is disposed on the body of the user, so as to sense a plurality of offset data when a relative offset is generated between the at least one inertial sensor and the body of the user, and the sensing module outputs the limb torque data according to the posture data and the offset data; wherein the computing module compares the evaluation data with the training data corresponding to the selected known motion model to determine whether the relative offset between the at least one inertial sensor and the body of the user exceeds an offset threshold; and wherein when the relative offset is greater than the offset threshold, the display module displays a sensor placement abnormality message.

9. The multi-modal perception collaborative training system as claimed in claim 1, further comprising: a force sensor coupled to the sensing module and disposed on training equipment for sensing a plurality of mechanical data corresponding to the training motion of the user; wherein the at least one inertial sensor is disposed on the training equipment for sensing the posture data corresponding to the training motion of the user; wherein the at least one myoelectric sensor is disposed on a left half and a right half of the body of the user, respectively, for sensing a plurality of left-half myoelectric data and a plurality of right-half myoelectric data corresponding to the training motion of the user; wherein the sensing module outputs a plurality of pressure data according to the mechanical data and outputs the limb torque data according to the posture data; wherein the sensing module outputs a plurality of left-half muscle group activation time data and a plurality of right-half muscle group activation time data according to the left-half myoelectric data and the right-half myoelectric data, respectively; and wherein the computing module is further configured to perform the following steps: calculating a left-half output value according to the pressure data, the limb torque data, and the left-half muscle group activation time data; calculating a right-half output value according to the pressure data, the limb torque data, and the right-half muscle group activation time data; performing another fusion calculation on the left-half output data and the right-half output data; and calculating a left-right balance degree corresponding to the training motion of the user according to a result of the other fusion calculation.

10. The multi-modal perception collaborative training system as claimed in claim 9, wherein when the left-right balance degree is less than or equal to a balance threshold, the computing module determines that the left half and the right half of the body of the user exert force in balance and continues to calculate the left-right balance degree corresponding to the training motion of the user according to the result of the other fusion calculation; and when the left-right balance degree is greater than the balance threshold, the computing module determines that the left half and the right half of the body of the user exert force out of balance, and the display module displays the evaluation data and an imbalance message.

11. A multi-modal perception collaborative training method, comprising: sensing a plurality of posture data related to a training motion of a user through at least one inertial sensor of a plurality of posture sensors, and sensing a plurality of myoelectric data related to the training motion of the user through at least one myoelectric sensor of the posture sensors; outputting a plurality of limb torque data according to the posture data, and outputting a plurality of muscle group activation time data according to the myoelectric data; converting the limb torque data into a moment-skeleton coordinate system according to a skeleton coordinate system; converting the muscle group activation time data into a muscle-strength eigenvalue-skeleton coordinate system according to the skeleton coordinate system; performing a fusion calculation on the moment-skeleton coordinate system and the muscle-strength eigenvalue-skeleton coordinate system; calculating evaluation data for the training motion according to a result of the fusion calculation; determining, according to the evaluation data, that the training motion corresponds to one of a plurality of known sport motions; and displaying the evaluation data and the known sport motion.

12. The multi-modal perception collaborative training method as claimed in claim 11, further comprising: converting the limb torque data into a force-skeleton coordinate system according to the skeleton coordinate system, and then converting the force-skeleton coordinate system into the moment-skeleton coordinate system; and converting the muscle group activation time data into a muscle-strength activation time-skeleton coordinate system according to the skeleton coordinate system, and then converting the muscle-strength activation time-skeleton coordinate system into the muscle-strength eigenvalue-skeleton coordinate system.

13. The multi-modal perception collaborative training method as claimed in claim 11, wherein when the at least one inertial sensor is disposed on the body of the user, the skeleton coordinate system corresponds to a body skeleton of the user, and the body skeleton of the user is obtainable through a camera device.

14. The multi-modal perception collaborative training method as claimed in claim 13, wherein when the at least one inertial sensor is disposed on training equipment, the skeleton coordinate system further corresponds to an equipment skeleton of the training equipment, and the equipment skeleton of the training equipment is obtainable through the camera device.

15. The multi-modal perception collaborative training method as claimed in claim 11, further comprising: determining a movement context of the user based on the number of the posture sensors used by the user; and selecting one of a plurality of known motion models based on the movement context.

16. The multi-modal perception collaborative training method as claimed in claim 15, further comprising: comparing the evaluation data with training data corresponding to the selected known motion model to calculate a similarity between the training motion of the user and the selected known motion model.

17. The multi-modal perception collaborative training method as claimed in claim 16, wherein when the similarity is greater than or equal to a similarity threshold, the training motion of the user is determined to conform to the selected known motion model, the training data corresponding to the selected known motion model are updated with the evaluation data, and the evaluation data and the known sport motion are displayed; and when the similarity is less than the similarity threshold, the training motion of the user is determined not to conform to the known motion models, the evaluation data are recorded as error data, and the evaluation data and an incorrect-motion message are displayed.

18. The multi-modal perception collaborative training method as claimed in claim 16, wherein the at least one inertial sensor has an offset sensing unit and is disposed on the body of the user, and the method further comprises: sensing a plurality of offset data when a relative offset is generated between the at least one inertial sensor and the body of the user, and outputting the limb torque data according to the posture data and the offset data; and comparing the evaluation data with the training data corresponding to the selected known motion model to determine whether the relative offset between the at least one inertial sensor and the body of the user exceeds an offset threshold, wherein when the relative offset is greater than the offset threshold, a sensor placement abnormality message is displayed.

19. The multi-modal perception collaborative training method as claimed in claim 11, further comprising: sensing a plurality of mechanical data corresponding to the training motion of the user through force sensors respectively disposed on training equipment; sensing the posture data corresponding to the training motion of the user through the at least one inertial sensor disposed on the training equipment; sensing a plurality of left-half myoelectric data and a plurality of right-half myoelectric data corresponding to the training motion of the user through the at least one myoelectric sensor respectively disposed on a left half and a right half of the body of the user; outputting a plurality of pressure data according to the mechanical data, and outputting the limb torque data according to the posture data; outputting a plurality of left-half muscle group activation time data and a plurality of right-half muscle group activation time data according to the left-half myoelectric data and the right-half myoelectric data, respectively; calculating a left-half output value according to the pressure data, the limb torque data, and the left-half muscle group activation time data; calculating a right-half output value according to the pressure data, the limb torque data, and the right-half muscle group activation time data; performing another fusion calculation on the left-half output data and the right-half output data; and calculating a left-right balance degree corresponding to the training motion of the user according to a result of the other fusion calculation.

20. The multi-modal perception collaborative training method as claimed in claim 19, wherein when the left-right balance degree is less than or equal to a balance threshold, the left half and the right half of the body of the user are determined to exert force in balance, and the left-right balance degree corresponding to the training motion of the user continues to be calculated according to the result of the other fusion calculation; and when the left-right balance degree is greater than the balance threshold, the left half and the right half of the body of the user are determined to exert force out of balance, and the evaluation data and an imbalance message are displayed.
TW111134592A 2021-10-29 2022-09-13 Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method TWI823561B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211326934.5A CN116058827A (en) 2021-10-29 2022-10-25 Multimode perception cooperative training system and multimode perception cooperative training method
US17/975,628 US20230140585A1 (en) 2021-10-29 2022-10-28 Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163273160P 2021-10-29 2021-10-29
US63/273,160 2021-10-29

Publications (2)

Publication Number Publication Date
TW202317234A TW202317234A (en) 2023-05-01
TWI823561B true TWI823561B (en) 2023-11-21

Family

ID=87378704

Family Applications (1)

Application Number Title Priority Date Filing Date
TW111134592A TWI823561B (en) 2021-10-29 2022-09-13 Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method

Country Status (1)

Country Link
TW (1) TWI823561B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201007578A (en) * 2008-06-24 2010-02-16 Panasonic Elec Works Co Ltd Method and system for carrying out simulation and measurement relating to optimum operational condition for support stand of passive training device
CN109875501A (en) * 2013-09-25 2019-06-14 迈恩德玛泽控股股份有限公司 Physiological parameter measurement and feedback system
JP2019524287A (en) * 2016-08-08 2019-09-05 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. System and method for supporting exercise of subject
WO2020039491A1 (en) * 2018-08-21 2020-02-27 日本電気株式会社 Information distribution system, information distribution method, and program recording medium
TW202135902A (en) * 2019-12-26 2021-10-01 國立大學法人東京大學 Smart treadmill

Also Published As

Publication number Publication date
TW202317234A (en) 2023-05-01

Similar Documents

Publication Publication Date Title
US10194837B2 (en) Devices for measuring human gait and related methods of use
CN112603295B (en) Rehabilitation evaluation method and system based on wearable sensor
US10352962B2 (en) Systems and methods for real-time data quantification, acquisition, analysis and feedback
CN110021398B (en) Gait analysis and training method and system
US9226706B2 (en) System, apparatus, and method for promoting usage of core muscles and other applications
US20190192905A1 (en) Systems and methods for sensing balanced-action for improving mammal work-track efficiency
JP5005055B2 (en) Monitoring system and method for muscle strength and exercise / physical ability of limbs
US11318350B2 (en) Systems and methods for real-time data quantification, acquisition, analysis, and feedback
US20130324888A1 (en) System and method for measuring balance and track motion in mammals
JP6772276B2 (en) Motion recognition device and motion recognition method
CN109731316B (en) Shooting training system
Gauthier et al. Human movement quantification using Kinect for in-home physical exercise monitoring
Sanders et al. An approach to identifying the effect of technique asymmetries on body alignment in swimming exemplified by a case study of a breaststroke swimmer
JP2019154489A (en) Athletic ability evaluation system
Madanayake et al. Fitness Mate: Intelligent workout assistant using motion detection
US20190117129A1 (en) Systems, devices, and methods for determining an overall strength envelope
WO2021186709A1 (en) Exercise assistance apparatus, exercise assistance system, exercise assistance method, and exercise assistance program
TWI823561B (en) Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method
US11527109B1 (en) Form analysis system
US20230140585A1 (en) Multiple sensor-fusing based interactive training system and multiple sensor-fusing based interactive training method
KR20130034245A (en) Rehabilitation momentum measurement system
Silva et al. A technological solution for supporting fall prevention exercises at the physiotherapy clinic
CN113303765A (en) System for detecting specific kind muscle problem of interested object
Ramasamy et al. Ski for squat: A squat exergame with pneumatic gel muscle-based dynamic difficulty adjustment
TWI681360B (en) Rehabilitation monitoring system and method thereof for parkinson's disease