WO2021096669A1 - Assessment of a pose-based sport - Google Patents

Assessment of a pose-based sport

Info

Publication number
WO2021096669A1
WO2021096669A1 PCT/US2020/057439
Authority
WO
WIPO (PCT)
Prior art keywords
frame
pose
keypoints
abnormal
sportsperson
Prior art date
Application number
PCT/US2020/057439
Other languages
English (en)
Inventor
Kai Qiu
Bo Wang
Jianlong FU
Xianchao WU
Peijun XIA
Lu Feng
Wei Wang
Lu Yang
Yuanchun XU
Original Assignee
Microsoft Technology Licensing, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of WO2021096669A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G06V 20/647 Three-dimensional objects by matching two-dimensional images to three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24133 Distances to prototypes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/2433 Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/76 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries based on eigen-space representations, e.g. from pose or different illumination conditions; Shape manifolds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/49 Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10021 Stereoscopic video; Stereoscopic image sequence

Definitions

  • FIG. 1 illustrates an exemplary process for assessing a pose-based sport according to an embodiment.
  • FIG. 5 illustrates an exemplary pose estimation process according to an embodiment.
  • FIG. 1 illustrates an exemplary process 100 for assessing a pose-based sport according to an embodiment.
  • the process 100 may be implemented as an independent application or software dedicated to sport assessment, which may run in various types of smart devices, e.g., mobile phone, laptop computer, tablet computer, desktop computer, etc., or an independent device dedicated to sport assessment.
  • the process 100 may be implemented as a part or a function included in or invoked by other applications or software, e.g., as a function of an AI chatbot.
  • the present disclosure is not limited to any specific means for implementing the process 100, but covers any software, hardware, user interface, etc. that can perform the process of assessing a pose-based sport according to the embodiments of the present disclosure.
  • sportsperson trajectory extraction may be performed on the sport video 110 to obtain a frame sequence for a single sportsperson.
  • the frame sequence includes a plurality of image frames focusing on the sportsperson.
  • the sportsperson may be detected from a frame in which the sportsperson first appears in the sport video 110 through a human detection process.
  • a bounding box surrounding the sportsperson may be used for representing the detection result.
  • the sportsperson may be tracked in the subsequent frames in the sport video 110 through a tracking process.
  • the bounding box as the detection result, may be used for initializing the tracking process.
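As a rough illustration of the trajectory extraction described above, the sketch below tracks a single sportsperson by greedily matching bounding boxes frame to frame with intersection-over-union (IoU). The detector output, box values, and the `min_iou` threshold are hypothetical stand-ins; the embodiments may use any human detection and tracking process.

```python
# Sketch of sportsperson trajectory extraction: detect once, then track by
# choosing, in each subsequent frame, the candidate box with the highest IoU
# overlap with the current box.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def extract_trajectory(initial_box, candidates_per_frame, min_iou=0.3):
    """Track a single sportsperson starting from the detected initial box.

    candidates_per_frame: candidate boxes for each subsequent frame.
    Returns the per-frame boxes forming the trajectory."""
    trajectory = [initial_box]
    current = initial_box
    for candidates in candidates_per_frame:
        best = max(candidates, key=lambda c: iou(current, c), default=None)
        if best is None or iou(current, best) < min_iou:
            break  # target lost
        trajectory.append(best)
        current = best
    return trajectory

# Invented boxes: the second candidate in frame 1 is a distractor person.
boxes = extract_trajectory(
    (10, 10, 50, 90),
    [[(12, 11, 52, 91), (200, 10, 240, 90)],
     [(14, 12, 54, 92)]],
)
```

A real implementation would obtain the candidate boxes from a human detector run on each frame of the sport video.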
  • a frame including an abnormal pose may be identified by performing pose analysis on the frame sequence marked with the keypoints.
  • block 340 circles exemplary frames 330-4 and 330-5 that include abnormal poses. Knee bending of the sportsperson is identified in the frame 330-4, and a front-back legs-apart pose of the sportsperson is identified in the frame 330-5; both of these poses are abnormal.
  • the frame sequence 510 associated with the sportsperson may correspond to the frame sequence 450 in FIG. 4.
  • a feature map set 530 may be extracted from the frame sequence 510 through, e.g., a CNN model.
  • the feature map set 530 comprises feature maps corresponding to each frame in the frame sequence 510.
  • a feature map may at least indicate positions of possible keypoints in the corresponding frame, wherein the positions of the possible keypoints may be optimized in the subsequent process.
  • the CNN model may adopt, e.g., ResNet-50, etc.
  • the size or form of the feature map may be represented as [T, C, H, W], wherein T represents the frame number, C represents a channel corresponding to a possible keypoint, H represents the height, and W represents the width. In one case, H may represent the height of a bounding box, and W may represent the width of the bounding box.
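To make the [T, C, H, W] convention concrete, the sketch below reads a keypoint position out of a feature-map set by taking the per-channel argmax, treating each channel as a heatmap for one keypoint. Plain nested lists stand in for a real tensor, and all values are invented for illustration.

```python
# Sketch: extract keypoint positions from a [T, C, H, W] feature-map set by
# per-channel argmax (each channel c is a heatmap for keypoint c).

def keypoints_from_feature_maps(maps):
    """maps[t][c][h][w] -> list of per-frame {keypoint_index: (h, w)}."""
    result = []
    for frame_maps in maps:                       # over T frames
        kps = {}
        for c, heatmap in enumerate(frame_maps):  # over C channels
            best, best_pos = float("-inf"), (0, 0)
            for h, row in enumerate(heatmap):
                for w, v in enumerate(row):
                    if v > best:
                        best, best_pos = v, (h, w)
            kps[c] = best_pos
        result.append(kps)
    return result

# One frame (T=1), two keypoint channels (C=2), 3x3 maps (H=W=3).
maps = [[
    [[0.1, 0.2, 0.1], [0.2, 0.9, 0.2], [0.1, 0.2, 0.1]],  # peak at (1, 1)
    [[0.8, 0.1, 0.1], [0.1, 0.1, 0.1], [0.1, 0.1, 0.1]],  # peak at (0, 0)
]]
kps = keypoints_from_feature_maps(maps)
```

In the embodiments these positions are then refined by the spatial-temporal relation module rather than used directly.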
  • the feature map set 530 may be further provided as input to a spatial-temporal relation module 540.
  • it may be determined whether a frame includes an abnormal pose through a pose classification model, e.g., an SVM.
  • different keypoints in a set of keypoints may be used for calculating specific reference angles.
  • Table 1 shows an approach of calculating 8 exemplary reference angles by using exemplary 12 keypoints for the sport of freestyle skiing aerials.
  • abnormal poses and corresponding correct poses may be labelled in a freestyle skiing aerials dataset in advance by using the above reference angles, and the labelled dataset may be used for training the SVM.
  • the SVM may classify the frame as including or not including an abnormal pose.
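The classification step above can be sketched as follows: a frame's reference angles form a feature vector, and a classifier trained on labelled examples decides "abnormal" versus "normal". For brevity a tiny nearest-centroid classifier stands in for the SVM here (a real implementation might use, e.g., scikit-learn's SVM training), and the angle values are made up.

```python
# Sketch: decide whether a frame's reference angles indicate an abnormal pose,
# with a nearest-centroid classifier standing in for the trained SVM.

def fit_centroids(samples, labels):
    """Mean reference-angle vector per class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        sums[y] = [a + b for a, b in zip(acc, x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(angles, centroids):
    """Return the label whose centroid is nearest to the angle vector."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(angles, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy labelled data: one "knee angle" reference angle per frame;
# straight legs are near 180 degrees, bent knees are noticeably smaller.
X = [[180.0], [178.0], [150.0], [145.0]]
y = ["normal", "normal", "abnormal", "abnormal"]
centroids = fit_centroids(X, y)
```

In practice the feature vector would contain the full set of reference angles (e.g., the 8 angles of Table 1), not a single one.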
  • it may be determined whether a frame includes an abnormal pose based on pre-established determination criteria.
  • the determination criteria may be established in advance based on standard action parameters, etc., and it may be determined whether a frame includes an abnormal pose through comparing calculated reference angles with the determination criteria.
  • a plurality of reference angles to be calculated may be predefined by referring to some standard action parameters of this sport, which are used for identifying common abnormal poses in this sport.
  • a reference angle 1012 in the side view is shown.
  • the reference angle 1012 may correspond to an angle between a vector formed by top of head and left-right knee midpoint and a horizontal vector 1014.
  • any other vector capable of characterizing the orientation of a sportsperson's body may be used for substituting the vector formed by top of head and left-right knee midpoint to calculate the reference angle 1012, e.g., a vector formed by top of head and left-right ankle midpoint, a vector formed by top of head and left-right hip midpoint, a vector formed by neck and left-right knee midpoint, etc.
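A reference angle of this kind can be computed directly from 2-D keypoint coordinates, as sketched below. The keypoint names and coordinates are illustrative, not the patent's exact keypoint schema.

```python
# Sketch: the reference angle between a body-orientation vector (top of head
# to left-right knee midpoint) and a horizontal vector, from 2-D keypoints.

import math

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def angle_between(v1, v2):
    """Angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norms = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Invented pixel coordinates for an upright sportsperson.
keypoints = {
    "head_top": (100.0, 20.0),
    "left_knee": (90.0, 120.0),
    "right_knee": (110.0, 120.0),
}
knees = midpoint(keypoints["left_knee"], keypoints["right_knee"])
body = (knees[0] - keypoints["head_top"][0], knees[1] - keypoints["head_top"][1])
horizontal = (1.0, 0.0)
reference_angle = angle_between(body, horizontal)  # 90.0 for an upright body
```

Substituting a different body vector (e.g., top of head to left-right ankle midpoint) only changes how `body` is formed; the angle computation is unchanged.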
  • a reference angle 1022 in the front view is shown.
  • FIG. 11 illustrates an exemplary assessment result providing process 1100 according to an embodiment.
  • the assessment result providing process 1100 may be performed for providing an assessment result based at least on frames 1102 including abnormal poses, wherein the frames 1102 including abnormal poses may correspond to, e.g., the frames 840 including abnormal poses in FIG. 8.
  • an assessment result may be determined based at least on a plurality of representative frames corresponding to the plurality of abnormal pose frame sections respectively.
  • the assessment result may be determined based on reference angles 1122 calculated for each representative frame and corresponding actions 1132.
  • the assessment result may be an action correction suggestion 1162 provided for at least one representative frame.
  • the correction suggestion 1162 may include a description about how to improve the action to achieve a standard or better effect.
  • the correction suggestion 1162 may be retrieved from a pre-established correction suggestion database 1160.
  • the correction suggestion database 1160 may include correction suggestions for different abnormal poses and different sizes of reference angles. For example, assuming that the reference angle corresponding to "knee bending" calculated in a representative frame is 170 degrees, which indicates a relatively minor knee bending error, a corresponding correction suggestion may be retrieved from the correction suggestion database 1160, e.g., "Here, you should straighten your legs a little bit more", etc.
  • Corresponding body damage analysis may be retrieved from the body damage database 1170 based on information such as reference angles, corresponding actions, etc., e.g., "The knee angle is about 160 degrees when landing, and this angle is too large for the body's cushioning and is easy to damage the knees. The suggested angle is 100 degrees to 130 degrees. Please lower your body as much as possible to ensure that the landing force is cushioned."
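The retrieval steps described above might be sketched as a lookup keyed by abnormal-pose type and the range the measured reference angle falls in. The database contents and angle ranges below are invented for illustration; the patent's databases 1160 and 1170 may be organised differently.

```python
# Sketch: retrieve a correction suggestion keyed by (abnormal pose, angle range).
# All entries here are illustrative placeholders for database 1160.

SUGGESTIONS = {
    ("knee bending", (160, 180)): "Here, you should straighten your legs a little bit more.",
    ("knee bending", (0, 160)): "Significant knee bending: keep your legs straight through the flight.",
}

def lookup_suggestion(pose, angle):
    """Return the suggestion whose angle range contains the measured angle."""
    for (p, (lo, hi)), text in SUGGESTIONS.items():
        if p == pose and lo <= angle < hi:
            return text
    return None

# A measured knee angle of 170 degrees indicates a relatively minor error.
suggestion = lookup_suggestion("knee bending", 170)
```

A body damage analysis lookup against database 1170 would follow the same pattern, keyed by the same reference-angle information.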
  • the assessment result may be a demo frame 1182 including a normal pose corresponding to at least one representative frame.
  • the demo frame 1182 may involve the same actions as the representative frame.
  • the demo frame 1182 may be retrieved from a pre-established demo frame database 1180.
  • the demo frame database 1180 may comprise frames that include normal poses and correspond to different actions, which may be extracted from any existing game videos, training videos, etc. For example, if a representative frame corresponds to the abnormal pose "knee bending", a demo frame not including a "knee bending" error may be retrieved from the demo frame database 1180.
  • the demo frame 1182 may include a plurality of consecutive frames associated with an action of a representative frame in order to demonstrate a complete process of the action.
  • FIG. 14 illustrates an exemplary jump start point detection process 1400 according to an embodiment.
  • the process 1400 detects a jump start frame based on object detection.
  • a target block 1440 corresponding to a visible range of a sportsperson may be detected in subsequent frames. As the sportsperson is gradually emerging from the upper end of the platform 1420, the visible range of the target block 1440 increases gradually. When the sportsperson leaves the platform 1420 completely and jumps, the visible range of the target block 1440 reaches the maximum value. A frame in which the target block reaches the maximum value may be construed as a jump start frame. As shown in FIG. 14, at the frame t+2, the visible range of the target block 1440 reaches the maximum value, thus the frame t+2 may be construed as a jump start frame. In an implementation, considering that different jump start modes of the sportsperson may correspond to different maximum visible ranges, the maximum visible range of the target block 1440 may be determined based on the maximum visible pixel distribution of the sportsperson in the sport video.
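The jump start detection described above reduces to finding the first frame where the target block's visible range reaches its maximum. The sketch below uses the visible box height in pixels as the range measure and invented per-frame values; real measurements would come from the detected target block 1440 (or from the maximum visible pixel distribution, for differing jump start modes).

```python
# Sketch: the jump start frame is the first frame where the target block's
# visible range (here, visible height in pixels) reaches its maximum.

def jump_start_frame(visible_heights, tolerance=0):
    """visible_heights[t] = visible height of the target block at frame t.
    Returns the index of the first frame reaching the maximum value."""
    peak = max(visible_heights)
    for t, h in enumerate(visible_heights):
        if h >= peak - tolerance:
            return t

# Sportsperson gradually emerges over the platform edge, then is fully visible.
heights = [10, 35, 80, 80, 78]
start = jump_start_frame(heights)  # frame index 2
```

The `tolerance` parameter is an assumption added here to absorb small pixel-level jitter in the detected block size.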
  • a UI 2300 in FIG. 23, a UI 2400 in FIG. 24, etc. may be presented to prompt the user that the processing is being performed.
  • a UI 2700 in FIG. 27 may be presented so that the user may select a jump code.
  • a UI 2800 in FIG. 28 may be presented.
  • the user may view a demo video corresponding to the jump code "bdff".
  • a UI 2900 in FIG. 29 may be presented so that the user may perform the following operations.
  • At 3040, at least one frame including an abnormal pose may be identified through performing pose analysis on the frame sequence based at least on the set of keypoints.
  • the performing pose analysis may comprise: for each frame of the frame sequence, calculating a set of reference angles based at least on the set of keypoints; and determining whether the frame includes an abnormal pose based on the set of reference angles.
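The two-step pose analysis at 3040 (compute a set of reference angles per frame, then decide abnormality) might be sketched as follows, with invented criteria standing in for the pre-established determination criteria.

```python
# Sketch of the per-frame pose analysis loop: flag a frame as abnormal if any
# of its reference angles falls outside its allowed range. The criteria below
# are illustrative placeholders, not standard action parameters.

CRITERIA = {"knee": (175, 180), "legs_apart": (0, 10)}  # allowed degree ranges

def analyse(frames_angles):
    """frames_angles[t] = {angle_name: degrees}. Returns abnormal frame indices."""
    abnormal = []
    for t, angles in enumerate(frames_angles):
        for name, value in angles.items():
            lo, hi = CRITERIA[name]
            if not (lo <= value <= hi):
                abnormal.append(t)
                break  # one violated criterion suffices to flag the frame
    return abnormal

frames = [
    {"knee": 178, "legs_apart": 3},   # normal
    {"knee": 170, "legs_apart": 2},   # knee bending
    {"knee": 179, "legs_apart": 25},  # front-back legs apart
]
abnormal_frames = analyse(frames)  # [1, 2]
```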
  • the performing pose analysis may comprise: for at least one reference angle of the set of reference angles, selecting a frame used for calculating the at least one reference angle from frames in a front view or in a side view, based on a 3-dimensional position of the sportsperson.
  • the pose estimation module 3130 may be for: generating a set of feature maps corresponding to the frame sequence, each feature map at least indicating positions of keypoints in a corresponding frame; and obtaining a set of updated feature maps through performing spatial relation process and/or temporal relation process on the set of feature maps, each updated feature map at least indicating optimized positions of keypoints in a corresponding frame.
  • the apparatus 3100 may further comprise any other modules that perform any steps/processes in the methods for assessing a pose-based sport according to the above embodiments of the present disclosure.
  • modules in the apparatuses described above may be implemented in various approaches. These modules may be implemented as hardware, software, or a combination thereof. Moreover, any of these modules may be further functionally divided into sub-modules or combined together.
  • Software should be considered broadly to represent instructions, instruction sets, code, code segments, program code, programs, subroutines, software modules, applications, software applications, software packages, routines, subroutines, objects, running threads, processes, functions, etc. Software may reside on computer readable medium.
  • Computer readable medium may include, e.g., a memory, which may be, e.g., a magnetic storage device (e.g., a hard disk, a floppy disk, a magnetic strip), an optical disk, a smart card, a flash memory device, a random access memory (RAM), a read only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), a register, or a removable disk.
  • a memory is shown as being separate from the processor in various aspects presented in this disclosure, a memory may also be internal to the processor (e.g., a cache or a register).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides methods and apparatuses for assessing a pose-based sport. A video about the pose-based sport may be obtained. A frame sequence associated with a sportsperson may be extracted from the video. A set of keypoints in each frame of the frame sequence may be marked through performing pose estimation on the frame sequence. At least one frame including an abnormal pose may be identified through performing pose analysis on the frame sequence based at least on the set of keypoints. An assessment result may be provided based on the at least one frame.
PCT/US2020/057439 2019-11-15 2020-10-27 Assessment of a pose-based sport WO2021096669A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911118979.1A CN112819852A (zh) 2019-11-15 2019-11-15 Assessing a pose-based sport
CN201911118979.1 2019-11-15

Publications (1)

Publication Number Publication Date
WO2021096669A1 (fr) 2021-05-20

Family

ID=73476250

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/057439 WO2021096669A1 (fr) 2019-11-15 2020-10-27 Assessment of a pose-based sport

Country Status (2)

Country Link
CN (1) CN112819852A (fr)
WO (1) WO2021096669A1 (fr)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706507A (zh) * 2021-08-27 2021-11-26 西安交通大学 Real-time rope skipping counting method, apparatus and device based on human body posture detection
CN113705534A (zh) * 2021-09-17 2021-11-26 平安医疗健康管理股份有限公司 Behavior prediction method, apparatus, device and storage medium based on depth vision
CN114022512A (zh) * 2021-10-30 2022-02-08 平安国际智慧城市科技股份有限公司 Exercise assistance method, apparatus and medium
CN114138844A (zh) * 2021-10-28 2022-03-04 北京斯奇曼智能设备科技有限公司 Skiing training method and apparatus, electronic device and storage medium
CN114566249A (zh) * 2022-04-29 2022-05-31 北京奥康达体育产业股份有限公司 Human motion safety risk assessment and analysis system
CN114663972A (zh) * 2021-11-05 2022-06-24 范书琪 Target marking method and apparatus based on action difference
US11475590B2 (en) * 2019-09-12 2022-10-18 Nec Corporation Keypoint based pose-tracking using entailment
WO2022247147A1 (fr) * 2021-05-24 2022-12-01 Zhejiang Dahua Technology Co., Ltd. Pose prediction methods and systems
CN116228867A (zh) * 2023-03-15 2023-06-06 北京百度网讯科技有限公司 Pose determination method and apparatus, electronic device and medium
CN116311536A (zh) * 2023-05-18 2023-06-23 讯龙(广东)智能科技有限公司 Video action scoring method, computer-readable storage medium and system
CN117216313A (zh) * 2023-09-13 2023-12-12 中关村科学城城市大脑股份有限公司 Posture evaluation audio output method and apparatus, electronic device and readable medium
CN117275092A (zh) * 2023-10-09 2023-12-22 奥雪文化传播(北京)有限公司 Intelligent skiing action assessment method, system, device and medium
CN117523936A (zh) * 2023-11-07 2024-02-06 中国人民解放军中部战区总医院 Interactive casualty treatment skill group training method and system based on evaluation feedback
WO2024104223A1 (fr) * 2022-11-16 2024-05-23 中移(成都)信息通信科技有限公司 Counting method and apparatus, electronic device, storage medium, program and program product

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN113392746A (zh) * 2021-06-04 2021-09-14 北京格灵深瞳信息技术股份有限公司 Action standard mining method and apparatus, electronic device and computer storage medium
CN113392745A (zh) * 2021-06-04 2021-09-14 北京格灵深瞳信息技术股份有限公司 Abnormal action correction method and apparatus, electronic device and computer storage medium
CN113537128A (zh) * 2021-07-29 2021-10-22 广州中金育能教育科技有限公司 Method, system and device for comparing and analyzing continuous actions based on deep learning posture assessment
CN113901889B (zh) * 2021-09-17 2023-07-07 广州紫为云科技有限公司 Method for building a behavior recognition heat map based on time and space
CN113850248B (zh) * 2021-12-01 2022-02-22 中科海微(北京)科技有限公司 Motion posture assessment method and apparatus, edge computing server and storage medium
CN114140721A (zh) * 2021-12-01 2022-03-04 中科海微(北京)科技有限公司 Archery posture assessment method and apparatus, edge computing server and storage medium
CN116453693B (zh) * 2023-04-20 2023-11-14 深圳前海运动保网络科技有限公司 Sports risk protection method, apparatus and computing device based on artificial intelligence

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JP4894741B2 (ja) * 2007-12-03 2012-03-14 ソニー株式会社 Information processing apparatus, information processing method, program, and recording medium
JP5604249B2 (ja) * 2010-09-29 2014-10-08 Kddi株式会社 Human body posture estimation apparatus, human body posture estimation method, and computer program
CN107403440B (zh) * 2016-05-18 2020-09-08 株式会社理光 Method and apparatus for determining the pose of an object
JP2019045967A (ja) * 2017-08-30 2019-03-22 富士通株式会社 Posture estimation apparatus, method, and program
CN109902562B (zh) * 2019-01-16 2022-07-01 重庆邮电大学 Driver abnormal posture monitoring method based on reinforcement learning

Non-Patent Citations (6)

Title
ABHISHEK KUNDU ET AL: "A SURVEY ON VIDEO SEGMENTATION THE FUTURE ROADMAP", INTERNATIONAL JOURNAL OF MODERN TRENDS IN ENGINEERING AND RESEARCH, vol. 2, no. 3, 31 March 2015 (2015-03-31), pages 527 - 535, XP055770036, ISSN: 2393-8161 *
GATT THOMAS ET AL: "Detecting human abnormal behaviour through a video generated model", 2019 11TH INTERNATIONAL SYMPOSIUM ON IMAGE AND SIGNAL PROCESSING AND ANALYSIS (ISPA), IEEE, 23 September 2019 (2019-09-23), pages 264 - 270, XP033634404, DOI: 10.1109/ISPA.2019.8868795 *
KANAZAWA ANGJOO ET AL: "Learning 3D Human Dynamics From Video", 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 15 June 2019 (2019-06-15), pages 5607 - 5616, XP033686824, DOI: 10.1109/CVPR.2019.00576 *
LEIYUE YAO ET AL: "A New Approach to Fall Detection Based on the Human Torso Motion Model", APPLIED SCIENCES, vol. 7, no. 10, 26 September 2017 (2017-09-26), pages 993, XP055769667, DOI: 10.3390/app7100993 *
SUJATHA C ET AL: "A Study on Keyframe Extraction Methods for Video Summary", COMPUTATIONAL INTELLIGENCE AND COMMUNICATION NETWORKS (CICN), 2011 INTERNATIONAL CONFERENCE ON, IEEE, 7 October 2011 (2011-10-07), pages 73 - 77, XP032082396, ISBN: 978-1-4577-2033-8, DOI: 10.1109/CICN.2011.15 *
YANG LIU ET AL: "Keypoint matching by outlier pruning with consensus constraint", 2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 26 May 2015 (2015-05-26), pages 5481 - 5486, XP033169179, DOI: 10.1109/ICRA.2015.7139965 *

Cited By (18)

Publication number Priority date Publication date Assignee Title
US11475590B2 (en) * 2019-09-12 2022-10-18 Nec Corporation Keypoint based pose-tracking using entailment
WO2022247147A1 (fr) * 2021-05-24 2022-12-01 Zhejiang Dahua Technology Co., Ltd. Pose prediction methods and systems
CN113706507A (zh) * 2021-08-27 2021-11-26 西安交通大学 Real-time rope skipping counting method, apparatus and device based on human body posture detection
CN113706507B (zh) * 2021-08-27 2024-04-02 西安交通大学 Real-time rope skipping counting method, apparatus and device based on human body posture detection
CN113705534A (zh) * 2021-09-17 2021-11-26 平安医疗健康管理股份有限公司 Behavior prediction method, apparatus, device and storage medium based on depth vision
CN114138844A (zh) * 2021-10-28 2022-03-04 北京斯奇曼智能设备科技有限公司 Skiing training method and apparatus, electronic device and storage medium
CN114022512A (zh) * 2021-10-30 2022-02-08 平安国际智慧城市科技股份有限公司 Exercise assistance method, apparatus and medium
CN114663972A (zh) * 2021-11-05 2022-06-24 范书琪 Target marking method and apparatus based on action difference
CN114566249A (zh) * 2022-04-29 2022-05-31 北京奥康达体育产业股份有限公司 Human motion safety risk assessment and analysis system
CN114566249B (zh) * 2022-04-29 2022-07-29 北京奥康达体育产业股份有限公司 Human motion safety risk assessment and analysis system
WO2024104223A1 (fr) * 2022-11-16 2024-05-23 中移(成都)信息通信科技有限公司 Counting method and apparatus, electronic device, storage medium, program and program product
CN116228867A (zh) * 2023-03-15 2023-06-06 北京百度网讯科技有限公司 Pose determination method and apparatus, electronic device and medium
CN116228867B (zh) * 2023-03-15 2024-04-05 北京百度网讯科技有限公司 Pose determination method and apparatus, electronic device and medium
CN116311536A (zh) * 2023-05-18 2023-06-23 讯龙(广东)智能科技有限公司 Video action scoring method, computer-readable storage medium and system
CN116311536B (zh) * 2023-05-18 2023-08-08 讯龙(广东)智能科技有限公司 Video action scoring method, computer-readable storage medium and system
CN117216313A (zh) * 2023-09-13 2023-12-12 中关村科学城城市大脑股份有限公司 Posture evaluation audio output method and apparatus, electronic device and readable medium
CN117275092A (zh) * 2023-10-09 2023-12-22 奥雪文化传播(北京)有限公司 Intelligent skiing action assessment method, system, device and medium
CN117523936A (zh) * 2023-11-07 2024-02-06 中国人民解放军中部战区总医院 Interactive casualty treatment skill group training method and system based on evaluation feedback

Also Published As

Publication number Publication date
CN112819852A (zh) 2021-05-18

Similar Documents

Publication Publication Date Title
WO2021096669A1 (fr) Assessment of a pose-based sport
Wang et al. Ai coach: Deep human pose estimation and analysis for personalized athletic training assistance
Yuan et al. Self-supervised deep correlation tracking
US11532172B2 (en) Enhanced training of machine learning systems based on automatically generated realistic gameplay information
Wang et al. Event-centric hierarchical representation for dense video captioning
Huang et al. Tracknet: A deep learning network for tracking high-speed and tiny objects in sports applications
US12051273B2 (en) Method for recognizing actions, device and storage medium
Yu et al. Fine-grained video captioning for sports narrative
CN110674785A (zh) 一种基于人体关键点跟踪的多人姿态分析方法
WO2021098616A1 (fr) Movement posture recognition method, movement posture recognition apparatus, terminal device, and medium
US20210065452A1 (en) Instant technique analysis for sports
US20120219209A1 (en) Image Labeling with Global Parameters
US10796448B2 (en) Methods and systems for player location determination in gameplay with a mobile device
US11568617B2 (en) Full body virtual reality utilizing computer vision from a single camera and associated systems and methods
US11615648B2 (en) Practice drill-related features using quantitative, biomechanical-based analysis
Tang et al. Research on sports dance movement detection based on pose recognition
CN114926762A (zh) 运动评分方法、系统、终端及存储介质
US11810352B2 (en) Operating method of server for providing sports video-based platform service
He et al. Mathematical modeling and simulation of table tennis trajectory based on digital video image processing
Cuiping Badminton video analysis based on player tracking and pose trajectory estimation
US20240013675A1 (en) A computerized method for facilitating motor learning of motor skills and system thereof
Liu et al. Motion-aware and data-independent model based multi-view 3D pose refinement for volleyball spike analysis
CN116824697A (zh) 运动动作识别方法、装置及电子设备
US20220273984A1 (en) Method and device for recommending golf-related contents, and non-transitory computer-readable recording medium
Ludwig et al. Recognition of freely selected keypoints on human limbs

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20808577

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 20808577

Country of ref document: EP

Kind code of ref document: A1