CN114140721A - Archery posture evaluation method and device, edge calculation server and storage medium - Google Patents


Info

Publication number
CN114140721A
CN114140721A (application CN202111452401.7A)
Authority
CN
China
Prior art keywords
user
archery
posture
image
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111452401.7A
Other languages
Chinese (zh)
Inventor
李峰 (Li Feng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkehai Micro Beijing Technology Co ltd
Original Assignee
Zhongkehai Micro Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongkehai Micro Beijing Technology Co ltd filed Critical Zhongkehai Micro Beijing Technology Co ltd
Priority to CN202111452401.7A priority Critical patent/CN114140721A/en
Publication of CN114140721A publication Critical patent/CN114140721A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an archery posture assessment method and device, an edge computing server, and a storage medium. The method comprises the following steps: acquiring a first video through a selected image acquisition device, and judging from the first video whether a user has entered a designated position area; when the user has entered the designated position area, acquiring a second video through the image acquisition device and, for each frame of second image in the second video, identifying the positions of the skeletal joint points of the user's key body parts; determining the posture data corresponding to each second image from those joint-point positions; and selecting target posture data from the posture data of the user's archery process and comparing it with preset standard posture data to evaluate the user's archery posture. The standard degree of the archery posture can thus be determined objectively and accurately, guiding the user to correct the posture and greatly facilitating training.

Description

Archery posture evaluation method and device, edge calculation server and storage medium
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to an archery posture assessment method and device, an edge computing server, and a storage medium.
Background
In archery competition, an archer's performance is affected by many factors, such as the wind speed on the field, how standard the archer's movements are, and the athlete's mental state.
Training archers requires long-term, one-on-one coaching and correction, so teaching costs are high. Although some auxiliary training tools have appeared in the related art, they cannot directly produce training evaluations and guidance. For example, in one approach a sensor is mounted on the bow to collect the bow's displacement and shake parameters, which the athlete or coach reviews only after training is finished; such an approach cannot evaluate an archer's posture intuitively and quickly during the motion itself.
Disclosure of Invention
To solve the technical problem that archery evaluation approaches in the prior art cannot intuitively and quickly evaluate an archer's motion, embodiments of the invention provide an archery posture assessment method and device, an edge computing server, and a storage medium.
In a first aspect of the embodiments of the present invention, there is provided an archery posture assessment method, including:
acquiring a first video through selected image acquisition equipment, and judging whether a user enters a designated position area or not according to the first video;
if yes, triggering an archery posture detection event of the user;
acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video;
determining posture data corresponding to the second images according to the positions of the bone joint points of the key parts of the user, wherein the posture data corresponding to each second image in the second video form posture data in the archery process of the user;
and selecting target posture data from the posture data in the archery process of the user, and comparing the target posture data with preset standard posture data to evaluate the archery posture of the user.
In an optional embodiment, candidate image acquisition devices are respectively deployed on the left side and the right side by taking a target as a central axis, and the candidate image acquisition devices and a user are positioned on the same horizontal line;
the acquiring of the first video by the selected image acquisition device comprises:
acquiring the archery habit of a user, and determining image acquisition equipment from the candidate image acquisition equipment according to the archery habit of the user;
and starting the image acquisition equipment to acquire a video, and acquiring a first video through the image acquisition equipment.
In an optional embodiment, the acquiring, by the image capturing device, the first video includes:
acquiring a first video through image acquisition equipment in an archery detection mode;
the judging whether the user enters a designated position area according to the first video comprises the following steps:
for a first image in the first video, inputting the first image to a preset human body position detection model to obtain a user position in the first image output by the human body position detection model;
and comparing the position of the user with a preset archery standing position area, and judging whether the user enters a designated position area.
In an optional embodiment, the human body position detection model is specifically obtained by:
acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images;
carrying out human body target labeling on the archery images in the preset number in a rectangular frame labeling mode to generate human body detection training samples;
and carrying out supervised training on the human body position detection initial model based on the human body detection training sample to obtain a human body position detection model.
In an alternative embodiment, said identifying positions of skeletal joint points of key parts of said user in said second image comprises:
inputting the second image into a preset human body posture evaluation model, and acquiring the positions of the bone joint points of the key parts of the user in the second image output by the human body posture evaluation model.
In an optional embodiment, the human posture estimation model is specifically obtained by:
acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images;
carrying out human body joint point labeling on the archery images in the preset number in a bone joint point labeling mode to generate archery posture evaluation training samples;
and carrying out supervised training on the human body posture evaluation initial model based on the archery posture evaluation training sample to obtain a human body posture evaluation model.
In an optional embodiment, the determining pose data corresponding to the second image according to the positions of the bone joint points of the key parts of the user comprises:
forming a vector corresponding to the key part of the user according to the positions of the skeletal joint points of the key part of the user;
and calculating an included angle between the vectors corresponding to the key part of the user, and determining the included angle as the included angle corresponding to the key part of the user corresponding to the second image.
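The vector-and-angle computation described in this embodiment can be sketched as follows; the (x, y) keypoint format and the example shoulder–elbow–wrist triple are illustrative assumptions, not the patent's exact implementation:

```python
import math

def joint_angle(p_a, p_b, p_c):
    """Included angle at joint p_b (degrees) between vectors p_b->p_a and p_b->p_c.

    Each point is an (x, y) image coordinate of a skeletal joint point.
    """
    v1 = (p_a[0] - p_b[0], p_a[1] - p_b[1])
    v2 = (p_c[0] - p_b[0], p_c[1] - p_b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        raise ValueError("degenerate vector: coincident joint points")
    # clamp to guard against floating-point drift outside [-1, 1]
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# e.g. an elbow angle from hypothetical shoulder, elbow, wrist keypoints
angle = joint_angle((0.0, 0.0), (1.0, 0.0), (1.0, 1.0))  # 90.0 degrees
```

One such angle per key part, per frame, yields the posture data described above.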
In an optional embodiment, the selecting of target posture data from the posture data of the user's archery process and comparing it with preset standard posture data to evaluate the user's archery posture comprises:
determining the similarity between any adjacent N frames of the second images in the second video;
determining N adjacent frames of the second image with the similarity exceeding a preset similarity threshold from the second video; wherein N is more than or equal to 2;
selecting a target second image from N adjacent frames of second images with the similarity exceeding a preset similarity threshold, and selecting an included angle corresponding to the target second image as a target included angle;
and comparing the target included angle with a preset standard included angle to evaluate the archery posture of the user.
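The frame-selection steps above — finding N adjacent frames whose similarity exceeds a threshold and taking a target frame from them — can be sketched as follows; the similarity metric itself is an assumption, since the patent does not define one:

```python
def frame_similarity(pose_a, pose_b):
    """Similarity between two frames' pose data (lists of joint angles, degrees).

    Defined here as 1 / (1 + mean absolute angle difference) -- an assumed metric.
    """
    diffs = [abs(a - b) for a, b in zip(pose_a, pose_b)]
    return 1.0 / (1.0 + sum(diffs) / len(diffs))

def select_target_frame(poses, n=3, threshold=0.5):
    """Return the index of a target frame inside the first window of n adjacent
    frames whose consecutive similarities all exceed threshold (None if absent)."""
    for start in range(len(poses) - n + 1):
        window = poses[start:start + n]
        if all(frame_similarity(window[i], window[i + 1]) > threshold
               for i in range(n - 1)):
            return start + n // 2  # middle frame of the stable window
    return None
```

A stable run of similar frames corresponds to the held aiming posture; its angles are then compared against the preset standard angles.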
In an alternative embodiment, the motion sensor is mounted on a bow, the method further comprising:
acquiring motion data of a bow currently acquired by the motion sensor in the process of shooting an arrow by a user, and drawing a current moving track of the bow according to the motion data;
determining the acquisition time of the motion data, and searching the second image corresponding to the acquisition time from the second video;
and marking the current moving track of the bow on the second image corresponding to the acquisition time, and transmitting the current moving track of the bow to a display screen for displaying.
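The lookup from a sensor sample's acquisition time to the matching second image can be sketched as a timestamp-to-frame-index mapping; a constant frame rate and clocks already synchronised between sensor and camera are assumptions here:

```python
def frame_index_for_time(t_acquire, t_video_start, fps):
    """Map a motion-sensor sample's acquisition time (seconds) to the index of
    the second-video frame captured closest to it."""
    if t_acquire < t_video_start:
        raise ValueError("sample precedes the start of the second video")
    return round((t_acquire - t_video_start) * fps)
```

The bow's track points can then be overlaid on the frame at that index before the image is sent to the display screen.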
In an alternative embodiment, the motion sensor is mounted on a bow, the method further comprising:
acquiring complete motion data of the bow acquired by the motion sensor in the archery process of the user, and drawing a complete movement track of the bow according to the complete motion data;
and marking the complete movement track of the bow on the target second image, and transmitting the complete movement track of the bow to a display screen for displaying.
In a second aspect of the embodiments of the present invention, there is provided an archery posture evaluating apparatus including:
the video acquisition module is used for acquiring a first video through the selected image acquisition equipment;
the user judgment module is used for judging whether a user enters a designated position area or not according to the first video;
the event triggering module is used for triggering an archery posture detection event of the user if the user enters the designated position area;
the position identification module is used for acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video;
the data determining module is used for determining posture data corresponding to the second images according to the positions of the bone joint points of the key parts of the user, and the posture data in the archery process of the user are formed by the posture data corresponding to each second image in the second video;
and the posture evaluation module is used for selecting target posture data from the posture data in the archery process of the user and comparing the target posture data with preset standard posture data so as to evaluate the archery posture of the user.
In a third aspect of the embodiments of the present invention, there is further provided an edge computing server, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
and a processor configured to implement the archery posture assessment method described in the first aspect above when executing the program stored in the memory.
In a fourth aspect of embodiments of the present invention, there is also provided a storage medium having stored therein instructions that, when run on a computer, cause the computer to execute the archery posture assessment method described in the above first aspect.
In a fifth aspect of embodiments of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the archery posture assessment method described in the first aspect above.
According to the technical scheme provided by the embodiments of the invention, a first video is obtained through the selected image acquisition device, and whether a user has entered a designated position area is judged from the first video. When the user enters the designated position area, an archery posture detection event for the user is triggered and a second video is obtained through the image acquisition device. For each frame of second image in the second video, the positions of the skeletal joint points of the user's key parts are identified, and the posture data corresponding to that second image is determined from those positions; the posture data corresponding to all the second images together form the posture data of the user's archery process. Target posture data is then selected from that posture data and compared with preset standard posture data to evaluate the user's archery posture. Video is thus captured and analysed during the shot itself: the user's archery posture is detected while shooting and quantified as data, and the resulting target posture data is compared with standard posture data, so the standard degree of the archery posture can be determined intuitively and accurately, guiding the user to correct the posture and greatly facilitating training.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic flow chart illustrating an implementation of an archery posture assessment method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a scene of a candidate image capturing device deployment shown in an embodiment of the present invention;
FIG. 3 is a schematic flow chart illustrating an implementation of a training method for a human body position detection model according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating an implementation of a training method for a human body posture estimation model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a skeletal joint point labeling approach shown in an embodiment of the present invention;
fig. 6 is a schematic flow chart illustrating an implementation of a method for determining pose data corresponding to a second image according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an evaluation system for archery postures of a user according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart illustrating an implementation of a method for estimating archery postures of a user according to an embodiment of the present invention;
FIG. 9 is a target second image captured while a user aims at the bullseye in an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an archery posture evaluating device shown in the embodiment of the present invention;
fig. 11 is a schematic structural diagram of an edge computing server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, an implementation flow diagram of an archery posture assessment method provided in an embodiment of the present invention is applied to an edge computing server, and specifically includes the following steps:
s101, acquiring a first video through the selected image acquisition equipment, and judging whether a user enters a designated position area or not according to the first video.
In the embodiment of the present invention, candidate image acquisition devices, such as cameras, may be deployed in a (sports) field, and specifically in an archery field, to capture video of the field in real time; the archery standing position area and the archery direction are marked in the background (i.e., in the picture of the image acquisition device).
It should be noted that the candidate image acquisition devices may be deployed on the left and right sides with the target as the central axis, on the same horizontal line as the user, as shown in fig. 2; the viewing angle of a candidate device then corresponds to the left or right side of the user's shot. The embodiment of the present invention does not limit this.
Based on the deployment of the candidate image acquisition devices, for the edge computing server, the archery habit of the user can be obtained, so that the image acquisition devices are determined from the candidate image acquisition devices according to the archery habit of the user, the image acquisition devices are started to acquire videos, and the first videos are obtained through the image acquisition devices.
It should be noted that a user's archery habit is generally understood as the dominant hand with which the user draws the bowstring: a left-handed user habitually draws the bowstring with the left hand, and a right-handed user with the right hand. In practice this means that different candidate image acquisition devices are started according to the user's archery habit, and only one of them is actually used.
For example, for the edge computing server, an archery habit of a user (the user is left-handed, and the user is used to pull a bow string in the left direction) is obtained, and when the user looks from the archery direction, it can be determined that the candidate image acquisition device on the left side is the image acquisition device according to the archery habit of the user, and then the image acquisition device can be started to perform video acquisition, and the first video is obtained through the image acquisition device.
For example, for the edge computing server, the archery habit of the user (the user is right-handed, and the user is used to pull a bow string in the right direction) is obtained, and when the user looks from the archery direction, the candidate image acquisition device on the right side can be determined to be the image acquisition device according to the archery habit of the user, and then the image acquisition device can be started to perform video acquisition, and the first video is obtained through the image acquisition device.
After the image acquisition device is started, in the archery detection mode it captures the user's archery picture in real time during the shot and transmits the captured picture to the edge computing server. The edge computing server thus obtains the first video through the image acquisition device in the archery detection mode.
For the first video, whether the user has entered the designated position area can be judged from it. For example, in the archery detection mode, it is determined from the first video whether the user has entered the designated position area, which may be as shown in fig. 2.
Specifically, a first image in the first video is input to a preset human body position detection model to obtain the user position in the first image output by the model; the user position is then compared with the preset archery standing position area to judge whether the user has entered the designated position area.
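A minimal sketch of the standing-area check, assuming the detection model outputs an axis-aligned bounding box and that the feet (the bottom mid-point of the box) decide entry — the patent does not specify the exact comparison rule:

```python
def user_in_standing_area(user_box, area_box):
    """Decide whether the detected user has entered the archery standing area.

    user_box / area_box are (x1, y1, x2, y2) pixel rectangles. The criterion
    used here -- the midpoint of the box's bottom edge (the feet) lying inside
    the area -- is one plausible choice, not the patent's exact rule.
    """
    foot_x = (user_box[0] + user_box[2]) / 2.0
    foot_y = user_box[3]
    x1, y1, x2, y2 = area_box
    return x1 <= foot_x <= x2 and y1 <= foot_y <= y2
```

When this returns true, the edge computing server can trigger the archery posture detection event for the user.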
It should be noted that the human body position detection model may specifically be a PeopleNet model, or another model; the embodiment of the present invention does not limit this.
As shown in fig. 3, an implementation flow diagram of a training method for a human body position detection model provided in an embodiment of the present invention is shown, and the method specifically includes the following steps:
s301, acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images.
In the embodiment of the invention, in the model training stage, a plurality of target users are recruited, and an archery video is acquired for each target user. Archery videos corresponding to the multiple target users can thus be obtained.
It should be noted that in the process of respectively acquiring archery videos for each target user, according to the archery habit of each target user, an image acquisition device is determined from the candidate image acquisition devices, and then the image acquisition device is started to acquire videos, and the archery videos of each target user are acquired through the image acquisition device.
For example, in the model training phase, 4,000 students are recruited, 2,000 male and 2,000 female. The archery habit of each student is obtained, an image acquisition device is determined from the candidate image acquisition devices according to that habit, the device is started for video acquisition, and the archery video of each student is acquired through it, yielding archery videos of all 4,000 students.
For the acquired archery videos corresponding to the multiple target users, frame extraction processing is performed in the embodiment of the invention, so that a preset number of archery images can be obtained. Frames may be extracted randomly; the embodiment of the present invention does not limit this.
For example, for the acquired archery videos of the 4,000 students, frame extraction processing is performed; frames can be extracted randomly using a script, yielding about 50,000 archery images.
S302, carrying out human body target labeling on the archery images in the preset number in a rectangular frame labeling mode, and generating human body detection training samples.
For the preset number of archery images, human body targets can be labeled with rectangular frames in the embodiment of the invention to generate the human body detection training samples; that is, for each archery image, a rectangular frame is drawn around the person in it.
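The rectangular-frame annotations can be turned into training records in many formats; the normalised centre/size record below is one common illustrative choice, not one the patent prescribes:

```python
def to_normalized_label(box, img_w, img_h, class_id=0):
    """Convert a rectangular human-target annotation (x1, y1, x2, y2) into a
    normalised (class, cx, cy, w, h) record, a common detector label format.

    class_id = 0 is an assumed single 'person' class.
    """
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2.0 / img_w   # box centre, normalised to [0, 1]
    cy = (y1 + y2) / 2.0 / img_h
    w = (x2 - x1) / img_w          # box size, normalised to [0, 1]
    h = (y2 - y1) / img_h
    return (class_id, cx, cy, w, h)
```

One such record per framed person, per archery image, forms a human body detection training sample.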
S303, carrying out supervised training on the human body position detection initial model based on the human body detection training sample to obtain a human body position detection model.
For the human body detection training sample, in the embodiment of the invention, the human body position detection initial model can be supervised-trained based on the human body detection training sample to obtain the human body position detection model.
It should be noted that, in the embodiment of the present invention, when the loss function converges, or the number of iterations reaches a threshold, the model training may be regarded as being terminated, and this is not limited by the embodiment of the present invention.
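The termination check just described — loss convergence or an iteration cap — might look like the sketch below; `eps` and `patience` are illustrative hyper-parameters not given in the patent:

```python
def should_stop(loss_history, iteration, max_iters, eps=1e-4, patience=3):
    """Stop training when the loss has converged (the last `patience`
    consecutive changes are all below eps) or when the iteration count
    reaches its threshold."""
    if iteration >= max_iters:
        return True
    if len(loss_history) <= patience:
        return False
    recent = loss_history[-(patience + 1):]
    return all(abs(recent[i + 1] - recent[i]) < eps
               for i in range(patience))
```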
And S102, if yes, triggering an archery posture detection event of the user.
S103, acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video.
S104, determining posture data corresponding to the second images according to the positions of the bone joint points of the key parts of the user, and forming the posture data of the user in the archery process by the posture data corresponding to each second image in the second video.
In the embodiment of the invention, the image acquisition device captures video of the user's entire archery process in real time and transmits it to the edge computing server for processing. When the user enters the designated position area, the edge computing server triggers an archery posture detection event for the user, meaning that detection of the user's archery posture begins: the user can be tracked by a target tracking algorithm while the image acquisition device captures the motion picture in real time, and the edge computing server thereby obtains the second video.
It should be noted that, in the embodiment of the present invention, the first video and the second video may be sub-segments of one complete archery-posture-evaluation video stream, which they together form. This stream refers to the video captured over the whole process from the user entering the (camera) picture and the archery standing position area, through starting and finishing the shot from that area, to leaving the picture.
In the embodiment of the invention, for each frame of the second image in the second video, the position of the bone joint point of the key part of the user in the second image is identified, so that the posture data corresponding to the second image is determined according to the position of the bone joint point of the key part of the user, wherein the posture data in the process of shooting the arrow of the user is formed by the posture data corresponding to each second image in the second video.
For example, in the embodiment of the present invention, for the i-th (i = 1, 2, 3, 4, …) frame of second image in the second video, the positions of the skeletal joint points of the user's key parts in that frame are identified, and the posture data corresponding to it is determined from those positions, as shown in table 1 below. The posture data of the user's archery process is thus composed of the posture data corresponding to each second image in the second video.
i-th frame second image    Posture data
1st frame second image     Posture data 1
2nd frame second image     Posture data 2
……                         ……
Table 1
It should be noted that the key parts may be selected according to actual needs, e.g., the shoulder joint, elbow joint, knee joint, and/or foot joint; the embodiment of the present invention does not limit this. In addition, the positions of the skeletal joint points of the user's key parts can be expressed as:
{P_i(x_i, y_i, c_i) | i ∈ 0, …, 21}, where (x_i, y_i) are the horizontal and vertical coordinates of the i-th skeletal joint point in the image and c_i is its confidence, i.e., the credibility of that joint-point position. If the confidence is low, the point may be discarded, leaving only joint-point positions with higher confidence.
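Discarding low-confidence joint points, as described above, can be sketched as a simple threshold filter; the threshold value 0.3 is an assumption, not a value given by the patent:

```python
def filter_keypoints(keypoints, c_min=0.3):
    """Keep only skeletal joint points whose confidence meets the threshold.

    keypoints maps joint index i -> (x_i, y_i, c_i), following the keypoint
    format above; c_min = 0.3 is an assumed threshold.
    """
    return {i: (x, y, c) for i, (x, y, c) in keypoints.items() if c >= c_min}
```

Downstream angle computation then uses only the surviving, higher-confidence joint points.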
In the embodiment of the present invention, the positions of the bone joint points of the key parts of the user in the second image may be identified through the human posture evaluation model, and specifically, the second image may be input to a preset human posture evaluation model, so as to obtain the positions of the bone joint points of the key parts of the user in the second image output by the human posture evaluation model.
It should be noted that the human body posture evaluation model may be a YOLO model or another model, which is not limited in the embodiment of the present invention.
As shown in fig. 4, an implementation flow diagram of a training method for a human body posture assessment model provided in an embodiment of the present invention is shown, and the method specifically includes the following steps:
s401, acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images.
In the embodiment of the invention, in the model training stage, a plurality of target users are recruited, and an archery video is acquired for each target user. In this way, the archery videos corresponding to the target users can be obtained.
It should be noted that, in the process of acquiring an archery video for each target user, an image acquisition device is determined from the candidate image acquisition devices according to that target user's archery habit; the device is then started for video acquisition, and the target user's archery video is acquired through it.
For example, in the model training phase, 4000 students are recruited, 2000 male and 2000 female. The archery habit of each student is obtained, an image acquisition device is determined from the candidate image acquisition devices according to that habit, the device is started for video acquisition, and each student's archery video is acquired through it, so that the archery videos of the 4000 students are obtained.
For the obtained archery videos corresponding to the multiple target users, frame extraction processing is performed in the embodiment of the invention, so that a preset number of archery images can be obtained. The frames may be extracted randomly, which is not limited in the embodiment of the present invention.
For example, for the acquired archery videos of the 4000 students, frame extraction processing is performed in the embodiment of the present invention, where the frames can be extracted randomly with a script, and roughly 50,000 archery images can be obtained.
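The random frame extraction described above can be sketched as follows. Only the index-sampling step is shown, so it can be paired with any video reader (for example OpenCV's `cv2.VideoCapture`, seeking to each index and decoding it); the per-video frame counts and the seed are illustrative assumptions.

```python
# Illustrative sketch of random frame extraction for building the training
# set. Sampling of frame indices is kept separate from video decoding so
# it works with any reader; decoding details are omitted here.
import random

def sample_frame_indices(total_frames, num_frames, seed=None):
    """Pick `num_frames` distinct frame indices from a video of
    `total_frames` frames, returned in ascending order."""
    rng = random.Random(seed)
    k = min(num_frames, total_frames)
    return sorted(rng.sample(range(total_frames), k))

# E.g. sampling about a dozen frames per video across 4000 videos yields
# roughly 50,000 images, matching the scale described above.
indices = sample_frame_indices(total_frames=300, num_frames=12, seed=0)
print(len(indices))  # 12
```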
S402, marking the human body joint points of the archery images in the preset number in a bone joint point marking mode to generate archery posture evaluation training samples.
For the archery images with the preset number, the human body joint points can be labeled in a bone joint point labeling mode in the embodiment of the invention, so that the archery posture evaluation training sample can be generated, that is, the bone joint points in each archery image are labeled.
For example, for the 50,000 archery images, the skeletal joint point labeling method is adopted to label the human joint points, with a total of 22 skeletal joint points labeled, as shown in fig. 5; the meaning of each skeletal joint point is shown in Table 2 below.
[The contents of Table 2, giving the meaning of each of the 22 skeletal joint points, appear as images in the original publication.]
TABLE 2
In the embodiment of the present invention, in order to increase the calculation speed while still meeting the detection requirements, the training sample images may be labeled with all the human joint points, with only the 16 skeletal joint points of the upper and lower torso, or with only the 8 skeletal joint points of the upper torso, which is not limited in the embodiment of the present invention.
And S403, carrying out supervised training on the human body posture evaluation initial model based on the archery posture evaluation training sample to obtain a human body posture evaluation model.
For the archery posture evaluation training sample, the supervised training can be carried out on the human posture evaluation initial model based on the archery posture evaluation training sample in the embodiment of the invention, so as to obtain the human posture evaluation model.
It should be noted that, in the embodiment of the present invention, when the loss function converges, or the number of iterations reaches a threshold, the model training may be regarded as being terminated, and this is not limited by the embodiment of the present invention.
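The two training termination criteria mentioned above (loss-function convergence, or the iteration count reaching a threshold) can be sketched as follows; the tolerance, patience window, and iteration cap are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the stopping rule: terminate when the loss has
# effectively converged (its recent variation is below a tolerance) or
# when the number of iterations reaches a threshold.
def should_stop(loss_history, max_iters=10000, tol=1e-4, patience=5):
    """Return True when supervised training should terminate."""
    if len(loss_history) >= max_iters:
        return True  # iteration threshold reached
    if len(loss_history) > patience:
        recent = loss_history[-patience:]
        if max(recent) - min(recent) < tol:
            return True  # loss has effectively converged
    return False

print(should_stop([0.5, 0.4, 0.3]))                        # still improving
print(should_stop([0.10, 0.10, 0.10, 0.10, 0.10, 0.10]))   # converged
```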
In addition, in the embodiment of the present invention, as shown in fig. 6, an implementation flow diagram of a method for determining pose data corresponding to a second image provided in the embodiment of the present invention may specifically include the following steps:
s601, forming a vector corresponding to the key part of the user according to the position of the bone joint point of the key part of the user.
S602, calculating an included angle between the vectors corresponding to the key parts of the user, and determining the included angle as the included angle corresponding to the key parts of the user corresponding to the second image.
From the positions of the skeletal joint points of the user's key parts in the second image, vectors corresponding to the key parts can be formed in the embodiment of the invention; the included angle between the vectors corresponding to a key part is then calculated and determined as the included angle corresponding to that key part of the user in the second image.
For example, from the positions P7, P9 and P13 of the skeletal joint points of the user's key parts in the second image, the vectors corresponding to the key part, namely the 2 vectors corresponding to the user's left shoulder joint, are formed (the vector expression appears as an image in the original publication). Referring to fig. 5 and the meanings shown in Table 2 above, the included angle between the 2 vectors is calculated, and this angle is determined as the included angle corresponding to the user's left shoulder joint in the second image, denoted angle_shoulder.
Similarly, from the positions of the other skeletal joint points of the user's key parts in the second image, the 2 vectors corresponding to the user's left waist joint, the 2 vectors corresponding to the left knee joint, and the 2 vectors corresponding to the left ankle joint are formed. The included angle between each pair of vectors is then calculated and determined as the included angle corresponding to the user's left waist joint, left knee joint, and left ankle joint in the second image, denoted angle_waist, angle_knee, and angle_ankle respectively, as shown in Table 3 below.
Thus, the included angles corresponding to the joints of the user corresponding to the second image can be determined.
[The contents of Table 3, listing the included angle corresponding to each of the user's joints, appear as an image in the original publication.]
TABLE 3
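Steps S601 and S602 (forming two vectors at a joint and taking the included angle between them) can be sketched as follows; which point index corresponds to which joint is an assumption for illustration.

```python
# Sketch of the joint-angle computation: form the two vectors from the
# joint point (the vertex) to its neighbouring points, then take the
# angle between them via the normalized dot product.
import math

def joint_angle(vertex, p_a, p_b):
    """Angle in degrees at `vertex` between vectors vertex->p_a and vertex->p_b."""
    v1 = (p_a[0] - vertex[0], p_a[1] - vertex[1])
    v2 = (p_b[0] - vertex[0], p_b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Sanity check with a right angle: vectors along +x and +y.
angle_shoulder = joint_angle((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(round(angle_shoulder))  # 90
```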
S105, selecting target posture data from the posture data in the archery process of the user, and comparing the target posture data with preset standard posture data to evaluate the archery posture of the user.
In the embodiment of the invention, the posture data corresponding to each second image in the second video together form the posture data of the user's archery process. Target posture data are selected from the posture data of the archery process, that is, from the posture data corresponding to the individual second images, and are compared with preset standard posture data to evaluate the user's archery posture: the posture is checked against a corresponding sports expert knowledge base, archery posture problems are evaluated, and the training guidance suggestions in that expert knowledge base are given.
The target posture data (such as the right shoulder joint angle, the right knee joint angle, and the like) are compared with preset standard posture data (such as the standard right shoulder joint angle, the standard right knee joint angle, and the like) to detect whether the user's archery posture is standard, or to give a degree of standardness. If the archery posture is not standard, it is played back on the display screen and the non-standard positions are marked; as shown in fig. 7, the edge computing server is connected to the camera and the display screen, respectively.
In addition, as shown in fig. 8, an implementation flow diagram of the method for evaluating an archery gesture of a user according to the embodiment of the present invention is provided, and the method is applied to an edge computing server, and may specifically include the following steps:
s801, determining the similarity between the second images of any adjacent N frames in the second video.
S802, determining the second image of the adjacent N frames with the similarity exceeding a preset similarity threshold from the second video; wherein N is more than or equal to 2.
And S803, selecting a target second image from the N adjacent frames of second images with the similarity exceeding a preset similarity threshold, and selecting an included angle corresponding to the target second image as a target included angle.
In the embodiment of the invention, for the second video, the similarity between any N adjacent frames of second images in the second video is determined, so that N adjacent frames of second images whose similarity exceeds a preset similarity threshold are determined from the second video, where N ≥ 2. A target second image is then selected from those N adjacent frames, and the included angle corresponding to the target second image is selected as the target included angle.
In the embodiment of the present invention, a description is given by taking any two adjacent frames of second images in the second video as an example, and for any two adjacent frames of second images in the second video, a similarity between any two adjacent frames of second images in the second video is determined, so that a target included angle can be selected from included angles corresponding to each second image in the second video according to the similarity.
For example, for the edge calculation server, for any two adjacent frames of second images in the second video, the similarity between any two adjacent frames of second images is determined, as shown in table 4 below, so that a target angle can be selected from the corresponding angles of each second image in the second video according to the similarity.
Any two adjacent frames of second images              Similarity
1st frame second image and 2nd frame second image     85%
2nd frame second image and 3rd frame second image     80%
……                                                    ……

TABLE 4
It should be noted that, for the similarity between any two previous and subsequent frames of second images, the calculation may be specifically performed by referring to a relatively mature algorithm in the market, for example, a background subtraction method, which is not limited in the embodiment of the present invention.
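A minimal frame-similarity measure in the spirit of the background subtraction method mentioned above can be sketched as follows; a production system would use a mature library implementation (for example OpenCV's background subtractors), and the pixel tolerance here is an illustrative assumption.

```python
# Toy frame-similarity measure: the fraction of pixels whose absolute
# difference between the two frames is within a tolerance. Frames are
# grayscale images given as nested lists of equal shape.
def frame_similarity(frame_a, frame_b, pixel_tol=10):
    total = 0
    unchanged = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if abs(px_a - px_b) <= pixel_tol:
                unchanged += 1
    return unchanged / total

a = [[100, 100], [100, 100]]
b = [[105, 100], [200, 100]]
print(frame_similarity(a, b))  # 0.75: one of four pixels changed substantially
```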
During the process of shooting the arrow by the user, the user usually needs to aim at the target center within a period of time by combining the breathing frequency of the user and natural factors, and then shoots the arrow towards the target center, wherein the process that the user aims at the target center is important, and the shooting posture at the moment can be emphatically evaluated.
Based on this, in the embodiment of the present invention, the edge calculation server may determine two adjacent frames of second images with the similarity exceeding the preset similarity threshold from the second video, select the target second image from the two adjacent frames of second images with the similarity exceeding the preset similarity threshold, and select an included angle corresponding to the target second image as the target included angle.
For example, the edge calculation server determines, from the second video, adjacent frames of second images whose similarity exceeds 95%, which means the user has entered the state of aiming at the target. Since the user needs to hold the aim for a period of time (for example, 1 to 2 seconds), there are many pairs of adjacent second images in the second video whose similarity exceeds 95%.
At this time, one second image may be randomly selected as the target second image from the adjacent second images whose similarity exceeds 95%, as shown in fig. 9; this image corresponds to the user's process of aiming at the target, being a certain frame captured during that process, and the included angle corresponding to it is selected as the target included angle.
It should be noted that, given that aiming takes a certain time (for example, 1 to 2 seconds) and is a continuous process, the number of adjacent second images whose similarity exceeds the preset similarity threshold is relatively large, and one of them is randomly selected as the target second image, which is not limited in the embodiment of the present invention.
S804, comparing the target included angle with a preset standard included angle to evaluate the archery posture of the user.
In the embodiment of the invention, the included angle corresponding to the target second image is used as the target included angle, and the target included angle is compared with the preset standard included angle to evaluate the user's archery posture. That is, the archery posture is evaluated by comparing the included angles corresponding to the user's key parts in a certain frame image during aiming (namely the target second image) with the preset standard included angles.
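The comparison of step S804 (target included angles against preset standard included angles) can be sketched as follows; the angle names, standard values, and tolerance are assumptions for illustration, not values from the patent.

```python
# Sketch of step S804: compare each target joint angle with its preset
# standard angle and flag joints whose deviation exceeds a tolerance.
def evaluate_pose(target_angles, standard_angles, tolerance_deg=10.0):
    """Return a dict of joint name -> (is_standard, deviation in degrees)."""
    report = {}
    for joint, standard in standard_angles.items():
        deviation = abs(target_angles[joint] - standard)
        report[joint] = (deviation <= tolerance_deg, deviation)
    return report

target = {"angle_shoulder": 172.0, "angle_knee": 150.0}
standard = {"angle_shoulder": 175.0, "angle_knee": 178.0}
report = evaluate_pose(target, standard)
print(report["angle_shoulder"][0], report["angle_knee"][0])  # True False
```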
In the embodiment of the present invention, the edge calculation server may label the posture data on the second image after obtaining the posture data of the user in the archery process, and present the labeled second image on the display screen.
In the embodiment of the present invention, the edge calculation server may also label the gesture data of the user to the target second image, for example, an included angle corresponding to the target second image is a target included angle.
In addition, in the embodiment of the present invention, a motion sensor (e.g., an accelerometer, a gyroscope, etc.) is installed on the bow, and may be specifically installed on a shock-proof rod of the bow, and the motion sensor is configured to collect motion data (attitude direction, acceleration, etc.) of the bow during an archery process in real time and transmit the motion data to the edge computing server.
Based on the above, for the edge computing server, the motion data of the bow collected by the motion sensor currently in the archery process of the user can be obtained, the current moving track of the bow is drawn according to the motion data, and the current moving track of the bow can be labeled in a specific image and transmitted to the display screen for displaying so as to be watched by the user.
Specifically, the acquisition time of the motion data is determined, and the second image corresponding to that acquisition time is searched for in the second video; a matching acquisition time means that the motion data and the corresponding second image were acquired at the same moment. The current movement track of the bow is then labeled on that second image, which is transmitted to the display screen for display.
For example, for the edge calculation server, motion data of a bow acquired by a motion sensor currently (12:00) in the process of shooting an arrow by a user is acquired, a current movement track of the bow is drawn according to the motion data, the acquisition time (12:00) of the motion data is determined, and a second image corresponding to the acquisition time is searched for from a second video, which means that the motion data and the corresponding second image are acquired at the 12:00 time, and at this time, the current movement track of the bow can be labeled on the second image corresponding to the acquisition time and transmitted to a display screen for displaying.
For the edge calculation server, the complete movement track of the bow can finally be obtained. Specifically, the complete motion data of the bow collected by the motion sensor during the user's archery process are obtained, the complete movement track of the bow is drawn from these data, and the track is labeled on the target second image and transmitted to the display screen for the user to view.
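The labeling of the bow's movement track on an image can be sketched in toy form as follows; a real implementation would draw polylines on the video frame with an imaging library, and the track coordinates here are illustrative assumptions.

```python
# Toy sketch of labeling the bow's movement track on a frame: the track
# is a list of (x, y) pixel positions derived from the motion-sensor data,
# and each in-bounds position is marked on a grayscale image given as a
# list of rows.
def draw_track(image, track, mark=255):
    for x, y in track:
        if 0 <= y < len(image) and 0 <= x < len(image[0]):
            image[y][x] = mark
    return image

frame = [[0] * 5 for _ in range(5)]
track = [(0, 0), (1, 1), (2, 2)]
draw_track(frame, track)
print(frame[1][1], frame[3][3])  # 255 0
```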
In the embodiment of the invention, the athlete's motion video is collected in real time through the camera, the key parts of the athlete's body are identified in the video frame images, the posture data of the key parts are obtained through calculation, and the user's motion is thereby quantified. The posture data and the bow's movement curve are displayed on the video image at the same time, so that how standard the athlete's motion is can be evaluated effectively and intuitively.
It should be noted that, for the motion data or the complete motion data, the existing motion data processing method may be referred to, so that the current movement track or the complete movement track may be obtained, which is not limited in the embodiment of the present invention.
Through the above description of the technical solution provided by the embodiment of the present invention, a first video is obtained through the selected image capturing device, and whether the user enters the designated location area is determined from the first video. When the user enters the location area in which the user stands during archery, an archery gesture detection event for the user is triggered, and a second video is acquired through the image acquisition device. For each frame of second image in the second video, the positions of the skeletal joint points of the user's key parts are identified, and the posture data corresponding to that second image are determined from those positions; the posture data corresponding to all the second images form the posture data of the user's archery process. Target posture data are then selected from the posture data of the archery process and compared with preset standard posture data to evaluate the user's archery posture.
Meanwhile, the motion curve of the bow is drawn on the video frame image, the image is sent to the display screen to be presented, and the video frame image containing the complete motion curve of the bow and the video frame image containing the key action can be extracted and stored locally or uploaded to the cloud for subsequent reference.
Therefore, video acquisition and analysis are carried out in the archery process, the archery posture of the user is detected in the archery process, the archery posture of the user is quantified through data, the obtained target posture data is compared with standard posture data, the standard degree of the archery posture can be objectively and accurately determined, the user is guided to correct the posture, and the training of the user is greatly facilitated.
The archery posture evaluation method provided by the embodiment of the invention is applied to the edge computing server, in a specific application scene, the edge computing server, the camera and the display screen can be independently arranged, and the edge computing server is respectively communicated with the camera and the display screen; or the edge computing server (processor), the camera and the display screen are arranged in an all-in-one manner, which is not limited in the embodiment of the present invention.
Corresponding to the foregoing method embodiment, an embodiment of the present invention further provides an archery posture assessment apparatus, as shown in fig. 10, where the apparatus is applied to an edge computing server, and may include: the system comprises a video acquisition module 1010, a user judgment module 1020, an event trigger module 1030, a position identification module 1040, a data determination module 1050 and a posture evaluation module 1060.
A video acquisition module 1010, configured to acquire a first video through a selected image capturing device;
a user determining module 1020, configured to determine whether a user enters a designated location area according to the first video;
an event triggering module 1030, configured to trigger an archery gesture detection event for the user if the user is determined to have entered the designated location area;
the position identification module 1040 is configured to acquire a second video through the image acquisition device, and identify, for each frame of a second image in the second video, a position of a skeletal joint point of a key part of the user in the second image;
a data determining module 1050, configured to determine, according to the positions of the bone joint points of the key portions of the user, pose data corresponding to the second images, where the pose data corresponding to each of the second images in the second video constitute pose data in an archery process of the user;
the posture evaluation module 1060 is configured to select target posture data from the posture data in the archery process of the user, and compare the target posture data with preset standard posture data to evaluate the archery posture of the user.
An embodiment of the present invention further provides an edge computing server, as shown in fig. 11, including a processor 111, a communication interface 112, a memory 113, and a communication bus 114, where the processor 111, the communication interface 112, and the memory 113 complete mutual communication through the communication bus 114,
a memory 113 for storing a computer program;
the processor 111, when executing the program stored in the memory 113, implements the following steps:
acquiring a first video through selected image acquisition equipment, and judging whether a user enters a designated position area or not according to the first video; if yes, triggering an archery posture detection event of the user; acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video; determining posture data corresponding to the second images according to the positions of the bone joint points of the key parts of the user, wherein the posture data corresponding to each second image in the second video form posture data in the archery process of the user; and selecting target posture data from the posture data in the archery process of the user, and comparing the target posture data with preset standard posture data to evaluate the archery posture of the user.
The communication bus mentioned in the above edge computing server may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the edge computing server and other devices.
The Memory may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform the archery posture assessment method described in any one of the above embodiments.
In yet another embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the archery pose estimation method described in any of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a storage medium or transmitted from one storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The storage medium may be any available medium that can be accessed by a computer or a data storage device including one or more available media integrated servers, data centers, and the like. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (13)

1. An archery pose assessment method, the method comprising:
acquiring a first video through selected image acquisition equipment, and judging whether a user enters a designated position area or not according to the first video;
if yes, triggering an archery posture detection event of the user;
acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video;
determining posture data corresponding to the second images according to the positions of the bone joint points of the key parts of the user, wherein the posture data corresponding to each second image in the second video form posture data in the archery process of the user;
and selecting target posture data from the posture data in the archery process of the user, and comparing the target posture data with preset standard posture data to evaluate the archery posture of the user.
2. The method of claim 1, wherein the candidate image capturing devices are deployed on the left side and the right side respectively with the target as the central axis, and the candidate image capturing devices and the user are located on the same horizontal line;
the acquiring of the first video by the selected image acquisition device comprises:
acquiring the archery habit of a user, and determining image acquisition equipment from the candidate image acquisition equipment according to the archery habit of the user;
and starting the image acquisition equipment to acquire a video, and acquiring a first video through the image acquisition equipment.
3. The method of claim 1, wherein determining whether the user enters a designated location area based on the first video comprises:
for a first image in the first video, inputting the first image to a preset human body position detection model to obtain a user position in the first image output by the human body position detection model;
and comparing the position of the user with a preset archery standing position area, and judging whether the user enters a designated position area.
4. The method according to claim 3, wherein the human body position detection model is obtained by:
acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images;
carrying out human body target labeling on the archery images in the preset number in a rectangular frame labeling mode to generate human body detection training samples;
and carrying out supervised training on the human body position detection initial model based on the human body detection training sample to obtain a human body position detection model.
5. The method of claim 1, wherein said identifying skeletal joint point locations of key parts of the user in the second image comprises:
inputting the second image into a preset human body posture evaluation model, and acquiring the positions of the bone joint points of the key parts of the user in the second image output by the human body posture evaluation model.
6. The method according to claim 5, wherein the human posture assessment model is obtained by:
acquiring archery videos corresponding to a plurality of target users respectively, and performing frame extraction processing on the archery videos to obtain a preset number of archery images;
labeling human body joint points in the preset number of archery images with skeletal joint point annotations to generate archery posture evaluation training samples;
and carrying out supervised training on the human body posture evaluation initial model based on the archery posture evaluation training sample to obtain a human body posture evaluation model.
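The frame-extraction-and-labeling pipeline shared by claims 4 and 6 can be illustrated by the shape of the resulting training samples. The dictionary field names below are assumptions for illustration, not the patent's data format:

```python
def make_pose_samples(labeled_frames):
    """Turn frame-extraction + joint-labeling output into supervised
    training samples (image path plus a list of (x, y) joint
    coordinates). Purely a sketch of the data flow described in
    claim 6; the field names are assumptions.
    """
    samples = []
    for frame in labeled_frames:
        samples.append({
            "image": frame["path"],
            "keypoints": [(p["x"], p["y"]) for p in frame["joints"]],
        })
    return samples
```

The analogous samples for claim 4 would carry a bounding box per frame instead of a keypoint list.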
7. The method of claim 1, wherein determining pose data corresponding to the second image based on the skeletal joint positions of key parts of the user comprises:
forming a vector corresponding to the key part of the user according to the positions of the skeletal joint points of the key part of the user;
and calculating an included angle between the vectors corresponding to the key part of the user, and determining the included angle as the included angle corresponding to the key part of the user corresponding to the second image.
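The vector-and-included-angle computation of claim 7 can be sketched as follows. Treating joints as 2-D pixel coordinates and measuring the angle at the middle joint (for example, the elbow between shoulder and wrist) is an assumption for illustration:

```python
import math

def joint_angle(a, b, c):
    """Included angle at joint b, in degrees, between vectors b->a
    and b->c, where a, b, c are (x, y) skeletal joint positions.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0  # degenerate: coincident joints
    # Clamp to avoid domain errors from floating-point rounding
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```

Perpendicular limb segments yield 90 degrees and a fully extended limb yields 180 degrees, which is the form of posture data the evaluation step compares against the standard angles.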
8. The method of claim 1, wherein the selecting target pose data from the pose data during the archery process of the user and comparing the target pose data with preset standard pose data to evaluate the archery pose of the user comprises:
determining the similarity among every N adjacent frames of second images in the second video;
determining, from the second video, N adjacent frames of second images whose similarity exceeds a preset similarity threshold, wherein N is greater than or equal to 2;
selecting a target second image from the N adjacent frames of second images whose similarity exceeds the preset similarity threshold, and taking the included angle corresponding to the target second image as a target included angle;
and comparing the target included angle with a preset standard included angle to evaluate the archery posture of the user.
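As a rough sketch of the stable-frame selection in claim 8 (not the patent's actual similarity measure), a window of N consecutive per-frame angles whose spread stays below a tolerance can stand in for "similarity exceeding a threshold", with the middle frame of the window taken as the target; the parameter names and defaults are assumptions:

```python
def select_target_angle(frame_angles, n=3, max_diff=2.0):
    """From a sequence of per-frame joint angles (degrees), find the
    first run of n consecutive frames whose angles stay within
    max_diff of one another -- a simple proxy for the steady
    aiming/hold phase -- and return the middle frame's angle.

    Returns None if no sufficiently stable run exists.
    """
    for i in range(len(frame_angles) - n + 1):
        window = frame_angles[i:i + n]
        if max(window) - min(window) <= max_diff:
            return window[n // 2]  # middle frame of the stable run
    return None
```

The returned target angle would then be compared against the preset standard angle to score the archery posture.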
9. The method of any one of claims 1 to 8, wherein a motion sensor is mounted on the bow, the method further comprising:
acquiring motion data of a bow currently acquired by the motion sensor in the process of shooting an arrow by a user, and drawing a current moving track of the bow according to the motion data;
determining the acquisition time of the motion data, and searching the second image corresponding to the acquisition time from the second video;
and marking the current moving track of the bow on the second image corresponding to the acquisition time, and transmitting the current moving track of the bow to a display screen for displaying.
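Claim 9's step of finding the second image corresponding to a sensor acquisition time reduces to a timestamp-to-frame-index mapping. The shared clock between sensor and camera and the fixed frame rate assumed below are illustrative, not stated in the disclosure:

```python
def frame_index_for_time(t_sensor, video_start, fps=30.0):
    """Map a motion-sensor acquisition timestamp to the index of the
    second-video frame captured closest to it.

    t_sensor and video_start are seconds on an assumed shared clock;
    fps is the assumed capture rate of the image acquisition
    equipment.
    """
    if t_sensor < video_start:
        return 0  # sensor sample predates the video; clamp to frame 0
    return round((t_sensor - video_start) * fps)
```

The bow trajectory drawn from the motion data would then be overlaid on the frame at the returned index before being sent to the display screen.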
10. The method of claim 8, wherein a motion sensor is mounted on the bow, the method further comprising:
acquiring complete motion data of the bow acquired by the motion sensor in the archery process of the user, and drawing a complete movement track of the bow according to the complete motion data;
and marking the complete movement track of the bow on the target second image, and transmitting the complete movement track of the bow to a display screen for displaying.
11. An archery posture evaluation device, characterized in that the device comprises:
the video acquisition module is used for acquiring a first video through the selected image acquisition equipment;
the user judgment module is used for judging whether a user enters a designated position area or not according to the first video;
the event triggering module is used for triggering an archery posture detection event of the user if the user enters the designated position area;
the position identification module is used for acquiring a second video through the image acquisition equipment, and identifying the positions of the bone joint points of the key parts of the user in each frame of second image in the second video;
the data determining module is used for determining posture data corresponding to each second image according to the positions of the bone joint points of the key parts of the user, wherein the posture data corresponding to each second image in the second video together form the posture data in the archery process of the user;
and the posture evaluation module is used for selecting target posture data from the posture data in the archery process of the user and comparing the target posture data with preset standard posture data so as to evaluate the archery posture of the user.
12. An edge computing server, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 10 when executing the program stored in the memory.
13. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 10.
CN202111452401.7A 2021-12-01 2021-12-01 Archery posture evaluation method and device, edge calculation server and storage medium Pending CN114140721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111452401.7A CN114140721A (en) 2021-12-01 2021-12-01 Archery posture evaluation method and device, edge calculation server and storage medium

Publications (1)

Publication Number Publication Date
CN114140721A true CN114140721A (en) 2022-03-04

Family

ID=80387106

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111452401.7A Pending CN114140721A (en) 2021-12-01 2021-12-01 Archery posture evaluation method and device, edge calculation server and storage medium

Country Status (1)

Country Link
CN (1) CN114140721A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115171208A (en) * 2022-05-31 2022-10-11 中科海微(北京)科技有限公司 Sit-up posture evaluation method and device, electronic equipment and storage medium
CN115497596A (en) * 2022-11-18 2022-12-20 深圳聚邦云天科技有限公司 Human body motion process posture correction method and system based on Internet of things

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150139502A1 (en) * 2013-11-21 2015-05-21 Mo' Motion Ventures Jump Shot and Athletic Activity Analysis System
CN108805068A (en) * 2018-06-01 2018-11-13 李泽善 A kind of motion assistant system, method, apparatus and medium based on student movement
CN110598555A (en) * 2019-08-12 2019-12-20 阿里巴巴集团控股有限公司 Image processing method, device and equipment
CN110751100A (en) * 2019-10-22 2020-02-04 北京理工大学 Auxiliary training method and system for stadium
CN110929595A (en) * 2019-11-07 2020-03-27 河海大学 System and method for training or entertainment with or without ball based on artificial intelligence
CN110929596A (en) * 2019-11-07 2020-03-27 河海大学 Shooting training system and method based on smart phone and artificial intelligence
CN112819852A (en) * 2019-11-15 2021-05-18 微软技术许可有限责任公司 Evaluating gesture-based motion


Similar Documents

Publication Publication Date Title
CN113850248B (en) Motion attitude evaluation method and device, edge calculation server and storage medium
US11638854B2 (en) Methods and systems for generating sports analytics with a mobile device
US8639020B1 (en) Method and system for modeling subjects from a depth map
US9600717B1 (en) Real-time single-view action recognition based on key pose analysis for sports videos
US8824802B2 (en) Method and system for gesture recognition
CN110298309B (en) Image-based action feature processing method, device, terminal and storage medium
CN114140722A (en) Pull-up movement evaluation method and device, server and storage medium
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
CN110298220B (en) Action video live broadcast method, system, electronic equipment and storage medium
CN114140721A (en) Archery posture evaluation method and device, edge calculation server and storage medium
CN110751100A (en) Auxiliary training method and system for stadium
JP6362085B2 (en) Image recognition system, image recognition method and program
JP7078577B2 (en) Operational similarity evaluation device, method and program
CN114120204A (en) Sit-up posture assessment method, sit-up posture assessment device and storage medium
Krzeszowski et al. Estimation of hurdle clearance parameters using a monocular human motion tracking method
CN114120168A (en) Target running distance measuring and calculating method, system, equipment and storage medium
CN114302234B (en) Quick packaging method for air skills
CN108970091B (en) Badminton action analysis method and system
CN114926762A (en) Motion scoring method, system, terminal and storage medium
CN113114924A (en) Image shooting method and device, computer readable storage medium and electronic equipment
Tarek et al. Yoga Trainer for Beginners Via Machine Learning
CN114037923A (en) Target activity hotspot graph drawing method, system, equipment and storage medium
CN114093030B (en) Shooting training analysis method based on human body posture learning
CN114639168B (en) Method and system for recognizing running gesture
KR102363435B1 (en) Apparatus and method for providing feedback on golf swing motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination