CN115171208A - Sit-up posture evaluation method and device, electronic equipment and storage medium - Google Patents

Sit-up posture evaluation method and device, electronic equipment and storage medium

Info

Publication number
CN115171208A
Authority
CN
China
Prior art keywords: sit-up, detection area, preset, target user, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210615431.3A
Other languages
Chinese (zh)
Inventor
曹玉社
李峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkehai Micro Beijing Technology Co ltd
Original Assignee
Zhongkehai Micro Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongkehai Micro Beijing Technology Co ltd filed Critical Zhongkehai Micro Beijing Technology Co ltd
Priority to CN202210615431.3A
Publication of CN115171208A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application relate to a sit-up posture assessment method and apparatus, an electronic device, and a storage medium. The sit-up posture assessment method comprises the following steps: receiving a real-time video sent by an image acquisition device, and determining each sit-up detection area contained in the real-time video; for any sit-up detection area, determining a target user in the sit-up detection area based on the real-time video, tracking the target user in the sit-up detection area, and acquiring a real-time tracking video; for any image in the real-time tracking video, determining the posture data of the target user in the sit-up detection area; and comparing the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area. In this way, an exerciser's posture data can be recorded and evaluated in real time while the exerciser performs sit-ups.

Description

Sit-up posture evaluation method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present application relate to the technical field of artificial intelligence, and in particular to a sit-up posture assessment method and apparatus, an electronic device, and a storage medium.
Background
In physical education, the sit-up is an indispensable exercise, yet an exerciser cannot judge on their own whether their movements are standard while performing sit-ups; real-time observation and evaluation by an experienced person is required.
In the prior art, a video can be recorded while an exerciser performs sit-ups, and after the exercise is finished, the exerciser's movements are evaluated by reviewing the video. This approach, however, has a certain lag: the exerciser can only review the recording and evaluate the movements after the exercise is over. Moreover, while reviewing the video, the exerciser may, for lack of professional guidance, mistake an incorrect movement for a standard one.
Therefore, the video-review evaluation approach has a certain lag, and the exerciser lacks professional guidance when evaluating the movements, which leads to unsatisfactory training results.
Disclosure of Invention
In view of this, to solve the technical problems that evaluation by reviewing video has a certain lag and that training results suffer from the lack of professional guidance while an exerciser evaluates the movements, embodiments of the present application provide a sit-up posture assessment method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present application provides a sit-up posture assessment method, which is applied to an edge computing server, where the method includes:
receiving a real-time video sent by an image acquisition device, and determining each sit-up detection area contained in the real-time video;
for any sit-up detection area, determining a target user in the sit-up detection area based on the real-time video, tracking the target user in the sit-up detection area, and acquiring a real-time tracking video;
for any image in the real-time tracking video, determining posture data of the target user in the sit-up detection area in the image;
and comparing the posture data of the target user in the sit-up detection area with preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
In an optional embodiment, the determining of a target user in the sit-up detection area based on the real-time video includes:
performing frame extraction on the real-time video to obtain a real-time image;
determining a rectangular frame where each user is located in the real-time image;
determining the intersection ratio of the rectangular frame of each user in the real-time image and the sit-up detection area;
determining the rectangular frame with the largest intersection ratio as the target rectangular frame corresponding to the sit-up detection area;
and determining that the user corresponding to the target rectangular frame is the target user corresponding to the sit-up detection area.
In an optional embodiment, the determining, for any image in the real-time tracking video, the posture data of the target user in the sit-up detection area in the image comprises:
for any image in the real-time tracking video, determining sit-up key point positions of the target user in the sit-up detection area in the image;
screening target key point positions of the target user in the sit-up detection area from the sit-up key point positions of the target user in the sit-up detection area;
and determining included angles of key parts of the target user in the sit-up detection area in the image according to the target key point positions of the target user in the sit-up detection area.
In an optional embodiment, the screening of the target key point positions of the target user in the sit-up detection area from the sit-up key point positions of the target user in the sit-up detection area includes:
dividing the sit-up key point positions of the target user in the sit-up detection area into a first set and a second set according to a preset rule;
determining a first mean of the confidences corresponding to the sit-up key point positions of the target user in the sit-up detection area contained in the first set;
determining a second mean of the confidences corresponding to the sit-up key point positions of the target user in the sit-up detection area contained in the second set;
determining a target set from the first set and the second set according to the first mean and the second mean;
and screening the target key point positions of the target user in the sit-up detection area from the target set according to a sit-up screening rule.
In an optional embodiment, the determining a target set from the first set and the second set according to the first mean and the second mean includes:
comparing the first mean with the second mean;
if the first mean value is larger than the second mean value, determining the first set as a target set;
and if the first average value is smaller than the second average value, determining the second set as a target set.
In an alternative embodiment, the target keypoint location comprises: a first key point position, a second key point position and a third key point position;
the comparing of the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area, includes:
for the included angles corresponding to the key parts of the target user in the sit-up detection area: if the included angles are within the preset angle range corresponding to the sit-up detection area, the target user in the sit-up detection area holds their head with both hands, and the distance between the first key point position and the second key point position among the target key point positions of the target user in the sit-up detection area is less than the distance between the second key point position and the third key point position, determining that the sit-up posture of the target user in the sit-up detection area is standard.
In an optional embodiment, the target keypoint location further comprises: a fourth keypoint location;
after determining that the sit-up posture of the target user in the sit-up detection area is standard, the method further includes:
if the preset angle range is a first preset angle range, and the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all unfinished flags, setting the preset first flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a second preset angle range, the distance between the third key point position and the fourth key point position among the target key point positions of the target user in the sit-up detection area is less than the distance between the second key point position and the third key point position, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are both unfinished flags, setting the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a second preset angle range, the abscissas of the third key point position and the fourth key point position among the target key point positions of the target user in the sit-up detection area meet a preset requirement, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are both unfinished flags, setting the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a third preset angle range, the preset first flag bit and the preset second flag bit corresponding to the target user in the sit-up detection area are both finished flags, and the preset third flag bit corresponding to the target user in the sit-up detection area is an unfinished flag, setting the preset third flag bit corresponding to the target user in the sit-up detection area to a finished flag;
if the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all finished flags, adding 1 to the sit-up count of the target user in the sit-up detection area, where the initial value of the sit-up count of the target user in the sit-up detection area is 0;
and setting the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
In an optional embodiment, the method further comprises:
if an included angle corresponding to a key part of the target user in the sit-up detection area is not within the preset angle range corresponding to the sit-up detection area, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags;
if the target user in the sit-up detection area does not hold their head with both hands, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags;
and/or,
if the distance between the first key point position and the second key point position among the target key point positions of the target user in the sit-up detection area is not less than the distance between the second key point position and the third key point position, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
In a second aspect, an embodiment of the present application provides a sit-up posture assessment apparatus applied to an edge computing server, the apparatus including:
a detection area determination module: configured to receive the real-time video sent by the image acquisition device and determine each sit-up detection area contained in the real-time video;
a real-time tracking video acquisition module: configured to, for any sit-up detection area, determine a target user in the sit-up detection area based on the real-time video, track the target user in the sit-up detection area, and acquire a real-time tracking video;
a posture data determination module: configured to determine, for any image in the real-time tracking video, the posture data of the target user in the sit-up detection area in the image;
a sit-up posture assessment module: configured to compare the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory, the processor being configured to execute a program stored in the memory to implement any of the sit-up posture assessment methods of the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium storing one or more programs, where the one or more programs are executable by one or more processors to implement any one of the sit-up posture assessment methods in the first aspect.
According to the technical solution provided by the embodiments of the present application, a real-time video sent by an image acquisition device is received, and each sit-up detection area contained in the real-time video is determined; for any sit-up detection area, a target user in the sit-up detection area is determined based on the real-time video, the target user is tracked, and a real-time tracking video is acquired; for any image in the real-time tracking video, the posture data of the target user in the sit-up detection area is determined; and the posture data of the target user in the sit-up detection area is compared with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area. By capturing video of the exerciser's sit-ups in real time, the exerciser's posture data can be recorded throughout the exercise and compared with the preset posture data to evaluate the exerciser's posture.
Drawings
Fig. 1 is a schematic implementation flowchart of a sit-up posture assessment method provided in an embodiment of the present application;
Fig. 2 is a schematic implementation flowchart of a target user determination method provided in an embodiment of the present application;
Fig. 3 is a schematic implementation flowchart of a posture data determination method provided in an embodiment of the present application;
Fig. 4 is a schematic diagram of human body key points provided in an embodiment of the present application;
Fig. 5 is a schematic implementation flowchart of another posture data determination method provided in an embodiment of the present application;
Fig. 6 is a schematic implementation flowchart of a target set determination method provided in an embodiment of the present application;
Fig. 7 is a schematic implementation flowchart of another sit-up posture assessment method provided in an embodiment of the present application;
Fig. 8 is a schematic implementation flowchart of a sit-up counting method provided in an embodiment of the present application;
Fig. 9 is a schematic implementation flowchart of another sit-up counting method provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a sit-up posture assessment apparatus provided in an embodiment of the present application;
Fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making creative efforts shall fall within the protection scope of the present application.
In the embodiments of the present application, after the sit-up exercise site is determined, an image acquisition device is deployed at the site; the image acquisition device includes devices such as a camera that performs sit-up detection, and the present application is not limited in this regard. After the image acquisition device is deployed, all sit-up detection areas within the full field of view of the real-time video captured by the device are detected. For ease of description, the embodiments of the present application describe only one sit-up detection area, referred to as the first detection area, with the target user in the first detection area referred to as the first target user; this is not repeated below.
Fig. 1 is a schematic implementation flowchart of a sit-up posture assessment method provided in an embodiment of the present application, which may include the following steps:
S101: Receive a real-time video sent by the image acquisition device, and determine each sit-up detection area contained in the real-time video.
In the embodiments of the present application, after the image acquisition device is deployed, the sit-up detection areas must be annotated manually in the background. The annotation is performed, according to requirements, over the full field of view of the real-time video sent by the image acquisition device; after each sit-up detection area contained in the real-time video is determined, all sit-up detection areas are detected simultaneously.
For example, after the image acquisition device is deployed, the real-time video sent by the device is received, 8 sit-up detection areas are annotated in the background over the full field of view of the real-time video, and the 8 sit-up detection areas are detected simultaneously.
S102: For any sit-up detection area, determine a target user in the sit-up detection area based on the real-time video, track the target user in the sit-up detection area, and acquire a real-time tracking video.
In the embodiments of the present application, each sit-up detection area contained in the real-time video is determined in S101, and detection is performed simultaneously for each area. Taking the first detection area as an example, the first target user in the first detection area is determined based on the real-time video. Specifically, the first target user in the first detection area is determined from among all users in the received real-time video sent by the image acquisition device; that is, the user performing the sit-up exercise in the first detection area is identified. All users include, but are not limited to, the user performing the sit-up exercise, other assisting users, and so on.
In the embodiments of the present application, after the target user in the first detection area is determined, the first target user in the first detection area is tracked. Specifically, the SORT algorithm may be used to assign a unique identifier to the first target user in the first detection area, and the first target user is then tracked by that unique identifier. It should be noted that the present application does not limit the manner in which the first target user is tracked.
In the embodiments of the present application, once the first target user in the first detection area has been determined and the first target user is being tracked, a real-time tracking video of the first detection area is acquired.
S103: For any image in the real-time tracking video, determine the posture data of the target user in the sit-up detection area in the image.
In the embodiments of the present application, after the first target user in the first detection area is determined, the first target user is tracked and a real-time tracking video of the first detection area is acquired. Frames are extracted from the real-time tracking video, and the posture data of the first target user in the first detection area is obtained for each extracted frame.
After the posture data of the first target user in the first detection area is determined, each piece of posture data is marked at its corresponding position.
S104: Compare the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
In the embodiments of the present application, the posture data of the first target user in the first detection area in each frame of the acquired real-time tracking video is compared with the corresponding preset posture data, yielding an evaluation result for each sit-up posture of the first target user in the first detection area.
In the embodiments of the present application, the preset posture data is standard posture data from a preset exercise expert knowledge base. The posture data of the first target user is compared with the preset posture data, and the comparison result is marked at the position corresponding to the first target user.
As the above description of the technical solution shows, the sit-up posture assessment method of the present application can capture the sit-up videos of multiple exercisers simultaneously, record the posture data of the multiple exercisers in real time as they perform sit-ups, and compare that posture data with the standard posture data simultaneously to produce evaluation results. This solves the problems that an exerciser could previously evaluate their own sit-up posture only by replaying a video, and that the exerciser may have an incorrect understanding of a given movement.
S102 may be implemented as shown in fig. 2, a schematic implementation flowchart of a target user determination method provided in an embodiment of the present application, which may include the following steps:
S201: Extract frames from the real-time video to acquire real-time images.
S202: Determine the rectangular frame where each user is located in the real-time image.
In the embodiments of the present application, frames are extracted from the real-time video to obtain each frame of real-time image, and the minimum rectangular frame where each user is located in each frame is determined according to a preset rule; for example, a human body detection model may be invoked to obtain the minimum rectangular frames where all users are located in each frame of real-time image.
Specifically, the human body detection model may be obtained as follows: sample images are acquired in advance and manually annotated, with all human bodies in each sample image marked with rectangular frames; the annotated sample images form a training data set for the human body detection model; and the training data set is input into an initial human body detection model, which is trained to obtain the human body detection model.
It should be noted that the minimum rectangular frame where each user is located in each frame of real-time image may also be obtained in other ways, which is not limited in the present application.
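As an illustration only, a minimal sketch of S201 and S202 using an off-the-shelf pretrained person detector might look as follows. The detector choice, score threshold, and stream URL are assumptions made for this sketch; the patent trains its own human body detection model and does not prescribe any of these.

```python
import cv2
import torch
import torchvision

# Pretrained COCO detector used as a stand-in for the patent's own
# human body detection model (an assumption for this sketch).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def person_boxes(frame_bgr, score_thresh=0.7):
    """Return [x1, y1, x2, y2] boxes for persons detected in one frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    return [box.tolist()
            for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"])
            if label.item() == 1 and score.item() >= score_thresh]  # COCO class 1 = person

cap = cv2.VideoCapture("rtsp://camera/stream")  # hypothetical stream URL
ok, frame = cap.read()                           # S201: extract one real-time image
if ok:
    print(person_boxes(frame))                   # S202: rectangular frames of users
```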
S203: Determine the intersection ratio of the rectangular frame where each user is located in the real-time image to the sit-up detection area.
S204: Determine the rectangular frame with the largest intersection ratio as the target rectangular frame corresponding to the sit-up detection area.
S205: Determine that the user corresponding to the target rectangular frame is the target user corresponding to the sit-up detection area.
The following collectively describes S203 to S205:
In the embodiments of the present application, the sit-up detection areas are manually annotated in the background after the device is deployed. Once the minimum rectangular frame where each user is located has been determined in each frame of real-time image, the intersection ratio of each user's minimum rectangular frame with the first detection area is calculated, the rectangular frame with the largest intersection ratio with the first detection area is determined, and the user in that rectangular frame is thereby determined to be the first target user.
As the above description shows, the present application automatically identifies all users within the field of view and determines the target user in each sit-up detection area by computing the intersection ratio of every user's rectangular frame with each area, which avoids interference from other users assisting with the sit-up exercise.
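A minimal sketch of the S203 to S205 selection, assuming axis-aligned rectangular frames in (x1, y1, x2, y2) form; the function names and box format are illustrative, not from the patent:

```python
def iou(box_a, box_b):
    """Intersection ratio (intersection-over-union) of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def target_user_box(user_boxes, detection_area):
    """S203-S205: the user box with the largest intersection ratio is the target."""
    return max(user_boxes, key=lambda box: iou(box, detection_area))
```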
S103 may be implemented as shown in fig. 3, a schematic implementation flowchart of a posture data determination method provided in an embodiment of the present application, which may include the following steps:
S301: For any image in the real-time tracking video, determine the sit-up key point positions of the target user in the sit-up detection area in the image.
In the embodiments of the present application, first, for any image in the real-time tracking video, the sit-up key point positions of the first target user in the first detection area in that image are determined. The sit-up key point positions of the first target user in the first detection area can be extracted by a human body key point detection model.
Specifically, the human body key point detection model may be trained by annotating human body key points on a preset number of sit-up images to generate human body key point detection training samples; key points are annotated for each of the preset number of sit-up images.
For example, a total of 500 volunteers were recruited, 250 boys and 250 girls. Sit-up videos were captured in groups of 1 to 8 students at a time. Frame extraction on the captured videos yielded about 20000 sit-up images, and human body key points were annotated on these 20000 images, with 17 key points annotated in total; see fig. 4, a schematic diagram of human body key points provided in an embodiment of the present application, where the meaning of each key point is shown in Table 1 below.
[Table 1 is provided as an image in the original document; it lists the meaning of each of the 17 human body key points P0 to P16. The exact mapping is not recoverable from the text; the even/odd index split used below, with the first (even-indexed) set described later as the right side of the body, is consistent with the common 17-keypoint COCO layout.]
TABLE 1
Supervised training is performed on an initial human body key point detection model based on the human body key point detection training samples to obtain the human body key point detection model. It should be noted that, in the embodiments of the present application, model training may be considered finished when the loss function converges or when the number of iterations reaches a threshold, which is not limited in the present application.
In the embodiments of the present application, the key points of the first target user in the first detection area are extracted based on the human body key point detection model to form the key point position set (P0, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, P11, P12, P13, P14, P15, P16).
S302: Screen the target key point positions of the target user in the sit-up detection area from the sit-up key point positions of the target user in the sit-up detection area.
In the embodiments of the present application, after the sit-up key point positions of the first target user in the first detection area are determined in S301, the target key point positions of the first target user in the first detection area are screened from those sit-up key point positions.
For example, when the first target user in the first detection area is detected, the 7 target key point positions needed for evaluating the sit-up posture are selected from the 17 key point positions to form the key point position set (P4, P6, P8, P10, P12, P14, P16). For the meanings of P4, P6, P8, P10, P12, P14 and P16, refer to fig. 4 and Table 1 above, which are not repeated here.
S303: Determine the included angles of the key parts of the target user in the sit-up detection area in the image according to the target key point positions of the target user in the sit-up detection area.
In the embodiments of the present application, vectors between the target key point positions are obtained from the target key point positions of the first target user in the first detection area, and the included angles of the key parts of the first target user in the first detection area are calculated from the vectors between the target key point positions.
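A minimal sketch of this computation for one joint, taking the arccosine of the normalized dot product of the two vectors that meet at the joint; the function name and 2D point format are illustrative:

```python
import math

def joint_angle(end_a, joint, end_b):
    """Included angle (degrees) at `joint` between the vectors to the two endpoints."""
    v1 = (end_a[0] - joint[0], end_a[1] - joint[1])
    v2 = (end_b[0] - joint[0], end_b[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0
    cos = max(-1.0, min(1.0, dot / norm))  # clamp against floating-point drift
    return math.degrees(math.acos(cos))

# Example: two perpendicular limb segments give a right angle.
print(joint_angle((0.0, 0.0), (1.0, 0.0), (1.0, 1.0)))  # 90.0
```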
Fig. 5 is a schematic implementation flowchart of another posture data determination method provided in an embodiment of the present application, which may include the following steps:
S501: For any image in the real-time tracking video, determine the sit-up key point positions of the target user in the sit-up detection area in the image.
In the embodiment of the present application, the description of S501 is already given in S301, and is not repeated here.
S502: Divide the sit-up key point positions of the target user in the sit-up detection area into a first set and a second set according to a preset rule.
S503: Determine a first mean of the confidences corresponding to the sit-up key point positions of the target user in the sit-up detection area contained in the first set.
S504: Determine a second mean of the confidences corresponding to the sit-up key point positions of the target user in the sit-up detection area contained in the second set.
S505: Determine a target set from the first set and the second set according to the first mean and the second mean.
S506: Screen the target key point positions of the target user in the sit-up detection area from the target set according to a sit-up screening rule.
S507: Determine the included angles of the key parts of the target user in the sit-up detection area in the image according to the target key point positions of the target user in the sit-up detection area.
The following describes S502 to S507 in a unified manner:
In the embodiments of the present application, when determining the target key point positions of the first target user in the first detection area, the sit-up key points of the first target user are divided into two sets according to the symmetry of the human body; that is, the 17 key point positions shown in fig. 4 and Table 1 are divided into (P0, P2, P4, P6, P8, P10, P12, P14, P16) and (P0, P1, P3, P5, P7, P9, P11, P13, P15), denoted the first set and the second set respectively.
Each key point position has a corresponding confidence. The mean confidence of the key point positions in the first set is computed and denoted the first mean, and the mean confidence of the key point positions in the second set is computed and denoted the second mean. A target set is determined from the first set and the second set according to the first mean and the second mean; for example, if the first set is determined to be the target set, the target key point positions are screened from the key point positions in the first set according to the sit-up screening rule.
In the embodiments of the present application, the purpose of the sit-up screening rule is to select the target key point positions needed for sit-up posture assessment. For example, the target key point positions screened from the first set form the key point position set (P4, P6, P8, P10, P12, P14, P16), and the target key point positions screened from the second set form the key point position set (P3, P5, P7, P9, P11, P13, P15).
When the first set is determined to be the target set, the target key point positions are correspondingly screened from the first set, and the vectors corresponding to the target key points of the first target user are formed from those target key point positions; see Table 2.
[Table 2 is provided as an image in the original document; it lists the vectors formed between the target key point positions and the included angles of the key parts computed from them. The angle names used below (angle_shoulder, angle_elbow, angle_waist, angle_knee) follow this table; the first name appears garbled in the source text and is reconstructed here.]
TABLE 2
S505 may be implemented as shown in fig. 6, a schematic implementation flowchart of a target set determination method provided in an embodiment of the present application, which may include the following steps:
S601: Compare the first mean with the second mean; if the first mean is greater than the second mean, execute S602, and if the first mean is less than the second mean, execute S603.
S602: Determine the first set to be the target set.
S603: Determine the second set to be the target set.
In the embodiments of the present application, the confidence of a key point position is a value between 0 and 1. For example, if the first mean is 0.8 and the second mean is 0.3, the first mean is greater than the second mean, so the first set is determined to be the target set.
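A minimal sketch combining S502 to S506 with the fig. 6 comparison, assuming each detected keypoint is an (x, y, confidence) triple indexed P0 to P16; the set definitions follow the split given above, and the helper names are illustrative:

```python
# Key point index split by body symmetry, as given above (P0 is in both sets).
FIRST_SET = [0, 2, 4, 6, 8, 10, 12, 14, 16]
SECOND_SET = [0, 1, 3, 5, 7, 9, 11, 13, 15]
# Target key points used for posture assessment, per set (from the example above).
TARGET_KEYPOINTS = {"first": [4, 6, 8, 10, 12, 14, 16],
                    "second": [3, 5, 7, 9, 11, 13, 15]}

def pick_target_keypoints(keypoints):
    """keypoints: list of 17 (x, y, confidence) triples, indexed P0..P16."""
    first_mean = sum(keypoints[i][2] for i in FIRST_SET) / len(FIRST_SET)
    second_mean = sum(keypoints[i][2] for i in SECOND_SET) / len(SECOND_SET)
    # Ties fall to the second set here; the patent leaves the equal case undefined.
    side = "first" if first_mean > second_mean else "second"
    return side, {i: keypoints[i][:2] for i in TARGET_KEYPOINTS[side]}
```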
As the above description shows, this section explains how the posture data of the target user in a sit-up detection area is acquired. In actual detection, posture data is acquired for the target user in every sit-up detection area in every frame of the real-time tracking video; the key point positions of each target user are screened by the preset rules to obtain the target key point positions needed for sit-up posture assessment, and the included angles corresponding to the target key point positions are computed from those positions. Determining, out of all detected key point positions, only the target key point positions needed for sit-up posture assessment avoids interference from the other key point positions and reduces wasted data.
Fig. 7 is a schematic implementation flowchart of another sit-up posture assessment method provided in an embodiment of the present application, which may include the following steps:
S701: Receive the real-time video sent by the image acquisition device, and determine each sit-up detection area contained in the real-time video.
S702: For any sit-up detection area, determine a target user in the sit-up detection area based on the real-time video, track the target user in the sit-up detection area, and acquire a real-time tracking video.
S703: For any image in the real-time tracking video, determine the included angles corresponding to the key parts of the target user in the sit-up detection area in the image.
In the embodiment of the present application, the descriptions of S701 to S703 are already given in S101 to S103, and are not repeated herein.
S704: For the included angles corresponding to the key parts of the target user in the sit-up detection area: if the included angles are within the preset angle range corresponding to the sit-up detection area, the target user in the sit-up detection area holds their head with both hands, and the distance between the first key point position and the second key point position among the target key point positions of the target user in the sit-up detection area is less than the distance between the second key point position and the third key point position, determine that the sit-up posture of the target user in the sit-up detection area is standard.
In the embodiments of the present application, for ease of description, only the first detection area in one frame of the real-time tracking video is described. The first target user in the first detection area must satisfy the following three conditions simultaneously:
Condition 1: each included angle of a key part of the first target user must be within the preset angle range corresponding to the first detection area. Specifically, the included angles of the key parts of the first target user may include the included angles shown in Table 2, which is not limited in this application.
The preset angle range corresponding to the first detection area is obtained from the exercise expert knowledge base, and the preset angle range retrieved from the knowledge base may be determined according to the included angle between the first target user's back and the ground in the current image.
Condition 2: the first target user holds their head with both hands.
Condition 3: the distance between the first key point position and the second key point position is less than the distance between the second key point position and the third key point position. Specifically, with reference to the description of fig. 5: when the first set is the target set, the first key point position is P4, the second key point position is P10, and the third key point position is P8; when the second set is the target set, the first key point position is P3, the second key point position is P11, and the third key point position is P7.
When the sit-up posture of the first target user satisfies all three conditions simultaneously, it may be determined that the first target user's sit-up posture is standard.
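A minimal sketch of the three-condition check, assuming the first set is the target set (so the first, second and third key point positions are P4, P10 and P8) and assuming a separate hands-on-head detector whose implementation the patent does not specify:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def posture_is_standard(angles, angle_ranges, holds_head, kp):
    """S704: all three conditions must hold simultaneously.

    angles: included angles of the key parts, e.g. {"waist": 120.0, ...}
    angle_ranges: preset (low, high) range per angle, from the expert knowledge base
    holds_head: result of a hands-on-head check (method unspecified in the patent)
    kp: target key point positions, e.g. {4: (x, y), 10: (x, y), 8: (x, y)}
    """
    cond1 = all(lo < angles[name] < hi for name, (lo, hi) in angle_ranges.items())
    cond2 = holds_head
    cond3 = dist(kp[4], kp[10]) < dist(kp[10], kp[8])  # condition 3, first set
    return cond1 and cond2 and cond3
```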
After the sit-up posture of the first target user is determined to be standard, it must be judged whether the first target user's posture is in one of three preset states. The judgment is made from the first target user's current sit-up posture data and the current values of the first target user's three state flag bits; when all three preset flag bits are finished flags, 1 is added to the first target user's sit-up count.
Taking the first target user in the first detection area as an example, fig. 8 is a schematic implementation flowchart of a sit-up counting method provided in an embodiment of the present application, which may include the following steps:
S801: If the preset angle range is the first preset angle range, and the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all unfinished flags, set the preset first flag bit corresponding to the target user in the sit-up detection area to a finished flag.
In the embodiments of the present application, if the preset angle range in condition 1 of S704 is the first preset angle range, and the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the current first target user are all unfinished flags, the preset first flag bit corresponding to the first target user is set to a finished flag, and the preset second flag bit and the preset third flag bit corresponding to the first target user are left unchanged.
In this embodiment, with reference to the included angles corresponding to the user's target key parts shown in Table 2, the first preset angle range may be expressed as: angle_shoulder < 90 degrees, angle_elbow < 90 degrees, 90 degrees < angle_waist < 150 degrees, and 70 degrees < angle_knee < 110 degrees. The unfinished flag of the preset first flag bit, the preset second flag bit and the preset third flag bit is 0, and the finished flag is 1.
When the preset angle range described in condition 1 of S704 is the above range, and the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the first target user are all 0, the current posture of the first target user is determined to be the first state, and the preset first flag bit corresponding to the first target user is set to 1.
S802: If the preset angle range is the second preset angle range, the distance between the third key point position and the fourth key point position among the target key point positions of the target user in the sit-up detection area is less than the distance between the second key point position and the third key point position, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit are unfinished flags, set the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag.
In the embodiments of the present application, with reference to the human body key points shown in Table 1 and the description of fig. 5, when the first set is determined to be the target set, the second key point position is P10, the third key point position is P8, and the fourth key point position is P14. With reference to the included angles corresponding to the user's target key parts shown in Table 2, the second preset angle range may be expressed as: angle_shoulder < 100 degrees, angle_elbow < 90 degrees, 10 degrees < angle_waist < 80 degrees, and 70 degrees < angle_knee < 110 degrees. The unfinished flag of the preset first flag bit, the preset second flag bit and the preset third flag bit is 0, and the finished flag is 1.
When the preset angle range described in condition 1 of S704 is the above range, the distance between P8 and P14 is less than the distance between P10 and P8, the preset first flag bit corresponding to the first target user is 1, and the preset second flag bit and the preset third flag bit corresponding to the first target user are both 0, the current posture of the first target user is determined to be the second state, and the preset second flag bit corresponding to the first target user is set to 1.
S803: If the preset angle range is the third preset angle range, the preset first flag bit and the preset second flag bit corresponding to the target user in the sit-up detection area are both finished flags, and the preset third flag bit corresponding to the target user in the sit-up detection area is an unfinished flag, set the preset third flag bit corresponding to the target user in the sit-up detection area to a finished flag.
In the embodiments of the present application, with reference to the included angles corresponding to the user's target key parts shown in Table 2, the third preset angle range may be expressed as: angle_shoulder < 90 degrees, angle_elbow < 90 degrees, 90 degrees < angle_waist < 150 degrees, and 70 degrees < angle_knee < 110 degrees. The unfinished flag of the preset first flag bit, the preset second flag bit and the preset third flag bit is 0, and the finished flag is 1.
When the preset angle range described in condition 1 of S704 is the above range, the preset first flag bit and the preset second flag bit corresponding to the first target user are both 1, and the preset third flag bit corresponding to the first target user is 0, the current posture of the first target user is determined to be the third state, and the preset third flag bit corresponding to the first target user is set to 1.
S804: If the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all finished flags, add 1 to the sit-up count of the target user in the sit-up detection area, where the initial value of the sit-up count of the target user in the sit-up detection area is 0.
In the embodiments of the present application, once the preset third flag bit corresponding to the first target user becomes a finished flag, the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the first target user are all finished flags, and 1 is added to the first target user's sit-up count. It should be noted that after the first target user begins to be tracked in S102, the first target user's initial sit-up count is set to 0; during sit-up posture assessment, the number of times that all three preset flag bits corresponding to the first target user are detected to be 1 is recorded as the first target user's sit-up count.
S805: Set the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
In the embodiments of the present application, after 1 is added to the first target user's sit-up count, the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the first target user are set back to unfinished flags, and detection of the first target user's next sit-up posture begins immediately.
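A minimal sketch of the fig. 8 counting state machine over the three flag bits, assuming the first set is the target set. The range classification ("first", "second", "third") is assumed to come from the expert knowledge base comparison described in S801 to S803; note that in the patent's example the first and third preset angle ranges coincide, and the flag state is what disambiguates them:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

class SitUpCounter:
    """Flag bits advance 1 -> 2 -> 3 as S801-S803 fire; S804/S805 count and reset."""

    def __init__(self):
        self.flags = [0, 0, 0]  # preset first/second/third flag bits (0 = unfinished)
        self.count = 0          # initial sit-up count is 0

    def update(self, angle_range, kp):
        # S801: first preset angle range, all flags still unfinished
        if angle_range == "first" and self.flags == [0, 0, 0]:
            self.flags[0] = 1
        # S802: second preset angle range, dist(P8, P14) < dist(P10, P8)
        elif (angle_range == "second" and self.flags == [1, 0, 0]
              and dist(kp[8], kp[14]) < dist(kp[10], kp[8])):
            self.flags[1] = 1
        # S803: third preset angle range, first two flags finished
        elif angle_range == "third" and self.flags == [1, 1, 0]:
            self.flags[2] = 1
        # S804 + S805: all three finished, count one sit-up and reset the flags
        if self.flags == [1, 1, 1]:
            self.count += 1
            self.flags = [0, 0, 0]
```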
Still taking the first target user in the first sit-up detection area as an example, fig. 9 is a schematic implementation flowchart of another sit-up counting method provided in an embodiment of the present application, which may include the following steps:
S901: If the preset angle range is the first preset angle range, and the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all unfinished flags, set the preset first flag bit corresponding to the target user in the sit-up detection area to a finished flag.
In the embodiment of the present application, the description of S901 is already given in S801, and is not repeated here.
S902: If the preset angle range is the second preset angle range, the abscissas of the third key point position and the fourth key point position among the target key point positions of the target user in the sit-up detection area meet the preset requirement, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit are both unfinished flags, set the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag.
In the embodiments of the present application, with reference to the human body key points shown in Table 1 and the description of fig. 5, when the first set is determined to be the target set, the device is capturing the right side of the first target user's body, the third key point position is P8, and the fourth key point position is P14. With reference to the included angles corresponding to the user's target key parts shown in Table 2, the second preset angle range may be expressed as: angle_shoulder < 100 degrees, angle_elbow < 90 degrees, 10 degrees < angle_waist < 80 degrees, and 70 degrees < angle_knee < 110 degrees. The unfinished flag of the preset first flag bit, the preset second flag bit and the preset third flag bit is 0, and the finished flag is 1.
When the preset angle range described in condition 1 of S704 is the above range and the abscissa of P8 is to the left of the abscissa of P14, if the preset first flag bit corresponding to the first target user is 1 and the preset second flag bit and the preset third flag bit corresponding to the first target user are both 0, the current posture of the first target user is determined to be the second state, and the preset second flag bit corresponding to the first target user is set to 1. It should be noted that when the second set is determined to be the target set, the device is capturing the left side of the target user's body, and the corresponding abscissa requirement then applies to the second set's third and fourth key point positions.
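Under the same assumptions as the counter sketch above, the fig. 9 variant only swaps the S802 distance test for the S902 abscissa test; a sketch of that single condition (first set as target set):

```python
def second_state_reached_fig9(kp):
    """S902 variant: P8's abscissa must lie to the left of P14's abscissa."""
    return kp[8][0] < kp[14][0]
```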
S903: If the preset angle range is the third preset angle range, the preset first flag bit and the preset second flag bit corresponding to the target user in the sit-up detection area are both finished flags, and the preset third flag bit is an unfinished flag, set the preset third flag bit corresponding to the target user in the sit-up detection area to a finished flag.
S904: If the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are all finished flags, add 1 to the sit-up count of the target user in the sit-up detection area, where the initial value of the sit-up count of the target user in the sit-up detection area is 0.
S905: Set the preset first flag bit, the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
In the embodiment of the present application, descriptions of S903 to S905 are already given in S803 to S805, and are not repeated herein.
The sit-up posture assessment flow for a first target user whose sit-up posture is standard has been described above with reference to fig. 8 and fig. 9. When any one of condition 1, condition 2 and condition 3 in S704 is not satisfied, the sit-up posture of the first target user is determined to be not standard, and all three preset flag bits corresponding to the first target user are set to unfinished flags; that is, the sit-up is not counted, but the sit-up posture is still evaluated.
Fig. 10 is a schematic structural diagram of a sit-up posture assessment apparatus provided in an embodiment of the present application, where the apparatus includes: a detection area determination module 1001, a real-time tracking video acquisition module 1002, a posture data determination module 1003, and a sit-up posture assessment module 1004.
Detection area determination module 1001: configured to receive the real-time video sent by the image acquisition device and determine each sit-up detection area contained in the real-time video;
Real-time tracking video acquisition module 1002: configured to, for any sit-up detection area, determine a target user in the sit-up detection area based on the real-time video, track the target user in the sit-up detection area, and acquire a real-time tracking video;
Posture data determination module 1003: configured to determine, for any image in the real-time tracking video, the posture data of the target user in the sit-up detection area in the image;
Sit-up posture assessment module 1004: configured to compare the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
Fig. 11 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 1100 shown in fig. 11 includes: at least one processor 1101, a memory 1102, at least one network interface 1104, and a user interface 1103. The components of the electronic device 1100 are coupled together by a bus system 1105. It is understood that the bus system 1105 is used to enable communication among these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as the bus system 1105 in fig. 11.
The user interface 1103 may include a display, a keyboard, a pointing device (e.g., a mouse or trackball), a touch pad, or a touch screen, among others.
It will be appreciated that the memory 1102 in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 1102 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 1102 stores elements, executable units or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 11021 and application programs 11022.
The operating system 11021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 11022 contains various applications such as a media player (MediaPlayer), a Browser (Browser), and the like for implementing various application services. Programs that implement methods in accordance with embodiments of the application may be included in application 11022.
In this embodiment of the application, the processor 1101 is configured to execute the method steps provided by the method embodiments by calling a program or instructions stored in the memory 1102, specifically a program or instructions stored in the application programs 11022, for example:
receiving a real-time video sent by image acquisition equipment, and determining each sit-up detection area contained in the real-time video; for any sit-up detection area, determining a target user in the sit-up detection area based on the real-time video, tracking the target user in the sit-up detection area, and acquiring a real-time tracking video; for any image in the real-time tracking video, determining posture data of the target user in the sit-up detection area in the image; and comparing the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
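For exposition, the loop executed by the processor over these steps can be sketched as follows; the four callables are hypothetical placeholders for the detection, tracking, pose-estimation, and comparison steps, which the embodiments implement with their own models:

```python
# Illustrative sketch of the per-frame evaluation loop. The four callables
# are hypothetical placeholders, not APIs of the embodiment.
from typing import Any, Callable, Iterable

def evaluate_stream(
    frames: Iterable[Any],
    detect_areas: Callable[[Any], list],            # sit-up detection areas in the video
    track_target_user: Callable[[Any, Any], Any],   # target user box for one area
    estimate_posture: Callable[[Any, Any], Any],    # posture data for one image
    compare_posture: Callable[[Any, Any], None],    # compare with preset posture data
    preset_posture: dict,                           # preset posture data per area index
) -> None:
    areas = None
    for frame in frames:
        if areas is None:
            areas = detect_areas(frame)  # determine each sit-up detection area once
        for i, area in enumerate(areas):
            user = track_target_user(frame, area)
            if user is None:
                continue  # no target user in this area for this image
            posture = estimate_posture(frame, user)
            compare_posture(posture, preset_posture[i])  # evaluate sit-up posture
```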
The method disclosed in the embodiments of the present application may be applied to, or implemented by, the processor 1101. The processor 1101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 1101 or by instructions in the form of software. The processor 1101 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software units in the decoding processor. The software units may be located in a RAM, a flash memory, a ROM, a PROM or an EEPROM, a register, or another storage medium well known in the art. The storage medium is located in the memory 1102, and the processor 1101 reads the information in the memory 1102 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions of the present application, or a combination thereof.
For a software implementation, the techniques herein may be implemented by means of units performing the functions herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 11, and may execute all steps of the sit-up posture assessment method shown in fig. 1 to fig. 9, so as to achieve the technical effects of that method; for details, please refer to the related descriptions of fig. 1 to fig. 9, which are not repeated herein for brevity.
The embodiment of the application also provides a storage medium (a computer-readable storage medium). The storage medium stores one or more programs. The storage medium may include volatile memory, such as a random access memory; it may also include non-volatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid-state disk; and it may also include a combination of the above kinds of memory.
When the one or more programs in the storage medium are executed by one or more processors, the sit-up posture evaluation method performed on the electronic device side as described above is implemented.
The processor is configured to execute the sit-up posture evaluation program stored in the memory, so as to implement the following steps of the sit-up posture evaluation method performed on the electronic device side:
receiving a real-time video sent by image acquisition equipment, and determining each sit-up detection area contained in the real-time video; for any sit-up detection area, determining a target user in the sit-up detection area based on the real-time video, tracking the target user in the sit-up detection area, and acquiring a real-time tracking video; for any image in the real-time tracking video, determining posture data of the target user in the sit-up detection area in the image; and comparing the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in a random access memory (RAM), a flash memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments describe the objects, technical solutions, and advantages of the present application in further detail. It should be understood that the above are merely exemplary embodiments of the present application and are not intended to limit the scope of the present application; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application shall be included in the scope of the present application.

Claims (11)

1. A sit-up posture assessment method applied to an edge calculation server, the method comprising:
receiving a real-time video sent by image acquisition equipment, and determining each sit-up detection area contained in the real-time video;
aiming at any sit-up detection area, determining a target user in the sit-up detection area based on a real-time video, tracking the target user in the sit-up detection area, and acquiring a real-time tracking video;
determining posture data of a target user in the sit-up detection area in any image in a real-time tracking video;
and comparing the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area so as to evaluate the sit-up posture of the target user in the sit-up detection area.
2. The method of claim 1, wherein the determining the target user in the sit-up detection area based on real-time video comprises:
performing frame extraction on the real-time video to obtain a real-time image;
determining a rectangular frame where each user is located in the real-time image;
determining the intersection ratio of the rectangular frame where each user is located in the real-time image and the sit-up detection area;
determining the rectangular frame with the largest intersection ratio as a target rectangular frame corresponding to the sit-up detection area;
and determining that the user corresponding to the target rectangular frame is the target user corresponding to the sit-up detection area.
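A minimal sketch of this intersection-ratio (IoU) matching follows; the (x1, y1, x2, y2) box format and the helper names are assumptions for illustration:

```python
# Illustrative sketch of claim 2: match each sit-up detection area to the
# user whose rectangular frame has the largest intersection ratio with it.
# Boxes are assumed to be (x1, y1, x2, y2) tuples.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

def target_user_box(user_boxes, detection_area):
    # The rectangular frame with the largest intersection ratio is the
    # target rectangular frame for this sit-up detection area.
    return max(user_boxes, key=lambda b: iou(b, detection_area))
```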
3. The method of claim 1, wherein determining posture data of a target user in the sit-up detection area in any image of the real-time tracking video comprises:
determining a sit-up key point position of a target user in the sit-up detection area in the image aiming at any image in a real-time tracking video;
screening target key point positions of target users in the sit-up detection area from sit-up key point positions of the target users in the sit-up detection area;
and determining an included angle of a key part of the target user in the sit-up detection area in the image according to the position of the target key point of the target user in the sit-up detection area.
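One common way to compute such an included angle from keypoint coordinates is sketched below; which key points serve as the vertex and endpoints (for example, shoulder, hip, and knee) is an assumption, since the claim leaves the key part unspecified:

```python
import math

# Illustrative sketch of computing the included angle at a key part from
# key point positions (claim 3). The choice of vertex and endpoint points
# is an assumed example.

def included_angle(p_a, p_vertex, p_b):
    """Angle in degrees at p_vertex formed by rays toward p_a and p_b."""
    v1 = (p_a[0] - p_vertex[0], p_a[1] - p_vertex[1])
    v2 = (p_b[0] - p_vertex[0], p_b[1] - p_vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0  # degenerate: coincident points
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))
```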
4. The method of claim 3, wherein the screening target key point positions of the target user in the sit-up detection area from among the sit-up key point positions of the target user in the sit-up detection area comprises:
dividing the sit-up key point positions of the target users in the sit-up detection area into a first set and a second set according to a preset rule;
determining a first mean value of confidence degrees corresponding to sit-up key point positions of target users in the sit-up detection area contained in the first set;
determining a second mean value of confidence degrees corresponding to the sit-up key point positions of the target users in the sit-up detection area contained in the second set;
determining a target set from the first set and the second set according to the first mean and the second mean;
and screening the target key point positions of the target users in the sit-up detection area from the target set according to sit-up screening rules.
5. The method of claim 4, wherein determining the target set from the first set and the second set according to the first mean and the second mean comprises:
comparing the magnitude between the first mean and the second mean;
if the first mean value is larger than the second mean value, determining the first set as a target set;
and if the first average value is smaller than the second average value, determining the second set as a target set.
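A minimal sketch of the two-set screening in claims 4 and 5 follows; the split rule shown (left-side versus right-side key points) and the keypoint tuple layout are assumed examples of the preset rule:

```python
# Illustrative sketch of claims 4 and 5: divide the key points into two
# sets by a preset rule, average each set's confidences, and keep the set
# with the larger mean. The left/right naming split is an assumed rule.

def choose_target_set(keypoints):
    """keypoints: list of (name, (x, y), confidence) tuples; names are
    assumed to be prefixed 'left_' or 'right_'."""
    first = [k for k in keypoints if k[0].startswith("left_")]
    second = [k for k in keypoints if k[0].startswith("right_")]
    mean_first = sum(k[2] for k in first) / len(first) if first else 0.0
    mean_second = sum(k[2] for k in second) / len(second) if second else 0.0
    # Claim 5: the set with the larger confidence mean is the target set.
    return first if mean_first > mean_second else second
```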
6. The method of any of claims 3 to 5, wherein the target keypoint location comprises: a first key point position, a second key point position and a third key point position;
the comparing the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area, comprises:
for the included angle corresponding to the key part of the target user in the sit-up detection area: if the included angle is within the preset angle range corresponding to the sit-up detection area, the target user in the sit-up detection area holds his head with both hands, and the distance between the first key point position and the second key point position among the target key point positions of the target user in the sit-up detection area is smaller than the distance between the second key point position and the third key point position, determining that the sit-up posture of the target user in the sit-up detection area is standard.
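Condition by condition, the test in claim 6 might look like the following sketch; the hands-on-head predicate is assumed to be decided elsewhere, and the key point arguments follow the claim's first/second/third naming:

```python
import math

# Illustrative sketch of the three-part standard-posture test in claim 6.
# hands_on_head is assumed to be computed elsewhere (e.g., from wrist and
# head key points); kp1, kp2, kp3 are the first/second/third key points.

def _dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def posture_is_standard(angle, preset_range, hands_on_head, kp1, kp2, kp3):
    lo, hi = preset_range
    in_range = lo <= angle <= hi                # condition 1: angle within preset range
    closer = _dist(kp1, kp2) < _dist(kp2, kp3)  # condition 3: first-second < second-third
    return in_range and hands_on_head and closer  # condition 2: both hands hold the head
```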
7. The method of claim 6, wherein the target key point positions further comprise: a fourth key point position;
after determining that the sit-up posture of the target user in the sit-up detection area is standard, the method further includes:
if the preset angle range is a first preset angle range, and the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area are all unfinished flags, setting the preset first flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a second preset angle range, the distance between the third key point position and the fourth key point position among the target key point positions of the target user in the sit-up detection area is smaller than the distance between the second key point position and the third key point position, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are both unfinished flags, setting the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a second preset angle range, the abscissa of the third key point position and the abscissa of the fourth key point position among the target key point positions of the target user in the sit-up detection area meet a preset requirement, the preset first flag bit corresponding to the target user in the sit-up detection area is a finished flag, and the preset second flag bit and the preset third flag bit corresponding to the target user in the sit-up detection area are both unfinished flags, setting the preset second flag bit corresponding to the target user in the sit-up detection area to a finished flag;
or,
if the preset angle range is a third preset angle range, the preset first flag bit and the preset second flag bit corresponding to the target user in the sit-up detection area are both finished flags, and the preset third flag bit corresponding to the target user in the sit-up detection area is an unfinished flag, setting the preset third flag bit corresponding to the target user in the sit-up detection area to a finished flag;
if the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area are all finished flags, adding 1 to the sit-up count of the target user in the sit-up detection area, wherein an initial value of the sit-up count of the target user in the sit-up detection area is 0;
and setting the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
8. The method of claim 7, further comprising:
if the included angle corresponding to the key part of the target user in the sit-up detection area is not within the preset angle range corresponding to the sit-up detection area, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags;
if the target user in the sit-up detection area does not hold his head with both hands, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags;
and/or,
if the distance between the first key point position and the second key point position among the target key point positions of the target user in the sit-up detection area is not smaller than the distance between the second key point position and the third key point position, determining that the sit-up posture of the target user in the sit-up detection area is not standard, and setting the preset first flag bit, the preset second flag bit, and the preset third flag bit corresponding to the target user in the sit-up detection area to unfinished flags.
9. An apparatus for evaluating a sit-up posture, applied to an edge calculation server, the apparatus comprising:
a detection area determination module, configured to receive a real-time video sent by an image acquisition device, and determine each sit-up detection area contained in the real-time video;
a real-time tracking video acquisition module, configured to, for any sit-up detection area, determine a target user in the sit-up detection area based on the real-time video, track the target user in the sit-up detection area, and acquire a real-time tracking video;
a posture data determination module, configured to determine, for any image in the real-time tracking video, posture data of the target user in the sit-up detection area in the image;
and a sit-up posture evaluation module, configured to compare the posture data of the target user in the sit-up detection area with the preset posture data corresponding to the sit-up detection area, so as to evaluate the sit-up posture of the target user in the sit-up detection area.
10. An electronic device, comprising: a processor and a memory, the processor being configured to execute a program stored in the memory to implement the sit-up posture assessment method of any one of claims 1 to 8.
11. A storage medium characterized in that the storage medium stores one or more programs executable by one or more processors to implement the sit-up posture assessment method according to any one of claims 1 to 8.
CN202210615431.3A 2022-05-31 2022-05-31 Sit-up posture evaluation method and device, electronic equipment and storage medium Pending CN115171208A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210615431.3A CN115171208A (en) 2022-05-31 2022-05-31 Sit-up posture evaluation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115171208A true CN115171208A (en) 2022-10-11

Family

ID=83483911

Country Status (1)

Country Link
CN (1) CN115171208A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815907A (en) * 2019-01-25 2019-05-28 深圳市象形字科技股份有限公司 A kind of sit-ups attitude detection and guidance method based on computer vision technique
CN111368810A (en) * 2020-05-26 2020-07-03 西南交通大学 Sit-up detection system and method based on human body and skeleton key point identification
CN112464715A (en) * 2020-10-22 2021-03-09 南京理工大学 Sit-up counting method based on human body bone point detection
CN113398556A (en) * 2021-06-28 2021-09-17 浙江大学 Push-up identification method and system
CN113850248A (en) * 2021-12-01 2021-12-28 中科海微(北京)科技有限公司 Motion attitude evaluation method and device, edge calculation server and storage medium
CN114120204A (en) * 2021-12-01 2022-03-01 中科海微(北京)科技有限公司 Sit-up posture assessment method, sit-up posture assessment device and storage medium
CN114140722A (en) * 2021-12-01 2022-03-04 中科海微(北京)科技有限公司 Pull-up movement evaluation method and device, server and storage medium
CN114140721A (en) * 2021-12-01 2022-03-04 中科海微(北京)科技有限公司 Archery posture evaluation method and device, edge calculation server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221011