CN114998986A - Computer vision-based pull-up action specification intelligent identification method and system - Google Patents


Info

Publication number
CN114998986A
CN114998986A (application CN202210539016.4A)
Authority
CN
China
Prior art keywords
pull
standard
testee
computer vision
counting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210539016.4A
Other languages
Chinese (zh)
Inventor
姜成
孙嘉琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanbian University
Original Assignee
Yanbian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanbian University filed Critical Yanbian University
Priority to CN202210539016.4A priority Critical patent/CN114998986A/en
Publication of CN114998986A publication Critical patent/CN114998986A/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776: Validation; Performance evaluation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a computer vision-based method and system for intelligently identifying whether pull-up actions meet the standard. The motion trajectories of the human-body key-point coordinates are computed to judge whether the testee completes a standard pull-up. The invention measures accurately, wastes no personnel, and is low in cost. The system is developed with the open-source OpenPose framework and AI detection technology, which improves the universality and practicability of the technical scheme. On this basis, a recognition algorithm model with adjustable difficulty is provided: when the testee is in the pull-up state, an action that meets the qualification standard is judged a valid action and counted; if the standard is not met, the reason the repetition was not counted is displayed. Different pull-up evaluation standards are set according to the pull-up human-body key points and joint-angle characteristics. The invention can be widely applied to different detection scenarios, with low use cost and high detection precision.

Description

Pull-up action standard intelligent identification method and system based on computer vision
Technical Field
The invention belongs to the technical field of machine vision and image processing, and relates to a human-body-key-point-based method and system for unattended intelligent identification of pull-up action standards and counting.
Background
In recent years, the pull-up has been listed as an indispensable item of physical ability testing for male students in middle schools, high schools and universities. At present, pull-up tests lack a unified, quantitative evaluation standard; traditional manual scoring carries a degree of subjectivity that can affect test accuracy and fairness, and the test difficulty cannot be adjusted to the requirements of the scenario. Large-scale physique tests also suffer from complicated test procedures and a low degree of automation and intelligence, so test efficiency is low.
To improve the automation and efficiency of the pull-up test, one approach measures the position of the arm on the X, Y and Z axes with a wearable wireless attitude sensor worn by the subject and counts pull-ups indirectly (CN101716417A, human pull-up wireless test system), but attitude sensors accumulate error, which leads to inaccurate counting. A pull-up wireless tester built from a Hall sensor and an STC single-chip microcomputer is simpler and lighter (Zhang Su et al., development and application of a pull-up wireless tester, Mechanical Engineer 2015(10): 183-). Several researchers use infrared sensors to count pull-up actions: each sensor pairs an infrared transmitter with a receiver, and when the beam is blocked by the body a level signal triggers the counting system; however, the installation-position requirements are severe, and whether the chin clears the horizontal bar cannot be identified accurately. An ultrasonic sensor can judge whether a pull-up reaches the standard from the distance between the subject's head and the horizontal bar, but its measurement error is large and distance alone cannot accurately decide whether the action is qualified. Fusing an infrared sensor, a pressure sensor on the bar's side wall and an ultrasonic sensor measures the validity of the pull-up accurately, but still lacks analysis of the action form. In short, traditional sensor-based pull-up measurement systems can count pull-up actions but cannot identify or analyze whether the action is standard, and they are complicated to install and use.
With the development and application of computer vision, studying sports by means of photography and video has become a common research method. One machine-vision pull-up detection system (CN113095461A, a pull-up counter based on machine vision) counts pull-ups with a Raspberry Pi and a monocular camera, but traditional vision detection is strongly affected by illumination and motion conditions, its robustness is unstable, and it cannot meet real-time analysis requirements. Another approach processes captured images with a deep convolutional network and extracts motion features to count pull-ups (CN107122798A, a method and device for detecting pull-up counts based on a deep convolutional network), but it does not examine the details of the action, so it still cannot identify or analyze the normalization of the motion.
Disclosure of the Invention
To overcome the defects of the prior art, and considering the many conditions that arise in a pull-up test, a parameter model with two difficulty levels is designed. It can judge redundant, single or multiple illegal actions during the pull-up, such as the chin not clearing the bar, falling back too fast, swinging, hip lifting and knee bending, and the tester can adjust the difficulty level to the level of the person being tested, which improves the universality and practicability of the invention. From an analysis of the pull-up action specification, six evaluation standards are defined:
(1) The hands grip shoulder-width apart and the body hangs vertically with straight arms.
(2) The arms pull up forcefully until the chin is flush with or exceeds the upper edge of the horizontal bar, completing one repetition.
(3) The arms slowly return to the starting state.
(4) The knees must not bend during the pull-up.
(5) The abdomen must not thrust forward during the pull-up.
(6) The body must not swing during the pull-up.
The invention provides a computer vision-based pull-up action standard intelligent identification method and system. The method uses the human-body key-point coordinates identified by OpenPose as the main information for quantitative analysis of the pull-up motion, and selects 13 body key points as the key identification points of the pull-up action: the left and right wrist, elbow, shoulder, hip, knee and ankle joints, plus the neck. Here w, e, s, n, h, k and a are the English initials of wrist, elbow, shoulder, neck, hip, knee and ankle respectively; the letter after the subscript "_" gives the attribute of the key point, with l denoting the left side, r the right side, and i either of the two sides (Table 1).
Table 1. Key-point position coordinates

Body part   Symbol   Coordinates (left / right)
Wrist       w        (x_w_l, y_w_l) / (x_w_r, y_w_r)
Elbow       e        (x_e_l, y_e_l) / (x_e_r, y_e_r)
Shoulder    s        (x_s_l, y_s_l) / (x_s_r, y_s_r)
Hip         h        (x_h_l, y_h_l) / (x_h_r, y_h_r)
Knee        k        (x_k_l, y_k_l) / (x_k_r, y_k_r)
Ankle       a        (x_a_l, y_a_l) / (x_a_r, y_a_r)
Neck        n        (x_n, y_n)
Standard 1: the hands grip shoulder-width apart in a straight-arm hang.
When the pull-up starts, the wrists and shoulders are kept level, as shown in Fig. 3. The wrist-to-wrist distance L_w and the shoulder-to-shoulder distance L_s are computed (equations 1 and 2), and the absolute value of their difference, ΔL_ws (equation 3), indicates whether the hands are shoulder-width apart. Because every tester's physique and action posture differ, and ΔL_ws is also affected by the shooting distance, ΔL_ws alone cannot effectively judge how closely the grip matches shoulder width. An error-ratio judgment method is therefore proposed: the ratio k_ws of ΔL_ws to the average of the two distances (equation 4) serves as the judgment criterion; the smaller the value, the more standard the action. A qualification threshold δ_1 can be set as required, and the posture is qualified when k_ws ≤ δ_1. The straightness of the arms is judged from the elbow-joint angle θ_we_i: ideally θ_we_i is 180°, but in practice the action is not perfectly standard, so θ_we_i is allowed to lie in the interval of equation 9, where θ_min is the minimum permitted bending angle, adjustable with the difficulty level.
$$L_w = \sqrt{(x_{w\_l} - x_{w\_r})^2 + (y_{w\_l} - y_{w\_r})^2} \quad (1)$$
$$L_s = \sqrt{(x_{s\_l} - x_{s\_r})^2 + (y_{s\_l} - y_{s\_r})^2} \quad (2)$$
$$\Delta L_{ws} = |L_w - L_s| \quad (3)$$
$$k_{ws} = \frac{2\,\Delta L_{ws}}{L_w + L_s} \quad (4)$$
$$L_{we\_i} = \sqrt{(x_{w\_i} - x_{e\_i})^2 + (y_{w\_i} - y_{e\_i})^2} \quad (5)$$
$$L_{es\_i} = \sqrt{(x_{e\_i} - x_{s\_i})^2 + (y_{e\_i} - y_{s\_i})^2} \quad (6)$$
$$L_{ws\_i} = \sqrt{(x_{w\_i} - x_{s\_i})^2 + (y_{w\_i} - y_{s\_i})^2} \quad (7)$$
$$\theta_{we\_i} = \arccos\frac{L_{we\_i}^2 + L_{es\_i}^2 - L_{ws\_i}^2}{2\,L_{we\_i}\,L_{es\_i}} \quad (8)$$
$$\theta_{we\_i} \in [\theta_{min},\ 180°] \quad (9)$$
Standard 2: the arms pull up forcefully until the chin is flush with or exceeds the upper edge of the horizontal bar, completing one repetition.
During the pull-up, the chin can clear the bar once the shoulder joints rise to the height of the corresponding wrist joints on the ordinate axis. The mean vertical distance ΔL_swd between each shoulder and its corresponding wrist is therefore computed (equation 10) and normalized by the average arm length to give the ratio k_swd (equation 11), which serves as the qualification criterion; a threshold δ_2 is set as required, and the repetition is qualified when k_swd ≤ δ_2. Checking in this way avoids counting a testee who merely cranes the chin over the bar.
$$\Delta L_{swd} = \frac{|y_{w\_l} - y_{s\_l}| + |y_{w\_r} - y_{s\_r}|}{2} \quad (10)$$
$$k_{swd} = \frac{2\,\Delta L_{swd}}{L_{ws\_l} + L_{ws\_r}} \quad (11)$$
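Under the reading above (the repetition counts when the shoulders rise to roughly wrist height), a sketch reusing the distance helper from the Standard 1 sketch; delta2 is an illustrative threshold.

```python
def standard2_ok(kp, delta2=0.1):
    """Standard 2: pulled high enough for the chin to clear the bar.

    The mean vertical shoulder-to-wrist gap, normalized by the mean arm
    length, must fall below delta2 (illustrative value).
    """
    dl_swd = (abs(kp['w_l'][1] - kp['s_l'][1]) +
              abs(kp['w_r'][1] - kp['s_r'][1])) / 2      # eq. (10)
    arm = (distance(kp['w_l'], kp['s_l']) +
           distance(kp['w_r'], kp['s_r'])) / 2           # mean arm length
    return dl_swd / arm <= delta2                        # eq. (11)
```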
Standard 3: the arms slowly return to the starting state.
Returning to the starting posture means the arms fall back to the initial hanging state and again satisfy the requirements of Standard 1. The mean angular velocity ω_e of the elbow joint is observed (equation 12); a qualification threshold δ_3 is set as required, and the descent is qualified when ω_e ≤ δ_3.
$$\omega_e = \frac{\left|\theta_{we}^{(n)} - \theta_{we}^{(1)}\right|}{t} \quad (12)$$
where n is the total number of frames spent returning from the top of the pull-up to the initial state, t is the time spent returning to the initial state, and $\theta_{we}^{(i)}$ is the elbow-joint angle at frame i.
Standard 4: the knees must not bend during the pull-up.
Whether the knee bends is judged from the knee-joint angle θ_k_i (equation 13). A qualification threshold δ_4 is set as required, and the action is qualified when θ_k_i ≥ δ_4; experimental statistics show that a value of 150° is suitable.
$$\theta_{k\_i} = \arccos\frac{L_{ak\_i}^2 + L_{kh\_i}^2 - L_{ah\_i}^2}{2\,L_{ak\_i}\,L_{kh\_i}} \quad (13)$$
where L_ak_i denotes the distance from the ankle key point to the knee key point, L_kh_i the distance from the knee key point to the hip key point, and L_ah_i the distance from the ankle key point to the hip key point.
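The knee check reduces to one call of the joint_angle helper sketched under Standard 1 (the angle at the knee in the ankle-knee-hip chain, eq. (13)); the 150° default echoes the experimental value given above.

```python
def standard4_ok(kp, delta4=150.0, side='l'):
    """Standard 4: no knee bending; the knee angle must stay >= delta4 (deg)."""
    theta_k = joint_angle(kp[f'a_{side}'], kp[f'k_{side}'], kp[f'h_{side}'])
    return theta_k >= delta4
```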
Standard 5: the abdomen must not thrust forward during the pull-up.
A hip-joint angle that bows sharply forward indicates obvious abdomen thrusting, and the wave-swing (kipping) technique can also be used this way to get over the bar. Whether the abdomen thrusts is judged from the hip-joint angle θ_h_i (equation 14). A qualification threshold δ_5 is set as required, and the action is qualified when θ_h_i ≥ δ_5; experimental statistics show that a value of 165° is suitable.
$$\theta_{h\_i} = \arccos\frac{L_{kh\_i}^2 + L_{hs\_i}^2 - L_{sk\_i}^2}{2\,L_{kh\_i}\,L_{hs\_i}} \quad (14)$$
where L_kh_i denotes the distance from the knee key point to the hip key point, L_hs_i the distance from the hip key point to the shoulder key point, and L_sk_i the distance from the shoulder key point to the knee key point.
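Likewise for the hip: the same joint_angle helper applied to the knee-hip-shoulder chain gives θ_h_i of eq. (14), with the 165° default taken from the experimental statistics above.

```python
def standard5_ok(kp, delta5=165.0, side='l'):
    """Standard 5: no abdomen thrust; the hip angle must stay >= delta5 (deg)."""
    theta_h = joint_angle(kp[f'k_{side}'], kp[f'h_{side}'], kp[f's_{side}'])
    return theta_h >= delta5
```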
Standard 6: the body must not swing during the pull-up.
If knee bending or abdomen thrusting occurs during a swing, it is detected by the rules above; the focus here is the swinging phenomenon in which the shoulders, hips, knees and ankles keep an essentially unchanged relative posture. During swinging the ankle travels over a wide range along the X axis, so the ratio k_as of the ankle joint's X-axis range of motion to the shoulder width measures the degree of swing (equation 15). A qualification threshold δ_6 is set as required, and the action is qualified when k_as ≤ δ_6.
$$k_{as} = \frac{x_{a}^{max} - x_{a}^{min}}{L_s} \quad (15)$$
where $x_{a}^{max}$ and $x_{a}^{min}$ are the maximum and minimum X-axis coordinates of the ankle joint during the pull-up.
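A sketch of the swing check over a whole repetition, reusing the distance helper from the Standard 1 sketch and assuming the per-frame ankle x coordinates have been collected; delta6 is illustrative.

```python
def standard6_ok(ankle_xs, kp, delta6=0.5):
    """Standard 6: no swinging; ankle x-range relative to shoulder width.

    ankle_xs: per-frame x coordinates of one ankle over the repetition.
    """
    x_range = max(ankle_xs) - min(ankle_xs)             # eq. (15) numerator
    k_as = x_range / distance(kp['s_l'], kp['s_r'])     # eq. (15)
    return k_as <= delta6
```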
The system for applying the computer vision-based pull-up action specification intelligent identification method comprises two cameras and a processor;
the two cameras synchronously acquire images by adopting multiple threads; the first camera is positioned right in front of the horizontal bar, and the second camera is positioned in side front; the two cameras are used for collecting images when a testee enters a test area;
the processor is used for carrying out face recognition on the collected image, transmitting the image information back to the database and confirming the identity of the testee; collecting human skeleton key points of a testee by utilizing an OpenPose algorithm; counting is started when the judgment accords with the counting starting standard by analyzing the coordinate change of the key points of the human body in the image; when the qualified standard meeting the corresponding difficulty level is judged, qualified counting is carried out; and when the wrist joint of the tested person is recognized to be separated from the horizontal bar, the test is prompted to be finished. And the scores of the testees are gathered and uploaded to the terminal, and the testees can inquire the scores of the testees on the client.
The advantage of the invention is an intelligent pull-up action standard identification method built on the deep combination of AI and computer vision, developed with the open-source OpenPose framework and AI vision detection technology on the basis of deep-learning human key points and pose estimation. It identifies key points of the human body such as the shoulders, elbows, wrists, hips, knees and ankles, and judges whether an action is qualified from the testee's joint angles, pull-up time and descent time. Judging the validity and normalization of the pull-up action from the human key-point positions and joint-angle changes greatly improves accuracy.
According to the different ages and athletic ability levels of testees, the product provides an adjustable two-level difficulty parameter model, identifies non-standard actions during the pull-up such as knee bending, hip lifting and swinging, and offers a convenient, practical, ready-to-use detection technology for pull-up tests in different scenarios. The invention reduces the cost of the pull-up test in manpower, material resources, money and time.
Compared with existing pull-up detection equipment, the system is simple to operate, fast in detection, low in equipment cost and high in detection efficiency, and can meet detection requirements under different conditions.
Drawings
Fig. 1 is a schematic view of the camera installation and fields of view: (a) on-site camera orientation, (b) the front camera's view, (c) the 45° front-left camera's view.
FIG. 2 is a flow chart of the overall scheme of the present invention.
Fig. 3 is a schematic diagram of key points of a pull-up process.
Fig. 4 shows examples of pull-up key-point detection: (a) a standard pull over the bar, front view; (b) restoration to the starting posture, front view; (c) failure to return to a normal hang, front view; (d) knee bending, side view; (e) abdomen thrusting, side view; (f) swinging, side view.
FIG. 5 is a flowchart of a pull-up difficulty algorithm process.
Detailed Description
The product of the invention is explained further below with reference to the accompanying drawings.
The computer vision-based pull-up action specification intelligent identification method comprises the following steps:
step 1, using 2 cameras to synchronously shoot testees at different angles, wherein the pixel resolution of the cameras is as follows: 1280 × 720, acquisition frequency: 30 Hz. A schematic diagram of the camera placement is shown in fig. 1. The first camera is used for shooting the testee, and can acquire the position and angle change information of key points of important parts such as shoulders, elbows, knees and the like of the testee, particularly the relative position relation between the chin and the upper edge of the horizontal bar, which is a main evaluation standard for counting the chin upwards. In order to enhance the accuracy of the pull-up assessment and counting algorithm, a second camera is arranged in the left front 45-degree direction of a tester, on one hand, the swing information of the whole body of the tester can be collected and used for auxiliary detection of redundant actions of pull-up, and on the other hand, the conditions of missing detection and false detection of key points of a human body by the first camera can be effectively compensated. The camera placement is to satisfy the following two conditions: (1) the camera needs to be horizontally placed, and the height of the camera is flush with half of the height of the horizontal bar; (2) the camera needs to keep a proper distance from the horizontal bar, so that the testee and the horizontal bar can be completely kept in a window of the camera. In order to ensure the time synchronization and consistency of two paths of video data key point results acquired by two cameras, an algorithm adds a global timestamp to each picture during data acquisition, and takes the two pictures with the minimum global time difference of the two paths of data as matching pictures at the same moment.
Step 2: 25 human-body key points are acquired with the OpenPose algorithm in BODY_25 format (see Fig. 3), of which the 13 listed above are selected as the evaluation key points, and the human skeleton key points are identified accurately. The two industrial cameras are connected to a notebook computer.
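For reference, a sketch of how the 13 evaluation key points could be pulled out of an OpenPose BODY_25 result; the indices follow the published BODY_25 layout, and the symbol names follow the Table 1 conventions used throughout this description.

```python
# BODY_25 indices of the 13 evaluation key points (published OpenPose layout).
BODY25_INDEX = {
    'n': 1,                           # neck
    's_r': 2, 'e_r': 3, 'w_r': 4,     # right shoulder, elbow, wrist
    's_l': 5, 'e_l': 6, 'w_l': 7,     # left shoulder, elbow, wrist
    'h_r': 9, 'k_r': 10, 'a_r': 11,   # right hip, knee, ankle
    'h_l': 12, 'k_l': 13, 'a_l': 14,  # left hip, knee, ankle
}

def select_keypoints(pose):
    """pose: one OpenPose person, shape (25, 3) rows of (x, y, confidence).
    Returns the 13-point dict consumed by the standard checks above."""
    return {name: (float(pose[i][0]), float(pose[i][1]))
            for name, i in BODY25_INDEX.items()}
```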
The recognition technology in this embodiment has a degree of anti-interference capability and can recognize the testee's various states of walking, standing, hanging and dismounting inside the designated area. After the testee enters the test area and gets into position, face recognition is performed, the testee's personal information is retrieved from the database, and a voice prompt tells the testee to start the test.
In this example, according to the standard, the hanging posture requires both hands to grip the bar (palms forward, thumb opposed to the other four fingers), shoulder-width apart, in a straight-arm hang.
In this example, the testee pulls up with both arms until the mandible is flush with or exceeds the upper edge of the horizontal bar; this is a mandatory item of the parameter model for completing one repetition (see Fig. 4). A repetition is counted as qualified when the mandible is flush with or exceeds the upper bar edge and no illegal action occurs.
In this example, after one repetition is counted, the testee slowly descends and returns to the starting posture. This link checks two standards: restoration to the starting posture, and slow descent. Repeated observation experiments show that testees with weak upper-limb muscle groups often fail to straighten the elbow when falling back, so the method checks whether the elbow-joint angle on the ordinate axis exceeds 130°. This option can be adjusted in the parameter model, so the tester can tune the difficulty to the testee's actual situation: below 130° lowers the difficulty, above 130° raises it.
Slow descent is implemented concretely as follows: the elbow-joint angle is recorded when the fall-back condition is first met and again when the angle reaches 130°; the difference between the two angles divided by the elapsed time gives the elbow angular velocity. Repeated experiments set the default threshold at 160°/s, which accommodates the descent speed of most students; raising the threshold above 160° lowers the difficulty, lowering it below 160° raises it.
The product also evaluates asymmetric shoulder force during the pull-up: a testee with weak ability easily pulls with asymmetric shoulder force. The judgment standard subtracts the two elbow-joint angles during the pull; the closer the difference is to 0, the more standard the action. This option can likewise be adjusted in the parameter model.
Illegal actions such as knee bending, hip lifting and swinging are judged with the second camera. Because of the limitations of the OpenPose algorithm, the second camera is placed 45° to the testee's front left, so the apparent knee-bending and abdomen-thrust angles change and need correction. Repeated observation shows knee bending is most evident at 150° (below 150° lowers the difficulty, above 150° raises it), and abdomen thrusting is most evident at 165°, which also identifies whether a testee uses the wave-swing (kipping) technique to get over the bar (below 165° lowers the difficulty, above 165° raises it). Both options can be adjusted in the parameter model. Swinging here refers to the body rocking back and forth while the shoulders, hips, knees and ankles keep their relative posture unchanged.
In this example, because the ankle joint swings over a large range during non-standard movement, the range of the ankle joint from leftmost to rightmost on the abscissa axis is recognized; repeated observation shows a threshold of 300 is most suitable (above 300 lowers the difficulty, below 300 raises it). Finally, when the equipment recognizes that the testee has dismounted, the score is automatically returned to the terminal, where the testee can query it.
Traditional methods attend only to the pull-up count and lack a unified, quantitative way to evaluate the normalization of the pull-up technique. Combining the six quantifiable evaluation standards above, a difficulty-grading method is proposed that satisfies test scenarios with different difficulty requirements: the common mode suits low-requirement school pull-up tests, while the professional mode suits special posts with strict, specialized assessment standards. Graded by the motion characteristics of the pull-up and the professional level the test requires, the evaluation-and-counting model has good flexibility and applicability. The pull-up check is divided into action evaluation and action counting; the specific flow is shown in Fig. 5. Constraint conditions are added according to the difficulty requirement so that the standard of the action can be judged further, rather than counting merely on the chin reaching or passing the horizontal bar. The common-difficulty indices comprise hands shoulder-width apart in a straight-arm hang, the chin pulled above the bar, and the arms slowly restored to the starting posture; the professional difficulty adds judgments of abdomen thrusting, knee bending and swinging. The judgment threshold of each item can be adjusted to the actual test conditions.
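A sketch of how the two difficulty levels could be encoded as threshold tables: the numeric values echo those discussed in this description, while the structure, names and the feature dictionary are illustrative assumptions, not the patented parameter model itself.

```python
DIFFICULTY = {
    'common': {                  # school-test mode: counting criteria only
        'theta_min': 130.0,      # elbow extension on return (deg)
        'omega_max': 160.0,      # max elbow angular velocity on descent (deg/s)
    },
    'professional': {            # strict mode: adds form criteria
        'theta_min': 130.0,
        'omega_max': 160.0,
        'knee_min': 150.0,       # min knee angle (deg)
        'hip_min': 165.0,        # min hip angle (deg)
        'ankle_range_max': 300,  # max ankle range on the abscissa
    },
}

def evaluate_rep(features, level='common'):
    """Return (counted, reasons) for one repetition's measured features."""
    cfg, reasons = DIFFICULTY[level], []
    if features['theta_we_min'] < cfg['theta_min']:
        reasons.append('arms not restored to a straight hang')
    if features['omega_e'] > cfg['omega_max']:
        reasons.append('fell back too fast')
    if 'knee_min' in cfg and features['theta_k_min'] < cfg['knee_min']:
        reasons.append('knee bending')
    if 'hip_min' in cfg and features['theta_h_min'] < cfg['hip_min']:
        reasons.append('abdomen thrust / kipping')
    if 'ankle_range_max' in cfg and features['ankle_x_range'] > cfg['ankle_range_max']:
        reasons.append('body swinging')
    return len(reasons) == 0, reasons
```

When a repetition fails, the reasons list provides the kind of miscount explanation that, per the abstract, is displayed to the testee.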
Videos were collected according to the pull-up action specification. To verify the effectiveness of the algorithm model of the OpenPose-based pull-up action standard intelligent recognition system, the study selected 208 male college students as test subjects and collected a dataset for experiments and verification. Three segments of synchronized video were collected per subject; after removing 4 invalid pairs, the experiment collected 620 valid pairs of front and side synchronized videos, 1240 videos in total. From these, 2940 pairs of clips were cut cumulatively as base samples, each a video of a single complete pull-up. Screened by the professional difficulty grade, 1288 pairs of synchronized videos are qualified-action samples and 1652 pairs are unqualified-action samples. Fifty percent of the qualified and of the unqualified pairs were randomly selected, 1470 pairs in total, for modeling the pull-up evaluation and counting parameters; the remaining 1470 pairs serve as the test set for evaluating model performance.
Performance of the parameter classification model is evaluated with the confusion matrix. First, the four measures of the confusion matrix are computed: TP (True Positive), FP (False Positive), FN (False Negative) and TN (True Negative) (Table 2):
Table 2. Confusion matrix

                        Judged qualified   Judged unqualified
Actually qualified      TP                 FN
Actually unqualified    FP                 TN
From the experimental TP, FP, FN and TN results, Accuracy, Precision, Recall and F-measure are then computed from the four measures. The results are shown in Table 3.
$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$$
$$Precision = \frac{TP}{TP + FP}$$
$$Recall = \frac{TP}{TP + FN}$$
$$F\text{-}measure = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}$$
Table 3. Parameter classification model performance

Accuracy   Precision   Recall   F-measure
0.971      0.978       0.957    0.967
Accuracy is the proportion of pull-up actions the model judges correctly as qualified or unqualified (TP + TN) in all the data. Precision is the proportion of truly qualified actions among those the model judges qualified. Recall is the proportion of all qualified actions that the model judges correctly; its value is 0.957. The F-measure, the weighted harmonic mean of Precision and Recall commonly used to evaluate classification models, integrates the Precision and Recall results.
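The four measures translate directly into code; a minimal sketch:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, Precision, Recall and F-measure from the confusion matrix."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure
```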
The embodiment of the invention uses the OpenPose framework, based on a deep-learning algorithm, to detect human key points in video in real time and, combining the pull-up specification requirements of the National Physical Exercise Standard work instruction manual (2020 edition) with the key motion characteristics of the pull-up, provides a parameterized model for two-level difficulty assessment and counting. The performance evaluation results for accuracy, precision, recall and F-measure are 0.971, 0.978, 0.957 and 0.967 respectively, fully demonstrating the effectiveness and reliability of the method, which can be widely applied to pull-up test scenarios with different difficulty standards.
Although the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.

Claims (10)

1. A computer vision-based pull-up action specification intelligent identification method, characterized in that the method is based on two cameras and a processor; the two cameras synchronously acquire images using multiple threads; the method run by the processor comprises the following steps:
when a testee enters the test area, the two cameras collect images; face recognition is performed on the images, the information is sent back to the database, and the testee's identity is confirmed; the testee's human skeleton key points are collected with the OpenPose algorithm, and the testee is then prompted to start the test;
by analyzing the coordinate changes of the human-body key points in the images, counting starts when the start-counting standard is judged to be met;
by analyzing the coordinate changes of the human-body key points in the images, a qualified repetition is counted when the qualification standard of the corresponding difficulty level is judged to be met;
and when the testee's wrist joints are recognized to have left the horizontal bar, the end of the test is prompted.
2. The computer vision-based pull-up action specification intelligent recognition method according to claim 1, characterized in that the OpenPose algorithm collects 13 body key points of the testee, comprising the left and right wrist joints, left and right elbow joints, left and right shoulder joints, left and right hip joints, left and right knee joints, left and right ankle joints and the neck, as the key recognition points of the pull-up action, as shown in Table 1; the letter after the field subscript "_" denotes the attribute of the key point, l denoting the left side and r the right side;
Table 1. Key-point position coordinates

Body part   Symbol   Coordinates (left / right)
Wrist       w        (x_w_l, y_w_l) / (x_w_r, y_w_r)
Elbow       e        (x_e_l, y_e_l) / (x_e_r, y_e_r)
Shoulder    s        (x_s_l, y_s_l) / (x_s_r, y_s_r)
Hip         h        (x_h_l, y_h_l) / (x_h_r, y_h_r)
Knee        k        (x_k_l, y_k_l) / (x_k_r, y_k_r)
Ankle       a        (x_a_l, y_a_l) / (x_a_r, y_a_r)
Neck        n        (x_n, y_n)
3. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the start-counting standard is: the hands grip shoulder-width apart in a straight-arm hang; specifically:
$$L_w = \sqrt{(x_{w\_l} - x_{w\_r})^2 + (y_{w\_l} - y_{w\_r})^2} \quad (1)$$
$$L_s = \sqrt{(x_{s\_l} - x_{s\_r})^2 + (y_{s\_l} - y_{s\_r})^2} \quad (2)$$
$$\Delta L_{ws} = |L_w - L_s| \quad (3)$$
$$k_{ws} = \frac{2\,\Delta L_{ws}}{L_w + L_s} \quad (4)$$
$$L_{we\_i} = \sqrt{(x_{w\_i} - x_{e\_i})^2 + (y_{w\_i} - y_{e\_i})^2} \quad (5)$$
$$L_{es\_i} = \sqrt{(x_{e\_i} - x_{s\_i})^2 + (y_{e\_i} - y_{s\_i})^2} \quad (6)$$
$$L_{ws\_i} = \sqrt{(x_{w\_i} - x_{s\_i})^2 + (y_{w\_i} - y_{s\_i})^2} \quad (7)$$
$$\theta_{we\_i} = \arccos\frac{L_{we\_i}^2 + L_{es\_i}^2 - L_{ws\_i}^2}{2\,L_{we\_i}\,L_{es\_i}} \quad (8)$$
$$\theta_{we\_i} \in [\theta_{min},\ 180°] \quad (9)$$
wherein θ_we_i is the elbow-joint angle, θ_min is the minimum permitted bending angle, and i denotes one of the left and right sides;
the start-counting standard is met when k_ws ≤ δ_1 and θ_we_i satisfies equation (9), where δ_1 is threshold A.
4. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the qualification standards comprise: the two arms pull up forcefully until the chin is flush with or exceeds the upper edge of the horizontal bar, completing one repetition; specifically:
$$\Delta L_{swd} = \frac{|y_{w\_l} - y_{s\_l}| + |y_{w\_r} - y_{s\_r}|}{2} \quad (10)$$
$$k_{swd} = \frac{2\,\Delta L_{swd}}{L_{ws\_l} + L_{ws\_r}} \quad (11)$$
the standard of pulling up until the chin is flush with or exceeds the upper edge of the horizontal bar is met once when k_swd ≤ δ_2, where δ_2 is threshold B.
5. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the qualification standards comprise: the two arms slowly return to the starting state; specifically:
$$\omega_e = \frac{\left|\theta_{we}^{(n)} - \theta_{we}^{(1)}\right|}{t} \quad (12)$$
wherein n denotes the total number of frames spent returning from the top of the pull-up to the initial state, t denotes the time spent returning to the initial state, and $\theta_{we}^{(i)}$ denotes the elbow-joint angle at frame i;
the standard of the two arms slowly returning to the starting state is met when ω_e ≤ δ_3, where δ_3 is threshold C.
6. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the qualification standards comprise: the knees must not bend during the pull-up; specifically:
$$\theta_{k\_i} = \arccos\frac{L_{ak\_i}^2 + L_{kh\_i}^2 - L_{ah\_i}^2}{2\,L_{ak\_i}\,L_{kh\_i}} \quad (13)$$
wherein L_ak_i denotes the distance from the ankle key point to the knee key point, L_kh_i the distance from the knee key point to the hip key point, L_ah_i the distance from the ankle key point to the hip key point, and i denotes one of the left and right sides;
the standard that the knees must not bend during the pull-up is met when θ_k_i ≥ δ_4, where δ_4 is threshold D.
7. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the qualification standards comprise: the abdomen must not thrust forward during the pull-up; specifically:
$$\theta_{h\_i} = \arccos\frac{L_{kh\_i}^2 + L_{hs\_i}^2 - L_{sk\_i}^2}{2\,L_{kh\_i}\,L_{hs\_i}} \quad (14)$$
wherein L_kh_i denotes the distance from the knee key point to the hip key point, L_hs_i the distance from the hip key point to the shoulder key point, L_sk_i the distance from the shoulder key point to the knee key point, and i denotes one of the left and right sides;
the standard that the abdomen must not thrust forward during the pull-up is met when θ_h_i ≥ δ_5, where δ_5 is threshold E.
8. The computer vision-based pull-up action specification intelligent recognition method according to claim 2, wherein the qualification standards comprise: the body must not swing during the pull-up; specifically:
$$k_{as} = \frac{x_{a}^{max} - x_{a}^{min}}{L_s} \quad (15)$$
wherein $x_{a}^{max}$ and $x_{a}^{min}$ denote the maximum and minimum X-axis coordinates of the ankle joint during the pull-up;
the standard that the body must not swing during the pull-up is met when k_as ≤ δ_6, where δ_6 is threshold F.
9. The computer vision-based pull-up action specification intelligent identification method according to claim 1, wherein the two cameras collect images separately, a global timestamp is added to every frame, and the two frames with the minimum global time difference across the two data streams are taken as the matching pictures for the same moment.
10. A system applying the computer vision-based pull-up specification intelligent recognition method according to any one of claims 1 to 9, wherein the system comprises two cameras and a processor;
the two cameras synchronously acquire images using multiple threads; the first camera is positioned directly in front of the horizontal bar, and the second camera to the front-left; the two cameras collect images when a testee enters the test area;
the processor performs face recognition on the collected images, sends the information back to the database, and confirms the testee's identity; collects the testee's human skeleton key points with the OpenPose algorithm; starts counting when analysis of the key-point coordinate changes in the images shows that the start-counting standard is met; counts a qualified repetition when the qualification standard of the corresponding difficulty level is met; and prompts the end of the test when the testee's wrist joints are recognized to have left the horizontal bar.
CN202210539016.4A 2022-05-18 2022-05-18 Computer vision-based pull-up action specification intelligent identification method and system Pending CN114998986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210539016.4A CN114998986A (en) 2022-05-18 2022-05-18 Computer vision-based pull-up action specification intelligent identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210539016.4A CN114998986A (en) 2022-05-18 2022-05-18 Computer vision-based pull-up action specification intelligent identification method and system

Publications (1)

Publication Number Publication Date
CN114998986A true CN114998986A (en) 2022-09-02

Family

ID=83027004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210539016.4A Pending CN114998986A (en) 2022-05-18 2022-05-18 Computer vision-based pull-up action specification intelligent identification method and system

Country Status (1)

Country Link
CN (1) CN114998986A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115138059A (en) * 2022-09-06 2022-10-04 南京市觉醒智能装备有限公司 Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
CN115138059B (en) * 2022-09-06 2022-12-02 南京市觉醒智能装备有限公司 Pull-up standard counting method, pull-up standard counting system and storage medium of pull-up standard counting system
CN116563951A (en) * 2023-07-07 2023-08-08 东莞先知大数据有限公司 Method, device, equipment and storage medium for determining horizontal bar suspension action specification
CN116563951B (en) * 2023-07-07 2023-09-26 东莞先知大数据有限公司 Method, device, equipment and storage medium for determining horizontal bar suspension action specification

Similar Documents

Publication Publication Date Title
CN114998986A (en) Computer vision-based pull-up action specification intelligent identification method and system
CN111368791B (en) Pull-up test counting method and system based on Quick-OpenPose model
CN109948459A (en) A kind of football movement appraisal procedure and system based on deep learning
CN112287759A (en) Tumble detection method based on key points
CN107103298A (en) Chin-up number system and method for counting based on image procossing
WO2024051597A1 (en) Standard pull-up counting method, and system and storage medium therefor
CN110755085B (en) Motion function evaluation method and equipment based on joint mobility and motion coordination
CN105740779A (en) Method and device for human face in-vivo detection
CN114973401A (en) Standardized pull-up assessment method based on motion detection and multi-mode learning
Park et al. Imagery based parametric classification of correct and incorrect motion for push-up counter using OpenPose
CN113856186B (en) Pull-up action judging and counting method, system and device
CN113288452B (en) Operation quality detection method and device
CN113974612A (en) Automatic assessment method and system for upper limb movement function of stroke patient
CN109271845A (en) Human action analysis and evaluation methods based on computer vision
KR102369359B1 (en) Image-based intelligent push-up discrimination method and system
CN116740618A (en) Motion video action evaluation method, system, computer equipment and medium
CN115068919B (en) Examination method of horizontal bar project and implementation device thereof
CN114639168B (en) Method and system for recognizing running gesture
CN116189301A (en) Standing long jump motion standardability assessment method based on attitude estimation
CN115331304A (en) Running identification method
CN115937969A (en) Method, device, equipment and medium for determining target person in sit-up examination
Shell et al. Is a head-worn inertial sensor a valid tool to monitor swimming?
CN115641646B (en) CPR automatic detection quality control method and system
Rum et al. Automatic Event Identification of Para Powerlifting Bench Press with a Single Inertial Measurement Unit
CN113378772B (en) Finger flexible detection method based on multi-feature fusion

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination