CN114565976A - Training intelligent test method and device - Google Patents
Training intelligent test method and device Download PDFInfo
- Publication number: CN114565976A (application CN202210203733.XA)
- Authority
- CN
- China
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology; G06N3/045—Combinations of networks
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/08—Learning methods
Abstract
The invention provides an intelligent training test method and device. The method comprises the following steps: performing personnel verification with face recognition technology, then applying a visual model to the collected training video for human body posture recognition, action-standard judgment, and automatic timing and counting; and establishing a training data model from the judgment results using big-data techniques, for training evaluation and guided improvement. By applying machine vision throughout the training test process, the method reduces manual effort, improves the accuracy of scores, and prevents unfair situations such as cheating and substitute examinees.
Description
Technical Field
The invention relates to the field of sports training, and in particular to an intelligent training test method and device.
Background
Machine vision technology is widely applied in industries such as food and beverage, cosmetics, pharmaceuticals, building materials, chemical engineering, metal processing, electronics manufacturing, packaging, and automobile manufacturing, replacing human vision for detection, measurement, and control. It mainly uses a computer to simulate the human visual function: information is extracted from an image of an object, processed and understood, and finally used for actual detection, measurement, and control. Machine vision encompasses digital image processing, deep learning, pattern recognition, illumination, optical imaging, sensor, analog and digital video, computer software and hardware, and human-machine interface technologies.
However, machine vision technology is still rarely applied in the field of sports training. In particular, for parallel-bars training (bar-end dips and swinging dips), most training tests rely on manual identity verification, manual judgment of action standards, and manual score statistics. This manual approach is prone to cheating and substitute examinees, inaccurate scores, unfairness, and heavy labor costs.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
In view of this, the invention discloses an intelligent training test method and device. By applying machine vision throughout the training test process, they reduce manual effort, improve the accuracy of scores, and prevent unfair situations such as cheating and substitute examinees.
Specifically, the invention is realized by the following technical scheme:
in a first aspect, the invention discloses an intelligent training test method, which comprises the following steps:
performing personnel verification with face recognition technology, then applying a visual model to the collected training video for human body posture recognition, action-standard judgment, and automatic timing and counting;
and establishing a training data model from the judgment results using big-data techniques, for training evaluation and guided improvement.
In a second aspect, the present invention discloses an intelligent training test device, which comprises:
a test module: used for performing personnel verification with face recognition technology, evaluating the collected training video with a visual model, recognizing the human body posture, and comparing and judging it against the standard posture;
a guidance module: used for establishing a training data model from the judgment results with big-data techniques, for training guidance.
In a third aspect, the invention discloses a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the intelligent training test method according to the first aspect.
In a fourth aspect, the invention discloses a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the steps of the intelligent training test method according to the first aspect.
In summary, during actual operation the trainee only needs to scan his or her face on the tablet for identity verification, then enter the preparation area following the tablet's voice prompt. The operator taps the start command on the tablet, a voice broadcast announces the start, and the trainee begins training. Cameras are mounted beside the parallel-bar uprights; machine-vision video analysis and deep learning are used to record and collect video of the training process and to analyse movement posture and judge action standards. If the shoulder joints are higher than the elbow joints when the arms bend, or the elbow joints are not straightened, the action count fails and a voice prompt is given; otherwise the action is counted successfully with a voice prompt. If the feet touch the ground or an upright, the examination ends and the score is announced by voice. The software platform then evaluates and grades the collected movement data and, combined with the evaluation model, gives targeted suggestions for improving training, making the method convenient to operate and highly accurate.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a schematic flow chart of the intelligent training test method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the evaluation and recognition of human body postures provided by an embodiment of the present invention;
FIG. 3 is a diagram of the network architecture for evaluating and recognizing human body postures according to an embodiment of the present invention;
FIGS. 4-5 are diagrams illustrating the effect of the intelligent training test method according to an embodiment of the present invention;
FIG. 6 is a graph of limb joint points provided by an embodiment of the present invention;
FIG. 7 is a block diagram of the intelligent training test device provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when", or "in response to a determination", depending on the context.
Referring to fig. 1, the invention discloses an intelligent training test method, comprising the following steps:
S1, performing personnel verification with face recognition technology, then applying a visual model to the collected training video for human body posture recognition, action-standard judgment, and automatic timing and counting;
S2, establishing a training data model from the judgment results using big-data techniques, for training evaluation and guided improvement.
In step S1, person verification uses face recognition: facial feature information of the athletes is collected in advance for face modeling, and a unified face information base is constructed. At test time, face feature data is extracted by scanning the face on the tablet and transmitted to the model base for comparison; the comparison result and personnel information are fed back, completing identity verification.
After personnel verification, the training video is evaluated and recognized with the visual model: the joint points of the human body are located, marked, and linked on the video; the video is decomposed into frames to generate pictures; the postures and joint motion amplitudes of the movement are analysed and recognized from the pictures; the joint motion amplitude data of each complete action is collected and recorded; and the amplitude data is compared against the action-standard rules.
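The comparison of measured joint motion amplitudes against an action-standard rule can be sketched as follows (an illustrative sketch only; the function names, keypoint layout, and angle thresholds are assumptions, not values taken from the specification):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 2D keypoints a-b-c,
    e.g. shoulder-elbow-wrist."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    # clamp against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def rep_is_valid(elbow_angle_bottom, elbow_angle_top,
                 max_bottom=90.0, min_top=160.0):
    """A dip repetition counts only if the elbow bends to at most
    max_bottom degrees at the bottom and straightens to at least
    min_top degrees at the top (both thresholds are assumed)."""
    return elbow_angle_bottom <= max_bottom and elbow_angle_top >= min_top
```

For example, a repetition with a 75-degree bottom angle and a 170-degree top angle would count, while one that only bends to 110 degrees would not.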
Specifically, the method for recognizing the human body gesture comprises the following steps:
scaling the original bitmap object of the training video, calling a function from the PoseNet library to obtain a Person object, and scaling the bitmap back to screen size;
drawing a new bitmap on a Canvas object, drawing the skeleton on the Canvas at the key-point positions obtained from the Person object, displaying only key points whose confidence exceeds a specific threshold, and outputting the display with a single SurfaceView;
the SurfaceView is displayed on the screen by capturing, locking, and drawing on the View canvas.
The detailed principle is that, after a series of convolutional neural networks, a confidence map of the joint points and a vector map of the limbs are obtained; combining the two yields the newly drawn skeleton. A schematic of the convolutional neural network is shown in fig. 2:
1. Figure (a) is the input data, an RGB image. After a series of convolutional neural networks, (b) and (c) are obtained simultaneously.
2. Figure (b) shows the joint-point confidence maps: there is one confidence map for each joint type to be detected, and the same joint of multiple persons may appear in one image. In (b), the left confidence map detects left elbows and the right one detects the left shoulders of two persons.
3. Figure (c) shows the vector maps of the limbs; each limb corresponds to two maps.
4. Combining (b) and (c) gives (d), the connection of the joint points of one limb.
5. Combining all (d) gives (e), the result of connecting all the limbs to be detected.
The network structure of the specific convolution process is shown in fig. 3. As fig. 3 shows, F is the feature map obtained after the original image passes through the first 10 layers of VGG19. The confidence map S of the joint points is obtained through the Branch1 network and the vector map L of the limbs through the Branch2 network; Stage1 yields S^1 and L^1, and from Stage2 onward the input of the Stage t network consists of the confidence map S^{t−1} and vector map L^{t−1} produced by the previous stage together with the feature map F:

S^t = ρ^t(F, S^{t−1}, L^{t−1}),  L^t = φ^t(F, S^{t−1}, L^{t−1}),  t ≥ 2
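The multi-stage refinement described above can be sketched in outline as follows (a minimal illustration; `stage1` and `later_stages` stand in for the trained Branch1/Branch2 sub-networks, which are not specified here, and a channel-first layout is assumed):

```python
import numpy as np

def run_stages(F, stage1, later_stages):
    """Iteratively refine confidence maps S and vector maps L.
    `stage1` maps the feature map F -> (S, L); each entry of
    `later_stages` maps the channel-wise concatenation [F, S, L]
    -> (S, L), mirroring S^t = rho^t(F, S^{t-1}, L^{t-1})."""
    S, L = stage1(F)
    for stage_t in later_stages:
        S, L = stage_t(np.concatenate([F, S, L], axis=0))
    return S, L
```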
the first 10 layers of VGG19 are the portions circled in table 1 below:
Table 1: VGG19 network structure (table not reproduced in this text; the first ten layers are the circled portion)
Each Stage network yields two losses; the loss functions of the Stage t network are:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j^*(p)||²
f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c^*(p)||²

where S_j^* and L_c^* are the ground-truth confidence map and vector map labeled from the dataset data, and W(p) is a binary mask: W(p) is set to 0 when a joint point is not annotated in the dataset, or when the joint point cannot form a limb during model training, which prevents such points from adding spurious "error" to the loss. The total loss function is:

f = Σ_{t=1}^{T} (f_S^t + f_L^t)
confidence map for loss calculation in model trainingIs generated from data of the COCO dataset,representing the confidence map of a certain joint j, since we need to detect 19 joints, each picture needs to generate 19 confidence maps. The effect graph after training is shown in fig. 4, the left graph in fig. 4 is the original image, the right graph is the confidence graph of all the generated joint points, the zoom graph of the original image is taken as the background for comparison, and the true confidence graph of the joint points (all the joint points) is shown in the right graph in fig. 5.
Assume the joint point x_{j,k} = (x, y) is the location of joint j of person k. Let A(x0, y0) be the upper-left corner and B(x1, y1) the lower-right corner of a rectangle Z whose center is the joint point, with β the half-width of the window. With width and height the width and height of the picture, x0, y0, x1 and y1 are defined as follows:
x0=int(max(0,x-β))
y0=int(max(0,y-β))
x1=int(min(width,x+β))
y1=int(min(height,y+β))
then, after traversing each coordinate point p in the rectangle Z, finding the points p and xj,kThe value of the coordinate point p in the confidence map is obtainedThe formula is as follows:
As shown in fig. 6, let x_{j1,k} and x_{j2,k} be the coordinates of joint points j1 and j2 of limb c of person k in a dataset picture. If a point p lies on the limb, L*_{c,k}(p) is the unit vector v pointing from j1 toward j2; otherwise L*_{c,k}(p) is 0:

L*_{c,k}(p) = v if p is on limb (c, k), and 0 otherwise, where v = (x_{j2,k} − x_{j1,k}) / ||x_{j2,k} − x_{j1,k}||.

A point p is considered to be on the limb when it satisfies:

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k}  and  |v⊥ · (p − x_{j1,k})| ≤ σ_l

where l_{c,k} is the length of the limb, σ_l is the width of the limb, and v⊥ is the unit vector perpendicular to v.
In actual calculation a specific threshold is added, so a constant th is set. Assuming the joint points at the two ends of the limb have coordinates (x1, y1) and (x2, y2), th is subtracted from and added to these coordinates to obtain two new points (x1 − th, y1 − th) and (x2 + th, y2 + th). Taking these two points as the upper-left and lower-right corners yields a rectangle Z; each pixel point p of the rectangle is traversed, the distance dist from p to the limb is computed, and if dist < th, the point p is considered to belong to the limb and stores the direction cosine and direction sine of limb c. Each joint point thus corresponds to one joint-point confidence map, and each limb corresponds to two vector maps holding, respectively, the direction cosine and the direction sine of the limb.
Suppose d_{j1} and d_{j2} are two detected candidate joint points to be connected as a limb c. Sampling between them along the vector map L_c measures the confidence of their connection:

E = ∫_0^1 L_c(p(u)) · (d_{j2} − d_{j1}) / ||d_{j2} − d_{j1}|| du

where p(u) = (1 − u)·d_{j1} + u·d_{j2} interpolates the coordinate points between d_{j1} and d_{j2}; in practice the integral is approximated by sampling u at equidistant values. The greater the computed E, the greater the probability that the two joint points should be connected.
After step S1, in step S2 a big-data technique is used to establish a training data model for training guidance.
In step S2, before the big-data analysis, a method for detecting whether the human body touches a line is also involved. A line is drawn on the ground or on a bar upright within the camera's view; machine-vision video analysis obtains the real-time video stream, decomposes it into frames, and temporarily stores the generated pictures. Image processing then analyses each picture for a line touch; if one is found, the picture is captured, the data is pushed and reported to the software platform, the training ends with a voice prompt, and the platform counts the successful repetitions as the final score.
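The frame-level touch-line check can be sketched as a simple background-difference test (a hypothetical illustration; the specification does not fix a particular image-processing method, and the line mask, thresholds, and background frame here are assumptions):

```python
import numpy as np

def touched_line(frame, line_mask, bg, diff_thresh=30, min_pixels=20):
    """Count the pixels on the marked line whose grayscale brightness
    differs from a background reference frame by more than diff_thresh;
    flag a touch when at least min_pixels of them changed."""
    diff = np.abs(frame.astype(int) - bg.astype(int))
    changed_on_line = np.count_nonzero((diff > diff_thresh) & line_mask)
    return changed_on_line >= min_pixels
```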
Then, after the above steps are finished, big-data analysis is used to establish a motion-training data model, a training knowledge-base model, and an expert-guidance model; training data analysis and pattern comparison are combined to produce targeted suggestions for improving training.
Finally, using video streaming technology, the camera records the whole movement process; the joint points of the human body are located, marked, and linked on the video; video segments can be cut and stored; playback is supported, and the drawn joint lines can be seen in the replayed video.
In practical application, the software required by the scheme of the invention only comprises an intelligent training platform and an AI algorithm; the specific differences among the mobile-terminal application modes are as follows:
1. Mobile-phone APP: combined with the intelligent training management platform, training evaluation, score recording, and data analysis are performed with intelligent-sensing technology; the mobile APP quickly supports scenarios such as checking the daily training plan, mobile training, an exercise circle, score query, ranking query, evaluation guidance, and data analysis.
2. Tablet APP: used on the training field to provide face-recognition identity verification, command issuing, timing, score counting, and video recording; it provides daily training reports, assessment-report score query, and score-ranking checks, as well as training details, video playback, and diagnosis and guidance suggestions.
3. Intelligent training management platform: connects the front-end hardware and the algorithm system, uniformly collects and records the relevant training data, and provides software management and data analysis for basic training, including: training plans, training files, training examinations, training scores, evaluation guidance, system management, and so on.
4. AI algorithm: uses machine vision and machine learning to customize and iterate algorithms for different training scenarios, providing human body posture recognition, a big-data evaluation model, personnel marking, personnel tracking, touch-line recognition, rule judgment, data acquisition, and other capabilities.
The specific software workflow can be as follows:
1. The athlete scans his or her face for verification and check-in; if verification fails a voice prompt is given, otherwise the personnel information is displayed on the tablet and the athlete is prompted by voice to enter the designated area.
2. The operator issues the command, which is broadcast by voice, and the athlete starts training; if during the process the shoulder joints are higher than the elbow joints when the arms bend, or the elbow joints are not straightened when the arms extend, the action count fails with a voice prompt; otherwise the action count succeeds with a voice prompt.
3. If the feet touch the ground or an upright, the examination ends and the score is announced by voice.
In short, the intelligent training test method provided by the invention replaces manual examination with machine-learning-based intelligent recognition, reducing cheating, improving efficiency and fairness, cutting the large amount of manpower otherwise required, and accumulating training and examination data that supports more scientific training and guidance.
In addition, the present invention further provides an intelligent training test device, as shown in fig. 7, specifically including:
the test module 101: used for performing personnel verification with face recognition technology, evaluating the collected training video with a visual model, recognizing the human body posture, and comparing and judging it against the standard posture;
the guidance module 102: used for establishing a training data model from the judgment results with big-data techniques, for training guidance.
The device mainly comprises these two modules; through them, the whole training process is evaluated and guidance suggestions are produced by machine learning, which is convenient and highly accurate.
In specific implementation, the above modules may be implemented as independent entities, or may be combined arbitrarily to be implemented as the same or several entities, and specific implementation of the above units may refer to the foregoing method embodiments, which are not described herein again.
Fig. 8 is a schematic structural diagram of a computer device disclosed in the present invention. Referring to fig. 8, the computer device 400 includes at least a memory 402 and a processor 401; the memory 402 is connected to the processor through a communication bus 403 for storing computer instructions executable by the processor 401, and the processor 401 is configured to read the computer instructions from the memory 402 to implement the steps of the method for training intelligent test according to any of the above embodiments.
For the above-mentioned apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal magnetic disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.
Finally, it should be noted that: while this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only exemplary of the present disclosure and should not be taken as limiting the disclosure, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.
Claims (8)
1. An intelligent training test method is characterized by comprising the following steps:
performing personnel verification with face recognition technology, then applying a visual model to the collected training video for human body posture recognition, action-standard judgment, and automatic timing and counting;
and establishing a training data model from the judgment results using big-data techniques, for training evaluation and guided improvement.
2. The intelligent training test method according to claim 1, wherein the human body posture recognition method comprises:
scaling the original bitmap object of the training video, calling a function from the PoseNet library to obtain a Person object, and scaling the bitmap back to screen size;
drawing a new bitmap on a Canvas object, drawing the skeleton on the Canvas at the key-point positions obtained from the Person object, displaying only key points whose confidence exceeds a specific threshold, and outputting the display with a single SurfaceView;
the SurfaceView is displayed on the screen by capturing, locking, and drawing on the View canvas.
3. The method as claimed in claim 2, wherein the original bitmap object passes through a series of convolutional neural networks to obtain a confidence map of the joint points and a vector map of the limbs, and the confidence map and the vector map are combined to obtain the newly drawn skeleton.
4. The intelligent training test method of claim 3, wherein the series of convolutional neural networks comprises:
the feature map F obtained from the original bitmap is passed through the Branch1 network to obtain the confidence map S of the joint points and through the Branch2 network to obtain the vector map L of the limbs; Stage1 yields S^1 and L^1, and from Stage2 onward the input of the Stage t network consists of the confidence map S^{t−1} and vector map L^{t−1} from the previous stage together with the feature map F:

S^t = ρ^t(F, S^{t−1}, L^{t−1}),  L^t = φ^t(F, S^{t−1}, L^{t−1}),  t ≥ 2;

each Stage network yields two losses, and the loss functions of the Stage t network are:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j^*(p)||²
f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c^*(p)||²

in the above formulas, S_j^* and L_c^* are the ground-truth confidence map and vector map labeled from the dataset data, and W(p) is a binary mask; the total loss function is:

f = Σ_{t=1}^{T} (f_S^t + f_L^t)

wherein the ground-truth confidence map S_j^* and vector map L_c^* used for the loss calculation are generated from data of the COCO dataset;
assume the joint point x_{j,k} = (x, y) is the location of joint j of person k; let A(x0, y0) be the upper-left corner and B(x1, y1) the lower-right corner of a rectangle Z whose center is the joint point, with β the half-width of the window; let width and height be the width and height of the picture; x0, y0, x1, y1 are defined as follows:
x0=int(max(0,x-β))
y0=int(max(0,y-β))
x1=int(min(width,x+β))
y1=int(min(height,y+β));
then each coordinate point p in the rectangle Z is traversed, and the distance between p and x_{j,k} determines the value of the confidence map at p:

S*_{j,k}(p) = exp(−||p − x_{j,k}||² / σ²);

all persons k possessing joint j are combined by a per-pixel maximum to form the confidence map of joint point j:

S_j^*(p) = max_k S*_{j,k}(p);
Suppose x_{j1,k} and x_{j2,k} represent the coordinates of the joint points j1 and j2 of the limb c of person k in the dataset picture. If the point p lies on the limb, L*_{c,k}(p) is the unit vector v pointing from j1 to j2; otherwise L*_{c,k}(p) is 0. The formula is as follows:

L*_{c,k}(p) = v if p lies on limb (c, k), and 0 otherwise,
where v = (x_{j2,k} − x_{j1,k}) / ||x_{j2,k} − x_{j1,k}||

wherein the point p satisfies the following formula:

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k} and |v⊥ · (p − x_{j1,k})| ≤ σ_l

wherein l_{c,k} is the length of the limb, σ_l is the width of the limb, and v⊥ is the unit vector perpendicular to v.
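A minimal NumPy sketch of the limb vector map (part affinity field) described above; the width parameter σ_l and the vectorized membership test are illustrative assumptions:

```python
import numpy as np

def limb_paf(j1, j2, width, height, sigma_l=5.0):
    """Ground-truth vector map L*_{c,k} for one limb c of one person k.

    A pixel p receives the unit vector v from j1 to j2 iff
    0 <= v.(p - j1) <= l_{c,k}  and  |v_perp.(p - j1)| <= sigma_l.
    Returns an (height, width, 2) array; off-limb pixels stay 0.
    """
    j1 = np.asarray(j1, float); j2 = np.asarray(j2, float)
    l = np.linalg.norm(j2 - j1)            # limb length l_{c,k}
    v = (j2 - j1) / l                      # unit vector along the limb
    v_perp = np.array([-v[1], v[0]])       # perpendicular unit vector
    L = np.zeros((height, width, 2))
    ys, xs = np.mgrid[0:height, 0:width]
    rel = np.stack([xs - j1[0], ys - j1[1]], axis=-1)   # p - x_{j1,k}
    along = rel @ v
    across = rel @ v_perp
    on_limb = (along >= 0) & (along <= l) & (np.abs(across) <= sigma_l)
    L[on_limb] = v
    return L
```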
5. The training intelligent test method of claim 4, wherein for the confidence maps S* and vector maps L*, a specific threshold is added in the calculation process; the specific calculation process is as follows:
let th be a constant threshold. The coordinates of the joint points at the two ends of the limb are (x1, y1) and (x2, y2), respectively; subtracting the threshold th from the first and adding it to the second yields two new coordinate points (x1−th, y1−th) and (x2+th, y2+th);
taking these two coordinate points as the upper-left and lower-right corners gives a rectangle Z. Each pixel point p of the rectangle is traversed and the distance dist from the point p to the limb is computed; if dist is less than th, the point p is considered to lie on the limb, and the point p stores the direction cosine and direction sine of the limb c. Each joint point thus has a corresponding joint-point confidence map, and each limb has two vector maps corresponding respectively to the direction cosine and direction sine of the limb, given by the following formula:

cos θ_c = (x2 − x1) / d, sin θ_c = (y2 − y1) / d, where d = √((x2 − x1)² + (y2 − y1)²)
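The thresholded rectangle scan above can be sketched as follows; the scalar threshold th and the perpendicular-distance test are illustrative assumptions:

```python
import numpy as np

def limb_cos_sin_maps(p1, p2, width, height, th=4.0):
    """Fill two maps with the direction cosine and sine of limb c for
    every pixel whose distance to the segment p1-p2 is below th.
    Only the rectangle Z grown by th around the endpoints is scanned."""
    (x1, y1), (x2, y2) = p1, p2
    d = np.hypot(x2 - x1, y2 - y1)
    cos_c, sin_c = (x2 - x1) / d, (y2 - y1) / d
    cos_map = np.zeros((height, width))
    sin_map = np.zeros((height, width))
    x_lo = int(max(0, min(x1, x2) - th)); y_lo = int(max(0, min(y1, y2) - th))
    x_hi = int(min(width, max(x1, x2) + th)); y_hi = int(min(height, max(y1, y2) + th))
    for py in range(y_lo, y_hi):
        for px in range(x_lo, x_hi):
            # perpendicular distance from p to the limb line
            dist = abs(cos_c * (py - y1) - sin_c * (px - x1))
            if dist < th:
                cos_map[py, px] = cos_c
                sin_map[py, px] = sin_c
    return cos_map, sin_map
```

Storing cosine and sine in two separate maps keeps each map scalar-valued, so one confidence map per joint and two scalar maps per limb suffice.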
Suppose d_{j1} and d_{j2} are two detected joint points which, when connected, form the candidate limb c. Samples are taken along the segment between the joint points, and the vector map L_c is integrated along the line segment to measure the possibility of their connection; the calculation formula is as follows:

E = ∫₀¹ L_c(p(u)) · (d_{j2} − d_{j1}) / ||d_{j2} − d_{j1}|| du

wherein p(u) = (1 − u) · d_{j1} + u · d_{j2} is the coordinate point interpolated between the two joint points; in practice the integral is approximated by sampling u at equidistant values. The greater the value of E, the greater the probability that the two joint points should be connected.
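The sampled approximation of the association score E can be sketched as follows; the sample count and rounding to the nearest pixel are illustrative assumptions:

```python
import numpy as np

def association_score(Lc, d1, d2, n_samples=10):
    """Approximate E = integral over u of Lc(p(u)) . v, where v is the
    unit vector from d1 to d2, by equidistant sampling along the segment.

    Lc: (H, W, 2) vector map of limb c; d1, d2: joint coordinates (x, y).
    """
    d1 = np.asarray(d1, float); d2 = np.asarray(d2, float)
    v = (d2 - d1) / np.linalg.norm(d2 - d1)
    E = 0.0
    for u in np.linspace(0.0, 1.0, n_samples):
        p = (1 - u) * d1 + u * d2             # interpolated point p(u)
        E += Lc[int(round(p[1])), int(round(p[0]))] @ v
    return E / n_samples
```

A candidate pair whose connecting segment runs along a limb's vector field scores near 1, while a pair crossing empty background scores near 0, which is what makes E usable for ranking connections.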
6. An intelligent test apparatus using the training intelligent test method of any one of claims 1 to 5, comprising:
a test module, configured to perform personnel verification using face recognition technology, evaluate the collected training video by visual model technology, recognize the human body posture, and compare and judge it against the standard posture;
a guidance module, configured to use big data technology to establish a training data model from the judgment results, for use in training guidance.
7. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed, performs the steps of the training intelligent test method according to any one of claims 1 to 5.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the training intelligent test method according to any one of claims 1 to 5 are performed when the program is executed by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210203733.XA CN114565976A (en) | 2022-03-02 | 2022-03-02 | Training intelligent test method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114565976A true CN114565976A (en) | 2022-05-31 |
Family
ID=81718443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210203733.XA Pending CN114565976A (en) | 2022-03-02 | 2022-03-02 | Training intelligent test method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114565976A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115689819A (en) * | 2022-09-23 | 2023-02-03 | 河北东来工程技术服务有限公司 | Ship emergency training method, system and device and readable storage medium |
CN116934555A (en) * | 2023-09-04 | 2023-10-24 | 福建恒智信息技术有限公司 | Security and elimination integrated management method and device based on Internet of things |
CN116934555B (en) * | 2023-09-04 | 2023-11-24 | 福建恒智信息技术有限公司 | Security and elimination integrated management method and device based on Internet of things |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||