CN118049890A - Training result judging system and method - Google Patents

Publication number: CN118049890A
Application number: CN202410444506.5A
Authority: CN (China)
Prior art keywords: information, penalty, time, item, training
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN118049890B
Inventors: 薛靖松, 薛瑞笙, 李晓
Assignee (original and current): Shandong Jilida Intelligent Equipment Group Co., Ltd.
Priority and filing date: 2024-04-15
Publication of CN118049890A: 2024-05-17
Application granted; publication of CN118049890B: 2024-06-25


Classifications

    • F: MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
      • F41: WEAPONS
        • F41J: TARGETS; TARGET RANGES; BULLET CATCHERS
          • F41J5/00: Target indicating systems; target-hit or score detecting systems
            • F41J5/02: Photo-electric hit-detector systems
            • F41J5/14: Apparatus for signalling hits or scores to the shooter, e.g. manually operated, or for communication between target and shooter; apparatus for recording hits or scores
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N20/00: Machine learning
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00: Arrangements for image or video recognition or understanding
            • G06V10/40: Extraction of image or video features
              • G06V10/56: Extraction of image or video features relating to colour
            • G06V10/70: Recognition or understanding using pattern recognition or machine learning
              • G06V10/764: Using classification, e.g. of video objects
          • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a training result arbitration system and method in the technical field of data management. The method comprises: presetting a competition template; presetting penalty time items and penalty circle items and constructing a behavior model from pre-stored video of historical training; receiving an instruction and starting training; selecting a map and drawing a running or racing motion trajectory map; performing feature recognition on video information received in real time through the behavior model; marking the selected video with the corresponding penalty time item or penalty circle item and sending the marked video to the referee client; receiving the arbitration information; and, according to the penalty time or penalty circles in the arbitration information, filling the trainer's completion time and the time of passing each checkpoint into the arbitration information table and calculating the scores. The system can thereby automatically monitor training and collect score data.

Description

Training result judging system and method
Technical Field
The invention relates to the field of data management, in particular to a training result judging system and method.
Background
A training score arbitration system is an auxiliary system that collects data and summarizes scores for training results. At present, shooting ranges use many training modes. In moving-target training, personnel or vehicles start on multiple tracks in the field and pass through several obstacle points; after the finish is reached, the hit score and the ranking at the finish and at each checkpoint are calculated, and additional points are awarded according to how the personnel performed. The training process is usually covered by cameras and referees who collect training material, and by target-scoring equipment that records each trainer's hit score, which makes it convenient to review and score the trainers.
The prior art described above has the following drawback: when facing moving-target training, the arbitration involves many scoring elements and is difficult to supervise, so the arbitration system needs considerable manpower to collect and sort the data, which wastes time and labor.
Disclosure of Invention
In order to reduce the manpower needed for supervision and for collecting and sorting data when using an arbitration system, the present application provides a training result arbitration system and method.
In a first aspect, the training result arbitration method provided by the application adopts the following technical scheme:
A training score arbitration method, comprising the steps of:
Presetting a competition template, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map;
Presetting penalty time items and penalty circle items, pre-storing video information of historical training, associating the pre-stored video information with a penalty time item or a penalty circle item, extracting the associated features from the pre-stored video information to construct a behavior model, and classifying the behavior model by penalty time item and penalty circle item;
Connecting cameras on the field or on vehicles and receiving the video information they send, and connecting target equipment and receiving the shooting score information it sends;
Receiving an instruction and starting training, selecting the running field simulation map or the racing field simulation map according to the received instruction, setting various types of nodes on the selected map, and receiving input trainer information, vehicle information and referee information;
When the running field simulation map is selected, determining the real-time position of each trainer in the video information according to the trainer information, and combining the real-time positions with the running field model map to draw a running motion trajectory map;
When the racing field simulation map is selected, associating the trainer information with the vehicle information, determining the real-time position of each vehicle in the video information according to the vehicle information, and combining the real-time positions with the racing field model map to draw a racing motion trajectory map;
Connecting the referee client according to the referee information, performing feature recognition on the video information received in real time through the behavior model, marking the selected video information with the corresponding penalty time item or penalty circle item, and sending the marked video information to the referee client;
Receiving the arbitration information sent by the referee client and filling it into the arbitration information table, wherein the arbitration information comprises penalty time items and penalty circle items;
When the received arbitration information contains a penalty time item, adding the corresponding penalty duration to the arbitration information table; when the received arbitration information contains penalty circle items, acquiring the number of penalty circles and sending out that number together with the corresponding trainer information;
Obtaining the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map, adding the penalty duration to the completion time, calculating the shooting score from the received shooting score information and filling it into the shooting score table, and filling the completion time and the checkpoint times into the arbitration information table.
By adopting the above scheme, the arbitration system can automatically build a behavior model from historical arbitration records. After the user selects the training map and related configuration and uploads the information of the trainers, vehicles and referees taking part, the system monitors the training process by video once training starts, connects to the referee client to obtain arbitration results quickly, executes penalty circles automatically when penalties are given, obtains shooting scores from the target equipment, generates trajectory maps from the recorded video, calculates from them the times at which each trainer passes the finish and each checkpoint, and finally generates the arbitration information table and shooting score table automatically, effectively reducing the manpower spent when using the arbitration system.
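As an illustration of the marking step above, a minimal Python sketch is given below; the fixed window size, the behavior_model.predict interface and the send_to_referee callback are assumptions made for the sketch, not details taken from the patent:

```python
# Sketch only: slide a fixed window over incoming frames, classify each window
# with the behavior model, and forward windows whose label names a penalty
# time or penalty circle item to the referee client for confirmation.
from dataclasses import dataclass

WINDOW = 30  # frames per recognition window (assumed value)

@dataclass
class MarkedClip:
    start_frame: int
    end_frame: int
    label: str  # e.g. "penalty_time:false_start" (hypothetical label)

def mark_stream(frames, behavior_model, penalty_labels, send_to_referee):
    buffer, start = [], 0
    for i, frame in enumerate(frames):
        buffer.append(frame)
        if len(buffer) == WINDOW:
            label = behavior_model.predict(buffer)  # assumed interface
            if label in penalty_labels:
                # mark the clip and let the referee client confirm or reject it
                send_to_referee(MarkedClip(start, i, label), list(buffer))
            buffer, start = [], i + 1
```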
Preferably, the step of extracting the associated features in the pre-stored video information to construct a behavior model further includes:
classifying pre-stored video information according to associated penalty time items and penalty circle items;
Cutting out image information which accords with the penalty time item or the penalty circle item in the video information according to the instruction, and extracting motion characteristics and appearance characteristics from the image information;
adding the motion features and the appearance features into a deep learning model for modeling to obtain a behavior model, and labeling the behavior model according to the penalty time items and penalty circle items;
and carrying out model training on the behavior model under the same label through the deep learning model again to obtain a built behavior model.
By adopting this scheme, the behavior model is constructed from both motion features and appearance features, so the video information can be screened comprehensively.
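The patent only says "deep learning model" and does not fix an architecture, so the PyTorch sketch below is one plausible reading of the two-stage construction: stage 1 trains a single classifier over concatenated motion and appearance features, and stage 2 trains a refined one-vs-rest model for each penalty-item label.

```python
import torch
import torch.nn as nn

def make_model(feat_dim, n_classes):
    # small MLP classifier; the real architecture is not specified in the patent
    return nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                         nn.Linear(128, n_classes))

def train(model, x, y, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

def build_behavior_models(x, y, n_labels):
    """x: [N, D] motion+appearance feature tensor; y: [N] penalty-item label ids."""
    base = train(make_model(x.shape[1], n_labels), x, y)       # stage 1: all labels
    refined = {}
    for lbl in range(n_labels):                                # stage 2: per-label pass
        y_bin = (y == lbl).long()                              # this label vs. the rest
        refined[lbl] = train(make_model(x.shape[1], 2), x, y_bin)
    return base, refined
```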
Preferably, the step of extracting motion features and appearance features from image information further includes:
Sequencing image information to obtain a video sequence, tracking a motion track of an object in the video sequence, and detecting and matching feature points in continuous frames by using a feature point detection algorithm to obtain motion features;
Extracting color features, texture features and shape features from the image information according to pixels, extracting the color features by using color space conversion and color quantization technologies, extracting the texture features by using a texture analysis algorithm, detecting the edge contour of the image information, describing the shape features by using a shape descriptor, and merging the color features, the texture features and the shape features to obtain appearance features.
The choice of feature extraction method depends on the specific application scenario and requirements, and different approaches perform differently in different tasks. The feature point detection algorithm is well suited to video of fast-moving objects, and describing appearance through multiple feature types adapts effectively to different trainers and vehicles.
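A hedged OpenCV sketch of this extraction follows. The library choice and parameters are assumptions: ORB matching across consecutive frames stands in for the feature point detection algorithm, an HSV histogram for color space conversion and quantization, edge density as a simple texture proxy (the patent's texture analysis algorithm is not named), and Hu moments as the shape descriptor.

```python
import cv2
import numpy as np

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def motion_feature(prev_gray, cur_gray):
    # detect and match feature points in consecutive frames
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2, dtype=np.float32)
    matches = matcher.match(des1, des2)
    # mean displacement of matched points approximates the object's motion
    disp = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.mean(disp, axis=0).astype(np.float32) if disp else np.zeros(2, np.float32)

def appearance_feature(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)              # color space conversion
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 8],        # coarse color quantization
                        [0, 180, 0, 256]).flatten()
    hist /= hist.sum() + 1e-6
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # edge contour of the image
    texture = edges.mean() / 255.0                          # crude texture proxy
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()        # shape descriptor
    return np.concatenate([hist, [texture], hu]).astype(np.float32)
```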
Preferably, the step of setting various types of nodes on the running field simulation map or the racing field simulation map further comprises:
presetting a starting point node, an end point node and a check point node;
receiving an instruction, and setting the start node, end node and checkpoint nodes on the running field simulation map or the racing field simulation map according to the instruction;
The step of obtaining the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map further comprises:
marking the start node, end node and checkpoint nodes on the running or racing motion trajectory map at the same positions as the corresponding nodes set on the running or racing field simulation map;
and extracting the times at which the motion trajectory passes the start node, end node and checkpoint nodes on the trajectory map, and calculating the trainer's completion time and the time of passing each checkpoint from the extracted times.
By adopting this scheme, the trainer's completion time and the time of passing each checkpoint can be calculated accurately and rapidly through the cooperation of the trajectory map and the nodes.
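As an illustration, a minimal sketch of the node-time extraction is given below; the trajectory is assumed to be a time-ordered list of (timestamp, x, y) marks, and a node counts as passed at the first mark within a pass radius (the radius is an assumed parameter).

```python
import math

def node_pass_time(track, node, radius=2.0):
    # track: [(timestamp, x, y), ...] in time order; node: (x, y)
    for t, x, y in track:
        if math.hypot(x - node[0], y - node[1]) <= radius:
            return t
    return None

def completion_and_checkpoints(track, start, end, checkpoints):
    t0 = node_pass_time(track, start)
    t1 = node_pass_time(track, end)
    completion = None if t0 is None or t1 is None else t1 - t0
    checkpoint_times = {}
    if t0 is not None:
        for i, cp in enumerate(checkpoints):
            tc = node_pass_time(track, cp)
            if tc is not None:
                checkpoint_times[i] = tc - t0   # time from start to checkpoint i
    return completion, checkpoint_times
```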
Preferably, the step of "drawing the running motion trail graph by combining the real-time position and the running field model graph" further includes:
after the real-time position is obtained, marking the current real-time position on the competition field model diagram and recording time;
connecting the real-time positions of all marks on the running field model diagram according to the recording time, fitting connecting lines to form a running motion track, and obtaining a running motion track diagram;
The step of combining the real-time position with the racing field model map to draw a racing track map further comprises the following steps:
after the real-time position is obtained, marking the current real-time position on the racing field model map and recording time;
and connecting the real-time positions of all the marks on the racing field model diagram according to the recording time, fitting the connecting lines to form a racing track, and obtaining a racing track diagram.
By adopting this scheme, the difficulty and cost of continuously tracking the position of a trainer or vehicle in real time are avoided: forming the trajectory map by capturing single-point positions from individual frames and then fitting a line through them is cheaper and simpler to implement.
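A minimal sketch of this single-point-then-fit approach, assuming the marks are (timestamp, x, y) tuples and using linear interpolation as the fitting step (the patent does not name a fitting method):

```python
import numpy as np

def fit_track(marks, step=0.5):
    """marks: unordered list of (timestamp, x, y) single-frame position marks."""
    marks = sorted(marks)                       # connect marks in recorded-time order
    t = np.array([m[0] for m in marks], dtype=float)
    x = np.array([m[1] for m in marks], dtype=float)
    y = np.array([m[2] for m in marks], dtype=float)
    grid = np.arange(t[0], t[-1], step)         # uniform time grid
    # linear interpolation "fits" the connecting line between marks
    return np.column_stack([grid, np.interp(grid, t, x), np.interp(grid, t, y)])
```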
Preferably, the step of "receiving an instruction and starting training" further includes:
And receiving a forbidden instruction, selecting a penalty time item and/or a penalty circle item according to the forbidden instruction, and prohibiting the system from carrying out feature recognition on the selected penalty time item and/or penalty circle item when training is started.
By adopting this scheme, the user can adapt the system to different training subjects by disabling penalty time items and/or penalty circle items.
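A one-function sketch of this filtering, with hypothetical names: recognition results whose label is in the disabled set are simply dropped before any marking happens.

```python
def apply_disable_instruction(recognitions, disabled_items):
    # recognitions: objects with a .label attribute (hypothetical shape)
    return [r for r in recognitions if r.label not in disabled_items]

# e.g. apply_disable_instruction(results, {"penalty_circle:missed_gate"})
```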
Preferably, the method further comprises:
detecting the current device state of each connected camera and target device, and recording the device states;
associating each target device or camera with another camera in whose video that device can appear;
and when a device state shows an error, sending an alarm, retrieving the recorded device states and the video uploaded by the camera associated with that device, and displaying the retrieved state record and video.
By adopting this scheme, the system can monitor the state of every device in real time, and the physical condition of a faulty device can be checked quickly through the pictures of the other cameras, which makes it convenient for users to handle problems in time.
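A hedged sketch of the monitoring loop follows; the polling interface, state encoding and callback names are all assumptions.

```python
import time

def monitor_devices(devices, watcher_camera_of, get_state, fetch_video, alarm, log):
    """watcher_camera_of maps each device to a camera whose video can show it."""
    while True:
        for dev in devices:
            state = get_state(dev)                       # poll current device state
            log.setdefault(dev, []).append((time.time(), state))
            if state == "error":
                cam = watcher_camera_of[dev]
                # raise an alarm with the state record and the watching camera's video
                alarm(dev, log[dev], fetch_video(cam))
        time.sleep(1.0)                                  # assumed polling period
```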
In a second aspect, the training result arbitration system provided by the application adopts the following technical scheme:
A training score arbitration system comprises a master control end and a referee client, wherein the master control end comprises a storage module, a model building module, a video acquisition module, a shooting acquisition module, a training setting module, a data caching module, a trajectory drawing module, a video recognition module, an arbitration receiving module, a penalty circle notification module and a table editing module;
the storage module pre-stores a competition template, video information of historical training, penalty time items and penalty circle items, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map;
the model building module retrieves the historical training video information, penalty time items and penalty circle items stored in the storage module, associates the pre-stored video information with a penalty time item or a penalty circle item, extracts the associated features from the pre-stored video information to construct a behavior model, classifies the behavior model by penalty time item and penalty circle item, and sends the behavior model to the video recognition module;
the video acquisition module connects to the cameras on the field or on vehicles, receives the video information they send, and passes it to the storage module for storage;
the shooting acquisition module connects to the target equipment, receives the shooting score information it sends, and passes it to the storage module for storage;
the training setting module receives an externally input instruction, retrieves the competition template stored in the storage module according to the instruction, selects the running field simulation map or the racing field simulation map in the template, sets various types of nodes on the selected map, and starts training according to the instruction;
the data caching module receives and stores externally input trainer information, vehicle information and referee information;
the trajectory drawing module retrieves the latest video information stored in the storage module and the trainer and vehicle information stored in the data caching module, associates the trainer information with the vehicle information, determines the real-time position of each trainer or vehicle in the video according to the trainer or vehicle information, combines the real-time positions with the running field model map or the racing field model map to draw a running or racing motion trajectory map, and passes the trajectory map to the table editing module;
the video recognition module retrieves the latest video information stored in the storage module, performs feature recognition on it through the behavior model, marks the selected video information with the corresponding penalty time item or penalty circle item, and sends the marked video information to the referee client;
the arbitration receiving module receives the arbitration information sent by the referee client, fills it into the arbitration information table, and forwards it to the penalty circle notification module and the table editing module, the arbitration information comprising penalty time items and penalty circle items;
after the penalty circle notification module receives the arbitration information, it extracts the penalty circle items contained therein, counts them, and sends out the number of penalty circles together with the corresponding trainer information;
the table editing module obtains the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map, adds the penalty duration to the completion time, calculates the shooting score from the received shooting score information and fills it into the shooting score table, and fills the completion time and the checkpoint times into the arbitration information table.
By adopting the above scheme, the arbitration system can automatically build a behavior model from historical arbitration records. After the user selects the training map and related configuration and uploads the information of the trainers, vehicles and referees taking part, the system monitors the training process by video once training starts, connects to the referee client to obtain arbitration results quickly, executes penalty circles automatically when penalties are given, obtains shooting scores from the target equipment, generates trajectory maps from the recorded video, calculates from them the times at which each trainer passes the finish and each checkpoint, and finally generates the arbitration information table and shooting score table automatically, effectively reducing the manpower spent when using the arbitration system.
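As a concrete illustration of the table editing step, the sketch below assembles one row per trainer for the two tables; all field names and the hit-report format are assumptions made for the sketch.

```python
def edit_tables(trainer, completion_s, checkpoint_times, penalty_s, hit_reports):
    # shooting score: sum of ring values reported by the target equipment (assumed format)
    shooting_score = sum(r["ring_value"] for r in hit_reports)
    arbitration_row = {
        "trainer": trainer,
        "completion_time_s": completion_s + penalty_s,   # penalty time added in
        **{f"checkpoint_{i}_s": t for i, t in checkpoint_times.items()},
    }
    shooting_row = {"trainer": trainer, "shooting_score": shooting_score}
    return arbitration_row, shooting_row
```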
Preferably, the model building module comprises a video association unit and a feature construction unit;
the video association unit retrieves the historical training video information, penalty time items and penalty circle items stored in the storage module, classifies the pre-stored video information according to the associated penalty time and penalty circle items, and associates the pre-stored video information with a penalty time item or a penalty circle item;
the feature construction unit cuts out, according to instructions, the image information in the video that matches a penalty time item or penalty circle item, and extracts motion features and appearance features from it. It orders the image information into a video sequence, tracks the motion trajectory of objects in the sequence, and detects and matches feature points in consecutive frames with a feature point detection algorithm to obtain motion features. It extracts color, texture and shape features from the image information pixel by pixel: color features using color space conversion and color quantization techniques, texture features using a texture analysis algorithm, and shape features by detecting the edge contours of the image information and describing them with a shape descriptor; the color, texture and shape features are merged into appearance features. The motion and appearance features are fed into a deep learning model to obtain a behavior model, the behavior model is labeled according to the penalty time and penalty circle items, and model training is performed again on the behavior models under the same label through the deep learning model to obtain the constructed behavior model, which is classified by penalty time item and penalty circle item and sent to the video recognition module.
By adopting this scheme, the feature point detection algorithm is well suited to video of fast-moving objects, and describing appearance through multiple feature types adapts effectively to different trainers and vehicles.
Preferably, the training setting module comprises a template selection unit, a node setting unit and a training control unit;
The template selection unit receives an externally input instruction, retrieves the competition template stored in the storage module according to the instruction, and selects the running field simulation map or the racing field simulation map in the template;
The node setting unit presets a start node, an end node and checkpoint nodes, and sets them on the running field simulation map or the racing field simulation map according to the instruction;
And after receiving the instruction, the training control unit controls related equipment of the training field to start according to the instruction.
By adopting the scheme, the time for the trainer to finish and the time for the trainer to pass through each check point can be accurately and rapidly calculated through the cooperation of the track graph and the nodes.
In summary, the invention has the following beneficial effects:
1. The system can rapidly acquire arbitration results, automatically execute penalty circles when penalties are given, and automatically generate the arbitration information table and the shooting score table, effectively reducing the manpower spent when using the arbitration system.
Drawings
Fig. 1 is an overall system block diagram of a second embodiment of the present application.
FIG. 2 is a block diagram of the model building module according to the second embodiment of the present application.
Fig. 3 is a block diagram of a training setting module according to a second embodiment of the present application.
Reference numerals illustrate:
1. Master control end; 11. storage module; 12. model building module; 121. video association unit; 122. feature construction unit; 13. video acquisition module; 131. shooting acquisition module; 14. training setting module; 141. template selection unit; 142. node setting unit; 143. training control unit; 15. data caching module; 16. trajectory drawing module; 161. video recognition module; 17. arbitration receiving module; 18. penalty circle notification module; 19. table editing module; 2. referee client.
Detailed Description
The application is described in further detail below with reference to fig. 1-3.
The first embodiment of the application discloses a training result judging method, which comprises the following specific steps:
S100, presetting a competition template, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map. A start node, an end node and checkpoint nodes are preset.
S101, presetting a penalty time item and a penalty circle item, pre-storing video information of historical training, and associating the pre-stored video information with the penalty time item or the penalty circle item.
And extracting the associated features in the pre-stored video information to construct a behavior model, and classifying the pre-stored video information according to the associated penalty time items and penalty circle items.
Cutting out image information conforming to the penalty time item or the penalty circle item in the video information according to the instruction, sequencing the image information to obtain a video sequence, tracking the motion trail of an object in the video sequence, and detecting and matching feature points in continuous frames by using a feature point detection algorithm to obtain motion features.
Extracting color features, texture features and shape features from the image information according to pixels, extracting the color features by using color space conversion and color quantization technologies, extracting the texture features by using a texture analysis algorithm, detecting the edge contour of the image information, describing the shape features by using a shape descriptor, and merging the color features, the texture features and the shape features to obtain appearance features.
And adding the motion features and the appearance features into the deep learning model for modeling to obtain a behavior model, and labeling the behavior model according to the penalty time items and penalty circle items.
And carrying out model training on the behavior model under the same label through the deep learning model again to obtain a built behavior model. The behavior model is classified by penalty term and penalty circle term.
S102, connecting cameras on the field or on vehicles and receiving the video information they send, and connecting the target equipment and receiving the shooting score information it sends.
S200, receiving an instruction and starting training; receiving a disable instruction, selecting penalty time items and/or penalty circle items according to the disable instruction, and prohibiting the system from performing feature recognition for the selected items when training starts.
The running field simulation map or the racing field simulation map is selected according to the received instruction, and the start node, end node and checkpoint nodes are then set on the selected map according to a further instruction.
Input trainer information, vehicle information, and referee information is received.
S201, when the running field simulation map is selected, determining the real-time position of each trainer in the video information according to the trainer information. After a real-time position is obtained, the current real-time position is marked on the running field model map and the time is recorded.
All marked real-time positions on the running field model map are then connected in order of recorded time, and the connecting line is fitted to form a running motion trajectory, giving the running motion trajectory map.
S202, when the racing field simulation map is selected, the trainer information is associated with the vehicle information, and the real-time position of each vehicle in the video information is determined from the vehicle information. After a real-time position is obtained, the current real-time position is marked on the racing field model map and the time is recorded.
All marked real-time positions on the racing field model map are then connected in order of recorded time, and the connecting line is fitted to form a racing motion trajectory, giving the racing motion trajectory map.
S300, connecting the referee client 2 according to the referee information, performing feature recognition on the video information received in real time through the behavior model, marking the selected video information with the corresponding penalty time item or penalty circle item, and sending the marked video information to the referee client 2.
S301, receiving the arbitration information sent by the referee client 2 and filling it into the arbitration information table, the arbitration information comprising penalty time items and penalty circle items.
S302, when the received arbitration information contains a penalty time item, adding the corresponding penalty duration to the arbitration information table; when the received arbitration information contains penalty circle items, acquiring the number of penalty circles and sending out that number together with the corresponding trainer information.
S400, marking the start node, end node and checkpoint nodes on the running or racing motion trajectory map at the same positions as the corresponding nodes set on the running or racing field simulation map.
The times at which the motion trajectory passes the start node, end node and checkpoint nodes are extracted from the trajectory map, and the trainer's completion time and the time of passing each checkpoint are calculated from the extracted times. The penalty duration is added to the completion time, the shooting score is calculated from the received shooting score information and filled into the shooting score table, and the completion time and checkpoint times are filled into the arbitration information table.
S500, detecting the current equipment state of the connected camera and target equipment, and recording the equipment state.
Each target device or camera is associated with another camera in whose video information that device can appear.
When a device state shows an error, an alarm is sent, the recorded device states and the video uploaded by the camera associated with that device are retrieved, and the retrieved state record and video are displayed.
The implementation principle of the training result arbitration system and method provided by the embodiments of the application is as follows: the system automatically builds a behavior model from historical arbitration records. After the user selects the training map and related configuration and uploads the information of the trainers, vehicles and referees taking part, the system monitors the training process by video once training starts, connects to the referee client 2 to obtain arbitration results quickly, executes penalty circles automatically when penalties are given, obtains shooting scores from the target equipment, generates trajectory maps from the recorded video, calculates from them the times at which each trainer passes the finish and each checkpoint, and finally generates the arbitration information table and shooting score table automatically, effectively reducing the manpower spent when using the system.
The second embodiment of the application discloses a training result arbitration system. As shown in fig. 1, it comprises a master control end 1 and a referee client 2, wherein the master control end 1 comprises a storage module 11, a model building module 12, a video acquisition module 13, a shooting acquisition module 131, a training setting module 14, a data caching module 15, a trajectory drawing module 16, a video recognition module 161, an arbitration receiving module 17, a penalty circle notification module 18 and a table editing module 19.
As shown in fig. 1, the storage module 11 pre-stores a competition template, video information of historical training, penalty time items and penalty circle items, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map.
As shown in figs. 1 and 2, the model building module 12 includes a video association unit 121 and a feature construction unit 122. The video association unit 121 retrieves the historical training video information, penalty time items and penalty circle items stored in the storage module 11, classifies the pre-stored video information according to the associated penalty time and penalty circle items, and associates the pre-stored video information with a penalty time item or a penalty circle item. The feature construction unit 122 cuts out, according to instructions, the image information in the video that matches a penalty time item or penalty circle item and extracts motion features and appearance features from it. It orders the image information into a video sequence, tracks the motion trajectory of objects in the sequence, and detects and matches feature points in consecutive frames with a feature point detection algorithm to obtain motion features. It extracts color, texture and shape features from the image information pixel by pixel: color features using color space conversion and color quantization techniques, texture features using a texture analysis algorithm, and shape features by detecting the edge contours of the image information and describing them with a shape descriptor; the color, texture and shape features are merged into appearance features. The motion and appearance features are fed into a deep learning model to obtain a behavior model, the behavior model is labeled according to the penalty time and penalty circle items, and model training is performed again on the behavior models under the same label through the deep learning model to obtain the constructed behavior model, which is classified by penalty time item and penalty circle item and sent to the video recognition module 161.
As shown in fig. 1, the video acquisition module 13 is connected to a camera on a site or a vehicle, receives video information transmitted by the camera, and transmits the video information to the storage module 11 for storage. The shooting acquisition module 131 is connected with the target equipment and receives shooting score information sent by the target equipment, and sends the shooting score information to the storage module 11 for storage.
As shown in figs. 1 and 3, the training setting module 14 includes a template selection unit 141, a node setting unit 142 and a training control unit 143. The template selection unit 141 receives an externally input instruction, retrieves the competition template stored in the storage module 11 according to the instruction, and selects the running field simulation map or the racing field simulation map in the template. The node setting unit 142 presets a start node, an end node and checkpoint nodes, and sets them on the running field simulation map or the racing field simulation map according to the instruction. The training control unit 143 receives the instruction and controls the relevant equipment of the training field to start according to the instruction.
As shown in fig. 1, the data caching module 15 receives and stores trainer information, vehicle information, and referee information input from the outside. The track drawing module 16 invokes the latest video information stored in the storage module 11 and the trainer information and the vehicle information stored in the data cache module 15, correlates the trainer information with the vehicle information, judges the real-time position of each trainer or vehicle in the video information according to the trainer information or the vehicle information, combines the real-time position with the running field model map or the racing field model map to draw a running track map or a racing track map, and transmits the running track map or the racing track map to the table editing module 19.
As shown in fig. 1, the video recognition module 161 invokes the latest video information stored in the storage module 11, performs feature recognition on the video information through the behavior model, marks the selected video information with a corresponding penalty time item or penalty circle item, and sends the marked video information to the referee client 2.
As shown in fig. 1, the arbitration receiving module 17 receives the arbitration information sent by the referee client 2, fills it into the arbitration information table, and forwards it to the penalty circle notification module 18 and the table editing module 19, the arbitration information comprising penalty time items and penalty circle items. Upon receiving the arbitration information, the penalty circle notification module 18 extracts the penalty circle items contained in it, counts them, and issues the number of penalty circles together with the corresponding trainer information.
As shown in fig. 1, the table editing module 19 obtains the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map, adds the penalty duration to the completion time, calculates the shooting score from the received shooting score information, fills the shooting score into the shooting score table, and fills the completion time and checkpoint times into the arbitration information table.
The above embodiments are preferred embodiments of the present invention and are not intended to limit its scope of protection; accordingly, all equivalent changes made to the structure, shape and principle of the invention shall be covered by its scope of protection.

Claims (10)

1. A training score arbitration method, comprising the steps of:
presetting a competition template, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map;
presetting penalty time items and penalty circle items, pre-storing video information of historical training, associating the pre-stored video information with a penalty time item or a penalty circle item, extracting the associated features from the pre-stored video information to construct a behavior model, and classifying the behavior model by penalty time item and penalty circle item;
connecting cameras on the field or on vehicles and receiving the video information they send, and connecting target equipment and receiving the shooting score information it sends;
receiving an instruction and starting training, selecting the running field simulation map or the racing field simulation map according to the received instruction, setting various types of nodes on the selected map, and receiving input trainer information, vehicle information and referee information;
when the running field simulation map is selected, determining the real-time position of each trainer in the video information according to the trainer information, and combining the real-time positions with the running field model map to draw a running motion trajectory map;
when the racing field simulation map is selected, associating the trainer information with the vehicle information, determining the real-time position of each vehicle in the video information according to the vehicle information, and combining the real-time positions with the racing field model map to draw a racing motion trajectory map;
connecting the referee client (2) according to the referee information, performing feature recognition on the video information received in real time through the behavior model, marking the selected video information with the corresponding penalty time item or penalty circle item, and sending the marked video information to the referee client (2);
receiving the arbitration information sent by the referee client (2) and filling it into the arbitration information table, wherein the arbitration information comprises penalty time items and penalty circle items;
when the received arbitration information contains a penalty time item, adding the corresponding penalty duration to the arbitration information table; when the received arbitration information contains penalty circle items, acquiring the number of penalty circles and sending out that number together with the corresponding trainer information;
obtaining the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map, adding the penalty duration to the completion time, calculating the shooting score from the received shooting score information and filling it into the shooting score table, and filling the completion time and the checkpoint times into the arbitration information table.
2. The training score arbitration method as recited in claim 1, wherein the step of extracting the associated features from the pre-stored video information to construct the behavior model further comprises:
classifying pre-stored video information according to associated penalty time items and penalty circle items;
Cutting out image information which accords with the penalty time item or the penalty circle item in the video information according to the instruction, and extracting motion characteristics and appearance characteristics from the image information;
adding the motion features and the appearance features into a deep learning model for modeling to obtain a behavior model, and labeling the behavior model according to the penalty time items and penalty circle items;
and carrying out model training on the behavior model under the same label through the deep learning model again to obtain a built behavior model.
3. The training score arbitration method of claim 2, wherein the step of extracting motion features and appearance features from the image information further comprises:
Sequencing image information to obtain a video sequence, tracking a motion track of an object in the video sequence, and detecting and matching feature points in continuous frames by using a feature point detection algorithm to obtain motion features;
Extracting color features, texture features and shape features from the image information according to pixels, extracting the color features by using color space conversion and color quantization technologies, extracting the texture features by using a texture analysis algorithm, detecting the edge contour of the image information, describing the shape features by using a shape descriptor, and merging the color features, the texture features and the shape features to obtain appearance features.
4. The training score arbitration method as recited in claim 1, wherein the step of setting the various types of nodes on the running field simulation map or the racing field simulation map further comprises:
presetting a start node, an end node and checkpoint nodes;
receiving an instruction, and setting the start node, end node and checkpoint nodes on the running field simulation map or the racing field simulation map according to the instruction;
and wherein the step of obtaining the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map further comprises:
marking the start node, end node and checkpoint nodes on the running or racing motion trajectory map at the same positions as the corresponding nodes set on the running or racing field simulation map;
and extracting the times at which the motion trajectory passes the start node, end node and checkpoint nodes on the trajectory map, and calculating the trainer's completion time and the time of passing each checkpoint from the extracted times.
5. The method of claim 1, wherein the step of combining the real-time positions with the running field model map to draw the running motion trajectory map further comprises:
after a real-time position is obtained, marking the current real-time position on the running field model map and recording the time;
connecting all marked real-time positions on the running field model map in order of recorded time, and fitting the connecting line to form a running motion trajectory, obtaining the running motion trajectory map;
and wherein the step of combining the real-time positions with the racing field model map to draw a racing motion trajectory map further comprises:
after a real-time position is obtained, marking the current real-time position on the racing field model map and recording the time;
and connecting all marked real-time positions on the racing field model map in order of recorded time, and fitting the connecting line to form a racing motion trajectory, obtaining the racing motion trajectory map.
6. The training score arbitration method of claim 1, wherein said step of receiving an instruction and starting training further comprises:
receiving a disable instruction, selecting penalty time items and/or penalty circle items according to the disable instruction, and prohibiting the system from performing feature recognition for the selected penalty time items and/or penalty circle items when training starts.
7. The training score arbitration method as claimed in claim 1, further comprising:
detecting the current device state of each connected camera and target device, and recording the device states;
associating each target device or camera with another camera in whose video information that device can appear;
and when a device state shows an error, sending an alarm, retrieving the recorded device states and the video uploaded by the camera associated with that device, and displaying the retrieved state record and video.
8. A training score arbitration system, characterized in that: the system comprises a master control end (1) and a referee client (2), wherein the master control end (1) comprises a storage module (11), a model building module (12), a video acquisition module (13), a shooting acquisition module (131), a training setting module (14), a data caching module (15), a trajectory drawing module (16), a video recognition module (161), an arbitration receiving module (17), a penalty circle notification module (18) and a table editing module (19);
the storage module (11) pre-stores a competition template, video information of historical training, penalty time items and penalty circle items, wherein the competition template comprises a running field simulation map, a shooting score table, an arbitration information table and a racing field simulation map;
the model building module (12) retrieves the historical training video information, penalty time items and penalty circle items stored in the storage module (11), associates the pre-stored video information with a penalty time item or a penalty circle item, extracts the associated features from the pre-stored video information to construct a behavior model, classifies the behavior model by penalty time item and penalty circle item, and sends the behavior model to the video recognition module (161);
the video acquisition module (13) connects to the cameras on the field or on vehicles, receives the video information they send, and passes it to the storage module (11) for storage;
the shooting acquisition module (131) connects to the target equipment, receives the shooting score information it sends, and passes it to the storage module (11) for storage;
the training setting module (14) receives an externally input instruction, retrieves the competition template stored in the storage module (11) according to the instruction, selects the running field simulation map or the racing field simulation map in the template, sets various types of nodes on the selected map, and starts training according to the instruction;
the data caching module (15) receives and stores externally input trainer information, vehicle information and referee information;
the trajectory drawing module (16) retrieves the latest video information stored in the storage module (11) and the trainer and vehicle information stored in the data caching module (15), associates the trainer information with the vehicle information, determines the real-time position of each trainer or vehicle in the video according to the trainer or vehicle information, combines the real-time positions with the running field model map or the racing field model map to draw a running or racing motion trajectory map, and passes the trajectory map to the table editing module (19);
the video recognition module (161) retrieves the latest video information stored in the storage module (11), performs feature recognition on it through the behavior model, marks the selected video information with the corresponding penalty time item or penalty circle item, and sends the marked video information to the referee client (2);
the arbitration receiving module (17) receives the arbitration information sent by the referee client (2), fills it into the arbitration information table, and forwards it to the penalty circle notification module (18) and the table editing module (19), the arbitration information comprising penalty time items and penalty circle items;
after the penalty circle notification module (18) receives the arbitration information, it extracts the penalty circle items contained therein, counts them, and sends out the number of penalty circles together with the corresponding trainer information;
the table editing module (19) obtains the trainer's completion time and the time of passing each checkpoint from the running or racing motion trajectory map, adds the penalty duration to the completion time, calculates the shooting score from the received shooting score information and fills it into the shooting score table, and fills the completion time and the checkpoint times into the arbitration information table.
9. The training score arbitration system of claim 8, wherein: the model building module (12) comprises a video association unit (121) and a feature construction unit (122);
the video association unit (121) retrieves the historical training video information, penalty time items and penalty circle items stored in the storage module (11), classifies the pre-stored video information according to the associated penalty time and penalty circle items, and associates the pre-stored video information with a penalty time item or a penalty circle item;
the feature construction unit (122) cuts out, according to instructions, the image information in the video that matches a penalty time item or penalty circle item and extracts motion features and appearance features from it: it orders the image information into a video sequence, tracks the motion trajectory of objects in the sequence, and detects and matches feature points in consecutive frames with a feature point detection algorithm to obtain motion features; it extracts color, texture and shape features from the image information pixel by pixel, extracting color features using color space conversion and color quantization techniques, texture features using a texture analysis algorithm, and shape features by detecting the edge contours of the image information and describing them with a shape descriptor, and merges the color, texture and shape features into appearance features; it feeds the motion and appearance features into a deep learning model to obtain a behavior model, labels the behavior model according to the penalty time and penalty circle items, performs model training again on the behavior models under the same label through the deep learning model to obtain the constructed behavior model, classifies the behavior model by penalty time item and penalty circle item, and sends it to the video recognition module (161).
10. The training outcome arbitration system of claim 8, wherein: the training setting module (14) comprises a template selection unit (141), a node setting unit (142) and a training control unit (143);
The template selection unit (141) receives an external input instruction, retrieves a competition template stored in the storage module (11) according to the instruction, and selects a competition field simulation map or a competition area simulation map from the competition template;
The node setting unit (142) is preset with start point, end point and check point nodes, and places the start point node, the end point node and the check point nodes on the competition field simulation map or the competition area simulation map according to the instruction;
The training control unit (143) receives the instruction and, according to the instruction, starts the relevant equipment on the training field.
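As a small illustration of the node setup handled by units (141), (142) and (143), a course template might carry a node list like the one below; the Node type, coordinates and the one-start/one-end check are hypothetical choices made only to show the start point, check point and end point structure.

```python
from dataclasses import dataclass

@dataclass
class Node:
    kind: str    # "start", "checkpoint" or "end"
    x: float     # position on the simulation map (units assumed)
    y: float

# Illustrative course: one start node, two check point nodes, one end node.
course = [
    Node("start", 0.0, 0.0),
    Node("checkpoint", 120.0, 40.0),
    Node("checkpoint", 260.0, 15.0),
    Node("end", 400.0, 0.0),
]

def validate(course):
    """A plausible sanity rule: exactly one start and one end node."""
    kinds = [n.kind for n in course]
    assert kinds.count("start") == 1 and kinds.count("end") == 1

validate(course)
```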
CN202410444506.5A 2024-04-15 2024-04-15 Training result judging system and method Active CN118049890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410444506.5A CN118049890B (en) 2024-04-15 2024-04-15 Training result judging system and method

Publications (2)

Publication Number Publication Date
CN118049890A true CN118049890A (en) 2024-05-17
CN118049890B CN118049890B (en) 2024-06-25

Family

ID=91045190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410444506.5A Active CN118049890B (en) 2024-04-15 2024-04-15 Training result judging system and method

Country Status (1)

Country Link
CN (1) CN118049890B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100305724A1 (en) * 2007-12-19 2010-12-02 Robert Eric Fry Vehicle competition implementation system
KR101642956B1 (en) * 2015-10-16 2016-07-26 주식회사 인퍼니 A system for shooting simulation game
CN109701261A (en) * 2019-01-11 2019-05-03 中国科学院重庆绿色智能技术研究院 Gunnery meeting management method, system, equipment and storage medium
KR102000982B1 (en) * 2018-04-30 2019-07-17 (주)에프티에스 Simulated combat training system capable of accurate positioning
CN110145971A (en) * 2019-07-01 2019-08-20 山东吉利达智能装备集团有限公司 Unit's hematuria tactical confrontation training system intelligent target machine system and its application
CN111860418A (en) * 2020-07-29 2020-10-30 重庆道吧网络有限公司 Intelligent video examination and consultation system, method, medium and terminal for athletic competition
CN114093030A (en) * 2021-11-23 2022-02-25 杭州中科先进技术研究院有限公司 Shooting training analysis method based on human body posture learning
WO2022262743A1 (en) * 2021-06-18 2022-12-22 北京盈迪曼德科技有限公司 Robot task execution method and apparatus, robot, and storage medium
KR20230077083A (en) * 2021-11-25 2023-06-01 신원섭 A system for training the shoot made to order
CN117387419A (en) * 2023-10-12 2024-01-12 中电万维信息技术有限责任公司 Intelligent acquisition and analysis method based on light weapon shooting training data
CN117768608A (en) * 2023-11-13 2024-03-26 北京中海技创科技发展有限公司 Shooting training monitoring and evaluating system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邢惠丽; 魏振钢; 葛艳: "Design and Implementation of an Auxiliary Training Platform for Sailing Races" (帆船比赛辅助训练平台的设计与实现), Journal of Qingdao University (Natural Science Edition) (青岛大学学报(自然科学版)), vol. 22, no. 03, 30 September 2009 (2009-09-30), pages 43-48 *

Also Published As

Publication number Publication date
CN118049890B (en) 2024-06-25

Similar Documents

Publication Publication Date Title
CN102324024B (en) Airport passenger recognition and positioning method and system based on target tracking technique
US8107679B2 (en) Horse position information analyzing and displaying method
CN110874362A (en) Data association analysis method and device
RU2683499C1 (en) System for automatic creation of scenario video clip with preset object or group of objects presence in frame
CN107423674A (en) A kind of looking-for-person method based on recognition of face, electronic equipment and storage medium
JP2002530970A (en) Multi-target tracking system
CN113435336B (en) Running intelligent timing system and method based on artificial intelligence
CN113963315A (en) Real-time video multi-user behavior recognition method and system in complex scene
CN112270267A (en) Camera shooting recognition system capable of automatically capturing line faults
CN111460985A (en) On-site worker track statistical method and system based on cross-camera human body matching
CN113822250A (en) Ship driving abnormal behavior detection method
CN114998991A (en) Campus intelligent playground system and motion detection method based on same
CN118049890B (en) Training result judging system and method
CN116189052A (en) Security method, system, intelligent terminal and storage medium based on video stream analysis
CN115713714A (en) Motion evaluation method, device, system, electronic device and storage medium
CN110473015A (en) A kind of smart ads system and advertisement placement method
CN111694829B (en) Motion trail processing method and device and motion trail processing system
JP7528356B2 (en) AI-based monitoring of race tracks
CN106803937A (en) A kind of double-camera video frequency monitoring method and system with text log
Wang et al. Automatic detection and tracking of precast walls from surveillance construction site videos
CN116452632A (en) Cross-camera track determination method, device, equipment and storage medium
CN114488337A (en) High-altitude parabolic detection method and device
KR20220067271A (en) Image acquisition apparatus and image acquisition method
CN118172389B (en) Short-distance track monitoring system and method based on multi-target tracking across cameras
CN118153857B (en) Construction behavior AI supervision method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant