CN117373134B - Training room data management method, device and training system - Google Patents


Info

Publication number
CN117373134B
CN117373134B (application CN202311668700.3A)
Authority
CN
China
Prior art keywords
image
data
trainee
sub
training
Prior art date
Legal status
Active
Application number
CN202311668700.3A
Other languages
Chinese (zh)
Other versions
CN117373134A (en)
Inventor
石继元
Current Assignee
Guangdong Laiboshi Education Equipment Co., Ltd.
Original Assignee
Guangdong Laiboshi Education Equipment Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Laiboshi Education Equipment Co., Ltd.
Priority to CN202311668700.3A
Publication of CN117373134A
Application granted
Publication of CN117373134B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53: Querying
    • G06F 16/535: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Abstract

The invention relates to the field of data management, and in particular to a training room data management method, a training room data management device and a training system. In the method, the operation data and recorded images generated the last time a trainee performed a preset training project are compared with preset standard data and standard images, the images in which the trainee's operation was problematic are screened out, and the screened images are displayed when the trainee performs secondary training. The method and device enable a trainee, during secondary training, to promptly see the problematic operation actions from the previous training session and make targeted corrections, thereby improving the training effect.

Description

Training room data management method, device and training system
Technical Field
The present invention relates to the field of data management, and in particular, to a training room data management method, apparatus and training system.
Background
A training room is a place or facility dedicated to hands-on operation and practice-based learning. It is generally designed to provide a space that simulates a real environment for carrying out practice activities, cultivating skills and applying knowledge.
A user performs training operations in the training room. For any training project, the user needs to train many times, discover his or her shortcomings in each session, and train in a targeted way on those shortcomings in the next session; only then can the user's level be effectively improved to reach the training standard. However, existing training systems have difficulty identifying a trainee's shortcomings in each session, which can only be determined through the trainee's self-perception. Because of human subjectivity, trainees easily misjudge their own level and therefore cannot accurately discover their shortcomings, making targeted practice during secondary training difficult and reducing the training effect.
Disclosure of Invention
Accordingly, it is necessary to provide a training room data management method, device and training system for solving the above-mentioned problems.
An embodiment of the invention is realized as a training room data management method comprising the following steps:
S1: acquiring trainee information of each trainee;
S2: acquiring a first training record of a preset training project corresponding to one piece of trainee information, wherein the first training record comprises the operation data of all sub-processes from the last time the corresponding trainee performed the training project, together with the whole-process operation image recorded during that session, the operation data being the data generated in the training machine while the trainee operated it;
S3: comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
S4: determining sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting, from the whole-process operation image, the image of the sub-process corresponding to each unqualified process as a first image;
S5: taking one first image, comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
S6: marking the deviation action on each unqualified frame image and storing each unqualified frame image;
S7: taking the first training record of another piece of trainee information corresponding to the preset training project, and performing steps S3 to S6 until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked and stored;
S8: when any trainee performs secondary training of the preset training project, taking the trainee information the trainee enters into the training machine as first trainee information, and retrieving all stored unqualified frame images corresponding to the first trainee information;
S9: whenever the trainee performs an operation of a sub-process that was determined to be an unqualified process in the last training session, generating and outputting prompt information and displaying the corresponding unqualified frame images, so that the trainee practices the operation action of that sub-process again, and repeating this step until the secondary training is completed.
In one embodiment, the present invention provides a training room data management apparatus, comprising:
the acquisition module is used for acquiring trainee information of each trainee;
the first processing module is used for acquiring a first training record of a preset training project corresponding to one piece of trainee information, wherein the first training record comprises the operation data of all sub-processes from the last time the corresponding trainee performed the training project, together with the whole-process operation image recorded during that session, the operation data being the data generated in the training machine while the trainee operated it;
the second processing module is used for comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
the third processing module is used for determining sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting, from the whole-process operation image, the image of the sub-process corresponding to each unqualified process as a first image;
the fourth processing module is used for taking one first image, comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
the fifth processing module is used for marking the deviation action on each unqualified frame image and storing each unqualified frame image;
the sixth processing module is used for taking the first training record of another piece of trainee information corresponding to the preset training project;
the seventh processing module is used for, when any trainee performs secondary training of the preset training project, taking the trainee information the trainee enters into the training machine as first trainee information and retrieving all stored unqualified frame images corresponding to the first trainee information;
and the eighth processing module is used for generating and outputting prompt information and displaying the corresponding unqualified frame images whenever the trainee performs an operation of a sub-process that was determined to be an unqualified process in the last training session, so that the trainee practices the operation action of that sub-process again, repeating until the secondary training is completed.
In one embodiment, the present invention provides a training system comprising:
a plurality of training machines arranged in the training room, used by trainees to perform training operations and to collect the trainees' training information;
and a computer device connected with each training machine and configured to execute the above training room data management method.
The invention provides a training room data management method, a training room data management device and a training system. The method comprises: acquiring trainee information of each trainee; acquiring a first training record of a preset training project corresponding to one piece of trainee information; comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data; determining sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting the image of each unqualified process from the whole-process operation image as a first image; comparing the operation actions of the trainee in each first image with those in the corresponding preset standard image, and determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images; marking the deviation action on each unqualified frame image and storing it; repeating the above steps for the first training record of each remaining piece of trainee information until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked and stored; when any trainee performs secondary training of the preset training project, taking the trainee information entered into the training machine as first trainee information and retrieving all stored unqualified frame images corresponding to it; and, whenever the trainee performs an operation of a sub-process determined to be unqualified in the last session, generating and outputting prompt information and displaying the corresponding unqualified frame image, so that the trainee practices that operation action again, until the secondary training is completed. In this application, for each trainee, the images of actions with a large deviation from the standard during the last session of a preset training project can be screened out, and the deviation actions are marked on the images. Therefore, when any trainee performs secondary training on the same project, the screened images can be displayed in time, so that the trainee learns of his or her problematic operation actions and makes targeted corrections, thereby improving the training effect.
Drawings
FIG. 1 is a first flow chart of a method of training room data management provided in one embodiment;
FIG. 2 is a second flowchart of a method of training room data management provided in one embodiment;
FIG. 3 is a block diagram of a training room data management apparatus provided in one embodiment;
FIG. 4 is a composition diagram of a training system provided in one embodiment;
FIG. 5 is a block diagram of the internal architecture of a computer device in one embodiment.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another element. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of this disclosure.
As shown in fig. 1, in one embodiment, a method for managing data in a training room is provided, where the method includes:
S1: acquiring trainee information of each trainee;
S2: acquiring a first training record of a preset training project corresponding to one piece of trainee information, wherein the first training record comprises the operation data of all sub-processes from the last time the corresponding trainee performed the training project, together with the whole-process operation image recorded during that session, the operation data being the data generated in the training machine while the trainee operated it;
S3: comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
S4: determining sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting, from the whole-process operation image, the image of the sub-process corresponding to each unqualified process as a first image;
S5: taking one first image, comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
S6: marking the deviation action on each unqualified frame image and storing each unqualified frame image;
S7: taking the first training record of another piece of trainee information corresponding to the preset training project, and performing steps S3 to S6 until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked and stored;
S8: when any trainee performs secondary training of the preset training project, taking the trainee information the trainee enters into the training machine as first trainee information, and retrieving all stored unqualified frame images corresponding to the first trainee information;
S9: whenever the trainee performs an operation of a sub-process that was determined to be an unqualified process in the last training session, generating and outputting prompt information and displaying the corresponding unqualified frame images, so that the trainee practices the operation action of that sub-process again, and repeating this step until the secondary training is completed.
In this embodiment, the method is executed on a computer device, which may be an independent physical server or terminal, a server cluster formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud compute, cloud databases, cloud storage and CDN. The first training record is collected by the training machine used by the trainee, and the computer device is connected with the training machine so that it can acquire the first training record. The training room may be a training room in a school, each trainee being a student who performs the preset training projects in the training room; for example, a preset training project may be computer operation, laboratory instrument operation, or the like, and each preset training project consists of a plurality of sub-processes. The whole-process operation image is an image, covering the whole process, of the trainee's operating parts, i.e., the body parts with which the trainee performs the operation actions, such as the hands and feet. The preset standard image is a pre-recorded image whose recording angle is consistent with that of the whole-process operation image, so that the two can be compared, and the data collected by the training machine after the operation actions in the preset standard image are the preset standard data.
In this application, for a preset training project, the first training record from each trainee's previous session is first acquired. For any trainee, the operation data are first compared with the preset standard data; since the operation data are generated in the training machine while the trainee operates it, the operation accuracy obtained from this comparison directly reflects whether the operation actions were correct. The sub-process corresponding to operation data with low accuracy is then taken as an unqualified process, i.e., the sub-process with operation problems is locked onto; the recorded image of that sub-process (the first image) is retrieved and compared with the preset standard image. Because both images concern the operation actions, the differing operation actions between the two can be determined by comparison, the frame images with large action differences are screened out as unqualified frame images, and the position of the corresponding deviation action is marked on each unqualified frame image. These unqualified frame images are retrieved when the corresponding trainee performs secondary training of the preset training project. Thus, for each trainee, the images of actions with a large deviation during the last session of a preset training project can be screened out with the deviation actions marked; when any trainee performs secondary training of the same project, the screened images can be displayed in time, so that the trainee learns of his or her problematic operation actions and makes targeted corrections, thereby improving the training effect.
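The screening flow described above (steps S3 to S7) can be outlined in code. This is an illustrative sketch, not the patent's implementation: the record layout and the helper callables `accuracy_of` and `deviation_of` are hypothetical stand-ins for the data-bit and image comparisons described in the text.

```python
def screen_trainee(record, standards, preset_accuracy, preset_deviation,
                   accuracy_of, deviation_of):
    """Return {sub_process_index: [frame_index, ...]} of unqualified frames.

    `record` and `standards` are parallel lists of (operation_data, frames)
    pairs, one per sub-process; the comparison callables are supplied by the
    caller.
    """
    unqualified = {}
    for i, (op_data, frames) in enumerate(record):
        std_data, std_frames = standards[i]
        # S3/S4: skip sub-processes whose operation accuracy is acceptable
        if accuracy_of(op_data, std_data) >= preset_accuracy:
            continue
        # S5: within an unqualified sub-process, keep the frames whose
        # action deviation from the standard image exceeds the threshold
        bad = [j for j, (f, s) in enumerate(zip(frames, std_frames))
               if deviation_of(f, s) > preset_deviation]
        unqualified[i] = bad
    return unqualified
```

Applying `screen_trainee` once per trainee record reproduces the per-trainee loop of step S7.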
As a preferred embodiment, preset standard data are preset for each sub-process, so that each piece of operation data corresponds to one piece of preset standard data; each piece of operation data has a plurality of first data bits, and the corresponding preset standard data has a second data bit corresponding to each first data bit. Comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the corresponding sub-process comprises:
S31: taking one piece of operation data as comparison operation data, and the preset standard data corresponding to it as comparison standard data;
S32: taking the data in one first data bit of the comparison operation data as first sub-data, and the data in the corresponding second data bit of the comparison standard data as second sub-data;
S33: comparing whether the first sub-data and the second sub-data are identical; if so, the first sub-data is accurate sub-data, and if not, it is inaccurate sub-data;
S34: excluding the first sub-data whose comparison is complete, and performing steps S32 to S33 until all first sub-data have been compared;
S35: calculating the operation accuracy of the sub-flow corresponding to the comparison operation data according to the following formula:
wherein,for operation accuracy, R is the number of accurate sub-data, and W is the number of first sub-data;
S36: excluding the operation data whose operation accuracy has been calculated, and performing steps S31 to S35 until the operation accuracy corresponding to each piece of operation data has been calculated.
In this embodiment, during each sub-process the training machine generates multiple data; a data bit is set for each datum, and the generated datum is recorded in that data bit. The comparison between the operation data of a sub-process and the preset standard data is carried out at the level of individual data bits, realizing a fine-grained comparison process and further improving comparison accuracy.
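A minimal sketch of the bit-level comparison in steps S31 to S36, assuming operation data and standard data are represented as equal-length sequences of data-bit values (this representation, and the function name, are assumptions for illustration):

```python
def compare_operation_data(operation_data, standard_data):
    """Compare each first data bit with its second data bit (S32-S33)
    and return the operation accuracy P = R / W (S35)."""
    accurate = [a == b for a, b in zip(operation_data, standard_data)]
    r = sum(accurate)            # R: number of accurate sub-data
    w = len(operation_data)      # W: number of first data bits
    return r / w
```

For example, operation data matching the standard in three of four data bits yields an accuracy of 0.75, which would fall below a preset accuracy of, say, 0.9 and flag the sub-process as unqualified.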
As a preferred embodiment, a preset standard image is preset for each sub-process, so that each first image corresponds to one preset standard image. When the training of a sub-process starts, the operation image of that sub-process is recorded while the preset standard image is played synchronously, so that the segment of the operation image recorded at each time point corresponds to the segment of the preset standard image at the same time point, the operation image being a part of the whole-process operation image. Comparing the action deviation degree between the operation actions of the trainee in the first image and those in the corresponding preset standard image comprises:
S51: taking the generation time point of the inaccurate sub-data corresponding to the first image;
S52: acquiring from the first image a first image segment of a first set time length before the generation time point, and acquiring from the preset standard image corresponding to the first image a second image segment of the same first set time length before the generation time point;
S53: comparing the action deviation degree between the operation actions of the trainee in the first image segment and those in the second image segment.
In this embodiment, the operation image of the sub-process is recorded while the preset standard image is played synchronously, so the trainee can act along with the actions in the standard image, and the recorded operation image corresponds to each action segment in the preset standard image. Since there is a certain deviation between the moment the data is generated and the actual action, setting the first set time length reduces the influence of this deviation so that the problematic operation action can be found accurately; the first set time length may be 1 s, 2 s or another duration, which is not limited herein.
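Steps S51 and S52 can be sketched as slicing a window of frames before the generation time point. Here frames are modelled as (timestamp, frame) pairs and the 2-second window is an assumed value of the first set time length; both choices are illustrative, not from the patent.

```python
def clip_before(frames, time_point, window=2.0):
    """Return the frames recorded in [time_point - window, time_point),
    i.e. the image segment of the first set time length before the
    generation time point of the inaccurate sub-data (S52)."""
    return [f for t, f in frames if time_point - window <= t < time_point]
```

Applying `clip_before` to both the recorded operation image and the synchronously played standard image yields the first and second image segments of step S53.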
As a preferred embodiment, comparing the action deviation degree between the operation actions of the trainee in the first image segment and those in the second image segment comprises:
S531: numbering each first frame image in the first image segment in recording order, and numbering each second frame image in the second image segment in the same way;
S532: pairing each first frame image with the second frame image that has the same number;
S533: taking one first frame image as a first comparison image, and the corresponding second frame image as a second comparison image;
S534: identifying the first trainee outline in the first comparison image and the second trainee outline in the second comparison image;
S535: overlaying the second comparison image on the first comparison image, determining the coincidence rate between the region within the first trainee outline and the region within the second trainee outline, and determining the action deviation degree according to the coincidence rate;
S536: excluding the first frame images already compared, and performing steps S533 to S535 until the action deviation degree corresponding to each first frame image has been obtained.
The action deviation degree is calculated by the following formulas:
c = S_o / S_1
D = 1 - c
wherein D is the action deviation degree, S_o is the overlap area of the region within the first trainee outline and the region within the second trainee outline, c is the coincidence rate, and S_1 is the area of the region within the first trainee outline.
In this embodiment, since the actions in corresponding frames of the first image segment and the second image segment do not deviate greatly in timing, the two segments can be compared frame by frame with the method of this embodiment, so that the exact frame in which the problematic operation action occurs can be determined, locked onto and output, greatly improving the accuracy with which problem actions are identified.
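Modelling each identified trainee outline as a set of pixel coordinates, the per-frame comparison of steps S534 and S535 can be sketched as follows. The pixel-set representation, and taking the deviation degree as the complement of the coincidence rate, are illustrative assumptions consistent with the variables described in the text.

```python
def action_deviation(first_outline, second_outline):
    """Coincidence rate and action deviation degree for one frame pair.

    first_outline / second_outline: sets of (x, y) pixels inside the
    first and second trainee outlines respectively.
    """
    overlap = len(first_outline & second_outline)   # overlap area S_o
    coincidence = overlap / len(first_outline)      # c = S_o / S_1
    return 1.0 - coincidence                        # deviation degree D
```

A frame pair whose deviation degree exceeds the preset deviation degree would then be recorded as an unqualified frame image.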
As a preferred embodiment, determining from the first image the frame images whose action deviation degree exceeds the preset deviation degree as unqualified frame images means determining the first frame images whose action deviation degree exceeds the preset deviation degree as unqualified frame images.
Marking the deviation action on each unqualified frame image comprises:
S61: taking one unqualified frame image as a marked image;
S62: identifying, in the marked image, the region within the first trainee outline that does not overlap the region within the corresponding second trainee outline, and taking it as the deviation region;
S63: applying a preset mark to the deviation region;
S64: excluding the unqualified frame images already marked, and performing steps S61 to S63 until all unqualified frame images have been marked.
In this embodiment, the preset mark may be a circle, a check mark, or a bright colour filling the deviation region, which is not limited herein. Marking the deviation region makes it easy for the trainee to see and promptly identify the deviation action, so as to make targeted adjustments.
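The marking of steps S62 and S63 can be sketched by computing the non-overlapping pixel region as a set difference and filling it with a bright colour. The dictionary-based image representation and the colour value are illustrative assumptions; a real device would draw onto the stored frame image.

```python
def mark_deviation(image, first_outline, second_outline, colour=(255, 0, 0)):
    """Fill the deviation region of one unqualified frame image (S62-S63).

    image: mutable mapping from (x, y) pixel to colour.
    """
    # S62: pixels inside the first outline but outside the second outline
    region = first_outline - second_outline
    # S63: preset mark, here a bright colour fill
    for x, y in region:
        image[(x, y)] = colour
    return image
```

Repeating this over every stored unqualified frame image corresponds to step S64.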
As a preferred embodiment, the prompt information is voice information or text information;
and the unqualified frame image corresponding to the inaccurate sub-data is displayed when the trainee completes the operation action that precedes the operation action in the first image segment corresponding to the inaccurate sub-data determined in the last training session.
In this embodiment, the prompt information reminds the trainee that the operation step that was problematic last time is about to begin, so that the trainee can concentrate; and the unqualified frame image is displayed before the operation corresponding to it, so that the trainee can refer to the unqualified frame image before training that operation, ensuring the training effect.
As shown in fig. 3, in one embodiment, there is provided a training room data management apparatus including:
the acquisition module is used for acquiring trainee information of each trainee;
The first processing module is used for taking a first practical training record of the practical training project corresponding to the practical training project, wherein the first practical training record comprises operation data of all sub-processes when the practical training project is performed by the practical training person corresponding to the practical training person information last time, and the whole process operation image of the practical training project is performed by the practical training person last time, and the operation data is data generated in the practical training machine when the practical training person operates on the practical training machine;
the second processing module is used for comparing each piece of operation data in the first training record with corresponding preset standard data so as to further determine the operation accuracy of the sub-process corresponding to each piece of operation data;
the third processing module is used for determining sub-processes with operation accuracy lower than preset accuracy as unqualified processes, and determining images of the sub-processes corresponding to each unqualified process from the whole process operation images as first images;
the fourth processing module is used for taking one first image, comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
A fifth processing module, configured to mark deviation actions on each of the unqualified frame images, and store each of the unqualified frame images;
the sixth processing module is used for taking the first practical training record of another piece of trainee information for the preset practical training project;
the seventh processing module is used for taking the trainee information input by the trainee into the training machine as first trainee information when any trainee performs secondary training of a preset training project, and calling out all stored unqualified frame images corresponding to the first trainee information;
and the eighth processing module is used for generating and outputting prompt information and displaying the corresponding unqualified frame image when the trainee performs any operation of a sub-process determined to be unqualified in the last practical training, so that the trainee practices the operation action of that sub-process again, this step being repeated until the secondary practical training is completed.
In this embodiment, after the sixth processing module performs the corresponding steps, the second processing module to the fifth processing module sequentially perform the corresponding steps until the unqualified frame image corresponding to each trainee information is screened out, marked and stored; the process of implementing the respective functions of each module in the training room data management device provided in this embodiment may refer to the description of the embodiment shown in fig. 1, and will not be repeated here.
As shown in fig. 4, in one embodiment, a training system is provided, the system comprising:
the training machines are arranged in the training room and are used for the trainee to carry out training operation and collect training information of the trainee;
and the computer equipment is connected with each practical training machine and is used for executing the practical training room data management method.
In this embodiment, the computer device and the training machines cooperate to execute the training room data management method, so that for each trainee the frames with larger deviations from the last run of a given preset training project are screened out and the deviating actions marked on them; when any trainee then performs secondary training of the same project, the screened images are displayed in time, letting the trainee see his or her problematic operation actions and correct them in a targeted way, thereby improving the practical training effect.
FIG. 5 illustrates an internal block diagram of a computer device in one embodiment. As shown in fig. 5, the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the training room data management method provided by the embodiments of the invention. The internal memory may likewise store such a computer program. The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, keys, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the training room data management apparatus provided in the embodiments of the present invention may be implemented in the form of a computer program, which may be executed on a computer device as shown in fig. 5. The memory of the computer device may store various program modules constituting the training room data management apparatus, for example, an acquisition module, a first processing module, a second processing module, a third processing module, a fourth processing module, a fifth processing module, a sixth processing module, a seventh processing module, and an eighth processing module shown in fig. 3. The computer program of each program module causes the processor to execute the steps in the training room data management method of each embodiment of the present invention described in the present specification.
For example, the computer apparatus shown in fig. 5 may perform step S1 through the acquisition module in the training room data management apparatus shown in fig. 3; the computer equipment can execute the step S2 through the first processing module; the computer equipment can execute the step S3 through the second processing module; the computer equipment can execute the step S4 through the third processing module; the computer equipment can execute the step S5 through a fourth processing module; the computer equipment can execute the step S6 through a fifth processing module; the computer equipment can execute the step S7 through a sixth processing module; the computer equipment can execute the step S8 through a seventh processing module; the computer device may perform step S9 through an eighth processing module.
In one embodiment, a computer device is presented, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
S1: acquiring trainee information of each trainee;
S2: taking a first practical training record of a preset practical training project corresponding to one piece of trainee information, wherein the first practical training record comprises the operation data of all sub-processes from the last time the trainee performed the project, together with the whole-process operation image of that last practical training, the operation data being the data generated in the training machine while the trainee operated on it;
S3: comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
S4: determining the sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting from the whole-process operation image the image of each unqualified process as a first image;
S5: taking one first image, comparing the operation actions of the trainee in it with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
S6: marking the deviating actions on each unqualified frame image and storing each unqualified frame image;
S7: taking the first training record of another piece of trainee information for the preset training project, and executing steps S3 to S6 until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked, and stored;
S8: when any trainee performs secondary training of the preset training project, taking the trainee information entered by the trainee into the training machine as first trainee information and calling out all stored unqualified frame images corresponding to it;
S9: when the trainee performs any operation of a sub-process determined to be unqualified in the last practical training, generating and outputting prompt information and displaying the corresponding unqualified frame image, so that the trainee practices the operation action of that sub-process again, this step being repeated until the secondary practical training is completed.
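Steps S8 and S9 amount to looking up previously stored unqualified frames by trainee and sub-process and surfacing them with a prompt. The following is a minimal illustrative sketch only; the patent specifies no implementation, and every identifier and data layout here is hypothetical:

```python
# Hypothetical store: unqualified frame images keyed by (trainee ID, sub-process).
stored_frames = {
    ("trainee-01", "wiring"): ["frame_12.png", "frame_13.png"],
}

def on_subprocess_start(trainee_id, subprocess_name):
    """Return (prompt, frames) if this sub-process was unqualified last time (S9),
    or None if the last attempt was qualified and no prompt is needed."""
    frames = stored_frames.get((trainee_id, subprocess_name))
    if frames is None:
        return None
    prompt = (f"Note: '{subprocess_name}' had deviations in the last practical "
              "training - review the marked frames before retrying.")
    return prompt, frames

print(on_subprocess_start("trainee-01", "wiring"))    # prompt plus stored frames
print(on_subprocess_start("trainee-01", "assembly"))  # None: no stored deviations
```

A real system would populate the store from steps S6 and S7 and render the frames on the training machine's display.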
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which when executed by a processor causes the processor to perform the steps of:
S1: acquiring trainee information of each trainee;
S2: taking a first practical training record of a preset practical training project corresponding to one piece of trainee information, wherein the first practical training record comprises the operation data of all sub-processes from the last time the trainee performed the project, together with the whole-process operation image of that last practical training, the operation data being the data generated in the training machine while the trainee operated on it;
S3: comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
S4: determining the sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting from the whole-process operation image the image of each unqualified process as a first image;
S5: taking one first image, comparing the operation actions of the trainee in it with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
S6: marking the deviating actions on each unqualified frame image and storing each unqualified frame image;
S7: taking the first training record of another piece of trainee information for the preset training project, and executing steps S3 to S6 until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked, and stored;
S8: when any trainee performs secondary training of the preset training project, taking the trainee information entered by the trainee into the training machine as first trainee information and calling out all stored unqualified frame images corresponding to it;
S9: when the trainee performs any operation of a sub-process determined to be unqualified in the last practical training, generating and outputting prompt information and displaying the corresponding unqualified frame image, so that the trainee practices the operation action of that sub-process again, this step being repeated until the secondary practical training is completed.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in execution order and may be executed in other orders. Moreover, at least some of the steps in various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are likewise not necessarily performed sequentially, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above-described embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above-described embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the invention and are described in detail herein without thereby limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (5)

1. A method for training room data management, the method comprising:
S1: acquiring trainee information of each trainee;
S2: taking a first practical training record of a preset practical training project corresponding to one piece of trainee information, wherein the first practical training record comprises the operation data of all sub-processes from the last time the trainee performed the project, together with the whole-process operation image of that last practical training, the operation data being the data generated in the training machine while the trainee operated on it;
S3: comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the sub-process corresponding to each piece of operation data;
S4: determining the sub-processes whose operation accuracy is lower than a preset accuracy as unqualified processes, and extracting from the whole-process operation image the image of each unqualified process as a first image;
S5: taking one first image, comparing the operation actions of the trainee in it with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
S6: marking the deviating actions on each unqualified frame image and storing each unqualified frame image;
S7: taking the first training record of another piece of trainee information for the preset training project, and executing steps S3 to S6 until the unqualified frame images corresponding to every piece of trainee information have been screened out, marked, and stored;
S8: when any trainee performs secondary training of the preset training project, taking the trainee information entered by the trainee into the training machine as first trainee information and calling out all stored unqualified frame images corresponding to it;
S9: when the trainee performs any operation of a sub-process determined to be unqualified in the last practical training, generating and outputting prompt information and displaying the corresponding unqualified frame image, so that the trainee practices the operation action of that sub-process again until the secondary practical training is completed;
preset standard data are preset for each sub-process, so that each piece of operation data corresponds to one piece of preset standard data; each piece of operation data has a plurality of first data bits, and the corresponding preset standard data has a second data bit corresponding to each first data bit; comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the corresponding sub-process comprises:
S31: taking one piece of operation data as comparison operation data, the preset standard data corresponding to it being taken as comparison standard data;
S32: taking the data in one first data bit of the comparison operation data as first sub-data, and the data in the corresponding second data bit of the comparison standard data as second sub-data;
S33: comparing whether the first sub-data and the second sub-data are identical; if so, the first sub-data is accurate sub-data, otherwise it is inaccurate sub-data;
S34: excluding the first sub-data whose comparison is completed, and executing steps S32 to S33 until all first sub-data have been compared;
S35: calculating the operation accuracy of the sub-process corresponding to the comparison operation data according to the following formula:
P = R / W
wherein P is the operation accuracy, R is the number of accurate sub-data, and W is the number of first sub-data;
S36: excluding the operation data whose operation accuracy has been calculated, and executing steps S31 to S35 until the operation accuracy corresponding to each piece of operation data has been calculated;
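Steps S31 to S36 amount to a position-by-position comparison of data bits followed by the ratio of accurate sub-data to total sub-data. A minimal sketch under that reading (the data layout is hypothetical; lists of data-bit values stand in for the operation data and preset standard data):

```python
def operation_accuracy(operation_data, standard_data):
    """Compare each first data bit with its second data bit (S32-S34)
    and return P = R / W (S35), with R the count of accurate sub-data
    and W the total number of first sub-data."""
    assert len(operation_data) == len(standard_data)
    accurate = sum(1 for first, second in zip(operation_data, standard_data)
                   if first == second)           # S33: identical -> accurate sub-data
    return accurate / len(operation_data)        # S35: P = R / W

# 3 of 4 data bits match the preset standard, so P = 0.75.
print(operation_accuracy([3, 5, 7, 9], [3, 5, 7, 8]))  # 0.75
```

Step S36 then repeats this per piece of operation data; an accuracy below the preset threshold marks the sub-process as unqualified (S4).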
a preset standard image is preset for each sub-process, so that each first image corresponds to one preset standard image; when practical training of a sub-process begins, the operation image corresponding to that sub-process is recorded while the preset standard image is played synchronously, so that the recorded segment of the operation image at each time point corresponds to the segment of the preset standard image at the same time point, the operation image being a part of the whole-process operation image; comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain the action deviation degrees comprises:
S51: taking the generation time point of a piece of inaccurate sub-data corresponding to the first image;
S52: acquiring from the first image a first image segment of a first set duration before the generation time point, and acquiring from the preset standard image corresponding to the first image a second image segment of the same duration before the same time point;
S53: comparing the operation actions of the trainee in the first image segment with those in the second image segment to obtain the action deviation degrees;
comparing the operation actions of the trainee in the first image segment with those in the second image segment comprises:
S531: numbering each first frame image in the first image segment in recording order, and numbering each second frame image in the second image segment in the same manner;
S532: pairing the first frame image and the second frame image that share the same number;
S533: taking one first frame image as a first comparison image and its paired second frame image as a second comparison image;
S534: identifying the first trainee outline in the first comparison image and the second trainee outline in the second comparison image;
S535: overlapping the second comparison image with the first comparison image, determining the coincidence rate of the region within the first trainee outline with the region within the second trainee outline, and determining the action deviation degree from the coincidence rate;
S536: excluding the compared first frame image, and executing steps S533 to S535 until the action deviation degree corresponding to each first frame image is obtained;
the action deviation degree is calculated by the following formulas:
c = S / A, D = 1 - c
wherein D is the action deviation degree, S is the overlapping area of the region within the first trainee outline and the region within the second trainee outline, c is the coincidence rate, and A is the area of the region within the first trainee outline.
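The coincidence-rate computation of S534-S535 can be sketched with boolean pixel masks standing in for the regions inside the trainee outlines (contour extraction from the frames is assumed and out of scope; all names here are illustrative, not from the patent):

```python
def deviation_degree(first_mask, second_mask):
    """Given 0/1 pixel masks of the first and second trainee outlines,
    compute c = S / A (overlap area over first-outline area) and the
    action deviation degree D = 1 - c."""
    overlap = sum(a and b for row_a, row_b in zip(first_mask, second_mask)
                  for a, b in zip(row_a, row_b))       # S: overlapping area
    area = sum(a for row in first_mask for a in row)   # A: first-outline area
    coincidence = overlap / area                       # c = S / A
    return 1.0 - coincidence                           # D = 1 - c

first  = [[1, 1, 0], [1, 1, 0]]   # region inside the first trainee outline
second = [[1, 0, 0], [1, 0, 0]]   # region inside the second trainee outline
print(deviation_degree(first, second))  # overlap 2 over area 4 -> D = 0.5
```

Per S536, this would be evaluated frame pair by frame pair; frames whose D exceeds the preset deviation degree become unqualified frame images.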
2. The method according to claim 1, wherein determining from the first image the frame images whose action deviation degree exceeds the preset deviation degree as unqualified frame images means determining the first frame images whose action deviation degree exceeds the preset deviation degree as unqualified frame images;
marking the deviating actions on each unqualified frame image comprises:
S61: taking one unqualified frame image as a marked image;
S62: identifying, in the marked image, the region within the first trainee outline that does not overlap the region within the corresponding second trainee outline, and taking this region as the deviation region;
S63: marking the deviation region in a preset manner;
S64: excluding the image whose marking is completed, and executing steps S61 to S63 until all unqualified frame images have been marked.
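Step S62 is effectively a set difference between the two outline regions. A sketch using the same 0/1 mask convention as above (illustrative only; the preset mark of S63 could then be a circle, check mark, or bright fill over these pixels):

```python
def deviation_region(first_mask, second_mask):
    """Pixels inside the first trainee outline but outside the second (S62);
    the returned mask selects the deviation region to be marked in S63."""
    return [[1 if a and not b else 0 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(first_mask, second_mask)]

first  = [[1, 1], [1, 1]]   # first trainee outline region
second = [[1, 0], [1, 0]]   # second (standard) trainee outline region
print(deviation_region(first, second))  # [[0, 1], [0, 1]]
```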
3. The method of claim 2, wherein the prompt message is a voice message or a text message;
and displaying the unqualified frame image in the first image segment corresponding to the generation time point of the inaccurate sub-data when the trainee finishes the operation action before the operation action in the first image segment corresponding to any inaccurate sub-data determined in the last practical training.
4. A practical training room data management device, characterized in that the practical training room data management device comprises:
the acquisition module is used for acquiring trainee information of each trainee;
the first processing module is used for taking a first practical training record of a preset practical training project corresponding to one piece of trainee information, wherein the first practical training record comprises the operation data of all sub-processes from the last time the trainee corresponding to that trainee information performed the practical training project, together with the whole-process operation image of that last practical training, the operation data being the data generated in the training machine while the trainee operated on it;
the second processing module is used for comparing each piece of operation data in the first training record with corresponding preset standard data so as to further determine the operation accuracy of the sub-process corresponding to each piece of operation data;
The third processing module is used for determining sub-processes with operation accuracy lower than preset accuracy as unqualified processes, and determining images of the sub-processes corresponding to each unqualified process from the whole process operation images as first images;
the fourth processing module is used for taking one first image, comparing the operation actions of the trainee in the first image with those in the corresponding preset standard image to obtain action deviation degrees, determining the frame images whose action deviation degree exceeds a preset deviation degree as unqualified frame images, and repeating this step until the unqualified frame images in every first image have been screened out;
a fifth processing module, configured to mark deviation actions on each of the unqualified frame images, and store each of the unqualified frame images;
the sixth processing module is used for taking the first practical training record of another piece of trainee information for the preset practical training project;
the seventh processing module is used for taking the trainee information input by the trainee into the training machine as first trainee information when any trainee performs secondary training of a preset training project, and calling out all stored unqualified frame images corresponding to the first trainee information;
the eighth processing module is used for generating and outputting prompt information and displaying the corresponding unqualified frame image when the trainee performs any operation of a sub-process determined to be unqualified in the last practical training, so that the trainee practices the operation action of that sub-process again, this step being repeated until the secondary practical training is completed;
preset standard data are preset for each sub-process, so that each piece of operation data corresponds to one piece of preset standard data; each piece of operation data has a plurality of first data bits, and the corresponding preset standard data has a second data bit corresponding to each first data bit; comparing each piece of operation data in the first training record with the corresponding preset standard data to determine the operation accuracy of the corresponding sub-process comprises:
S31: taking one piece of operation data as comparison operation data, the preset standard data corresponding to it being taken as comparison standard data;
S32: taking the data in one first data bit of the comparison operation data as first sub-data, and the data in the corresponding second data bit of the comparison standard data as second sub-data;
S33: comparing whether the first sub-data and the second sub-data are identical; if so, the first sub-data is accurate sub-data, otherwise it is inaccurate sub-data;
S34: excluding the first sub-data whose comparison is completed, and executing steps S32 to S33 until all first sub-data have been compared;
S35: calculating the operation accuracy of the sub-process corresponding to the comparison operation data according to the following formula:
P = R / W
wherein P is the operation accuracy, R is the number of accurate sub-data, and W is the number of first sub-data;
S36: excluding the operation data whose operation accuracy has been calculated, and executing steps S31 to S35 until the operation accuracy corresponding to each piece of operation data has been calculated;
presetting a preset standard image corresponding to each sub-process, so that each first image corresponds to one preset standard image, recording an operation image corresponding to a sub-process when practical training of the sub-process is started, and synchronously playing the preset standard image, so that the recorded fragments of the operation image at each time point correspond to the fragments of the preset standard image at each time point, wherein the operation image is a part of the whole process operation image; the comparison of the action deviation degree of the operation action of the trained person in the first image and the operation action of the trained person in the corresponding preset standard image comprises the following steps:
S51: taking a generation time point of inaccurate sub-data corresponding to the first image;
s52: acquiring a first image fragment with a first set time length before a generation time point from the first image, and acquiring a second image fragment with a first set time length before the generation time point from a preset standard image corresponding to the first image;
s53: comparing the action deviation degree of the operation action of the trainee in the first image fragment and the operation action of the trainee in the second image fragment;
the comparison of the action deviation degree of the operation actions of the trainee in the first image segment and the operation actions of the trainee in the second image segment comprises the following steps:
s531: numbering each first frame image in the first image fragment according to the recording sequence, and numbering each second frame image in the second image fragment according to the same numbering mode;
s532: corresponding the first frame image and the second frame image with the same number;
s533: taking a first frame image as a first comparison image, and taking a corresponding second frame image as a second comparison image;
s534: identifying a first trainee profile in the first comparison image and identifying a second trainee profile in the second comparison image;
s535: overlapping the second comparison image with the first comparison image, further determining the coincidence rate of the region in the first trainee outline and the region in the second trainee outline, and determining the action deviation degree according to the coincidence rate;
S536: excluding the compared first frame images, and executing the steps S533 to S535 until the action deviation degree corresponding to each first frame image is obtained;
the action deviation degree is calculated by the following formula:

P = 1 - S / S1

wherein P is the action deviation degree, S is the overlapping area of the region within the first trainee profile and the region within the second trainee profile, S / S1 is the coincidence rate, and S1 is the area of the region within the first trainee profile.
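With the two profile interiors represented as boolean masks, the coincidence rate and the deviation degree follow directly. A sketch assuming the deviation is one minus the coincidence rate, as the variable definitions above suggest (NumPy and the function name `action_deviation` are illustrative choices):

```python
import numpy as np

def action_deviation(first_mask, second_mask):
    """first_mask / second_mask: HxW boolean arrays that are True inside
    the first / second trainee profile (step S534).  Returns the action
    deviation degree P = 1 - S / S1 (step S535).
    """
    overlap = np.logical_and(first_mask, second_mask).sum()  # S: overlapping area
    area_first = first_mask.sum()                            # S1: area of first profile
    if area_first == 0:
        return 1.0  # no trainee region detected: treat as maximal deviation
    coincidence = overlap / area_first                       # coincidence rate S / S1
    return 1.0 - coincidence
```

A perfectly coincident pair of profiles yields a deviation of 0; profiles with no overlap yield 1.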
5. A practical training system, the system comprising:
a plurality of training machines, arranged in the training room, for the trainee to carry out training operations and for collecting training information of the trainee;
a computer device, connected to each training machine, for performing the training room data management method according to any one of claims 1-3.
CN202311668700.3A 2023-12-07 2023-12-07 Training room data management method, device and training system Active CN117373134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311668700.3A CN117373134B (en) 2023-12-07 2023-12-07 Training room data management method, device and training system

Publications (2)

Publication Number Publication Date
CN117373134A CN117373134A (en) 2024-01-09
CN117373134B true CN117373134B (en) 2024-03-26

Family

ID=89406308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311668700.3A Active CN117373134B (en) 2023-12-07 2023-12-07 Training room data management method, device and training system

Country Status (1)

Country Link
CN (1) CN117373134B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510825A (en) * 2018-04-04 2018-09-07 重庆鲁班机器人技术研究院有限公司 Robot practical training method and system
CN111612667A (en) * 2020-07-04 2020-09-01 江苏工程职业技术学院 Management method and device for intelligent training room
CN112085284A (en) * 2020-09-14 2020-12-15 江苏工程职业技术学院 Energy-saving management method and device for training room
CN114998985A (en) * 2022-05-07 2022-09-02 哈尔滨工业大学(深圳) Early warning control method of intelligent experiment table and intelligent experiment table
CN116959216A (en) * 2023-07-31 2023-10-27 深圳市三思试验仪器有限公司 Experimental operation monitoring and early warning method, device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant