CN114663261B - Data processing method suitable for training and examination system - Google Patents

Data processing method suitable for training and examination system

Info

Publication number
CN114663261B
CN114663261B (granted from application CN202210538885.5A)
Authority
CN
China
Prior art keywords
training
trigger
information
area
separation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210538885.5A
Other languages
Chinese (zh)
Other versions
CN114663261A
Inventor
张建仓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tianfang Yetan Network Technology Co ltd
Original Assignee
Flame Blue Zhejiang Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Flame Blue Zhejiang Information Technology Co ltd filed Critical Flame Blue Zhejiang Information Technology Co ltd
Priority to CN202210538885.5A
Publication of CN114663261A
Application granted
Publication of CN114663261B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data processing method suitable for a training assessment system. The method comprises: performing grid division processing on training image-text video frames based on their content to obtain the virtual area grid corresponding to each training image-text video frame; the server generating a virtual grid sequence and sending it to the trigger terminal; the trigger terminal generating a trigger grid sequence according to the correspondence between the trigger area grid and the virtual area grids; the server generating first training assessment information according to the trigger area grid and the triggered target area and sending it to the training assessment terminal; the server generating second training assessment information according to the trigger area grid and the information acquisition layer and sending it to the training assessment terminal; and the training assessment terminal playing the training image-text video data based on the first training assessment information or the second training assessment information. The invention can replay content according to the different behaviors of different employees during employee training so as to reinforce training in a targeted manner, and can also generate a training result to show the training effect.

Description

Data processing method suitable for training and examination system
Technical Field
The invention relates to the technical field of data processing, in particular to a data processing method suitable for a training and examination system.
Background
Internal enterprise training is special training set up by an enterprise according to its own industry characteristics and development conditions, and aims to improve employees' knowledge, skills, working methods, working attitude and the like, thereby promoting the development of the whole enterprise.
Currently, internal enterprise training is usually carried out by organizing on-site training or online teaching. However, neither of these training methods can provide targeted training according to the specific situations of different employees.
Therefore, a data processing method for a training and examination system is urgently needed that can reinforce training in a targeted manner according to the different behaviors of different employees during training, and can show the training effect after training.
Disclosure of Invention
The embodiment of the invention provides a data processing method suitable for a training assessment system, which can replay content according to the different behaviors of different employees during employee training so as to reinforce training in a targeted manner, and can also generate an annotation video to show the training result.
In a first aspect of the embodiments of the present invention, a data processing method suitable for a training assessment system is provided, applied to a system comprising a server, a training assessment terminal and a trigger terminal; data processing is performed through the following steps:
the server determines corresponding training image-text video data according to personnel attribute information of training personnel, extracts training image-text video frames in the training image-text video data, and divides the training image-text video frames into grids based on the content of the training image-text video frames to obtain virtual area grids corresponding to each training image-text video frame;
the server generates a virtual grid sequence according to the time of the training image-text video frame corresponding to each virtual area grid, and sends the virtual grid sequence to the trigger end;
the method comprises the steps that a trigger end classifies a reference trigger point of the trigger end according to a virtual area grid to obtain a plurality of trigger areas to form a trigger area grid, and a trigger grid sequence is generated according to the corresponding relation between the trigger area grid and the virtual area grid;
the training image-text video data are sent to a training assessment end to be played, if the training personnel are judged to trigger at least one target area of the trigger area grids in a first mode, the trigger area grids and the trigger target area are sent to a server, and the server generates first training assessment information according to the trigger area grids and the trigger target area and sends the first training assessment information to the training assessment end;
if the fact that the training personnel trigger the trigger area grids in the second mode is judged, an information acquisition layer is generated at the trigger end, the information acquisition layer and the trigger area grids are sent to a server after input information is received on the basis of the information acquisition layer, and the server generates second training and checking information according to the trigger area grids and the information acquisition layer and sends the second training and checking information to the training and checking end;
and the training assessment end plays the training image-text video data based on the first training assessment information or the second training assessment information.
Optionally, in a possible implementation manner of the first aspect, the extracting of the training image-text video frames from the training image-text video data and performing grid division processing on the training image-text video frames based on their content to obtain the virtual area grid corresponding to each training image-text video frame specifically includes:
obtaining a first quantity value according to the number of background pixel points in a background pixel interval of the training image-text video frame, obtaining a second quantity value according to the number of content pixel points in a content pixel interval of the training image-text video frame, and generating content proportion information according to the first quantity value and the second quantity value;
acquiring length information and width information of the training image-text video frame to obtain length-width ratio information, calculating the current number of length separation bars according to the content proportion information and the reference number of length separation bars, and calculating the current number of width separation bars according to the current number of length separation bars and the length-width ratio information;
and selecting length separation bars and width separation bars corresponding to the current number of length separation bars and the current number of width separation bars, and performing grid division processing on the training image-text video frame to obtain the virtual area grid.
Optionally, in a possible implementation manner of the first aspect, the selecting of length separation bars and width separation bars corresponding to the current number of length separation bars and the current number of width separation bars, and performing grid division processing on the training image-text video frame to obtain the virtual area grid specifically includes:
performing primary separation processing on the training image-text video frame based on the length separation bars and the width separation bars to obtain a plurality of standard separation areas;
if it is determined that separation-bar pixel points of a standard separation area coincide with content pixel points, extracting content information to be confirmed corresponding to the coinciding content pixel points, and determining a first separation ratio and a second separation ratio of the content information to be confirmed in two adjacent standard separation areas, the first separation ratio being larger than the second separation ratio;
performing offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area, so that the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed;
generating a second adjusted separation area corresponding to the second separation ratio while generating the first adjusted separation area, so that the second adjusted separation area corresponding to the second separation ratio no longer includes the content information to be confirmed;
and generating a corresponding virtual area grid based on all the standard separation areas, the first adjusted separation areas and the second adjusted separation areas.
Optionally, in a possible implementation manner of the first aspect, the performing of offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area, so that the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed, specifically includes:
selecting the common separation line of two adjacent standard separation areas, and determining the edge pixel point of the content information to be confirmed in the part of the training image-text video frame corresponding to the standard separation area with the second separation ratio;
and offsetting the position of the edge pixel point, with the edge pixel point as the reference point, to obtain a cut-off pixel point, and offsetting the separation line so that it coincides with the cut-off pixel point, thereby obtaining the first adjusted separation area after offset expansion processing.
Optionally, in a possible implementation manner of the first aspect, the classifying, by the trigger terminal, a reference trigger point of the trigger terminal according to a virtual area grid to obtain a plurality of trigger areas to form a trigger area grid, and generating a trigger grid sequence according to a correspondence between the trigger area grid and the virtual area grid specifically includes:
the triggering end determines a reference triggering point corresponding to the standard separation area, the first adjustment separation area and the second adjustment separation area of the virtual area grid;
classifying all the reference trigger points according to the distribution of the standard separation area, the first adjustment separation area and the second adjustment separation area, and connecting the reference trigger points classified into one class to obtain a plurality of trigger sub-areas;
generating corresponding trigger area grids according to the plurality of trigger subareas, acquiring time sequence information of the training image-text video frame corresponding to the virtual area grids, adding time sequence information to each trigger area grid according to the corresponding relation of the trigger area grids and the virtual area grids, and generating a trigger grid sequence according to the trigger area grids added with the time sequence information.
Optionally, in a possible implementation manner of the first aspect, if it is determined that the training personnel trigger at least one target area of the trigger area grid in the first manner, the trigger area grid and the triggered target area are sent to the server, and the generating, by the server, of first training assessment information according to the trigger area grid and the triggered target area and sending it to the training assessment terminal specifically includes:
after receiving a trigger area grid sent by a trigger end, a server determines a corresponding virtual area grid according to time sequence information corresponding to the trigger area grid;
determining any one or more of a corresponding standard separation region, a first adjustment separation region and a second adjustment separation region in the virtual region grid according to the trigger target region, and using the standard separation region, the first adjustment separation region and the second adjustment separation region as to-be-processed separation regions;
acquiring trigger content information corresponding to the to-be-processed separation area in a training image-text video frame;
extracting training labels of training personnel who trigger the triggering content information in a first mode, and determining triggering times of triggering the same triggering content information in the first mode;
generating a first trigger coefficient according to the training label of the training staff and the triggering times of triggering the same triggering content information in a first mode, and if the first trigger coefficient is larger than a first preset coefficient, generating first training assessment information and sending the first training assessment information to a training assessment end;
the training assessment terminal plays the training image-text video data based on the first training assessment information or the second training assessment information, and specifically comprises the following steps:
and the training assessment end plays back the training image-text video data based on the first training assessment information.
Optionally, in a possible implementation manner of the first aspect, the generating of a first trigger coefficient according to the training labels of the training personnel and the number of triggers of the same trigger content information in the first manner, and, if the first trigger coefficient is greater than a first preset coefficient, generating first training assessment information and sending it to the training assessment terminal specifically includes:
acquiring a training information set corresponding to the training label, wherein the training information set comprises at least one type of associated training information and a training coefficient of each associated training information;
extracting all training coefficients corresponding to each piece of the same trigger content information and the number of triggers of the same trigger content information to generate a first trigger coefficient, the first trigger coefficient being calculated by the following formula,
P_1 = k_1 \sum_{i=1}^{c} x_i + k_2 \cdot \frac{c}{m}
wherein P_1 is the first trigger coefficient, x_i is the i-th training coefficient corresponding to the same trigger content information, k_1 is a first normalization coefficient, c is the number of triggers of the same trigger content information, k_2 is a second normalization coefficient, and m is the upper limit value of the number of the same trigger content information;
if the first trigger coefficient P_1 is greater than the first preset coefficient Y_1, taking the training image-text video frame corresponding to the trigger content information as a first video frame, determining a third quantity corresponding to the first video frame according to the first trigger coefficient, selecting second video frames of the third quantity with the first video frame as a starting point, and generating corresponding first training assessment information according to the first video frame and the second video frames.
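As a sketch, the formula above (whose additive form is itself a reconstruction from the symbol definitions, since the original equation images are lost) can be transcribed directly; the normalization coefficients and the upper limit below are illustrative values, not values from the patent:
```python
def first_trigger_coefficient(training_coefficients: list[float],
                              trigger_count: int,
                              k1: float = 0.6, k2: float = 0.4,
                              upper_limit: int = 50) -> float:
    """P1 = k1 * sum(x_i) + k2 * c / m, per the reconstruction above."""
    c = min(trigger_count, upper_limit)   # c is bounded above by the upper limit m
    return k1 * sum(training_coefficients) + k2 * c / upper_limit
```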
Optionally, in a possible implementation manner of the first aspect, the taking of the training image-text video frame corresponding to the trigger content information as a first video frame, determining a third quantity corresponding to the first video frame according to the first trigger coefficient, selecting second video frames of the third quantity with the first video frame as a starting point, and generating corresponding first training assessment information according to the first video frame and the second video frames specifically includes:
comparing the first trigger coefficient with the first preset coefficient to obtain a coefficient deviation value, and calculating based on the coefficient deviation value, the total time of the training image-text video data and the number of standard frames to obtain a third number;
the third quantity is calculated by the following formula,
N = \left\lceil w_1 (P_1 - Y_1) + w_2 \cdot \frac{T}{F} \right\rceil
wherein N is the third quantity, P_1 is the first trigger coefficient, Y_1 is the first preset coefficient, w_1 is a first calculation weight value, T is the total time of the training image-text video data, w_2 is a second calculation weight value, and F is the number of standard frames;
determining, with the first video frame as a starting point, the third quantity of sequentially adjacent second video frames whose time sequence information is earlier than that of the first video frame;
and generating first training assessment information according to the time sequence information respectively corresponding to the first video frame and the second video frame, wherein the first training assessment information is used for enabling the training assessment end to play back the first video frame and the second video frame.
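A sketch of this calculation and of the playback selection follows; it mirrors the reconstructed formula above, and the weight values are illustrative assumptions:
```python
import math

def select_playback_frames(frames: list, first_idx: int,
                           p1: float, y1: float,
                           total_time_s: float, standard_frames: int,
                           w1: float = 5.0, w2: float = 0.2) -> list:
    """Return the first video frame plus the third quantity of sequentially
    adjacent earlier frames, in timeline order."""
    third_quantity = math.ceil(w1 * (p1 - y1) + w2 * total_time_s / standard_frames)
    start = max(0, first_idx - max(third_quantity, 1))
    return frames[start:first_idx + 1]
```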
Optionally, in a possible implementation manner of the first aspect, the playing, by the training assessment terminal, of the training image-text video data based on the first training assessment information or the second training assessment information specifically includes:
the training assessment terminal acquires the image-text video frame annotations corresponding to the second training assessment information, and sorts the annotations according to their time sequence information to generate an annotation image-text video, the annotation image-text video belonging to the training image-text video data;
and the training assessment terminal plays the annotation image-text video.
The invention has the beneficial effects that:
1. The scheme can obtain corresponding training image-text video data according to the different personnel attribute information of the training personnel, and carry out targeted training and assessment on them. In addition, during the training process the scheme considers situations where the training personnel do not understand the content or need an annotation, inputs and processes data in the corresponding manner, and obtains and plays back the playback image-text video and the annotation image-text video of the relevant content; content can thus be replayed according to the different behaviors of different employees during employee training, training assessment is reinforced in a targeted manner, and the training effect can be shown based on the generated annotation image-text video;
2. By setting the virtual area grid, the trigger area grid, the virtual grid sequence and the trigger grid sequence, the trigger end can be synchronized with the training image-text video frames, and the training personnel can perform corresponding synchronous operations on the training image-text video frames directly at the trigger end; the scheme also offsets the separation bars so that the content selected by the training personnel is complete; in addition, the trigger end of the scheme saves cost in scenarios with low precision requirements and is convenient to use;
3. The scheme uses the third quantity to accurately locate and intercept the content to be played back, so that neither too much nor too little is replayed; meanwhile, when the third quantity is calculated, multidimensional data such as the first trigger coefficient and the total time of the training image-text video data are integrated, which makes the third quantity more accurate.
Drawings
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a data processing method suitable for a training assessment system according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present invention, "a plurality" means two or more. "And/or" merely describes an association of associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "Comprises A, B and C" and "comprises A, B, C" mean that all three of A, B and C are included; "comprises A, B or C" means that one of A, B and C is included; "comprises A, B and/or C" means that any one, any two or all three of A, B and C are included.
It should be understood that in the present invention, "B corresponding to a", "a corresponds to B", or "B corresponds to a" means that B is associated with a, and B can be determined from a. Determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information. And the matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold value.
As used herein, "if" may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context.
The technical means of the present invention will be described in detail with reference to specific examples. These several specific embodiments may be combined with each other below, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Referring to fig. 1, a schematic diagram of an application scenario provided by an embodiment of the present invention, the scheme includes a server, a training assessment terminal and a trigger terminal. The server is configured to process the data of the training assessment terminal and the trigger terminal; the training assessment terminal is used by the training teacher to play the training image-text video data; the trigger terminal may be a trigger pad connected to the server by wire or wirelessly, whose size may be a proportional scale-down of the screen on which the training video is played. When training is needed, each member of the training personnel can be issued one trigger pad; the user can trigger the pad, and the server collects the trigger information on the pad in real time. It should be noted that in prior-art training, every employee needs a terminal with a display interface, such as a notebook computer, which is expensive and inconvenient to carry. The trigger pad of the scheme is easy for the user to carry, low in cost, and, where operation precision requirements are low, can interact with the training assessment terminal in real time during the training process.
Referring to fig. 2, which is a schematic flow chart of a data processing method suitable for a training assessment system according to an embodiment of the present invention, an execution subject of the method shown in fig. 2 may be a software and/or hardware device. The execution subject of the present application may include, but is not limited to, at least one of: user equipment, network equipment, etc. The user equipment may include, but is not limited to, a computer, a smart phone, a Personal Digital Assistant (PDA), the above mentioned electronic equipment, and the like. The network device may include, but is not limited to, a single network server, a server group of multiple network servers, or a cloud of numerous computers or network servers based on cloud computing, wherein cloud computing is one type of distributed computing, a super virtual computer consisting of a cluster of loosely coupled computers. The present embodiment does not limit this. The method comprises steps S1 to S6, and specifically comprises the following steps:
S1, the server determines corresponding training image-text video data according to the personnel attribute information of the training personnel, extracts the training image-text video frames in the training image-text video data, and performs grid division processing on the training image-text video frames based on their content to obtain the virtual area grid corresponding to each training image-text video frame.
It can be understood that internal enterprise training is special training set up by an enterprise according to its own industry characteristics and development conditions, and aims to improve employees' knowledge, skills, working methods, working attitude and the like, thereby promoting the development of the whole enterprise.
In order to train different employees in a targeted manner, the server in this scheme determines the corresponding training image-text video data according to the personnel attribute information of the training personnel.
The personnel attribute information may be the person's job attribute; for example, if employee A is a manager, the corresponding management training image-text video data is determined to train employee A, thereby realizing targeted training of employee A.
The training image-text video data may be PPT training data; for example, a presentation with 10 slides of content corresponds to 10 video frames.
In other embodiments, the training permission information corresponding to the user can be acquired after the user logs in, corresponding training image-text video data are screened from a preset database by using the training permission information, and training personnel are trained by using the training image-text video data. It can be understood that the scheme can carry out targeted training according to different positions of the user or different knowledge levels by setting corresponding training image-text video data for the user, so that the condition that the training is not suitable is avoided.
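As a rough illustration of this selection step, a minimal sketch follows; the attribute keys, file names and fallback entry are hypothetical, not from the patent:
```python
# Hypothetical mapping from personnel attribute information to training
# image-text video data; keys and file names are illustrative only.
TRAINING_LIBRARY = {
    "manager": "management_training.mp4",
    "engineer": "engineering_training.mp4",
}

def select_training_video(person_attribute: str) -> str:
    # Fall back to a general course when no targeted material exists.
    return TRAINING_LIBRARY.get(person_attribute, "general_training.mp4")
```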
According to the scheme, after the training image-text video data are obtained, the training image-text video frames in the training image-text video data are extracted, and then the training image-text video frames are divided into grids by using the content of the training image-text video frames to obtain virtual area grids corresponding to each training image-text video frame.
It can be understood that, according to the scheme, the corresponding virtual area grid is obtained according to the content of the training image-text video frame, and the training image-text video frame is divided by using the virtual area grid, which is specifically referred to below.
In some embodiments, the step S1 (extracting the training image-text video frames in the training image-text video data, and performing grid division processing on the training image-text video frames based on their content to obtain the virtual area grid corresponding to each training image-text video frame) includes steps S11 to S13, as follows:
S11, a first quantity value is obtained according to the number of background pixel points in a background pixel interval of the training image-text video frame, a second quantity value is obtained according to the number of content pixel points in a content pixel interval of the training image-text video frame, and content proportion information is generated according to the first quantity value and the second quantity value.
It can be understood that the scheme performs grid division processing on a training image-text video frame using its content to obtain the corresponding virtual area grid: the more content there is, the finer the corresponding virtual area grid; similarly, the less content, the coarser the grid.
According to the scheme, the number of background pixel points in the background pixel interval of the training image-text video frame is counted to obtain the first quantity value, the number of content pixel points in the content pixel interval is counted to obtain the second quantity value, and the content proportion information is calculated from the first quantity value and the second quantity value.
The background pixel points may be, for example, white pixel points, and the content pixel points may be, for example, black pixel points. It can be understood that the more content there is, the larger the second quantity value and the smaller the first quantity value. By counting the numbers of the different kinds of pixel points, the scheme can calculate the content proportion information quickly and accurately.
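A minimal sketch of this step is given below, assuming a grayscale frame in which near-white pixels form the background pixel interval and near-black pixels form the content pixel interval; the interval bounds and the exact ratio definition are assumptions:
```python
import numpy as np

def content_proportion(frame: np.ndarray) -> float:
    """Content proportion of a frame: content pixels over all counted pixels."""
    gray = frame if frame.ndim == 2 else frame.mean(axis=2)
    first_quantity = np.count_nonzero(gray >= 200)   # background pixel interval (assumed bound)
    second_quantity = np.count_nonzero(gray <= 60)   # content pixel interval (assumed bound)
    return second_quantity / max(first_quantity + second_quantity, 1)
```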
S12, length information and width information in the training image-text video frame are obtained to obtain length-width ratio information, the current number of length separation columns is obtained through calculation according to the content ratio information and the length separation column reference number, and the current number of width separation columns is obtained through calculation according to the current number of length separation columns and the length-width ratio information.
In the scheme, in order to separate the training image-text video frame, separation information needs to be obtained, and the training image-text video frame is separated by utilizing the separation information.
Firstly, the scheme acquires the length information and the width information of the training image-text video frame; for example, the length information may be 80cm and the width information 40cm, in which case the length-width ratio information calculated from them is 2:1.
After the length-width ratio information is obtained, the present solution calculates the current number of length separation bars by using the content proportion information and the reference number of length separation bars from step S11.
The length separation bars are vertical and divide the training image-text video frame in the length direction. The reference number of length separation bars may be, for example, 3, and this reference number is adjusted in view of the content proportion information to obtain the current number of length separation bars.
For example, larger content proportion information indicates more content; in this case the reference number of length separation bars may be increased according to the content proportion information, so that the current number of length separation bars becomes, for example, 4 instead of 3.
According to the scheme, the current number of length separation bars follows the amount of content: when there is more content, the number of length separation bars is increased and the video frame is divided more finely, laying a foundation for accurate extraction of content later.
After the current number of length separation bars is obtained, the current number of width separation bars is calculated by using the current number of length separation bars and the length-width ratio information.
It can be understood that if the length-width ratio information is, for example, 2:1 and the current number of length separation bars is 4, the current number of width separation bars may be 2, so that the video frame is divided reasonably.
It should be noted that the current number of length separation bars and the current number of width separation bars need to be integers; if a value is not an integer, it may be rounded, for example 2.1 is rounded to 2.
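The calculation described above might look as follows; the linear scaling rule and the rounding are assumptions chosen to be consistent with the example of the reference number 3 growing to 4:
```python
def current_bar_counts(content_proportion_info: float,
                       length_info: float, width_info: float,
                       length_bar_reference: int = 3) -> tuple[int, int]:
    aspect_ratio = length_info / width_info            # e.g. 80 / 40 -> 2.0
    # More content -> more length separation bars (assumed linear rule).
    n_length = round(length_bar_reference * (1.0 + content_proportion_info))
    # The width separation bars follow from the length bars and the aspect ratio.
    n_width = round(n_length / aspect_ratio)           # e.g. round(4 / 2.0) -> 2
    return max(n_length, 1), max(n_width, 1)
```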
S13, length separation bars and width separation bars corresponding to the current number of length separation bars and the current number of width separation bars are selected, and grid division processing is performed on the training image-text video frame to obtain the virtual area grid.
In step S12, the current numbers of length and width separation bars are obtained; the corresponding length and width separation bars are then generated to divide the video frame, so as to obtain the corresponding virtual area grid.
In some embodiments, the step S13 (selecting length separation bars and width separation bars corresponding to the current number of length separation bars and the current number of width separation bars, and performing grid division processing on the training image-text video frame to obtain the virtual area grid) includes steps S131 to S135, as follows:
S131, primary separation processing is performed on the training image-text video frame based on the length separation bars and the width separation bars to obtain a plurality of standard separation areas.
Illustratively, when there are 2 length separation bars and 2 width separation bars, the training image-text video frame is divided into 9 standard separation areas.
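One way to realize the primary separation, assuming evenly spaced bars, is sketched below; with 2 length separation bars and 2 width separation bars it yields exactly the 9 standard separation areas of the example:
```python
def standard_separation_areas(length_px: int, width_px: int,
                              n_length_bars: int, n_width_bars: int):
    """Split a frame into (n_length_bars + 1) x (n_width_bars + 1) rectangles."""
    xs = [round(i * length_px / (n_length_bars + 1)) for i in range(n_length_bars + 2)]
    ys = [round(j * width_px / (n_width_bars + 1)) for j in range(n_width_bars + 2)]
    return [(xs[i], ys[j], xs[i + 1], ys[j + 1])       # (x0, y0, x1, y1)
            for j in range(n_width_bars + 1) for i in range(n_length_bars + 1)]

# len(standard_separation_areas(800, 400, 2, 2)) == 9
```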
S132, if it is determined that separation-bar pixel points of a standard separation area coincide with content pixel points, content information to be confirmed corresponding to the coinciding content pixel points is extracted, and a first separation ratio and a second separation ratio of the content information to be confirmed in two adjacent standard separation areas are determined, the first separation ratio being larger than the second separation ratio.
The scheme considers that, during separation processing, a continuous piece of content may be split across two areas; for example, a letter "A" may be split across two areas. To avoid affecting subsequent content selection, the scheme handles this situation.
It can be understood that if, for example, a letter "A" is split across two areas, the letter "A" belongs to the content pixel point set, and separation-bar pixel points then coincide with content pixel points. In this case, the scheme extracts the content information to be confirmed corresponding to the coinciding content pixel points, that is, the letter "A" is taken as the content information to be confirmed.
In order to classify and process the content information to be confirmed, a first separation ratio and a second separation ratio of the content information to be confirmed in two adjacent standard separation areas are obtained, and the first separation ratio is limited to be larger than the second separation ratio.
For example, the first division ratio of the letter "a" in the first standard divided region may be 0.7, and the second division ratio in the second standard divided region may be 0.3.
In practical applications, when obtaining the first separation ratio and the second separation ratio, the scheme may use pixel counts for the determination; for example, the letter "A" may occupy 70 content pixel points in the first standard separation area and 30 in the second standard separation area.
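That pixel-count determination can be transcribed directly, treating the larger share as the first separation ratio, per the 70/30 example:
```python
def separation_ratios(pixels_in_area_1: int, pixels_in_area_2: int):
    """Return (first, second) separation ratios; the first is the larger one."""
    total = pixels_in_area_1 + pixels_in_area_2        # e.g. 70 + 30
    r1, r2 = pixels_in_area_1 / total, pixels_in_area_2 / total
    return (r1, r2) if r1 >= r2 else (r2, r1)          # e.g. (0.7, 0.3)
```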
S133, offset expansion processing of secondary separation is performed on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area, so that the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed.
After the first and second separation ratios are obtained in step S132, and in order that a piece of content information to be confirmed (for example a letter "A", or a word such as "training") ends up in a single area, the scheme performs offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area that completely includes the content information to be confirmed. A user who later selects this content therefore obtains the complete content with nothing missing.
It should be noted that the scheme lets the first adjusted separation area corresponding to the first separation ratio (the larger ratio) include the content information to be confirmed because this keeps the adjustment range of the separation line small and reduces the influence on other characters.
In some embodiments, the performing of offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area, so that the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed, specifically includes:
selecting a common separation line at two adjacent standard separation areas, and determining edge pixel points of content information to be confirmed at the training image-text video frame corresponding to the standard separation area with the second separation ratio;
and offsetting the position of the edge pixel point by taking the edge pixel point as a datum point to obtain a cut-off pixel point, and offsetting the separation line to enable the separation line to coincide with the cut-off pixel point to obtain a first adjustment separation area after offset expansion processing.
It can be understood that, to perform the offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio and obtain the first adjusted separation area, the common separation line of the two adjacent standard separation areas is selected and its position is shifted, for example upward, downward, leftward or rightward, until the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed.
Firstly, the scheme determines the edge pixel point of the content information to be confirmed in the part of the training image-text video frame corresponding to the standard separation area with the second separation ratio; the edge pixel point is the outermost pixel point of the content information to be confirmed within that area.
Illustratively, if the right 30% of the letter "A" lies in the standard separation area of the second separation ratio, the edge pixel point is the pixel point at the bottom right corner of the letter "A".
Then, so that a margin remains and the separation line does not sit flush against the content information to be confirmed, the position of the edge pixel point is offset, with the edge pixel point as the reference point, to obtain a cut-off pixel point.
Finally, taking the cut-off pixel point as the reference, the separation line is offset until it coincides with the cut-off pixel point, yielding the first adjusted separation area after offset expansion processing; at this point the content information to be confirmed lies completely within the first adjusted separation area.
It should be noted that the separation line adjusted here is the local separation line of the adjusted separation area, not a whole length or width separation bar; that is, the scheme shifts the position of a shorter separation line (for example, one third of a length or width separation bar), which reduces the influence on the content of other areas when the line's position changes.
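A sketch of the offset expansion follows, assuming a vertical shared separation line and a fixed pixel margin (both assumptions of this sketch):
```python
def offset_expand_line(line_x: int, content_edge_x: int, margin_px: int = 2) -> int:
    """Shift a shared vertical separation line so that the first adjusted
    separation area fully contains the content information to be confirmed.

    content_edge_x is the outermost edge pixel of the content that spilled
    into the second-ratio area; the cut-off pixel keeps a small margin so the
    line does not sit flush against the content.
    """
    direction = 1 if content_edge_x >= line_x else -1
    cutoff_x = content_edge_x + direction * margin_px  # cut-off pixel point
    return cutoff_x                                    # new local line position
```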
S134, a second adjusted separation area corresponding to the second separation ratio is generated while the first adjusted separation area is generated, so that the second adjusted separation area corresponding to the second separation ratio no longer includes the content information to be confirmed.
It can be understood that after the adjustment of the partition line in step S133, the content information to be confirmed is completely located in the first adjustment partition region, and at this time, the second adjustment partition region corresponding to the second partition ratio does not include the content information to be confirmed any more.
S135, a corresponding virtual area grid is generated based on all the standard separation areas, the first adjusted separation areas and the second adjusted separation areas.
Using the standard separation areas, the first adjusted separation areas and the second adjusted separation areas, the scheme obtains a virtual area grid matched to the content of the video frame. Subsequent user operations take the virtual area grid as the reference, which ensures that the user extracts video frame content accurately.
S2, the server generates a virtual grid sequence according to the time of the training image-text video frame corresponding to each virtual area grid, and sends the virtual grid sequence to the trigger terminal.
After the virtual area grid is obtained in step S1, the scheme obtains a virtual grid sequence containing time, and sends the virtual grid sequence to the trigger end.
Illustratively, if the training video lasts 100 s and contains 10 training image-text video frames, each frame may correspond to 10 s, and a virtual grid sequence may be generated for the 10 training image-text video frames accordingly: the first image-text video frame corresponds to 0-10 s, the second to 10-20 s, and so on.
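Generating such a timed sequence might look like the following sketch, which assumes frames of equal duration as in the 100 s / 10 frame example:
```python
def virtual_grid_sequence(virtual_grids: list, total_time_s: float) -> list:
    """Attach a time window to each frame's virtual area grid."""
    per_frame = total_time_s / len(virtual_grids)      # e.g. 100 / 10 = 10 s
    return [{"grid": grid,
             "t_start": i * per_frame,
             "t_end": (i + 1) * per_frame}
            for i, grid in enumerate(virtual_grids)]
```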
The trigger end in the scheme can be a trigger pad which is in wireless or wired connection with the server, the size of the trigger pad can be reduced according to the proportion of a screen for playing a training video, when training is needed, one trigger pad can be distributed to each person of training staff, a user can trigger the trigger pad, and the server can acquire trigger information on the trigger pad in real time.
It should be noted that in prior-art training, every employee needs a terminal with a display interface, such as a notebook computer, which is expensive and inconvenient to carry. The trigger pad, by contrast, is easy to carry, low in cost, and can interact with the training assessment terminal in real time during the training process.
S3, the trigger terminal classifies its reference trigger points according to the virtual area grid to obtain a plurality of trigger areas forming a trigger area grid, and a trigger grid sequence is generated according to the correspondence between the trigger area grid and the virtual area grid.
In order to realize synchronization of the trigger terminal and the training video, after the trigger terminal receives the virtual area grids, the virtual area grids are used for classifying the reference trigger points of the trigger terminal to obtain a plurality of trigger areas to form trigger area grids, and a trigger grid sequence is generated according to the corresponding relation between the trigger area grids and the virtual area grids.
It can be understood that, when a user wants to perform a synchronization operation (e.g., selecting or the like) on content on a training video, the user first needs to obtain a corresponding trigger area grid corresponding to a virtual area grid, and obtain a trigger grid sequence corresponding to the virtual grid sequence.
In some embodiments, the trigger end classifies the reference trigger point of the trigger end according to the virtual area grid to obtain a plurality of trigger areas to form the trigger area grid, and the generating of the trigger grid sequence according to the correspondence between the trigger area grid and the virtual area grid includes S31 to S33, which are as follows:
S31, the trigger terminal determines the reference trigger points corresponding to the standard separation areas, the first adjusted separation areas and the second adjusted separation areas of the virtual area grid.
First, the trigger terminal determines a reference trigger point corresponding to the standard partition area, the first adjusted partition area and the second adjusted partition area of the virtual area grid.
It can be understood that numerous trigger points are densely distributed on the trigger end, and the trigger operations performed by the user generate corresponding trigger data in real time.
S32, all the reference trigger points are classified according to the distribution of the standard separation areas, the first adjusted separation areas and the second adjusted separation areas, and the reference trigger points classified into one class are connected to obtain a plurality of trigger sub-areas.
According to the scheme, all the reference trigger points are classified, and the reference trigger points classified into one class are connected to obtain a plurality of trigger subregions.
For example, the trigger end may be divided into 9 trigger sub-areas according to the distribution of the standard separation area, the first adjustment separation area and the second adjustment separation area, and the reference trigger point in each trigger sub-area is a type of reference trigger point.
S33, corresponding trigger area grids are generated according to the plurality of trigger sub-areas, the time sequence information of the training image-text video frames corresponding to the virtual area grids is acquired, time sequence information is added to each trigger area grid according to the correspondence between the trigger area grids and the virtual area grids, and a trigger grid sequence is generated from the trigger area grids with the added time sequence information.
According to the scheme, after the corresponding trigger area grids are generated from the plurality of trigger sub-areas, the time sequence information of the training image-text video frame corresponding to the virtual area grid is looked up and added to the trigger area grids, and a trigger grid sequence is generated. This synchronizes the trigger end with the training image-text video frames, so that if a user has a doubt about the current content during training, the user can perform a synchronous operation at the trigger end, such as adding an annotation or a handwritten note.
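As a sketch, assuming the trigger pad is a proportional scale-down of the playback screen, each region of a timed virtual area grid can be mapped to one trigger sub-area; the scale factor and the data layout are assumptions of this sketch:
```python
def trigger_grid_sequence(virtual_sequence: list, pad_scale: float) -> list:
    """Scale every virtual area region onto the trigger pad and copy its
    time sequence information, yielding the trigger grid sequence."""
    sequence = []
    for entry in virtual_sequence:
        trigger_grid = [tuple(round(c * pad_scale) for c in region)
                        for region in entry["grid"]]   # one trigger sub-area per region
        sequence.append({"grid": trigger_grid,
                         "t_start": entry["t_start"],
                         "t_end": entry["t_end"]})
    return sequence
```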
S4, the training image-text video data are sent to the training assessment terminal for playing; if it is determined that a training person triggers at least one target area of the trigger area grid in the first manner, the trigger area grid and the triggered target area are sent to the server, and the server generates first training assessment information according to the trigger area grid and the triggered target area and sends it to the training assessment terminal.
It can be understood that when a training teacher conducts training, the training assessment terminal can be used to play the training image-text video data; if the training personnel have questions during training, they can operate the trigger terminal in the first manner or the second manner, and the server processes the operation data to obtain corresponding training information and sends it to the training assessment terminal.
In practical application, the trigger end may be provided with a selection button, the operation performed after the user selects the first button may be an operation corresponding to the first mode, and the operation performed after the user selects the second button may be an operation corresponding to the second mode.
In step S4, if a training person does not understand the content of the current training image-text video frame, the person may trigger at least one target area of the trigger area grid in the first manner to select the corresponding content area on the training image-text video frame; the trigger area grid and the triggered target area are then sent to the server, and the server generates first training assessment information and sends it to the training assessment terminal.
In some embodiments, if it is determined that the training staff triggers at least one target area of the trigger area grid in the first manner, the triggering area grid and the trigger target area are sent to the server, and the step of generating, by the server, first training assessment information according to the trigger area grid and the trigger target area and sending the first training assessment information to the training assessment terminal specifically includes:
after receiving the trigger area grids sent by the trigger end, the server determines corresponding virtual area grids according to the time sequence information corresponding to the trigger area grids.
It can be understood that after the training personnel click the trigger area grid, the server can find the corresponding virtual area grid by using the time sequence information corresponding to the trigger area grid, so as to realize the synchronization of the trigger end operation and the training image-text video frame operation.
And determining any one or more of the corresponding standard separation region, the first adjustment separation region and the second adjustment separation region in the virtual region grid according to the trigger target region, and taking the standard separation region, the first adjustment separation region and the second adjustment separation region as the separation regions to be processed.
And acquiring trigger content information corresponding to the to-be-processed separation area in the training image-text video frame.
That is, after a training person clicks the trigger area grid, the triggered target area is used to determine any one or more of the standard separation area, first adjustment separation area and second adjustment separation area corresponding to the virtual area grid as the separation area to be processed, and then the trigger content information of the separation area to be processed in the corresponding training image-text video frame is found.
Training labels of the training personnel who triggered the trigger content information in the first manner are extracted, and the number of triggers of all of the same trigger content information in the first manner is determined.
It is understood that there may be a plurality of training staff, for example 10 persons, and their training labels may differ. A training label may be a numerical value, for example 1, 2 or 3, classified according to job type or job level; for example, the higher the job level, the higher the numerical value of the corresponding training label.
It will also be appreciated that if multiple individuals do not understand the content at one location during training, the number of times the same trigger content information is triggered in the first manner is counted.
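A short sketch of counting how often the same trigger content information is triggered in the first manner (the `trigger_events` records and their field names are assumptions for illustration):

```python
from collections import Counter

# Each event records a first-manner trigger: who triggered, their training
# label, and which content area of which frame was triggered.
trigger_events = [
    {"person": "p01", "label": 1, "content_id": "frame12:regionA"},
    {"person": "p02", "label": 2, "content_id": "frame12:regionA"},
    {"person": "p03", "label": 1, "content_id": "frame30:regionC"},
]

# Number of triggers per piece of identical trigger content information
trigger_counts = Counter(event["content_id"] for event in trigger_events)
print(trigger_counts["frame12:regionA"])  # -> 2
```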
And generating a first trigger coefficient according to the training labels of the training staff and the number of triggers of the same trigger content information in the first manner; if the first trigger coefficient is larger than a first preset coefficient, generating first training assessment information and sending it to the training assessment terminal.
According to this scheme, after the training labels and the trigger counts are obtained, a first trigger coefficient is calculated so that its magnitude can be used to judge whether the current training video needs to be played back. If the first trigger coefficient is greater than the first preset coefficient, either many people did not understand the content or the weighted evaluation of those who did not understand is large; first training assessment information can then be generated and sent to the training assessment terminal, and this information may consist of the training image-text video frames of the corresponding content.
The playing of the training image-text video data by the training assessment terminal based on the first training assessment information or the second training assessment information specifically comprises the following step: the training assessment terminal plays back the training image-text video data based on the first training assessment information.
It can be understood that when it is judged that many people did not understand part of the training video, the training assessment terminal replays the training image-text video data according to the corresponding first training assessment information, and the trainer can explain it once more. Because the replay is driven by the different behaviors of different staff during training, the reinforcement is targeted, which helps to ensure training quality.
In other embodiments, the step of generating a first trigger coefficient according to the training labels of the training staff and the number of triggers of the same trigger content information in the first manner, and, if the first trigger coefficient is greater than a first preset coefficient, generating first training assessment information and sending it to the training assessment terminal specifically includes:
A training information set corresponding to the training label is acquired, wherein the training information set comprises at least one type of associated training information and a training coefficient for each piece of associated training information.

During training, different training personnel can have different training labels, such as a staff label, a group leader label or a supervisor label. After each training label is obtained, the invention acquires the corresponding training information set, which comprises at least one type of associated training information; for example, the staff label corresponds to one type of associated training information and the group leader label to another. Each piece of associated training information comprises a corresponding training coefficient, for example 3 for the supervisor label, 2 for the group leader label and 1 for the staff label; generally speaking, the higher the job level, the larger the corresponding training coefficient may be.
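A minimal sketch of such a training information set, using the example coefficients above (the dictionary structure and names are assumptions):

```python
# Training coefficient of the associated training information per training label,
# mirroring the example values above (supervisor 3, group leader 2, staff 1).
TRAINING_INFO_SET = {
    "staff": 1,
    "group_leader": 2,
    "supervisor": 3,
}

def training_coefficient(label: str) -> int:
    """Look up the training coefficient associated with a training label."""
    return TRAINING_INFO_SET[label]
```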
All training coefficients corresponding to each piece of identical trigger content information and the number of triggers of that information are extracted to generate a first trigger coefficient, which is calculated by the following formula:

$$H = \mu_1 \sum_{i=1}^{d} k_i + \mu_2 \cdot \frac{d}{d_{\max}}$$

wherein $H$ is the first trigger coefficient, $k_i$ is the training coefficient corresponding to the $i$-th trigger of the same trigger content information, $\mu_1$ is a first normalization coefficient, $d$ is the number of triggers of the same trigger content information, $\mu_2$ is a second normalization coefficient, and $d_{\max}$ is an upper limit value of the number of identical trigger content messages.

When the first trigger coefficient is calculated, a comprehensive calculation is carried out over two dimensions: the sum of all training coefficients and the number of triggers. A larger sum of training coefficients proves either that the people who did not understand hold relatively higher job levels, or that more people did not understand; the sum $\sum_i k_i$ is therefore in direct proportion to the first trigger coefficient $H$. The number of triggers $d$ directly reflects how many people did not understand, so $d$ is likewise in direct proportion to $H$. Because the training coefficients and the trigger count are fused in the calculation, the first trigger coefficient $H$ obtained in this way is relatively well suited to the current computing scenario.
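A hedged sketch of this calculation under the reconstructed form above, H = μ1·Σk_i + μ2·(d/d_max); the function name, default weights and threshold value are illustrative assumptions:

```python
def first_trigger_coefficient(coeffs, mu1=1.0, mu2=1.0, d_max=50):
    """First trigger coefficient: sum of the training coefficients of all
    first-manner triggers on the same content, plus the normalized count.

    coeffs -- training coefficients k_i, one per trigger of this content
    d_max  -- upper limit of the number of identical trigger content messages
    """
    d = len(coeffs)  # number of triggers of the same trigger content information
    return mu1 * sum(coeffs) + mu2 * d / d_max

# Example: one supervisor (3) and two staff (1) did not understand the same spot.
H = first_trigger_coefficient([3, 1, 1])
if H > 3.0:  # first preset coefficient (assumed value)
    print("generate first training assessment information and replay")
```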
If the first trigger coefficient $H$ is greater than the first preset coefficient $H_0$, the training image-text video frame corresponding to the trigger content information is taken as a first video frame, a third quantity corresponding to the first video frame is determined according to the first trigger coefficient, second video frames corresponding to the third quantity are selected with the first video frame as a starting point, and corresponding first training assessment information is generated according to the first video frame and the second video frames.

It can be understood that this scheme decides to play back the corresponding content when the first trigger coefficient $H$ is judged to be greater than the first preset coefficient $H_0$.
When playback of the corresponding content is decided, this scheme takes the training image-text video frame corresponding to the trigger content information as the first video frame, and then determines the third quantity for this first video frame using the first trigger coefficient obtained by the above formula; the third quantity may be, for example, 2.
After the third quantity is determined, this scheme selects the second video frames corresponding to the third quantity (for example, 2) with the first video frame as the starting point, and intercepts the video frames between the first video frame and the second video frames to generate the corresponding first training assessment information. In this way the content that the training staff did not understand can be accurately located, and the replayed content matches the places they did not understand.
In still other embodiments, the taking the training teletext video frame corresponding to the trigger content information as a first video frame, determining a third number corresponding to the first video frame according to the first trigger coefficient, selecting a second video frame corresponding to the third number with the first video frame as a starting point, and generating corresponding first training assessment information according to the first video frame and the second video frame specifically includes:
comparing the first trigger coefficient with the first preset coefficient to obtain a coefficient deviation value, and calculating based on the coefficient deviation value, the total time of the training image-text video data and the number of standard frames to obtain a third number;
the third quantity is calculated by the following formula,
Figure 945509DEST_PATH_IMAGE010
wherein the content of the first and second substances,
Figure 392671DEST_PATH_IMAGE011
in order to be the third number of,
Figure 847923DEST_PATH_IMAGE009
is a first preset coefficient, and is a second preset coefficient,
Figure 897481DEST_PATH_IMAGE012
in order to calculate the weight value for the first time,
Figure 463591DEST_PATH_IMAGE013
in order to train the total time of the teletext video data,
Figure 30839DEST_PATH_IMAGE014
is a second meterCalculating the weight value of the weight value,
Figure 656992DEST_PATH_IMAGE015
is the standard number of frames.
The basic concept of the above calculation formula is as follows: this scheme considers that the larger the difference between the first trigger coefficient $H$ and the first preset coefficient $H_0$, the more people did not understand, so the term $w_1 \cdot (H - H_0)$ serves as a first offset coefficient that shifts the third quantity upward, making the intercepted playback content longer. Meanwhile, the scheme considers that the longer the total time $T$ of the training image-text video data, the more video frames are available, so the term $w_2 \cdot T / F$ serves as a second offset coefficient that likewise enlarges the third quantity. The first calculation weight value $w_1$ and the second calculation weight value $w_2$ may be set manually.

According to this scheme, multidimensional data are considered comprehensively, and the third quantity is offset by the first offset coefficient and the second offset coefficient, so that a more accurate third quantity is calculated and the played-back content is neither excessive nor incomplete.
Second video frames whose time sequence information is smaller than that of the first video frame, which are sequentially adjacent to it and equal in number to the third quantity, are determined with the first video frame as the starting point.

First training assessment information is then generated according to the time sequence information corresponding to the first video frame and the second video frames respectively; this information is used to make the training assessment terminal play back the first video frame and the second video frames.
It is understood that after the third quantity is obtained, the second video frames corresponding to the third quantity and sequentially adjacent to the first video frame can be found by searching forward (toward earlier frames) by the third quantity. First training assessment information to be played back is then generated using the time sequence information corresponding to the first video frame and the second video frames respectively.
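A combined sketch of the third-quantity calculation and the backward frame selection, assuming the reconstructed formula N = w1·(H − H0) + w2·(T/F); all names, weights and the rounding choice are illustrative:

```python
import math

def third_quantity(H, H0, total_time_s, standard_frames, w1=1.0, w2=0.01):
    """Third quantity: offset by the coefficient deviation (H - H0) and by
    the video length expressed in standard-frame units (T / F)."""
    n = w1 * (H - H0) + w2 * (total_time_s / standard_frames)
    return max(1, math.ceil(n))  # replay at least one preceding frame

def playback_frames(frames, first_idx, n):
    """Select the first video frame plus the n sequentially adjacent frames
    with smaller time sequence information (the n frames just before it)."""
    start = max(0, first_idx - n)
    return frames[start:first_idx + 1]

frames = [f"frame{i}" for i in range(100)]  # ordered by time sequence information
n = third_quantity(H=3.4, H0=3.0, total_time_s=600, standard_frames=25)
print(playback_frames(frames, first_idx=42, n=n))  # ['frame41', 'frame42']
```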
S5, if it is determined that the training staff triggers the trigger area grid in a second manner, an information acquisition layer is generated at the trigger end; after the information acquisition layer receives input information, the information acquisition layer and the trigger area grid are sent to the server, and the server generates second training assessment information according to the trigger area grid and the information acquisition layer and sends it to the training assessment terminal.
In some embodiments, the step in which, if it is determined that the training staff triggers the trigger area grid in the second manner, an information acquisition layer is generated at the trigger end, the information acquisition layer and the trigger area grid are sent to the server after input information is received on the information acquisition layer, and the server generates second training assessment information according to the trigger area grid and the information acquisition layer and sends it to the training assessment terminal specifically includes:
If the trigger end judges that the training personnel trigger the trigger area grid in the second manner, an information acquisition layer with the same specification as the trigger area grid is generated.
Unlike step S4, this scheme addresses the case where the training personnel need to make corresponding remarks or notes on the training frame; in that case, the training personnel perform the triggering operation on the trigger area grid in the second manner.
Generating a corresponding trigger trace at the information acquisition layer according to the trigger action of the input information on the reference trigger point;
sending the trigger trace and the trigger area grid included in the information acquisition layer to a server;
the server generates a picture-text video frame annotation corresponding to each training picture-text video frame according to a trigger trace included in each information acquisition layer, and obtains corresponding second training assessment information based on the picture-text video frame annotation.
In this scheme, when it is detected that the training personnel trigger the trigger area grid in the second manner, an information acquisition layer is generated. After receiving input information through the information acquisition layer, the user sends the information acquisition layer and the trigger area grid to the server; the server obtains the information in the acquisition layer and in the trigger area grid, generates the corresponding second training assessment information, and sends it to the training assessment terminal.
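A minimal sketch of an information acquisition layer that records trigger traces over the reference trigger points (the class structure and field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AcquisitionLayer:
    """Overlay with the same specification (size) as the trigger area grid;
    collects the trace of second-manner trigger actions."""
    width: int
    height: int
    timestamp_ms: int                           # timing of the underlying frame
    trace: list = field(default_factory=list)   # (x, y) reference trigger points

    def record(self, x: int, y: int) -> None:
        """Append a trigger action on a reference trigger point to the trace."""
        if 0 <= x < self.width and 0 <= y < self.height:
            self.trace.append((x, y))

layer = AcquisitionLayer(width=16, height=9, timestamp_ms=84_000)
layer.record(4, 2)   # user writes a small note near this point
layer.record(5, 2)
# layer.trace and the trigger area grid are then sent to the server
```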
In some embodiments, the playing of the training image-text video data by the training assessment terminal based on the first training assessment information or the second training assessment information specifically includes:
the training assessment terminal obtains the image-text video frame annotations corresponding to the second training assessment information, and the annotations are sorted according to their corresponding time sequence information to generate an annotation image-text video, wherein the annotation image-text video belongs to the training image-text video data;

and the training assessment terminal plays the annotation image-text video.
It can be understood that this scheme not only can replay certain content to reinforce the training effect in a targeted way, but can also play back the annotations of each training person, so that every training person can observe the learning situation of the others, which improves training quality.
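As a sketch of assembling the annotation image-text video, the frame annotations are simply ordered by their time sequence information (the record structure is an assumption):

```python
# Each annotation pairs a frame's time sequence information with its traces.
annotations = [
    {"timestamp_ms": 120_000, "traces": [(7, 3)]},
    {"timestamp_ms": 84_000,  "traces": [(4, 2), (5, 2)]},
    {"timestamp_ms": 96_000,  "traces": [(1, 8)]},
]

# The annotation image-text video is the annotations played in time order.
annotation_video = sorted(annotations, key=lambda a: a["timestamp_ms"])
for a in annotation_video:
    print(a["timestamp_ms"], a["traces"])
```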
It should be noted that the annotations of the training staff in this embodiment are intended to be simple ones, such as marks and remarks of a few characters; the scheme is not suited to annotations containing many characters.
And S6, the training assessment terminal plays the training image-text video data based on the first training assessment information or the second training assessment information.
After the first training assessment information or the second training assessment information has been obtained through steps S1-S5, the corresponding playback operation of step S6 can be performed, so that certain content can be retrained and training quality is ensured.
Referring to fig. 3, which is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention, the electronic device 30 includes: a processor 31, a memory 32 and a computer program; wherein
A memory 32 for storing the computer program; the memory may be, for example, a flash memory. The computer program is, for example, an application program or a functional module that implements the above method.
A processor 31 for executing the computer program stored in the memory to implement the steps performed by the apparatus in the above method. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 32 may be separate or integrated with the processor 31.
When the memory 32 is a device independent of the processor 31, the apparatus may further include:
a bus 33 for connecting the memory 32 and the processor 31.
The present invention also provides a storage medium having a computer program stored therein, the computer program being executable by a processor to implement the methods provided by the various embodiments described above.
The storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media can be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the storage medium may also reside as discrete components in a communication device. The storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a storage medium. The at least one processor of the device may read the execution instructions from the storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the embodiments of the terminal or the server, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor, or performed by a combination of hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A data processing method suitable for a training assessment system, characterized in that the system comprises a server, a training assessment terminal and a trigger end, and data processing is performed through the following steps:
the server determines corresponding training image-text video data according to personnel attribute information of training personnel, extracts training image-text video frames in the training image-text video data, and divides the training image-text video frames into grids based on the content of the training image-text video frames to obtain virtual area grids corresponding to each training image-text video frame;
the server generates a virtual grid sequence according to the time of the training image-text video frame corresponding to each virtual area grid, and sends the virtual grid sequence to the trigger end;
the method comprises the steps that a trigger end classifies a reference trigger point of the trigger end according to a virtual area grid to obtain a plurality of trigger areas to form a trigger area grid, and a trigger grid sequence is generated according to the corresponding relation between the trigger area grid and the virtual area grid;
the training image-text video data are sent to a training assessment end to be played, if the condition that a training person triggers at least one target area of a trigger area grid in a first mode is judged, the trigger area grid and the trigger target area are sent to a server, and the server generates first training assessment information according to the trigger area grid and the trigger target area and sends the first training assessment information to the training assessment end;
if the fact that the training personnel trigger the trigger area lattices in the second mode is judged, an information acquisition layer is generated at the trigger end, the information acquisition layer and the trigger area lattices are sent to a server after input information is received on the basis of the information acquisition layer, and the server generates second training and examination information according to the trigger area lattices and the information acquisition layer and sends the second training and examination information to a training and examination end;
the training assessment end plays the training image-text video data based on the first training assessment information or the second training assessment information;
the extracting training image-text video frames in the training image-text video data, and carrying out binning processing on the training image-text video frames based on the content of the training image-text video frames to obtain virtual region bins corresponding to each training image-text video frame includes:
obtaining a first quantity value according to the quantity of background pixel points in a background pixel interval in the training image-text video frame, obtaining a second quantity value according to the quantity of content pixel points in a content pixel interval in the training image-text video frame, and generating content proportion information according to the first quantity value and the second quantity value;
acquiring length information and width information in the training image-text video frame to obtain length-width ratio information, calculating according to the content ratio information and the length division bar reference number to obtain the current number of length division bars, and calculating according to the current number of length division bars and the length-width ratio information to obtain the current number of width division bars;
and selecting the length separation columns and the width separation columns corresponding to the current number of the length separation columns and the current number of the width separation columns, and carrying out division processing on the training image-text video frame to obtain virtual area lattices.
2. The data processing method for the training assessment system according to claim 1,
the selecting the length division bars and the width division bars corresponding to the current number of the length division bars and the current number of the width division bars, and carrying out division processing on the training image-text video frame to obtain virtual area lattices comprises the following steps:
carrying out one-time separation processing on the training image-text video frame based on the length separation bar and the width separation bar to obtain a plurality of standard separation areas;
if the situation that the partition column pixel points corresponding to the standard partition areas coincide with the content pixel points is judged to exist, extracting content information to be confirmed corresponding to the coincident content pixel points, and determining a first partition proportion and a second partition proportion of the content information to be confirmed in two adjacent standard partition areas, wherein the first partition proportion is larger than the second partition proportion;
performing offset expansion processing of secondary separation on the standard separation area corresponding to the first separation ratio to obtain a first adjustment separation area, so that the first adjustment separation area corresponding to the first separation ratio completely comprises the content information to be confirmed;
generating a second adjustment separation area corresponding to the second separation ratio while generating the first adjustment separation area, so that the second adjustment separation area corresponding to the second separation ratio no longer includes the content information to be confirmed;
generating a corresponding virtual region grid based on all of the standard separation regions, the first adjusted separation regions, and the second adjusted separation regions.
3. The data processing method for training assessment system according to claim 2,
the performing, by the second-time division offset expansion processing, on the standard separation area corresponding to the first separation ratio to obtain a first adjusted separation area, so that the first adjusted separation area corresponding to the first separation ratio completely includes the content information to be confirmed, includes:
selecting a common separation line at two adjacent standard separation areas, and determining edge pixel points of content information to be confirmed at the training image-text video frame corresponding to the standard separation area with the second separation ratio;
and offsetting the position of the edge pixel point by taking the edge pixel point as a datum point to obtain a cut-off pixel point, and offsetting the separation line to enable the separation line to coincide with the cut-off pixel point to obtain a first adjustment separation area after offset expansion processing.
4. The data processing method for the training assessment system according to claim 2,
the method includes the steps that the trigger terminal classifies a reference trigger point of the trigger terminal according to a virtual area grid to obtain a plurality of trigger areas to form a trigger area grid, and a trigger grid sequence is generated according to the corresponding relation between the trigger area grid and the virtual area grid, and includes:
the triggering end determines a reference triggering point corresponding to the standard separation area, the first adjustment separation area and the second adjustment separation area of the virtual area grid;
classifying all the reference trigger points according to the distribution of the standard separation area, the first adjustment separation area and the second adjustment separation area, and connecting the reference trigger points classified into one class to obtain a plurality of trigger sub-areas;
generating corresponding trigger area grids according to the plurality of trigger subareas, acquiring time sequence information of the training image-text video frame corresponding to the virtual area grids, adding time sequence information to each trigger area grid according to the corresponding relation of the trigger area grids and the virtual area grids, and generating a trigger grid sequence according to the trigger area grids added with the time sequence information.
5. The data processing method for the training assessment system according to claim 1,
if the training personnel are judged to trigger at least one target area of the trigger area grid in a first mode, the trigger area grid and the trigger target area are sent to a server, the server generates first training assessment information according to the trigger area grid and the trigger target area and sends the first training assessment information to a training assessment terminal, and the method comprises the following steps:
after receiving a trigger area grid sent by a trigger end, a server determines a corresponding virtual area grid according to time sequence information corresponding to the trigger area grid;
determining any one or more of a corresponding standard separation region, a first adjustment separation region and a second adjustment separation region in the virtual region grid according to the trigger target region, and using the standard separation region, the first adjustment separation region and the second adjustment separation region as to-be-processed separation regions;
acquiring trigger content information corresponding to the to-be-processed separation area in a training image-text video frame;
extracting training labels of training personnel who trigger the triggering content information in a first mode, and determining triggering times of triggering the same triggering content information in the first mode;
generating a first trigger coefficient according to the training label of the training staff and the triggering times of triggering the same triggering content information in a first mode, and if the first trigger coefficient is larger than a first preset coefficient, generating first training assessment information and sending the first training assessment information to a training assessment end;
the training assessment terminal plays the training image-text video data based on the first training assessment information or the second training assessment information, and the method comprises the following steps:
and the training assessment end plays back the training image-text video data based on the first training assessment information.
6. The data processing method for the training assessment system according to claim 5,
the method comprises the following steps of generating a first trigger coefficient for the trigger times of the same trigger content information in a first mode according to a training label of a training staff, and if the first trigger coefficient is larger than a first preset coefficient, generating first training assessment information and sending the first training assessment information to a training assessment terminal, wherein the method comprises the following steps:
acquiring a training information set corresponding to the training label, wherein the training information set comprises at least one type of associated training information and a training coefficient of each associated training information;
extracting all training coefficients corresponding to each piece of same trigger content information and the triggering times of the same trigger content information to generate a first trigger coefficient, and calculating the first trigger coefficient through the following formula:
$$H = \mu_1 \sum_{i=1}^{d} k_i + \mu_2 \cdot \frac{d}{d_{\max}}$$

wherein $H$ is the first trigger coefficient, $k_i$ is the training coefficient corresponding to the $i$-th piece of same trigger content information, $\mu_1$ is a first normalization coefficient, $d$ is the number of triggers of the same trigger content information, $\mu_2$ is a second normalization coefficient, and $d_{\max}$ is an upper limit value of the number of identical trigger content messages;
if the first trigger coefficient $H$ is greater than the first preset coefficient $H_0$, taking the training image-text video frame corresponding to the trigger content information as a first video frame, determining a third quantity corresponding to the first video frame according to the first trigger coefficient, selecting second video frames corresponding to the third quantity with the first video frame as a starting point, and generating corresponding first training assessment information according to the first video frame and the second video frames.
7. The data processing method for the training assessment system according to claim 6,
the step of taking the training image-text video frame corresponding to the trigger content information as a first video frame, determining a third number corresponding to the first video frame according to the first trigger coefficient, selecting a second video frame corresponding to the third number by taking the first video frame as a starting point, and generating corresponding first training assessment information according to the first video frame and the second video frame includes:
comparing the first trigger coefficient with the first preset coefficient to obtain a coefficient deviation value, and calculating based on the coefficient deviation value, the total time of the training image-text video data and the number of standard frames to obtain a third number;
the third quantity is calculated by the following formula:
$$N = w_1 \cdot (H - H_0) + w_2 \cdot \frac{T}{F}$$

wherein $N$ is the third quantity, $H$ is the first trigger coefficient, $H_0$ is the first preset coefficient, $w_1$ is a first calculation weight value, $T$ is the total time of the training image-text video data, $w_2$ is a second calculation weight value, and $F$ is the standard number of frames;
determining second video frames which are smaller than the time sequence information of the first video frames and correspond to a third number of the first video frames and are adjacent in sequence by taking the first video frames as a starting point;
and generating first training assessment information according to the time sequence information respectively corresponding to the first video frame and the second video frame, wherein the first training assessment information is used for enabling the training assessment end to play back the first video frame and the second video frame.
8. The data processing method for the training assessment system according to claim 1,
if the training personnel trigger the triggering area lattice in a second mode, an information acquisition layer is generated at the triggering end, the information acquisition layer and the triggering area lattice are sent to a server after the information acquisition layer receives input information, and the server generates second training and examination information according to the triggering area lattice and the information acquisition layer and sends the second training and examination information to the training and examination end, wherein the method comprises the following steps:
if the triggering end judges that the training personnel trigger the triggering area grid in a second mode, an information acquisition layer with the same specification as the triggering area grid is generated;
generating a corresponding trigger trace at the information acquisition layer according to the trigger action of the input information on the reference trigger point;
sending the trigger trace and the trigger area grid included in the information acquisition layer to a server;
the server generates a picture-text video frame annotation corresponding to each training picture-text video frame according to the trigger trace included in each information acquisition layer, and obtains corresponding second training assessment information based on the picture-text video frame annotation.
9. The data processing method for training assessment system according to claim 8,
the training assessment terminal plays the training image-text video data based on the first training assessment information or the second training assessment information, and the method comprises the following steps:
the training assessment end obtains the image-text video frame comments corresponding to the second training assessment information, and the image-text video frame comments are sequenced according to corresponding time sequence information to generate comment image-text videos, wherein the comment image-text videos belong to training image-text video data;
and the training and checking end plays the annotation image-text video.
CN202210538885.5A 2022-05-18 2022-05-18 Data processing method suitable for training and examination system Active CN114663261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210538885.5A CN114663261B (en) 2022-05-18 2022-05-18 Data processing method suitable for training and examination system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210538885.5A CN114663261B (en) 2022-05-18 2022-05-18 Data processing method suitable for training and examination system

Publications (2)

Publication Number Publication Date
CN114663261A CN114663261A (en) 2022-06-24
CN114663261B true CN114663261B (en) 2022-08-23

Family

ID=82036494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210538885.5A Active CN114663261B (en) 2022-05-18 2022-05-18 Data processing method suitable for training and examination system

Country Status (1)

Country Link
CN (1) CN114663261B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115866340B (en) * 2023-01-18 2023-05-09 北京思想天下教育科技有限公司 On-line training data processing method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722591A (en) * 2012-04-20 2012-10-10 曾理 Technical method for accurately calculating class hour in training software platform
CN112991857A (en) * 2021-03-04 2021-06-18 华北电力大学 Electric power emergency rescue training system and method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150079579A1 (en) * 2013-09-13 2015-03-19 Ian James Oliver Integrated physical sensor grid and lesson system
CN105187787B (en) * 2015-09-02 2018-05-01 台山核电合营有限公司 A kind of mouse movable screen automatic identification and the method and system of monitoring
CN107027072A (en) * 2017-05-04 2017-08-08 深圳市金立通信设备有限公司 A kind of video marker method, terminal and computer-readable recording medium
CN108010394B (en) * 2017-12-20 2020-11-03 杭州埃欧哲建设工程咨询有限公司 Virtual teaching method, control terminal and virtual teaching system based on VR
CN109034036B (en) * 2018-07-19 2020-09-01 青岛伴星智能科技有限公司 Video analysis method, teaching quality assessment method and system and computer-readable storage medium
CN110347258A (en) * 2019-07-09 2019-10-18 北京猫眼视觉科技有限公司 A kind of user's operation trace record back method for virtual reality training and examination
CN111553323A (en) * 2020-05-22 2020-08-18 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN111985829A (en) * 2020-08-27 2020-11-24 河南工学院 Enterprise management training system
CN113283220A (en) * 2021-05-18 2021-08-20 维沃移动通信有限公司 Note recording method, device and equipment and readable storage medium
CN113468225A (en) * 2021-06-24 2021-10-01 上海东普信息科技有限公司 Online pushing method, device and equipment for training data and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722591A (en) * 2012-04-20 2012-10-10 曾理 Technical method for accurately calculating class hour in training software platform
CN112991857A (en) * 2021-03-04 2021-06-18 华北电力大学 Electric power emergency rescue training system and method

Also Published As

Publication number Publication date
CN114663261A (en) 2022-06-24

Similar Documents

Publication Publication Date Title
CN110175549B (en) Face image processing method, device, equipment and storage medium
CN108229674B (en) Training method and device of neural network for clustering, and clustering method and device
CN109284729B (en) Method, device and medium for acquiring face recognition model training data based on video
CN109800320B (en) Image processing method, device and computer readable storage medium
CN110163076A (en) A kind of image processing method and relevant apparatus
US20230027412A1 (en) Method and apparatus for recognizing subtitle region, device, and storage medium
CN114663261B (en) Data processing method suitable for training and examination system
CN110475132A (en) Direct broadcasting room kind identification method, device and data processing equipment
CN107766316B (en) Evaluation data analysis method, device and system
CN110765215A (en) Query method and device for personnel common relationship, electronic equipment and storage medium
US20230410222A1 (en) Information processing apparatus, control method, and program
US20180096192A1 (en) Systems and Methods for Identifying Objects in Media Contents
CN114969449A (en) Metadata management method and system based on construction structure tree
CN112632926B (en) Bill data processing method and device, electronic equipment and storage medium
JP6314071B2 (en) Information processing apparatus, information processing method, and program
CN109451334A (en) User, which draws a portrait, generates processing method, device and electronic equipment
CN113568934A (en) Data query method and device, electronic equipment and storage medium
CN113221721A (en) Image recognition method, device, equipment and medium
CN105979331A (en) Smart television data recommend method and device
CN110765278B (en) Method for searching similar exercises, computer equipment and storage medium
CN113469019B (en) Landscape image characteristic value calculation method, device, equipment and storage medium
CN115525792A (en) Video searching method and device, server and terminal equipment
CN113190589A (en) Content distribution method and device suitable for distribution system and storage medium
CN113627542A (en) Event information processing method, server and storage medium
CN113379163A (en) Teaching assistance method, teaching assistance device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240423

Address after: Room 602, 6th Floor, No. 378, Lane 1555, Jinsha Jiangxi Road, Jiangqiao Town, Jiading District, Shanghai 201803

Patentee after: SHANGHAI TIANFANG YETAN NETWORK TECHNOLOGY CO.,LTD.

Country or region after: China

Address before: 311107 room 5591, floor 5, building 4, No. 88, Renhe Avenue, Renhe street, Yuhang District, Hangzhou City, Zhejiang Province

Patentee before: Flame blue (Zhejiang) Information Technology Co.,Ltd.

Country or region before: China