CN114663339A - Video picture detection method, device, equipment and storage medium

Video picture detection method, device, equipment and storage medium

Info

Publication number
CN114663339A
Authority
CN
China
Prior art keywords
video
image
picture detection
target
video picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011542276.4A
Other languages
Chinese (zh)
Inventor
宋泽坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hongxiang Technical Service Co Ltd
Original Assignee
Beijing Hongxiang Technical Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hongxiang Technical Service Co Ltd
Priority to CN202011542276.4A
Publication of CN114663339A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of video detection, and discloses a video picture detection method, device, equipment and storage medium. The method comprises the following steps: when a video detection instruction is received, determining a video file to be detected and a target processing node according to the video detection instruction; classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results; determining the target type of the video file to be detected according to the type probability values; and generating a video picture detection result corresponding to the video file to be detected according to the target type. In the invention, the target processing node classifies the video file to be detected based on the preset video picture detection model and obtains the type probability values corresponding to the classification results to determine the target type, so that the video picture is detected by means of model-based classification, which improves both the efficiency and the accuracy of video picture detection.

Description

Video picture detection method, device, equipment and storage medium
Technical Field
The present invention relates to the field of video detection technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting a video frame.
Background
Video picture detection is part of video quality detection: during video playback, the video picture is checked for phenomena such as a black screen or a screen-splash (garbled picture). Existing video picture detection methods need to decode and analyze the video data and determine the quality of the video picture from the decoding and analysis results. However, this detection approach is complex and its accuracy is low, so a good video picture detection effect cannot be achieved.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a video picture detection method, a video picture detection device, video picture detection equipment and a storage medium, and aims to solve the technical problems that the video picture detection method in the prior art is complex and the detection accuracy is low.
In order to achieve the above object, the present invention provides a video picture detection method, which comprises the following steps:
when a video detection instruction is received, determining a video file to be detected and a target processing node according to the video detection instruction;
classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
determining the target type of the video file to be detected according to the type probability value;
and generating a video picture detection result corresponding to the video file to be detected according to the target type.
Optionally, before the classifying, by the target processing node, the video file to be detected based on a preset video picture detection model and obtaining type probability values corresponding to a plurality of classification results, the method further includes:
acquiring initial video frame data;
classifying the initial video frame data to obtain a plurality of classes of image data sets;
and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
Optionally, the classifying the initial video frame data to obtain a plurality of categories of image data sets includes:
determining a video image to be processed according to the initial video frame data, and acquiring an image category corresponding to the video image to be processed;
generating marking data according to the image category;
marking the video image to be processed according to the marking data to obtain a video image to be classified;
and classifying the video images to be classified to obtain image data sets of multiple categories.
Optionally, the determining a video image to be processed according to the initial video frame data includes:
determining an initial video image according to the initial video frame data;
acquiring image pixel information of the initial video image;
and adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
Optionally, before the adjusting the initial video image according to the image pixel information to obtain a video image to be processed, the method further includes:
comparing the image pixel information with preset pixel information;
and when the image pixel information is inconsistent with preset pixel information, executing the step of adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
Optionally, after comparing the image pixel information with preset pixel information, the method further includes:
and when the image pixel information is consistent with the preset pixel information, taking the initial video image as a video image to be processed.
Optionally, the marking the video image to be processed according to the marking data to obtain a video image to be classified includes:
carrying out image detection on the video image to be processed;
judging whether the video image to be processed contains interference noise data or not according to an image detection result;
when the video image to be processed does not contain interference noise data, automatically marking the video image to be processed according to the marking data to obtain a video image to be classified;
and when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
Optionally, the interference noise data comprises video component text information;
when the video image to be processed contains interference noise data, the taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified comprises the following steps:
and when the video image to be processed contains the text information of the video assembly, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
Optionally, the training a preset residual neural network model according to the image data set to obtain a preset video picture detection model includes:
dividing the image dataset into a training dataset, a validation dataset, and a test dataset;
training a preset residual neural network model according to the training data set to obtain an initial video picture detection model;
predicting through the initial video picture detection model and a verification data set to determine model accuracy;
selecting a target video picture detection model from the initial video picture detection models according to the model accuracy;
and optimizing the target video picture detection model according to the test data set to obtain a preset video picture detection model.
Optionally, when the video detection instruction is received, determining the video file to be detected and the target processing node according to the video detection instruction includes:
when a video detection instruction is received, extracting video detection information from the video detection instruction;
determining a video file to be detected according to the video detection information;
and selecting a target processing node from the processing nodes to be selected according to the video detection instruction.
Optionally, before the selecting the target processing node from the processing nodes to be selected according to the video detection instruction, the method further includes:
when a heartbeat message is received, determining message information according to the heartbeat message;
determining a node end to be selected for reporting the heartbeat message according to the message information;
searching node end information corresponding to the node end to be selected;
and generating a node to be selected according to the node end information, and establishing a corresponding relation between the node to be selected and the node end.
Optionally, the classifying, by the target processing node, the to-be-detected video file based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results includes:
searching a target node end corresponding to the target processing node according to the corresponding relation between the processing node to be selected and the node end;
and sending the video file to be detected to the target node end so that the target node end classifies the video file to be detected based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results.
Optionally, the determining the target type of the video file to be detected according to the type probability value includes:
sorting the type probability values;
selecting the maximum type probability value from the type probability values as a target type probability value according to the sorting result;
and searching the video picture type corresponding to the target type probability value, and determining the target type of the video file to be detected according to the video picture type.
Optionally, the video picture types include: a normal type, a screen-splash type, and a black screen type;
the determining the target type of the video file to be detected according to the video picture type comprises the following steps:
when the video picture type is a normal type, judging that the target type of the video file to be detected is a normal type;
when the video picture type is a screen-splash type, judging that the target type of the video file to be detected is the screen-splash type;
and when the video picture type is a black screen type, judging that the target type of the video file to be detected is the black screen type.
In addition, to achieve the above object, the present invention further provides a video picture detection apparatus, including:
the detection instruction module is used for determining a video file to be detected and a target processing node according to a video detection instruction when the video detection instruction is received;
the classification processing module is used for classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
the type determining module is used for determining the target type of the video file to be detected according to the type probability value;
and the detection result module is used for generating a video picture detection result corresponding to the video file to be detected according to the target type.
Optionally, the video picture detection apparatus further includes:
the model training module is used for acquiring initial video frame data; classifying the initial video frame data to obtain a plurality of classes of image data sets; and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
Optionally, the model training module is further configured to determine a video image to be processed according to the initial video frame data, and acquire an image category corresponding to the video image to be processed; generating marking data according to the image category; marking the video image to be processed according to the marking data to obtain a video image to be classified; and classifying the video images to be classified to obtain image data sets of multiple categories.
Further, to achieve the above object, the present invention also provides a video picture detection apparatus, including: a memory, a processor and a video picture detection program stored on said memory and executable on said processor, said video picture detection program when executed by the processor implementing the steps of the video picture detection method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a video picture detection program stored thereon, wherein the video picture detection program, when executed by a processor, implements the steps of the video picture detection method as described above.
The video picture detection method provided by the invention comprises: when a video detection instruction is received, determining a video file to be detected and a target processing node according to the video detection instruction; classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results; determining the target type of the video file to be detected according to the type probability values; and generating a video picture detection result corresponding to the video file to be detected according to the target type. In the invention, the target processing node classifies the video file to be detected based on the preset video picture detection model, obtains the type probability values corresponding to a plurality of classification results to determine the target type, and then generates the corresponding video picture detection result, so that the video picture is detected by means of model-based classification, which improves both the efficiency and the accuracy of video picture detection.
Drawings
FIG. 1 is a schematic diagram of a video frame detection device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a video frame detection method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a video frame detection method according to a second embodiment of the present invention;
FIG. 4 is a diagram illustrating an example of a black screen data picture according to an embodiment of a video frame detection method of the present invention;
FIG. 5 is a diagram illustrating an example of a screen-splash data picture according to an embodiment of a video frame detection method of the present invention;
FIG. 6 is a flowchart illustrating a video frame detection method according to a third embodiment of the present invention;
FIG. 7 is a block diagram of an overall framework of a video frame detection method according to an embodiment of the present invention;
fig. 8 is a functional block diagram of a video frame detection apparatus according to a first embodiment of the invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a video picture detection device in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the video picture detection apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may comprise a Display screen (Display), an input unit such as keys, and the optional user interface 1003 may also comprise a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The Memory 1005 may be a Random Access Memory (RAM) Memory or a non-volatile Memory (e.g., a magnetic disk Memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the device configuration shown in fig. 1 does not constitute a limitation of the video picture detection device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a video screen detection program.
In the video picture detection apparatus shown in fig. 1, the network interface 1004 is mainly used for connecting an external network and performing data communication with other network apparatuses; the user interface 1003 is mainly used for connecting to a user equipment and performing data communication with the user equipment; the apparatus of the present invention calls a video picture detection program stored in the memory 1005 through the processor 1001 and executes the video picture detection method provided by the embodiment of the present invention.
Based on the hardware structure, the embodiment of the video picture detection method is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video frame detection method according to a first embodiment of the present invention.
In a first embodiment, the video picture detection method comprises the steps of:
and step S10, when a video detection instruction is received, determining the video file to be detected and the target processing node according to the video detection instruction.
It should be noted that the execution subject in this embodiment may be a video picture detection device, such as a server device, a web-end device, or other devices that can achieve the same or similar functions.
It should be understood that the present solution is divided into two ends, namely a web end and a node end, where the web end is responsible for receiving requests and returning results, and the node end is responsible for computing the results. The communication between the web end and the node end may adopt a ZeroMQ (zmq) framework or another framework, which is not limited in this embodiment. The node end can actively report a heartbeat to the web end, and this link may use a push-pull or pub-sub pattern, so that the web end can sense which nodes are able to process requests. When the web end receives the video detection instruction, the target processing node can be determined according to a random distribution principle to detect the video file to be detected.
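As an illustration of how the node end might report heartbeats to the web end, the following is a minimal sketch assuming pyzmq and a push-pull link; the endpoint address, node identifier and reporting interval are illustrative assumptions rather than values from this embodiment.

```python
import time
import zmq

def report_heartbeat(web_endpoint="tcp://web-host:5555", node_id="node-1", interval_s=5):
    """Node end: periodically push a heartbeat so the web end can sense this node."""
    context = zmq.Context()
    socket = context.socket(zmq.PUSH)   # node end pushes, web end pulls
    socket.connect(web_endpoint)
    while True:
        # Message information used by the web end to register the candidate node
        socket.send_json({"node_id": node_id, "timestamp": time.time()})
        time.sleep(interval_s)
```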
It should be understood that the video detection instruction may be a request entered manually or a preset request. For example, when a user needs to detect a video being played on a terminal device, the user may input a video detection request; the web end then receives the video detection instruction and acquires the video file to be detected that is being played on the terminal device. Alternatively, the user may directly upload the video file to be detected to the web end and request the web end to detect it. One or more detection times may also be preset, and the detection request is triggered when a detection time is reached, at which point the web end receives the video detection instruction; other triggering methods may also be used. The video file to be detected may be a screenshot, a video picture, a video data file, or another type of video file, which is not limited in this embodiment.
And step S20, classifying the video files to be detected through the target processing node based on a preset video picture detection model, and obtaining type probability values corresponding to a plurality of classification results.
It should be noted that the preset video picture detection model may be a model built on a neural network principle. In this embodiment, a model built on the ResNet18 residual network is taken as an example for description; a model built according to other principles may also be used, which is not limited in this embodiment.
It should be understood that, after the target processing node is determined, the target processing node may classify the video file to be detected based on the preset video picture detection model and determine the type probability values corresponding to a plurality of classification results. The classification results may be of three types, namely screen-splash, black screen and normal picture; other classification results and other numbers of classification results are also possible, which is not limited in this embodiment. The classification criteria may be information such as blurriness, color and image texture. The preset video picture detection model in this embodiment classifies the data, so that type probability values corresponding to a plurality of classification results can be obtained, where a type probability value represents the probability of the corresponding classification result.
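A minimal sketch of this classification step is given below, assuming a PyTorch/torchvision implementation of the ResNet18-based detection model (the embodiment does not prescribe a framework); the class ordering, weight file path and helper names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

CLASS_NAMES = ["normal", "screen-splash", "black-screen"]  # assumed class ordering

def load_detection_model(weights_path: str) -> torch.nn.Module:
    # ResNet18 backbone with a three-way classification head
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, len(CLASS_NAMES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def classify_frame(model: torch.nn.Module, image_path: str) -> dict:
    # Three-channel input resized to the 360 x 640 resolution described in this embodiment
    preprocess = transforms.Compose([
        transforms.Resize((360, 640)),
        transforms.ToTensor(),
    ])
    batch = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)  # (1, 3, 360, 640)
    with torch.no_grad():
        probabilities = F.softmax(model(batch), dim=1).squeeze(0)  # type probability values
    return {name: float(p) for name, p in zip(CLASS_NAMES, probabilities)}
```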
And step S30, determining the target type of the video file to be detected according to the type probability value.
It should be understood that after the type probability value corresponding to each classification result is determined, the probability of each classification result can be determined according to the type probability value, the maximum value can be taken as the target classification result, and then the target type of the video file to be detected is determined according to the target classification result.
Further, in order to determine the maximum value in the type probability values more accurately and improve the accuracy of the target type, the step S30 includes:
sorting the type probability values; selecting the maximum type probability value from the type probability values as a target type probability value according to the sorting result; and searching the video picture type corresponding to the target type probability value, and determining the target type of the video file to be detected according to the video picture type.
It should be understood that the multiple type probability values may be ranked, then the maximum value is selected from the multiple type probability values according to the ranking result as the target type probability value, the target classification result corresponding to the target type probability value may be determined, the video picture type corresponding to the target classification result may be searched, and then the target type of the video file to be detected may be determined according to the video picture type.
In a specific implementation, assume for example three classification results: classification result A, classification result B and classification result C, where classification result A corresponds to the screen-splash type, classification result B corresponds to the black screen type, and classification result C corresponds to the normal type. The type probability value of classification result A is 80%, that of classification result B is 15%, and that of classification result C is 5%. The type probability values are sorted from large to small, giving 80%, 15% and 5%; the maximum value is therefore 80%, so the target type probability value is 80%, the target classification result corresponding to this target type probability value is classification result A, the corresponding video picture type is the screen-splash type, and the target type of the video file to be detected is therefore the screen-splash type.
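The selection of the target type from the type probability values can be sketched as follows, using the worked example above; the dictionary layout and function name are only illustrative.

```python
def determine_target_type(type_probabilities: dict) -> str:
    """Sort the type probability values and return the type with the largest one."""
    ranked = sorted(type_probabilities.items(), key=lambda item: item[1], reverse=True)
    target_type, _target_probability = ranked[0]
    return target_type

# Worked example from above: classification result A (screen-splash) wins with 80%
print(determine_target_type({"screen-splash": 0.80, "black-screen": 0.15, "normal": 0.05}))
# -> "screen-splash"
```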
Further, the video picture types include: a normal type, a screen-splash type, and a black screen type; the determining the target type of the video file to be detected according to the video picture type comprises the following steps:
when the video picture type is a normal type, judging that the target type of the video file to be detected is a normal type; when the video picture type is a screen-splash type, judging that the target type of the video file to be detected is the screen-splash type; and when the video picture type is a black screen type, judging that the target type of the video file to be detected is the black screen type.
It should be understood that, based on common video picture situations, the video picture type in this embodiment can be divided into three situations, namely a normal type, a screen-splash type and a black screen type, and the video picture types correspond one-to-one to the types of the video file to be detected.
And step S40, generating a video picture detection result corresponding to the video file to be detected according to the target type.
It should be understood that, after the target type of the video file to be detected is determined, the video picture detection result corresponding to the video file to be detected can be generated according to the target type. For example, when the target type of the video file to be detected is the black screen type, the generated video picture detection result may be: "a black screen phenomenon exists in the video file to be detected". In addition to the above manner, the video picture detection result corresponding to the video file to be detected may also be expressed in other ways, which is not limited in this embodiment.
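A minimal sketch of generating the video picture detection result from the target type follows; the wording of the result messages is an illustrative assumption (the embodiment only gives the black screen example).

```python
# Hypothetical result messages keyed by target type
RESULT_MESSAGES = {
    "normal": "No black screen or screen-splash phenomenon exists in the video file to be detected.",
    "screen-splash": "A screen-splash phenomenon exists in the video file to be detected.",
    "black-screen": "A black screen phenomenon exists in the video file to be detected.",
}

def build_detection_result(target_type: str) -> str:
    """Map the target type to a human-readable video picture detection result."""
    return RESULT_MESSAGES[target_type]
```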
In this embodiment, when a video detection instruction is received, the video file to be detected and the target processing node are determined according to the video detection instruction; the video file to be detected is classified by the target processing node based on the preset video picture detection model to obtain type probability values corresponding to a plurality of classification results; the target type of the video file to be detected is determined according to the type probability values; and a video picture detection result corresponding to the video file to be detected is generated according to the target type. The target processing node classifies the video file to be detected based on the preset video picture detection model and obtains the type probability values corresponding to the classification results to determine the target type, and the corresponding video picture detection result is then generated, so that the video picture is detected by means of model-based classification, which improves both the efficiency and the accuracy of video picture detection.
In an embodiment, as shown in fig. 3, a second embodiment of the video frame detection method according to the present invention is proposed based on the first embodiment, before the step S20, the method further includes:
and S001, acquiring initial video frame data.
It should be understood that, in this embodiment, the preset residual neural network model may be built based on ResNet18, and the preset residual neural network model is then trained with a large amount of data, so as to obtain a preset video picture detection model that can be used for video picture detection. Accordingly, a plurality of pieces of initial video frame data can be acquired for use as training samples.
Step S002, performing classification processing on the initial video frame data to obtain image datasets of multiple categories.
It will be appreciated that the acquired initial video frame data may be subjected to classification processing to obtain image data sets of multiple categories. For example, assume that 742 pieces of initial video frame data are obtained and classified: 336 are normal, 200 show a screen-splash and 206 show a black screen. Three image data sets can therefore be established, namely a normal image data set, a screen-splash image data set and a black screen image data set, where the normal image data set contains the 336 normal pieces of initial video frame data, the screen-splash image data set contains the 200 screen-splash pieces, and the black screen image data set contains the 206 black screen pieces.
Further, in order to achieve better classification effect and model training effect, so as to improve accuracy of video picture detection, the step S002 includes:
determining a video image to be processed according to the initial video frame data, and acquiring an image category corresponding to the video image to be processed; generating marking data according to the image category; marking the video image to be processed according to the marking data to obtain a video image to be classified; and classifying the video images to be classified to obtain image data sets of multiple categories.
It should be understood that a video image to be processed may be determined according to the initial video frame data, and the image category corresponding to the video image to be processed may be obtained, where the image categories may include three categories, namely normal, screen-splash and black screen. Different marking data may then be set for the different image categories; for example, the marking data corresponding to a normal image may be set to 0, the marking data corresponding to a screen-splash image may be set to 1, and the marking data corresponding to a black screen image may be set to 2. The corresponding marking data can thus be generated according to the image category, the video image to be processed is marked according to the marking data to obtain the marked video image to be classified, and the video images to be classified are then classified according to their marking data to obtain image data sets of multiple categories. In this way, the accuracy of video detection is improved by automatically acquiring data and manually marking the data.
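The marking and grouping described above can be sketched as follows; the 0/1/2 values follow the example in this embodiment, while the function and variable names are illustrative assumptions.

```python
from collections import defaultdict

# Marking data per image category, as in the example: normal=0, screen-splash=1, black screen=2
MARKING_DATA = {"normal": 0, "screen-splash": 1, "black-screen": 2}

def build_image_datasets(images_with_categories):
    """images_with_categories: iterable of (image_path, category) pairs."""
    datasets = defaultdict(list)
    for image_path, category in images_with_categories:
        marking = MARKING_DATA[category]                   # generate marking data from the category
        datasets[category].append((image_path, marking))   # marked image to be classified
    return dict(datasets)                                  # one image data set per category
```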
Further, since the initial video frame data may include images of various sizes, the determining the video image to be processed according to the initial video frame data includes:
determining an initial video image according to the initial video frame data; acquiring image pixel information of the initial video image; and adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
It should be understood that, in this embodiment, three-channel picture information with a resolution of 360 × 640 pixels may be used as the input of the model, the label is the current picture type, and the softmax output node produces three values, which are the probability values of the current three classifications. Images with other pixel information may also be used, for example 800 × 1000 or 600 × 800, which is not limited in this embodiment.
It can be understood that, since the size of an image is usually represented by its pixel information, the initial video image may be determined according to the initial video frame data, the image pixel information of the initial video image may then be obtained, and the image pixel information may be compared with the preset pixel information. When the image pixel information is inconsistent with the preset pixel information, the step of adjusting the initial video image according to the image pixel information to obtain a video image to be processed is executed; when the image pixel information is consistent with the preset pixel information, the initial video image is taken as the video image to be processed.
It should be understood that the preset pixel information may be set to 360 × 640. After the image pixel information of the initial video image is obtained, the image pixel information is compared with the preset pixel information. If the image pixel information is consistent with the preset pixel information, the initial video image does not need to be adjusted and is taken as the video image to be processed; if the image pixel information is inconsistent with the preset pixel information, the initial video image needs to be adjusted, and it can be adjusted to be consistent with the preset pixel information so as to obtain the video image to be processed.
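A minimal sketch of the pixel-information comparison and adjustment, assuming Pillow for image handling; treating 360 × 640 as height × width is an assumption.

```python
from PIL import Image

PRESET_SIZE = (640, 360)  # Pillow uses (width, height); corresponds to 360 x 640 pixels

def to_processed_image(initial_image_path: str) -> Image.Image:
    image = Image.open(initial_image_path).convert("RGB")
    if image.size == PRESET_SIZE:
        # Image pixel information already matches the preset pixel information
        return image
    # Otherwise adjust the initial video image to obtain the video image to be processed
    return image.resize(PRESET_SIZE)
```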
Further, the labeling the video image to be processed according to the labeling data to obtain a video image to be classified includes:
carrying out image detection on the video image to be processed; judging whether the video image to be processed contains interference noise data or not according to an image detection result; when the video image to be processed does not contain interference noise data, automatically marking the video image to be processed according to the marking data to obtain a video image to be classified; and when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
It should be understood that, in video frame data, there are some beautified picture sets that cannot be well distinguished from a screen-splash, and existing data sets cannot meet the requirements of this video detection, so the data needs to be collected and labeled anew. A large amount of screenshot and video picture data can therefore be integrated into the marked data to improve the compatibility and transferability of the model, and image data sets containing interference noise data can be collected in this way.
It can be understood that, since a computer can more accurately identify images that do not contain interference noise data, in order to improve marking efficiency, image detection may be performed on the video image to be processed, and when the video image to be processed does not contain interference noise data, the video image to be processed is automatically marked according to the marking data. However, a computer cannot reliably identify images that contain interference noise data, so when the video image to be processed contains interference noise data, the video image to be processed is taken as a target video image and the target video image is marked manually.
Further, the interference noise data includes video component text information, and when the video image to be processed includes interference noise data, the video image to be processed is taken as a target video image, and the target video image is manually marked to obtain a video image to be classified, including:
and when the video image to be processed contains the text information of the video assembly, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
It should be understood that the interference noise data may include video component text information, for example video components, some text information of the video, and some Android components. For example, as shown in fig. 4, fig. 4 is an exemplary diagram of a black screen data picture, where the picture contains some video components, some text information of the video and some Android components. As shown in fig. 5, fig. 5 is an exemplary diagram of a screen-splash data picture, where the picture likewise contains some video components, some text information of the video and some Android components. In this way, the pertinence of the detection model of this solution is improved, and the detection accuracy for the screen-splash phenomenon and the black screen phenomenon in the video is improved.
And S003, training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
It should be understood that, after the image data sets are generated, the data in the image data sets may be fed into a preset residual neural network model built based on ResNet18 for training. The loss function may be a cross-entropy loss function, the optimization function may be AdaGrad, the training accuracy may reach 96.4%, and the model is saved after training is finished, so as to obtain the preset video picture detection model.
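A minimal training sketch matching this description (ResNet18 backbone, cross-entropy loss, AdaGrad optimization, model saved after training), assuming PyTorch; the DataLoader, epoch count, learning rate and file name are illustrative assumptions.

```python
import torch
from torch import nn, optim
from torchvision import models

def train_detection_model(train_loader, num_classes=3, epochs=20, lr=0.01):
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # 3-way classification head
    criterion = nn.CrossEntropyLoss()                        # cross-entropy loss function
    optimizer = optim.Adagrad(model.parameters(), lr=lr)     # AdaGrad optimization function
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), "video_picture_detection_model.pt")  # save after training
    return model
```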
Further, in order to train the model better and obtain a video picture detection model with higher detection accuracy, the step S003 includes:
dividing the image dataset into a training dataset, a validation dataset, and a test dataset; training a preset residual neural network model according to the training data set to obtain an initial video picture detection model; predicting through the initial video picture detection model and a verification data set to determine model accuracy; selecting a target video picture detection model from the initial video picture detection models according to the model accuracy; and optimizing the target video picture detection model according to the test data set to obtain a preset video picture detection model.
It can be understood that the image data set may be divided into a training data set, a verification data set and a test data set, for example in the ratio 0.7 : 0.1 : 0.2, and the preset residual neural network model is trained according to the training data set to obtain an initial video picture detection model. Prediction is then performed with the initial video picture detection model on the verification data set to determine the model accuracy, and a target video picture detection model is selected from the initial video picture detection models according to the model accuracy. The target video picture detection model is then optimized according to the test data set to obtain a preset video picture detection model with higher precision.
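The 0.7 / 0.1 / 0.2 split and the accuracy-based selection can be sketched as follows, again assuming PyTorch; the batch size and the idea of comparing several trained candidates are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, random_split

def split_dataset(dataset):
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    n_test = n - n_train - n_val
    return random_split(dataset, [n_train, n_val, n_test])

def validation_accuracy(model, val_set, batch_size=32):
    loader = DataLoader(val_set, batch_size=batch_size)
    correct = total = 0
    model.eval()
    with torch.no_grad():
        for images, labels in loader:
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.numel()
    return correct / total

def select_target_model(candidate_models, val_set):
    # Pick the initial video picture detection model with the highest validation accuracy
    return max(candidate_models, key=lambda m: validation_accuracy(m, val_set))
```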
In the embodiment, initial video frame data is obtained; classifying the initial video frame data to obtain a plurality of classes of image data sets; and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model, so that the preset video picture detection model for video picture detection is obtained, and the accuracy of video picture detection is improved.
In an embodiment, as shown in fig. 6, a third embodiment of the video frame detection method according to the present invention is proposed based on the first embodiment or the second embodiment, and in this embodiment, the description is made based on the first embodiment, and the step S10 includes:
step S101, when a video detection instruction is received, video detection information is extracted from the video detection instruction.
It should be understood that the web side may extract video detection information from the video detection instruction when receiving the video detection instruction.
And S102, determining the video file to be detected according to the video detection information.
It can be understood that after the video detection information is obtained, the video file to be detected may be determined according to the video detection information, where the video file to be detected may be a screenshot, a video picture, a video data file, or another type of video file, and this embodiment is not limited to this.
And step S103, selecting a target processing node from the processing nodes to be selected according to the video detection instruction.
It should be understood that the web side may also select a target processing node from a plurality of processing nodes to be selected according to the video detection instruction. And selecting a target processing node from the processing nodes to be selected according to a random distribution principle to detect the video file to be detected. Besides the above-mentioned modes, other selection modes are also possible, and this embodiment is not limited to this.
Further, before selecting a target processing node from the processing nodes to be selected according to the video detection instruction, the method further includes:
when receiving a heartbeat message, determining message information according to the heartbeat message; determining a node end to be selected for reporting the heartbeat message according to the message information; searching node end information corresponding to the node end to be selected; and generating a node to be selected according to the node end information, and establishing a corresponding relation between the node to be selected and the node end.
Further, the classifying the video file to be detected by the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results includes:
searching a target node end corresponding to the target processing node according to the corresponding relation between the processing node to be selected and the node end; and sending the video file to be detected to the target node end so that the target node end classifies the video file to be detected based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results.
It should be understood that, as shown in fig. 7, fig. 7 is a schematic diagram of the overall framework. In this solution the system is divided into two ends, namely a web end and a node end, where the web end is responsible for receiving requests and returning results and the node end is responsible for computing the results; the communication between the web end and the node end may adopt a ZeroMQ (zmq) framework or another framework, which is not limited in this embodiment. The node end can actively report a heartbeat to the web end, and this link may use a push-pull or pub-sub pattern, so that the web end can sense which nodes are able to process requests. When the web end receives a heartbeat message, the node end to be selected that reported the heartbeat message is determined according to the message information of the heartbeat message, the node end information corresponding to the node end to be selected is searched, a node to be selected is generated according to the node end information, and the corresponding relation between the node to be selected and the node end is established. Each node to be selected may store a preset video picture detection model, and picture detection is performed on the video file through the preset video picture detection model.
When the web end receives a video detection instruction, a target processing node can be determined according to a random distribution principle to detect the video file to be detected; the target node end corresponding to the target processing node is then searched according to the corresponding relation between the node to be selected and the node end, and the video file to be detected is sent to the target node end, so that the target node end classifies the video file to be detected based on the preset video picture detection model and obtains the type probability values corresponding to a plurality of classification results. It can be understood that the target node end may return the result to the web end after the processing is completed, and this link may use a router-dealer pattern.
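The node-end side of this dispatch can be sketched as follows, assuming pyzmq with a DEALER socket facing the web end's ROUTER socket; the endpoint, node identity, message layout and the classify_frame helper (from the earlier sketch) are illustrative assumptions.

```python
import zmq

def serve_detection_requests(model, web_endpoint="tcp://web-host:5556", node_id=b"node-1"):
    context = zmq.Context()
    socket = context.socket(zmq.DEALER)
    socket.setsockopt(zmq.IDENTITY, node_id)   # lets the web end address this target node
    socket.connect(web_endpoint)
    while True:
        frames = socket.recv_multipart()       # reference to the video file to be detected
        image_path = frames[-1].decode()
        # Classify with the preset video picture detection model (see the earlier sketch)
        type_probabilities = classify_frame(model, image_path)
        socket.send_json(type_probabilities)   # type probability values back to the web end
```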
In the embodiment, when a video detection instruction is received, video detection information is extracted from the video detection instruction; determining a video file to be detected according to the video detection information; and selecting a target processing node from the processing nodes to be selected according to the video detection instruction, so that the video file to be detected and the target processing node can be determined according to the video detection instruction, and the video file to be detected is detected through the target processing node, so that the efficiency of video picture detection is improved.
Furthermore, an embodiment of the present invention further provides a storage medium, where a video picture detection program is stored, and the video picture detection program, when executed by a processor, implements the steps of the video picture detection method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
In addition, referring to fig. 8, an embodiment of the present invention further provides a video picture detection apparatus, where the video picture detection apparatus includes:
and the detection instruction module 10 is configured to determine, when a video detection instruction is received, a video file to be detected and a target processing node according to the video detection instruction.
And the classification processing module 20 is configured to perform classification processing on the video file to be detected through the target processing node based on a preset video picture detection model, and obtain type probability values corresponding to a plurality of classification results.
And the type determining module 30 is configured to determine the target type of the video file to be detected according to the type probability value.
And the detection result module 40 is configured to generate a video picture detection result corresponding to the video file to be detected according to the target type.
In this embodiment, when a video detection instruction is received, the video file to be detected and the target processing node are determined according to the video detection instruction; the video file to be detected is classified by the target processing node based on the preset video picture detection model to obtain type probability values corresponding to a plurality of classification results; the target type of the video file to be detected is determined according to the type probability values; and a video picture detection result corresponding to the video file to be detected is generated according to the target type. The target processing node classifies the video file to be detected based on the preset video picture detection model and obtains the type probability values corresponding to the classification results to determine the target type, and the corresponding video picture detection result is then generated, so that the video picture is detected by means of model-based classification, which improves both the efficiency and the accuracy of video picture detection.
In an embodiment, the model training module is further configured to determine an initial video image according to the initial video frame data; acquiring image pixel information of the initial video image; and adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
In an embodiment, the model training module is further configured to compare the image pixel information with preset pixel information; and when the image pixel information is inconsistent with preset pixel information, executing the step of adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
In an embodiment, the model training module is further configured to use the initial video image as a video image to be processed when the image pixel information is consistent with preset pixel information.
In an embodiment, the model training module is further configured to perform image detection on the video image to be processed; judging whether the video image to be processed contains interference noise data or not according to an image detection result; when the video image to be processed does not contain interference noise data, automatically marking the video image to be processed according to the marking data to obtain a video image to be classified; and when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
In an embodiment, the interference noise data includes video component text information, and the model training module is further configured to, when the video image to be processed includes the video component text information, take the video image to be processed as a target video image, and manually mark the target video image to obtain a video image to be classified.
In an embodiment, the model training module is further configured to divide the image data set into a training data set, a validation data set, and a test data set; training a preset residual neural network model according to the training data set to obtain an initial video picture detection model; predicting through the initial video picture detection model and a verification data set to determine model accuracy; selecting a target video picture detection model from the initial video picture detection models according to the model accuracy; and optimizing the target video picture detection model according to the test data set to obtain a preset video picture detection model.
In an embodiment, the detection instruction module 10 is further configured to, when a video detection instruction is received, extract video detection information from the video detection instruction; determining a video file to be detected according to the video detection information; and selecting a target processing node from the processing nodes to be selected according to the video detection instruction.
In an embodiment, the video picture detection apparatus further includes a node determination module, configured to determine message information according to a heartbeat message when the heartbeat message is received; determining a node end to be selected for reporting the heartbeat message according to the message information; searching node end information corresponding to the node end to be selected; and generating a node to be selected according to the node end information, and establishing a corresponding relation between the node to be selected and the node end.
In an embodiment, the classification processing module 20 is further configured to search a target node end corresponding to the target processing node according to a corresponding relationship between the processing node to be selected and the node end; and sending the video file to be detected to the target node end so that the target node end classifies the video file to be detected based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results.
In an embodiment, the type determining module 30 is further configured to rank the type probability values; selecting the maximum type probability value from the type probability values according to the sequencing result as a target type probability value; and searching the video picture type corresponding to the target type probability value, and determining the target type of the video file to be detected according to the video picture type.
In one embodiment, the video picture types include a normal type, a screen-splash type and a black screen type; the type determining module 30 is further configured to determine that the target type of the video file to be detected is the normal type when the video picture type is the normal type; determine that the target type of the video file to be detected is the screen-splash type when the video picture type is the screen-splash type; and determine that the target type of the video file to be detected is the black screen type when the video picture type is the black screen type.
For other embodiments or specific implementations of the video picture detection apparatus according to the present invention, reference may be made to the above method embodiments, which are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, or the portion contributing to the prior art, may be embodied in the form of a software product, which is stored in a computer-readable storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling an intelligent device (e.g. a mobile phone, a computer, a video picture detection device, or a networked video picture detection device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
The invention discloses A1, a video picture detection method, comprising the following steps:
when a video detection instruction is received, determining a video file to be detected and a target processing node according to the video detection instruction;
classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
determining the target type of the video file to be detected according to the type probability value;
and generating a video picture detection result corresponding to the video file to be detected according to the target type.
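Read together, these four steps amount to the small control flow sketched below; every helper is a hypothetical placeholder standing in for a component described in the clauses that follow, not an API defined by this disclosure.

```python
# End-to-end sketch of A1 with hypothetical placeholder helpers.
def pick_processing_node(instruction: dict) -> str:
    # Stand-in: a real implementation would choose among registered candidate nodes.
    return "node-1"

def classify_on_node(node: str, video_path: str) -> dict:
    # Stand-in: a real implementation would run the preset detection model on the node end.
    return {"normal": 0.91, "screen_splash": 0.07, "black_screen": 0.02}

def detect_video_picture(instruction: dict) -> dict:
    video_path = instruction["video_path"]                   # video file named in the instruction
    node = pick_processing_node(instruction)                 # target processing node
    probabilities = classify_on_node(node, video_path)       # type probability values
    target_type = max(probabilities, key=probabilities.get)  # target type
    return {"file": video_path, "type": target_type, "probabilities": probabilities}

print(detect_video_picture({"video_path": "sample.mp4"}))
```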
A2, the video picture detection method as in A1, wherein before the target processing node classifies the video file to be detected based on a preset video picture detection model and obtains type probability values corresponding to a plurality of classification results, the method further includes:
acquiring initial video frame data;
classifying the initial video frame data to obtain a plurality of classes of image data sets;
and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
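To make the frame-acquisition step concrete, here is a minimal OpenCV sketch; sampling one frame per second and the JPEG output layout are arbitrary choices, not requirements of this disclosure.

```python
# Sketch of acquiring initial video frame data from a video file with OpenCV.
import cv2

def extract_frames(video_path: str, out_dir: str, every_n_seconds: float = 1.0) -> int:
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if FPS is unavailable
    step = max(int(fps * every_n_seconds), 1)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```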
A3, the video picture detection method as in A2, wherein the classifying the initial video frame data to obtain a plurality of classes of image data sets comprises:
determining a video image to be processed according to the initial video frame data, and acquiring an image category corresponding to the video image to be processed;
generating label data according to the image category;
marking the video image to be processed according to the label data to obtain a video image to be classified;
and classifying the video images to be classified to obtain image data sets of multiple categories.
A4, the video picture detection method as in A3, wherein the determining the video image to be processed according to the initial video frame data comprises:
determining an initial video image according to the initial video frame data;
acquiring image pixel information of the initial video image;
and adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
A5, the video picture detection method as in A4, wherein before the adjusting the initial video image according to the image pixel information to obtain the video image to be processed, the method further comprises:
comparing the image pixel information with preset pixel information;
and when the image pixel information is inconsistent with preset pixel information, executing the step of adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
A6, the video picture detection method as in A5, further comprising, after comparing the image pixel information with preset pixel information:
and when the image pixel information is consistent with the preset pixel information, taking the initial video image as a video image to be processed.
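A4 to A6 together describe a compare-then-adjust step; a minimal sketch follows, where the 224x224 preset pixel size is an assumed value.

```python
# Resize a frame only when its pixel information differs from the preset size.
import cv2

PRESET_SIZE = (224, 224)  # assumed preset pixel information (width, height)

def to_processed_image(frame):
    h, w = frame.shape[:2]
    if (w, h) == PRESET_SIZE:               # consistent with the preset: use as-is
        return frame
    return cv2.resize(frame, PRESET_SIZE)   # inconsistent: adjust to obtain the image to be processed
```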
A7, the video picture detection method as in A3, wherein the marking the video image to be processed according to the label data to obtain a video image to be classified comprises:
carrying out image detection on the video image to be processed;
judging whether the video image to be processed contains interference noise data or not according to an image detection result;
when the video image to be processed does not contain interference noise data, automatically marking the video image to be processed according to the label data to obtain a video image to be classified;
and when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
A8, the video picture detection method as in A7, wherein the interference noise data comprises video component text information;
the step of, when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image and manually marking the target video image to obtain a video image to be classified comprises:
and when the video image to be processed contains the video component text information, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
A9, the video picture detection method as in A2, wherein the training a preset residual neural network model according to the image data set to obtain a preset video picture detection model comprises:
dividing the image dataset into a training dataset, a validation dataset, and a test dataset;
training a preset residual neural network model according to the training data set to obtain an initial video picture detection model;
predicting through the initial video picture detection model and the validation data set to determine model accuracy;
selecting a target video picture detection model from the initial video picture detection models according to the model accuracy;
and optimizing the target video picture detection model according to the test data set to obtain a preset video picture detection model.
A10, the video picture detection method as in any one of A1 to A9, wherein the determining, when a video detection instruction is received, a video file to be detected and a target processing node according to the video detection instruction includes:
when a video detection instruction is received, extracting video detection information from the video detection instruction;
determining a video file to be detected according to the video detection information;
and selecting a target processing node from the processing nodes to be selected according to the video detection instruction.
A11, the video picture detection method as in A10, wherein before the selecting the target processing node from the processing nodes to be selected according to the video detection instruction, the method further comprises:
when receiving a heartbeat message, determining message information according to the heartbeat message;
determining a node end to be selected for reporting the heartbeat message according to the message information;
searching node end information corresponding to the node end to be selected;
and generating a node to be selected according to the node end information, and establishing a corresponding relation between the node to be selected and the node end.
A12, the video picture detection method as in A11, wherein the classifying, through the target processing node, the video file to be detected based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results includes:
searching a target node end corresponding to the target processing node according to the corresponding relation between the processing node to be selected and the node end;
and sending the video file to be detected to the target node end so that the target node end classifies the video file to be detected based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results.
A13, the video picture detection method as in any one of A1 to A9, wherein the determining the target type of the video file to be detected according to the type probability value includes:
sorting the type probability values;
selecting the maximum type probability value from the type probability values according to the sorting result as a target type probability value;
and searching the video picture type corresponding to the target type probability value, and determining the target type of the video file to be detected according to the video picture type.
A14, the video picture detection method as in A13, wherein the video picture types comprise: a normal type, a screen-splash type, and a black screen type;
the determining the target type of the video file to be detected according to the video picture type comprises the following steps:
when the video picture type is a normal type, judging that the target type of the video file to be detected is a normal type;
when the video picture type is a screen-splash type, judging that the target type of the video file to be detected is the screen-splash type;
and when the video picture type is a black screen type, judging that the target type of the video file to be detected is the black screen type.
The invention also discloses B15, a video picture detection device, which comprises:
the detection instruction module is used for determining a video file to be detected and a target processing node according to a video detection instruction when the video detection instruction is received;
the classification processing module is used for classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
the type determining module is used for determining the target type of the video file to be detected according to the type probability value;
and the detection result module is used for generating a video picture detection result corresponding to the video file to be detected according to the target type.
B16, the video picture detection device of B15, further comprising:
the model training module is used for acquiring initial video frame data; classifying the initial video frame data to obtain a plurality of classes of image data sets; and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
B17, the video picture detection device according to B16, wherein the model training module is further configured to determine a video image to be processed according to the initial video frame data and obtain an image category corresponding to the video image to be processed; generate label data according to the image category; mark the video image to be processed according to the label data to obtain a video image to be classified; and classify the video images to be classified to obtain image data sets of multiple categories.
B18, the video picture detection device according to B17, wherein the model training module is further configured to determine an initial video image according to the initial video frame data; acquire image pixel information of the initial video image; and adjust the initial video image according to the image pixel information to obtain a video image to be processed.
The invention also discloses C19, video picture detection equipment, comprising: a memory, a processor, and a video picture detection program stored on the memory and executable on the processor, the video picture detection program being configured to implement the steps of the video picture detection method described above.
The invention also discloses D20, a storage medium having a video picture detection program stored thereon, the video picture detection program, when executed by a processor, implementing the steps of the video picture detection method described above.

Claims (10)

1. A video picture detection method, characterized in that the video picture detection method comprises the steps of:
when a video detection instruction is received, determining a video file to be detected and a target processing node according to the video detection instruction;
classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
determining the target type of the video file to be detected according to the type probability value;
and generating a video picture detection result corresponding to the video file to be detected according to the target type.
2. The video picture detection method according to claim 1, wherein before the target processing node classifies the video file to be detected based on a preset video picture detection model and obtains type probability values corresponding to a plurality of classification results, the method further comprises:
acquiring initial video frame data;
classifying the initial video frame data to obtain a plurality of classes of image data sets;
and training a preset residual neural network model according to the image data set to obtain a preset video picture detection model.
3. The video picture detection method of claim 2, wherein the classifying the initial video frame data to obtain a plurality of classes of image data sets comprises:
determining a video image to be processed according to the initial video frame data, and acquiring an image category corresponding to the video image to be processed;
generating label data according to the image category;
marking the video image to be processed according to the label data to obtain a video image to be classified;
and classifying the video images to be classified to obtain image data sets of multiple categories.
4. The video picture detection method of claim 3, wherein said determining a video image to be processed from the initial video frame data comprises:
determining an initial video image according to the initial video frame data;
acquiring image pixel information of the initial video image;
and adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
5. The video picture detection method of claim 4, wherein before the adjusting the initial video image according to the image pixel information to obtain the video image to be processed, the method further comprises:
comparing the image pixel information with preset pixel information;
and when the image pixel information is inconsistent with preset pixel information, executing the step of adjusting the initial video image according to the image pixel information to obtain a video image to be processed.
6. The video picture detection method of claim 5, wherein after comparing the image pixel information with preset pixel information, the method further comprises:
and when the image pixel information is consistent with the preset pixel information, taking the initial video image as a video image to be processed.
7. The video picture detection method according to claim 3, wherein the marking the video image to be processed according to the label data to obtain a video image to be classified comprises:
carrying out image detection on the video image to be processed;
judging whether the video image to be processed contains interference noise data or not according to an image detection result;
when the video image to be processed does not contain interference noise data, automatically marking the video image to be processed according to the label data to obtain a video image to be classified;
and when the video image to be processed contains interference noise data, taking the video image to be processed as a target video image, and manually marking the target video image to obtain a video image to be classified.
8. A video picture detection apparatus, characterized in that the video picture detection apparatus comprises:
the detection instruction module is used for determining a video file to be detected and a target processing node according to a video detection instruction when the video detection instruction is received;
the classification processing module is used for classifying the video file to be detected through the target processing node based on a preset video picture detection model to obtain type probability values corresponding to a plurality of classification results;
the type determining module is used for determining the target type of the video file to be detected according to the type probability value;
and the detection result module is used for generating a video picture detection result corresponding to the video file to be detected according to the target type.
9. A video picture detection apparatus, characterized in that the video picture detection apparatus comprises: a memory, a processor, and a video picture detection program stored on the memory and executable on the processor, the video picture detection program being configured to implement the steps of the video picture detection method according to any one of claims 1 to 7.
10. A storage medium having stored thereon a video picture detection program which, when executed by a processor, implements the steps of the video picture detection method according to any one of claims 1 to 7.
CN202011542276.4A 2020-12-23 2020-12-23 Video picture detection method, device, equipment and storage medium Pending CN114663339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011542276.4A CN114663339A (en) 2020-12-23 2020-12-23 Video picture detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011542276.4A CN114663339A (en) 2020-12-23 2020-12-23 Video picture detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114663339A true CN114663339A (en) 2022-06-24

Family

ID=82024908

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011542276.4A Pending CN114663339A (en) 2020-12-23 2020-12-23 Video picture detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114663339A (en)

Similar Documents

Publication Publication Date Title
US10395120B2 (en) Method, apparatus, and system for identifying objects in video images and displaying information of same
CN109740018B (en) Method and device for generating video label model
CA3066029A1 (en) Image feature acquisition
KR20180122926A (en) Method for providing learning service and apparatus thereof
CN109447156B (en) Method and apparatus for generating a model
CN109145828B (en) Method and apparatus for generating video category detection model
CN111031346A (en) Method and device for enhancing video image quality
CN109165645A (en) A kind of image processing method, device and relevant device
CN109961032B (en) Method and apparatus for generating classification model
CN108959474B (en) Entity relation extraction method
CN111783712A (en) Video processing method, device, equipment and medium
CN113778864A (en) Test case generation method and device, electronic equipment and storage medium
CN112381092A (en) Tracking method, device and computer readable storage medium
CN112527676A (en) Model automation test method, device and storage medium
CN112101231A (en) Learning behavior monitoring method, terminal, small program and server
CN117409419A (en) Image detection method, device and storage medium
WO2022062968A1 (en) Self-training method, system, apparatus, electronic device, and storage medium
CN110209780B (en) Question template generation method and device, server and storage medium
CN112633341A (en) Interface testing method and device, computer equipment and storage medium
CN114663339A (en) Video picture detection method, device, equipment and storage medium
CN111581487B (en) Information processing method and device
CN112035736B (en) Information pushing method, device and server
CN113269276A (en) Image recognition method, device, equipment and storage medium
CN112950167A (en) Design service matching method, device, equipment and storage medium
CN114913513A (en) Method and device for calculating similarity of official seal images, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination