Detailed Description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the relevant invention and are not intended to limit it. It should also be noted that, for ease of description, only the portions relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating a model or the apparatus for generating a model of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages, etc. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as video recording applications, video playback applications, voice interaction applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smartphones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited herein.
When the terminal devices 101, 102, 103 are hardware, an image capture device may also be mounted thereon. The image capture device may be any device capable of capturing images, such as a camera or a sensor. The user may use the image capture device on the terminal devices 101, 102, 103 to capture video.
The server 105 may be a server providing various services, for example, a video processing server that stores, manages, or analyzes videos uploaded by the terminal devices 101, 102, 103. The video processing server may acquire a sample set. The sample set may contain a large number of samples. Each sample in the sample set may include a sample video, first annotation information indicating whether the sample video belongs to low-quality video, and, for a sample video that belongs to low-quality video, second annotation information indicating the low-quality category of the sample video. In addition, the video processing server may use the samples in the sample set to train an initial model, and may store the training result (for example, the generated low-quality video detection model). In this way, after a user uploads a video using the terminal devices 101, 102, 103, the server 105 can detect whether the uploaded video is a low-quality video and, in turn, perform operations such as pushing prompt information.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module, which is not specifically limited herein.
It should be noted that the method for generating a model provided by the embodiments of the present application is generally performed by the server 105; correspondingly, the apparatus for generating a model is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a model according to the present application is shown. The method for generating a model includes the following steps:
Step 201: acquire a sample set.
In this embodiment, the execution body of the method for generating a model (for example, the server 105 shown in Fig. 1) may acquire the sample set in several ways. For example, the execution body may obtain an existing sample set stored in another server (for example, a database server) via a wired or wireless connection. As another example, a user may collect samples via terminal devices (for example, the terminal devices 101, 102, 103 shown in Fig. 1). The execution body may then receive the samples collected by the terminals and store them locally to generate the sample set. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections, and other wireless connections now known or developed in the future.
The sample set may contain a large number of samples. Each sample may include a sample video and first annotation information indicating whether the sample video belongs to low-quality video. For example, the first annotation information may be "1" when the sample video belongs to low-quality video, and "0" when it does not. When the sample video in a sample belongs to low-quality video, the sample further includes second annotation information indicating the low-quality category of the sample video.
It should be noted that a low-quality video is generally a video of relatively low quality. For example, low-quality videos may include, but are not limited to, blurry videos, black-screen videos, screen-recorded videos, and the like. Correspondingly, the low-quality categories may include, but are not limited to, a blurry-video category, a black-screen-video category, a screen-recorded-video category, and so on.
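For illustration only, samples carrying this two-level annotation can be pictured as simple records. The field names below are hypothetical and not part of the application:

```python
# Hypothetical sample records illustrating the two-level annotation scheme.
# Field names (video_path, is_low_quality, low_quality_category) are assumptions.
LOW_QUALITY_CATEGORIES = ["blurry", "black_screen", "screen_recorded"]

samples = [
    # A low-quality sample: first annotation "1", second annotation present.
    {"video_path": "a.mp4", "is_low_quality": 1, "low_quality_category": "blurry"},
    # A normal sample: first annotation "0", no second annotation.
    {"video_path": "b.mp4", "is_low_quality": 0, "low_quality_category": None},
]

for s in samples:
    # The second annotation exists only when the first annotation is "1".
    assert (s["low_quality_category"] is not None) == (s["is_low_quality"] == 1)
```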
Step 202: extract a sample from the sample set.
In this embodiment, the execution body may extract a sample from the sample set acquired in step 201 and perform the training steps of steps 203 to 206. The manner of extracting a sample is not limited in the present application. For example, a sample may be extracted at random, or the sample to be extracted next may be taken from the sample set in a designated order.
Step 203: input the frames of the sample video in the extracted sample into an initial model to obtain the probabilities that the sample video belongs to low-quality video and to each low-quality category.
In this embodiment, the execution body may input the frames of the sample video in the sample extracted in step 202 into the initial model. By performing feature extraction, analysis, and the like on the frames of the video, the initial model may output the probability that the sample video belongs to low-quality video, and may also output the probabilities that the sample video belongs to the respective low-quality categories. It should be noted that the probability that the sample video belongs to a given low-quality category may be understood as the conditional probability that the sample video belongs to that low-quality category, given that the sample video belongs to low-quality video.
In this embodiment, the initial model may be any of various models created based on machine learning techniques that have image feature extraction and classification capabilities. The initial model may extract features from the frames of the video and then fuse and analyze the extracted features, finally outputting the probabilities that the sample video belongs to low-quality video and to each low-quality category. In practice, during the training of the initial model, the probabilities output by the initial model are usually inaccurate. The purpose of training the initial model is to make the probabilities output by the trained model more accurate.
As an example, the initial model may be a convolutional neural network using any of various existing structures (such as DenseBox, VGGNet, ResNet, SegNet, etc.). In practice, a convolutional neural network (CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a local receptive field; it performs outstandingly in image processing, and can therefore be used to extract features from the frames of the sample video. In this example, the convolutional neural network that is built may include convolutional layers, pooling layers, a feature fusion layer, fully connected layers, and the like. The convolutional layers may be used to extract image features. The pooling layers may be used to downsample the input information. The feature fusion layer may be used to fuse the image features obtained for the respective frames (which may be in the form of feature matrices or feature vectors). For example, the feature values at the same position in the feature matrices corresponding to different frames may be averaged, thereby performing feature fusion and generating a single fused feature matrix. The fully connected layers may be used to classify the obtained features.
It will be appreciated that, since the initial model outputs both the probability that the sample video belongs to low-quality video and the probabilities that the sample video belongs to each low-quality category, the fully connected portion may consist of two parts. One part may output the probability that the sample video belongs to low-quality video; the other part may output the probabilities that the sample video belongs to the respective low-quality categories. In practice, each part may compute its probabilities using an independent softmax function.
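The two-part output described above can be sketched as follows. This is a minimal illustration using NumPy with a randomly initialized fused feature vector and illustrative layer sizes; it is not the application's actual network, whose convolutional and pooling layers are omitted here:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)

# Fused feature vector for one sample video (e.g. per-frame features averaged
# elementwise, as the feature fusion layer described above does).
frame_features = rng.standard_normal((8, 16))   # 8 frames, 16-dim features each
fused = frame_features.mean(axis=0)             # feature fusion by averaging

# Two independent fully connected "heads", each with its own softmax:
# head 1 -> P(low-quality) vs P(not low-quality); head 2 -> low-quality categories.
W1, b1 = rng.standard_normal((2, 16)), np.zeros(2)
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)  # 3 illustrative categories

p_low_quality = softmax(W1 @ fused + b1)  # binary head
p_category = softmax(W2 @ fused + b2)     # conditional category head
```

Each head's output sums to 1 on its own, which matches the use of an independent softmax per part.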
It should be noted that the initial model may also be any other model having image feature extraction and classification capabilities; it is not limited to the example above, and the specific model structure is not limited herein.
Step 204: determine the loss value of the sample based on the annotation information in the extracted sample, the obtained probabilities, and a pre-established loss function.
In this embodiment, the execution body may determine the loss value of the sample based on the annotation information in the extracted sample (which may include the first annotation information and the second annotation information), the obtained probabilities, and the pre-established loss function. In practice, a loss function can be used to measure the degree of inconsistency between the information output by the initial model (such as a probability) and the true value (such as annotation information). In general, the smaller the value of the loss function (the loss value), the more robust the model. The loss function may be set according to actual needs.
In this embodiment, the loss function may be set so as to take two parts of loss into account (for example, it may be set to the sum of the two losses, or to a weighted combination of the two losses). One part of the loss may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to low-quality video and the true value (e.g. the first annotation information: if the first annotation information indicates that the sample video is a low-quality video, the true value is 1; otherwise, 0). The other part of the loss may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to the low-quality category indicated by the second annotation information and the true value (e.g. 1). It should be noted that, when the extracted sample does not contain the second annotation information, this part of the loss may be set to a preset value (e.g. 0). In practice, each of the two losses may be computed using a cross-entropy loss.
In some optional implementations of this embodiment, the execution body may determine the loss value of the sample as follows:
In a first step, the first annotation information in the extracted sample and the probability that the sample video belongs to low-quality video are input into a pre-established first loss function to obtain a first loss value. Here, the first loss function may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to low-quality video and the first annotation information. In practice, a cross-entropy loss may be used as the first loss function.
In a second step, in response to determining that the extracted sample does not contain the second annotation information, the first loss value may be determined as the loss value of the extracted sample.
Optionally, in the above implementation, in response to determining that the extracted sample contains the second annotation information, the execution body may determine the loss value of the sample as follows. First, the low-quality category indicated by the second annotation information in the extracted sample may be taken as the target category. Then, the second annotation information contained in the extracted sample and the probability, output by the initial model, that the sample video belongs to the target category may be input into a pre-established second loss function to obtain a second loss value. Here, the second loss function may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to the target category (i.e., the low-quality category indicated by the second annotation information) and the true value (e.g. 1). In practice, a cross-entropy loss may also be used as the second loss function. Thereafter, the sum of the first loss value and the second loss value may be determined as the loss value of the extracted sample. The loss value of the sample may also be obtained in other ways; for example, a weighted combination of the first loss value and the second loss value may be determined as the loss value of the extracted sample, where the weights may be preset by a technician as needed.
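This optional loss computation can be sketched as follows. This is a minimal illustration; the function names and the weights `w1`, `w2` are assumptions, and with both weights set to 1 the result is simply the sum of the two loss values:

```python
import math

def cross_entropy(p_true_class):
    # Cross-entropy loss given the model's probability for the true class.
    return -math.log(max(p_true_class, 1e-12))

def sample_loss(p_low, p_categories, first_label, second_label=None, w1=1.0, w2=1.0):
    # first_label: 1 if low-quality, else 0; p_low = P(low-quality) from the first head.
    p_true = p_low if first_label == 1 else 1.0 - p_low
    first_loss = cross_entropy(p_true)
    if second_label is None:
        # No second annotation: the category part of the loss takes a preset
        # value (0 here), so the sample loss is just the first loss value.
        return first_loss
    # Second annotation present: add cross entropy for the target category.
    second_loss = cross_entropy(p_categories[second_label])
    return w1 * first_loss + w2 * second_loss  # weighted combination

# A normal video predicted with P(low-quality) = 0.1 -> only the first loss.
l1 = sample_loss(0.1, [0.5, 0.3, 0.2], first_label=0)
# A low-quality video of category 0 -> sum of both loss values.
l2 = sample_loss(0.9, [0.7, 0.2, 0.1], first_label=1, second_label=0)
```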
Step 205: determine, based on a comparison of the loss value with a target value, whether training of the initial model is complete.
In this embodiment, the execution body may determine whether training of the initial model is complete based on a comparison of the determined loss value with the target value. As an example, the execution body may determine whether the loss value has converged; when it is determined that the loss value has converged, it may be determined that training of the current initial model is complete. As another example, the execution body may first compare the loss value with the target value. In response to determining that the loss value is less than or equal to the target value, it may count, among the loss values determined in the most recent preset number of training steps (e.g. the last 100), the proportion of loss values that are less than or equal to the target value. When that proportion exceeds a preset ratio (e.g. 95%), it may be determined that training of the initial model is complete. It should be noted that multiple (at least two) samples may have been extracted in step 202. In that case, the operations described in steps 202 to 204 may be used to compute the loss value of each sample, and the execution body may compare the loss value of each sample with the target value to determine whether each loss value is less than or equal to the target value. It should be pointed out that the target value can generally be used to represent an ideal degree of inconsistency between the predicted value and the true value; that is, when the loss value is less than or equal to the target value, the predicted value may be considered close or approximately equal to the true value. The target value may be set according to actual needs.
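The second example above, counting how many recent loss values are at or below the target value, can be sketched as follows (the window size, target value, and ratio are illustrative):

```python
from collections import deque

def make_completion_check(target, window=100, ratio=0.95):
    # Keeps the loss values of the most recent `window` training steps and
    # reports training complete once the fraction of them at or below the
    # target value exceeds `ratio`. Parameter values are illustrative.
    recent = deque(maxlen=window)

    def check(loss_value):
        recent.append(loss_value)
        if loss_value > target:
            return False  # only consider completion when this loss is small enough
        hits = sum(1 for v in recent if v <= target)
        return hits / window > ratio  # proportion over the preset number of steps

    return check

check = make_completion_check(target=0.05, window=10, ratio=0.8)
done = [check(0.5) for _ in range(10)]   # large losses: never complete
done += [check(0.01) for _ in range(9)]  # small losses accumulate in the window
# After 9 small losses, 9 of the last 10 are at or below the target -> complete.
```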
It should be noted that, in response to determining that training of the initial model is complete, step 206 may then be performed. In response to determining that training of the initial model is not complete, the parameters of the initial model may be updated based on the determined loss value, a sample may be extracted again from the sample set, and the above training steps may be continued using the initial model with updated parameters as the initial model. Here, a backpropagation algorithm may be used to obtain the gradient of the loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be noted that the backpropagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and will not be described in detail here. It should be pointed out that the manner of sample extraction here is also not limited in the present application; for example, when there are a large number of samples in the sample set, the execution body may extract samples that have not yet been extracted.
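The update rule described above, computing the gradient of the loss and descending along it, can be illustrated on a toy one-parameter model. The model and learning rate are illustrative; in practice, backpropagation supplies the gradient for every parameter of the network:

```python
# Gradient descent on a single parameter w of a toy model y_hat = w * x,
# with a squared-error loss. Illustrative only.
def loss(w, x, y):
    return (w * x - y) ** 2

def grad(w, x, y):
    # Analytic gradient of the loss with respect to w (what backprop computes).
    return 2 * x * (w * x - y)

w, lr = 0.0, 0.1
for _ in range(100):                  # repeat: compute gradient, update parameter
    w -= lr * grad(w, x=1.0, y=3.0)   # gradient descent step
# w converges toward 3.0, driving the loss toward zero.
```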
Step 206: in response to determining that training of the initial model is complete, determine the trained initial model as the low-quality video detection model.
In this embodiment, in response to determining that training of the initial model is complete, the execution body may determine the trained initial model as the low-quality video detection model. The low-quality video detection model can detect whether a video under examination is a low-quality video and, at the same time, can detect the low-quality category of a low-quality video.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to this embodiment. In the application scenario of Fig. 3, a model training application may be installed on the terminal device 301 used by a user. After the user opens the application and uploads a sample set or the storage path of a sample set, the server 302 providing backend support for the application may run the method for generating a low-quality video detection model, as follows:
First, a sample set may be acquired. Each sample in the sample set may include a sample video 303, first annotation information 304 indicating whether the sample video belongs to low-quality video, and second annotation information 305 indicating the low-quality category of a sample video that belongs to low-quality video. Thereafter, a sample may be extracted from the sample set and the following training steps performed: inputting the frames of the sample video in the extracted sample into the initial model 306 to obtain the probabilities that the sample video belongs to low-quality video and to each low-quality category; determining the loss value 307 of the sample based on the annotation information in the extracted sample, the obtained probabilities, and the pre-established loss function; and determining, based on a comparison of the loss value with the target value, whether training of the initial model is complete. If training of the initial model is complete, the trained initial model is determined as the low-quality video detection model 308.
The method provided by the above embodiment of the present application trains the initial model by acquiring a sample set from which samples can be extracted. Each sample in the sample set may include a sample video, first annotation information indicating whether the sample video belongs to low-quality video, and second annotation information indicating the low-quality category of a sample video that belongs to low-quality video. By inputting the frames of the sample video in an extracted sample into the initial model, the probabilities, output by the initial model, that the sample video belongs to low-quality video and to each low-quality category can be obtained. Then, based on the annotation information in the extracted sample, the obtained probabilities, and the pre-established loss function, the loss value of the sample can be determined. Thereafter, based on a comparison of the loss value with the target value, it can be determined whether training of the initial model is complete. If training of the initial model is complete, the trained initial model can be determined as the low-quality video detection model. A model usable for low-quality video detection is thus obtained, which helps improve the efficiency of low-quality video detection.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a model is shown. The flow 400 of the method for generating a model includes the following steps:
Step 401: acquire a sample set.
In this embodiment, the execution body of the method for generating a model (for example, the server 105 shown in Fig. 1) may acquire a sample set. Each sample may include a sample video and first annotation information indicating whether the sample video belongs to low-quality video. When the sample video in a sample belongs to low-quality video, the sample further includes second annotation information indicating the low-quality category of the sample video.
Step 402: extract a sample from the sample set.
In this embodiment, the execution body may extract a sample from the sample set acquired in step 401 and perform the training steps of steps 403 to 410. The manner of extracting a sample is not limited in the present application. For example, a sample may be extracted at random, or the sample to be extracted next may be taken from the sample set in a designated order.
Step 403: input the frames of the sample video in the extracted sample into the initial model to obtain the probabilities that the sample video belongs to low-quality video and to each low-quality category.
In this embodiment, the execution body may input the frames of the sample video in the sample extracted in step 402 into the initial model. By performing feature extraction, analysis, and the like on the frames of the video, the initial model may output the probability that the sample video belongs to low-quality video, and may also output the probabilities that the sample video belongs to the respective low-quality categories.
In this embodiment, the initial model may be a convolutional neural network created based on machine learning techniques. The convolutional neural network that is built may include convolutional layers, pooling layers, a feature fusion layer, fully connected layers, and the like. The fully connected portion may consist of two parts. One part may output the probability that the sample video belongs to low-quality video; the other part may output the probabilities that the sample video belongs to the respective low-quality categories. In practice, each part may compute its probabilities using an independent softmax function.
Step 404: input the first annotation information in the extracted sample and the probability that the sample video belongs to low-quality video into a pre-established first loss function to obtain a first loss value.
In this embodiment, the execution body may input the first annotation information in the extracted sample and the probability, output in step 403, that the sample video belongs to low-quality video into the pre-established first loss function to obtain the first loss value. Here, the first loss function may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to low-quality video and the first annotation information. In practice, a cross-entropy loss may be used as the first loss function.
Step 405: determine whether the extracted sample contains the second annotation information.
In this embodiment, the execution body may determine whether the extracted sample contains the second annotation information. If it does not, step 406 may be performed to determine the loss value of the sample. If it does, steps 407-408 may be performed to determine the loss value of the sample.
Step 406: in response to determining that the extracted sample does not contain the second annotation information, determine the first loss value as the loss value of the extracted sample.
In this embodiment, in response to determining that the extracted sample does not contain the second annotation information, the execution body may determine the first loss value as the loss value of the extracted sample.
Step 407: in response to determining that the extracted sample contains the second annotation information, take the low-quality category indicated by the second annotation information in the extracted sample as the target category, and input the second annotation information contained in the extracted sample and the probability that the sample video belongs to the target category into a pre-established second loss function to obtain a second loss value.
In this embodiment, in response to determining that the extracted sample contains the second annotation information, the execution body may take the low-quality category indicated by the second annotation information in the extracted sample as the target category, and input the second annotation information contained in the extracted sample and the probability that the sample video belongs to the target category into the pre-established second loss function to obtain the second loss value. Here, the second loss function may characterize the degree of difference between the probability, output by the initial model, that the sample video belongs to the target category and the true value (e.g. 1). In practice, a cross-entropy loss may also be used as the second loss function.
Step 408: determine the sum of the first loss value and the second loss value as the loss value of the extracted sample.
In this embodiment, the execution body may determine the sum of the first loss value and the second loss value as the loss value of the extracted sample.
Step 409: determine, based on a comparison of the loss value with the target value, whether training of the initial model is complete.
In this embodiment, the execution body may determine whether training of the initial model is complete based on a comparison of the determined loss value with the target value. As an example, the execution body may determine whether the loss value has converged; when it is determined that the loss value has converged, it may be determined that training of the current initial model is complete. As another example, the execution body may first compare the loss value with the target value. In response to determining that the loss value is less than or equal to the target value, it may count, among the loss values determined in the most recent preset number of training steps (e.g. the last 100), the proportion of loss values that are less than or equal to the target value. When that proportion exceeds a preset ratio (e.g. 95%), it may be determined that training of the initial model is complete. It should be noted that the target value can generally be used to represent an ideal degree of inconsistency between the predicted value and the true value; that is, when the loss value is less than or equal to the target value, the predicted value may be considered close or approximately equal to the true value. The target value may be set according to actual needs.
It should be noted that, in response to determining that training of the initial model is complete, step 410 may then be performed. In response to determining that training of the initial model is not complete, the parameters of the initial model may be updated based on the determined loss value, a sample may be extracted again from the sample set, and the above training steps may be continued using the initial model with updated parameters as the initial model. Here, a backpropagation algorithm may be used to obtain the gradient of the loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be noted that the backpropagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely studied and applied, and will not be described in detail here. It should be pointed out that the manner of sample extraction here is also not limited in the present application; for example, when there are a large number of samples in the sample set, the execution body may extract samples that have not yet been extracted.
Step 410: in response to determining that training of the initial model is complete, determine the trained initial model as the low-quality video detection model.
In this embodiment, in response to determining that training of the initial model is complete, the execution body may determine the trained initial model as the low-quality video detection model. The low-quality video detection model can detect whether a video under examination is a low-quality video and, at the same time, can detect the low-quality category of a low-quality video.
Figure 4, it is seen that the method for generating model compared with the corresponding embodiment of Fig. 2, in the present embodiment
Process 400 relate to a kind of calculations of penalty values.Initial model is carried out based on the penalty values that this mode is calculated
Training, the model realization after training can be made to the detection function of low-quality video, and, realize the low-quality class to low-quality video
Other detection function.Meanwhile video detection is carried out using the low-quality video detection model trained, help to be promoted to low-quality
The detection speed of video, and, help to promote the detection effect to low-quality classification.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a model in the present embodiment includes: an acquiring unit 501, configured to acquire a sample set, wherein a sample may include a sample video and first annotation information indicating whether the sample video belongs to low-quality video, and, when the sample video in the sample belongs to low-quality video, the sample further includes second annotation information indicating the low-quality category of the sample video; and a training unit 502, configured to extract a sample from the sample set and perform the following training step: inputting frames of the sample video in the extracted sample into an initial model to obtain the probability that the sample video belongs to low-quality video and the probability of each low-quality category; determining the loss value of the sample based on the annotation information in the extracted sample, the obtained probabilities and a pre-established loss function; determining, based on a comparison of the loss value with a target value, whether the training of the initial model is completed; and, in response to determining that the training of the initial model is completed, determining the trained initial model as the low-quality video detection model.
In some optional implementations of the present embodiment, the training unit 502 may be further configured to: input the first annotation information in the extracted sample and the probability that the sample video belongs to low-quality video into a pre-established first loss function to obtain a first loss value; and, in response to determining that the extracted sample does not include second annotation information, determine the first loss value as the loss value of the extracted sample.
In some optional implementations of the present embodiment, the training unit 502 may be further configured to: in response to determining that the extracted sample includes second annotation information, take the low-quality category indicated by the second annotation information in the extracted sample as a target category, and input the second annotation information included in the extracted sample and the probability that the sample video belongs to the target category into a pre-established second loss function to obtain a second loss value; and determine the sum of the first loss value and the second loss value as the loss value of the extracted sample.
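The two-branch loss determination described above can be sketched as follows. The application does not fix the concrete loss functions, so the choices below — binary cross-entropy as the first loss and the negative log-probability of the target category as the second — are assumptions made purely for illustration.

```python
import math

def first_loss(first_label, p_low_quality):
    """Binary cross-entropy between the first annotation (0 or 1)
    and the predicted probability of belonging to low-quality video."""
    eps = 1e-12  # avoid log(0)
    return -(first_label * math.log(p_low_quality + eps)
             + (1 - first_label) * math.log(1 - p_low_quality + eps))

def sample_loss(first_label, p_low_quality, p_categories, target_category=None):
    """Loss value of one sample: the first loss alone when the sample has no
    second annotation, otherwise the sum of the first and second loss values."""
    loss = first_loss(first_label, p_low_quality)
    if target_category is not None:
        # Second loss: negative log-probability of the annotated target category.
        loss += -math.log(p_categories[target_category] + 1e-12)
    return loss

# A sample annotated as low-quality (first label 1) of category index 2:
loss = sample_loss(1, 0.9, [0.1, 0.2, 0.7], target_category=2)
```

The structure mirrors the optional implementations above: samples without a second annotation contribute only the first loss value, while annotated low-quality samples contribute the sum of both.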
In some optional implementations of the present embodiment, the apparatus may further include an updating unit (not shown in the figure). The updating unit may be configured to: in response to determining that the training of the initial model is not completed, update the parameters of the initial model based on the determined loss value, extract a sample from the sample set again, and continue to perform the training step using the initial model with the updated parameters as the initial model.
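The extract–evaluate–update loop performed by the training unit and the updating unit can be sketched as follows. This is a minimal sketch under stated assumptions: `compute_loss_and_grad`, the learning rate, the step budget and the simple loss-below-target stopping rule are placeholders that the application leaves unspecified.

```python
import random

def train(sample_set, params, compute_loss_and_grad, target_value,
          learning_rate=0.1, max_steps=1000):
    """Repeat the training step: draw a sample, compute its loss value, and
    either stop (loss not greater than the target value, i.e. training is
    completed) or update the parameters by gradient descent and continue."""
    remaining = list(sample_set)
    for _ in range(max_steps):
        if not remaining:                  # re-fill once every sample was drawn
            remaining = list(sample_set)
        sample = remaining.pop(random.randrange(len(remaining)))
        loss, grad = compute_loss_and_grad(params, sample)
        if loss <= target_value:           # training completed
            return params
        # Update the model parameters based on the determined loss gradient.
        params = [p - learning_rate * g for p, g in zip(params, grad)]
    return params

# Toy check under the stated assumptions: fit one parameter to the value 3.0.
def _quadratic(params, sample):
    err = params[0] - sample
    return err * err, [2.0 * err]

trained = train([3.0], [0.0], _quadratic, target_value=1e-4)
```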
In the apparatus provided by the above embodiment of this application, the acquiring unit 501 acquires a sample set, from which the training unit 502 can extract samples for training the initial model. A sample in the sample set may include a sample video, first annotation information indicating whether the sample video belongs to low-quality video and, for a sample video that belongs to low-quality video, second annotation information indicating its low-quality category. In this way, by inputting frames of the sample video in the extracted sample into the initial model, the training unit 502 can obtain the probability, output by the initial model, that the sample video belongs to low-quality video and the probability of each low-quality category. Then, the loss value of the sample can be determined based on the annotation information in the extracted sample, the obtained probabilities and the pre-established loss function. Afterwards, whether the training of the initial model is completed can be determined based on a comparison of the loss value with the target value. If the training of the initial model is completed, the trained initial model can be determined as the low-quality video detection model. A model usable for low-quality video detection can thereby be obtained, which helps to improve the efficiency of low-quality video detection.
Referring to Fig. 6, a flow 600 of an embodiment of a method for detecting low-quality video provided by this application is shown. The method for detecting low-quality video may include the following steps:
Step 601: receiving a low-quality video detection request including a target video.
In the present embodiment, the executing subject of the method for detecting low-quality video (e.g., the server 105 shown in Fig. 1, or another server storing the low-quality video detection model) may receive a low-quality video detection request including a target video. Here, the target video may be a video on which low-quality video detection is to be performed. The target video may be pre-stored in the executing subject, or may be transmitted by another electronic device (e.g., the terminal devices 101, 102, 103 shown in Fig. 1).
Step 602: inputting frames of the target video into the low-quality video detection model to obtain a detection result.
In the present embodiment, the executing subject may input frames of the target video into the low-quality video detection model to obtain a detection result. The detection result may include the probability that the target video belongs to low-quality video. The low-quality video detection model may be generated using the method for generating a low-quality video detection model described in the embodiment of Fig. 2. For the specific generation process, reference may be made to the related description of the embodiment of Fig. 2, which is not repeated here.
Step 603: in response to determining that the probability that the target video belongs to low-quality video is greater than a first preset threshold, determining that the target video is a low-quality video.
In the present embodiment, in response to determining that the probability that the target video belongs to low-quality video is greater than the first preset threshold, the executing subject may determine that the target video is a low-quality video.
In some optional implementations of the present embodiment, the detection result may also include the probability that the target video belongs to each low-quality category. After determining that the target video is a low-quality video, the executing subject may further perform the following operations:
First, in response to receiving a low-quality category detection request, the probability that the target video belongs to low-quality video may be taken as a first probability. For each low-quality category, the product of the probability that the target video belongs to that low-quality category and the first probability is determined, and the product is determined as the probability that the target video belongs to the low-quality category.
Afterwards, any low-quality category whose probability is greater than a second preset value may be determined as a low-quality category of the target video. The low-quality categories of the target video can thereby be determined.
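Steps 601 to 603, together with the optional category determination above, can be sketched as follows. The threshold values and the category names are placeholder assumptions; only the decision logic — threshold on the first probability, then category probability taken as the product with the first probability — follows the text.

```python
def detect(p_low_quality, p_categories,
           first_threshold=0.5, second_threshold=0.3):
    """Return (is_low_quality, categories). The video is a low-quality video
    when its probability exceeds the first preset threshold; a low-quality
    category is reported when the product of its probability and the first
    probability exceeds the second preset value."""
    if p_low_quality <= first_threshold:
        return False, []
    categories = []
    for name, p in p_categories.items():
        # Probability of the category = category probability x first probability.
        if p * p_low_quality > second_threshold:
            categories.append(name)
    return True, categories

is_lq, cats = detect(0.9, {"blurry": 0.6, "black_screen": 0.2})
```

With these illustrative numbers, 0.6 x 0.9 = 0.54 exceeds the second value while 0.2 x 0.9 = 0.18 does not, so only the first category would be reported.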
It should be noted that the method for detecting low-quality video of the present embodiment may be used to test the low-quality video detection models generated by the above embodiments, and the low-quality video detection model may then be continually optimized according to the test results. The method may also be a practical application of the low-quality video detection models generated by the above embodiments. Performing low-quality video detection with a low-quality video detection model generated by the above embodiments helps to improve the performance of the low-quality video detection model; meanwhile, it improves the speed of detecting low-quality videos and the effectiveness of detecting low-quality categories.
With continued reference to Fig. 7, as an implementation of the method shown in Fig. 6, this application provides an embodiment of an apparatus for detecting low-quality video. The apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 7, the apparatus 700 for detecting low-quality video in the present embodiment includes: a first receiving unit 701, configured to receive a low-quality video detection request including a target video; an input unit 702, configured to input frames of the target video into the low-quality video detection model to obtain a detection result, wherein the detection result includes the probability that the target video belongs to low-quality video; and a first determination unit 703, configured to determine, in response to determining that the probability is greater than a first preset threshold, that the target video is a low-quality video.
In some optional implementations of the present embodiment, the detection result may also include the probability that the target video belongs to each low-quality category, and the apparatus may further include a second receiving unit and a second determination unit (not shown). The second receiving unit may be configured to: in response to receiving a low-quality category detection request, take the probability that the target video belongs to low-quality video as a first probability, and, for each low-quality category, determine the product of the probability that the target video belongs to that low-quality category and the first probability, and determine the product as the probability that the target video belongs to the low-quality category. The second determination unit may be configured to determine a low-quality category whose probability is greater than a second preset value as a low-quality category of the target video.
It can be understood that the units recorded in the apparatus 700 correspond to the steps of the method described with reference to Fig. 6. The operations, features and beneficial effects described above for the method are therefore equally applicable to the apparatus 700 and the units included therein, and are not repeated here.
Referring now to Fig. 8, a structural schematic diagram of a computer system 800 of an electronic device suitable for implementing the embodiments of this application is shown. The electronic device shown in Fig. 8 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of this application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. Various programs and data required for the operation of the system 800 are also stored in the RAM 803. The CPU 801, the ROM 802 and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD) and the like, as well as a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem and the like. The communication section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read therefrom can be installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above-mentioned functions defined in the methods of this application are performed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this application, the computer-readable storage medium may be any tangible medium that contains or stores a program, the program being usable by, or in connection with, an instruction execution system, apparatus or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF and the like, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions and operations of the systems, methods and computer program products according to various embodiments of this application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, program segment or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of this application may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquiring unit and a training unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring a sample set".
As another aspect, this application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a sample set; extract a sample from the sample set and perform the following training step: inputting frames of the sample video in the extracted sample into an initial model to obtain the probability that the sample video belongs to low-quality video and the probability of each low-quality category; determining the loss value of the sample based on the annotation information in the extracted sample, the obtained probabilities and a pre-established loss function; determining, based on a comparison of the loss value with a target value, whether the training of the initial model is completed; and, in response to determining that the training of the initial model is completed, determining the trained initial model as the low-quality video detection model.
The above description is only a preferred embodiment of this application and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in this application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in this application.