Summary of the Invention
Embodiments of the present application propose a method and apparatus for generating a model, and a method and apparatus for recognizing a video.
In a first aspect, an embodiment of the present application provides a method for generating a model, the method including: acquiring a training sample set, and dividing the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object; for a training sample group among the preset number of training sample groups, taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and training, using a machine learning method, an initial video recognition model corresponding to the group; and generating a video recognition model based on the obtained initial video recognition models.
In some embodiments, for a training sample group among the preset number of training sample groups, taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and training, using a machine learning method, an initial video recognition model corresponding to the group includes: selecting a training sample group from the preset number of training sample groups as a candidate training sample group, and performing the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, training the initial model using the machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
In some embodiments, training the initial video recognition model corresponding to the group further includes: in response to determining that an unselected training sample group exists, selecting a training sample group from the unselected training sample groups as a new candidate training sample group, taking the initial video recognition model obtained last time as a new initial model, and continuing to perform the training step.
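The sequential selection-and-training loop described above can be sketched as follows. This is an illustrative stand-in, not the claimed method: a toy linear model replaces the video recognition model, `train_group` stands in for training with a machine learning method, and all names are assumptions.

```python
def train_group(model, group):
    """Hypothetical one-group training step: nudge the model toward the
    group's labelled samples (a stand-in for real model training)."""
    slope, intercept = model
    for features, label in group:
        error = label - (slope * features + intercept)
        slope += 0.1 * error * features
        intercept += 0.1 * error
    return (slope, intercept)

def train_sequentially(groups, initial_model=(0.0, 0.0)):
    """Select each unselected group in turn; the model obtained last time
    becomes the new initial model (a warm start), and one initial
    video-recognition model is collected per group."""
    models = []
    model = initial_model
    for group in groups:            # stops when no unselected group remains
        model = train_group(model, group)
        models.append(model)
    return models                   # the preset number of initial models
```

Because each group's training starts from the previous group's model, later models accumulate information from all earlier groups, which is the point of the warm-start variant described in the embodiment.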
In some embodiments, training the initial video recognition model corresponding to the group includes: determining values for characterizing the quality of the preset number of training sample groups; based on the determined values, selecting the best training sample group from the preset number of training sample groups as a candidate training sample group, and performing the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, training the initial model using the machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
In some embodiments, training the initial video recognition model corresponding to the group further includes: in response to determining that an unselected training sample group exists, selecting, based on the determined values, the best training sample group from the unselected training sample groups as a new candidate training sample group, taking the initial video recognition model obtained last time as a new initial model, and continuing to perform the training step.
In some embodiments, determining the values for characterizing the quality of the preset number of training sample groups includes: acquiring a preset verification sample set, where a verification sample includes a verification video and a verification recognition result pre-annotated for the verification video; and, for each training sample group among the preset number of training sample groups, performing the following steps: taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as output, training, using the machine learning method, a to-be-verified video recognition model corresponding to the group; inputting the verification videos of the verification samples in the verification sample set into the to-be-verified video recognition model corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the value characterizing the quality of the group.
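The group-quality evaluation above can be sketched under loose assumptions: a trivial majority-label classifier stands in for the to-be-verified video recognition model, a 0-1 loss stands in for the loss values, and the quality value is simply the inverse of the accumulated loss. All names are illustrative.

```python
def zero_one_loss(predicted, expected):
    """0-1 loss standing in for whatever loss the embodiment uses."""
    return 0.0 if predicted == expected else 1.0

def score_group(group, verification_set):
    """'Train' a trivial to-be-verified model on one training-sample
    group (predict the group's majority label), run the verification
    videos through it, and turn the total loss into a quality value:
    lower loss -> higher quality."""
    majority = round(sum(label for _, label in group) / len(group))
    model = lambda video: majority          # stand-in trained model
    total_loss = sum(zero_one_loss(model(video), expected)
                     for video, expected in verification_set)
    return 1.0 / (1.0 + total_loss)
```

A group whose stand-in model matches every verification label scores 1.0; each verification miss pushes the value lower.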
In some embodiments, generating the video recognition model based on the obtained initial video recognition models includes: assigning weights to the obtained initial video recognition models based on the determined values; and fusing the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
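A minimal sketch of this weighted-fusion variant, assuming the per-group quality values from the previous embodiment are already available: weights are made proportional to those values, and the initial models' outputs are combined as a weighted vote. Names are illustrative.

```python
def quality_weights(quality_values):
    """Assign each initial model a weight proportional to the quality
    value of the group it was trained on."""
    total = sum(quality_values)
    return [q / total for q in quality_values]

def fused_predict(models, weights, video):
    """Weighted vote of the initial models' outputs on one video; the
    fused callable is the generated video-recognition model."""
    return sum(w * model(video) for w, model in zip(weights, models))
```

A model trained on a better-scoring group thus contributes more to the fused recognition result.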
In some embodiments, generating the video recognition model based on the obtained initial video recognition models includes: determining the initial video recognition model obtained last time as the video recognition model.
In a second aspect, an embodiment of the present application provides an apparatus for generating a model, the apparatus including: a sample acquisition unit configured to acquire a training sample set and divide the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object; a model training unit configured to, for a training sample group among the preset number of training sample groups, take the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video recognition model corresponding to the group; and a model generation unit configured to generate a video recognition model based on the obtained initial video recognition models.
In some embodiments, the model training unit includes: a first execution module configured to select a training sample group from the preset number of training sample groups as a candidate training sample group and perform the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, training the initial model using the machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
In some embodiments, the model training unit further includes: a second execution module configured to, in response to determining that an unselected training sample group exists, select a training sample group from the unselected training sample groups as a new candidate training sample group, take the initial video recognition model obtained last time as a new initial model, and continue to perform the training step.
In some embodiments, the model training unit includes: a value determining module configured to determine values for characterizing the quality of the preset number of training sample groups; and a third execution module configured to select, based on the determined values, the best training sample group from the preset number of training sample groups as a candidate training sample group and perform the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, training the initial model using the machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
In some embodiments, the model training unit further includes: a fourth execution module configured to, in response to determining that an unselected training sample group exists, select, based on the determined values, the best training sample group from the unselected training sample groups as a new candidate training sample group, take the initial video recognition model obtained last time as a new initial model, and continue to perform the training step.
In some embodiments, the value determining module includes: a sample acquisition module configured to acquire a preset verification sample set, where a verification sample includes a verification video and a verification recognition result pre-annotated for the verification video; and a value generation module configured to, for each training sample group among the preset number of training sample groups, perform the following steps: taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as output, training, using the machine learning method, a to-be-verified video recognition model corresponding to the group; inputting the verification videos of the verification samples in the verification sample set into the to-be-verified video recognition model corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the value characterizing the quality of the group.
In some embodiments, the model generation unit includes: a weight assignment module configured to assign weights to the obtained initial video recognition models based on the determined values; and a model fusion module configured to fuse the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
In a third aspect, an embodiment of the present application provides a method for recognizing a video, the method including: acquiring a to-be-recognized video, where the to-be-recognized video is a video obtained by shooting an object; and inputting the to-be-recognized video into a video recognition model generated using the method described in any embodiment of the first aspect, to generate a recognition result corresponding to the to-be-recognized video, where the recognition result is used to indicate whether the to-be-recognized video is a video obtained by shooting a screen displaying the object.
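The recognition flow of the third aspect can be sketched as follows; the model here is a hypothetical stand-in (average frame brightness against a threshold) rather than a trained video recognition model, and every name is an assumption.

```python
def recognize(video_frames, model, threshold=0.5):
    """Feed the to-be-recognized video into a model (any callable mapping
    frames -> score in [0, 1]) and read off the recognition result:
    1 -> shot against a screen displaying the object, 0 -> not."""
    score = model(video_frames)
    return 1 if score >= threshold else 0

# Stand-in model: flags videos whose frames have high average brightness.
brightness_model = lambda frames: sum(frames) / len(frames)
```

In practice the callable would be the fused video recognition model generated in the first aspect; the stand-in merely shows the input/output contract.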
In a fourth aspect, an embodiment of the present application provides an apparatus for recognizing a video, the apparatus including: a video acquisition unit configured to acquire a to-be-recognized video, where the to-be-recognized video is a video obtained by shooting an object; and a result generation unit configured to input the to-be-recognized video into a video recognition model generated using the method described in any embodiment of the first aspect, to generate a recognition result corresponding to the to-be-recognized video, where the recognition result is used to indicate whether the to-be-recognized video is a video obtained by shooting a screen displaying the object.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any embodiment of the first aspect or the third aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method described in any embodiment of the first aspect or the third aspect.
According to the method and apparatus for generating a model provided by the embodiments of the present application, a training sample set is acquired and divided into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object; then, for a training sample group among the preset number of training sample groups, the sample videos of the training samples in the group are taken as input and the sample recognition results corresponding to the input sample videos are taken as desired output, and an initial video recognition model corresponding to the group is trained using a machine learning method; finally, a video recognition model is generated based on the obtained initial video recognition models. A model usable for recognizing videos is thereby obtained, and the ways in which models can be generated are enriched.
Detailed Description of the Embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It may be understood that the specific embodiments described herein are used only to explain the related invention, and do not limit the invention. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating a model, the apparatus for generating a model, the method for recognizing a video, or the apparatus for recognizing a video of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminals 101 and 102, a network 103, a database server 104, and a server 105. The network 103 serves as a medium providing communication links between the terminals 101 and 102, the database server 104, and the server 105. The network 103 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user 110 may use the terminals 101 and 102 to interact with the server 105 through the network 103 to receive or send messages and the like. Various client applications may be installed on the terminals 101 and 102, such as model training applications, video recognition applications, social applications, payment applications, web browsers, and instant messaging tools.
The terminals 101 and 102 here may be hardware or software. When the terminals 101 and 102 are hardware, they may be various electronic devices having display screens, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, laptop portable computers, desktop computers, and the like. When the terminals 101 and 102 are software, they may be installed in the electronic devices listed above. They may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. This is not specifically limited here.
When the terminals 101 and 102 are hardware, video capture devices may also be installed on them. A video capture device may be any of various devices capable of capturing video, such as a camera or a sensor. The user 110 may use the video capture device on the terminal 101 or 102 to capture video.
The database server 104 may be a database server providing various services. For example, a sample set may be stored in the database server. The sample set contains a large number of samples. A sample may include a sample video and a sample recognition result pre-annotated for the sample video. In this way, the user 110 may also select samples, through the terminals 101 and 102, from the sample set stored in the database server 104.
The server 105 may also be a server providing various services, such as a background server providing support for the various applications displayed on the terminals 101 and 102. The background server may train an initial model using the samples in the sample set sent by the terminals 101 and 102, and may send the training result (for example, the generated video recognition model) to the terminals 101 and 102. In this way, the user can apply the generated video recognition model to video recognition.
The database server 104 and the server 105 here may likewise be hardware or software. When they are hardware, they may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. This is not specifically limited here.
It should be noted that the method for generating a model or the method for recognizing a video provided by the embodiments of the present application is generally executed by the server 105. Correspondingly, the apparatus for generating a model or the apparatus for recognizing a video is generally also disposed in the server 105.
It should be pointed out that, in the case where the server 105 can implement the relevant functions of the database server 104, the database server 104 may be omitted from the system architecture 100.
It should be understood that the numbers of terminals, networks, database servers, and servers in Fig. 1 are merely illustrative. There may be any number of terminals, networks, database servers, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for generating a model according to the present application is shown. The method for generating a model includes the following steps:
Step 201: acquire a training sample set, and divide the training sample set into a preset number of training sample groups.
In this embodiment, the execution body of the method for generating a model (for example, the server shown in Fig. 1) may acquire the training sample set from a database server (for example, the database server 104 shown in Fig. 1) or a terminal (for example, the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into a preset number of training sample groups. A training sample includes a sample video and a sample recognition result pre-annotated for the sample video. The sample video may be a video obtained by shooting a sample object. The sample object may be any of various things, such as objects like people or animals, or behaviors like running or swimming.
In this embodiment, the sample recognition result may include, but is not limited to, at least one of the following: words, numbers, or symbols. The sample recognition result may be used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object. For example, the sample recognition result may include the number 1 and the number 0, where the number 1 may indicate that the sample video is a video obtained by shooting a screen displaying the sample object, and the number 0 may indicate that the sample video is not a video obtained by shooting a screen displaying the sample object.
In this embodiment, the execution body may divide the training sample set into the preset number of training sample groups in various ways. For example, the execution body may divide the training sample set into the preset number of training sample groups equally, or may divide the training sample set so that the number of training samples contained in each of the preset number of training sample groups is greater than or equal to a preset threshold. It should be noted that the preset number may be set in advance by a technician.
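The equal division described above can be sketched with a simple round-robin split; the helper name and dealing order are assumptions, one of many ways to realize this step.

```python
def split_into_groups(samples, preset_quantity):
    """Deal the training-sample set round-robin into preset_quantity
    near-equal training sample groups (group sizes differ by at most 1)."""
    groups = [[] for _ in range(preset_quantity)]
    for i, sample in enumerate(samples):
        groups[i % preset_quantity].append(sample)
    return groups
```

A threshold-based variant would instead keep filling a group until it reaches the preset minimum size before starting the next one.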
Step 202: for a training sample group among the preset number of training sample groups, take the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video recognition model corresponding to the group.
In this embodiment, for a training sample group among the preset number of training sample groups obtained in step 201, the execution body may take the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video recognition model corresponding to the group. The initial video recognition model is a model trained using the training samples in a training sample group, and may be used to determine the final video recognition model.
As an example, for each training sample group among the preset number of training sample groups, a preset initial model (for example, a convolutional neural network (CNN) or a residual network (ResNet)) may be trained, finally obtaining the preset number of initial video recognition models, one corresponding to each training sample group. Specifically, for each training sample group among the preset number of training sample groups, the execution body may input the sample videos of the training samples in the group into the initial model to obtain recognition results corresponding to the input sample videos, then take the sample recognition results corresponding to the input sample videos as the desired output of the initial model, train the initial model using the machine learning method, and determine the trained initial model as an initial video recognition model.
In some optional implementations of this embodiment, the execution body may obtain the preset number of initial video recognition models based on the preset number of training sample groups as follows:
Step 2021: select a training sample group from the preset number of training sample groups as a candidate training sample group.
In this embodiment, the execution body may select a training sample group from the preset number of training sample groups obtained in step 201 as a candidate training sample group, and perform the training steps of steps 2022 to 2024. The way in which the training sample group is selected is not limited in the present application. For example, it may be selected randomly, or the training sample group containing more training samples may be selected preferentially.
Step 2022: take the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, and train the initial model using the machine learning method to obtain an initial video recognition model.
Specifically, the execution body may obtain the initial video recognition model corresponding to the candidate training sample group as follows: the execution body may select a training sample from the candidate training sample group and perform the following steps: input the sample video of the selected training sample into the initial model to obtain a recognition result; take the sample recognition result corresponding to the input sample video as the desired output of the initial model, and adjust the parameters of the initial model based on the obtained recognition result and the sample recognition result; determine whether an unselected training sample exists in the candidate training sample group; and, in response to no unselected training sample existing, determine the adjusted initial model as the initial video recognition model corresponding to the candidate training sample group. It should be noted that the way in which the training sample is selected is not limited in the present application. For example, it may be selected randomly, or a training sample whose sample video has better clarity may be selected preferentially.
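The per-sample loop of step 2022 can be sketched with a toy one-parameter model standing in for the initial model: select a sample, run it through the model, compare the output with the desired output, and adjust the parameter, until no unselected sample remains. The update rule shown is illustrative, not the claimed training method, and all names are assumptions.

```python
def train_on_group(group, weight=0.0, learning_rate=0.1):
    """Toy stand-in for step 2022: group is a list of
    (sample_features, desired_output) pairs; the single weight plays the
    role of the initial model's parameters."""
    for features, desired in group:          # until no unselected sample remains
        recognized = weight * features       # model output for this sample
        # adjust the parameter based on recognized vs. desired output
        weight += learning_rate * (desired - recognized) * features
    return weight                            # the group's initial video-recognition model
```

In a real implementation, `weight` would be the parameters of a CNN or ResNet and the update would come from backpropagating a loss, but the control flow is the same.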
Step 2023: determine whether an unselected training sample group exists among the preset number of training sample groups.
Step 2024: in response to determining that no unselected training sample group exists, obtain the preset number of initial video recognition models.
It can be understood that, when no unselected training sample group exists among the preset number of training sample groups, a corresponding initial video recognition model has been trained for each training sample group among the preset number of training sample groups. Therefore, in response to determining that no unselected training sample group exists among the preset number of training sample groups, the execution body may obtain the preset number of initial video recognition models.
Optionally, in response to determining that an unselected training sample group exists, the execution body may select a training sample group from the unselected training sample groups as a new candidate training sample group, take the initial video recognition model obtained last time as a new initial model, and continue to perform the above training steps 2022 to 2024.
In this implementation, the execution body may use the initial video recognition model trained on a preferentially selected training sample group as the initial model corresponding to the training sample group selected next, thereby effectively utilizing the sample data and generating more accurate initial video recognition models.
Step 203: generate a video recognition model based on the obtained initial video recognition models.
In this embodiment, the execution body may generate a video recognition model based on the initial video recognition models obtained in step 202.
Specifically, the execution body may select one initial video recognition model from the obtained initial video recognition models as the video recognition model, or process the obtained initial video recognition models to obtain the video recognition model.
As an example, the execution body may assign the same weight to each initial video recognition model based on the number of initial video recognition models obtained, and then fuse the obtained initial video recognition models based on the assigned weights to obtain the video recognition model.
For example, the obtained initial video recognition models include "y = ax + b" and "y = cx + d", where x is the independent variable and may be used to characterize a model's input; y is the dependent variable and may be used to characterize a model's output; a and b are the coefficients of the first initial video recognition model; and c and d are the coefficients of the second initial video recognition model. Here, since two initial video recognition models have been obtained, the weight assigned to each initial video recognition model may be determined to be 0.5 (0.5 = 1 ÷ 2), and the models "y = ax + b" and "y = cx + d" may then be fused based on the assigned weights to obtain the video recognition model "y = 0.5x(a + c) + 0.5(b + d)" (that is, y = 0.5 × (ax + b) + 0.5 × (cx + d)).
In some optional implementations of this embodiment, based on the initial video recognition models obtained through steps 2021 to 2024 of the optional implementation above, the execution body may directly determine the initial video recognition model obtained last time as the video recognition model.
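The equal-weight fusion worked through in the example above can be checked directly: fusing y = ax + b and y = cx + d with weights 0.5 must give y = 0.5x(a + c) + 0.5(b + d). A small sketch, with coefficient names taken from the example and concrete values chosen for illustration:

```python
def fuse(models, weights):
    """Fuse linear models y = m*x + k by weighted-averaging their
    coefficients, as in the worked example."""
    slope = sum(w * m for w, (m, _) in zip(weights, models))
    intercept = sum(w * k for w, (_, k) in zip(weights, models))
    return slope, intercept

# Illustrative coefficients for the two initial video-recognition models.
a, b, c, d = 2.0, 1.0, 4.0, 3.0
slope, intercept = fuse([(a, b), (c, d)], [0.5, 0.5])
# slope = 0.5 * (a + c) = 3.0; intercept = 0.5 * (b + d) = 2.0
```

Evaluating the fused model at any x gives the same value as averaging the two models' outputs at that x, which is exactly the identity the example states.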
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to this embodiment. In the application scenario of Fig. 3, a model training application may be installed on a terminal 301 used by a user. After the user opens the application and uploads a training sample set, or the storage path of a training sample set, a server 302 providing background support for the application may run the method for generating a model, including:
First, a training sample set 303 may be acquired and divided into two (the preset number of) training sample groups 304 and 305, where a training sample includes a sample video and a sample recognition result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object.
Then, for the training sample group 304, the above execution body may take the sample videos of the training samples in this group as input and the sample identification results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video identification model 306 corresponding to this group; for the training sample group 305, the above execution body may take the sample videos of the training samples in this group as input and the sample identification results corresponding to the input sample videos as desired output, and train, using the machine learning method, an initial video identification model 307 corresponding to this group.
Finally, the above execution body may generate a video identification model 308 based on the obtained initial video identification model 306 and initial video identification model 307.
At this point, the server 302 may further send, to the terminal 301, prompt information used to indicate that model training is completed. The prompt information may be voice and/or text information. In this way, the user can obtain the video identification model from a preset storage location.
The method provided by the above embodiment of the present application obtains a training sample set and divides the training sample set into a preset number of training sample groups; then, for each training sample group among the preset number of training sample groups, takes the sample videos of the training samples in that group as input and the sample identification results corresponding to the input sample videos as desired output, and trains, using a machine learning method, an initial video identification model corresponding to that group; and finally generates a video identification model based on the obtained initial video identification models. A model usable for identifying videos can thereby be obtained, which helps enrich the ways in which models are generated.
With further reference to Fig. 4, it illustrates a flow 400 of another embodiment of the method for generating a model. The flow 400 of the method for generating a model includes the following steps:
Step 401: obtain a training sample set, and divide the training sample set into a preset number of training sample groups.
In this embodiment, the execution body of the method for generating a model (e.g., the server shown in Fig. 1) may obtain the training sample set from a database server (e.g., the database server 104 shown in Fig. 1) or a terminal (e.g., the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into a preset number of training sample groups.
It should be noted that step 401 may be implemented in a manner similar to step 201 in the foregoing embodiment. Accordingly, the description above regarding step 201 is also applicable to step 401 of this embodiment, and details are not repeated here.
Step 402: determine numerical values used to characterize the quality of the preset number of training sample groups.
In this embodiment, for the preset number of training sample groups obtained in step 401, the above execution body may determine numerical values used to characterize their quality. Specifically, the execution body may determine these values in various ways. For example, the execution body may determine the number of training samples included in each training sample group, and take that count as the numerical value characterizing the quality of the group. Here, it can be understood that the more training samples a training sample group includes, the more times the parameters of the initial model may be adjusted, and thus the more accurate the trained initial video identification model may be; therefore, the execution body may determine the numerical value characterizing the quality of a training sample group according to the number of training samples included in it.
It should be noted that the correspondence between the magnitude of the numerical value and the degree of quality may be set in advance by a technician. Specifically, the correspondence may be set such that a larger value indicates a better training sample group, or such that a smaller value indicates a better training sample group.
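The count-based variant described above can be sketched as follows (the function and variable names are illustrative, and the larger-is-better convention is assumed):

```python
def quality_by_size(sample_groups):
    """Characterize each training sample group's quality by how many samples it holds."""
    return [len(group) for group in sample_groups]

groups = [["video_a", "video_b", "video_c"], ["video_d", "video_e"]]
scores = quality_by_size(groups)                              # [3, 2]
best_index = max(range(len(scores)), key=scores.__getitem__)  # group 0 is selected first
```

Any other scoring rule (e.g., a loss-based one) could be substituted without changing how the scores are consumed later.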
In some optional implementations of this embodiment, the above execution body may determine the numerical values used to characterize the quality of the preset number of training sample groups as follows:
First, the execution body may obtain a preset validation sample set, where a validation sample includes a validation video and a validation identification result pre-annotated for the validation video.
Then, for each training sample group among the preset number of training sample groups, the execution body may perform the following steps: take the sample videos of the training samples in this group as input and the sample identification results corresponding to the input sample videos as output, and train, using the machine learning method, a to-be-validated video identification model corresponding to this group; input the validation videos of the validation samples in the validation sample set into the to-be-validated video identification model corresponding to this group to obtain actual identification results; determine loss values of the actual identification results relative to the validation identification results corresponding to the input validation videos; and generate, based on the determined loss values, the numerical value used to characterize the quality of this training sample group.
Here, a loss value may be used to characterize the difference between the actual output and the desired output. It can be understood that the smaller the difference, the more accurate the trained to-be-validated video identification model, and hence the better the training sample group that was used. Therefore, based on this relationship between the loss value and the quality of the training sample group, the execution body may generate, in various ways, the numerical value characterizing the group's quality from the determined loss value. For example, the loss value may be used directly as the numerical value characterizing the quality of the training sample group, in which case a smaller value indicates a better group; alternatively, the reciprocal of the loss value may be used, in which case a larger value indicates a better group.
Here, it should be noted that the execution body may use any of various preset loss functions to calculate the loss value of an actual identification result relative to the validation identification result corresponding to the input validation video; for example, the L2 norm may be used as the loss function to calculate the loss value.
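A minimal sketch of this loss-based scoring, assuming identification results can be compared as numeric vectors (the helper names below are ours, not from the application):

```python
import math

def l2_loss(actual, expected):
    """L2-norm distance between an actual identification result and the
    pre-annotated validation identification result."""
    return math.sqrt(sum((a - e) ** 2 for a, e in zip(actual, expected)))

def quality_from_loss(loss, use_reciprocal=True):
    """Either the loss itself (smaller is better) or its reciprocal (larger is better)."""
    return 1.0 / loss if use_reciprocal else loss

loss = l2_loss([0.9, 0.1], [1.0, 0.0])   # sqrt(0.01 + 0.01)
score = quality_from_loss(loss)
```

Averaging the loss over all validation samples of a group would give that group's score.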
Step 403: based on the determined numerical values, select the optimal training sample group from the preset number of training sample groups as a candidate training sample group.
In this embodiment, based on the numerical values determined in step 402, the above execution body may select the optimal training sample group from the preset number of training sample groups obtained in step 401 as the candidate training sample group, and perform the training steps of steps 404 to 406.
It should be noted that the specific implementation of this embodiment selects the optimal training sample group from the preset number of training sample groups as the candidate training sample group. Therefore, when a larger quality value indicates a better training sample group, the execution body may select, from the preset number of training sample groups, the training sample group corresponding to the largest determined value as the candidate training sample group; when a smaller quality value indicates a better training sample group, the execution body may select the training sample group corresponding to the smallest determined value as the candidate training sample group.
Step 404: take the sample videos of the training samples in the candidate training sample group as input and the sample identification results corresponding to the input sample videos as desired output, and train the initial model using the machine learning method to obtain an initial video identification model.
Specifically, the above execution body may obtain the initial video identification model corresponding to the candidate training sample group as follows:
The execution body may select a training sample from the candidate training sample group and perform the following steps: input the sample video of the selected training sample into the initial model to obtain an identification result; take the sample identification result corresponding to the input sample video as the desired output of the initial model, and adjust the parameters of the initial model based on the obtained identification result and the sample identification result; determine whether there are unselected training samples in the candidate training sample group; and, in response to there being no unselected training sample, determine the adjusted initial model as the initial video identification model corresponding to the candidate training sample group. It should be noted that the manner of selecting training samples is not limited in the present application; for example, samples may be selected randomly, or training samples whose sample videos have better clarity may be selected preferentially.
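The per-sample loop of step 404 can be sketched with a toy one-parameter classifier standing in for the real video model (the feature, learning rate, and all names are illustrative assumptions, not part of the application):

```python
import math

def train_on_group(group, model, lr=0.1):
    """One pass over the candidate group: predict, compare with the annotated
    sample identification result, and adjust the parameters (step 404)."""
    w, b = model
    for feature, label in group:            # label 1 = screen-recorded, 0 = original
        z = w * feature + b
        pred = 1.0 / (1.0 + math.exp(-z))   # identification result in (0, 1)
        grad = pred - label                 # gradient of the log-loss w.r.t. z
        w -= lr * grad * feature            # parameter adjustment
        b -= lr * grad
    return (w, b)                           # the adjusted initial model
```

Once every sample in the group has been visited, the adjusted model is the group's initial video identification model.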
Step 405: determine whether there is an unselected training sample group among the preset number of training sample groups.
Step 406: in response to determining that there is no unselected training sample group, obtain the preset number of initial video identification models.
It can be understood that when there is no unselected training sample group among the preset number of training sample groups, a corresponding initial video identification model has been trained for each training sample group. Therefore, in response to determining that there is no unselected training sample group among the preset number of training sample groups, the execution body may obtain the preset number of initial video identification models.
In some optional implementations of this embodiment, the above execution body may also, in response to determining that there are unselected training sample groups, select, based on the determined numerical values, the optimal training sample group from the unselected training sample groups as a new candidate training sample group, take the most recently obtained initial video identification model as a new initial model, and continue to perform the above training steps 404-406.
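Steps 403-406, including the warm-start variant just described, can be sketched as a single loop (`train_fn` stands in for the training of step 404; all names are illustrative):

```python
def train_all_groups(groups, scores, init_model, train_fn):
    """Visit groups in descending quality order; each round warm-starts from
    the model obtained in the previous round (steps 403-406)."""
    order = sorted(range(len(groups)), key=lambda i: scores[i], reverse=True)
    model = init_model
    initial_models = []
    for i in order:                         # best unselected group first
        model = train_fn(groups[i], model)
        initial_models.append(model)
    return initial_models                   # one initial model per group (step 406)
```

Because later rounds start from an already-trained model, they tend to make only smaller adjustments, which is the efficiency gain the embodiment describes.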
Step 407: generate a video identification model based on the obtained initial video identification models.
In this embodiment, based on the initial video identification models obtained in step 406, the above execution body may generate the video identification model.
Specifically, the execution body may select one initial video identification model from the obtained initial video identification models as the video identification model, or may process the obtained initial video identification models to obtain the video identification model.
In some optional implementations of this embodiment, the execution body may generate the video identification model as follows. First, based on the numerical values determined in step 402, the execution body may assign weights to the obtained initial video identification models. Then, based on the assigned weights, the execution body may fuse the obtained initial video identification models to generate the video identification model. Specifically, the execution body may determine the quality of each training sample group based on the determined numerical values, and then assign weights to the obtained initial video identification models in various ways, such that the weight corresponding to the initial video identification model of a better training sample group is larger, and the weight corresponding to the initial video identification model of a worse training sample group is smaller.
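One simple way to realize "better group, larger weight" is to normalize the quality scores (a sketch under the larger-is-better convention; the function name is ours):

```python
def weights_from_quality(scores):
    """Assign each initial video identification model a weight proportional to
    the quality score of its training sample group."""
    total = sum(scores)
    return [s / total for s in scores]

weights_from_quality([3.0, 1.0])   # [0.75, 0.25]: the better group gets the larger weight
```

The resulting weights always sum to one, so they can be passed directly to a weighted fusion of the models.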
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating a model in this embodiment highlights the steps of determining the numerical values characterizing the quality of the preset number of training sample groups and then, based on the determined values, selecting training sample groups from the preset number of training sample groups for training. As a result, the scheme described in this embodiment can first train with the better training sample groups to obtain a more accurate initial video identification model, so that subsequent training can make smaller adjustments to the initial video identification model on this basis, thereby improving the efficiency of model generation.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a model. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a model of this embodiment includes: a sample acquisition unit 501, a model training unit 502, and a model generation unit 503. The sample acquisition unit 501 is configured to obtain a training sample set and divide the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample identification result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample identification result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object. The model training unit 502 is configured to, for each training sample group among the preset number of training sample groups, take the sample videos of the training samples in that group as input and the sample identification results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video identification model corresponding to that group. The model generation unit 503 is configured to generate a video identification model based on the obtained initial video identification models.
In this embodiment, the sample acquisition unit 501 of the apparatus 500 for generating a model may obtain the training sample set from a database server (e.g., the database server 104 shown in Fig. 1) or a terminal (e.g., the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into a preset number of training sample groups. A training sample includes a sample video and a sample identification result pre-annotated for the sample video. The sample video may be a video obtained by shooting a sample object. The sample object may be any of various things.
In this embodiment, the sample identification result may include, but is not limited to, at least one of the following: words, numbers, symbols. The sample identification result may be used to indicate whether the sample video is a video obtained by shooting a screen displaying the above sample object.
In this embodiment, the sample acquisition unit 501 may divide the training sample set into the preset number of training sample groups in various ways. It should be noted that the preset number may be set in advance by a technician.
In this embodiment, for each training sample group among the preset number of training sample groups obtained by the sample acquisition unit 501, the model training unit 502 may take the sample videos of the training samples in that group as input and the sample identification results corresponding to the input sample videos as desired output, and train, using the machine learning method, the initial video identification model corresponding to that group. An initial video identification model is a model trained with the training samples in a training sample group, and may be used to determine the final video identification model.
In this embodiment, based on the initial video identification models obtained by the model training unit 502, the model generation unit 503 may generate the video identification model.
Specifically, the model generation unit 503 may select one initial video identification model from the obtained initial video identification models as the video identification model, or may process the obtained initial video identification models to obtain the video identification model.
In some optional implementations of this embodiment, the model training unit 502 may include a first execution module (not shown), configured to select a training sample group from the preset number of training sample groups as a candidate training sample group, and, based on the candidate training sample group and an initial model, perform the following training steps: take the sample videos of the training samples in the candidate training sample group as input and the sample identification results corresponding to the input sample videos as desired output, and train the initial model using the machine learning method to obtain an initial video identification model; determine whether there is an unselected training sample group among the preset number of training sample groups; and, in response to determining that there is no unselected training sample group, obtain the preset number of initial video identification models.
In some optional implementations of this embodiment, the model training unit 502 may further include a second execution module (not shown), configured to, in response to determining that there are unselected training sample groups, select a training sample group from the unselected training sample groups as a new candidate training sample group, take the most recently obtained initial video identification model as a new initial model, and continue to perform the training steps.
In some optional implementations of this embodiment, the model training unit 502 may include: a numerical value determination module (not shown), configured to determine numerical values used to characterize the quality of the preset number of training sample groups; and a third execution module (not shown), configured to, based on the determined numerical values, select the optimal training sample group from the preset number of training sample groups as the candidate training sample group, and, based on the candidate training sample group and an initial model, perform the following training steps: take the sample videos of the training samples in the candidate training sample group as input and the sample identification results corresponding to the input sample videos as desired output, and train the initial model using the machine learning method to obtain an initial video identification model; determine whether there is an unselected training sample group among the preset number of training sample groups; and, in response to determining that there is no unselected training sample group, obtain the preset number of initial video identification models.
In some optional implementations of this embodiment, the model training unit 502 may further include a fourth execution module (not shown), configured to, in response to determining that there are unselected training sample groups, select, based on the determined numerical values, the optimal training sample group from the unselected training sample groups as a new candidate training sample group, take the most recently obtained initial video identification model as a new initial model, and continue to perform the training steps.
In some optional implementations of this embodiment, the numerical value determination module (not shown) may include: a sample acquisition module (not shown), configured to obtain a preset validation sample set, where a validation sample includes a validation video and a validation identification result pre-annotated for the validation video; and a numerical value generation module (not shown), configured to, for each training sample group among the preset number of training sample groups, perform the following steps: take the sample videos of the training samples in this group as input and the sample identification results corresponding to the input sample videos as output, and train, using the machine learning method, a to-be-validated video identification model corresponding to this group; input the validation videos of the validation samples in the validation sample set into the to-be-validated video identification model corresponding to this group to obtain actual identification results; determine loss values of the actual identification results relative to the validation identification results corresponding to the input validation videos; and generate, based on the determined loss values, the numerical value used to characterize the quality of this training sample group.
In some optional implementations of this embodiment, the model generation unit 503 may include: a weight assignment module (not shown), configured to assign weights to the obtained initial video identification models based on the determined numerical values; and a model fusion module (not shown), configured to fuse the obtained initial video identification models based on the assigned weights to generate the video identification model.
In some optional implementations of this embodiment, the model generation unit 503 may be further configured to determine the most recently obtained initial video identification model as the video identification model.
In the apparatus 500 provided by the above embodiment of the present application, the sample acquisition unit 501 obtains a training sample set and divides the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample identification result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample identification result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object; then, for each training sample group among the preset number of training sample groups, the model training unit 502 takes the sample videos of the training samples in that group as input and the sample identification results corresponding to the input sample videos as desired output, and trains, using a machine learning method, an initial video identification model corresponding to that group; and finally, the model generation unit 503 generates a video identification model based on the obtained initial video identification models. A model usable for identifying videos can thereby be obtained, which helps enrich the ways in which models are generated.
Referring to Fig. 6, it illustrates a flow 600 of an embodiment of the method for identifying a video provided by the present application. The method for identifying a video may include the following steps:
Step 601: obtain a to-be-identified video.
In this embodiment, the execution body of the method for identifying a video (e.g., the server 105 shown in Fig. 1) may obtain the to-be-identified video through a wired or wireless connection. For example, the execution body may obtain a video stored in a database server (e.g., the database server 104 shown in Fig. 1), or may receive a video captured by a terminal (e.g., the terminals 101 and 102 shown in Fig. 1) or another device.
In this embodiment, the to-be-identified video may be a video obtained by shooting an object. The object may be any of various things, such as a person or an animal, or a behavior such as running or swimming.
Step 602: input the to-be-identified video into the video identification model, and generate an identification result corresponding to the to-be-identified video.
In this embodiment, the execution body may input the to-be-identified video obtained in step 601 into the video identification model to generate the identification result corresponding to the to-be-identified video. The identification result may be used to indicate whether the to-be-identified video is a video obtained by shooting a screen displaying the above object.
In this embodiment, the video identification model may be generated using the method described in the embodiment of Fig. 2 above. For the specific generation process, reference may be made to the related description of the embodiment of Fig. 2, and details are not repeated here.
It should be noted that the method for identifying a video of this embodiment may be used to test the video identification models generated by the above embodiments, and the video identification model may then be continuously optimized according to the test results. The method may also be a practical application of the video identification models generated by the above embodiments. Performing video identification with a video identification model generated by the above embodiments enables detection of videos obtained by recording a screen, and helps improve the accuracy of video identification.
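Applying a generated video identification model at inference time might look like the thresholding sketch below (the lambda is a hypothetical stand-in for a trained model, and the threshold value is an assumption, not something prescribed by the application):

```python
def identify_video(model, video_frames, threshold=0.5):
    """Return True if the model judges the to-be-identified video to be a
    recording of a screen, False otherwise."""
    score = model(video_frames)        # model output assumed to lie in [0, 1]
    return score >= threshold

# hypothetical trained model that always outputs 0.8
result = identify_video(lambda frames: 0.8, ["frame_1", "frame_2"])   # True
```

In a real deployment, `model` would be the fused video identification model produced by the generation method above.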
With continued reference to Fig. 7, as an implementation of the method shown in Fig. 6 above, the present application provides an embodiment of an apparatus for identifying a video. The apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 7, the apparatus 700 for identifying a video of this embodiment may include: a video acquisition unit 701 and a result generation unit 702. The video acquisition unit 701 is configured to obtain a to-be-identified video, where the to-be-identified video is a video obtained by shooting an object. The result generation unit 702 is configured to input the to-be-identified video into a model generated using the method described in the embodiment of Fig. 2 above, and generate an identification result corresponding to the to-be-identified video, where the identification result is used to indicate whether the to-be-identified video is a video obtained by shooting a screen displaying the object.
It can be understood that the units recorded in the apparatus 700 correspond to the steps in the method described with reference to Fig. 6. Therefore, the operations, features, and beneficial effects described above for the method are also applicable to the apparatus 700 and the units included therein, and details are not repeated here.
Referring to Fig. 8, it illustrates a schematic structural diagram of a computer system 800 suitable for implementing the electronic device of the embodiments of the present application. The electronic device shown in Fig. 8 is merely an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage portion 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a touch screen, a keyboard, a mouse, a camera, and the like; an output portion 807 including a cathode-ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN card or a modem. The communication portion 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read therefrom can be installed into the storage portion 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the methods of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by, or in combination with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and such a computer-readable medium can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, electric wire, optical cable, RF, or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or may be implemented by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising a sample acquisition unit, a model training unit, and a model generation unit. As another example, a processor may be described as comprising an acquiring unit, a training unit, and a generating unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the sample acquisition unit may also be described as "a unit for obtaining a training sample set".
As another aspect, the present application further provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs. When the one or more programs are executed by the electronic device, the electronic device is caused to: obtain a training sample set, and divide the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result pre-annotated for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result is used to indicate whether the sample video is a video obtained by shooting a screen displaying the sample object; for a training sample group in the preset number of training sample groups, take the sample videos of the training samples in the training sample group as input and the sample recognition results corresponding to the input sample videos as expected output, and train, using a machine learning method, an initial video identification model corresponding to the training sample group; and generate a video identification model based on the obtained initial video identification models.
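As a minimal sketch only, and not the claimed implementation, the grouped training procedure described above might be expressed as follows. The scalar per-video feature, the trivial threshold "training" step, and the averaging used to generate the final model from the initial models are all assumptions chosen for illustration; a real embodiment would use an actual machine learning method on video data.

```python
def train_initial_model(group):
    """Train an initial video identification model on one training sample group.

    Each training sample is a (feature, label) pair, where label 1 marks a
    sample video shot from a screen displaying the sample object, and 0 marks
    a video shot of the object directly. A midpoint-threshold classifier
    stands in for the real machine learning step.
    """
    pos = [f for f, y in group if y == 1]
    neg = [f for f, y in group if y == 0]
    if not pos or not neg:  # degenerate group with one class: fall back
        return 0.5
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2


def generate_video_identification_model(training_sample_set, preset_quantity):
    """Divide the sample set into a preset number of groups, train one initial
    model per group, then generate the final model from the initial models
    (here: by averaging the per-group thresholds)."""
    groups = [training_sample_set[i::preset_quantity]
              for i in range(preset_quantity)]
    initial_models = [train_initial_model(g) for g in groups if g]
    return sum(initial_models) / len(initial_models)


def identify(model, video_feature):
    """Return 1 if the video appears to be shot from a screen, else 0."""
    return 1 if video_feature >= model else 0
```

Dividing the set into groups and training each group independently means the per-group training steps could also run in parallel, which is one plausible motivation for the preset-quantity split.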
In addition, when the one or more programs are executed by the electronic device, the electronic device may also be caused to: obtain a video to be identified, where the video to be identified is a video obtained by shooting an object; and input the video to be identified into a video identification model to generate a recognition result corresponding to the video to be identified, where the recognition result is used to indicate whether the video to be identified is a video obtained by shooting a screen displaying the object. The video identification model may be generated using the method for generating a model as described in the above embodiments.
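The inference step can likewise be sketched under stated assumptions: here the video to be identified is assumed to have been reduced to a sequence of per-frame brightness values, and frame-to-frame flicker (which can arise when filming a refreshing display) serves as a stand-in feature. Neither the flicker feature nor the threshold comparison is prescribed by the present application; both are illustrative only.

```python
def flicker_score(frame_brightness):
    """Mean absolute brightness change between consecutive frames.

    Videos shot off a screen may show stronger frame-to-frame flicker than
    videos of the object itself; this scalar stands in for whatever features
    a real video identification model would consume.
    """
    diffs = [abs(b - a) for a, b in zip(frame_brightness, frame_brightness[1:])]
    return sum(diffs) / len(diffs)


def recognize(model_threshold, frame_brightness):
    """Generate the recognition result for a video to be identified:
    True indicates "shot from a screen displaying the object"."""
    return flicker_score(frame_brightness) >= model_threshold
```

For example, a steady brightness trace would score 0.0 and be recognized as a direct shot, while an alternating trace would exceed a modest threshold and be flagged as a screen recording.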
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept; for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.