CN108805091A - Method and apparatus for generating model - Google Patents

Method and apparatus for generating model

Info

Publication number
CN108805091A
Authority
CN
China
Prior art keywords
training sample
video
group
sample
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810617804.4A
Other languages
Chinese (zh)
Other versions
CN108805091B (en)
Inventor
李伟健
王长虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Douyin Vision Co Ltd
Douyin Vision Beijing Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201810617804.4A (patent CN108805091B)
Publication of CN108805091A
Priority to PCT/CN2018/116339 (WO2019237657A1)
Application granted
Publication of CN108805091B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present application disclose a method and apparatus for generating a model. One specific implementation of the method includes: acquiring a training sample set and dividing it into a preset number of training sample groups, where each training sample includes a sample video and a sample recognition result annotated in advance for the sample video, the sample recognition result indicating whether the sample video was obtained by shooting a screen on which a sample object is displayed; for each training sample group among the preset number of groups, training, by a machine learning method, an initial video recognition model corresponding to that group, with the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as the desired output; and generating a video recognition model based on the obtained initial video recognition models. This embodiment yields a model that can be used to recognize videos, and enriches the ways in which models can be generated.

Description

Method and apparatus for generating model
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a model.
Background
At present, sharing information by shooting videos has become an important mode of information sharing in people's lives. In practice, many users record videos shot by other users in order to obtain those videos as their own; that is, a user may film another user's video as it plays on a screen.
It will be appreciated that re-recording other users' videos often causes infringement, undermines fairness, and has other adverse effects. Platforms for information sharing therefore need to identify such videos and then intercept them.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating a model, and a method and apparatus for recognizing a video.
In a first aspect, an embodiment of the present application provides a method for generating a model. The method includes: acquiring a training sample set and dividing the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result annotated in advance for the sample video, the sample video being a video obtained by shooting a sample object, and the sample recognition result indicating whether the sample video is a video obtained by shooting a screen on which the sample object is displayed; for each training sample group among the preset number of training sample groups, training, by a machine learning method, an initial video recognition model corresponding to that group, with the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as the desired output; and generating a video recognition model based on the obtained initial video recognition models.
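The first-aspect flow, partitioning the training sample set into groups and training one initial model per group, can be sketched as follows. The round-robin partition, the feature-vector representation of a sample video, and the `train_one_group` callback are all illustrative assumptions; the patent leaves the concrete partitioning scheme and learning algorithm open.

```python
from typing import Callable, List, Sequence, Tuple

# One training sample: (feature vector of the sample video, label).
# Label 1 means the sample video was shot off a screen showing the sample object.
Sample = Tuple[List[float], int]

def partition(samples: Sequence[Sample], n_groups: int) -> List[List[Sample]]:
    """Divide the training sample set into n_groups training sample groups."""
    groups: List[List[Sample]] = [[] for _ in range(n_groups)]
    for i, sample in enumerate(samples):
        groups[i % n_groups].append(sample)
    return groups

def generate_initial_models(samples: Sequence[Sample], n_groups: int,
                            train_one_group: Callable) -> list:
    """Train one initial video recognition model per training sample group."""
    return [train_one_group(group) for group in partition(samples, n_groups)]
```

With a stand-in trainer that just reports its group size, ten samples split into three groups of sizes 4, 3, and 3.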
In some embodiments, training the initial video recognition model corresponding to each training sample group includes: selecting a training sample group from the preset number of training sample groups as a candidate training sample group, and performing the following training step based on the candidate training sample group and an initial model: training the initial model by a machine learning method, with the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as the desired output, to obtain an initial video recognition model; determining whether an unselected training sample group remains among the preset number of training sample groups; and, in response to determining that no unselected training sample group remains, obtaining the preset number of initial video recognition models.
In some embodiments, the training described above further includes: in response to determining that an unselected training sample group remains, selecting a training sample group from the unselected training sample groups as a new candidate training sample group, taking the most recently obtained initial video recognition model as a new initial model, and continuing to perform the training step.
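The two embodiments above describe a chained loop: each round of the training step starts from the model the previous round produced. A minimal sketch, with `train` as a placeholder for the unspecified machine-learning routine and in-order group selection as an assumed selection strategy:

```python
def chained_training(groups, initial_model, train):
    """Sequentially run the training step over the training sample groups,
    feeding the model produced by one round in as the initial model of the
    next. Returns every intermediate initial video recognition model."""
    models, model = [], initial_model
    remaining = list(groups)
    while remaining:                      # unselected training sample groups remain
        candidate = remaining.pop(0)      # choose the next candidate group
        model = train(model, candidate)   # training step on the candidate group
        models.append(model)
    return models
```

A toy `train` that adds up group contents shows the chaining: starting from 0 with groups [1, 2], [3], [4, 5], the intermediate models are 3, 6, 15.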
In some embodiments, the training includes: determining numerical values characterizing the quality of the preset number of training sample groups; based on the determined values, selecting the best training sample group from the preset number of training sample groups as a candidate training sample group, and performing the following training step based on the candidate training sample group and an initial model: training the initial model by a machine learning method, with the sample videos of the training samples in the candidate group as input and the corresponding sample recognition results as the desired output, to obtain an initial video recognition model; determining whether an unselected training sample group remains among the preset number of training sample groups; and, in response to determining that none remains, obtaining the preset number of initial video recognition models.
In some embodiments, the training further includes: in response to determining that an unselected training sample group remains, selecting, based on the determined values, the best group among the unselected training sample groups as a new candidate training sample group, taking the most recently obtained initial video recognition model as a new initial model, and continuing to perform the training step.
In some embodiments, determining the numerical values characterizing the quality of the preset number of training sample groups includes: acquiring a pre-set verification sample set, where a verification sample includes a verification video and a verification recognition result annotated in advance for the verification video; and, for each training sample group, performing the following steps: training, by a machine learning method, a to-be-verified video recognition model corresponding to the group, with the sample videos of the training samples in the group as input and the corresponding sample recognition results as output; inputting the verification videos of the verification samples into the to-be-verified video recognition model corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the numerical value characterizing the quality of the group.
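The per-group scoring step can be sketched as below. The 0-1 loss and the mapping of mean loss to a quality value via 1/(1+loss) are illustrative assumptions; the patent specifies neither the loss function nor the mapping.

```python
def group_quality(train_one_group, group, verification_set):
    """Score one training sample group: train a to-be-verified model on the
    group, run it over the verification samples, and map the mean loss to a
    quality value (lower loss gives higher quality)."""
    model = train_one_group(group)        # to-be-verified video recognition model
    losses = [0.0 if model(video) == expected else 1.0
              for video, expected in verification_set]
    mean_loss = sum(losses) / len(losses)
    return 1.0 / (1.0 + mean_loss)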
In some embodiments, generating the video recognition model based on the obtained initial video recognition models includes: assigning, based on the determined values, a weight to each obtained initial video recognition model; and fusing the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
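One plausible reading of "fusing based on assigned weights" is a weighted average of the member models' outputs, with each weight proportional to the group's quality value. The patent does not fix the fusion rule, so this sketch is an assumption:

```python
def fuse_models(models, qualities):
    """Fuse the initial video recognition models into one video recognition
    model: each member gets a weight proportional to its group's quality
    value, and the fused model outputs the weighted average of the member
    outputs."""
    total = sum(qualities)
    weights = [q / total for q in qualities]
    def fused(video):
        return sum(w * m(video) for w, m in zip(weights, models))
    return fused
```

With two members scoring 1.0 and 0.0 and quality values 3 and 1, the fused output is 0.75 for any input.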
In some embodiments, generating the video recognition model based on the obtained initial video recognition models includes: determining the most recently obtained initial video recognition model as the video recognition model.
In a second aspect, an embodiment of the present application provides an apparatus for generating a model. The apparatus includes: a sample acquisition unit configured to acquire a training sample set and divide it into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result annotated in advance for the sample video, the sample video being a video obtained by shooting a sample object, and the sample recognition result indicating whether the sample video is a video obtained by shooting a screen on which the sample object is displayed; a model training unit configured to train, for each training sample group and by a machine learning method, an initial video recognition model corresponding to that group, with the sample videos of the training samples in the group as input and the corresponding sample recognition results as the desired output; and a model generation unit configured to generate a video recognition model based on the obtained initial video recognition models.
In some embodiments, the model training unit includes a first execution module configured to select a training sample group from the preset number of training sample groups as a candidate training sample group and perform the following training step based on the candidate group and an initial model: training the initial model by a machine learning method, with the sample videos of the training samples in the candidate group as input and the corresponding sample recognition results as the desired output, to obtain an initial video recognition model; determining whether an unselected training sample group remains; and, in response to determining that none remains, obtaining the preset number of initial video recognition models.
In some embodiments, the model training unit further includes a second execution module configured to, in response to determining that an unselected training sample group remains, select a training sample group from the unselected groups as a new candidate training sample group, take the most recently obtained initial video recognition model as a new initial model, and continue to perform the training step.
In some embodiments, the model training unit includes: a numerical value determining module configured to determine numerical values characterizing the quality of the preset number of training sample groups; and a third execution module configured to select, based on the determined values, the best training sample group as a candidate training sample group and perform the following training step based on the candidate group and an initial model: training the initial model by a machine learning method, with the sample videos of the training samples in the candidate group as input and the corresponding sample recognition results as the desired output, to obtain an initial video recognition model; determining whether an unselected training sample group remains; and, in response to determining that none remains, obtaining the preset number of initial video recognition models.
In some embodiments, the model training unit further includes a fourth execution module configured to, in response to determining that an unselected training sample group remains, select, based on the determined values, the best group among the unselected training sample groups as a new candidate training sample group, take the most recently obtained initial video recognition model as a new initial model, and continue to perform the training step.
In some embodiments, the numerical value determining module includes: a sample acquisition module configured to acquire a pre-set verification sample set, where a verification sample includes a verification video and a verification recognition result annotated in advance for the verification video; and a numerical value generation module configured to, for each training sample group, perform the following steps: training, by a machine learning method, a to-be-verified video recognition model corresponding to the group, with the sample videos of the training samples in the group as input and the corresponding sample recognition results as output; inputting the verification videos of the verification samples into the to-be-verified video recognition model corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the numerical value characterizing the quality of the group.
In some embodiments, the model generation unit includes: a weight assignment module configured to assign, based on the determined values, a weight to each obtained initial video recognition model; and a model fusion module configured to fuse the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
In a third aspect, an embodiment of the present application provides a method for recognizing a video. The method includes: acquiring a to-be-recognized video, where the to-be-recognized video is a video obtained by shooting an object; and inputting the to-be-recognized video into a video recognition model generated by the method described in any embodiment of the first aspect, to generate a recognition result corresponding to the to-be-recognized video, where the recognition result indicates whether the to-be-recognized video is a video obtained by shooting a screen on which the object is displayed.
In a fourth aspect, an embodiment of the present application provides an apparatus for recognizing a video. The apparatus includes: a video acquisition unit configured to acquire a to-be-recognized video, where the to-be-recognized video is a video obtained by shooting an object; and a result generation unit configured to input the to-be-recognized video into a video recognition model generated by the method described in any embodiment of the first aspect, to generate a recognition result corresponding to the to-be-recognized video, where the recognition result indicates whether the to-be-recognized video is a video obtained by shooting a screen on which the object is displayed.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage apparatus storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any embodiment of the first or third aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method described in any embodiment of the first or third aspect.
The method and apparatus for generating a model provided by the embodiments of the present application acquire a training sample set and divide it into a preset number of training sample groups, where a training sample includes a sample video (a video obtained by shooting a sample object) and a sample recognition result annotated in advance for the sample video, indicating whether the sample video was obtained by shooting a screen on which the sample object is displayed; then, for each training sample group, train by a machine learning method an initial video recognition model corresponding to that group, with the sample videos as input and the corresponding sample recognition results as the desired output; and finally generate a video recognition model based on the obtained initial video recognition models. A model usable for recognizing videos is thereby obtained, and the ways in which models can be generated are enriched.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read in conjunction with the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating a model according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating a model according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating a model according to the present application;
Fig. 6 is a flowchart of one embodiment of the method for recognizing a video according to the present application;
Fig. 7 is a structural schematic diagram of one embodiment of the apparatus for recognizing a video according to the present application;
Fig. 8 is a structural schematic diagram of a computer system adapted to implement an electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It will be appreciated that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features therein may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating a model, the apparatus for generating a model, the method for recognizing a video, or the apparatus for recognizing a video of the embodiments of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminals 101 and 102, a network 103, a database server 104, and a server 105. The network 103 serves as a medium providing communication links between the terminals 101 and 102, the database server 104, and the server 105. The network 103 may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user 110 may use the terminals 101 and 102 to interact with the server 105 through the network 103 in order to receive or send messages and the like. Various client applications may be installed on the terminals 101 and 102, such as model training applications, video recognition applications, social applications, payment applications, web browsers, and instant messaging tools.
The terminals 101 and 102 here may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, laptop computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
When the terminals 101 and 102 are hardware, a video capture device may also be installed on them. The video capture device may be any device capable of capturing video, such as a camera or a sensor. The user 110 may use the video capture device on the terminals 101 and 102 to capture video.
The database server 104 may be a database server providing various services. For example, a sample set may be stored in the database server. The sample set contains a large number of samples, where a sample may include a sample video and a sample recognition result annotated in advance for the sample video. In this way, the user 110 may also select samples, via the terminals 101 and 102, from the sample set stored in the database server 104.
The server 105 may likewise be a server providing various services, for example a background server providing support for the applications displayed on the terminals 101 and 102. The background server may train an initial model using the samples in the sample set sent by the terminals 101 and 102, and may send the training result (for example, a generated video recognition model) to the terminals 101 and 102. In this way, users can apply the generated video recognition model to recognize videos.
The database server 104 and the server 105 here may likewise be hardware or software. When they are hardware, they may be implemented as a distributed server cluster composed of multiple servers or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (for example, for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for generating a model or the method for recognizing a video provided by the embodiments of the present application is generally executed by the server 105. Correspondingly, the apparatus for generating a model or the apparatus for recognizing a video is generally also provided in the server 105.
It should be pointed out that, where the server 105 can implement the relevant functions of the database server 104, the database server 104 may be omitted from the system architecture 100.
It should be understood that the numbers of terminals, networks, database servers, and servers in Fig. 1 are merely illustrative. There may be any number of terminals, networks, database servers, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a model according to the present application is shown. The method for generating a model includes the following steps:
Step 201: acquire a training sample set, and divide the training sample set into a preset number of training sample groups.
In the present embodiment, the execution body of the method for generating a model (for example, the server shown in Fig. 1) may acquire the training sample set from a database server (for example, the database server 104 shown in Fig. 1) or a terminal (for example, the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into the preset number of training sample groups. A training sample includes a sample video and a sample recognition result annotated in advance for the sample video. The sample video may be a video obtained by shooting a sample object. The sample object may be any of various things, such as objects like people or animals, or behaviors like running or swimming.
In the present embodiment, the sample recognition result may include, but is not limited to, at least one of the following: text, numbers, symbols. The sample recognition result may indicate whether the sample video was obtained by shooting a screen on which the sample object is displayed. For example, the sample recognition result may include the numbers 1 and 0, where 1 indicates that the sample video was obtained by shooting a screen displaying the sample object, and 0 indicates that it was not.
In the present embodiment, the execution body may divide the training sample set into the preset number of training sample groups in various ways. For example, the execution body may divide the training sample set into the preset number of groups of equal size, or may divide it so that the number of training samples in each group is greater than or equal to a preset threshold. It should be noted that the preset number may be set in advance by a technician.
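The threshold-constrained division just described can be sketched as follows. Reducing the group count until every group reaches the minimum size, and splitting contiguously, are assumed choices; the text does not fix the exact scheme.

```python
def split_min_size(samples, preset_number, min_size):
    """Divide the training sample set into at most preset_number groups
    while keeping every group at or above min_size samples."""
    n = min(preset_number, max(1, len(samples) // min_size))
    base, extra = divmod(len(samples), n)   # near-equal group sizes
    groups, start = [], 0
    for i in range(n):
        end = start + base + (1 if i < extra else 0)
        groups.append(list(samples[start:end]))
        start = end
    return groups
```

For example, ten samples with a preset number of 4 and a minimum group size of 3 yield three groups of sizes 4, 3, and 3, each meeting the threshold.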
Step 202: for each training sample group among the preset number of training sample groups, train, by a machine learning method, an initial video recognition model corresponding to that group, with the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as the desired output.
In the present embodiment, for each training sample group among the preset number of training sample groups obtained in step 201, the execution body may train, by a machine learning method, an initial video recognition model corresponding to the group, with the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as the desired output. An initial video recognition model is a model trained with the training samples of one training sample group; it may be used to determine the final video recognition model.
As an example, for each training sample group among the preset number of groups, a pre-set initial model (such as a convolutional neural network (CNN) or a residual network (ResNet)) may be trained, finally yielding the preset number of initial video recognition models, one per training sample group. Specifically, for each group, the execution body may input the sample videos of the training samples in the group into the initial model to obtain recognition results corresponding to the input sample videos, take the sample recognition results corresponding to the input sample videos as the desired output of the initial model, train the initial model by a machine learning method, and determine the trained initial model as an initial video recognition model.
In some optional implementations of the present embodiment, the execution body may obtain the preset number of initial video recognition models based on the preset number of training sample groups as follows:
Step 2021: select a training sample group from the preset number of training sample groups as a candidate training sample group.
In the present embodiment, the execution body may select a training sample group from the preset number of training sample groups obtained in step 201 as a candidate training sample group, and perform the training steps of step 2022 to step 2024. The manner of selecting the training sample group is not limited in the present application; for example, a group may be selected at random, or the group with more training samples may be selected preferentially.
Step 2022: take the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train the initial model with a machine learning method to obtain an initial video recognition model.

Specifically, the execution subject may obtain the initial video recognition model corresponding to the candidate training sample group as follows:

The execution subject may select a training sample from the candidate training sample group and perform the following steps: input the sample video of the selected training sample into the initial model to obtain a recognition result; take the sample recognition result corresponding to the input sample video as the desired output of the initial model, and adjust the parameters of the initial model based on the obtained recognition result and the sample recognition result; determine whether any unselected training sample remains in the candidate training sample group; and, in response to no unselected training sample remaining, determine the adjusted initial model as the initial video recognition model corresponding to the candidate training sample group. It should be noted that the manner of selecting a training sample is not limited in the present application; for example, a sample may be selected at random, or the training sample whose sample video has the best clarity may be selected first.
Step 2023: determine whether any unselected training sample group remains among the preset number of training sample groups.

Step 2024: in response to determining that no unselected training sample group remains, obtain the preset number of initial video recognition models.

It can be understood that when no unselected training sample group remains among the preset number of training sample groups, a corresponding initial video recognition model has been trained for each of the groups. Therefore, in response to determining that no unselected training sample group remains among the preset number of training sample groups, the execution subject obtains the preset number of initial video recognition models.
Optionally, in response to determining that an unselected training sample group remains, the execution subject may select a training sample group from the unselected training sample groups as the new candidate training sample group, take the most recently obtained initial video recognition model as the new initial model, and continue to perform the training steps 2022-2024.

In this implementation, the execution subject can take the initial video recognition model trained on a previously selected training sample group as the initial model for the next selected group. The sample data can thereby be used effectively to generate more accurate initial video recognition models.
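The warm-start loop of steps 2021-2024 can be sketched as follows. This is a hypothetical stand-in, not the patent's implementation: a single scalar parameter `w` updated by gradient descent on squared error plays the role of the initial model, and the model left after one group is reused as the initial model for the next group.

```python
# Hypothetical sketch of steps 2021-2024 with the optional warm start: the
# model trained on the previous group becomes the initial model for the next.

def train_on_group(w, group, lr=0.1, epochs=50):
    """Step 2022: adjust parameter w on each sample of one group."""
    for _ in range(epochs):
        for x, y in group:
            pred = w * x
            w -= lr * (pred - y) * x   # gradient of squared error w.r.t. w
    return w

groups = [[(1.0, 2.0)], [(1.0, 2.0), (2.0, 4.0)]]  # all consistent with y = 2x
w = 0.0                                # initial model before any group
for g in groups:                       # steps 2023/2024: loop until no group remains
    w = train_on_group(w, g)           # warm start: reuse the last model
```

After both groups have been consumed, `w` has converged near 2.0; the last model obtained this way may, per the optional implementation below, be taken directly as the video recognition model.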
Step 203: generate a video recognition model based on the obtained initial video recognition models.

In the present embodiment, the execution subject may generate a video recognition model based on the initial video recognition models obtained in step 202.

Specifically, the execution subject may select one of the obtained initial video recognition models as the video recognition model, or may process the obtained initial video recognition models to obtain the video recognition model.

As an example, the execution subject may assign each initial video recognition model the same weight based on the number of initial video recognition models obtained, and then fuse the obtained initial video recognition models based on the assigned weights to obtain the video recognition model.
For example, suppose the obtained initial video recognition models are "y = ax + b" and "y = cx + d", where x is the independent variable, which may characterize the input of a model; y is the dependent variable, which may characterize the output of a model; a and b are the coefficients of the first initial video recognition model; and c and d are the coefficients of the second. Since two initial video recognition models have been obtained, each may be assigned a weight of 0.5 (0.5 = 1 ÷ 2). Based on the assigned weights, the models "y = ax + b" and "y = cx + d" may then be fused to obtain the video recognition model "y = 0.5(a + c)x + 0.5(b + d)" (that is, y = 0.5(ax + b) + 0.5(cx + d)).
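The equal-weight fusion of two linear models can be written as a short runnable sketch; the concrete coefficient values are illustrative only.

```python
# Equal-weight fusion of linear initial models: with y = a*x + b and
# y = c*x + d each weighted 0.5, the fused model is y = 0.5*(a+c)*x + 0.5*(b+d).

def fuse(models):
    """Average the coefficients of linear models with equal weights."""
    w = 1.0 / len(models)
    slope = sum(a for a, _ in models) * w
    intercept = sum(b for _, b in models) * w
    return slope, intercept

m1, m2 = (2.0, 3.0), (4.0, 5.0)        # (a, b) and (c, d), illustrative values
slope, intercept = fuse([m1, m2])      # → (3.0, 4.0)
```

Because the fused model is linear in the coefficients, averaging coefficients and averaging the two models' outputs give identical predictions for every input.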
In some optional implementations of the present embodiment, based on the initial video recognition models obtained through steps 2021-2024 of the above optional implementation, the execution subject may directly determine the most recently obtained initial video recognition model as the video recognition model.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to the present embodiment. In the application scenario of Fig. 3, a model-training application may be installed on the terminal 301 used by the user. After the user opens the application and uploads a training sample set, or the storage path of a training sample set, the server 302 providing back-end support for the application may run the method for generating a model, including:

First, a training sample set 303 may be obtained and divided into two (a preset number of) training sample groups 304 and 305, where a training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result indicates whether the sample video is a video obtained by shooting a screen displaying the sample object.

Then, for training sample group 304, the execution subject may take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train with a machine learning method to obtain the initial video recognition model 306 corresponding to that group; for training sample group 305, the execution subject may likewise take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train with a machine learning method to obtain the initial video recognition model 307 corresponding to that group.

Finally, the execution subject may generate the video recognition model 308 based on the obtained initial video recognition models 306 and 307.

At this point, the server 302 may also send prompt information to the terminal 301 indicating that model training is complete. The prompt information may be voice and/or text. In this way, the user can obtain the video recognition model from a preset storage location.
The method provided by the above embodiment of the present application obtains a training sample set and divides it into a preset number of training sample groups; then, for each training sample group among the preset number of training sample groups, takes the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and trains with a machine learning method to obtain the initial video recognition model corresponding to that group; and finally generates a video recognition model based on the obtained initial video recognition models. A model usable for recognizing videos is thereby obtained, which helps to enrich the ways in which models can be generated.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a model is illustrated. The flow 400 of the method for generating a model includes the following steps:

Step 401: obtain a training sample set, and divide the training sample set into a preset number of training sample groups.

In the present embodiment, the execution subject of the method for generating a model (for example, the server shown in Fig. 1) may obtain the training sample set from a database server (for example, the database server 104 shown in Fig. 1) or a terminal (for example, the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into a preset number of training sample groups.

It should be noted that step 401 may be implemented in a manner similar to step 201 in the foregoing embodiment. Accordingly, the description above regarding step 201 also applies to step 401 of the present embodiment, and is not repeated here.
Step 402: determine numeric values characterizing the quality of the preset number of training sample groups.

In the present embodiment, for the preset number of training sample groups obtained in step 401, the execution subject may determine numeric values characterizing their quality in various ways. For example, the execution subject may determine the number of training samples contained in each training sample group and take that count as the numeric value characterizing the group's quality. Here, it can be understood that the more training samples a group contains, the more times the parameters of the initial model may be adjusted, and thus the more accurate the trained initial video recognition model may be; therefore, the execution subject may determine the numeric value characterizing a group's quality according to the number of training samples the group contains.

It should be noted that the correspondence between the magnitude of the numeric value and the degree of quality may be set in advance by a technician. Specifically, the correspondence may be set so that a larger value means a better training sample group, or so that a smaller value means a better training sample group.
In some optional implementations of the present embodiment, the execution subject may determine the numeric values characterizing the quality of the preset number of training sample groups as follows:

First, the execution subject may obtain a preset verification sample set, where a verification sample includes a verification video and a verification recognition result labeled in advance for the verification video.

Then, for each training sample group among the preset number of training sample groups, the execution subject may perform the following steps: take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as output, and train with a machine learning method to obtain the to-be-verified video recognition model corresponding to that group; input the verification videos of the verification samples in the verification sample set into the to-be-verified video recognition model corresponding to that group to obtain actual recognition results; determine the loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generate, based on the determined loss values, the numeric value characterizing the quality of that group.

Here, a loss value may characterize the difference between the actual output and the desired output. It can be understood that the smaller this difference, the more accurate the trained to-be-verified video recognition model, and in turn the better the training sample group used. Therefore, based on this relationship between the loss value and the quality of a training sample group, the execution subject may generate the numeric value characterizing the group's quality from the determined loss values in various ways. For example, the loss value may be taken directly as the numeric value characterizing the group's quality, in which case a smaller value means a better training sample group; or the reciprocal of the loss value may be taken as the numeric value, in which case a larger value means a better training sample group.

It should also be noted that the execution subject may use various preset loss functions to compute the loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; for example, the L2 norm may be used as the loss function.
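The verification-based quality score can be sketched as follows. This is a hypothetical stand-in: linear `(slope, intercept)` models play the role of the to-be-verified video recognition models already trained on two groups, `(x, y)` pairs play the role of labeled verification videos, and a sum of squared errors plays the role of the L2-norm loss.

```python
# Hypothetical sketch: score each training sample group by the loss of its
# to-be-verified model on a held-out verification set; smaller loss = better group.

def l2_loss(model, verification_set):
    """Sum of squared errors of a linear model over the verification set."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in verification_set)

verification = [(0.0, 1.0), (1.0, 3.0)]      # labeled verification samples
model_from_good_group = (2.0, 1.0)           # fits y = 2x + 1 exactly
model_from_poor_group = (0.0, 0.0)           # fits nothing
scores = [l2_loss(model_from_good_group, verification),
          l2_loss(model_from_poor_group, verification)]
best_group = scores.index(min(scores))       # → 0 (the good group)
```

Taking the loss directly as the quality value corresponds to the "smaller is better" convention described above; taking its reciprocal gives the "larger is better" convention.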
Step 403: based on the determined numeric values, select the best training sample group from the preset number of training sample groups as the candidate training sample group.

In the present embodiment, based on the numeric values determined in step 402, the execution subject may select the best training sample group from the preset number of training sample groups obtained in step 401 as the candidate training sample group, and perform the training steps of step 404 to step 406.

It should be noted that in this embodiment the best training sample group is selected from the preset number of training sample groups as the candidate training sample group. Therefore, when a larger numeric value characterizing a group's quality means a better group, the execution subject may select the training sample group corresponding to the largest determined value as the candidate training sample group; when a smaller numeric value means a better group, the execution subject may select the training sample group corresponding to the smallest determined value as the candidate training sample group.
Step 404: take the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train the initial model with a machine learning method to obtain an initial video recognition model.

Specifically, the execution subject may obtain the initial video recognition model corresponding to the candidate training sample group as follows:

The execution subject may select a training sample from the candidate training sample group and perform the following steps: input the sample video of the selected training sample into the initial model to obtain a recognition result; take the sample recognition result corresponding to the input sample video as the desired output of the initial model, and adjust the parameters of the initial model based on the obtained recognition result and the sample recognition result; determine whether any unselected training sample remains in the candidate training sample group; and, in response to no unselected training sample remaining, determine the adjusted initial model as the initial video recognition model corresponding to the candidate training sample group. It should be noted that the manner of selecting a training sample is not limited in the present application; for example, a sample may be selected at random, or the training sample whose sample video has the best clarity may be selected first.
Step 405: determine whether any unselected training sample group remains among the preset number of training sample groups.

Step 406: in response to determining that no unselected training sample group remains, obtain the preset number of initial video recognition models.

It can be understood that when no unselected training sample group remains among the preset number of training sample groups, a corresponding initial video recognition model has been trained for each of the groups. Therefore, in response to determining that no unselected training sample group remains among the preset number of training sample groups, the execution subject obtains the preset number of initial video recognition models.

In some optional implementations of the present embodiment, in response to determining that an unselected training sample group remains, the execution subject may select, based on the determined numeric values, the best group among the unselected training sample groups as the new candidate training sample group, take the most recently obtained initial video recognition model as the new initial model, and continue to perform the training steps 404-406.
Step 407: generate a video recognition model based on the obtained initial video recognition models.

In the present embodiment, the execution subject may generate a video recognition model based on the initial video recognition models obtained in step 406.

Specifically, the execution subject may select one of the obtained initial video recognition models as the video recognition model, or may process the obtained initial video recognition models to obtain the video recognition model.

In some optional implementations of the present embodiment, the execution subject may generate the video recognition model as follows. First, based on the numeric values determined in step 402, the execution subject may assign weights to the obtained initial video recognition models. Then, based on the assigned weights, the execution subject may fuse the obtained initial video recognition models to generate the video recognition model. Specifically, the execution subject may determine the quality of each training sample group based on the determined numeric values, and then assign weights to the obtained initial video recognition models in various ways, such that the initial video recognition model corresponding to a better training sample group receives a larger weight and the initial video recognition model corresponding to a worse training sample group receives a smaller weight.
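One way to realize the quality-weighted fusion of step 407 can be sketched as follows. This is a hypothetical illustration: the weighting scheme (inverse validation loss, normalized to sum to 1) is only one of the "various ways" mentioned above, and the linear `(slope, intercept)` models and loss values are made-up stand-ins.

```python
# Hypothetical sketch of step 407: weight each initial model by the inverse of
# its group's validation loss, so models from better groups dominate the fusion.

def quality_weights(losses):
    """Normalized inverse-loss weights: smaller loss -> larger weight."""
    inv = [1.0 / l for l in losses]
    total = sum(inv)
    return [v / total for v in inv]

models = [(2.0, 1.0), (4.0, 3.0)]            # (slope, intercept) per group
losses = [1.0, 3.0]                          # validation loss per group
weights = quality_weights(losses)            # → [0.75, 0.25]
slope = sum(w * a for w, (a, _) in zip(weights, models))
intercept = sum(w * b for w, (_, b) in zip(weights, models))
```

With these illustrative numbers the fused model is y = 2.5x + 1.5, sitting three times closer to the low-loss model than to the high-loss one, which is exactly the "better group, larger weight" behavior the text describes.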
Figure 4, it is seen that compared with the corresponding embodiments of Fig. 2, the method for generating model in the present embodiment Flow 400 highlight the numerical value of the good and bad degree for characterizing preset quantity training sample group of determination, and then really based on institute Fixed numerical value chooses the step of training sample group is trained from preset quantity training sample group.The present embodiment is retouched as a result, The scheme stated can be trained first with preferably training sample group, obtain accurate initial video identification model, from And subsequent training can on this basis carry out initial video identification model smaller adjustment, improve the effect of model generation Rate.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.

As shown in Fig. 5, the apparatus 500 for generating a model of the present embodiment includes a sample acquisition unit 501, a model training unit 502, and a model generation unit 503. The sample acquisition unit 501 is configured to obtain a training sample set and divide the training sample set into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result indicates whether the sample video is a video obtained by shooting a screen displaying the sample object. The model training unit 502 is configured to, for each training sample group among the preset number of training sample groups, take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train with a machine learning method to obtain the initial video recognition model corresponding to that group. The model generation unit 503 is configured to generate a video recognition model based on the obtained initial video recognition models.

In the present embodiment, the sample acquisition unit 501 of the apparatus 500 for generating a model may obtain the training sample set from a database server (for example, the database server 104 shown in Fig. 1) or a terminal (for example, the terminals 101 and 102 shown in Fig. 1) through a wired or wireless connection, and divide the training sample set into a preset number of training sample groups. A training sample includes a sample video and a sample recognition result labeled in advance for the sample video. The sample video may be a video obtained by shooting a sample object, and the sample object may be any of various things.

In the present embodiment, a sample recognition result may include, but is not limited to, at least one of the following: text, a number, a symbol. The sample recognition result may indicate whether the sample video is a video obtained by shooting a screen displaying the sample object.

In the present embodiment, the sample acquisition unit 501 may divide the training sample set into the preset number of training sample groups in various ways. It should be noted that the preset number may be set in advance by a technician.

In the present embodiment, for each training sample group among the preset number of training sample groups obtained by the sample acquisition unit 501, the model training unit 502 may take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train with a machine learning method to obtain the initial video recognition model corresponding to that group. An initial video recognition model is a model trained on the training samples of one training sample group, and may be used to determine the final video recognition model.

In the present embodiment, the model generation unit 503 may generate the video recognition model based on the initial video recognition models obtained by the model training unit 502.

Specifically, the model generation unit 503 may select one of the obtained initial video recognition models as the video recognition model, or may process the obtained initial video recognition models to obtain the video recognition model.
In some optional implementations of the present embodiment, the model training unit 502 may include a first execution module (not shown), configured to select a training sample group from the preset number of training sample groups as the candidate training sample group, and, based on the candidate training sample group and the initial model, perform the following training steps: take the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train the initial model with a machine learning method to obtain an initial video recognition model; determine whether any unselected training sample group remains among the preset number of training sample groups; and, in response to determining that no unselected training sample group remains, obtain the preset number of initial video recognition models.

In some optional implementations of the present embodiment, the model training unit 502 may further include a second execution module (not shown), configured to, in response to determining that an unselected training sample group remains, select a training sample group from the unselected training sample groups as the new candidate training sample group, take the most recently obtained initial video recognition model as the new initial model, and continue to perform the training steps.

In some optional implementations of the present embodiment, the model training unit 502 may include: a numeric-value determination module (not shown), configured to determine the numeric values characterizing the quality of the preset number of training sample groups; and a third execution module (not shown), configured to select, based on the determined numeric values, the best training sample group from the preset number of training sample groups as the candidate training sample group, and, based on the candidate training sample group and the initial model, perform the following training steps: take the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as the desired output, and train the initial model with a machine learning method to obtain an initial video recognition model; determine whether any unselected training sample group remains among the preset number of training sample groups; and, in response to determining that no unselected training sample group remains, obtain the preset number of initial video recognition models.

In some optional implementations of the present embodiment, the model training unit 502 may further include a fourth execution module (not shown), configured to, in response to determining that an unselected training sample group remains, select, based on the determined numeric values, the best group among the unselected training sample groups as the new candidate training sample group, take the most recently obtained initial video recognition model as the new initial model, and continue to perform the training steps.

In some optional implementations of the present embodiment, the numeric-value determination module (not shown) may include: a sample obtaining module (not shown), configured to obtain a preset verification sample set, where a verification sample includes a verification video and a verification recognition result labeled in advance for the verification video; and a numeric-value generation module (not shown), configured to, for each training sample group among the preset number of training sample groups, perform the following steps: take the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as output, and train with a machine learning method to obtain the to-be-verified video recognition model corresponding to that group; input the verification videos of the verification samples in the verification sample set into the to-be-verified video recognition model corresponding to that group to obtain actual recognition results; determine the loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generate, based on the determined loss values, the numeric value characterizing the quality of that group.

In some optional implementations of the present embodiment, the model generation unit 503 may include: a weight assignment module (not shown), configured to assign weights to the obtained initial video recognition models based on the determined numeric values; and a model fusion module (not shown), configured to fuse the obtained initial video recognition models based on the assigned weights to generate the video recognition model.

In some optional implementations of the present embodiment, the model generation unit 503 may be further configured to determine the most recently obtained initial video recognition model as the video recognition model.
In the apparatus 500 provided by the above embodiment of the present application, the sample acquisition unit 501 obtains a training sample set and divides it into a preset number of training sample groups, where a training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video is a video obtained by shooting a sample object, and the sample recognition result indicates whether the sample video is a video obtained by shooting a screen displaying the sample object; then, for each training sample group among the preset number of training sample groups, the model training unit 502 takes the sample videos of the training samples in that group as input and the sample recognition results corresponding to the input sample videos as the desired output, and trains with a machine learning method to obtain the initial video recognition model corresponding to that group; finally, the model generation unit 503 generates a video recognition model based on the obtained initial video recognition models. A model usable for recognizing videos is thereby obtained, which helps to enrich the ways in which models can be generated.
Referring to Fig. 6, a flow 600 of an embodiment of the method for recognizing a video provided by the present application is illustrated. The method for recognizing a video may include the following steps:

Step 601: obtain a to-be-identified video.

In the present embodiment, the execution subject of the method for recognizing a video (for example, the server 105 shown in Fig. 1) may obtain the to-be-identified video through a wired or wireless connection. For example, the execution subject may obtain a video stored in a database server (for example, the database server 104 shown in Fig. 1), or may receive a video captured by a terminal (for example, the terminals 101 and 102 shown in Fig. 1) or another device.

In the present embodiment, the to-be-identified video may be a video obtained by shooting an object. The object may be any of various things, such as a person or an animal, or a behavior such as running or swimming.
Step 602: input the video to be identified into a video recognition model, and generate a recognition result corresponding to the video to be identified.
In this embodiment, the executing body may input the video obtained in step 601 into the video recognition model to generate the corresponding recognition result. The recognition result indicates whether the video to be identified was obtained by shooting a screen displaying the object.
In this embodiment, the video recognition model may be generated using the method described in the embodiment of Fig. 2 above. For the specific generation process, refer to the related description of that embodiment; the details are not repeated here.
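Steps 601 and 602 amount to a short inference routine. The sketch below is purely illustrative: the single-feature representation of the video and the stand-in model are assumptions, not part of the embodiment.

```python
def identify_video(video_feature, recognition_model):
    """Step 601 has already obtained the video (here reduced to one
    feature); step 602 feeds it to the model and reports whether it
    looks like a recording of a screen showing the object."""
    if recognition_model(video_feature):
        return "screen recording"
    return "direct shot"

# Stand-in for a trained video recognition model.
model = lambda feature: feature > 0.5

print(identify_video(0.8, model))  # prints: screen recording
print(identify_video(0.2, model))  # prints: direct shot
```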
It should be noted that the method of this embodiment may be used to test the video recognition models generated by the above embodiments, so that the models can be continuously optimized according to the test results. The method may also be a practical application of those models: performing video recognition with a video recognition model generated as described above enables the detection of videos obtained by recording a screen, and helps improve the accuracy of video recognition.
Continuing to refer to Fig. 7, as an implementation of the method shown in Fig. 6, the present application provides an embodiment of a device for identifying a video. This device embodiment corresponds to the method embodiment shown in Fig. 6, and the device may be applied in various electronic devices.
As shown in Fig. 7, the device 700 for identifying a video of this embodiment may include a video acquisition unit 701 and a result generation unit 702. The video acquisition unit 701 is configured to obtain a video to be identified, where the video to be identified is obtained by shooting an object. The result generation unit 702 is configured to input the video to be identified into a model generated using the method described in the embodiment of Fig. 2 above, and to generate a recognition result corresponding to the video, where the recognition result indicates whether the video was obtained by shooting a screen displaying the object.
It can be understood that the units recorded in the device 700 correspond to the steps in the method described with reference to Fig. 6. Therefore, the operations, features, and beneficial effects described above for the method are equally applicable to the device 700 and the units contained therein, and are not repeated here.
Referring to Fig. 8, a structural diagram of a computer system 800 suitable for implementing an electronic device of an embodiment of the present application is shown. The electronic device shown in Fig. 8 is merely an example and should not impose any limitation on the function or scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded into a random access memory (RAM) 803 from a storage portion 808. The RAM 803 also stores various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a touch screen, a keyboard, a mouse, a camera, and the like; an output portion 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 808 including a hard disk and the like; and a communication portion 809 including a network interface card such as a LAN card or a modem. The communication portion 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom can be installed into the storage portion 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the method of the present application are performed. It should be noted that the computer-readable medium of the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in combination with an instruction execution system, apparatus, or device. Also in the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a sample acquisition unit, a model training unit, and a model generation unit, or, alternatively, as including an acquisition unit, a training unit, and a generation unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the sample acquisition unit may also be described as "a unit for obtaining a training sample set".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a training sample set and divide the training sample set into a preset number of training sample groups, where each training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video being obtained by shooting a sample object, and the sample recognition result indicating whether the sample video was obtained by shooting a screen displaying the sample object; for each training sample group in the preset number of training sample groups, take the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video recognition model corresponding to the group; and generate a video recognition model based on the obtained initial video recognition models.
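Claims 2 and 3 below refine the per-group training into a sequential loop: each group is picked in turn, and the initial video recognition model obtained from one group becomes the initial model for the next. A minimal sketch of that loop, where the model is a single threshold and `train` is an assumed one-step averaging update rather than a real machine learning method:

```python
def train(initial_model, group):
    """Illustrative stand-in for one round of training: nudge a 1-D
    threshold model toward the mean feature of the positively labeled
    samples in the group."""
    positives = [f for f, label in group if label]
    if not positives:
        return initial_model
    target = sum(positives) / len(positives)
    return 0.5 * initial_model + 0.5 * target  # simple averaging update

def sequential_training(groups, initial_model=0.0):
    """Select the training sample groups one after another; the model
    obtained from each group serves as the new initial model for the
    next unselected group (claims 2 and 3)."""
    model, snapshots = initial_model, []
    for group in groups:                 # iterate over unselected groups
        model = train(model, group)      # continue from the last model
        snapshots.append(model)          # one initial model per group
    return snapshots

groups = [
    [(0.9, True), (0.1, False)],
    [(0.7, True), (0.2, False)],
]
print(sequential_training(groups))
```

After the last group is consumed, the preset number of initial video recognition models has been obtained; claim 8 then keeps only the last one as the video recognition model.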
In addition, when the one or more programs are executed by the electronic device, the electronic device may also be caused to: obtain a video to be identified, where the video to be identified is obtained by shooting an object; and input the video to be identified into a video recognition model to generate a recognition result corresponding to the video, where the recognition result indicates whether the video was obtained by shooting a screen displaying the object. The video recognition model may be generated using the method for generating a model described in the above embodiments.
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (20)

1. A method for generating a model, comprising:
obtaining a training sample set, and dividing the training sample set into a preset number of training sample groups, wherein each training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video is obtained by shooting a sample object, and the sample recognition result indicates whether the sample video was obtained by shooting a screen displaying the sample object;
for each training sample group in the preset number of training sample groups, taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and training, using a machine learning method, an initial video recognition model corresponding to the group; and
generating a video recognition model based on the obtained initial video recognition models.
2. The method according to claim 1, wherein the training, for each training sample group in the preset number of training sample groups, of the initial video recognition model corresponding to the group comprises:
selecting a training sample group from the preset number of training sample groups as a candidate training sample group, and executing the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, and training the initial model using a machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
3. The method according to claim 2, wherein the training, for each training sample group in the preset number of training sample groups, of the initial video recognition model corresponding to the group further comprises:
in response to determining that an unselected training sample group exists, selecting a training sample group from the unselected training sample groups as a new candidate training sample group, taking the initial video recognition model obtained last time as a new initial model, and continuing to execute the training step.
4. The method according to claim 1, wherein the training, for each training sample group in the preset number of training sample groups, of the initial video recognition model corresponding to the group comprises:
determining numerical values characterizing the quality of the preset number of training sample groups; and
selecting, based on the determined numerical values, the best training sample group from the preset number of training sample groups as a candidate training sample group, and executing the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, and training the initial model using a machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
5. The method according to claim 4, wherein the training, for each training sample group in the preset number of training sample groups, of the initial video recognition model corresponding to the group further comprises:
in response to determining that an unselected training sample group exists, selecting, based on the determined numerical values, the best training sample group from the unselected training sample groups as a new candidate training sample group, taking the initial video recognition model obtained last time as a new initial model, and continuing to execute the training step.
6. The method according to claim 4, wherein the determining numerical values characterizing the quality of the preset number of training sample groups comprises:
obtaining a preset verification sample set, wherein each verification sample includes a verification video and a verification recognition result labeled in advance for the verification video; and
for each training sample group in the preset number of training sample groups, executing the following steps: taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as output, and training, using a machine learning method, a video recognition model to be verified corresponding to the group; inputting the verification videos of the verification samples in the verification sample set into the video recognition model to be verified corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the numerical value characterizing the quality of the group.
7. The method according to one of claims 4-6, wherein the generating a video recognition model based on the obtained initial video recognition models comprises:
assigning weights to the obtained initial video recognition models based on the determined numerical values; and
fusing the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
8. The method according to one of claims 2-6, wherein the generating a video recognition model based on the obtained initial video recognition models comprises:
determining the initial video recognition model obtained last as the video recognition model.
9. A device for generating a model, comprising:
a sample acquisition unit configured to obtain a training sample set and divide the training sample set into a preset number of training sample groups, wherein each training sample includes a sample video and a sample recognition result labeled in advance for the sample video, the sample video is obtained by shooting a sample object, and the sample recognition result indicates whether the sample video was obtained by shooting a screen displaying the sample object;
a model training unit configured to, for each training sample group in the preset number of training sample groups, take the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as desired output, and train, using a machine learning method, an initial video recognition model corresponding to the group; and
a model generation unit configured to generate a video recognition model based on the obtained initial video recognition models.
10. The device according to claim 9, wherein the model training unit comprises:
a first execution module configured to select a training sample group from the preset number of training sample groups as a candidate training sample group, and to execute the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, and training the initial model using a machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
11. The device according to claim 10, wherein the model training unit further comprises:
a second execution module configured to, in response to determining that an unselected training sample group exists, select a training sample group from the unselected training sample groups as a new candidate training sample group, take the initial video recognition model obtained last time as a new initial model, and continue to execute the training step.
12. The device according to claim 9, wherein the model training unit comprises:
a numerical value determining module configured to determine numerical values characterizing the quality of the preset number of training sample groups; and
a third execution module configured to select, based on the determined numerical values, the best training sample group from the preset number of training sample groups as a candidate training sample group, and to execute the following training step based on the candidate training sample group and an initial model: taking the sample videos of the training samples in the candidate training sample group as input and the sample recognition results corresponding to the input sample videos as desired output, and training the initial model using a machine learning method to obtain an initial video recognition model; determining whether an unselected training sample group exists among the preset number of training sample groups; and in response to determining that no unselected training sample group exists, obtaining the preset number of initial video recognition models.
13. The device according to claim 12, wherein the model training unit further comprises:
a fourth execution module configured to, in response to determining that an unselected training sample group exists, select, based on the determined numerical values, the best training sample group from the unselected training sample groups as a new candidate training sample group, take the initial video recognition model obtained last time as a new initial model, and continue to execute the training step.
14. The device according to claim 12, wherein the numerical value determining module comprises:
a sample acquisition module configured to obtain a preset verification sample set, wherein each verification sample includes a verification video and a verification recognition result labeled in advance for the verification video; and
a numerical value generation module configured to, for each training sample group in the preset number of training sample groups, execute the following steps: taking the sample videos of the training samples in the group as input and the sample recognition results corresponding to the input sample videos as output, and training, using a machine learning method, a video recognition model to be verified corresponding to the group; inputting the verification videos of the verification samples in the verification sample set into the video recognition model to be verified corresponding to the group to obtain actual recognition results; determining loss values of the actual recognition results relative to the verification recognition results corresponding to the input verification videos; and generating, based on the determined loss values, the numerical value characterizing the quality of the group.
15. The device according to one of claims 12-14, wherein the model generation unit comprises:
a weight assignment module configured to assign weights to the obtained initial video recognition models based on the determined numerical values; and
a model fusion module configured to fuse the obtained initial video recognition models based on the assigned weights to generate the video recognition model.
16. The device according to one of claims 10-14, wherein the model generation unit is further configured to:
determine the initial video recognition model obtained last as the video recognition model.
17. A method for identifying a video, comprising:
obtaining a video to be identified, wherein the video to be identified is obtained by shooting an object; and
inputting the video to be identified into a video recognition model generated using the method according to one of claims 1-8, and generating a recognition result corresponding to the video to be identified, wherein the recognition result indicates whether the video to be identified was obtained by shooting a screen displaying the object.
18. A device for identifying a video, comprising:
a video acquisition unit configured to obtain a video to be identified, wherein the video to be identified is obtained by shooting an object; and
a result generation unit configured to input the video to be identified into a video recognition model generated using the method according to one of claims 1-8, and to generate a recognition result corresponding to the video to be identified, wherein the recognition result indicates whether the video to be identified was obtained by shooting a screen displaying the object.
19. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-8 and 17.
20. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-8 and 17.
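The loss-based group scoring of claim 6 and the weighted fusion of claim 7 can be sketched numerically as follows. The squared-error loss, the inverse-loss weighting, and the 0.5 decision threshold are all illustrative assumptions; the claims fix none of them.

```python
def group_quality(model, verification_set):
    """Score a per-group model by its loss on the verification sample set
    (claim 6): lower total squared error means a better group."""
    return sum((model(video) - label) ** 2 for video, label in verification_set)

def fuse_models(models, verification_set):
    """Assign each initial video recognition model a weight derived from
    its verification loss, then fuse them into one model (claim 7)."""
    losses = [group_quality(m, verification_set) for m in models]
    weights = [1.0 / (1e-6 + l) for l in losses]   # inverse-loss weighting
    total = sum(weights)
    weights = [w / total for w in weights]         # normalize to sum to 1
    def fused(video):
        score = sum(w * m(video) for w, m in zip(weights, models))
        return score >= 0.5                        # final recognition result
    return fused

# Two illustrative per-group models scoring a 1-D "video feature";
# label 1 means the verification video is a recording of a screen.
models = [lambda f: 1.0 if f > 0.5 else 0.0,   # accurate model
          lambda f: 1.0]                        # always answers "screen shot"
verification = [(0.9, 1), (0.2, 0), (0.1, 0)]
video_recognition_model = fuse_models(models, verification)
print(video_recognition_model(0.9), video_recognition_model(0.2))  # prints True False
```

The accurate model incurs zero verification loss and therefore dominates the fused decision, while the always-positive model is almost entirely discounted.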
CN201810617804.4A 2018-06-15 2018-06-15 Method and apparatus for generating a model Active CN108805091B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810617804.4A CN108805091B (en) 2018-06-15 2018-06-15 Method and apparatus for generating a model
PCT/CN2018/116339 WO2019237657A1 (en) 2018-06-15 2018-11-20 Method and device for generating model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810617804.4A CN108805091B (en) 2018-06-15 2018-06-15 Method and apparatus for generating a model

Publications (2)

Publication Number Publication Date
CN108805091A true CN108805091A (en) 2018-11-13
CN108805091B CN108805091B (en) 2021-08-10

Family

ID=64086183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810617804.4A Active CN108805091B (en) 2018-06-15 2018-06-15 Method and apparatus for generating a model

Country Status (2)

Country Link
CN (1) CN108805091B (en)
WO (1) WO2019237657A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109492128A (en) * 2018-10-30 2019-03-19 北京字节跳动网络技术有限公司 Method and apparatus for generating model
CN109740018A (en) * 2019-01-29 2019-05-10 北京字节跳动网络技术有限公司 Method and apparatus for generating video tab model
CN109816023A (en) * 2019-01-29 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for generating picture tag model
CN110007755A (en) * 2019-03-15 2019-07-12 百度在线网络技术(北京)有限公司 Object event triggering method, device and its relevant device based on action recognition
CN110009101A (en) * 2019-04-11 2019-07-12 北京字节跳动网络技术有限公司 Method and apparatus for generating quantization neural network
WO2019237657A1 (en) * 2018-06-15 2019-12-19 北京字节跳动网络技术有限公司 Method and device for generating model
CN110619537A (en) * 2019-06-18 2019-12-27 北京无限光场科技有限公司 Method and apparatus for generating information
CN111949860A (en) * 2019-05-15 2020-11-17 北京字节跳动网络技术有限公司 Method and apparatus for generating a relevance determination model

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113138847B (en) * 2020-01-19 2024-06-18 京东科技控股股份有限公司 Computer resource allocation scheduling method and device based on federal learning
CN113807122A (en) * 2020-06-11 2021-12-17 阿里巴巴集团控股有限公司 Model training method, object recognition method and device, and storage medium
CN112200218B (en) * 2020-09-10 2023-06-20 浙江大华技术股份有限公司 Model training method and device and electronic equipment
CN112101566A (en) * 2020-09-11 2020-12-18 石化盈科信息技术有限责任公司 Prediction model training method, price prediction method, storage medium, and electronic device
CN112101464B (en) * 2020-09-17 2024-03-15 西安锐思数智科技股份有限公司 Deep learning-based image sample data acquisition method and device
CN112149807B (en) * 2020-09-28 2024-06-28 北京百度网讯科技有限公司 User characteristic information processing method and device
CN112819078B (en) * 2021-02-04 2023-12-15 上海明略人工智能(集团)有限公司 Iteration method and device for picture identification model
CN112925785A (en) * 2021-03-29 2021-06-08 中国建设银行股份有限公司 Data cleaning method and device
CN114913405B (en) * 2022-06-13 2024-08-09 国网智能电网研究院有限公司 Training method and device for deep neural network model

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598972A (en) * 2015-01-22 2015-05-06 清华大学 Quick training method of large-scale data recurrent neutral network (RNN)
US20150332438A1 (en) * 2014-05-16 2015-11-19 Adobe Systems Incorporated Patch Partitions and Image Processing
CN105912500A (en) * 2016-03-30 2016-08-31 百度在线网络技术(北京)有限公司 Machine learning model generation method and machine learning model generation device
CN106529598A (en) * 2016-11-11 2017-03-22 北京工业大学 Classification method and system based on imbalanced medical image data set
CN106529008A (en) * 2016-11-01 2017-03-22 天津工业大学 Double-integration partial least square modeling method based on Monte Carlo and LASSO
CN106897746A (en) * 2017-02-28 2017-06-27 北京京东尚科信息技术有限公司 Data classification model training method and device
CN107423673A (en) * 2017-05-11 2017-12-01 上海理湃光晶技术有限公司 A kind of face identification method and system
CN107657243A (en) * 2017-10-11 2018-02-02 电子科技大学 Neutral net Radar range profile's target identification method based on genetic algorithm optimization
CN107766868A (en) * 2016-08-15 2018-03-06 中国联合网络通信集团有限公司 A kind of classifier training method and device
CN107766940A (en) * 2017-11-20 2018-03-06 北京百度网讯科技有限公司 Method and apparatus for generation model
CN107967491A (en) * 2017-12-14 2018-04-27 北京木业邦科技有限公司 Machine learning method and apparatus for board identification, electronic device, and storage medium
CN107992783A (en) * 2016-10-26 2018-05-04 上海银晨智能识别科技有限公司 Face image processing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833569A (en) * 2010-04-08 2010-09-15 中国科学院自动化研究所 Method for automatically identifying face images in films
CN105354543A (en) * 2015-10-29 2016-02-24 小米科技有限责任公司 Video processing method and apparatus
CN108805091B (en) * 2018-06-15 2021-08-10 北京字节跳动网络技术有限公司 Method and apparatus for generating a model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shuang Liu et al.: "Randomly dividing homologous samples leads to overinflated accuracies for emotion recognition", International Journal of Psychophysiology *
Wang Huiya: "Research on classification-based complex data processing methods", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019237657A1 (en) * 2018-06-15 2019-12-19 北京字节跳动网络技术有限公司 Method and device for generating model
CN109492128A (en) * 2018-10-30 2019-03-19 北京字节跳动网络技术有限公司 Method and apparatus for generating model
CN109492128B (en) * 2018-10-30 2020-01-21 北京字节跳动网络技术有限公司 Method and apparatus for generating a model
CN109740018A (en) * 2019-01-29 2019-05-10 北京字节跳动网络技术有限公司 Method and apparatus for generating video tab model
CN109816023A (en) * 2019-01-29 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for generating picture tag model
CN109740018B (en) * 2019-01-29 2021-03-02 北京字节跳动网络技术有限公司 Method and device for generating video label model
CN110007755A (en) * 2019-03-15 2019-07-12 百度在线网络技术(北京)有限公司 Object event triggering method, device and its relevant device based on action recognition
CN110009101A (en) * 2019-04-11 2019-07-12 北京字节跳动网络技术有限公司 Method and apparatus for generating quantization neural network
CN110009101B (en) * 2019-04-11 2020-09-25 北京字节跳动网络技术有限公司 Method and apparatus for generating a quantized neural network
CN111949860A (en) * 2019-05-15 2020-11-17 北京字节跳动网络技术有限公司 Method and apparatus for generating a relevance determination model
CN111949860B (en) * 2019-05-15 2022-02-08 北京字节跳动网络技术有限公司 Method and apparatus for generating a relevance determination model
CN110619537A (en) * 2019-06-18 2019-12-27 北京无限光场科技有限公司 Method and apparatus for generating information

Also Published As

Publication number Publication date
WO2019237657A1 (en) 2019-12-19
CN108805091B (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN108805091A (en) Method and apparatus for generating model
CN108830235A (en) Method and apparatus for generating information
CN108898185A (en) Method and apparatus for generating image recognition model
CN109858445A (en) Method and apparatus for generating model
CN108960316A (en) Method and apparatus for generating model
CN110288049A (en) Method and apparatus for generating image recognition model
CN109101919A (en) Method and apparatus for generating information
CN107393541A (en) Information verification method and apparatus
CN109981787B (en) Method and device for displaying information
CN109545192A (en) Method and apparatus for generating model
CN109086719A (en) Method and apparatus for outputting data
CN108595628A (en) Method and apparatus for pushing information
CN110401844A (en) Method, apparatus, device, and readable medium for generating a live-streaming strategy
CN109976997A (en) Test method and device
CN108989882A (en) Method and apparatus for extracting music clips from videos
CN109618236A (en) Video comment processing method and apparatus
CN108345387A (en) Method and apparatus for outputting information
CN109829432A (en) Method and apparatus for generating information
CN109545193A (en) Method and apparatus for generating model
CN109214501A (en) Method and apparatus for identifying information
CN108960110A (en) Method and apparatus for generating information
CN110084317A (en) Method and apparatus for identifying images
CN109145783A (en) Method and apparatus for generating information
CN108391141A (en) Method and apparatus for outputting information
CN108509041A (en) Method and apparatus for executing operation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: Tiktok vision (Beijing) Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Patentee before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.