CN109447156A - Method and apparatus for generating model - Google Patents
- Publication number: CN109447156A
- Application number: CN201811273681.3A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/214 (G—Physics; G06—Computing; G06F—Electric digital data processing; G06F18/00—Pattern recognition) — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06V20/41 (G—Physics; G06—Computing; G06V—Image or video recognition or understanding; G06V20/00—Scenes; scene-specific elements) — Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
Embodiments of the present application disclose a method and apparatus for generating a model. One specific embodiment of the method includes: acquiring a sample set; extracting a subset of samples from the sample set and executing the following training step: inputting the samples in the subset into an initial model, and determining the loss value of each input sample based on the information output by the initial model and the class labels of the samples in the subset; selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples, and taking the average of the selected loss values as the target loss value; determining, based on the target loss value, whether training of the initial model is complete; and if so, taking the trained initial model as a classification detection model. This embodiment improves the accuracy of the generated model.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating a model.
Background art
In the field of machine learning, model training usually needs to be performed on a sample set. However, in sample sets used for model training, the difficulty of acquiring samples of different classes often differs greatly, so the numbers of samples of different classes in the sample set are unbalanced. As an example, when training a model for detecting video classes (e.g., distinguishing improper videos from normal videos), the number of positive samples (improper videos) in the sample set is usually very small, while the number of negative samples (normal videos) is large.
In related approaches, the sample set is usually used directly to train the model by means of supervised learning.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating a model.
In a first aspect, an embodiment of the present application provides a method for generating a model, the method comprising: acquiring a sample set, wherein the sample set includes positive samples and negative samples, the number of positive samples is smaller than the number of negative samples, and the samples in the sample set carry class labels; extracting a subset of samples from the sample set and executing the following training step: inputting the samples in the subset into an initial model, and determining the loss value of each input sample based on the information output by the initial model and the class labels of the samples in the subset; selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples, and taking the average of the selected loss values as the target loss value; determining, based on the target loss value, whether training of the initial model is complete; and if so, taking the trained initial model as a classification detection model.
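The training step of this first aspect can be sketched as a minimal control loop in plain Python. All names below (`train`, `extract_subset`, `compute_losses`, `select_losses`, `converged`, `update_params`) are illustrative placeholders for the operations described above, not identifiers from the patent:

```python
def train(sample_set, initial_model, extract_subset, compute_losses,
          select_losses, converged, update_params):
    """Sketch of the training loop: extract a subset, compute per-sample
    losses, average the selected losses into a target loss value, and either
    stop or update the model parameters and repeat."""
    while True:
        subset = extract_subset(sample_set)
        losses = compute_losses(initial_model, subset)
        chosen = select_losses(losses, subset)   # positives + some negatives
        target = sum(chosen) / len(chosen)       # target loss value
        if converged(target):
            return initial_model                 # classification detection model
        initial_model = update_params(initial_model, target)
```

Passing stub callables for the placeholders is enough to exercise the loop; a real implementation would plug in a neural network, a loss function, and a gradient-based parameter update.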
In some embodiments, selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples comprises: selecting the loss values of the positive samples in the subset; and selecting the loss values of a target number of negative samples in descending order of loss value, wherein the ratio of the target number to the number of positive samples in the extracted subset lies in a preset numerical interval.
In some embodiments, selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples comprises: in response to determining that there are no positive samples in the subset, selecting a preset number or a preset proportion of loss values in descending order of loss value.
In some embodiments, the method further includes: in response to determining that training of the initial model is not complete, updating the parameters of the initial model based on the target loss value, extracting a new subset of samples from the sample set, and continuing the training step with the updated initial model as the initial model.
In some embodiments, the initial model is obtained as follows: using a machine learning method, the samples in the sample set are taken as input and the class labels of the input samples are taken as output, and the initial model is obtained by training.
In some embodiments, the samples in the sample set are sample videos, the class labels of the samples indicate the classes of the sample videos, and the classification detection model is a video classification detection model for detecting the class of a video.
In a second aspect, an embodiment of the present application provides an apparatus for generating a model, the apparatus comprising: an acquisition unit configured to acquire a sample set, wherein the sample set includes positive samples and negative samples, the number of positive samples is smaller than the number of negative samples, and the samples in the sample set carry class labels; and a training unit configured to extract a subset of samples from the sample set and execute the following training step: inputting the samples in the subset into an initial model, and determining the loss value of each input sample based on the information output by the initial model and the class labels of the samples in the subset; selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples, and taking the average of the selected loss values as the target loss value; determining, based on the target loss value, whether training of the initial model is complete; and if so, taking the trained initial model as a classification detection model.
In some embodiments, the training unit is further configured to: select the loss values of the positive samples in the subset; and select the loss values of a target number of negative samples in descending order of loss value, wherein the ratio of the target number to the number of positive samples in the extracted subset lies in a preset numerical interval.
In some embodiments, the training unit is further configured to: in response to determining that there are no positive samples in the subset, select a preset number or a preset proportion of loss values in descending order of loss value.
In some embodiments, the apparatus further includes an updating unit configured to: in response to determining that training of the initial model is not complete, update the parameters of the initial model based on the target loss value, extract a new subset of samples from the sample set, and continue the training step with the updated initial model as the initial model.
In some embodiments, the initial model is obtained as follows: using a machine learning method, the samples in the sample set are taken as input and the class labels of the input samples are taken as output, and the initial model is obtained by training.
In some embodiments, the samples in the sample set are sample videos, the class labels of the samples indicate the classes of the sample videos, and the classification detection model is a video classification detection model for detecting the class of a video.
In a third aspect, an embodiment of the present application provides a method for detecting the class of a video, comprising: receiving a target video; and inputting frames of the target video into a video classification detection model generated by the method described in the embodiments of the first aspect above, to obtain a video classification detection result.
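A sketch of this inference flow, assuming a per-frame scoring model and a maximum-score aggregation rule (both illustrative choices not specified by the claim):

```python
def detect_video_class(frames, detection_model, threshold=0.5):
    """Score each frame with the classification detection model and flag the
    video when any frame's score reaches the threshold (one simple
    aggregation policy among many)."""
    scores = [detection_model(frame) for frame in frames]
    return max(scores) >= threshold
```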
In a fourth aspect, an embodiment of the present application provides an apparatus for detecting the class of a video, comprising: a receiving unit configured to receive a target video; and an input unit configured to input frames of the target video into a video classification detection model generated by the method described in the embodiments of the first aspect above, to obtain a video classification detection result.
In a fifth aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the first and third aspects above.
In a sixth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, the program implementing the method of any embodiment of the first and third aspects above when executed by a processor.
The method and apparatus for generating a model provided by the embodiments of the present application acquire a sample set and repeatedly extract subsets of its samples to train an initial model. The samples in the sample set carry class labels, and the number of positive samples in the sample set is smaller than the number of negative samples. Inputting the samples of a subset into the initial model yields a piece of output information corresponding to each sample. Based on the information output by the initial model and the class labels of the samples in the subset, the loss value of each input sample can be determined. The loss values of the positive samples in the subset and the loss values of some of the negative samples can then be selected, and the average of the selected loss values taken as the target loss value. Based on the target loss value, it can be determined whether training of the initial model is complete; if so, the trained initial model can be taken as the classification detection model. Since the numbers of positive and negative samples in a subset are unbalanced, selecting the loss values of the positive samples together with the loss values of only some of the negative samples effectively balances the numbers of positive and negative samples during training and improves the accuracy of the generated model.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:
Fig. 1 is a diagram of an exemplary system architecture to which an embodiment of the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating a model according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating a model according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating a model according to the present application;
Fig. 6 is a flowchart of one embodiment of the method for detecting the class of a video according to the present application;
Fig. 7 is a structural schematic diagram of one embodiment of the apparatus for detecting the class of a video according to the present application;
Fig. 8 is a structural schematic diagram of a computer system suitable for implementing an electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant invention and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which the method for generating a model or the apparatus for generating a model of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 is the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as video recording applications, video playback applications, voice interaction applications, search applications, instant messaging tools, mailbox clients and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, laptop portable computers and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
When the terminal devices 101, 102, 103 are hardware, an image acquisition device may also be installed on them. The image acquisition device may be any device capable of acquiring images, such as a camera or a sensor. The user may use the image acquisition device on the terminal devices 101, 102, 103 to capture video.
The server 105 may be a server providing various services, for example a data processing server for data storage and data processing. The data processing server may store a sample set containing a large number of samples, where the samples in the sample set may carry class labels. In addition, the data processing server may use the samples in the sample set to train an initial model, and may store the training result (e.g., the generated classification detection model). In this way, the trained classification detection model can be used to perform the corresponding data processing and realize the functions the classification detection model supports.
It should be noted that the server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should be noted that the method for generating a model provided by the embodiments of the present application is generally executed by the server 105, and correspondingly the apparatus for generating a model is generally disposed in the server 105.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating a model according to the present application is shown. The method for generating a model comprises the following steps:
Step 201: a sample set is acquired.
In this embodiment, the executing body of the method for generating a model (e.g., the server 105 shown in Fig. 1) can obtain the sample set in several ways. For example, the executing body can obtain an existing sample set stored in another server (e.g., a database server) through a wired or wireless connection. As another example, a user can collect samples through a terminal device (e.g., the terminal devices 101, 102, 103 shown in Fig. 1); the executing body can then receive the samples collected by the terminal and store them locally to generate the sample set. It should be pointed out that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, Zigbee connections, UWB (ultra wideband) connections and other wireless connections now known or developed in the future.
Here, the sample set may contain a large number of samples, including both positive samples and negative samples. In practice, positive samples and negative samples can represent two different classes of samples. Alternatively, positive samples can represent samples of one class, while negative samples represent the remaining classes (which may include one or more classes) other than the class corresponding to the positive samples. It should be noted that the classes corresponding to positive and negative samples can be set as needed and are not limited here.
It should be pointed out that positive samples are usually samples of the class that is difficult to obtain. Therefore, the number of positive samples in the sample set can be smaller than the number of negative samples.
Here, the samples in the sample set can carry class labels, which indicate the class of a sample. Since the classes corresponding to positive and negative samples can be preset as needed, the class label reveals whether a sample is positive or negative. As an example, when positive and negative samples represent two different classes, the class label of positive samples can be set to 1 and the class label of negative samples to 0.
It should be noted that the samples in the sample set can be chosen according to actual needs. For example, if a model capable of image class detection (e.g., face image class vs. non-face image class) needs to be trained, the samples in the sample set can be sample images, with class labels indicating the classes of the images.
Step 202: a subset of samples is extracted from the sample set.
In this embodiment, the executing body can extract some of the samples from the sample set obtained in step 201 to form a subset, and execute the training step of steps 203 to 207. The way samples are extracted is not limited in this application; for example, the samples to be extracted can be taken from the sample set in a specified order.
In the field of machine learning, the subset of samples extracted each time can be called a minibatch. The act of traversing all the samples in the sample set once can be called an epoch. As an example, if there are 128,000 samples in the sample set, a subset of 128 samples can be chosen each time to train the model; the 128,000 samples then successively form 1,000 subsets, and after every subset has been used once, one epoch has passed. It should be noted that different epochs can extract subsets of different sizes: for example, in the first epoch, 128 samples can be extracted each time to form a subset, while in the second epoch, 256 samples can be extracted each time.
Since the number of samples in a sample set is usually large, using all the samples in the sample set at once in each round of training would be time-consuming and inefficient. Here, by choosing part of the samples in the sample set to form a subset, one gradient-descent step is performed per subset during training, and the samples in the sample set are eventually all traversed, so the amount of data per iteration is small. Time consumption can therefore be reduced and processing efficiency improved.
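A minimal subset-extraction sketch in Python (names are illustrative; in-order slicing stands in for whatever extraction policy is actually used):

```python
def minibatches(samples, batch_size):
    """Yield successive subsets (minibatches) of the sample set; one full
    pass over all yielded subsets constitutes one epoch."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]
```

With 128,000 samples and a batch size of 128, this yields the 1,000 subsets per epoch of the example above.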
Step 203: the samples in the subset are input into the initial model, and the loss value of each input sample is determined based on the information output by the initial model and the class labels of the samples in the subset.
In this embodiment, the executing body can first input the samples of the subset composed in step 202 into the initial model. The initial model can perform processing such as feature extraction and analysis on the samples, and then output information. It should be noted that the initial model can be a classification model built in advance as needed (an existing model structure capable of classification can be used), or a model obtained by initially training an existing classification model as needed. For example, if a model capable of image class detection or text class detection needs to be trained, an existing classification model can be used as the initial model. As an example, the existing classification model can be a convolutional neural network of various existing structures (such as DenseBox, VGGNet, ResNet, SegNet, etc.). A support vector machine (SVM) or the like can also be used.
After the samples in the subset have been input into the initial model, the executing body can extract the information output by the initial model, where each input sample corresponds to one piece of information output by the initial model. For example, if there are 128 samples in the subset, the initial model can output 128 pieces of information in one-to-one correspondence with the 128 input samples.
Then, based on the information output by the initial model and the class labels of the samples in the subset, the executing body can determine the loss value of each input sample. Here, the goal of training the initial model is to make the difference between the output information and the class labels of the input samples as small as possible. A value characterizing the difference between the information output by the initial model and the class label can therefore be used as the loss value. In practice, various existing loss functions can be used to characterize this difference. For each input sample, the information output by the initial model for that sample and the class label of that sample are input into the loss function, and the loss value of the sample is obtained.
In practice, the loss function can be used to estimate the degree of inconsistency between the predicted value of the initial model (i.e., the output information) and the true value (i.e., the class label). It is a non-negative real-valued function. In general, the smaller the value of the loss function (the loss value), the better the robustness of the model. The loss function can be set according to actual needs; for example, the Euclidean distance or a cross-entropy loss function may be used.
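As one concrete choice, the cross-entropy loss mentioned above can be computed per sample as follows (a sketch under the assumption of the binary 1/0 label convention; the epsilon guard is an implementation detail added here for numerical safety, not part of the patent):

```python
import math

def bce_loss(p, y):
    """Binary cross-entropy between the predicted probability p output by the
    model and the class label y (1 = positive, 0 = negative); larger values
    mean a worse prediction."""
    eps = 1e-12  # guards against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
```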
In some optional implementations of this embodiment, the initial model can also be a model obtained by initially training a pre-built classification model as needed. Specifically, the initial model is obtained as follows: using a machine learning method, the samples in the sample set are taken as input and the class labels of the input samples as output, and the initial model is obtained by training. As an example, if a model for image class detection needs to be trained, a corresponding sample set (whose samples can be images, with class labels indicating the classes of the images) can be used to initially train a convolutional neural network in a supervised-learning manner. The convolutional neural network after initial training is taken as the initial model. Specifically, samples can be successively extracted from the sample set to form subsets, and each time a subset has been traversed, the model can be updated once using a gradient descent algorithm. The executing body can take the model trained when every subset has been traversed as the initial model.
Step 204: the loss values of the positive samples in the subset and the loss values of some of the negative samples are selected, and the average of the selected loss values is taken as the target loss value.
In this embodiment, since the samples in the sample set carry class labels and the classes corresponding to positive and negative samples are known, the executing body can first select, from the subset extracted in step 202, the loss values of the positive samples and the loss values of some of the negative samples, and take the average of the selected loss values as the target loss value. In practice, the target loss value serves as the loss value of the extracted subset.
Here, since the number of positive samples in the sample set is usually small and the number of negative samples usually large, the number of positive samples in a subset of the sample set is also usually small and the number of negative samples usually large. In practice, the loss values of all the positive samples in the extracted subset can be selected, along with the loss values of some of the negative samples in the subset. The negative-sample loss values can be extracted by taking a part of them at random, or by choosing a part of them in descending order of loss value.
It should be noted that the number of selected negative-sample loss values can be equal or close to the number of positive samples. For example, suppose a subset contains 128 samples in total, of which 10 are positive and 118 negative; the loss values of 10 or 15 negative samples can then be chosen. As another example, suppose a subset contains 128 samples, of which 1 is positive and 127 negative; the loss value of 1 negative sample can then be chosen. Alternatively, the number of selected negative-sample loss values can be specified beforehand without considering the number of positive samples. In practice, the numbers of positive samples in different subsets usually differ little, so the numbers of loss values selected usually differ little as well.
In some optional implementations of this embodiment, the executing body can select the loss values of the positive samples in the extracted subset, and select the loss values of a target number of negative samples in descending order of loss value, where the ratio of the target number to the number of positive samples in the extracted subset lies in a preset numerical interval (e.g., the interval [1, 2]). Since a larger loss value means the model finds it harder to determine a sample's class, selecting negative-sample loss values in descending order trains on the samples whose classes are hardest to determine, which can improve training efficiency.
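This descending-order selection is, in effect, a form of hard negative mining, and can be sketched as follows (the function name and default ratio are illustrative assumptions, not identifiers from the patent):

```python
def hardest_negative_losses(neg_losses, num_pos, ratio=2):
    """Select the loss values of the target number of negatives in descending
    order of loss value; the target number is `ratio` times the positive
    count, with the ratio lying in the preset interval (e.g. [1, 2])."""
    target = max(1, num_pos * ratio)
    return sorted(neg_losses, reverse=True)[:target]
```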
In some optional implementations of this embodiment, in response to determining that there are no positive samples in the extracted subset, the executing body can select a preset number (e.g., 10 or 20) or a preset proportion (e.g., 10%) of loss values in descending order of loss value.
Step 205: based on the target loss value, it is determined whether training of the initial model is complete.
In this embodiment, the executing body can determine in various ways, based on the target loss value, whether training of the initial model is complete. As an example, the executing body can determine whether the target loss value has converged; when it has, the initial model at that point can be considered trained. As another example, the executing body can first compare the target loss value with a preset value. In response to determining that the target loss value is less than or equal to the preset value, it can count, among the target loss values determined in the most recent preset number (e.g., 100) of training steps, the proportion of target loss values less than or equal to the preset value. When this proportion is greater than a preset ratio (e.g., 95%), it can be determined that training of the initial model is complete. It should be noted that the preset value can generally represent the ideal degree of inconsistency between the predicted value and the true value; that is, when the loss value is less than or equal to the preset value, the predicted value can be considered close or approximately equal to the true value. The preset value can be set according to actual needs.
It should be noted that, in response to determining that training of the initial model is complete, step 206 may then be executed. In response to determining that training of the initial model is not complete, the parameters of the initial model are updated based on the target loss value determined in step 204, samples are again extracted from the sample set to form a subset, and, taking the initial model with updated parameters as the initial model, the above training step is executed again. Here, a back-propagation algorithm may be used to find the gradient of the target loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be pointed out that the loss values that were not chosen take no part in the gradient descent. It should be noted that the back-propagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely researched and applied, and are not described in detail here.
Step 206: in response to determining that training of the initial model is complete, determine the trained initial model as a category detection model.
In this embodiment, in response to determining that training of the initial model is complete, the executing body may determine the trained initial model as the category detection model.
In some optional implementations of this embodiment, the samples in the sample set may be sample videos, the category labels of the samples may be used to indicate the categories of the sample videos, and the category detection model may be a video category detection model for detecting the category of a video.
In some optional implementations of this embodiment, the executing body may store the category detection model locally, or may send it to other electronic devices (e.g., the terminal devices 101, 102, 103 shown in Fig. 1).
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating a model according to this embodiment. In the application scenario of Fig. 3, a model-training application may be installed on a terminal device 301 used by a user. After the user opens the application and uploads a sample set or the storage path of a sample set, a server 302 providing back-end support for the application may run the method for generating a model, comprising:
First, a sample set may be acquired, wherein the sample set may include positive samples and negative samples, the number of positive samples is less than the number of negative samples, and the samples in the sample set bear category labels.
Then, part of the samples in the sample set are extracted to form a subset 303, and the following training step is executed: the samples in the subset 303 are input into an initial model 304, and the loss value of each input sample is determined based on the information output by the initial model and the category labels borne by the samples in the subset 303. Next, the loss values of the positive samples in the extracted subset and the loss values of some of the negative samples are chosen, and the average of the chosen loss values is determined as a target loss value 305. Then, based on the target loss value, it may be determined whether training of the initial model is complete. If training is determined to be complete, the initial model with updated parameters may be determined as a target model 306.
In the method provided by the above embodiment of the present application, a sample set is acquired, from which part of the samples can be extracted to form a subset for training an initial model. The samples in the sample set bear category labels, and the number of positive samples in the sample set is less than the number of negative samples. The samples in the subset are input into the initial model, yielding the information output by the initial model for each sample. Thereafter, the loss value of each input sample can be determined based on the information output by the initial model and the category labels borne by the samples in the subset. Then, the loss values of the positive samples in the subset and the loss values of some of the negative samples can be chosen, and the average of the chosen loss values determined as a target loss value. Based on the target loss value, it can then be determined whether training of the initial model is complete. If training is complete, the trained initial model can be determined as a category detection model. Since the numbers of positive and negative samples in the subset are imbalanced, choosing the loss values of the positive samples together with the loss values of only some of the negative samples for training effectively balances the numbers of positive and negative samples, improving the accuracy of the generated model.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating a model is illustrated. The flow 400 of the method for generating a model comprises the following steps:
Step 401: acquire a sample set.
In this embodiment, the executing body of the method for generating a model (e.g., the server 105 shown in Fig. 1) may acquire the sample set in various ways. Here, the sample set may include a large number of samples, including positive samples and negative samples, with the number of positive samples in the sample set less than the number of negative samples. The samples in the sample set may bear category labels, which may be used to indicate the categories of the samples.
In this embodiment, the samples in the sample set may be sample videos, and the category labels borne by the samples may be used to indicate the categories of the sample videos. A positive sample may be a sample video of some specified category, e.g., a video of an illegal category; negative samples may be videos of other categories.
Step 402: extract part of the samples in the sample set to form a subset.
In this embodiment, the executing body may extract part of the samples from the sample set acquired in step 401 to form a subset, and execute the training step of steps 403 to 408. The manner of extracting samples is not limited in the present application; for example, the samples currently to be extracted may be extracted from the sample set in a specified order.
Step 403: input the samples in the subset into an initial model, and determine the loss value of each input sample based on the information output by the initial model and the category labels borne by the samples in the subset.
In this embodiment, the executing body may first input the samples in the subset formed in step 402 into the initial model. The information output by the initial model may then be extracted, each input sample corresponding to one piece of information output by the initial model. The loss value of each input sample may then be determined based on the information output by the initial model and the category labels borne by the samples in the subset. It should be noted that the computation of the loss values is essentially the same as the operation described in step 203 and is not repeated here.
Here, the initial model may be a model obtained by performing initial training, as needed, on a pre-established model. Specifically, the initial model may be obtained as follows: using a machine learning method, with the samples in the sample set as input and the category labels of the input samples as output, train to obtain the initial model. Here, an existing model structure may be trained to obtain the initial model.
In this embodiment, a model for performing video category detection may be trained. The executing body may perform initial training on a convolutional neural network in a supervised-learning manner, and determine the trained convolutional neural network as the initial model. It should be noted that each time the traversal of a subset is completed, the model may be updated once using a gradient descent algorithm; the model trained when the traversal of the sample set is completed may be determined as the initial model.
Step 404: choose the loss values of the positive samples in the subset.
In this embodiment, the executing body may choose the loss values of the positive samples in the extracted subset. Understandably, since positive samples are few, the loss values of all positive samples in the subset may be chosen.
Step 405: choose the loss values of a target number of negative samples in descending order of loss value.
In this embodiment, the executing body may choose, in descending order of loss value, the loss values of a target number of negative samples from the loss values obtained in step 403, where the ratio of the target number to the number of positive samples in the extracted subset lies in a preset numerical interval (e.g., the interval [1, 2]). Since the larger a loss value is, the harder it is for the model to determine the category of the sample, choosing the loss values of negative samples in descending order of loss value allows training on the samples whose categories are hardest to determine, which can improve training efficiency.
In this embodiment, in response to determining that no positive sample is present in the subset, the executing body may choose, in descending order of loss value, a preset number (e.g., 10 or 20) or a preset proportion (e.g., 10%) of the loss values.
Step 406: determine the average of the chosen loss values as a target loss value.
In this embodiment, the executing body may determine the average of the loss values chosen in steps 404 and 405 as the target loss value. In practice, the target loss value is the loss value of the extracted subset.
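Steps 404 to 406 amount to a hard-negative selection over loss values. A minimal Python sketch follows; the `ratio` default, the fallback share for the no-positive case, and the 1/0 label convention are illustrative assumptions rather than values fixed by this application:

```python
def target_loss(losses, labels, ratio=1.0):
    """Keep the losses of all positive samples plus the k largest
    negative-sample losses, k = ratio * (number of positives), where `ratio`
    stands in for a value drawn from the preset interval (e.g. [1, 2]);
    average the chosen losses to obtain the target loss value."""
    pos = [l for l, y in zip(losses, labels) if y == 1]
    neg = sorted((l for l, y in zip(losses, labels) if y == 0), reverse=True)
    if not pos:
        # No positives in this subset: fall back to a preset share of the
        # hardest (largest) losses, per the optional implementation.
        chosen = neg[:max(1, len(neg) // 10)]
    else:
        k = int(ratio * len(pos))
        chosen = pos + neg[:k]  # all positives + k hardest negatives
    return sum(chosen) / len(chosen)
```

For a subset with one positive (loss 0.9) and three negatives (losses 0.1, 0.5, 0.3) and `ratio=1.0`, the chosen losses are 0.9 and 0.5, giving a target loss value of 0.7.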
Step 407: determine, based on the target loss value, whether training of the initial model is complete.
In this embodiment, the executing body may determine in various ways, based on the target loss value, whether training of the initial model is complete. As an example, the executing body may determine whether the target loss value has converged; when it is determined that the target loss value has converged, the initial model at that point may be deemed trained.
It should be noted that, in response to determining that training of the initial model is complete, step 408 may then be executed. In response to determining that training of the initial model is not complete, the parameters of the initial model are updated based on the target loss value determined in step 406, samples are again extracted from the sample set to form a subset, and, taking the initial model with updated parameters as the initial model, the above training step is executed again. Here, a back-propagation algorithm may be used to find the gradient of the target loss value with respect to the model parameters, and a gradient descent algorithm may then be used to update the model parameters based on the gradient. It should be pointed out that the loss values that were not chosen take no part in the gradient descent. It should be noted that the back-propagation algorithm, the gradient descent algorithm, and machine learning methods are well-known techniques that are currently widely researched and applied, and are not described in detail here.
Step 408: in response to determining that training of the initial model is complete, determine the trained initial model as a category detection model.
In this embodiment, in response to determining that training of the initial model is complete, the executing body may determine the trained initial model as the category detection model. Here, the category detection model may be a video category detection model for detecting the category of a video.
In this embodiment, the executing body may store the video category detection model locally, or may send it to other electronic devices (e.g., the terminal devices 101, 102, 103 shown in Fig. 1).
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating a model in this embodiment involves the step of choosing the loss values of a target number of negative samples in descending order of loss value. Since the larger a loss value is, the harder it is for the model to determine the category of the sample, choosing the loss values of negative samples in descending order of loss value allows training on the samples whose categories are hardest to determine, which can improve training efficiency.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating a model. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be specifically applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating a model of this embodiment comprises: an acquiring unit 501, configured to acquire a sample set, wherein the sample set includes positive samples and negative samples, the number of positive samples is less than the number of negative samples, and the samples in the sample set bear category labels; and a training unit 502, configured to extract part of the samples in the sample set to form a subset and execute the following training step: input the samples in the subset into an initial model, and determine the loss value of each input sample based on the information output by the initial model and the category labels borne by the samples in the subset; choose the loss values of the positive samples in the subset and the loss values of some of the negative samples, and determine the average of the chosen loss values as a target loss value; determine, based on the target loss value, whether training of the initial model is complete; and if so, determine the trained initial model as a category detection model.
In some optional implementations of this embodiment, the training unit 502 may be further configured to: choose the loss values of the positive samples in the subset; and choose the loss values of a target number of negative samples in descending order of loss value, wherein the ratio of the target number to the number of positive samples in the extracted subset lies in a preset numerical interval.
In some optional implementations of this embodiment, the training unit 502 may be further configured to: in response to determining that no positive sample is present in the subset, choose a preset number or a preset proportion of the loss values in descending order of loss value.
In some optional implementations of this embodiment, the apparatus may further include an updating unit (not shown in the figure). The updating unit may be configured to: in response to determining that training of the initial model is not complete, update the parameters of the initial model based on the target loss value, extract samples again from the sample set to form a subset, and, taking the initial model with updated parameters as the initial model, continue to execute the training step.
In some optional implementations of this embodiment, the initial model may be obtained as follows: using a machine learning method, with the samples in the sample set as input and the category labels of the input samples as output, train to obtain the initial model.
In some optional implementations of this embodiment, the samples in the sample set may be sample videos, the category labels borne by the samples may be used to indicate the categories of the sample videos, and the category detection model may be a video category detection model for detecting the category of a video.
In the apparatus provided by the above embodiment of the present application, the acquiring unit 501 acquires a sample set, from which samples can be extracted to form a subset for training an initial model. The samples in the sample set bear category labels, and the number of positive samples in the sample set is less than the number of negative samples. The training unit 502 inputs the samples in the subset into the initial model, yielding the information output by the initial model for each sample. Thereafter, the training unit 502 can determine the loss value of each input sample based on the information output by the initial model and the category labels borne by the samples in the subset. Then, the loss values of the positive samples in the subset and the loss values of some of the negative samples can be chosen, and the average of the chosen loss values determined as a target loss value. Based on the target loss value, it can then be determined whether training of the initial model is complete. If training is complete, the trained initial model can be determined as a category detection model. Since the numbers of positive and negative samples in the subset are imbalanced, choosing the loss values of the positive samples together with the loss values of only some of the negative samples for training effectively balances the numbers of positive and negative samples, improving the accuracy of the generated model.
Referring to Fig. 6, a flow 600 of an embodiment of a method for detecting a video category provided by the present application is illustrated. The method for detecting a video category may comprise the following steps:
Step 601: receive a target video.
In this embodiment, the executing body of the method for detecting a video category (e.g., the server 105 shown in Fig. 1, or another server storing a video category detection model) may receive, via a wired or wireless connection, a target video transmitted by a terminal device (e.g., the terminal devices 101, 102, 103 shown in Fig. 1).
Step 602: input the frames of the target video into a video category detection model to obtain a video category detection result.
In this embodiment, the executing body may input the frames of the target video into the video category detection model to obtain a video category detection result. The video category detection model may be generated using the method for generating a category detection model described in the embodiment of Fig. 2; for the specific generation process, refer to the related description of the embodiment of Fig. 2, which is not repeated here. The video category detection result may be used to indicate the category of the target video.
In some optional implementations of this embodiment, after obtaining the video category detection result, the executing body may store the target video in a video library corresponding to the category indicated by the video category detection result.
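This optional storage step can be sketched as below; `detect_category` stands in for the trained video category detection model, and all names are hypothetical:

```python
def route_video(frames, detect_category, video_libraries):
    """Steps 601-602 plus the optional storage step: feed the target video's
    frames to the category detection model (here an opaque callable) and file
    the video into the library matching the detected category."""
    category = detect_category(frames)                 # video category detection result
    video_libraries.setdefault(category, []).append(frames)
    return category
```

Usage: `route_video(extracted_frames, model, libraries)` returns the detected category and appends the video's frames to the corresponding library entry.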
The method for detecting a video category of this embodiment may be used to detect the category of a video, and can improve the accuracy of video category detection.
With continued reference to Fig. 7, as an implementation of the method shown in Fig. 6, the present application provides an embodiment of an apparatus for detecting a video category. This apparatus embodiment corresponds to the method embodiment shown in Fig. 6, and the apparatus may be specifically applied in various electronic devices.
As shown in Fig. 7, the apparatus 700 for detecting a video category of this embodiment comprises: a receiving unit 701, configured to receive a target video; and an input unit 702, configured to input the frames of the target video into a video category detection model to obtain a video category detection result.
It is to be understood that all of the units recorded in the apparatus 700 correspond to the respective steps of the method described with reference to Fig. 6. Accordingly, the operations, features, and beneficial effects described above with respect to the method likewise apply to the apparatus 700 and the units contained therein, and are not described again here.
Referring now to Fig. 8, a structural schematic diagram of a computer system 800 of an electronic device suitable for implementing the embodiments of the present application is illustrated. The electronic device shown in Fig. 8 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present application.
As shown in Fig. 8, the computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores the various programs and data required for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A driver 810 is also connected to the I/O interface 805 as needed. A removable medium 811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 810 as needed, so that a computer program read therefrom is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the central processing unit (CPU) 801, the above functions defined in the methods of the present application are executed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by, or in connection with, an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and may send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to the various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising an acquiring unit and a training unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring a sample set".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a sample set; extract part of the samples in the sample set to form a subset, and execute the following training step: input the samples in the subset into an initial model, and determine the loss value of each input sample based on the information output by the initial model and the category labels borne by the samples in the subset; choose the loss values of the positive samples in the subset and the loss values of some of the negative samples, and determine the average of the chosen loss values as a target loss value; determine, based on the target loss value, whether training of the initial model is complete; and if so, determine the trained initial model as a category detection model.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should appreciate that the scope of invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (16)
1. a kind of method for generating model, comprising:
Obtain sample set, wherein the sample set includes positive sample and negative sample, and the quantity of positive sample is less than the number of negative sample
It measures, the sample in the sample set is marked with classification;
The part sample composition subset in the sample set is extracted, executes following training step: the sample in the subset is defeated
Enter to initial model, the classification mark that the sample in information and the subset based on initial model output is had determines institute
The penalty values of each sample of input;The penalty values of the positive sample in the subset and the penalty values of part negative sample are chosen, by institute
The average value of the penalty values of selection is determined as target loss value;Based on the target loss value, determine whether initial model trains
It completes;If so, the initial model after training is determined as classification detection model.
2. the method according to claim 1 for generating model, wherein described to choose positive sample in the subset
The penalty values of penalty values and part negative sample, comprising:
Choose the penalty values of the positive sample in the subset;
The penalty values of the negative sample of destination number are chosen according to penalty values sequence from big to small, wherein the destination number with
The ratio of the quantity of positive sample in extracted subset is in default value section.
3. the method according to claim 1 for generating model, wherein described to choose positive sample in the subset
The penalty values of penalty values and part negative sample, comprising:
In response to positive sample is not present in the determination subset, according to penalty values sequence from big to small choose preset quantity or
The penalty values of preset ratio.
4. the method according to claim 1 for generating model, wherein the method also includes:
In response to determining that initial model not complete by training, is based on the target loss value, the parameter in initial model is updated, from institute
It states and extracts sample composition subset in sample set again, the initial model after using undated parameter is continued to execute as initial model
The training step.
5. the method according to claim 1 for generating model, wherein initial model obtains as follows:
The classification of the sample inputted is marked and is made using the sample in the sample set as input using machine learning method
For output, training obtains initial model.
6. The method for generating a model according to any one of claims 1-5, wherein the samples in the sample set are sample videos, the classification label of a sample indicates the category of the sample video, and the classification detection model is a video category detection model for detecting the category of a video.
7. An apparatus for generating a model, comprising:
an acquiring unit configured to acquire a sample set, wherein the sample set includes positive samples and negative samples, the number of positive samples is smaller than the number of negative samples, and the samples in the sample set carry classification labels; and
a training unit configured to extract some of the samples in the sample set to form a subset and to execute the following training step: inputting the samples in the subset into an initial model; determining the loss value of each input sample based on the information output by the initial model and the classification labels of the samples in the subset; selecting the loss values of the positive samples in the subset and the loss values of some of the negative samples; determining the average of the selected loss values as a target loss value; determining, based on the target loss value, whether training of the initial model is complete; and if so, determining the initial model after training as the classification detection model.
8. The apparatus for generating a model according to claim 7, wherein the training unit is further configured to:
select the loss values of the positive samples in the subset; and
select the loss values of a target number of negative samples in descending order of loss value, wherein a ratio of the target number to the number of positive samples in the extracted subset lies within a preset numerical range.
9. The apparatus for generating a model according to claim 7, wherein the training unit is further configured to:
in response to determining that no positive sample exists in the subset, select a preset number or a preset proportion of the loss values in descending order of loss value.
10. The apparatus for generating a model according to claim 7, further comprising:
an updating unit configured to, in response to determining that training of the initial model is not complete, update the parameters of the initial model based on the target loss value, extract samples from the sample set again to form a subset, and continue the training step with the initial model having the updated parameters as the initial model.
11. The apparatus for generating a model according to claim 7, wherein the initial model is obtained as follows:
training, by a machine learning method, with the samples in the sample set as input and the classification labels of the input samples as output, to obtain the initial model.
12. The apparatus for generating a model according to any one of claims 7-11, wherein the samples in the sample set are sample videos, the classification label of a sample indicates the category of the sample video, and the classification detection model is a video category detection model for detecting the category of a video.
13. A method for detecting a video category, comprising:
receiving a target video; and
inputting frames of the target video into a video category detection model generated by the method according to claim 6, to obtain a video category detection result.
14. An apparatus for detecting a video category, comprising:
a receiving unit configured to receive a target video; and
an input unit configured to input frames of the target video into a video category detection model generated by the method according to claim 6, to obtain a video category detection result.
15. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6 and 13.
16. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-6 and 13.
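Claims 1-4 describe an online-hard-example-mining style of training: per-sample losses are computed for a sampled subset, the losses of all positive samples plus the largest negative-sample losses are kept, and their average becomes the target loss used to decide whether to stop or to update parameters. Below is a minimal pure-Python sketch of that selection step; the function name `target_loss`, the ratio bound `neg_pos_ratio`, and the no-positive fallback count `fallback_k` are illustrative assumptions, not values disclosed in the patent.

```python
def target_loss(per_sample_losses, labels, neg_pos_ratio=3.0, fallback_k=16):
    """Average the losses of all positives and the hardest negatives.

    per_sample_losses: loss value of each sample in the extracted subset.
    labels: 1 for a positive sample, 0 for a negative sample.
    """
    pos = [l for l, y in zip(per_sample_losses, labels) if y == 1]
    # Negative-sample losses, sorted in descending order ("hardest" first).
    neg = sorted((l for l, y in zip(per_sample_losses, labels) if y != 1),
                 reverse=True)
    if not pos:
        # Claim 3: no positives in the subset -> keep a preset number
        # of the largest negative losses instead.
        k = min(fallback_k, len(neg))
    else:
        # Claim 2: keep a target number of negatives so that the
        # negative-to-positive ratio stays within a preset range.
        k = min(int(neg_pos_ratio * len(pos)), len(neg))
    chosen = pos + neg[:k]
    if not chosen:
        return 0.0  # degenerate empty subset
    return sum(chosen) / len(chosen)
```

Bounding the number of selected negatives relative to the positive count is what keeps the abundant easy negatives from dominating the averaged target loss, which is the imbalance problem the claims are aimed at.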
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811273681.3A CN109447156B (en) | 2018-10-30 | 2018-10-30 | Method and apparatus for generating a model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109447156A true CN109447156A (en) | 2019-03-08 |
CN109447156B CN109447156B (en) | 2022-05-17 |
Family
ID=65549749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811273681.3A Active CN109447156B (en) | 2018-10-30 | 2018-10-30 | Method and apparatus for generating a model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109447156B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110070505A (en) * | 2019-04-12 | 2019-07-30 | 北京迈格威科技有限公司 | Enhance the method and apparatus of image classification plant noise robustness |
WO2020087974A1 (en) * | 2018-10-30 | 2020-05-07 | 北京字节跳动网络技术有限公司 | Model generation method and device |
CN111770317A (en) * | 2020-07-22 | 2020-10-13 | 平安国际智慧城市科技股份有限公司 | Video monitoring method, device, equipment and medium for intelligent community |
CN112347278A (en) * | 2019-10-25 | 2021-02-09 | 北京沃东天骏信息技术有限公司 | Method and apparatus for training a characterization model |
CN112395179A (en) * | 2020-11-24 | 2021-02-23 | 创新奇智(西安)科技有限公司 | Model training method, disk prediction method, device and electronic equipment |
CN112434073A (en) * | 2019-08-24 | 2021-03-02 | 北京地平线机器人技术研发有限公司 | Method and device for determining sample selection model |
WO2021051879A1 (en) * | 2019-09-17 | 2021-03-25 | 平安科技(深圳)有限公司 | Target parameter selection method in reverse proxy evaluation model and related apparatus |
CN113079130A (en) * | 2020-01-06 | 2021-07-06 | 上海交通大学 | Multimedia management and control system and management and control method |
CN113780485A (en) * | 2021-11-12 | 2021-12-10 | 浙江大华技术股份有限公司 | Image acquisition, target recognition and model training method and equipment |
WO2023093346A1 (en) * | 2021-11-25 | 2023-06-01 | 支付宝(杭州)信息技术有限公司 | Exogenous feature-based model ownership verification method and apparatus |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090319457A1 (en) * | 2008-06-18 | 2009-12-24 | Hong Cheng | Method and apparatus for structural data classification |
DE102009009228A1 (en) * | 2009-02-17 | 2010-08-26 | GEMAC-Gesellschaft für Mikroelektronikanwendung Chemnitz mbH | Agglutination-based detection of disease comprises adding substrate of buffer, measuring initial intensity of buffer, diluting blood sample with buffer, measuring reference intensity and originating test person with disease to diagnose |
US20160315952A1 (en) * | 2015-04-27 | 2016-10-27 | Cisco Technology, Inc. | Detecting Network Address Translation Devices In A Network Based On Network Traffic Logs |
CN106485230A (en) * | 2016-10-18 | 2017-03-08 | 中国科学院重庆绿色智能技术研究院 | Neural-network-based face detection model training, face detection method and system |
CN106528771A (en) * | 2016-11-07 | 2017-03-22 | 中山大学 | Fast structural SVM text classification optimization algorithm |
KR20170083419A (en) * | 2016-01-08 | 2017-07-18 | 마우키스튜디오 주식회사 | Deep learning model training method using many unlabeled training data and deep learning system performing the same |
CN107766860A (en) * | 2017-10-31 | 2018-03-06 | 武汉大学 | Natural scene image text detection method based on cascaded convolutional neural networks |
CN107909021A (en) * | 2017-11-07 | 2018-04-13 | 浙江师范大学 | Guideboard detection method based on a single deep convolutional neural network |
CN108364073A (en) * | 2018-01-23 | 2018-08-03 | 中国科学院计算技术研究所 | Multi-label learning method |
Non-Patent Citations (3)
Title |
---|
SHAOQING REN et al.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence * |
LIU Yan et al.: "New improved proximal support vector machine algorithm for handling imbalanced samples", Journal of Computer Applications * |
OUYANG Yuanyou: "Research on classification of imbalanced data sets based on hybrid sampling", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020087974A1 (en) * | 2018-10-30 | 2020-05-07 | 北京字节跳动网络技术有限公司 | Model generation method and device |
CN110070505A (en) * | 2019-04-12 | 2019-07-30 | 北京迈格威科技有限公司 | Enhance the method and apparatus of image classification plant noise robustness |
CN112434073A (en) * | 2019-08-24 | 2021-03-02 | 北京地平线机器人技术研发有限公司 | Method and device for determining sample selection model |
CN112434073B (en) * | 2019-08-24 | 2024-03-19 | 北京地平线机器人技术研发有限公司 | Method and device for determining sample selection model |
WO2021051879A1 (en) * | 2019-09-17 | 2021-03-25 | 平安科技(深圳)有限公司 | Target parameter selection method in reverse proxy evaluation model and related apparatus |
CN112347278A (en) * | 2019-10-25 | 2021-02-09 | 北京沃东天骏信息技术有限公司 | Method and apparatus for training a characterization model |
CN113079130A (en) * | 2020-01-06 | 2021-07-06 | 上海交通大学 | Multimedia management and control system and management and control method |
CN113079130B (en) * | 2020-01-06 | 2022-08-19 | 上海交通大学 | Multimedia management and control system and management and control method |
CN111770317B (en) * | 2020-07-22 | 2023-02-03 | 平安国际智慧城市科技股份有限公司 | Video monitoring method, device, equipment and medium for intelligent community |
CN111770317A (en) * | 2020-07-22 | 2020-10-13 | 平安国际智慧城市科技股份有限公司 | Video monitoring method, device, equipment and medium for intelligent community |
CN112395179A (en) * | 2020-11-24 | 2021-02-23 | 创新奇智(西安)科技有限公司 | Model training method, disk prediction method, device and electronic equipment |
CN112395179B (en) * | 2020-11-24 | 2023-03-10 | 创新奇智(西安)科技有限公司 | Model training method, disk prediction method, device and electronic equipment |
CN113780485A (en) * | 2021-11-12 | 2021-12-10 | 浙江大华技术股份有限公司 | Image acquisition, target recognition and model training method and equipment |
WO2023093346A1 (en) * | 2021-11-25 | 2023-06-01 | 支付宝(杭州)信息技术有限公司 | Exogenous feature-based model ownership verification method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN109447156B (en) | 2022-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109447156A (en) | Method and apparatus for generating model | |
CN109344908A (en) | Method and apparatus for generating model | |
CN109376267A (en) | Method and apparatus for generating model | |
CN109492128A (en) | Method and apparatus for generating model | |
CN109214343A (en) | Method and apparatus for generating face critical point detection model | |
CN109191453A (en) | Method and apparatus for generating image category detection model | |
CN109145828A (en) | Method and apparatus for generating video classification detection model | |
CN108446387A (en) | Method and apparatus for updating face registration library | |
CN109308490A (en) | Method and apparatus for generating information | |
CN108520220A (en) | model generating method and device | |
CN109446990A (en) | Method and apparatus for generating information | |
CN109740018A (en) | Method and apparatus for generating video tab model | |
CN108171191B (en) | Method and apparatus for detecting face | |
CN109447246A (en) | Method and apparatus for generating model | |
CN109976997A (en) | Test method and device | |
CN110175555A (en) | Facial image clustering method and device | |
CN109815365A (en) | Method and apparatus for handling video | |
CN109947989A (en) | Method and apparatus for handling video | |
CN109961032A (en) | Method and apparatus for generating disaggregated model | |
CN108960110A (en) | Method and apparatus for generating information | |
CN110084317A (en) | The method and apparatus of image for identification | |
CN110263748A (en) | Method and apparatus for sending information | |
CN110009059A (en) | Method and apparatus for generating model | |
CN108062416B (en) | Method and apparatus for generating label on map | |
CN109145973A (en) | Method and apparatus for detecting monocrystaline silicon solar cell defect |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||