Detailed Description
To enable those skilled in the art to better understand the technical solutions in the embodiments of this specification, the technical solutions in the embodiments are described in detail below with reference to the accompanying drawings. Clearly, the described embodiments are only a part of the embodiments of this specification, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this specification shall fall within the scope of protection.
An embodiment of this specification provides a prediction model training method for a target scene. As shown in Fig. 1, the method may comprise the following steps:
S101: obtain N labeled source training sample sets of N source scenes and 1 labeled target training sample set of 1 target scene, where N is a preset positive integer.
In the solution provided by this specification, the problem that a target scene cannot accumulate a sufficient number of training samples in a short time, so that a model with good prediction performance cannot be trained, is addressed by training a prediction model for the target scene using a large number of training samples accumulated in advance in other source scenes, combined with a small number of training samples from the target scene.
A source scene may be a scene that has some similarity to the target scene. For example, suppose the target scene is the trading market of Thailand, for which a transaction risk-control model needs to be trained; the source scenes may then be the trading markets of regions such as Malaysia, the United States, and Japan. Among these, features such as consumption level, per-capita income, and personal consumption habits in Malaysia are more similar to those of Thailand, while the same features in regions such as the United States and Japan are less similar. That is, the trading market of Malaysia can be considered more similar to that of Thailand, and the trading markets of the United States, Japan, and so on less similar.
Each source scene accumulates a large number of source training samples in advance; a source training sample set may then be formed directly from the accumulated samples, or the accumulated samples may be screened manually or by an algorithm to obtain enough source training samples usable for training the target-scene prediction model, which then form the source training sample set. Each source training sample carries a label; a label may be added manually, generated from an event, or predicted and added by an algorithm. The small number of target training samples accumulated in the target scene likewise carry labels, which may be added in any of the above ways.
There are N source training sample sets for training the target-scene prediction model, that is, one or more. For convenience of description, the N source training sample sets are denoted S1, S2, ..., SN, and the target training sample set is denoted T.
In the embodiments of this specification, a prediction model for the target scene may be trained from S1, S2, ..., SN and T based on a supervised learning algorithm.
In addition, the target training samples may also include some unlabeled samples, so that the prediction model for the target scene is trained from S1, S2, ..., SN and T based on a semi-supervised learning algorithm. It is worth noting that, in the prior art, semi-supervised learning targets labeled and unlabeled samples of the same type of data, whereas in this application S1, S2, ..., SN and T are N+1 sets of samples that are not fully consistent; strictly speaking, this differs from the traditional application scenario of semi-supervised learning.
It can be understood that the label values of the source and target training samples may also serve purposes other than model training, such as verifying the trained model. Therefore, in the embodiments of this specification, the prediction model for the target scene may also be trained from S1, S2, ..., SN and T based on an unsupervised learning algorithm.
Those skilled in the art may flexibly choose the training mode and the specific algorithm according to the actual conditions of S1, S2, ..., SN and T and of the source and target scenes; this specification imposes no limitation on this.
S102: for each sample set among the N source training sample sets, merge the sample set with the target training sample set;
S103: using the merged sample set, train an alternative model;
S104: input each sample of the target training sample set into the alternative model, and calculate the prediction error of the model from the predicted values output by the model and the label values of the samples.
For ease of description, S102 to S104 are explained together. S1, S2, ..., SN are each merged with T to obtain N merged sample sets, and the N merged sample sets are used to train N models respectively.
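As a concrete illustration, the merge-and-train step of S102 and S103 can be sketched as below. This is only a minimal sketch: `train_model` is a hypothetical stand-in for whatever supervised learner is actually chosen (the specification does not fix one), reduced here to a trivial majority-class predictor so the sketch is runnable.

```python
# Hypothetical sketch: merge each source set S_k with the target set T and
# train one candidate ("alternative") model per merged set.

def train_model(samples):
    # samples: list of (features, label) pairs. Return a trivial classifier
    # that always predicts the majority label of its training data.
    labels = [y for _, y in samples]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

def train_candidates(source_sets, target_set):
    """For each source set S_k, train one candidate model on S_k merged with T."""
    return [train_model(s_k + target_set) for s_k in source_sets]

S = [[((0,), 1), ((1,), 1)], [((2,), 0)]]   # two illustrative source sets S_1, S_2
T = [((3,), 1)]                             # illustrative target set T
candidates = train_candidates(S, T)
print(len(candidates))   # → 2 (one candidate per source set)
```

A real implementation would substitute an actual learner (an SVM, a gradient-boosted tree, etc.) for `train_model`; the merging logic is unchanged.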
As stated above, the N source scenes may each have some similarity to the target scene, and the similarities of different source scenes to the target scene differ in degree. Correspondingly, the source training samples accumulated in each source scene share some data features with, and differ in others from, the target training samples accumulated in the target scene; some source training samples may share more data features with the target training samples, and others fewer.
Therefore, after S1, S2, ..., SN are each merged with T, the different alternative models obtained by training also differ in the accuracy with which they predict the target scene. Suppose the source training samples in Si share many data features with the target training samples in T, while the source training samples in Sj share few; then the model trained after merging Si with T may predict the target scene more accurately than the model trained after merging Sj with T.
Of course, although the source training samples in Sj share fewer data features with the target training samples in T, the target training samples may share with the samples in Sj certain data features that the samples in Si lack, which makes using Sj for target-scene model training necessary. And if those features are particularly important, with a large influence on the trained prediction model, then the model trained from Sj and T may predict the target scene more accurately than the model trained from Si and T.
Based on the above analysis, the accuracy with which each of the N trained models predicts the target scene can be verified; in the embodiments of this specification, the verification is performed by calculating each model's prediction error. Taking one model as an example, each sample of the target training sample set is input into the model to obtain the predicted value output by the model for each sample, and the prediction error of the model is calculated by combining the predicted values with the label values of the samples.
Specifically, in the embodiments of this specification, the predicted value output by the model for each sample and the label value of each sample may be obtained, and the difference between the predicted value and the label value determined for each sample; the preset error-calculation weight of each sample is determined; and the determined differences are weighted and summed using the determined error-calculation weights to obtain the prediction error of the model.
The error-calculation weights of the samples may all be identical, in which case the prediction error of the model is equivalent to the sum of its prediction errors on the individual target training samples. As stated above, different source training samples differ from the target training samples to different extents, and the target training samples also differ among themselves, so different target training samples matter differently to model training. Therefore, a higher error-calculation weight may be set for a more important sample when calculating the model's prediction error, increasing the share of that sample's prediction error in the model's total prediction error.
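The weighted error computation just described can be sketched as follows. The function and variable names are illustrative (not from the specification), and the 0/1 per-sample difference matches the classification case described later in the concrete example.

```python
# A minimal sketch of the weighted prediction error: each target sample's
# error contribution is scaled by its (assumed preset) error-calculation
# weight, so more important samples contribute more to the total.

def weighted_prediction_error(model, samples, weights):
    """Weighted sum of per-sample differences between prediction and label."""
    total = 0.0
    for (x, y), w in zip(samples, weights):
        diff = 0 if model(x) == y else 1   # 0/1 difference, as in the text
        total += w * diff
    return total

always_one = lambda x: 1
samples = [((0,), 1), ((1,), 0), ((2,), 0)]   # labels: 1, 0, 0
weights = [0.5, 0.3, 0.2]                     # more important samples weigh more
print(weighted_prediction_error(always_one, samples, weights))   # 0.3 + 0.2
```

With all weights equal, this reduces to the plain error count described first in the text.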
Based on the above principle, in the embodiments of this specification, identical or different sample weights may also be set for the samples in the N+1 obtained sample sets, and the default sample weight of each sample in the N+1 sample sets is determined before the first iterative processing. A sample weight indicates the importance of the corresponding sample within its sample set, so that more important samples play a greater role in model training. For example, higher sample weights may be set for source training samples that are more similar to the target training samples, so that the trained model is better suited to the target scene.
It can be seen that both the error-calculation weight and the sample weight used in training indicate the importance of the corresponding sample within its sample set. In the embodiments of this specification, when determining the preset error-calculation weight of each sample, for any sample input into the model, the sample weight of that sample in the current iteration is obtained and used as the sample's error-calculation weight in the current prediction-error calculation.
In addition, when the target training sample set contains few samples, all target training samples may be used to calculate the model's prediction error; when it contains many samples, the target training sample set may be sampled to obtain a subset, and the samples in the subset used for the prediction-error calculation. The specific sample size may be set flexibly, statically or dynamically, by those skilled in the art according to factors such as representativeness, verification cost, and iteration speed; this specification imposes no limitation on this.
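The subsampling option can be sketched as below; the subset size and seed are illustrative choices, not values given by the specification.

```python
# Hedged sketch of the subsampling idea: when T is large, estimate a model's
# prediction error on a fixed-size random subset instead of the full set.

import random

def error_on_subset(model, target_samples, max_samples=1000, seed=0):
    rng = random.Random(seed)
    if len(target_samples) > max_samples:
        target_samples = rng.sample(target_samples, max_samples)
    wrong = sum(1 for x, y in target_samples if model(x) != y)
    return wrong / len(target_samples)

always_zero = lambda x: 0
big_t = [((i,), i % 2) for i in range(5000)]   # half the labels are 1
err = error_on_subset(always_zero, big_t, max_samples=1000)
print(0.0 <= err <= 1.0)   # → True
```

Because the subset is only used to rank the N candidate models against each other, a moderate subset usually suffices, which is the lateral-comparison point the text makes.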
S105: take the model with the smallest prediction error among the N trained alternative models as the preferred model of the current iteration.
In the solution provided by this specification, multiple iterations (assumed to be M) are performed; each iteration trains and obtains N models, and after M iterations the M*N trained models yield the prediction model suited to the target scene. Specifically, in each iteration, the N trained models serve as the alternative models, the prediction error of each model serves as the screening index, and the alternative model with the smallest prediction error is taken as the preferred model of that iteration.
As described above, the sample weights may also be updated in each iteration. In the embodiments of this specification, the parameters required for the update are obtained using the preferred model of the current iteration. For any sample in the N+1 obtained sample sets, the sample weight of the sample in the current iteration may be updated according to preset update parameters and a preset update rule, and the updated sample weight is used in the next iteration. The preset update parameters include the label value of the sample and the predicted value output after the sample is input into the preferred model obtained in the current iteration.
For example, the label value of the sample and the predicted value output after the sample is input into the preferred model of the current iteration may be determined, and the sample weight of the sample in the current iteration updated using the absolute value of the difference between the determined label value and predicted value. If the sample belongs to one of the N source training sample sets, the updated result is negatively correlated with the absolute value; if the sample belongs to the target training sample set, the updated result is positively correlated with the absolute value.
Since the sample weights are updated during the iterative processing, the default sample weights obtained before the first iterative processing may be set to randomly generated values, to values identical for all samples, to values obtained by a preset algorithm according to similarity, and so on.
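Two of the initialisation options just named can be sketched as below; the function names are illustrative, and a similarity-based initialisation would simply replace the weight expression with a similarity score.

```python
# Illustrative sketch of the two simplest default-weight options: uniform
# weights within each sample set, or random weights.

import random

def init_uniform(sample_sets):
    """Give every sample in each set weight 1/|set|, so each set sums to 1."""
    return [[1.0 / len(s)] * len(s) for s in sample_sets]

def init_random(sample_sets, seed=0):
    rng = random.Random(seed)
    return [[rng.random() for _ in s] for s in sample_sets]

sets = [[1, 2, 3], [4, 5]]        # stand-ins for one source set and T
w = init_uniform(sets)
print(len(w), len(w[0]))          # → 2 3 (one weight list per set)
```

Whichever option is chosen, the iterative updates described next quickly reshape the weights, which is why the specification leaves the choice open.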
Under the above weight update, for each sample in the N source training sample sets, if the absolute difference between the preferred model's predicted value for a sample and its actual label value is large, that is, if the prediction error on the sample is large, the sample can be considered to differ greatly from the target training samples; its sample weight should be reduced, so that it does not exert a negative-transfer influence on the training of the target-scene model.
For each sample in the target training sample set, if the preferred model's prediction error on a sample is large, the current model can be considered to have learned the sample poorly; its sample weight should be increased, raising the importance of the sample when the model is trained in the next iteration.
The updated sample weights are used for model training in the next iterative processing, and may also be used for the prediction-error calculation in the next iterative processing; the specific calculation process is the same as above and is not repeated.
S106: judge whether the prediction model training for the target scene has reached a preset termination condition; if so, proceed to S107, otherwise return to S102.
The preset termination condition may be that the prediction accuracy of the trained model reaches some index, for example, that the prediction error of the preferred model obtained in the current iteration is smaller than a preset error threshold.
If the number of target training samples is small, and especially if black samples are few, the prediction error calculated from the target training samples, although usable for lateral comparison among models, may perform poorly when used to judge a model's prediction accuracy. Therefore, an iteration-count threshold may also be calculated and set in advance from human experience or by an algorithm, and the iteration count reaching the preset threshold used as the termination condition of the iterations.
Of course, the above two modes, or other modes, may also be used in combination; this specification imposes no limitation on this.
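Combining the two termination modes can be sketched in a few lines; the thresholds here are illustrative values, not values fixed by the specification.

```python
# Small sketch: stop when the iteration count reaches a threshold OR the
# best model's prediction error drops below an error threshold.

def should_stop(iteration, best_error, max_iters=10, error_threshold=0.05):
    return iteration >= max_iters or best_error < error_threshold

print(should_stop(3, 0.2))     # → False (neither condition met)
print(should_stop(10, 0.2))    # → True  (iteration limit reached)
print(should_stop(3, 0.01))    # → True  (error below threshold)
```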
S107: after the iterations end, according to a preset screening rule, select all or some of the preferred models obtained in the iterations and weight them to obtain the prediction model for the target scene; the weight of any preferred model that is weighted is determined according to the prediction error of that model, and the prediction error is negatively correlated with the determined weight.
Suppose M iterations are performed in total; then M preferred models are obtained when the iterations end. The preset screening rule for selecting among the models may be set flexibly by those skilled in the art according to demand: for example, selecting the models whose prediction error is smaller than some value; or ranking the models by prediction error and selecting some number of models in priority order; or verifying the M models with other samples of the target scene and selecting some or all of the models with higher accuracy; and so on. This specification imposes no limitation on this.
According to the solution provided by this specification, the finally obtained prediction model for the target scene is composed of the M preferred models obtained through M iterations of training, in which the models that predict the target scene relatively accurately play a larger role and the relatively inaccurate models play a smaller one, making the whole prediction model better suited to the target scene.
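The weighted combination of S107 can be sketched as follows. The specification only requires the combination weight to be negatively correlated with the prediction error; the `log((1-e)/e)` choice below is one common option, used here purely for illustration.

```python
# Hedged sketch of S107: combine the selected preferred models with weights
# that shrink as the model's prediction error grows.

import math

def combine(models_with_errors):
    """models_with_errors: list of (model, prediction_error) pairs."""
    weights = [math.log((1 - e) / e) for _, e in models_with_errors]
    def prediction(x):
        score = sum(w * m(x) for (m, _), w in zip(models_with_errors, weights))
        return 1 if score >= 0 else -1
    return prediction

good = (lambda x: 1, 0.1)    # low error  -> large weight
bad = (lambda x: -1, 0.4)    # higher error -> smaller weight
f = combine([good, bad])
print(f(None))   # → 1 (the low-error model dominates the vote)
```

Any other monotonically decreasing weight function would satisfy the negative-correlation requirement equally well.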
The prediction model training method for a target scene provided by this specification is illustrated below with a more specific example.
At present, countries and regions apply risk-control models to their local trading markets in fields such as electronic transactions and credit card transactions, to perform risk prediction and control on transactions.
In some regions, transaction data has been accumulated for a long time, in large volume and with rich data features, so the transaction data can be used as training samples to train a risk-control model that performs well for the local market. In other regions, transaction data has been accumulated for a short time and in small volume, and the number of black samples may even be insufficient.
Taking credit card transactions as an example, there are risks such as fraudulent transactions made after a credit card is stolen. If a user finds that a credit card has been stolen, the user can report the case to the card-issuing bank; after the bank accepts and investigates the case, it notifies the relevant third-party payment institution, which can label the corresponding transaction data as black samples according to the case.
The above black-sample labeling process usually takes a long period, such as several months. For a scene in which transaction data has been accumulated for a short time, the data volume and the number of black samples are insufficient, and a risk-control model with good performance cannot be trained.
For this problem, according to the solution provided by this specification, the trading markets of countries with longer accumulation times can be used as source scenes and the one with a shorter time as the target scene; the large and rich training samples in the source scenes, assisted by the small amount of data accumulated in the target scene, are then used to train a risk-control model suited to the target scene.
Fig. 2 is a schematic diagram of the design architecture of the training process.
First, labeled transaction data of the N source scenes is obtained to form the source training sample sets S1, S2, ..., SN, and labeled transaction data of the target scene is obtained to form the target training sample set T. In each sample set, transactions made on credit cards reported stolen by users are black samples, and unreported transactions are white samples.
Then, random sample weights may be initialized for the samples of the N+1 sample sets: in any source training sample set Sk, the sample weights of its n_k training samples are w_1^{Sk}, ..., w_{n_k}^{Sk}, and in the target training sample set T, the sample weights of its m training samples are w_1^T, ..., w_m^T.
After the N+1 training sample sets and the sample weights of their samples are determined, as shown in Fig. 2, M iterative processings can be performed (M being the preset count), and in each iteration:
For each of the sample sets S1, S2, ..., SN, taking Sk as an example, Sk is merged with T, and the merged sample set (Sk ∪ T) together with the corresponding sample weights is used, based on a supervised learning algorithm such as SVM, to train a classifier h_k. The N source training sample sets are each trained with T in this way to obtain N classifiers.
The prediction error of each classifier is then calculated. Taking classifier h_k as an example, each training sample in T is input into h_k to obtain the predicted value output for each sample, and the prediction error ε of the classifier is calculated according to the following formula (1):

ε = ( Σ_{j=1}^{m} w_j^T · |h_k(x_j^T) − y_j^T| ) / ( Σ_{j=1}^{m} w_j^T )    (1)

where w_j^T is the sample weight of the j-th sample in T, y_j^T is the actual label value of the j-th sample, and h_k(x_j^T) is the predicted value output by classifier h_k for the j-th sample. If the predicted value of a sample is identical to its actual label value, the difference is 0; if different, the difference is 1.
According to formula (1), the prediction errors of the N classifiers obtained in the current iteration can be calculated separately; the classifier with the smallest prediction error is selected as the base classifier h_t obtained by the current iteration, and the corresponding prediction error ε_t of the base classifier is determined. According to the following formulas (2) and (3), the intermediate parameters α_S and α_t used for the sample weight update can be calculated:

α_S = 1 / (1 + √(2 · ln n / M))    (2)

α_t = (1 − ε_t) / ε_t    (3)

where n is the total number of samples in the N source training sample sets and M is the iteration count; it can be seen that α_t is negatively correlated with ε_t.
Each sample in the N+1 sample sets is input into the base classifier h_t to obtain its predicted value, and with the intermediate parameters α_S and α_t calculated by formulas (2) and (3), the sample weights of the samples in the N+1 sample sets are updated according to the following formulas (4) and (5):

w_i^{Sk} ← w_i^{Sk} · α_S^{|h_t(x_i^{Sk}) − y_i^{Sk}|}    (4)

w_j^T ← w_j^T · α_t^{|h_t(x_j^T) − y_j^T|}    (5)

where w_i^{Sk} is the sample weight of the i-th sample in Sk, y_i^{Sk} is its actual label value, and h_t(x_i^{Sk}) is the predicted value of the base classifier h_t for that sample; w_j^T is the sample weight of the j-th sample in T, y_j^T is its actual label value, and h_t(x_j^T) is the predicted value of h_t for that sample.
Through formulas (4) and (5), in the N source training sample sets the sample weights of the samples that the base classifier h_t mispredicts (which can be considered to differ greatly from the target training samples) are reduced, while in the target training sample set the sample weights of the samples that h_t mispredicts (which can be considered to need focused learning) are increased. The updated sample weights are used in the next iteration.
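The weight update described above can be sketched as follows. This is one consistent reading of the update (it follows the classic TrAdaBoost recipe, which the surrounding description matches): source weights shrink by a factor α_S per mistake, target weights grow by α_t per mistake. The function name and the numeric values are illustrative assumptions.

```python
# Runnable sketch of the sample-weight update: misclassified source samples
# are down-weighted, misclassified target samples are up-weighted.

import math

def update_weights(source_w, source_wrong, target_w, target_wrong, eps_t, n, M):
    alpha_s = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n) / M))  # cf. formula (2)
    alpha_t = (1.0 - eps_t) / eps_t                           # cf. formula (3)
    new_source = [w * (alpha_s if wrong else 1.0)             # cf. formula (4)
                  for w, wrong in zip(source_w, source_wrong)]
    new_target = [w * (alpha_t if wrong else 1.0)             # cf. formula (5)
                  for w, wrong in zip(target_w, target_wrong)]
    return new_source, new_target

s_w, t_w = [1.0, 1.0], [1.0, 1.0]
s_wrong, t_wrong = [True, False], [True, False]
ns, nt = update_weights(s_w, s_wrong, t_w, t_wrong, eps_t=0.2, n=100, M=10)
print(ns[0] < ns[1], nt[0] > nt[1])   # → True True
```

Note that α_S < 1 always, while α_t > 1 whenever ε_t < 0.5, which produces exactly the shrink/grow behaviour the text describes.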
After the iteration count reaches M, the iterations stop and M base classifiers are obtained, which are weighted and summed according to the following formula (6) to obtain the risk-control model f for the target scene:

f(x) = sign( Σ_t α_t · h_t(x) )    (6)

After the final risk-control model f is obtained through the above process, the model can be used to perform risk control on transactions of the target scene. For example, transaction events are scored, and a transaction whose score is lower than a preset threshold is determined to be a risky transaction and is rejected.
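Applying the final model of formula (6) can be sketched as below; the two base classifiers, their α weights, and the transaction fields are toy stand-ins introduced purely for illustration.

```python
# Illustrative sketch of formula (6): each base classifier h_t votes with
# weight alpha_t, and the sign of the weighted sum is the risk decision
# (+1 = pass, -1 = flagged as risky).

def final_model(base_classifiers, alphas):
    """f(x) = sign(sum_t alpha_t * h_t(x)), with h_t(x) in {-1, +1}."""
    def f(x):
        score = sum(a * h(x) for h, a in zip(base_classifiers, alphas))
        return 1 if score >= 0 else -1
    return f

h1 = lambda tx: 1 if tx["amount"] < 500 else -1   # toy rule: large amount risky
h2 = lambda tx: 1                                  # toy rule: always pass
f = final_model([h1, h2], [4.0, 1.5])              # alpha_1 > alpha_2
print(f({"amount": 100}))   # → 1   (transaction passes)
print(f({"amount": 900}))   # → -1  (flagged as risky)
```

Because α_t is negatively correlated with ε_t, the more accurate base classifier (here h1, with α = 4.0) dominates the vote, which is the property the surrounding text emphasises.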
As it can be seen that using above scheme, it can be in source scene of the existing multiple data of target scene with certain similitude
In the case where, the sample data a large amount of, abundant for making full use of multiple source scenes to be accumulated, auxiliary mark scene middle or short term inner product
Tired low volume data, training obtain air control model.
In addition, based on each source training sample set and target training sample set, obtaining multiple points in each iterative processing
Class device, to select the best base classifier of effect in current iteration.Also, sample weights are carried out more using base classifier
Newly, make in each iteration, acted in smaller, previous iteration with the lower source sample of target sample similarity and do not predict accurate mesh
This effect of standard specimen is bigger, so that training obtains the air control model for being more suitable for target scene, avoids because of certain source samples and target sample
Originally the bring that differs greatly negative sense migration effect.
Corresponding to the above method embodiment, an embodiment of this specification further provides a prediction model training apparatus for a target scene. As shown in Fig. 3, the apparatus may comprise:
an input module 110, configured to obtain N labeled source training sample sets of N source scenes and 1 labeled target training sample set of 1 target scene, where N is a preset positive integer;
a learning module 120, configured to, for each sample set among the N source training sample sets, merge the sample set with the target training sample set, and train an alternative model using the merged sample set;
a screening module 130, configured to input each sample of the target training sample set into each alternative model, calculate the prediction error of each model from the predicted values output by the model and the label values of the samples, and take the model with the smallest prediction error among the N trained alternative models as the preferred model of the current iteration;
the learning module 120 and the screening module 130 cooperating to realize the iterative processing until the iterations reach a preset termination condition;
an output module 140, configured to, after the iterations end, according to a preset screening rule, select all or some of the preferred models obtained in the iterations and weight them to obtain the prediction model for the target scene; where the weight of any preferred model that is weighted is determined according to the prediction error of that model, and the prediction error is negatively correlated with the determined weight.
In a specific embodiment provided by this specification, before the first iterative processing, the input module 110 may further be configured to:
determine the default sample weight of each sample in the N+1 obtained sample sets, the sample weight indicating the importance of the corresponding sample within its sample set.
In a specific embodiment provided by this specification, as shown in Fig. 4, the apparatus may further comprise:
an update module 150, configured to, for any sample in the N+1 obtained sample sets, update the sample weight of the sample in the current iteration according to preset update parameters and a preset update rule, the updated sample weight being used in the next iteration;
where the preset update parameters include the label value of the sample and the predicted value output after the sample is input into the preferred model obtained in the current iteration;
the update module 150 cooperating with the learning module 120 and the screening module 130 to realize the iterative processing.
In a specific embodiment provided by this specification, the update module may comprise:
a parameter determination submodule 151, configured to determine the label value of the sample and the predicted value output after the sample is input into the preferred model obtained in the current iteration;
a weight update submodule 152, configured to update the sample weight of the sample in the current iteration using the absolute value of the difference between the determined label value and predicted value;
where, if the sample belongs to one of the N source training sample sets, the updated result is negatively correlated with the absolute value; if the sample belongs to the target training sample set, the updated result is positively correlated with the absolute value.
In a specific embodiment provided by this specification, the preset termination condition may include:
the iteration count reaching a preset count threshold; and/or
the prediction error of the preferred model obtained in the current iteration being smaller than a preset error threshold.
In a specific embodiment provided by this specification, the screening module 130 may comprise:
a calculation parameter determination submodule 131, configured to obtain the predicted value output by the model for each sample and the label value of each sample, and determine the difference between the predicted value and the label value of each sample;
a calculation weight determination submodule 132, configured to determine the preset error-calculation weight of each sample;
a prediction error calculation submodule 133, configured to weight and sum the determined differences using the determined error-calculation weights to obtain the prediction error of the model.
In a specific embodiment provided by this specification, the calculation weight determination submodule 132 may specifically be configured to:
for any sample input into the model, obtain the sample weight of the sample in the current iteration, and use the sample weight as the sample's error-calculation weight in the current prediction-error calculation;
where the sample weight is preset and/or calculated in the iterations, and indicates the importance of the corresponding sample within its sample set.
For the implementation processes of the functions and effects of the modules in the above apparatus, reference may be made to the implementation processes of the corresponding steps in the above method; details are not repeated here.
An embodiment of this specification further provides a computer device, comprising at least a memory, a processor, and a computer program stored on the memory and runnable on the processor, where the processor, when executing the program, implements the above prediction model training method for a target scene. The method comprises at least:
a prediction model training method for a target scene, the method comprising:
obtaining N labeled source training sample sets of N source scenes and 1 labeled target training sample set of 1 target scene, where N is a preset positive integer;
performing iterative processing with the following steps until a preset termination condition is reached: for each sample set among the N source training sample sets, merging the sample set with the target training sample set; training an alternative model using the merged sample set; inputting each sample of the target training sample set into the alternative model, and calculating the prediction error of the model from the predicted values output by the model and the label values of the samples; taking the model with the smallest prediction error among the N trained alternative models as the preferred model of the current iteration;
after the iterations end, according to a preset screening rule, selecting all or some of the preferred models obtained in the iterations and weighting them to obtain the prediction model for the target scene; where the weight of any preferred model that is weighted is determined according to the prediction error of that model, and the prediction error is negatively correlated with the determined weight.
Fig. 5 shows a more specific schematic diagram of the hardware structure of a computing device provided by an embodiment of this specification. The device may comprise a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050, where the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040 realize communication connections with one another inside the device through the bus 1050.
The processor 1010 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), one or more integrated circuits, or the like, and is configured to execute relevant programs to realize the technical solutions provided by the embodiments of this specification.
The memory 1020 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other applications; when the technical solutions provided by the embodiments of this specification are implemented by software or firmware, the relevant program code is stored in the memory 1020 and is called and executed by the processor 1010.
The input/output interface 1030 is used to connect an input/output module so as to realize information input and output. The input/output module may be configured as a component within the device (not shown in the figure), or may be external to the device to provide the corresponding function. The input devices may include a keyboard, a mouse, a touch screen, a microphone, various sensors, and the like; the output devices may include a display, a loudspeaker, a vibrator, an indicator light, and the like.
The communication interface 1040 is used to connect a communication module (not shown in the figure) so as to realize communication and interaction between this device and other devices. The communication module may communicate in a wired manner (such as USB or a cable) or in a wireless manner (such as a mobile network, WiFi, or Bluetooth).
The bus 1050 includes a pathway that transmits information between the components of the device (such as the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040).
It should be noted that although the above device only shows the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040, and the bus 1050, in a specific implementation the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above device may also contain only the components necessary to implement the solutions of the embodiments of this specification, without including all the components shown in the figure.
An embodiment of this specification also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the aforementioned prediction model training method for a target scene is realized. The method includes at least:
a prediction model training method for a target scene, the method comprising:
obtaining N labeled source training sample sets of N source scenes and one labeled target training sample set of the target scene, where N is a preset positive integer;
performing iterative processing with the following steps until a preset termination condition is reached:
for each of the N source training sample sets: merging that sample set with the target training sample set; training a candidate model on the merged sample set; inputting each sample of the target training sample set into the candidate model, and computing the candidate model's prediction error according to the predicted values output by the model and the label values of the samples;
taking, among the N trained candidate models, the model with the smallest prediction error as the preferred model of the current iteration;
after the iterations end, selecting, according to a preset screening rule, all or some of the preferred models obtained in the iterations, and weighting them to obtain the prediction model for the target scene; where the weight of any preferred model being weighted is determined according to that model's prediction error, and the prediction error is negatively correlated with the determined weight.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may realize information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
From the above description of the embodiments, those skilled in the art can clearly understand that the embodiments of this specification can be realized by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions of the embodiments of this specification, or in other words the parts that contribute to the prior art, can essentially be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments of this specification or in certain parts of the embodiments.
The systems, devices, modules, or units illustrated in the above embodiments may be specifically realized by a computer chip or an entity, or by a product with a certain function. A typical implementation device is a computer, whose specific form may be a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an e-mail transceiver, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
The embodiments in this specification are described in a progressive manner; the same or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiments are substantially similar to the method embodiments, their description is relatively simple, and for the relevant parts reference may be made to the description of the method embodiments. The device embodiments described above are merely illustrative, and the modules described as separate components may or may not be physically separate; when implementing the solutions of the embodiments of this specification, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may also be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The above are only specific embodiments of this specification. It should be noted that those of ordinary skill in the art may also make several improvements and modifications without departing from the principles of the embodiments of this specification, and these improvements and modifications should also be regarded as falling within the protection scope of the embodiments of this specification.