CN110070143A - Method, apparatus, device and storage medium for obtaining training data - Google Patents

Method, apparatus, device and storage medium for obtaining training data

Info

Publication number
CN110070143A
CN110070143A
Authority
CN
China
Prior art keywords
training data
subset
data subset
target
initial
Prior art date
Legal status
Granted
Application number
CN201910356202.2A
Other languages
Chinese (zh)
Other versions
CN110070143B (en)
Inventor
张志伟
李焱
吴丽军
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910356202.2A
Publication of CN110070143A
Application granted
Publication of CN110070143B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions

Abstract

The present disclosure relates to a method, apparatus, device and storage medium for obtaining training data. The method includes: obtaining a target training data subset, the target training data subset being any one of multiple training data subsets of an initial training data set; obtaining, from the training data subsets of the initial training data set, a first reference number of training data subsets other than the target training data subset; obtaining a second reference number of training data items from each of the first reference number of training data subsets, to obtain a first reference number of groups of training data; adding the training data of the first reference number of groups to the target training data subset; and obtaining target training data for training a machine learning model based on the updated target training data subset and the remaining training data subsets of the initial training data set. The present application can improve the recognition accuracy of a model and reduce the time cost of obtaining training data.

Description

Method, apparatus, device and storage medium for obtaining training data
Technical field
The present disclosure relates to the field of artificial intelligence, and in particular to a method, apparatus, device and storage medium for obtaining training data.
Background technique
Deep learning methods are widely applied in fields such as video and image processing, speech recognition, and natural language processing. Taking convolutional neural networks (convolutional neural network, CNN) as an example, the strong fitting capability and end-to-end global optimization of CNNs have greatly improved machine recognition accuracy. For instance, after CNNs were applied to image classification tasks, classification accuracy improved substantially, but the magnitude of the improvement is not always satisfactory, because the recognition accuracy of a model depends on the cleanliness of its training data, i.e., the proportion of noise data in the training data. The less noise data the training data contains, the higher the prediction accuracy of the model. It is therefore usually necessary to preprocess a training data set that contains noise data.
In the related art, methods of obtaining training data include cleaning the data with a data cleaning algorithm, or manually re-annotating the data in the training data set.
However, the above methods of obtaining training data have the following drawbacks: there is currently no general-purpose data cleaning algorithm, and data in different application scenarios require dedicated cleaning strategies, so data cleaning is time-consuming and costly; likewise, manual annotation is expensive and has a long processing cycle.
Summary of the invention
The present disclosure provides a method, apparatus, device and storage medium for obtaining training data, which can overcome the long time and high cost of obtaining training data in the related art.
According to a first aspect of the embodiments of the present disclosure, a method for obtaining training data is provided, including: obtaining a target training data subset, the target training data subset being any one of multiple training data subsets of an initial training data set, each training data subset in the multiple training data subsets corresponding to one class label;
obtaining, from the training data subsets of the initial training data set, a first reference number of training data subsets other than the target training data subset;
obtaining a second reference number of training data items from each of the first reference number of training data subsets, to obtain a first reference number of groups of training data;
adding the training data of the first reference number of groups to the target training data subset to obtain an updated target training data subset, and obtaining, based on the updated target training data subset and the remaining training data subsets of the initial training data set, target training data for training a machine learning model.
Optionally, the second reference number is determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data items contained in each training data subset, the reference ratio being used to determine the amount of training data to be added.
Optionally, before obtaining the target training data subset, the method further includes:
obtaining the initial training data set, and dividing the initial training data set into multiple training data subsets;
the obtaining, based on the updated target training data subset and the remaining training data subsets of the initial training data set, of the target training data for training a machine learning model includes:
selecting one or more training data subsets from the remaining training data subsets of the initial training data set, and processing the selected training data subsets in the same way as the target training data subset is processed, to obtain updated training data subsets;
merging the updated training data subsets, the updated target training data subset and the non-updated training data subsets of the initial training data set to obtain an updated training data set, and using the training data contained in the updated training data set as the target training data for training the machine learning model.
Optionally, after obtaining, based on the updated target training data subset and the remaining training data subsets of the initial training data set, the target training data for training a machine learning model, the method further includes:
training the machine learning model with the target training data;
when the accuracy of the resulting machine learning model is less than or equal to a target value, re-obtaining target training data until the accuracy of the resulting machine learning model is greater than the target value.
Optionally, the multiple training data subsets are obtained according to label information of the training data in the initial training data set, the label information of a training data item characterizing the category of that training data item.
Optionally, the multiple training data subsets are obtained by clustering physical feature information of the training data in the initial training data set.
According to a second aspect of the embodiments of the present disclosure, an apparatus for obtaining training data is provided, the apparatus including:
a first obtaining module, configured to obtain a target training data subset, the target training data subset being any one of multiple training data subsets of an initial training data set, each training data subset in the multiple training data subsets corresponding to one class label;
a second obtaining module, configured to obtain, from the training data subsets of the initial training data set, a first reference number of training data subsets other than the target training data subset;
a third obtaining module, configured to obtain a second reference number of training data items from each of the first reference number of training data subsets, to obtain a first reference number of groups of training data;
a training data obtaining module, configured to add the training data of the first reference number of groups to the target training data subset to obtain an updated target training data subset, and to obtain, based on the updated target training data subset and the remaining training data subsets of the initial training data set, target training data for training a machine learning model.
Optionally, the second reference number is determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data items contained in each training data subset, the reference ratio being used to determine the amount of training data to be added.
Optionally, the first obtaining module is further configured to obtain the initial training data set and divide the initial training data set into multiple training data subsets;
the training data obtaining module is configured to select one or more training data subsets from the remaining training data subsets of the initial training data set, process the selected training data subsets in the same way as the target training data subset is processed to obtain updated training data subsets, and merge the updated training data subsets, the updated target training data subset and the non-updated training data subsets of the initial training data set to obtain an updated training data set, the training data contained in the updated training data set being used as the target training data for training the machine learning model.
Optionally, the apparatus further includes:
a training module, configured to train the machine learning model with the target training data;
when the accuracy of the resulting machine learning model is less than or equal to a target value, target training data is re-obtained until the accuracy of the resulting machine learning model is greater than the target value.
Optionally, the multiple training data subsets are obtained according to label information of the training data in the initial training data set, the label information of a training data item characterizing the category of that training data item.
Optionally, the multiple training data subsets are obtained by clustering physical feature information of the training data in the initial training data set.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to perform the method of the first aspect or any possible implementation of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, wherein when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform the method of the first aspect or any possible implementation of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program (product) is provided, the computer program (product) including computer program code which, when run by a computer, causes the computer to perform the methods of the above aspects.
The technical solutions provided by the embodiments of the present disclosure can bring at least the following beneficial effects:
In the method for obtaining training data provided by the embodiments of the present disclosure, data are obtained from the same initial training data set and the obtained training data are added to the target training data subset, and the target training data for training a machine learning model is obtained from the augmented target training data subset and the remaining training data subsets of the initial training data set. The added noise data can thus offset the influence of the original noise data on model training, which improves the recognition accuracy of the model while reducing the time cost as well as the labor and financial cost of obtaining training data.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory and do not limit the present disclosure.
Detailed description of the invention
The drawings herein are incorporated into and form part of this specification, illustrate embodiments consistent with the present application, and together with the specification serve to explain the principles of the present application.
Fig. 1 is a schematic diagram illustrating the principle of a method for obtaining training data according to an exemplary embodiment;
Fig. 2 is a flowchart of a method for obtaining training data according to an exemplary embodiment;
Fig. 3 is a flowchart of a method for obtaining training data according to an exemplary embodiment;
Fig. 4 is a block diagram of an apparatus for obtaining training data according to an exemplary embodiment;
Fig. 5 is a block diagram of an electronic device according to an exemplary embodiment;
Fig. 6 is a schematic diagram of a terminal according to an exemplary embodiment.
Specific embodiment
Exemplary embodiments are described in detail here, and examples thereof are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings indicate the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
First, the application background of the embodiments of the present application is introduced. In machine learning model training, the training data obtained for the machine learning model generally contains noise data. If the cleanliness of the training data samples does not meet the requirement, using such training data will affect the recognition accuracy of the machine learning model. To make the method provided by the embodiments of the present application easier to understand, an example is given. As shown in Fig. 1, assume that when a machine learning model is trained on training data containing noise data, the target recognition result obtained from the positive samples in the training data points in direction F2, while the recognition result obtained from the noise data in the training data points in direction F1. The resultant will then point in direction F. It can be seen that the recognition result obtained from training data containing noise data deviates from the direction F2 of the target recognition result, i.e., training data containing noise data affects the recognition accuracy of the machine learning model. Fig. 1 also shows that if some additional noise data is added to the training data and the recognition result of the newly added noise data points in direction F3, then after combining F3 and F, the recognition result of the machine learning model will point in direction F2, i.e., the same direction as the target recognition result obtained from the positive samples. It can be seen that adding noise data can improve the recognition result of the machine learning model.
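The following is a minimal numerical sketch of this vector-sum intuition; the specific vectors are illustrative assumptions and are not values taken from the disclosure.

```python
import numpy as np

# Pull of the clean (positive) samples: the target direction F2.
F2 = np.array([1.0, 0.0])
# Pull of the original noise data: the deviating direction F1.
F1 = np.array([0.0, 0.4])
# Resultant without added noise deviates from F2.
F = F1 + F2                      # -> [1.0, 0.4]
# Added noise data pulling in direction F3 cancels the deviation.
F3 = np.array([0.0, -0.4])
F_new = F + F3                   # -> [1.0, 0.0], aligned with F2 again
print(F, F_new)
```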
Based on the above principle, Fig. 2 is a flowchart of a method for obtaining training data according to an exemplary embodiment. As shown in Fig. 2, the method for obtaining training data is used in a terminal and includes the following steps:
In step S21, a target training data subset is obtained, the target training data subset being any one of multiple training data subsets of an initial training data set, each training data subset in the multiple training data subsets corresponding to one class label.
Illustratively, the target training data subset may be determined according to the data category that the machine learning model actually needs to recognize. For example, to improve the accuracy of recognizing apples, the apple training data subset is taken as the target training data subset. Any one or more of the multiple training data subsets of the initial training data set may also be taken as target training data subsets. Those skilled in the art can set the target training data subset according to actual model training needs.
As an optional embodiment of the present application, the initial training data set may be divided into multiple training data subsets according to the label information of the training data in the initial training data set, the label information of a training data item characterizing its category. The label information of a training data item then serves as the class label of that training data item.
Illustratively, as those skilled in the art will appreciate, to train a machine learning model, the training data in the training data set used for training needs to be annotated. To describe the solution of the embodiments of the present application clearly, take a machine learning model for recognizing fruit as an example, and assume that this model can recognize plums, bananas and apples. Through annotation, the machine learning model is told which data are apples, which data are bananas and which data are plums; by training on a large amount of training data, the machine learning model reaches a recognition rate that meets the requirement for recognizing fruit. Therefore, when dividing the initial training data set into training data subsets, the class labels formed through annotation can be used: the data labeled as plum are put into one training data subset, the data labeled as apple into another training data subset, and the data labeled as banana into a third training data subset.
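A minimal sketch of this label-based split is given below; the data representation (a list of (sample, label) pairs and a dictionary of subsets keyed by class label) is an assumption made for illustration only.

```python
from collections import defaultdict

def split_by_label(initial_dataset):
    """Divide an initial training data set into subsets, one per class label.

    `initial_dataset` is assumed to be a list of (sample, label) pairs,
    e.g. [("img_001", "plum"), ("img_002", "apple"), ...].
    Returns a dict mapping each class label to its training data subset.
    """
    subsets = defaultdict(list)
    for sample, label in initial_dataset:
        subsets[label].append(sample)
    return dict(subsets)

# Example with the three fruit classes used in the description.
initial_dataset = [("img_001", "plum"), ("img_002", "apple"), ("img_003", "banana")]
subsets = split_by_label(initial_dataset)
target_subset = subsets["plum"]  # the plum subset chosen as the target subset
```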
As an optional embodiment of the present application, the initial training data set may also be divided into multiple training data subsets by clustering the physical feature information of the training data in the obtained initial training data set.
Illustratively, for the training data of the fruit recognition machine learning model, the physical feature information may be the shape of the fruit, the color of the fruit, or other physical feature data usable for classification. For example, when the physical feature information is the shape of the fruit, clustering on shape may divide the training data into three subsets: data showing a half-moon shape are put into one class; data showing a circular shape whose radius is greater than a certain value into another class; and data showing a circular shape whose radius is less than or equal to that value into a third class.
Which physical feature data to use can be determined by those skilled in the art according to the actual classification requirement. For example, if only two classes are desired, the training data can be divided by color feature: data showing a "yellow" feature into one class and data showing a "red" feature into another. By controlling which physical feature data are selected, the number of resulting classes can also be controlled. The clustering method used in the embodiments of the present application may be the K-Means algorithm or the DBSCAN algorithm. The embodiments of the present application take the resulting subsets to be the plum, apple and banana training data subsets as an example, but are not limited thereto.
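A minimal sketch of the clustering-based split using K-Means follows; the use of scikit-learn and the placeholder feature-extraction function are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_by_clustering(samples, extract_features, n_subsets=3):
    """Divide training data into subsets by clustering physical feature vectors.

    `samples` is a list of training data items; `extract_features` maps one
    sample to a numeric feature vector (e.g. shape or color statistics).
    Returns a list of `n_subsets` training data subsets.
    """
    features = np.array([extract_features(s) for s in samples])
    cluster_ids = KMeans(n_clusters=n_subsets, n_init=10, random_state=0).fit_predict(features)
    return [[s for s, c in zip(samples, cluster_ids) if c == k] for k in range(n_subsets)]
```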
In step S22, from the training data subsets of the initial training data set, a first reference number of training data subsets other than the target training data subset is obtained.
Illustratively, assume that the plum training data subset is taken as the target training data subset; the training data subsets other than the plum subset are then the apple training data subset and the banana training data subset, two subsets in total. When the first reference number of training data subsets other than the target training data subset is obtained, the first reference number may be one or two: only the apple subset may be obtained, only the banana subset may be obtained, or both subsets may be obtained at the same time. The more training data subsets the initial training data set is divided into, the more remaining subsets there are for any given target training data subset, and the more choices there are for the first reference number. Those skilled in the art can obtain any one or more of the remaining training data subsets according to the actual usage requirement.
In step S23, a second reference number of training data items is obtained from each of the first reference number of training data subsets, yielding a first reference number of groups of training data.
Illustratively, when both the apple and banana training data subsets are obtained, a second reference number of training data items is obtained from each of the apple subset and the banana subset, yielding two groups of training data. The second reference number for each subset can be chosen according to the type of that training data subset. For example, with respect to the plum subset, apples are more similar in shape to plums than bananas are, so 20 training data items may be obtained from the apple subset and 10 from the banana subset; the second reference numbers are then 20 and 10 respectively. Alternatively, the type of the subset may be ignored and 10 training data items obtained from each of the apple and banana subsets. Those skilled in the art can determine the number of training data items to obtain from each subset according to the actual usage requirement.
As an optional embodiment of the present application, the second reference number may be determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data items contained in each training data subset, the reference ratio determining the amount of training data to be added.
Illustratively, the second reference number may take, for example, the form of formula (1) or formula (2) below; those skilled in the art may also select other forms based on actual experiments, and the embodiments of the present application are not limited in this respect.
n_i = k × M_i  (1)
or,
n_i = k × M_i / (N - 1)  (2)
In the formulas, N is the number of training data subsets into which the initial training data set is divided; M_i is the number of training data items contained in training data subset i; k is the reference ratio used to determine the amount of training data to be added, e.g., k = 0.1; and n_i is the second reference number of training data items to obtain from training data subset i.
Illustratively, continuing with the above fruit recognition machine learning model, assume that the apple training data subset contains 50 training data items; the second reference number to obtain from the apple training data subset is then as shown in formula (3), which for formula (2) with k = 0.1 and N = 3 gives 0.1 × 50 / (3 - 1) = 2.5. When the calculated second reference number is a decimal, the embodiments of the present application round it up, i.e., when the second reference number is 2.5, 3 training data items are taken from the apple training data subset. Those skilled in the art may instead round down according to actual usage needs, and the embodiments of the present application are not limited in this respect.
The method of obtaining the number of training data items from the banana training data subset is the same as for the apple subset and is not repeated here. That is, when the banana training data subset contains 100 training data items, the second reference number obtained by the above calculation may be 10 (formula (1)) or 5 (formula (2)).
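A minimal sketch of this computation is given below; the function name and parameters are assumptions, and the small rounding guard is added only to avoid floating-point artifacts.

```python
import math

def second_reference_number(m_i, k=0.1, n_subsets=None, per_remaining=False, round_up=True):
    """Compute the second reference number of items to draw from one remaining subset.

    m_i: number of training data items in that subset; k: reference ratio.
    per_remaining=False uses formula (1): k * m_i.
    per_remaining=True uses formula (2): k * m_i / (n_subsets - 1).
    """
    n = k * m_i / (n_subsets - 1) if per_remaining else k * m_i
    n = round(n, 6)  # guard against floating-point error before taking ceil/floor
    return math.ceil(n) if round_up else math.floor(n)

# Apple subset of 50 items, 3 subsets in total, k = 0.1:
print(second_reference_number(50))                                   # formula (1): 5
print(second_reference_number(50, n_subsets=3, per_remaining=True))  # formula (2): ceil(2.5) = 3
print(second_reference_number(100))                                  # banana subset, formula (1): 10
```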
In step S24, the training data of the first reference number of groups is added to the target training data subset to obtain an updated target training data subset, and target training data for training a machine learning model is obtained based on the updated target training data subset and the remaining training data subsets of the initial training data set.
Illustratively, when formula (1) is used, 5 training data items are obtained from the apple training data subset and added to the plum training data subset, and 10 training data items are obtained from the banana training data subset and added to the plum training data subset, finally yielding the updated training data subset for recognizing plums, as shown in formula (4):
train_i = train_i^(0) ∪ ( ∪_j sample_j )  (4)
In the formula, train_i is the training data contained in training data subset i after the training data are added; train_i^(0) is the training data contained in training data subset i before the training data are added; sample_j is the training data obtained from any one of the remaining training data subsets j; and ∪ denotes the union.
Illustratively, if the plum training data subset contains 100 training data items before the training data are added, then after the above two groups of training data are added, the plum training data subset contains 115 training data items. The target training data for training the machine learning model is then obtained based on these 115 training data items and the remaining training data subsets of the initial training data set.
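A minimal sketch of this augmentation step (formula (4)) follows, continuing the dictionary-of-subsets representation assumed in the earlier sketches; the random sampling and the round-up convention are assumptions consistent with the description.

```python
import math
import random

def augment_target_subset(subsets, target_label, k=0.1, seed=0):
    """Add a k-proportional sample of every other subset into the target subset.

    `subsets` is assumed to be {class_label: list_of_samples}, as built by
    split_by_label above. Implements formula (4): the updated target subset is
    the union of its original items and the items drawn from each remaining subset.
    """
    rng = random.Random(seed)
    added = []
    for label, data in subsets.items():
        if label == target_label:
            continue
        n = math.ceil(round(k * len(data), 6))   # second reference number, formula (1), rounded up
        added.extend(rng.sample(data, min(n, len(data))))
    updated = dict(subsets)
    updated[target_label] = list(subsets[target_label]) + added
    return updated

# A plum subset of 100 items gains 5 sampled apple items (0.1 * 50) and
# 10 sampled banana items (0.1 * 100), giving 115 items as in the description.
```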
Illustratively, as those skilled in the art will appreciate, the recognition result given by a machine recognition model is the class with the largest recognition probability among the classes of the model. That is, for a given fruit to be recognized, if the machine learning model outputs a probability of 60% that the fruit is a plum, 50% that it is an apple, and 30% that it is a banana, then the type of fruit given by the machine learning model is plum. If, in 100 recognitions, the recognition result is correct at least 90 times, then the accuracy of the recognition model can theoretically reach 90% or more.
How does the solution described in the embodiments of the present application achieve such an accuracy? An example is given. When the plum training data subset is taken as the target training data subset, a certain number of training data items are obtained from each of the banana and apple training data subsets and added to the plum training data subset. Before the training data are added to the plum subset, the noise data contained in the plum subset may include, for example, a photo of an apple labeled as a plum. Such a training data item misleads the machine learning model and may cause it to recognize an apple to be identified as a plum. If the newly added training data from the banana subset or the apple subset contains the same apple photo, now labeled as a banana or as an apple, then when the machine learning model learns, it learns that the same photo may be a plum, an apple or a banana, i.e., the same fruit to be recognized has three possible recognition results. As a result, the machine learning model correspondingly weakens the probability of recognizing an apple as a plum. Although no training data that correctly identifies plums has been added, the probability of recognizing an apple as a plum is reduced, so when the probabilities are compared, the accuracy of recognizing plums improves.
In the method for obtaining training data provided by the embodiments of the present application, the training data of the same machine learning model has a relatively high probability of being cross-labeled incorrectly, e.g., a plum is labeled as an apple or a banana, an apple is labeled as a banana or a plum, or a banana is labeled as an apple or a plum. Therefore, by selecting training data from other subsets of the same training data set as the added noise data, training data producing the direction F3 in Fig. 1 can be obtained. If the recognition accuracy of the model is not improved by the added noise data, another round of noise data acquisition can be performed until the recognition accuracy of the model meets the requirement.
According to the method provided by this embodiment, training data subsets for recognizing apples and bananas can be obtained in the same way, and details are not repeated here. The target training data in the resulting plum training data subset can be used directly to train a single-fruit recognition model dedicated to recognizing plums; the target training data in the plum training data subset can also be added to the initial training data set to train a multi-class fruit recognition model, so as to improve the recognition accuracy of the machine learning model.
In the method for obtaining training data provided by the embodiments of the present application, data are obtained from the same initial training data set and the obtained training data are added to the target training data subset, and the target training data for training a machine learning model is obtained from the augmented target training data subset and the remaining training data subsets of the initial training data set. The added noise data can thus offset the influence of the original noise data on model training, which improves the recognition accuracy of the model while reducing the time cost as well as the labor and financial cost of obtaining training data.
As an optional embodiment of the present application, as shown in Fig. 3, before step S21 the method further includes:
In step S20, the initial training data set is obtained and divided into multiple training data subsets.
The process of dividing the initial training data set into multiple training data subsets may refer to step S21 of the previous embodiment and is not repeated here.
In the above step S24, obtaining the target training data for training a machine learning model based on the updated target training data subset and the remaining training data subsets of the initial training data set includes:
In step S241, selecting one or more training data subsets from the remaining training data subsets of the initial training data set, and processing the selected training data subsets in the same way as the target training data subset is processed, to obtain updated training data subsets.
The method of obtaining the updated training data subsets may refer to steps S21 to S24 of the previous embodiment and is not repeated here.
In step S242, merging the updated training data subsets, the updated target training data subset and the non-updated training data subsets of the initial training data set to obtain an updated training data set, and using the training data contained in the updated training data set as the target training data for training the machine learning model.
Illustratively, the merging may be performed as shown in formula (5) below, i.e., the updated training data subsets, the updated target training data subset and the non-updated training data subsets of the initial training data set are all added into one training data set; within this training data set, each training data subset is stored as a sub-file inside the file of the training data set, and when the training data set is used to train a model, the file of the entire training data set is imported as a whole.
DB_noise = ∪_{i=1..N} train_i  (5)
In the formula, DB_noise is the training data contained in the training data set; ∪ denotes the union; train_i is the training data contained in training data subset i after the data are added; and N is the number of training data subsets after the data are added.
Illustratively, for the fruit recognition machine learning model of the present application, when the updated plum training data subset obtained according to the above embodiment contains 115 training data items, the apple training data subset contains 115 training data items, and the banana training data subset contains 150 training data items, the target training data for training the machine learning model contains 380 training data items.
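A minimal sketch of this merge step (formula (5)) follows, continuing the dictionary-of-subsets representation assumed above; the flat list of labeled pairs is one possible in-memory stand-in for the file-per-subset layout described in the text.

```python
def merge_subsets(subsets):
    """Merge all updated and non-updated subsets into one labeled training set.

    `subsets` is assumed to be {class_label: list_of_samples}; the result is a
    flat list of (sample, label) pairs, i.e. DB_noise in formula (5).
    """
    return [(sample, label) for label, data in subsets.items() for sample in data]

# With 115 plum, 115 apple and 150 banana items this yields 380 labeled pairs.
```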
Illustratively, the solution described in the above embodiments shows that a training data subset with added training data can improve the recognition accuracy of a single-class recognition model. Therefore, for a multi-class recognition model, adding one or more training data subsets obtained in the manner of the previous embodiment to the training data set can likewise improve the recognition accuracy of the multi-class recognition model.
As an optional embodiment of the present application, after the target training data is obtained, the method includes:
First, training the machine learning model with the target training data.
Second, when the accuracy of the resulting machine learning model is less than or equal to a target value, re-obtaining target training data until the accuracy of the resulting machine learning model is greater than the target value.
Illustratively, the target value can be determined according to the actual recognition requirement of the machine learning model. For example, for a machine learning model with a high requirement on recognition accuracy, the target value may be set to 90%. The target value may also be determined according to the recognition result obtained by training the machine learning model on the initial training data set: for example, when the recognition accuracy obtained with the initial training set is 70%, the target value may be set to 70% or to any value greater than 70%. Those skilled in the art can set the target value according to actual needs, and the embodiments of the present application are not limited thereto.
Illustratively, with the method provided by this embodiment, after the target training data is obtained, the machine learning model is trained with the target training data and the accuracy of the trained machine recognition model is obtained. If, after model training with the target training data obtained by the method of this embodiment, the resulting recognition accuracy does not reach the set target value, the method of the above embodiments is used again to re-obtain target training data, until the recognition accuracy of the model meets the set target value.
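A minimal sketch of this train, evaluate and re-augment loop is given below; `train_model` and `evaluate_accuracy` are hypothetical placeholders for the caller's own training and validation code, and `augment_target_subset` and `merge_subsets` are the sketches defined earlier.

```python
def obtain_training_data_until_accurate(subsets, target_label, train_model,
                                        evaluate_accuracy, target_value=0.9,
                                        k=0.1, max_rounds=10):
    """Re-augment, merge and retrain until accuracy exceeds the target value."""
    model, training_data = None, None
    for round_idx in range(max_rounds):
        # Each round adds another k-proportional sample from the other subsets.
        subsets = augment_target_subset(subsets, target_label, k=k, seed=round_idx)
        training_data = merge_subsets(subsets)
        model = train_model(training_data)
        if evaluate_accuracy(model) > target_value:
            break
    return model, training_data
```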
Fig. 4 is a block diagram of an apparatus for obtaining training data according to an exemplary embodiment. Referring to Fig. 4, the apparatus includes a first obtaining module 41, a second obtaining module 42, a third obtaining module 43 and a training data obtaining module 44.
The first obtaining module 41 is configured to obtain a target training data subset, the target training data subset being any one of multiple training data subsets of an initial training data set, each training data subset in the multiple training data subsets corresponding to one class label;
the second obtaining module 42 is configured to obtain, from the training data subsets of the initial training data set, a first reference number of training data subsets other than the target training data subset;
the third obtaining module 43 is configured to obtain a second reference number of training data items from each of the first reference number of training data subsets, to obtain a first reference number of groups of training data;
the training data obtaining module 44 is configured to add the training data of the first reference number of groups to the target training data subset to obtain an updated target training data subset, and to obtain, based on the updated target training data subset and the remaining training data subsets of the initial training data set, target training data for training a machine learning model.
As an optional embodiment of the present application, the second reference number is determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data items contained in each training data subset, the reference ratio being used to determine the amount of training data to be added.
As an optional embodiment of the present application, the first obtaining module 41 is further configured to obtain the initial training data set and divide the initial training data set into multiple training data subsets;
the training data obtaining module 44 is configured to select one or more training data subsets from the remaining training data subsets of the initial training data set, process the selected training data subsets in the same way as the target training data subset is processed to obtain updated training data subsets, and merge the updated training data subsets, the updated target training data subset and the non-updated training data subsets of the training data set to obtain an updated training data set, the training data contained in the updated training data set being used as the target training data for training the machine learning model.
As an optional embodiment of the present application, after the target training data is obtained, the apparatus further includes:
a training module, configured to train the machine learning model with the target training data;
when the accuracy of the resulting machine learning model is less than or equal to a target value, target training data is re-obtained until the accuracy of the resulting machine learning model is greater than the target value.
As an optional embodiment of the present application, the multiple training data subsets are obtained according to label information of the training data in the initial training data set, the label information of a training data item characterizing its category.
As an optional embodiment of the present application, the multiple training data subsets are obtained by clustering physical feature information of the training data in the obtained initial training data set.
With the apparatus for obtaining training data provided by the embodiments of the present application, data are obtained from the same initial training data set and the obtained training data are added to the target training data subset, and the target training data for training a machine learning model is obtained from the augmented target training data subset and the remaining training data subsets of the initial training data set. The added noise data can thus offset the influence of the original noise data on model training, which improves the recognition accuracy of the model while reducing the time cost as well as the labor and financial cost of obtaining training data.
With regard to the apparatus of the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related method, and is not explained in detail here.
Based on the same concept, an embodiment of the present application further provides an electronic device. As shown in Fig. 5, the device includes:
a processor 51;
a memory 52 for storing instructions executable by the processor; the processor 51 and the memory 52 are connected through a communication bus 53.
The processor 51 is configured to perform the method for obtaining training data described in the above embodiments.
It should be understood that the above processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. It is worth noting that the processor may be a processor supporting the advanced RISC machines (ARM) architecture.
Further, in an optional embodiment, the above memory may include a read-only memory and a random access memory, and provides instructions and data to the processor. The memory may also include a non-volatile random access memory; for example, the memory may also store information on the device type.
Fig. 6 is a block diagram of a terminal 600 according to an exemplary embodiment. The terminal 600 may be a smartphone, a tablet computer, a laptop computer or a desktop computer. The terminal 600 may also be called a user equipment, a portable terminal, a laptop terminal, a desktop terminal or other names.
In general, the terminal 600 includes a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, for example a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form among DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor: the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 602 is used to store at least one instruction, the at least one instruction being executed by the processor 601 to implement the method for obtaining training data provided by the method embodiments of the present application.
In some embodiments, the terminal 600 may optionally further include a peripheral device interface 603 and at least one peripheral device. The processor 601, the memory 602 and the peripheral device interface 603 may be connected through a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 603 through a bus, a signal line or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 604, a display screen 605, a camera assembly 606, an audio circuit 607, a positioning component 608 and a power supply 609.
The peripheral device interface 603 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 601 and the memory 602. In some embodiments, the processor 601, the memory 602 and the peripheral device interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602 and the peripheral device interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 604 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with a communication network and other communication devices through electromagnetic signals. The radio frequency circuit 604 converts electrical signals into electromagnetic signals for transmission, or converts received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card and the like. The radio frequency circuit 604 can communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include but are not limited to metropolitan area networks, the mobile communication networks of each generation (2G, 3G, 4G and 5G), wireless local area networks and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may also include circuits related to NFC (Near Field Communication), which is not limited in the present application.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to acquire touch signals on or above its surface. The touch signal can be input to the processor 601 as a control signal for processing. In this case, the display screen 605 can also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, arranged on the front panel of the terminal 600; in other embodiments, there may be at least two display screens 605, respectively arranged on different surfaces of the terminal 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display screen arranged on a curved surface or a folded surface of the terminal 600. The display screen 605 can even be set to a non-rectangular irregular shape, i.e., a shaped screen. The display screen 605 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal and the rear camera is arranged on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so as to realize background blurring by fusing the main camera and the depth-of-field camera, panoramic shooting and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 607 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment, convert the sound waves into electrical signals and input them to the processor 601 for processing, or input them to the radio frequency circuit 604 to realize voice communication. For stereo collection or noise reduction, there may be multiple microphones arranged at different parts of the terminal 600. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can not only convert electrical signals into sound waves audible to humans but also convert electrical signals into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 607 may also include a headphone jack.
The positioning component 608 is used to locate the current geographic position of the terminal 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia or the Galileo system of the European Union.
The power supply 609 is used to supply power to the various components in the terminal 600. The power supply 609 may be alternating current, direct current, a disposable battery or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the terminal 600 further includes one or more sensors 610. The one or more sensors 610 include but are not limited to an acceleration sensor 611, a gyroscope sensor 612, a pressure sensor 613, a fingerprint sensor 614, an optical sensor 615 and a proximity sensor 616.
The acceleration sensor 611 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 600. For example, the acceleration sensor 611 can be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 can control the touch display screen 605 to display the user interface in landscape view or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 can also be used to collect game or user motion data.
The gyroscope sensor 612 can detect the body direction and rotation angle of the terminal 600, and can cooperate with the acceleration sensor 611 to collect the user's 3D actions on the terminal 600. Based on the data collected by the gyroscope sensor 612, the processor 601 can implement functions such as motion sensing (e.g., changing the UI according to the user's tilt operation), image stabilization during shooting, game control and inertial navigation.
The pressure sensor 613 may be arranged on the side frame of the terminal 600 and/or at the lower layer of the touch display screen 605. When the pressure sensor 613 is arranged on the side frame of the terminal 600, it can detect the user's grip signal on the terminal 600, and the processor 601 performs left/right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is arranged at the lower layer of the touch display screen 605, the processor 601 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 605. The operable controls include at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 614 is used to collect the user's fingerprint. The processor 601 identifies the user's identity according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the user's identity according to the collected fingerprint. When the user's identity is identified as a trusted identity, the processor 601 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings and the like. The fingerprint sensor 614 may be arranged on the front, back or side of the terminal 600. When a physical button or a manufacturer logo is provided on the terminal 600, the fingerprint sensor 614 may be integrated with the physical button or the manufacturer logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, the processor 601 can control the display brightness of the touch display screen 605 according to the ambient light intensity collected by the optical sensor 615: when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is decreased. In another embodiment, the processor 601 can also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
The proximity sensor 616, also called a distance sensor, is generally arranged on the front panel of the terminal 600. The proximity sensor 616 is used to collect the distance between the user and the front of the terminal 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually decreases, the processor 601 controls the touch display screen 605 to switch from the screen-on state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front of the terminal 600 gradually increases, the processor 601 controls the touch display screen 605 to switch from the screen-off state to the screen-on state.
Those skilled in the art will understand that the structure shown in Fig. 6 does not constitute a limitation on the terminal 600, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
This application provides a kind of computer programs, when computer program is computer-executed, can make processor Or computer executes corresponding each step and/or process in above method embodiment.
Those skilled in the art will readily arrive at other embodiments of the present application after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present application that follow its general principles and include common knowledge or conventional technical means in the art not disclosed by this disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structure described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (10)

1. A method for obtaining training data, characterized in that the method comprises:
obtaining a target training data subset, the target training data subset being any one of a plurality of training data subsets of an initial training data set, each training data subset of the plurality of training data subsets corresponding to one class label;
obtaining, among the training data subsets of the initial training data set, a first reference quantity of training data subsets other than the target training data subset;
obtaining a second reference quantity of training data from each of the first reference quantity of training data subsets, to obtain a first reference number of training data;
adding the first reference number of training data to the target training data subset to obtain an updated target training data subset, and obtaining, based on the updated target training data subset and the remaining training data subsets in the initial training data set, target training data for training a machine learning model.
2. The method for obtaining training data according to claim 1, characterized in that the second reference quantity is determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data included in each training data subset, the reference ratio being used to determine the quantity of added training data.
3. The method for obtaining training data according to claim 1, characterized in that, before obtaining the target training data subset, the method further comprises:
obtaining the initial training data set, and dividing the initial training data set into a plurality of training data subsets;
and the obtaining, based on the updated target training data subset and the remaining training data subsets in the initial training data set, the target training data for training the machine learning model comprises:
selecting one or more training data subsets from the remaining training data subsets in the initial training data set, and processing the selected training data subsets in the manner used to process the target training data subset, to obtain updated training data subsets;
merging the updated training data subsets, the updated target training data subset, and the training data subsets in the initial training data set that have not been updated, to obtain an updated training data set, and using the training data included in the updated training data set as the target training data for training the machine learning model.
4. The method for obtaining training data according to claim 3, characterized in that, after obtaining, based on the updated target training data subset and the remaining training data subsets in the initial training data set, the target training data for training the machine learning model, the method further comprises:
training the machine learning model using the target training data;
when the accuracy of the obtained machine learning model is less than or equal to a target value, re-obtaining target training data until the accuracy of the obtained machine learning model is greater than the target value.
5. The method for obtaining training data according to any one of claims 1-4, characterized in that the plurality of training data subsets are obtained according to label information of the training data in the initial training data set, the label information of the training data characterizing the class of the training data.
6. The method for obtaining training data according to any one of claims 1-4, characterized in that the plurality of training data subsets are obtained by clustering the physical feature data according to the acquired physical feature information of the training data in the initial training data set.
7. A device for obtaining training data, characterized in that the device comprises:
a first obtaining module, configured to obtain a target training data subset, the target training data subset being any one of a plurality of training data subsets of an initial training data set, each training data subset of the plurality of training data subsets corresponding to one class label;
a second obtaining module, configured to obtain, among the training data subsets of the initial training data set, a first reference quantity of training data subsets other than the target training data subset;
a third obtaining module, configured to obtain a second reference quantity of training data from each of the first reference quantity of training data subsets, to obtain a first reference number of training data;
a training data obtaining module, configured to add the first reference number of training data to the target training data subset to obtain an updated target training data subset, and to obtain, based on the updated target training data subset and the remaining training data subsets in the initial training data set, target training data for training a machine learning model.
8. The device for obtaining training data according to claim 7, characterized in that the second reference quantity is determined according to a reference ratio, the number of training data subsets in the initial training data set, and the number of training data included in each training data subset, the reference ratio being used to determine the quantity of added training data.
9. An electronic device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to perform the method for obtaining training data according to any one of claims 1-6.
10. A computer-readable storage medium, characterized in that, when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform the method for obtaining training data according to any one of claims 1-6.
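For illustration only and not as part of the claims, the following is a minimal Python sketch of the augmentation procedure described in claims 1-3, under several assumptions: training samples are kept as plain Python objects grouped by class label, samples are drawn at random, and the second reference quantity is derived from the reference ratio in one of the readings permitted by claim 2. All function and parameter names are hypothetical.

```python
import random
from typing import Any, Dict, Hashable, List

def augment_subset(subsets: Dict[Hashable, List[Any]],
                   target_label: Hashable,
                   first_ref_qty: int,
                   ref_ratio: float = 0.1,
                   seed: int = 0) -> List[Any]:
    """Return the target subset with a small amount of data borrowed from
    other class subsets (claims 1 and 2)."""
    rng = random.Random(seed)
    target = list(subsets[target_label])
    other_labels = [lab for lab in subsets if lab != target_label]
    # First reference quantity: how many other subsets to borrow from.
    chosen = rng.sample(other_labels, min(first_ref_qty, len(other_labels)))
    for lab in chosen:
        pool = subsets[lab]
        # Second reference quantity: one possible reading of claim 2, based on
        # the reference ratio, the number of subsets, and the subset size.
        second_ref_qty = max(1, int(ref_ratio * len(pool) / len(subsets)))
        target.extend(rng.sample(pool, min(second_ref_qty, len(pool))))
    return target

def build_target_training_data(subsets: Dict[Hashable, List[Any]],
                               labels_to_update: List[Hashable],
                               first_ref_qty: int,
                               seed: int = 0) -> List[Any]:
    """Claim 3: augment the selected subsets in the same way as the target
    subset, then merge them with the subsets that were not updated."""
    merged: List[Any] = []
    for lab, data in subsets.items():
        if lab in labels_to_update:
            merged.extend(augment_subset(subsets, lab, first_ref_qty, seed=seed))
        else:
            merged.extend(data)
    return merged
```

As a usage example, `build_target_training_data(subsets, list(subsets), first_ref_qty=2)` augments every subset before merging, while passing a shorter `labels_to_update` list leaves the remaining subsets unchanged.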
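Likewise, a hedged sketch of the evaluation loop in claim 4: the target training data is re-drawn and the model retrained until its accuracy exceeds the target value. It reuses `build_target_training_data` from the sketch above; `train_model` and `evaluate_accuracy` stand in for whatever model and metric an implementer chooses, and the round limit is an added safeguard not present in the claim.

```python
from typing import Any, Callable, Dict, Hashable, List

def train_until_accurate(subsets: Dict[Hashable, List[Any]],
                         train_model: Callable[[List[Any]], Any],
                         evaluate_accuracy: Callable[[Any], float],
                         target_accuracy: float,
                         first_ref_qty: int = 2,
                         max_rounds: int = 10) -> Any:
    """Retrain on freshly drawn target training data until the measured
    accuracy of the model exceeds the target value (claim 4)."""
    model = None
    for round_idx in range(max_rounds):
        # Re-obtain target training data for this round; varying the seed
        # makes each round draw a different augmentation.
        data = build_target_training_data(subsets, list(subsets),
                                          first_ref_qty, seed=round_idx)
        model = train_model(data)
        if evaluate_accuracy(model) > target_accuracy:
            break
    return model
```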
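Finally, claim 6 allows the subsets themselves to be formed by clustering physical feature data rather than by label. A brief sketch using scikit-learn's KMeans, which is only an assumed choice since the claim does not name a clustering algorithm; it produces subsets in the shape the earlier sketches expect.

```python
from collections import defaultdict
from typing import Any, Dict, List, Sequence

import numpy as np
from sklearn.cluster import KMeans

def subsets_by_clustering(samples: Sequence[Any],
                          features: np.ndarray,
                          n_clusters: int = 5,
                          seed: int = 0) -> Dict[int, List[Any]]:
    """Group samples into training data subsets by clustering their physical
    feature vectors; each cluster id plays the role of a subset label."""
    cluster_ids = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(features)
    subsets: Dict[int, List[Any]] = defaultdict(list)
    for sample, cluster_id in zip(samples, cluster_ids):
        subsets[int(cluster_id)].append(sample)
    return dict(subsets)
```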
CN201910356202.2A 2019-04-29 2019-04-29 Method, device and equipment for acquiring training data and storage medium Active CN110070143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910356202.2A CN110070143B (en) 2019-04-29 2019-04-29 Method, device and equipment for acquiring training data and storage medium

Publications (2)

Publication Number Publication Date
CN110070143A true CN110070143A (en) 2019-07-30
CN110070143B CN110070143B (en) 2021-07-16

Family

ID=67369533

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110555480A * 2019-09-05 2019-12-10 Tencent Technology (Shenzhen) Co., Ltd. Training data generation method and related device
CN111047050A * 2019-12-17 2020-04-21 Suzhou Inspur Intelligent Technology Co., Ltd. Distributed parallel training method, device and storage medium
CN111597934A * 2020-04-30 2020-08-28 Chongqing University of Science and Technology System and method for processing training data for statistical applications

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106294490A * 2015-06-08 2017-01-04 Fujitsu Ltd. Feature enhancement method and device for data samples, and classifier training method and apparatus
US20180240011A1 * 2017-02-22 2018-08-23 Cisco Technology, Inc. Distributed machine learning
CN109272003A * 2017-07-17 2019-01-25 East China Normal University Method and apparatus for eliminating unknown errors in a deep learning model
CN108009228A * 2017-11-27 2018-05-08 MIGU Interactive Entertainment Co., Ltd. Content tag setting method, device and storage medium
CN108764296A * 2018-04-28 2018-11-06 Hangzhou Dianzi University Multi-class classification method based on K-means combined with multi-task association learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUE ZHU et al.: "Multi-Label Learning with Global and Local Label Correlation", IEEE Transactions on Knowledge and Data Engineering *
WANG Kai et al.: "A Self-Training Semi-Supervised Method for Imbalanced Biomedical Data", Journal of Daqing Normal University *

Similar Documents

Publication Publication Date Title
CN110189340A Image segmentation method, device, electronic device and storage medium
US11074466B2 Anti-counterfeiting processing method and related products
CN110121118A Video clip localization method, device, computer device and storage medium
CN108594997A Gesture skeleton construction method, apparatus, device and storage medium
CN110210571A Image recognition method, device, computer device and computer readable storage medium
CN109299315A Multimedia resource classification method, device, computer device and storage medium
CN108538311A Audio classification method, device and computer readable storage medium
US11482237B2 Method and terminal for reconstructing speech signal, and computer storage medium
CN110147852A Image recognition method, apparatus, device and storage medium
CN110163380A Data analysis method, model training method, apparatus, device and storage medium
CN110059652A Face image processing method, device and storage medium
CN109285178A Image segmentation method, device and storage medium
CN109005457A Blank screen detection method, device, computer device and storage medium
CN110070143A Method, apparatus, device and storage medium for obtaining training data
CN109862412A Method, apparatus and storage medium for shooting in step with a video
CN110210573A Adversarial image generation method, device, terminal and storage medium
CN110163296A Image recognition method, apparatus, device and storage medium
CN109558837A Face key point detection method, apparatus and storage medium
CN109886208A Object detection method, apparatus, computer device and storage medium
CN110535820A Classification method for malicious domain names, device, electronic device and medium
CN110991457A Two-dimensional code processing method and device, electronic device and storage medium
CN110096865A Method, apparatus, device and storage medium for issuing a verification mode
CN110175653A Image recognition method, apparatus, device and storage medium
CN109961802A Sound quality comparison method, device, electronic device and storage medium
CN109189290A Click area recognition method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant