CN107967491A - Machine re-learning method, apparatus, electronic device and storage medium for board recognition - Google Patents

Machine re-learning method, apparatus, electronic device and storage medium for board recognition

Info

Publication number
CN107967491A
CN107967491A (Application No. CN201711347768.6A)
Authority
CN
China
Prior art keywords: training sample, identification model, sample set, local, training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711347768.6A
Other languages
Chinese (zh)
Inventor
丁磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Woodstate Science And Technology Co Ltd
Original Assignee
Beijing Woodstate Science And Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Woodstate Science And Technology Co Ltd
Priority to CN201711347768.6A
Publication of CN107967491A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques

Abstract

Embodiments of the present disclosure disclose a machine re-learning method, apparatus, electronic device and storage medium for board recognition. The method includes: obtaining an initial recognition model for board recognition, the initial recognition model having been trained on a first training sample set, where the first training sample set includes first training samples and first annotation results; obtaining a second training sample set, where the second training sample set includes second training samples and second annotation results; and obtaining a local recognition model, where the local recognition model is obtained by retraining the initial recognition model on the second training sample set. Obtaining the initial recognition model for board recognition includes: obtaining from a server an initial recognition model that has already been trained on the first training sample set; or obtaining the first training sample set from the server and training locally with the first training sample set to obtain the initial recognition model.

Description

Machine re-learning method, apparatus, electronic device and storage medium for board recognition
Technical field
The present disclosure relates to the technical field of automatic timber recognition, and in particular to a machine re-learning method, apparatus, electronic device and storage medium for board recognition.
Background technology
In the field of wood processing, much of the work requires trained workers, who complete it by observation and experience, such as judging the flatness of each board, classifying board products, and designing board processing schemes.
However, manual methods consume substantial human resources. Owing to changing demands and the irregularity of timber properties, manual work is not only inefficient but also requires constant retraining to maintain a certain level of accuracy. Moreover, as working hours increase, manual methods also show declining accuracy and slowing efficiency.
Using machines for flatness detection is therefore becoming an emerging direction in the industry, and many steps in wood processing can be handled by machines. For example, board flatness judgement, board classification, and defect detection of wood products can all be processed automatically by artificial intelligence methods.
Summary of the invention
Embodiments of the present disclosure provide a machine re-learning method, apparatus, electronic device and computer-readable storage medium for board recognition.
In a first aspect, an embodiment of the present disclosure provides a machine re-learning method for board recognition, including:
obtaining an initial recognition model for board recognition; the initial recognition model has been trained on a first training sample set; the first training sample set includes at least one first training sample and a first annotation result corresponding to the first training sample;
obtaining a second training sample set; the second training sample set includes at least one second training sample and a second annotation result corresponding to the second training sample;
obtaining a local recognition model; the local recognition model is obtained by retraining the initial recognition model on the second training sample set;
wherein obtaining the initial recognition model for board recognition includes:
obtaining, from a server over a communication network, the initial recognition model that has already been trained on the first training sample set; or,
obtaining the first training sample set from the server over a communication network, and training locally with the first training sample set to obtain the initial recognition model.
Optionally, obtaining the second training sample set includes:
when the confidence of the recognition result of the initial recognition model for a board to be recognized is below a threshold, adding the second training sample and the second annotation result corresponding to the board to be recognized to the second training sample set.
Optionally, obtaining the local recognition model includes:
retraining the initial recognition model with the second training sample set to obtain the local recognition model; or,
sending the second training sample set to the server, and obtaining from the server the local recognition model trained on the second training samples.
Optionally, the initial recognition model is obtained by training a machine learning model based on a first neural network with the first training sample set; the local recognition model is obtained by training a machine learning model based on a second neural network with the second training sample set; and some or all node values of the second neural network correspond to the node values of the first neural network after it has been trained on the first training sample set.
Optionally, the second neural network includes all the nodes of the first neural network together with newly generated nodes, and/or the output nodes of the second neural network differ from those of the first neural network.
Optionally, the first annotation result and the second annotation result include one or more of timber classification, cutting scheme, defect recognition and grade assessment; the recognition results of the initial recognition model and the local recognition model include one or more of timber classification, cutting scheme, defect recognition and grade assessment.
Optionally, the first annotation result and the second annotation result use different labelling schemes.
Optionally, the method further includes:
collecting a third training sample;
inputting the third training sample into the initial recognition model, and sending the obtained recognition result together with the corresponding third training sample to the server, to be added to the first training sample set.
Optionally, the method further includes:
sending the local recognition model to the server.
Optionally, obtaining the initial recognition model for board recognition includes:
obtaining multiple initial recognition models;
wherein the local recognition model is the model with the best recognition performance chosen after each of the multiple initial recognition models has been retrained with the second training sample set.
In a second aspect, an embodiment of the present disclosure provides a machine re-learning apparatus for board recognition, including:
a first acquisition module, configured to obtain an initial recognition model for board recognition; the initial recognition model has been trained on a first training sample set; the first training sample set includes at least one first training sample and a first annotation result corresponding to the first training sample;
a second acquisition module, configured to obtain a second training sample set; the second training sample set includes at least one second training sample and a second annotation result corresponding to the second training sample;
a third acquisition module, configured to obtain a local recognition model; the local recognition model is obtained by retraining the initial recognition model on the second training sample set;
wherein the first acquisition module obtains, from a server over a communication network, the initial recognition model that has already been trained on the first training sample set; or obtains the first training sample set from the server over a communication network and trains locally with the first training sample set to obtain the initial recognition model.
Optionally, the second acquisition module includes:
an adding submodule, configured to, when the confidence of the recognition result of the initial recognition model for a board to be recognized is below a threshold, add the second training sample and the second annotation result corresponding to the board to be recognized to the second training sample set.
Optionally, the third acquisition module includes:
a first acquisition submodule, configured to retrain the initial recognition model with the second training sample set to obtain the local recognition model; or,
a second acquisition submodule, configured to send the second training sample set to the server and obtain from the server the local recognition model trained on the second training samples.
Optionally, the initial recognition model is obtained by training a machine learning model based on a first neural network with the first training sample set; the local recognition model is obtained by training a machine learning model based on a second neural network with the second training sample set; and some or all node values of the second neural network correspond to the node values of the first neural network after it has been trained on the first training sample set.
Optionally, the second neural network includes all the nodes of the first neural network together with newly generated nodes, and/or the output nodes of the second neural network differ from those of the first neural network.
Optionally, the first annotation result and the second annotation result include one or more of timber classification, cutting scheme, defect recognition and grade assessment; the recognition results of the initial recognition model and the local recognition model include one or more of timber classification, cutting scheme, defect recognition and grade assessment.
Optionally, the first annotation result and the second annotation result use different labelling schemes.
Optionally, the apparatus further includes:
an acquisition module, configured to collect a third training sample;
a sample update module, configured to input the third training sample into the initial recognition model and send the obtained recognition result together with the corresponding third training sample to the server, to be added to the first training sample set.
Optionally, the apparatus further includes:
a sending module, configured to send the local recognition model to the server.
Optionally, the first acquisition module includes:
a third acquisition submodule, configured to obtain multiple initial recognition models;
wherein the local recognition model is the model with the best recognition performance chosen after each of the multiple initial recognition models has been retrained with the second training sample set.
The above functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible design, the structure of the machine re-learning apparatus for board recognition includes a memory and a processor. The memory is configured to store one or more computer instructions that support the machine re-learning apparatus for board recognition in performing the machine re-learning method for board recognition of the first aspect, and the processor is configured to execute the computer instructions stored in the memory. The machine re-learning apparatus for board recognition may further include a communication interface, through which the machine re-learning apparatus for board recognition communicates with other devices or a communication network.
In a third aspect, an embodiment of the present disclosure provides an electronic device including a memory and a processor, wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method steps described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium for storing the computer instructions used by the machine re-learning apparatus for board recognition, including the computer instructions involved in performing the machine re-learning method for board recognition of the first aspect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects:
In the embodiments of the present disclosure, various board samples are first collected at the server end and annotated, forming a first training sample set that includes first training samples and first annotation results. Then, before performing board recognition, a local end obtains from the server end an initial recognition model that has been trained on the samples of the first training sample set, or obtains the first training sample set and a generic machine learning model from the server end and trains the generic machine learning model with the samples of the first training sample set to obtain the initial recognition model. The initial recognition model used by different local ends is thus either obtained from the server end or obtained with the same training samples and machine learning model. After obtaining the initial recognition model, each local end retrains it with local second training samples to adapt to its own circumstances. By collecting large numbers of training samples at the server end, and even completing the initial training of the model at the server end, the training burden at the local end can be reduced, and multiple local ends can share the same initial recognition model, which saves cost; meanwhile, the retraining of the initial recognition model at each local end adapts the model to that end's particular circumstances, so that cost is saved while the different demands of different local ends are also met.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief description of the drawings
Other features, objects and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments taken in conjunction with the accompanying drawings. In the drawings:
Fig. 1 shows a flowchart of a machine re-learning method for board recognition according to an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of the network deployment structure of local devices and a server according to an embodiment of the present disclosure;
Fig. 3 shows a flowchart of the first-training-sample-set collection part according to an embodiment of the present disclosure;
Fig. 4 shows a structural diagram of a machine re-learning apparatus for board recognition according to an embodiment of the present disclosure;
Fig. 5 shows a structural diagram of an electronic device suitable for implementing the machine re-learning method for board recognition according to an embodiment of the present disclosure.
Detailed description of embodiments
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. For the sake of clarity, parts irrelevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it should be understood that terms such as "comprising" or "having" are intended to indicate the presence of the features, numbers, steps, acts, components, parts or combinations thereof disclosed in this specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, acts, components, parts or combinations thereof are present or added.
It should also be noted that, in the absence of conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with each other. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Compared with traditional methods, current machine learning methods such as deep learning rely more heavily on massive training data. As the amount of training data grows, new machine learning methods can keep improving the machine's accuracy, a characteristic that traditional machine vision does not have. How to accumulate and use massive training data in the wood processing field therefore becomes a new technical challenge. In particular, timber is a non-standardized product, which means that samples may differ considerably because of raw materials or manufacturing processes, and a single standardized processing method may not apply to all scenarios. The present disclosure therefore proposes a machine re-learning technical solution for board recognition that is based on a communication network and can be used for the accumulation and use of machine-learning-related data in the wood processing field.
The solution proposed by the present disclosure includes at least one locally deployed device, a remote server, and a communication network connecting the at least one locally deployed device and the remote server. The locally deployed device includes an image sample acquisition apparatus, a machine learning model deployment apparatus, and an automatic timber processing apparatus. In general, the machine learning model deployment apparatus deploys a machine learning model; after receiving a pending sample sent by the sample acquisition apparatus, such as a log, board, wood chip, batten or wood block, the machine learning model processes it and outputs an execution signal. After receiving the execution signal, the automatic timber processing apparatus performs the processing of the pending sample, such as sorting, inspection or cutting. The machine learning model is a model trained on training data, such as a neural network, convolutional neural network, deep neural network, support vector machine, decision forest or Bayesian network.
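As a rough illustration of how these three components could interact, the following minimal sketch is given in Python; the class names, the `predict` and `execute` calls, and the "sort" action are assumptions used only for illustration and are not part of the disclosed apparatus.

```python
# Minimal sketch of the locally deployed pipeline described above:
# image sample acquisition -> machine learning model -> execution signal
# -> automatic timber processing. All names here are illustrative only.

from dataclasses import dataclass

@dataclass
class ExecutionSignal:
    action: str        # e.g. "sort", "inspect", "cut"
    target_class: int  # class index produced by the recognition model

def recognize(model, sample_image) -> ExecutionSignal:
    """Run the deployed recognition model on one pending sample image."""
    predicted_class = model.predict(sample_image)   # hypothetical model API
    return ExecutionSignal(action="sort", target_class=predicted_class)

def process_pending_samples(sampler, model, processor):
    """Acquire a sample, recognize it, then execute the resulting signal."""
    for sample_image in sampler:            # image sample acquisition apparatus
        signal = recognize(model, sample_image)   # model deployment apparatus
        processor.execute(signal)           # automatic timber processing apparatus
```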
Owing to the particularity of the wood processing field, each factory generally has its own processing standards, such as board sample classification standards and quality control standards. In other words, there is no universally applicable processing criterion for board processing, so generally no universally applicable training data can be obtained. For example, five board samples may be divided into three grades under factory A's processing standard but into four grades under factory B's. Therefore, even with massive sample data available in advance, it is impossible to formulate a standard for boards in advance and thereby obtain a universally applicable model. One feasible approach is to use a blank model, that is, a model that has not been trained with any training data, in the locally deployed device, collect samples with the image acquisition device of the locally deployed device, annotate the samples according to the factory's processing standard, and then train the blank model with these samples. The problem with this approach, however, is that machine learning methods that depend heavily on training data generally require massive sample data, for example hundreds of thousands or even millions of training samples, which means that acquiring enough sample data through the locally deployed device alone is a very inefficient method. In addition, the limited computing capability of the locally deployed device also makes it unable to train the blank model efficiently; for example, training may take days or even weeks to complete, which is an unacceptable cycle in practice.
Fig. 1 shows a flowchart of a machine re-learning method for board recognition according to an embodiment of the present disclosure. As shown in Fig. 1, the machine re-learning method for board recognition includes the following steps S101-S103:
In step S101, an initial recognition model for board recognition is obtained; the initial recognition model has been trained on a first training sample set; the first training sample set includes at least one first training sample and a first annotation result corresponding to the first training sample;
In step S102, a second training sample set is obtained; the second training sample set includes at least one second training sample and a second annotation result corresponding to the second training sample;
In step S103, a local recognition model is obtained; the local recognition model is obtained by retraining the initial recognition model on the second training sample set;
wherein step S101, i.e. the step of obtaining the initial recognition model for board recognition, further includes:
obtaining, from a server over a communication network, the initial recognition model that has already been trained on the first training sample set; or,
obtaining the first training sample set from the server over a communication network, and training locally with the first training sample set to obtain the initial recognition model.
In this embodiment, the first training sample set may be a set formed from the massive first training samples collected at the server end and their corresponding first annotation results. The first training samples may be image data collected from various logs, boards, wood chips, battens, wood blocks and the like, and the first annotation results may be the ground-truth annotation results obtained for these logs, boards, wood chips, battens and wood blocks according to the general annotation rules of the server end. Likewise, the second training sample set is a set formed from the second training samples collected by the local end and their corresponding second annotation results. The second training samples may be image data collected from various logs, boards, wood chips, battens, wood blocks and the like, and the second annotation results may be the ground-truth annotation results obtained for these logs, boards, wood chips, battens and wood blocks according to the annotation rules of the local end.
In an embodiment of the present disclosure, the remote server uses the training samples and annotation data in the first training sample set to turn a machine learning model from a blank model into an initial recognition model (a generic model); the initial recognition model is trained with a large amount of training data whose samples are annotated according to a standardized scheme. The remote server then sends the initial recognition model to the locally deployed device over the communication network. The locally deployed device collects local samples with its image acquisition device, annotates the samples with a local annotation method, and obtains a second training sample set. The locally deployed device then retrains the initial recognition model with the second training sample set. Because the initial recognition model has already undergone a first round of training with massive data, the amount of data required for retraining is far smaller than the amount needed to train from a blank model, which greatly shortens the cycle of producing a model suited to the local factory and hence greatly accelerates the deployment of the machine learning model. Fig. 2 gives a schematic diagram corresponding to an embodiment of the present disclosure; it can be seen that the retrained models ultimately used by the multiple locally deployed devices are models trained according to local sample characteristics and factory needs, while the server end need not consider the specific requirements of each factory and is only responsible for generating and distributing the initial recognition model.
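The division of labour in steps S101-S103 can be sketched compactly in Python. The `server` object with its `download_*` methods, the `fit` training loop and the dataset shapes are assumptions standing in for the communication network and training details described above, not a definitive implementation.

```python
# A minimal sketch of steps S101-S103, assuming PyTorch and a classification task.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, Dataset

def fit(model: nn.Module, dataset: Dataset, epochs: int = 5, lr: float = 1e-4) -> nn.Module:
    """Generic training loop used both for initial training and for retraining."""
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model

def build_local_model(server, blank_model: nn.Module, second_training_set: Dataset,
                      use_pretrained: bool = True) -> nn.Module:
    # S101: obtain the initial recognition model, either already trained at the
    # server end, or by fetching the first training sample set and training locally.
    if use_pretrained:
        initial_model = server.download_initial_model()
    else:
        initial_model = fit(blank_model, server.download_first_training_set())
    # S102: second_training_set has been collected and annotated at the local end.
    # S103: retrain the initial model on the second training sample set.
    return fit(initial_model, second_training_set)
```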
Optionally, the initial recognition model may be trained with a large amount of timber data using a relatively coarse annotation method. For example, in a timber classification system, annotation may use only a small number of categories; the annotation categories used for retraining then often mean a larger number of categories, and such an arrangement can speed up retraining. For instance, the initial recognition model may distinguish three board categories, while a local annotation method distinguishes five. The locally deployed device then retrains the initial recognition model with the locally collected and annotated data. Because the initial recognition model uses a general annotation method, it may not suit the needs of the local factory; for example, it cannot classify samples according to the local factory's classification standard. However, the data used for retraining are the locally collected and annotated training data, so the local recognition model obtained after retraining suits the needs of the local factory.
In an optional embodiment, the server is connected with multiple locally deployed devices through a network, and sends the trained initial recognition model to the multiple locally deployed devices over the network.
In this embodiment, after the first training sample set has been collected, the server end may directly use these samples to train the machine learning model, or may send the samples to the local end so that the local end trains the machine learning model with them to obtain the initial recognition model. Which way is used can be decided according to the actual situation of the local end and is not limited here.
In an optional implementation of this embodiment, step S102, i.e. the step of obtaining the second training sample set, further includes the following step:
when the confidence of the recognition result of the initial recognition model for a board to be recognized is below a threshold, adding the second training sample and the second annotation result corresponding to the board to be recognized to the second training sample set.
In this optional implementation, the second training samples may be obtained based on the recognition results of the initial recognition model. When the classification scheme of the initial recognition model can satisfy the local standard, board recognition can be performed locally with the initial recognition model directly; for boards on which the initial recognition model performs poorly (i.e. the confidence of the recognition result produced by the initial recognition model is below a threshold), the board can be annotated manually, the image data of the board used as a second training sample, and the manual annotation result used as its annotation result, which are then added to the second training sample set. After the number of samples in the second training sample set reaches the required size, the initial recognition model can be retrained. Of course, the second training samples can also be obtained in other ways, for example by sampling relatively uncommon boards. In this way, the recognition capability of the initial recognition model can be continually strengthened until it finally meets the demands of the local end.
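A short sketch of this confidence-threshold rule follows. The threshold value, the use of softmax probabilities as "confidence", and the `manual_label` helper are assumptions for illustration; the disclosure only specifies that low-confidence boards are annotated and added to the second training sample set.

```python
# Route boards that the initial recognition model cannot classify confidently
# into the second training sample set after manual annotation.

import torch
from torch import nn

CONFIDENCE_THRESHOLD = 0.8   # assumed value; the disclosure only says "a threshold"

def collect_second_samples(initial_model: nn.Module, board_images,
                           manual_label, second_training_set: list):
    """Add (image, manual annotation) pairs for low-confidence boards."""
    initial_model.eval()
    with torch.no_grad():
        for image in board_images:
            probs = torch.softmax(initial_model(image.unsqueeze(0)), dim=1)
            confidence = probs.max(dim=1).values.item()
            if confidence < CONFIDENCE_THRESHOLD:
                # Recognition is not trusted: ask a human for the second annotation.
                second_training_set.append((image, manual_label(image)))
            # otherwise the model's prediction is used directly for board recognition
    return second_training_set
```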
In an optional implementation of this embodiment, step S103, i.e. the step of obtaining the local recognition model, further includes the following steps:
retraining the initial recognition model with the second training sample set to obtain the local recognition model; or,
sending the second training sample set to the server, and obtaining from the server the local recognition model trained on the second training samples.
In this optional implementation, after the local end has collected the second training sample set, it may train the initial recognition model locally to obtain the local recognition model, or it may send the second training sample set to the server, have the server train the initial recognition model to obtain the local recognition model, and have the server send the trained local recognition model back to the local end.
For example, the local device sends the second training samples annotated according to the local annotation method to the remote server; the remote server completes the retraining on the basis of the initial recognition model, sends the retrained model to the local device over the network, and the retrained model is deployed on the local device.
For another example, the locally deployed device sends the second training samples annotated according to the local annotation method, together with the annotation method, to the remote server; the remote server re-annotates the second training samples according to that standard and completes the retraining process. Re-annotation here refers to annotation completed manually for the data samples sent by the locally deployed device. Further, the remote server may send the annotated second training samples to the local end for confirmation. To facilitate the manual annotation of the data and the confirmation of the annotations, the remote server and the locally deployed device may include a display device and a human-machine interface for showing samples to operators and receiving annotation information.
In an optional implementation of this embodiment, the initial recognition model is obtained by training a machine learning model based on a first neural network with the first training sample set; the local recognition model is obtained by training a machine learning model based on a second neural network with the second training sample set; and some or all node values of the second neural network correspond to the node values of the first neural network after it has been trained on the first training sample set.
In this optional implementation, the initial recognition model is obtained by neural network learning. Before training, the node value of every node of the first neural network used by the initial recognition model is a preset value, for example 0; while training with the first training samples in the first training sample set, the node values of the nodes of the first neural network are continually updated, finally reaching the values corresponding to the initial recognition model. The local recognition model is obtained by retraining on the basis of the initial recognition model, so the node values of the second neural network used by the local recognition model no longer start from the preset value such as 0, but from the values corresponding to the initial recognition model (when the first neural network and the second neural network have identical network nodes). When the initial recognition model is retrained with the second training sample set, the node values of the corresponding nodes of the second neural network are continually updated from those initial values until the node values corresponding to the local recognition model are reached.
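The node-value carry-over can be illustrated as follows, assuming PyTorch and a small convolutional classifier; `BoardNet` and its layer sizes are assumptions, not the network disclosed by the patent.

```python
# When the first and second neural networks have the same structure, the second
# network starts from the trained node values (weights) of the first network
# instead of preset values, and retraining then updates them further.

import torch
from torch import nn

class BoardNet(nn.Module):
    """Assumed small CNN standing in for the first/second neural network."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

first_network = BoardNet(num_classes=3)
# ... first_network is trained on the first training sample set at the server ...

second_network = BoardNet(num_classes=3)
# Initialize the second network's node values from the trained first network
# rather than from preset values such as 0.
second_network.load_state_dict(first_network.state_dict())
# Retraining on the second training sample set then updates these values further.
```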
In an optional implementation of this embodiment, the second neural network includes all the nodes of the first neural network together with newly generated nodes, and/or the output nodes of the second neural network differ from those of the first neural network.
In this optional implementation, the nodes of the first neural network and of the second neural network may differ; the second neural network may be obtained from the first neural network by adding new nodes, and the output nodes of the second neural network may also differ from those of the first neural network. To adapt to the different standards by which each factory classifies boards, the initial recognition model may be a coarser classification model, so its first neural network may have fewer output nodes (the number of output nodes corresponds to the number of annotation results). During retraining at the local end, new nodes can be expanded on the basis of the first neural network, for example by adding new network layers; the node values of the new nodes in the newly added layers are set to a preset value such as 0, while the node values of the other original nodes are the values obtained after training the first neural network, so that the second neural network can better adapt to the new annotation system and reach the target performance. Furthermore, the output nodes of the second neural network can be increased so that there are more annotation results. For example, in a timber classification system, the first neural network has fewer output nodes and annotates only a small number of categories; after the number of output nodes of the second neural network is increased, the annotation categories used for retraining often mean more categories, and such an arrangement speeds up retraining. In this way, the nodes inherited from the first neural network in the initial recognition model handle timber-related feature extraction well, while the newly added neural network nodes can be used to handle the differences between the local annotation method and the general annotation method.
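A sketch of such an extended second network follows, reusing the assumed `BoardNet` from the previous sketch; the hidden width, layer count and the example of 5 local classes versus 3 server classes are assumptions, and the newly added layers here simply keep a fresh initialization rather than literal zeros.

```python
# Build a second neural network that keeps the trained feature-extraction nodes
# of the first network, adds newly generated nodes, and widens the output nodes
# from the coarse server categories (e.g. 3) to the finer local categories (e.g. 5).

from torch import nn

def build_second_network(first_network, num_local_classes: int = 5) -> nn.Module:
    hidden = 16  # width of the feature output in the assumed BoardNet
    second_network = nn.Sequential(
        first_network.features,                 # nodes inherited from the first network
        nn.Flatten(),
        nn.Linear(hidden, hidden), nn.ReLU(),   # newly generated nodes, fresh init
        nn.Linear(hidden, num_local_classes),   # more output nodes than the first network
    )
    # The inherited feature nodes keep the values learned from the first training
    # sample set; the new layers are trained from scratch during local retraining.
    return second_network
```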
Optionally, the first annotation result and the second annotation result include one or more of timber classification, cutting scheme, defect recognition and grade assessment; the recognition results of the initial recognition model and the local recognition model include one or more of timber classification, cutting scheme, defect recognition and grade assessment.
Optionally, the first annotation result and the second annotation result use different labelling schemes. The annotation method of the initial recognition model distributed by the server end to each local end may be a relatively coarse labelling scheme, while the local end may annotate the second training samples in the second training sample set according to a local labelling scheme, which may differ from the labelling scheme of the server end. For example, in a timber classification system, the labelling scheme of the server end annotates samples into three types, while the labelling scheme of a certain local end classifies samples into five finer types. This can be achieved by adding new model parameters on the basis of the machine learning model corresponding to the initial recognition model; for a neural network, for example, it can be achieved by adding network layers and/or output nodes. In this way, the labelling standards of the local ends can be accommodated flexibly.
In an optional implementation of this embodiment, as shown in Fig. 3, the method further includes the following steps S301-S302:
In step S301, a third training sample is collected;
In step S302, the third training sample is input into the initial recognition model, and the obtained recognition result together with the corresponding third training sample is sent to the server, to be added to the first training sample set.
In this optional implementation, the local end may also send the collected samples and the recognition results obtained from the initial recognition model to the server end, so that the server end adds them to the first training sample set, thereby expanding the first training sample set at the server end.
Of course, in other embodiments, the local end may also obtain from the server end the first labelling scheme corresponding to the first annotation results, annotate the locally collected samples with the first labelling scheme, and then send them to the server end to expand the first training sample set at the server end.
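Steps S301-S302 can be sketched as follows; the `server` object and its upload method are hypothetical stand-ins for the communication network.

```python
# Run locally collected third training samples through the initial recognition
# model and send (sample, result) pairs back to the server to grow the first
# training sample set.

import torch
from torch import nn

def feed_back_third_samples(initial_model: nn.Module, third_samples, server):
    initial_model.eval()
    uploads = []
    with torch.no_grad():
        for image in third_samples:                       # step S301: collected locally
            predicted = initial_model(image.unsqueeze(0)).argmax(dim=1).item()
            uploads.append((image, predicted))            # recognition result as annotation
    server.upload_first_training_samples(uploads)         # step S302: extend the first set
```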
In an optional implementation of this embodiment, the method further includes the following step: sending the local recognition model to the server. In this optional implementation, after training yields the local recognition model, the local end sends it to the server end for storage, so that other local ends with the same labelling standard can use the local recognition model directly without retraining. Optionally, the server end may also evaluate the multiple local recognition models sent by different local ends, select the best one as the initial recognition model, and redistribute it to each local end for use.
The adaptability of the remote server's initial recognition model is relatively poor, because the initial recognition model can only be produced according to pre-set labelling standards rather than the actual demands of the local ends. As local recognition models are deployed locally, local data accumulate continually, so the retrained local recognition models gradually incorporate more training data and more practical annotation methods. The server therefore periodically sets a better local recognition model as the initial recognition model, which ensures that the server and the multiple locally deployed devices can keep evolving the machine learning model and, as data accumulate, continually improve the efficiency and accuracy of the system.
In an optional implementation of this embodiment, step S101, i.e. the step of obtaining the initial recognition model for board recognition, further includes the following step:
obtaining multiple initial recognition models;
wherein the local recognition model is the model with the best recognition performance chosen after each of the multiple initial recognition models has been retrained with the second training sample set.
In this optional implementation, the local end may obtain multiple initial recognition models from the server, or obtain several different sets of training samples from the server end and train machine learning models with them to obtain multiple initial recognition models. The local end then retrains the multiple initial recognition models with the local second training sample set, and selects from the multiple retrained models the one with the best recognition performance as the local recognition model. The recognition performance can be judged from the recognition results of these trained models on test data.
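The selection step can be sketched as follows; accuracy on a held-out test set is used here as an assumed measure of "recognition performance", and `fit` stands for a generic training loop such as the one in the earlier sketch.

```python
# Retrain each candidate initial recognition model on the second training
# sample set and keep the one with the best recognition performance.

import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

def accuracy(model: nn.Module, test_set: Dataset) -> float:
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in DataLoader(test_set, batch_size=32):
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.numel()
    return correct / max(total, 1)

def select_local_model(initial_models, second_training_set, test_set, fit):
    best_model, best_score = None, -1.0
    for candidate in initial_models:
        retrained = fit(copy.deepcopy(candidate), second_training_set)
        score = accuracy(retrained, test_set)
        if score > best_score:
            best_model, best_score = retrained, score
    return best_model
```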
In one embodiment, the local image acquisition device can obtain the remote server's standardized annotation method over the network, annotate local samples according to the standardized annotation method, and then send the annotated local samples to the remote server over the network, to be added to the first training sample set.
In one embodiment, the local acquisition device sends the second training samples annotated according to the local annotation method and their corresponding second annotation results to the remote server; the remote server completes the retraining on the basis of the initial recognition model using these second training samples and their corresponding second annotation results, sends the retrained local recognition model to the local device over the network, and the retrained local recognition model is deployed on the local device, which saves resources of the local device and improves efficiency.
In one embodiment, the locally deployed device sends the locally collected second training samples and the annotation method to the remote server; the remote server re-annotates them according to the local annotation method and completes the retraining process to obtain the local recognition model. Re-annotation here refers to annotation completed manually according to the annotation method sent by the locally deployed device. Further, the remote server may send the annotated second training samples to the local device for confirmation. To facilitate the manual annotation of the training samples and the confirmation of the annotations, the remote server and the locally deployed device include a display device and a human-machine interface for showing samples to operators and receiving annotation information.
In one embodiment, the remote server distributes the initial recognition model and a retraining standard over the network. The retraining standard includes specifications such as the data format, data quantity and annotation method for retraining. The local end collects and re-annotates samples according to the retraining standard, so that the retraining process better matches the generic model generated by the server.
In one embodiment, the locally deployed device sends an initial-recognition-model specification to the server; the server receives the specification of the initial recognition model, trains a machine learning model with the first training sample set stored on the server to obtain the initial recognition model, and then sends the trained initial recognition model to the locally deployed device. In this way, the locally deployed device can determine in advance a suitable initial recognition model according to the demands of the local factory. Because the local factory lacks sufficient training data and computing resources, it only sends the specification of the initial recognition model to the server, and the server completes the training. For example, the locally deployed device can generate, according to local computing resources and model requirements, specifications such as the neural network structure, the size of the training set and the definition of the annotation method, and send the specification to the server. The server trains an initial recognition model according to the specification, sends it back to the locally deployed device, and the locally deployed device completes the deployment.
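As a sketch of what such a specification might look like, the following example is given in Python; the field names and values are assumptions, since the disclosure only says the specification covers the network, the training set size and the definition of the annotation method.

```python
# Hypothetical initial-recognition-model specification sent by a locally
# deployed device to the server, then used by the server to train the model.

import json

model_specification = {
    "network": {"type": "convolutional", "layers": 8, "input_size": [224, 224, 3]},
    "training_set": {"min_samples": 100000},       # quantity of first training samples
    "annotation": {                                 # definition of the annotation method
        "task": "board_classification",
        "classes": ["grade_A", "grade_B", "grade_C"],
    },
    "local_compute": {"gpu": False, "max_retrain_hours": 2},
}

# The local device would serialize this and send it over the communication
# network; the server trains an initial recognition model matching the spec.
payload = json.dumps(model_specification)
```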
The following are apparatus embodiments of the present disclosure, which can be used to carry out the method embodiments of the present disclosure.
Fig. 4 shows a structural diagram of a machine re-learning apparatus for board recognition according to an embodiment of the present disclosure. The apparatus can be implemented by software, hardware or a combination of both as part or all of an electronic device. As shown in Fig. 4, the machine re-learning apparatus for board recognition includes a first acquisition module 401, a second acquisition module 402 and a third acquisition module 403:
the first acquisition module 401 is configured to obtain an initial recognition model for board recognition; the initial recognition model has been trained on a first training sample set; the first training sample set includes at least one first training sample and a first annotation result corresponding to the first training sample;
the second acquisition module 402 is configured to obtain a second training sample set; the second training sample set includes at least one second training sample and a second annotation result corresponding to the second training sample;
the third acquisition module 403 is configured to obtain a local recognition model; the local recognition model is obtained by retraining the initial recognition model on the second training sample set;
wherein the first acquisition module obtains, from a server over a communication network, the initial recognition model that has already been trained on the first training sample set; or obtains the first training sample set from the server over a communication network and trains locally with the first training sample set to obtain the initial recognition model.
In an optional implementation of this embodiment, the second acquisition module 402 includes:
an adding submodule, configured to, when the confidence of the recognition result of the initial recognition model for a board to be recognized is below a threshold, add the second training sample and the second annotation result corresponding to the board to be recognized to the second training sample set.
In an optional implementation of this embodiment, the third acquisition module 403 includes:
a first acquisition submodule, configured to retrain the initial recognition model with the second training sample set to obtain the local recognition model; or,
a second acquisition submodule, configured to send the second training sample set to the server and obtain from the server the local recognition model trained on the second training samples.
In an optional implementation of this embodiment, the initial recognition model is obtained by training a machine learning model based on a first neural network with the first training sample set; the local recognition model is obtained by training a machine learning model based on a second neural network with the second training sample set; and some or all node values of the second neural network correspond to the node values of the first neural network after it has been trained on the first training sample set.
In an optional implementation of this embodiment, the second neural network includes all the nodes of the first neural network together with newly generated nodes, and/or the output nodes of the second neural network differ from those of the first neural network.
In an optional implementation of this embodiment, the first annotation result and the second annotation result include one or more of timber classification, cutting scheme, defect recognition and grade assessment; the recognition results of the initial recognition model and the local recognition model include one or more of timber classification, cutting scheme, defect recognition and grade assessment.
In an optional implementation of this embodiment, the first annotation result and the second annotation result use different labelling schemes.
In an optional implementation of this embodiment, the apparatus further includes:
an acquisition module, configured to collect a third training sample;
a sample update module, configured to input the third training sample into the initial recognition model and send the obtained recognition result together with the corresponding third training sample to the server, to be added to the first training sample set.
In an optional implementation of this embodiment, the apparatus further includes:
a sending module, configured to send the local recognition model to the server.
In an optional implementation of this embodiment, the first acquisition module 401 includes:
a third acquisition submodule, configured to obtain multiple initial recognition models;
wherein the local recognition model is the model with the best recognition performance chosen after each of the multiple initial recognition models has been retrained with the second training sample set.
The machine re-learning apparatus for board recognition described above corresponds to the machine re-learning method for board recognition described above; for specific details, refer to the description of the machine re-learning method for board recognition, which is not repeated here.
Fig. 5 shows a structural diagram of an electronic device suitable for implementing the machine re-learning method for board recognition according to an embodiment of the present disclosure.
As shown in Fig. 5, the electronic device 500 includes a central processing unit (CPU) 501, which can perform the various processes of the embodiment shown in Fig. 1 described above according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The CPU 501, the ROM 502 and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom can be installed into the storage section 508 as needed.
In particular, according to an embodiment of the present disclosure, the method described above with reference to Fig. 1 can be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product that includes a computer program tangibly embodied on a machine-readable medium, the computer program including program code for performing the machine re-learning method for board recognition of Fig. 1. In such an embodiment, the computer program can be downloaded and installed from a network via the communication section 509 and/or installed from the removable medium 511.
The flowcharts and block diagrams in the drawings illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a portion of code, which includes one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units or modules involved in the embodiments of the present disclosure may be implemented by software or by hardware. The described units or modules may also be provided in a processor, and the names of these units or modules do not in some cases constitute a limitation on the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus described in the above embodiments, or may exist separately without being assembled into the device. The computer-readable storage medium stores one or more programs, and the programs are used by one or more processors to perform the methods described in the present disclosure.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of invention involved in the present disclosure is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the inventive concept, for example technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

Claims (10)

  1. A machine re-learning method for board recognition, characterized in that it comprises:
    obtaining an initial recognition model for board recognition; the initial recognition model has been trained on a first training sample set; the first training sample set comprises at least one first training sample and a first annotation result corresponding to the first training sample;
    obtaining a second training sample set; the second training sample set comprises at least one second training sample and a second annotation result corresponding to the second training sample;
    obtaining a local recognition model; the local recognition model is obtained by retraining the initial recognition model on the second training sample set;
    wherein obtaining the initial recognition model for board recognition comprises:
    obtaining, from a server over a communication network, the initial recognition model that has already been trained on the first training sample set; or,
    obtaining the first training sample set from the server over a communication network, and training locally with the first training sample set to obtain the initial recognition model.
  2. The machine re-learning method according to claim 1, characterized in that obtaining the second training sample set comprises:
    when the confidence of the recognition result of the initial identification model for a plank to be identified is below a threshold, adding the second training sample and the second annotation result corresponding to the plank to be identified to the second training sample set.
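A possible realization of claim 2, sketched under the assumption that the model outputs per-class logits and that confidence is taken as the maximum softmax probability; the threshold value of 0.8 and the ask_annotator callback are placeholders, not specified by the claim.

```python
import torch

CONFIDENCE_THRESHOLD = 0.8   # assumed value; the claim only requires "a threshold"

def screen_sample(model, image, second_samples, second_labels, ask_annotator):
    """If the initial model is not confident about this plank image, collect it
    (together with a fresh annotation) into the second training sample set."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(image.unsqueeze(0)), dim=1)
    confidence, predicted = probs.max(dim=1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        second_samples.append(image)                 # second training sample
        second_labels.append(ask_annotator(image))   # second annotation result
    return predicted.item(), confidence.item()
```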
  3. The machine re-learning method according to claim 1, characterized in that obtaining the local identification model comprises:
    retraining the initial identification model with the second training sample set to obtain the local identification model; or,
    sending the second training sample set to a server, and obtaining from the server the local identification model trained with the second training sample set.
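Claim 3 allows the retraining to happen either on the device or on a server. The sketch below assumes an HTTP endpoint (the URL is entirely hypothetical) that accepts the pickled second sample set and returns retrained weights; the local branch reuses the train helper from the claim-1 sketch.

```python
import io
import pickle
import requests
import torch

def get_local_model(initial_model, second_sample_set, use_server=False,
                    server_url="https://example.com/retrain"):   # hypothetical endpoint
    if not use_server:
        # Local retraining of the initial model on the second sample set.
        return train(initial_model, second_sample_set, epochs=3, lr=1e-4)
    # Server-side retraining: upload the samples, download the retrained weights.
    payload = pickle.dumps(second_sample_set)
    resp = requests.post(server_url, files={"samples": payload})
    resp.raise_for_status()
    state_dict = torch.load(io.BytesIO(resp.content))
    initial_model.load_state_dict(state_dict)
    return initial_model
```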
  4. The machine re-learning method according to claim 1, characterized in that the initial identification model is obtained by training a machine learning model based on a first neural network with the first training sample set; the local identification model is obtained by training a machine learning model based on a second neural network with the second training sample set, and some or all of the node values of the second neural network are consistent with the corresponding node values of the first neural network after it has been trained with the first training sample set.
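One common way to keep the trained node values of the first neural network in the second network, as claim 4 describes, is to copy every parameter tensor whose name and shape match. This sketch assumes both networks are PyTorch modules; it is one possible realization, not the only one the claim covers.

```python
def transfer_node_values(first_net, second_net):
    """Copy trained parameters from the first network into the second wherever the
    layer name and tensor shape agree, leaving newly added layers untouched."""
    src = first_net.state_dict()
    dst = second_net.state_dict()
    for name, tensor in src.items():
        if name in dst and dst[name].shape == tensor.shape:
            dst[name] = tensor.clone()   # keep the already-trained node values
    second_net.load_state_dict(dst)
    return second_net
```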
  5. The machine re-learning method according to claim 4, characterized in that the second neural network comprises all nodes of the first neural network together with newly generated nodes, and/or the output nodes of the second neural network differ from those of the first neural network.
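The claim-5 variant, in which the second network keeps the whole first network and adds newly generated nodes plus a different set of output nodes (for example grade assessment instead of species classification), might look as follows. It assumes the first network, with its old output layer removed, yields a 128-dimensional feature vector; the hidden width and the six output grades are likewise assumptions.

```python
import torch.nn as nn

class SecondNetwork(nn.Module):
    """Reuses the first network as a backbone and appends newly generated
    nodes together with a different set of output nodes."""
    def __init__(self, first_network, feature_size=128, new_hidden=64, num_outputs=6):
        super().__init__()
        self.backbone = first_network                     # all nodes of the first network
        self.extra = nn.Sequential(nn.Linear(feature_size, new_hidden), nn.ReLU())
        self.head = nn.Linear(new_hidden, num_outputs)    # different output nodes

    def forward(self, x):
        return self.head(self.extra(self.backbone(x)))
```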
  6. The machine re-learning method according to claim 1, characterized in that the first annotation result and the second annotation result comprise one or more of timber classification, cutting scheme, defect recognition and grade assessment; the recognition results of the initial identification model and of the local identification model comprise one or more of timber classification, cutting scheme, defect recognition and grade assessment.
  7. The machine re-learning method according to claim 1, characterized in that the first annotation result and the second annotation result use different labelling schemes.
  8. A machine re-learning device for plank identification, characterized by comprising:
    a first acquisition module configured to obtain an initial identification model for plank identification, the initial identification model having been trained with a first training sample set, the first training sample set comprising at least one first training sample and a first annotation result corresponding to the first training sample;
    a second acquisition module configured to obtain a second training sample set, the second training sample set comprising at least one second training sample and a second annotation result corresponding to the second training sample;
    a third acquisition module configured to obtain a local identification model, the local identification model being obtained by retraining the initial identification model with the second training sample set;
    wherein the first acquisition module obtains, from a server via a communication network, the initial identification model that has been trained with the first training sample set, or obtains the first training sample set from a server via a communication network and locally trains with the first training sample set to obtain the initial identification model.
  9. An electronic device, characterized by comprising a memory and a processor; wherein,
    the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method steps of any one of claims 1-10.
  10. A computer-readable storage medium having computer instructions stored thereon, characterized in that the computer instructions, when executed by a processor, implement the method steps of any one of claims 1-10.
CN201711347768.6A 2017-12-14 2017-12-14 Machine learning method, device, electronic equipment and the storage medium again of plank identification Pending CN107967491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711347768.6A CN107967491A (en) 2017-12-14 2017-12-14 Machine learning method, device, electronic equipment and the storage medium again of plank identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711347768.6A CN107967491A (en) 2017-12-14 2017-12-14 Machine learning method, device, electronic equipment and the storage medium again of plank identification

Publications (1)

Publication Number Publication Date
CN107967491A true CN107967491A (en) 2018-04-27

Family

ID=61995400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711347768.6A Pending CN107967491A (en) 2017-12-14 2017-12-14 Machine learning method, device, electronic equipment and the storage medium again of plank identification

Country Status (1)

Country Link
CN (1) CN107967491A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805091A (en) * 2018-06-15 2018-11-13 北京字节跳动网络技术有限公司 Method and apparatus for generating model
CN108898179A (en) * 2018-06-28 2018-11-27 广东科达洁能股份有限公司 A kind of ceramic tile presses grade packing method and system
CN109291049A (en) * 2018-09-30 2019-02-01 北京木业邦科技有限公司 Data processing method, device and control equipment
CN109344155A (en) * 2018-08-24 2019-02-15 北京木业邦科技有限公司 Timber metrical information automatic record method, device, electronic equipment and storage medium
CN109410220A (en) * 2018-10-16 2019-03-01 腾讯科技(深圳)有限公司 Image partition method, device, computer equipment and storage medium
CN109409428A (en) * 2018-10-25 2019-03-01 北京木业邦科技有限公司 Training method, device and the electronic equipment of plank identification and plank identification model
CN109558892A (en) * 2018-10-30 2019-04-02 银河水滴科技(北京)有限公司 A kind of target identification method neural network based and system
CN109711611A (en) * 2018-12-17 2019-05-03 北京木业邦科技有限公司 Timber cuts volume recovery recognition methods, device, electronic equipment and storage medium
CN110059549A (en) * 2019-03-11 2019-07-26 齐鲁工业大学 A kind of thin wood plate categorizing system and algorithm based on deep learning
CN110516572A (en) * 2019-08-16 2019-11-29 咪咕文化科技有限公司 A kind of method, electronic equipment and storage medium identifying competitive sports video clip
WO2020134533A1 (en) * 2018-12-29 2020-07-02 北京市商汤科技开发有限公司 Method and apparatus for training deep model, electronic device, and storage medium
CN111832591A (en) * 2019-04-23 2020-10-27 创新先进技术有限公司 Machine learning model training method and device
CN112183321A (en) * 2020-09-27 2021-01-05 深圳奇迹智慧网络有限公司 Method and device for optimizing machine learning model, computer equipment and storage medium
TWI786555B (en) * 2021-02-26 2022-12-11 寶元數控股份有限公司 Pattern identification and classification method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426857A (en) * 2015-11-25 2016-03-23 小米科技有限责任公司 Training method and device of face recognition model
CN106778791A (en) * 2017-03-01 2017-05-31 成都天衡电科科技有限公司 A kind of timber visual identity method based on multiple perceptron
CN107305636A (en) * 2016-04-22 2017-10-31 株式会社日立制作所 Target identification method, Target Identification Unit, terminal device and target identification system
CN107944504A (en) * 2017-12-14 2018-04-20 北京木业邦科技有限公司 Plank identifies and machine learning method, device and the electronic equipment of plank identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426857A (en) * 2015-11-25 2016-03-23 小米科技有限责任公司 Training method and device of face recognition model
CN107305636A (en) * 2016-04-22 2017-10-31 株式会社日立制作所 Target identification method, Target Identification Unit, terminal device and target identification system
CN106778791A (en) * 2017-03-01 2017-05-31 成都天衡电科科技有限公司 A kind of timber visual identity method based on multiple perceptron
CN107944504A (en) * 2017-12-14 2018-04-20 北京木业邦科技有限公司 Plank identifies and machine learning method, device and the electronic equipment of plank identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DENNY402: "Caffe Learning Series (23): How to apply a model trained by others to your own data", 《CNBLOGS, HTTPS://BLOG.CSDN.NET/HALUOLUO211/ARTICLE/DETAILS/53103050》 *
KHALID M: "Design of an intelligent wood species recognition system", 《INTERNATIONAL JOURNAL OF SIMULATION SYSTEM, SCIENCE AND TECHNOLOGY》 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805091A (en) * 2018-06-15 2018-11-13 北京字节跳动网络技术有限公司 Method and apparatus for generating model
CN108898179A (en) * 2018-06-28 2018-11-27 广东科达洁能股份有限公司 A kind of ceramic tile presses grade packing method and system
WO2020001480A1 (en) * 2018-06-28 2020-01-02 广东科达洁能股份有限公司 Method and system for packaging tiles by grade
CN108898179B (en) * 2018-06-28 2023-03-14 广东科达洁能股份有限公司 Method and system for packaging ceramic tiles according to grades
CN109344155A (en) * 2018-08-24 2019-02-15 北京木业邦科技有限公司 Timber metrical information automatic record method, device, electronic equipment and storage medium
CN109291049B (en) * 2018-09-30 2021-03-05 北京木业邦科技有限公司 Data processing method and device and control equipment
CN109291049A (en) * 2018-09-30 2019-02-01 北京木业邦科技有限公司 Data processing method, device and control equipment
CN109410220A (en) * 2018-10-16 2019-03-01 腾讯科技(深圳)有限公司 Image partition method, device, computer equipment and storage medium
CN111062952A (en) * 2018-10-16 2020-04-24 腾讯科技(深圳)有限公司 Lung image segmentation apparatus, method and storage medium
CN111062952B (en) * 2018-10-16 2022-09-30 腾讯科技(深圳)有限公司 Lung image segmentation apparatus, method and storage medium
CN109410220B (en) * 2018-10-16 2019-12-24 腾讯科技(深圳)有限公司 Image segmentation method and device, computer equipment and storage medium
WO2020078263A1 (en) * 2018-10-16 2020-04-23 腾讯科技(深圳)有限公司 Image segmentation method and apparatus, computer device and storage medium
CN109409428A (en) * 2018-10-25 2019-03-01 北京木业邦科技有限公司 Training method, device and the electronic equipment of plank identification and plank identification model
CN109558892A (en) * 2018-10-30 2019-04-02 银河水滴科技(北京)有限公司 A kind of target identification method neural network based and system
CN109711611B (en) * 2018-12-17 2021-08-17 北京木业邦科技有限公司 Wood cutting outturn rate identification method and device, electronic equipment and storage medium
CN109711611A (en) * 2018-12-17 2019-05-03 北京木业邦科技有限公司 Timber cuts volume recovery recognition methods, device, electronic equipment and storage medium
WO2020134533A1 (en) * 2018-12-29 2020-07-02 北京市商汤科技开发有限公司 Method and apparatus for training deep model, electronic device, and storage medium
TWI747120B (en) * 2018-12-29 2021-11-21 大陸商北京市商湯科技開發有限公司 Method, device and electronic equipment for depth model training and storage medium thereof
JP2021536083A (en) * 2018-12-29 2021-12-23 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Deep model training methods and their equipment, electronic devices and storage media
JP7110493B2 (en) 2018-12-29 2022-08-01 ベイジン・センスタイム・テクノロジー・デベロップメント・カンパニー・リミテッド Deep model training method and its device, electronic device and storage medium
CN110059549A (en) * 2019-03-11 2019-07-26 齐鲁工业大学 A kind of thin wood plate categorizing system and algorithm based on deep learning
CN111832591A (en) * 2019-04-23 2020-10-27 创新先进技术有限公司 Machine learning model training method and device
CN110516572A (en) * 2019-08-16 2019-11-29 咪咕文化科技有限公司 A kind of method, electronic equipment and storage medium identifying competitive sports video clip
CN110516572B (en) * 2019-08-16 2022-06-28 咪咕文化科技有限公司 Method for identifying sports event video clip, electronic equipment and storage medium
CN112183321A (en) * 2020-09-27 2021-01-05 深圳奇迹智慧网络有限公司 Method and device for optimizing machine learning model, computer equipment and storage medium
TWI786555B (en) * 2021-02-26 2022-12-11 寶元數控股份有限公司 Pattern identification and classification method and system

Similar Documents

Publication Publication Date Title
CN107967491A (en) Machine learning method, device, electronic equipment and the storage medium again of plank identification
CN106570477B (en) Vehicle cab recognition model building method and model recognizing method based on deep learning
CN110135231A (en) Animal face recognition methods, device, computer equipment and storage medium
CN110413786B (en) Data processing method based on webpage text classification, intelligent terminal and storage medium
CN112241452B (en) Model training method and device, electronic equipment and storage medium
CN104881770A (en) Express bill information identification system and express bill information identification method
CN110163236A (en) The training method and device of model, storage medium, electronic device
CN108734296A (en) Optimize method, apparatus, electronic equipment and the medium of the training data of supervised learning
CN109635292B (en) Work order quality inspection method and device based on machine learning algorithm
CN106874926A (en) Service exception detection method and device based on characteristics of image
CN108009497A (en) Image recognition monitoring method, system, computing device and readable storage medium storing program for executing
CN109598307A (en) Data screening method, apparatus, server and storage medium
CN107590195A (en) Textual classification model training method, file classification method and its device
CN107203841A (en) The method of inspection and device of product quality
CN107729469A (en) Usage mining method, apparatus, electronic equipment and computer-readable recording medium
CN111949795A (en) Work order automatic classification method and device
CN106997488A (en) A kind of action knowledge extraction method of combination markov decision process
CN107958270A (en) Classification recognition methods, device, electronic equipment and computer-readable recording medium
JPWO2020240808A1 (en) Learning device, classification device, learning method, classification method, learning program, and classification program
CN110287888A (en) A kind of TV station symbol recognition method and system
CN108171275A (en) For identifying the method and apparatus of flowers
CN108256550A (en) A kind of timber classification update method and device
CN113569852A (en) Training method and device of semantic segmentation model, electronic equipment and storage medium
US20190205361A1 (en) Table-meaning estimating system, method, and program
JP6988995B2 (en) Image generator, image generator and image generator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180427