CN108985386A - Method for obtaining an image processing model, image processing method, and corresponding apparatus - Google Patents
Method for obtaining an image processing model, image processing method, and corresponding apparatus
- Publication number
- CN108985386A (application CN201810891677.7A)
- Authority
- CN
- China
- Prior art keywords
- image processing
- network
- neural network
- model
- processing model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The present invention relates to the technical field of image processing, and provides a method for obtaining an image processing model, an image processing method, and corresponding apparatus. The method for obtaining an image processing model includes: generating new neural networks whose structures differ from those of the neural networks in an initial network set, deleting the newly generated neural networks whose running time exceeds a preset time limit, and adding the remaining neural networks to a candidate network set; selecting at least one neural network from the candidate network set and training it on a training set; and determining at least one image processing model usable for an image processing task. In this method, since the candidate network set has already passed the speed test, the running speed of the at least one selected neural network is guaranteed, so training it yields models that run faster. Meanwhile, since only the selected neural networks are trained, a large number of training tasks is avoided and the search efficiency for image processing models is improved.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method for obtaining an image processing model, an image processing method, and corresponding apparatus.
Background Art
Designing an appropriate neural network structure for a given image processing task is a basic problem in present-day deep learning. Because many neural network designs follow certain principles, such a network can usually be described by a set of network architecture parameters. However, since these parameters admit many combinations, an optimal configuration still has to be searched for, and the time and resources required to tune the parameters manually are costly. To reduce this workload, methods that search for neural network models automatically are needed.

In current methods for searching neural network models, a large amount of model training is generally required and considerable hardware resources are needed, which makes these methods difficult to apply at scale.
Summary of the invention
In view of this, embodiments of the present invention provide a method for obtaining an image processing model, an image processing method, and corresponding apparatus, so as to solve the above technical problems.

To achieve the above object, the present invention provides the following technical schemes:
In a first aspect, an embodiment of the present invention provides a method for obtaining an image processing model, comprising:

generating new neural networks whose structures differ from those of the neural networks in an initial network set, deleting the newly generated neural networks whose running time exceeds a preset time limit, and adding the remaining neural networks to a candidate network set, where the running time is the time a neural network takes to finish processing preset images;

selecting at least one neural network from the candidate network set and training it on the training set of an image processing task;

determining at least one image processing model usable for the image processing task, where each image processing model is a trained neural network.
In the above method, the candidate network set is first obtained by generating neural networks, and then at least one neural network is selected from it for training. First, since the candidate network set has already passed the speed test, every neural network in it meets the running-time requirement, so the running speed of the at least one neural network selected from the candidate network set is guaranteed; training it on this basis yields image processing models that run faster. Meanwhile, not all neural networks in the candidate network set are trained; only the selected ones are, which avoids a large number of training tasks and improves the search efficiency for image processing models.
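The generate, speed-test, filter, and train flow summarized above can be sketched in a few lines of Python. This is an illustrative toy, not the patented implementation: networks are stand-in dictionaries with a hypothetical `depth` parameter, `mutate` stands in for the generation strategy, `measure_run_time` for the speed test, and `train` merely marks a network as trained.

```python
import random

def mutate(net):
    """Hypothetical generation strategy: change one architecture parameter."""
    return {**net, "depth": net["depth"] + random.choice([1, 2])}

def measure_run_time(net):
    """Stubbed speed test: pretend runtime grows with network depth."""
    return 0.01 * net["depth"]

def train(net):
    """Placeholder for training on the task's training set."""
    return {**net, "trained": True}

def search(initial_set, time_limit, num_to_train):
    """Generate new networks, keep only the fast ones, train a subset."""
    new_networks = [mutate(net) for net in initial_set]
    # Speed test: drop anything over the preset time limit.
    candidates = [n for n in new_networks if measure_run_time(n) <= time_limit]
    selected = candidates[:num_to_train]   # selection policy abstracted away
    return [train(n) for n in selected]

models = search([{"depth": 3}, {"depth": 50}], time_limit=0.2, num_to_train=1)
```

Only networks that survive the speed test ever reach `train`, which is the source of the claimed efficiency gain.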
With reference to the first aspect, in a first possible implementation of the first aspect, generating new neural networks whose structures differ from those of the neural networks in the initial network set, deleting the newly generated neural networks whose running time exceeds the preset time limit, and adding the remaining neural networks to the candidate network set comprises:

determining the initial network set as a baseline network set;

resetting, based on a preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks;

deleting the newly generated neural networks whose running time exceeds the preset time limit to obtain a generated network set containing the remaining neural networks, and adding the generated network set to the candidate network set;

redetermining the baseline network set based on the generated network set, and jumping back to the step of resetting, based on the preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks, iterating until an iteration termination condition is met.
The candidate neural networks can thus be obtained through the above iterative steps: in each iteration, the neural networks whose running time exceeds the limit are deleted from the newly generated networks, so the candidate network set retains only neural networks that meet the running-time requirement.
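The iteration described above can be sketched as follows; `generate` and `run_time` are hypothetical stand-ins for the generation strategy and the speed test, and the fixed `max_iters` plays the role of the iteration termination condition.

```python
def iterative_generation(initial_set, time_limit, run_time, generate, max_iters):
    """Iteratively grow the candidate set; only networks within the
    preset time limit survive into it and into the next baseline."""
    baseline = list(initial_set)      # "initial value" of the baseline set
    candidates = []
    for _ in range(max_iters):        # stands in for the termination condition
        generated = [generate(net) for net in baseline]
        survivors = [n for n in generated if run_time(n) <= time_limit]
        candidates.extend(survivors)
        if not survivors:             # nothing fast enough was produced
            break
        baseline = survivors          # redetermine the baseline set
    return candidates

cands = iterative_generation(
    [{"depth": 1}],
    time_limit=50,                            # e.g. milliseconds
    run_time=lambda n: 10 * n["depth"],       # stubbed speed test
    generate=lambda n: {"depth": n["depth"] + 1},
    max_iters=10)
```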
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, after resetting, based on the preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks, and before deleting the newly generated neural networks whose running time exceeds the preset time limit, the method further comprises:

deleting, from the newly generated neural networks, any neural network that has been generated before.

Deleting repeatedly generated neural networks avoids meaningless repeated computation.
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the network architecture parameters include at least one of: the type of the modules constituting the neural network, the number of channels of those modules, the connection mode between the modules, and the number of modules.

A neural network can be regarded as composed of one or more modules, and the above four network architecture parameters are the basic parameters describing those modules. Resetting the value of one or more of these parameters in an existing neural network, according to a certain generation strategy, generates a new neural network.
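The four architecture parameters can be pictured as a small record, with the generation strategy resetting one of them. All field names and value choices below are illustrative assumptions, not values from the patent.

```python
import dataclasses
import random

@dataclasses.dataclass(frozen=True)
class ArchParams:
    """The four architecture parameters (field names are illustrative)."""
    module_type: str   # type of the modules making up the network
    channels: int      # number of channels of the modules
    connection: str    # connection mode between the modules
    num_modules: int   # number of modules in the network

CHOICES = {  # hypothetical search space for each parameter
    "module_type": ["conv", "residual", "depthwise"],
    "channels": [16, 32, 64, 128],
    "connection": ["serial", "skip"],
    "num_modules": [2, 4, 8, 16],
}

def reset_one_param(params, rng):
    """Generation-strategy sketch: re-set the value of one parameter."""
    field = rng.choice(list(CHOICES))
    return dataclasses.replace(params, **{field: rng.choice(CHOICES[field])})

base = ArchParams("conv", 32, "serial", 4)
variant = reset_one_param(base, random.Random(0))
```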
With reference to the first possible implementation of the first aspect, in a fourth possible implementation of the first aspect, redetermining the baseline network set based on the generated network set comprises:

deleting, based on a preset pruning strategy, at least one neural network in the generated network set;

determining the generated network set after the deletion as the baseline network set.

Executing the pruning strategy prevents the number of neural networks in the generated network set from growing continuously during iteration and thereby degrading the performance of network generation.
With reference to the first aspect, in a fifth possible implementation of the first aspect, before generating new neural networks whose structures differ from those of the neural networks in the initial network set, the method further comprises:

adding the initial network set to the candidate network set.
With reference to the first aspect, in a sixth possible implementation of the first aspect, selecting at least one neural network from the candidate network set and training it on the training set of the image processing task comprises:

assessing the network performance of the neural networks in the candidate network set with an evaluation function suited to the image processing task, and, based on the ranking of network performance, selecting at least one neural network to train on the training set of the image processing task.

In the above method, only at least one neural network meeting the performance requirement is selected for training, which significantly reduces the training burden while still ensuring that the trained models perform well. Performance here refers to a neural network's ability to complete a given image processing task, for example the error rate when the network is used for an image classification task.
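One way to picture such an evaluation function is as a cheap, parameterised score over network features, with ranking-based selection on top. The weighted-sum form and the feature names below are assumptions for illustration only.

```python
def make_evaluator(weights):
    """Hypothetical evaluation function: a weighted score over cheap network
    features, built to correlate positively with expected performance."""
    def evaluate(net):
        return sum(w * net[feature] for feature, w in weights.items())
    return evaluate

def select_for_training(candidates, evaluate, k):
    """Rank by the evaluation function and pick the top k for training."""
    return sorted(candidates, key=evaluate, reverse=True)[:k]

evaluate = make_evaluator({"depth": 1.0, "width": 0.5})
candidates = [{"depth": 2, "width": 4},
              {"depth": 5, "width": 1},
              {"depth": 1, "width": 1}]
chosen = select_for_training(candidates, evaluate, k=2)
```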
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the value of the evaluation function is positively correlated with the network performance of the neural network it assesses.

Choosing an evaluation function whose value is positively correlated with network performance makes the performance assessment of a neural network very intuitive to read off.
With reference to the sixth possible implementation of the first aspect, in an eighth possible implementation of the first aspect, after assessing the network performance of the neural networks in the candidate network set with the evaluation function suited to the image processing task and selecting, based on the assessment results, at least one neural network to train on the training set of the image processing task, the method further comprises:

updating the parameters of the evaluation function based on the training results.

According to the training results, the parameters of the evaluation function can be adjusted; neural networks are then reselected for training based on the updated evaluation function.
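Updating the evaluation function's parameters from training results could, for example, take a small gradient step per measured result; the least-squares update below is one hypothetical choice, not the patent's rule.

```python
def update_weights(weights, results, lr=0.1):
    """Nudge the evaluation function's parameters toward the measured
    training results (one least-squares gradient step per result)."""
    new = dict(weights)
    for net, measured in results:
        predicted = sum(w * net[f] for f, w in new.items())
        error = measured - predicted
        for f in new:
            new[f] += lr * error * net[f]
    return new

weights = {"depth": 0.0}
updated = update_weights(weights, [({"depth": 2}, 1.0)])
```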
With reference to the first aspect or any one of the first to eighth possible implementations of the first aspect, in a ninth possible implementation of the first aspect, the initial network set includes a null network, i.e., a neural network whose output is identical to its input.

Generating neural networks from a null network gives the largest generation range: in theory, any neural network can be generated.
With reference to the first aspect or any one of the first to eighth possible implementations of the first aspect, in a tenth possible implementation of the first aspect, the image processing task is one of an image classification task, an image segmentation task, an image detection task, and an image recognition task.

These are common image processing tasks that can be handled by neural network models; the method provided by the embodiments of the present invention can be applied to, but is not limited to, the above image processing tasks.
In a second aspect, an embodiment of the present invention provides an image processing method, comprising:

applying, on a first image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one first image processing model usable for the first image processing task;

selecting, based on the training results of the at least one first image processing model on the training set of the first image processing task, the first optimal image processing model with the best network performance from the at least one first image processing model;

executing the first image processing task using the first optimal image processing model.

In the above method, since the first optimal image processing model combines a fast running speed with good network performance, executing the first image processing task with it achieves good results.
In a third aspect, an embodiment of the present invention provides an image processing method, comprising:

applying, on a first image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one first image processing model usable for the first image processing task;

taking the set constituted by the at least one first image processing model as the initial network set, and applying, on a second image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one second image processing model usable for the second image processing task, where the second image processing task is an image processing task of a different type from the first image processing task;

selecting, based on the training results of the at least one second image processing model on the training set of the second image processing task, the second optimal image processing model with the best network performance from the at least one second image processing model;

executing the second image processing task using the second optimal image processing model.

In the above method, the first image processing models for the first image processing task are migrated to the second image processing models for the second image processing task, which amounts to a process of transfer learning: the models' training results on the first image processing task are used efficiently while the specific requirements of the second image processing task are still met. The resulting second optimal image processing model combines a fast running speed with good network performance, so executing the second image processing task with it achieves good results.
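The third-aspect transfer flow, reusing the first task's models as the initial network set for a second task, can be sketched as follows. This is an illustrative toy, not the patented procedure: `toy_search` stands in for the whole first-aspect search, and the `score`/`task` fields are hypothetical.

```python
def transfer_search(search, first_task, second_task, initial_set):
    """Third-aspect sketch: models found for the first task seed the
    initial network set of the search for a second, different task."""
    first_models = search(initial_set, first_task)
    second_models = search(first_models, second_task)    # transfer step
    return max(second_models, key=lambda m: m["score"])  # optimal model

def toy_search(initial_set, task):
    """Stand-in for the whole first-aspect search: 'training' just tags
    each network with the task and bumps a hypothetical score."""
    return [{**net, "task": task, "score": net["score"] + 1}
            for net in initial_set]

best = transfer_search(toy_search, "classification", "segmentation",
                       [{"score": 0}, {"score": 3}])
```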
In a fourth aspect, an embodiment of the present invention provides an apparatus for obtaining an image processing model, comprising:

a generation module, configured to generate new neural networks whose structures differ from those of the neural networks in an initial network set, delete the newly generated neural networks whose running time exceeds a preset time limit, and add the remaining neural networks to a candidate network set, where the running time is the time a neural network takes to finish processing preset images;

a training module, configured to select at least one neural network from the candidate network set and train it on the training set of an image processing task;

a model determining module, configured to determine at least one image processing model usable for the image processing task, where each image processing model is a trained neural network.
In a fifth aspect, an embodiment of the present invention provides an image processing apparatus, comprising:

a model obtaining module, configured to apply, on a first image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one first image processing model usable for the first image processing task;

an optimal model selecting module, configured to select, based on the training results of the at least one first image processing model on the training set of the first image processing task, the first optimal image processing model with the best network performance from the at least one first image processing model;

an execution module, configured to execute the first image processing task using the first optimal image processing model.
In a sixth aspect, an embodiment of the present invention provides an image processing apparatus, comprising:

a first model obtaining module, configured to apply, on a first image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one first image processing model usable for the first image processing task;

a second model obtaining module, configured to determine the set constituted by the at least one first image processing model as the initial network set, and apply, on a second image processing task, the method provided by the first aspect or any possible implementation thereof to obtain at least one second image processing model usable for the second image processing task, where the second image processing task is an image processing task of a different type from the first image processing task;

an optimal model selecting module, configured to select, based on the training results of the at least one second image processing model on the training set of the second image processing task, the second optimal image processing model with the best network performance from the at least one second image processing model;

an execution module, configured to execute the second image processing task using the second optimal image processing model.
In a seventh aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer program instructions which, when read and run by a processor, execute the steps of the methods provided by the embodiments of the present invention.

In an eighth aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing computer program instructions which, when read and run by the processor, execute the steps of the methods provided by the embodiments of the present invention.
To make the above objects, technical schemes, and beneficial effects of the present invention clearer and more comprehensible, specific embodiments are described in detail below with reference to the appended drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope; those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 shows a structural block diagram of an electronic device applicable to an embodiment of the present invention;

Fig. 2 shows a flowchart of the method for obtaining an image processing model provided by the first embodiment of the present invention;

Fig. 3 shows a flowchart of step S10 of the method for obtaining an image processing model provided by the first embodiment of the present invention;

Fig. 4 shows a flowchart of step S11 of the method for obtaining an image processing model provided by the first embodiment of the present invention;

Fig. 5 shows a functional block diagram of the apparatus for obtaining an image processing model provided by the fourth embodiment of the present invention;

Fig. 6 shows a functional block diagram of the image processing apparatus provided by the fifth embodiment of the present invention;

Fig. 7 shows a functional block diagram of the image processing apparatus provided by the sixth embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.

It should also be noted that similar labels and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. In the description of the present invention, the terms "first", "second", and the like are used only to distinguish one entity or operation from another, and are not to be understood as indicating or implying relative importance, or as requiring or implying any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Fig. 1 shows a structural block diagram of an electronic device 100 applicable to an embodiment of the present invention. Referring to Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image acquisition device 110, interconnected through a bus system 112 and/or a connection mechanism of another form (not shown).
The processor 102 may be a central processing unit (CPU) or a processing unit of another form having data processing capability and/or instruction execution capability, and can control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory. Nonvolatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 can run them to realize the methods of the embodiments of the present invention described below and/or other desired functions. Various applications and data, such as data used and/or generated by the applications, may also be stored on the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a loudspeaker, and the like.
It should be appreciated that the structure shown in Fig. 1 is only illustrative; the electronic device 100 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 may be realized in hardware, software, or a combination thereof. In the embodiments of the present invention, the electronic device 100 may be a server, a personal computer, a mobile device, a smart wearable device, an in-vehicle device, or the like, and is not limited to a physical device: it may also be, for example, a virtual machine or a cloud server.
First embodiment
Fig. 2 shows a flowchart of the method for obtaining an image processing model provided by the first embodiment of the present invention. The image processing model obtained by this method is a neural network model usable for a certain image processing task; image processing tasks here include, but are not limited to, image classification tasks, image segmentation tasks, image detection tasks, and image recognition tasks. Referring to Fig. 2, the method comprises:
Step S10: the processor 102 of the electronic device 100 generates new neural networks whose structures differ from those of the neural networks in an initial network set, deletes the newly generated neural networks whose running time exceeds a preset time limit, and adds the remaining neural networks to a candidate network set.
The initial network set contains at least one neural network, and new neural networks with structures different from the originals can be generated from each neural network in it. Each newly generated neural network is speed-tested to obtain its running time, which here refers to the time the neural network takes to finish processing preset images: for example, the time from inputting one or more preset images into the neural network until the network outputs the classification results for all of them.
After the running time of a neural network is obtained, it is compared with the preset time limit. If the running time exceeds the preset time limit, the network structure of that neural network executes inefficiently and subsequently training it is of little value, so the network can be deleted. If the running time does not exceed the preset time limit, the network structure executes efficiently and subsequently training the network is of greater value, so it can be kept in the candidate network set.
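The speed test itself can be sketched with wall-clock timing; here `network` is any callable taking a batch of preset images, and the best-of-repeats trick is an assumption added to reduce timing noise.

```python
import time

def measure_running_time(network, preset_images, repeats=3):
    """Time how long `network` takes to finish processing the preset
    images; best-of-repeats reduces timing noise (an assumption here)."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        network(preset_images)
        best = min(best, time.perf_counter() - start)
    return best

def within_limit(network, preset_images, time_limit):
    """Keep the network only if its running time is within the limit."""
    return measure_running_time(network, preset_images) <= time_limit

fast_net = lambda batch: [x * 2 for x in batch]   # hypothetical stand-in network
```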
The way the neural networks in the initial network set are chosen is not limited: basic neural network modules such as an Xception module or a ResNet module may be chosen; a null network, i.e., a neural network whose output is identical to its input, may be chosen; or any neural network specified by the user may be chosen. In theory, neural networks of arbitrary structure can be generated from a null network, so when a large network generation range is desired, a null network can be chosen as the neural network in the initial network set.
In some embodiments, before step S10 is executed, the neural networks in the initial network set may also be added to the candidate network set, to increase the number of neural networks with different structures in the candidate network set.
Step S11: the processor 102 of the electronic device 100 selects at least one neural network from the candidate network set and trains it on the training set of the image processing task.
After the candidate network set is obtained, at least one neural network is selected from it for training. The selection method is not limited: it may be manual selection, or automatic selection using certain algorithms. The selection criterion is also not limited: for example, networks may be selected according to their performance, where performance refers to a neural network's ability to complete a certain image processing task, such as the accuracy of the classification results for a neural network used for an image classification task.
Since the goal is an image processing model usable for a certain image processing task, model training should naturally be carried out on the training set corresponding to that task; for example, for an image classification task, training can be carried out on the ImageNet dataset.
Step S12: the processor 102 of the electronic device 100 determines at least one image processing model usable for the image processing task.
Each image processing model is a neural network trained in step S11. In one implementation of step S12, all of the neural networks trained in step S11 may be taken as image processing models usable for the image processing task. In another implementation of step S12, the trained neural networks may be further screened, for example by network performance, to obtain the image processing models usable for the image processing task.
It should be particularly pointed out that although the training set of a certain image processing task is used in step S11, the image processing models obtained in step S12 are merely usable for that image processing task and are not limited to the same image processing task as in step S11; in fact, the obtained image processing models can also be used for other image processing tasks through transfer learning and similar means, as further described in the third embodiment.
In the above method, since the candidate network set has already been speed-tested, every neural network in it satisfies the runtime requirement; the running speed of any neural network selected from the candidate network set is therefore guaranteed. Training such networks on the training set corresponding to the image processing task then yields image processing models with fast running speed. Meanwhile, not every neural network in the candidate network set is trained; only the selected networks are. This avoids a large number of training jobs, improves the efficiency of the image processing model search, and allows the image processing models usable for the task to be determined in a relatively short time.
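The speed-test-then-train pipeline described above can be sketched in a few lines. This is a minimal illustration under assumed representations: a "network" is just a callable, and the naive timing helper stands in for measuring inference time on the target hardware.

```python
import time

def measure_runtime(network, image, runs=3):
    """Time how long a network takes to fully process one preset image.
    Here a 'network' is just a callable; in practice it would be a real
    neural network executed on the target hardware."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        network(image)
        best = min(best, time.perf_counter() - start)
    return best

def build_candidate_set(networks, image, time_limit):
    """Keep only the networks whose measured runtime meets the limit."""
    return [net for net in networks if measure_runtime(net, image) <= time_limit]

# Toy usage: one fast and one deliberately slow 'network'.
fast = lambda img: sum(img)
slow = lambda img: time.sleep(0.05)
candidates = build_candidate_set([fast, slow], list(range(100)), time_limit=0.01)
```

Only the networks that survive this filter would then be considered for training, which is what keeps the number of training jobs small.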
In one embodiment of the first embodiment, step S10 may be executed iteratively. Fig. 3 shows a flowchart of step S10 of the method for obtaining an image processing model provided by the first embodiment of the invention. Referring to Fig. 3, step S10 may include:
Step S100: the processor 102 of the electronic device 100 determines the initial network set as the baseline network set.
In each iteration, a generated network set is produced from the baseline network set using the generation strategy, so the baseline network set is continually updated during iteration; before the first iteration starts, the initial network set serves as the "initial value" of the baseline network set.
Step S101: the processor 102 of the electronic device 100 resets, based on a preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks.
Network architecture parameters are parameters describing the network structure of a neural network; changing them changes the network structure. Resetting these parameters therefore produces new neural networks whose structure differs from that of the original networks.
In one way of viewing a neural network, it can be regarded as a combination of one or more modules. A module may be a basic building block of a neural network, such as a convolutional layer, a deconvolutional layer, or a pooling layer, or a more complex functional unit, such as an Xception module or a ResNet module. Under this view, setting the network architecture parameters amounts to setting certain attributes of the modules, which may include, but are not limited to, the type of a module, its number of channels, the connection pattern between modules, and the number of modules. Adding, modifying, or replacing modules, or adjusting their connection order, are all operations that reset the network architecture parameters of a neural network.
The generation strategy is the method by which the network architecture parameters of the neural networks are reset. Which generation strategy to use is not limited and can be decided according to actual needs. For example, a strategy that makes the generated network structures gradually more complex may be adopted, such as appending a module and enumerating its possible channel counts, so as to generate a large number of alternative neural networks.
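As an illustration, a "grow gradually" strategy of this kind can be sketched with networks represented as tuples of (module type, channel count) pairs. The module vocabulary and channel choices below are assumed examples, not a prescribed set.

```python
MODULE_TYPES = ("conv", "pool", "deconv")   # assumed module vocabulary
CHANNEL_CHOICES = (16, 32, 64)              # enumerated, non-contiguous values

def generate(baseline_set):
    """One round of the 'grow gradually' generation strategy: append
    every (module type, channel count) combination to every baseline
    network, yielding structurally new networks."""
    new_networks = set()
    for net in baseline_set:
        for mtype in MODULE_TYPES:
            for ch in CHANNEL_CHOICES:
                new_networks.add(net + ((mtype, ch),))
    return new_networks

# Starting from the empty network, one round yields 3 x 3 = 9 one-module nets.
generated = generate({()})
```

Each round multiplies the module/channel combinations onto every baseline network, which is how a large number of alternative structures is produced quickly.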
Step S102: the processor 102 of the electronic device 100 deletes, from the newly generated neural networks, those whose runtime exceeds the preset time limit, obtains a generated network set comprising the remaining neural networks, and adds the generated network set to the candidate network set.
Speed-testing of newly generated neural networks has already been explained under step S10 and is not repeated here. After this processing, every neural network in the generated network set meets the runtime requirement, so the generated network set can be merged into the candidate network set.
Step S103: the processor 102 of the electronic device 100 re-derives the baseline network set from the generated network set.
The generated network set may be taken directly as the baseline network set, or the baseline network set may be obtained after some processing of the generated network set. Once the baseline network set is obtained, the process loops back to begin the next iteration.
Before starting the next iteration, it is first necessary to check whether the iteration termination condition has been met; iteration continues only when the condition is not met, otherwise iteration ends and the subsequent steps are executed. The iteration termination condition can be decided according to actual needs and may be, but is not limited to, one of the following: the number of neural networks in the candidate network set has reached a preset quantity; the execution time of the iterative process has exceeded a preset duration; or none of the neural networks newly generated in the latest iteration satisfies the runtime requirement. In some cases, the termination condition may be checked before each execution of step S103; if the condition is already met, step S103 need not be executed again.
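Putting steps S100 to S103 and the termination check together, the iterative construction of the candidate network set might look like the following sketch. The helper functions and the size-based termination condition are illustrative assumptions, not the only choices the text allows.

```python
def search(initial_set, generate, runtime_ok, prune, max_networks):
    """Iterative candidate-set construction (steps S100-S103).

    generate   -- generation strategy: baseline set -> new networks (S101)
    runtime_ok -- speed test: True if the network meets the time limit (S102)
    prune      -- pruning strategy used when re-deriving the baseline (S103)
    """
    baseline = set(initial_set)                      # S100
    candidates = set(initial_set)
    while len(candidates) < max_networks:            # example termination condition
        fresh = generate(baseline)                   # S101
        kept = {n for n in fresh if runtime_ok(n)}   # S102: drop slow networks
        if not kept:                                 # nothing new meets the limit
            break
        candidates |= kept
        baseline = prune(kept)                       # S103
    return candidates

# Toy run: each iteration appends one 'module' (an int) to every baseline net.
grow = lambda base: {n + (len(n),) for n in base}
result = search({()}, grow, lambda n: len(n) <= 3, lambda kept: kept, max_networks=4)
```

Swapping in a different `prune` or termination test changes the search behavior without touching the loop itself, which is the control benefit of the iterative form.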
Implementing step S10 iteratively makes the process of generating the candidate network set easy to control. In some other implementations, the candidate network set may also be generated without iteration, for example by parallel processing.
In one embodiment of the first embodiment, before step S102 (i.e., after step S101), the neural networks newly generated in step S101 may also be deduplicated, that is, those already generated before are deleted. Specifically, each neural network can be marked when it is generated, so that a neural network with the same structure is not generated again later; in particular, the neural networks in the initial network set can also be marked.
Since steps S101 to S103 form an iterative process, deduplicating the neural networks avoids meaningless repeated computation, which not only wastes computing resources but may even prevent the iteration from terminating. For example, without deduplication, a neural network whose runtime has already been measured would be timed again; likewise, generating new neural networks from one that was generated before is likely to reproduce networks that already exist.
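The marking-based deduplication can be as simple as keeping a set of already-seen structures, assuming (for illustration) that networks are represented as hashable tuples of (module, channels) pairs.

```python
def dedup(new_networks, seen):
    """Drop architectures generated before; mark the rest as seen.
    'Marking' is just membership in a set, assuming networks are
    represented as hashable tuples of (module, channels) pairs."""
    fresh = [n for n in new_networks if n not in seen]
    seen.update(fresh)
    return fresh

seen = {(("conv", 16),)}            # networks in the initial set are marked too
fresh = dedup([(("conv", 16),), (("conv", 32),)], seen)
```

With this in place, an already-generated structure is discarded before any speed test or further generation is attempted on it.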
In one embodiment of the first embodiment, step S103 can be implemented as follows: first delete at least one neural network from the generated network set based on a preset pruning strategy, then determine the generated network set after the deletion as the baseline network set.
The pruning strategy here is the method of selecting at least one neural network from the generated network set for deletion. Executing the pruning strategy prevents the number of neural networks in the generated network set from growing without bound across iterations, which would degrade the performance of network generation.
Which pruning strategy to use is not limited and can be decided according to actual needs. For example, the performance of the neural networks in the generated network set may be assessed with an evaluation function, only the several best-performing networks retained, and the rest deleted; the evaluation function is described in detail in the implementation of step S11 below. As another example, the computational capability of the networks in the generated network set may be computed theoretically from their structure, for instance their floating-point operations per second (FLOPS), with the several most capable networks selected and the rest deleted.
When formulating the pruning strategy, to avoid over-pruning, which would delete valuable neural networks and make it impossible to generate from them again, some degree of protection can be given to the networks with different network architecture parameters. For example, at least one network of each distinct depth may be retained after pruning; or the computed FLOPS values of the networks may be divided into several intervals, with at least one network retained in each interval after pruning. These are, of course, only examples; other protective measures are possible.
In one embodiment of the first embodiment, step S11 may be executed iteratively. Fig. 4 shows a flowchart of step S11 of the method for obtaining an image processing model provided by the first embodiment of the invention. Referring to Fig. 4, step S11 may include:
Step S110: the processor 102 of the electronic device 100 assesses the network performance of the neural networks in the candidate network set using an evaluation function suited to the image processing task, and based on the ranking of network performance selects at least one neural network to be trained on the training set of the image processing task.
The evaluation function estimates the performance of a neural network. Its exact form is not limited, but in general the evaluation function is tied to the specific image processing task: if step S12 is expected to yield image processing models for a certain task, then step S110 should use an evaluation function suited to that task. For example, sum(channel count of each layer of the neural network ^ 1.5), where sum denotes summation over the layers, is one possible evaluation function. It is generally believed that the more channels a neural network has, the better its image classification performance, so this function can serve as an evaluation function for image classification tasks.
In a specific implementation, a function whose value is positively correlated with the performance of the neural network being assessed, such as the function above, may be used (though this is not required) as the evaluation function. The function value then gives a very intuitive estimate of the network's performance.
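The example evaluation function sum(channel count ^ 1.5) can be written down directly; the tuple-based network representation here is an assumption for illustration.

```python
def evaluate(network, exponent=1.5):
    """Example evaluation function from the text: sum over layers of
    (channel count ** exponent); larger values estimate better
    image classification performance."""
    return sum(channels ** exponent for _, channels in network)

# Rank two toy networks: more total channel 'mass' scores higher.
net_a = (("conv", 16), ("conv", 32))
net_b = (("conv", 64),)
ranked = sorted([net_a, net_b], key=evaluate, reverse=True)
```

Because the value is positively correlated with estimated performance, sorting by it in descending order immediately yields the ranking used in step S110.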
After computing the performance of the neural networks with the evaluation function, the networks are ranked by the computed values, and at least one qualifying network is selected from the ranking. For example, when the function value is positively correlated with network performance, the values can be sorted in descending order and at least one network at the top of the ranking selected for training; since these networks are estimated to perform best, they are expected to train into high-performing models.
Step S111: the processor 102 of the electronic device 100 updates the parameters of the evaluation function based on the training results.
The parameters of the evaluation function can be adjusted according to the training results, optimizing the function so that it assesses the performance of neural networks more accurately. For example, the exponent 1.5 in the evaluation function above is one such adjustable parameter. The optimized evaluation function can then be used to select networks for training from the candidate network set again, returning to step S110 for the next iteration. Note that the networks selected previously have already been trained, so they need not be selected again.
The iteration termination condition can be decided according to actual needs and may be, but is not limited to, one of the following: the number of trained neural networks has reached a preset quantity; the execution time of the iterative process has exceeded a preset duration; or the networks trained in the latest iteration show no obvious performance improvement over those trained before. In some cases, the termination condition may be checked before each execution of step S111; if the condition is already met, step S111 need not be executed again.
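As a toy stand-in for the parameter update of step S111, the exponent can be re-fitted so that the evaluation function's ranking agrees with the ranking by measured training accuracy. The grid of candidate exponents and the agreement score are assumptions for illustration, not the patent's actual update rule.

```python
def fit_exponent(networks, accuracies, candidates=(1.0, 1.5, 2.0)):
    """Pick the exponent whose evaluation ranking best matches the
    ranking by observed training accuracy (both sorted ascending)."""
    def agreement(exp):
        by_eval = sorted(networks, key=lambda n: sum(c ** exp for _, c in n))
        by_acc = sorted(networks, key=lambda n: accuracies[n])
        return sum(a == b for a, b in zip(by_eval, by_acc))
    return max(candidates, key=agreement)

# Two toy networks whose relative score flips with the exponent.
net1 = (("conv", 10), ("conv", 10))   # sum: 20 at exp=1, 200 at exp=2
net2 = (("conv", 15),)                # sum: 15 at exp=1, 225 at exp=2
best = fit_exponent((net1, net2), {net1: 0.6, net2: 0.8})
```

Since net2 trains to higher accuracy, the fit prefers the exponent under which net2 also scores higher, illustrating how training results can feed back into the evaluation function.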
Further, in some embodiments, a dedicated neural network may be used to optimize the parameters of the evaluation function, so as to obtain better optimization results.
Implementing step S11 iteratively allows better-performing neural networks to be selected over multiple iterations. In some other implementations, for reasons of time efficiency, iteration may be omitted, for example by terminating the process directly after executing step S110.
Further, after step S11 is executed, since the selected neural networks have already been trained, their performance can be known accurately from the training results; thus at least one image processing model usable for the image processing task can be further determined from the trained neural networks according to the training results.
Besides the selected image processing models being usable for image processing tasks, any neural network or evaluation function generated by the method for obtaining an image processing model provided by the embodiments of the invention can in fact also be used on its own. For example, the neural networks in the candidate network set can be used directly for time-constrained image processing tasks; and the generated evaluation function can be combined with hand design, inspiring the design of evaluation functions.
Second embodiment
The second embodiment of the invention provides an image processing method, comprising:
using the method for obtaining an image processing model provided by the first embodiment of the invention on a first image processing task, obtaining at least one first image processing model usable for the first image processing task;
based on the training results of the at least one first image processing model on the training set of the first image processing task, selecting from the at least one first image processing model a first optimal image processing model with the best network performance; and
executing the first image processing task using the first optimal image processing model.
The above steps are illustrated below taking the first image processing task to be an image classification task. An image classification task generally means: the input is an image of fixed size, and the output is the category of the object in the image. It should be understood that this is only an example; the first image processing task may also be another image processing task.
The initial network set may be taken to be a set containing only the empty network, and the candidate network set is generated iteratively. The generation strategy may be, in each iteration, to add one module (e.g., a convolutional layer, pooling layer, or deconvolutional layer) on the basis of each original neural network (i.e., each neural network in the baseline network set), and to enumerate the channel counts that module may take (e.g., 16, 32, 64; when enumerating, channel counts generally need not take contiguous values).
At least one neural network is selected from the finally obtained candidate network set and trained on the training set corresponding to the image classification task, yielding training results. The trained neural networks can all be used directly as image processing models usable for the image classification task; call them first image processing models.
Further, the performance of each first image processing model can be determined from the training results, so that the first optimal image processing model, the one with the best network performance, can be selected. For an image classification task, for example, the training results yield the TOP-1 and TOP-5 error rates of each neural network, and these error rates can serve as descriptions of network performance; the neural network with the lowest error rate can then be selected as the first optimal image processing model.
Since the method of the first embodiment selects neural networks that run fast and whose estimated performance is good (e.g., as assessed by the evaluation function), and the first optimal image processing model is then selected on that basis after further training according to the training results, that model necessarily combines fast running speed with good network performance; executing the image classification task with it can therefore achieve good results.
For example, in one experiment conducted by the inventors on the ImageNet dataset, the two sides compared were the hand-designed Xception network model and the first optimal image processing model obtained by the method of the second embodiment. At similar running speeds, the latter had more than double the FLOPS (300M vs. 140M) and clearly better network performance (TOP-1 error rate 32% vs. 29%).
Meanwhile, since the model search uses the method of the first embodiment, the process of obtaining the first optimal image processing model takes little time, which helps complete the image processing task within a short time.
Third embodiment
The third embodiment of the invention provides an image processing method, comprising:
using the method for obtaining an image processing model provided by the first embodiment of the invention on a first image processing task, obtaining at least one first image processing model usable for the first image processing task;
taking the set formed by the at least one first image processing model as the initial network set, and using the method for obtaining an image processing model provided by the first embodiment of the invention on a second image processing task, obtaining at least one second image processing model usable for the second image processing task, wherein the second image processing task is an image processing task of a different type from the first image processing task;
based on the training results of the at least one second image processing model on the training set of the second image processing task, selecting from the at least one second image processing model a second optimal image processing model with the best network performance; and
executing the second image processing task using the second optimal image processing model.
In the method provided by the third embodiment of the invention, the method provided by the first embodiment is used twice, realizing a process of migrating the first image processing models to the second image processing task so as to obtain the second image processing models.
The above steps are illustrated below taking the first image processing task to be an image classification task and the second image processing task to be an image semantic segmentation task. An image semantic segmentation task generally means: the input is an image of unfixed size, the output is an image of the same size as the input, and each pixel of the output image indicates the object category (including background) of the pixel at the corresponding position in the input image. It should be understood that this is only an example; the first and second image processing tasks may also be other image processing tasks, but they should be image processing tasks of different types. For example, they cannot both be image classification tasks; otherwise transfer learning is unnecessary and the method of the second embodiment of the invention can be used directly.
The method of the first embodiment is used on the image classification task to obtain at least one first image processing model; the detailed process may refer to the description in the second embodiment.
Note that in the method provided by the third embodiment, the first image processing models are typically screened when obtained, retaining only the several neural networks that show good performance on the image classification task. The inventors' long-term research found that a neural network that performs well on an image classification task is expected to perform equally well after being migrated to another image processing task, while a neural network that performs poorly on an image classification task usually performs even worse after migration.
The set formed by the selected first image processing models is determined as a new initial network set, and a candidate network set is generated iteratively; the generation strategy may be, in each iteration, to add one module on the basis of each original neural network and to enumerate the channel counts that module may take. Under this generation strategy, the newly generated neural networks actually preserve the structure of the first image processing models, only adding, on top of them, modules adapted to the image semantic segmentation task. The retained first image processing model is generally used as a feature extractor in the newly generated network, extracting image features; it is also commonly called a backbone.
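Under the same tuple representation used earlier, the backbone reuse described here amounts to appending task-specific modules after the intact first-task model. The module names and channel counts below are illustrative assumptions.

```python
def extend_backbone(backbone, head_modules):
    """Transfer-style generation: keep the first image processing model
    unchanged as a feature-extracting backbone and append modules
    adapted to the new task (e.g. upsampling for segmentation)."""
    return tuple(backbone) + tuple(head_modules)

classifier = (("conv", 32), ("conv", 64))     # trained classification model
seg_head = (("deconv", 64), ("conv", 21))     # e.g. 21 output classes, assumed
seg_net = extend_backbone(classifier, seg_head)
```

Because the backbone's structure (and, in practice, its trained weights) is preserved, the new network starts from the representations learned on the classification task rather than from scratch.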
From the finally obtained candidate network set, neural networks whose input and output sizes are the same are selected and trained on the training set corresponding to the image semantic segmentation task, yielding training results; equal input and output image sizes are the input/output requirement of the image semantic segmentation task. The trained neural networks can all be used directly as image processing models usable for the image semantic segmentation task; call them second image processing models.
Further, the performance of each second image processing model can be determined from the training results, so that the second optimal image processing model with the best network performance can be selected. This model combines fast running speed with good network performance, so executing the image semantic segmentation task with it can achieve good results; meanwhile, since the model search uses the method of the first embodiment, the process of obtaining the second optimal image processing model takes little time, which helps complete the image processing task within a short time. Further, since the second optimal image processing model is generated through transfer learning on the basis of a first image processing model, it retains the training results accumulated earlier on the image classification task, which helps save training time and obtain a model of better quality. Moreover, at present many datasets are available for image classification while relatively few are available for other image processing tasks, so the above transfer-learning scheme has high practical value and can prevent training problems such as overfitting caused by scarce data samples.
Further, in the above method, the training results on the image semantic segmentation task can also be used to adjust the process of searching for first image processing models, for example by updating the parameters of the evaluation function for the image classification task, so as to search out better-performing first image processing models and in turn obtain better-performing second image processing models. This process can be executed iteratively until the finally obtained second optimal image processing model meets the user's performance requirements for the image semantic segmentation task.
Fourth embodiment
Fig. 5 shows a functional block diagram of the apparatus 200 for obtaining an image processing model provided by the fourth embodiment of the invention. Referring to Fig. 5, the apparatus 200 includes a generation module 210, a training module 220, and a model determination module 230.
The generation module 210 is configured to generate new neural networks different in structure from the neural networks in the initial network set, delete the newly generated neural networks whose runtime exceeds the preset time limit, and add the remaining neural networks to the candidate network set, where the runtime is the time the neural network takes to fully process a preset image;
the training module 220 is configured to select at least one neural network from the candidate network set and train it on the training set of the image processing task;
the model determination module 230 is configured to determine at least one image processing model usable for the image processing task, where each image processing model is a trained neural network.
The implementation principle and technical effects of the apparatus 200 for obtaining an image processing model provided by the fourth embodiment of the invention are the same as those of the first embodiment; for brevity, where this apparatus embodiment does not mention a point, reference may be made to the corresponding content of the first embodiment.
Fifth embodiment
Fig. 6 shows a functional block diagram of the image processing apparatus 300 provided by the fifth embodiment of the invention. Referring to Fig. 6, the apparatus includes a model acquisition module 310, an optimal model selection module 320, and an execution module 330.
The model acquisition module 310 is configured to obtain, on a first image processing task, at least one first image processing model usable for the first image processing task using the method provided by the first aspect or any possible implementation of the first aspect;
the optimal model selection module 320 is configured to select, based on the training results of the at least one first image processing model on the training set of the first image processing task, a first optimal image processing model with the best network performance from the at least one first image processing model;
the execution module 330 is configured to execute the first image processing task using the first optimal image processing model.
The implementation principle and technical effects of the image processing apparatus 300 provided by the fifth embodiment of the invention are the same as those of the second embodiment; for brevity, where this apparatus embodiment does not mention a point, reference may be made to the corresponding content of the second embodiment.
Sixth embodiment
Fig. 7 shows a functional block diagram of the image processing apparatus 400 provided by the sixth embodiment of the invention. Referring to Fig. 7, the apparatus includes a first model acquisition module 410, a second model acquisition module 420, an optimal model selection module 430, and an execution module 440.
The first model acquisition module 410 is configured to obtain, on a first image processing task, at least one first image processing model usable for the first image processing task using the method provided by the first aspect or any possible implementation of the first aspect;
the second model acquisition module 420 is configured to determine the set formed by the at least one first image processing model as the initial network set, and to obtain, on a second image processing task, at least one second image processing model usable for the second image processing task using the method provided by the first aspect or any possible implementation of the first aspect, wherein the second image processing task is an image processing task of a different type from the first image processing task;
the optimal model selection module 430 is configured to select, based on the training results of the at least one second image processing model on the training set of the second image processing task, a second optimal image processing model with the best network performance from the at least one second image processing model;
the execution module 440 is configured to execute the second image processing task using the second optimal image processing model.
The implementation principle and technical effects of the image processing apparatus 400 provided by the sixth embodiment of the invention are the same as those of the third embodiment; for brevity, where this apparatus embodiment does not mention a point, reference may be made to the corresponding content of the third embodiment.
Seventh embodiment
The seventh embodiment of the invention provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are read and run by a processor, the steps of the above methods provided by the embodiments of the invention are executed. The computer-readable storage medium may be implemented as, but is not limited to, the storage device 104 shown in Fig. 1.
Eighth embodiment
The eighth embodiment of the invention provides an electronic device including a memory and a processor, the memory storing computer program instructions; when the computer program instructions are read and run by the processor, the steps of the above methods provided by the embodiments of the invention are executed. The electronic device may be implemented as, but is not limited to, the electronic device 100 shown in Fig. 1.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to one another. Since the apparatus-class embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant points, refer to the corresponding description of the method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and methods may also be implemented in other ways. The apparatus embodiments described above are merely exemplary. For example, the flowcharts and block diagrams in the figures show the possible architectures, functions, and operations of the apparatus, methods, and computer program products according to multiple embodiments of the invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the figures. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form one independent part, each module may exist separately, or two or more modules may be integrated to form one independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part thereof that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned computer device includes personal computers, servers, mobile devices, smart wearable devices, network devices, virtual devices and other devices capable of executing program code; the aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, a magnetic tape or an optical disc.
The above description is merely of specific embodiments, but the protection scope of the present invention is not limited thereto. Any change or replacement that would readily occur to those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (18)
1. A method for obtaining an image processing model, comprising:
generating new neural networks different in structure from the neural networks in an initial network set, deleting from the newly generated neural networks those whose running time exceeds a preset time limit, and adding the remaining neural networks to a candidate network set, wherein the running time is the time taken by a neural network to process a preset image;
selecting at least one neural network from the candidate network set and training it on a training set of an image processing task; and
determining at least one image processing model usable for the image processing task, wherein each image processing model is a trained neural network.
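For illustration only (not part of the claims), the generate-filter-add step recited in claim 1 can be sketched as follows. The dict-based network representation, the field names `arch` and `running_time`, and the concrete time limit are assumptions introduced for the sketch, not from the patent.

```python
def filter_and_add(new_networks, candidate_set, time_limit):
    """Delete newly generated networks whose running time (seconds to
    process the preset image) exceeds the preset limit, and add the
    remaining networks to the candidate set."""
    kept = [n for n in new_networks if n["running_time"] <= time_limit]
    return candidate_set + kept

# Toy example: one fast and one slow newly generated network.
new = [{"arch": "conv_x3", "running_time": 0.08},
       {"arch": "conv_x9", "running_time": 0.35}]  # exceeds the limit
candidates = filter_and_add(new, [], time_limit=0.10)
```

The runtime filter acts before any training, so expensive architectures are discarded at negligible cost.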
2. The method for obtaining an image processing model according to claim 1, wherein generating the new neural networks different in structure from the neural networks in the initial network set, deleting from the newly generated neural networks those whose running time exceeds the preset time limit, and adding the remaining neural networks to the candidate network set comprises:
determining the initial network set as a baseline network set;
resetting, based on a preset generation strategy, network architecture parameters of the neural networks in the baseline network set to generate new neural networks;
deleting from the newly generated neural networks those whose running time exceeds the preset time limit, forming a generated network set from the remaining neural networks, and adding the generated network set to the candidate network set; and
redetermining the baseline network set based on the generated network set, and jumping back to the step of resetting, based on the preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks, so as to iterate until an iteration termination condition is met.
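As a hedged illustration (not part of the claims), the iterative baseline-generate-filter loop of claim 2 might look like the sketch below. The `generate` callback stands in for the unspecified preset generation strategy, and the toy `halve` strategy, field names and numeric values are all assumptions.

```python
def search(initial_set, generate, time_limit, max_iters):
    """Claim 2 sketch: baseline = initial set; generate new networks from
    the baseline; drop those over the runtime limit; add survivors to the
    candidate set; the survivors become the next baseline; repeat until
    the termination condition (here: an iteration cap or no survivors)."""
    baseline = list(initial_set)
    candidates = []
    for _ in range(max_iters):
        generated = [g for g in generate(baseline)
                     if g["running_time"] <= time_limit]
        if not generated:
            break                 # termination: nothing survived the filter
        candidates.extend(generated)
        baseline = generated      # redetermine the baseline network set
    return candidates

def halve(baseline):
    # Toy generation strategy: each round derives one faster variant
    # of every baseline network (purely illustrative).
    return [{"arch": n["arch"] + "'", "running_time": n["running_time"] / 2}
            for n in baseline]

found = search([{"arch": "a", "running_time": 0.4}], halve,
               time_limit=0.25, max_iters=3)
```

Each iteration both widens the candidate pool and moves the baseline toward architectures that satisfy the runtime constraint.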
3. The method for obtaining an image processing model according to claim 2, wherein after resetting, based on the preset generation strategy, the network architecture parameters of the neural networks in the baseline network set to generate new neural networks, and before deleting from the newly generated neural networks those whose running time exceeds the preset time limit, the method further comprises:
deleting, from the newly generated neural networks, any neural network that has been generated before.
4. The method for obtaining an image processing model according to claim 2, wherein the network architecture parameters include at least one of: the types of the modules constituting a neural network, the channel numbers of the modules constituting a neural network, the connection modes between the modules constituting a neural network, and the number of the modules constituting a neural network.
5. The method for obtaining an image processing model according to claim 2, wherein redetermining the baseline network set based on the generated network set comprises:
deleting at least one neural network in the generated network set based on a preset pruning strategy; and
determining the generated network set after the deletion as the baseline network set.
6. The method for obtaining an image processing model according to claim 1, wherein before generating the new neural networks different in structure from the neural networks in the initial network set, the method further comprises:
adding the initial network set to the candidate network set.
7. The method for obtaining an image processing model according to claim 1, wherein selecting at least one neural network from the candidate network set and training it on the training set of the image processing task comprises:
evaluating network performance of the neural networks in the candidate network set using an evaluation function suited to the image processing task, and selecting, based on a ranking of the network performance, at least one neural network to be trained on the training set of the image processing task.
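Purely as an illustration (not part of the claims), the rank-then-select step of claim 7 can be sketched as below. The proxy `score` field and the choice of `k` are assumptions; the patent leaves the evaluation function open.

```python
def select_top_k(candidate_set, evaluate, k):
    """Rank candidate networks by an evaluation function suited to the
    image processing task and pick the top k for actual training.
    Consistent with claim 8, a larger function value is taken to mean
    better predicted network performance."""
    return sorted(candidate_set, key=evaluate, reverse=True)[:k]

# Toy candidates; 'score' stands in for any cheap proxy metric.
candidates = [{"arch": "a", "score": 0.7},
              {"arch": "b", "score": 0.9},
              {"arch": "c", "score": 0.5}]
top = select_top_k(candidates, lambda n: n["score"], k=2)
```

Ranking with a cheap evaluation function keeps full training, the expensive step, limited to the most promising candidates.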
8. The method for obtaining an image processing model according to claim 7, wherein the value of the evaluation function is positively correlated with the network performance of the neural network it evaluates.
9. The method for obtaining an image processing model according to claim 7, wherein after evaluating the network performance of the neural networks in the candidate network set using the evaluation function suited to the image processing task, and selecting, based on the evaluation result, at least one neural network to be trained on the training set of the image processing task, the method further comprises:
updating parameters of the evaluation function based on the training result.
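One possible reading of claim 9's parameter update, offered only as a sketch: treat the evaluation function as a linear predictor over architecture features and nudge its weights toward the observed training result. The linear form, the learning rate and all names are assumptions; the patent does not fix the update rule.

```python
def update_evaluator(weights, features, observed, lr=0.1):
    """After a selected network is actually trained, move the evaluation
    function's parameters toward the observed training result
    (one gradient step on the squared prediction error)."""
    predicted = sum(w * f for w, f in zip(weights, features))
    error = observed - predicted
    return [w + lr * error * f for w, f in zip(weights, features)]

# Toy update: two architecture features, observed score 1.0.
new_weights = update_evaluator([0.0, 0.0], [1.0, 2.0], observed=1.0)
```

Feeding training results back this way lets later rankings reflect how candidates actually perform on the task.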
10. The method for obtaining an image processing model according to any one of claims 1 to 9, wherein the initial network set includes an empty network, the empty network being a neural network whose output is identical to its input.
11. The method for obtaining an image processing model according to any one of claims 1 to 9, wherein the image processing task is one of an image classification task, an image segmentation task, an image detection task and an image recognition task.
12. An image processing method, comprising:
applying the method according to any one of claims 1-11 to a first image processing task to obtain at least one first image processing model usable for the first image processing task;
selecting, based on training results of the at least one first image processing model on a training set of the first image processing task, a first optimal image processing model with the best network performance from the at least one first image processing model; and
executing the first image processing task using the first optimal image processing model.
13. An image processing method, comprising:
applying the method according to any one of claims 1-11 to a first image processing task to obtain at least one first image processing model usable for the first image processing task;
taking the set formed by the at least one first image processing model as an initial network set, and applying the method according to any one of claims 1-11 to a second image processing task to obtain at least one second image processing model usable for the second image processing task, wherein the second image processing task is an image processing task of a different type from the first image processing task;
selecting, based on training results of the at least one second image processing model on a training set of the second image processing task, a second optimal image processing model with the best network performance from the at least one second image processing model; and
executing the second image processing task using the second optimal image processing model.
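The two-stage reuse in claim 13, searching on a first task and seeding the second task's search with the first task's models, can be sketched as follows (illustrative only; `toy_search` is a hypothetical stand-in for the claims 1-11 procedure, and all field names are assumptions):

```python
def two_stage_search(search_fn, task1, task2, initial_set):
    """Run the search on a first task, then reuse the obtained first-task
    models as the initial network set for a second, different task."""
    models1 = search_fn(task1, initial_set)
    best1 = max(models1, key=lambda m: m["score"])
    models2 = search_fn(task2, models1)   # first-task models seed task 2
    best2 = max(models2, key=lambda m: m["score"])
    return best1, best2

def toy_search(task, networks):
    # Hypothetical stand-in: "score" each network on the given task.
    return [dict(n, score=len(n["arch"]) + len(task)) for n in networks]

best1, best2 = two_stage_search(toy_search,
                                "classification", "segmentation",
                                [{"arch": "a"}, {"arch": "bb"}])
```

Seeding the second search with models already vetted on a related task narrows the search space instead of starting from scratch.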
14. A device for obtaining an image processing model, comprising:
a generation module configured to generate new neural networks different in structure from the neural networks in an initial network set, delete from the newly generated neural networks those whose running time exceeds a preset time limit, and add the remaining neural networks to a candidate network set, wherein the running time is the time taken by a neural network to process a preset image;
a training module configured to select at least one neural network from the candidate network set and train it on a training set of an image processing task; and
a model determining module configured to determine at least one image processing model usable for the image processing task, wherein each image processing model is a trained neural network.
15. An image processing device, comprising:
a model obtaining module configured to apply the method according to any one of claims 1-11 to a first image processing task to obtain at least one first image processing model usable for the first image processing task;
an optimal model selecting module configured to select, based on training results of the at least one first image processing model on a training set of the first image processing task, a first optimal image processing model with the best network performance from the at least one first image processing model; and
an execution module configured to execute the first image processing task using the first optimal image processing model.
16. An image processing device, comprising:
a first model obtaining module configured to apply the method according to any one of claims 1-11 to a first image processing task to obtain at least one first image processing model usable for the first image processing task;
a second model obtaining module configured to determine the set formed by the at least one first image processing model as an initial network set, and apply the method according to any one of claims 1-11 to a second image processing task to obtain at least one second image processing model usable for the second image processing task, wherein the second image processing task is an image processing task of a different type from the first image processing task;
an optimal model selecting module configured to select, based on training results of the at least one second image processing model on a training set of the second image processing task, a second optimal image processing model with the best network performance from the at least one second image processing model; and
an execution module configured to execute the second image processing task using the second optimal image processing model.
17. A computer-readable storage medium having computer program instructions stored thereon, wherein when the computer program instructions are read and run by a processor, the steps of the method according to any one of claims 1-13 are performed.
18. An electronic device comprising a memory and a processor, the memory storing computer program instructions, wherein when the computer program instructions are read and run by the processor, the steps of the method according to any one of claims 1-13 are performed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810891677.7A CN108985386A (en) | 2018-08-07 | 2018-08-07 | Obtain method, image processing method and the corresponding intrument of image processing model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108985386A true CN108985386A (en) | 2018-12-11 |
Family
ID=64556065
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810891677.7A Pending CN108985386A (en) | 2018-08-07 | 2018-08-07 | Obtain method, image processing method and the corresponding intrument of image processing model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108985386A (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5033006A (en) * | 1989-03-13 | 1991-07-16 | Sharp Kabushiki Kaisha | Self-extending neural-network |
CN103294601A (en) * | 2013-07-03 | 2013-09-11 | 中国石油大学(华东) | Software reliability forecasting method based on selective dynamic weight neural network integration |
CN106203623A (en) * | 2014-11-27 | 2016-12-07 | 三星电子株式会社 | The method of method and apparatus and dimensionality reduction for extending neutral net |
CN106446754A (en) * | 2015-08-11 | 2017-02-22 | 阿里巴巴集团控股有限公司 | Image identification method, metric learning method, image source identification method and devices |
CN105678380A (en) * | 2016-01-08 | 2016-06-15 | 浙江工业大学 | Ecological niche and adaptive negative correlation learning-based evolutionary neural network integration method |
CN105701542A (en) * | 2016-01-08 | 2016-06-22 | 浙江工业大学 | Neural network evolution method based on multi-local search |
CN106934456A (en) * | 2017-03-16 | 2017-07-07 | 山东理工大学 | A kind of depth convolutional neural networks model building method |
CN107316079A (en) * | 2017-08-08 | 2017-11-03 | 珠海习悦信息技术有限公司 | Processing method, device, storage medium and the processor of terminal convolutional neural networks |
CN108205707A (en) * | 2017-09-27 | 2018-06-26 | 深圳市商汤科技有限公司 | Generate the method, apparatus and computer readable storage medium of deep neural network |
CN107766940A (en) * | 2017-11-20 | 2018-03-06 | 北京百度网讯科技有限公司 | Method and apparatus for generation model |
Non-Patent Citations (2)
Title |
---|
JONATHAN LONG, EVAN SHELHAMER, TREVOR DARRELL: "Fully Convolutional Networks for Semantic Segmentation", 2015 IEEE Conference on Computer Vision and Pattern Recognition * |
LIN YUAN: "Research on Fraud Risk Management of the New Rural Cooperative Medical Insurance", 31 August 2015 * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685204A (en) * | 2018-12-24 | 2019-04-26 | 北京旷视科技有限公司 | Pattern search method and device, image processing method and device |
CN109978069A (en) * | 2019-04-02 | 2019-07-05 | 南京大学 | The method for reducing ResNeXt model over-fitting in picture classification |
CN109978069B (en) * | 2019-04-02 | 2020-10-09 | 南京大学 | Method for reducing overfitting phenomenon of ResNeXt model in image classification |
TWI751458B (en) * | 2019-05-31 | 2022-01-01 | 大陸商北京市商湯科技開發有限公司 | Neural network search method and device, processor, electronic equipment and computer readable storage medium |
JP7168772B2 (en) | 2019-05-31 | 2022-11-09 | 北京市商▲湯▼科技▲開▼▲発▼有限公司 | Neural network search method, device, processor, electronic device, storage medium and computer program |
WO2020238039A1 (en) * | 2019-05-31 | 2020-12-03 | 北京市商汤科技开发有限公司 | Neural network search method and apparatus |
JP2022502762A (en) * | 2019-05-31 | 2022-01-11 | 北京市商▲湯▼科技▲開▼▲発▼有限公司Beijing Sensetime Technology Development Co., Ltd. | Neural network search methods, devices, processors, electronic devices, storage media and computer programs |
CN110555514A (en) * | 2019-08-20 | 2019-12-10 | 北京迈格威科技有限公司 | Neural network model searching method, image identification method and device |
CN110555514B (en) * | 2019-08-20 | 2022-07-12 | 北京迈格威科技有限公司 | Neural network model searching method, image identification method and device |
CN110659690A (en) * | 2019-09-25 | 2020-01-07 | 深圳市商汤科技有限公司 | Neural network construction method and device, electronic equipment and storage medium |
CN110659690B (en) * | 2019-09-25 | 2022-04-05 | 深圳市商汤科技有限公司 | Neural network construction method and device, electronic equipment and storage medium |
CN110807515A (en) * | 2019-10-30 | 2020-02-18 | 北京百度网讯科技有限公司 | Model generation method and device |
CN110807515B (en) * | 2019-10-30 | 2023-04-28 | 北京百度网讯科技有限公司 | Model generation method and device |
CN112884118A (en) * | 2019-11-30 | 2021-06-01 | 华为技术有限公司 | Neural network searching method, device and equipment |
WO2021143883A1 (en) * | 2020-01-15 | 2021-07-22 | 华为技术有限公司 | Adaptive search method and apparatus for neural network |
WO2021164752A1 (en) * | 2020-02-21 | 2021-08-26 | 华为技术有限公司 | Neural network channel parameter searching method, and related apparatus |
CN111401516A (en) * | 2020-02-21 | 2020-07-10 | 华为技术有限公司 | Neural network channel parameter searching method and related equipment |
CN111401516B (en) * | 2020-02-21 | 2024-04-26 | 华为云计算技术有限公司 | Searching method for neural network channel parameters and related equipment |
CN112949662A (en) * | 2021-05-13 | 2021-06-11 | 北京市商汤科技开发有限公司 | Image processing method and device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108985386A (en) | Obtain method, image processing method and the corresponding intrument of image processing model | |
Mavrotas et al. | An improved version of the augmented ε-constraint method (AUGMECON2) for finding the exact pareto set in multi-objective integer programming problems | |
CN104951425B (en) | A kind of cloud service performance self-adapting type of action system of selection based on deep learning | |
CN109376844A (en) | The automatic training method of neural network and device recommended based on cloud platform and model | |
CA3106673A1 (en) | Workflow optimization | |
CN104750780B (en) | A kind of Hadoop configuration parameter optimization methods based on statistical analysis | |
CN109685204A (en) | Pattern search method and device, image processing method and device | |
CN109992699B (en) | User group optimization method and device, storage medium and computer equipment | |
CN106204597B (en) | A kind of video object dividing method based on from the step Weakly supervised study of formula | |
Chandra et al. | Web service selection using modified artificial bee colony algorithm | |
CN107316200A (en) | A kind of method and apparatus for analyzing the user behavior cycle | |
CN107045511A (en) | A kind of method for digging and device of target signature data | |
CN108549909A (en) | Object classification method based on crowdsourcing and object classification system | |
CN113656696A (en) | Session recommendation method and device | |
Shukla et al. | FAT-ETO: Fuzzy-AHP-TOPSIS-Based efficient task offloading algorithm for scientific workflows in heterogeneous fog–cloud environment | |
Nascimento et al. | A reinforcement learning scheduling strategy for parallel cloud-based workflows | |
Filatovas et al. | A reference point-based evolutionary algorithm for approximating regions of interest in multiobjective problems | |
WO2013170435A1 (en) | Pattern mining based on occupancy | |
JP6991960B2 (en) | Image recognition device, image recognition method and program | |
Behzad et al. | A framework for auto-tuning HDF5 applications | |
US20220179862A1 (en) | Optimizing breakeven points for enhancing system performance | |
Salmon et al. | A two-tiered software architecture for automated tuning of disk layouts | |
Chen et al. | A new efficient approach for data clustering in electronic library using ant colony clustering algorithm | |
Curry et al. | Designing for system value sustainment using interactive epoch era analysis: a case study for on-orbit servicing vehicles | |
Zhu et al. | Sky Computing: Accelerating Geo-distributed Computing in Federated Learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20181211 |