CN108256555A - Image content recognition method, device and terminal - Google Patents
Image content recognition method, device and terminal
- Publication number
- CN108256555A CN108256555A CN201711394566.7A CN201711394566A CN108256555A CN 108256555 A CN108256555 A CN 108256555A CN 201711394566 A CN201711394566 A CN 201711394566A CN 108256555 A CN108256555 A CN 108256555A
- Authority
- CN
- China
- Prior art keywords
- loss function
- convolutional neural networks
- iterations
- trained
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
An embodiment of the present invention provides an image content recognition method, device and terminal. The method includes: during training of a convolutional neural network, inputting a sample image into the convolutional neural network, where the sample image is used to iteratively train the convolutional neural network; determining the number of iterations for which the convolutional neural network has already been trained; adjusting the loss function based on that trained-iteration count to obtain a target loss function; performing iterative training according to the target loss function to obtain a target convolutional neural network; and performing content recognition on an image to be recognized through the target convolutional neural network. With the convolutional neural network training scheme provided by the embodiments of the present invention, the distribution of complex image samples can be better fitted and the number of sample images with intermediate probability values can be reduced, so that the recall rate of samples is increased while the accuracy rate of the recognition results of the convolutional neural network is guaranteed.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to an image content recognition method, device and terminal.
Background technology
Deep learning is widely applied in related fields such as video and image processing, speech recognition and natural language processing. The convolutional neural network, as an important branch of deep learning, has substantially improved the precision of prediction results in computer vision tasks such as target detection and classification, owing to its strong fitting capability and end-to-end global optimization. In practical applications, however, the output generated by a convolutional neural network is generally not used directly. Taking a binary classification task as an example, for an input sample the convolutional neural network outputs the probability that the sample belongs to a certain class. A probability threshold is set according to the specific application scenario; under normal conditions a higher threshold is set to obtain a higher accuracy rate, but the recall rate of image samples then declines accordingly. Clearly, the accuracy rate of the recognition results and the recall rate of image samples are inversely related. The technical problem urgently to be solved by those skilled in the art is therefore: how to increase the recall rate of samples while guaranteeing the accuracy rate of the recognition results of the convolutional neural network.
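The threshold trade-off described above can be made concrete with a small sketch on hypothetical classifier scores (the data here is invented for illustration and is not from the patent): raising the decision threshold improves precision (accuracy of positive predictions) while lowering recall.

```python
def precision_recall(scores, labels, threshold):
    """Precision and recall when predicting positive for score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical probabilities output by a binary classifier, with true labels.
scores = [0.95, 0.85, 0.70, 0.60, 0.40, 0.30]
labels = [1,    1,    0,    1,    1,    0]

print(precision_recall(scores, labels, 0.5))  # (0.75, 0.75): moderate threshold
print(precision_recall(scores, labels, 0.8))  # (1.0, 0.5): precision up, recall down
```

With the higher threshold every prediction is correct, but half of the true positives are missed, which is exactly the tension the patent sets out to ease.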
Summary of the invention
Embodiments of the present invention provide an image content recognition method, device and terminal, to solve the problem in the prior art that the accuracy rate of the recognition results of a convolutional neural network and the recall rate of samples cannot both be achieved.
According to one aspect of the present invention, an image content recognition method is provided, the method including: during training of a convolutional neural network, inputting a sample image into the convolutional neural network, where the sample image is used to iteratively train the convolutional neural network; determining the number of iterations for which the convolutional neural network has been trained; adjusting the loss function based on the trained-iteration count to obtain a target loss function; performing iterative training according to the target loss function to obtain a target convolutional neural network; and performing content recognition on an image to be recognized through the target convolutional neural network.
Optionally, the step of adjusting a preset loss function based on the trained-iteration count to obtain the target loss function includes: extracting the preset loss function, and judging whether the trained-iteration count is greater than a first preset count; if not, adjusting the hyper-parameter in the preset loss function to 0 to obtain the target loss function; if so, adjusting the hyper-parameter in the preset loss function to a preset value to obtain the target loss function.
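The two-regime hyper-parameter adjustment above can be sketched as a simple schedule function (the parameter names are illustrative, not from the patent):

```python
def select_gamma(trained_iterations, first_preset_count, preset_gamma):
    """Hyper-parameter schedule: gamma stays 0 until the trained-iteration
    count exceeds the first preset count, then jumps to the preset value."""
    if trained_iterations > first_preset_count:
        return preset_gamma  # model nearly converged: focus on hard samples
    return 0.0               # early training: plain cross-entropy behavior

print(select_gamma(500, 1000, 2.0))   # 0.0
print(select_gamma(1500, 1000, 2.0))  # 2.0
```

As the description later notes, the drawback of this schedule is the abrupt jump from 0 to the preset value, which the sine-modulated loss is designed to smooth out.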
Optionally, the preset loss function is as follows:
SinFocalLoss = -(1 - p_t)^(γ·sin(2π·clip(s - i, 0, i/2)/i)) · log(p_t)
where p_t is the probability value, γ is the hyper-parameter, i is the iteration-count upper limit, and s is the trained-iteration count.
Optionally, the step of adjusting the preset loss function based on the trained-iteration count to obtain the target loss function includes: determining the iteration-count upper limit; and substituting the iteration-count upper limit and the trained-iteration count into the preset loss function to obtain the target loss function.
Optionally, the step of performing the current iteration of training according to the target loss function includes: determining, through the convolutional neural network, the feature maps corresponding to the sample image; performing average pooling on the feature maps, and performing dimension reduction on the pooled feature maps to obtain a feature vector, where the feature vector includes a plurality of points, each point corresponding to one classification label of the convolutional neural network and one probability value; calculating the average loss value of the convolutional neural network based on the target loss function; and calculating the partial derivative of the target loss function at each point of the feature vector to obtain gradient values, and updating the corresponding model parameters of the convolutional neural network according to the gradient values.
According to another aspect of the present invention, an image content recognition device is provided, the device including: an input module, configured to input a sample image into a convolutional neural network during training of the convolutional neural network, where the sample image is used to iteratively train the convolutional neural network; a determining module, configured to determine the number of iterations for which the convolutional neural network has been trained; a loss function adjustment module, configured to adjust the loss function based on the trained-iteration count to obtain a target loss function; a training module, configured to perform the current iteration of training according to the target loss function to obtain a target convolutional neural network; and a prediction module, configured to perform content recognition on an image to be recognized through the target convolutional neural network.
Optionally, the loss function adjustment module includes: an extracting sub-module, configured to extract a preset loss function and judge whether the trained-iteration count is greater than a first preset count; a first adjusting sub-module, configured to, if not, adjust the hyper-parameter in the preset loss function to 0 to obtain the target loss function; and a second adjusting sub-module, configured to, if so, adjust the hyper-parameter in the preset loss function to a preset value to obtain the target loss function.
Optionally, the preset loss function is as follows:
SinFocalLoss = -(1 - p_t)^(γ·sin(2π·clip(s - i, 0, i/2)/i)) · log(p_t)
where p_t is the probability value, γ is the hyper-parameter, i is the iteration-count upper limit, and s is the trained-iteration count.
Optionally, the loss function adjustment module includes: an upper-limit determination sub-module, configured to determine the iteration-count upper limit; and a substitution sub-module, configured to substitute the iteration-count upper limit and the trained-iteration count into the preset loss function to obtain the target loss function.
Optionally, the training module includes: a feature map determination sub-module, configured to determine, through the convolutional neural network, the feature maps corresponding to the sample image; a processing sub-module, configured to perform average pooling on the feature maps and perform dimension reduction on the pooled feature maps to obtain a feature vector, where the feature vector includes a plurality of points, each point corresponding to one classification label of the convolutional neural network and one probability value; a calculation sub-module, configured to calculate the average loss value of the convolutional neural network based on the target loss function; and an update sub-module, configured to calculate the partial derivative of the target loss function at each point of the feature vector to obtain gradient values, and to update the corresponding model parameters of the convolutional neural network according to the gradient values.
According to a further aspect of the present invention, a terminal is provided, including: a memory, a processor, and an image content recognition program stored on the memory and executable on the processor, where the image content recognition program, when executed by the processor, implements the steps of any one of the image content recognition methods described in the present invention.
According to another aspect of the present invention, a computer-readable storage medium is provided, on which an image content recognition program is stored, where the image content recognition program, when executed by a processor, implements the steps of any one of the image content recognition methods described in the present invention.
Compared with the prior art, the present invention has the following advantages:
In the image content recognition scheme provided by the embodiments of the present invention, when the convolutional neural network is iteratively trained according to input sample images, the target loss function used for each iteration of training is dynamically adjusted based on the trained-iteration count. The distribution of complex image samples can thereby be better fitted and the number of sample images with intermediate probability values reduced, so that the recall rate of samples is increased while the accuracy rate of the recognition results of the convolutional neural network is guaranteed.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention may be more readily apparent, specific embodiments of the present invention are set out below.
Description of the drawings
Various advantages and benefits will become clear to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The accompanying drawings are only for the purpose of showing the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is a flow chart of the steps of an image content recognition method according to Embodiment 1 of the present invention;
Fig. 2 is a flow chart of the steps of an image content recognition method according to Embodiment 2 of the present invention;
Fig. 3 is a structural diagram of an image content recognition device according to Embodiment 3 of the present invention;
Fig. 4 is a structural diagram of a terminal according to Embodiment 4 of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described more fully below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the present disclosure will be understood more thoroughly, and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
Embodiment one
Referring to Fig. 1, a flow chart of the steps of an image content recognition method according to Embodiment 1 of the present invention is shown.
The image content recognition method of this embodiment of the present invention may include the following steps:
Step 101: During training of the convolutional neural network, input a sample image into the convolutional neural network.
The sample image is used to iteratively train the convolutional neural network.
The convolutional neural network of this embodiment of the present invention may be a multi-class content recognition model, which can identify the class to which an image belongs; it may also be a binary-class content recognition model, which can identify whether an image belongs to a certain class. After the modeling of the convolutional neural network is completed, it needs to be iteratively trained many times using a large number of sample images, to ensure that the convolutional neural network converges and that the prediction results are accurate. The specific flow of each pass of training the convolutional neural network on a sample image is identical; this embodiment of the present invention is explained by taking the input of one sample image and one iteration of training of the convolutional neural network as an example.
Step 102: Determine the number of iterations for which the convolutional neural network has been trained.
Since the convolutional neural network needs to be iteratively trained many times during training, one iteration of training is performed each time a sample image is input. During training, the system cumulatively records the training-iteration count of the convolutional neural network, and the recorded count is used to adjust the loss function for the next iteration of training.
For example: if 100 iterations of training have been performed on the convolutional neural network before step 101 is performed, then inputting a sample image in step 101 starts the 101st iteration of training, so the trained-iteration count is determined to be 100.
Step 103: Adjust the loss function based on the trained-iteration count to obtain the target loss function.
An adjustable variable is provided in the loss function, and the adjustable variable changes with the iteration count, so that dynamic adjustment of the target loss function with the iteration count is achieved.
Dynamically adjusting the target loss function used for each iteration of training based on the trained-iteration count can reduce the gradient contribution of simple samples in parameter training, better fit the distribution of complex image samples, and reduce the number of sample images with intermediate probability values.
Step 104: Perform iterative training according to the target loss function to obtain the target convolutional neural network.
For the specific process of iteratively training the convolutional neural network based on the target loss function, reference may be made to the relevant art; this embodiment of the present invention places no particular limitation on it. The degree of convergence of the convolutional neural network can be detected through the target loss function, and the corresponding model parameters of the convolutional neural network can also be updated through the gradient values calculated from the target loss function.
The iterative training of the convolutional neural network is repeated many times, and when the convolutional neural network converges to a preset level the target convolutional neural network is generated; image content recognition can subsequently be performed through the target convolutional neural network.
Step 105: Perform content recognition on the image to be recognized through the target convolutional neural network.
If the target convolutional neural network is a binary classification model, after content recognition is performed on the image to be recognized, a result can be output indicating whether the image to be recognized is an image of the particular class.
The target convolutional neural network can be trained to identify images of any specific class, for example images of class A; correspondingly, the target convolutional neural network can identify whether the image to be recognized belongs to class A.
In the image content recognition method provided by this embodiment of the present invention, when the convolutional neural network is iteratively trained according to input sample images, the target loss function used for each iteration of training is dynamically adjusted based on the trained-iteration count. The distribution of complex image samples can thereby be better fitted and the number of sample images with intermediate probability values reduced, so that the recall rate of samples is increased while the accuracy rate of the recognition results of the convolutional neural network is guaranteed.
Embodiment two
Referring to Fig. 2, a flow chart of the steps of the image content recognition method according to Embodiment 2 of the present invention is shown.
The image content recognition method of this embodiment of the present invention may specifically include the following steps:
Step 201: During training of the convolutional neural network, input a sample image into the convolutional neural network.
The sample image is used to iteratively train the convolutional neural network. After the modeling of the convolutional neural network is completed, it needs to be iteratively trained many times using a large number of sample images, to ensure that the convolutional neural network converges and that the prediction results are accurate. The specific flow of each pass of training the convolutional neural network on a sample image is identical; this embodiment of the present invention is explained by taking the input of one sample image and one iteration of training of the convolutional neural network as an example.
Step 202: Determine the number of iterations for which the convolutional neural network has been trained.
The system accumulates the iteration count during the iterative training of the convolutional neural network. For example: if the previous iteration of training was the 50th, then the current iteration of training is the 51st, and correspondingly the next iteration of training is the 52nd.
Step 203: Adjust the loss function based on the trained-iteration count to obtain the target loss function.
In specific implementations, those skilled in the art can set different loss functions according to actual demands. If the loss function that is set differs, the specific way of adjusting it based on the trained-iteration count also differs. However the loss function is set and adjusted, it should be ensured that when the convolutional neural network is iteratively trained through the target loss function, the contribution of simple samples in parameter training is reduced, so that the model better fits the distribution of complex samples.
Optionally, the loss function can be preset as: FocalLoss = -(1 - p_t)^γ · log(p_t), where p_t is the probability value and γ is the hyper-parameter. A sample image with a large p_t is a simple sample image; conversely, a sample image with a small p_t is a complex sample image.
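The down-weighting effect of the focal loss above can be sketched directly from the formula (the example values are illustrative):

```python
import math

def focal_loss(p_t, gamma):
    """Focal loss for one point: -(1 - p_t)^gamma * log(p_t).
    As gamma grows, easy samples (p_t near 1) are down-weighted sharply."""
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

easy, hard = 0.9, 0.1
print(focal_loss(easy, 0))  # ~0.105: gamma = 0 reduces to plain cross-entropy
print(focal_loss(easy, 2))  # ~0.00105: the easy sample's loss shrinks 100x
print(focal_loss(hard, 2))  # ~1.865: the hard sample is barely affected (~2.303 at gamma = 0)
```

This is why training with a non-zero γ pushes the model to fit the hard (complex) samples rather than the abundant easy ones.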
For the above preset loss function, the target loss function is obtained by adjusting the preset loss function based on the trained-iteration count as follows: extract the preset loss function, and judge whether the trained-iteration count is greater than the first preset count; if not, adjust the hyper-parameter in the preset loss function to 0 to obtain the target loss function; if so, adjust the hyper-parameter in the preset loss function to the preset value to obtain the target loss function.
With this way of adjusting the preset loss function, the model is trained for a period at the initial stage with γ = 0; when the model has almost converged, in order to let the model better learn the distribution of difficult samples, γ is adjusted to the preset value, the preset value being non-zero. This way of adjusting the preset loss function can fit the distribution of complex samples and reduce the number of sample images with intermediate probability values. Its disadvantage is that the hyper-parameter γ jumps abruptly from 0 to the preset value during training, which can momentarily exert a great influence on the model parameters.
A preferable preset loss function is additionally provided in this embodiment of the present invention, as follows:
SinFocalLoss = -(1 - p_t)^(γ·sin(2π·clip(s - i, 0, i/2)/i)) · log(p_t)
where p_t is the probability value, γ is the hyper-parameter, i is the iteration-count upper limit, and s is the trained-iteration count.
For this preferable preset loss function, the way of adjusting the preset loss function based on the trained-iteration count to obtain the target loss function is as follows: first, determine the iteration-count upper limit; secondly, substitute the iteration-count upper limit and the trained-iteration count into the preset loss function to obtain the target loss function.
The iteration-count upper limit can be configured by those skilled in the art according to actual demands; this embodiment of the present invention places no particular limitation on the value. This preferable preset loss function, and this way of adjusting it, can not only fit the distribution of complex samples and reduce the number of sample images with intermediate probability values; since γ·sin(2π·clip(s - i, 0, i/2)/i) changes gradually, the variation of the loss function will not momentarily exert an extreme influence on the model parameters.
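A direct transcription of the SinFocalLoss formula above, as a sketch only. Note that with the clip argument exactly as it appears in this translation, clip(s - i, 0, i/2), the clipped term is 0 for any s ≤ i, so the exponent evaluates to 0 and the loss reduces to plain cross-entropy throughout training; the argument of clip in the untranslated patent may differ, so treat this as a literal rendering of the translated formula rather than a definitive implementation.

```python
import math

def clip(x, lo, hi):
    """Clamp x into the interval [lo, hi]."""
    return max(lo, min(hi, x))

def sin_focal_loss(p_t, gamma, s, i):
    """SinFocalLoss as transcribed:
    -(1 - p_t)^(gamma * sin(2*pi * clip(s - i, 0, i/2) / i)) * log(p_t),
    where s is the trained-iteration count and i is the iteration upper limit.
    The sin factor makes the effective exponent vary smoothly with s,
    avoiding the abrupt 0 -> preset-value jump in gamma."""
    exponent = gamma * math.sin(2.0 * math.pi * clip(s - i, 0.0, i / 2.0) / i)
    return -((1.0 - p_t) ** exponent) * math.log(p_t)

# With s <= i the clipped term is 0, so this prints plain cross-entropy -log(0.9).
print(sin_focal_loss(0.9, 2.0, 500, 1000))
```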
Step 204: Determine the feature maps corresponding to the sample image through the convolutional neural network.
In this embodiment of the present invention the sample image may be a single frame of a video, or may simply be a multimedia image. An image input into the convolutional neural network yields feature maps after the convolutional layers or pooling layers. For the specific way of inputting the sample image into the convolutional neural network and obtaining the feature maps, reference may be made to the existing relevant art; this embodiment of the present invention places no specific limitation on it.
Step 205: Perform average pooling on the feature maps, and perform dimension reduction on the pooled feature maps to obtain the feature vector.
The feature vector includes a plurality of points, each point corresponding to one classification label of the convolutional neural network and one probability value. The convolutional neural network contains a plurality of labels, and the probability value is the degree of match between the sample image and the classification label. It should be noted that if the convolutional neural network is a binary-class content recognition model, it contains two classification labels, respectively a label indicating that the image is of the class and a label indicating that it is not of the class; if the convolutional neural network is a multi-class content recognition model, it contains the classification label corresponding to each class.
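The pooling-and-reduction step above can be sketched as a global average pool: each 2-D feature map collapses to a single value, giving one point per classification label (the toy feature-map values below are invented for illustration):

```python
def global_average_pool(feature_maps):
    """Average-pool each 2-D feature map to one value, producing the
    feature vector described in step 205 (one point per label)."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

# Hypothetical 2-channel output for a binary-class model:
fmaps = [[[0.2, 0.4], [0.6, 0.8]],   # channel for "is of the class"
         [[0.1, 0.1], [0.3, 0.5]]]   # channel for "is not of the class"
vec = global_average_pool(fmaps)
print(vec)  # ≈ [0.5, 0.25]
```

In practice these pooled values would then be normalized (e.g. by a softmax) into the probability values the loss function consumes; that step is left out of this sketch.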
Step 206: Calculate the average loss value of the convolutional neural network based on the target loss function.
When calculating the average loss value, the loss value corresponding to each point in the feature vector is first calculated through the target loss function; the mean of the loss values corresponding to the points is then calculated to obtain the average loss value.
Whether the iterative training of the convolutional neural network can be ended can be decided through the average loss value. Specifically, judge whether the average loss value is less than the preset loss value; if so, end the iterative training of the convolutional neural network, without returning to step 201 to input sample images into the convolutional neural network; if not, return to step 201 and continue to input sample images into the convolutional neural network for iterative training, until the average loss value is less than the preset loss value and the iterative training of the convolutional neural network ends.
If the average loss value is less than the preset loss value, it can be determined that the convolutional neural network has converged to the preset standard. The preset loss value can be configured by those skilled in the art according to actual demands; this embodiment of the present invention places no specific limitation on it. The smaller the preset loss value, the better the convergence of the trained convolutional neural network; the larger the preset loss value, the easier the iterative training of the convolutional neural network.
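The stopping rule above can be sketched as follows, using plain cross-entropy as a stand-in for the target loss function (the threshold value is illustrative):

```python
import math

def average_loss(probabilities):
    """Average loss value: mean of the per-point loss values
    (cross-entropy here as a stand-in for the target loss function)."""
    return sum(-math.log(p) for p in probabilities) / len(probabilities)

def should_stop(avg_loss, preset_loss_value):
    """Training ends once the average loss drops below the preset loss value."""
    return avg_loss < preset_loss_value

avg = average_loss([0.9, 0.8])       # two points of a feature vector
print(avg, should_stop(avg, 0.5))    # ~0.164, True: converged to the preset standard
```

A smaller preset loss value demands more iterations before `should_stop` returns True, matching the trade-off described in the text.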
Step 207: Calculate the partial derivative of the target loss function at each point of the feature vector to obtain the gradient values, update the corresponding model parameters of the convolutional neural network according to the gradient values, and obtain the target convolutional neural network.
Specifically, the partial derivative of the target loss function at each point of the feature vector is calculated to obtain the gradient values. The iterative training of the convolutional neural network is in essence the continuous updating of the model parameters; image content prediction can be performed after the convolutional neural network converges to the preset standard.
It should be noted that in specific implementations step 207 is not limited to being performed after step 206; it may also be performed before step 206.
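The per-point partial derivative and the gradient update above can be sketched with a finite-difference approximation (a stand-in for the analytic derivatives and back-propagation used in practice; the loss, values and learning rate are illustrative):

```python
import math

def focal_loss(p, gamma=2.0):
    """Stand-in target loss for one point of the feature vector."""
    return -((1.0 - p) ** gamma) * math.log(p)

def numerical_gradient(loss_fn, vec, eps=1e-6):
    """Finite-difference partial derivative of the loss at each point of the
    feature vector, yielding one gradient value per point."""
    grads = []
    for k in range(len(vec)):
        grads.append((loss_fn(vec[k] + eps) - loss_fn(vec[k])) / eps)
    return grads

def gradient_step(params, grads, lr=0.1):
    """Generic update rule: move each parameter against its gradient."""
    return [w - lr * g for w, g in zip(params, grads)]

vec = [0.6, 0.3]                            # hypothetical probability points
grads = numerical_gradient(focal_loss, vec)  # negative: loss falls as p rises
print(gradient_step(vec, grads))
```

In a real network the gradients would flow back to the convolution weights rather than to the feature vector itself; this sketch only shows the derivative-then-update shape of step 207.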
Step 208: Perform content recognition on the image to be recognized through the target convolutional neural network.
The target convolutional neural network obtained by training through steps 201 to 207 can better fit the distribution of complex image samples and reduce the number of sample images with intermediate probability values. Therefore, when the image to be recognized is input into the target convolutional neural network for content recognition, an accurate recognition result can be obtained.
In the image content recognition method provided by this embodiment of the present invention, when the convolutional neural network is iteratively trained according to input sample images, the target loss function used for each iteration of training is dynamically adjusted based on the trained-iteration count. The distribution of complex image samples can thereby be better fitted and the number of sample images with intermediate probability values reduced, so that the recall rate of samples is increased while the accuracy rate of the recognition results of the convolutional neural network is guaranteed.
Embodiment three
Referring to Fig. 3, a structural diagram of an image content recognition device according to Embodiment 3 of the present invention is shown.
The image content recognition device of this embodiment of the present invention may include: an input module 301, configured to input a sample image into the convolutional neural network during training of the convolutional neural network, where the sample image is used to iteratively train the convolutional neural network; a determining module 302, configured to determine the number of iterations for which the convolutional neural network has been trained; a loss function adjustment module 303, configured to adjust the loss function based on the trained-iteration count to obtain the target loss function; a training module 304, configured to perform the current iteration of training according to the target loss function to obtain the target convolutional neural network; and a prediction module 305, configured to perform content recognition on the image to be recognized through the target convolutional neural network.
Preferably, the loss function adjustment module 303 may include: an extracting sub-module 3031, configured to extract the preset loss function and judge whether the trained-iteration count is greater than the first preset count; a first adjusting sub-module 3032, configured to, if not, adjust the hyper-parameter in the preset loss function to 0 to obtain the target loss function; and a second adjusting sub-module 3033, configured to, if so, adjust the hyper-parameter in the preset loss function to the preset value to obtain the target loss function.
Preferably, the preset loss function is as follows:
SinFocalLoss = -(1 - p_t)^(γ·sin(2π·clip(s - i, 0, i/2)/i)) · log(p_t)
where p_t is the probability value, γ is the hyper-parameter, i is the iteration-count upper limit, and s is the trained-iteration count.
Preferably, the loss function adjustment module 303 may include: an upper-limit determination sub-module 3034, configured to determine the iteration-count upper limit; and a substitution sub-module 3035, configured to substitute the iteration-count upper limit and the trained-iteration count into the preset loss function to obtain the target loss function.
Preferably, the training module 304 may include: a feature map determination sub-module 3041, configured to determine, through the convolutional neural network, the feature maps corresponding to the sample image; a processing sub-module 3042, configured to perform average pooling on the feature maps and perform dimension reduction on the pooled feature maps to obtain the feature vector, where the feature vector includes a plurality of points, each point corresponding to one classification label of the convolutional neural network and one probability value; a calculation sub-module 3043, configured to calculate the average loss value of the convolutional neural network based on the target loss function; and an update sub-module 3044, configured to calculate the partial derivative of the target loss function at each point of the feature vector to obtain the gradient values, and to update the corresponding model parameters of the convolutional neural network according to the gradient values.
The image content recognition device of this embodiment of the present invention is used to implement the corresponding image content recognition methods of Embodiment 1 and Embodiment 2 above, and has advantageous effects corresponding to those of the method embodiments, which are not repeated here.
Example IV
Referring to Fig. 4, a structural diagram of a terminal for image content recognition according to Embodiment 4 of the present invention is shown.
The terminal of this embodiment of the present invention may include: a memory, a processor, and an image content recognition program stored on the memory and executable on the processor, where the image content recognition program, when executed by the processor, implements the steps of any one of the image content recognition methods described in the present invention.
Fig. 4 is a block diagram of a convolutional neural network training terminal 600 according to an exemplary embodiment. For example, terminal 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 4, terminal 600 may include one or more of the following components: a processing component 602, a memory 604, a power supply component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 typically controls the overall operation of terminal 600, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 602 may include one or more processors 620 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 602 may include one or more modules to facilitate interaction between the processing component 602 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of terminal 600. Examples of such data include instructions for any application or method operated on terminal 600, contact data, phonebook data, messages, pictures, video, and the like. The memory 604 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power supply component 606 provides power to the various components of terminal 600. It may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for terminal 600.
The multimedia component 608 includes a screen providing an output interface between terminal 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the panel. The touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with it. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When terminal 600 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or may have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC), which is configured to receive external audio signals when terminal 600 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 604 or sent via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing state assessments of various aspects of terminal 600. For example, the sensor component 614 can detect the open/closed state of terminal 600 and the relative positioning of components, such as the display and keypad of terminal 600; it can also detect a change in position of terminal 600 or of one of its components, the presence or absence of user contact with terminal 600, the orientation or acceleration/deceleration of terminal 600, and changes in its temperature. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between terminal 600 and other devices. Terminal 600 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 also includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, terminal 600 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the convolutional neural network training method. Specifically, the image content recognition method includes: during training of a convolutional neural network, inputting sample images into the convolutional neural network, where the sample images are used for iterative training of the convolutional neural network; determining the number of iterations already trained on the convolutional neural network; adjusting the loss function based on the number of trained iterations to obtain a target loss function; performing iterative training according to the target loss function to obtain a target convolutional neural network; and performing content recognition on an image to be recognized through the target convolutional neural network.
Preferably, the step of adjusting the preset loss function based on the number of trained iterations to obtain the target loss function includes: extracting the preset loss function and judging whether the number of trained iterations exceeds a first preset count; if not, adjusting the hyperparameter in the preset loss function to 0 to obtain the target loss function; if so, adjusting the hyperparameter in the preset loss function to a preset value to obtain the target loss function.
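The two-stage hyperparameter adjustment described above can be sketched as a small helper function. This is an illustrative sketch, not the patent's implementation; the function name, arguments, and the preset value of 2.0 used in the example are assumptions.

```python
def adjust_gamma(trained_iterations, first_preset_count, preset_gamma):
    """Return the hyperparameter for the target loss function.

    While the number of trained iterations has not exceeded the first
    preset count, gamma is set to 0, which turns the focal weighting
    term (1 - p_t)**gamma into 1, i.e. plain cross-entropy behaviour.
    Afterwards gamma is set to the preset value so that hard samples
    receive more weight.
    """
    if trained_iterations <= first_preset_count:
        return 0.0
    return preset_gamma
```

For example, with a first preset count of 1000 iterations and a preset value of 2.0, `adjust_gamma(500, 1000, 2.0)` returns 0.0 and `adjust_gamma(1500, 1000, 2.0)` returns 2.0.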
Preferably, the preset loss function is as follows:

SinFocalLoss = -(1 - p_t)^γ · sin(2π · clip(s - i, 0, i/2) / i) · log(p_t)

where p_t is the probability value, γ is a hyperparameter, i is the iteration-count upper limit, and s is the number of iterations already trained.
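The formula above can be transcribed directly into code. The sketch below assumes NumPy and the natural logarithm; the default `gamma=2.0` is borrowed from the focal-loss literature and is not specified by this document.

```python
import numpy as np

def sin_focal_loss(p_t, s, i, gamma=2.0):
    """SinFocalLoss = -(1 - p_t)^gamma * sin(2*pi*clip(s - i, 0, i/2)/i) * log(p_t)

    p_t   : probability value predicted for the true label
    s     : number of iterations already trained
    i     : iteration-count upper limit
    gamma : focusing hyperparameter
    """
    phase = np.clip(s - i, 0.0, i / 2.0)             # clip(s - i, 0, i/2)
    modulation = np.sin(2.0 * np.pi * phase / i)     # sine term driven by training progress
    return -((1.0 - p_t) ** gamma) * modulation * np.log(p_t)
```

Note that for s ≤ i the clipped phase is 0 and the loss vanishes, so as written the sine term only contributes once the trained-iteration count s passes the upper limit i; the text does not clarify whether that is the intended reading of the formula.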
Preferably, the step of adjusting the preset loss function based on the number of trained iterations to obtain the target loss function includes: determining the iteration-count upper limit; and substituting the iteration-count upper limit and the number of trained iterations into the preset loss function to obtain the target loss function.
Preferably, the step of performing one iteration of training according to the target loss function includes: determining, through the convolutional neural network, the feature map corresponding to the sample image; performing average pooling on the feature map and applying dimension reduction to the pooled feature map to obtain a feature vector, where the feature vector includes multiple points and each point corresponds to a classification label in the convolutional neural network and a probability value; computing the average loss value of the convolutional neural network based on the target loss function; and computing the partial derivative of the target loss function at each point of the feature vector to obtain gradient values, and updating the corresponding model parameters of the convolutional neural network according to the gradient values.
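The pooling, loss, and gradient-update steps above can be illustrated with a minimal NumPy sketch of one iteration on a single sample. Plain softmax cross-entropy stands in for the target loss function here, and the single linear classifier layer, learning rate, and function names are all assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def iteration_step(feature_map, label, weights, lr=0.1):
    """One simplified training iteration.

    feature_map : (C, H, W) array produced by the convolutional layers
    label       : integer index of the sample's classification label
    weights     : (num_classes, C) classifier weights, updated in place
    """
    # Average pooling: each channel map collapses to one scalar, which is
    # the dimension reduction that yields the feature vector.
    feature_vec = feature_map.mean(axis=(1, 2))        # shape (C,)
    logits = weights @ feature_vec                     # shape (num_classes,)
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()                            # probability per label
    loss = -np.log(probs[label])                       # cross-entropy stand-in
    # The partial derivative of the loss with respect to each logit gives
    # the gradient values used for the parameter update.
    grad_logits = probs.copy()
    grad_logits[label] -= 1.0
    weights -= lr * np.outer(grad_logits, feature_vec)
    return loss
```

Repeated calls on the same sample should reduce the returned loss, which is a quick sanity check that the update moves the parameters along the negative gradient.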
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of terminal 600 to complete the convolutional neural network training method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. When the instructions in the storage medium are executed by the processor of the terminal, the terminal is enabled to perform the steps of any one of the convolutional neural network training methods described herein.
When iteratively training a convolutional neural network on input sample images, the terminal provided by the embodiments of the present invention dynamically adjusts, based on the iteration count, the target loss function used in each iteration of training. This allows a better fit to the distribution of complex image samples and reduces the number of sample images with intermediate probability values, thereby increasing sample recall while preserving the accuracy of the convolutional neural network's recognition results.
As for the device embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, refer to the description of the method embodiments.
The image content recognition scheme provided herein is not inherently tied to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. The structure required to construct a system embodying this scheme is apparent from the description above. Moreover, the present invention is not directed to any particular programming language; it should be understood that the invention described herein may be implemented using a variety of programming languages, and the descriptions above of specific languages are given to disclose the best mode of the invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention above, features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and may furthermore be divided into multiple submodules, subunits, or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may in practice be used to implement some or all of the functions of some or all of the components of the image content recognition scheme according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described here. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be construed as names.
Claims (12)
1. An image content recognition method, characterized in that the method includes:
during training of a convolutional neural network, inputting a sample image into the convolutional neural network, where the sample image is used for iterative training of the convolutional neural network;
determining the number of iterations already trained on the convolutional neural network;
adjusting the loss function based on the number of trained iterations to obtain a target loss function;
performing iterative training according to the target loss function to obtain a target convolutional neural network; and
performing content recognition on an image to be recognized through the target convolutional neural network.
2. The method according to claim 1, characterized in that the step of adjusting the preset loss function based on the number of trained iterations to obtain the target loss function includes:
extracting the preset loss function, and judging whether the number of trained iterations exceeds a first preset count; if not, adjusting the hyperparameter in the preset loss function to 0 to obtain the target loss function;
if so, adjusting the hyperparameter in the preset loss function to a preset value to obtain the target loss function.
3. The method according to claim 1, characterized in that the preset loss function is as follows:
SinFocalLoss = -(1 - p_t)^γ · sin(2π · clip(s - i, 0, i/2) / i) · log(p_t)
where p_t is the probability value, γ is a hyperparameter, i is the iteration-count upper limit, and s is the number of iterations already trained.
4. The method according to claim 3, characterized in that the step of adjusting the preset loss function based on the number of trained iterations to obtain the target loss function includes:
determining the iteration-count upper limit;
substituting the iteration-count upper limit and the number of trained iterations into the preset loss function to obtain the target loss function.
5. The method according to any one of claims 1-4, characterized in that the step of performing one iteration of training according to the target loss function includes:
determining, through the convolutional neural network, the feature map corresponding to the sample image;
performing average pooling on the feature map, and applying dimension reduction to the pooled feature map to obtain a feature vector, where the feature vector includes multiple points, each point corresponding to a classification label in the convolutional neural network and a probability value;
computing the average loss value of the convolutional neural network based on the target loss function;
computing the partial derivative of the target loss function at each point of the feature vector to obtain gradient values, and updating the corresponding model parameters of the convolutional neural network according to the gradient values.
6. An image content recognition device, characterized in that the device includes:
an input module, configured to input a sample image into a convolutional neural network during training of the convolutional neural network, where the sample image is used for iterative training of the convolutional neural network;
a determining module, configured to determine the number of iterations already trained on the convolutional neural network;
a loss function adjustment module, configured to adjust the loss function based on the number of trained iterations to obtain a target loss function;
a training module, configured to perform the current iteration of training according to the target loss function to obtain a target convolutional neural network; and
a prediction module, configured to perform content recognition on an image to be recognized through the target convolutional neural network.
7. The device according to claim 6, characterized in that the loss function adjustment module includes:
an extraction submodule, configured to extract a preset loss function and judge whether the number of trained iterations exceeds a first preset count;
a first adjustment submodule, configured to, if not, adjust the hyperparameter in the preset loss function to 0 to obtain the target loss function; and
a second adjustment submodule, configured to, if so, adjust the hyperparameter in the preset loss function to a preset value to obtain the target loss function.
8. The device according to claim 6, characterized in that the preset loss function is as follows:
SinFocalLoss = -(1 - p_t)^γ · sin(2π · clip(s - i, 0, i/2) / i) · log(p_t)
where p_t is the probability value, γ is a hyperparameter, i is the iteration-count upper limit, and s is the number of iterations already trained.
9. The device according to claim 8, characterized in that the loss function adjustment module includes:
an upper-limit determination submodule, configured to determine the iteration-count upper limit; and
a substitution submodule, configured to substitute the iteration-count upper limit and the number of trained iterations into the preset loss function to obtain the target loss function.
10. The device according to any one of claims 6-9, characterized in that the training module includes:
a feature-map determination submodule, configured to determine, through the convolutional neural network, the feature map corresponding to the sample image;
a processing submodule, configured to perform average pooling on the feature map and apply dimension reduction to the pooled feature map to obtain a feature vector, where the feature vector includes multiple points, each point corresponding to a classification label in the convolutional neural network and a probability value;
a computation submodule, configured to compute the average loss value of the convolutional neural network based on the target loss function; and
an update submodule, configured to compute the partial derivative of the target loss function at each point of the feature vector to obtain gradient values, and to update the corresponding model parameters of the convolutional neural network according to the gradient values.
11. A terminal, characterized by including: a memory, a processor, and an image content recognition program stored in the memory and executable on the processor, where the image content recognition program, when executed by the processor, implements the steps of the image content recognition method according to any one of claims 1 to 5.
12. A computer-readable storage medium, characterized in that an image content recognition program is stored on the computer-readable storage medium, and when the image content recognition program is executed by a processor, the steps of the image content recognition method according to any one of claims 1 to 5 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711394566.7A CN108256555B (en) | 2017-12-21 | 2017-12-21 | Image content identification method and device and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711394566.7A CN108256555B (en) | 2017-12-21 | 2017-12-21 | Image content identification method and device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108256555A true CN108256555A (en) | 2018-07-06 |
CN108256555B CN108256555B (en) | 2020-10-16 |
Family
ID=62722581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711394566.7A Active CN108256555B (en) | 2017-12-21 | 2017-12-21 | Image content identification method and device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108256555B (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109145898A (en) * | 2018-07-26 | 2019-01-04 | 清华大学深圳研究生院 | A kind of object detecting method based on convolutional neural networks and iterator mechanism |
CN109272514A (en) * | 2018-10-05 | 2019-01-25 | 数坤(北京)网络科技有限公司 | The sample evaluation method and model training method of coronary artery parted pattern |
CN109711386A (en) * | 2019-01-10 | 2019-05-03 | 北京达佳互联信息技术有限公司 | Obtain method, apparatus, electronic equipment and the storage medium of identification model |
CN110008956A (en) * | 2019-04-01 | 2019-07-12 | 深圳市华付信息技术有限公司 | Invoice key message localization method, device, computer equipment and storage medium |
CN110060314A (en) * | 2019-04-22 | 2019-07-26 | 深圳安科高技术股份有限公司 | A kind of CT iterative approximation accelerated method and system based on artificial intelligence |
CN110135582A (en) * | 2019-05-09 | 2019-08-16 | 北京市商汤科技开发有限公司 | Neural metwork training, image processing method and device, storage medium |
CN110210568A (en) * | 2019-06-06 | 2019-09-06 | 中国民用航空飞行学院 | The recognition methods of aircraft trailing vortex and system based on convolutional neural networks |
CN110414581A (en) * | 2019-07-19 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Picture detection method and device, storage medium and electronic device |
CN110647916A (en) * | 2019-08-23 | 2020-01-03 | 苏宁云计算有限公司 | Pornographic picture identification method and device based on convolutional neural network |
CN110866880A (en) * | 2019-11-14 | 2020-03-06 | 上海联影智能医疗科技有限公司 | Image artifact detection method, device, equipment and storage medium |
CN110942090A (en) * | 2019-11-11 | 2020-03-31 | 北京迈格威科技有限公司 | Model training method, image processing method, device, electronic equipment and storage medium |
CN111274972A (en) * | 2020-01-21 | 2020-06-12 | 北京妙医佳健康科技集团有限公司 | Dish identification method and device based on metric learning |
CN111368900A (en) * | 2020-02-28 | 2020-07-03 | 桂林电子科技大学 | Image target object identification method |
CN111382772A (en) * | 2018-12-29 | 2020-07-07 | Tcl集团股份有限公司 | Image processing method and device and terminal equipment |
CN111477212A (en) * | 2019-01-04 | 2020-07-31 | 阿里巴巴集团控股有限公司 | Content recognition, model training and data processing method, system and equipment |
CN111612021A (en) * | 2019-02-22 | 2020-09-01 | 中国移动通信有限公司研究院 | Error sample identification method and device and terminal |
CN112101345A (en) * | 2020-08-26 | 2020-12-18 | 贵州优特云科技有限公司 | Water meter reading identification method and related device |
CN112562069A (en) * | 2020-12-24 | 2021-03-26 | 北京百度网讯科技有限公司 | Three-dimensional model construction method, device, equipment and storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292333A (en) * | 2017-06-05 | 2017-10-24 | 浙江工业大学 | A kind of rapid image categorization method based on deep learning |
- 2017-12-21 CN CN201711394566.7A patent/CN108256555B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107292333A (en) * | 2017-06-05 | 2017-10-24 | 浙江工业大学 | A kind of rapid image categorization method based on deep learning |
Non-Patent Citations (2)
Title |
---|
TSUNG-YI LIN et al.: "Focal Loss for Dense Object Detection", 2017 IEEE International Conference on Computer Vision |
LI Feiteng: "Convolutional Neural Networks and Their Applications", China Master's Theses Full-text Database (Information Science and Technology) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109145898A (en) * | 2018-07-26 | 2019-01-04 | 清华大学深圳研究生院 | A kind of object detecting method based on convolutional neural networks and iterator mechanism |
CN109272514A (en) * | 2018-10-05 | 2019-01-25 | 数坤(北京)网络科技有限公司 | The sample evaluation method and model training method of coronary artery parted pattern |
CN109272514B (en) * | 2018-10-05 | 2021-07-13 | 数坤(北京)网络科技股份有限公司 | Sample evaluation method and model training method of coronary artery segmentation model |
CN111382772A (en) * | 2018-12-29 | 2020-07-07 | Tcl集团股份有限公司 | Image processing method and device and terminal equipment |
CN111382772B (en) * | 2018-12-29 | 2024-01-26 | Tcl科技集团股份有限公司 | Image processing method and device and terminal equipment |
CN111477212B (en) * | 2019-01-04 | 2023-10-24 | 阿里巴巴集团控股有限公司 | Content identification, model training and data processing method, system and equipment |
CN111477212A (en) * | 2019-01-04 | 2020-07-31 | 阿里巴巴集团控股有限公司 | Content recognition, model training and data processing method, system and equipment |
CN109711386A (en) * | 2019-01-10 | 2019-05-03 | 北京达佳互联信息技术有限公司 | Obtain method, apparatus, electronic equipment and the storage medium of identification model |
CN111612021B (en) * | 2019-02-22 | 2023-10-31 | 中国移动通信有限公司研究院 | Error sample identification method, device and terminal |
CN111612021A (en) * | 2019-02-22 | 2020-09-01 | 中国移动通信有限公司研究院 | Error sample identification method and device and terminal |
CN110008956A (en) * | 2019-04-01 | 2019-07-12 | 深圳市华付信息技术有限公司 | Invoice key message localization method, device, computer equipment and storage medium |
CN110060314A (en) * | 2019-04-22 | 2019-07-26 | 深圳安科高技术股份有限公司 | A kind of CT iterative approximation accelerated method and system based on artificial intelligence |
CN110135582A (en) * | 2019-05-09 | 2019-08-16 | 北京市商汤科技开发有限公司 | Neural metwork training, image processing method and device, storage medium |
CN110135582B (en) * | 2019-05-09 | 2022-09-27 | 北京市商汤科技开发有限公司 | Neural network training method, neural network training device, image processing method, image processing device and storage medium |
CN110210568A (en) * | 2019-06-06 | 2019-09-06 | 中国民用航空飞行学院 | The recognition methods of aircraft trailing vortex and system based on convolutional neural networks |
CN110414581B (en) * | 2019-07-19 | 2023-05-30 | 腾讯科技(深圳)有限公司 | Picture detection method and device, storage medium and electronic device |
CN110414581A (en) * | 2019-07-19 | 2019-11-05 | 腾讯科技(深圳)有限公司 | Picture detection method and device, storage medium and electronic device |
CN110647916A (en) * | 2019-08-23 | 2020-01-03 | 苏宁云计算有限公司 | Pornographic picture identification method and device based on convolutional neural network |
CN110942090A (en) * | 2019-11-11 | 2020-03-31 | 北京迈格威科技有限公司 | Model training method, image processing method, device, electronic equipment and storage medium |
CN110942090B (en) * | 2019-11-11 | 2024-03-29 | 北京迈格威科技有限公司 | Model training method, image processing device, electronic equipment and storage medium |
CN110866880A (en) * | 2019-11-14 | 2020-03-06 | 上海联影智能医疗科技有限公司 | Image artifact detection method, device, equipment and storage medium |
CN111274972A (en) * | 2020-01-21 | 2020-06-12 | 北京妙医佳健康科技集团有限公司 | Dish identification method and device based on metric learning |
CN111274972B (en) * | 2020-01-21 | 2023-08-29 | 北京妙医佳健康科技集团有限公司 | Dish identification method and device based on measurement learning |
CN111368900A (en) * | 2020-02-28 | 2020-07-03 | 桂林电子科技大学 | Image target object identification method |
CN112101345A (en) * | 2020-08-26 | 2020-12-18 | 贵州优特云科技有限公司 | Water meter reading identification method and related device |
CN112562069A (en) * | 2020-12-24 | 2021-03-26 | 北京百度网讯科技有限公司 | Three-dimensional model construction method, device, equipment and storage medium |
CN112562069B (en) * | 2020-12-24 | 2023-10-27 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for constructing three-dimensional model |
Also Published As
Publication number | Publication date |
---|---|
CN108256555B (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108256555A (en) | Picture material recognition methods, device and terminal | |
CN108664989B (en) | Image tag determines method, apparatus and terminal | |
CN108171254A (en) | Image tag determines method, apparatus and terminal | |
CN108399409B (en) | Image classification method, device and terminal | |
CN109117862B (en) | Image tag recognition method, device and server | |
CN108256549B (en) | Image classification method, device and terminal | |
CN109871896A (en) | Data classification method, device, electronic equipment and storage medium | |
CN106548468B (en) | Image sharpness discrimination method and device | |
CN110443280A (en) | Training method and device for image detection model, and storage medium | |
CN106202330A (en) | Junk information determination method and device | |
CN109801270A (en) | Anchor point determination method and device, electronic equipment and storage medium | |
CN106845377A (en) | Face keypoint localization method and device | |
CN106980840A (en) | Face shape matching method, device and storage medium | |
CN111210844B (en) | Method, device and equipment for determining speech emotion recognition model and storage medium | |
CN106203306A (en) | Age prediction method, device and terminal | |
CN107527024A (en) | Facial attractiveness assessment method and device | |
CN109858614A (en) | Neural network training method and device, electronic equipment and storage medium | |
CN109726709A (en) | Icon recognition method and apparatus based on convolutional neural networks | |
CN110390086A (en) | Text generation method, apparatus and storage medium | |
CN107977895A (en) | Vehicle damage information determination method, apparatus and user equipment | |
CN109819288A (en) | Advertisement placement video determination method, apparatus, electronic equipment and storage medium | |
CN107748867A (en) | Target object detection method and device | |
CN107194464A (en) | Training method and device for convolutional neural network model | |
CN108764283A (en) | Loss value acquisition method and device for classification model | |
CN108133217B (en) | Image feature determination method, apparatus and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||