CN108229379A - Image-recognizing method, device, computer equipment and storage medium - Google Patents

Image-recognizing method, device, computer equipment and storage medium Download PDF

Info

Publication number
CN108229379A
CN108229379A (application CN201711479546.XA)
Authority
CN
China
Prior art keywords
image
feature
network model
convolutional neural
neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711479546.XA
Other languages
Chinese (zh)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711479546.XA priority Critical patent/CN108229379A/en
Publication of CN108229379A publication Critical patent/CN108229379A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention proposes an image recognition method and device, a computer device, and a storage medium. The method includes: obtaining an image to be recognized; and performing image recognition on the image to be recognized using a trained first convolutional neural network model, to determine the object shown in the image to be recognized, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features. By recognizing the image to be recognized with the trained first convolutional neural network model, the accuracy of image recognition is improved, which solves the problem in the prior art that image recognition relies on manually designed feature extraction and therefore has low accuracy.

Description

Image-recognizing method, device, computer equipment and storage medium
Technical field
This application relates to the technical field of image recognition, and in particular to an image recognition method, an image recognition device, a computer device, and a storage medium.
Background technology
With the rapid development of the Internet of Things and artificial intelligence, a wave of intelligence has swept across the entire home appliance industry. From smart phones and smart televisions to smart refrigerators and smart air conditioners, intelligence is becoming the dominant force influencing and changing people's lives. As an important component of the smart home, the smart phone allows us to realize intelligent food management, remote control, information exchange, Internet entertainment, and so on. In intelligent food management, the core technology is food recognition.
In the related art, food is currently recognized mainly by manual entry, bar code scanning, image recognition, and the like. Manual entry is cumbersome, while bar code scanning involves designing electronic tags, antennas, and card readers for the food; the design cycle is long, and the electronic tags must also be attached manually in advance, which is likewise cumbersome. Traditional image recognition methods mainly perform feature extraction with manually designed features, which are not robust for recognizing a large number of different objects and yield low recognition accuracy.
Summary of the invention
The present invention is intended to solve at least some of the technical problems in the related art.
To this end, the present invention proposes an image recognition method that performs image recognition on an image to be recognized using a trained first convolutional neural network model, improving the accuracy of image recognition.
The present invention also proposes an image recognition device.
The present invention also proposes a computer device.
The present invention also proposes a computer-readable storage medium.
To achieve the above objects, an embodiment of the first aspect of the present invention proposes an image recognition method, including:
obtaining an image to be recognized; and
performing image recognition on the image to be recognized using a trained first convolutional neural network model, to determine the object shown in the image to be recognized, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features.
In the image recognition method of this embodiment of the present invention, an image to be recognized is obtained, and image recognition is performed on it using a trained first convolutional neural network model to determine the object shown in the image, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features. Recognizing the image with the trained convolutional neural network model improves the accuracy of image recognition and solves the problem in the related art that image recognition based on manually designed feature extraction is complex to operate, is not robust for recognizing many different objects, and yields low accuracy.
To achieve the above objects, an embodiment of the second aspect of the present invention proposes an image recognition device, including:
an acquisition module, configured to obtain an image to be recognized; and
a recognition module, configured to perform image recognition on the image to be recognized using a trained first convolutional neural network model, to determine the object shown in the image to be recognized, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features.
In the image recognition device of this embodiment of the present invention, the acquisition module obtains an image to be recognized, and the recognition module performs image recognition on it using the trained first convolutional neural network model to determine the object shown in the image, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features. Recognizing the image to be recognized with the trained first convolutional neural network model is simple to operate and robust for recognizing different objects, while the first convolutional neural network for extracting global image features and the second convolutional neural network for extracting local image features included in the model improve the accuracy and granularity of image recognition.
To achieve the above objects, an embodiment of the third aspect of the present invention proposes a computer device, which includes a mobile terminal and specifically includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, the image recognition method described in the first aspect is implemented.
To achieve the above objects, an embodiment of the fourth aspect of the present invention proposes a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image recognition method described in the first aspect is implemented.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and will in part become apparent from the description or be learned by practice of the present invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram of an image recognition method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of object recognition performed with a mobile phone;
Fig. 3 is a flow diagram of a method for training the first convolutional neural network model according to an embodiment of the present invention;
Fig. 4 is a flow diagram of an image defogging method according to an embodiment of the present invention;
Fig. 5 is a flow diagram of another image recognition method according to an embodiment of the present invention;
Fig. 6 is a flow diagram of a method for training the second convolutional neural network model according to an embodiment of the present invention;
Fig. 7 is a flow diagram of yet another image recognition method according to an embodiment of the present invention;
Fig. 8 is a structural diagram of an image recognition device according to an embodiment of the present invention;
Fig. 9 is a structural diagram of another image recognition device according to an embodiment of the present invention;
Fig. 10 is a first structural diagram of the recognition module 72 according to an embodiment of the present invention; and
Fig. 11 is a second structural diagram of the recognition module 72 according to an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
The image recognition method, device, computer device, and storage medium of the embodiments of the present invention are described below with reference to the accompanying drawings.
The image recognition method of the embodiments of the present invention can be used to recognize images of various objects. The execution terminal of this embodiment may specifically be a mobile phone; those skilled in the art will appreciate that the execution terminal may also be another mobile terminal capable of performing image recognition with the method provided in this embodiment. In the following embodiments, the object is described using food ingredients as an example; for other kinds of objects the implementation principle is the same and is not repeated in the embodiments of the present invention.
Fig. 1 is a flow diagram of an image recognition method according to an embodiment of the present invention.
As shown in Figure 1, this method comprises the following steps:
Step 101: an image to be recognized is obtained.
Specifically, an image to be recognized is obtained. The image may be an image of food ingredients in various states, each image containing only one kind of ingredient. The image of the ingredients may be captured directly by a photographing device, downloaded online, or extracted from an existing ingredient image library.
Step 102: image recognition is performed on the image to be recognized using the trained first convolutional neural network model, to determine the object shown in the image to be recognized.
Specifically, the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features. The first convolutional neural network extracts global image features, such as global information on the position and contour of the ingredients in an image sample. The second convolutional neural network extracts local image features, such as local features of the shape, color, and surface of the ingredients in an image sample. The image of the ingredients to be recognized is input into the trained first convolutional neural network model, and image recognition is performed on it to determine the object shown in the image to be recognized.
For example, if the ingredient in the image to be recognized is an apple in a packaging bag, the apple is recognized after recognition with the first convolutional neural network model. Fig. 2 is a schematic diagram of object recognition performed with a mobile phone; as shown in Fig. 2, after the object in the image to be recognized is recognized by the mobile phone, the recognition result obtained is an apple.
In the image recognition method of this embodiment of the present invention, an image to be recognized is obtained, and image recognition is performed on it using the trained first convolutional neural network model to determine the object shown in the image, where the first convolutional neural network model includes a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features. Recognizing the image to be recognized with the trained first convolutional neural network model improves the accuracy of image recognition and solves the problem in the prior art that image recognition based on manually designed feature extraction yields low accuracy.
Based on the above embodiment, before image recognition is performed with the first convolutional neural network model, the model must first be trained; once training is completed, the first convolutional neural network model recognizes the image to be recognized. To this end, this embodiment proposes a possible implementation of a method for training the first convolutional neural network model. Fig. 3 is a flow diagram of the method for training the first convolutional neural network model according to an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:
Step 201: image samples are acquired.
Specifically, image samples are obtained from a pre-established image library. When the image library is pre-established, the images in it can be acquired in many ways. One possible implementation is to photograph various objects; when images are acquired this way, the image resolution is not limited. Another possible implementation is to download images from the Internet or obtain them from a local picture library. After the image samples are obtained, the objects in them must also be annotated, where annotation means that every image corresponds to a single object and the real name of each single object is labeled as the label for image recognition. The acquired image samples include images of various objects in various states, such as images in a normal clear state, images of objects wrapped in a packaging bag, and images of objects wrapped in fog inside a refrigerator. For example, for the ingredient apple, the collected image samples may show an apple in a normal clear state, an apple placed in a packaging bag, or an apple placed in a refrigerator with its chilled surface wrapped in fog.
When the image samples in the image library are obtained by photographing, the pixel resolution of the captured images is not limited. As one possibility, for ease of uniform processing, all images may be normalized to a unified resolution, such as 448x448 or 224x224; however, the image resolution is not limited in this embodiment.
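As an illustration of the resolution normalization mentioned above, the following is a minimal nearest-neighbour resize in plain Python; this is a sketch only (the function name and method are assumptions), and a real pipeline would use an image library's resize instead.

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize: map every output pixel back to its
    nearest source pixel, normalising all samples to one resolution
    (e.g. 224x224 or 448x448) before they are fed to the network."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

Calling `resize_nearest(sample, 224, 224)` on every sample would give the unified resolution the embodiment suggests.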
Step 202: feature extraction is performed using the first convolutional neural network.
Specifically, the first convolutional neural network model includes the first convolutional neural network, which extracts global image features, such as global information on the position and contour of the ingredients in an image training sample. The set of image training samples obtained from the image samples is defined as X, and the corresponding labels as Y. The training samples are input into the first convolutional neural network for feature extraction. As one possible implementation, the first convolutional neural network may use a pre-trained deep convolutional neural network model, such as a Visual Geometry Group (VGG) 21-layer net, to generate a feature map, denoted f1.
Step 203: feature extraction is performed using the second convolutional neural network.
Specifically, the first convolutional neural network model further includes the second convolutional neural network, which extracts local image features, such as local features of the shape, color, and surface of the ingredients in an image sample. The image training samples are input into the second convolutional neural network for feature extraction. As one possible implementation, a convolutional neural network with fewer layers than the first convolutional neural network may be used, such as a pre-trained deep convolutional neural network model like a VGG 16-layer net; the generated feature map is denoted f2.
It should be noted that steps 202 and 203 can be executed in four possible orders, as follows:
First, step 202 is executed first, and then step 203.
Second, step 203 is executed first, and then step 202.
Third, step 203 is executed in parallel while step 202 is executed.
Fourth, the order in which steps 202 and 203 extract features from the image to be recognized is determined according to the category of the object. Specifically, as one possible implementation, the order of feature extraction can depend on whether the global features or the local features of the object to be recognized are more salient. If the local features are more salient, for example the object's color is more vivid, the local features are extracted first and the global features afterwards, i.e., step 203 is executed first and then step 202; conversely, step 202 is executed first and then step 203.
Step 204: the feature maps are input into an outer-product layer, where an outer product is computed to obtain the feature of each pixel.
Specifically, the feature maps f1 and f2 are input into the outer-product layer, and an outer-product operation is performed pixel by pixel to obtain the feature of each pixel, denoted fbi. The calculation formula is: fbi(x) = f1(x) ⊗ f2(x), i.e., the outer product of the two feature vectors at pixel x.
Step 205: the features of all pixels are summed using a pooling layer to obtain the bilinear feature vector of the image, which is then normalized by a normalization layer.
Specifically, the features of all pixels are summed using the pooling layer, i.e., a sum-pooling layer is used to obtain the bilinear feature vector of the image, denoted y, with y = Σx fbi(x), where x is a pixel in the feature map; y is then normalized by the normalization layer, for example with the L2 norm: z = y / ||y||2.
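The per-pixel outer product and sum-pooling of steps 204 and 205 can be sketched in plain Python; this is a minimal illustration, assuming the two feature maps have the same spatial size (given here as flat lists of per-pixel feature vectors) and using plain L2 normalization, not the patent's implementation.

```python
import math

def bilinear_pool(f1, f2):
    """Bilinear pooling: for each pixel, take the outer product of the
    global-feature vector (from f1) and the local-feature vector (from
    f2), sum these products over all pixels, then L2-normalise."""
    c1, c2 = len(f1[0]), len(f2[0])
    pooled = [0.0] * (c1 * c2)
    for v1, v2 in zip(f1, f2):                # same pixel in both maps
        for i, a in enumerate(v1):
            for j, b in enumerate(v2):
                pooled[i * c2 + j] += a * b   # outer product, summed over pixels
    norm = math.sqrt(sum(v * v for v in pooled)) or 1.0
    return [v / norm for v in pooled]         # L2 normalisation
```

The result is one vector of length c1*c2 per image, which is what the fully connected layer in step 206 would consume.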
Step 206: the normalized features are fused through a fully connected layer and input into a classifier to determine the recognized object.
Optionally, the output of the pooling layer is input into a fully connected layer that uses a softmax activation function as a multilayer perceptron (Multi-Layer Perceptron) output layer; feature fusion and classification are performed through the fully connected layer. Specifically, the softmax activation function of the fully connected layer makes the output probabilities of the fully connected layer sum to 1, i.e., softmax transforms the vector of arbitrary real values corresponding to the ingredient names into a vector whose elements lie between 0 and 1 and sum to 1. Among the ingredient names corresponding to the output probabilities, the name with the highest probability is taken as the recognized ingredient name.
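The softmax behaviour described above (arbitrary real scores mapped to probabilities that sum to 1, with the highest-probability name taken as the result) can be sketched as follows; the label list and scores are illustrative only.

```python
import math

def softmax(logits):
    """Map arbitrary real scores to a probability vector summing to 1."""
    m = max(logits)                          # subtract max for numeric stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits, labels):
    """Return the label with the highest softmax probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```

For instance, `classify([0.2, 2.5, 0.1], ["pear", "apple", "banana"])` would pick "apple", mirroring the recognition result shown in Fig. 2.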
Step 207: the value of the loss function is determined according to the object recognition result of each training sample and the annotation of the training sample.
Optionally, the object recognition result of each training sample is defined as Ŷi and the annotation of the training sample as Yi. Using the L2 norm, the difference between Ŷi and Yi is calculated, and the squared difference is taken as the value of the loss function. Denoting the loss function by Loss, its value is Loss = ||Ŷi - Yi||2².
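The loss computation of step 207 and the threshold decision of step 208 reduce to a squared L2 distance and a comparison; the following is a minimal sketch, where the threshold value is an assumption since the patent does not fix one.

```python
def l2_squared_loss(pred, label):
    """Loss = squared L2 distance between the model output (e.g. the
    softmax probability vector) and the one-hot annotation vector."""
    return sum((p - y) ** 2 for p, y in zip(pred, label))

def training_converged(loss, threshold=0.01):  # threshold is an assumed value
    """Step 208's decision: training is complete when the loss falls
    below the threshold; otherwise the parameters are adjusted again."""
    return loss < threshold
```

A perfect prediction gives zero loss, and any uncertainty in the prediction pushes the loss above zero, driving further parameter adjustment.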
Step 208: the value of the loss function is compared with a threshold; if the value is less than the threshold, step 210 is executed; otherwise, step 209 is executed.
Specifically, the calculated value of the loss function is compared with the threshold. If the loss function is greater than the threshold, the parameters of the first convolutional neural network are adjusted; if the loss function is less than the threshold, it is determined that training of the first convolutional neural network model is completed.
Step 209: the parameters of the first convolutional neural network model are adjusted.
Specifically, if the value of the loss function is greater than the threshold, the parameters of the first convolutional neural network model are adjusted. Since the first convolutional neural network model uses currently pre-trained deep convolutional neural network models, such as the VGG 21-layer net model and the VGG 16-layer net model, the model parameters can be fine-tuned to obtain the convolution kernel parameters of each layer, such as the filter size and stride. After the adjusted parameters of the first convolutional neural network model are determined, execution returns to steps 202 and 203, i.e., the object recognition result is regenerated by the parameter-adjusted first convolutional neural network model and the value of the loss function is redetermined, until the value of the loss function is less than the threshold and training of the first convolutional neural network model is completed.
Step 210: it is determined that training of the first convolutional neural network model is completed.
Step 211: test samples are obtained from the image samples, and the accuracy of the trained first convolutional neural network model is verified on the test samples to determine whether the accuracy of the first convolutional neural network model is higher than a threshold.
Specifically, test samples are obtained from the image samples, where the numbers of test samples and training samples follow a set ratio; according to this ratio, test samples are obtained from the image samples and input into the trained first convolutional neural network model to obtain the objects shown in the test samples. The object recognized for each test sample is compared with the annotation of that test sample to judge whether the recognition result is correct, so that the number of test samples whose recognized object is correct can be counted. Dividing the number of correctly recognized test samples by the total number of test samples gives the accuracy of the first convolutional neural network model, which is compared with a threshold. If the accuracy is higher than the threshold, the recognition accuracy of the model is high and the training effect is good; if the accuracy is lower than the threshold, the recognition accuracy of the model is low and the training effect is poor, in which case the number of training samples must be adjusted and training repeated, i.e., execution returns to step 201 to retrain and re-verify the model.
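The accuracy check of step 211 amounts to counting matches between recognized objects and annotations; a minimal sketch follows, with illustrative sample names.

```python
def model_accuracy(predicted, annotated):
    """Fraction of test samples whose recognised object matches its
    annotation: correct predictions divided by total test samples."""
    correct = sum(1 for p, a in zip(predicted, annotated) if p == a)
    return correct / len(annotated)
```

The returned fraction is the value compared against the accuracy threshold to decide whether training must be repeated from step 201.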
In the method for training the first convolutional neural network model of this embodiment of the present invention, image samples are acquired and training samples are obtained from them; the training samples are input into the first convolutional neural network model for image recognition to obtain the recognition results of all objects; the value of the loss function is determined according to the object recognition results of all training samples and their annotations; according to the value of the loss function, the parameters of the first convolutional neural network model are adjusted, the object recognition results are regenerated, and the value of the loss function is redetermined, until the value of the loss function is less than the threshold, at which point it is determined that training of the first convolutional neural network model is completed; and the training effect of the model is verified after training is completed. By training the first convolutional neural network model with a certain proportion of training samples selected from the acquired image samples and verifying the trained model with test samples, i.e., by training and verifying the two-path convolutional neural network model, the recognition accuracy of the model can be improved; and by controlling the ratio of training samples to test samples, the training quality and recognition accuracy can be further improved.
In practical applications, food ingredients can be in many states, such as wrapped in a packaging bag or wrapped in fog inside a refrigerator. Ingredient images collected in these cases have the problem that the image features of the ingredients are not distinct, and the images must be pre-processed to improve the accuracy of subsequent ingredient recognition. To address this problem, an embodiment of the present invention provides an image defogging method, which specifically illustrates the defogging pre-processing performed both on the images collected during model training and on the images to be recognized acquired during image recognition. Based on the above embodiments, Fig. 4 is a flow diagram of an image defogging method according to an embodiment of the present invention. As shown in Fig. 4, the method includes the following steps:
Step 301: the image to be defogged is obtained.
Specifically, the image samples used for training and the images used for image recognition serve as the images to be defogged.
Step 302: defogging is performed on the image to be defogged.
Defogging is performed on each image to be defogged, specifically as follows. First, the image to be defogged is filtered using a minimum-value filtering algorithm to obtain a dark channel map, denoted Jdark. The calculation formula is: Jdark(x) = min over y in Ω(x) of ( min over c in {R,G,B} of Jc(y) ), where Jc denotes each of the red (R), green (G), and blue (B) channels of the color image, and Ω(x) denotes a window centered on pixel x.
Second, a threshold is set, and the target pixels whose brightness in the dark channel map is higher than the threshold are determined. For the target pixels determined from the dark channel map, the corresponding pixels are located in the image to be defogged, and the maximum brightness of these corresponding pixels is taken as the atmospheric brightness, denoted A.
Third, the image to be defogged is processed according to the atmospheric brightness and a preset defogging factor to obtain a transmittance map, denoted T. As one possible implementation, the calculation formula is: T(x) = 1 - ω · min over y in Ω(x) of ( min over c of Ic(y)/A ), where I(x) is the image to be defogged and ω is the defogging factor, which takes the empirical value 0.95.
Finally, according to the dark channel map, the atmospheric brightness, and the transmittance map obtained for the image to be defogged, defogging is performed on the image sample to obtain the defogged image, denoted W(x). As one possible implementation, the calculation formula is: W(x) = (I(x) - A)/T(x) + A.
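The whole defogging pipeline of steps 301 and 302 can be sketched in plain Python on a small RGB image given as nested lists. This is an illustration under stated assumptions: the window size, the lower clamp t0 on the transmittance, and the heuristic for selecting the atmospheric-brightness pixel are implementation choices not fixed by the patent.

```python
def dark_channel(img, w=1):
    """Minimum filter over a (2w+1)x(2w+1) window and over the RGB
    channels: Jdark(x) = min over the window of min over channels."""
    h, wd = len(img), len(img[0])
    out = [[0.0] * wd for _ in range(h)]
    for y in range(h):
        for x in range(wd):
            m = float("inf")
            for yy in range(max(0, y - w), min(h, y + w + 1)):
                for xx in range(max(0, x - w), min(wd, x + w + 1)):
                    m = min(m, min(img[yy][xx]))
            out[y][x] = m
    return out

def defog(img, omega=0.95, t0=0.1):
    """Dark-channel defogging: W(x) = (I(x) - A)/T(x) + A with
    T(x) = 1 - omega * dark_channel(I/A); A is taken from the brightest
    pixel at the highest dark-channel value (an assumed heuristic)."""
    h, wd = len(img), len(img[0])
    dark = dark_channel(img)
    _, ay, ax = max((dark[y][x], y, x) for y in range(h) for x in range(wd))
    a = max(img[ay][ax])                       # atmospheric brightness A
    norm = [[[c / a for c in img[y][x]] for x in range(wd)] for y in range(h)]
    tdark = dark_channel(norm)
    result = [[None] * wd for _ in range(h)]
    for y in range(h):
        for x in range(wd):
            t = max(t0, 1.0 - omega * tdark[y][x])  # clamp avoids division by ~0
            result[y][x] = [(c - a) / t + a for c in img[y][x]]
    return result
```

On a uniformly hazy patch the output is unchanged, while a pixel darker than the atmospheric brightness is pushed further from it, which is the contrast-restoring effect the pre-processing relies on.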
In the image defogging method of this embodiment of the present invention, defogging pre-processing is performed on the image samples collected for model training and on the images to be recognized during image recognition. Defogging the image samples reduces the problem of low recognition accuracy caused by fog and food packaging.
Based on the above embodiments, in order to explain the image recognition method further, this embodiment provides another possible implementation of the image recognition method to explain the whole recognition process more clearly. Fig. 5 is a flow diagram of another image recognition method according to an embodiment of the present invention. As shown in Fig. 5, the method includes the following steps:
Step 401: an image to be recognized is obtained.
Specifically, there are many ways of obtaining the image to be recognized. One possible way is to capture an image of the object to be recognized using the camera of a mobile phone; another possible way is to obtain the image of the object to be recognized from the storage unit of the mobile phone; yet another possible way is to download the image of the object to be recognized from the Internet. The way of obtaining the image to be recognized is not limited in this embodiment.
Step 402: dehazing processing is performed on the image to be recognized.
Specifically, reference may be made to step 302 in the embodiment corresponding to Fig. 4. The realization principle is the same and is not repeated here.
Step 403: feature extraction is performed using the first-path convolutional neural network.
Step 404: feature extraction is performed using the second-path convolutional neural network.
Step 405: the extracted features are input into the outer product layer, and outer product calculation is performed to obtain the feature of each pixel.
Step 406: the features of the pixels are summed using a pooling layer to obtain the bilinear feature vector of the image, which is then normalized through a normalization layer.
Step 407: the normalized features are fused using a fully connected layer and input to a classifier to determine the object in the image to be recognized.
Steps 403 to 407 in this embodiment may refer to steps 202 to 206 in the embodiment corresponding to Fig. 3. The realization principles are the same and are not repeated here.
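Steps 403 to 406 describe a bilinear (two-path) feature pipeline: a per-pixel outer product of the two paths' features, sum pooling over all pixels, and normalization. A minimal NumPy sketch, assuming both paths produce feature maps at the same spatial resolution; the signed-square-root step is a common normalization choice for bilinear features and is an assumption, not a detail stated in this embodiment:

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """feat_a (H, W, C1) and feat_b (H, W, C2): feature maps from the
    two CNN paths; returns the normalized bilinear feature vector."""
    h, w, c1 = feat_a.shape
    c2 = feat_b.shape[2]
    a = feat_a.reshape(h * w, c1)
    b = feat_b.reshape(h * w, c2)
    # Sum over pixels of the per-pixel outer products (steps 405-406).
    bilinear = a.T @ b                      # (C1, C2)
    vec = bilinear.ravel()                  # bilinear feature vector
    # Normalization: signed square root followed by L2 normalization.
    vec = np.sign(vec) * np.sqrt(np.abs(vec))
    return vec / (np.linalg.norm(vec) + 1e-12)
```

The matrix product `a.T @ b` is equivalent to summing the outer product of the two feature vectors at every pixel, which is why no explicit per-pixel loop is needed.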
In the image recognition method of the embodiment of the present invention, an image to be recognized is obtained, dehazing preprocessing is performed on it, and image recognition is performed on the image to be recognized using the trained first-path and second-path convolutional neural networks to determine the object shown in the image to be recognized. This solves the problems of related image recognition technology, in which recognition relies on manually designed feature extraction, making the operation cumbersome, the design cycle long, the user experience poor and the recognition accuracy low.
In the above embodiment, image recognition is performed using the first convolutional neural network model. As another possible implementation, a second convolutional neural network model may also be used for image recognition. The second convolutional neural network model is obtained by replacing the fully connected layer after training of the first convolutional neural network model is completed, so that the second convolutional neural network model can identify not only the name of the object in the image but also the position of the object in the image, thereby improving the fineness of image recognition. Before the second convolutional neural network model is used for recognition, it first needs to be trained. Based on the above embodiments, the present invention also provides a possible implementation for training the second convolutional neural network model. Fig. 6 is a flow diagram of a method for training the second convolutional neural network model provided by an embodiment of the present invention. As shown in Fig. 6, the steps of training the second convolutional neural network model include:
Step 501: image samples are acquired.
Reference may be made to step 201 in the embodiment corresponding to Fig. 3. The realization principle is the same and is not repeated here.
It should be noted that before the second convolutional neural network model is trained, when the objects in the acquired image samples are annotated, not only the name of each object is marked as its identity, but also the position of the object in the image is marked; together they serve as the label for object recognition.
Step 502: dehazing processing is performed on the acquired image samples.
Reference may be made to step 302 in the embodiment corresponding to Fig. 4. The realization principle is the same and is not repeated here.
Step 503: feature extraction is performed using the first-path convolutional neural network.
Step 504: feature extraction is performed using the second-path convolutional neural network.
Step 505: the feature values are input into the outer product layer, and outer product calculation is performed to obtain the feature of each pixel.
Step 506: the features of the pixels are summed using a pooling layer to obtain the bilinear feature vector of the image, which is then normalized through a normalization layer.
Steps 503 to 506 in this embodiment may refer to steps 202 to 205 in the embodiment corresponding to Fig. 3, and may reuse the model parameters determined for steps 202 to 205 after training of the first convolutional neural network model in the above embodiment is completed, thereby increasing the training speed of the second convolutional neural network model.
Step 507: the normalized features are fused using a convolutional layer to obtain a feature map.
Optionally, the convolutional layer may use a 3×3 filter with a stride of 1 to perform feature fusion on the feature vectors corresponding to the normalized features, obtaining a corresponding convolutional feature map.
Step 508: upsampling is performed according to the convolutional feature map using a deconvolutional layer, so that each pixel in the upsampled feature map corresponds to a pixel in the input image.
Specifically, the deconvolutional layer operation is equivalent to interpolating the obtained convolutional feature map into a larger feature map and then performing convolution, so that after the convolution the feature map is restored to the same size as the input image. Since each pixel in the upsampled feature map corresponds to a pixel in the input image, the feature map obtained by upsampling can be used to identify not only the type of the object but also the position of the object, improving the fineness of object recognition.
For example, suppose the input picture has size 78×24, where 78 and 24 are respectively the height and width of the feature map. The output obtained after the convolutional network is a 39×12 feature map, and this output feature map is upsampled to the original input size of 78×24 using a 4×4 convolution kernel with strides [1, 2, 2, 1]. Specifically, the 39×12 feature map is first interpolated to a size of height × width (H×W) such that a 4×4 convolution with stride 2 on this H×W feature map yields a 78×24 feature map. From the convolution formula it can be derived that the interpolated feature map should have a height of 4 + 2×(78 − 1) and a width of 4 + 2×(24 − 1), where 4 is the width and height of the convolution kernel and 2 is the stride. A 4×4 convolution is then performed on this interpolated feature map to obtain the 78×24 feature map, and the upsampling is complete.
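The size arithmetic in this example can be checked with the standard valid-convolution formula, out = floor((in − k)/s) + 1, and the inverse relation in = k + s·(out − 1) used in the text:

```python
def conv_out(n, k, s):
    """Output size of a valid convolution: floor((n - k) / s) + 1."""
    return (n - k) // s + 1

def interp_size_for(target, k, s):
    """Intermediate interpolated size so that a k x k, stride-s convolution
    yields `target` pixels (the relation in = k + s * (target - 1) above)."""
    return k + s * (target - 1)

# The 39x12 -> 78x24 example: kernel 4, stride 2.
h = interp_size_for(78, 4, 2)   # 4 + 2*(78 - 1) = 158
w = interp_size_for(24, 4, 2)   # 4 + 2*(24 - 1) = 50
assert conv_out(h, 4, 2) == 78
assert conv_out(w, 4, 2) == 24
```

This is only the size bookkeeping of the deconvolution step; the interpolation of pixel values and the learned kernel weights are outside the scope of the sketch.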
Step 509: according to the features extracted by the first-path convolutional neural network and/or the features extracted by the second-path convolutional neural network, the feature map obtained after upsampling continues to be upsampled.
Optionally, based on the features extracted by the first-path convolutional neural network in step 503 and/or the features extracted by the second-path convolutional neural network in step 504, the feature map upsampled in step 508 is further upsampled with a smaller convolution kernel size and stride, so that the information of each pixel in the resulting feature map is more accurate, further improving the accuracy of object identity and position recognition.
Step 510: classification is performed according to the feature information of each pixel in the upsampled feature map, and the object identity and object position are determined.
Specifically, classification is performed according to the feature information of each pixel in the upsampled feature map to determine the object identity corresponding to each pixel. The pixel region corresponding to the same object identity is identified, and the position of the object in the input image is determined according to the position of this region in the feature map.
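As an illustration of the region-to-position part of step 510, given a per-pixel identity map the pixel region of each identity can be reduced to a bounding box; the zero background label and the (top, left, bottom, right) box encoding are assumptions of the sketch, not details from this embodiment:

```python
import numpy as np

def object_positions(label_map):
    """Map each object identity in a per-pixel label map (0 = background)
    to the bounding box (top, left, bottom, right) of its pixel region."""
    boxes = {}
    for obj_id in np.unique(label_map):
        if obj_id == 0:
            continue
        rows, cols = np.nonzero(label_map == obj_id)
        boxes[int(obj_id)] = (int(rows.min()), int(cols.min()),
                              int(rows.max()), int(cols.max()))
    return boxes
```

Because each pixel of the upsampled feature map corresponds to a pixel of the input image (step 508), a box in the feature map is directly a box in the input image.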
Step 511: according to the object recognition result for each training sample and the identity and position labels of the training sample, the value of the identity loss function and the value of the position loss function are determined.
Specifically, the identity label, i.e. the name of the object in the training sample, is denoted Y_i′, and the position label is denoted Y_i″; the name information in the object recognition result for each training sample is denoted Ŷ_i′ and the position information is denoted Ŷ_i″. According to the object recognition result for each training sample and the identity and position labels of the training sample, the values of the identity loss function and the position loss function are calculated. As one possible implementation, using the L2 norm, the differences between Y_i′ and Ŷ_i′ and between Y_i″ and Ŷ_i″ are calculated respectively, and the squares of the differences are taken as the identity loss value and the position loss value.
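A sketch of the squared-L2 computation in step 511; representing the identity label as a vector (for example a one-hot encoding of the name) and the position label as a coordinate vector is an assumption, since the embodiment does not fix an encoding:

```python
import numpy as np

def squared_l2_losses(pred_id, true_id, pred_pos, true_pos):
    """Identity loss = ||pred_id - true_id||^2; position loss =
    ||pred_pos - true_pos||^2 (squared L2 norms of the differences)."""
    id_loss = float(np.sum((np.asarray(pred_id) - np.asarray(true_id)) ** 2))
    pos_loss = float(np.sum((np.asarray(pred_pos) - np.asarray(true_pos)) ** 2))
    return id_loss, pos_loss
```

Both values are then compared against the threshold in step 512 to decide whether training continues.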
Step 512: it is judged whether the value of the identity loss function and the value of the position loss function are both smaller than the threshold; if so, step 514 is performed, and if not, step 513 is performed.
Specifically, when it is judged that the value of the identity loss function is smaller than the threshold and the value of the position loss function is also smaller than the threshold, model training is complete; otherwise, the model parameters are adjusted and training is performed again.
Step 513: the parameters of the second convolutional neural network model are adjusted.
Specifically, step 513 may refer to step 209 in the embodiment corresponding to Fig. 3. The realization principle is the same and is not repeated here.
Step 514: it is determined that training of the second convolutional neural network model is complete.
Specifically, step 514 may refer to step 210 in the embodiment corresponding to Fig. 3. The realization principle is the same and is not repeated here.
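Steps 511 to 514 form a loop: compute the two losses, stop when both are under the threshold, otherwise adjust parameters and repeat. A skeleton of that control flow, in which `model.losses` and `model.adjust` are hypothetical stand-ins for the real forward pass and parameter update (this embodiment does not name such methods):

```python
def train(model, samples, threshold, max_iters=100):
    """Loop of steps 511-514: iterate until the identity loss and the
    position loss are both below the threshold, or iterations run out."""
    for _ in range(max_iters):
        id_loss, pos_loss = model.losses(samples)   # step 511
        if id_loss < threshold and pos_loss < threshold:
            return True                             # step 514: complete
        model.adjust()                              # step 513: adjust params
    return False
```

The `max_iters` cap is an added safeguard against non-converging training, not part of the described method.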
Step 515: test samples are obtained from the image samples, and the accuracy of the trained second convolutional neural network model is verified using the test samples to determine that the accuracy of the second convolutional neural network model is higher than the threshold.
Specifically, step 515 may refer to step 211 in the embodiment corresponding to Fig. 3. The realization principle is the same and is not repeated here.
In the method for training the second convolutional neural network model of the embodiment of the present invention, the fully connected layer in the first convolutional neural network model is replaced with a convolutional layer and a deconvolutional layer to obtain the second convolutional neural network model. Image samples are acquired, dehazing processing is performed on them in advance, training samples are selected from the dehazed image samples, and the second convolutional neural network model is trained using the dehazed training samples. By performing dehazing processing on the image samples, the problem of low recognition accuracy caused by fog and food packaging bags is reduced. Since the second convolutional neural network model is obtained by replacing the fully connected layer of the trained first convolutional neural network model and is then trained, it can identify not only the name of the object but also the position of the object, improving the accuracy and fineness of image recognition. This solves the problems of traditional image recognition technology, in which manually designed feature extraction has poor robustness for recognizing a large number of different objects and low recognition accuracy.
Based on the above embodiments, after training of the second convolutional neural network model is completed, the second convolutional neural network model may be used to recognize images. To this end, the embodiment of the present invention also proposes another possible image recognition method. Fig. 7 is a flow diagram of another image recognition method provided by an embodiment of the present invention. As shown in Fig. 7, the image recognition method using the second convolutional neural network model includes the following steps:
Step 601: an image to be recognized is obtained.
Reference may be made to step 401 in the embodiment corresponding to Fig. 5. The principle is the same and is not repeated here.
Step 602: dehazing processing is performed on the image to be recognized.
Step 603: feature extraction is performed using the first-path convolutional neural network.
Step 604: feature extraction is performed using the second-path convolutional neural network.
Step 605: the extracted features are input into the outer product layer, and outer product calculation is performed to obtain the feature of each pixel.
Step 606: the features of the pixels are summed using a pooling layer to obtain the bilinear feature vector of the image, which is then normalized through a normalization layer.
Step 607: the normalized features are fused using a convolutional layer to obtain a feature map.
Step 608: upsampling is performed according to the feature map using a deconvolutional layer, so that each pixel in the upsampled feature map corresponds to a pixel in the input image.
Step 609: according to the features extracted by the first-path convolutional neural network and/or the features extracted by the second-path convolutional neural network, the feature map obtained after upsampling continues to be upsampled.
Step 610: classification is performed according to the feature information of each pixel in the upsampled feature map, and the object identity and object position are determined.
Steps 602 to 610 in this embodiment may refer to steps 502 to 510 in the above embodiment. The realization principles are the same and are not repeated here.
In the image recognition method of the embodiment of the present invention, dehazing processing is performed on the image to be recognized in advance, and image recognition is performed using the trained second convolutional neural network model. By performing dehazing processing, the problem of low recognition accuracy caused by fog and food packaging bags is reduced. Recognizing the image to be recognized with the trained second convolutional neural network model identifies not only the name of the object but also the position of the object, improving the accuracy and fineness of image recognition and solving the problems of traditional image recognition technology, in which manually designed feature extraction has poor robustness for recognizing a large number of different objects and low recognition accuracy.
In order to realize the above embodiments, the present invention also proposes an image recognition apparatus.
Fig. 8 is a structural diagram of an image recognition apparatus provided by an embodiment of the present invention.
As shown in Fig. 8, the apparatus includes an obtaining module 71 and a recognition module 72.
The obtaining module 71 is configured to obtain an image to be recognized.
The recognition module 72 is configured to perform image recognition on the image to be recognized using a trained first convolutional neural network model to determine the object shown in the image to be recognized, wherein the first convolutional neural network model includes a first-path convolutional neural network for extracting global image features and a second-path convolutional neural network for extracting local image features.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the image recognition apparatus of the embodiment of the present invention, the obtaining module obtains an image to be recognized, and the recognition module performs image recognition on the image to be recognized using the trained first convolutional neural network model to determine the object shown in the image, wherein the first convolutional neural network model includes a first-path convolutional neural network for extracting global image features and a second-path convolutional neural network for extracting local image features. Recognizing the image to be recognized with the trained first convolutional neural network model is easy to operate and robust for recognizing different objects, and the two paths included in the model, one for extracting global image features and one for extracting local image features, improve the accuracy and fineness of image recognition.
Based on the above embodiments, the embodiment of the present invention also provides a possible implementation of the image recognition apparatus. Fig. 9 is a structural diagram of another image recognition apparatus provided by an embodiment of the present invention. As shown in Fig. 9, on the basis of the previous embodiment, the apparatus further includes an acquisition module 73, a training module 74, a dehazing module 75 and a verification module 76.
The acquisition module 73 is configured to acquire image samples, wherein the image samples are obtained from a pre-established image library and are annotated according to the objects in the image samples.
The training module 74 is configured to obtain training samples from the image samples and input them into the first convolutional neural network model; to perform image recognition using the first convolutional neural network model to determine the objects shown in the training samples, wherein the first convolutional neural network model includes a first-path convolutional neural network for extracting global image features and a second-path convolutional neural network for extracting local image features; to determine the value of the loss function according to the object shown in each training sample and the label of the training sample; to adjust the parameters of the first convolutional neural network according to the value of the loss function; to redetermine the objects shown in the training samples with the parameter-adjusted first convolutional neural network and redetermine the value of the loss function; and, when the value of the loss function is smaller than the threshold, to determine that training of the first convolutional neural network model is complete.
The dehazing module 75 is configured to filter the image samples using a minimum-value filtering algorithm to obtain a dark channel map; to determine from the dark channel map target pixels whose brightness is higher than the threshold; to determine in the image sample the pixels corresponding to the target pixels in the dark channel map and take the maximum brightness of the corresponding pixels as the atmospheric brightness; to compute the image sample according to the atmospheric brightness and a preset dehazing factor to obtain a transmission map; and to perform dehazing processing on the image sample according to the dark channel map, the atmospheric brightness and the transmission map.
The verification module 76 is configured to obtain test samples from the image samples, input the test samples into the trained first convolutional neural network model, recognize the objects shown in the test samples, calculate the accuracy of the first convolutional neural network model according to the recognized objects and the labels of the test samples, and determine that the accuracy of the first convolutional neural network model is higher than the threshold.
As one possible implementation, when obtaining test samples from the image samples, the verification module 76 may obtain the test samples from the image samples according to the quantitative ratio between test samples and training samples.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the image recognition apparatus of the embodiment of the present invention, the acquisition module acquires image samples; the training module obtains training samples from the image samples, inputs them into the first convolutional neural network model for image recognition to determine the objects shown in the training samples, determines the value of the loss function according to the object shown in each training sample and the label of the training sample, adjusts the parameters of the first convolutional neural network according to the value of the loss function, redetermines the objects shown in the training samples and the value of the loss function, and determines that training of the first convolutional neural network model is complete when the value of the loss function is smaller than the threshold; and the recognition module performs image recognition using the trained first convolutional neural network model to obtain the object shown in the image to be recognized. Training the first convolutional neural network model with the acquired image samples and recognizing the image to be recognized with the trained model improves the accuracy of image recognition, while performing dehazing preprocessing on the acquired images reduces the problem of difficult recognition caused by fog around the object or by the packaging bag in which the object is wrapped, further improving the accuracy of image recognition.
Based on the above embodiments, the present invention also provides a possible implementation of the recognition module 72. Fig. 10 is one structural diagram of the recognition module 72 provided by an embodiment of the present invention. As shown in Fig. 10, the recognition module 72 may include a first extraction unit 721, a first calculation unit 722 and a first determination unit 723.
The first extraction unit 721 is configured to perform feature extraction on the image to be recognized using the first-path convolutional neural network and the second-path convolutional neural network.
The first calculation unit 722 is configured to input the features extracted by the first-path and second-path convolutional neural networks into the outer product layer, so that the outer product layer performs outer product calculation according to the local image features and the global image features to obtain the feature of each pixel; and to sum the features of the pixels using a pooling layer to obtain the bilinear feature vector of the image, which is normalized through a normalization layer.
The first determination unit 723 is configured to fuse the normalized features using a fully connected layer, input them to a classifier, and recognize the target object.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the image recognition apparatus of the embodiment of the present invention, the image to be recognized is recognized by the trained first convolutional neural network model comprising the two-path neural network, improving the accuracy of image recognition.
Based on the above embodiments, the present invention also provides another possible implementation of the recognition module 72. Fig. 11 is a second structural diagram of the recognition module 72 provided by an embodiment of the present invention. As shown in Fig. 11, the recognition module 72 may further include a second extraction unit 724, a second calculation unit 725, a fusion unit 726, an upsampling unit 727 and a second determination unit 728.
The second extraction unit 724 is configured to perform feature extraction on the input image using the first-path convolutional neural network and the second-path convolutional neural network.
The second calculation unit 725 is configured to input the features extracted by the first-path and second-path convolutional neural networks into the outer product layer, so that the outer product layer performs outer product calculation according to the local image features and the global image features to obtain the feature of each pixel; and to sum the features of the pixels using a pooling layer to obtain the bilinear feature vector of the image, which is normalized using a normalization layer.
The fusion unit 726 is configured to fuse the normalized features using a convolutional layer to obtain a feature map.
The upsampling unit 727 is configured to perform upsampling according to the feature map using a deconvolutional layer, so that each pixel in the upsampled feature map corresponds to a pixel in the input image, and to classify according to the feature information of each pixel in the upsampled feature map to determine the object identity corresponding to each pixel.
The second determination unit 728 is configured to identify the pixel region corresponding to the same object identity and determine the position of the object in the input image according to the position of this region in the feature map.
As one possible implementation, the upsampling unit 727 is further configured to continue upsampling the upsampled feature map according to the features extracted by the first-path convolutional neural network and/or the features extracted by the second-path convolutional neural network.
It should be noted that the foregoing explanation of the method embodiments also applies to the apparatus of this embodiment and is not repeated here.
In the image recognition apparatus of the embodiment of the present invention, the fully connected layer of the trained first convolutional neural network model is replaced with a convolutional layer, and a deconvolutional layer is added after the convolutional layer, to obtain the second convolutional neural network model. By recognizing images with the second convolutional neural network model comprising the two-path neural network model, not only can the name of the object be identified but also the position of the object, improving the fineness of image recognition.
In order to realize the above embodiments, the present invention also proposes a computer device, including a memory, a processor and a computer program stored in the memory and executable on the processor, wherein when the processor executes the program, the image recognition method described in the foregoing method embodiments is realized.
In order to realize the above embodiments, the present invention also proposes a computer device that comprises a mobile terminal.
In order to realize the above embodiments, the present invention also proposes a computer-readable storage medium on which a computer program is stored, wherein when the program is executed by a processor, the image recognition method described in the foregoing method embodiments is realized.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms need not refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no mutual contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
In addition, the terms "first" and "second" are used only for descriptive purposes and shall not be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical feature. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein may be understood to represent a module, segment or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process, and the scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example an ordered list of executable instructions that may be considered for implementing logic functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate or transmit a program for use by, or in connection with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection portion (electronic apparatus) with one or more wirings, a portable computer disk cartridge (magnetic apparatus), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber apparatus, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, otherwise suitably processing it, and then stored in a computer memory.
It should be understood that the parts of the present invention may be realized by hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods may be realized by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized by hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination thereof, may be used: a discrete logic circuit having a logic gate circuit for realizing a logic function on a data signal, an application-specific integrated circuit having a suitable combinational logic gate circuit, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art are appreciated that realize all or part of step that above-described embodiment method carries Suddenly it is that relevant hardware can be instructed to complete by program, the program can be stored in a kind of computer-readable storage medium In matter, the program when being executed, one or a combination set of the step of including embodiment of the method.
In addition, each functional unit in each embodiment of the present invention can be integrated in a processing module, it can also That each unit is individually physically present, can also two or more units be integrated in a module.Above-mentioned integrated mould The form that hardware had both may be used in block is realized, can also be realized in the form of software function module.The integrated module is such as Fruit is realized in the form of software function module and is independent product sale or in use, can also be stored in a computer In read/write memory medium.
Storage medium mentioned above can be read-only memory, disk or CD etc..Although it has been shown and retouches above The embodiment of the present invention is stated, it is to be understood that above-described embodiment is exemplary, it is impossible to be interpreted as the limit to the present invention System, those of ordinary skill in the art can be changed above-described embodiment, change, replace and become within the scope of the invention Type.

Claims (19)

1. An image recognition method, characterized in that the method comprises the following steps:
obtaining an image to be recognized;
performing image recognition on the image to be recognized using a trained first convolutional neural network model, to determine the object shown in the image to be recognized; wherein the first convolutional neural network model comprises a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features.
2. The image recognition method according to claim 1, characterized in that, before performing image recognition on the image to be recognized using the trained first convolutional neural network model, the method further comprises:
collecting image samples, the image samples being obtained from a pre-established image library, and annotating the objects in the image samples;
obtaining training samples from the image samples, inputting the training samples into the first convolutional neural network model, and performing image recognition using the first convolutional neural network model to determine the objects shown in the training samples;
determining the value of a loss function according to the object shown in each training sample and the annotation of that training sample;
adjusting the parameters of the first convolutional neural network model according to the value of the loss function, re-determining the objects shown in the training samples using the parameter-adjusted first convolutional neural network model, and re-determining the value of the loss function, until the value of the loss function is below a threshold, at which point training of the first convolutional neural network model is determined to be complete.
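The train-until-threshold loop of claim 2 can be sketched as follows. This is an illustrative stand-in only: a toy linear classifier and synthetic data replace the first convolutional neural network model and the image samples, and the threshold and learning rate are arbitrary choices.

```python
import numpy as np

# Toy stand-in for the claim-2 loop: identify objects in the training samples,
# compute a loss against the annotations, adjust parameters, and stop once the
# loss drops below a threshold. A linear classifier replaces the CNN model here.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                     # "training samples"
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = (X @ true_w > 0).astype(float)               # "annotated" labels

w = np.zeros(4)                                  # model parameters
threshold, lr = 0.30, 0.5
for step in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))           # recognition pass
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if loss < threshold:                         # training deemed complete
        break
    w -= lr * X.T @ (p - y) / len(y)             # parameter adjustment
```

The structure (predict, score against annotations, adjust, re-check the loss) is what the claim recites; any differentiable loss and optimizer could fill the same roles.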
3. The image recognition method according to claim 1, characterized in that performing image recognition on the image to be recognized using the trained first convolutional neural network model, to determine the object shown in the image to be recognized, comprises:
performing feature extraction on the image to be recognized using the first convolutional neural network and the second convolutional neural network, respectively;
inputting the features extracted by the first convolutional neural network and the second convolutional neural network into an outer-product layer, so that the outer-product layer performs an outer-product calculation on the local image features and the global image features to obtain a feature for each pixel;
summing the features of the pixels using a pooling layer to obtain a bilinear feature vector of the image, and normalizing it with a normalization layer;
performing feature fusion on the normalized features using a fully connected layer, and inputting the result into a classifier to identify the target object.
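The outer-product, sum-pooling, and normalization steps of claim 3 can be sketched in numpy. The feature maps, spatial size, and channel counts below are arbitrary stand-ins for the outputs of the two convolutional paths, and the signed-square-root plus L2 step is one common choice of normalization, not one the claim specifies.

```python
import numpy as np

# Stand-ins for the outputs of the two convolutional paths; the 7x7 spatial
# grid and the 8/6 channel counts are arbitrary illustrative choices.
rng = np.random.default_rng(0)
H, W, Cg, Cl = 7, 7, 8, 6
global_feat = rng.normal(size=(H, W, Cg))        # first path: global features
local_feat = rng.normal(size=(H, W, Cl))         # second path: local features

# Outer-product layer: outer product of the two feature vectors at each pixel.
outer = np.einsum('hwc,hwd->hwcd', global_feat, local_feat)

# Pooling layer: sum the per-pixel features into one bilinear feature vector.
bilinear = outer.reshape(H * W, Cg * Cl).sum(axis=0)

# Normalization layer: signed square root followed by L2 normalization.
bilinear = np.sign(bilinear) * np.sqrt(np.abs(bilinear))
bilinear /= np.linalg.norm(bilinear)
```

The resulting vector has one entry per pair of channels (here 8 × 6 = 48), which is then fused and fed to the classifier.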
4. The image recognition method according to claim 3, characterized in that performing feature extraction on the image to be recognized using the first convolutional neural network and the second convolutional neural network, respectively, comprises:
extracting the global features using the first convolutional neural network, and then extracting the local features using the second convolutional neural network;
or, extracting the local features using the second convolutional neural network, and then extracting the global features using the first convolutional neural network;
or, extracting the local features using the second convolutional neural network in parallel while extracting the global features using the first convolutional neural network;
or, determining, according to the category to which the object belongs, the order in which the first convolutional neural network and the second convolutional neural network perform feature extraction on the image to be recognized.
5. The image recognition method according to claim 2, characterized in that, after collecting the image samples, the method further comprises performing dehazing processing on the image samples as images to be dehazed;
and, after obtaining the image to be recognized, performing dehazing processing on the image to be recognized as an image to be dehazed.
6. The image recognition method according to claim 5, characterized in that the dehazing processing comprises:
filtering the image to be dehazed using a minimum-value filtering algorithm to obtain a dark channel map;
determining, from the dark channel map, target pixels whose brightness is above a threshold;
determining, from the target pixels in the dark channel map, the corresponding pixels in the image to be dehazed;
taking the maximum brightness of the corresponding pixels as the atmospheric brightness;
calculating a transmittance map from the image sample according to the atmospheric brightness and a preset haze-removal factor;
performing dehazing processing on the image to be dehazed according to the dark channel map, the atmospheric brightness, and the transmittance map.
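A minimal numpy sketch of the dark-channel dehazing steps recited in claim 6. The window size, the haze-removal factor, the percentile used as the brightness threshold, and the synthetic hazy image are all illustrative choices, not values prescribed by the claim.

```python
import numpy as np

def dark_channel(img, patch=3):
    # Minimum over the colour channels, then a minimum filter over a window.
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(patch) for j in range(patch)])
    return windows.min(axis=0)

rng = np.random.default_rng(0)
hazy = rng.uniform(0.4, 1.0, size=(16, 16, 3))   # synthetic hazy image

dark = dark_channel(hazy)                        # dark channel map

# Target pixels: brightness above a threshold (here the 99.9th percentile),
# read back in the hazy image to estimate the atmospheric brightness.
bright = dark.ravel() >= np.quantile(dark, 0.999)
A = hazy.reshape(-1, 3)[bright].max()            # atmospheric brightness

omega = 0.95                                     # preset haze-removal factor
t = np.clip(1.0 - omega * dark / A, 0.1, 1.0)    # transmittance map
dehazed = (hazy - A) / t[..., None] + A          # dehazed output
```

Clipping the transmittance away from zero is a standard safeguard so the division does not amplify noise in dense-haze regions.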
7. The image recognition method according to claim 2, characterized in that, after determining that training of the first convolutional neural network model is complete, the method further comprises:
obtaining test samples from the image samples, inputting the test samples into the trained first convolutional neural network model, and identifying the objects shown in the test samples;
calculating the accuracy of the first convolutional neural network model according to the identified objects and the annotations of the test samples;
determining that the accuracy of the first convolutional neural network model is above a threshold.
8. The image recognition method according to claim 7, characterized in that obtaining test samples from the image samples comprises:
obtaining test samples from the image samples according to a quantitative ratio of test samples to training samples.
9. The image recognition method according to claim 3, characterized in that, after determining that training of the first convolutional neural network model is complete, the method further comprises:
replacing the fully connected layer of the first convolutional neural network model with a convolutional layer, and adding a deconvolutional layer after the convolutional layer, to obtain a second convolutional neural network model;
training the second convolutional neural network model using the annotated training samples;
performing image recognition using the trained second convolutional neural network model to obtain an object identity and an object position.
10. The image recognition method according to claim 9, characterized in that performing image recognition using the trained second convolutional neural network model comprises:
performing feature extraction on the input image using the first convolutional neural network and the second convolutional neural network;
inputting the features extracted by the first convolutional neural network and the second convolutional neural network into the outer-product layer, so that the outer-product layer performs an outer-product calculation on the local image features and the global image features to obtain a feature for each pixel;
summing the features of the pixels using the pooling layer to obtain a bilinear feature vector of the image, and normalizing it with the normalization layer;
performing feature fusion on the normalized features using the convolutional layer to obtain a feature map;
upsampling the feature map using the deconvolutional layer, so that each pixel in the upsampled feature map corresponds to a pixel in the input image; classifying each pixel according to its feature information in the upsampled feature map to determine the object identity corresponding to each pixel; and identifying the pixel region corresponding to a same object identity, and determining the position of the object in the input image according to the position of that region in the feature map.
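The upsample-classify-localize tail of claim 10 can be sketched as follows. Nearest-neighbour repetition stands in for the deconvolutional layer, and the score map is a synthetic stand-in for the fused feature map; the sizes and class count are illustrative only.

```python
import numpy as np

# Illustrative stand-in for the fused feature map: 8x8 spatial grid, 3 classes.
scores = np.zeros((8, 8, 3))
scores[:, :, 0] = 1.0            # background class wins by default
scores[2:5, 3:6, 1] = 5.0        # one object occupies a 3x3 region

# "Deconvolution" stand-in: 4x nearest-neighbour upsampling, so each pixel of
# the upsampled map corresponds to a pixel of the 32x32 "input image".
up = scores.repeat(4, axis=0).repeat(4, axis=1)

# Classify every pixel of the upsampled map: one object identity per pixel.
labels = up.argmax(axis=2)

# The region of pixels sharing one identity gives the object position.
ys, xs = np.nonzero(labels == 1)
box = tuple(int(v) for v in (ys.min(), xs.min(), ys.max(), xs.max()))
print(box)  # (8, 12, 19, 23): the 3x3 region scaled by the 4x upsampling
```

A learned deconvolution would replace the plain repetition, but the per-pixel argmax and the region-to-position step are exactly as the claim recites.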
11. The image recognition method according to claim 10, characterized in that, after upsampling the feature map using the deconvolutional layer, the method further comprises:
continuing to upsample the upsampled feature map according to the features extracted by the first convolutional neural network and/or the features extracted by the second convolutional neural network.
12. An image recognition apparatus, characterized by comprising:
an obtaining module, configured to obtain an image to be recognized;
a recognition module, configured to perform image recognition on the image to be recognized using a trained first convolutional neural network model, to determine the object shown in the image to be recognized; wherein the first convolutional neural network model comprises a first convolutional neural network for extracting global image features and a second convolutional neural network for extracting local image features.
13. The image recognition apparatus according to claim 12, characterized in that the apparatus further comprises:
a collection module, configured to collect image samples, the image samples being obtained from a pre-established image library;
a training module, configured to obtain training samples from the image samples, input the training samples into the first convolutional neural network model, and perform image recognition using the first convolutional neural network model to determine the objects shown in the training samples, wherein the first convolutional neural network model comprises the first convolutional neural network for extracting global image features and the second convolutional neural network for extracting local image features; to determine the value of a loss function according to the object shown in each training sample and the annotation of that training sample; and to adjust the parameters of the first convolutional neural network model according to the value of the loss function, regenerate the object recognition results using the parameter-adjusted first convolutional neural network model, and re-determine the value of the loss function, until the value of the loss function is below a threshold, at which point training of the first convolutional neural network model is determined to be complete.
14. The image recognition apparatus according to claim 12, characterized in that the apparatus further comprises:
a dehazing module, configured to perform dehazing processing on the image samples as images to be dehazed after the image samples are collected, and to perform dehazing processing on the image to be recognized as an image to be dehazed after the image to be recognized is obtained.
15. The image recognition apparatus according to claim 12, characterized in that the recognition module comprises:
a first extraction unit, configured to perform feature extraction on the image to be recognized using the first convolutional neural network and the second convolutional neural network, respectively;
a first calculation unit, configured to input the features extracted by the first convolutional neural network and the second convolutional neural network into an outer-product layer, so that the outer-product layer performs an outer-product calculation on the local image features and the global image features to obtain a feature for each pixel; and to sum the features of the pixels using a pooling layer to obtain a bilinear feature vector of the image, and normalize it with a normalization layer;
a first determination unit, configured to perform feature fusion on the normalized features using a fully connected layer and input the result into a classifier to identify the target object.
16. The image recognition apparatus according to claim 12, characterized in that the recognition module comprises:
a second extraction unit, configured to perform feature extraction on the input image using the first convolutional neural network and the second convolutional neural network;
a second calculation unit, configured to input the features extracted by the first convolutional neural network and the second convolutional neural network into an outer-product layer, so that the outer-product layer performs an outer-product calculation on the local image features and the global image features to obtain a feature for each pixel; and to sum the features of the pixels using a pooling layer to obtain a bilinear feature vector of the image, and normalize it with a normalization layer;
a fusion unit, configured to perform feature fusion on the normalized features using a convolutional layer to obtain a feature map;
an upsampling unit, configured to upsample the feature map using a deconvolutional layer, so that each pixel in the upsampled feature map corresponds to a pixel in the input image, and to classify each pixel according to its feature information in the upsampled feature map to determine the object identity corresponding to each pixel;
a second determination unit, configured to identify the pixel region corresponding to a same object identity and determine the position of the object in the input image according to the position of that region in the feature map.
17. A computer device, characterized by comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image recognition method according to any one of claims 1 to 11.
18. The computer device according to claim 17, characterized in that the computer device comprises a mobile terminal.
19. A computer-readable storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the image recognition method according to any one of claims 1 to 11.
CN201711479546.XA 2017-12-29 2017-12-29 Image-recognizing method, device, computer equipment and storage medium Pending CN108229379A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711479546.XA CN108229379A (en) 2017-12-29 2017-12-29 Image-recognizing method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN108229379A true CN108229379A (en) 2018-06-29

Family

ID=62646049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711479546.XA Pending CN108229379A (en) 2017-12-29 2017-12-29 Image-recognizing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN108229379A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446782A * 2016-08-29 2017-02-22 北京小米移动软件有限公司 Image recognition method and device
CN106548145A * 2016-10-31 2017-03-29 北京小米移动软件有限公司 Image recognition method and device
CN106982359A * 2017-04-26 2017-07-25 深圳先进技术研究院 Binocular video monitoring method and system, and computer-readable storage medium
CN107516102A * 2016-06-16 2017-12-26 北京市商汤科技开发有限公司 Method, apparatus, and system for classifying image data and establishing a classification model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIAO WEIYANG: "Aeroengine Aeroacoustics", 30 June 2010 *
LUO JIANHAO et al.: "A Survey of Fine-Grained Image Classification Based on Deep Convolutional Features", Acta Automatica Sinica *

Cited By (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063742B (en) * 2018-07-06 2023-04-18 平安科技(深圳)有限公司 Butterfly identification network construction method and device, computer equipment and storage medium
CN109063742A (en) * 2018-07-06 2018-12-21 平安科技(深圳)有限公司 Butterfly identifies network establishing method, device, computer equipment and storage medium
CN110751163A (en) * 2018-07-24 2020-02-04 杭州海康威视数字技术股份有限公司 Target positioning method and device, computer readable storage medium and electronic equipment
CN109166141A (en) * 2018-08-10 2019-01-08 Oppo广东移动通信有限公司 Dangerous based reminding method, device, storage medium and mobile terminal
CN110874099A (en) * 2018-08-13 2020-03-10 格力电器(武汉)有限公司 Target image identification method and device and movable air conditioner
WO2020041962A1 (en) * 2018-08-28 2020-03-05 深圳鲲云信息科技有限公司 Parallel deconvolutional calculation method, single-engine calculation method and related product
US11961279B2 (en) 2018-09-21 2024-04-16 Position Imaging, Inc. Machine-learning-assisted self-improving object-identification system and method
CN113424197A (en) * 2018-09-21 2021-09-21 定位成像有限公司 Machine learning assisted self-improving object recognition system and method
CN110956190A (en) * 2018-09-27 2020-04-03 深圳云天励飞技术有限公司 Image recognition method and device, computer device and computer readable storage medium
CN110969047A (en) * 2018-09-28 2020-04-07 珠海格力电器股份有限公司 Method and device for identifying food materials and refrigerator
CN109522939A (en) * 2018-10-26 2019-03-26 平安科技(深圳)有限公司 Image classification method, terminal device and computer readable storage medium
CN109598267A (en) * 2018-11-15 2019-04-09 北京天融信网络安全技术有限公司 Image data leakage prevention method, device and equipment
CN109635805A (en) * 2018-12-11 2019-04-16 上海智臻智能网络科技股份有限公司 Image text location method and device, image text recognition methods and device
CN111435432B (en) * 2019-01-15 2023-05-26 北京市商汤科技开发有限公司 Network optimization method and device, image processing method and device and storage medium
CN111435432A (en) * 2019-01-15 2020-07-21 北京市商汤科技开发有限公司 Network optimization method and device, image processing method and device, and storage medium
CN109977826A (en) * 2019-03-15 2019-07-05 百度在线网络技术(北京)有限公司 The classification recognition methods of object and device
CN110163260A (en) * 2019-04-26 2019-08-23 平安科技(深圳)有限公司 Image-recognizing method, device, equipment and storage medium based on residual error network
CN110147252A (en) * 2019-04-28 2019-08-20 深兰科技(上海)有限公司 A kind of parallel calculating method and device of convolutional neural networks
CN111860533B (en) * 2019-04-30 2023-12-12 深圳数字生命研究院 Image recognition method and device, storage medium and electronic device
CN111860533A (en) * 2019-04-30 2020-10-30 深圳数字生命研究院 Image recognition method and device, storage medium and electronic device
CN110197143A (en) * 2019-05-17 2019-09-03 深兰科技(上海)有限公司 A kind of checkout station item identification method, device and electronic equipment
CN110197143B (en) * 2019-05-17 2021-09-24 深兰科技(上海)有限公司 Settlement station article identification method and device and electronic equipment
CN110276345A (en) * 2019-06-05 2019-09-24 北京字节跳动网络技术有限公司 Convolutional neural networks model training method, device and computer readable storage medium
CN110276345B (en) * 2019-06-05 2021-09-17 北京字节跳动网络技术有限公司 Convolutional neural network model training method and device and computer readable storage medium
CN110245611A (en) * 2019-06-14 2019-09-17 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
CN110245611B (en) * 2019-06-14 2021-06-15 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN110390350A (en) * 2019-06-24 2019-10-29 西北大学 A kind of hierarchical classification method based on Bilinear Structure
CN110390350B (en) * 2019-06-24 2021-06-15 西北大学 Hierarchical classification method based on bilinear structure
CN110378254A (en) * 2019-07-03 2019-10-25 中科软科技股份有限公司 Recognition methods, system, electronic equipment and the storage medium of vehicle damage amending image trace
CN110378254B (en) * 2019-07-03 2022-04-19 中科软科技股份有限公司 Method and system for identifying vehicle damage image modification trace, electronic device and storage medium
WO2021012508A1 (en) * 2019-07-19 2021-01-28 平安科技(深圳)有限公司 Ai image recognition method, apparatus and device, and storage medium
CN110458079A (en) * 2019-08-05 2019-11-15 黑龙江电力调度实业有限公司 A kind of Image Acquisition and target identification method based on FPGA and convolutional neural networks
CN110427898A (en) * 2019-08-07 2019-11-08 广东工业大学 Wrap up safety check recognition methods, system, device and computer readable storage medium
CN110427898B (en) * 2019-08-07 2022-07-29 广东工业大学 Package security check identification method, system, device and computer readable storage medium
CN110555855A (en) * 2019-09-06 2019-12-10 聚好看科技股份有限公司 GrabCont algorithm-based image segmentation method and display device
CN110705564B (en) * 2019-09-09 2023-04-18 华为技术有限公司 Image recognition method and device
CN110705564A (en) * 2019-09-09 2020-01-17 华为技术有限公司 Image recognition method and device
CN110889428A (en) * 2019-10-21 2020-03-17 浙江大搜车软件技术有限公司 Image recognition method and device, computer equipment and storage medium
CN110992725A (en) * 2019-10-24 2020-04-10 合肥讯图信息科技有限公司 Method, system and storage medium for detecting traffic signal lamp fault
CN111079575A (en) * 2019-11-29 2020-04-28 拉货宝网络科技有限责任公司 Material identification method and system based on packaging image characteristics
CN111126388A (en) * 2019-12-20 2020-05-08 维沃移动通信有限公司 Image recognition method and electronic equipment
CN111126388B (en) * 2019-12-20 2024-03-29 维沃移动通信有限公司 Image recognition method and electronic equipment
CN111144408A (en) * 2019-12-24 2020-05-12 Oppo广东移动通信有限公司 Image recognition method, image recognition device, electronic equipment and storage medium
CN111291694A (en) * 2020-02-18 2020-06-16 苏州大学 Dish image identification method and device
CN111291694B (en) * 2020-02-18 2023-12-01 苏州大学 Dish image recognition method and device
CN111401521B (en) * 2020-03-11 2023-10-31 北京迈格威科技有限公司 Neural network model training method and device, and image recognition method and device
CN111553420B (en) * 2020-04-28 2023-08-15 北京邮电大学 X-ray image identification method and device based on neural network
CN111553420A (en) * 2020-04-28 2020-08-18 北京邮电大学 X-ray image identification method and device based on neural network
CN111652678B (en) * 2020-05-27 2023-11-14 腾讯科技(深圳)有限公司 Method, device, terminal, server and readable storage medium for displaying article information
CN111652678A (en) * 2020-05-27 2020-09-11 腾讯科技(深圳)有限公司 Article information display method, device, terminal, server and readable storage medium
CN113515983A (en) * 2020-06-19 2021-10-19 阿里巴巴集团控股有限公司 Model training method, mobile object identification method, device and equipment
CN111967515A (en) * 2020-08-14 2020-11-20 Oppo广东移动通信有限公司 Image information extraction method, training method and device, medium and electronic equipment
CN112396123A (en) * 2020-11-30 2021-02-23 上海交通大学 Image recognition method, system, terminal and medium based on convolutional neural network
CN113076889B (en) * 2021-04-09 2023-06-30 上海西井信息科技有限公司 Container lead seal identification method, device, electronic equipment and storage medium
CN113076889A (en) * 2021-04-09 2021-07-06 上海西井信息科技有限公司 Container lead seal identification method and device, electronic equipment and storage medium
CN113205054A (en) * 2021-05-10 2021-08-03 江苏硕世生物科技股份有限公司 Hypha microscopic image identification method and system, equipment and readable medium
CN113269276A (en) * 2021-06-28 2021-08-17 深圳市英威诺科技有限公司 Image recognition method, device, equipment and storage medium
WO2024077781A1 (en) * 2022-10-13 2024-04-18 深圳云天励飞技术股份有限公司 Convolutional neural network model-based image recognition method and apparatus, and terminal device
CN115909329A (en) * 2023-01-10 2023-04-04 深圳前海量子云码科技有限公司 Microscopic target identification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108229379A (en) Image-recognizing method, device, computer equipment and storage medium
CN109255322A (en) Face liveness detection method and device
CN108427920A (en) Deep-learning-based object detection method for land and sea border defense
CN109325954A (en) Image segmentation method and device, and electronic device
CN109685100A (en) Character recognition method, server and computer-readable storage medium
CN107742107A (en) Face image classification method, device and server
CN104866868B (en) Metal coin recognition method and device based on deep neural network
CN107437092A (en) Classification algorithm for retinal OCT images based on three-dimensional convolutional neural network
CN108229326A (en) Face anti-spoofing detection method and system, electronic device, program and medium
CN108256544A (en) Picture classification method and device, robot
CN107358242A (en) Target area color recognition method and device, and monitoring terminal
CN109978918A (en) Trajectory tracking method, apparatus and storage medium
CN108229341A (en) Classification method and device, electronic device, computer storage medium and program
CN106951869B (en) Liveness verification method and device
CN105139004A (en) Facial expression recognition method based on video sequences
US20150310305A1 (en) Learning painting styles for painterly rendering
CN108717524A (en) Gesture recognition system and method based on a dual-camera mobile phone and an artificial intelligence system
CN105787867B (en) Method and apparatus for processing video images based on a neural network algorithm
CN108596197A (en) Seal matching method and device
CN108108767A (en) Grain recognition method, device and computer storage medium
CN109492627A (en) Scene text removal method using a deep model based on a fully convolutional network
CN108510504A (en) Image segmentation method and device
CN108765465A (en) Unsupervised SAR image change detection method
CN110472611A (en) Character attribute recognition method and apparatus, electronic device and readable storage medium
CN104298974A (en) Human body behavior recognition method based on depth video sequence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180629