CN106339719A - Image identification method and image identification device - Google Patents

Image identification method and image identification device

Info

Publication number
CN106339719A
CN106339719A (application CN201610703925.1A)
Authority
CN
China
Prior art keywords
image
images
sample
neural networks
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610703925.1A
Other languages
Chinese (zh)
Inventor
杜康华
王崇
任文越
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weimeng Chuangke Network Technology China Co Ltd
Original Assignee
Weimeng Chuangke Network Technology China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weimeng Chuangke Network Technology China Co Ltd filed Critical Weimeng Chuangke Network Technology China Co Ltd
Priority to CN201610703925.1A priority Critical patent/CN106339719A/en
Publication of CN106339719A publication Critical patent/CN106339719A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image identification method and an image identification device. First, the image areas of sample images that match a specified color tone are standardized to obtain the images in a first image set; then an image classifier is trained on the first image set to obtain a trained image classifier; finally, a to-be-identified image is input into the trained image classifier to obtain an identification result for the to-be-identified image. Because the images in the first image set are obtained by processing the sample images, the proportion occupied in each image of the first image set by the image area matching a preset image category is relatively increased, and the information loss suffered by those areas when the images are scaled is reduced. The method provided by the application therefore effectively reduces the number of images needed in the first image set and improves the training efficiency of the image classifier while reducing cost.

Description

Image recognition method and device
Technical field
The present application relates to the field of information technology, and in particular to an image recognition method and device.
Background technology
With the development of the information society and the growth of online social activity, people increasingly tend to use images, which are not limited by region or language, instead of text as the main medium for expressing ideas, so that the number of images on the network grows rapidly. How to make good use of this large number of images has become a focus of attention in recent years.
Because images differ from text, their content cannot be directly retrieved or classified by keyword. Therefore, the first problem to solve in making use of images is identifying their content, that is, image recognition.
Conventional image recognition mainly adopts machine learning. Specifically, images must first be classified manually to determine image sets composed of images of different content (e.g., an image set of landscape images, an image set of face images, an image set of pornographic images, and so on). For each such image set, the common features (often feature vectors) shared by its images are extracted, and a feature model of the image set is obtained by training. Finally, according to the feature models corresponding to the various image sets, image recognition is performed on a received to-be-identified image to determine the category to which it belongs.
Compared with manually designing and extracting feature vectors, a feature model obtained by machine learning and training avoids the influence of human subjective factors and can be continuously optimized through training, so that the accuracy of image recognition is higher.
However, for a machine learning method to reach a high recognition accuracy, a large number of images is needed to learn and train the feature model corresponding to each image set. If too few images are used for learning and training, the accuracy of the feature model will decrease and the robustness of image recognition will suffer; if too many images are used, the resources consumed by the machine learning method increase and its efficiency is affected.
Secondly, when training a feature model there is a uniform requirement on the size of the training images (e.g., a unified resolution of 100 × 100), so the training images must be resized (enlarged, reduced, stretched, etc.). As shown in Fig. 1, this causes a loss of features contained in the images and thus affects the accuracy of machine learning (that is, the accuracy of the feature model finally obtained), so that the number of training images must be increased further to guarantee accuracy.
Fig. 1 is a schematic diagram of the feature loss, caused by image scaling, in a high-resolution image.
On the left is the image at its original size; on the right is the image after its size has been reduced. To make the feature loss visible, the reduced image has been enlarged back to the original size. It can be seen that the vein texture has become blurred; if the vein texture is the feature to be extracted, that feature has been lost in the size-reduced image.
Because of the above problems, existing image recognition technology requires a large number of training images, leading to a high cost of image recognition.
Summary of the invention
The embodiments of the present application provide an image recognition method to solve the problem in the prior art that, when image recognition is performed by a machine learning method, a large number of training images is needed and the cost of image recognition increases.
The embodiments of the present application also provide an image recognition device to solve the same problem.
The embodiments of the present application adopt the following technical solutions:
An image recognition method, comprising:
determining a to-be-identified image;
inputting the to-be-identified image into an image classifier that has been trained in advance, and obtaining the recognition result for the to-be-identified image output by the image classifier, wherein the images in the first image set used to train the image classifier are obtained by standardizing the image regions of sample images that match a specified tone;
the specified tone being determined according to the tone of images of a preset image category.
An image recognition device, comprising:
a determining module, configured to determine a to-be-identified image;
an identification module, configured to input the to-be-identified image into an image classification module that has been trained in advance and to obtain the recognition result for the to-be-identified image output by the image classification module, wherein the images in the first image set used to train the image classification module are obtained by standardizing the image regions of sample images that match a specified tone;
the specified tone being determined according to the tone of images of a preset image category.
At least one of the above technical solutions adopted by the embodiments of the present application can achieve the following beneficial effects:
The image regions of the sample images that match the specified tone are first standardized to obtain the images in the first image set; the image classifier is then trained on this first image set to obtain a trained image classifier; and when image recognition is performed on a to-be-identified image, the image is input into the trained image classifier to obtain the recognition result output by the classifier. Because the images in the first image set are obtained by processing the sample images, the proportion occupied in each image by the region matching the specified tone is relatively increased, so that even if the image must be scaled, the feature loss of that region is reduced and the accuracy of the classifier's training result increases. The method provided by the application can therefore effectively reduce the number of images required in the first image set without affecting the training effect, improving the training efficiency of the image classifier while reducing cost.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the present application and constitute a part of it. The schematic embodiments of the application and their description explain the application and do not improperly limit it. In the drawings:
Fig. 1 is a schematic diagram of the feature loss, caused by image scaling, in a high-resolution image;
Fig. 2 is the image recognition process provided by an embodiment of the present application;
Fig. 3 is the process of training the convolutional neural network model provided by an embodiment of the present application;
Fig. 4 is a schematic diagram, provided by an embodiment of the present application, of determining the image region of a sample image that matches the specified tone as an intermediate image;
Fig. 5 is a schematic structural diagram of the to-be-trained convolutional neural network model provided by an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image recognition device provided by an embodiment of the present application.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions of the application are described clearly and completely below with reference to specific embodiments and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the application without creative work fall within the scope of protection of the application.
As described above, a machine learning method requires a large number of training images, and when the training images must additionally be unified in size, the required number increases further, so that the number of training images required in the prior art is greatly increased.
Furthermore, because a machine learning method actually trains a separate feature model for each class of images, the training images must be classified in advance so that the feature model can be adjusted according to the pre-classified images until a feature model meeting the recognition accuracy requirement is obtained (that is, the process of training the feature model). Classifying the training images in advance usually depends on manual work; that is, each image in the training image set must be classified by a person according to its content.
However, because the prior art needs many training images, a large amount of manual classification work is required, which increases the operating cost.
Moreover, manual classification of images mainly relies on people's subjective perception. For an image that could be assigned to more than one category, different people may give inconsistent classification results; if such an image is used to train the feature model, the training effect may be negatively affected. Because the prior art needs many training images, one must either add a manual review step to examine such ambiguous images and exclude them from the training image set, or further increase the number of training images.
Based on the above, the embodiments of the present application provide a technical solution for image recognition that can reduce the number of training images without affecting the training effect of the feature model. The technical solutions provided by the embodiments of the application are described in detail below with reference to the drawings.
Fig. 2 shows the image recognition process provided by an embodiment of the present application, which specifically includes the following steps:
S101: determining a to-be-identified image.
Conventionally, because training a feature model by machine learning consumes considerable resources, the feature model is generally trained by a server, and image recognition with the trained feature model can be performed by a terminal or by the server itself. In this application, the description takes the server performing the image recognition process as an example.
In an embodiment of the present application, the server can determine a to-be-identified image for subsequent recognition. The to-be-identified image may be determined from locally stored images, or may be an image received by the server. For example, when a user publishes an image through a terminal, the terminal transmits the image to the server and the server puts it online; the server may then, upon receiving the image, determine it as the to-be-identified image and carry out the subsequent image recognition process.
Furthermore, because today's level of informatization is very high and a great many images are produced at every moment, the number of images the server receives is also very large. Therefore, in this application the server may also randomly sample the received images and determine the sampled images as to-be-identified images. Moreover, because the server stores received images locally or in a corresponding database after receiving them, the server may also, when its operating pressure is low, determine stored images that have not yet undergone recognition as to-be-identified images, i.e., determine to-be-identified images from locally stored images. In this way, the server can make better use of the various image resources it receives and stores.
Of course, the above ways of determining the to-be-identified image are only embodiments provided by the application. In actual use, various methods identical to the prior art may also be adopted, without being limited to the methods provided in the embodiments; the application places no specific limitation on how the to-be-identified image is determined.
Furthermore, image recognition is performed on the to-be-identified image mainly for two purposes: first, risk control, and second, making use of image resources. Because risk control has a comparatively greater impact on network security and is therefore comparatively more important, the subsequent description takes performing risk control through image recognition as the example; specifically, it takes recognizing whether the image contains pornographic content as the example.
It should be noted that in this application the image recognition process may also be performed by a terminal, which may be a mobile phone, a PC or a tablet computer. When the process is performed by a server, the server may be a single independent device or a network composed of multiple devices, i.e., a distributed server. For convenience of description, the following takes a server performing the image recognition process as the example.
For example, assume server A receives an image sent by a terminal and determines the image to be the to-be-identified image, the size of which is a resolution of 1000 × 1000.
S102: inputting the to-be-identified image into the image classifier that has been trained in advance, and obtaining the recognition result for the to-be-identified image output by the image classifier.
In an embodiment of the present application, once the server has determined the to-be-identified image, it can input the image into the trained image classifier, so that the server can determine the recognition result for the to-be-identified image according to the classifier's output. The images in the first image set used to train the image classifier are obtained by standardizing the image regions of the sample images that match the specified tone.
Specifically, in this application the image classifier may include a convolutional neural network model. When the image classifier is a convolutional neural network model, the server can input the to-be-identified image into the trained convolutional neural network model to obtain the image recognition result for the to-be-identified image.
As described above, the prior art needs many training images and its training cost is high, whereas in the image recognition process provided by this embodiment, the number of images needed to train the image classifier is relatively small without affecting the recognition accuracy of the trained classifier, so the problems in the prior art can be avoided. In the following, the application takes the image classifier being a convolutional neural network model as the example.
Specifically, in this application the convolutional neural network model is trained mainly through the process shown in Fig. 3.
Fig. 3 shows the process, provided by an embodiment of the application, of training the convolutional neural network model, comprising:
S1021: determining a second image set composed of sample images.
In an embodiment of the present application, the server can first determine the initial sample images for training the convolutional neural network model and take the image set composed of those sample images as the second image set. Taking a convolutional neural network model used for recognizing pornographic images as the example, the second image set may be composed of sample images of three kinds of content: sample images of pornographic content, sample images of non-pornographic person content, and sample images of non-person content.
From the viewpoint of whether the content involves pornography, person-content images can be divided into pornographic content images and non-pornographic person-content images. That is, both are images of person content; they differ only in whether the content involves pornography, and it is exactly this difference that the to-be-trained convolutional neural network model is expected to learn when the model is trained, so that the trained convolutional neural network model can identify whether image content involves pornography. Therefore, in this application the sample images in the second image set may include sample images of pornographic content and sample images of non-pornographic person content.
Furthermore, if the to-be-trained convolutional neural network model were trained only with person-content images, the trained model would ultimately have a high recognition accuracy only for to-be-identified images of person content, while its accuracy for to-be-identified images of non-person content would be unpredictable. Therefore, in this application the sample images in the second image set may also include sample images of non-person content.
Furthermore, in this application the number of sample images in the second image set may be limited so that it does not exceed a preset quantity. For example, if the preset quantity is 3000, the server can determine 3000 sample images.
Of course, the specific number of sample images in the second image set can be determined as needed; the embodiment only provides one scheme and does not limit the application. It should also be noted that if the number of samples selected for the second image set is too large (e.g., 10000 or 100000), the method provided by the embodiment can no longer reduce the number of training sample images, and the application does not need a large number of sample images.
Continuing the example, assume that when training the convolutional neural network model for identifying pornographic images, server A first determines 3000 sample images, including sample images of pornographic content, sample images of non-pornographic person content, and sample images of non-person content.
S1022: classifying the second image set.
In an embodiment of the present application, as in the prior art, the server classifies each sample image in the second image set and adds a label to each sample image according to its classification result.
Specifically, because the second image set in this application may contain sample images of three kinds of content, the server can divide the sample images in the second image set into three classes according to their content. For example, if the trained convolutional neural network model is expected to identify pornographic images, the sample images can be divided into three classes: sample images of pornographic content, sample images of non-pornographic person content, and sample images of non-person content. The server can also, according to the classification results, add a different label to each class of sample images, so that during subsequent training the convolutional neural network model can determine the content of an input image from its label and perform the corresponding operations (e.g., computing the error value, computing the accuracy, and adjusting parameters by backpropagation).
Furthermore, in an embodiment of the present application, because the number of training sample images is limited, this application differs from the prior art in that the numbers of sample images of the three kinds of content in the second image set need to be kept consistent. If those numbers are inconsistent and the difference is large, the convolutional neural network model may learn the features of one kind of content insufficiently, reducing its image recognition accuracy. For example, suppose the second image set contains 3000 sample images and the numbers of sample images of the three kinds of content are 1500, 1200 and 300. Taking the 300 sample images as an example, they all belong to the same kind of content; further suppose that this kind of content has four features a, b, c and d in total. Because these 300 sample images are few, the probability that they cover all of features a, b, c and d is small; that is, because the number of sample images is small, features contained in images of this kind of content may be missed, so the learning of the features of this kind of content is likely to be incomplete, which easily reduces the image recognition accuracy of the convolutional neural network model. Therefore, in this application, continuing the example, the numbers of sample images of the three kinds of content in the second image set can each be 1000, so that when the convolutional neural network model is trained, the features of all three kinds of content are learned sufficiently.
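The class-balancing constraint described above (an equal number of sample images drawn from each content class) can be sketched as follows. This is an illustrative sketch, not code from the patent; the helper name `balance_classes` and the dictionary layout are assumptions.

```python
import random

def balance_classes(images_by_class, per_class, seed=0):
    """Draw the same number of sample images from every content class,
    so that no class dominates training of the classifier."""
    rng = random.Random(seed)
    balanced = {}
    for name, images in images_by_class.items():
        if len(images) < per_class:
            raise ValueError(f"class {name!r} has only {len(images)} images")
        # sample without replacement so each image is used at most once
        balanced[name] = rng.sample(images, per_class)
    return balanced

# Hypothetical second image set; 1000 images are kept from each class.
second_image_set = {
    "pornographic": [f"porn_{i}.jpg" for i in range(1500)],
    "person": [f"person_{i}.jpg" for i in range(1200)],
    "non_person": [f"other_{i}.jpg" for i in range(1000)],
}
balanced_set = balance_classes(second_image_set, 1000)
```

With the unbalanced counts 1500/1200/300 from the text, the smallest class would be rejected, reflecting the requirement that the per-class counts stay consistent.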
Furthermore, when adding a label to each sample image, the label can be added to the file name of the image according to a unified rule, e.g., appending a 3-digit label after the file name with the symbol "-" as the separator from the old file name, or prepending a 3-letter label before the file name. The application places no specific limitation on how the label is added.
Continuing the example, assume that in step S1021 the numbers of sample images of pornographic content, of non-pornographic person content and of non-person content in the second image set determined by server A are each 1000. Server A can then classify each sample image in the second image set according to its content and, according to the classification results, add different labels to the three classes of sample images, as shown in Table 1.
Sample image class | Label added to sample image
Sample images of pornographic content | 001
Sample images of non-pornographic person content | 002
Sample images of non-person content | 003
Table 1
The label can be added to the file name of the sample image. For example, if a sample image is named 92e8647ajw1exg20dc07hx6x.jpg and its content is pornographic, after labelling the file name of the sample image becomes 92e8647ajw1exg20dc07hx6x-001.jpg.
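The labelling rule of Table 1 (a 3-digit class label appended to the file stem with "-" as the separator) can be sketched as below. The mapping keys and the helper name are illustrative assumptions, not part of the patent.

```python
import os

# 3-digit labels from Table 1
LABELS = {
    "pornographic": "001",
    "non_pornographic_person": "002",
    "non_person": "003",
}

def label_filename(filename, category):
    """Append the class label after the file stem, separated by '-'."""
    stem, ext = os.path.splitext(filename)
    return f"{stem}-{LABELS[category]}{ext}"
```

With the example from the text, `label_filename("92e8647ajw1exg20dc07hx6x.jpg", "pornographic")` yields "92e8647ajw1exg20dc07hx6x-001.jpg".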
S1023: standardizing the image regions of the sample images that match the specified tone, to obtain the images in the first image set used for training.
The specified tone is determined, by the prior art or by human experience, according to the tone of images of the preset image category.
In an embodiment of the present application, after the server has determined the second image set and classified and labelled each sample image in it, the size of each sample image in the second image set still does not meet the size requirement for training input images and cannot yet be used for training. The server therefore also needs to process each sample image in the second image set to obtain the images in the first image set, which meet the size requirement of the training input images and can be used for training.
Specifically, because the prior art simply stretches and scales each sample image when unifying the image sizes, features contained in the sample images may be lost, as shown in Fig. 1. Therefore, in this application, the server can first determine, for each sample image in the second image set, the hue-saturation-value (HSV) color model of that sample image, i.e., the hue, saturation and value of each of its pixels. Because an image is usually represented by the red-green-blue (RGB) color model, in which the value of each pixel is expressed by the red, green and blue primaries, the server can convert the sample image from the RGB color model to the HSV color model by the following formulas.
v = (r + g + b) / 3
s = 1 − 3 × min(r, g, b) / (r + g + b)
h = arccos{ [(r − g) + (r − b)] / (2 × √((r − g)² + (r − b)(g − b))) }
where min(r, g, b) denotes, for each pixel of the sample image, the minimum of the three values r, g and b of that pixel.
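As an illustration, the conversion formulas above can be sketched per pixel as follows. This is a minimal sketch: the function name and the degree-based hue convention are assumptions for illustration, not part of the application, and the b > g branch supplies the hue range beyond 180° that the arccos form alone cannot distinguish.

```python
import math

def rgb_to_hsv_pixel(r, g, b):
    """Convert one RGB pixel to (h, s, v) using the geometric formulas
    quoted above; h is returned in degrees."""
    total = r + g + b
    if total == 0:                    # pure black: hue/saturation undefined
        return 0.0, 0.0, 0.0
    v = total / 3.0
    s = 1.0 - 3.0 * min(r, g, b) / total
    denom = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if denom == 0:                    # grey pixel: hue undefined, use 0
        h = 0.0
    else:
        theta = math.degrees(math.acos(((r - g) + (r - b)) / (2.0 * denom)))
        h = theta if b <= g else 360.0 - theta
    return h, s, v
```

For example, a pure-red pixel (255, 0, 0) yields h = 0, a pure-green one h = 120 and a pure-blue one h = 240, matching the usual hue wheel.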
Next, according to the HSV color model of the sample image, the image region whose tone matches the pre-set image category is determined and taken as the intermediate image. When the pre-set image category is the pornographic-content image category, the tone corresponding to that category may be set, according to artificial experience, to h ∈ [0, 116]; determining the image region matching the tone of the pornographic-content image category then simply means determining the image region formed by the pixels of the sample image whose tone value lies in the range 0–116, and taking that image region as the intermediate image. Moreover, when determining this image region of the sample image, the coordinates of each pixel whose tone value lies in the range 0–116 may first be determined, and the intermediate image corresponding to the sample image is then determined from the maximum and minimum x-axis coordinate values and the maximum and minimum y-axis coordinate values of those pixels, as shown in Fig. 4.
Fig. 4 is a schematic diagram, provided by an embodiment of the present application, of determining the image region of a sample image that matches the specified tone as the intermediate image.
As can be seen in Fig. 4, the largest rectangular frame is the image boundary of the sample image, the grey area is the pixels whose tone value lies in the range 0–116, and the smallest dotted rectangular frame is the determined intermediate image, whose border is determined by the maximum and minimum x-axis coordinate values and the maximum and minimum y-axis coordinate values of the pixels whose tone value lies in the range 0–116.
It should be noted that determining the intermediate image from the sample image may be regarded as the server performing a cropping operation on the sample image according to the image region of the sample image that matches the specified tone, thereby obtaining the intermediate image; that is, as shown in Fig. 4, the server has cut out the region of the smallest dotted rectangular frame of the sample image as the intermediate image.
Finally, after the intermediate image corresponding to the sample image has been determined, the image size of the intermediate image may still fail to meet the size requirement of the input images used for training, so the server also performs standardization processing on each intermediate image and takes the intermediate images after standardization processing as the images in the first image set; that is, the images in the first image set used for training are obtained by performing standardization processing on the intermediate images. Here, the standardization processing comprises: according to a preset image size, scaling and stretching the intermediate image by the same method as in the prior art, so that the image size of the intermediate image meets the preset image size. For example, assuming the preset image size is a resolution of 256 × 256 and the image size of an intermediate image is a resolution of 300 × 400, the server may scale and stretch the intermediate image so that its image size is standardized to a resolution of 256 × 256.
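The bounding-box cropping of step S1023 and the subsequent standardization can be sketched as follows, assuming the hue channel is held in a NumPy array. The function names, the nearest-neighbour rescaling, and the fallback for images with no matching pixel are illustrative assumptions, not part of the application.

```python
import numpy as np

def tone_bounding_crop(hue, lo=0.0, hi=116.0):
    """Bounding box of all pixels whose hue lies in [lo, hi]; returns the
    cropped sub-array (the 'intermediate image' of step S1023)."""
    ys, xs = np.nonzero((hue >= lo) & (hue <= hi))
    if ys.size == 0:
        return hue  # no matching pixel: fall back to the whole image
    return hue[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def standardize(img, size=256):
    """Nearest-neighbour rescale to size x size, a stand-in for the
    scale-and-stretch normalisation described above."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]
```

A 300 × 400 intermediate image passed through `standardize` comes out as 256 × 256, matching the example above.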
It should be noted that in the present application the pornographic-content image category is the image category corresponding to the sample images whose content is pornographic, that is, the sample images to which the mark 001 has been added.
Since each image in the first image set is obtained from a sample image in the second image set, the images in the first image set correspond one-to-one with the sample images in the second image set. It can thus be seen that through step S1023 the server first cuts out of each sample image the image region whose tone matches the pre-set image category (e.g., the pornographic-content image category), so that in each image of the first image set the proportion occupied by the image region matching the specified tone is enlarged compared with the proportion it occupies in the corresponding sample image. As shown in Fig. 4, relative to the largest rectangular frame, the proportion occupied by the grey area within the smallest dotted rectangular frame is relatively large, so even when the server subsequently stretches and scales the intermediate image in the standardization processing, the loss of the features contained in each image used for training is reduced. This avoids the drawback of the prior art, in which the standardization processing causes loss of image features and the number of training images has to be increased, so that the server can achieve a good training effect even with a small number of training images.
Further, since the server has extracted the intermediate image from the sample image, the server may be regarded as having excluded a large amount of interfering image region. As shown in Fig. 4, during training the input image is the smallest dotted rectangular frame; relative to the largest rectangular frame, this smallest dotted rectangular frame has excluded a large amount of useless background (that is, interfering image region), namely the part by which the largest rectangular frame exceeds the smallest dotted rectangular frame, so that during training the influence of image regions that do not match the specified tone on the training effect can be reduced.
For example, suppose the pornographic-content images used for training are all images with a white background. Without the operation of determining the intermediate image as described in the present application, the training would be likely to associate a white background strongly with pornographic-content images; yet, as is known, the background color of an image has no direct correlation with whether the image is pornographic, so the image recognition accuracy of the trained convolutional neural network model would be reduced. When, however, the server determines each intermediate image according to the image region matching the specified tone, the white-background regions in the training images are greatly reduced, so that a white background is no longer a dominant feature and no longer affects the training of the convolutional neural network model, and the image recognition accuracy of the trained convolutional neural network model is higher.
S1024: according to the first image set, the convolutional neural network model to be trained is trained, to obtain the trained convolutional neural network model.
In the embodiment of the present application, after the first image set is determined, the server can train the convolutional neural network model to be trained according to the first image set, to obtain the trained convolutional neural network model.
Specifically, the convolutional neural network model is trained using the following method:
First, the server determines the initialization parameters corresponding to each layer of the convolutional neural network model to be trained, as the initialization model of the convolutional neural network model. Generally, the initialization parameters are determined randomly; of course, they may also be determined artificially from experience, and the present application places no limitation on this.
Next, the server executes the following step in a loop, until the error of the output of the convolutional neural network model to be trained reaches a first threshold and the image recognition accuracy reaches a second threshold:
each image in the first image set is input in turn into the convolutional neural network model to be trained, so that the features of the input training images are propagated forward to the output layer by the convolutional neural network model to be trained, the output error value and image recognition accuracy are calculated, and the parameters corresponding to each layer of the convolutional neural network model to be trained are adjusted backward from the output layer according to the error value.
Then, when the server determines that the calculated output error value reaches the first threshold and the image recognition accuracy reaches the second threshold, it determines that training of the convolutional neural network model to be trained has finished, and obtains the trained convolutional neural network model.
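The loop-until-both-thresholds procedure above can be illustrated with a toy stand-in model: a single logistic unit trained by gradient descent on synthetic data, not the application's convolutional neural network model. The data, thresholds, learning rate and iteration cap are arbitrary assumptions chosen only to show the forward propagation, error and accuracy calculation, the double-threshold test, and the backward parameter adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in data: 2-D points, label 1 iff x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = np.zeros(2), 0.0                  # initialization parameters
err_thresh, acc_thresh = 0.35, 0.95      # "first" and "second" thresholds
for it in range(10_000):                 # iteration cap for safety
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # forward propagation
    err = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    acc = float(np.mean((p > 0.5) == (y > 0.5)))  # recognition accuracy
    if err <= err_thresh and acc >= acc_thresh:   # both criteria reached
        break
    grad = p - y                                  # backward adjustment
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * float(grad.mean())
```

The loop structure — evaluate, test both thresholds, otherwise adjust parameters backward and repeat — is the point; the model itself is deliberately trivial.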
With the image recognition method shown in Fig. 2, the images in the first image set used by the server to train the image classifier are obtained by performing standardization processing on the image regions of the sample images that match the specified tone. Therefore, although the images used for training in the first image set have undergone standardization processing (that is, scaling and stretching processing), the loss of the features contained in each image in the first image set is greatly reduced, so that an image classifier with higher image recognition accuracy can be trained even with a reduced number of images in the first image set; moreover, the reduction in the number of images used for training lowers the cost of image recognition.
In addition, in step S101 the present application places no specific limitation on the size of the image to be recognized. However, for an image to be recognized with a small image size (for example, a resolution of 5 × 5), the information contained in the image is too little to be useful, so in the present application the server may also determine, according to image size, only images whose size exceeds a threshold as images to be recognized.
Further, in step S102, and specifically in step S1024, the structure of the convolutional neural network model may be as shown in Fig. 5.
Fig. 5 is a structural schematic diagram of the convolutional neural network model to be trained, provided by an embodiment of the present application.
It should be noted that only one activation layer is shown in Fig. 5, but in practical application the data output by each convolutional layer also needs to pass through an activation layer, be activated, and be output by that activation layer before entering the next layer. As shown by the first convolutional layer, activation layer and first pooling layer in Fig. 5, the data input to the first convolutional layer (that is, the images in the first image set) is, after being output by the first convolutional layer, input to the activation layer for activation and output by the activation layer before being input to the first pooling layer. Likewise, the data output by the first to fourth convolution dimension-reduction layers and the first to sixth convolution feature-extraction layers is first input to a corresponding activation layer and output by it before being input to the subsequent layers; in Fig. 5, to simplify the structure of the convolutional neural network model to be trained, the activation layers are not all shown.
In addition, as can be seen in Fig. 5, the input layer serves to input each image in the first image set in turn into the layers of the convolutional neural network model to be trained, and each convolution dimension-reduction layer serves to reduce the parameter dimensionality of the data input to the convolutional layer and output it to the next layer. For example, suppose the input data is 128 feature images with a resolution of 32 × 32. If this input is convolved directly to extract features, say with 32 convolution kernels of 3 × 3, the parameters to be configured number 128 × 32 × 3 × 3. If, instead, a convolution dimension-reduction layer first reduces the parameter dimensionality with 32 convolution kernels of 1 × 1, the convolution dimension-reduction layer needs 128 × 32 × 1 × 1 parameters and outputs 32 feature maps; when features are then extracted with 32 convolution kernels of 3 × 3, the parameters to be configured are reduced to 32 × 32 × 3 × 3. The comparison is shown in Table 2.
Convolutional layer structure                    Parameters to be configured
Direct feature extraction                        128 × 32 × 3 × 3
Dimension reduction, then feature extraction     128 × 32 × 1 × 1 + 32 × 32 × 3 × 3

Table 2
It can thus be seen from the structural schematic diagram of the convolutional neural network model to be trained shown in Fig. 5 that a convolutional layer that reduces the parameter dimensionality of its input data is adjacent to a convolutional layer that performs feature extraction on that data, so that the total number of parameters required by the convolutional neural network model to be trained is reduced and training efficiency can be improved.
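The parameter counts in Table 2 can be checked with a few lines of arithmetic; the helper name is an assumption, and biases are ignored, as in the application's counts.

```python
def conv_params(in_ch, out_ch, k):
    """Weight count of a convolutional layer with k x k kernels (no biases)."""
    return in_ch * out_ch * k * k

direct = conv_params(128, 32, 3)                            # 3x3 straight away
reduced = conv_params(128, 32, 1) + conv_params(32, 32, 3)  # 1x1 then 3x3
# direct = 36864, reduced = 13312: the 1x1 reduction saves roughly 64%
# of the weights for this layer pair.
```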
In addition, the first to fourth pooling layers visible in Fig. 5 serve to reduce the information redundancy produced after the input data (e.g., feature maps) undergoes the convolution operations, so as to improve the operational efficiency and the robustness of the algorithm of the convolutional neural network model to be trained.
Further, the loss layer serves to calculate, after the first image set has been fully input into the convolutional neural network model to be trained, the error value of the image recognition results output by the model, and the accuracy layer serves to calculate, likewise after the first image set has been fully input, the accuracy of the image recognition results output by the model. Both the loss layer and the accuracy layer need the marks added to the input images in order to calculate the error value and the accuracy. For example, when calculating the accuracy, the image recognition result of the convolutional neural network model to be trained for each input image is compared with the mark of that input image: if they agree, the result is correct; if they disagree, it is wrong; and finally the proportion of correctly recognized images among all input images is determined as the image recognition accuracy of the convolutional neural network model to be trained.
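The accuracy calculation just described — compare each recognition result against the mark of the input image and take the proportion of agreements — can be sketched as follows; the function name and the string marks are illustrative assumptions.

```python
def recognition_accuracy(predicted, labels):
    """Fraction of inputs whose predicted category equals the label mark."""
    correct = sum(1 for p, t in zip(predicted, labels) if p == t)
    return correct / len(labels)

# Example with the "001" pornographic-content mark used earlier:
# two of three predictions agree with the marks, so accuracy is 2/3.
acc = recognition_accuracy(["001", "000", "001"], ["001", "001", "001"])
```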
Further, the loss layer uses the gradient descent method, with a learning rate (that is, step size) consistent with the prior art, to adjust backward, according to the error value, the parameters of each layer of the convolutional neural network model to be trained.
Further, the convolutional neural network model to be trained may judge whether the error value calculated by the loss layer reaches the first threshold and whether the image recognition accuracy calculated by the accuracy layer reaches the second threshold. If both do, it is determined that training of the convolutional neural network model to be trained is complete, and the trained convolutional neural network model is obtained. If at least one does not, the loss layer adjusts the parameters of each layer backward, and each image in the first image set is input into the convolutional neural network model to be trained again, until the error value calculated by the loss layer reaches the first threshold and the image recognition accuracy calculated by the accuracy layer reaches the second threshold. Both the first threshold and the second threshold may be set as required, and the present application places no specific limitation on them.
Further, as shown in Fig. 5, the convolutional neural network model to be trained adjusts the learning rate by the following formula:
lr = base_lr × γ^floor(iter / stepsize), where lr is the learning rate (that is, step size) during each back propagation, base_lr is the initialized learning rate parameter, stepsize and γ are constants, and iter is the number of iterations.
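This is the familiar "step" learning-rate policy (as used, for example, in Caffe). A sketch follows, with the default values of base_lr, γ and stepsize chosen purely for illustration:

```python
def step_lr(it, base_lr=0.01, gamma=0.1, stepsize=1000):
    """Step policy: lr = base_lr * gamma ** floor(it / stepsize)."""
    return base_lr * gamma ** (it // stepsize)

# The learning rate is multiplied by gamma every stepsize iterations,
# so with these defaults it drops by a factor of 10 every 1000 steps.
```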
In addition, in the present application, after the server obtains the image recognition result of the image to be recognized through step S102, in order to increase the number of images of the pre-set image category in the first image set, the server may also, when the image recognition result determines that the image to be recognized belongs to the pre-set image category, determine the intermediate image corresponding to the image to be recognized according to the image region of the image to be recognized that matches the specified tone, perform standardization processing on that intermediate image, and add the intermediate image after standardization processing to the first image set.
Of course, the present application places no specific limitation on how the image recognition result is used after it is determined; the above is only one embodiment and does not constitute a limitation on the present application.
Further, the present application does not limit the parameter structure of each layer in the convolutional neural network model, such as the convolution kernel size in a convolutional layer, the number of channels of a convolutional layer, the number of channels of a pooling layer or the pooling step size, nor does it limit the specific pooling method; that is, the parameter structure of each layer in the convolutional neural network model may be configured as needed, and the present application places no limitation on this.
It should be noted that the steps of the method provided by the embodiment of the present application as shown in Fig. 1 may all be executed by the same device, or the method may have different devices as execution subjects. For example, the execution subject of steps S1021 and S1022 may be device 1 and the execution subject of step S1023 may be device 2; or the execution subject of step S1021 may be device 1 and the execution subject of steps S1022 and S1023 may be device 2; and so on.
Based on the image recognition process shown in Fig. 2, the embodiment of the present application correspondingly also provides an image recognition device, as shown in Fig. 6.
Fig. 6 is a structural schematic diagram of an image recognition device provided by an embodiment of the present application, comprising:
a determining module 201, which determines an image to be recognized;
an identification module 202, which inputs the image to be recognized into a pre-trained image classifier and obtains the recognition result for the image to be recognized output by the image classifier, wherein the images in the first image set used for training the image classifier are obtained by performing standardization processing on the image regions of sample images that match a specified tone;
the specified tone being determined according to the tone of images of a pre-set image category.
The device further comprises:
an image set determining module 203, which determines a second image set consisting of sample images; for each sample image, according to the HSV color model of the sample image, determines the image region of the sample image that matches the specified tone as an intermediate image; performs standardization processing on all the intermediate images; and takes the set of all intermediate images after standardization processing as the first image set used for training the image classifier.
The image classifier comprises a convolutional neural network model.
The device further comprises a training module 204, which trains the convolutional neural network model by: determining the initialization parameters corresponding to each layer of the convolutional neural network to be trained, and executing the following step in a loop until the error value output by the convolutional neural network model to be trained reaches a first threshold and the image recognition accuracy reaches a second threshold, whereupon training of the convolutional neural network model is complete: inputting each image in the first image set in turn into the convolutional neural network model to be trained, so that the features of the input training images are propagated forward to the output layer by the convolutional neural network model to be trained, calculating the output error value and image recognition accuracy, and adjusting backward from the output layer, according to the error value, the parameters corresponding to each layer of the convolutional neural network model to be trained.
The convolutional neural network model comprises at least one convolutional layer for reducing the parameter dimensionality of the data input to that convolutional layer; for each such convolutional layer, the convolutional layer adjacent to the current convolutional layer is used to perform feature extraction on the data input to it.
The image set determining module 203, when the identification module 202 determines that the image recognition result of the image to be recognized is the pre-set image category, performs standardization processing on the image region of the image to be recognized that matches the specified tone, adds it to the first image set, and retrains the image classifier using the first image set to which the image to be recognized has been added.
The pre-set image category is a pornographic-image category, and the tone value range of the specified tone corresponding to the pornographic-image category is 0 to 116.
Specifically, the image recognition device shown in Fig. 6 may be located in one device, or in a system composed of multiple devices.
Those skilled in the art should understand that embodiments of the invention may be provided as a method, a system or a computer program product. Therefore, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code.
The invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device which realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface and memory.
The memory may include volatile memory in computer-readable media, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, which may realize information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium, which may be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, commodity or device. In the absence of further limitation, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, commodity or device that includes the element.
Those skilled in the art should understand that embodiments of the present application may be provided as a method, a system or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory, etc.) containing computer-usable program code.
The foregoing is only an embodiment of the present application and is not intended to limit the present application. For those skilled in the art, the present application may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included within the scope of the claims of the present application.

Claims (10)

1. An image recognition method, characterized by comprising:
determining an image to be recognized;
inputting the image to be recognized into a pre-trained image classifier, and obtaining a recognition result for the image to be recognized output by the image classifier, wherein the images in a first image set used for training the image classifier are obtained by performing standardization processing on image regions of sample images that match a specified tone;
the specified tone being determined according to the tone of images of a pre-set image category.
2. The method as claimed in claim 1, characterized in that the first image set is obtained using the following method:
determining a second image set consisting of sample images;
for each sample image in the second image set, determining, according to the hue-saturation-value (HSV) color model of the sample image, the image region of the sample image that matches the specified tone; and
determining, according to the image region of the sample image that matches the specified tone, an intermediate image corresponding to the sample image;
performing standardization processing on all the intermediate images;
taking the set of all intermediate images after standardization processing as the first image set.
3. The method as claimed in claim 1, characterized in that the image classifier comprises a convolutional neural network model.
4. The method as claimed in claim 3, characterized in that the convolutional neural network model is trained in the following manner:
determining initialization parameters of each layer of the convolutional neural network model to be trained;
executing the following step in a loop, until the error value output by the convolutional neural network model to be trained reaches a first threshold and the image recognition accuracy reaches a second threshold, whereupon training of the convolutional neural network model is complete:
inputting each image in the first image set in turn into the convolutional neural network model to be trained, so that the features of each input image are propagated forward to the output layer by the convolutional neural network model to be trained, calculating the output error value and image recognition accuracy, and adjusting backward from the output layer, according to the error value and on the basis of the initialization parameters, the parameters corresponding to each layer of the convolutional neural network model to be trained.
5. The method of claim 4, wherein the convolutional neural network model comprises at least one convolution layer for performing parameter dimensionality reduction on the data input to that convolution layer; and
for each such convolution layer, the convolution layer adjacent to the current convolution layer performs feature extraction on the data input to it.
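One common concrete reading of claim 5 (a convolution layer reducing parameter dimensionality, with an adjacent layer extracting features) is a 1x1 convolution followed by a spatial convolution, as in Inception-style blocks; the patent does not specify the layer shapes, so all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=(64, 16, 16))  # input: 64 channels, 16x16 feature map

# 1x1 convolution: reduces the 64 input channels to 8 (parameter dimensionality reduction).
w_reduce = rng.normal(scale=0.1, size=(8, 64))
reduced = np.einsum('oc,chw->ohw', w_reduce, x)

# Adjacent 3x3 convolution (valid padding): feature extraction on the reduced data.
w_feat = rng.normal(scale=0.1, size=(4, 8, 3, 3))
H, W = reduced.shape[1] - 2, reduced.shape[2] - 2
features = np.zeros((4, H, W))
for o in range(4):           # output channel
    for c in range(8):       # reduced input channel
        for i in range(3):   # kernel rows
            for j in range(3):
                features[o] += w_feat[o, c, i, j] * reduced[c, i:i + H, j:j + W]
```

With these illustrative shapes, the reduce-then-extract pair needs 8*64 + 4*8*3*3 = 800 weights, versus 4*64*3*3 = 2304 for a direct 3x3 convolution from 64 to 4 channels, which is the usual motivation for the 1x1 reduction layer.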
6. The method of claim 1, wherein the method further comprises:
when the recognition result for the image to be recognized indicates that the image to be recognized belongs to the preset image category, normalizing the image region in the image to be recognized that matches the specified hue and adding it to the first image set; and
retraining the image classifier with the first image set to which the image to be recognized has been added.
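The feedback loop of claim 6 — fold newly recognized positives back into the training set and retrain — reduces to a few lines of orchestration. The classifier, region extractor, and retraining step below are hypothetical stand-ins, not the patent's components:

```python
def update_and_retrain(image, classifier, first_set, extract_region, retrain,
                       preset_category="pornographic"):
    """If `classifier` assigns `image` to the preset category, fold its
    normalized hue-matched region into `first_set` and retrain; otherwise
    keep the current classifier. All callables are illustrative stand-ins."""
    if classifier(image) == preset_category:
        first_set.append(extract_region(image))
        return retrain(first_set)
    return classifier

# Toy stand-ins: a "classifier" keyed on the image label, an identity
# extractor, and a "retrain" step that closes over the enlarged set.
first_set = ["sample-1", "sample-2"]
toy_classifier = lambda img: "pornographic" if img.startswith("bad") else "benign"
toy_retrain = lambda data: lambda img: f"model over {len(data)} samples"

model = update_and_retrain("bad-42", toy_classifier, first_set,
                           lambda im: im, toy_retrain)
```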
7. The method of claim 1, wherein the preset image category is a pornographic image category; and
the hue value range of the specified hue corresponding to the pornographic image category is 0 to 116.
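Claim 7's per-pixel test — whether a hue falls in the 0-to-116 range associated with the pornographic image category — can be sketched with the standard library's colorsys module. The patent does not state which hue scale the 0-116 range uses, so the 0-360 degree scale assumed here is an editorial guess:

```python
import colorsys

def hue_in_range(r, g, b, lo=0.0, hi=116.0):
    """True if the pixel's hue (degrees, 0-360 scale assumed) lies in [lo, hi]."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return lo <= h * 360.0 <= hi

# Skin-like tones sit at low hues and match; pure blue (hue 240) does not.
print(hue_in_range(224, 172, 138))  # prints True
print(hue_in_range(0, 0, 255))      # prints False
```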
8. An image recognition device, comprising:
a determining module, configured to determine an image to be recognized;
a recognition module, configured to input the image to be recognized into a pre-trained image classifier and obtain the recognition result for the image to be recognized output by the image classifier, wherein the images in the first image set used to train the image classifier are obtained by normalizing the image regions of sample images that match a specified hue; and
the specified hue is determined according to the hue of images of a preset image category.
9. The device of claim 8, wherein the device further comprises:
an image set determining module, configured to: determine a second image set composed of sample images; for each sample image, determine, according to the hue-saturation-value (HSV) color model of the sample image, the image region in the sample image that matches the specified hue; determine, according to that image region, an intermediate image corresponding to the sample image; perform normalization processing on all of the intermediate images; and take the set of all normalized intermediate images as the first image set used to train the image classifier.
10. The device of claim 8, wherein the image classifier comprises a convolutional neural network model.
CN201610703925.1A 2016-08-22 2016-08-22 Image identification method and image identification device Pending CN106339719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610703925.1A CN106339719A (en) 2016-08-22 2016-08-22 Image identification method and image identification device


Publications (1)

Publication Number Publication Date
CN106339719A true CN106339719A (en) 2017-01-18

Family

ID=57824378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610703925.1A Pending CN106339719A (en) 2016-08-22 2016-08-22 Image identification method and image identification device

Country Status (1)

Country Link
CN (1) CN106339719A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120002879A1 (en) * 2010-07-05 2012-01-05 Olympus Corporation Image processing apparatus, method of processing image, and computer-readable recording medium
CN104537393A (en) * 2015-01-04 2015-04-22 大连理工大学 Traffic sign recognizing method based on multi-resolution convolution neural networks
CN104732220A (en) * 2015-04-03 2015-06-24 中国人民解放军国防科学技术大学 Specific color human body detection method oriented to surveillance videos
CN104809426A (en) * 2014-01-27 2015-07-29 日本电气株式会社 Convolutional neural network training method and target identification method and device
CN105787519A (en) * 2016-03-21 2016-07-20 浙江大学 Tree species classification method based on vein detection


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shi Zhongzhi, "Convolutional Neural Networks," in Mind Computation *
Xie Jianbin et al., "CNN Learning," in 20 Lectures on Visual Machine Learning *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016356A (en) * 2017-03-21 2017-08-04 乐蜜科技有限公司 Certain content recognition methods, device and electronic equipment
CN107864333A (en) * 2017-11-08 2018-03-30 广东欧珀移动通信有限公司 Image processing method, device, terminal and storage medium
CN107864333B (en) * 2017-11-08 2020-04-21 Oppo广东移动通信有限公司 Image processing method, device, terminal and storage medium
CN109934077A (en) * 2017-12-19 2019-06-25 杭州海康威视数字技术股份有限公司 A kind of image-recognizing method and electronic equipment
CN108510194A (en) * 2018-03-30 2018-09-07 平安科技(深圳)有限公司 Air control model training method, Risk Identification Method, device, equipment and medium
WO2019184124A1 (en) * 2018-03-30 2019-10-03 平安科技(深圳)有限公司 Risk-control model training method, risk identification method and apparatus, and device and medium
CN109242792B (en) * 2018-08-23 2020-11-17 广东数相智能科技有限公司 White balance correction method based on white object
CN109242792A (en) * 2018-08-23 2019-01-18 广东数相智能科技有限公司 White balance correction method based on a white object
CN109284694A (en) * 2018-08-31 2019-01-29 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN109635844A (en) * 2018-11-14 2019-04-16 网易传媒科技(北京)有限公司 Method and device for training a classifier, and method and device for detecting watermarks
CN109635844B (en) * 2018-11-14 2021-08-27 网易传媒科技(北京)有限公司 Method and device for training classifier and method and device for detecting watermark
CN111611828A (en) * 2019-02-26 2020-09-01 北京嘀嘀无限科技发展有限公司 Abnormal image recognition method and device, electronic equipment and storage medium
CN110225367A (en) * 2019-06-27 2019-09-10 北京奇艺世纪科技有限公司 Method and device for displaying and recognizing object information in a video
CN111027390A (en) * 2019-11-11 2020-04-17 北京三快在线科技有限公司 Object class detection method and device, electronic equipment and storage medium
CN111027390B (en) * 2019-11-11 2023-10-10 北京三快在线科技有限公司 Object class detection method and device, electronic equipment and storage medium
CN112836655A (en) * 2021-02-07 2021-05-25 上海卓繁信息技术股份有限公司 Method and device for identifying identity of illegal actor and electronic equipment
CN112836655B (en) * 2021-02-07 2024-05-28 上海卓繁信息技术股份有限公司 Method and device for identifying identity of illegal actor and electronic equipment

Similar Documents

Publication Publication Date Title
CN106339719A (en) Image identification method and image identification device
Zhang et al. Interpreting adversarially trained convolutional neural networks
Chen et al. DISC: Deep image saliency computing via progressive representation learning
CN112818862B (en) Face tampering detection method and system based on multi-source clues and mixed attention
US20210295114A1 (en) Method and apparatus for extracting structured data from image, and device
CN108985181A (en) A kind of end-to-end face mask method based on detection segmentation
EP2568429A1 (en) Method and system for pushing individual advertisement based on user interest learning
CN111275685B (en) Method, device, equipment and medium for identifying flip image of identity document
CN109840530A (en) The method and apparatus of training multi-tag disaggregated model
CN106778852A (en) Image content recognition method for correcting misjudgments
CN112801146B (en) Target detection method and system
CN111291629A (en) Method and device for recognizing text in image, computer equipment and computer storage medium
CN108921061A (en) A kind of expression recognition method, device and equipment
US20230086552A1 (en) Image processing method and apparatus, device, storage medium, and computer program product
US9613296B1 (en) Selecting a set of exemplar images for use in an automated image object recognition system
CN110070101A (en) Floristic recognition methods and device, storage medium, computer equipment
CN105144239A (en) Image processing device, program, and image processing method
CN107871101A (en) Face detection method and device
CN107563280A (en) Face identification method and device based on multi-model
Chagas et al. Evaluation of convolutional neural network architectures for chart image classification
CN107463906A (en) The method and device of Face datection
CN110288602A (en) Landslide extraction method, landslide extraction system and terminal
CN109740572A (en) Face liveness detection method based on local color texture features
CN105930834A (en) Face identification method and apparatus based on spherical hashing binary coding
CN111723815B (en) Model training method, image processing device, computer system and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170118