CN107871314A - Sensitive image identification method and device - Google Patents

Sensitive image identification method and device

Info

Publication number
CN107871314A
CN107871314A (application CN201610846341.XA)
Authority
CN
China
Prior art keywords
image
region
classification
sorted
sensitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610846341.XA
Other languages
Chinese (zh)
Other versions
CN107871314B (en)
Inventor
范宏伟
陈振方
旷章辉
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime Group Ltd
Original Assignee
Sensetime Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime Group Ltd
Priority to CN201610846341.XA
Publication of CN107871314A
Application granted
Publication of CN107871314B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a sensitive image identification method, which addresses two problems in the prior art: a high false-detection rate when identifying images with large skin-colored areas, such as swimsuit photos, and frequent missed detections for pornographic images in which the skin-colored area is small but sexual parts are exposed. The method includes: feeding an image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image; dividing the convolutional feature layer into two or more regions to be classified; extracting a feature vector from each region to be classified; feeding the feature vectors of all regions into the fully connected layer of the convolutional neural network for classification, obtaining a class for each region, the classes including a normal class and a sensitive class; and judging, according to statistics over the regions classified as sensitive, whether the image to be identified is a sensitive image, obtaining a judgment result. An embodiment of the invention also provides a sensitive image identification apparatus.

Description

Sensitive image identification method and device
Technical field
The present invention relates to the technical field of image processing, and in particular to a sensitive image identification method and device.
Background art
The rapid development of the mobile Internet has made information exchange between people ever simpler and more convenient, greatly promoting the development of society. At the same time, however, the mobile Internet has also led to a flood of obscene information, typified by pornographic images. It is therefore important to devise an advanced, efficient algorithm for automatically identifying pornographic images.
Traditional pornographic image identification techniques are usually based on a "skin detection + sensitive-region discrimination" framework. The pipeline of such techniques is as follows: first, skin-colored regions in the image are located with a Bayesian classifier; then low-level features such as SIFT, LBP, and Haar are extracted from the skin regions; finally, these features are fed into a trained classifier for pornographic sensitive parts, such as an SVM or AdaBoost, to obtain the final identification result.
Such techniques typically suffer from two problems. (1) High false-detection rate: traditional pornographic image identification depends heavily on skin detection, so images with large skin-colored areas, such as swimsuit photos, are easily misjudged as pornographic. (2) Recall needs improvement: because of the limited expressive power of classical features such as SIFT and LBP and the limited capacity of classifiers such as SVM, pornographic images with small skin-colored areas but exposed sexual parts are often missed.
Summary of the invention
Embodiments of the invention provide a sensitive image identification method and device, which avoid misjudging images with large skin-colored areas, such as swimsuit photos, as pornographic, thereby reducing the false-detection rate, while substantially improving recall when identifying pornographic images in which the skin-colored area is small but sexual parts are exposed.
A sensitive image identification method provided in an embodiment of the invention includes:
feeding an image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image;
dividing the convolutional feature layer of the image to be identified into two or more regions to be classified;
extracting a feature vector from each region to be classified;
feeding the extracted feature vectors of all regions to be classified into the fully connected layer of the convolutional neural network for classification, obtaining a class for each region, the classes including a normal class and a sensitive class;
judging, according to statistics over the regions classified as sensitive, whether the image to be identified is a sensitive image, and obtaining a judgment result.
Optionally, the convolutional neural network is pre-trained by the following steps:
feeding training images into the convolutional neural network to obtain their convolutional feature layers, the training images including normal images and sensitive images, the sensitive regions on the sensitive images being labeled in advance as the sensitive class;
dividing the convolutional feature layer of each training image into two or more test classification regions;
extracting a feature vector from each test classification region;
feeding the extracted feature vectors of all test classification regions into the fully connected layer of the convolutional neural network for classification, obtaining a class for each test classification region, the classes including a normal class and a sensitive class;
if the classification result of a test classification region agrees with the pre-assigned class label of the training image, determining that the test classification region was classified correctly;
iteratively updating the model parameters of the convolutional neural network according to the correctly classified test classification regions.
Optionally, dividing the convolutional feature layer of the image to be identified into two or more regions to be classified is specifically:
dividing the convolutional feature layer of the image to be identified into an M*N grid of regions to serve as the regions to be classified, M and N being positive integers.
Optionally, extracting the feature vector of a region to be classified specifically includes:
performing max sampling over adjacent sub-regions of size h/√n by w/√n within the region to be classified, obtaining an n-dimensional feature vector, where h and w are respectively the height and width of the region.
Optionally, judging according to statistics over the regions classified as sensitive whether the image to be identified is a sensitive image, and obtaining the judgment result, specifically includes:
counting the number of regions classified as sensitive;
computing the ratio of the sensitive-region count to the total number of regions to be classified;
judging whether the ratio exceeds a preset threshold: if so, determining that the image to be identified is a sensitive image; if not, determining that the image to be identified is a normal image.
A sensitive image identification apparatus provided in an embodiment of the invention includes:
a feature-layer acquisition module, configured to feed an image to be identified into a pre-trained convolutional neural network and obtain a convolutional feature layer of the image;
a region division module, configured to divide the convolutional feature layer of the image to be identified into two or more regions to be classified;
a feature vector extraction module, configured to extract a feature vector from each region to be classified;
a region classification module, configured to feed the extracted feature vectors of all regions to be classified into the fully connected layer of the convolutional neural network for classification, obtaining a class for each region, the classes including a normal class and a sensitive class;
a sensitive image judgment module, configured to judge, according to statistics over the regions classified as sensitive, whether the image to be identified is a sensitive image, and obtain a judgment result.
Optionally, the convolutional neural network is pre-trained by the following modules:
a training feature-layer acquisition module, configured to feed training images into the convolutional neural network and obtain their convolutional feature layers, the training images including normal images and sensitive images, the sensitive regions on the sensitive images being labeled in advance as the sensitive class;
a test region division module, configured to divide the convolutional feature layer of each training image into two or more test classification regions;
a test feature vector extraction module, configured to extract a feature vector from each test classification region;
a test region classification module, configured to feed the extracted feature vectors of all test classification regions into the fully connected layer of the convolutional neural network for classification, obtaining a class for each test classification region, the classes including a normal class and a sensitive class;
a classification correctness determination module, configured to determine that a test classification region was classified correctly if its classification result agrees with the pre-assigned class label of the training image;
an iterative update module, configured to iteratively update the model parameters of the convolutional neural network according to the correctly classified test classification regions.
Optionally, the region division module is specifically configured to divide the convolutional feature layer of the image to be identified into an M*N grid of regions to serve as the regions to be classified, M and N being positive integers.
Optionally, the feature vector extraction module specifically includes:
a max sampling unit, configured to perform max sampling over adjacent sub-regions of size h/√n by w/√n within a region to be classified, obtaining an n-dimensional feature vector, where h and w are respectively the height and width of the region.
Optionally, the sensitive image judgment module specifically includes:
a sensitive region counting unit, configured to count the number of regions classified as sensitive;
a ratio computation unit, configured to compute the ratio of the sensitive-region count to the total number of regions to be classified;
an image determination unit, configured to judge whether the ratio exceeds a preset threshold: if so, determine that the image to be identified is a sensitive image; if not, determine that the image to be identified is a normal image.
As can be seen from the above technical solutions, embodiments of the invention have the following advantages:
In the embodiments of the invention, first, the image to be identified is fed into a pre-trained convolutional neural network to obtain its convolutional feature layer, and the feature layer is divided into two or more regions to be classified; then, a feature vector is extracted from each region, and the feature vectors of all regions are fed into the fully connected layer of the convolutional neural network for classification, each region obtaining a class that is either normal or sensitive; finally, whether the image to be identified is a sensitive image is judged according to statistics over the regions classified as sensitive, obtaining a judgment result. The embodiments do not rely on skin detection, so images with large skin-colored areas, such as swimsuit photos, are not misjudged as pornographic, reducing the false-detection rate; at the same time, the convolutional-neural-network mechanism and the region division of the image feature layer substantially improve recall when identifying pornographic images with small skin-colored areas but exposed sexual parts.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of a sensitive image identification method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of image data flow through a convolutional neural network structural model in an embodiment of the invention;
Fig. 3 is a structural diagram of an embodiment of a sensitive image identification apparatus in an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention provide a sensitive image identification method and device, for solving the prior-art problems of a high false-detection rate when identifying images with large skin-colored areas, such as swimsuit photos, and frequent missed detections when identifying pornographic images with small skin-colored areas but exposed sexual parts.
To make the objects, features, and advantages of the invention clearer and easier to understand, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
Referring to Fig. 1, an embodiment of a sensitive image identification method according to the embodiments of the invention includes the following steps.
101. Feed the image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image.
In this embodiment, when an image needs to be identified, it can be fed into a pre-trained convolutional neural network to obtain its convolutional feature layer.
The convolutional neural network can be pre-trained by the following steps:
(1) Feed training images into the convolutional neural network to obtain their convolutional feature layers. The training images include normal images and sensitive images, and the sensitive regions on the sensitive images are labeled in advance as the sensitive class. Further, the sensitive images can be subdivided, e.g. into pornographic, very sexy, and sexy; accordingly, embodiments of the invention can perform sensitive-image classification in multiple modes, such as two classes (pornographic, non-pornographic), three classes (pornographic, sexy, normal), or four classes (pornographic, very sexy, sexy, normal).
(2) Divide the convolutional feature layer of each training image into two or more test classification regions.
(3) Extract a feature vector from each test classification region.
(4) Feed the extracted feature vectors of all test classification regions into the fully connected layer of the convolutional neural network for classification, obtaining a class for each test classification region; the classes include a normal class and a sensitive class.
(5) If the classification result of a test classification region agrees with the pre-assigned class label of the training image, determine that the test classification region was classified correctly.
(6) Iteratively update the model parameters of the convolutional neural network according to the correctly classified test classification regions.
For step (1), specifically, a training image can be input to the convolutional neural network and passed through a series of convolutions, down-sampling, and nonlinear transformations, so that the last convolutional layer yields a convolutional feature layer of the training image carrying both spatial and semantic information. Among the large set of training images used as samples, normal images and sensitive images (e.g. pornographic images) can each account for half of the total, and the sensitive regions on every sensitive image are labeled in advance as the sensitive class, with the remaining non-sensitive regions labeled as the normal class. For example, the areola and the region centered on the private parts of a sensitive image can be marked as sensitive regions by manual annotation, and each sensitive image can be randomly cropped, thereby augmenting the sensitive-image training data.
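The random-cropping augmentation mentioned above can be sketched as follows; the minimum-coverage fraction and the crop parameterization are illustrative assumptions, not values specified in the patent:

```python
import random

def random_crop(width, height, min_frac=0.7):
    """Sample a random crop rectangle (x, y, w, h) whose sides cover at
    least `min_frac` of the corresponding image dimension, a simple
    stand-in for randomly cropping sensitive training images."""
    w = random.randint(int(width * min_frac), width)
    h = random.randint(int(height * min_frac), height)
    x = random.randint(0, width - w)   # top-left corner stays inside the image
    y = random.randint(0, height - h)
    return x, y, w, h

# Generate 100 candidate crops of a 640x480 training image.
crops = [random_crop(640, 480) for _ in range(100)]
```

Each crop would then be added to the training set with the same region labels, remapped to the cropped coordinates.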
In addition, in this embodiment, any convolutional neural network structural model can be chosen. Fig. 2 shows a schematic diagram of image data flow through one such structural model in an embodiment of the invention. The model can include several convolutional layers (conv1, conv2, ..., convN), several fully connected layers (fc1, fc2), a region-of-interest sampling layer (RoI), and a softmax classification layer. In this embodiment, after a training image is fed into the network, the image data first undergoes several rounds of convolution, nonlinear transformation, and max down-sampling; it can then be divided into an S*S grid (S a positive integer), passed through the region-of-interest down-sampling layer, and finally fed into the fully connected layers and the softmax classification layer.
For step (2), specifically, the convolutional feature layer of the training image can be divided into an M*N grid of regions to serve as the test classification regions, where M and N are positive integers. Each grid region corresponds to a part of the spatial extent of the original training image, so the spatial structure of the fully convolutional feature layer is fully exploited; dividing it into many small test classification regions is more conducive to discriminating image features whose skin-colored area is small.
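The grid division of step (2) can be sketched as follows (a minimal NumPy illustration; the array shapes, and the assumption that the feature-layer height and width divide evenly by M and N, are ours, not the patent's):

```python
import numpy as np

def divide_into_grid(feature_layer, m, n):
    """Split a C x H x W conv feature layer into an m x n grid of regions.

    Returns a list of m*n sub-arrays in row-major order. H and W are
    assumed divisible by m and n; a real implementation would round the
    region boundaries instead.
    """
    c, h, w = feature_layer.shape
    rh, rw = h // m, w // n
    regions = []
    for i in range(m):
        for j in range(n):
            regions.append(feature_layer[:, i * rh:(i + 1) * rh,
                                            j * rw:(j + 1) * rw])
    return regions

# A 64-channel 12x12 feature layer divided into a 3x3 grid of 4x4 regions.
feat = np.random.rand(64, 12, 12)
regions = divide_into_grid(feat, 3, 3)
```

Each returned region keeps a fixed correspondence to a spatial patch of the input image, which is what makes the later per-region labels meaningful.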
For step (3), for each test classification region, a feature vector of fixed length can be extracted with the region-of-interest down-sampling method. Specifically, if the height and width of a test classification region are h and w, and each fully convolutional feature region should produce a feature vector of dimension n, then max sampling can be performed over adjacent sub-regions of size h/√n by w/√n within the region, yielding an n-dimensional feature vector; thus even though the test classification regions differ in size, the output dimension of each region's feature vector is the same.
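A minimal sketch of this region-of-interest max sampling, assuming n is a perfect square so that the region tiles into √n x √n adjacent sub-regions (the boundary rounding for sizes not divisible by √n is our assumption):

```python
import numpy as np

def roi_max_pool(region, n):
    """Max-pool a 2-D region (h x w) into an n-dimensional feature vector.

    The region is tiled into sqrt(n) x sqrt(n) adjacent sub-regions of
    roughly (h/sqrt(n)) x (w/sqrt(n)), and the maximum of each sub-region
    is taken, so regions of different sizes all yield length-n vectors.
    """
    s = int(round(np.sqrt(n)))
    h, w = region.shape
    # Sub-region boundaries; rounding handles h, w not divisible by sqrt(n).
    hb = [int(round(i * h / s)) for i in range(s + 1)]
    wb = [int(round(j * w / s)) for j in range(s + 1)]
    out = []
    for i in range(s):
        for j in range(s):
            out.append(region[hb[i]:hb[i + 1], wb[j]:wb[j + 1]].max())
    return np.array(out)

# Two regions of different sizes both map to 16-dimensional vectors.
v1 = roi_max_pool(np.random.rand(8, 8), 16)
v2 = roi_max_pool(np.random.rand(12, 20), 16)
```

This is the property the patent relies on: a fully connected layer needs fixed-length inputs even though grid regions may vary in size.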
For step (4), the feature vector of each test classification region can be fed into the fully connected layer of the convolutional neural network for classification. For example, in the convolutional neural network structural model shown in Fig. 2, the test classification regions are fed into the two fully connected layers and the softmax classification layer of a VGG-16 network model, yielding a class for each test classification region. The classes include a normal class and a sensitive class, where the sensitive class can specifically include areola and private parts. When each test classification region is classified, its class probabilities can be obtained. For example, a classification label can be assigned in advance to the corresponding region of every training image, the labels being background, areola, or private parts; in step (4), the probability of each label class is obtained for each test classification region, and the label with the highest probability is selected as the region's classification label. If the label of the corresponding region of the original image matches this classification label, the classification is correct; otherwise it is incorrect. Specifically, when the label with the highest output probability for a test classification region is background and the corresponding region of the original image contains only background, the region is classified correctly; when the label with the highest output probability is areola and the corresponding region of the original image contains an areola, the region is classified correctly; the private parts label is judged analogously to the areola label: when the label with the highest output probability for a test classification region is private parts and the corresponding region of the original image contains private parts, the region is classified correctly.
For step (5), as is clear from the description of step (4), when the classification result of a test classification region agrees with the pre-assigned class label of the training image, the region can be determined to have been classified correctly.
For step (6), for the correctly classified test classification regions, the cross-entropy loss function can further be used as the loss function of this training process, and stochastic gradient descent can then be used to iteratively update the model parameters of the convolutional neural network.
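As a rough illustration of one such iteration, the following applies a single stochastic-gradient step to the softmax cross-entropy loss for one (feature vector, label) pair. It covers only a final linear classification layer, not the full network described in the patent, and the shapes and learning rate are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def sgd_step(weights, x, label, lr=0.1):
    """One SGD step on softmax cross-entropy for a single sample.
    `weights` has shape (num_classes, dim); `x` has shape (dim,)."""
    p = softmax(weights @ x)       # predicted class probabilities
    grad = np.outer(p, x)          # gradient is (p - onehot(label)) outer x
    grad[label] -= x
    loss = -np.log(p[label])       # cross-entropy for the true class
    return weights - lr * grad, loss

# The loss decreases over repeated steps on the same example.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 8)) * 0.01
x = rng.normal(size=8)
w, loss0 = sgd_step(w, x, label=2)
for _ in range(20):
    w, loss = sgd_step(w, x, label=2)
```

In the actual method the gradient would be backpropagated through the fully connected and convolutional layers as well.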
102. Divide the convolutional feature layer of the image to be identified into two or more regions to be classified.
After the convolutional feature layer of the image to be identified is obtained, it can be divided into two or more regions to be classified. Specifically, the feature layer can be divided into an M*N grid of regions to serve as the regions to be classified, M and N being positive integers. Each grid region corresponds to a part of the spatial extent of the original image to be identified, so the spatial structure of the fully convolutional feature layer is fully exploited; dividing it into many small regions to be classified is more conducive to discriminating image features whose skin-colored area is small.
103. Extract a feature vector from each region to be classified.
After the convolutional feature layer of the image to be identified is divided into two or more regions to be classified, a feature vector can be extracted from each region. Specifically, max sampling can be performed over adjacent sub-regions of size h/√n by w/√n within the region, yielding an n-dimensional feature vector, where h and w are respectively the height and width of the region.
For example, for each region to be classified, a feature vector of fixed length can be extracted with the region-of-interest down-sampling method: if the height and width of a region are h and w, and each fully convolutional feature region should produce a feature vector of dimension n, then max sampling over adjacent sub-regions of size h/√n by w/√n yields an n-dimensional feature vector, so even though the regions to be classified differ in size, the output dimension of each region's feature vector is the same.
104. Feed the extracted feature vectors of all regions to be classified into the fully connected layer of the convolutional neural network for classification, obtaining a class for each region.
After the feature vector of each region to be classified has been extracted, the feature vectors of all regions can be fed into the fully connected layer of the convolutional neural network for classification, obtaining a class for each region; the classes include a normal class and a sensitive class.
The feature vector of each region to be classified can be fed into the fully connected layer of the convolutional neural network for classification. For example, in the convolutional neural network structural model shown in Fig. 2, the regions to be classified are fed into the two fully connected layers and the softmax classification layer of the VGG-16 network model, yielding a class for each region. The classes include a normal class and a sensitive class, where the sensitive class can specifically include areola and private parts. When each region to be classified is classified, its class probabilities can be obtained. For example, a classification label (background, areola, or private parts) can be assigned in advance to the corresponding region of every training image; the probability of each label class is obtained for each region, and the label with the highest probability is selected as the region's classification label. If the label of the corresponding region of the original image matches this classification label, the classification is correct; otherwise it is incorrect. Specifically, when the label with the highest output probability for a region is background and the corresponding region of the original image contains only background, the region is classified correctly; when the label with the highest output probability is areola and the corresponding region of the original image contains an areola, the region is classified correctly; the private parts label is judged analogously to the areola label: when the label with the highest output probability for a region is private parts and the corresponding region of the original image contains private parts, the region is classified correctly.
105. Judge, according to statistics over the regions classified as sensitive, whether the image to be identified is a sensitive image, and obtain a judgment result.
After the class of each region to be classified is obtained, whether the image to be identified is a sensitive image can be judged according to statistics over the regions classified as sensitive, obtaining a judgment result.
In this embodiment, judging whether the image to be identified is a sensitive image according to the statistical result of the regions classified into the sensitive category, and obtaining the judgment result, may specifically include:
counting the number of sensitive regions, i.e. the regions to be classified that are classified into the sensitive category;
calculating the ratio of the number of sensitive regions to the total number of regions to be classified;
judging whether the ratio exceeds a preset threshold; if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
For example, let the number of sensitive regions be J and the threshold be thre. When J/(M*N) >= thre, the image to be identified is considered a sensitive image or a pornographic image; when J/(M*N) < thre, the image to be identified is considered a normal image.
It should be noted that there are many specific ways to judge, from the above statistical result, whether the image to be identified is a sensitive image. For example, for the same image to be identified, the ratio between the number of regions classified into the sensitive category and the number of regions classified into the normal category may be calculated; if this ratio exceeds some preset first threshold, the image to be identified is judged to be a sensitive image. Alternatively, it may be determined whether the number of regions classified into the sensitive category in the same image exceeds some preset quantity threshold; if so, the image to be identified is directly considered a sensitive image, and otherwise it is a normal image.
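The ratio-based rule of the preceding paragraphs can be sketched as follows. The label names and the threshold value used in the example are illustrative assumptions; only the decision rule J/(M*N) >= thre comes from the text.

```python
# Assumed label names for the sensitive category.
SENSITIVE_LABELS = {"areola", "private_parts"}

def judge_sensitive(region_labels, m, n, thre):
    """Return True if the image should be judged sensitive.

    region_labels holds the predicted label of each of the M*N regions.
    J is the number of regions classified into the sensitive category;
    the image is sensitive when J / (M*N) >= thre, as in the text.
    """
    j = sum(1 for label in region_labels if label in SENSITIVE_LABELS)
    return j / (m * n) >= thre

# 2 sensitive regions out of 3*3 = 9 regions: ratio 2/9, about 0.22
labels = ["background"] * 7 + ["areola", "private_parts"]
judge_sensitive(labels, 3, 3, thre=0.1)   # True, since 0.22 >= 0.1
```

The alternative rules mentioned above (sensitive-to-normal ratio, or an absolute count threshold) only change the statistic computed inside the function.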
In this embodiment, first, the image to be identified is fed into a pre-trained convolutional neural network to obtain the convolutional feature layer of the image to be identified, and the convolutional feature layer is divided into two or more regions to be classified; then, the feature vectors of the regions to be classified are extracted, and the feature vectors of all extracted regions are fed into the fully connected layers of the convolutional neural network for discriminant classification, so as to obtain the category corresponding to each region, the categories including a normal category and a sensitive category; finally, whether the image to be identified is a sensitive image is judged according to the statistical result of the regions classified into the sensitive category, and a judgment result is obtained. Since this embodiment does not rely on skin-color detection, it avoids misjudging as pornographic those images with large skin-colored areas, such as swimsuit photos, and thus reduces the false detection rate.
Compared with the prior art, such as the technique in application No. CN104992177, which simply introduces a convolutional neural network as an end-to-end classifier and does not fully exploit the feature representation capability of the network, this embodiment of the invention divides the image feature layer into regions while using the convolutional neural network mechanism, and further calculates the probability that each small region contains a pornographic part such as a breast or private parts. It thereby makes fuller use of the spatial features of the fully convolutional layers of the network, and substantially improves the recall rate when discriminating pornographic images that have small skin-colored areas but exposed sexual parts.
In addition, the method of dividing the convolutional feature layer into two or more regions to be classified, as used by the sensitive image discrimination method of this embodiment, allows most of the convolutional computation to be shared. It is therefore more efficient, with lower time complexity, than performing sliding-window detection directly on the image to be identified.
The embodiments of the present invention can be implemented in a variety of scenarios, for example, a public cloud (a cloud platform provided to users by a third-party provider), a private cloud (a cloud platform built for exclusive use by a single user), x86 terminals, ARM terminals, graphics processing units (GPU, Graphics Processing Unit), personal computer terminals, mobile phone terminals, and the like.
The above has mainly described a sensitive image discrimination method; a sensitive image identification device will be described in detail below.
Fig. 3 shows a structure diagram of an embodiment of a sensitive image identification device in the embodiments of the present invention.
In this embodiment, a sensitive image identification device includes:
a to-be-identified feature layer acquisition module 301, configured to feed the image to be identified into a pre-trained convolutional neural network to obtain the convolutional feature layer of the image to be identified;
a to-be-classified region division module 302, configured to divide the convolutional feature layer of the image to be identified into two or more regions to be classified;
a feature vector extraction module 303, configured to extract the feature vectors of the regions to be classified;
a to-be-classified region judgment module 304, configured to feed the feature vectors of all extracted regions to be classified into the fully connected layers of the convolutional neural network for discriminant classification, so as to obtain the category corresponding to each region to be classified, the categories including a normal category and a sensitive category;
a sensitive image judgment module 305, configured to judge whether the image to be identified is a sensitive image according to the statistical result of the regions to be classified that are classified into the sensitive category, and obtain a judgment result.
Further, the convolutional neural network may be pre-trained by the following modules:
a training feature layer acquisition module, configured to feed training images into the convolutional neural network to obtain the convolutional feature layers of the training images, the training images including normal images and sensitive images, the sensitive regions on the sensitive images being labeled in advance as the sensitive category;
a test region division module, configured to divide the convolutional feature layer of the training image into two or more test classification regions;
a test feature vector extraction module, configured to extract the feature vectors of the test classification regions;
a test classification region judgment module, configured to feed the feature vectors of all extracted test classification regions into the fully connected layers of the convolutional neural network for discriminant classification, so as to obtain the category corresponding to each test classification region, the categories including a normal category and a sensitive category;
a classification correctness determination module, configured to determine that the classification of a test classification region is correct if the classification result of that region is consistent with the pre-labeled category of the training image;
an iterative update module, configured to iteratively update the model parameters of the convolutional neural network according to the correctly classified test classification regions.
Further, the to-be-classified region division module may specifically be configured to divide the convolutional feature layer of the image to be identified into M*N grid regions as the regions to be classified, M and N both being positive integers.
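The M*N grid division performed by this module can be sketched as follows for a NumPy feature map. The channel-first layout and the use of array_split (so edge regions absorb the remainder when the height or width is not evenly divisible) are assumptions of this sketch, not details fixed by the patent.

```python
import numpy as np

def divide_into_grid(feature_layer, m, n):
    """Split a C x H x W convolutional feature layer into M*N regions,
    returned row by row as a flat list of sub-arrays."""
    c, h, w = feature_layer.shape
    row_idx = np.array_split(np.arange(h), m)
    col_idx = np.array_split(np.arange(w), n)
    return [feature_layer[:, r[0]:r[-1] + 1, cc[0]:cc[-1] + 1]
            for r in row_idx for cc in col_idx]

feat = np.zeros((512, 14, 14))        # e.g. a VGG-16 conv feature map
regions = divide_into_grid(feat, 2, 2)
# 4 regions, each of shape (512, 7, 7)
```

Because the division is performed on the shared convolutional feature layer rather than on the input image, all regions reuse the same convolutional computation.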
Further, the feature vector extraction module may specifically include:
a maximum sampling unit, configured to perform maximum sampling on adjacent sub-regions of length h/√n and width w/√n within the region to be classified, so as to obtain an n-dimensional feature vector, h and w being respectively the length and width of the region to be classified.
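The maximum sampling performed by this unit can be sketched as follows for a single 2-D region map: the region is covered by a √n x √n grid of adjacent sub-regions, each roughly h/√n by w/√n in size, and the maximum of each sub-region yields one component of the n-dimensional vector. Treating the region as one channel and requiring n to be a perfect square are assumptions of this sketch.

```python
import numpy as np

def region_max_pool(region, n):
    """Max-sample an h x w region map into an n-dimensional vector."""
    k = int(round(n ** 0.5))
    assert k * k == n, "n must be a perfect square in this sketch"
    rows = np.array_split(np.arange(region.shape[0]), k)
    cols = np.array_split(np.arange(region.shape[1]), k)
    # One maximum per k x k sub-block, scanned row by row.
    return np.array([region[r[0]:r[-1] + 1, c[0]:c[-1] + 1].max()
                     for r in rows for c in cols])

region = np.arange(36, dtype=float).reshape(6, 6)
vec = region_max_pool(region, 4)   # maxima of the four 3x3 sub-blocks
# vec -> array([14., 17., 32., 35.])
```

This produces a fixed-length vector regardless of the region's size, which is what allows regions of differing sizes to share the same fully connected layers.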
Further, the sensitive image judgment module may specifically include:
a sensitive region counting unit, configured to count the number of sensitive regions, i.e. the regions to be classified that are classified into the sensitive category;
a ratio calculation unit, configured to calculate the ratio of the number of sensitive regions to the total number of regions to be classified;
an image determination unit, configured to judge whether the ratio exceeds a preset threshold, and if so, determine that the image to be identified is a sensitive image, and if not, determine that the image to be identified is a normal image.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which will not be repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For instance, the division of the units is only a division by logical function; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A sensitive image discrimination method, characterized by comprising:
    feeding an image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image to be identified;
    dividing the convolutional feature layer of the image to be identified into two or more regions to be classified;
    extracting feature vectors of the regions to be classified;
    feeding the feature vectors of all extracted regions to be classified into fully connected layers of the convolutional neural network for discriminant classification, to obtain a category corresponding to each region to be classified, the categories including a normal category and a sensitive category;
    judging whether the image to be identified is a sensitive image according to a statistical result of the regions to be classified that are classified into the sensitive category, to obtain a judgment result.
  2. The sensitive image discrimination method according to claim 1, characterized in that the convolutional neural network is pre-trained by the following steps:
    feeding training images into the convolutional neural network to obtain convolutional feature layers of the training images, the training images including normal images and sensitive images, sensitive regions on the sensitive images being labeled in advance as the sensitive category;
    dividing the convolutional feature layer of the training image into two or more test classification regions;
    extracting feature vectors of the test classification regions;
    feeding the feature vectors of all extracted test classification regions into the fully connected layers of the convolutional neural network for discriminant classification, to obtain a category corresponding to each test classification region, the categories including a normal category and a sensitive category;
    if a classification result of a test classification region is consistent with the pre-labeled category of the training image, determining that the classification of the test classification region is correct;
    iteratively updating model parameters of the convolutional neural network according to the correctly classified test classification regions.
  3. The sensitive image discrimination method according to claim 1, characterized in that dividing the convolutional feature layer of the image to be identified into two or more regions to be classified specifically comprises:
    dividing the convolutional feature layer of the image to be identified into M*N grid regions as the regions to be classified, M and N both being positive integers.
  4. The sensitive image discrimination method according to claim 1, characterized in that extracting the feature vectors of the regions to be classified specifically comprises:
    performing maximum sampling on adjacent sub-regions of length h/√n and width w/√n within the region to be classified to obtain an n-dimensional feature vector, h and w being respectively the length and width of the region to be classified.
  5. The sensitive image discrimination method according to any one of claims 1 to 4, characterized in that judging whether the image to be identified is a sensitive image according to the statistical result of the regions to be classified that are classified into the sensitive category, to obtain the judgment result, specifically comprises:
    counting a number of sensitive regions, i.e. the regions to be classified that are classified into the sensitive category;
    calculating a ratio of the number of sensitive regions to a total number of the regions to be classified;
    judging whether the ratio exceeds a preset threshold; if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
  6. A sensitive image identification device, characterized by comprising:
    a to-be-identified feature layer acquisition module, configured to feed an image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image to be identified;
    a to-be-classified region division module, configured to divide the convolutional feature layer of the image to be identified into two or more regions to be classified;
    a feature vector extraction module, configured to extract feature vectors of the regions to be classified;
    a to-be-classified region judgment module, configured to feed the feature vectors of all extracted regions to be classified into fully connected layers of the convolutional neural network for discriminant classification, to obtain a category corresponding to each region to be classified, the categories including a normal category and a sensitive category;
    a sensitive image judgment module, configured to judge whether the image to be identified is a sensitive image according to a statistical result of the regions to be classified that are classified into the sensitive category, to obtain a judgment result.
  7. The sensitive image identification device according to claim 6, characterized in that the convolutional neural network is pre-trained by the following modules:
    a training feature layer acquisition module, configured to feed training images into the convolutional neural network to obtain convolutional feature layers of the training images, the training images including normal images and sensitive images, sensitive regions on the sensitive images being labeled in advance as the sensitive category;
    a test region division module, configured to divide the convolutional feature layer of the training image into two or more test classification regions;
    a test feature vector extraction module, configured to extract feature vectors of the test classification regions;
    a test classification region judgment module, configured to feed the feature vectors of all extracted test classification regions into the fully connected layers of the convolutional neural network for discriminant classification, to obtain a category corresponding to each test classification region, the categories including a normal category and a sensitive category;
    a classification correctness determination module, configured to determine that the classification of a test classification region is correct if the classification result of the test classification region is consistent with the pre-labeled category of the training image;
    an iterative update module, configured to iteratively update model parameters of the convolutional neural network according to the correctly classified test classification regions.
  8. The sensitive image identification device according to claim 6, characterized in that the to-be-classified region division module is specifically configured to divide the convolutional feature layer of the image to be identified into M*N grid regions as the regions to be classified, M and N both being positive integers.
  9. The sensitive image identification device according to claim 6, characterized in that the feature vector extraction module specifically comprises:
    a maximum sampling unit, configured to perform maximum sampling on adjacent sub-regions of length h/√n and width w/√n within the region to be classified to obtain an n-dimensional feature vector, h and w being respectively the length and width of the region to be classified.
  10. The sensitive image identification device according to any one of claims 6 to 9, characterized in that the sensitive image judgment module specifically comprises:
    a sensitive region counting unit, configured to count a number of sensitive regions, i.e. the regions to be classified that are classified into the sensitive category;
    a ratio calculation unit, configured to calculate a ratio of the number of sensitive regions to a total number of the regions to be classified;
    an image determination unit, configured to judge whether the ratio exceeds a preset threshold, and if so, determine that the image to be identified is a sensitive image, and if not, determine that the image to be identified is a normal image.
CN201610846341.XA 2016-09-23 2016-09-23 Sensitive image identification method and device Active CN107871314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610846341.XA CN107871314B (en) 2016-09-23 2016-09-23 Sensitive image identification method and device


Publications (2)

Publication Number Publication Date
CN107871314A true CN107871314A (en) 2018-04-03
CN107871314B CN107871314B (en) 2022-02-18

Family

ID=61751619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610846341.XA Active CN107871314B (en) 2016-09-23 2016-09-23 Sensitive image identification method and device

Country Status (1)

Country Link
CN (1) CN107871314B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191451A (en) * 2018-09-11 2019-01-11 百度在线网络技术(北京)有限公司 Method for detecting abnormality, device, equipment and medium
CN109359551A (en) * 2018-09-21 2019-02-19 深圳市璇玑实验室有限公司 A kind of nude picture detection method and system based on machine learning
CN109640174A (en) * 2019-01-28 2019-04-16 Oppo广东移动通信有限公司 Method for processing video frequency and relevant device
CN109840590A (en) * 2019-01-31 2019-06-04 福州瑞芯微电子股份有限公司 A kind of scene classification circuit framework neural network based and method
CN110163300A (en) * 2019-05-31 2019-08-23 北京金山云网络技术有限公司 A kind of image classification method, device, electronic equipment and storage medium
CN110490027A (en) * 2018-05-15 2019-11-22 触景无限科技(北京)有限公司 A kind of face characteristic extraction training method and system for recognition of face
CN111738290A (en) * 2020-05-14 2020-10-02 北京沃东天骏信息技术有限公司 Image detection method, model construction and training method, device, equipment and medium
CN112598016A (en) * 2020-09-17 2021-04-02 北京小米松果电子有限公司 Image classification method and device, communication equipment and storage medium
CN113936195A (en) * 2021-12-16 2022-01-14 云账户技术(天津)有限公司 Sensitive image recognition model training method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6675162B1 (en) * 1997-10-01 2004-01-06 Microsoft Corporation Method for scanning, analyzing and handling various kinds of digital information content
CN101150977A (en) * 2005-04-13 2008-03-26 奥林巴斯医疗株式会社 Image processor and image processing method
CN104182735A (en) * 2014-08-18 2014-12-03 厦门美图之家科技有限公司 Training optimization pornographic picture or video detection method based on convolutional neural network
CN104346622A (en) * 2013-07-31 2015-02-11 富士通株式会社 Convolutional neural network classifier, and classifying method and training method thereof
CN104992177A (en) * 2015-06-12 2015-10-21 安徽大学 Internet porn image detection method based on deep convolution nerve network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAILONG ZHOU et al., "Convolutional Neural Networks based Pornographic Image Classification", 2016 IEEE International Conference on Multimedia Big Data *


Also Published As

Publication number Publication date
CN107871314B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN107871314A (en) A kind of sensitive image discrimination method and device
CN109344736B (en) Static image crowd counting method based on joint learning
CN108960409B (en) Method and device for generating annotation data and computer-readable storage medium
CN110070067A (en) The training method of video classification methods and its model, device and electronic equipment
CN113486981B (en) RGB image classification method based on multi-scale feature attention fusion network
CN106951825A (en) A kind of quality of human face image assessment system and implementation method
CN109241871A (en) A kind of public domain stream of people&#39;s tracking based on video data
CN108846426A (en) Polarization SAR classification method based on the twin network of the two-way LSTM of depth
CN109410184B (en) Live broadcast pornographic image detection method based on dense confrontation network semi-supervised learning
CN107832835A A lightweighting method and device for a convolutional neural network
CN107133651A (en) The functional magnetic resonance imaging data classification method of subgraph is differentiated based on super-network
CN107506793A (en) Clothes recognition methods and system based on weak mark image
CN107808358A (en) Image watermark automatic testing method
CN108229262A (en) A kind of pornographic video detecting method and device
CN107203775A (en) A kind of method of image classification, device and equipment
CN106295502A (en) A kind of method for detecting human face and device
CN106778851B (en) Social relationship prediction system and method based on mobile phone evidence obtaining data
CN107909102A (en) A kind of sorting technique of histopathology image
CN107545271A (en) Image-recognizing method, device and system
CN106203539A (en) The method and apparatus identifying container number
CN106651973A (en) Image structuring method and device
CN107273824A (en) Face identification method based on multiple dimensioned multi-direction local binary patterns
CN112418360A (en) Convolutional neural network training method, pedestrian attribute identification method and related equipment
CN109993187A (en) A kind of modeling method, robot and the storage device of object category for identification
Naqvi et al. Feature quality-based dynamic feature selection for improving salient object detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant