CN107871314B - Sensitive image identification method and device - Google Patents
- Publication number
- CN107871314B (application CN201610846341.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- classification
- sensitive
- classified
- identified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The embodiment of the invention discloses a sensitive image identification method, which addresses two problems of the prior art: a high false-detection rate when identifying swimsuit photos with large skin-color areas, and frequent missed detections when identifying pornographic images with small skin-color areas but exposed sexual organs. The method provided by the embodiment of the invention comprises the following steps: sending the image to be identified into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image to be identified; dividing the convolutional feature layer of the image to be identified into two or more regions to be classified; extracting a feature vector from each region to be classified; sending the feature vectors of all the regions to be classified into the fully connected layer of the convolutional neural network for classification and discrimination, obtaining a classification for each region to be classified, wherein the classification comprises a normal category and a sensitive category; and judging whether the image to be identified is a sensitive image according to the statistics of the regions classified into the sensitive category, obtaining a judgment result. The embodiment of the invention also provides a sensitive image identification device.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a sensitive image identification method and device.
Background
The vigorous development of the mobile internet enables information exchange between people to be simpler and more convenient, and greatly promotes the development of the society. However, at the same time, the mobile internet has also caused a flood of obscene information represented by pornographic images. Therefore, it is very important to provide an advanced and efficient algorithm to automatically identify pornographic images.
The traditional pornographic image identification technology is usually based on a frame of 'skin color detection + sensitive area discrimination'. The flow of such techniques is as follows: firstly, a Bayes classifier is used to obtain a skin color region in an image, then low-level features such as SIFT, LBP, Haar and the like are extracted from the skin color region, and finally the features are sent to a trained pornographic sensitive part classifier such as SVM and AdaBoost, so that a final pornographic identification result is obtained.
These techniques tend to suffer from two problems: (1) a high false-detection rate: traditional pornographic image identification relies heavily on skin-color detection, so swimsuit photos with large skin-color areas are likely to be misidentified as pornographic images; (2) a detection rate that needs improvement: owing to the limited expressive power of classical features such as SIFT and LBP and the performance limits of classifiers such as SVM, pornographic images with small skin-color areas but exposed sexual organs are often missed.
Disclosure of Invention
The embodiment of the invention provides a sensitive image identification method and device, which can avoid swimsuit photos with large skin-color areas being misidentified as pornographic images, reducing the false detection rate; meanwhile, the detection rate is greatly improved when identifying pornographic images with small skin-color areas but exposed sexual organs.
The sensitive image identification method provided by the embodiment of the invention comprises the following steps:
sending the image to be identified into a convolutional neural network trained in advance to obtain a convolutional characteristic layer of the image to be identified;
dividing the convolution characteristic layer of the image to be identified into more than two regions to be classified;
extracting a feature vector of the region to be classified;
sending the extracted feature vectors of all the regions to be classified into a full connection layer of the convolutional neural network for classification and judgment to obtain a classification corresponding to each region to be classified, wherein the classification comprises a normal classification and a sensitive classification;
and judging whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category to obtain a judgment result.
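The five steps above can be sketched end to end. This is an illustrative outline only: the network is stubbed out as a callable, and the grid size, threshold, and helper names are assumptions rather than details from the patent.

```python
def identify_sensitive(image, cnn_features, classify_region, grid=(4, 4), threshold=0.2):
    """Return True if the image is judged sensitive.

    cnn_features:    callable image -> 2-D feature map (step 1, assumed pre-trained)
    classify_region: callable region -> "normal" | "sensitive" (steps 3-4 combined)
    """
    fmap = cnn_features(image)                       # step 1: conv feature layer
    M, N = grid
    h, w = len(fmap) // M, len(fmap[0]) // N
    regions = [
        [row[j * w:(j + 1) * w] for row in fmap[i * h:(i + 1) * h]]
        for i in range(M) for j in range(N)          # step 2: M*N grid regions
    ]
    labels = [classify_region(r) for r in regions]   # steps 3-4: per-region labels
    ratio = labels.count("sensitive") / len(labels)  # step 5: statistics
    return ratio >= threshold
```

With a feature map standing in for the CNN output and a toy per-region rule, the decision flips once enough grid regions are flagged sensitive.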
Optionally, the convolutional neural network is pre-trained by the following steps:
sending a training image into a convolutional neural network to obtain a convolutional characteristic layer of the training image, wherein the training image comprises a normal image and a sensitive image, and a sensitive area on the sensitive image is marked as a sensitive category in advance;
dividing the convolution characteristic layer of the training image into more than two test classification areas;
extracting a feature vector of the test classification area;
sending the extracted feature vectors of all the test classification areas into a full connection layer of the convolutional neural network for classification and judgment to obtain a classification corresponding to each test classification area, wherein the classification comprises a normal classification and a sensitive classification;
if the classification result of the test classification region is consistent with the pre-labeled class of the training image, determining that the classification of the test classification region is correct;
and iteratively updating the model parameters of the convolutional neural network according to the test classification region with correct classification.
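As a sketch of how the pre-labelled sensitive areas of step (1) can be turned into per-region training labels for steps (2)-(5): the rectangle representation and the helper name are illustrative assumptions, not the patent's annotation format.

```python
def grid_labels(img_h, img_w, boxes, M, N):
    """Derive per-grid-region training labels from annotated sensitive rectangles.

    boxes: list of (x0, y0, x1, y1) sensitive rectangles in image coordinates.
    A grid cell is labelled "sensitive" if it overlaps any annotated rectangle.
    """
    cell_h, cell_w = img_h / M, img_w / N
    labels = []
    for i in range(M):
        for j in range(N):
            cy0, cx0 = i * cell_h, j * cell_w
            cy1, cx1 = cy0 + cell_h, cx0 + cell_w
            hit = any(x0 < cx1 and cx0 < x1 and y0 < cy1 and cy0 < y1
                      for (x0, y0, x1, y1) in boxes)
            labels.append("sensitive" if hit else "normal")
    return labels
```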
Optionally, the dividing the convolution feature layer of the image to be identified into more than two regions to be classified specifically includes:
and dividing the convolution characteristic layer of the image to be identified into M x N grid areas serving as the areas to be classified, wherein M and N are positive integers.
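A minimal NumPy sketch of the M × N grid division, assuming the feature-map height and width are divisible by M and N (function and variable names are illustrative):

```python
import numpy as np

def split_grid(fmap, M, N):
    """Split an H×W×C convolutional feature layer into M*N grid regions."""
    H, W, _ = fmap.shape
    h, w = H // M, W // N   # height and width of each grid region
    return [fmap[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(M) for j in range(N)]
```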
Optionally, the extracting the feature vector of the region to be classified specifically includes:
performing maximum-value sampling on adjacent sub-regions of length h/√n and width w/√n in the region to be classified to obtain an n-dimensional feature vector, where h and w are the length and width of the region to be classified, respectively.
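The equal-length feature extraction can be sketched as region-of-interest max-pooling. Here n is assumed to be a perfect square, so the region tiles into √n × √n sub-regions of size roughly h/√n × w/√n each; this is an interpretation consistent with the description above, not a verbatim implementation from the patent.

```python
import numpy as np

def roi_max_pool(region, n):
    """Max-pool an h×w region into an n-dimensional vector.

    The region is tiled into a sqrt(n) × sqrt(n) grid of sub-regions
    (n assumed a perfect square) and the maximum of each is taken,
    so regions of different sizes all yield n-dimensional outputs.
    """
    s = int(round(n ** 0.5))
    h, w = region.shape
    ys = np.linspace(0, h, s + 1).astype(int)   # sub-region row boundaries
    xs = np.linspace(0, w, s + 1).astype(int)   # sub-region column boundaries
    out = [region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
           for i in range(s) for j in range(s)]
    return np.array(out)
```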
Optionally, the determining whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category specifically includes:
counting the number of the sensitive areas of the areas to be classified into the sensitive categories;
calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
The embodiment of the invention provides a sensitive image identification device, which comprises:
the system comprises a to-be-identified characteristic layer acquisition module, a to-be-identified characteristic layer acquisition module and a to-be-identified characteristic layer acquisition module, wherein the to-be-identified characteristic layer acquisition module is used for sending an image to be identified into a convolutional neural network which is trained in advance to obtain a convolutional characteristic layer of the image to be identified;
the to-be-classified area dividing module is used for dividing the convolution characteristic layer of the to-be-identified image into more than two to-be-classified areas;
the characteristic vector extraction module is used for extracting the characteristic vector of the region to be classified;
the to-be-classified area distinguishing module is used for sending the extracted feature vectors of all the to-be-classified areas into a full connection layer of the convolutional neural network for classification and distinguishing to obtain a classification corresponding to each to-be-classified area, wherein the classification comprises a normal classification and a sensitive classification;
and the sensitive image judging module is used for judging whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category to obtain a judging result.
Optionally, the convolutional neural network is pre-trained by the following modules:
the training characteristic layer acquisition module is used for sending a training image into a convolutional neural network to obtain a convolutional characteristic layer of the training image, wherein the training image comprises a normal image and a sensitive image, and a sensitive area on the sensitive image is labeled as a sensitive category in advance;
the test region dividing module is used for dividing the convolution characteristic layer of the training image into more than two test classification regions;
the test feature vector extraction module is used for extracting feature vectors of the test classification areas;
the test classification region distinguishing module is used for sending the extracted feature vectors of all the test classification regions into a full connection layer of the convolutional neural network for classification and distinguishing to obtain a classification corresponding to each test classification region, and the classification comprises a normal classification and a sensitive classification;
the classification correctness determining module is used for determining that the classification of the test classification region is correct if the classification result of the test classification region is consistent with the pre-labeled class of the training image;
and the iteration updating module is used for iteratively updating the model parameters of the convolutional neural network according to the test classification area with correct classification.
Optionally, the to-be-classified region dividing module is specifically configured to divide the convolution feature layer of the to-be-identified image into M × N grid regions as the to-be-classified region, where M and N are positive integers.
Optionally, the feature vector extraction module specifically includes:
a maximum-value sampling unit, configured to perform maximum-value sampling on adjacent sub-regions of length h/√n and width w/√n to obtain an n-dimensional feature vector, where h and w are the length and width of the region to be classified, respectively.
Optionally, the sensitive image determining module specifically includes:
the sensitive region number counting unit is used for counting the number of the sensitive regions of the regions to be classified into the sensitive categories;
the ratio calculation unit is used for calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and the image determining unit is used for judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
According to the technical scheme, the embodiment of the invention has the following advantages:
in the embodiment of the invention, the image to be identified is first sent into a pre-trained convolutional neural network to obtain its convolutional feature layer; the convolutional feature layer is divided into two or more regions to be classified; the feature vector of each region to be classified is then extracted; the feature vectors of all regions to be classified are sent into the fully connected layer of the convolutional neural network for classification and discrimination, giving a classification (normal or sensitive) for each region; finally, whether the image to be identified is a sensitive image is judged from the statistics of the regions classified into the sensitive category, obtaining a judgment result. Because the embodiment of the invention does not rely on skin-color detection, swimsuit photos with large skin-color areas are not misidentified as pornographic images, and the false detection rate is reduced; meanwhile, by adopting a convolutional neural network mechanism and dividing the image feature layer into regions, the detection rate is greatly improved when identifying pornographic images with small skin-color areas but exposed sexual organs.
Drawings
FIG. 1 is a flow chart of an embodiment of a sensitive image identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of image data flow in a convolutional neural network structure model according to an embodiment of the present invention;
fig. 3 is a block diagram of an embodiment of a sensitive image identification apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a sensitive image identification method and a sensitive image identification device, which are used to solve the problems in the prior art that the false detection rate is high when identifying swimsuit photos with large skin-color areas, and that pornographic images with small skin-color areas but exposed sexual organs are easily missed.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of a sensitive image identification method according to an embodiment of the present invention includes:
101. sending the image to be identified into a convolutional neural network trained in advance to obtain a convolutional characteristic layer of the image to be identified;
in this embodiment, when the image to be identified needs to be identified, the image to be identified may be sent to a convolutional neural network trained in advance, so as to obtain a convolutional characteristic layer of the image to be identified.
Wherein the convolutional neural network can be pre-trained by the following steps:
(1) sending a training image into a convolutional neural network to obtain a convolutional feature layer of the training image, wherein the training images comprise normal images and sensitive images, and the sensitive areas on each sensitive image are marked as the sensitive category in advance. Further, the sensitive category may be subdivided, e.g., into pornographic, non-sexual, and sexual; accordingly, the embodiment of the invention can perform sensitive image classification in more modes, such as two-class (pornographic, non-pornographic), three-class (pornographic, sexual, normal), four-class (pornographic, non-sexual, sexual, normal), and so on.
(2) Dividing the convolution characteristic layer of the training image into more than two test classification areas;
(3) extracting a feature vector of the test classification area;
(4) sending the extracted feature vectors of all the test classification areas into a full connection layer of the convolutional neural network for classification and judgment to obtain a classification corresponding to each test classification area, wherein the classification comprises a normal classification and a sensitive classification;
(5) if the classification result of the test classification region is consistent with the pre-labeled class of the training image, determining that the classification of the test classification region is correct;
(6) and iteratively updating the model parameters of the convolutional neural network according to the test classification region with correct classification.
For step (1), specifically, a training image may be input into the convolutional neural network; after a series of convolution, down-sampling, and nonlinear transformation operations inside the network, a convolutional feature layer of the training image carrying both spatial and semantic information is obtained at the final convolutional layer. Among the large set of training images used as samples, normal images and sensitive images (such as pornographic images) may each account for half of the total; the sensitive areas on each sensitive image are labelled in advance as the sensitive category, and the remaining non-sensitive areas are labelled as the normal category. For example, the regions centred on the areola and the pudendum in a sensitive image can be labelled as sensitive-category regions by manual annotation, and each sensitive image can also be randomly cropped to obtain augmented sensitive-image training data.
In addition, in this embodiment, any convolutional neural network structure model may be selected. Fig. 2 shows a schematic diagram of the image data flow in a convolutional neural network structure model according to an embodiment of the present invention. The structure model may include several convolutional layers (conv1, conv2, ..., convN), several fully connected layers (fc1, fc2), a region-of-interest sampling layer (RoI), and a softmax classification layer. After a training image is fed into the network, the image data first undergoes several convolution, nonlinear transformation, and maximum-value down-sampling operations; it is then divided into S × S grids (S is a positive integer), passes through the region-of-interest down-sampling layer, and is finally fed into the fully connected layers and the softmax classification layer.
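A minimal NumPy sketch of the fully-connected + softmax head at the end of this data flow, applied to one per-region feature vector. The weights and sizes here are arbitrary placeholders, not the VGG-16 parameters mentioned later.

```python
import numpy as np

def softmax(z):
    z = z - z.max()             # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fc_softmax_head(feat, W1, b1, W2, b2):
    """fc1 -> ReLU -> fc2 -> softmax over the classification labels."""
    h = np.maximum(0.0, W1 @ feat + b1)
    return softmax(W2 @ h + b2)
```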
For step (2), the convolutional feature layer of the training image may be divided into M × N grid regions as test classification regions, where M and N are positive integers. Each grid region corresponds to the spatial position of a part of the original training image; dividing the full-convolution feature layer into many small test classification regions makes full use of its spatial features and is more conducive to identifying image features with small skin-color areas.
For step (3), for each test classification region, a region-of-interest down-sampling method may be adopted to extract feature vectors of equal length. Specifically, assuming the length and width of the test classification region are h and w, respectively, and the dimension of the feature vector to be generated for each full-convolution feature region is n, maximum-value sampling is performed on adjacent sub-regions of length h/√n and width w/√n to obtain an n-dimensional feature vector, so that the output feature vector of every test classification region has the same dimension even when the test classification regions differ in size.
For step (4), the feature vector of each test classification region may be sent to the fully connected layers of the convolutional neural network for classification and discrimination; for example, in the convolutional neural network structure model shown in Fig. 2, the test classification regions are sent to the two fully connected layers and the softmax classification layer of the VGG-16 network model to obtain the classification corresponding to each test classification region. The classification includes a normal category and a sensitive category, wherein the sensitive category may specifically include areola and pudendum. When each test classification region is classified, its classification probabilities can be obtained. For example, a classification label (background, areola, or pudendum) may be marked on the corresponding region of each training image in advance; in step (4), the probability of each classification label is obtained for each test classification region, and the label with the maximum probability is selected as that region's classification label. If the label of the original image area corresponding to the test classification region is the same as this classification label, the classification is correct; otherwise, it is incorrect.
Specifically, when the label corresponding to the maximum output probability in a test classification region is the background and the region corresponding to the original image only contains the background, the classification of the test classification region is correct; when the label of the maximum output probability in a test classification area is areola and the area corresponding to the original image contains areola, the classification of the test classification area is correct; for the pudendum label, the identification is similar to the identification of the areola label, and when the label with the maximum output probability in a test classification area is the pudendum and the area corresponding to the original image contains the pudendum, the classification of the test classification area is correct.
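The correctness rules for the three labels can be stated compactly. Here `region_contents`, a set describing what is actually present in the corresponding original-image area, is an illustrative representation introduced for the sketch:

```python
LABELS = ("background", "areola", "pudendum")

def region_correct(probs, region_contents):
    """Pick the label with the highest output probability and check it
    against the region's actual contents, per the rules above:
    - "background" is correct only if the region contains nothing else;
    - "areola"/"pudendum" are correct if the region contains that part."""
    pred = LABELS[max(range(len(probs)), key=probs.__getitem__)]
    if pred == "background":
        return region_contents == {"background"}
    return pred in region_contents
```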
As for the step (5), as can be seen from the description of the step (4), when the classification result of the test classification region is consistent with the pre-labeled class of the training image, it can be determined that the classification of the test classification region is correct.
For step (6), for the test classification regions that are classified correctly, a cross-entropy loss function can be used as the loss function of the training process, and the model parameters of the convolutional neural network are then iteratively updated by stochastic gradient descent.
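A toy sketch of the cross-entropy loss with one stochastic-gradient-descent update, using a single softmax classifier as a stand-in for the full network's parameter update:

```python
import numpy as np

def cross_entropy(probs, y):
    """Cross-entropy loss for one region: -log p(true class)."""
    return -np.log(probs[y])

def sgd_step(W, b, feat, y, lr=0.1):
    """One SGD update of a softmax classifier on one (feature, label) pair."""
    z = W @ feat + b
    z = z - z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    grad = p.copy()
    grad[y] -= 1.0                       # d(loss)/dz for softmax + cross-entropy
    W = W - lr * np.outer(grad, feat)
    b = b - lr * grad
    return W, b
```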
102. Dividing the convolution characteristic layer of the image to be identified into more than two regions to be classified;
after the convolutional feature layer of the image to be identified is obtained, it may be divided into two or more regions to be classified; specifically, into M × N grid regions serving as the regions to be classified, where M and N are positive integers. Each grid region corresponds to the spatial position of a part of the original image to be identified; dividing the full-convolution feature layer into many small regions to be classified makes full use of its spatial features and facilitates the identification of image features with small skin-color areas.
103. Extracting a feature vector of the region to be classified;
after the convolutional feature layer of the image to be identified is divided into two or more regions to be classified, the feature vector of each region to be classified can be extracted. Specifically, maximum-value sampling may be performed on adjacent sub-regions of length h/√n and width w/√n to obtain an n-dimensional feature vector, where h and w are the length and width of the region to be classified, respectively.
For example, for each region to be classified, a region-of-interest down-sampling method may be used to extract feature vectors of equal length: with the length and width of the region to be classified being h and w, respectively, and the dimension of the feature vector to be generated for each full-convolution feature region being n, maximum-value sampling is performed on adjacent sub-regions of length h/√n and width w/√n to obtain an n-dimensional feature vector, so that the output feature vector of every region to be classified has the same dimension even when the regions differ in size.
104. Sending the extracted feature vectors of all the regions to be classified into a full connection layer of the convolutional neural network for classification and judgment to obtain a classification corresponding to each region to be classified;
after extracting the feature vectors of the regions to be classified, the extracted feature vectors of all the regions to be classified can be sent to a full connection layer of the convolutional neural network for classification and discrimination, so as to obtain a classification corresponding to each region to be classified, wherein the classification comprises a normal classification and a sensitive classification.
The feature vector of each region to be classified may be sent to the fully connected layers of the convolutional neural network for classification and discrimination; for example, in the convolutional neural network structure model shown in Fig. 2, the regions to be classified are sent to the two fully connected layers and the softmax classification layer of the VGG-16 network model to obtain the classification corresponding to each region to be classified. The classification includes a normal category and a sensitive category, wherein the sensitive category may specifically include areola and pudendum. When each region to be classified is classified, its classification probabilities can be obtained. For example, a classification label (background, areola, or pudendum) may be marked on the corresponding region of each training image in advance; the probability of each classification label is obtained for each region to be classified, and the label with the maximum probability is selected as that region's classification label. If the label of the original image area corresponding to the region to be classified is the same as this classification label, the classification is correct; otherwise, it is incorrect.
Specifically, when the label corresponding to the maximum output probability in an area to be classified is a background and the area corresponding to the original image only contains the background, the classification of the area to be classified is correct; when the label of the maximum output probability in a region to be classified is areola and the region corresponding to the original image contains areola, the region to be classified is classified correctly; for the pudendum label, the identification is similar to the identification of the areola label, when the label with the maximum output probability in an area to be classified is the pudendum and the area corresponding to the original image contains the pudendum, the classification of the area to be classified is correct.
105. And judging whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category to obtain a judgment result.
After the classification corresponding to each to-be-classified area is obtained, whether the to-be-identified image is a sensitive image or not can be judged according to the statistical result of the to-be-classified areas classified into the sensitive categories, and a judgment result is obtained.
In this embodiment, the determining whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category may specifically include:
counting the number of the sensitive areas of the areas to be classified into the sensitive categories;
calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
For example, let the number of sensitive regions be J and the threshold be thre: when J/(M × N) ≥ thre, the image to be identified may be considered a sensitive image (a pornographic image), and when J/(M × N) < thre, the image to be identified may be considered a normal image.
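The decision rule reduces to a single comparison (the grid size and threshold below are illustrative numbers):

```python
def is_sensitive(J, M, N, thre):
    """Sensitive iff the fraction of sensitive grid regions reaches the threshold."""
    return J / (M * N) >= thre
```

For a 7 × 7 grid with thre = 0.2, twelve sensitive regions (12/49 ≈ 0.245) trigger a sensitive verdict, while five (5/49 ≈ 0.102) do not.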
It should be noted that there may be several specific methods for determining whether the image to be identified is a sensitive image from the statistical result. For example, the ratio between the number of regions classified as sensitive and the number of regions classified as normal in the same image may be calculated, and the image determined to be sensitive if this ratio exceeds a preset first threshold; alternatively, the image may be determined to be sensitive directly when the number of regions classified as sensitive in the same image exceeds a preset count threshold, and normal otherwise.
In this embodiment, an image to be identified is first sent into a pre-trained convolutional neural network to obtain a convolutional feature layer of the image; the convolutional feature layer is divided into more than two regions to be classified; feature vectors of the regions to be classified are then extracted; the extracted feature vectors of all the regions are sent into a fully connected layer of the convolutional neural network for classification, obtaining a classification for each region, where the classifications include a normal category and a sensitive category; finally, whether the image to be identified is a sensitive image is judged according to the statistics over the regions classified into the sensitive category, yielding a judgment result. Because this embodiment does not depend on skin color detection, swimsuit photos with large skin color areas are not wrongly identified as pornographic images, and the false detection rate is reduced.
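The overall pipeline can be sketched end to end as below. This is only an illustrative skeleton: `conv_feature_layer` and the randomly initialized fully connected weights are stand-ins for the pre-trained network, which the patent does not specify, and the grid size and threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature_layer(image):
    """Stand-in for the pre-trained CNN backbone: returns a C x H x W feature map.
    A real implementation would run the image through the convolutional layers."""
    c, h, w = 8, 12, 12
    return rng.standard_normal((c, h, w))

def classify_region(feature_vec, weights, bias):
    """Stand-in for the fully connected layer: argmax over {normal, sensitive}."""
    logits = weights @ feature_vec + bias
    return int(np.argmax(logits))          # 0 = normal, 1 = sensitive

def identify(image, M=3, N=3, thre=0.2):
    fmap = conv_feature_layer(image)       # step 1: convolutional feature layer
    c, h, w = fmap.shape
    weights = rng.standard_normal((2, c))
    bias = np.zeros(2)
    labels = np.zeros((M, N), dtype=int)
    for i in range(M):                     # step 2: divide into M x N regions
        for j in range(N):
            region = fmap[:, i*h//M:(i+1)*h//M, j*w//N:(j+1)*w//N]
            vec = region.max(axis=(1, 2))  # step 3: feature vector per region
            labels[i, j] = classify_region(vec, weights, bias)  # step 4: classify
    ratio = labels.sum() / (M * N)         # step 5: statistics of sensitive regions
    return ratio >= thre

print(identify(None))
```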
Compared with the prior art, for example the technology disclosed in application CN104992177, which simply introduces a convolutional neural network as an end-to-end classifier and thus does not fully utilize the feature expression capability of the network, the embodiment of the invention both adopts the convolutional neural network mechanism and divides the image feature layer into regions, calculating the probability that each small region contains exposed sensitive parts such as breasts or pudendum. The spatial characteristics of the network's convolutional layers are thereby used more fully, and the detection rate is greatly improved when identifying pornographic images with small skin color areas but exposed organs.
In addition, because the sensitive image identification method of this embodiment divides the convolutional feature layer into more than two regions to be classified, most of the convolution computation is shared among the regions. The method is therefore more efficient, with lower time complexity, than performing sliding-window detection directly on the image to be identified.
The embodiment of the invention can be implemented in various scenarios, such as a public cloud (a cloud platform provided by a third-party provider for general use), a private cloud (a cloud platform constructed for exclusive use by one user), an x86 terminal, an ARM terminal, a graphics processing unit (GPU), a personal computer terminal, a mobile phone terminal, and the like.
The above mainly describes a sensitive image identification method; a sensitive image identification apparatus is described in detail below.
Fig. 3 is a block diagram showing an embodiment of a sensitive image identification apparatus according to an embodiment of the present invention.
In this embodiment, a sensitive image identification apparatus includes:
the to-be-identified feature layer acquisition module 301 is configured to send the to-be-identified image to a pre-trained convolutional neural network to obtain a convolutional feature layer of the to-be-identified image;
a to-be-classified region dividing module 302, configured to divide the convolution feature layer of the to-be-identified image into more than two to-be-classified regions;
a feature vector extraction module 303, configured to extract a feature vector of the region to be classified;
a to-be-classified region distinguishing module 304, configured to send the extracted feature vectors of all the to-be-classified regions into a full connection layer of the convolutional neural network for classification and distinguishing, so as to obtain a classification corresponding to each to-be-classified region, where the classification includes a normal classification and a sensitive classification;
the sensitive image determining module 305 is configured to determine whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category, so as to obtain a determination result.
Further, the convolutional neural network can be pre-trained by the following modules:
the training characteristic layer acquisition module is used for sending a training image into a convolutional neural network to obtain a convolutional characteristic layer of the training image, wherein the training image comprises a normal image and a sensitive image, and a sensitive area on the sensitive image is labeled as a sensitive category in advance;
the test region dividing module is used for dividing the convolution characteristic layer of the training image into more than two test classification regions;
the test feature vector extraction module is used for extracting feature vectors of the test classification areas;
the test classification region distinguishing module is used for sending the extracted feature vectors of all the test classification regions into a full connection layer of the convolutional neural network for classification and distinguishing to obtain a classification corresponding to each test classification region, and the classification comprises a normal classification and a sensitive classification;
the classification correctness determining module is used for determining that the classification of the test classification region is correct if the classification result of the test classification region is consistent with the pre-labeled class of the training image;
and the iteration updating module is used for iteratively updating the model parameters of the convolutional neural network according to the test classification area with correct classification.
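The training modules above can be sketched as one illustrative update step. The patent only states that model parameters are iteratively updated according to the region-level classification results; the cross-entropy gradient step on the fully connected layer shown here is a standard stand-in, and the data shapes, learning rate, and synthetic labels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def training_step(region_vecs, region_labels, W, b, lr=0.1):
    """One illustrative update of the fully connected layer from region labels.

    region_vecs: (R, C) feature vectors of the test classification regions.
    region_labels: (R,) pre-labelled classes, 0 = normal, 1 = sensitive.
    Returns the fraction of regions classified correctly before the update.
    """
    logits = region_vecs @ W + b                          # (R, 2) class scores
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)             # softmax probabilities
    correct = probs.argmax(axis=1) == region_labels       # correctly classified regions
    onehot = np.eye(2)[region_labels]
    grad = region_vecs.T @ (probs - onehot) / len(region_labels)
    W -= lr * grad                                        # iterative parameter update
    b -= lr * (probs - onehot).mean(axis=0)
    return correct.mean()

C = 8
W = rng.standard_normal((C, 2)) * 0.01
b = np.zeros(2)
X = rng.standard_normal((32, C))          # synthetic region feature vectors
y = (X[:, 0] > 0).astype(int)             # synthetic region labels
acc = [training_step(X, y, W, b) for _ in range(50)]
print(acc[-1] >= acc[0])                  # accuracy improves over the iterations
```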
Further, the to-be-classified region dividing module may be specifically configured to divide the convolution feature layer of the to-be-identified image into M × N mesh regions as the to-be-classified region, where M and N are positive integers.
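The M × N grid division can be sketched as follows. The feature-map dimensions are illustrative, and H and W are assumed divisible by M and N for simplicity (a real implementation could round region boundaries).

```python
import numpy as np

def divide_into_grid(feature_map, M, N):
    """Divide a C x H x W convolutional feature layer into M x N grid regions.

    Returns a list of M * N sub-tensors, row by row.
    """
    c, h, w = feature_map.shape
    rh, rw = h // M, w // N                # spatial size of each grid region
    return [feature_map[:, i*rh:(i+1)*rh, j*rw:(j+1)*rw]
            for i in range(M) for j in range(N)]

fmap = np.zeros((8, 12, 16))               # e.g. 8 channels, 12 x 16 feature map
regions = divide_into_grid(fmap, M=3, N=4)
print(len(regions), regions[0].shape)      # 12 regions, each 8 x 4 x 4
```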
Further, the feature vector extraction module may specifically include:
a maximum value sampling unit, configured to perform maximum value sampling on adjacent sub-regions of the region to be classified to obtain an n-dimensional feature vector, where h and w are the length and the width of the region to be classified, respectively.
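Reading the maximum value sampling as per-channel max pooling over the region (an interpretation, since the exact sub-region sizes are garbled in the text above), a minimal sketch looks like this:

```python
import numpy as np

def max_value_sample(region):
    """Max-value sampling over a C x h x w region: take the per-channel maximum
    over the spatial positions, yielding an n-dimensional feature vector
    (n equals the channel count C in this sketch)."""
    return region.max(axis=(1, 2))

region = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # 2 channels, 3 x 4
vec = max_value_sample(region)
print(vec)  # [11. 23.]
```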
Further, the sensitive image determining module may specifically include:
the sensitive region number counting unit is used for counting the number of the sensitive regions of the regions to be classified into the sensitive categories;
the ratio calculation unit is used for calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and the image determining unit is used for judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A sensitive image identification method, comprising:
sending the image to be identified into a convolutional neural network trained in advance to obtain a convolutional characteristic layer of the image to be identified;
dividing the convolution characteristic layer of the image to be identified into M x N grid areas serving as areas to be classified, wherein M and N are positive integers; extracting a feature vector of the region to be classified;
sending the extracted feature vectors of all the regions to be classified into a full-connection layer of the convolutional neural network for classification and judgment to obtain the probability of each class of each region to be classified, and selecting the class corresponding to the maximum probability in the classified region as the class of the region to be classified, wherein the class comprises a normal class and a sensitive class;
and judging whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category to obtain a judgment result.
2. The sensitive image identification method according to claim 1, wherein the convolutional neural network is pre-trained by the following steps:
sending a training image into a convolutional neural network to obtain a convolutional characteristic layer of the training image, wherein the training image comprises a normal image and a sensitive image, and a sensitive area on the sensitive image is marked as a sensitive category in advance;
dividing the convolution characteristic layer of the training image into more than two test classification areas;
extracting a feature vector of the test classification area;
sending the extracted feature vectors of all the test classification areas into a full connection layer of the convolutional neural network for classification and judgment to obtain a classification corresponding to each test classification area, wherein the classification comprises a normal classification and a sensitive classification;
if the classification result of the test classification region is consistent with the pre-labeled class of the training image, determining that the classification of the test classification region is correct;
and iteratively updating the model parameters of the convolutional neural network according to the test classification region with correct classification.
3. The sensitive image identification method according to claim 1, wherein said extracting the feature vector of the region to be classified specifically comprises:
4. The sensitive image identification method according to any one of claims 1 to 3, wherein the judging whether the image to be identified is a sensitive image according to the statistical result of the region to be classified into the sensitive category specifically includes:
counting the number of the sensitive areas of the areas to be classified into the sensitive categories;
calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
5. A sensitive image identification apparatus, comprising:
the system comprises a to-be-identified characteristic layer acquisition module, a to-be-identified characteristic layer acquisition module and a to-be-identified characteristic layer acquisition module, wherein the to-be-identified characteristic layer acquisition module is used for sending an image to be identified into a convolutional neural network which is trained in advance to obtain a convolutional characteristic layer of the image to be identified;
the to-be-classified region dividing module is used for dividing the convolution characteristic layer of the to-be-identified image into M x N grid regions serving as the to-be-classified regions, wherein M and N are positive integers; the characteristic vector extraction module is used for extracting the characteristic vector of the region to be classified;
the to-be-classified area distinguishing module is used for sending the extracted feature vectors of all the to-be-classified areas into a full connection layer of the convolutional neural network for classification and distinguishing to obtain the probability of each class of each to-be-classified area, and selecting the class corresponding to the maximum probability in the to-be-classified area as the class of the to-be-classified area, wherein the class comprises a normal class and a sensitive class;
and the sensitive image judging module is used for judging whether the image to be identified is a sensitive image according to the statistical result of the to-be-classified area classified into the sensitive category to obtain a judging result.
6. The sensitive image identification apparatus according to claim 5, wherein the convolutional neural network is pre-trained by:
the training characteristic layer acquisition module is used for sending a training image into a convolutional neural network to obtain a convolutional characteristic layer of the training image, wherein the training image comprises a normal image and a sensitive image, and a sensitive area on the sensitive image is labeled as a sensitive category in advance;
the test region dividing module is used for dividing the convolution characteristic layer of the training image into more than two test classification regions;
the test feature vector extraction module is used for extracting feature vectors of the test classification areas;
the test classification region distinguishing module is used for sending the extracted feature vectors of all the test classification regions into a full connection layer of the convolutional neural network for classification and distinguishing to obtain a classification corresponding to each test classification region, and the classification comprises a normal classification and a sensitive classification;
the classification correctness determining module is used for determining that the classification of the test classification region is correct if the classification result of the test classification region is consistent with the pre-labeled class of the training image;
and the iteration updating module is used for iteratively updating the model parameters of the convolutional neural network according to the test classification area with correct classification.
7. The apparatus according to claim 5, wherein the feature vector extraction module specifically comprises:
8. The apparatus according to any one of claims 5 to 7, wherein the sensitive image determining module specifically includes:
the sensitive region number counting unit is used for counting the number of the sensitive regions of the regions to be classified into the sensitive categories;
the ratio calculation unit is used for calculating the ratio of the number of the sensitive areas to the total number of the areas to be classified;
and the image determining unit is used for judging whether the ratio exceeds a preset threshold value, if so, determining that the image to be identified is a sensitive image, and if not, determining that the image to be identified is a normal image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610846341.XA CN107871314B (en) | 2016-09-23 | 2016-09-23 | Sensitive image identification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107871314A CN107871314A (en) | 2018-04-03 |
CN107871314B true CN107871314B (en) | 2022-02-18 |
Family
ID=61751619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610846341.XA Active CN107871314B (en) | 2016-09-23 | 2016-09-23 | Sensitive image identification method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107871314B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110490027B (en) * | 2018-05-15 | 2023-06-16 | 触景无限科技(北京)有限公司 | Face feature extraction training method and system |
CN109191451B (en) * | 2018-09-11 | 2020-10-23 | 百度在线网络技术(北京)有限公司 | Abnormality detection method, apparatus, device, and medium |
CN109359551A (en) * | 2018-09-21 | 2019-02-19 | 深圳市璇玑实验室有限公司 | A kind of nude picture detection method and system based on machine learning |
CN109640174A (en) * | 2019-01-28 | 2019-04-16 | Oppo广东移动通信有限公司 | Method for processing video frequency and relevant device |
CN109840590A (en) * | 2019-01-31 | 2019-06-04 | 福州瑞芯微电子股份有限公司 | A kind of scene classification circuit framework neural network based and method |
CN110163300B (en) * | 2019-05-31 | 2021-04-23 | 北京金山云网络技术有限公司 | Image classification method and device, electronic equipment and storage medium |
CN111738290B (en) * | 2020-05-14 | 2024-04-09 | 北京沃东天骏信息技术有限公司 | Image detection method, model construction and training method, device, equipment and medium |
CN112598016A (en) * | 2020-09-17 | 2021-04-02 | 北京小米松果电子有限公司 | Image classification method and device, communication equipment and storage medium |
CN113936195B (en) * | 2021-12-16 | 2022-02-25 | 云账户技术(天津)有限公司 | Sensitive image recognition model training method and device and electronic equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6266664B1 (en) * | 1997-10-01 | 2001-07-24 | Rulespace, Inc. | Method for scanning, analyzing and rating digital information content |
JP4624841B2 (en) * | 2005-04-13 | 2011-02-02 | オリンパスメディカルシステムズ株式会社 | Image processing apparatus and image processing method in the image processing apparatus |
CN104346622A (en) * | 2013-07-31 | 2015-02-11 | 富士通株式会社 | Convolutional neural network classifier, and classifying method and training method thereof |
CN104182735A (en) * | 2014-08-18 | 2014-12-03 | 厦门美图之家科技有限公司 | Training optimization pornographic picture or video detection method based on convolutional neural network |
CN104992177A (en) * | 2015-06-12 | 2015-10-21 | 安徽大学 | Internet porn image detection method based on deep convolution nerve network |
Also Published As
Publication number | Publication date |
---|---|
CN107871314A (en) | 2018-04-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107871314B (en) | Sensitive image identification method and device | |
CN107358242B (en) | Target area color identification method and device and monitoring terminal | |
WO2022033150A1 (en) | Image recognition method, apparatus, electronic device, and storage medium | |
CN111738357B (en) | Junk picture identification method, device and equipment | |
US20180322321A1 (en) | Image identification method, terminal and non-volatile storage medium | |
CN112966691B (en) | Multi-scale text detection method and device based on semantic segmentation and electronic equipment | |
CN109472209B (en) | Image recognition method, device and storage medium | |
CN112801008A (en) | Pedestrian re-identification method and device, electronic equipment and readable storage medium | |
CN113128481A (en) | Face living body detection method, device, equipment and storage medium | |
CN112836625A (en) | Face living body detection method and device and electronic equipment | |
CN110852327A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN116311214B (en) | License plate recognition method and device | |
CN112364873A (en) | Character recognition method and device for curved text image and computer equipment | |
CN111401343B (en) | Method for identifying attributes of people in image and training method and device for identification model | |
CN112417955A (en) | Patrol video stream processing method and device | |
CN113705294A (en) | Image identification method and device based on artificial intelligence | |
CN112132867B (en) | Remote sensing image change detection method and device | |
CN115223022B (en) | Image processing method, device, storage medium and equipment | |
CN115731422A (en) | Training method, classification method and device of multi-label classification model | |
CN114283087A (en) | Image denoising method and related equipment | |
CN114549502A (en) | Method and device for evaluating face quality, electronic equipment and storage medium | |
CN115424250A (en) | License plate recognition method and device | |
CN113762249A (en) | Image attack detection and image attack detection model training method and device | |
CN112712080B (en) | Character recognition processing method for acquiring image by moving character screen | |
CN116405330B (en) | Network abnormal traffic identification method, device and equipment based on transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||