WO2010037332A1 - Training method and device for a classifier, and method and device for recognizing pictures - Google Patents
- Publication number
- WO2010037332A1 (PCT/CN2009/074110, CN2009074110W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- feature
- skin color
- picture
- classifier
- sample set
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Definitions
- the present invention relates to the field of image recognition, and in particular, to a training method and device for a classifier, and a method and device for identifying a picture.

Background of the invention
- Sensitive pictures, such as the erotic images found in harmful information, pollute the social atmosphere and endanger the physical and mental health of young people; identifying and intercepting such sensitive pictures is a key task in purifying Internet content.
- the existing skin color detection technology is mainly based on the statistical probability distribution of human skin color.
- the widely used skin color detection method is the Bayes decision method. The method counts the distributions of skin color and non-skin color on a large sample set; for a given color, the Bayes formula is applied to these two distributions to calculate the posterior probability that the color is skin, and whether a pixel belongs to a skin color region or a non-skin color region is decided according to this probability.
- the shape features of the human body region commonly used in the prior art mainly include: the ratio of the skin area to the image area (the skin area refers to the region composed of all skin pixels and need not be connected); the ratio of the area of the largest skin blob to the image area (a skin blob refers to a connected region of skin pixels); the number of skin blobs; the ratio of the area of a skin blob to that of its circumscribed rectangle (or convex hull); the semi-axis lengths, eccentricity and direction of the equivalent ellipse of a skin blob; the moment invariants of the skin area; the area of the face region; and so on.
- the training picture set consists of a positive example sample set (composed of sensitive pictures) and a counterexample sample set (composed of normal pictures).
- the features extracted on each sample set are labeled with their respective tags and then used to train the classifier.
- the classifiers used for this problem mainly include support vector machines (SVMs), multilayer perceptrons (MLPs), decision trees, and the like.
- the embodiment of the invention provides a training method and device for a picture classifier, which can reduce the missed detection rate and the false detection rate of the classifier obtained by training;
- a training method for a picture classifier comprising the steps of:
- A. dividing the training picture set used for classifier training into a positive example sample set and two or more counterexample sample sets;
- B. determining, for each counterexample sample set, a feature group used to distinguish the positive example sample set from that counterexample sample set;
- C. obtaining classifiers by training with the determined feature groups.
- the invention also discloses a training device for a picture classifier, comprising:
- a training picture set, where the training picture set includes a positive example sample set and two or more counterexample sample sets;
- a feature determining module, configured to determine, for each counterexample sample set, a feature group for distinguishing the positive example sample set from that counterexample sample set; and
- a feature training module, configured to obtain classifiers by performing classifier training with the features of the feature groups.
- the present invention subdivides the counterexample sample set, performs separability experiments on a large number of region shape features for each type of counterexample sample set, and separately finds the feature groups for distinguishing each type of counterexample picture from sensitive pictures; different feature groups are then used to train multiple classifiers, so that the missed detection rate and false detection rate of the trained classifiers are greatly reduced.
- the embodiment of the invention further provides a method and a device for recognizing a picture, which can improve the accuracy of identifying a picture.
- a method for identifying a picture by using the above picture classifier comprising the steps of:
- the region shape features included in a feature group are extracted in the skin color or similar skin color region, and the picture to be reviewed is identified according to those region shape features and the classifier trained with the feature group that includes them.
- the invention also discloses a picture recognition device, comprising:
- a skin color area mapping module, configured to obtain the skin color or similar skin color region of the picture to be reviewed; and
- a classifier, configured to extract the region shape features included in a feature group in the skin color or similar skin color region, and to identify the picture to be reviewed according to those region shape features.
- the region shape features in the classifiers used to identify the picture to be reviewed are distinguishing region shape features found through separability experiments for each type of counterexample sample set; each type of counterexample picture can therefore be discriminated with better accuracy, which improves the accuracy of sensitive picture recognition.
- FIG. 1a is a basic flowchart of training of a picture classifier according to an embodiment of the present invention
- FIG. 1b is a detailed flowchart of training of a picture classifier according to an embodiment of the present invention
- FIG. 2a is a basic flowchart of a method for identifying a picture according to an embodiment of the present invention
- FIG. 2b is a detailed flowchart of a method for identifying a picture according to an embodiment of the present invention
- FIG. 3 is a view showing an example of a skin color test result
- FIG. 4a is a basic structural diagram of a training device for a picture classifier according to an embodiment of the present invention
- FIG. 4b is a detailed structural diagram of a training device for a picture classifier according to an embodiment of the present invention
- FIG. 1a is a basic flowchart of training of a picture classifier according to an embodiment of the present invention. As shown in FIG. 1a, the process can include the following steps:
- step 101a: the training picture set for classifier training is divided into a positive example sample set and two or more counterexample sample sets.
- the embodiment of the present invention further subdivides the counterexample samples of the prior art: for example, according to the actual situation, the counterexample pictures are subdivided into a first counterexample sample set, a second counterexample sample set, and so on, according to their degree of feature overlap with the positive example pictures. This alleviates the problem that the distributions of some region shape features of some counterexample pictures are dispersed, which increases the degree of overlap between the features of the positive examples and the counterexamples.
- Step 102a: for each counterexample sample set, determine the feature group used to distinguish the positive example sample set from that counterexample sample set.
- the feature groups determined in step 102a are respectively: a first feature group used to distinguish the positive example sample set from the first counterexample sample set, and a second feature group used to distinguish the positive example sample set from the second counterexample sample set, wherein each feature group includes the corresponding region shape features.
- the operation of determining the region shape features included in each feature group may be implemented in various ways: for example, they may be set in advance according to the actual situation, or determined according to the distributions of the region shape features in the positive example sample set and the respective counterexample sample sets, and so on. The latter determination can be carried out specifically through steps 102b to 103b in FIG. 1b.
- Step 103a: obtaining classifiers by training with the determined feature groups.
- step 103a includes: obtaining a first classifier by training with the determined first feature group, and obtaining a second classifier by training with the determined second feature group.
- FIG. 1b is a detailed flowchart of training of a picture classifier according to an embodiment of the present invention.
- this embodiment takes as an example subdividing the counterexample sample set into a first counterexample sample set and a second counterexample sample set.
- this embodiment can also continue to subdivide the counterexample sample set; the specific operation is similar to that of this embodiment.
- the division of counterexample sample sets is mainly determined according to the principle of feature overlap with the positive example sample set: the first counterexample sample set is usually the one whose features overlap least with the positive example sample set, while the features of the second counterexample sample set overlap with those of the positive example sample set more than the features of the first counterexample sample set do.
- scene pictures are taken as the first counterexample sample set, portrait pictures as the second counterexample sample set, and sensitive pictures as the positive example sample set. Other pictures may also be used in this embodiment; this division is merely an example and is not intended to limit the embodiments of the invention.
- the process can include the following steps:
- the separability experiment is first performed on the region shape features: the region shape features are extracted from each of the three types of sample sets (step 100b), and the distribution features of each extracted region shape feature in the positive example sample set, the first counterexample sample set and the second counterexample sample set are measured (step 101b); the separability of each region shape feature is then determined according to its distribution features (step 102b). According to the different separability of different region shape features across the sample sets, the region shape features with better separability are selected: those with better separability between the positive example sample set and the first counterexample sample set are labeled as the first feature group, and those with better separability between the positive example sample set and the second counterexample sample set are labeled as the second feature group (step 103b); finally, a classifier is trained with the region shape features of the first feature group to obtain a first classifier, and a classifier is trained with the region shape features of the second feature group to obtain a second classifier (step 104b).
- the present embodiment thus proposes two feature groups and trains two classifiers to perform multi-stage classification of the picture to be recognized, which can reduce the false detection rate of the classifier.
- Typical area shape features include, but are not limited to, the following types:
- the ratio of the skin area to the image area; the number of skin blobs; the ratio of the largest skin blob to the image area; the eccentricity of the largest skin blob (the eccentricity of an ellipse whose moment of inertia equals that of the largest skin blob); compactness (the ratio of the blob contour length to the blob area); near-circularity (the ratio of the blob area to the area of its circumscribed circle); near-rectangularity (the ratio of the blob area to the area of its minimum circumscribed rectangle);
- the density of edge pixels in the largest skin blob (an edge pixel refers to a point on a Canny edge line of the image); the number of medium-to-long straight line segments in the largest skin blob (a medium-to-long straight line segment refers to a line segment containing more pixels than a certain threshold, detected and filtered with a line detector);
- the ratio of the face blob area to the largest skin blob area; the ratios of the horizontal and vertical distances between the center of gravity of the face blob and that of the largest skin blob to the height and width of the face blob.
- At least one of the above various regional shape features may be extracted when step 100b is performed. It is worth noting that other region shape features may also be extracted for feature separability experiments.
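For illustration only (not part of the claimed method), the following sketch computes two of the region shape features named above on a binary skin mask: the ratio of the skin area to the image area, and the eccentricity of the equivalent ellipse derived from second-order central moments. For simplicity the moments are taken over all skin pixels rather than a single blob, and the pure-Python representation of the mask is an assumption.

```python
import math

def shape_features(mask):
    """mask: 2-D list of 0/1 skin labels. Returns the skin-to-image area
    ratio and the eccentricity of the equivalent ellipse of the skin pixels
    (computed from second-order central moments)."""
    h, w = len(mask), len(mask[0])
    pts = [(y, x) for y in range(h) for x in range(w) if mask[y][x]]
    area = len(pts)
    area_ratio = area / (h * w)
    cy = sum(p[0] for p in pts) / area
    cx = sum(p[1] for p in pts) / area
    mu20 = sum((x - cx) ** 2 for y, x in pts) / area
    mu02 = sum((y - cy) ** 2 for y, x in pts) / area
    mu11 = sum((x - cx) * (y - cy) for y, x in pts) / area
    # Eigenvalues of the covariance matrix give the axis variances of
    # the equivalent ellipse; eccentricity follows from their ratio.
    common = math.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2   # major-axis variance
    lam2 = (mu20 + mu02 - common) / 2   # minor-axis variance
    eccentricity = math.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
    return {"area_ratio": area_ratio, "eccentricity": eccentricity}

# A 4x10 solid horizontal bar in an 8x10 image: elongated, so the
# eccentricity is high and the area ratio is 0.5.
bar = [[1] * 10 if y < 4 else [0] * 10 for y in range(8)]
feats = shape_features(bar)
```

An elongated region such as the bar above yields an eccentricity above 0.9, while a square region yields a value near 0.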
- in step 101b, there are various methods in the prior art for measuring the distribution features of the extracted region shape features in the respective sample sets, for example, scatter-matrix-based methods, distribution-histogram-based methods, and the like.
- in this embodiment, a method based on distribution histograms is used to obtain the distribution features. The specific process is as follows:
- the distribution histograms of a region shape feature in each sample set are counted and normalized; the distribution histogram of the shape feature in the sensitive pictures is compared in turn with its distribution histogram in the scene pictures and with its distribution histogram in the portrait pictures, and the intersection ratio of the histograms is used to measure the distinguishability of the region shape feature between the positive example sample set and a counterexample sample set. As an embodiment of the invention, the intersection ratio of two normalized distribution histograms is the area of the intersection of the two histograms:
- r = Σ_i min(H1(i), H2(i)) (4)
- where each distribution histogram H is normalized so that its bins sum to 1: H(i) = h(i) / Σ_k h(k) (5)
- the separability of the region shape feature may be determined according to the intersection ratio r: the smaller r is, the more separable the region shape feature is for the two sample sets, such as a positive example sample set and a counterexample sample set.
- a predetermined threshold can be chosen according to the specific application, and whether a given region shape feature is separable for the two sample sets is determined by comparing its intersection ratio r with the predetermined threshold.
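The normalization of equation (5), the intersection ratio of equation (4), and the threshold test can be sketched as follows; the 0.5 default threshold is an illustrative assumption, since the patent leaves the threshold application-specific.

```python
def normalize(hist):
    """Equation (5): scale a distribution histogram so its bins sum to 1."""
    total = sum(hist)
    return [b / total for b in hist]

def intersection_ratio(h1, h2):
    """Equation (4): area of the intersection of two normalized
    distribution histograms, r = sum_i min(H1(i), H2(i)).
    Identical distributions give 1.0, disjoint ones give 0.0."""
    n1, n2 = normalize(h1), normalize(h2)
    return sum(min(a, b) for a, b in zip(n1, n2))

def separable(h_positive, h_counter, threshold=0.5):
    """A region shape feature separates two sample sets when the overlap
    of its distributions falls below the predetermined threshold."""
    return intersection_ratio(h_positive, h_counter) < threshold

same = intersection_ratio([2, 4, 4], [1, 2, 2])       # same shape -> 1.0
disjoint = intersection_ratio([5, 0, 0], [0, 0, 5])   # no overlap -> 0.0
```

A smaller r thus directly signals better separability, matching the selection rule described above.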
- the selected region shape features comprise at least one of the region shape features described above; when step 103b is performed, the selected region shape features are labeled as the first feature group.
- the probability distribution (distribution histogram) of the face-related statistical features in the portrait picture class and their probability distribution (distribution histogram) in the sensitive picture class are used to train a Bayes classifier;
- the overall recognition error rate when distinguishing portrait pictures from sensitive pictures is about 10%, that is, the features are separable, and these shape features can be used to distinguish sensitive pictures from portrait pictures.
- the statistical feature related to the face is marked as the second feature set.
- when step 104b is executed, the sensitive pictures are used as the positive example sample set and the scene pictures as the first counterexample sample set to train the first classifier with the region shape features of the first feature group; then the sensitive pictures are used as the positive example sample set and the portrait pictures as the second counterexample sample set to train the second classifier with the features of the second feature group.
- the classifiers that can be used mainly include support vector machines (SVMs), multilayer perceptrons (MLPs), decision trees, and so on.
- both the first classifier and the second classifier can use naive Bayes classification:
- the classifier assumes that the dimensions of the feature are independent of each other, in the form of:
- P(c_j | x_1 x_2 ... x_N) ∝ P(c_j) Π_{i=1..N} P(x_i | c_j) (1)
- where (x_1, x_2, ..., x_N) is the N-dimensional region shape feature of the first feature group, and c_j, j = 1, 2 respectively represent the positive example class and the first counterexample class;
- the class-conditional probabilities P(x_i | c_j) are estimated from the positive example and counterexample sample sets; the training process of the naive Bayes classifier is the process of counting P(x_i | c_j) from the positive example and counterexample sample sets.
- the dimension features of (1) can be exponentially weighted:
- P(c_j | x_1 x_2 ... x_N) ∝ P(c_j) Π_{i=1..N} P(x_i | c_j)^(α_i) (3)
- where c_j, j = 1, 2 respectively represent the positive example class and the second counterexample class, and (x_1, x_2, ..., x_N) is the N-dimensional region shape feature of the second feature group;
- the dimension of the region shape feature of the first feature group may be the same as or different from the dimension of the region shape feature of the second feature group, and the two groups of region shape features may coincide or differ;
- α_i is a weighting factor determined according to the intersection ratio, and its value is greater than zero; a larger value indicates a larger weight, and a larger weighting factor can be used for a feature with good separability.
- for each P(x_i | c_j), a probability histogram may be used to represent its probability distribution; for the specific process, refer to the steps of the separability experiment of features described above.
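For illustration, the weighted naive Bayes scoring of equation (3) can be sketched as follows, with the per-dimension likelihoods stored as normalized probability histograms. The class names, histogram bins and weight values are invented examples; in practice the weights α_i would be derived from the intersection ratios.

```python
import math

class WeightedNaiveBayes:
    """Naive Bayes over binned shape features with the per-dimension
    exponential weights of equation (3):
    P(c_j | x) ∝ P(c_j) * prod_i P(x_i | c_j) ** alpha_i."""

    def __init__(self, priors, likelihoods, alphas):
        self.priors = priors            # {class: P(c_j)}
        self.likelihoods = likelihoods  # {class: [histogram per dimension]}
        self.alphas = alphas            # [alpha_i per dimension]

    def posteriors(self, x):
        # Work in log space to avoid underflow with many dimensions.
        scores = {}
        for c, prior in self.priors.items():
            log_p = math.log(prior)
            for hist, a, xi in zip(self.likelihoods[c], self.alphas, x):
                log_p += a * math.log(hist[xi] + 1e-12)
            scores[c] = log_p
        z = sum(math.exp(s) for s in scores.values())
        return {c: math.exp(s) / z for c, s in scores.items()}

model = WeightedNaiveBayes(
    priors={"sensitive": 0.5, "scene": 0.5},
    likelihoods={
        "sensitive": [[0.8, 0.2], [0.6, 0.4]],
        "scene":     [[0.1, 0.9], [0.5, 0.5]],
    },
    alphas=[2.0, 1.0],  # dimension 0 separates better, so it weighs more
)
post = model.posteriors([0, 0])  # both binned features fall in bin 0
```

With α_i = 1 for all i this reduces to the unweighted form of equation (1).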
- the picture classifiers trained by the method described above can be used to recognize pictures.
- the following describes a method of recognizing a picture using the picture classifiers trained as described in FIG. 1a. As shown in FIG. 2a, the process includes the following steps:
- Step 200a: obtaining the skin color or similar skin color region of the picture to be reviewed;
- the specific implementation of step 200a can refer to step 200b shown in FIG. 2b, and is not described in detail herein.
- Step 201a: extracting the region shape features included in a feature group in the skin color or similar skin color region, and identifying the picture to be reviewed according to those region shape features and the classifier trained with the feature group that includes them.
- the classifiers obtained by the method shown in FIG. 1a can accurately recognize the picture.
- FIG. 2b is a detailed flowchart of identifying a picture by using the above classifier provided by the present invention.
- a typical use of picture classifiers trained by the methods described above is for identifying sensitive pictures.
- this embodiment identifies sensitive pictures, with the counterexample sample set subdivided into a first counterexample sample set and a second counterexample sample set as in FIG. 1b, wherein the positive example sample set consists of sensitive pictures, the first counterexample sample set of scene pictures, and the second counterexample sample set of portrait pictures. Referring to FIG. 2b:
- first, the skin color or similar skin color region of the picture to be reviewed is detected by a skin color detection technique (step 200b); the region shape features of the first feature group, recorded as the first region shape features for ease of description, are extracted in the skin color or similar skin color region (step 201b). The first region shape features, which distinguish the first counterexample sample set from the positive example sample set, are used first when identifying sensitive pictures, mainly because the feature overlap between the first counterexample sample set and the positive example sample set is relatively small and the judgment is relatively easy; thus, if the judgment result in this step is YES, the current flow can end directly, saving resources.
- according to the first region shape features, the first classifier obtained by the above method identifies whether the picture to be reviewed is a scene picture (step 202b); if yes, it is determined to be a scene picture, that is, a normal picture relative to sensitive pictures (step 205b); if not, the region shape features of the second feature group, recorded as the second region shape features for ease of description, are extracted in the skin color or similar skin color region (step 203b);
- according to the second region shape features, the second classifier identifies whether the picture to be reviewed is a sensitive picture (step 204b). If not, the picture is determined to be a normal picture relative to sensitive pictures (step 205b); otherwise, it is determined to be a sensitive picture (step 206b) and is handed over for further manual review.
- based on the feature separability experiments, the invention selects one group of region shape features with better separability between scene pictures and sensitive pictures and another group with better separability between portrait pictures and sensitive pictures, so that higher discrimination accuracy can be achieved for both;
- two feature groups are proposed and two classifiers are trained respectively, and the two types of normal pictures are handled separately by the two classifiers, which greatly improves the accuracy of sensitive picture recognition.
- the currently widely used skin color detection method is the Bayes decision method.
- the method counts the distributions of skin color and non-skin color on a large sample set.
- for a given color, the Bayes formula is used to calculate the posterior probability that the color is skin according to the two distributions, and whether the pixel is skin or non-skin is determined according to this probability. Taking the skin color classification of a pixel as an example, assume that the color of the pixel is x, and that the likelihood probabilities of x in the two classes are P(x | skin) and P(x | non-skin);
- the Bayes decision rule can then be expressed as: if P(skin | x) > P(non-skin | x), the pixel is judged to be skin, otherwise non-skin;
- by the Bayes formula, the comparison of posterior probabilities in the above rule can be reduced to a comparison of likelihood probabilities weighted by the class priors. It can be proved that the overall risk (error rate) of the classification results obtained by the Bayes decision method is the smallest.
- the precondition for skin color detection using this method is that the within-class overall distributions are known, that is, the color distributions of the skin and non-skin classes have been counted on a large sample set.
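The per-pixel Bayes decision just described can be sketched as follows; the toy color quantization into three values and the equal priors are illustrative assumptions, while the class-conditional histograms stand in for the distributions counted on a large sample set.

```python
def classify_pixel(color, skin_hist, nonskin_hist, p_skin=0.5):
    """Bayes decision for one quantized color value: label the pixel skin
    when P(skin | x) > P(non-skin | x), which by the Bayes formula reduces
    to comparing P(x | skin) * P(skin) with P(x | non-skin) * P(non-skin).
    The class-conditional histograms are assumed normalized to sum to 1."""
    p_x_skin = skin_hist.get(color, 0.0)
    p_x_non = nonskin_hist.get(color, 0.0)
    return p_x_skin * p_skin > p_x_non * (1.0 - p_skin)

# Toy class-conditional color distributions over 3 quantized colors.
skin = {0: 0.7, 1: 0.2, 2: 0.1}
nonskin = {0: 0.1, 1: 0.2, 2: 0.7}
labels = [classify_pixel(c, skin, nonskin) for c in (0, 1, 2)]
```

Adjusting `p_skin` shifts the decision boundary, trading missed detections against false detections.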
- a common problem of skin color detection techniques is that regions whose color is merely similar to skin are also detected; in such pictures the detected skin area accounts for a large proportion of the whole picture, and automatically distinguishing them from sensitive pictures is also difficult. If the region shape features extracted on the detected "skin region" are not discriminative enough, a large number of normal pictures (such as natural scene pictures and portrait pictures whose colors are similar to skin color) are misidentified as sensitive.
- in executing step 200b, the present invention can also use the skin color detection technology disclosed in the applicant's application number 2008100841302, entitled "A Skin Color Detection Method and Apparatus".
- that application provides a training method for a multi-skin-color probability model, and a method for detecting skin color using the multi-skin-color probability model.
- the multi-skin-color probability model consists of a plurality of skin color probability models obtained by training for skin colors under different illumination conditions or for different types of skin color.
- an appropriate skin color probability model can then be selected for the image to be detected, thereby reducing the false detection rate or the missed detection rate.
- the skin color pixels in the training sample set are clustered in the color space to obtain at least one skin color chromaticity class; the candidate skin color region of each training sample is extracted, the distance between the chromaticity mean of the candidate skin color region and each skin color chromaticity class center is calculated, and the training sample is classified into the skin color chromaticity class at the smallest distance, yielding a training subset corresponding to each skin color chromaticity class; the skin color probability distribution and non-skin color probability distribution of each training subset are then counted to obtain the skin color probability model corresponding to each skin color chromaticity class.
- obtaining the skin color or similar skin color region of the picture to be reviewed in step 200b then includes: extracting the candidate skin color region of the picture, calculating the distances between the chromaticity mean of the candidate skin color region and the skin color class centers, and using the skin color probability model of the chromaticity class at the smallest distance to perform skin color discrimination on the pixels of the picture; the pixels determined to be skin constitute the skin color or similar skin color region.
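The model-selection step can be sketched as follows. The 2-D chroma space, the quantized pixel keys and the 0.5 probability threshold are assumptions for illustration; the referenced application would define the actual color space and models.

```python
def nearest_center(mean_chroma, centers):
    """Pick the skin color chromaticity class whose center is closest to
    the mean chromaticity of the candidate skin region (Euclidean
    distance in an assumed 2-D chroma space)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(range(len(centers)), key=lambda i: dist2(mean_chroma, centers[i]))

def detect_skin(pixels, mean_chroma, centers, models, threshold=0.5):
    """Select the skin color probability model of the nearest class and
    keep the pixels whose skin probability clears the threshold."""
    model = models[nearest_center(mean_chroma, centers)]
    return [p for p in pixels if model.get(p, 0.0) > threshold]

# Two illumination-specific models keyed by quantized pixel value.
centers = [(110.0, 150.0), (125.0, 135.0)]        # class centers in chroma
models = [{7: 0.9, 8: 0.2}, {7: 0.3, 8: 0.8}]     # per-class P(skin | pixel)
chosen = nearest_center((112.0, 148.0), centers)  # closest to the first class
skin_pixels = detect_skin([7, 8, 9], (112.0, 148.0), centers, models)
```

Because each illumination condition gets its own model, a picture is judged by the distribution that actually matches its lighting, which is how the false detection or missed detection rate is reduced.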
- the process of classifying (identifying) the images to be reviewed using the naive Bayes classifier described above is as follows:
- the first region shape features (x_1 x_2 ... x_N) of the first feature group are extracted from the picture to be reviewed, the class posterior probabilities P(c_j | x_1 x_2 ... x_N), j = 1, 2, are obtained from the first classifier, and the Bayes decision is made with a threshold T, whose value is generally 0.5 and can also be adjusted according to the risk of the two types of misclassification.
- the quantity P(c_1 | x_1 x_2 ... x_N) / (P(c_1 | x_1 x_2 ... x_N) + P(c_2 | x_1 x_2 ... x_N)) in the above formula is called the confidence value; when the confidence value is lower than the threshold, the picture to be reviewed is recognized as a scene picture. Otherwise, the picture to be reviewed is further identified by the following steps:
- the region shape features (x_1 x_2 ... x_N) of the second feature group are extracted from the picture to be reviewed, and the class posterior probabilities P(c_j | x_1 x_2 ... x_N), j = 1, 2, are obtained from the second classifier (the naive Bayes classifier). It is worth noting that
- the dimension of the region shape features of the first feature group may be the same as or different from the dimension of the region shape features of the second feature group,
- and the shape features of the two groups may coincide or differ; the Bayes decision is then performed using the threshold T:
- the value of T is generally 0.5, and can also be adjusted according to the risk of the two types of misclassification;
- it may be the same as or different from the threshold used in the Bayes decision of the first classifier, and there is no necessary relationship between the two.
- if the decision favors the counterexample class, the picture to be reviewed is recognized as a portrait picture; otherwise, the picture to be reviewed is recognized as a sensitive picture.
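The two-stage decision of FIG. 2b can be sketched as follows. The function and label names are illustrative; the second posterior is passed lazily because the second feature group is only extracted when the first stage does not already reject the picture.

```python
def classify_picture(post1_sensitive, post2_sensitive_fn, t1=0.5, t2=0.5):
    """Two-stage cascade: post1_sensitive is the first classifier's
    confidence that the picture is sensitive rather than a scene;
    post2_sensitive_fn() computes the second classifier's confidence
    that it is sensitive rather than a portrait."""
    if post1_sensitive < t1:
        return "scene"      # normal picture; the flow ends early here
    return "sensitive" if post2_sensitive_fn() >= t2 else "portrait"

result = classify_picture(0.2, lambda: 0.9)  # rejected at the first stage
```

The thresholds t1 and t2 correspond to the two independent values of T described above and need not be equal.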
- the present invention also provides a training device for the corresponding classifier.
- as shown in FIG. 4a, the training device of the picture classifier disclosed by the present invention basically comprises: a training picture set 401a, where
- the training picture set may include a positive example sample set and two or more counterexample sample sets.
- the counter-example sample set may specifically include a first counter-example sample set and a second counter-example sample set.
- the feature determining module 402a determines, for each counterexample sample set, a feature group for distinguishing the positive example sample set from that counterexample sample set; and the feature training module 403a is configured to obtain classifiers by performing classifier training with the features of the feature groups.
- FIG. 4b is a detailed structural diagram of a training device according to an embodiment of the present invention.
- the device includes: a training picture set 401b, a feature determining module 402b, and a feature training module 403b, where the functions of the training picture set 401b, the feature determining module 402b, and the feature training module 403b are respectively similar to those of the training picture set 401a, the feature determining module 402a, and the feature training module 403a, and are not described herein again.
- the feature determining module 402b may specifically include: a feature separability decision module 4021b and a feature tagging module 4022b.
- the feature separability decision module 4021b obtains the region shape features in the positive example sample set, the first counterexample sample set and the second counterexample sample set respectively, measures, for each region shape feature, its distribution features in the positive example sample set, the first counterexample sample set, and the second counterexample sample set, and determines the separability of the region shape feature according to those distribution features; here, the determination of separability can be implemented by any one of the methods described above, and details are not repeated.
- the feature tagging module 4022b marks the region shape features having separability with respect to the first counterexample sample set as the first feature group, and marks the region shape features having separability with respect to the second counterexample sample set as the second feature group.
- the feature training module 403b is configured to obtain a first classifier through the determined first feature set training, and obtain a second classifier through the determined second feature set training.
- the first feature group of the device may further include at least one of a first sub-feature group, a second sub-feature group, and a third sub-feature group, where each sub-feature group includes region shape features found to separate well in the separability experiments.
- the first sub-feature group includes at least one of the following region shape features: the first 3 components of the Hu moments of the skin region; the Zernike moments Z22, Z40 and Z42, among the first 4 orders, of the largest skin blob; the high-frequency components of the Fourier descriptor of the largest skin blob; curvature energy; and near-rectangularity.
- the second sub-feature group includes at least one of the following region shape features: Z11 among the Zernike moments of the largest skin blob, and the eccentricity of the largest skin blob.
- the third sub-feature group includes at least one of the following regional shape features: maximum skin blob to image area ratio, compactness, density of edge pixels.
- the feature separability decision module 4021b may include:
- the distribution probability statistics module 4023b is configured to separately calculate a distribution histogram of the shape feature of the region in the positive example sample set, the first counter sample set, and the second counter sample set for each extracted shape feature;
- the separability module 4024b is configured to normalize the distribution histogram and determine an intersection ratio of the normalized histogram; and determine the separability of the shape feature of the region according to the intersection ratio.
- the training apparatus for the picture classifier shown in FIG. 4a or FIG. 4b can be implemented in the relevant manners mentioned in the training method of the classifier described above, and will not be described again. It is worth noting that the training device in FIG. 4a or FIG. 4b is only one instantiation of the training method of the classifier, and is not the only physical device that can implement that training method.
- Correspondingly, the present invention also proposes a picture recognition device.
- The picture recognition device includes a skin color area mapping module 501 and a classifier 502.
- The skin color area mapping module 501 acquires the skin color or similar skin color area of the picture to be reviewed;
- the classifier 502 is configured to extract, in the skin color or similar skin color area, the region shape features included in the feature group, and to identify the picture to be reviewed according to those region shape features.
- The embodiment of the present invention takes the identification of sensitive pictures as an example.
- The classifier includes a first classifier 5021 and a second classifier 5022, wherein:
- the first classifier 5021 is configured to extract, in the skin color or similar skin color region, the first region shape features of the first feature group, where the first feature group is the feature set used to distinguish the positive example sample set from the first counterexample sample set, and the first counterexample sample set is a scene picture set; to determine, according to the first region shape features, whether the picture to be reviewed is a scene picture; and, if not, to notify the second classifier 5022;
- the second classifier 5022 is connected to the first classifier 5021, and is configured to extract, in the skin color or similar skin color region, the second region shape features of the second feature group, where the second feature group is the feature set used to distinguish the positive example sample set from the second counterexample sample set, and the positive example sample set is a sensitive picture set; and to identify, according to the second region shape features, whether the picture to be reviewed is a sensitive picture.
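The two-stage arrangement of classifiers 5021 and 5022 can be sketched as a simple cascade, where stage one rejects ordinary scene pictures and only survivors reach stage two. All callables below (the two classifiers and the two feature extractors) are hypothetical placeholders for the trained components:

```python
def review_picture(picture, skin_mask, first_clf, second_clf,
                   extract_first, extract_second):
    """Two-stage review mirroring classifiers 5021/5022 (illustrative sketch).
    first_clf / second_clf and the extractors are placeholders, not the
    patent's actual components."""
    v1 = extract_first(picture, skin_mask)      # first feature group
    if first_clf(v1) == "scene":
        return "benign"                         # filtered out at stage one
    v2 = extract_second(picture, skin_mask)     # second feature group
    return "sensitive" if second_clf(v2) == "sensitive" else "benign"
```

The cascade design means the (cheaper) scene/non-scene decision prunes most pictures before the sensitive-picture classifier runs at all.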
- The implementation of the first classifier and the second classifier is as described in FIG. 1b above, and details are not repeated here.
- The first classifier or the second classifier may be a Bayes classifier. The Bayes classifier may include: a posterior probability calculation module, configured to calculate the posterior probability that the feature vector of the first feature group belongs to the positive examples or the first counterexamples, and the posterior probability that the feature vector of the second feature group belongs to the positive examples or the second counterexamples; and a decision module, configured to make a Bayes decision according to the posterior probabilities, so as to identify whether the picture to be reviewed is a scene picture or a sensitive picture.
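The posterior calculation and Bayes decision can be sketched with Gaussian class-conditional densities. This is one common concrete choice, not necessarily the density model used in the patent; the class names and the axis-aligned Gaussian assumption are illustrative:

```python
import numpy as np

def bayes_decide(x, means, variances, priors):
    """Minimal Gaussian naive-Bayes decision (illustrative): compute the
    posterior of each class for feature vector x, then pick the maximum
    (the Bayes decision rule)."""
    x = np.asarray(x, dtype=float)
    log_posts = {}
    for c in priors:
        m = np.asarray(means[c], dtype=float)
        v = np.asarray(variances[c], dtype=float)
        # Log-likelihood of x under an axis-aligned Gaussian for class c.
        ll = -0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
        log_posts[c] = np.log(priors[c]) + ll
    # Normalize log-posteriors into probabilities (log-sum-exp trick).
    z = np.array(list(log_posts.values()))
    p = np.exp(z - z.max())
    p /= p.sum()
    posteriors = dict(zip(log_posts, p))
    label = max(posteriors, key=posteriors.get)
    return label, posteriors
```

The decision module then compares the posteriors and labels the picture according to the larger one.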
- The skin color or similar skin color region of the picture to be reviewed acquired by the skin color region image detecting module can be obtained by the Bayes decision method in the prior art, and can also be implemented by the technical solution disclosed in the applicant's application No. 2008100841302 described above.
- The identification device for sensitive pictures may further include modules related to detecting skin color or similar skin color regions, for example: a candidate skin color region extracting module, configured to extract a candidate skin color region image from the picture to be detected; and a skin color region image detecting module, configured to calculate the chromaticity mean of the candidate skin color region, perform skin color discrimination on the pixels in the picture to be detected according to the skin color probability model corresponding to the skin color chromaticity class whose center is closest to that chromaticity mean, and form the skin color region image from the pixels determined to be skin color. The skin color chromaticity classes are obtained by clustering the skin color pixels of the training sample set in the color space; the skin color probability model assigns each training sample, by the distance between the chromaticity mean of its candidate skin color region and the centers of the skin color chromaticity classes, to the skin color chromaticity class with the smallest distance, forming a training subset corresponding to each skin color chromaticity class.
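The nearest-center assignment step described above (matching a region's chromaticity mean to the closest skin-chromaticity class before applying that class's probability model) can be sketched as follows. The Euclidean distance and the example center values are assumptions for illustration, not values from the patent:

```python
import numpy as np

def nearest_skin_class(region_chroma_mean, class_centers):
    """Assign a candidate region's mean chromaticity (e.g. a normalized
    (r, g) pair) to the closest skin-chromaticity class center, as the
    detection module does before choosing a skin probability model.
    Distance metric (Euclidean) is an illustrative choice."""
    q = np.asarray(region_chroma_mean, dtype=float)
    centers = np.asarray(class_centers, dtype=float)
    d = np.linalg.norm(centers - q, axis=1)  # distance to each class center
    return int(np.argmin(d))
```

Each returned index would select the skin color probability model trained on that class's training subset.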
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Probability & Statistics with Applications (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/856,856 US8611644B2 (en) | 2008-09-26 | 2010-08-16 | Method and apparatus for training classifier, method and apparatus for image recognition |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200810198788.6 | 2008-09-26 | ||
CN2008101987886A CN101359372B (zh) | 2008-09-26 | 2008-09-26 | 分类器的训练方法及装置、识别敏感图片的方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/856,856 Continuation US8611644B2 (en) | 2008-09-26 | 2010-08-16 | Method and apparatus for training classifier, method and apparatus for image recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010037332A1 true WO2010037332A1 (zh) | 2010-04-08 |
Family
ID=40331818
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2009/074110 WO2010037332A1 (zh) | 2008-09-26 | 2009-09-22 | 分类器的训练方法及装置、识别图片的方法及装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US8611644B2 (zh) |
CN (1) | CN101359372B (zh) |
WO (1) | WO2010037332A1 (zh) |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8995715B2 (en) * | 2010-10-26 | 2015-03-31 | Fotonation Limited | Face or other object detection including template matching |
US8335404B2 (en) * | 2007-07-20 | 2012-12-18 | Vision Louis Winter | Dynamically varying classified image display system |
CN101359372B (zh) * | 2008-09-26 | 2011-05-11 | 腾讯科技(深圳)有限公司 | 分类器的训练方法及装置、识别敏感图片的方法及装置 |
JP5521881B2 (ja) * | 2010-08-12 | 2014-06-18 | 富士ゼロックス株式会社 | 画像識別情報付与プログラム及び画像識別情報付与装置 |
CN102270303B (zh) * | 2011-07-27 | 2013-06-05 | 重庆大学 | 敏感图像的联合检测方法 |
CN103093180B (zh) * | 2011-10-28 | 2016-06-29 | 阿里巴巴集团控股有限公司 | 一种色情图像侦测的方法和系统 |
US8565486B2 (en) * | 2012-01-05 | 2013-10-22 | Gentex Corporation | Bayesian classifier system using a non-linear probability function and method thereof |
US9361377B1 (en) * | 2012-01-06 | 2016-06-07 | Amazon Technologies, Inc. | Classifier for classifying digital items |
CN102590052B (zh) * | 2012-02-28 | 2014-06-11 | 清华大学 | 液体内异物微粒粒径标定方法 |
CN102842032B (zh) * | 2012-07-18 | 2015-07-22 | 郑州金惠计算机系统工程有限公司 | 基于多模式组合策略的移动互联网色情图像识别方法 |
CN110909825B (zh) * | 2012-10-11 | 2024-05-28 | 开文公司 | 使用概率模型在视觉数据中检测对象 |
US9230383B2 (en) * | 2012-12-28 | 2016-01-05 | Konica Minolta Laboratory U.S.A., Inc. | Document image compression method and its application in document authentication |
US9305208B2 (en) * | 2013-01-11 | 2016-04-05 | Blue Coat Systems, Inc. | System and method for recognizing offensive images |
US9355406B2 (en) * | 2013-07-18 | 2016-05-31 | GumGum, Inc. | Systems and methods for determining image safety |
CN103413145B (zh) * | 2013-08-23 | 2016-09-21 | 南京理工大学 | 基于深度图像的关节点定位方法 |
KR20150051711A (ko) * | 2013-11-05 | 2015-05-13 | 한국전자통신연구원 | 유해 콘텐츠 영상 차단을 위한 피부 영역 추출 장치 및 방법 |
KR20150092546A (ko) * | 2014-02-05 | 2015-08-13 | 한국전자통신연구원 | 무해 프레임 필터 및 이를 포함하는 유해 영상 차단 장치, 무해 프레임을 필터링하는 방법 |
JP2016057918A (ja) * | 2014-09-10 | 2016-04-21 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
US9684831B2 (en) * | 2015-02-18 | 2017-06-20 | Qualcomm Incorporated | Adaptive edge-like feature selection during object detection |
US11868851B2 (en) * | 2015-03-11 | 2024-01-09 | Symphonyai Sensa Llc | Systems and methods for predicting outcomes using a prediction learning model |
CN105095911B (zh) | 2015-07-31 | 2019-02-12 | 小米科技有限责任公司 | 敏感图片识别方法、装置以及服务器 |
CN105354589A (zh) * | 2015-10-08 | 2016-02-24 | 成都唐源电气有限责任公司 | 一种在接触网图像中智能识别绝缘子裂损的方法及系统 |
CN105488502B (zh) * | 2015-11-27 | 2018-12-21 | 北京航空航天大学 | 目标检测方法与装置 |
CN107291737B (zh) * | 2016-04-01 | 2019-05-14 | 腾讯科技(深圳)有限公司 | 敏感图像识别方法及装置 |
US10795926B1 (en) * | 2016-04-22 | 2020-10-06 | Google Llc | Suppressing personally objectionable content in search results |
CN106650780B (zh) * | 2016-10-18 | 2021-02-12 | 腾讯科技(深圳)有限公司 | 数据处理方法及装置、分类器训练方法及系统 |
WO2018119406A1 (en) * | 2016-12-22 | 2018-06-28 | Aestatix LLC | Image processing to determine center of balance in a digital image |
CN108460319B (zh) * | 2017-02-22 | 2021-04-20 | 浙江宇视科技有限公司 | 异常人脸检测方法及装置 |
CN107197331B (zh) * | 2017-05-03 | 2020-01-31 | 北京奇艺世纪科技有限公司 | 一种实时监测直播内容的方法及装置 |
CN107194419A (zh) * | 2017-05-10 | 2017-09-22 | 百度在线网络技术(北京)有限公司 | 视频分类方法及装置、计算机设备与可读介质 |
WO2019027451A1 (en) * | 2017-08-02 | 2019-02-07 | Hewlett-Packard Development Company, L.P. | TRAINING CLASSIFIER TO REDUCE ERROR RATE |
CN107729924B (zh) * | 2017-09-25 | 2019-02-19 | 平安科技(深圳)有限公司 | 图片复审概率区间生成方法及图片复审判定方法 |
US20190114673A1 (en) * | 2017-10-18 | 2019-04-18 | AdobeInc. | Digital experience targeting using bayesian approach |
US11694093B2 (en) * | 2018-03-14 | 2023-07-04 | Adobe Inc. | Generation of training data to train a classifier to identify distinct physical user devices in a cross-device context |
CN109034169B (zh) * | 2018-06-29 | 2021-02-26 | 广州雅特智能科技有限公司 | 智能食物容器识别方法、装置、系统和存储介质 |
CN109586950B (zh) * | 2018-10-18 | 2022-08-16 | 锐捷网络股份有限公司 | 网络场景识别方法、网络管理设备、系统及存储介质 |
CN111292285B (zh) * | 2018-11-21 | 2023-04-07 | 中南大学 | 一种基于朴素贝叶斯与支持向量机的糖网病自动筛查方法 |
CN109902578B (zh) * | 2019-01-25 | 2021-01-08 | 南京理工大学 | 一种红外目标检测与跟踪方法 |
CN109740018B (zh) * | 2019-01-29 | 2021-03-02 | 北京字节跳动网络技术有限公司 | 用于生成视频标签模型的方法和装置 |
CN110222791B (zh) * | 2019-06-20 | 2020-12-04 | 杭州睿琪软件有限公司 | 样本标注信息的审核方法及装置 |
CN110610206A (zh) * | 2019-09-05 | 2019-12-24 | 腾讯科技(深圳)有限公司 | 图片的低俗归因识别方法、装置及设备 |
CN112819020A (zh) * | 2019-11-15 | 2021-05-18 | 富士通株式会社 | 训练分类模型的方法和装置及分类方法 |
CN110909224B (zh) * | 2019-11-22 | 2022-06-10 | 浙江大学 | 一种基于人工智能的敏感数据自动分类识别方法及系统 |
CN111047336A (zh) * | 2019-12-24 | 2020-04-21 | 太平金融科技服务(上海)有限公司 | 用户标签推送、用户标签展示方法、装置和计算机设备 |
CN111178442B (zh) * | 2019-12-31 | 2023-05-12 | 北京容联易通信息技术有限公司 | 一种提高算法精度的业务实现方法 |
CN111639665B (zh) * | 2020-04-08 | 2024-05-14 | 浙江科技学院 | 一种汽车换挡面板图像自动分类方法 |
CN115443490A (zh) * | 2020-05-28 | 2022-12-06 | 深圳市欢太科技有限公司 | 影像审核方法及装置、设备、存储介质 |
CN111639718B (zh) * | 2020-06-05 | 2023-06-23 | 中国银行股份有限公司 | 分类器应用方法及装置 |
CN112686047B (zh) * | 2021-01-21 | 2024-03-29 | 北京云上曲率科技有限公司 | 一种基于命名实体识别的敏感文本识别方法、装置、系统 |
CN116244738B (zh) * | 2022-12-30 | 2024-05-28 | 浙江御安信息技术有限公司 | 一种基于图神经网络的敏感信息检测方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101178773A (zh) * | 2007-12-13 | 2008-05-14 | 北京中星微电子有限公司 | 基于特征提取和分类器的图像识别系统及方法 |
CN100412888C (zh) * | 2006-04-10 | 2008-08-20 | 中国科学院自动化研究所 | 基于内容的敏感网页识别方法 |
CN101359372A (zh) * | 2008-09-26 | 2009-02-04 | 腾讯科技(深圳)有限公司 | 分类器的训练方法及装置、识别敏感图片的方法及装置 |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1323370C (zh) | 2004-05-28 | 2007-06-27 | 中国科学院计算技术研究所 | 一种色情图像检测方法 |
CN101251898B (zh) | 2008-03-25 | 2010-09-15 | 腾讯科技(深圳)有限公司 | 一种肤色检测方法及装置 |
- 2008
- 2008-09-26 CN CN2008101987886A patent/CN101359372B/zh active Active
- 2009
- 2009-09-22 WO PCT/CN2009/074110 patent/WO2010037332A1/zh active Application Filing
- 2010
- 2010-08-16 US US12/856,856 patent/US8611644B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100412888C (zh) * | 2006-04-10 | 2008-08-20 | 中国科学院自动化研究所 | 基于内容的敏感网页识别方法 |
CN101178773A (zh) * | 2007-12-13 | 2008-05-14 | 北京中星微电子有限公司 | 基于特征提取和分类器的图像识别系统及方法 |
CN101359372A (zh) * | 2008-09-26 | 2009-02-04 | 腾讯科技(深圳)有限公司 | 分类器的训练方法及装置、识别敏感图片的方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN101359372A (zh) | 2009-02-04 |
US8611644B2 (en) | 2013-12-17 |
US20100310158A1 (en) | 2010-12-09 |
CN101359372B (zh) | 2011-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010037332A1 (zh) | 分类器的训练方法及装置、识别图片的方法及装置 | |
CN110348319B (zh) | 一种基于人脸深度信息和边缘图像融合的人脸防伪方法 | |
CN100592322C (zh) | 照片人脸与活体人脸的计算机自动鉴别方法 | |
US8379961B2 (en) | Mitotic figure detector and counter system and method for detecting and counting mitotic figures | |
WO2018072233A1 (zh) | 一种基于选择性搜索算法的车标检测识别方法及系统 | |
WO2017190574A1 (zh) | 一种基于聚合通道特征的快速行人检测方法 | |
CN106650669A (zh) | 一种鉴别仿冒照片欺骗的人脸识别方法 | |
CN104036278B (zh) | 人脸算法标准脸部图像的提取方法 | |
CN102214309B (zh) | 一种基于头肩模型的特定人体识别方法 | |
CN103870811B (zh) | 一种用于视频监控的正面人脸快速判别方法 | |
TWI687159B (zh) | 魚苗計數系統及魚苗計數方法 | |
CN105989331B (zh) | 脸部特征提取装置、脸部特征提取方法、图像处理设备和图像处理方法 | |
KR20170006355A (ko) | 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치 | |
CN101984453B (zh) | 一种人眼识别系统及方法 | |
CN106845328A (zh) | 一种基于双摄像头的智能人脸识别方法及系统 | |
CN106650623A (zh) | 一种基于人脸检测的出入境人证核实的方法 | |
CN104182769B (zh) | 一种车牌检测方法及系统 | |
KR20110102073A (ko) | 얼굴 인식 시스템에서의 위조 검출 방법 | |
CN108108651B (zh) | 基于视频人脸分析的驾驶员非专心驾驶检测方法及系统 | |
JP5004181B2 (ja) | 領域識別装置およびコンテンツ識別装置 | |
CN106599880A (zh) | 一种面向无人监考的同人判别方法 | |
CN106326839A (zh) | 一种基于出操视频流的人数统计方法 | |
CN114299606A (zh) | 一种基于前端相机的睡觉检测方法及装置 | |
CN108520208A (zh) | 局部化面部识别方法 | |
Ye et al. | A new text detection algorithm in images/video frames |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09817246 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 5107/CHENP/2010 Country of ref document: IN |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC- FORM 1205A DATED 12-08-2011 |
122 | Ep: pct application non-entry in european phase |
Ref document number: 09817246 Country of ref document: EP Kind code of ref document: A1 |