CA3153067C - Picture-detecting method and apparatus - Google Patents

Picture-detecting method and apparatus

Info

Publication number
CA3153067C
CA3153067C CA3153067A
Authority
CA
Canada
Prior art keywords
region image
background
pixel
subject region
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CA3153067A
Other languages
French (fr)
Other versions
CA3153067A1 (en)
Inventor
Chong MU
Xuyang Zhou
Erlong LIU
Mingxiu HAN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
10353744 Canada Ltd
Original Assignee
10353744 Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 10353744 Canada Ltd
Publication of CA3153067A1
Application granted
Publication of CA3153067C
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/90 Determination of colour characteristics

Abstract

The present invention relates to the technical field of picture recognition. Disclosed are a picture test method and device, which improve the quality of an uploaded picture by adding compliance test items, namely picture background purity and main body position. Said method comprises: acquiring a denoised picture to be tested, performing pixel-level semantic segmentation processing on same, and then recognizing a main body area image and a background area image; performing color space conversion on said picture, and outputting hue space data and lightness space data of the images; expanding the main body area image and then fusing same with the hue space data, and extracting background purity values corresponding to pixels in the background area image, so as to determine whether the background purity of said picture is compliant; processing the lightness space data by means of a plurality of binarization methods, and outputting a plurality of binarization results; and fusing the main body area image with the plurality of binarization results, respectively, extracting coordinate values and a corresponding background purity value of each pixel in the fused main body area image, so as to determine whether the main body position of said picture is compliant.

Description

PICTURE-DETECTING METHOD AND APPARATUS
BACKGROUND OF THE INVENTION
Technical Field
[0001] The present invention relates to the technical field of picture recognition, and more particularly to a picture-detecting method and a picture-detecting apparatus.
Description of Related Art
[0002] With the popularization of the Internet, more and more web-based platforms allow users to upload pictures. Particularly, in leading e-commerce platforms, hundreds of millions of pictures are uploaded by vendors and users every day, among which there are always some non-compliant or even illegal pictures. For preventing such improper uploading, examination of pictures is conventionally conducted as a combination of machine work and manual work.
[0003] The existing examination technologies, which address only single types of illegal picture content such as violence, terrorism, or pornography, are not satisfactory for modern e-commerce platforms.
For improving the quality of uploaded pictures, it is desirable that a picture examination technology has the ability to determine compliance of pictures in addition to the ability to detect contents related to violence, terrorism, and pornography, so as to filter out uploaded pictures that contain unaesthetic layouts such as non-centered subjects, busy backgrounds, and excessive blank space.
SUMMARY OF THE INVENTION
[0004] The objective of the present invention is to provide a picture-detecting method and a picture-detecting apparatus, which ensure quality of uploaded pictures by adding detection items about picture background purity and subject location compliance.
[0005] To achieve the foregoing objective, in a first aspect the present invention provides a picture-detecting method. The picture-detecting method comprises:

[0006] acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image;
[0007] performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture;
[0008] fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant;
[0009] processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly; and
[0010] fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
[0011] Preferably, the step of acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises:
[0012] denoising the to-be-detected picture by means of a nonlinear filtering method; and
[0013] performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.
[0014] Preferably, the step of performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture comprises:
[0015] using HSV hue space conversion method to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and
[0016] using LUV hue space conversion method to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.

[0017] More preferably, the step of fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises:
[0018] filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image;
[0019] updating the part other than the dilated subject region image in the to-be-detected picture as the background region image;
[0020] fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold, and if yes, determining that the background purity of the to-be-detected picture is compliant, or if not, determining that the background purity of the to-be-detected picture is non-compliant;
and
[0021] wherein the first threshold includes a first background purity threshold.
[0022] More preferably, the step of processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly comprises:
[0023] processing data of the brightness space channel L by means of a fixed-threshold binarization method, so as to obtain a first binarization result; and
[0024] processing the data of the brightness space channel L by means of a Gaussian-window binarization method, so as to obtain a second binarization result.
[0025] Further, after the step of outputting the plural binarization results correspondingly, the method further comprises:
[0026] performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression method.
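By way of non-limiting illustration, and because the specification names a non-maximum suppression method without detailing it, the following sketch assumes that non-coherence regions can be suppressed by discarding small connected components from each binarization result; the helper name suppress_noncoherent_regions and the min_area parameter are illustrative assumptions rather than terms of the invention.

```python
import cv2
import numpy as np

def suppress_noncoherent_regions(binary: np.ndarray, min_area: int = 64) -> np.ndarray:
    """Zero out small connected components in a 0/255 binary map.

    Illustrative stand-in for the non-coherence region suppression step;
    the specification only names a non-maximum suppression method.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = np.zeros_like(binary)
    for label in range(1, num):  # label 0 is the background of the binary map
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == label] = 255
    return cleaned
```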
[0027] Further, the step of fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
[0028] fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively;
[0029] extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from fusing results and their corresponding background purity values, and extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from fusing results and their corresponding background purity values;
[0030] summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold, and if yes, determining that the location of the subject in the to-be-detected picture is compliant, or if not, determining that the location of the subject in the to-be-detected picture is non-compliant; and
[0031] wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
[0032] As compared to the prior art, the picture-detecting method of the present invention has the following beneficial effects:
[0033] The picture-detecting method provided by the present invention first identifies a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and then performs hue space conversion on the picture, so as to output hue space data and brightness space data of the picture.
During detection of background purity, since pixel-level semantic segmentation processes edge pixels of the subject region image and of the background region image in a relatively rough manner, the present invention dilates the subject region image to dilate the range of edge pixels of the subject region image in order to ensure complete coverage over the subject region image. Afterward, the dilated subject region image is fused with hue space data.
Afterward, background purity values corresponding to individual pixels in the background region image as the final result of said fusing are extracted, and whether the background purity of the to-be-detected picture is compliant can be determined. To detect the location of the subject in the picture, the brightness space data are first processed by means of plural binarization methods so as to generate plural corresponding binarization results. The subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively. At last, based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
[0034] It is thus clear that the present invention detects background purity of an uploaded picture and determines the location of the subject with significantly improved efficiency as compared to the conventional human examination.
[0035] In another aspect, the present invention provides a picture-detecting apparatus, which is applied to the picture-detecting method as recited in the foregoing technical scheme. The apparatus comprises:
[0036] a pixel-processing unit, for acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image;
[0037] a hue-space-converting unit, for performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture;
[0038] a first determining unit, for fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant;
[0039] a binarization-processing unit, for processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly; and
[0040] a second determining unit, for fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
[0041] Preferably, between the binarization-processing unit and the second determining unit, the apparatus further comprises:
[0042] performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression method.
[0043] As compared to the prior art, the disclosed picture-detecting apparatus provides beneficial effects that are similar to those provided by the disclosed picture-detecting method as enumerated above, and thus no repetitions are made herein.
[0044] In a third aspect the present invention provides a computer readable storage medium, storing thereon a computer program. When the computer program is executed by a processor, it implements the steps of the picture-detecting method as described previously.
[0045] As compared to the prior art, the disclosed computer-readable storage medium provides beneficial effects that are similar to those provided by the disclosed picture-detecting method as enumerated above, and thus no repetitions are made herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] The accompanying drawings are provided herein for better understanding of the present invention and form a part of this disclosure. The illustrative embodiments and their descriptions are for explaining the present invention and by no means form any improper limitation to the present invention, wherein:
[0047] FIG. 1 is a flowchart of the picture-detecting method according to one embodiment of the present invention; and
[0048] FIG. 2 is another flowchart of the picture-detecting method according to the embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0049] To make the foregoing objectives, features, and advantages of the present invention clearer and more understandable, the following description will be directed to some embodiments as depicted in the accompanying drawings to detail the technical schemes disclosed in these embodiments. It is, however, to be understood that the embodiments referred to herein are only a part of all possible embodiments and thus not exhaustive. Based on the embodiments of the present invention, all the other embodiments can be conceived without creative labor by people of ordinary skill in the art, and all these and other embodiments shall be embraced in the scope of the present invention.
Embodiment 1
[0050] Referring to FIG. 1 and FIG. 2, the present embodiment provides a picture-detecting method, comprising:
[0051] acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image;
performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant;
processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly; and fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
[0052] The picture-detecting method provided by the present embodiment first identifies a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and then performs hue space conversion on the picture, so as to output hue space data and brightness space data of the picture. During detection of background purity, since pixel-level semantic segmentation processes edge pixels of the subject region image and of the background region image in a relatively rough manner, the present invention dilates the subject region image to dilate the range of edge pixels of the subject region image in order to ensure complete coverage over the subject region image. Afterward, the dilated subject region image is fused with hue space data. Afterward, background purity values corresponding to individual pixels in the background region image as the final result of said fusing are extracted, and whether the background purity of the to-be-detected picture is compliant can be determined. To detect the location of the subject in the picture, the brightness space data are first processed by means of plural binarization methods so as to generate plural corresponding binarization results. The subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively. At last, based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
[0053] It is thus clear that the present embodiment detects background purity of an uploaded picture and determines the location of the subject with significantly improved efficiency as compared to the conventional human examination.
[0054] In the foregoing embodiment, the step of acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises:
[0055] denoising the to-be-detected picture by means of a nonlinear filtering method; performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image. Exemplarily, the nonlinear filtering is a median filtering denoising algorithm.
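By way of non-limiting illustration, this preprocessing stage may be sketched as follows, assuming an 8-bit BGR input; the median filter is the nonlinear denoising step named above, while segment_subject is a placeholder for the multi-channel deep residual fully convolutional network model, assumed here to return a binary mask in which 255 marks subject pixels.

```python
import cv2
import numpy as np

def preprocess_and_segment(picture_bgr: np.ndarray, segment_subject) -> tuple:
    """Denoise the picture and split it into subject and background masks.

    `segment_subject` stands in for the multi-channel deep residual fully
    convolutional network mentioned in the specification; it is assumed to
    return a uint8 mask where 255 marks subject pixels.
    """
    denoised = cv2.medianBlur(picture_bgr, 5)          # nonlinear (median) filtering
    subject_mask = segment_subject(denoised)           # pixel-level semantic segmentation
    background_mask = cv2.bitwise_not(subject_mask)    # everything else is background
    return denoised, subject_mask, background_mask
```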
[0056] Specifically, in the foregoing embodiment, the step of performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture comprises:
[0057] using HSV hue space conversion method to convert the to-be-detected picture so as to output the hue space data of the picture, in which the hue space data include a hue space component H; using LUV hue space conversion method to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
[0058] For conversion of the hue space data, since the commonly used RGB color space method is based on computer hardware and its color space is not suitable for characterizing color purity, in particular implementations, the present implementation employs the HSV hue space conversion method to convert the to-be-detected picture in the RGB color space into the HSV color space that is closer to human visual perceptual characteristics, so as to better characterize color purity and in turn improve detection precision of the background purity.
The conversion equation is as below:
$$V = \max(R, G, B)$$

$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}$$

$$H = \begin{cases} 60 \times \dfrac{G - B}{S \times V}, & S \neq 0 \ \text{and} \ \max(R, G, B) = R \\ 60 \times \left(2 + \dfrac{B - R}{S \times V}\right), & S \neq 0 \ \text{and} \ \max(R, G, B) = G \\ 60 \times \left(4 + \dfrac{R - G}{S \times V}\right), & S \neq 0 \ \text{and} \ \max(R, G, B) = B \end{cases}$$

$$H = H + 360, \quad \text{if } H < 0$$
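By way of non-limiting illustration, and assuming the to-be-detected picture is available as an 8-bit RGB array, the hue space component H defined by the equations above can be obtained with OpenCV's built-in conversion; note that OpenCV stores H as H/2 for 8-bit images, so the sketch rescales it back to degrees.

```python
import cv2
import numpy as np

def hue_component(picture_rgb: np.ndarray) -> np.ndarray:
    """Return the hue space component H (in degrees) of an 8-bit RGB picture."""
    hsv = cv2.cvtColor(picture_rgb, cv2.COLOR_RGB2HSV)
    return hsv[:, :, 0].astype(np.float32) * 2.0  # OpenCV stores H as H/2 for uint8 images
```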
[0059] Features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception. Therefore, the present implementation converts brightness space data of the picture to be examined by: first converting the to-be-detected picture from RGB space data into CIE XYZ space data, and then converting the CIE XYZ space data into LUV brightness space data using the following conversion equations:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

$$L = \begin{cases} 116 \times \left(\dfrac{Y}{Y_n}\right)^{1/3} - 16, & \dfrac{Y}{Y_n} > \left(\dfrac{6}{29}\right)^3 \\ \left(\dfrac{29}{3}\right)^3 \times \dfrac{Y}{Y_n}, & \dfrac{Y}{Y_n} \leq \left(\dfrac{6}{29}\right)^3 \end{cases}$$

$$u = 13 \times L \times (u' - u'_n), \qquad v = 13 \times L \times (v' - v'_n)$$

where $u'_n$ and $v'_n$ are light source constants, and $Y_n$ is a preset fixed value;

$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z}$$
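Similarly, by way of non-limiting illustration, the RGB-to-XYZ-to-LUV chain can be delegated to OpenCV, which uses the same conversion matrix; only the brightness channel L is needed for the later binarization, and a D65 white point is assumed here for the light source constants.

```python
import cv2
import numpy as np

def brightness_channel(picture_rgb: np.ndarray) -> np.ndarray:
    """Return the brightness space channel L of an 8-bit RGB picture."""
    luv = cv2.cvtColor(picture_rgb, cv2.COLOR_RGB2Luv)
    return luv[:, :, 0]  # L channel; OpenCV maps L in [0, 100] to [0, 255] for uint8 images
```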
[0060] Specifically, in the foregoing embodiment, the step of fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises:
[0061] filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image; updating the part other than the dilated subject region image in the to-be-detected picture as the background region image; fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold, and if yes, determining that the background purity of the to-be-detected picture is compliant, or if not, determining that the background purity of the to-be-detected picture is non-compliant; and wherein the first threshold includes a first background purity threshold.
[0062] In particular implementations, a round filter kernel k is used for filtering pixels of the subject region image. Taking a round filter kernel having a diameter of 4 for example:
$$k = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}$$
[0063] The filtering equation is:
$$B = P \oplus k = \left\{ z_{ij} \,\middle|\, (\hat{k})_{z_{ij}} \cap P \neq \emptyset,\ (i, j) \in P \right\}$$
[0064] In the above equation, (i, j) represents the pixel coordinates, P represents the subject region image, $z_{ij}$ represents the background purity value corresponding to the pixel, $(\hat{k})_{z_{ij}}$ represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image. Then the updated background region image is fused with data of the hue space component H so as to generate a result C, wherein the fusion equation is as below:
$$C = \left\{ c(i, j) \,\middle|\, c(i, j) = H(i, j),\ (i, j) \in D;\ c(i, j) = 0,\ (i, j) \notin D \right\}$$
where (i, j) represents pixel coordinates and H(i, j) represents the background purity value corresponding to the pixel in the hue space component H. When a pixel located at coordinates (i, j) belongs to the dilated background region image D, the background purity value of the pixel in the hue space component H is taken. When the pixel located at coordinates (i, j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is set to zero.
The coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values. Then the predetermined first threshold is compared to the array C. If all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
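By way of non-limiting illustration, paragraphs [0062] to [0064] may be sketched as follows, assuming the subject region image is given as a 0/255 mask; the elliptical structuring element approximates the round filter kernel k of diameter 4, and purity_threshold stands in for the first background purity threshold, whose value the specification does not fix.

```python
import cv2
import numpy as np

def background_purity_compliant(subject_mask: np.ndarray,
                                hue_h: np.ndarray,
                                purity_threshold: float) -> bool:
    """Dilate the subject mask, fuse the remaining background with H, and
    check every background purity value against the first threshold."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (4, 4))  # round kernel of diameter 4
    dilated_subject = cv2.dilate(subject_mask, kernel)             # dilated subject region image B
    background = dilated_subject == 0                              # updated background region image D
    fused = np.where(background, hue_h, 0)                         # array C: H inside D, zero elsewhere
    return bool(np.all(fused[background] < purity_threshold))      # compliant if every value is below the threshold
```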
[0065] Preferably, in the foregoing embodiment, the step of processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly comprises:
[0066] processing data of the brightness space channel L by means of a fixed-threshold binarization method, so as to obtain a first binarization result T; processing the data of the brightness space channel L by means of a Gaussian-window binarization method, so as to obtain a second binarization result G. Then non-coherence region suppression is performed on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression method, so as to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
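By way of non-limiting illustration, the two binarizations of the brightness space channel L may be sketched with OpenCV's fixed and Gaussian-window (adaptive) thresholding; the cut-off value, window size, and offset below are illustrative assumptions, since the specification does not fix them.

```python
import cv2
import numpy as np

def binarize_brightness(channel_l: np.ndarray) -> tuple:
    """Produce the first (fixed-threshold) and second (Gaussian-window)
    binarization results T and G from the brightness channel L."""
    _, result_t = cv2.threshold(channel_l, 127, 255, cv2.THRESH_BINARY)          # fixed-threshold binarization
    result_g = cv2.adaptiveThreshold(channel_l, 255,
                                     cv2.ADAPTIVE_THRESH_GAUSSIAN_C,              # Gaussian-window binarization
                                     cv2.THRESH_BINARY, 11, 2)
    return result_t, result_g
```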
[0067] In the foregoing embodiment, the step of fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
[0068] fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively;
extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from fusing results and their corresponding background purity values; and/or extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from fusing results and their corresponding background purity values; summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold, and if yes, determining that the location of the subject in the to-be-detected picture is compliant, or if not, determining that the location of the subject in the to-be-detected picture is non-compliant; and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
[0069] In particular implementations, the fusing process is well known in the art, and is described herein by example. The fusion equation is

$$F = \left\{ f(i, j) \,\middle|\, f(m_i, n_i),\ (m_i, n_i) \in T \cap P;\ f(u_i, v_j),\ (u_i, v_j) \in G \cap P \right\}$$

When the background purity value of the pixel (i, j) is in the
intersection between the first binarization result T and the subject region image, the coordinates of the pixel and its corresponding background purity value are taken.
Alternatively, when the background purity value of the pixel (i,j) is in the intersection between the second binarization result G and the subject region image, the coordinates of the pixel and its corresponding background purity value are taken. At last, the pixel coordinates and their corresponding background purity values are assembled to form an array F, which is an array composed of location coordinates of the pixels after assembling of T ∩ P and G ∩ P and their corresponding background purity values. A preset second threshold array is then used to compare with the array F. If the pixel coordinates are all within the location coordinate interval threshold, and the background purity values are all within the second background purity threshold, it is determined that the location of the subject in the to-be-detected picture is compliant.
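By way of non-limiting illustration, the subject location check of paragraph [0069] may be sketched as follows, assuming T, G, and the subject region mask P are 0/255 arrays of equal size; row_range, col_range, and purity_threshold stand in for the preset second threshold array (the location coordinate interval threshold and the second background purity threshold).

```python
import numpy as np

def subject_location_compliant(subject_mask: np.ndarray,
                               result_t: np.ndarray,
                               result_g: np.ndarray,
                               hue_h: np.ndarray,
                               row_range: tuple,
                               col_range: tuple,
                               purity_threshold: float) -> bool:
    """Gather pixels from the intersections of the subject mask with T and G,
    then check that their coordinates fall inside the location interval and
    their purity values stay below the second background purity threshold."""
    in_t = (subject_mask > 0) & (result_t > 0)          # intersection with the first binarization result
    in_g = (subject_mask > 0) & (result_g > 0)          # intersection with the second binarization result
    rows, cols = np.where(in_t | in_g)                  # assembled array F: pixel coordinates ...
    purities = hue_h[rows, cols]                        # ... and their corresponding purity values
    coords_ok = (rows >= row_range[0]).all() and (rows <= row_range[1]).all() \
        and (cols >= col_range[0]).all() and (cols <= col_range[1]).all()
    return bool(coords_ok and (purities < purity_threshold).all())
```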
Embodiment 2
[0070] The present embodiment provides a picture-detecting apparatus, which comprises:
[0071] a pixel-processing unit, for acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image;
[0072] a hue-space-converting unit, for performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture;
[0073] a first determining unit, for fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant;
[0074] a binarization-processing unit, for processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly; and
[0075] a second determining unit, for fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
[0076] Preferably, between the binarization-processing unit and the second determining unit, the apparatus further comprises:
[0077] performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression method.
[0078] As compared to the prior art, the picture-detecting apparatus of the present embodiment provides beneficial effects that are similar to those provided by the picture-detecting method as enumerated in the previous embodiment, and thus no repetitions are made herein.
Embodiment 3
[0079] The present embodiment provides a computer-readable storage medium, storing thereon a computer program. When the computer program is executed by a processor, it implements the steps of the picture-detecting method as described previously.
[0080] As compared to the prior art, the computer-readable storage medium of the present embodiment provides beneficial effects that are similar to those provided by the picture-detecting method as enumerated in the previous embodiment, and thus no repetitions are made herein.
[0081] As will be appreciated by people of ordinary skill in the art, implementation of all or a part of the steps of the method of the present invention as described previously may be realized by having a program instruct related hardware components. The program may be stored in a computer-readable storage medium, and the program, when executed, performs the individual steps of the methods described in the foregoing embodiments.
The storage medium may be a ROM/RAM, a hard drive, an optical disk, a memory card or the like.
[0082] The present invention has been described with reference to the preferred embodiments and it is understood that the embodiments are not intended to limit the scope of the present invention. Moreover, as the contents disclosed herein should be readily understood and can be implemented by a person skilled in the art, all equivalent changes or modifications which do not depart from the concept of the present invention should be encompassed by the appended claims. Hence, the scope of the present invention shall only be defined by the appended claims.

Claims (176)

Claims:
1. A picture-detecting apparatus, the apparatus comprising:
a pixel-processing unit, for acquiring a to-be-detected picture that has been denoised, performing pixel-level semantic segmentation on a denoised to-be-detected picture, and recognizing a subject region image and a background region image;
a hue-space-converting unit, for performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; and a first determining unit, for fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centered image subjects.
2. The apparatus of claim 1, the apparatus further comprising: a binarization-processing unit, for processing the brightness space data by means of plural binarization apparatus, so as to output plural binarization results correspondingly.
3. The apparatus of the claim 2, the apparatus further comprising: a second determining unit, for fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
4. The apparatus of the claim 3, wherein between the binarization-processing unit and the second determining unit, the apparatus further comprises: performing non-coherence region suppression on a first binarization result and a second binarization result, respectively, by means of a non-maximum suppression apparatus.

5. The apparatus of the claim 4, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering apparatus; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.
6. The apparatus of the claim 5, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises:
using HSV hue space conversion apparatus to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion apparatus to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
7. The apparatus of the claim 6, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
8. The apparatus of the claim 7, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.

9. The apparatus of the claim 8, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.
10. The apparatus of the claim 9, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
11. The apparatus of the claim 10, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
12. The apparatus of the claim 11, wherein processing the brightness space data by means of the plural binarization apparatus to output the plural binarization results correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-threshold binarization apparatus, so as to obtain the first binarization result.
13. The apparatus of the claim 12, wherein processing the brightness space data by means of the plural binarization apparatus to output the plural binarization results correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-window binarization apparatus, so as to obtain the second binarization result.

14. The apparatus of the claim 13, the apparatus further comprising:
performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression apparatus.
15. The apparatus of the claim 14, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
16. The apparatus of the claim 15, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from fusing results and their corresponding background purity values.
17. The apparatus of the claim 16, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from fusing results and their corresponding background purity values.
18. The apparatus of the claim 17, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

19. The apparatus of the claim 18, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is compliant.
20. The apparatus of the claim 19, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is non-compliant;
and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
21. The apparatus of the claim 20, the apparatus further comprising:
identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
22. The apparatus of the claim 21, the apparatus further comprising: dilating the subject region image to dilate the range of edge pixels of the subject region image in order to ensure complete coverage over the subject region image.
23. The apparatus of the claim 22, the apparatus further comprising: fusing the dilated subject region image with the hue space data.
24. The apparatus of the claim 23, the apparatus further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization apparatus so as to generate the plural binarization results.
25. The apparatus of the claim 24, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.
26. The apparatus of the claim 25, wherein based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
27. The apparatus of the claim 26, the apparatus further comprising: employing HSV (hue, saturation, value) hue space conversion apparatus to convert the to-be-detected picture in the RGB (red, green and blue) color space into the HSV color space that is closer to human visual perceptual characteristics.
28. The apparatus of the claim 27, wherein the conversion apparatus includes:
$$V = \max(R, G, B)$$

$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}$$

$$H = \begin{cases} 60 \times \dfrac{G - B}{S \times V}, & S \neq 0 \ \text{and} \ \max(R, G, B) = R \\ 60 \times \left(2 + \dfrac{B - R}{S \times V}\right), & S \neq 0 \ \text{and} \ \max(R, G, B) = G \\ 60 \times \left(4 + \dfrac{R - G}{S \times V}\right), & S \neq 0 \ \text{and} \ \max(R, G, B) = B \end{cases}$$

$$H = H + 360, \quad \text{if } H < 0$$
29. The apparatus of the claim 28, wherein the features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception.
30. The apparatus of the claim 29, wherein the implementation converts brightness space data of the picture to be examined by: converting the to-be-detected picture from RGB space data into CIE XYZ space data; and converting the CIE XYZ space data into LUV
brightness space data using the following conversion equation:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

$$L = \begin{cases} 116 \times \left(\dfrac{Y}{Y_n}\right)^{1/3} - 16, & \dfrac{Y}{Y_n} > \left(\dfrac{6}{29}\right)^3 \\ \left(\dfrac{29}{3}\right)^3 \times \dfrac{Y}{Y_n}, & \dfrac{Y}{Y_n} \leq \left(\dfrac{6}{29}\right)^3 \end{cases}$$

$$u = 13 \times L \times (u' - u'_n), \qquad v = 13 \times L \times (v' - v'_n)$$

wherein $u'_n$ and $v'_n$ are light source constants, and $Y_n$ is a preset fixed value; and wherein

$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z}$$
31. The apparatus of the claim 30, wherein a round filter kernel k is used for filtering pixels of the subject region image.
32. The apparatus of the claim 31, wherein when a round filter kernel has a diameter of 4 represented by:

$$k = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}$$
33. The apparatus of the claim 32, wherein the filtering equation is:
$$B = P \oplus k = \left\{ z_{ij} \,\middle|\, (\hat{k})_{z_{ij}} \cap P \neq \emptyset,\ (i, j) \in P \right\}$$

wherein (i, j) represents the pixel coordinates, P represents the subject region image, $z_{ij}$ represents the background purity value corresponding to the pixel, $(\hat{k})_{z_{ij}}$ represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image.
34. The apparatus of the claim 33, wherein the background region image is fused with data of the hue space component H to generate a result C.
35. The apparatus of the claim 34, wherein the fusion equation is as below:
$$C = \left\{ c(i, j) \,\middle|\, c(i, j) = H(i, j),\ (i, j) \in D;\ c(i, j) = 0,\ (i, j) \notin D \right\}$$
36. The apparatus of the claim 35, wherein (i, j) represents pixel coordinates, H(i, j) represents the background purity value corresponding to the pixel in the hue space component H; and wherein when a pixel located at coordinates (i, j) belongs to the dilated background region image D, the background purity value of the pixel in the hue space component H is taken.
37. The apparatus of the claim 36, wherein when the pixel located at coordinates (i, j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is set to zero.
38. The apparatus of the claim 37, wherein the coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values.
39. The apparatus of the claim 38, wherein the predetermined first threshold is compared to the array C.

40. The apparatus of the claim 39, wherein when all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
41. The apparatus of the claim 40, the apparatus further comprising:
processing data of the brightness space channel L by means of a fixed-threshold binarization apparatus, so as to obtain a first binarization result T.
42. The apparatus of the claim 41, the apparatus further comprising:
processing the data of the brightness space channel L by means of a Gaussian-window binarization apparatus, so as to obtain a second binarization result G.
43. The apparatus of the claim 42, the apparatus further comprising: performing non-coherence region suppression on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression apparatus to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
44. The apparatus of the claim 43, the apparatus further comprising: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
45. An electronic system comprising:
at least one processor;
a memory, connected with the at least one processor;
wherein the memory stores an instruction executable by the at least one processor configured to:
acquire a to-be-detected picture that has been denoised, perform pixel-level semantic segmentation on a denoised to-be-detected picture, and recognize a subject region image and a background region image;

perform hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; and fuse the subject region image after dilation processing with the hue space data, extract a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determine whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centered image subjects.
46. The system of claim 45, the system further comprising: processing the brightness space data by means of plural binarization systems, so as to output plural binarization results correspondingly.
47. The system of the claim 46, the system further comprising: fusing the subject region image with the plural binarization results.
48. The system of the claim 47, the system further comprising: extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
49. The system of the claim 48, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering system; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.
50. The system of the claim 49, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises: using HSV hue space conversion system to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion system to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
51. The system of the claim 50, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
52. The system of the claim 51, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.
53. The system of the claim 52, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.

54. The system of the claim 53, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
55. The system of the claim 54, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
56. The system of the claim 55, wherein processing the brightness space data by means of the plural binarization systems to output the plural binarization results correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-threshold binarization system, so as to obtain a first binarization result.
57. The system of the claim 56, wherein processing the brightness space data by means of the plural binarization systems to output the plural binarization results correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-window binarization system, so as to obtain a second binarization result.
58. The system of the claim 57, the system further comprising: performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression system.

59. The system of the claim 58, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
60. The system of the claim 59, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from the fusion results, and their corresponding background purity values.
61. The system of the claim 60, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from the fusion results, and their corresponding background purity values.
62. The system of the claim 61, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

63. The system of the claim 62, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is compliant.
64. The system of the claim 63, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is non-compliant;
and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
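The second-threshold check recited in claims 60 to 64 can be pictured with the short, illustrative Python sketch below. It is not part of the claims: the function name, the NumPy representation of the masks, and the concrete interval bounds and purity threshold are assumptions chosen only for illustration.

```python
import numpy as np

def subject_location_compliant(subject_mask, binarization, purity,
                               purity_threshold=30,
                               row_interval=(0.25, 0.75),
                               col_interval=(0.25, 0.75)):
    """Check whether every pixel kept after fusing the subject mask with one
    binarization result lies inside a coordinate interval and has a compliant
    background purity value (the two parts of the second threshold).

    subject_mask : H x W boolean array from semantic segmentation
    binarization : H x W boolean array (first or second binarization result)
    purity       : H x W array of background purity values
    """
    fused = subject_mask & binarization              # fuse subject region with one binarization result
    rows, cols = np.nonzero(fused)                   # coordinate values of the fused subject pixels
    if rows.size == 0:
        return False                                 # no subject pixels survived the fusion

    h, w = subject_mask.shape
    in_rows = (rows >= row_interval[0] * h) & (rows <= row_interval[1] * h)
    in_cols = (cols >= col_interval[0] * w) & (cols <= col_interval[1] * w)
    purity_ok = purity[rows, cols] < purity_threshold

    # compliant only if every pixel satisfies both the coordinate interval
    # and the background purity component of the second threshold
    return bool(np.all(in_rows & in_cols & purity_ok))
```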
65. The system of the claim 64, the system further comprising: identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
66. The system of the claim 65, the system further comprising: dilating the subject region image to expand the range of its edge pixels, so as to ensure complete coverage of the subject region image.
67. The system of the claim 66, the system further comprising: fusing the dilated subject region image with the hue space data.
68. The system of the claim 67, the system further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization systems so as to generate the plural binarization results.
69. The system of the claim 68, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.

70. The system of the claim 69, wherein based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
71. The system of the claim 70, the system further comprising: employing an HSV (hue, saturation, value) hue space conversion system to convert the to-be-detected picture in the RGB (red, green and blue) color space into the HSV color space that is closer to human visual perceptual characteristics.
72. The system of the claim 71, wherein the conversion system includes:
$$V = \max(R, G, B)$$
$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}$$
$$H = \begin{cases} 60 \times \dfrac{G - B}{S \times V}, & \text{if } \max(R, G, B) = R \\ 60 \times \left(2 + \dfrac{B - R}{S \times V}\right), & \text{if } \max(R, G, B) = G \\ 60 \times \left(4 + \dfrac{R - G}{S \times V}\right), & \text{if } \max(R, G, B) = B \end{cases}$$
$$H = H + 360, \quad \text{if } H < 0$$
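For illustration only, the per-pixel RGB-to-HSV mapping written out above can be sketched in Python as follows. The assumption that R, G and B are floats in [0, 1] and the handling of grey pixels are choices made for this sketch; in practice a library routine such as OpenCV's cvtColor would normally be used.

```python
def rgb_to_hsv_pixel(r, g, b):
    """Convert one RGB pixel (floats in [0, 1]) to (H, S, V) with H in degrees."""
    v = max(r, g, b)
    mn = min(r, g, b)
    s = 0.0 if v == 0 else (v - mn) / v
    if s == 0:                      # grey pixel: hue is undefined, use 0 by convention
        return 0.0, s, v
    if v == r:
        h = 60.0 * (g - b) / (s * v)
    elif v == g:
        h = 60.0 * (2 + (b - r) / (s * v))
    else:                           # v == b
        h = 60.0 * (4 + (r - g) / (s * v))
    if h < 0:
        h += 360.0
    return h, s, v
```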
73. The system of the claim 72, wherein the features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception.
74. The system of the claim 73, wherein the conversion of the to-be-detected picture into brightness space data is implemented through: converting the to-be-detected picture from RGB space data into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness space data using the following conversion equations:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
$$L = \begin{cases} 116 \times \left(\dfrac{Y}{Y_n}\right)^{1/3} - 16, & \text{if } \dfrac{Y}{Y_n} > \left(\dfrac{6}{29}\right)^{3} \\ \left(\dfrac{29}{3}\right)^{3} \times \dfrac{Y}{Y_n}, & \text{otherwise} \end{cases}$$
$$u = 13 \times L \times (u' - u'_n), \qquad v = 13 \times L \times (v' - v'_n)$$
wherein u'_n and v'_n are light source constants, and Y_n is a preset fixed value; and wherein
$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z}$$
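A minimal Python sketch of the RGB-to-XYZ-to-LUV chain of claim 74 follows, assuming a D65 reference white for (X_n, Y_n, Z_n); the white point, the helper names and the per-pixel formulation are assumptions made for illustration, not part of the claims.

```python
import numpy as np

# RGB-to-XYZ matrix quoted in claim 74
RGB_TO_XYZ = np.array([[0.412453, 0.357580, 0.180423],
                       [0.212671, 0.715160, 0.072169],
                       [0.019334, 0.119193, 0.950227]])

def _uv_prime(x, y, z):
    """Chromaticity coordinates u', v'; guard against a zero denominator."""
    denom = x + 15 * y + 3 * z
    if denom == 0:
        return 0.0, 0.0
    return 4 * x / denom, 9 * y / denom

def rgb_to_luv_pixel(rgb, white=(0.95047, 1.0, 1.08883)):
    """Convert one linear-RGB pixel (floats in [0, 1]) to CIE L*u*v*.

    `white` is the assumed reference white point (Xn, Yn, Zn), here D65.
    """
    x, y, z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    xn, yn, zn = white
    up, vp = _uv_prime(x, y, z)
    upn, vpn = _uv_prime(xn, yn, zn)

    t = y / yn
    L = 116 * t ** (1 / 3) - 16 if t > (6 / 29) ** 3 else (29 / 3) ** 3 * t
    return L, 13 * L * (up - upn), 13 * L * (vp - vpn)
```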
75. The system of the claim 74, wherein a round filter kernel k is used for filtering pixels of the subject region image.
76. The system of the claim 75, wherein the round filter kernel, when it has a diameter of 4, is represented by:
$$k = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}$$
77. The system of the claim 76, wherein the filtering equation is:
$$B = P \oplus k = \left\{ (i, j) \mid Z_U(k) \cap P \neq \emptyset \right\}$$
wherein (i,j) represents the pixel coordinates, P represents the subject region image, Z_U(k) represents the background purity value corresponding to the pixel, Z_U represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image.
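As an illustrative reading of claims 75 to 77, the sketch below dilates the subject mask with a round structuring element and updates the background region accordingly. Using OpenCV's elliptical structuring element as a stand-in for the round filter kernel k, and the mask encoding, are assumptions for this sketch.

```python
import cv2
import numpy as np

def dilate_subject_and_update_background(subject_mask, diameter=4):
    """Dilate the subject region mask with a round kernel and return the
    dilated subject mask B and the updated background mask D.

    subject_mask : H x W uint8 array, 1 for subject pixels, 0 elsewhere.
    """
    # elliptical (round) structuring element, assumed stand-in for the
    # round filter kernel k of diameter 4
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (diameter, diameter))
    dilated_subject = cv2.dilate(subject_mask, kernel, iterations=1)   # B
    background = 1 - dilated_subject                                   # D: everything outside B
    return dilated_subject, background
```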
78. The system of the claim 77, wherein the background region image is fused with data of the hue space component H to generate a result C.
79. The system of the claim 78, wherein the fusion equation is as below:
$$C = \{ c(i, j) \}, \quad c(i, j) = \begin{cases} H(i, j), & (i, j) \in D \\ 0, & (i, j) \notin D \end{cases}$$
80. The system of the claim 79, wherein (i,j) represents pixel coordinates, H(i,j) represents the background purity value corresponding to the pixel in the hue space component H; and wherein when a pixel located at coordinates (i,j) belongs to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated.
81. The system of the claim 80, wherein when the pixel located at coordinates (i,j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated as zero.
82. The system of the claim 81, wherein the coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values.
83. The system of the claim 82, wherein the predetermined first threshold is compared to the array C.

84. The system of the claim 83, wherein when all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
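The fusion and first-threshold comparison of claims 78 to 84 can be sketched as follows; the array layout and the example threshold value are assumptions made only to show the flow.

```python
import numpy as np

def background_purity_compliant(background_mask, hue, first_threshold=20):
    """Fuse the updated background region D with the hue component H and check
    the first background purity threshold.

    background_mask : H x W boolean array (the updated background region D)
    hue             : H x W array of hue space component H values
    """
    fused = np.where(background_mask, hue, 0)          # C: keep H only on background pixels
    rows, cols = np.nonzero(background_mask)
    purity_values = fused[rows, cols]                  # purity values gathered over D
    # compliant when every background purity value is smaller than the first threshold
    return bool(np.all(purity_values < first_threshold))
```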
85. The system of the claim 84, the system further comprising: processing data of the brightness space channel L by means of a fixed-threshold binarization system, so as to obtain a first binarization result T.
86. The system of the claim 85, the system further comprising: processing the data of the brightness space channel L by means of a Gaussian-window binarization system, so as to obtain a second binarization result G.
87. The system of the claim 86, the system further comprising: performing non-coherence region suppression on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression system, so as to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
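A hedged Python sketch of the two binarization results of claims 85 and 86 is given below; reading the "Gaussian-window binarization" as OpenCV's Gaussian adaptive thresholding, and the particular threshold, block size and offset values, are assumptions rather than the claimed systems themselves.

```python
import cv2

def binarize_brightness_channel(L, fixed_threshold=127, block_size=31, offset=5):
    """Produce the two binarization results described in claims 85 and 86.

    L : H x W uint8 array holding the brightness space channel L.
    """
    # first binarization result T: fixed-threshold binarization
    _, T = cv2.threshold(L, fixed_threshold, 255, cv2.THRESH_BINARY)

    # second binarization result G: Gaussian-window (adaptive) binarization,
    # an assumed reading of the "Gaussian-window binarization system"
    G = cv2.adaptiveThreshold(L, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, block_size, offset)
    return T, G
```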
88. The system of the claim 87, the system further comprising: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
89. A computer readable physical memory having stored thereon a computer program executed by a computer configured to:
acquire a to-be-detected picture that has been denoised, and perform pixel-level semantic segmentation on a denoised to-be-detected picture, recognizing a subject region image and a background region image;
perform hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture; and
fuse the subject region image after dilation processing with the hue space data, extract a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determine whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centred image subjects.
90. The memory of claim 89, the memory further comprising: processing the brightness space data by means of plural binarization memories, so as to output plural binarization results correspondingly.
91. The memory of the claim 90, the memory further comprising: fusing the subject region image with the plural binarization results.
92. The memory of the claim 91, the memory further comprising: extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
93. The memory of the claim 92, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering memory; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.
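By way of illustration of claim 93, the sketch below denoises the picture with a median filter, one common nonlinear filter, and applies a segmentation model to split subject from background; the median filter choice, the callable model interface and the label convention are assumptions, not the claimed multi-channel deep residual fully convolutional network itself.

```python
import cv2
import numpy as np

def denoise_and_segment(picture_bgr, segmentation_model):
    """Median-filter the picture and run a pixel-level segmentation model to
    split the subject region image from the background region image.

    `segmentation_model` is assumed to be a callable returning an H x W
    array of class labels in which label 1 marks the subject.
    """
    denoised = cv2.medianBlur(picture_bgr, 5)        # nonlinear (median) denoising, assumed choice
    labels = segmentation_model(denoised)            # e.g. a deep fully convolutional segmentation model
    subject_mask = (labels == 1).astype(np.uint8)    # subject region image
    background_mask = 1 - subject_mask               # background region image
    return denoised, subject_mask, background_mask
```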

94. The memory of the claim 93, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises:
using HSV hue space conversion memory to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion memory to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
95. The memory of the claim 94, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
96. The memory of the claim 95, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.
97. The memory of the claim 96, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.
98. The memory of the claim 97, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
99. The memory of the claim 98, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
100. The memory of the claim 99, wherein processing the brightness space data by means of the plural binarization memories to output the plural binarization results correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-threshold binarization memory, so as to obtain a first binarization result.
101. The memory of the claim 100, wherein processing the brightness space data by means of the plural binarization memories to output the plural binarization results correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-window binarization memory, so as to obtain a second binarization result.
102. The memory of the claim 101, the memory further comprising: performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression memory.

103. The memory of the claim 102, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
104. The memory of the claim 103, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from the fusion results, and their corresponding background purity values.
105. The memory of the claim 104, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from the fusion results, and their corresponding background purity values.
106. The memory of the claim 105, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

107. The memory of the claim 106, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is compliant.
108. The memory of the claim 107, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is non-compliant;
and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
109. The memory of the claim 108, the memory further comprising: identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
110. The memory of the claim 109, the memory further comprising: dilating the subject region image to expand the range of its edge pixels, so as to ensure complete coverage of the subject region image.
111. The memory of the claim 110, the memory further comprising: fusing the dilated subject region image with the hue space data.
112. The memory of the claim 111, the memory further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization memories so as to generate the plural binarization results.
113. The memory of the claim 112, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.

114. The memory of the claim 113, wherein based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
115. The memory of the claim 114, the memory further comprising: employing HSV
(hue, saturation, value) hue space conversion memory to convert the to-be-detected picture in the RGB (red, green and blue) color space into the HSV color space that is closer to human visual perceptual characteristics.
116. The memory of the claim 115, wherein the conversion memory includes:
$$V = \max(R, G, B)$$
$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}$$
$$H = \begin{cases} 60 \times \dfrac{G - B}{S \times V}, & \text{if } \max(R, G, B) = R \\ 60 \times \left(2 + \dfrac{B - R}{S \times V}\right), & \text{if } \max(R, G, B) = G \\ 60 \times \left(4 + \dfrac{R - G}{S \times V}\right), & \text{if } \max(R, G, B) = B \end{cases}$$
$$H = H + 360, \quad \text{if } H < 0$$
117. The memory of the claim 116, wherein the features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception.
118. The memory of the claim 117, wherein the conversion of the to-be-detected picture into brightness space data is implemented through: converting the to-be-detected picture from RGB space data into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness space data using the following conversion equations:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
$$L = \begin{cases} 116 \times \left(\dfrac{Y}{Y_n}\right)^{1/3} - 16, & \text{if } \dfrac{Y}{Y_n} > \left(\dfrac{6}{29}\right)^{3} \\ \left(\dfrac{29}{3}\right)^{3} \times \dfrac{Y}{Y_n}, & \text{otherwise} \end{cases}$$
$$u = 13 \times L \times (u' - u'_n), \qquad v = 13 \times L \times (v' - v'_n)$$
wherein u'_n and v'_n are light source constants, and Y_n is a preset fixed value; and wherein
$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z}$$
119. The memory of the claim 118, wherein a round filter kernel k is used for filtering pixels of the subject region image.
120. The memory of the claim 119, wherein the round filter kernel, when it has a diameter of 4, is represented by:
$$k = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}$$
121. The memory of the claim 120, wherein the filtering equation is:
$$B = P \oplus k = \left\{ (i, j) \mid Z_U(k) \cap P \neq \emptyset \right\}$$
wherein (i,j) represents the pixel coordinates, P represents the subject region image, Z_U(k) represents the background purity value corresponding to the pixel, Z_U represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image.
122. The memory of the claim 121, wherein the background region image is fused with data of the hue space component H to generate a result C.
123. The memory of the claim 122, wherein the fusion equation is as below:
$$C = \{ c(i, j) \}, \quad c(i, j) = \begin{cases} H(i, j), & (i, j) \in D \\ 0, & (i, j) \notin D \end{cases}$$
124. The memory of the claim 122, wherein (i,j) represents pixel coordinates, H(i,j) represents the background purity value corresponding to the pixel in the hue space component H; and wherein when a pixel located at coordinates (i,j) belongs to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated.
125. The memory of the claim 124, wherein when the pixel located at coordinates (i,j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated as zero.
126. The memory of the claim 125, wherein the coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values.
127. The memory of the claim 126, wherein the predetermined first threshold is compared to the array C.

128. The memory of the claim 127, wherein when all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
129. The memory of the claim 128, the memory further comprising: processing data of the brightness space channel L by means of a fixed-threshold binarization memory, so as to obtain a first binarization result T.
130. The memory of the claim 129, the memory further comprising: processing the data of the brightness space channel L by means of a Gaussian-window binarization memory, so as to obtain a second binarization result G.
131. The memory of the claim 130, the memory further comprising: performing non-coherence region suppression on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression memory, so as to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
132. The memory of the claim 131, the memory further comprising: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
133. A picture-detecting method, the method comprising:
acquiring a to-be-detected picture that has been denoised, performing pixel-level semantic segmentation on a denoised to-be-detected picture, and recognizing a subject region image and a background region image;
performing hue space conversion on the to-be-detected picture, so as to output hue space data and brightness space data of the picture; and
fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant, wherein a compliant picture comprises any one or more of non-violent, non-pornographic, blank, and aesthetic features, and wherein the aesthetic features include centred image subjects.
134. The method of claim 133, the method further comprising: processing the brightness space data by means of plural binarization methods, so as to output plural binarization results correspondingly.
135. The method of the claim 134, the method further comprising: fusing the subject region image with the plural binarization results.
136. The method of the claim 135, the method further comprising: extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant.
137. The method of the claim 136, wherein acquiring a to-be-detected picture that has been denoised, and after pixel-level semantic segmentation, recognizing a subject region image and a background region image comprises: denoising the to-be-detected picture by means of a nonlinear filtering method; and performing pixel-level semantic segmentation on the denoised to-be-detected picture through a multi-channel deep residual fully convolutional network model, so as to recognize the subject region image and the background region image.

138. The method of the claim 137, wherein performing hue space conversion on the to-be-detected picture to output hue space data and brightness space data of the picture comprises:
using HSV hue space conversion method to convert the to-be-detected picture and output the hue space data of the picture, in which the hue space data include a hue space component H; and using LUV hue space conversion method to convert the to-be-detected picture and output the brightness space data of the picture, in which the brightness space data include a brightness space channel L.
139. The method of the claim 138, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: filtering edge pixels of the subject region image by means of a filter kernel, so as to dilate the subject region image.
140. The method of the claim 139, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: updating the part other than the dilated subject region image in the to-be-detected picture as the background region image.
141. The method of the claim 140, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: fusing the updated background region image with data of the hue space component H, and determining whether the background purity value corresponding to every pixel in the updated background region image is compliant to a first threshold.

142. The method of the claim 141, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is compliant.
143. The method of the claim 142, wherein fusing the subject region image after dilation processing with the hue space data, extracting a background purity value corresponding to every pixel in the background region image formed after dilation processing, and determining whether background purity of the to-be-detected picture is compliant comprises: determining that the background purity of the to-be-detected picture is non-compliant, and wherein the first threshold includes a first background purity threshold.
144. The method of the claim 143, wherein processing the brightness space data by means of the plural binarization methods to output the plural binarization results correspondingly comprises:
processing data of the brightness space channel L by means of a fixed-threshold binarization method, so as to obtain a first binarization result.
145. The method of the claim 144, wherein processing the brightness space data by means of the plural binarization methods to output the plural binarization results correspondingly comprises:
processing the data of the brightness space channel L by means of a Gaussian-window binarization method, so as to obtain a second binarization result.
146. The method of the claim 145, the method further comprising: performing non-coherence region suppression on the first binarization result and the second binarization result, respectively, by means of a non-maximum suppression method.
147. The method of the claim 146, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.
148. The method of the claim 147, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the first binarization result from the fusion results, and their corresponding background purity values.
149. The method of the claim 148, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
extracting coordinate values of the pixels belonging to the subject region image and the second binarization result from the fusion results, and their corresponding background purity values.
150. The method of the claim 149, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
summarizing and extracting the coordinate values of the pixels and their corresponding background purity values, and determining whether both the coordinate value of each pixel and its corresponding background purity value are compliant to a second threshold.

151. The method of the claim 150, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is compliant.
152. The method of the claim 151, wherein fusing the subject region image with the plural binarization results, respectively, extracting a coordinate value of every pixel in the fused subject region image and its corresponding background purity value, and determining whether a location of a subject in the to-be-detected picture is compliant comprises:
determining that the location of the subject in the to-be-detected picture is non-compliant;
and wherein the second threshold includes a second background purity threshold and a location coordinate interval threshold.
153. The method of the claim 152, the method further comprising: identifying a subject region image and a background region image in a denoised to-be-detected picture through pixel-level semantic segmentation, and performing hue space conversion on the picture to output hue space data and brightness space data of the picture.
154. The method of the claim 153, the method further comprising: dilating the subject region image to expand the range of its edge pixels, so as to ensure complete coverage of the subject region image.
155. The method of the claim 154, the method further comprising: fusing the dilated subject region image with the hue space data.
156. The method of the claim 155, the method further comprising: to detect the location of the subject in the picture, the brightness space data are first processed by means of the plural binarization methods so as to generate the plural binarization results.
157. The method of the claim 156, wherein the subject region image identified through pixel-level semantic segmentation is then fused with the plural binarization results, respectively.

158. The method of the claim 157, wherein based on the coordinate value of every pixel in the fused subject region image and its corresponding background purity value, whether the location of the subject in the to-be-detected picture is compliant can be determined.
159. The method of the claim 158, the method further comprising: employing HSV
(hue, saturation, value) hue space conversion method to convert the to-be-detected picture in the RGB (red, green and blue) color space into the HSV color space that is closer to human visual perceptual characteristics.
160. The method of the claim 159, wherein the conversion method includes:
$$V = \max(R, G, B)$$
$$S = \frac{\max(R, G, B) - \min(R, G, B)}{\max(R, G, B)}$$
$$H = \begin{cases} 60 \times \dfrac{G - B}{S \times V}, & \text{if } \max(R, G, B) = R \\ 60 \times \left(2 + \dfrac{B - R}{S \times V}\right), & \text{if } \max(R, G, B) = G \\ 60 \times \left(4 + \dfrac{R - G}{S \times V}\right), & \text{if } \max(R, G, B) = B \end{cases}$$
$$H = H + 360, \quad \text{if } H < 0$$
161. The method of the claim 160, wherein the features of brightness space data include extensive color gamut coverage, high visual consistency, and good capability of expressing color perception.
162. The method of the claim 161, wherein the conversion of the to-be-detected picture into brightness space data is implemented through: converting the to-be-detected picture from RGB space data into CIE XYZ space data; and converting the CIE XYZ space data into LUV brightness space data using the following conversion equations:
$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
$$L = \begin{cases} 116 \times \left(\dfrac{Y}{Y_n}\right)^{1/3} - 16, & \text{if } \dfrac{Y}{Y_n} > \left(\dfrac{6}{29}\right)^{3} \\ \left(\dfrac{29}{3}\right)^{3} \times \dfrac{Y}{Y_n}, & \text{otherwise} \end{cases}$$
$$u = 13 \times L \times (u' - u'_n), \qquad v = 13 \times L \times (v' - v'_n)$$
wherein u'_n and v'_n are light source constants, and Y_n is a preset fixed value; and wherein
$$u' = \frac{4X}{X + 15Y + 3Z}, \qquad v' = \frac{9Y}{X + 15Y + 3Z}$$
163. The method of the claim 162, wherein a round filter kernel k is used for filtering pixels of the subject region image.
164. The method of the claim 163, wherein the round filter kernel, when it has a diameter of 4, is represented by:
$$k = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} \times \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}$$
165. The method of the claim 164, wherein the filtering equation is:
$$B = P \oplus k = \left\{ (i, j) \mid Z_U(k) \cap P \neq \emptyset \right\}$$
wherein (i,j) represents the pixel coordinates, P represents the subject region image, Z_U(k) represents the background purity value corresponding to the pixel, Z_U represents the punctured neighborhood region corresponding to each pixel obtained using the round filter kernel k as the mask, B represents the dilated subject region image, wherein the background region image is updated when the subject region image is dilated, and D represents the updated background region image.
166. The method of the claim 165, wherein the background region image is fused with data of the hue space component H to generate a result C.
167. The method of the claim 166, wherein the fusion equation is as below:
$$C = \{ c(i, j) \}, \quad c(i, j) = \begin{cases} H(i, j), & (i, j) \in D \\ 0, & (i, j) \notin D \end{cases}$$
168. The method of the claim 166, wherein (i,j) represents pixel coordinates, H(i,j) represents the background purity value corresponding to the pixel in the hue space component H; and wherein when a pixel located at coordinates (i,j) belongs to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated.
169. The method of the claim 168, wherein when the pixel located at coordinates (i,j) does not belong to the dilated background region image D, the background purity value of the pixel in the hue space component H is valuated as zero.
170. The method of the claim 169, wherein the coordinates of the pixels and their corresponding background purity values are gathered to form an array C, which is the array composed of the location coordinates of the pixels in the background region image D and the correspondingly converted background purity values.
171. The method of the claim 170, wherein the predetermined first threshold is compared to the array C.
172. The method of the claim 170, wherein when all the background purity values corresponding to the individual pixels in the background region image D are smaller than the first background purity threshold, it is determined that the background purity of the to-be-detected picture is compliant.
173. The method of the claim 172, the method further comprising: processing data of the brightness space channel L by means of a fixed-threshold binarization method, so as to obtain a first binarization result T.
174. The method of the claim 173, the method further comprising: processing the data of the brightness space channel L by means of a Gaussian-window binarization method, so as to obtain a second binarization result G.
175. The method of the claim 174, the method further comprising: performing non-coherence region suppression on the first binarization result T and the second binarization result G, respectively, by means of the non-maximum suppression method, so as to nullify the impact of non-coherence regions caused by a complicated background on the detection results, thereby further improving detection precision.
176. The method of the claim 175, the method further comprising: fusing the subject region image recognized through pixel-level semantic segmentation with the first binarization result and the second binarization result, respectively.

CA3153067A 2019-09-02 2020-06-24 Picture-detecting method and apparatus Active CA3153067C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201910826006.7A CN110717865B (en) 2019-09-02 2019-09-02 Picture detection method and device
CN201910826006.7 2019-09-02
PCT/CN2020/097857 WO2021042823A1 (en) 2019-09-02 2020-06-24 Picture test method and device

Publications (2)

Publication Number Publication Date
CA3153067A1 CA3153067A1 (en) 2021-03-11
CA3153067C true CA3153067C (en) 2024-03-19

Family

ID=69210275

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3153067A Active CA3153067C (en) 2019-09-02 2020-06-24 Picture-detecting method and apparatus

Country Status (3)

Country Link
CN (1) CN110717865B (en)
CA (1) CA3153067C (en)
WO (1) WO2021042823A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717865B (en) * 2019-09-02 2022-07-29 苏宁云计算有限公司 Picture detection method and device
CN112991470B (en) * 2021-02-08 2023-12-26 上海通办信息服务有限公司 Certificate photo background color checking method and system under complex background
CN115126267A (en) * 2022-07-25 2022-09-30 中建八局第三建设有限公司 Optical positioning control system and method applied to concrete member embedded joint bar alignment
CN114996488B (en) * 2022-08-08 2022-10-25 北京道达天际科技股份有限公司 Skynet big data decision-level fusion method

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101686338B (en) * 2008-09-26 2013-12-25 索尼株式会社 System and method for partitioning foreground and background in video
CN103578098B (en) * 2012-08-07 2017-05-10 阿里巴巴集团控股有限公司 Method and device for extracting commodity body in commodity picture
CN102982335A (en) * 2012-12-21 2013-03-20 北京邮电大学 Intelligent safe multi-license-plate positioning identification method based on cellular neural network
WO2014205231A1 (en) * 2013-06-19 2014-12-24 The Regents Of The University Of Michigan Deep learning framework for generic object detection
CN104268537A (en) * 2014-10-14 2015-01-07 杭州淘淘搜科技有限公司 Main body detection method based on complex commodity images
CN105631455B (en) * 2014-10-27 2019-07-05 阿里巴巴集团控股有限公司 A kind of image subject extracting method and system
CN105139383B (en) * 2015-08-11 2018-08-10 北京理工大学 Medical image cutting method based on definition circle hsv color space
CN106611431A (en) * 2015-10-22 2017-05-03 阿里巴巴集团控股有限公司 An image detection method and apparatus
CN106407983A (en) * 2016-09-12 2017-02-15 南京理工大学 Image body identification, correction and registration method
KR101995523B1 (en) * 2017-12-14 2019-10-01 동국대학교 산학협력단 Apparatus and method for object detection with shadow removed
CN108447064B (en) * 2018-02-28 2022-12-13 苏宁易购集团股份有限公司 Picture processing method and device
CN113298845A (en) * 2018-10-15 2021-08-24 华为技术有限公司 Image processing method, device and equipment
CN109214367A (en) * 2018-10-25 2019-01-15 东北大学 A kind of method for detecting human face of view-based access control model attention mechanism
CN109544583B (en) * 2018-11-23 2023-04-18 广东工业大学 Method, device and equipment for extracting interested area of leather image
CN110717865B (en) * 2019-09-02 2022-07-29 苏宁云计算有限公司 Picture detection method and device

Also Published As

Publication number Publication date
WO2021042823A1 (en) 2021-03-11
CN110717865A (en) 2020-01-21
CA3153067A1 (en) 2021-03-11
CN110717865B (en) 2022-07-29


Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20220916
