CN111429535B - Method, system, equipment and medium for evaluating difference degree between clothes and background in image - Google Patents
- Publication number
- CN111429535B (application CN202010176413.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- area
- region
- color
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/90 — Determination of colour characteristics
- G06T5/70 — Denoising; Smoothing
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T7/11 — Region-based segmentation
- G06T7/194 — Segmentation involving foreground-background segmentation
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2207/10004 — Still image; Photographic image
- G06T2207/20032 — Median filtering
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of image processing, and in particular to a method, a system, equipment and a storage medium for evaluating the degree of difference between clothes and background in an image. The method comprises the following steps: separating a foreground region and a background region from an image to be detected; converting the image to be detected into a YCrCb model to obtain a converted image; detecting the converted image to determine a skin region; removing the skin region from the foreground region to obtain a clothing region; converting the image to be detected into a LAB model and calculating the color values of the clothing region and the background region respectively; and calculating the color distance between the two, judging whether the color distance is within a predetermined range, and judging that the imaging requirement is not met if it is. The invention can calculate the color distance between the clothing color and the background color in an image, effectively improving the certificate-photo yield of self-service photographing equipment and the user experience.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for evaluating the degree of difference between clothes and background in an image, a system for evaluating the degree of difference between clothes and background in an image, equipment comprising the system, and a storage medium storing the method.
Background
With the rapid development of society, the variety of certificates used by citizens has increased, and some certificates must carry a certificate photo, so the demand for taking such photos has grown steadily. To make it convenient for citizens to take certificate photos, self-service photographing devices have appeared on the market, and their application brings a very convenient experience to people who need to take certificate photos. However, certificate photos must follow certain rules, one of which is that the color of the clothes worn by the person in the photo cannot be the same as the background color.
Existing self-service photographing equipment has no function for comparing the color of the user's clothing with the background color. Ordinary users sometimes find it difficult to judge whether the colors of their clothes are suitable, and in the resulting photos the clothing color may be similar to the background color, so the generated photos cannot meet the relevant requirements. This affects the user experience to a certain extent and hinders further adoption of self-service photographing equipment in the market.
Disclosure of Invention
To overcome the above drawbacks, the present invention is directed to a method, a system, a device and a storage medium for evaluating the difference between the clothes color and the background color in an image produced by a self-service photographing device.
The aim of the invention is realized by the following technical scheme:
the invention relates to a method for evaluating the difference degree between clothes and background in an image, which comprises the following steps:
acquiring an image to be detected, and separating a foreground region and a background region of the image from the image to be detected;
converting the image to be detected from an RGB model to a YCrCb model to obtain a converted image; detecting the converted image by using a preset skin color detection model, and determining a skin area in the image to be detected;
separating the skin area from the foreground area to obtain a clothing area in the image to be detected;
converting the image to be detected from an RGB model to an LAB model, and respectively calculating color values of the clothes region and the background region;
and calculating the color distance between the clothes area and the background area, judging whether the color distance is within a preset range, and judging that the imaging requirement is not met if the color distance is within the preset range.
In the present invention, after the judgment is that the imaging requirements are not met, the method includes:
and prompting that the imaging requirements are not met.
In the present invention, the calculating the color values of the clothing region and the background region, respectively, further includes:
the clothing region is divided into a plurality of sub-regions, and the area of each sub-region is calculated.
In the present invention, the calculating the color distance between the clothing region and the background region and determining whether the color distance is within a predetermined range includes:
calculating the color distance between the background region and each sub-region of the clothing region, normalizing the color distances, and taking statistics to obtain the minimum-distance sub-region, i.e. the sub-region with the smallest color distance; and judging whether the color distance of the minimum-distance sub-region is within the predetermined range.
In the present invention, the determining whether the color distance of the minimum distance sub-region is within a predetermined range includes:
calculating the area ratio of the minimum distance subarea in the clothes area, judging whether the area ratio reaches a preset area ratio, and if the area ratio reaches the preset area ratio, judging whether the color distance of the minimum distance subarea is within a preset range.
In the invention, before the image to be detected is acquired, the method comprises the following steps:
an original image is obtained through a camera, and an image to be detected is selected from the original image according to a positioning frame with preset parameters.
In the present invention, the determining whether the color distance is within a predetermined range further includes:
if the color distance is not within the preset range, judging that the imaging requirements are met;
and generating a certificate according to the original image.
The invention also relates to a system for evaluating the degree of difference between clothes and background in an image, which comprises the following modules:
the image acquisition module is used for converting an original image into an image to be detected;
the foreground and background separation module is connected with the image acquisition module and is used for separating a foreground area and a background area of an image from the image to be detected;
the skin area determining module is connected with the image acquisition module and used for converting the image to be detected from an RGB model to a YCrCb model to obtain a converted image; detecting the converted image by using a preset skin color detection model, and determining a skin area in the image to be detected;
the clothing region determining module is respectively connected with the skin region determining module and the foreground and background separating module and is used for separating the skin region from the foreground region to obtain a clothing region in the image to be detected;
the color value calculation module is respectively connected with the clothes region determination module, the foreground and background separation module and the image acquisition module, and is used for converting the image to be detected from an RGB model to an LAB model and respectively calculating color values of the clothes region and the background region;
the color distance judging module is connected with the color value calculating module and is used for calculating the color distance between the clothes area and the background area and judging whether the color distance is within a preset range, and if the color distance is within the preset range, the imaging requirement is not met; and if the color distance is not within the preset range, judging that the imaging requirements are met.
The invention further relates to a self-service photographing apparatus, comprising: the above system for evaluating the difference between clothes and background in an image, a camera, a prompter and a certificate-photo generator;
the camera is connected with the image acquisition module and used for generating an original image;
the prompter is connected with the color distance judging module and is used for prompting that the imaging requirements are not met when the imaging requirements are not met;
the certificate photo generator is respectively connected with the color distance judging module and the camera and is used for generating certificate photo according to the original image when the imaging requirement is met.
The present invention is a computer readable program storage medium storing computer program instructions which, when executed by a computer, cause the computer to perform a method as described above.
The method and device can calculate the color distance between the clothing color and the background color in an image, making it possible for the equipment to automatically remind the user about the clothing color so that the generated certificate photo meets the relevant standard. This effectively improves the certificate-photo yield of self-service photographing equipment and the user experience, and favors further adoption of such equipment in the market.
Drawings
For ease of illustration, the invention is described in detail by the following preferred embodiments and the accompanying drawings.
FIG. 1 is a schematic workflow diagram of one embodiment of a method for evaluating the degree of difference between clothes and background in an image according to the present invention;
FIG. 2 is a schematic workflow diagram of another embodiment of a method for evaluating the degree of difference between clothes and background in an image according to the present invention;
FIG. 3 is a schematic diagram showing a logic structure of an embodiment of a system for evaluating the difference between clothes and background in an image according to the present invention;
fig. 4 is a schematic diagram of a logic structure of the self-service photographing apparatus of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise expressly specified and defined, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; it may also be internal communication between two elements or an interaction between two elements. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the following, a method for evaluating the difference between clothes and background in an image according to an embodiment of the present invention will be described with reference to fig. 1, which includes:
S101, separating a foreground region and a background region
Acquiring an image to be detected, and separating a foreground region and a background region of the image from the image to be detected; in this embodiment, the foreground region refers to the region of the photographed person's body in the photo, which includes the skin region and the clothing region, while all other areas belong to the background region.
The original image is processed to obtain the image to be detected I_o. Median filtering is applied to I_o to smooth it, yielding the filtered image I_v. Gradients are then extracted from I_v with the Sobel method in the x and y directions as I_tx and I_ty, and the two are added to obtain the gradient image I_t. (The Sobel operator detects edges by taking weighted differences between the gray values in the four neighborhoods, up, down, left and right, of each pixel; the response reaches an extremum at edges.) Next, 1 is added to every pixel of I_t; 4 to 8 rows of I_t are cleared to zero, one point in those rows is chosen at random as a seed, and a flood-fill is performed, each pixel of the filled region being marked with the value 1. Finally, taking 1 as the threshold, binarization according to whether each value is greater than 1 or smaller than 2 yields the foreground region I_fg and the background region I_bg of the image to be detected.
S102, determining skin area in image to be detected
Converting the image to be detected from the RGB model to the YCrCb model to obtain a converted image. YCrCb is a form of YUV, in which "Y" represents luminance, i.e. the gray-scale value, while "U" and "V" represent chrominance, which describes the color and saturation of a pixel. Luminance is built from the RGB input signals by superimposing specific parts of the RGB signal. Chrominance defines two aspects of color, hue and saturation, denoted Cr and Cb respectively: Cr reflects the difference between the red part of the RGB input signal and the luminance of the RGB signal, while Cb reflects the difference between the blue part of the RGB input signal and the luminance of the RGB signal. The converted image is then detected with a preset skin-color detection model to determine the skin region in the image to be detected.
Specifically: the image to be detected I_o is converted from the RGB space to the YCrCb color space to obtain the image I_oy; an elliptical skin-color model is trained in the YCrCb color space from existing photo samples, and this skin-color model is used to detect the skin-color region in I_oy, obtaining the skin region I_skin.
S103, obtaining a clothing region in the image to be detected
Separating the skin region from the foreground region to obtain the clothing region in the image to be detected.
Specifically: the skin region I_skin is subtracted from the foreground region I_fg, giving a foreground region I_fgo that contains no skin; then, taking the lower boundary of the face positioning frame as the dividing line, the part of I_fgo below the person's chin is segmented out, obtaining the clothing region I_clo.
S104, calculating color values of the clothes area and the background area
The image to be detected is converted from the RGB model to the LAB model. The LAB color model consists of three components: the luminance L, and the two color channels a and b. The a channel runs from dark green through gray to bright pink; the b channel runs from bright blue through gray to yellow. The color values of the clothing region and of the background region are then calculated respectively.
Specifically: the image to be detected I_o is converted from the RGB space to the LAB color space, yielding the LAB image I_lab. With the background region I_bg as a template constraint, the color value Vbg_lab of the background region is computed on I_lab; with the clothing region I_clo as a template constraint, the color value of the clothing region is computed on I_lab.
S105, judging whether the color distance is within a preset range
Calculating the color distance l_t between the clothing region and the background region, and judging whether l_t is within a predetermined range, the predetermined range being set empirically.
S106, defining that the imaging requirements are not met
If the color distance is within the predetermined range, the background color is similar to the clothing color and the imaging requirement is not met, so that in a subsequent step the device can automatically remind the user based on this judgment.
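Steps S105 and S106 might then reduce to the few lines below; the sample LAB values and the 0.12 threshold are purely illustrative, since the patent only says the range is set empirically.

```python
import numpy as np

def too_similar(lab_clothes, lab_background, threshold=0.12):
    """Normalised Euclidean LAB distance; a distance inside the threshold
    means the clothes are too close to the background colour to pass."""
    d = float(np.linalg.norm(np.asarray(lab_clothes, dtype=float) -
                             np.asarray(lab_background, dtype=float))) / 255.0
    return d <= threshold, d

# Nearly matching colours -> fails the imaging requirement.
fail, d = too_similar([52.0, 8.0, -6.0], [54.0, 9.0, -4.0])
# Clearly different colours -> passes.
ok, _ = too_similar([30.0, 20.0, 10.0], [90.0, -5.0, 40.0])
```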
The method for evaluating the degree of difference between clothes and background in an image can be applied to self-service photographing equipment, so that a user whose clothing color does not meet the requirements can be reminded before the certificate photo is produced; it can also be applied to certification equipment to automatically screen whether an electronic photo provided by a user meets the clothing-color requirement.
The following describes a method for evaluating the difference between clothes and background in an image according to an embodiment applied in a self-service photographing apparatus, referring to fig. 2, which comprises:
S201, selecting an image to be detected from the original image
An original image is obtained through a camera, and an image to be detected is selected from the original image according to a positioning frame with preset parameters.
Specifically: a positioning frame with predetermined parameters (x_lftp, y_lftp, width, height) is set empirically, where x_lftp is the abscissa of its upper-left corner, y_lftp the ordinate of its upper-left corner, width the frame width and height the frame height, and the size of the original image is (width_s, height_s). The positioning frame (x_o, y_o, width_o, height_o) of the image to be detected I_o on the original image is solved by the formulas below; if a result exceeds the range of the original image, it is limited by the actual edges of the original image:
x_o = x_lftp − width
y_o = y_lftp − height
width_o = x_lftp + width − x_o
height_o = height_s
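Transcribed as code, with clamping to the image edges (the patent says results beyond the original image range are limited by the actual edges; the exact clamping below is our interpretation):

```python
def detect_crop(x_lftp, y_lftp, width, height, width_s, height_s):
    """Crop box of the image to be detected, following the patent's
    formulas, clamped to the original image edges (our interpretation)."""
    x_o = max(x_lftp - width, 0)
    y_o = max(y_lftp - height, 0)
    width_o = min(x_lftp + width - x_o, width_s - x_o)
    height_o = height_s - y_o      # patent sets height_o = height_s; clamped
    return x_o, y_o, width_o, height_o

box1 = detect_crop(300, 200, 150, 180, 800, 1000)   # interior frame
box2 = detect_crop(100, 50, 150, 180, 800, 1000)    # frame near the edge
```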
S202, separating a foreground region and a background region
Acquiring an image to be detected, and separating a foreground region and a background region of the image from the image to be detected; in this embodiment, the foreground region refers to the region of the photographed person's body in the photo, which includes the skin region and the clothing region, while all other areas belong to the background region.
The original image is processed to obtain the image to be detected I_o. Median filtering is applied to I_o to smooth it, yielding the filtered image I_v. Gradients are then extracted from I_v with the Sobel method in the x and y directions as I_tx and I_ty, and the two are added to obtain the gradient image I_t. (The Sobel operator detects edges by taking weighted differences between the gray values in the four neighborhoods, up, down, left and right, of each pixel; the response reaches an extremum at edges.) Next, 1 is added to every pixel of I_t; 4 to 8 rows of I_t are cleared to zero, one point in those rows is chosen at random as a seed, and a flood-fill is performed, each pixel of the filled region being marked with the value 1. Finally, taking 1 as the threshold, binarization according to whether each value is greater than 1 or smaller than 2 yields the foreground region I_fg and the background region I_bg of the image to be detected.
S203, determining skin area in image to be detected
Converting the image to be detected from the RGB model to the YCrCb model to obtain a converted image. YCrCb is a form of YUV, in which "Y" represents luminance, i.e. the gray-scale value, while "U" and "V" represent chrominance, which describes the color and saturation of a pixel. Luminance is built from the RGB input signals by superimposing specific parts of the RGB signal. Chrominance defines two aspects of color, hue and saturation, denoted Cr and Cb respectively: Cr reflects the difference between the red part of the RGB input signal and the luminance of the RGB signal, while Cb reflects the difference between the blue part of the RGB input signal and the luminance of the RGB signal. The converted image is then detected with a preset skin-color detection model to determine the skin region in the image to be detected.
Specifically: the image to be detected I_o is converted from the RGB space to the YCrCb color space to obtain the image I_oy; an elliptical skin-color model is trained in the YCrCb color space from existing photo samples, and this skin-color model is used to detect the skin-color region in I_oy, obtaining the skin region I_skin.
S204, obtaining a clothing region in the image to be detected
Separating the skin region from the foreground region to obtain the clothing region in the image to be detected.
Specifically: the skin region I_skin is subtracted from the foreground region I_fg, giving a foreground region I_fgo that contains no skin; then, taking the lower boundary of the face positioning frame as the dividing line, the part of I_fgo below the person's chin is segmented out, obtaining the clothing region I_clo.
S205, calculating the color value of the background area
The image to be detected is converted from the RGB model to the LAB model. The LAB color model consists of three components: the luminance L, and the two color channels a and b. The a channel runs from dark green through gray to bright pink; the b channel runs from bright blue through gray to yellow. The color value of the background region is then calculated.
Specifically: the image to be detected I_o is converted from the RGB space to the LAB color space, yielding the LAB image I_lab; with the background region I_bg as a template constraint, the color value Vbg_lab of the background region is computed on I_lab.
S206, calculating color values and areas of all subregions of the clothes
Dividing the clothing region into a plurality of sub-regions, and calculating the area and color value of each sub-region.
Specifically: with the clothing region I_clo as a template constraint, the pixels of the region are clustered using the K-means method (K-means clustering algorithm); k = 8 is taken in this embodiment. The pixel clusters of the clothing region are divided into k sub-regions, and the area of each sub-region {A_1, A_2, …, A_k} is counted. Then, with each sub-region as a template constraint, the LAB color mean {V_1, V_2, …, V_k} of each sub-region is counted. The Euclidean distances between the LAB colors of the sub-regions are compared pairwise; if a distance is smaller than a set threshold T_a, the two sub-regions are merged into a new clustered sub-region, and its area A_t and color mean V_t are recalculated. This comparison and merging is repeated until the color distance between any two of the new sub-regions is greater than the threshold T_a, forming k′ new sub-regions with areas {A_1, A_2, …, A_k′} and color means {V_1, V_2, …, V_k′}.
S207, obtaining the color distance and area ratio of the minimum distance subarea
The color distance l_t between the background region and each sub-region of the clothing region is calculated and normalized as l_t = l_t / 255, and the color distance l_ret of the minimum-distance sub-region, the one with the smallest color distance, is recorded. The area ratio of the minimum-distance sub-region within the clothing region is then calculated as P_ret = A_ret / A_total, where A_total is the area of the whole clothing region.
S208, judging whether the area ratio reaches a preset area ratio
Whether the area ratio reaches the preset area ratio is judged. If it does, the minimum-distance sub-region occupies a large enough share of the clothing region for its color distance to affect the credential photo, so step S209 is performed: judging whether the color distance of the minimum-distance sub-region is within the preset range. In this embodiment, a distance threshold T_lret and an area-ratio threshold T_pret are set jointly and compared with the sub-region's color distance l_ret and area ratio P_ret to judge the similarity result: a sub-region below a certain size has no significant effect on the photo, and setting T_pret in this way addresses that problem.
S209, judging whether the color distance of the minimum distance subarea is within a preset range
Whether the color distance of the minimum-distance sub-region is within the preset range is judged. If it is within the preset range, the image is judged not to meet the imaging requirement, and step S210 is performed to prompt that the imaging requirement is not met; if it is not within the preset range, the image is judged to meet the imaging requirement, and step S211 is performed to generate the credential photo from the original image.
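The two-stage decision of steps S208 and S209 can be sketched as a single predicate; the threshold values below are illustrative assumptions, since the patent does not fix T_pret or T_lret numerically:

```python
def meets_imaging_requirement(l_ret, p_ret, t_pret=0.05, t_lret=0.1):
    """Combined S208/S209 decision. Only when the minimum-distance sub-region
    occupies enough of the garment (p_ret >= t_pret) can it spoil the photo;
    in that case the image fails when its normalized color distance l_ret
    falls below t_lret, i.e. the garment color is too close to the background."""
    if p_ret < t_pret:
        return True   # sub-region too small to matter (S208 short-circuits)
    return l_ret >= t_lret
```

A self-service kiosk would call this after S207 and branch to the prompt (S210) on `False` or to credential-photo generation (S211) on `True`.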
S210, prompting that imaging requirements are not met
The information that the clothing color does not meet the imaging requirement is prompted by sound, an indicator light, or an on-screen display, so that the user can change clothes in time.
S211, generating certificate photos according to original images
The credential photo is generated from the original image; the generation modes include printing the credential photo, uploading it to a network, and so on. Because in this embodiment the credential photo is generated only when the imaging requirement is met, the production of unqualified credential photos is effectively reduced, the qualification rate of credential photos is effectively improved, and the user experience is improved.
Referring to fig. 3, the present invention is a system for evaluating the difference between clothes and background in an image, comprising:
an image acquisition module 101, wherein the image acquisition module 101 is used for converting an original image into an image to be detected;
the foreground and background separation module 102, connected with the image acquisition module 101 and used for separating the foreground region and the background region of the image from the image to be detected, wherein the foreground region refers to the region of the photographed person's body in the photograph and comprises a skin region and a clothing region, while the other regions belong to the background region;
a skin area determining module 103, connected to the image acquisition module 101 and configured to convert the image to be detected from the RGB model to the YCrCb model to obtain a converted image. YCrCb (also written YUV) separates luminance from chrominance: "Y" represents brightness, i.e. the grayscale value, while the chrominance components describe the hue and saturation of a pixel's color. The brightness is established from the RGB input signals by superimposing specific portions of the RGB signals; the chrominance defines the two aspects of color, hue and saturation, denoted by Cr and Cb respectively, where Cr reflects the difference between the red portion of the RGB input signal and the luminance value of the RGB signal, and Cb reflects the difference between the blue portion of the RGB input signal and the luminance value of the RGB signal. The converted image is then detected with a preset skin color detection model to determine the skin area in the image to be detected;
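A minimal numpy sketch of this RGB → YCrCb conversion, using the full-range ITU-R BT.601 coefficients (an assumption, since the patent does not name a specific variant; the Cr/Cb terms implement exactly the red-difference and blue-difference described above):

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCrCb
    (full-range ITU-R BT.601 coefficients)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (grayscale value)
    cr = (r - y) * 0.713 + 128.0            # red-difference chroma
    cb = (b - y) * 0.564 + 128.0            # blue-difference chroma
    return np.stack([y, cr, cb], axis=-1)
```

A typical skin color detection model then thresholds the (Cr, Cb) plane, where skin tones cluster largely independently of brightness.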
a clothing region determining module 104, where the clothing region determining module 104 is connected to the skin region determining module 103 and the foreground-background separating module 102, respectively, and is configured to separate the skin region from the foreground region, so as to obtain a clothing region in the image to be detected;
the color value calculating module 105 is respectively connected with the clothes region determining module 104, the foreground and background separating module 102 and the image obtaining module 101, and is used for converting the image to be detected from an RGB model to a LAB model, and respectively calculating color values of the clothes region and the background region;
a color distance judging module 106, connected to the color value calculating module 105 and configured to calculate the color distance between the clothing region and the background region and judge whether the color distance is within a predetermined range: if the color distance is within the predetermined range, the background color is similar to the clothing color and it is judged that the imaging requirement is not met; if the color distance is not within the predetermined range, it is judged that the imaging requirement is met.
The system for evaluating the degree of difference between clothing and background in an image can be applied to self-service photographing equipment, so that a user whose clothing color does not meet the requirements can be reminded before the credential photo is produced; it can also be applied to credential-photo verification equipment for automatically screening whether an electronic photo provided by a user meets the clothing color requirement.
Referring to fig. 4, the present invention is a self-service photographing apparatus, comprising: the system 100 for evaluating the degree of difference between clothing and background in an image as described above, a camera 200, a prompter 300, and a credential photo generator 400;
the camera 200 is connected with an image acquisition module in the evaluation system 100, and is used for generating an original image;
the prompter 300 is connected with the color distance judging module in the evaluation system 100 and is used for prompting when the imaging requirement is not met; the prompter 300 may include a horn, a display, indicator lights, and the like;
the credential photo generator 400 is respectively connected to the color distance judging module in the evaluation system 100 and the camera 200, and is configured to generate a credential photo from the original image when the imaging requirement is met; the generation modes include printing the credential photo, uploading it to a network, and so on.
The present invention includes a computer readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the invention as described in the "exemplary methods" section of this specification, when said program product is run on the terminal device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on the above readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (7)
1. A method for evaluating the degree of difference between clothes and background in an image, comprising:
acquiring an image to be detected, and separating a foreground region and a background region of the image from the image to be detected;
converting the image to be detected from an RGB model to a YCrCb model to obtain a converted image; detecting the converted image by using a preset skin color detection model, and determining a skin area in the image to be detected;
separating the skin area from the foreground area to obtain a clothing area in the image to be detected;
converting the image to be detected from an RGB model to an LAB model, respectively calculating color values of the clothes region and the background region, carrying out clustering treatment on each pixel point in the clothes region according to a K-means clustering algorithm, further dividing the clothes region into a plurality of sub-regions, and calculating the area of each sub-region according to each pixel point in the clothes region; sequentially calculating the color distance between the background area and each sub-area in the clothes area, carrying out normalization processing on the color distance, and carrying out statistics to obtain a minimum distance sub-area with the minimum color distance; calculating the area ratio of the minimum distance subarea in the clothes area, judging whether the area ratio reaches a preset area ratio, if so, judging whether the color distance of the minimum distance subarea is within a preset range, and if so, judging that the imaging requirement is not met.
2. The method for evaluating the difference between clothes and the background in the image according to claim 1, wherein after the judgment is that the imaging requirement is not met, the method comprises the steps of:
and prompting the result which does not meet the imaging requirement.
3. The method for evaluating the difference between clothes and the background in the image according to claim 2, wherein the step of acquiring the image to be detected comprises the steps of:
an original image is obtained through a camera, and an image to be detected is selected from the original image according to a positioning frame with preset parameters.
4. The method for evaluating the degree of difference between clothes and background in an image according to claim 3, wherein said determining whether said color distance is within a predetermined range further comprises:
if the color distance calculation result is not within the preset range, judging that the imaging requirement is met;
and generating a certificate according to the original image.
5. A system for evaluating the degree of difference between clothing and background in an image, comprising:
the image acquisition module is used for converting an original image into an image to be detected;
the foreground and background separation module is connected with the image acquisition module and is used for separating a foreground area and a background area of an image from the image to be detected;
the skin area determining module is connected with the image acquisition module and used for converting the image to be detected from an RGB model to a YCrCb model to obtain a converted image; detecting the converted image by using a preset skin color detection model, and determining a skin area in the image to be detected;
the clothing region determining module is respectively connected with the skin region determining module and the foreground and background separating module and is used for separating the skin region from the foreground region to obtain a clothing region in the image to be detected;
the color value calculation module is respectively connected with the clothes region determination module, the foreground and background separation module and the image acquisition module, and is used for converting the image to be detected from an RGB model to an LAB model and respectively calculating color values of the clothes region and the background region;
the color distance judging module is connected with the color value calculating module and is used for dividing the clothing region into a plurality of sub-regions and calculating the area of each sub-region according to each pixel point in the clothing region; sequentially calculating the color distance between the background area and each sub-area in the clothes area, carrying out normalization processing on the color distance, and carrying out statistics to obtain a minimum distance sub-area with the minimum color distance; calculating the area ratio of the minimum distance subarea in the clothes area, judging whether the area ratio reaches a preset area ratio, if so, judging whether the color distance of the minimum distance subarea is within a preset range, and if so, judging that the imaging requirement is not met.
6. A self-service photographing apparatus, comprising: the system for evaluating the degree of difference between clothes and background in an image according to claim 5, and a camera, a prompter and a credential generator;
the camera is connected with the image acquisition module and used for generating an original image;
the prompter is connected with the color distance judging module and is used for prompting the result which does not meet the imaging requirement when the color distance judging module does not meet the imaging requirement;
the certificate photo generator is respectively connected with the color distance judging module and the camera and is used for generating certificate photo according to the original image when the imaging requirement is met.
7. A computer readable program storage medium, characterized in that it stores computer program instructions, which when executed by a computer, cause the computer to perform the method according to any of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010176413.0A CN111429535B (en) | 2020-03-13 | 2020-03-13 | Method, system, equipment and medium for evaluating difference degree between clothes and background in image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111429535A CN111429535A (en) | 2020-07-17 |
CN111429535B true CN111429535B (en) | 2023-09-08 |
Family
ID=71547897
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473780A (en) * | 2013-09-22 | 2013-12-25 | 广州市幸福网络技术有限公司 | Portrait background cutout method |
CN105825161A (en) * | 2015-01-07 | 2016-08-03 | 阿里巴巴集团控股有限公司 | Image skin color detection method and system thereof |
CN106558046A (en) * | 2016-10-31 | 2017-04-05 | 深圳市飘飘宝贝有限公司 | A kind of quality determining method and detection means of certificate photo |
WO2017092431A1 (en) * | 2015-12-01 | 2017-06-08 | 乐视控股(北京)有限公司 | Human hand detection method and device based on skin colour |
CN110222555A (en) * | 2019-04-18 | 2019-09-10 | 江苏图云智能科技发展有限公司 | The detection method and device of area of skin color |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8861847B2 (en) * | 2012-12-21 | 2014-10-14 | Intel Corporation | System and method for adaptive skin tone detection |
Non-Patent Citations (1)
Title |
---|
Research and improvement of a color feature extraction algorithm in a content-based clothing retrieval system; Chen Qian et al.; Laser Journal; Vol. 37, No. 4; pp. 62-68 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||