CN110415237B - Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium - Google Patents

Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium Download PDF

Info

Publication number
CN110415237B
CN110415237B
Authority
CN
China
Prior art keywords
detected
pixel point
area
image
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910698746.7A
Other languages
Chinese (zh)
Other versions
CN110415237A (en
Inventor
康健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910698746.7A
Publication of CN110415237A
Application granted
Publication of CN110415237B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The application provides a skin defect detection method, a detection device, a terminal device and a readable storage medium. The method comprises the following steps: acquiring an image to be processed containing a portrait, and determining a skin area of the portrait; determining a detection radius based on the size of the portrait relative to the image to be processed; selecting pixel points as pixel points to be detected in the skin area; determining a to-be-detected ring corresponding to the to-be-detected pixel point; determining the gray value of each pixel point on the to-be-detected circular ring; calculating the variance of the gray values of the pixels on the to-be-detected ring based on the gray values of the pixels on the to-be-detected ring, and determining whether the number of gray values in the to-be-detected ring, which are greater than or less than the gray values of the pixels to be detected, reaches a preset number; and if the number reaches the preset number and the variance is smaller than the preset variance, determining the pixel point to be detected as a defect point. The parallel processing capacity of the GPU can be fully utilized.

Description

Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium
Technical Field
The present application belongs to the field of computers, and in particular, relates to a skin defect detection method, a skin defect detection apparatus, a terminal device, and a computer-readable storage medium.
Background
The currently common skin defect detection method is as follows: and detecting edges of the image by adopting a Difference of Gaussian (DoG) operator or a Laplacian of Gaussian (LoG) operator, and then rejecting the non-defective region detected by the DoG operator or the LoG operator based on the property of the connected domain, thereby obtaining the defective region.
However, the method of excluding falsely detected regions based on the properties of connected domains cannot be parallelized, and therefore cannot fully utilize the parallel processing capability of the Graphics Processing Unit (GPU), which results in a waste of GPU resources.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide a skin defect detection method, a skin defect detection apparatus, a terminal device, and a computer-readable storage medium, which can utilize the parallel processing capability of the GPU to detect a skin defect area.
A first aspect of an embodiment of the present application provides a skin defect detection method, including:
acquiring an image to be processed containing a portrait, and determining the position information of a skin area of the portrait in the image to be processed;
determining a detection radius based on the size of the portrait relative to the image to be processed;
selecting pixel points as pixel points to be detected in the skin area indicated by the position information;
determining a to-be-detected ring corresponding to the to-be-detected pixel point, wherein the to-be-detected ring takes the to-be-detected pixel point as a circle center and takes the detection radius as a radius;
determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in the YUV domain;
calculating the variance of the gray values of the pixels on the to-be-detected ring based on the gray values of the pixels on the to-be-detected ring, and determining whether the number of the gray values of the pixels in the to-be-detected ring, which are larger or smaller than the gray value of the pixels to be detected, reaches a preset number, wherein the gray value of the pixels to be detected is the Y value of the pixels to be detected in a YUV domain;
and if the number reaches the preset number and the variance is smaller than the preset variance, determining the pixel point to be detected as a defect point.
A second aspect of an embodiment of the present application provides a skin defect detection apparatus, including:
the skin determining module is used for acquiring an image to be processed containing a portrait and determining the position information of a skin area of the portrait in the image to be processed;
the radius determining module is used for determining a detection radius based on the size of the portrait relative to the image to be processed;
a point-to-be-detected selection module, configured to select a pixel point as a pixel point to be detected in the skin area indicated by the location information;
a to-be-detected ring determining module, configured to determine a to-be-detected ring corresponding to the to-be-detected pixel point, where the to-be-detected ring is a ring with the to-be-detected pixel point as a circle center and the detection radius as a radius;
the gray value determining module is used for determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in the YUV domain;
the variance and comparison module is used for calculating the variance of the gray values of the pixels on the to-be-detected ring based on the gray values of the pixels on the to-be-detected ring, and determining whether the number of the gray values of the pixels to be detected, which are larger or smaller than the gray values of the pixels to be detected, in the to-be-detected ring reaches a preset number, wherein the gray values of the pixels to be detected are Y values of the pixels to be detected in a YUV domain;
and a defect point determining module, configured to determine that the pixel point to be detected is a defect point if the number reaches the preset number and the variance is smaller than a preset variance.
A third aspect of the embodiments of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the skin defect detection method according to the first aspect when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the skin defect detection method according to the first aspect.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the skin defect detection method of the first aspect as described above.
Thus, the application provides a skin defect detection method. And selecting a pixel point to be detected in the skin area of the image to be processed, and determining whether the pixel point to be detected is a defect point or not based on the gray value of the pixel point to be detected and the gray value of each pixel point on the circular ring to be detected. Therefore, in the skin defect detection method provided by the application, when determining whether a certain pixel point is a defect point, it is completely unnecessary to know whether other pixel points are defect points, and therefore, the technical scheme provided by the application is not a defect point serial detection method, so that when the technical scheme provided by the application is adopted to simultaneously detect whether a plurality of pixel points are defect points, the GPU can be used for simultaneously and parallelly detecting the pixel points to determine whether each pixel point is a defect point. Therefore, the method provided by the application can fully utilize the parallel processing capacity of the GPU, and avoids the waste of GPU resources to a certain extent.
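The per-pixel independence described above can be illustrated with a vectorized sketch in which every pixel's ring test is evaluated simultaneously, as a CPU stand-in for a GPU kernel. All names below are illustrative assumptions, and rounding the ring sample positions to integer pixel offsets is a simplification of the up-sampling discussed later in the embodiments; this is a sketch, not the patented implementation.

```python
import math

import numpy as np

def detect_defects(gray, r, preset_count, preset_var):
    """Classify every pixel independently: a pixel is a candidate defect
    point when enough ring samples lie on one side of its gray value and
    the ring's gray-value variance is below a threshold."""
    m = math.ceil(2 * math.pi * r)                  # samples per ring
    angles = 2 * np.pi * np.arange(m) / m
    dx = np.rint(r * np.cos(angles)).astype(int)    # integer ring offsets
    dy = np.rint(r * np.sin(angles)).astype(int)    # (a simplification)

    h, w = gray.shape
    pad = int(math.ceil(r)) + 1
    padded = np.pad(gray.astype(np.float64), pad, mode="edge")
    # One shifted copy of the image per ring sample: shape (m, h, w),
    # so every pixel's ring is gathered without any per-pixel loop.
    ring = np.stack([padded[pad + oy:pad + oy + h, pad + ox:pad + ox + w]
                     for ox, oy in zip(dx, dy)])

    var = ring.var(axis=0)
    ring_brighter = (ring > gray).sum(axis=0)       # centre darker: dark flaw
    ring_darker = (ring < gray).sum(axis=0)         # centre brighter: light flaw
    count_ok = np.maximum(ring_brighter, ring_darker) >= preset_count
    return count_ok & (var < preset_var)
```

Because each output pixel depends only on its own ring, this loop-free structure maps directly onto one GPU thread per pixel.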
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for the embodiments or the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a skin defect detection method according to an embodiment of the present application;
FIG. 2 is a table showing a relationship between portrait size and detection radius according to an embodiment of the present application;
FIG. 3 is a table showing the relationship between portrait size and detection radius according to another embodiment of the present application;
FIG. 4 is a schematic view of a ring to be detected according to an embodiment of the present application;
fig. 5 is a schematic flow chart illustrating an implementation of another skin defect detection method according to the second embodiment of the present application;
FIG. 6 is a schematic diagram of determining a third defective area according to the second embodiment of the present application;
fig. 7 is a schematic structural diagram of a skin defect detection apparatus according to a third embodiment of the present application;
fig. 8 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The skin defect detection method provided by the embodiment of the application can be applied to terminal equipment, and the terminal equipment includes, but is not limited to: smart phones, tablet computers, notebooks, smart wearable devices, desktop computers, and the like.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Example one
Referring to fig. 1, a skin defect detecting method according to an embodiment of the present application is described below, including:
in step S101, acquiring an image to be processed including a portrait, and determining position information of a skin area of the portrait in the image to be processed;
In the embodiment of the present application, in order to detect blemishes (e.g., pox marks and/or speckles) on the skin, step S101 first determines the position information of the skin area in the image to be processed. Specifically, a neural network model for detecting skin regions may be trained in advance, and the image to be processed is passed through this neural network model to determine the position information of the skin area; alternatively, prompt information may be output to remind the user to select the skin area of the portrait in the image to be processed, and the image area selected by the user is then determined as the skin area, thereby determining the position information of the skin area.
In step S102, determining a detection radius based on a size of the portrait relative to the image to be processed;
In the embodiment of the present application, the detection radii corresponding to different portrait sizes need to be determined in advance, so as to obtain a correspondence curve or a correspondence table between portrait size and detection radius, which is stored in the terminal device in advance; in step S102, the detection radius is then determined from this pre-stored correspondence curve or correspondence table.
Fig. 2 schematically shows a predetermined correspondence table between portrait size and detection radius, in which the detection radii corresponding to different portrait sizes are recorded. If the portrait in the image to be processed acquired in step S101 occupies 30% of the area of that image, the detection radius determined in step S102 is 5 pixels long according to the correspondence table shown in fig. 2.
The following illustrates how to determine the correspondence table of portrait size-detection radius:
first, a plurality of images containing a portrait may be acquired, such as image A, image B, image C, and image D;
secondly, the size of the portrait in each image relative to that image is determined: for example, the portrait in image A occupies 20% of the area of image A, the portrait in image B occupies 30% of the area of image B, the portrait in image C occupies 40% of the area of image C, and the portrait in image D occupies 40% of the area of image D;
then, the defect radius of the portrait in image A is determined, for example, 2 pixels long (this may be the maximum defect radius of the portrait in image A, or the average of the defect radii of the portrait in image A); likewise, the defect radius of the portrait in image B is determined to be, for example, 5 pixels long, that in image C 10 pixels long, and that in image D 12 pixels long;
finally, since the defect radius of the portrait in image A is 2 pixels long, it can be determined that, if the occupied area of the portrait is 20%, the detection radius may be 2.5 or 3 pixels long (in the embodiment of the present application, the detection radius should be slightly larger than the defect radius); since the defect radius of the portrait in image B is 5 pixels long, if the occupied area of the portrait is 30%, the detection radius may be 6 pixels long (slightly longer than 5 pixels); since the defect radius of the portrait in image C is 10 pixels long and that of the portrait in image D is 12 pixels long, if the occupied area of the portrait is 40%, the detection radius may be 12.5 pixels long (i.e., slightly larger than the maximum defect radius; alternatively, the detection radius may be determined to be slightly larger than the average defect radius, i.e., 11.5 pixels long).
In addition, the difference between the sizes of different types of blemishes on the skin is often large, for example, pox is often small, and scar is often large, so in the embodiment of the present application, different detection radii can be determined for different blemishes, that is, a correspondence table between portrait size and detection radius set shown in fig. 3 needs to be obtained in advance, in fig. 3, the same portrait size corresponds to a plurality of detection radii (different detection radii correspond to different blemishes), and in addition, a person skilled in the art should understand that the detection radius in fig. 3 should be slightly larger than the radius of the corresponding blemish.
In this embodiment of the present application, after the to-be-processed image is acquired according to step S101, if a detection radius set including a plurality of detection radii can be determined according to the size of the to-be-processed image occupied by the portrait in the to-be-processed image, in step S102, one of the detection radii in the detection radius set may be selected as the "detection radius" in step S102.
In step S103, selecting a pixel point as a pixel point to be detected in the skin region indicated by the position information;
in the embodiment of the application, one pixel point can be selected in the skin area as the pixel point to be detected, and a plurality of pixel points can also be selected as the pixel points to be detected.
In step S104, determining a to-be-detected ring corresponding to the to-be-detected pixel point, where the to-be-detected ring is a ring having the to-be-detected pixel point as a circle center and the detection radius as a radius;
in this embodiment of the application, if only one pixel point to be detected is determined in step S103, the subsequent steps S104 to S107 are performed, and if a plurality of pixel points to be detected are determined in step S103, the subsequent steps S104 to S107 are performed once for each pixel point to be detected, so as to determine whether each pixel point to be detected is a defective point.
For a certain pixel point to be detected, before determining whether it is a defect point, the to-be-detected ring of that pixel point needs to be determined, where the to-be-detected ring is a ring taking the pixel point to be detected as its circle center and the detection radius determined in step S102 as its radius. In addition, in the embodiment of the present application, the "to-be-detected ring" in step S104 may be replaced by a "to-be-detected rectangle" (for example, the circumscribed rectangle of the to-be-detected ring); a person skilled in the art will readily understand that replacing the "to-be-detected ring" of the first embodiment with the "to-be-detected rectangle" achieves the same technical effect as the first embodiment.
In step S105, determining a gray value of each pixel point on the to-be-detected ring, where the gray value is a Y value of the corresponding pixel point in the YUV domain;
for a certain pixel point to be detected, after the ring to be detected of the pixel point to be detected is determined, the Y value of each pixel point on the ring to be detected needs to be obtained.
Those skilled in the art will readily understand that, in order to determine more accurately whether the pixel point to be detected is a defective point, the Y values of more pixel points on the ring to be detected need to be obtained. Therefore, if the ring to be detected contains only a small number of pixel points, it can be up-sampled so that the Y values of more pixel points on the ring are obtained. As shown in fig. 4, for the image to be processed 401, only 2 pixels (i.e., pixel A and pixel B) lie on the ring to be detected 402; in this case, the ring may be up-sampled (the Y value of each up-sampled point being estimated from the Y values of the surrounding pixels) to obtain the Y values of multiple pixel points on the ring.
In the embodiment of the present application, the step S105 may include:
determining the number of the pixel points participating in the calculation of the subsequent gray value in the to-be-detected ring based on the detection radius and the pixel point number calculation formula, wherein the pixel point number calculation formula is as follows:
M=ceil(2×π×r)
wherein, M is the number of pixels participating in the subsequent gray value calculation, r is the detection radius, ceil (x) is an operation of taking the minimum integer greater than or equal to x (as those skilled in the art can easily understand, a floor operation may also be taken);
and if the number of the pixel points in the to-be-detected ring is less than M, performing up-sampling on the to-be-detected ring to acquire the gray values of the M pixel points in the to-be-detected ring.
In addition, if the number of the pixel points in the to-be-detected ring is greater than or equal to M, the gray value of each pixel point in the to-be-detected ring can be selected; or, M pixel points may be selected from the pixel points of the to-be-detected ring to obtain the gray values of the M pixel points in the to-be-detected ring.
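The up-sampling of the ring to M = ceil(2×π×r) samples can be sketched with bilinear interpolation, which is one plausible way to estimate the Y value of an up-sampled point from the surrounding pixels; the function name and the interpolation choice are assumptions, not mandated by this description.

```python
import math

import numpy as np

def sample_ring(gray, cx, cy, r):
    """Return M = ceil(2*pi*r) gray values sampled on the circle of
    radius r centred at (cx, cy).  Bilinear interpolation between the
    four surrounding pixels acts as the up-sampling step: it estimates a
    Y value even where the circle crosses no pixel centre."""
    m = math.ceil(2 * math.pi * r)
    angles = 2 * np.pi * np.arange(m) / m
    xs = cx + r * np.cos(angles)
    ys = cy + r * np.sin(angles)

    x0 = np.clip(np.floor(xs).astype(int), 0, gray.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, gray.shape[0] - 2)
    fx, fy = xs - x0, ys - y0                     # fractional offsets

    g = gray.astype(np.float64)
    return ((1 - fx) * (1 - fy) * g[y0, x0]
            + fx * (1 - fy) * g[y0, x0 + 1]
            + (1 - fx) * fy * g[y0 + 1, x0]
            + fx * fy * g[y0 + 1, x0 + 1])
```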
In step S106, based on the gray value of each pixel point on the ring to be detected, calculating the variance of the gray value of each pixel point, and determining whether the number of gray values in the ring to be detected, which are greater than or less than the gray value of the pixel point to be detected, reaches a preset number, wherein the gray value of the pixel point to be detected is the Y value of the pixel point to be detected in the YUV domain;
in step S107, if the number reaches the preset number and the variance is smaller than a preset variance, determining that the pixel point to be detected is a defective point;
according to the description of the step S102, the detection radius is usually slightly larger than the flaw radius, so that if the gray value of the pixel point to be detected is larger than the gray values of the pixel points on most of the rings to be detected and the variance of the gray values of the pixel points on the rings to be detected is small, the pixel point to be detected should be in the central area of the dark flaw point; if the gray value of the pixel point to be detected is smaller than the gray values of the pixel points on most of the rings to be detected, and the variance of the gray values of all the pixel points on the rings to be detected is smaller, at this moment, the pixel point to be detected is located in the center area of the light-color flaw point. Therefore, according to the above steps S106 and S107, the pixel points located in the center region of the skin defect can be found to some extent.
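The decision of steps S106 and S107 for a single pixel point to be detected can be written compactly as follows; the threshold arguments are placeholders for the preset number and the preset variance.

```python
import numpy as np

def is_defect_center(center_gray, ring_grays, preset_count, preset_var):
    """Steps S106-S107 for one candidate: the pixel is a defect point when
    the ring is nearly uniform (variance below preset_var) and at least
    preset_count ring pixels are all brighter (dark flaw) or all darker
    (light flaw) than the centre's gray value."""
    ring = np.asarray(ring_grays, dtype=np.float64)
    brighter = int((ring > center_gray).sum())   # evidence of a dark flaw
    darker = int((ring < center_gray).sum())     # evidence of a light flaw
    return max(brighter, darker) >= preset_count and ring.var() < preset_var
```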
In addition, in the first embodiment of the present application, if the detection radius determined in step S102 is one of the detection radius sets (i.e., if the correspondence table between the portrait size and the detection radius determined in advance is shown in fig. 3), after step S106, the following steps may be further included:
if the number does not reach the preset number or the variance is larger than or equal to the preset variance, updating the detection radius to be another radius except the selected radius in the detection radius set, updating the to-be-detected ring to be a ring with the to-be-detected pixel point as the center of a circle and the updated detection radius as the radius, and returning to execute the step S105 until all radii of the detection radius set are traversed.
Furthermore, with the above technical solution, edge points in the image to be processed (such as hair strands and glasses frames) may be wrongly determined as defect points. Therefore, in order to avoid determining edge points in the image to be processed as defect points, step S104 may be further defined as: if the pixel point to be detected is not an edge point in the image to be processed, determining the to-be-detected ring corresponding to the pixel point to be detected. That is, after a pixel point is selected as a pixel point to be detected, it is first determined whether it is an edge point, and only if it is not, the ring to be detected corresponding to it is determined.
In the embodiment of the present application, whether a pixel point to be detected is an edge point may be determined according to the following method:
Method I, judging whether the pixel point to be detected is an edge point based on the Harris corner response:
Firstly, the gradient Ix of the pixel point to be detected in the x direction and its gradient Iy in the y direction are obtained;
Secondly, a matrix S is obtained:
S = [ ΣIx²   ΣIxIy ]
    [ ΣIxIy  ΣIy²  ]
where the sums are taken over a window around the pixel point to be detected;
Then, the response value R is calculated, R = λ1λ2 - k(λ1 + λ2)², where k is a constant and λ1 and λ2 are the eigenvalues of the matrix S;
Finally, if R < 0, the pixel point to be detected is an edge point.
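Method I can be sketched as follows, using det(S) = λ1λ2 and trace(S) = λ1 + λ2 to avoid computing the eigenvalues explicitly. The window size and the constant k (commonly chosen around 0.04 to 0.06 for Harris responses) are assumptions not fixed by this description.

```python
import numpy as np

def harris_response(gray, x, y, k=0.04, win=1):
    """R = det(S) - k * trace(S)**2 at pixel (x, y), where S accumulates
    Ix*Ix, Ix*Iy and Iy*Iy over a (2*win+1)^2 window.  Since
    det(S) = lambda1*lambda2 and trace(S) = lambda1 + lambda2, this equals
    lambda1*lambda2 - k*(lambda1 + lambda2)**2 with no eigendecomposition."""
    g = gray.astype(np.float64)
    ix = (np.roll(g, -1, axis=1) - np.roll(g, 1, axis=1)) / 2.0  # d/dx
    iy = (np.roll(g, -1, axis=0) - np.roll(g, 1, axis=0)) / 2.0  # d/dy
    ys, xs = slice(y - win, y + win + 1), slice(x - win, x + win + 1)
    a = float((ix[ys, xs] ** 2).sum())            # sum Ix^2
    b = float((ix[ys, xs] * iy[ys, xs]).sum())    # sum Ix*Iy
    c = float((iy[ys, xs] ** 2).sum())            # sum Iy^2
    return (a * c - b * b) - k * (a + c) ** 2

def is_edge_point(gray, x, y, k=0.04):
    """Edge: one large and one small eigenvalue drive R below zero."""
    return harris_response(gray, x, y, k) < 0
```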
Method II, judging based on the consistency of gradient directions: determining the maximum gradient direction of the pixel point to be detected and the maximum gradient directions of the other pixel points adjacent to it; if the maximum gradient directions of the other pixel points are consistent with that of the pixel point to be detected, the pixel point to be detected is determined to be an edge point.
In addition, the first method and the second method can be combined to determine whether the pixel point to be detected is an edge point (that is, the first method can be firstly adopted to determine whether the pixel point to be detected is an edge point, and if the R calculated by the first method is not less than 0, the second method is continuously adopted to determine whether the pixel point to be detected is an edge point).
Those skilled in the art will readily understand that the technical solution described in the first embodiment of the present application can, to some extent, determine the pixel points located in the central area of a defect; therefore, in order to further determine the defect area, a dilation (expansion) operation may be performed on the defect points, so as to obtain the defect area.
Therefore, in the skin defect detection method provided by the first embodiment of the present application, when determining whether a certain pixel is a defect point, it is not necessary to know whether other pixels are defect points, and therefore, the technical scheme provided by the present application is not a method for detecting defect points in series, so that when detecting whether multiple pixels are defect points simultaneously by using the technical scheme provided by the present application, the GPU can detect the pixels simultaneously in parallel, the parallel processing capability of the GPU can be fully utilized, and the waste of GPU resources is avoided to a certain extent.
Example two
Referring to fig. 5, another skin defect detecting method provided in the second embodiment of the present application is described below, and the skin defect detecting method includes:
in step S501, an image to be processed including a portrait is acquired, and position information of a skin area of the portrait in the image to be processed is determined;
in step S502, a detection radius is determined based on the size of the portrait relative to the image to be processed;
in step S503, in the skin area indicated by the position information, P pixel points are selected as pixel points to be detected, where P is an integer greater than 1;
in step S504, for each pixel point to be detected, the following steps are performed:
s1, determining a to-be-detected ring corresponding to the to-be-detected pixel point, wherein the to-be-detected ring is a ring which takes the to-be-detected pixel point as the center of a circle and takes the detection radius as the radius;
s2, determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in a YUV domain;
s3, calculating the variance of the gray values of the pixel points based on the gray values of the pixel points on the ring to be detected, and determining whether the number of pixel points on the ring to be detected whose gray values are greater than (or less than) the gray value of the pixel point to be detected reaches a preset number, where the gray value of the pixel point to be detected is the Y value of the pixel point to be detected in the YUV domain;
s4, if the number reaches the preset number and the variance is smaller than the preset variance, determining the pixel point to be detected as a defect point;
the above steps S501 to S504 are all described in the first embodiment, and specific reference may be made to the description of the first embodiment, which is not repeated herein.
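The per-pixel test of steps S1-S4 can be sketched as follows. This is an illustrative pure-Python sketch, not the patented implementation: the image is modeled as a dictionary of gray values, the ring is sampled at M = ceil(2×π×r) angles as described later in the specification, and `preset_count`/`preset_var` are placeholder parameters.

```python
import math

def is_defect_point(gray, x, y, r, preset_count, preset_var):
    """Ring-variance defect test (illustrative sketch).

    gray: dict mapping (x, y) -> gray value (the Y value in the YUV domain).
    A pixel is flagged as a defect point when enough ring pixels are
    brighter (or darker) than it AND the ring gray values are uniform
    (variance below a preset threshold).
    """
    m = math.ceil(2 * math.pi * r)          # M = ceil(2*pi*r) ring samples
    ring = []
    for k in range(m):
        theta = 2 * math.pi * k / m
        px = x + round(r * math.cos(theta))
        py = y + round(r * math.sin(theta))
        if (px, py) in gray:                # skip samples outside the image
            ring.append(gray[(px, py)])
    if not ring:
        return False
    mean = sum(ring) / len(ring)
    var = sum((g - mean) ** 2 for g in ring) / len(ring)
    center = gray[(x, y)]
    brighter = sum(1 for g in ring if g > center)
    darker = sum(1 for g in ring if g < center)
    return max(brighter, darker) >= preset_count and var < preset_var
```

Because each call reads only the candidate pixel and its own ring, the test for different pixels shares no state, which is why the method parallelizes well on a GPU.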
In step S505, performing edge detection on the image to be processed to obtain image regions each representing an edge contour in the image to be processed;
Specifically, in step S505, edge detection may be performed on the image to be processed using a Difference of Gaussians (DoG) operator or a Laplacian of Gaussian (LoG) operator, so as to obtain the image regions representing edges in the image to be processed. Edge detection itself is prior art and is not described further herein.
Furthermore, those skilled in the art should note that step S505 is not necessarily performed after step S504, and step S505 may be performed at any time before step S506, and the application does not limit the execution time of step S505.
In step S506, traversing each defect point, and if the defect point falls into one of the image regions obtained by edge detection, determining that the defect point is a true defect point;
according to the above steps S501-S504, it can be determined which of the P pixel points to be detected are defect points, and according to step S506, it can further be determined which of those defect points are true defect points and which were falsely detected.
That is, in the second embodiment of the present application, the technical solution described in the first embodiment is combined with a traditional edge-detection algorithm (using a DoG or LoG operator) to remove the defect points that were erroneously detected.
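The false-positive filter of step S506 reduces to a membership test. A minimal sketch, assuming each edge-detected image region is given as a set of (x, y) pixel coordinates (the edge detection itself is omitted here):

```python
def filter_true_defects(defect_points, edge_regions):
    """Keep only defect points that fall inside some edge-detected
    image region (step S506); points outside every region are treated
    as false detections and dropped."""
    edge_pixels = set().union(*edge_regions) if edge_regions else set()
    return [p for p in defect_points if p in edge_pixels]
```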
Further, after the true defect points are obtained after step S506, the dilation operation may be performed on each true defect point, thereby obtaining a defect region. The following describes a method for determining a defective area based on true defective points, provided in embodiment two of the present application:
performing an expansion operation on each first defect area, and performing an intersection operation between each defect area obtained by the expansion processing and the image areas generated by edge detection to obtain the second defect areas, where each first defect area is a connected area composed of adjacent true defect points, and each second defect area is a connected area.
That is, first, the first defect areas are obtained based on the true defect points, each first defect area being a connected area composed of adjacent true defect points; secondly, an expansion operation is performed on each first defect area to obtain the expanded first defect areas; then, the intersection of each expanded first defect area and the image areas obtained in step S505 is taken to obtain the second defect areas, each of which is a connected area. The second defect areas may be regarded as the defect areas of the skin in the image to be processed.
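The dilation-then-intersection step can be sketched over pixel sets; a 3×3 structuring element and set representation are our illustrative assumptions (the patent does not fix either):

```python
def dilate(region, steps=1):
    """Morphological dilation of a pixel set with a 3x3 structuring
    element, repeated `steps` times."""
    out = set(region)
    for _ in range(steps):
        grown = set(out)
        for (x, y) in out:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    grown.add((x + dx, y + dy))
        out = grown
    return out

def second_defect_regions(first_regions, edge_region_pixels):
    """Dilate each first defect region, then intersect it with the
    union of the edge-detected image regions (illustrative sketch)."""
    return [dilate(r) & edge_region_pixels for r in first_regions]
```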
In addition, in the second embodiment of the present application, after each second defective region is obtained, range expansion may be further performed on each second defective region, and each defective region obtained after range expansion is used as each defective region of skin in the image to be processed. In the second embodiment of the present application, the method for expanding the range of each second defective area may be:
for each second defective area, the following steps are performed:
judging whether a pixel point set which is adjacent to the second defective area and is positioned in the image area obtained in the step S505 exists;
if a pixel point set which is adjacent to the second defective area and is located in the image area exists, determining whether a target pixel point exists in the pixel point set, where a target pixel point is a pixel point whose gray value differs, in absolute value, by less than a preset threshold from a gray value of the second defective area (the gray value of any pixel point in the second defective area, the gray value of a certain pixel point at the edge of the second defective area, or the average of the gray values of all the pixel points in the second defective area);
if the target pixel point exists, taking a union of the target pixel point and the second defect area; if the union is a connected area, determining the union as the third defect area corresponding to the second defect area, and if the union is not a connected area, directly taking the second defect area as the third defect area corresponding to the second defect area;
and if the target pixel point does not exist, directly taking the second defective area as a third defective area corresponding to the second defective area.
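The range-expansion steps above can be sketched as follows. The 4-connectivity BFS, the reference gray value `ref_gray`, and the set-based region representation are our assumptions for illustration:

```python
from collections import deque

def is_connected(pixels):
    """4-connectivity check over a set of (x, y) pixels via BFS."""
    if not pixels:
        return True
    seen = {next(iter(pixels))}
    queue = deque(seen)
    while queue:
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in pixels and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(pixels)

def expand_region(second, edge_pixels, gray, ref_gray, threshold):
    """Range expansion of one second defect region (sketch): adjacent
    edge-area pixels whose gray value is within `threshold` of the
    region's reference gray value are target pixels; the union is kept
    only if it remains a single connected region."""
    border = {(x + dx, y + dy) for (x, y) in second
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))} - second
    targets = {p for p in border & edge_pixels
               if abs(gray.get(p, 0) - ref_gray) < threshold}
    union = second | targets
    return union if targets and is_connected(union) else second
```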
Next, the range expansion operation on each second defective area is described with reference to fig. 6. As shown in fig. 6, step S505 yields two image areas, namely a circular area 601 and a rectangular area 602, and the second defective areas are a shuttle-shaped area 603 in the circular area and a trapezoid-shaped area 604 in the rectangular area. Applying the range expansion algorithm provided above: for the shuttle-shaped area, the union of the shuttle 603 and its target pixel points is the shuttle 603 plus the circle 605; since this union is not a connected region, the third defective area corresponding to the shuttle-shaped area 603 is still the shuttle-shaped area 603. For the trapezoid-shaped area 604, the union of the trapezoidal region 604 and its target pixel points is the trapezoidal region 604 plus the sector region 606; since this union is a connected region, the third defective area corresponding to the trapezoid area 604 is the union of the trapezoid area 604 and the sector area 606.
The obtained third defect areas are the defect areas of the skin in the image to be processed.
After determining each defective area in the image to be processed, image processing may be performed on the defective area, so as to eliminate the defective area and achieve a beauty effect.
For each defective area, the following steps are performed:
acquiring pixel values of all pixel points which are adjacent to the defect area and have a distance with the defect area within a preset distance in a YUV domain respectively;
determining a median Y1 of Y values, a median U1 of U values and a median V1 of V values of all the pixel points based on the pixel values of all the pixel points in the YUV domain;
replacing the U value of each pixel point in the defect area with U1, and replacing the V value of each pixel point in the defect area with V1;
calculating a Y value replacement value corresponding to each pixel point in the defect area based on a Y value replacement value calculation formula, wherein the Y value replacement value calculation formula is as follows:
Yreplace = Yoriginal + blurr(Y1 - Yoriginal)
wherein Yoriginal is the gray value of the pixel point whose Y value is to be replaced, Yreplace is the calculated Y value replacement value corresponding to that pixel point, and blurr(x) is the value of the pixel point obtained after a local smoothing operation is performed on the x values of the pixel points in a region centered on that pixel point;
and replacing the Y value of each pixel point in the defect area with the corresponding Y value replacement value respectively.
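The correction steps above can be sketched as follows. Note the simplification: the patent's blurr(.) is a local smoothing over a neighbourhood, which we approximate here by a plain mean, so this is an assumption-laden illustration rather than the claimed method.

```python
import statistics

def blurr(values):
    """Stand-in for the local smoothing blurr(.) in the formula; here
    simply the mean of the supplied neighbourhood values (an assumption,
    since the patent only requires some local smoothing)."""
    return sum(values) / len(values)

def correct_defect_region(yuv, region, neighbours):
    """Replace U/V of every defect pixel with the neighbourhood medians
    U1/V1, and Y with  Yreplace = Yoriginal + blurr(Y1 - Yoriginal).

    yuv: dict mapping (x, y) -> (Y, U, V); region/neighbours: pixel lists.
    """
    y1 = statistics.median(yuv[p][0] for p in neighbours)
    u1 = statistics.median(yuv[p][1] for p in neighbours)
    v1 = statistics.median(yuv[p][2] for p in neighbours)
    for p in region:
        y0 = yuv[p][0]
        # With a single-value "neighbourhood" the smoothing degenerates
        # to the identity, so Yreplace collapses to Y1 exactly.
        y_new = y0 + blurr([y1 - y0])
        yuv[p] = (y_new, u1, v1)
    return yuv
```

Replacing only chroma with medians while blending luma through blurr(.) is what preserves the skin texture mentioned below: fine Y variation survives, while the defect's color cast is removed.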
In addition, in the embodiment of the present application, in order to reduce the data processing amount of the GPU, the image to be processed acquired in step S501 may be: and acquiring an initial image containing the portrait, and performing downsampling operation on the initial image to obtain the to-be-processed image containing the portrait. That is, each step in the above method embodiment is processing of the small-size image after the down-sampling, and therefore, after each defective region of the skin is obtained based on the small-size image, each defective region needs to be mapped to the above initial image based on the down-sampling multiple to obtain each defective region in the initial image.
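Mapping a region found on the downsampled image back to the initial image scales each small-image pixel to a block of full-resolution pixels. A minimal sketch, assuming an integer downsampling factor:

```python
def map_region_to_initial(region, factor):
    """Map a defect region found on the downsampled image back to the
    full-resolution initial image: each small-image pixel covers a
    factor x factor block of initial-image pixels."""
    mapped = set()
    for (x, y) in region:
        for dx in range(factor):
            for dy in range(factor):
                mapped.add((x * factor + dx, y * factor + dy))
    return mapped
```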
In the actual development process, it is found that each defective area of the obtained initial image may include a beard area, and therefore, the following steps are required to remove the beard area in each defective area of the initial image to obtain a final defective area in the initial image:
performing edge detection on the initial image (a Sobel operator, a DoG operator, or the like can be adopted);
and if it is detected that a beard image area exists in the initial image and the beard image area intersects the defective areas in the initial image, eliminating the beard image area from each defective area to obtain the final defective areas in the initial image, where each final defective area is a connected image area.
Further, the correction process for each of the final defective regions may be:
for each final defect area, the following steps are performed:
acquiring pixel values of all pixel points which are adjacent to the final defect area in the initial image and have a distance with the final defect area within a preset distance in a YUV domain respectively;
determining a median Y1 of Y values, a median U1 of U values and a median V1 of V values of all the pixel points based on the pixel values of all the pixel points in the YUV domain;
replacing the U value of each pixel point in the final defect area with U1, and replacing the V value of each pixel point in the final defect area with V1;
and calculating a Y value replacement value corresponding to each pixel point in the final defect area based on a Y value replacement value calculation formula, wherein the Y value replacement value calculation formula is as follows:
Yreplace = Yoriginal + blurr(Y1 - Yoriginal)
wherein Yoriginal is the gray value of the pixel point whose Y value is to be replaced, Yreplace is the calculated Y value replacement value corresponding to that pixel point, and blurr(x) is the value of the pixel point obtained after a local smoothing operation is performed on the x values of the pixel points in a region centered on that pixel point;
and replacing the Y value of each pixel point in the final defect area with the corresponding Y value replacement value respectively.
The method can restore the brightness and color of normal skin while maintaining the original skin texture.
In summary, the skin defect detection method provided by the second embodiment of the present application can detect each defect point more accurately and remove the erroneously detected defect points.
It should be understood that the sequence numbers of the steps in the foregoing method embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. Furthermore, those skilled in the art should understand that the phrase "when A occurs, then B is performed" in the method embodiments of the present application does not require that B be performed at exactly the same instant as A occurs; there may be a slight difference in time between the two.
EXAMPLE III
The third embodiment of the application provides a skin flaw detection device. For convenience of explanation, only a portion related to the present application is shown, and as shown in fig. 7, the skin defect detecting apparatus 700 includes:
a skin determining module 701, configured to acquire an image to be processed including a portrait, and determine position information of a skin area of the portrait in the image to be processed;
a radius determining module 702, configured to determine a detection radius based on a size of the portrait relative to the to-be-processed image;
a point-to-be-detected selecting module 703, configured to select a pixel point as a pixel point to be detected in the skin area indicated by the location information;
a to-be-detected ring determining module 704, configured to determine a to-be-detected ring corresponding to the to-be-detected pixel point, where the to-be-detected ring is a ring with the to-be-detected pixel point as a center and the detection radius as a radius;
a gray value determining module 705, configured to determine a gray value of each pixel point on the to-be-detected circular ring, where the gray value is a Y value of the corresponding pixel point in the YUV domain;
a variance and comparison module 706, configured to calculate a variance of the gray values of the pixels based on the gray values of the pixels on the ring to be detected, and determine whether the number of gray values in the ring to be detected, which are greater than or less than the gray value of the pixels to be detected, reaches a preset number, where the gray value of the pixels to be detected is a Y value of the pixels to be detected in a YUV domain;
and a defect point determining module 707, configured to determine that the pixel point to be detected is a defect point if the number reaches the preset number and the variance is smaller than a preset variance.
Optionally, the to-be-detected loop determining module 704 is specifically configured to:
and if the pixel point to be detected is not the edge point in the image to be processed, determining the to-be-detected circular ring corresponding to the pixel point to be detected.
Optionally, the radius determining module 702 includes:
the radius set determining unit is used for determining a detection radius set based on the size of the portrait relative to the image to be processed, wherein the detection radius set comprises N radius values, and N is an integer greater than 1;
a selecting unit, configured to select a radius from the detection radius set as the detection radius;
accordingly, the skin defect detecting apparatus 700 further includes:
an updating and triggering module, configured to: if the number does not reach the preset number, or the variance is greater than or equal to the preset variance, update the detection radius to another radius in the detection radius set other than the selected radius, update the ring to be detected to a ring centered on the pixel point to be detected with the updated detection radius as its radius, and then trigger the gray value determining module 705 to continue performing the step of determining the gray value of each pixel point on the ring to be detected, until the detection radius set has been traversed.
Optionally, the gray value determining module 705 includes:
the pixel number determining unit is configured to determine, based on the detection radius and a pixel number calculation formula, the number of pixels participating in subsequent gray value calculation in the to-be-detected ring, where the pixel number calculation formula is:
M=ceil(2×π×r)
wherein, M is the number of pixel points participating in the subsequent gray value calculation, r is the detection radius, ceil (x) is the operation of taking the minimum integer greater than or equal to x;
and the up-sampling unit is used for performing up-sampling on the to-be-detected ring to acquire the gray value of M pixel points in the to-be-detected ring if the number of the pixel points in the to-be-detected ring is less than M.
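The pixel-count formula M = ceil(2×π×r) used by the unit above is trivial to compute; a one-line illustration (the function name is ours):

```python
import math

def ring_sample_count(r):
    """Number of ring pixels used in the gray-value statistics,
    M = ceil(2 * pi * r) -- roughly one sample per pixel of
    circumference, so up-sampling is needed when the discrete ring
    contains fewer than M pixels."""
    return math.ceil(2 * math.pi * r)
```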
Optionally, the to-be-detected point selecting module 703 is specifically configured to:
selecting P pixel points in the skin area as pixel points to be detected in the skin area indicated by the position information, wherein P is an integer and is more than or equal to 2;
the skin defect detecting apparatus 700 further includes:
the edge detection module is used for carrying out edge detection on the image to be processed to obtain image areas which are used for representing edge outlines in the image to be processed;
accordingly, the skin defect detecting apparatus 700 further includes:
and the true flaw determining module is used for traversing each flaw point, and if the flaw point falls into the image area obtained by edge detection, determining that the flaw point is a true flaw point.
Accordingly, the skin defect detecting apparatus 700 further includes:
and the expansion and intersection module is used for performing expansion operation on each first defect area, performing intersection operation on each defect area obtained by expansion processing and each image area generated by edge detection to obtain each second defect area, wherein each first defect area is a connected area formed by adjacent real defect points, and each second defect area is a connected area.
Accordingly, the skin defect detecting apparatus 700 further includes:
the second defect expanding module is used for judging whether a pixel point set which is adjacent to the second defect area and is positioned in the image area exists or not; if a pixel point set which is adjacent to the second defect area and is positioned in the image area exists, determining whether a target pixel point of which the absolute value of the difference value with the gray value of the pixel point in the second defect area is smaller than a preset threshold exists in the pixel point set; if the target pixel point exists, taking a union set of the target pixel point and the second defect area, and if the union set is a communication area, determining the union set as a third defect area corresponding to the second defect area; and if the target pixel point does not exist, directly taking the second defective area as a third defective area corresponding to the second defective area.
Optionally, the skin determination module 701 is specifically configured to:
acquiring an initial image containing the portrait, performing downsampling operation on the initial image to obtain the to-be-processed image containing the portrait, and determining position information of a skin area of the portrait in the to-be-processed image;
accordingly, the skin defect detecting apparatus 700 further includes:
a third defect mapping module, configured to map each third defect area to the initial image based on a downsampling multiple adopted by the downsampling operation, so as to obtain each third mapped defect area of the portrait in the initial image;
the initial edge detection module is used for carrying out edge detection on the initial image;
and the removing module is used for removing the image areas which are the beards in the third mapping defective areas to obtain the final defective areas in the initial image if the initial image is detected to have the image areas which are the beards and the intersection of the image areas which are the beards and the third mapping defective areas, wherein each final defective area is a communicated image area.
Optionally, the skin defect detecting apparatus 700 further includes:
a normal pixel value obtaining module, configured to obtain pixel values of pixel points, which are adjacent to the final defect region in the initial image and have a distance to the final defect region within a preset distance, in a YUV domain, respectively;
the median calculation module is used for determining a median Y1 of Y values, a median U1 of U values and a median V1 of V values of all the pixel points based on the pixel values of all the pixel points in the YUV domain respectively;
the YU median replacement module is used for replacing the U value of each pixel point in the final defect area with U1 and replacing the V value of each pixel point in the final defect area with V1;
a Y replacement value calculation module, configured to calculate a Y value replacement value corresponding to each pixel point in the final defect region based on a Y value replacement value calculation formula, where the Y value replacement value calculation formula is:
Yreplace = Yoriginal + blurr(Y1 - Yoriginal)
wherein Yoriginal is the gray value of the pixel point whose Y value is to be replaced, Yreplace is the calculated Y value replacement value corresponding to that pixel point, and blurr(x) is the value of the pixel point obtained after a local smoothing operation is performed on the x values of the pixel points in a region centered on that pixel point;
and the Y replacing module is used for replacing the Y value of each pixel point in the final defect area with the corresponding Y value replacing value respectively.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
Example four
Fig. 8 is a schematic diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 8, the terminal apparatus 800 of this embodiment includes: a processor 801, a memory 802, and a computer program 803 stored in the memory 802 and operable on the processor 801. The steps in the various method embodiments described above are implemented when the processor 801 described above executes the computer program 803 described above. Alternatively, the processor 801 implements the functions of the modules/units in the device embodiments when executing the computer program 803.
Illustratively, the computer program 803 may be divided into one or more modules/units, which are stored in the memory 802 and executed by the processor 801 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 803 in the terminal device 800. For example, the computer program 803 may be divided into a skin determination module, a radius determination module, a point to be detected selection module, a ring to be detected determination module, a gray value determination module, a variance and comparison module, and a flaw point determination module, where each module has the following specific functions:
acquiring an image to be processed containing a portrait, and determining the position information of a skin area of the portrait in the image to be processed;
determining a detection radius based on the size of the portrait relative to the image to be processed;
selecting pixel points as pixel points to be detected in the skin area indicated by the position information;
determining a to-be-detected ring corresponding to the to-be-detected pixel point, wherein the to-be-detected ring takes the to-be-detected pixel point as a circle center and takes the detection radius as a radius;
determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in the YUV domain;
calculating the variance of the gray values of the pixels on the to-be-detected ring based on the gray values of the pixels on the to-be-detected ring, and determining whether the number of the gray values of the pixels in the to-be-detected ring, which are larger or smaller than the gray value of the pixels to be detected, reaches a preset number, wherein the gray value of the pixels to be detected is the Y value of the pixels to be detected in a YUV domain;
and if the number reaches the preset number and the variance is smaller than the preset variance, determining the pixel point to be detected as a defect point.
The terminal device may include, but is not limited to, a processor 801 and a memory 802. Those skilled in the art will appreciate that fig. 8 is merely an example of a terminal device 800 and does not constitute a limitation of terminal device 800 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The Processor 801 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The storage 802 may be an internal storage unit of the terminal device 800, such as a hard disk or a memory of the terminal device 800. The memory 802 may also be an external storage device of the terminal device 800, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 800. Further, the memory 802 may include both an internal storage unit and an external storage device of the terminal device 800. The memory 802 is used for storing the computer program and other programs and data required by the terminal device. The memory 802 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other division manners in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units described above, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the above method embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and executed by a processor, so as to implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code may be in a source code form, an object code form, an executable file or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the above-mentioned computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the computer readable medium described above may include content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media that does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A skin blemish detection method, comprising:
acquiring an image to be processed containing a portrait, and determining position information of a skin area of the portrait in the image to be processed;
determining a detection radius based on the size of the portrait relative to the image to be processed;
selecting pixel points as pixel points to be detected in the skin area indicated by the position information;
determining a to-be-detected ring corresponding to the to-be-detected pixel point, wherein the to-be-detected ring takes the to-be-detected pixel point as a circle center and the detection radius as a radius;
determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in the YUV domain;
calculating the variance of the gray values of the pixel points based on the gray values of the pixel points on the to-be-detected ring, and determining whether the number of pixel points on the to-be-detected ring whose gray values are greater than, or less than, the gray value of the pixel point to be detected reaches a preset number, wherein the gray value of the pixel point to be detected is the Y value of the pixel point to be detected in the YUV domain;
if the number reaches the preset number and the variance is smaller than the preset variance, determining that the pixel point to be detected is a defect point;
the determining a detection radius based on the size of the portrait relative to the image to be processed comprises:
determining a detection radius set based on the size of the portrait relative to the image to be processed, wherein the detection radius set comprises N radius values, and N is an integer greater than 1;
selecting a radius from the detection radius set as the detection radius;
correspondingly, after the step of calculating the variance of the gray values of the pixel points based on the gray values of the pixel points on the to-be-detected ring and determining whether the number of pixel points on the to-be-detected ring whose gray values are greater than, or less than, the gray value of the pixel point to be detected reaches the preset number, the method further comprises:
if the number does not reach the preset number or the variance is greater than or equal to the preset variance, updating the detection radius to another radius in the detection radius set other than the selected radius, updating the to-be-detected ring to a ring taking the pixel point to be detected as the circle center and the updated detection radius as the radius, and then returning to the step of determining the gray value of each pixel point on the to-be-detected ring, until the detection radius set is traversed.
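For illustration only (not part of the claims), the per-pixel decision rule of claim 1 can be sketched as follows, assuming the gray values on the to-be-detected ring have already been sampled; all function and parameter names here are illustrative, not taken from the patent:

```python
def is_flaw_point(center_gray, ring_grays, preset_count, preset_var):
    """Decision rule sketched from claim 1: the pixel to be detected is
    flagged as a defect point when enough pixels on the ring are uniformly
    brighter (or uniformly darker) than the centre pixel AND the ring's
    gray values have low variance, i.e. the ring lies on smooth
    surrounding skin rather than on texture."""
    n = len(ring_grays)
    mean = sum(ring_grays) / n
    variance = sum((g - mean) ** 2 for g in ring_grays) / n
    brighter = sum(g > center_gray for g in ring_grays)
    darker = sum(g < center_gray for g in ring_grays)
    return max(brighter, darker) >= preset_count and variance < preset_var
```

If the test fails for the current radius, claim 1's update step simply repeats it with the next radius in the detection radius set until the set is traversed.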
2. The skin flaw detection method of claim 1, wherein the determining the to-be-detected ring corresponding to the pixel point to be detected comprises:
if the pixel point to be detected is not an edge point in the image to be processed, determining the to-be-detected ring corresponding to the pixel point to be detected.
3. The skin flaw detection method of claim 1, wherein the determining the gray value of each pixel point on the to-be-detected ring comprises:
determining the number of pixels participating in subsequent gray value calculation in the to-be-detected ring based on the detection radius and a pixel number calculation formula, wherein the pixel number calculation formula is as follows:
M=ceil(2×π×r)
wherein M is the number of pixel points participating in the subsequent gray value calculation, r is the detection radius, and ceil(x) is the operation of taking the minimum integer greater than or equal to x;
and if the number of the pixel points in the to-be-detected ring is less than M, performing up-sampling on the to-be-detected ring to acquire the gray values of the M pixel points in the to-be-detected ring.
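For illustration only, the sampling in claim 3 can be sketched as below: M = ceil(2πr) points are taken on the ring at equal angular steps, which effectively up-samples the ring whenever the number of discrete ring pixels is smaller than M (names are illustrative, not from the patent):

```python
import math

def ring_sample_coords(cx, cy, r):
    """Return M = ceil(2*pi*r) integer pixel coordinates on a ring of
    radius r centred at (cx, cy). Sampling by equal angular steps may
    visit some discrete pixels more than once, which realises the
    up-sampling described in claim 3."""
    m = math.ceil(2 * math.pi * r)  # M = ceil(2 x pi x r)
    coords = []
    for k in range(m):
        theta = 2 * math.pi * k / m
        coords.append((cx + round(r * math.cos(theta)),
                       cy + round(r * math.sin(theta))))
    return coords
```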
4. The skin flaw detection method according to any one of claims 1 to 3, wherein the selecting pixel points as pixel points to be detected in the skin area indicated by the position information comprises:
selecting P pixel points in the skin area indicated by the position information as pixel points to be detected, wherein P is an integer greater than or equal to 2.
5. The skin flaw detection method of claim 4, further comprising:
performing edge detection on the image to be processed to obtain image areas for representing edge contours in the image to be processed;
correspondingly, after traversing each pixel point to be detected and determining whether each pixel point to be detected is a defect point, the method further comprises:
traversing each defect point, and if the defect point falls within an image area obtained by the edge detection, determining that the defect point is a true defect point.
6. The skin flaw detection method of claim 5, wherein, after the step of traversing each defect point and, if the defect point falls within an image area obtained by the edge detection, determining that the defect point is a true defect point, the method further comprises:
performing a dilation operation on each first defect area, and performing an intersection operation between each dilated defect area and each image area generated by the edge detection to obtain each second defect area, wherein each first defect area is a connected area formed by adjacent true defect points, and each second defect area is also a connected area.
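Working on sets of pixel coordinates, the dilation-then-intersection step of claim 6 can be sketched as follows; a one-pixel, 8-connected dilation is assumed purely for illustration, since the claim does not fix the structuring element:

```python
def second_defect_pixels(first_defect, edge_area, width, height):
    """Sketch of claim 6: dilate the first defect area (a set of (x, y)
    pixels) by one pixel using 8-connectivity, then intersect the dilated
    set with the edge-detected image area to obtain the second defect
    area's pixels."""
    dilated = set(first_defect)
    for (x, y) in first_defect:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if 0 <= nx < width and 0 <= ny < height:
                    dilated.add((nx, ny))
    return dilated & set(edge_area)  # intersection with the edge area
```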
7. The skin flaw detection method of claim 6, further comprising, after the obtaining each second defect area:
for each second defect area, performing the following steps:
judging whether there exists a pixel point set that is adjacent to the second defect area and located within the image area;
if such a pixel point set exists, determining whether the pixel point set contains a target pixel point whose gray value differs from the gray value of a pixel point in the second defect area by an absolute value smaller than a preset threshold;
if the target pixel point exists, taking a union of the target pixel point and the second defect area, and if the union is a connected area, determining the union as a third defect area corresponding to the second defect area;
and if the target pixel point does not exist, directly taking the second defect area as the third defect area corresponding to the second defect area.
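For illustration only, the target-pixel selection of claim 7 can be sketched on coordinate sets as below; the claim's final connectivity check on the union is omitted here for brevity, and all names are illustrative:

```python
def third_defect_area(second_area, candidate_set, gray, threshold):
    """Sketch of claim 7: among candidate pixels adjacent to the second
    defect area and inside the edge image area (candidate_set), keep the
    target pixels whose gray value differs from that of some pixel of the
    second defect area by less than the threshold, then union them with
    the second defect area."""
    targets = {
        p for p in candidate_set
        if any(abs(gray[p] - gray[q]) < threshold for q in second_area)
    }
    return set(second_area) | targets
```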
8. The skin flaw detection method of claim 7, wherein the acquiring an image to be processed containing a portrait comprises:
acquiring an initial image containing the portrait, and performing a downsampling operation on the initial image to obtain the image to be processed containing the portrait;
correspondingly, after traversing each second defect area to obtain each third defect area, the method further comprises:
mapping each third defect area to the initial image based on the downsampling multiple adopted by the downsampling operation, to obtain each third mapped defect area of the portrait in the initial image;
performing edge detection on the initial image;
and if an image area of a beard is detected in the initial image and an intersection exists between the image area of the beard and each third mapped defect area, eliminating the image area of the beard from each third mapped defect area to obtain each final defect area in the initial image, wherein each final defect area is a connected image area.
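The mapping step of claim 8 can be sketched as below; filling the full factor x factor block of the initial image for each downsampled pixel is an assumption of this sketch (the claim only states that the mapping is based on the downsampling multiple), and the names are illustrative:

```python
def map_to_initial(defect_area, factor):
    """Sketch of claim 8's mapping step: scale each defect pixel of the
    downsampled image back to the initial image by the downsampling
    factor, covering the factor x factor block it corresponds to."""
    mapped = set()
    for (x, y) in defect_area:
        for dx in range(factor):
            for dy in range(factor):
                mapped.add((x * factor + dx, y * factor + dy))
    return mapped
```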
9. The skin flaw detection method of claim 8, further comprising, after the obtaining each final defect area in the initial image:
for each final defect area, performing the following steps:
acquiring, in the YUV domain, the pixel values of all pixel points in the initial image that are adjacent to the final defect area and whose distance from the final defect area is within a preset distance;
determining a median Y1 of the Y values, a median U1 of the U values and a median V1 of the V values of all the pixel points, based on the pixel values of all the pixel points in the YUV domain;
replacing the U value of each pixel point in the final defect area with U1, and replacing the V value of each pixel point in the final defect area with V1;
calculating a Y value replacement value corresponding to each pixel point in the final defect area based on a Y value replacement value calculation formula, wherein the Y value replacement value calculation formula is:
Y_replace = Y_original + blur(Y1 - Y_original)
wherein Y_original is the gray value of the pixel point whose Y value is to be replaced, Y_replace is the calculated Y value replacement value corresponding to that pixel point, and blur(x) is the value obtained at a pixel point after a local smoothing operation is performed on the x values of the pixel points in a region centred on that pixel point;
and replacing the Y value of each pixel point in the final defect area with the corresponding Y value replacement value.
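For illustration only, the concealment step of claim 9 can be sketched as follows. The claim's blur() is a local smoothing operation; it is approximated here by a single fixed blending weight, which is an assumption of this sketch, and all names are illustrative:

```python
import statistics

def conceal_defect_area(yuv, defect_area, neighbour_pixels, blur_weight=0.5):
    """Sketch of claim 9. `yuv` maps (x, y) to a mutable [Y, U, V] list;
    `defect_area` is the final defect area; `neighbour_pixels` are the
    nearby non-defect pixels within the preset distance."""
    # Medians Y1, U1, V1 over the neighbouring pixels
    y1 = statistics.median(yuv[p][0] for p in neighbour_pixels)
    u1 = statistics.median(yuv[p][1] for p in neighbour_pixels)
    v1 = statistics.median(yuv[p][2] for p in neighbour_pixels)
    for p in defect_area:
        y_original = yuv[p][0]
        # Y_replace = Y_original + blur(Y1 - Y_original)
        yuv[p][0] = y_original + blur_weight * (y1 - y_original)
        yuv[p][1] = u1  # replace U with the median U1
        yuv[p][2] = v1  # replace V with the median V1
    return yuv
```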
10. A skin flaw detection device, comprising:
the skin determining module is used for acquiring an image to be processed containing a portrait and determining the position information of a skin area of the portrait in the image to be processed;
the radius determining module is used for determining a detection radius based on the size of the portrait relative to the image to be processed;
the point-to-be-detected selection module is used for selecting pixel points as pixel points to be detected in the skin area indicated by the position information;
the to-be-detected ring determining module is used for determining a to-be-detected ring corresponding to the to-be-detected pixel point, wherein the to-be-detected ring takes the to-be-detected pixel point as the circle center and takes the detection radius as the radius;
the gray value determining module is used for determining the gray value of each pixel point on the to-be-detected circular ring, wherein the gray value is the Y value of the corresponding pixel point in the YUV domain;
the variance and comparison module is used for calculating the variance of the gray values of the pixel points based on the gray values of the pixel points on the to-be-detected ring, and determining whether the number of pixel points on the to-be-detected ring whose gray values are greater than, or less than, the gray value of the pixel point to be detected reaches a preset number, wherein the gray value of the pixel point to be detected is the Y value of the pixel point to be detected in the YUV domain;
the defect point determining module is used for determining that the pixel point to be detected is a defect point if the number reaches the preset number and the variance is smaller than the preset variance;
the radius determination module includes:
the radius set determining unit is used for determining a detection radius set based on the size of the portrait relative to the image to be processed, wherein the detection radius set comprises N radius values, and N is an integer greater than 1;
the selecting unit is used for selecting a radius from the detection radius set as the detection radius;
accordingly, the skin blemish detection device further comprises:
the updating and triggering module is used for: if the number does not reach the preset number or the variance is greater than or equal to the preset variance, updating the detection radius to another radius in the detection radius set other than the selected radius, updating the to-be-detected ring to a ring taking the pixel point to be detected as the circle center and the updated detection radius as the radius, and then triggering a return to the step of determining the gray value of each pixel point on the to-be-detected ring, until the detection radius set is traversed.
11. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the skin flaw detection method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, carries out the steps of the skin flaw detection method according to any one of claims 1 to 9.
CN201910698746.7A 2019-07-31 2019-07-31 Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium Active CN110415237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910698746.7A CN110415237B (en) 2019-07-31 2019-07-31 Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium

Publications (2)

Publication Number Publication Date
CN110415237A CN110415237A (en) 2019-11-05
CN110415237B true CN110415237B (en) 2022-02-08

Family

ID=68364406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910698746.7A Active CN110415237B (en) 2019-07-31 2019-07-31 Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium

Country Status (1)

Country Link
CN (1) CN110415237B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991253A (en) * 2019-12-02 2021-06-18 合肥美亚光电技术股份有限公司 Central area determining method, foreign matter removing device and detecting equipment
CN112446865A (en) * 2020-11-25 2021-03-05 创新奇智(广州)科技有限公司 Flaw identification method, flaw identification device, flaw identification equipment and storage medium
CN112565601B (en) * 2020-11-30 2022-11-04 Oppo(重庆)智能科技有限公司 Image processing method, image processing device, mobile terminal and storage medium
CN112750105B (en) * 2020-12-30 2022-06-28 北京极豪科技有限公司 Image abnormal point detection method and device, electronic device and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277595B1 (en) * 2003-01-06 2007-10-02 Apple Inc. Method and apparatus for digital image manipulation to remove image blemishes
FR2944899B1 * 2009-04-23 2014-04-25 Lvmh Rech PROCESS AND APPARATUS FOR CHARACTERIZING PIGMENTARY SPOTS AND METHOD OF ASSESSING THE EFFECT OF TREATING A PIGMENTARY SPOT WITH A COSMETIC PRODUCT
CN103927718B (en) * 2014-04-04 2017-02-01 北京金山网络科技有限公司 Picture processing method and device
CN103927719B (en) * 2014-04-04 2017-05-17 北京猎豹网络科技有限公司 Picture processing method and device
KR101926495B1 (en) * 2014-06-23 2018-12-07 한화에어로스페이스 주식회사 Method and Apparatus for enhanced detection of discontinuities in the surface of a substrate
CN104318262A (en) * 2014-09-12 2015-01-28 上海明穆电子科技有限公司 Method and system for replacing skin through human face photos
CN105741231B * 2016-02-02 2019-02-01 深圳中博网络技术有限公司 Skin beautification processing method and apparatus for images
CN107358573A * 2017-06-16 2017-11-17 广东欧珀移动通信有限公司 Image beautification processing method and apparatus
CN110334706B (en) * 2017-06-30 2021-06-01 清华大学深圳研究生院 Image target identification method and device
CN107680096A (en) * 2017-10-30 2018-02-09 北京小米移动软件有限公司 Image processing method and device
CN108198152B (en) * 2018-02-07 2020-05-12 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110415237B (en) Skin flaw detection method, skin flaw detection device, terminal device and readable storage medium
CN108596944B (en) Method and device for extracting moving target and terminal equipment
CN110705583B (en) Cell detection model training method, device, computer equipment and storage medium
US8805077B2 (en) Subject region detecting apparatus
US9058650B2 (en) Methods, apparatuses, and computer program products for identifying a region of interest within a mammogram image
AU2011250827B2 (en) Image processing apparatus, image processing method, and program
CN111539238B (en) Two-dimensional code image restoration method and device, computer equipment and storage medium
CN109005368B (en) High dynamic range image generation method, mobile terminal and storage medium
CN110751620B (en) Method for estimating volume and weight, electronic device, and computer-readable storage medium
CN110969046B (en) Face recognition method, face recognition device and computer-readable storage medium
CN110503704B (en) Method and device for constructing three-dimensional graph and electronic equipment
CN109741394B (en) Image processing method, image processing device, electronic equipment and storage medium
CN112651953A (en) Image similarity calculation method and device, computer equipment and storage medium
CN112668577A (en) Method, terminal and device for detecting target object in large-scale image
CN114638294A (en) Data enhancement method and device, terminal equipment and storage medium
CN111260564A (en) Image processing method and device and computer storage medium
CN114298985B (en) Defect detection method, device, equipment and storage medium
CN113487473B (en) Method and device for adding image watermark, electronic equipment and storage medium
CN108805838B (en) Image processing method, mobile terminal and computer readable storage medium
CN113570725A (en) Three-dimensional surface reconstruction method and device based on clustering, server and storage medium
US20160005200A1 (en) Image processing device, image processing method, and image processing program
CN111311610A (en) Image segmentation method and terminal equipment
CN111429450B (en) Corner point detection method, system, equipment and storage medium
CN113470028A (en) Chromosome karyotype image quality evaluation method, chromosome analyzer, and storage medium
CN114596210A (en) Noise estimation method, device, terminal equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant