CN114782309A - Skin detection method, electronic device, storage medium and product - Google Patents

Skin detection method, electronic device, storage medium and product

Info

Publication number
CN114782309A
CN114782309A (application CN202210214727.4A)
Authority
CN
China
Prior art keywords
area
face
value
determining
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210214727.4A
Other languages
Chinese (zh)
Inventor
辛琪
孙宇超
魏文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN202210214727.4A
Publication of CN114782309A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30088Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a skin detection method, an electronic device, a storage medium and a product, wherein the method comprises the following steps: acquiring a face image of an object to be detected; determining a texture map corresponding to the face image; determining at least one highlight region in the texture map; and removing the interference regions in the at least one highlight region to obtain the oil light region in the face image. That is to say, in the embodiment of the present application, the texture map of the face image of the object to be detected is processed through a series of automatic operations to obtain at least one highlight region in the texture map, and interference regions in the bright skin regions, such as acne and moles, are then removed to obtain an accurate oil light region on the face image, which not only saves the cost of manual detection but also improves the accuracy of detecting facial oil light regions.

Description

Skin detection method, electronic device, storage medium and product
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a skin detection method, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Facial skin detection is a research direction in the field of computer vision, and has wide application in the aspects of portrait beautification and face tracking. The detection of the state of facial skin using Artificial Intelligence (AI) technology has become a new trend.
In the related art, facial skin detection can identify the oil light (oil-secreting) regions of the skin and the degree of oiliness. The detection of facial oil light regions is at present performed almost entirely by hand, which is costly, inefficient, and produces insufficiently accurate identification results.
Therefore, how to accurately detect facial oil light regions is a technical problem that currently needs to be solved.
Disclosure of Invention
The application provides a skin detection method, an electronic device, a computer-readable storage medium and a computer program product, so as to at least solve the technical problem of low accuracy in facial oil light region detection in the related art. The technical scheme of the application is as follows:
according to a first aspect of embodiments herein, there is provided a skin detection method, comprising:
acquiring a face image of an object to be detected;
determining a texture map corresponding to the face image;
determining at least one highlight region in the texture map;
and removing the interference region in the at least one highlight region to obtain an oil light region in the face image.
Optionally, the texture map is a texture map represented by a gray scale map; the determining at least one highlight region in the texture map comprises:
comparing the gray value of each pixel point in the texture map with a target screening threshold value respectively;
and determining the area corresponding to the pixel point with the gray value higher than the target screening threshold value as the highlight area.
Optionally, the target screening threshold is determined according to the following steps:
determining the face average gray value of the face image;
determining a sum value between the face average gray value and a target gray difference value as the target screening threshold value; the target gray difference value is determined based on the average gray value of the oil area and the average gray value of the face area in each labeled image.
Optionally, determining a face average gray value of the face image includes:
selecting representative key points from a plurality of different position areas of the face image;
and determining the gray average value of the representative key points, and determining the gray average value as the face average gray value.
Optionally, the removing the interference region in the at least one highlight region to obtain an oil light region in the face image includes:
determining the area of each highlight area;
and filtering out a highlight area with the area smaller than the area threshold value to obtain the oil light area.
Optionally, the method further includes:
determining a difference value between the average gray value of the oil light area in the face image and the average gray value of the face area in the face image;
comparing the difference value with a corresponding first classification threshold value to determine the severity level of the oil light area; wherein the first classification threshold is determined based on a difference between an average gray value of the oil light region in each of the labeled images and an average gray value of the face region.
Optionally, the method further includes:
determining the average gray value of an oil light area in the face image;
comparing the average gray value of the oily light region in the face image with a second classification threshold value to determine the severity level of the oily light region; wherein the second classification threshold is determined based on an average gray value of the oil light area in each of the labeled images.
According to a second aspect of embodiments of the present application, there is provided a skin detection apparatus comprising:
the acquisition module is used for acquiring a face image of an object to be detected;
the first determining module is used for determining a texture map corresponding to the face image;
a second determining module, configured to determine at least one highlight region in the texture map;
and the removing module is used for removing the interference region in the at least one highlight region to obtain an oil light region in the face image.
Optionally, the texture map is a texture map represented by a gray scale map; the second determining module comprises:
the first comparison module is used for respectively comparing the gray value of each pixel point in the texture map with a target screening threshold;
and the region determining module is used for determining the region corresponding to the pixel point with the gray value higher than the target screening threshold value as the highlight region.
Optionally, the apparatus further comprises: the screening threshold determination module specifically comprises:
the first gray value determining module is used for determining the average gray value of the face image;
the target screening threshold value determining module is used for determining the sum value of the average face gray value and the target gray difference value as the target screening threshold value; the target gray difference value is determined based on the average gray value of the oil area and the average gray value of the face area in each labeled image.
Optionally, the first gray value determining module includes:
the selecting module is used for selecting representative key points from a plurality of different position areas of the face image;
and the second gray value determining module is used for determining the gray mean value of the representative key points and determining the gray mean value as the average gray value of the face.
Optionally, the removing module includes:
the area determining module is used for determining the area of each highlight area;
and the area filtering module is used for filtering out the highlight area with the area smaller than the area threshold value to obtain the oil light area.
Optionally, the apparatus further comprises:
a difference determining module, configured to determine a difference between an average gray-scale value of a light area in the face image and an average gray-scale value of a face area in the face image;
the second comparison module is used for comparing the difference value with a corresponding first classification threshold value and determining the severity level of the oil light area; wherein the first classification threshold is determined based on a difference between an average gray value of the oil area in each of the labeled images and an average gray value of the face area.
Optionally, the apparatus further comprises:
the third gray value determining module is used for determining the average gray value of an oil light area in the face image;
the third comparison module is used for comparing the average gray value of the oil light area in the face image with a second classification threshold value to determine the severity level of the oil light area; and the second classification threshold is determined based on the average gray value of the oil light area in each marked image.
According to a third aspect of embodiments herein, there is provided an electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the skin detection method as described above.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform a skin detection method as described above.
According to a fifth aspect of embodiments herein, there is provided a computer program product comprising a computer program or instructions, characterized in that the computer program or instructions, when executed by a processor, implement the skin detection method as described above.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
In the embodiment of the application, a face image of an object to be detected is acquired; a texture map corresponding to the face image is determined; at least one highlight region in the texture map is determined; and the interference regions in the at least one highlight region are removed to obtain the oil light region in the face image. That is to say, in the embodiment of the present application, the texture map of the face image of the object to be detected is processed through a series of automatic operations to obtain at least one highlight region in the texture map, and interference regions in the bright skin regions, such as acne and moles, are then removed to obtain an accurate oil light region on the face image, which not only saves the cost of manual detection but also improves the accuracy of detecting facial oil light regions.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application and are not to be construed as limiting the application.
Fig. 1 is a flowchart illustrating a skin detection method according to an embodiment of the present application.
Fig. 2 is a block diagram of a skin detection device according to an embodiment of the present application.
Fig. 3 is a block diagram of a second determining module according to an embodiment of the present disclosure.
Fig. 4 is a block diagram of a removal module according to an embodiment of the present disclosure.
Fig. 5 is another block diagram of a skin detection device according to an embodiment of the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
FIG. 7 is a block diagram illustrating an apparatus with skin detection in accordance with an exemplary embodiment.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
In recent years, technical research based on artificial intelligence, such as computer vision, deep learning, machine learning, image processing, and image recognition, has been actively developed. Artificial Intelligence (AI) is an emerging scientific technology for studying and developing theories, methods, techniques and application systems for simulating and extending human Intelligence. The artificial intelligence subject is a comprehensive subject and relates to various technical categories such as chips, big data, cloud computing, internet of things, distributed storage, deep learning, machine learning and neural networks. Computer vision is used as an important branch of artificial intelligence, specifically, a machine is used for identifying the world, and computer vision technologies generally comprise technologies such as face identification, living body detection, fingerprint identification and anti-counterfeiting verification, biological feature identification, face detection, pedestrian detection, target detection, pedestrian identification, image processing, image identification, image semantic understanding, image retrieval, character identification, video processing, video content identification, three-dimensional reconstruction, virtual reality, augmented reality, synchronous positioning and map construction (SLAM), computational photography, robot navigation and positioning and the like. With the research and progress of artificial intelligence technology, the technology is applied to many fields, such as safety control, city management, traffic management, building management, park management, face passage, face attendance, logistics management, warehouse management, robots, intelligent marketing, computational photography, mobile phone images, cloud services, smart homes, wearable equipment, unmanned driving, automatic driving, intelligent medical treatment, face payment, face unlocking, fingerprint unlocking, person certificate verification, smart screens, smart televisions, cameras, mobile internet, live webcasts, beauty treatment, medical beauty treatment, intelligent temperature measurement and the like.
Fig. 1 is a flowchart of a skin detection method according to an embodiment of the present application, and as shown in fig. 1, the skin detection method includes the following steps:
in step 101, acquiring a face image of an object to be detected;
in step 102, determining a texture map corresponding to the face image;
in step 103, at least one highlight region in the texture map is determined;
in step 104, removing the interference region in the at least one highlight region to obtain an oil light region in the face image.
The skin detection method disclosed in the present disclosure can be applied to terminals, servers and the like, without limitation here; the terminal device may be an electronic device such as a smartphone, a notebook computer or a tablet computer.
The following describes in detail specific implementation steps of a skin detection method provided in an embodiment of the present disclosure with reference to fig. 1.
In step 101, a face image of an object to be detected is acquired.
In this step, the face image of the object to be detected generally includes a face region and may also include other skin regions. The face image may be a face test picture acquired through a terminal or an acquisition device, and may belong to a user of any age and any gender. In general, to improve detection accuracy, the face image in this embodiment may be a high-resolution face image with sufficient light and without overexposure, such as an RGB color image of the face. RGB denotes the colors of the three color channels red (R), green (G) and blue (B); various colors can be obtained through changes of these three channels and their mutual superposition, and each color can in turn be synthesized from different proportions of red, green and blue. An image in which each pixel is represented by such RGB values is referred to as an RGB image.
In step 102, a texture map corresponding to the face image is determined.
In this step, a histogram equalization algorithm may be used to determine the texture map of the face image, making the contrast between bright and dark skin regions more pronounced. For example, for an acquired frontal, high-definition, unoccluded face image, a standard 81-point face model is used to determine the face region on the face image, and a histogram equalization algorithm is then applied to that face region to adjust the brightness distribution of its pixels, so as to obtain a texture map of the facial skin represented by a gray-scale image. Histogram equalization is considered to be one of the most effective methods for improving image contrast; its basic idea is to adjust the brightness distribution of the pixels mathematically so that the adjusted histogram has the maximum dynamic range.
Specifically, this embodiment may adopt Adaptive Histogram Equalization (AHE), a computer image-processing technique used to improve image contrast. Unlike the ordinary histogram equalization algorithm, the AHE algorithm changes the contrast of the image by calculating local histograms of the image and then redistributing brightness. For places in an image that are clearly brighter or darker than other areas, the ordinary histogram equalization algorithm cannot describe the detail information; the AHE algorithm expands the local contrast and shows the details of smooth regions by performing histogram equalization in a rectangular region around the pixel currently being processed. That is, AHE enhances each pixel by computing a transform function from the pixel's neighborhood; in its simplest form, each pixel is equalized based on the histogram of a square neighborhood around it.
The AHE algorithm has two notable properties. One is that the size of the local neighborhood processed by the AHE algorithm governs the strength of the effect: a small rectangular neighborhood gives strong local contrast, while a large rectangular neighborhood gives weak local contrast. The other is that if the image content within the rectangular area is relatively flat, with closely clustered gray levels, the gray-level histogram becomes sharply peaked, and noise may be excessively amplified during histogram equalization.
Of course, this embodiment may also adopt the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm, which differs from the ordinary adaptive histogram equalization algorithm in its contrast clipping. The same feature can also be applied to global histogram equalization, constituting the so-called Contrast Limited Histogram Equalization (CLHE) algorithm; in CLAHE, contrast clipping must be used for each small region. This can be used to overcome the AHE algorithm's problem of over-amplifying noise; that is to say, the CLAHE algorithm can effectively limit noise amplification while improving the contrast of the image. For example, in the gray-level histogram of a local rectangular neighborhood, since the degree of contrast amplification is proportional to the slope of the cumulative curve of the pixel probability-distribution histogram, the part of the histogram above a certain threshold is distributed evenly over the other parts of the histogram in order to limit the contrast.
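As a concrete illustration of this step, the following is a minimal sketch of producing a gray-scale texture map with CLAHE using OpenCV; the file name and the clipLimit/tileGridSize values are illustrative assumptions, not parameters fixed by the application.

```python
import cv2

# Read the face image of the object to be detected and convert it to gray scale.
face_bgr = cv2.imread("face.jpg")  # hypothetical input path
gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)

# CLAHE: per-block histogram equalization with contrast clipping, which
# limits noise amplification in flat regions as described above.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
texture_map = clahe.apply(gray)  # texture map represented by a gray-scale map
```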
In step 103, at least one highlight area in the texture map is determined;
and screening the pixel values of the texture map to obtain at least one highlight area in the texture map, wherein each highlight area generally refers to an area with high skin brightness.
The highlight region may also be understood as an oil-light region, and the highlight region may include some discrete points and some small defect regions, which are collectively referred to as interference regions, such as closed-mouth, suppurative pox, and other interference regions.
In this embodiment, the highlight region is usually represented by a grayscale Image, i.e. a Binary Image, where any pixel point on the Image has only two grayscale values, i.e. 0 or 255, representing black and white, respectively.
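As a sketch of this screening step (assuming the texture_map produced by the CLAHE example above, and a placeholder value for the target screening threshold, whose derivation is described below):

```python
import numpy as np

K = 180  # hypothetical target screening threshold

# Pixels brighter than K form the candidate highlight regions; the result is
# a binary image whose pixels take only the values 0 and 255.
binary = np.where(texture_map > K, 255, 0).astype(np.uint8)
```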
In step 104, removing the interference region in the at least one highlight region to obtain an oil light region in the face image.
In this step, a corresponding algorithm or threshold judgment may be used to remove the small interference regions in the at least one highlight region, so as to obtain the oil light region in the face image; the specific removal process is described in the following embodiments.
In the embodiment of the application, when the face image of an object to be detected is obtained, the texture map corresponding to the face image is determined; at least one highlight region in the texture map is then determined; and finally the interference regions in the at least one highlight region are removed to obtain the oil light region in the face image. That is to say, in the embodiment of the present application, the texture map of the face image of the object to be detected is processed through a series of automatic operations to obtain at least one highlight region in the texture map, and interference regions such as acne and moles are then removed from the highlight regions to obtain an accurate oil light region on the face image, which not only saves the cost of manual detection but also improves the accuracy of detecting facial oil light regions.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the determining a texture map corresponding to the face image includes:
1) Determining a facial skin region on the face image by using a face key point model.
The face key point model in this embodiment is usually an 81-point face model, which uses the 81 face points to determine the facial skin region. In the present application, the face image is input into the 81-point model to obtain the facial skin region on the face image.
2) The facial skin area is divided into a plurality of image blocks, and a histogram of pixels of each image block is determined respectively.
In this embodiment, the facial skin region may first be divided into a plurality of image blocks, for example 8 × 8 blocks, though it may equally be divided into 5 × 5 or 5 × 4 blocks and the like, which this embodiment does not limit; a histogram of the pixels of each image block is then computed block by block. The manner of dividing image blocks is well known to those skilled in the art and is not described here.
3) Clipping and equalizing the histogram to obtain a histogram-equalized image of each image block.
In this step, the histogram may be clipped by limiting the contrast amplitude, and the histogram of each image block is equalized to obtain a histogram-equalized image of each block. Contrast-limited histogram equalization is considered to be one of the most effective methods for improving image contrast, i.e., a mathematical method is used to adjust the brightness distribution of the pixels so that the adjusted histogram has the maximum dynamic range. Because the image can show a color cast after being processed by the contrast-limited histogram equalization algorithm, this embodiment adds a layer screen-blending (color-filter mixing) operation afterwards, which reduces the color cast well and makes the enhanced image look more natural and comfortable.
4) Each pixel value in the respective image block is mapped to a new pixel value by linear interpolation.
In this step, the cumulative distribution of each image block is calculated; during this calculation the contrast must be limited to avoid an excessively steep histogram. After the cumulative distributions are computed, each pixel value in the original image block is mapped to a new pixel value through them. In the mapping process, a linear interpolation algorithm (such as bilinear interpolation) is used, and the interpolated value is taken as the new value of the pixel point. The specific implementation is well known to those skilled in the art and is not described here.
In this embodiment, a linear interpolation algorithm traverses the image blocks and maps each pixel value in each block to a new value through the cumulative distributions. The purpose is to resolve the value discontinuities between image blocks while reducing computation and increasing speed through inter-block linear interpolation: pixels in image blocks in the middle of the grid formed by the blocks can use bilinear interpolation, image blocks on the edges of the grid can use linear interpolation, and image blocks at the corners can directly use their own transform function. During this processing, the number of transform-function evaluations is greatly reduced at the cost of only some simple bilinear interpolation, saving computation and improving efficiency.
That is, the interpolation (or linear interpolation) yields the brightness transformation (CDF) function while reducing the amount of computation when the transform function is evaluated: the transform functions of the image blocks at the image corners are obtained entirely from their own definition, the transform functions of the blocks at the image edges are obtained by linearly interpolating the transforms of the two adjacent blocks, and the transform functions of the blocks in the image interior are obtained by bilinear interpolation.
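The following is a simplified sketch of this inter-block interpolation on an 8 × 8 block grid; contrast clipping and the exact corner and edge handling described above are omitted for brevity, so it illustrates the idea rather than the application's exact procedure.

```python
import numpy as np

def blockwise_equalize_interp(gray, tiles=8):
    """Build one equalization lookup table (a scaled cumulative distribution)
    per image block, then map every pixel by bilinear interpolation between
    the four nearest block tables, avoiding discontinuities at block borders."""
    h, w = gray.shape
    th, tw = h // tiles, w // tiles
    luts = np.zeros((tiles, tiles, 256), dtype=np.float32)
    for i in range(tiles):
        for j in range(tiles):
            block = gray[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            hist = np.bincount(block.ravel(), minlength=256).astype(np.float32)
            cdf = np.cumsum(hist) / block.size
            luts[i, j] = cdf * 255.0  # per-block brightness transform
    ys, xs = np.mgrid[0:h, 0:w]
    # fractional position of each pixel relative to the block centers
    fy = np.clip((ys - th / 2) / th, 0, tiles - 1)
    fx = np.clip((xs - tw / 2) / tw, 0, tiles - 1)
    y0, x0 = fy.astype(int), fx.astype(int)
    y1, x1 = np.minimum(y0 + 1, tiles - 1), np.minimum(x0 + 1, tiles - 1)
    wy, wx = fy - y0, fx - x0
    v = gray  # original pixel values index into the lookup tables
    out = ((1 - wy) * (1 - wx) * luts[y0, x0, v] + (1 - wy) * wx * luts[y0, x1, v]
           + wy * (1 - wx) * luts[y1, x0, v] + wy * wx * luts[y1, x1, v])
    return out.astype(np.uint8)
```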
5) Based on the new pixel values, performing a layer screen-blending (color-filter mixing) operation on the histogram-equalized image and the original image (namely, the face image to be detected) to obtain the skin texture represented by a gray-scale map.
The layer screen-blending algorithm may employ: f(a, b) = 1 - (1 - a) × (1 - b), where a denotes the face image to be detected and b denotes the histogram-equalized image. After the RGB image of the facial skin region is processed with the CLAHE algorithm in this way, a texture map represented by a gray-scale map is obtained. The resulting texture map contains highlight regions and non-highlight regions, where the highlight regions correspond to oil light regions and the non-highlight regions correspond to non-oil-light regions.
It should be noted that the contrast-limited histogram equalization algorithm and the layer screen-blending algorithm used in the present application are well known to those skilled in the art and are not described in detail here.
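A sketch of the layer screen-blending step, assuming both inputs are 8-bit images that are normalized to [0, 1] before applying f(a, b) = 1 - (1 - a) × (1 - b):

```python
import numpy as np

def screen_blend(original, equalized):
    """Blend the original face image (a) with the histogram-equalized image
    (b) using the screen formula f(a, b) = 1 - (1 - a) * (1 - b)."""
    a = original.astype(np.float32) / 255.0
    b = equalized.astype(np.float32) / 255.0
    out = 1.0 - (1.0 - a) * (1.0 - b)
    return (out * 255.0).astype(np.uint8)
```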
In the embodiment of the application, a face key point model is used to determine the facial skin region on the face image, and a texture map is generated using the contrast-limited histogram equalization algorithm so as to improve the brightness contrast of the image; highlight regions are then obtained by threshold screening, and the interference regions within them are removed to obtain an accurate oil light region on the face image, which not only saves the cost of manual detection but also improves the accuracy of detecting facial oil light regions.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the texture map is a texture map characterized by a gray scale map; the determining at least one highlight region in the texture map comprises: firstly, comparing the gray value of each pixel point in the texture map with a target screening threshold value respectively; and secondly, determining the area corresponding to the pixel point with the gray value higher than the target screening threshold value as the highlight area.
In this embodiment, the gray value of each pixel point in the texture map is compared with the determined target screening threshold. According to the comparison result, in one case, the regions corresponding to pixel points whose gray value is higher than the target screening threshold are determined first (these regions can be found through a maximum connected region algorithm), and the corresponding regions are directly screened out as highlight regions; in the other case, the regions corresponding to pixel points whose gray value is lower than the target screening threshold are determined, likewise through a maximum connected region algorithm, and these regions are removed to obtain the highlight regions.
The maximum connected region algorithm proceeds as follows: starting from any non-zero pixel point, it is judged whether the adjacent pixel points above, below, to the left, to the right and on the diagonals are non-zero; if so, they are judged to belong to the same region as the current pixel point. Taking these new pixel points as the boundary, the process is repeated until no new adjacent pixel points are produced. Processing the gray-scale map in this way generates several regions; the region containing the most pixel points is selected as the maximum connected region, and the maximum connected region is taken as the region corresponding to the pixel points whose gray value is lower than the target screening threshold.
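A sketch of that labeling procedure on a binary map, written as a breadth-first variant of the search described above, with 8-neighborhood adjacency:

```python
from collections import deque

import numpy as np

def connected_regions(binary):
    """Label 8-connected regions of non-zero pixels: start from any unlabeled
    non-zero pixel and flood outward until no new neighbors are found."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] != 0 and labels[y, x] == 0:
                current += 1
                labels[y, x] = current
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] != 0
                                    and labels[ny, nx] == 0):
                                labels[ny, nx] = current
                                queue.append((ny, nx))
    return labels, current  # label map and number of regions found
```

The region containing the most pixels can then be selected as the maximum connected region, for example with np.bincount(labels.ravel())[1:].argmax() + 1.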
In this embodiment, the target screening threshold is determined according to the following steps: determining the face average gray value of the face image; determining a sum value between the face average gray value and a target gray difference value as the target screening threshold value; the target gray difference value is determined based on the average gray value of the oil area and the average gray value of the face area in each marked image.
The target gray scale difference in this embodiment may be determined in advance, may also be determined in the detection process, and the like, and this embodiment is not limited.
The method for determining the target gray level difference specifically comprises the following steps: acquiring a plurality of face pictures, wherein each face picture comprises a marked oil-light area; determining a texture map of each face picture; determining the average gray value of the face area and the marked average gray value of the oil light area according to the texture map of each face picture; and determining the target gray difference value according to the difference between the average gray value of the face area and the average gray value of the marked oil light area. And then, determining the sum of the average face gray value of the face image of the object to be detected and the target gray difference value as the target screening threshold.
In this embodiment, in order to determine the oil light region, a threshold K (i.e., the target screening threshold) needs to be determined in advance: a region above the threshold is considered an oil light region, and a region below it a non-oil-light region. The setting of the threshold K in this embodiment is therefore very important: if K is set too high, oil light regions may be missed; if K is set too low, false detections result, i.e., some merely lighter skin regions are also identified as oil light regions.
Therefore, in the embodiment of the present application, in order to determine a reasonable target screening threshold K and to overcome the influence of different skin colors and different lighting conditions on oil light detection, the target screening threshold K in this embodiment is a relative value. This is illustrated by the following two embodiments, though practical applications are not limited to them.
Embodiment 1: a batch of high-definition face pictures is collected in advance; this embodiment takes N face pictures as an example. The boundary line of the oil light region in each face picture is annotated manually. In this embodiment, the CLAHE algorithm is used to generate a corresponding texture map for each face picture, and the average gray value of the oil light region of each face picture is calculated. The average gray value of the face region of each face picture is also calculated; taking the 81-point face layout as an example, a subset of points such as 35, 36, 37, 65, 69, 64, 77 and 73 may be used for this calculation. Then, the difference between the average gray value of the oil light regions of the N face pictures and the average gray value of the face regions is calculated. Finally, after the face average gray value of the face image of the object to be detected is calculated, the sum of this difference and that face average gray value is used as the target screening threshold. That is to say, in this embodiment, the target screening threshold is determined from the difference between the average gray value of the oil light region and the average gray value of the face region in each annotated face picture, summed with the face average gray value of the current face image of the object to be detected.
Embodiment 2: on the basis of Embodiment 1, when the boundary line of the oil light region in each face picture is annotated manually, the severity level of the oil light region can be annotated as well; the severity can simply be divided into high, medium and low levels for collecting and annotating data of different severities. That is, in this embodiment, the target screening threshold may also be calculated for the annotated severity level of the oil light regions, so as to obtain a relative target screening threshold conditioned on the severity level.
This embodiment differs from Embodiment 1 in that, when the average gray value of the oil light region and the average gray value of the face region of each face picture are calculated, a particular severity level must be fixed. For example, the average gray value v_ij^shiny may be computed as the average gray value of the j-th oil light region of the i-th annotated face picture at severity level l = 1; it may equally be computed at severity level l = 2. Which severity level is selected can be set manually, or the system's default severity level can be used. Similarly, v_i^face is the face average gray value of the i-th annotated face picture at severity level l. Other parts of this embodiment are the same as in Embodiment 1, are described in detail there, and are not repeated here. Specifically, the target screening threshold in this embodiment is determined as:

    K = V_face + (1/N) * Σ_{i=1..N} ( mean_j(v_ij^shiny) - v_i^face )

where V_face denotes the face average gray value of the face image of the object to be detected, l denotes the severity level, N denotes the total number of annotated face pictures (i.e., annotated images), v_ij^shiny denotes the average gray value of the j-th oil light region of the i-th annotated face picture with severity level l, and v_i^face denotes the face average gray value of the i-th annotated face picture with severity level l. In the formula, v_ij^shiny and v_i^face use the same l, for example l = 1 for both, or l = 2 for both.
That is to say, in the embodiment of the present application, using the above formula, the face average gray value of the face image is determined first, and the sum of the face average gray value and the target gray difference is then determined as the target screening threshold.
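A sketch of this computation under the definitions above; the argument names and the data layout are assumptions made for illustration:

```python
import numpy as np

def target_screening_threshold(v_face_test, shiny_means, face_means):
    """K = V_face + (1/N) * sum over the N annotated pictures of
    (mean_j(v_ij_shiny) - v_i_face).

    v_face_test: face average gray value of the image under test (V_face);
    shiny_means: per annotated picture, the list of average gray values of
                 its oil light regions at the chosen severity level l;
    face_means:  per annotated picture, the face average gray value at the
                 same severity level l."""
    diffs = [float(np.mean(regions)) - face
             for regions, face in zip(shiny_means, face_means)]
    return v_face_test + float(np.mean(diffs))
```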
Optionally, in another embodiment, on the basis of the foregoing embodiment, the determining a face mean gray-scale value of the face image in this embodiment includes: selecting representative key points from a plurality of different position areas of the face image; and determining the gray average value of the representative key points, and determining the gray average value as the face average gray value.
In this embodiment, representative key points are selected from various positions of the face image to calculate the face average gray value, instead of averaging over all the face key points, which reduces the amount of computation and improves the computational efficiency.
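A sketch of this computation; the landmark indices follow the examples given earlier (points such as 35, 36, 37, 65, 69, 64, 77, 73), though the exact subset is an illustrative assumption:

```python
import numpy as np

REPRESENTATIVE_POINTS = [35, 36, 37, 64, 65, 69, 73, 77]  # illustrative subset

def face_mean_gray(texture_map, landmarks):
    """landmarks: an (81, 2) array of (x, y) pixel coordinates of the face
    key points; the face average gray value is taken over the representative
    points only, rather than all 81 points."""
    pts = np.asarray(landmarks)[REPRESENTATIVE_POINTS]
    values = [texture_map[int(y), int(x)] for x, y in pts]
    return float(np.mean(values))
```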
Optionally, in another embodiment, on the basis of the foregoing embodiment, the removing the interference region in the at least one highlight region to obtain an oil light region in the face image includes: determining the region area of each highlight region, and filtering out the highlight regions whose area is smaller than the area threshold to obtain the oil light region.
In this embodiment, at least one highlight region in the texture map is determined by using the maximum connected region algorithm. The highlight regions so determined may still include some noisy areas such as acne marks, and the oil light region can only be obtained by removing them; therefore each determined highlight region is labeled, the area of each labeled highlight region is calculated, and the highlight regions whose area is smaller than the area threshold are filtered out to obtain an accurate oil light region.
In this embodiment, after the texture map is screened with the target screening threshold K, a binary map of the bright skin regions is obtained, in which the gray value of any pixel point is either 0 or 255, representing black and white respectively. The binary map may include some discrete points and some small blemish interference regions, such as closed comedones and pustular acne. The bright skin regions are then connected using the maximum connected region algorithm, and each region so formed is referred to as a highlight region. One post-processing algorithm is:
that is, in this step, a maximum connected region algorithm is used for the binary image to assign adjacent points in the skin brightness region, the judgment of the adjacent points can be performed by 4 adjacent (up, down, left and right) or 8 adjacent (up, down, left and right diagonal), the maximum connected region algorithm can be implemented by a greedy algorithm, that is, starting from any un-labeled point, the depth-first search is used to judge the adjacent points until no new adjacent points are generated, the region surrounded by the adjacent points is represented by the same label, then, the next un-labeled point is searched until all the points are labeled as a certain region, and these regions are called highlight regions. Then, in order to remove discrete points in the highlight region and the interference regions such as pox, closed mouth, etc., the present embodiment proposes: the area of each highlight area is calculated, the area of each highlight area is compared with an area threshold value, and the area with the area smaller than a certain area threshold value S is filtered out, so that an accurate oil light area is obtained. It should be noted that the setting of the area threshold S depends on the size of the pixel frame of the face picture, and in this embodiment, the area threshold S is set according to a formula
Figure BDA0003532355230000121
An area threshold S is set, where w and h in the formula represent the width and height, respectively, of the smallest rectangular box formed by the face 81 points.
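A sketch of this post-processing using OpenCV's connected-component statistics; the area threshold S is passed in, having been computed from w and h as described above:

```python
import cv2
import numpy as np

def filter_small_regions(binary, area_threshold):
    """Keep only the highlight regions whose pixel area reaches the area
    threshold, removing discrete points and small interference regions."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    oil = np.zeros_like(binary)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= area_threshold:
            oil[labels == i] = 255
    return oil  # binary map of the oil light region
```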
Optionally, in another embodiment, on the basis of the foregoing embodiment, the method further includes: determining a difference value between the average gray value of the oil light area in the face image and the average gray value of the face area in the face image; comparing the difference value with a corresponding first classification threshold value to determine the severity level of the oil light area; wherein the first classification threshold is determined based on a difference between an average gray value of the oil area in each of the labeled images and an average gray value of the face area.
In this embodiment, in order to accurately determine the severity levels of the oil light regions, namely high, medium and low, the existing annotation results for the face pictures can be used, i.e., all the high, medium and low oil light regions in the annotation data. The gray-level difference between each annotated oil light region and the face region (non-oil-light region) of its face picture is calculated, and the differences are traversed and summed to obtain an average difference, namely:

    t_l = (1/N_l) * Σ_{(i,j) labeled l} ( v_ij^shiny - v_i^face )

where l denotes the severity level of the oil light region, generally divided into three levels, i.e., l = 0, 1, 2; v_ij^shiny denotes the average gray value, on the texture map (gray-scale map), of the j-th oil light region of the i-th face picture annotated with severity level l; v_i^face denotes the face average gray value of that picture; and N_l denotes the number of annotated regions with severity label l. After experiments, t_1 and t_2 are finally adopted as the screening classification thresholds for severity in the oil light region, where t_1 is the average difference of all regions with severity label 1 and t_2 is the average difference of all regions with severity label 2.
Therefore, for the obtained texture map, an accurate oil light region can be obtained through the post-processing algorithm; then, the difference between the average gray value of that oil light region and the average gray value of the face region in the face image is compared with the first classification threshold (the first classification threshold is itself a difference value), so as to judge the severity level of the oil light region.
In the embodiment of the application, after an oil light area in a face image is determined, a difference value between an average gray value of the oil light area in the face image and an average gray value of a face area in the face image is further determined; then, the difference value is compared with a corresponding first classification threshold value, and the severity level of the oil light area is determined, wherein the first classification threshold value is determined based on the difference value of the average gray value of the oil light area in each labeled image and the average gray value of the face area. That is to say, in the embodiment of the present application, the severity level of the gloss area is determined by determining the difference between the average gray-scale value of the gloss area in the face image and the average gray-scale value of the face area in the face image, and comparing the difference with the first classification threshold, so that the cost of manual detection is saved, and a basis is provided for determining the severity level of the gloss area.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the comparing the difference value with the corresponding first classification threshold value to determine the severity level of the gloss section in this embodiment includes:
if the difference value is greater than or equal to a first screening classification threshold value, determining the severity level of the oil light area to be high;
if the difference value is larger than a second screening classification threshold value and smaller than the first screening classification threshold value, determining the severity level of the oil light area to be a medium level;
and if the difference is less than or equal to the second screening classification threshold, determining that the severity level of the oil light area is low.
Optionally, in another embodiment, on the basis of the above embodiment, the method further includes: determining the average gray value of an oil light area in the face image; and comparing the average gray value of the oil light area in the face image with a second classification threshold value to determine the severity level of the oil light area, wherein the second classification threshold value is determined based on the average gray value of the oil light area in each labeled image.
In this embodiment, the determination of the severity level of the oil light region is similar to the implementation of the above embodiment, with the difference that the above embodiment compares a difference value, whereas this embodiment compares an average gray value: the average gray value of the oil light region is compared with a threshold determined from the average gray values of the oil light regions in the annotated images, so as to determine the severity level of the oil light region. This saves the cost of manual detection and provides a basis for judging the severity level of the oil light region.
It should be noted that, although the method embodiments above are described as a series of action combinations for simplicity of explanation, those skilled in the art will appreciate that the present application is not limited by the order of actions described, since in accordance with the present application some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that not every action involved is necessarily required by the present application.
Fig. 2 is a block diagram of a skin detection apparatus according to an embodiment of the present application. Referring to fig. 2, the apparatus includes: an acquisition module 401, a first determination module 402, a second determination module 403, and a removal module 404, wherein,
the acquiring module 401 is configured to acquire a face image of an object to be detected;
the first determining module 402 is configured to determine a texture map corresponding to the face image;
the second determining module 403 is configured to determine at least one highlight area in the texture map;
the removing module 404 is configured to remove an interference region in the at least one highlight region, so as to obtain an oil light region in the face image.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the texture map is a texture map represented by a gray scale map; the second determining module 403 includes: a first comparing module 501 and a region determining module 502, which are schematically shown in fig. 3, wherein,
the first comparing module 501 is configured to compare the gray value of each pixel point in the texture map with a target screening threshold;
the region determining module 502 is configured to determine a region corresponding to a pixel point with a gray value higher than the target screening threshold as the highlight region.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus further includes a screening threshold determining module, which specifically includes a first gray value determining module and a target screening threshold determining module, wherein,
the first gray value determining module is used for determining the face average gray value of the face image;
the image determining module is used for determining a texture map of each face test picture;
the target screening threshold determining module is used for determining the sum of the average face gray value and the target gray difference as the target screening threshold; the target gray difference value is determined based on the average gray value of the oil area and the average gray value of the face area in each marked image.
Optionally, in another embodiment, on the basis of the foregoing embodiment, the first gray value determining module includes: a selecting module and a second gray value determining module, wherein,
the selection module is used for selecting representative key points from a plurality of different position areas of the face image;
the second gray value determining module is configured to determine a gray mean value of the representative key points, and determine the gray mean value as the average face gray value.
Optionally, in another embodiment, on the basis of the above embodiment, the removing module 404 includes an area determining module 601 and an area filtering module 602, which are schematically shown in fig. 4, wherein,
the area determining module 601 is configured to determine a region area of each highlight region;
the area filtering module 602 is configured to filter out a highlight area with an area smaller than an area threshold, so as to obtain the oil light area.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus further includes a difference determining module 701 and a second comparing module 702, whose schematic structure is shown in fig. 5, wherein,
the difference determining module 701 is configured to determine a difference between an average grayscale value of an oil light region in the face image and an average grayscale value of a face region in the face image;
the second comparing module 702 is configured to compare the difference with a corresponding first classification threshold, and determine a severity level of the gloss area; wherein the first classification threshold is determined based on a difference between an average gray value of the oil light region in each of the labeled images and an average gray value of the face region.
Optionally, in another embodiment, on the basis of the above embodiment, the apparatus further includes: a third gray value determination module and a third comparison module, wherein,
the third gray value determining module is used for determining the average gray value of the oil light area in the face image;
the third comparison module is used for comparing the average gray value of the oil light area in the face image with a second classification threshold value to determine the severity level of the oil light area; wherein the second classification threshold is determined based on an average gray value of the oil light area in each of the labeled images.
The skin detection device provided in the embodiment of the present application may implement all the method steps in the above method embodiments, and is not described herein again.
Optionally, an embodiment of the present application further provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the skin detection method as described above.
Optionally, the present application further provides a computer-readable storage medium, and when a processor of an electronic device executes instructions in the computer-readable storage medium, the electronic device is enabled to execute the skin detection method as described above. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Optionally, the present application further provides a computer program product including a computer program or instructions which, when executed by a processor, implement the skin detection method described above.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram of an electronic device 800 according to an embodiment of the present application. The electronic device 800 may be a mobile terminal or a server; in the embodiments of the present application, a mobile terminal is taken as an example. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 6, electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation of the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 can also detect a change in position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the skin detection methods shown above.
In an embodiment, a computer-readable storage medium, such as the memory 804, is also provided that includes instructions executable by the processor 820 of the electronic device 800 to perform the skin detection method shown above. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an embodiment, there is also provided a computer program product, the instructions in which, when executed by the processor 820 of the electronic device 800, cause the electronic device 800 to perform the skin detection method shown above.
Fig. 7 is a block diagram of an apparatus 900 for skin detection according to an embodiment of the present application. For example, the apparatus 900 may be provided as a server. Referring to fig. 7, the apparatus 900 includes a processing component 922, which further includes one or more processors and memory resources, represented by memory 932, for storing instructions, such as applications, that may be executed by the processing component 922. The application programs stored in the memory 932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 922 is configured to execute instructions to perform the skin detection methods described above.
The apparatus 900 may also include a power component 926 configured to perform power management of the apparatus 900, a wired or wireless network interface 950 configured to connect the apparatus 900 to a network, and an input/output (I/O) interface 958. The apparatus 900 may operate based on an operating system stored in the memory 932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The user information (including but not limited to the device information of the user, the personal information of the user, etc.) and the related data referred to in this application are all information authorized by the user or by each party.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of skin detection, comprising:
acquiring a face image of an object to be detected;
determining a texture map corresponding to the face image;
determining at least one highlight region in the texture map;
and removing the interference region in the at least one highlight region to obtain an oil light region in the face image.
2. The skin detection method of claim 1, wherein the texture map is represented as a grayscale map;
the determining at least one highlight region in the texture map comprises:
comparing the gray value of each pixel point in the texture map with a target screening threshold value respectively;
and determining the area corresponding to the pixel point with the gray value higher than the target screening threshold value as the highlight area.
3. The skin detection method of claim 2, wherein the target screening threshold is determined by:
determining the face average gray value of the face image;
determining the sum of the face average gray value and a target gray difference as the target screening threshold; wherein the target gray difference is determined based on the average gray value of the oil light area and the average gray value of the face area in each labeled image.
4. The skin detection method of claim 3, wherein the determining a face mean gray value of the face image comprises:
selecting representative key points from a plurality of different position areas of the face image;
and determining the gray average value of the representative key points, and determining the gray average value as the face average gray value.
5. The skin detection method according to any one of claims 1 to 4, wherein the removing the interference region in the at least one highlight region to obtain an oil light region in the face image comprises:
determining the area of each highlight area;
and filtering out highlight areas whose area is smaller than the area threshold, so as to obtain the oil light area.
6. The skin detection method according to any one of claims 1 to 5, characterized in that the method further comprises:
determining the difference between the average gray value of the oil light area in the face image and the average gray value of the face area in the face image;
comparing the difference with a corresponding first classification threshold to determine a severity level of the oil light area; wherein the first classification threshold is determined based on the difference between the average gray value of the oil light area and the average gray value of the face area in each labeled image.
7. The skin detection method according to any one of claims 1 to 5, characterized in that the method further comprises:
determining the average gray value of an oil light area in the face image;
comparing the average gray value of the oil light area in the face image with a second classification threshold to determine a severity level of the oil light area; wherein the second classification threshold is determined based on the average gray value of the oil light area in each labeled image.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the skin detection method of any one of claims 1 to 7.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the skin detection method of any one of claims 1-7.
10. A computer program product comprising a computer program or instructions, wherein the computer program or instructions, when executed by a processor, implement the skin detection method of any one of claims 1 to 7.
CN202210214727.4A 2022-03-04 2022-03-04 Skin detection method, electronic device, storage medium and product Pending CN114782309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210214727.4A CN114782309A (en) 2022-03-04 2022-03-04 Skin detection method, electronic device, storage medium and product

Publications (1)

Publication Number Publication Date
CN114782309A true CN114782309A (en) 2022-07-22

Family

ID=82423779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210214727.4A Pending CN114782309A (en) 2022-03-04 2022-03-04 Skin detection method, electronic device, storage medium and product

Country Status (1)

Country Link
CN (1) CN114782309A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105719234A (en) * 2016-01-26 2016-06-29 厦门美图之家科技有限公司 Automatic gloss removing method and system for face area and shooting terminal
CN113298753A (en) * 2021-03-26 2021-08-24 阿里巴巴新加坡控股有限公司 Sensitive muscle detection method, image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination