CN112801997A - Image enhancement quality evaluation method and device, electronic equipment and storage medium - Google Patents

Publication number
CN112801997A
Authority
CN
China
Prior art keywords
skin color
image
face
region
probability
Prior art date
Legal status
Granted
Application number
CN202110161308.4A
Other languages
Chinese (zh)
Other versions
CN112801997B (en)
Inventor
肖尧
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202110161308.4A
Publication of CN112801997A
Application granted; publication of CN112801997B
Legal status: Active

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Landscapes

  • Engineering & Computer Science
  • Computer Vision & Pattern Recognition
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Quality & Reliability
  • Image Analysis
  • Image Processing

Abstract

The disclosure provides an image enhancement quality evaluation method and device, electronic equipment, and a storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image; extracting the face skin color region in the first image as a first skin color region, and extracting the face skin color region in the first enhanced image as a second skin color region; determining, through a pre-trained skin color recognition model, the skin color probability corresponding to the first skin color region as a first skin color probability and the skin color probability corresponding to the second skin color region as a second skin color probability, wherein the skin color recognition model recognizes the probability that a skin color region is a standard natural skin color; and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability. The method and device can objectively evaluate the degree to which image enhancement processing affects skin color, and improve evaluation efficiency.

Description

Image enhancement quality evaluation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image enhancement quality assessment method and apparatus, an electronic device, and a storage medium.
Background
Image enhancement is a general term for a series of techniques that enhance the useful information in an image and improve its visual effect. By purposefully emphasizing the overall or local characteristics of an image and enlarging the differences between the features of different objects in it, image enhancement can improve image interpretation and recognition, so as to meet the needs of particular kinds of analysis.
Existing objective evaluation methods compare the similarity between the enhanced image and the original image, or the loss of low-level features; they do not evaluate skin color distortion. Where skin color distortion evaluation is introduced, it usually relies on subjective judgment, which is easily influenced by an evaluator's personal preferences, resulting in low evaluation efficiency, poor evaluation stability, and difficulty in large-scale application.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a storage medium for image enhancement quality evaluation, which can objectively evaluate a change in chromaticity of an image. The technical scheme is as follows:
in a first aspect, a method for evaluating image enhancement quality is provided, the method comprising:
acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
extracting a face skin color area in the first image as a first skin color area, and extracting a face skin color area in the first enhanced image as a second skin color area;
determining, through a pre-trained skin color recognition model, a skin color probability corresponding to the first skin color region as a first skin color probability and a skin color probability corresponding to the second skin color region as a second skin color probability, wherein the skin color recognition model is used for recognizing the probability that a skin color region is a standard natural skin color;
and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
In a second aspect, an apparatus for image enhancement quality assessment is provided, the apparatus comprising:
the acquisition module is used for acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
the extraction module is used for extracting a face skin color area in the first image as a first skin color area and extracting a face skin color area in the first enhanced image as a second skin color area;
the first determining module is used for determining a skin color probability corresponding to a first skin color area as a first skin color probability and determining a skin color probability corresponding to a second skin color area as a second skin color probability through a pre-trained skin color recognition model, and the skin color recognition model is used for recognizing the probability that the skin color area is a standard natural skin color;
and the second determining module is used for determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
In a third aspect, an electronic device is provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform operations corresponding to the method of image enhancement quality assessment shown in accordance with the first aspect of the present disclosure.
In a fourth aspect, a storage medium is provided, on which a computer program is stored, which when executed by a processor, implements the method of image enhancement quality assessment shown in the first aspect of the present disclosure.
The technical scheme provided by the disclosure has the following beneficial effects:
the method comprises the steps of extracting a face skin color area in a first image to serve as a first skin color area, extracting a face skin color area in a first enhanced image to serve as a second skin color area, enabling the overall effect of the image to be evaluated based on the extracted skin color area rather than the whole image area, inputting the extracted skin color area into a pre-trained skin color recognition model, objectively evaluating the probability that skin colors in the first image and the first enhanced image are natural standard skin colors, improving the evaluation efficiency, finally determining the skin color distortion degree of the first enhanced image relative to the first image by utilizing the first skin color probability and the second skin color probability, and visually measuring the influence of image enhancement on skin colors in the face image.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image enhancement quality evaluation method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of 68 key points of a human face in a human face image according to an embodiment of the present disclosure;
fig. 3 is a reference diagram of a skin color region on a face image according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a skin color identification model construction method according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another image enhancement quality evaluation method provided in the embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image enhancement quality evaluation apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device for image enhancement quality evaluation according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used to distinguish between devices, modules, or units, and are not intended to limit them to being different devices, modules, or units, nor to limit the order or interdependence of the functions they perform.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
The present disclosure provides an image enhancement quality evaluation method, apparatus, electronic device and storage medium, which aim to solve the above technical problems of the prior art.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The embodiment of the present disclosure provides a method for evaluating image enhancement quality, as shown in fig. 1, the method includes:
step S101: the method comprises the steps of obtaining a first image and a first enhanced image obtained by carrying out image enhancement processing on the first image.
The first image comprises an original image containing a portrait, and the first enhanced image comprises an image obtained after the original image is subjected to image enhancement processing.
It is understood that the selected first image to be evaluated and the corresponding first enhanced image may be acquired locally; alternatively, the externally transmitted first image and the corresponding first enhanced image may be received via a network transmission. The first enhanced image may be obtained by performing enhancement processing in advance, or may be obtained by performing enhancement processing in real time.
Step S102: and extracting a face skin color area in the first image as a first skin color area, and extracting a face skin color area in the first enhanced image as a second skin color area.
In one embodiment of the present disclosure, extracting a face skin color region in the first image as a first skin color region and extracting a face skin color region in the first enhanced image as a second skin color region comprises:
(1) and detecting a face key point region in the first image as a first face key point region, and extracting a face triangle region from the first face key point region according to a preset face triangle region position point as a first skin color region.
(2) And detecting a face key point region in the first enhanced image as a second face key point region, and extracting a face triangular region from the second face key point region according to a preset face triangular region position point as a second skin color region.
It can be understood that when an image undergoes image enhancement processing, changes to the face tend to draw the most attention. Therefore, in a specific implementation, the whole face-related area of the image can be taken as the skin color region and the image enhancement quality evaluated on that region; selecting the face area rather than the whole image improves overall evaluation efficiency.
Further, in an embodiment of the present disclosure, a face triangle region may be extracted from the face key point region according to preset face triangle region position points for image enhancement quality evaluation. It can be understood that the human face is easily affected by illumination and by the facial features, so a smaller, flatter area within the whole face region can be extracted as the skin color region, reducing the influence of the environment on the evaluation and improving its objectivity.
In one embodiment of the present disclosure, face keypoints in an image are extracted by:
Input the image from which face key points are to be extracted into a pre-trained face key point detection model, and acquire the 68 face key points output by the face key point detection model.
The pre-trained face key point detection model includes a 68-point face detection model, DAN (Deep Alignment Network), trained on the 300W data set; by inputting an image containing a face into the face key point detection model, the face key points of the face image can be output. Referring specifically to fig. 2, fig. 2 shows the 68 face key points in a face image. For face images in which no face key points can be detected, the evaluation can be terminated directly.
It should be noted that, when extracting face key points in an image according to the embodiment of the present disclosure, besides the 68-point face detection model, other face detection models may also be used; specifically, face key point detection models with 72, 128, 150, and 201 points, among others, are available. These models return the coordinate positions of the face key points, covering the face contour, eyes, eyebrows, lips, nose contour, and so on.
Therefore, the face key point region in the first image can be extracted as the first face key point region according to the face key point detection model, and the face key point region in the first enhanced image can be extracted as the second face key point region. Skin color regions can be extracted from the first face key point region and the second face key point region, wherein the positions and the number of the extracted key points can be set by a person skilled in the art.
In one embodiment of the present disclosure, the preset face triangle region positions include the triangle constructed from points 37, 32, and 49 and the triangle constructed from points 46, 36, and 55 in fig. 2. These areas are selected as the skin color region, rather than the whole face, because they are relatively flat, which reduces the influence of illumination and the facial features and improves recognition accuracy. The extent of these areas on the face image can be seen in the triangular regions enclosed by the white lines in fig. 3.
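As an illustrative sketch (not the patent's reference implementation), the two preset triangles can be rasterized from a 68-point landmark array with a barycentric point-in-triangle test. The index tuples below are the zero-based equivalents of points 37, 32, 49 and 46, 36, 55 in fig. 2, and `landmarks` is an assumed (68, 2) array of (x, y) coordinates:

```python
import numpy as np

# 1-based landmark points (37, 32, 49) and (46, 36, 55) from fig. 2,
# converted to 0-based indices into a (68, 2) landmark array.
LEFT_TRIANGLE = (36, 31, 48)
RIGHT_TRIANGLE = (45, 35, 54)

def triangle_mask(shape, v0, v1, v2):
    """Boolean mask of the pixels inside triangle (v0, v1, v2)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Signed cross products of each edge with the pixel grid.
    def sign(ax, ay, bx, by):
        return (bx - ax) * (ys - ay) - (by - ay) * (xs - ax)
    d0 = sign(*v0, *v1)
    d1 = sign(*v1, *v2)
    d2 = sign(*v2, *v0)
    neg = (d0 < 0) | (d1 < 0) | (d2 < 0)
    pos = (d0 > 0) | (d1 > 0) | (d2 > 0)
    return ~(neg & pos)  # inside if the three signs agree

def skin_region_mean(image, landmarks, indices):
    """Mean pixel value of the triangle spanned by three landmark indices."""
    pts = [tuple(landmarks[i]) for i in indices]
    mask = triangle_mask(image.shape[:2], *pts)
    return image[mask].mean(axis=0)
```

The mask works for any vertex winding order, so left and right triangles can share one helper.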
It should be noted that a skin color region in the present disclosure may be represented by its skin color mean value: after the face triangle region is extracted from the face key point region according to the preset face triangle region position points, the skin color mean of the region can be obtained from the mean of the pixel values within it.
Specifically, the pixel mean comprises r, g, and b values. RGB is a color standard that covers almost all colors perceptible to human vision; the various colors are obtained by varying and superimposing the red (r), green (g), and blue (b) color channels. The skin color mean comprises Y, Cb, and Cr values, where YCbCr denotes a color space: Y is the luminance component of a color, and Cb and Cr are its chrominance components; specifically, Cb is the blue chrominance component and Cr is the red chrominance component.
The skin color mean of a skin color region can be obtained from its pixel mean through a color space conversion; specifically, the skin color value of the skin color region can be obtained by the following conversion formula (1):
Y = 0.299·r + 0.587·g + 0.114·b
Cb = -0.1687·r - 0.3313·g + 0.5·b + 128
Cr = 0.5·r - 0.4187·g - 0.0813·b + 128        (1)
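A minimal sketch of the color space conversion in formula (1), assuming the common JPEG/BT.601 coefficients (the exact constants are not legible in the extracted patent text):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert an RGB pixel (0-255 range) to YCbCr, JPEG/BT.601 form."""
    y = 0.299 * r + 0.587 * g + 0.114 * b           # luminance component
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128   # blue chrominance component
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128    # red chrominance component
    return y, cb, cr
```

Applied to a skin region's mean (r, g, b), this yields the (Y, Cb, Cr) skin color value; only Cb and Cr are used downstream.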
in one embodiment of the present disclosure, the face triangle region is extracted by:
(1) and extracting a left face triangular region and/or a right face triangular region from a face key point region of the face triangular region to be extracted.
(2) And when only the left face triangular region is extracted, taking the extracted left face triangular region as the face triangular region.
(3) And when only the right-face triangular region is extracted, taking the extracted right-face triangular region as the face triangular region.
(4) And when the left face triangular region and the right face triangular region are extracted, averaging the pixels of the extracted left face triangular region and the extracted right face triangular region, and taking the regions obtained after averaging as the face triangular regions.
It can be understood that each image containing a face contains it differently: for example, some images include only the left face, some only the right face, and some the whole face. Extraction of the left-face and right-face triangle regions from the face key point region of the face triangle region to be extracted therefore covers several cases.
It can be understood that the pixels of the left and right face triangle regions will differ slightly due to the influence of illumination and the facial features. To achieve an accurate evaluation, the pixels of the left and right face triangle regions can be averaged, and the averaged region used as the new face triangle region.
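The case analysis above can be sketched as a small helper (an illustration only; `left_mean` and `right_mean` are assumed per-region pixel means, not names from the patent):

```python
def combine_face_regions(left_mean=None, right_mean=None):
    """Combine left/right triangle region means per the four cases above.

    Each argument is a per-region pixel mean, e.g. an (r, g, b) or (Cb, Cr)
    tuple, or None when that triangle could not be extracted.
    """
    if left_mean is not None and right_mean is not None:
        # Both triangles found: average them channel by channel.
        return tuple((l + r) / 2 for l, r in zip(left_mean, right_mean))
    if left_mean is not None:
        return tuple(left_mean)    # only the left face is visible
    if right_mean is not None:
        return tuple(right_mean)   # only the right face is visible
    return None  # no face triangle found: terminate the evaluation
```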
Step S103: determining, through a pre-trained skin color recognition model, the skin color probability corresponding to the first skin color region as a first skin color probability and the skin color probability corresponding to the second skin color region as a second skin color probability, wherein the skin color recognition model is used for recognizing the probability that a skin color region is a standard natural skin color.
The pre-trained skin color recognition model can recognize the probability that the skin color area is the standard natural skin color. In one embodiment of the present disclosure, as shown in fig. 4, the building process of the skin color identification model includes:
step S401: and acquiring a preset face image data set, and extracting a skin color data set corresponding to the preset face image data set.
The preset face image data set uses the 300W face image data set by default; with sample data at this scale, a more accurate skin color recognition model can be constructed. After the 300W data set is obtained, the skin color region in each face image is extracted and, using the pixel values of each skin color region, the skin color value of each region is obtained through the color conversion formula (1); the corresponding 300W skin color data set is then obtained through statistics.
It should be noted that, as the number of evaluation images increases, the skin color value of the evaluation image may be added to the skin color data set to update the skin color data set. By continuously updating the skin color data set, the constructed skin color identification model can be more accurate.
Step S402: and calculating based on the skin color data set to obtain skin color parameters, wherein the skin color parameters comprise a chrominance mean value, a chrominance standard deviation and a chrominance covariance.
It should be noted that the skin colors of different human faces differ considerably in the luminance component Y but are much more concentrated in the chrominance components Cb and Cr. To avoid the influence of the proportions of different ethnicities in the data set, the luminance component Y can be discarded, so that faces of various skin colors are recognized compatibly.
Specifically, the blue chrominance mean value and the red chrominance mean value can be obtained by the following equations (2) and (3), respectively:
μ_Cb = (1/N) · Σ_{i=1}^{N} Cb_i        (2)

μ_Cr = (1/N) · Σ_{i=1}^{N} Cr_i        (3)
where μ_Cb denotes the blue chrominance mean, μ_Cr denotes the red chrominance mean, N denotes the number of samples in the skin color data set (which in one embodiment of the present disclosure may be 300W), and i indexes the samples and has no other physical meaning.
The blue chromaticity standard deviation and the red chromaticity standard deviation can be obtained by the following equations (4) and (5), respectively:
σ_Cb = sqrt( (1/N) · Σ_{i=1}^{N} (Cb_i - μ_Cb)² )        (4)

σ_Cr = sqrt( (1/N) · Σ_{i=1}^{N} (Cr_i - μ_Cr)² )        (5)
where σ_Cb denotes the blue chrominance standard deviation and σ_Cr denotes the red chrominance standard deviation.
The chroma covariance can be obtained by the following equation (6):
Cov(Cb, Cr) = (1/N) · Σ_{i=1}^{N} (Cb_i - μ_Cb) · (Cr_i - μ_Cr)        (6)
where Cov (Cb, Cr) represents the chroma covariance.
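The parameter estimation in formulas (2) through (6) can be sketched with NumPy (a hedged illustration, not the patent's code; note that population statistics with a 1/N divisor are assumed, matching the formulas above):

```python
import numpy as np

def skin_tone_parameters(cb, cr):
    """Chrominance statistics over a skin color data set, per formulas (2)-(6).

    cb, cr: 1-D sequences of the Cb / Cr values of the N skin color samples.
    Returns (mean_cb, mean_cr, std_cb, std_cr, cov_cbcr).
    """
    cb = np.asarray(cb, dtype=float)
    cr = np.asarray(cr, dtype=float)
    mean_cb, mean_cr = cb.mean(), cr.mean()            # (2), (3)
    std_cb, std_cr = cb.std(), cr.std()                # (4), (5): 1/N divisor
    cov = ((cb - mean_cb) * (cr - mean_cr)).mean()     # (6): population covariance
    return mean_cb, mean_cr, std_cb, std_cr, cov
```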
Step S403: and training the Gaussian mixture model by using the skin color parameters to obtain a skin color identification model.
It can be understood that, since the skin color parameters involve two components, blue and red, the Gaussian mixture model may be a two-dimensional Gaussian model built from the skin color parameters: the chrominance means, chrominance standard deviations, and chrominance covariance. Substituting the skin color parameters into the two-dimensional Gaussian model yields the skin color recognition model, which can be expressed by the following formula (7):
f(Cb, Cr) = exp( -(1/2) · dᵀ · Σ⁻¹ · d ),  where  d = [Cb - μ_Cb, Cr - μ_Cr]ᵀ  and

Σ = | σ_Cb²          Cov(Cb, Cr) |
    | Cov(Cb, Cr)    σ_Cr²       |        (7)
where Cb denotes the blue chrominance variable, Cr denotes the red chrominance variable, and f(Cb, Cr) denotes the skin color recognition function, i.e., the skin color recognition model. As the formula shows, inputting the chrominance of a skin color region into the model yields the probability that the region is a standard natural skin color.
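The recognition function of formula (7) can be sketched as an unnormalized two-dimensional Gaussian (an assumption: the extracted formula is illegible, but this form uses exactly the parameters defined in formulas (2) through (6) and yields 1.0 at the data set mean):

```python
import math

def skin_probability(cb, cr, mean_cb, mean_cr, std_cb, std_cr, cov):
    """Unnormalized 2-D Gaussian score f(Cb, Cr) in [0, 1], formula (7) sketch."""
    dcb, dcr = cb - mean_cb, cr - mean_cr
    det = std_cb**2 * std_cr**2 - cov**2       # determinant of covariance matrix
    # Quadratic form d^T Sigma^{-1} d, written out for the 2x2 case.
    q = (std_cr**2 * dcb**2 - 2 * cov * dcb * dcr + std_cb**2 * dcr**2) / det
    return math.exp(-0.5 * q)
```

A chrominance pair at the data set mean scores 1.0, and the score decays toward 0 as (Cb, Cr) moves away from natural skin tones.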
In one embodiment of the present disclosure, the skin color probability corresponding to the skin color region is determined by:
determining the chroma component of the skin color area to be identified, inputting the chroma component into a skin color identification model, and acquiring the skin color probability identified by the skin color identification model based on the chroma component.
Firstly, it should be noted that, by determining the chrominance component of the skin color region to be identified and excluding the luminance component, faces with various skin colors can be compatibly identified.
Specifically, since Cb in formula (7) denotes the blue chrominance variable and Cr denotes the red chrominance variable, the chrominance components of the skin color region to be recognized, namely its blue and red components, can be input into the pre-trained skin color recognition model, which outputs the probability that the current region is a natural skin color region. The first chrominance components of the first skin color region are input into the skin color recognition model to output the first skin color probability corresponding to the first skin color region, and the second chrominance components of the second skin color region are input into the skin color recognition model to output the second skin color probability corresponding to the second skin color region.
Step S104: and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
It is understood that the first enhanced image is an image of the first image after image enhancement processing, and in order to evaluate the degree of influence of the image enhancement processing on skin color, a distortion value of the enhanced image with respect to the original image may be further obtained by using a probability of a natural standard skin color obtained in the skin color identification model.
In one embodiment of the present disclosure, determining a skin tone distortion factor of a first enhanced image relative to a first image based on a first skin tone probability and a second skin tone probability comprises:
and calculating a joint probability according to the first skin color probability and the second skin color probability, and taking the joint probability as the skin color distortion degree of the first enhanced image relative to the first image.
The first skin color probability is the probability that the skin color in the original face image is a natural standard skin color, and the second skin color probability is the probability that the skin color in the image enhanced from the original face image is a natural standard skin color. To measure more intuitively the influence of image enhancement on the skin color in a face image, a joint probability can be calculated from the first and second skin color probabilities and taken as the skin color distortion degree of the enhanced image relative to the original image.
Specifically, the skin color distortion degree of the first enhanced image with respect to the first image can be obtained by the following formula (8):
y = P_enhanced / (P_original + δ)        (8)
where y denotes the skin color distortion degree, P_original denotes the first skin color probability, P_enhanced denotes the second skin color probability, and δ is a minimal constant with no practical significance, included only to avoid calculation errors.
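As a hedged sketch of formula (8) (the extracted formula is not legible; the ratio form below is one plausible reading, consistent with δ serving only to avoid division by zero):

```python
def skin_tone_distortion(p_original, p_enhanced, delta=1e-6):
    """Skin color distortion of the enhanced image relative to the original.

    p_original / p_enhanced: the first / second skin color probabilities
    from the recognition model; delta guards against division by zero.
    A value near 1 means the natural-skin probability was preserved.
    """
    return p_enhanced / (p_original + delta)
```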
In addition, the skin color distortion degree represents how far the enhanced image is distorted relative to the original image; when the distortion reaches a certain threshold, enhanced images with larger distortion can be further screened out and deleted, avoiding discomfort to human eyes.
A face skin color region is extracted from the first image as a first skin color region, and a face skin color region is extracted from the first enhanced image as a second skin color region, so that the overall effect of the image is evaluated based on the extracted skin color regions rather than the whole image area. The extracted skin color regions are input into a pre-trained skin color recognition model, which objectively evaluates the probability that the skin colors in the first image and the first enhanced image are natural standard skin colors, improving evaluation efficiency. Finally, the skin color distortion degree of the first enhanced image relative to the first image is determined from the first skin color probability and the second skin color probability, intuitively measuring the influence of image enhancement on the skin color in a face image.
For a better understanding of the present disclosure, in one embodiment of the present disclosure, another method of image enhancement quality assessment is provided, as shown in fig. 5.
Specifically, the first image is an original image containing a human face, and the first enhanced image is obtained by enhancing that original image. The original image and the enhanced image are each input into a face skin color extraction unit, which extracts face key points: if key points are detected, the evaluation continues; if no face key points exist, the evaluation of image enhancement quality is terminated.
Then, the specified face key points in the corresponding regions of the first image and the first enhanced image are connected according to the points (37, 32, 49 and 46, 36, 55) in fig. 2 to determine the first skin color region and the second skin color region. It can be understood that these cheek regions, rather than the complete face, are selected as the skin color regions because they are relatively flat, which reduces the influence of illumination and the facial features and improves recognition accuracy.
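The cheek-triangle extraction described above can be sketched as follows, assuming dlib-style 68-point landmarks given as (x, y) coordinates. The 1-based indices (37, 32, 49) and (46, 36, 55) are taken from the description of fig. 2; the function names and the barycentric point-in-triangle test are illustrative choices, not from the patent.

```python
import numpy as np

# 1-based landmark numbers named in the source (fig. 2):
LEFT_TRIANGLE = (37, 32, 49)   # left cheek
RIGHT_TRIANGLE = (46, 36, 55)  # right cheek

def triangle_mask(shape, pts):
    """Boolean mask of pixels inside a triangle, via barycentric coordinates."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    (x1, y1), (x2, y2), (x3, y3) = pts
    d = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (xs - x3) + (x3 - x2) * (ys - y3)) / d
    b = ((y3 - y1) * (xs - x3) + (x1 - x3) * (ys - y3)) / d
    c = 1.0 - a - b
    return (a >= 0) & (b >= 0) & (c >= 0)

def cheek_region(image, landmarks, triangle=LEFT_TRIANGLE):
    """Extract cheek pixels; landmarks is a (68, 2) array of (x, y) points."""
    pts = [landmarks[i - 1] for i in triangle]  # 1-based -> 0-based
    mask = triangle_mask(image.shape[:2], pts)
    return image[mask]                          # (num_pixels, channels)
```

The same two functions serve for both the first image and the first enhanced image, yielding the first and second skin color regions.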
After the skin color regions are determined, the pixel mean values in the first skin color region and the second skin color region are extracted respectively, and the RGB pixel values are converted into YCbCr chrominance parameter values using the color gamut conversion formula in formula (1). It should be noted that the skin colors of different human faces differ greatly in the luminance component Y but are much more concentrated in the chrominance components Cb and Cr; to avoid the influence of different ethnic proportions, the luminance component Y can be truncated so that faces of various skin colors are recognized compatibly.
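The RGB-to-YCbCr step can be illustrated with the standard ITU-R BT.601 full-range conversion (the patent's own formula (1) is not reproduced in this text, so BT.601 is an assumption). Only the chrominance pair (Cb, Cr) is returned, discarding the luminance Y as described:

```python
import numpy as np

def mean_chroma(region_rgb):
    """Mean (Cb, Cr) of a skin region given an (..., 3) RGB array.
    BT.601 full-range coefficients; the luminance Y is discarded so that
    skin tones of different brightness map to the same chroma cluster."""
    r = region_rgb[..., 0].mean()
    g = region_rgb[..., 1].mean()
    b = region_rgb[..., 2].mean()
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr
```

For any neutral gray region (R = G = B) both chrominance components come out at the midpoint 128, which is a quick sanity check on the coefficients.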
The chrominance components are then input into the pre-trained skin color recognition model, which outputs the probability that the skin color region to be recognized is a natural skin color region. That is, the first color component of the first skin color region is input into the skin color recognition model to output the first skin color probability corresponding to the first skin color region, and the second color component of the second skin color region is input into the skin color recognition model to output the second skin color probability corresponding to the second skin color region.
In order to more intuitively measure the influence of image enhancement on skin color in the face image, the joint probability can be calculated according to the first skin color probability and the second skin color probability, and the joint probability is used as the skin color distortion degree of the enhanced image relative to the original image.
It should be noted that, as the number of evaluation images increases, the skin color value of the evaluation image may be added to the skin color data set to update the skin color data set. By continuously updating the skin color data set, the constructed skin color identification model can be more accurate.
An embodiment of the present disclosure provides an image enhancement quality evaluation apparatus, as shown in fig. 6, the image evaluation apparatus 60 may include: an acquisition module 601, an extraction module 602, a first determination module 603, and a second determination module 604, wherein,
an obtaining module 601, configured to obtain a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
an extracting module 602, configured to extract a face skin color region in the first image as a first skin color region, and extract a face skin color region in the first enhanced image as a second skin color region;
a first determining module 603, configured to determine, through a pre-trained skin color recognition model, a skin color probability corresponding to the first skin color region as a first skin color probability, and a skin color probability corresponding to the second skin color region as a second skin color probability, where the skin color recognition model is used to recognize the probability that a skin color region is a standard natural skin color;
a second determining module 604, configured to determine a skin color distortion degree of the first enhanced image with respect to the first image according to the first skin color probability and the second skin color probability.
The image enhancement quality evaluation apparatus of the present embodiment can perform the image enhancement quality evaluation method shown in the foregoing embodiments of the present disclosure, and the implementation principles thereof are similar, and are not described herein again.
In this way, the face skin color region in the first image is extracted as the first skin color region, and the face skin color region in the first enhanced image is extracted as the second skin color region, so that the overall effect of the image is evaluated based on the extracted skin color regions rather than the whole image region. The extracted skin color regions are input into a pre-trained skin color recognition model, which objectively evaluates the probability that the skin colors in the first image and the first enhanced image are the natural standard skin color and improves evaluation efficiency. Finally, the skin color distortion degree of the first enhanced image relative to the first image is determined from the first skin color probability and the second skin color probability, intuitively measuring the influence of image enhancement on the skin color in the face image.
Referring now to FIG. 7, shown is a schematic diagram of an electronic device 700 suitable for use in implementing embodiments of the present disclosure. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as a processing device 701 described below, and the memory may include at least one of a Read Only Memory (ROM)702, a Random Access Memory (RAM)703 and a storage device 708, which are described below:
as shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from storage 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
extracting a face skin color area in the first image as a first skin color area, and extracting a face skin color area in the first enhanced image as a second skin color area;
determining, through a pre-trained skin color recognition model, a skin color probability corresponding to the first skin color region as a first skin color probability and a skin color probability corresponding to the second skin color region as a second skin color probability, wherein the skin color recognition model is used for recognizing the probability that a skin color region is a standard natural skin color;
and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
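The four steps the program performs can be wired together roughly as below; `extract_region` and `skin_prob` are hypothetical stand-ins for the key-point-based region extractor and the trained recognition model, and the ratio used for the distortion degree is an assumed form of formula (8), since the source does not reproduce it:

```python
def evaluate_enhancement(first_image, enhanced_image,
                         extract_region, skin_prob, delta=1e-8):
    """Sketch of the claimed pipeline.
    extract_region(image) -> skin color region (steps 1-2);
    skin_prob(region)     -> probability of natural skin tone (step 3);
    the return value is the skin color distortion degree (step 4),
    here an assumed delta-guarded probability ratio."""
    first_region = extract_region(first_image)       # first skin color region
    enhanced_region = extract_region(enhanced_image) # second skin color region
    p_first = skin_prob(first_region)                # first skin color probability
    p_enhanced = skin_prob(enhanced_region)          # second skin color probability
    return p_first / (p_enhanced + delta)
```

A distortion degree near 1 indicates the enhancement preserved the naturalness of the skin tone; larger values indicate increasing distortion.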
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the designation of a module or unit does not in some cases constitute a limitation of the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image enhancement quality evaluation method including:
acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
extracting a face skin color area in the first image as a first skin color area, and extracting a face skin color area in the first enhanced image as a second skin color area;
determining, through a pre-trained skin color recognition model, a skin color probability corresponding to the first skin color region as a first skin color probability and a skin color probability corresponding to the second skin color region as a second skin color probability, wherein the skin color recognition model is used for recognizing the probability that a skin color region is a standard natural skin color;
and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
In one embodiment of the present disclosure, extracting a face skin color region in the first image as a first skin color region and extracting a face skin color region in the first enhanced image as a second skin color region comprises:
detecting a face key point region in a first image as a first face key point region, and extracting a face triangle region from the first face key point region according to a preset face triangle region position point as a first skin color region;
and detecting a face key point region in the first enhanced image as a second face key point region, and extracting a face triangular region from the second face key point region according to a preset face triangular region position point as a second skin color region.
In one embodiment of the present disclosure, the face triangle region is extracted by:
extracting a left face triangular region and/or a right face triangular region from a face key point region of a face triangular region to be extracted;
when only the left face triangular region is extracted, taking the extracted left face triangular region as a human face triangular region;
when only the right face triangular region is extracted, taking the extracted right face triangular region as the face triangular region;
and when the left face triangular region and the right face triangular region are extracted, averaging the pixels of the extracted left face triangular region and the extracted right face triangular region, and taking the regions obtained after averaging as the face triangular regions.
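The left/right selection and pixel-averaging logic above can be sketched as follows (assuming both regions, when present, have been cropped or warped to the same shape; the function name is illustrative):

```python
import numpy as np

def fuse_triangles(left=None, right=None):
    """Combine cheek regions per the claim: use whichever side was found,
    or the per-pixel average when both are available."""
    if left is not None and right is not None:
        return (left.astype(np.float64) + right.astype(np.float64)) / 2.0
    return left if left is not None else right
```

Averaging the two cheeks when both are visible makes the fused region less sensitive to one-sided shadows or highlights.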
In one embodiment of the present disclosure, face keypoints in an image are extracted by:
and inputting the image of the face key points to be extracted into a pre-trained face key point detection model, and acquiring the 68 face key points output by the face key point detection model.
In one embodiment of the present disclosure, the skin color probability corresponding to the skin color region is determined by:
determining the chroma component of the skin color area to be identified, inputting the chroma component into a skin color identification model, and acquiring the skin color probability identified by the skin color identification model based on the chroma component.
In one embodiment of the present disclosure, determining a skin tone distortion factor of a first enhanced image relative to a first image based on a first skin tone probability and a second skin tone probability comprises:
and calculating a joint probability according to the first skin color probability and the second skin color probability, and taking the joint probability as the skin color distortion degree of the first enhanced image relative to the first image.
In one embodiment of the present disclosure, a skin color identification model building process includes:
acquiring a preset face image data set, and extracting a skin color data set corresponding to the preset face image data set;
calculating based on the skin color data set to obtain skin color parameters, wherein the skin color parameters comprise a chrominance mean value, a chrominance standard deviation and a chrominance covariance;
and training the Gaussian mixture model by using the skin color parameters to obtain a skin color identification model.
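The model-building steps can be illustrated with a single multivariate Gaussian over (Cb, Cr) — a one-component stand-in for the Gaussian mixture model named above — fitted from the chrominance mean and covariance and scored via the Mahalanobis distance. The normalization of the score to a (0, 1] probability is an assumption, since the patent does not give its mapping:

```python
import numpy as np

def fit_skin_model(chroma):
    """Estimate the chrominance mean and covariance from an (N, 2) array of
    (Cb, Cr) skin samples -- a one-Gaussian stand-in for the mixture model."""
    return chroma.mean(axis=0), np.cov(chroma.T)

def skin_probability(sample, mean, cov):
    """Score in (0, 1]: exp(-0.5 * Mahalanobis^2), equal to 1 at the mean,
    decaying toward 0 for chroma values far from the skin cluster."""
    diff = np.asarray(sample, dtype=float) - mean
    m2 = diff @ np.linalg.inv(cov) @ diff
    return float(np.exp(-0.5 * m2))
```

A full mixture (e.g. several such components, weighted) follows the same pattern, with the per-component parameters taken from the skin color data set.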
According to one or more embodiments of the present disclosure, there is provided an image enhancement quality evaluation apparatus including:
the acquisition module is used for acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
the extraction module is used for extracting a face skin color area in the first image as a first skin color area and extracting a face skin color area in the first enhanced image as a second skin color area;
the first determining module is used for determining a skin color probability corresponding to a first skin color area as a first skin color probability and determining a skin color probability corresponding to a second skin color area as a second skin color probability through a pre-trained skin color recognition model, and the skin color recognition model is used for recognizing the probability that the skin color area is a standard natural skin color;
and the second determining module is used for determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
In one embodiment of the present disclosure, the extraction module includes:
the first extraction submodule is used for detecting a face key point region in the first image as a first face key point region, and extracting a face triangle region from the first face key point region according to a preset face triangle region position point as a first skin color region;
and the second extraction submodule is used for detecting a face key point region in the first enhanced image as a second face key point region, and extracting a face triangular region from the second face key point region according to a preset face triangular region position point as a second skin color region.
In one embodiment of the present disclosure, the face triangle region is extracted by:
extracting a left face triangular region and/or a right face triangular region from a face key point region of a face triangular region to be extracted;
when only the left face triangular region is extracted, taking the extracted left face triangular region as a human face triangular region;
when only the right face triangular region is extracted, the extracted right face triangular region is used as the face triangular region;
and when the left face triangular region and the right face triangular region are extracted, averaging the pixels of the extracted left face triangular region and the extracted right face triangular region, and taking the regions obtained after averaging as the face triangular regions.
In one embodiment of the present disclosure, face keypoints in an image are extracted by:
and inputting the image of the face key points to be extracted into a pre-trained face key point detection model, and acquiring the 68 face key points output by the face key point detection model.
In one embodiment of the present disclosure, the first determining module includes:
the first obtaining submodule is used for determining the chroma component of the skin color area to be identified, inputting the chroma component into the skin color identification model and obtaining the skin color probability identified by the skin color identification model based on the chroma component.
In one embodiment of the present disclosure, the second determining module includes:
and the determining submodule is used for calculating a joint probability according to the first skin color probability and the second skin color probability and determining the joint probability as the skin color distortion degree of the first enhanced image relative to the first image.
In an embodiment of the present disclosure, the image enhancement quality assessment apparatus further includes a skin color identification model building module, which specifically includes:
the second acquisition submodule is used for acquiring a preset face image data set and extracting a skin color data set corresponding to the preset face image data set;
the calculation submodule is used for calculating to obtain skin color parameters based on the skin color data set, and the skin color parameters comprise a chrominance mean value, a chrominance standard deviation and a chrominance covariance;
and the training submodule is used for training the Gaussian mixture model by using the skin color parameters to obtain a skin color identification model.
According to one or more embodiments of the present disclosure, there is provided an electronic device including:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the image enhancement quality assessment method of the first aspect of the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the image enhancement quality assessment method of the first aspect of the present disclosure.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An image enhancement quality evaluation method is characterized by comprising the following steps:
acquiring a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
extracting a face skin color area in the first image to be used as a first skin color area, and extracting a face skin color area in the first enhanced image to be used as a second skin color area;
determining a skin color probability corresponding to the first skin color area as a first skin color probability and determining a skin color probability corresponding to the second skin color area as a second skin color probability through a pre-trained skin color recognition model, wherein the skin color recognition model is used for recognizing the probability that the skin color area is a standard natural skin color;
and determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
2. The method of claim 1, wherein extracting a face skin color region in the first image as a first skin color region and extracting a face skin color region in the first enhanced image as a second skin color region comprises:
detecting a face key point region in the first image as a first face key point region, and extracting a face triangle region from the first face key point region according to a preset face triangle region position point as a first skin color region;
and detecting a face key point region in the first enhanced image as a second face key point region, and extracting a face triangle region from the second face key point region according to a preset face triangle region position point as a second skin color region.
3. The method of claim 2, wherein the face triangle region is extracted by:
extracting a left face triangular region and/or a right face triangular region from a face key point region of a face triangular region to be extracted;
when only the left face triangular region is extracted, taking the extracted left face triangular region as a human face triangular region;
when only the right face triangular region is extracted, the extracted right face triangular region is used as the face triangular region;
and when the left face triangular region and the right face triangular region are extracted, averaging the pixels of the extracted left face triangular region and the extracted right face triangular region, and taking the regions obtained after averaging as the face triangular regions.
4. The method of claim 1, wherein the face key points in the image are extracted by:
and inputting the image of the face key point to be extracted into a pre-trained face key point detection model, and acquiring 68 face key points output by the face key point model.
5. The method of claim 1, wherein the skin tone probability corresponding to the skin tone region is determined by:
determining a chroma component of a skin color area to be identified, inputting the chroma component into the skin color identification model, and acquiring the skin color probability identified by the skin color identification model based on the chroma component.
6. The method of claim 1, wherein said determining a skin tone distortion level of the first enhanced image relative to the first image based on the first skin tone probability and the second skin tone probability comprises:
and calculating a joint probability according to the first skin color probability and the second skin color probability, and determining the joint probability as the skin color distortion degree of the first enhanced image relative to the first image.
7. The method according to any one of claims 1-6, wherein the skin color identification model is constructed by a process comprising:
acquiring a preset face image data set, and extracting a skin color data set corresponding to the preset face image data set;
calculating skin color parameters from the skin color data set, wherein the skin color parameters comprise a chrominance mean, a chrominance standard deviation and a chrominance covariance;
and training a Gaussian mixture model with the skin color parameters to obtain the skin color recognition model.
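A minimal sketch of the parameter estimation step above, assuming (Cb, Cr) chrominance samples: the statistics match those named in claim 7, and a single Gaussian likelihood stands in for one weighted component of the full mixture model (function names are illustrative, not from the patent):

```python
import numpy as np

def fit_skin_parameters(chroma_samples):
    """Estimate chrominance mean, standard deviation and covariance
    from an (N, 2) array of (Cb, Cr) samples taken from face skin pixels."""
    x = np.asarray(chroma_samples, dtype=np.float64)
    mean = x.mean(axis=0)
    std = x.std(axis=0)
    cov = np.cov(x, rowvar=False)
    return mean, std, cov

def skin_likelihood(chroma, mean, cov):
    """2-D Gaussian density for one (Cb, Cr) pair -- a single component
    of what a full mixture model would sum with mixing weights."""
    d = np.asarray(chroma, dtype=np.float64) - mean
    inv_cov = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return float(norm * np.exp(-0.5 * d @ inv_cov @ d))
```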
8. An image enhancement quality evaluation apparatus characterized by comprising:
an acquisition module, configured to acquire a first image and a first enhanced image obtained by performing image enhancement processing on the first image;
the extraction module is used for extracting a face skin color area in the first image to be used as a first skin color area and extracting a face skin color area in the first enhanced image to be used as a second skin color area;
the first determining module is used for determining a skin color probability corresponding to the first skin color area as a first skin color probability and determining a skin color probability corresponding to the second skin color area as a second skin color probability through a pre-trained skin color recognition model, wherein the skin color recognition model is used for recognizing the probability that the skin color area is a standard natural skin color;
and the second determining module is used for determining the skin color distortion degree of the first enhanced image relative to the first image according to the first skin color probability and the second skin color probability.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the image enhancement quality evaluation method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the image enhancement quality evaluation method according to any one of claims 1 to 7.
CN202110161308.4A 2021-02-05 2021-02-05 Image enhancement quality evaluation method, device, electronic equipment and storage medium Active CN112801997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110161308.4A CN112801997B (en) 2021-02-05 2021-02-05 Image enhancement quality evaluation method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110161308.4A CN112801997B (en) 2021-02-05 2021-02-05 Image enhancement quality evaluation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112801997A true CN112801997A (en) 2021-05-14
CN112801997B CN112801997B (en) 2023-06-06

Family

ID=75814462

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110161308.4A Active CN112801997B (en) 2021-02-05 2021-02-05 Image enhancement quality evaluation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112801997B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881853A (en) * 2015-05-28 2015-09-02 厦门美图之家科技有限公司 Skin color rectification method and system based on color conceptualization
CN106580294A (en) * 2016-12-30 2017-04-26 上海交通大学 Physiological signal remote monitoring system based on multimodal imaging technique and application thereof
CN107911625A (en) * 2017-11-30 2018-04-13 广东欧珀移动通信有限公司 Light measuring method, device, readable storage medium storing program for executing and computer equipment
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment
US20200258260A1 (en) * 2017-09-19 2020-08-13 Guangzhou Baiguoyuan Information Technology Co., Ltd. Skin color detection method, skin color detection apparatus, and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AYUB SHOKROLLAHI et al.: "Image quality assessment for contrast enhancement evaluation", International Journal of Electronics and Communications, vol. 77 *
ZHANG Feng: "Design and Implementation of an FPGA-Based Human Vein Imaging Instrument", China Master's Theses Full-text Database (Engineering Science and Technology II) *
LIANG Dong; ZHANG Leihong; DU Xiaomeng: "Research on Image Quality Assessment Algorithm Based on Region of Interest", Packaging Engineering, no. 05 *
HAN Lei; QU Zhongshui: "An RGB-Model Color Image Enhancement Method", Journal of Harbin University of Science and Technology, no. 06 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888543A (en) * 2021-08-20 2022-01-04 北京达佳互联信息技术有限公司 Skin color segmentation method and device, electronic equipment and storage medium
CN113888543B (en) * 2021-08-20 2024-03-19 北京达佳互联信息技术有限公司 Skin color segmentation method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112801997B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
CN111476309B (en) Image processing method, model training method, device, equipment and readable medium
CN112241714B (en) Method and device for identifying designated area in image, readable medium and electronic equipment
CN112419151B (en) Image degradation processing method and device, storage medium and electronic equipment
CN111314614B (en) Image processing method and device, readable medium and electronic equipment
CN110865862B (en) Page background setting method and device and electronic equipment
CN110069974B (en) Highlight image processing method and device and electronic equipment
CN110211030B (en) Image generation method and device
CN110084154B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN111080595A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN114494298A (en) Object segmentation method, device, equipment and storage medium
US20110293177A1 (en) Efficient Image and Video Recoloring for Colorblindness
CN112967193A (en) Image calibration method and device, computer readable medium and electronic equipment
CN115272182A (en) Lane line detection method, lane line detection device, electronic device, and computer-readable medium
CN112801997B (en) Image enhancement quality evaluation method, device, electronic equipment and storage medium
CN114037716A (en) Image segmentation method, device, equipment and storage medium
CN111738950B (en) Image processing method and device
WO2023217117A1 (en) Image assessment method and apparatus, and device, storage medium and program product
CN113238652B (en) Sight line estimation method, device, equipment and storage medium
CN113780148A (en) Traffic sign image recognition model training method and traffic sign image recognition method
CN112070034A (en) Image recognition method and device, electronic equipment and computer readable medium
CN114372974B (en) Image detection method, device, equipment and storage medium
CN112418233B (en) Image processing method and device, readable medium and electronic equipment
CN116958279A (en) Color classification method and device, electronic equipment and storage medium
CN117425086A (en) Multimedia data processing method and device, electronic equipment and storage medium
CN115908862A (en) Image color processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Tiktok vision (Beijing) Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: BEIJING BYTEDANCE NETWORK TECHNOLOGY Co.,Ltd.

Address after: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant after: Douyin Vision Co.,Ltd.

Address before: 100041 B-0035, 2 floor, 3 building, 30 Shixing street, Shijingshan District, Beijing.

Applicant before: Tiktok vision (Beijing) Co.,Ltd.

GR01 Patent grant