CN114022395B - Method, device and medium for correcting hair color of certificate photo - Google Patents

Info

Publication number
CN114022395B
CN114022395B (application CN202210007450.8A)
Authority
CN
China
Prior art keywords
hair
color
image
segmentation
brightness
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210007450.8A
Other languages
Chinese (zh)
Other versions
CN114022395A
Inventor
李博
曹婉玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Prestige Technology Co ltd
Original Assignee
Guangzhou Prestige Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Prestige Technology Co ltd filed Critical Guangzhou Prestige Technology Co ltd
Priority to CN202210007450.8A priority Critical patent/CN114022395B/en
Publication of CN114022395A publication Critical patent/CN114022395A/en
Application granted granted Critical
Publication of CN114022395B publication Critical patent/CN114022395B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for correcting hair color in certificate (ID) photos, comprising the following steps: S1, performing hair segmentation on the certificate photo using a hair-segmentation convolutional neural network to obtain a first segmented image, the first segmented image containing only the hair region; S2, judging whether the hair color of the first segmented image is qualified using a hair-color qualification judgment model; and S3, when the hair color is unqualified, recoloring the hair in the certificate photo to obtain a corrected certificate photo. The invention first segments the certificate photo to obtain the hair region, judges whether the hair meets the photographing standard, and automatically corrects hair color that does not meet the standard. This improves the user's operating experience, saves photographing time, and brings convenience to users with dyed hair.

Description

Method, device and medium for correcting hair color of certificate photo
Technical Field
The invention relates to the field of image processing, and in particular to a method, device, and medium for correcting the hair color of certificate photos.
Background
When taking a certificate photo, the user must groom and dress strictly according to the certificate-photo standard. For documents such as identity cards, dyed hair is not permitted, so many users with dyed hair fail the automated check and cannot take a qualified identity-card photo on the spot at a certificate-photo camera.
In the prior art, Reference 1 provides a method for generating a certificate photo, together with a client and a server. The method comprises: acquiring an initial image upon receiving a preset instruction, the initial image being the image from which the certificate photo is to be generated; sending the initial image to a server; and receiving a target image from the server, the target image being obtained by the server through person recognition and background removal on the initial image using a trained convolutional neural network model, itself obtained by training a preset convolutional neural network model with deep-learning techniques. Target image-processing parameters are then acquired, including at least the head-portrait ratio, background fill color, and image scale, and the target image is background-filled and cropped according to these parameters to obtain the certificate photo. This technique lacks careful judgment of the hair, so the photo of a client with dyed hair may pass the machine inspection yet still fail to meet the certificate-photo requirements.
Reference 2 provides a SegNet segmentation network for segmenting pictures.
Reference 1: CN202010191062.0
Reference 2: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation, Vijay Badrinarayanan et al.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
In view of the above technical problems in the related art, the invention provides a method for correcting the hair color of a certificate photo, comprising the following steps:
s1, performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, wherein the first segmentation image is an image only containing a hair region;
s2, judging whether the hair color of the first segmentation image is qualified or not by using the hair color qualification judgment model;
and S3, when the hair color is unqualified, dyeing the hair color in the identification photo to obtain the corrected identification photo.
Specifically, the hair-segmentation convolutional neural network is a SegNet network whose encoder network structure is replaced with MobileNetV2.
Specifically, step S2 comprises:
S21, acquiring the total number N of hair-region pixels in the first segmented image;
S22, creating a three-channel image O of length and width (n, n), where n = ⌈√N⌉ (the smallest integer whose square is at least N; reconstructed from context, as the original equation is an image);
S23, filling the pixels of the hair region into the three-channel image O in sequence;
and S24, inputting the three-channel image O into the hair-color qualification judgment model to judge whether the hair color is qualified.
Specifically, recoloring the hair in the certificate photo comprises the following steps:
S31, decolorizing the hair region of the certificate photo, removing the hue and saturation information of the hair-region color and keeping only the brightness information;
S32, adjusting the brightness of the hair region from which hue and saturation have been removed.
Specifically, during brightness adjustment, the proportion of bright pixels in the hair region is counted; while this proportion is larger than a first preset value, the brightness is adjusted, and the adjustment ends once the proportion of bright pixels falls below the first preset value.
Specifically, the brightness adjustment follows the formula
P′ = f(P, B, k) (the exact expression is an equation image not reproduced in this text),
where P′ is the generated pixel value, P is the original pixel value, B is the brightness coefficient with value range [−1, 1], and k is a constraint coefficient.
In a second aspect, another embodiment of the present invention discloses a certificate photo camera, which includes the following units:
the hair segmentation unit is used for performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, and the first segmentation image is an image only containing a hair region;
a hair color qualification judging unit for judging whether the hair color of the first segmented image is qualified or not by using the hair color qualification judging model;
and the hair color correction unit is used for dyeing the hair color in the identification photo to obtain the corrected identification photo when the hair color is unqualified.
Specifically, the hair-segmentation convolutional neural network is a SegNet network whose encoder network structure is replaced with MobileNetV2.
Specifically, the certificate-photo camera further comprises the following units:
a decolorizing unit, for decolorizing the hair region of the certificate photo, removing the hue and saturation information of the hair-region color and keeping only the brightness information;
and a brightness-adjusting unit, for adjusting the brightness of the hair region from which hue and saturation have been removed.
Specifically, during brightness adjustment, the proportion of bright pixels in the hair region is counted; while this proportion is larger than a first preset value, the brightness is adjusted, and the adjustment ends once the proportion of bright pixels falls below the first preset value.
In a third aspect, another embodiment of the present invention provides a non-volatile storage medium having instructions stored thereon, which when executed, implement the method for color correction of identification photo hair described above.
According to the invention, the certificate photo is first segmented to obtain the hair region, it is judged whether the hair in the certificate photo meets the photographing standard, and hair color that does not meet the standard is automatically corrected. This improves the user's operating experience, saves photographing time, and brings convenience to users with dyed hair. In addition, this embodiment replaces the encoder network of SegNet with MobileNetV2, so that the hair-segmentation convolutional network can meet embedded-deployment requirements.
Drawings
To illustrate the embodiments of the present invention and the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for color correction of identification photo hair according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a SegNet network structure provided in the embodiment of the present invention;
FIG. 3 is a schematic diagram of an identification photo camera according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a device for correcting hair color of an identification photo according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
Example one
Referring to fig. 1, the present embodiment provides a method for correcting the hair color of a certificate photo, comprising the following steps:
s1, performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, wherein the first segmentation image is an image only containing a hair region;
referring to fig. 2, fig. 2 is a schematic diagram of a SegNet network structure, and the SegNet network mainly comprises two parts: encoder and decoder. The encoder is a network model following VGG16, and mainly analyzes object information. The decoder maps the parsed information to the final image form, i.e. each pixel is represented by a color (or label) corresponding to its object information.
The present embodiment uses a convolutional neural network to segment the hair. Because hair-color judgment and correction must run locally on the camera, SegNet is adopted to finely segment the hair of the certificate photo; however, speed must also be considered while ensuring segmentation quality. This embodiment therefore uses MobileNetV2 to improve the backbone of SegNet, meeting embedded speed requirements while keeping sufficient accuracy.
The hair-segmentation convolutional neural network of this embodiment is thus a SegNet network whose encoder is replaced with MobileNetV2.
In this embodiment, hair segmentation of the certificate photo with the hair-segmentation convolutional neural network proceeds as follows: the sample image to be segmented is resized to a (224, 224, 3) picture. The input picture is convolved with a (1, 1, 3, 32, 2) convolution kernel to obtain a (112, 112, 32) feature map, then passed through a bottleneck of size (6, 16, 1, 1) to obtain a (112, 112, 16) feature map, a bottleneck of size (6, 24, 2, 2) to obtain a (56, 56, 24) feature map, a bottleneck of size (6, 32, 3, 2) to obtain a (28, 28, 32) feature map, a bottleneck of size (6, 64, 4, 2) to obtain a (14, 14, 64) feature map, a bottleneck of size (6, 96, 3, 1) to obtain a (14, 14, 96) feature map, and a bottleneck of size (6, 160, 3, 2) to obtain a (7, 7, 160) feature map. The resulting (7, 7, 160) feature map is spliced to the ZeroPad part of the decoder of SegNet, which keeps its original network structure.
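The downsampling chain above can be checked with simple stride arithmetic, a sketch of which follows (not the patent's code). Each stride-2 stage halves the spatial size under "same" padding; note that strict stride arithmetic yields (14, 14, 64) at the 64-channel stage, consistent with the (14, 14, 96) map that follows it.

```python
# Sketch: spatial-size arithmetic for the MobileNetV2-style encoder
# described in the text. Each stride-2 stage halves the feature map
# under "same" padding, taking 224 down to 7.
def encoder_spatial_sizes(input_size=224):
    # (stride, output_channels) per stage, following the text's sequence
    stages = [
        (2, 32),   # initial conv, stride 2
        (1, 16),   # bottleneck (6, 16, 1, 1)
        (2, 24),   # bottleneck (6, 24, 2, 2)
        (2, 32),   # bottleneck (6, 32, 3, 2)
        (2, 64),   # bottleneck (6, 64, 4, 2)
        (1, 96),   # bottleneck (6, 96, 3, 1)
        (2, 160),  # bottleneck (6, 160, 3, 2)
    ]
    size = input_size
    sizes = []
    for stride, channels in stages:
        size = -(-size // stride)  # ceiling division, as with "same" padding
        sizes.append((size, size, channels))
    return sizes

print(encoder_spatial_sizes()[-1])  # the (7, 7, 160) map handed to the decoder
```

The final (7, 7, 160) size matches the feature map the text splices into the SegNet decoder.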
The specific training process of the hair segmentation convolutional neural network is as follows:
(1) Hair-segmentation dataset production
(1a) A camera is used to photograph 500 volunteers in different hair states (e.g., messy hair, combed hair), with 10 photos per person and 5000 photos in total, giving the sample set X = {x_1, x_2, …, x_5000}, where x_i is the i-th photo.
(1b) Each sample x_i is annotated at the pixel level with an image-annotation tool, giving the label set Y = {y_1, y_2, …, y_5000}, where y_i is the labeled mask of the i-th sample.
(2) Hair-segmentation dataset augmentation
(2a) Translation: for each sample pair (x_i, y_i) in the (X, Y) dataset, every pixel is randomly translated by Δx pixels along the x-axis and Δy pixels along the y-axis, giving (x_i′, y_i′); the ranges of Δx and Δy are proportional to W and H, the width and height of the image (the exact bounds are equation images not reproduced in this text).
(2b) Rotation: for each sample pair (x_i, y_i), every pixel is rotated clockwise by a random angle A about the image center (W/2, H/2), giving (x_i′, y_i′); the range of A is likewise given by an equation image not reproduced here.
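The paired augmentation above can be sketched as follows. This is not the patent's code: the offset range (±10% of the image size) and rotation range (±15 degrees) are assumptions standing in for the unreproduced equation images, and the key point illustrated is that the image and its mask receive the same transform so labels stay aligned.

```python
import numpy as np

# Sketch of the paired translation/rotation augmentation described above.
# Offset and angle ranges are assumptions (the patent's exact bounds are
# in equation images not reproduced in this text).
def random_translate(img, mask, max_frac=0.1, rng=None):
    """Shift image and mask by the same random (dy, dx) offset."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    dy = rng.integers(-int(max_frac * h), int(max_frac * h) + 1)
    dx = rng.integers(-int(max_frac * w), int(max_frac * w) + 1)
    # np.roll keeps shapes identical; a real pipeline may pad instead
    return (np.roll(img, (dy, dx), axis=(0, 1)),
            np.roll(mask, (dy, dx), axis=(0, 1)))

def random_rotate(img, mask, max_deg=15.0, rng=None):
    """Rotate image and mask about the image center by the same random
    angle, using nearest-neighbour resampling in pure NumPy."""
    if rng is None:
        rng = np.random.default_rng()
    a = np.deg2rad(rng.uniform(-max_deg, max_deg))
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, find its source coordinate
    ys = cy + (yy - cy) * np.cos(a) - (xx - cx) * np.sin(a)
    xs = cx + (yy - cy) * np.sin(a) + (xx - cx) * np.cos(a)
    ys = np.clip(np.rint(ys), 0, h - 1).astype(int)
    xs = np.clip(np.rint(xs), 0, w - 1).astype(int)
    return img[ys, xs], mask[ys, xs]
```

Applying the same offsets and angle to both members of each (x_i, y_i) pair is what keeps the pixel-level annotation valid after augmentation.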
Using cross-entropy as the loss function and SGD as the optimizer, the (X, Y) dataset is fed into the network for model training with a batch size of 128, and model validation is performed with a batch size of 32.
S2, judging whether the color of the hair is qualified or not by using a hair color qualification judgment model;
the hair color acceptance judging model of this example was mobilenetV 2.
The hair-region image output in step S1 is acquired, the total number of hair-region pixels is counted as N, and a three-channel image O of length and width (n, n) is created, where n = ⌈√N⌉. The three-channel image O serves as the input of the hair-color qualification judgment model and is filled with the pixels of the hair region in sequence; specifically, the segmented hair-region pixels may be traversed and filled into the image in order.
Specifically, in step S2, the hair region of the first segmented image is used as the input of the hair-color qualification judgment model; that is, step S2 uses the model to judge whether the color of the hair region in the first segmented image is qualified.
Step S2 comprises:
S21, acquiring the total number N of hair-region pixels in the first segmented image;
S22, creating a three-channel image O of length and width (n, n), where n = ⌈√N⌉;
S23, filling the pixels of the hair region into the three-channel image O in sequence;
and S24, inputting the three-channel image O into the hair-color qualification judgment model to judge whether the hair color is qualified.
This embodiment rearranges the pixels of the segmented hair region, which reduces the size of the input image and improves the accuracy of hair-color identification. Because the hair region differs in size from image to image, directly using the hair region as input, or keeping only the hair region while zeroing all other pixels, would both enlarge the input and waste computation on zeros. By the processing above, only the hair-region pixels are retained, and the accuracy of hair-color identification is effectively improved because the input contains only hair pixels.
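The re-packing in steps S21 to S24 can be sketched as follows (not the patent's code; the zero padding of any leftover slots is an assumption): collect only the hair pixels and lay them out row by row in a compact n × n three-channel image with n = ⌈√N⌉.

```python
import numpy as np

# Sketch of steps S21-S24: pack only the hair-region pixels into a
# compact (n, n, 3) image, n = ceil(sqrt(N)). Unused slots are zero-padded
# (the padding value is an assumption, not stated in the text).
def pack_hair_pixels(image, hair_mask):
    """image: (H, W, 3) array; hair_mask: (H, W) boolean array."""
    hair_pixels = image[hair_mask]          # (N, 3): hair pixels in row-major order
    n_pixels = hair_pixels.shape[0]
    n = int(np.ceil(np.sqrt(n_pixels)))
    packed = np.zeros((n * n, 3), dtype=image.dtype)
    packed[:n_pixels] = hair_pixels          # fill in traversal order
    return packed.reshape(n, n, 3)
```

The packed image would then be resized to 224 × 224 before being fed to the qualification model, as described below.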
The three-channel image O is resized to a (224, 224, 3) picture. The input picture is convolved with a (1, 1, 3, 32, 2) convolution kernel to obtain a (112, 112, 32) feature map, then passed through a bottleneck of size (6, 16, 1, 1) to obtain (112, 112, 16), a bottleneck of (6, 24, 2, 2) to obtain (56, 56, 24), a bottleneck of (6, 32, 3, 2) to obtain (28, 28, 32), a bottleneck of (6, 64, 4, 2) to obtain (14, 14, 64), a bottleneck of (6, 96, 3, 1) to obtain (14, 14, 96), and a bottleneck of (6, 160, 3, 2) to obtain (7, 7, 160). A bottleneck of (6, 320, 1, 1) then yields a (7, 7, 320) feature map, a (1, 1, 1280, 1) convolution kernel yields (7, 7, 1280), a (7, 7) average pooling yields (1, 1, 1280), and a (1, 1, 1280, 2, 1) convolution kernel yields a (1, 1, 2) feature map. Finally, cross-entropy is used as the training loss function:
L = −Σ_i y_i log(ŷ_i)
where y_i is the one-hot label and ŷ_i the predicted probability of class i (reconstructed standard form; the original equation is an image).
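For the two-class (qualified / unqualified) output, the cross-entropy loss with one-hot labels can be computed as in this minimal sketch:

```python
import numpy as np

# Minimal sketch of the cross-entropy loss for the two-class hair-color
# model, averaged over a batch of one-hot labels.
def cross_entropy(probs, onehot, eps=1e-12):
    """probs: (batch, classes) softmax outputs; onehot: same shape."""
    return float(-np.mean(np.sum(onehot * np.log(probs + eps), axis=1)))
```

A confident correct prediction gives a loss near zero, while a uniform prediction over two classes gives ln 2 ≈ 0.693.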
the specific training process of the hair color qualification judgment model is as follows:
(3) hair color qualification determination data set creation
(3a) And classifying and labeling the X data set according to the hair color judgment standard of the certificate photo, classifying the qualified hair color into one class, and classifying the unqualified hair color into the other class.
(3b) Counting the total pixels of the hair area as N, and creating a three-channel image with the length and the width of (N, N)
Figure 900696DEST_PATH_IMAGE022
Wherein
Figure 323588DEST_PATH_IMAGE001
. Traversing the pixels of the hair region and filling the pixels into the image in sequence
Figure 805647DEST_PATH_IMAGE022
And its length and width are transformed into (224 ), the whole X data set can be made
Figure 909869DEST_PATH_IMAGE023
Wherein
Figure 597202DEST_PATH_IMAGE022
Inputting pictures for hair color with size (224, 3), and marking the marked information as
Figure 784470DEST_PATH_IMAGE024
Figure 682719DEST_PATH_IMAGE025
The classification information of whether the hair is standard is marked as 1, the classification information of whether the hair is not standard is marked as 0, and the coding form is OneHot coding.
Using the SGD as an optimizer, will
Figure 715266DEST_PATH_IMAGE026
The data set was model trained on images in each batch 128 into a hair classification network and the images were model validated in each batch 32.
S3, when the hair color is unqualified, the hair in the certificate photo is recolored to obtain the corrected certificate photo.
Specifically, recoloring the hair in the certificate photo comprises the following steps:
S31, decolorizing the hair region of the certificate photo, removing the hue and saturation information of the hair-region color and keeping only the brightness information.
This step sets
R′ = G′ = B′ (the exact expression is an equation image not reproduced in this text),
where R′, G′, B′ are the generated color values of the pixel, R is the red-channel value, G the green-channel value, and B the blue-channel value of the pixel.
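A minimal sketch of this decolorizing step follows. It is not the patent's formula: the ITU-R BT.601 luma weights used here are an assumption standing in for the unreproduced equation image; the point illustrated is that each hair pixel's R, G, B values are replaced by a single luminance value.

```python
import numpy as np

# Sketch of step S31: replace each hair pixel's R, G, B with one luminance
# value, discarding hue and saturation. The BT.601 luma weights are an
# assumption; the patent's exact formula is an unreproduced equation image.
def desaturate(image, hair_mask):
    """image: (H, W, 3) RGB array; hair_mask: (H, W) boolean array."""
    img = image.astype(np.float32).copy()
    luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    img[hair_mask] = luma[hair_mask][:, None]  # broadcast to all 3 channels
    return img.astype(image.dtype)
```

Only the masked hair pixels are modified, so the rest of the certificate photo is left untouched.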
S32, the brightness of the hair region from which hue and saturation have been removed is adjusted, following the formula
P′ = f(P, B, k) (the exact expression is an equation image not reproduced in this text),
where P′ is the generated pixel value, P is the original pixel value, B is the brightness coefficient with value range [−1, 1], and k is a constraint coefficient.
Specifically, during brightness adjustment, the proportion of bright pixels in the hair region is counted; while this proportion is larger than a first preset value, the brightness is adjusted, and the adjustment ends once the proportion falls below the first preset value.
Specifically, the first preset value is 10%. Bright pixels are pixels whose value is larger than a second preset value, typically 50.
A heuristic strategy is generally adopted when adjusting the brightness: the proportions of bright and dark pixels in the hair region are counted with 50 as the boundary, i.e., pixels with value less than 50 are counted as dark and pixels with value greater than 50 as bright. When the proportion of bright pixels exceeds 10%, the brightness coefficient is reduced by one grade and the proportion is recomputed; if it still exceeds 10%, the coefficient is reduced by another grade, and the adjustment ends once the proportion drops below 10%.
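The heuristic loop above can be sketched as follows. This is not the patent's brightness formula: the multiplicative 0.9-per-grade darkening is an assumption standing in for the unreproduced equation; what the sketch shows is the stopping rule (darken until fewer than 10% of hair pixels exceed the threshold of 50).

```python
import numpy as np

# Sketch of the heuristic brightness loop: darken the (already desaturated)
# hair pixels one grade at a time until fewer than 10% of them exceed the
# brightness threshold of 50. The 0.9-per-grade factor is an assumption.
def darken_until_qualified(hair_pixels, threshold=50, max_ratio=0.10,
                           step=0.9, max_iters=50):
    px = hair_pixels.astype(np.float32)
    for _ in range(max_iters):
        if np.mean(px > threshold) <= max_ratio:
            break                 # bright-pixel proportion is acceptable
        px *= step                # lower the brightness by one grade
    return np.clip(px, 0, 255).astype(np.uint8)
```

The `max_iters` guard (a defensive addition, not in the text) prevents an infinite loop if the target ratio is unreachable.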
This embodiment can judge whether a user's hair meets the photographing standard when the certificate photo is taken, and automatically corrects hair colors that do not meet the standard, improving the user's operating experience, saving photographing time, and bringing convenience to users with dyed hair. In addition, this embodiment replaces the encoder network of SegNet with MobileNetV2, so that the hair-segmentation convolutional network can meet embedded-deployment requirements.
Example two
Referring to fig. 3, the present embodiment discloses a certificate photo camera, which includes the following units:
the hair segmentation unit is used for performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, and the first segmentation image is an image only containing a hair region;
the hair segmentation convolutional neural network of the embodiment is formed by replacing an encoder network structure in a SegNet network with MobileNetV 2.
A hair color qualification judging unit for judging whether the hair color of the first segmented image is qualified or not by using the hair color qualification judging model;
the hair color qualification determination model of this embodiment is mobileNetV 2.
The hair-color qualification judging unit of this embodiment acquires the hair-region image output by the hair segmentation unit, counts the total number of hair-region pixels as N, and creates a three-channel image O of length and width (n, n), where n = ⌈√N⌉. The three-channel image O serves as the input of the hair-color qualification judgment model and is filled with the pixels of the hair region in sequence; specifically, the segmented hair-region pixels may be traversed and filled into the image in order.
Specifically, the hair-color qualification judging unit takes the hair region of the first segmented image as the input of the hair-color qualification judgment model; that is, it uses the model to judge whether the color of the hair region in the first segmented image is qualified.
The hair-color qualification judging unit further comprises:
a total-hair-pixel acquiring unit, for acquiring the total number N of hair-region pixels in the first segmented image;
a three-channel-image creating unit, for creating a three-channel image O of length and width (n, n), where n = ⌈√N⌉;
a pixel filling unit, for filling the pixels of the hair region into the three-channel image O in sequence;
and a hair-color judgment subunit, for inputting the three-channel image O into the hair-color qualification judgment model to judge whether the hair color is qualified.
And the hair color correction unit is used for dyeing the hair color in the identification photo to obtain the corrected identification photo when the hair color is unqualified.
The certificate-photo camera further comprises the following units:
the decolorizing unit is used for decolorizing the hair area of the identification photo, removing hue and saturation information of the color of the hair area and only keeping brightness information;
the specific formula of the decolorizing unit is as follows:
Figure 902162DEST_PATH_IMAGE031
wherein the content of the first and second substances,
Figure 948616DEST_PATH_IMAGE032
is the generated R, G, B color value for the pixel, R is the red channel value of the pixel, G is the green channel value of the pixel, and B is the blue channel value of the pixel.
And a brightness adjusting unit for adjusting the brightness of the hair region with the hue and saturation information removed.
The brightness adjustment follows the formula
P′ = f(P, B, k) (the exact expression is an equation image not reproduced in this text),
where P′ is the generated pixel value, P is the original pixel value, B is the brightness coefficient with value range [−1, 1], and k is a constraint coefficient.
Specifically, during brightness adjustment, the proportion of bright or dark pixels in the color of the hair area is counted first; the brightness is adjusted while the bright-pixel proportion is greater than a first preset value, and the adjustment ends once the proportion falls below that value.

Specifically, the first preset value is 10%. Bright pixels are pixels whose value is greater than a second preset value, which is typically 50.

A heuristic strategy is generally adopted when adjusting the brightness: the proportions of bright and dark pixels in the color of the hair area are counted with 50 as the boundary, i.e. pixels with values below 50 are counted as dark and pixels with values above 50 as bright. When the bright-pixel proportion exceeds 10%, the brightness coefficient is lowered by one step and the result is recomputed; if the proportion is still above 10%, the coefficient is lowered by another step, and once the proportion falls below 10% the adjustment ends.
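The heuristic above can be sketched as follows. Because the patent's brightness formula is reproduced only as an image, a simple linear adjustment P' = clip(P + B·k) is assumed here, along with an assumed constraint coefficient k = 100 and step size 0.1; all of these are illustrative, not the patent's actual values.

```python
import numpy as np

def adjust_brightness(pixels, k=100.0, step=0.1, threshold=0.10, boundary=50):
    """Darken hair pixels until the bright-pixel proportion drops below threshold.

    Assumed formula: P' = clip(P + B * k), with brightness coefficient
    B in [-1, 1] lowered one step per iteration (the patent's actual
    formula and step size are not given in the text).
    """
    B = 0.0
    out = pixels.astype(np.float64)
    while (out > boundary).mean() > threshold and B > -1.0:
        B = max(B - step, -1.0)               # lower the coefficient one step
        out = np.clip(pixels + B * k, 0, 255)  # recompute with the new coefficient
    return out.astype(np.uint8), B
```

On a region that is already mostly dark the loop never runs and the pixels are returned unchanged; otherwise B walks down until the 10% bright-pixel criterion is met or B hits its lower bound of -1.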
The certificate photo camera of this embodiment can determine, while the user is shooting, whether the hair meets the photo standard, and automatically correct hair colors that do not meet it. This improves the user's operating experience, saves shooting time, and brings convenience to users with dyed hair. In addition, this embodiment uses MobileNetV2 to replace the encoder network of SegNet, so that the hair segmentation convolutional network of this embodiment can meet embedded-deployment requirements.
EXAMPLE III
Referring to fig. 4, fig. 4 is a schematic structural diagram of the identification photo hair development and correction device of this embodiment. The identification photo hair development and correction device 20 comprises a processor 21, a memory 22, and a computer program stored in the memory 22 and executable on the processor 21. When executing the computer program, the processor 21 implements the steps of the above method embodiment or, alternatively, the functions of the modules/units of the above device embodiment.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 22 and executed by the processor 21 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions that describe the execution of the computer program in the identification photo hair development and correction device 20. For example, the computer program may be divided into the modules in the second embodiment, and for the specific functions of the modules, reference is made to the working process of the apparatus in the foregoing embodiment, which is not described herein again.
The identification photo hair development and correction device 20 may include, but is not limited to, a processor 21 and a memory 22. Those skilled in the art will appreciate that the schematic diagram is merely an example of the identification photo hair development and correction device 20 and does not constitute a limitation thereof; the device may include more or fewer components than shown, combine some components, or use different components. For example, the identification photo hair development and correction device 20 may also include an input-output device, a network access device, a bus, etc.
The Processor 21 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 21 is the control center of the identification photo hair development and correction device 20 and connects the various parts of the entire device using various interfaces and lines.
The memory 22 may be used to store the computer programs and/or modules, and the processor 21 may implement the various functions of the identification photo hair development and correction device 20 by running or executing the computer programs and/or modules stored in the memory 22 and calling the data stored in the memory 22. The memory 22 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.). In addition, the memory 22 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
Wherein the integrated modules/units of the identification photo hair development and correction device 20 may be stored in a computer-readable storage medium if implemented in the form of software functional units and sold or used as stand-alone products. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by the processor 21, implements the steps of the above method embodiments. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A method for correcting hair color in a certificate photo, comprising the following steps:
s1, performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, wherein the first segmentation image is an image only containing a hair region; the hair segmentation convolutional neural network is formed by replacing an encoder network structure in a SegNet network with MobileNet V2;
s2, judging whether the hair color of the first segmentation image is qualified or not by using the hair color qualification judgment model;
s3, when the hair color is unqualified, dyeing the hair color in the identification photo to obtain a corrected identification photo;
the method for dyeing the hair color of the identification photo comprises the following steps:
s31, performing decolorizing treatment on the hair area of the identification photo, removing hue and saturation information of the color of the hair area, and only keeping brightness information;
s32, adjusting the brightness of the hair area with hue and saturation information removed;
in the step S32, when adjusting the brightness, the proportion of bright or dark pixels in the color of the hair area is counted first, and the brightness is adjusted when the bright-pixel proportion is greater than a first preset value; the brightness adjustment ends once the bright-pixel proportion is smaller than the first preset value;
the brightness adjustment formula is as follows:
[formula image not reproduced]

where the generated pixel value appears on the left-hand side of the formula, P is the original pixel value of the pixel, B is the brightness coefficient with value range [-1, 1], and k is a constraint coefficient.
2. The method according to claim 1, wherein the step S2 specifically comprises:
s21, acquiring total pixels N of the hair area in the first segmentation image;
s22, creating a three-channel image O with length and width (n, n), wherein
[formula image not reproduced]
S23, filling the pixels of the hair area into the three-channel image O in sequence;
and S24, inputting the three-channel image O into a hair color qualification judgment model to judge whether the hair color is qualified.
3. A camera for identification photographs, said camera comprising the following units:
the hair segmentation unit is used for performing hair segmentation on the certificate photo by using a hair segmentation convolutional neural network to obtain a first segmentation image, and the first segmentation image is an image only containing a hair region; the hair segmentation convolutional neural network is formed by replacing an encoder network structure in a SegNet network with MobileNet V2;
a hair color qualification judging unit for judging whether the hair color of the first segmented image is qualified or not by using the hair color qualification judging model;
the hair color correction unit is used for dyeing the hair color in the identification photo to obtain the corrected identification photo when the hair color is unqualified;
the certificate photo also comprises the following units:
the decolorizing unit is used for decolorizing the hair area of the identification photo, removing hue and saturation information of the color of the hair area and only keeping brightness information;
a brightness adjustment unit for adjusting brightness of the hair region from which the hue and saturation information is removed;
the brightness adjustment unit is used for first counting the proportion of bright or dark pixels in the color of the hair area when adjusting the brightness, and adjusting the brightness when the bright-pixel proportion is greater than a first preset value; the brightness adjustment ends once the bright-pixel proportion is smaller than the first preset value;
the brightness adjustment formula is as follows:
[formula image not reproduced]

where the generated pixel value appears on the left-hand side of the formula, P is the original pixel value of the pixel, B is the brightness coefficient with value range [-1, 1], and k is a constraint coefficient.
4. A non-volatile storage medium having instructions stored thereon which, when executed, implement the certificate photo hair color correction method according to any one of claims 1-2.
CN202210007450.8A 2022-01-06 2022-01-06 Method, device and medium for correcting hair color of certificate photo Active CN114022395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210007450.8A CN114022395B (en) 2022-01-06 2022-01-06 Method, device and medium for correcting hair color of certificate photo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210007450.8A CN114022395B (en) 2022-01-06 2022-01-06 Method, device and medium for correcting hair color of certificate photo

Publications (2)

Publication Number Publication Date
CN114022395A CN114022395A (en) 2022-02-08
CN114022395B true CN114022395B (en) 2022-04-12

Family

ID=80069689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210007450.8A Active CN114022395B (en) 2022-01-06 2022-01-06 Method, device and medium for correcting hair color of certificate photo

Country Status (1)

Country Link
CN (1) CN114022395B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102985941A (en) * 2010-06-30 2013-03-20 日本电气株式会社 Color image processing method, color image processing device, and color image processing program
CN111127591A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium
CN111448581A (en) * 2017-10-24 2020-07-24 巴黎欧莱雅公司 System and method for image processing using deep neural networks
CN113191938A (en) * 2021-04-29 2021-07-30 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9928601B2 (en) * 2014-12-01 2018-03-27 Modiface Inc. Automatic segmentation of hair in images
CN108629834B (en) * 2018-05-09 2020-04-28 华南理工大学 Three-dimensional hair reconstruction method based on single picture
TWI689892B (en) * 2018-05-18 2020-04-01 瑞昱半導體股份有限公司 Background blurred method and electronic apparatus based on foreground image
CN110969631B (en) * 2019-11-25 2023-04-11 杭州小影创新科技股份有限公司 Method and system for dyeing hair by refined photos
CN112614060A (en) * 2020-12-09 2021-04-06 深圳数联天下智能科技有限公司 Method and device for rendering human face image hair, electronic equipment and medium
CN113989895A (en) * 2021-11-04 2022-01-28 展讯通信(天津)有限公司 Face skin segmentation method, electronic device and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102985941A (en) * 2010-06-30 2013-03-20 日本电气株式会社 Color image processing method, color image processing device, and color image processing program
CN111448581A (en) * 2017-10-24 2020-07-24 巴黎欧莱雅公司 System and method for image processing using deep neural networks
CN111127591A (en) * 2019-12-24 2020-05-08 腾讯科技(深圳)有限公司 Image hair dyeing processing method, device, terminal and storage medium
CN113191938A (en) * 2021-04-29 2021-07-30 北京市商汤科技开发有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114022395A (en) 2022-02-08

Similar Documents

Publication Publication Date Title
US8194992B2 (en) System and method for automatic enhancement of seascape images
Bako et al. Removing shadows from images of documents
CN107862663A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
CN107730444A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
US10477128B2 (en) Neighborhood haze density estimation for single-image dehaze
US20070274573A1 (en) Image processing method and image processing apparatus
CN109862389B (en) Video processing method, device, server and storage medium
US20050286793A1 (en) Photographic image processing method and equipment
CN109871845B (en) Certificate image extraction method and terminal equipment
CN107945135A (en) Image processing method, device, storage medium and electronic equipment
CN110827371B (en) Certificate generation method and device, electronic equipment and storage medium
CN110691226B (en) Image processing method, device, terminal and computer readable storage medium
CN112785572B (en) Image quality evaluation method, apparatus and computer readable storage medium
CN112233077A (en) Image analysis method, device, equipment and storage medium
US11138693B2 (en) Attention-driven image manipulation
CN110554991A (en) Method for correcting and managing text picture
CN116030453A (en) Digital ammeter identification method, device and equipment
JPH04346333A (en) Data extracting method for human face and exposure deciding method
US20240127404A1 (en) Image content extraction method and apparatus, terminal, and storage medium
CN110414522A (en) A kind of character identifying method and device
CN114170565A (en) Image comparison method and device based on unmanned aerial vehicle aerial photography and terminal equipment
CN114022395B (en) Method, device and medium for correcting hair color of certificate photo
JP2848749B2 (en) Feature image data extraction method
JPH04346332A (en) Exposure deciding method
KR20190017635A (en) Apparatus and method for acquiring foreground image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant