CN112700396A - Illumination evaluation method and device for face picture, computing equipment and storage medium - Google Patents

Info

Publication number
CN112700396A
CN112700396A
Authority
CN
China
Prior art keywords
illumination
face
face area
picture
image
Prior art date
Legal status
Pending
Application number
CN201910990524.2A
Other languages
Chinese (zh)
Inventor
李伟
严昱超
蒋云
陈宁华
穆铁马
李志勇
陈挺
戚靓亮
陈青青
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Zhejiang Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Group Zhejiang Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN201910990524.2A priority Critical patent/CN112700396A/en
Publication of CN112700396A publication Critical patent/CN112700396A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10024: Color image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an illumination evaluation method and device for a face picture, a computing device and a storage medium, wherein the method comprises the following steps: acquiring a forward face picture to be processed; masking the forward face picture to obtain a face area picture; converting the color data in the face area image into a preset color space to obtain illumination component values of the face area image; and evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key areas according to the illumination component values to obtain a picture illumination evaluation result. According to the invention, the picture is masked and cropped to obtain the face area image and thereby remove the picture background, so that the background cannot affect the evaluation of the face illumination quality; the illumination evaluation result thus accurately reflects the illumination quality of the face, and the evaluation result is more accurate. This solves the technical problem of prior-art illumination evaluation methods, in which an over-bright or over-dark picture background affects the final evaluation result, which then cannot effectively reflect the overall illumination condition of the face area.

Description

Illumination evaluation method and device for face picture, computing equipment and storage medium
Technical Field
The invention relates to the technical field of picture processing, in particular to a method and a device for evaluating illumination of a face picture, computing equipment and a storage medium.
Background
At present, a large amount of face picture data is shot and stored by cameras, and the quality of the stored pictures has become increasingly important in the many production and daily-life scenarios that use camera technology. For example, a business hall photographs a user's face for the record. In the process of shooting and storing face pictures, quality control of the face pictures is an effective guarantee for subsequent services and functions.
In the prior art, a face image is analyzed by examining the pixels of the filtered face image, using a correlation analysis technique that fuses illumination symmetry with global illumination intensity. The specific logic is as follows: first, a frontal face image is input and filtered to obtain a filtered image; the face image is divided into a symmetrical left half IL and right half IR; the right half IR is mirrored horizontally to obtain IFR; taking the left half IL as the reference image and IFR as the test object, block-wise calculation with a brightness evaluation index yields a quality distribution matrix, and a correlation weighting method yields the illumination symmetry value Qs. Next, the gray-scale range of pixels that can stimulate the human eye is determined according to the eye's sensitivity to gray scale; the number of pixel values of primary interest to the human eye is counted, and their proportion of the whole image reflects the illumination intensity QI of the image, which is used as a weight. Finally, the illumination symmetry value Qs and the illumination intensity QI are combined to obtain a no-reference illumination evaluation value QNL. This face-image illumination analysis method can effectively compare the illumination of the left and right half-faces, but it cannot eliminate the influence of the overall image background on the illumination quality evaluation: the final evaluation result is greatly affected by an over-bright or over-dark background, and the overall illumination condition of the face area cannot be effectively reflected.
Disclosure of Invention
In view of the above problems, the present invention has been made to provide a face picture illumination evaluation method, apparatus, computing device and storage medium that overcome or at least partially solve the above problems.
According to one aspect of the invention, a method for evaluating illumination of a face picture is provided, and the method comprises the following steps:
acquiring a forward face picture to be processed;
masking the forward face picture to obtain a face area picture;
converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component values to obtain an image illumination evaluation result.
According to another aspect of the present invention, there is provided a face picture illumination evaluation apparatus, including:
the image acquisition module is used for acquiring a forward face image to be processed;
the face region intercepting module is used for carrying out mask processing on the forward face picture to obtain a face region picture;
the illumination component acquisition module is used for converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and the picture illumination evaluation module is used for evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component value to obtain a picture illumination evaluation result.
According to yet another aspect of the present invention, there is provided a computing device comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the human face picture illumination evaluation method.
According to still another aspect of the present invention, a computer storage medium is provided, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to execute operations corresponding to the above-mentioned face picture illumination evaluation method.
According to the illumination evaluation method and device, computing device and storage medium for a face picture, a forward face picture to be processed is acquired; the forward face picture is masked to obtain a face area picture; the color data in the face area image are converted into a preset color space to obtain illumination component values of the face area image; and the illumination quality of the full-face area, the left-face area, the right-face area and/or the key areas is evaluated according to the illumination component values to obtain a picture illumination evaluation result. According to the invention, after the front face picture is obtained, the face area picture is obtained by masking and cropping the picture to remove the picture background, so that the background cannot affect the evaluation of the face illumination quality; the illumination evaluation result can therefore accurately reflect the illumination quality of the face, and the evaluation result is more accurate. This solves the technical problem of prior-art illumination evaluation methods, in which an over-bright or over-dark picture background affects the final evaluation result, which then cannot effectively reflect the overall illumination condition of the face area. At the same time, the face area image is converted into a preset color space and the illumination components are extracted, and the picture's illumination quality is comprehensively evaluated from the illumination quality of the full face, the left face, the right face and/or the key areas, making the illumination evaluation result more comprehensive.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow chart of a human face image illumination evaluation method provided by an embodiment of the invention;
fig. 2 is a schematic diagram illustrating a processing procedure of a multitask convolutional neural network in a face image illumination evaluation method according to an embodiment of the present invention;
fig. 3 shows a mask processing schematic diagram in the illumination evaluation method for a face picture provided by the embodiment of the invention;
fig. 4 is a schematic structural diagram illustrating a face picture illumination evaluation apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Example one
Fig. 1 shows a flowchart of an embodiment of a method for evaluating illumination of a face picture according to the present invention, where as shown in fig. 1, the method includes the following steps:
and S101, acquiring a forward face picture to be processed.
In an optional manner, step S101 further includes: identifying and processing a portrait picture acquired by a camera by utilizing a multitask convolutional neural network to obtain key point information of a five sense organ region; calculating position information between at least two key points with a symmetrical relation according to the key point information of the five sense organ regions, and determining a human face approximate region in the portrait picture; and adjusting the human image picture according to the position information, and extracting a picture part corresponding to the approximate human face region from the adjusted human image picture to obtain a forward human face picture.
In this step, after the terminal collects a portrait picture through a camera or other image-capture device, the forward face picture is extracted through a Multi-task Cascaded Convolutional Network (MTCNN). Specifically, as shown in fig. 2, the portrait picture is used as the test image and is resized to different scales to construct an image pyramid. The forward face picture is then obtained through three stages.
The first stage: a Proposal Network (P-Net), a fully convolutional network, is used to generate face candidate windows and bounding-box regression vectors. The candidate windows are corrected based on the bounding-box regression vectors, after which Non-Maximum Suppression (NMS) is used to merge highly overlapping candidate windows.
The second stage: all candidate windows are fed to a Refinement Network (R-Net), whose task is to further reject the large number of non-face candidate windows; calibration with the bounding-box regression vectors and merging with NMS then continue as before.
The third stage: the network used in this stage is the Output Network (O-Net). This stage again performs calibration with the bounding-box regression vectors and merging with NMS; it should be noted that in this layer the main purpose is to determine the approximate face area under stronger supervision. The network also outputs the key point information of the facial five sense organ regions, from which the positions of the five key areas can be determined. The five sense organ regions may include the areas corresponding to the eyebrows, eyes, ears, nose and mouth, and the five key areas may include: the eyebrow area, eye area, ear area, nose area and mouth area.
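The NMS merging used between the three stages can be sketched as follows. This is a generic greedy NMS implementation rather than the patent's own code, and the (x1, y1, x2, y2) box layout and the threshold value are assumptions:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    candidate window and drop the windows whose IoU with it exceeds
    iou_thresh. boxes: N x 4 array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]  # keep only weakly overlapping boxes
    return keep
```

In the MTCNN pipeline this routine is applied to the candidate windows produced by P-Net and again after R-Net and O-Net.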
After key point information of the five sense organ region is obtained through the multitask convolution neural network, position information between at least two key points with a symmetrical relation is calculated according to the key point information of the five sense organ region, and a human face approximate region in a portrait picture is determined according to position coordinates of the two key points.
The inclination of the face can be determined from the position information between at least two key points having a symmetrical relation. For example, the angle between the line connecting the left-eye and right-eye center points and the horizontal is calculated; this angle is the face tilt angle. The portrait picture is then rotated by the corresponding angle so that the face becomes level, achieving alignment adjustment of the portrait.
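A minimal sketch of the tilt-angle calculation described above; the function name and the (x, y) pixel-coordinate convention are assumptions, not taken from the patent:

```python
import numpy as np

def face_tilt_angle(left_eye, right_eye):
    """Angle, in degrees, between the line joining the two eye centre
    points and the horizontal; rotating the portrait picture by this
    angle levels the face. Eye points are (x, y) pixel coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))
```

The picture itself can then be rotated by this angle with any image library's rotation routine.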
And S102, performing mask processing on the forward human face picture to obtain a human face area picture.
In an optional manner, step S102 further includes: determining the edge of a forward face area according to the forward face picture, and configuring a mask matrix corresponding to the edge of the forward face area; and carrying out operation processing on the forward face image and the mask matrix to obtain a face region image.
Specifically, as shown in fig. 3, the forward face region edge is determined from the forward face picture, and the size and format of the picture are extracted; a corresponding mask matrix is configured according to the picture's size and format and the forward face region edge, and an XOR operation is performed on the face picture according to the mask matrix, completing the cropping operation. This masking quickly and effectively removes the background from the forward face picture and avoids interference of the picture background with the evaluation of its illumination quality.
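A minimal sketch of applying such a mask, assuming the mask matrix is available as an array that is nonzero inside the face region. The patent combines the picture with the mask matrix via an XOR operation; a plain keep-inside-the-mask multiply is used here instead as a simplification:

```python
import numpy as np

def apply_face_mask(picture, mask):
    """picture: H x W x 3 array; mask: H x W array, nonzero inside the
    face region. Background pixels are set to zero so they no longer
    contribute to any illumination statistics."""
    return picture * (mask[..., None] > 0)
```

Zeroed background pixels can then be excluded from the illumination calculations by indexing with the same mask.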
And S103, converting the color data in the face area image into a preset color space to obtain the illumination component value of the face area image.
In an alternative manner, the preset color space may be a YUV color space, and step S103 further includes: and converting the color data in the face area image into a YUV color space to obtain a brightness value corresponding to the face area image, and determining the brightness value as an illumination component value of the face area image.
It should be noted that YUV is a color encoding method adopted by European television systems and is the color space used by the PAL (Phase Alternating Line) analog color television standard. Y represents luminance, UV represents the color difference, and U and V are the two components constituting the color. In traditional color television systems, a three-tube color camera or a color CCD camera usually captures the image; the color image signals are then color-separated, amplified and corrected to obtain RGB, a matrix transformation circuit produces a luminance signal Y and two color-difference signals B-U and R-V, and finally the transmitting end encodes the luminance and color-difference signals separately and transmits them over the same channel. This color representation is called the YUV color space.
In this step, the cropped face area map is converted from RGB format to YUV format; the Y component value in the YUV format is the brightness value corresponding to the face area image, and this brightness value is determined as the illumination component value of the face area image. The conversion can use the following formulas 1 to 3:
Y = 0.299R + 0.587G + 0.114B (Formula 1)
U = -0.147R - 0.289G + 0.436B (Formula 2)
V = 0.615R - 0.515G - 0.100B (Formula 3)
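Formulas 1 to 3 translate directly into code; a sketch for a single pixel (vectorizing over a whole image is straightforward):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to analogue (PAL-style) YUV using formulas
    1-3 from the text. Y is the luminance, i.e. the illumination
    component used in the evaluation; U and V carry the color."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v
```

For the evaluation only the Y component is retained; U and V are discarded.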
And S104, evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component value to obtain an image illumination evaluation result.
In an optional manner, step S104 further includes: calculating the illumination mean value of the whole face area, the illumination standard deviation of the whole face area, the illumination deviation of the left face area and the right face area and the illumination deviation of the key area according to the illumination component values; and determining a picture illumination evaluation result according to the whole-face area illumination mean value, the whole-face area illumination standard deviation, the left and right face area illumination deviation and the key area illumination deviation.
Specifically, calculating the average value of the illumination component values of all pixel points in the face area image to obtain the illumination mean value of the whole face area, and calculating the illumination standard deviation of the whole face area according to the illumination component values of all pixel points in the face area image and the illumination mean value of the whole face area;
calculating the average value of the illumination component values of the pixel points positioned in the left face area and the average value of the illumination component values of the pixel points positioned in the right face area in the face area image to obtain the illumination average value of the left face area and the illumination average value of the right face area, and calculating the illumination deviation of the left face area and the right face area according to the illumination average value of the left face area and the illumination average value of the right face area;
aiming at each key area in the face area image, calculating the average value of the illumination component values of all pixel points positioned in the key area in the face area image to obtain the illumination average value corresponding to the key area; and calculating the illumination deviation of the key areas according to the illumination mean value corresponding to each key area.
Specifically, the illumination quality of the full-face area, the left-face area, the right-face area and/or the key areas is evaluated according to the illumination component values. For the full-face area, the illumination component values of all pixels in the full-face area are averaged to obtain the full-face illumination mean A1, which reflects the average pixel level of the full-face area; the standard deviation of the illumination component values of the full-face pixels, the full-face illumination standard deviation N1, is then obtained from the mean A1 and reflects the pixel dispersion of the full-face area.
For the left and right face areas, the illumination component values of the pixels in each area are averaged to obtain the left-face illumination mean A2 and the right-face illumination mean A3, which reflect the average pixel level of the left and right face areas; the left-right illumination deviation is then calculated from A2 and A3 as X1 = |A2 - A3|.
Taking five key areas in the face area map as an example, the five key areas may be defined as the left-eye area, the right-eye area, the nose area, the left-lip area and the right-lip area. For these five key areas, the average of the illumination component values of all pixels located in each area is calculated, giving the illumination means A4, A5, A6, A7 and A8 corresponding to the five key areas, and the key-area illumination deviation is X2 = max{A4, A5, A6, A7, A8} - min{A4, A5, A6, A7, A8}.
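The four statistics above (A1, N1, X1, X2) might be computed as sketched below, assuming the face's luminance values and the region masks are available as NumPy arrays; the function and argument names are hypothetical:

```python
import numpy as np

def illumination_stats(y_face, left_mask, right_mask, key_masks):
    """A1 = mean luminance of the whole face; N1 = its standard deviation;
    X1 = |mean(left face) - mean(right face)|;
    X2 = max - min of the per-key-area means.
    y_face: 2-D luminance (Y-component) array; the masks are boolean
    arrays of the same shape selecting each region's pixels."""
    a1 = float(y_face.mean())
    n1 = float(y_face.std())
    x1 = abs(float(y_face[left_mask].mean()) - float(y_face[right_mask].mean()))
    key_means = [float(y_face[m].mean()) for m in key_masks]
    x2 = max(key_means) - min(key_means)
    return a1, n1, x1, x2
```

In practice the masks would be derived from the key-point positions output by the detection stage.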
The final picture illumination evaluation result R is a synthesis of the full-face illumination mean A1, the full-face illumination standard deviation N1, the left-right illumination deviation X1 and the key-area illumination deviation X2. The contribution of these four parameters to the final result is calculated as follows, where k is a constant:
[The four expressions appear only as an image in the original document: BDA0002238117380000071.]
Let the standard results of the four expressions be denoted n1, n2, n3 and n4. Each expression is min-max (dispersion) normalized with respect to its standard result, unifying the multi-dimensional analysis results into a single measurement dimension; the resulting values r1, r2, r3 and r4 are summed to obtain the picture illumination evaluation result R.
In an optional manner, step S104 may be followed by the steps of:
judging whether the image illumination evaluation result meets the preset illumination requirement or not;
if yes, storing the forward face picture; and if not, deleting the forward face picture.
In this step, the collected pictures are checked against the illumination requirement, which provides quality control over the face pictures and thereby improves the efficiency of the face recognition process. The picture illumination evaluation result is judged through this detection, and pictures that do not meet the preset illumination requirement are deleted, saving time and storage space while guaranteeing picture quality. Specifically, a stored forward face picture that meets the illumination requirement can be used for face recognition in access control systems, face attendance, traffic control or new retail scenarios, for user records in business halls, and the like. For a picture that does not meet the preset illumination requirement, a prompt message is sent to remind the user that the picture quality is unqualified so that the picture can be re-shot.
With the method provided by this embodiment, after the front face picture is obtained, the face area picture is obtained by masking and cropping the picture, removing the picture background. The background therefore cannot affect the evaluation of the portrait's illumination quality: the illumination evaluation result accurately reflects the illumination quality of the face, and the evaluation result is more accurate and is not affected by a complex picture background. This solves the technical problem of prior-art illumination evaluation methods, in which an over-bright or over-dark picture background affects the final evaluation result, which then cannot effectively reflect the overall illumination condition of the face area. At the same time, the face area image is converted into the YUV color space and the illumination component values are extracted; the picture's illumination quality is comprehensively evaluated from the illumination quality of the full face, the left face, the right face and/or the key areas, and the evaluation combines the dispersion of the local areas with that of the whole area, making the illumination evaluation result more comprehensive.
Example two
Fig. 4 shows a schematic structural diagram of an embodiment of the illumination evaluation device for a face picture according to the present invention. As shown in fig. 4, the apparatus includes:
the image obtaining module 401 is configured to obtain a forward face image to be processed.
In an optional manner, the picture acquiring module 401 is further configured to: identifying and processing a portrait picture acquired by a camera by utilizing a multitask convolutional neural network to obtain key point information of a five sense organ region; calculating position information between at least two key points with a symmetrical relation according to the key point information of the five sense organ regions, and determining a human face approximate region in the portrait picture; and adjusting the human image picture according to the position information, and extracting a picture part corresponding to the approximate human face region from the adjusted human image picture to obtain a forward human face picture.
A face region intercepting module 402, configured to perform mask processing on the forward face image to obtain a face region image.
In an alternative manner, the face region intercepting module 402 is further configured to: determining the edge of a forward face area according to the forward face picture, and configuring a mask matrix corresponding to the edge of the forward face area; and carrying out operation processing on the forward face image and the mask matrix to obtain a face region image.
An illumination component obtaining module 403, configured to convert the color data in the face region map into a preset color space, so as to obtain an illumination component value of the face region map.
In an optional manner, the illumination component obtaining module 403 is further configured to: and converting the color data in the face area image into a YUV color space to obtain a brightness value corresponding to the face area image, and determining the brightness value as an illumination component value of the face area image.
The picture illumination evaluation module 404 is configured to evaluate illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component value, so as to obtain a picture illumination evaluation result.
In an optional manner, the picture illumination evaluation module 404 is further configured to: calculating the illumination mean value of the whole face area, the illumination standard deviation of the whole face area, the illumination deviation of the left face area and the right face area and the illumination deviation of the key area according to the illumination component values; and determining a picture illumination evaluation result according to the whole-face area illumination mean value, the whole-face area illumination standard deviation, the left and right face area illumination deviation and the key area illumination deviation.
Further, the picture illumination evaluation module 404 is further configured to: calculating the average value of the illumination component values of all the pixel points in the face area image to obtain the illumination mean value of the whole face area, and calculating the illumination standard deviation of the whole face area according to the illumination component values of all the pixel points in the face area image and the illumination mean value of the whole face area; calculating the average value of the illumination component values of the pixel points positioned in the left face area and the average value of the illumination component values of the pixel points positioned in the right face area in the face area image to obtain the illumination average value of the left face area and the illumination average value of the right face area, and calculating the illumination deviation of the left face area and the right face area according to the illumination average value of the left face area and the illumination average value of the right face area; aiming at each key area in the face area image, calculating the average value of the illumination component values of all pixel points positioned in the key area in the face area image to obtain the illumination average value corresponding to the key area; and calculating the illumination deviation of the key areas according to the illumination mean value corresponding to each key area.
In an optional manner, the illumination evaluation apparatus for a face picture further includes:
the picture storage module is used for judging whether the picture illumination evaluation result meets a preset illumination requirement;
if so, the forward face picture is stored; otherwise, the forward face picture is deleted.
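The keep-or-delete decision can be sketched as a simple threshold check. The concrete thresholds below are placeholders, since the text does not specify the "preset illumination requirement"; the function name and parameters are illustrative.

```python
def passes_illumination(mean_full, std_full, dev_lr, dev_key,
                        mean_range=(80, 200), max_std=60.0,
                        max_lr=40.0, max_key=40.0):
    """Return True when every statistic is within its preset limit.

    All numeric thresholds here are illustrative placeholders; the
    disclosure leaves the concrete requirement unspecified.
    """
    lo, hi = mean_range
    return (lo <= mean_full <= hi
            and std_full <= max_std
            and dev_lr <= max_lr
            and dev_key <= max_key)
```

A caller would store the forward face picture when this returns True and delete it otherwise.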
With the apparatus provided by this embodiment, after the forward face picture is obtained, mask processing and cropping yield a face area image from which the picture background has been removed, so the background cannot distort the illumination quality evaluation of the portrait. The illumination evaluation result therefore accurately reflects the illumination quality of the face and is not affected by a complex picture background, which solves the technical problem of prior-art illumination evaluation methods that an over-bright or over-dark background skews the final evaluation result and prevents it from effectively reflecting the overall illumination of the face area. Meanwhile, the face area image is converted into the YUV color space and the illumination component values are extracted; the illumination quality of the picture is evaluated comprehensively over the full-face, left-face, right-face, and/or key areas, and the evaluation combines the dispersion of the local areas with that of the whole area, making the illumination evaluation result more comprehensive.
Example three
An embodiment of the present invention provides a non-volatile computer storage medium storing at least one executable instruction, where the executable instruction causes a processor to perform the illumination evaluation method for a face picture of any of the above method embodiments.
The executable instructions may be specifically configured to cause the processor to:
acquiring a forward face picture to be processed;
masking the forward face picture to obtain a face area picture;
converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component values to obtain an image illumination evaluation result.
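The masking and color-space steps above can be sketched as follows, assuming a binary mask matrix and using the BT.601 luma weights for the Y component of the YUV conversion (the text does not pin down a particular YUV variant; the function names are illustrative):

```python
def luma_from_rgb(pixel):
    """BT.601 luma: the Y channel used here as the illumination component."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def illumination_components(rgb_image, mask):
    """Zero out background pixels via the mask, then convert the remaining
    face pixels to their Y (illumination component) values."""
    return [[luma_from_rgb(px) if keep else 0.0
             for px, keep in zip(row, mask_row)]
            for row, mask_row in zip(rgb_image, mask)]
```

The zeroed background pixels would be excluded (via the same mask) from the statistics computed in the evaluation step, so they do not bias the means.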
Example four
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor (processor), a Communications Interface (Communications Interface), a memory (memory), and a Communications bus.
The processor, the communication interface, and the memory communicate with each other via the communication bus. The communication interface is used for communicating with network elements of other devices, such as clients or other servers. The processor is configured to execute a program, and may specifically perform the relevant steps of the above embodiments of the face picture illumination evaluation method.
In particular, the program may include program code comprising computer operating instructions.
The processor may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The server comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory is used for storing the program. The memory may comprise a high-speed RAM and may also include a non-volatile memory, such as at least one disk storage.
The program may specifically be adapted to cause a processor to perform the following operations:
acquiring a forward face picture to be processed;
masking the forward face picture to obtain a face area picture;
converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component values to obtain an image illumination evaluation result.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the embodiments of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etc., does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specified otherwise.

Claims (10)

1. A face picture illumination evaluation method is characterized by comprising the following steps:
acquiring a forward face picture to be processed;
performing mask processing on the forward face picture to obtain a face area picture;
converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component values to obtain an image illumination evaluation result.
2. The method of claim 1, wherein the acquiring a forward face picture to be processed further comprises:
identifying a portrait picture captured by a camera by using a multi-task convolutional neural network to obtain key point information of facial feature regions;
calculating position information between at least two key points having a symmetric relationship according to the key point information of the facial feature regions, and determining an approximate face region in the portrait picture;
and adjusting the portrait picture according to the position information, and extracting the picture portion corresponding to the approximate face region from the adjusted portrait picture to obtain the forward face picture.
3. The method of claim 1, wherein the masking the forward face image to obtain a face region image further comprises:
determining the edge of a forward face region according to the forward face picture, and configuring a mask matrix corresponding to the edge of the forward face region;
and carrying out operation processing on the forward face image and the mask matrix to obtain a face region image.
4. The method according to claim 1, wherein the converting the color data in the face area image into a preset color space to obtain the illumination component value of the face area image further comprises:
converting the color data in the face area image into a YUV color space to obtain luminance values corresponding to the face area image, and determining the luminance values as the illumination component values of the face area image.
5. The method according to claim 1, wherein the evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component value to obtain the image illumination evaluation result further comprises:
calculating the illumination mean value of the whole face area, the illumination standard deviation of the whole face area, the illumination deviation of the left face area and the right face area and the illumination deviation of the key area according to the illumination component values;
and determining a picture illumination evaluation result according to the whole-face area illumination mean value, the whole-face area illumination standard deviation, the left and right face area illumination deviation and the key area illumination deviation.
6. The method according to claim 5, wherein said calculating a full-face area illumination mean, a full-face area illumination standard deviation, a left-right-face area illumination deviation, and a key area illumination deviation according to the illumination component values further comprises:
calculating the average value of the illumination component values of all pixel points in the face area image to obtain the illumination mean value of the whole face area, and calculating the illumination standard deviation of the whole face area according to the illumination component values of all pixel points in the face area image and the illumination mean value of the whole face area;
calculating the average value of the illumination component values of the pixel points positioned in the left face area and the average value of the illumination component values of the pixel points positioned in the right face area in the face area image to obtain the illumination average value of the left face area and the illumination average value of the right face area, and calculating the illumination deviation of the left face area and the right face area according to the illumination average value of the left face area and the illumination average value of the right face area;
aiming at each key area in the face area image, calculating the average value of the illumination component values of all pixel points positioned in the key area in the face area image to obtain the illumination average value corresponding to the key area; and calculating the illumination deviation of the key areas according to the illumination mean value corresponding to each key area.
7. The method according to any one of claims 1 to 6, wherein after the evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component value to obtain a picture illumination evaluation result, the method further comprises:
judging whether the image illumination evaluation result meets a preset illumination requirement or not;
if yes, storing the forward face picture; and if not, deleting the forward face picture.
8. An illumination evaluation device for a face picture, the device comprising:
the image acquisition module is used for acquiring a forward face image to be processed;
the face region intercepting module is used for carrying out mask processing on the forward face picture to obtain a face region picture;
the illumination component acquisition module is used for converting the color data in the face area image into a preset color space to obtain an illumination component value of the face area image;
and the picture illumination evaluation module is used for evaluating the illumination quality of the full-face area, the left-face area, the right-face area and/or the key area according to the illumination component values to obtain a picture illumination evaluation result.
9. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the human face picture illumination evaluation method according to any one of claims 1-7.
10. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to execute the operation corresponding to the face picture illumination evaluation method according to any one of claims 1-7.
CN201910990524.2A 2019-10-17 2019-10-17 Illumination evaluation method and device for face picture, computing equipment and storage medium Pending CN112700396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910990524.2A CN112700396A (en) 2019-10-17 2019-10-17 Illumination evaluation method and device for face picture, computing equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112700396A true CN112700396A (en) 2021-04-23

Family

ID=75504677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910990524.2A Pending CN112700396A (en) 2019-10-17 2019-10-17 Illumination evaluation method and device for face picture, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112700396A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705650A (en) * 2021-08-20 2021-11-26 网易(杭州)网络有限公司 Processing method, device, medium and computing equipment for face picture set
CN113743284A (en) * 2021-08-30 2021-12-03 杭州海康威视数字技术股份有限公司 Image recognition method, device, equipment, camera and access control equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657714A (en) * 2015-02-09 2015-05-27 重庆大学 Illumination symmetry and global illumination intensity integrated no-reference face illumination evaluation method
CN106558046A (en) * 2016-10-31 2017-04-05 深圳市飘飘宝贝有限公司 A kind of quality determining method and detection means of certificate photo
CN107945107A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN108805097A (en) * 2018-06-21 2018-11-13 冷霜 A kind of recognition of face and method for real time tracking based on color space conversion
CN109255763A (en) * 2018-08-28 2019-01-22 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and storage medium
CN109344724A (en) * 2018-09-05 2019-02-15 深圳伯奇科技有限公司 A kind of certificate photo automatic background replacement method, system and server
US20190114468A1 (en) * 2017-10-16 2019-04-18 Fujitsu Limited Method and apparatus for evaluating illumination condition in face image
CN109961055A (en) * 2019-03-29 2019-07-02 广州市百果园信息技术有限公司 Face critical point detection method, apparatus, equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAIPENG ZHANG ET AL.: "Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks", IEEE Signal Processing Letters, vol. 23, no. 10, pages 1, XP011622636, DOI: 10.1109/LSP.2016.2603342 *
魏正 (Wei Zheng): "Research and Implementation of Face Recognition Based on Deep Learning on the Caffe Platform" (in Chinese), China Master's Theses Full-text Database (Information Science and Technology), no. 3, pages 4 *


Similar Documents

Publication Publication Date Title
EP3757890A1 (en) Method and device for image processing, method and device for training object detection model
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
US8135184B2 (en) Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images
CN107808136A (en) Image processing method, device, readable storage medium storing program for executing and computer equipment
US20150063692A1 (en) Image capture device with contemporaneous reference image capture mechanism
CN110334635A (en) Main body method for tracing, device, electronic equipment and computer readable storage medium
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107911625A (en) Light measuring method, device, readable storage medium storing program for executing and computer equipment
US9299011B2 (en) Signal processing apparatus, signal processing method, output apparatus, output method, and program for learning and restoring signals with sparse coefficients
WO2019037739A1 (en) Image processing parameter acquisition method, readable storage medium and computer device
CN101983507A (en) Automatic redeye detection
CN111160202A (en) AR equipment-based identity verification method, AR equipment-based identity verification device, AR equipment-based identity verification equipment and storage medium
CN110276831A (en) Constructing method and device, equipment, the computer readable storage medium of threedimensional model
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
CN112633221A (en) Face direction detection method and related device
CN112700396A (en) Illumination evaluation method and device for face picture, computing equipment and storage medium
CN110658918B (en) Positioning method, device and medium for eyeball tracking camera of video glasses
CN113810611A (en) Data simulation method and device for event camera
WO2013114803A1 (en) Image processing device, image processing method therefor, computer program, and image processing system
TWI255429B (en) Method for adjusting image acquisition parameters to optimize objection extraction
CN107909542A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN113506275B (en) Urban image processing method based on panorama
CN113781326A (en) Demosaicing method and device, electronic equipment and storage medium
CN107392870A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination