WO2018095293A1 - Face image processing method, terminal and storage medium

Face image processing method, terminal and storage medium

Info

Publication number
WO2018095293A1
WO2018095293A1 (PCT/CN2017/111810, CN2017111810W)
Authority
WO
WIPO (PCT)
Prior art keywords
brightness
terminal
face image
brightness value
display screen
Prior art date
Application number
PCT/CN2017/111810
Other languages
English (en)
French (fr)
Inventor
袁丽娜
郭计伟
李轶峰
王亮
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018095293A1 publication Critical patent/WO2018095293A1/zh
Priority to US16/279,901 priority Critical patent/US10990806B2/en

Classifications

    • G06V 10/993 - Image or video recognition or understanding: Evaluation of the quality of the acquired pattern
    • G06V 10/141 - Image acquisition; optical characteristics of the acquisition device or illumination arrangements: Control of illumination
    • G06V 40/166 - Human faces: Detection; Localisation; Normalisation using acquisition arrangements
    • G06V 40/168 - Human faces: Feature extraction; Face representation
    • G06V 40/172 - Human faces: Classification, e.g. identification
    • H04N 23/611 - Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/635 - Control of cameras or camera modules by using electronic viewfinders: Region indicators; Field of view indicators
    • H04N 23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/70 - Circuitry for compensating brightness variation in the scene
    • H04N 23/71 - Circuitry for evaluating the brightness variation
    • H04N 23/74 - Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Definitions

  • the present invention relates to image technology in the field of computers, and in particular, to a face image processing method, a terminal, and a storage medium.
  • biometric technologies such as fingerprints, faces, and irises have been widely applied to many fields of mobile life.
  • fingerprint recognition and iris recognition have higher hardware requirements for electronic devices than face image recognition. Therefore, the application of face image recognition is relatively more extensive.
  • face image recognition is required to authenticate the user's identity. It can be seen that the success rate of face image recognition directly affects the user experience. However, there is no effective solution to how to improve the success rate of face image recognition.
  • the embodiments of the present invention are directed to providing a face image processing method, a terminal, and a storage medium, at least to solve the problem that it is difficult to effectively improve the success rate of face image recognition in the related art.
  • an embodiment of the present invention provides a face image processing method, including:
  • collecting a face image in response to a face recognition operation instruction, and calculating a brightness value of the collected face image;
  • when the brightness value is less than a first preset threshold, enhancing the brightness of the light emitted from the display screen of the terminal to a target brightness value, and re-acquiring the face image and calculating the corresponding brightness value;
  • when the brightness value of the re-acquired face image falls within a preset brightness value range, performing face recognition according to the re-acquired face image.
  • an embodiment of the present invention provides a face image processing method applied to a terminal, where the terminal includes one or more processors, a memory, and one or more programs, the one or more programs are stored in the memory, each program may include one or more units each corresponding to a set of instructions, and the one or more processors are configured to execute the instructions; the method includes:
  • collecting a face image in response to a face recognition operation instruction, and calculating a brightness value of the collected face image;
  • when the brightness value is less than a first preset threshold, enhancing the brightness of the light emitted from the display screen of the terminal to a target brightness value, and re-acquiring the face image and calculating the corresponding brightness value;
  • when the brightness value of the re-acquired face image falls within a preset brightness value range, performing face recognition according to the re-acquired face image.
  • an embodiment of the present invention provides a face image processing terminal, including:
  • An acquisition module configured to collect a face image in response to a face recognition operation instruction
  • a calculation module configured to calculate a brightness value of the collected face image
  • the enhancement module is configured to: when the brightness value is less than the first preset threshold, enhance the brightness of the light emitted from the display screen of the terminal to the target brightness value, re-collect the face image and calculate the corresponding brightness value;
  • the identification module is configured to perform face recognition according to the re-acquired face image when the brightness value of the re-acquired face image conforms to the preset brightness value range.
  • an embodiment of the present invention provides a storage medium, where an executable program is stored, and when the executable program is executed by a processor, the face image processing method provided by the embodiment of the present invention is implemented.
  • the embodiment of the present invention further provides a face image processing terminal, including:
  • a memory configured to store an executable program; and
  • a processor configured to implement the face image processing method provided by the embodiment of the present invention when executing the executable program stored in the memory.
  • with the technical solution of the embodiments of the present invention, the brightness value of the collected face image is calculated; when the brightness value is less than the first preset threshold, the brightness of the light emitted from the display screen of the terminal is enhanced to the target brightness value, and then the face image is re-acquired and the corresponding brightness value is calculated; when the brightness value of the re-acquired face image falls within the preset brightness value range, face recognition is automatically performed according to the re-acquired face image.
  • in this way, unqualified face images are filtered out at the data source, thereby effectively improving the success rate of face image recognition.
  • FIG. 1 is a schematic diagram of an HSV color space model provided by an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an optional application scenario of a method for processing a face image according to an embodiment of the present invention
  • FIG. 3 is an optional schematic flowchart of a method for processing a face image according to an embodiment of the present invention
  • FIG. 4 is another optional schematic flowchart of a face image processing method according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an optional interface for turning on an enhanced brightness mode according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an optional interface for adding a mask layer to a target area of a terminal display interface according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram showing an optional functional structure of a face image processing terminal according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of another optional functional structure of a face image processing terminal according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an optional hardware structure of a face image processing terminal according to an embodiment of the present invention.
  • RGB color space model: a color space model in which the three color components red (R), green (G), and blue (B) are mixed in different proportions to represent colors.
  • Color space: a description of a collection of colors, for example as a geometric coordinate space.
  • In the RGB color space model, each color component has a value range of [0, 255].
  • YUV color space model: a color space model in which luminance and chrominance components are mixed in different proportions to represent colors.
  • Y is the brightness (Luminance or Luma), which is the grayscale value;
  • U and V are the chroma (Chrominance or Chroma), which are used to describe the color and saturation of the image.
  • the camera of the electronic device using the Android operating system collects image data in the YUV format by default.
  • the YUV color space is characterized by the separation of the luminance signal Y and the chrominance signals U, V. If there is only a Y signal component and no U, V signal components, then the represented image is a black and white grayscale image.
  • Color TV adopts the YUV color space model: the luminance signal Y solves the compatibility problem between color and black-and-white television, so that black-and-white TV sets can also receive color TV signals.
  • the YUV color space model allows the chrominance components to be heavily compressed, minimizing storage space; therefore, the YUV color space model is widely used in network transmission.
  • FIG. 1 is a schematic diagram of an HSV color space model according to an embodiment of the present invention.
  • the parameter components representing colors in the HSV color space model are: hue (H), saturation (S), and brightness (V), respectively.
  • Component H: the hue of the color, an integer with a value range of [0, 360). It is measured as an angle, starting from red and increasing counterclockwise: red is 0°, green is 120°, and blue is 240°.
  • Component S: the purity of the color, that is, the degree to which the color approaches a pure spectral color; a floating-point number with a value range of [0, 1).
  • Component V: the brightness of the color; a floating-point number with a value range of [0, 1).
  • V is related to the lightness of the illuminant; for the color of the object, V is related to the transmittance or reflectance of the object.
  • FIG. 2 is a schematic diagram of an optional application scenario of a method for processing a face image according to an embodiment of the present invention.
  • when the user performs a sensitive operation on the mobile terminal, such as modifying a password or making a payment, face recognition is required.
  • the mobile terminal opens a camera module, such as a camera, to collect the user's face image, and identifies and verifies the collected face image. If the verification passes, subsequent operations such as setting a password or making a payment can continue; if the verification fails, the user is prompted that the verification failed and the subsequent operations cannot be performed.
  • the mobile terminal may include, but is not limited to, a mobile phone, a mobile computer, a tablet computer, a personal digital assistant (PDA, Personal Digital Assistant), a media player, a smart TV, a smart watch, smart glasses, a smart bracelet, and the like.
  • FIG. 3 is a schematic flowchart diagram of a method for processing a face image according to an embodiment of the present invention.
  • the face image processing method can be applied to the various types of mobile terminals described above; the embodiment of the present invention is not limited in this respect.
  • as shown in FIG. 3, the implementation process of the face image processing method in the embodiment of the present invention may include the following steps:
  • Step S300 Acquire a face image in response to the face recognition operation instruction.
  • the camera module of the mobile terminal may be used to collect the face image.
  • the camera module may include a camera of the mobile terminal itself, or a camera of an electronic device connected to the mobile terminal.
  • the face recognition operation instruction in the embodiment of the present invention may be triggered when the user performs a sensitive operation on the security center of the mobile terminal, such as changing a password, paying an amount, or modifying a payment limit.
  • the mobile terminal in the embodiment of the present invention may also be simply referred to as a terminal.
  • Step S301 Calculate the brightness value of the collected face image.
  • calculating the brightness value of the collected face image may be implemented by converting the collected face image, according to the color space model used by the terminal, into a color space model based on hue, saturation, and brightness (HSV), and extracting the corresponding brightness value from the brightness channel of the converted face image.
  • if the operating system type of the terminal is the Android system, image data in the YUV format is collected by default.
  • in this case, the YUV color space model corresponding to the collected face image is first converted into the RGB color space model, the RGB color space model is then converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image;
  • if the operating system type of the terminal is the iOS system, image data in the RGB format is collected by default.
  • in this case, the RGB color space model corresponding to the collected face image is directly converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image.
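The following is a minimal illustrative sketch (not taken from the patent) of how the brightness value of a collected frame could be computed once the image is available in RGB form: the V channel of each pixel is taken as max(R, G, B)/255 and averaged over the frame. The function name and the pixel representation are assumptions made for the example.

```python
# Illustrative sketch only: average the HSV V component over an RGB frame.
# The pixel format (a list of 8-bit (R, G, B) tuples) is an assumption.

def frame_brightness(rgb_pixels):
    """Return the average V component (V = max(R, G, B) / 255), a value in [0, 1]."""
    if not rgb_pixels:
        return 0.0
    total_v = sum(max(r, g, b) / 255.0 for (r, g, b) in rgb_pixels)
    return total_v / len(rgb_pixels)

# A dark frame yields a low brightness value that would fall below the first preset threshold.
print(frame_brightness([(10, 12, 8), (20, 18, 25), (5, 7, 6)]))  # ~0.06
```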
  • Step S302 When the brightness value is less than the first preset threshold, the brightness of the light emitted from the display screen of the terminal is enhanced to the target brightness value, and the face image is re-acquired and the corresponding brightness value is calculated.
  • enhancing the brightness of the light emitted from the display screen of the terminal to the target brightness value may be implemented in the following ways:
  • Mode 1: the highest brightness that can be achieved by the display screen of the terminal is determined as the target brightness value, and the brightness of the display screen of the terminal is adjusted to that highest brightness.
  • since the display resolutions of different terminals may be different and the display brightness of different terminals may be different, the highest brightness achievable by the display screen of the terminal is used as the target brightness value, and the display brightness of the terminal is enhanced to that highest brightness in a single step. In this way, the brightness of the display screen of the terminal does not need to be adjusted repeatedly, thereby improving the efficiency of face image recognition.
  • Mode 2: a brightness compensation value is determined as the target brightness value, and the brightness of the display screen of the terminal is adjusted to the brightness compensation value.
  • in this way, the brightness of the display screen of the terminal can be enhanced according to the brightness of the light in the environment where the terminal is located and the ideal brightness for face image recognition, that is, the required ambient light brightness.
  • in an optional embodiment, a difference value may be obtained by subtracting the light brightness value of the environment in which the terminal is currently located from the required ambient light brightness value, and the obtained difference may be used as the brightness compensation value.
  • in this way, the collected face image can satisfy the ideal brightness for recognition (the required ambient light brightness), thereby ensuring the accuracy of the recognition result based on the collected face image.
  • in another optional embodiment, the product of the obtained difference value and a compensation coefficient greater than 1 may be used as the brightness compensation value; the brightness of the display screen of the terminal is then gradually adjusted according to the brightness compensation value until it reaches the brightness compensation value. In this way, the negative influence of light scattering on the compensated light brightness can be effectively avoided.
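As a hedged sketch of Mode 2, the brightness compensation value can be expressed as the difference between the required ambient brightness and the current ambient brightness, scaled by a coefficient greater than 1, with the screen brightness then raised step by step. The numeric values, the coefficient 1.2, and the assumption that ambient and screen brightness share a single [0, 1] scale are all illustrative, not taken from the patent.

```python
def brightness_compensation(current_ambient, required_ambient, coefficient=1.2):
    """Mode 2 sketch: (required - current) ambient brightness, scaled by a coefficient > 1
    to offset losses from light scattering. All values assumed to be on a [0, 1] scale."""
    return max(required_ambient - current_ambient, 0.0) * coefficient

def ramp_brightness(current_screen, target, step=0.05):
    """Gradually step the screen brightness up to the target value (clamped to 1.0)."""
    levels = []
    while current_screen < min(target, 1.0):
        current_screen = min(current_screen + step, target, 1.0)
        levels.append(round(current_screen, 2))
    return levels

target = brightness_compensation(current_ambient=0.2, required_ambient=0.6)
print(target)                        # ~0.48, used as the target brightness value
print(ramp_brightness(0.3, target))  # [0.35, 0.4, 0.45, 0.48]
```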
  • in an optional embodiment, before the brightness of the light emitted from the display screen of the terminal is enhanced to the target brightness value, the face image processing method further includes: displaying, on the display screen of the terminal, a button for enhancing brightness, where the button is used to indicate whether the enhanced brightness mode has been turned on; and, in response to a selection instruction for the button and the enhancement of the brightness of the light emitted from the display screen of the terminal to the target brightness value, adding a mask layer to the target area of the terminal display interface;
  • where the target area includes an area of the display screen of the terminal in which the face image is not displayed.
  • the mask layer added to the target area of the terminal display interface may be an opaque white mask layer, or may be another type of mask layer, which is not limited herein.
  • in an optional embodiment, the first preset threshold may be obtained by acquiring the hardware and/or software version information of the terminal, and acquiring a first preset threshold that matches the hardware and/or software version information of the terminal; that is, the terminal obtains the first preset threshold directly by itself. It can be understood that, in another optional embodiment, the first preset threshold may be obtained from the server: the terminal sends its own hardware and/or software version information to the server, the server assigns a corresponding first preset threshold according to the hardware and/or software version information of the terminal, and the assigned first preset threshold is returned to the terminal.
  • the hardware version information of the terminal may include, but is not limited to, hardware information such as the model, brand, motherboard, or display identification chip of the terminal; the software version information of the terminal may include, but is not limited to, software information such as the operating system version and the application currently requesting face image recognition together with its version information.
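As an illustration of how the matching between version information and thresholds might be organised, the sketch below uses a plain lookup table. The device keys, OS strings, and numeric threshold values are all invented for the example; the patent only specifies that the thresholds are matched to the hardware and/or software version information, whether on the terminal or on the server.

```python
# Hypothetical mapping from (model, OS version) to (first, second) preset thresholds,
# on a 0-1 brightness scale. All entries are invented for illustration.
LIGHT_THRESHOLDS = {
    ("BrandA-Model1", "Android 7.0"): (0.25, 0.85),
    ("BrandB-Model2", "iOS 10"): (0.30, 0.90),
}
DEFAULT_THRESHOLDS = (0.20, 0.80)

def lookup_thresholds(model, os_version):
    """Return (first_preset_threshold, second_preset_threshold) for the given device."""
    return LIGHT_THRESHOLDS.get((model, os_version), DEFAULT_THRESHOLDS)

print(lookup_thresholds("BrandA-Model1", "Android 7.0"))  # (0.25, 0.85)
```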
  • in an optional embodiment, determining that the brightness value of the collected face image is smaller than the first preset threshold may be implemented by directly comparing the brightness value of the face image with the first preset threshold, or by determining which preset brightness value range the brightness value of the collected face image falls into.
  • for example, the first preset threshold may be a boundary value of a certain brightness value range a; when the brightness value of the collected face image falls into a brightness value range b that lies below the range a, it may be determined that the brightness value of the collected face image is smaller than the first preset threshold.
  • when the brightness value of the collected face image is smaller than the first preset threshold, it can be known that the current environment for performing face image recognition is relatively dark. At this time, the brightness of the light emitted from the display screen of the terminal needs to be enhanced to the target brightness value, so that the brightness of the light illuminating the user's face is increased; then, when the face image is re-acquired, a clearer face image can be collected, and the facial features can be extracted more accurately.
  • in an optional embodiment, the face image processing method further includes: when the brightness value of the collected face image is greater than the second preset threshold, outputting prompt information, where the prompt information is used to prompt the user to adjust the acquisition angle and/or region; and re-acquiring the face image and calculating the brightness value of the re-acquired face image.
  • in this way, prompt information indicating that the light is too bright is generated and displayed, so that when the ambient light is not ideal, the user can be actively guided to perform the correct operations (such as adjusting the acquisition angle and/or region) and is actively informed of the reason why face image verification is unsuccessful, which greatly improves the user experience.
  • in an optional embodiment, the second preset threshold may be obtained by acquiring the hardware and/or software version information of the terminal, and acquiring a second preset threshold that matches the hardware and/or software version information of the terminal; that is, the terminal obtains the second preset threshold directly by itself.
  • in another optional embodiment, the second preset threshold may be obtained from the server: the terminal sends its own hardware and/or software version information to the server, the server assigns a corresponding second preset threshold according to the hardware and/or software version information of the terminal, and the assigned second preset threshold is returned to the terminal.
  • Step S303 When the brightness value of the re-acquired face image conforms to the preset brightness value range, the face recognition is performed according to the re-acquired face image.
  • that the brightness value of the re-acquired face image falls within the preset brightness value range may include: determining that the brightness value of the re-acquired face image is greater than or equal to the first preset threshold and less than the second preset threshold, which indicates that the brightness of the environment in which the terminal is currently located is suitable for face recognition.
  • the user can thus be assured that the brightness of the environment in which the terminal is currently located is neither too dark nor too bright, avoiding the problem that the accuracy of face image recognition is reduced by light that is too dark or too bright.
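Putting the two thresholds together, the decision described in steps S302 and S303 can be summarised as a three-way classification of the computed brightness value. The sketch below is illustrative only; the threshold values are placeholders.

```python
def classify_brightness(v, first_threshold, second_threshold):
    """Map a frame's brightness value to the action described in the method."""
    if v < first_threshold:
        return "too_dark"    # enhance the screen brightness and re-acquire (step S302)
    if v > second_threshold:
        return "too_bright"  # prompt the user to adjust the acquisition angle/region
    return "recognize"       # within the preset range: perform face recognition (step S303)

print(classify_brightness(0.10, 0.25, 0.85))  # too_dark
print(classify_brightness(0.50, 0.25, 0.85))  # recognize
print(classify_brightness(0.95, 0.25, 0.85))  # too_bright
```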
  • with the technical solution of the embodiment of the present invention, the brightness value of the collected face image is calculated in response to the face recognition operation instruction; when the brightness value is less than the first preset threshold, the brightness of the light emitted from the display screen of the terminal is enhanced to the target brightness value, and then the face image is re-acquired and the corresponding brightness value is calculated; when the brightness value of the re-acquired face image falls within the preset brightness value range, face recognition is automatically performed according to the re-acquired face image.
  • in this way, unqualified face images are filtered out at the data source, thereby effectively improving the success rate of face image recognition.
  • FIG. 4 is another schematic flowchart of a method for processing a face image according to an embodiment of the present invention.
  • the method for processing a face image can be applied to various types of mobile terminals as described above.
  • the specific implementation process of the face image processing method in the embodiment of the present invention may include the following steps:
  • Step S400 Acquire a face image according to the face recognition operation instruction.
  • the camera module of the mobile terminal may be used to collect the face image.
  • the camera module may include a camera of the mobile terminal itself, or a camera of an electronic device connected to the mobile terminal.
  • the face recognition operation instruction in the embodiment of the present invention may be triggered when the user performs a sensitive operation on the security center of the mobile terminal, such as changing a password, paying an amount, or modifying a payment limit.
  • the mobile terminal in the embodiment of the present invention may also be simply referred to as a terminal.
  • Step S401 Send the hardware and/or software version information of the terminal to the server.
  • the terminal may send the hardware and/or software version information to the server while the camera module is being used to capture the face image.
  • alternatively, the hardware and/or software version information of the terminal may be sent to the server periodically, for example once every one or two days; the embodiment of the present invention is not limited in this respect.
  • the hardware version information of the terminal may include, but is not limited to, hardware information such as a model, a brand, a motherboard, or a display identification chip of the terminal;
  • the software version information of the terminal may include, but is not limited to, software information such as the operating system version and the application currently requesting face image recognition together with its version information.
  • the terminal can send the hardware and/or software version information of the terminal from the background to the corresponding server through the application that currently requests the face image recognition to obtain the matched light threshold.
  • Step S402 The first preset threshold and the second preset threshold that match the hardware and/or software version information of the terminal returned by the server are received.
  • after receiving the hardware and/or software version information of the terminal, the server may search the database for matching light thresholds for performing face image recognition, where the light thresholds may include the first preset threshold and the second preset threshold.
  • the first preset threshold is smaller than the second preset threshold.
  • the server returns the first preset threshold and the second preset threshold to the terminal, that is, the terminal receives the first preset threshold and the second preset threshold.
  • in the foregoing manner, the first preset threshold and the second preset threshold are obtained from the server.
  • in an optional embodiment, the first preset threshold and/or the second preset threshold may also be obtained in the following manner: acquire the hardware and/or software version information of the terminal, and acquire a first preset threshold and/or a second preset threshold that match the hardware and/or software version information of the terminal. In this way, the terminal obtains the first preset threshold and/or the second preset threshold directly by itself.
  • Step S403 Calculate the brightness value of the collected face image.
  • calculating the brightness value of the collected face image may be implemented by converting the collected face image, according to the color space model used by the terminal, into a color space model based on hue, saturation, and brightness (HSV), and extracting the corresponding brightness value from the brightness channel of the converted face image.
  • if the operating system type of the terminal is the Android system, image data in the YUV format is collected by default.
  • in this case, the YUV color space model corresponding to the collected face image is first converted into the RGB color space model, the RGB color space model is then converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image;
  • if the operating system type of the terminal is the iOS system, image data in the RGB format is collected by default.
  • in this case, the RGB color space model corresponding to the collected face image is directly converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image.
  • the algorithm transformRGB2HSV, which converts the RGB color space model into the HSV color space model, is defined as follows:
  • let MAX be the maximum of the three components R, G, and B, and MIN be the minimum of the three components.
  • the algorithm transformYUV2RGB, which first converts the YUV color space model into the RGB color space model, is defined as follows:
  • after the YUV color space model is converted into the RGB color space model, the RGB color space model is converted into the HSV color space model using the algorithm transformRGB2HSV, and the brightness value of the collected face image is calculated.
  • alternatively, the Y component can also be directly used as the brightness value of the face image.
  • the conversion algorithm between the above color space models is only an optional embodiment of the present invention, and the embodiment of the present invention does not limit how to complete the conversion between the respective color space models.
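The conversion formulas themselves are not reproduced in this text; the sketch below uses the standard definitions (V = MAX, S = (MAX - MIN)/MAX, and approximate BT.601 coefficients for YUV to RGB), which are consistent with the MAX/MIN description above but are not guaranteed to be the exact formulas of the patent.

```python
def transform_rgb2hsv(r, g, b):
    """Standard RGB (0-255) to HSV: H in [0, 360), S and V in [0, 1]."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx                                    # V is the maximum of the three components
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0
    elif mx == r:
        h = (60 * (g - b) / (mx - mn)) % 360
    elif mx == g:
        h = 60 * (b - r) / (mx - mn) + 120
    else:
        h = 60 * (r - g) / (mx - mn) + 240
    return h, s, v

def transform_yuv2rgb(y, u, v):
    """Approximate BT.601 full-range YUV (0-255) to RGB (0-255)."""
    d, e = u - 128, v - 128
    r = max(0, min(255, round(y + 1.402 * e)))
    g = max(0, min(255, round(y - 0.344136 * d - 0.714136 * e)))
    b = max(0, min(255, round(y + 1.772 * d)))
    return r, g, b

# Android path (YUV frame) and iOS path (RGB frame) both end at the V channel.
print(transform_rgb2hsv(*transform_yuv2rgb(80, 120, 130))[2])  # ~0.33
print(transform_rgb2hsv(200, 180, 160)[2])                     # ~0.78
```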
  • Step S404 Determine the relationship between the brightness value of the face image and the first preset threshold and the second preset threshold. If the brightness value is less than the first preset threshold, step S405 is performed; if the brightness value is greater than the second preset threshold, step S408 is performed; otherwise, step S410 is performed.
  • Step S405 A button for enhancing brightness is displayed on the display screen of the terminal.
  • the button is used to indicate whether the mode of enhancing brightness has been turned on, and if the mode of enhancing brightness has been turned on, step S406 is performed; otherwise, step S409 is performed.
  • Step S406 Enhance the brightness of the light emitted from the display screen of the terminal to the target brightness value in response to the selection instruction for the button.
  • FIG. 5 is a schematic diagram of an optional interface for enabling an enhanced brightness mode according to an embodiment of the present invention.
  • when the terminal determines that the brightness value of the collected face image is less than the first preset threshold, it can be known that the brightness of the environment in which the terminal is currently located is too dark.
  • in this case, prompt information such as "current light is too dark" or "the light is too dark, turn on the enhanced brightness mode" can be displayed in the terminal interface to prompt the user that the ambient light of the terminal's current environment is too dark; at the same time, an option button for enhancing brightness is displayed on the display screen of the terminal, where the option button is used to indicate whether the enhanced brightness mode has been turned on. If the user clicks the option button, the terminal receives a selection instruction for the option button and turns on the enhanced brightness mode.
  • enhancing the brightness of the light emitted from the display screen of the terminal to the target brightness value may be implemented in the following ways:
  • Mode 1: the highest brightness that can be achieved by the display screen of the terminal is determined as the target brightness value, and the brightness of the display screen of the terminal is adjusted to that highest brightness.
  • since the display brightness of different terminals may be different, the highest brightness achievable by the display screen of the terminal is used as the target brightness value, and the display brightness of the terminal is enhanced to that highest brightness in a single step, that is, the brightness of the display screen of the terminal is adjusted to the highest brightness. In this way, the brightness of the display screen of the terminal does not need to be adjusted repeatedly, thereby improving the efficiency of face image recognition.
  • Mode 2: a brightness compensation value is determined as the target brightness value, and the brightness of the display screen of the terminal is adjusted to the brightness compensation value.
  • in this way, the brightness of the display screen of the terminal can be enhanced according to the brightness of the light in the environment where the terminal is located and the ideal brightness for face image recognition, that is, the required ambient light brightness.
  • in an optional embodiment, a difference value may be obtained by subtracting the light brightness value of the environment in which the terminal is currently located from the required ambient light brightness value, and the obtained difference may be used as the brightness compensation value.
  • in this way, the collected face image can satisfy the ideal brightness for recognition (the required ambient light brightness), thereby ensuring the accuracy of the recognition result based on the collected face image.
  • in another optional embodiment, the product of the obtained difference value and a compensation coefficient greater than 1 may be used as the brightness compensation value; the brightness of the display screen of the terminal is then gradually adjusted according to the brightness compensation value until it reaches the brightness compensation value. In this way, the negative influence of light scattering on the compensated light brightness can be effectively avoided.
  • Step S407 Add a mask layer to the target area of the terminal display interface, and then perform step S409.
  • FIG. 6 is a schematic diagram of an optional interface for adding a mask layer to a target area of a terminal display interface according to an embodiment of the present invention.
  • the target area includes the area of the display screen of the terminal in which the face image is not displayed, or all areas of the current terminal display interface except the face image acquisition area; of course, the target area may exclude the navigation menu bar and the uppermost status bar or drop-down menu bar.
  • the mask layer added to the target area of the terminal display interface may be an opaque white mask layer, or may be another type of mask layer, which is not limited herein. In this way, by adding the mask layer to the target area, the display screen of the terminal can emit more white light to illuminate the user's face, effectively improving the success rate of face image recognition in a completely dark environment and making the usage scenarios of face image recognition more extensive.
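The target area itself can be thought of as simple rectangle arithmetic over the display: everything on screen except the face acquisition region, with bars such as the status or navigation bar optionally carved out in the same way. The sketch below is illustrative only; the coordinates are invented.

```python
def mask_rectangles(screen, capture):
    """Split screen-minus-capture into up to four rectangles to be covered by the mask.
    Rects are (x, y, w, h); the capture rect is assumed to lie inside the screen rect."""
    sx, sy, sw, sh = screen
    cx, cy, cw, ch = capture
    candidates = [
        (sx, sy, sw, cy - sy),                   # band above the capture area
        (sx, cy + ch, sw, sy + sh - (cy + ch)),  # band below the capture area
        (sx, cy, cx - sx, ch),                   # band to the left of the capture area
        (cx + cw, cy, sx + sw - (cx + cw), ch),  # band to the right of the capture area
    ]
    return [(x, y, w, h) for (x, y, w, h) in candidates if w > 0 and h > 0]

# Example: 1080x1920 screen with a centred 600x600 capture square -> four mask bands.
print(mask_rectangles((0, 0, 1080, 1920), (240, 660, 600, 600)))
```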
  • when the brightness value of the collected face image is smaller than the first preset threshold, it can be known that the current environment for performing face image recognition is relatively dark. At this time, the brightness of the light emitted from the display screen of the terminal needs to be enhanced to the target brightness value, so that the brightness of the light illuminating the user's face is increased; then, when the face image is re-acquired, a clearer face image can be collected, and the facial features can be extracted more accurately.
  • Step S408 Generate and display prompt information.
  • the prompt information is used to prompt to adjust the angle and/or area of the acquisition.
  • when the terminal determines that the brightness value of the collected face image is greater than the second preset threshold, it can be known that the brightness of the environment in which the terminal is currently located is too bright; light that is too bright may affect the face image collected by the camera module.
  • therefore, when the brightness value of the collected face image is greater than the second preset threshold, the terminal generates and displays prompt information indicating that the light is too bright, for example "the light is too bright, please adjust to a suitable position" or "the light is too bright, please adjust to a suitable angle". In this way, when the ambient light is not ideal, the user can be actively guided to perform the correct operations and is actively informed of the reason why face image verification is unsuccessful, which greatly improves the user experience.
  • Step S409 The face image is re-acquired, and the process returns to step S403.
  • Step S410 When the brightness value of the collected/re-acquired face image conforms to the preset brightness value range, the face recognition is performed according to the collected/re-acquired face image.
  • that the brightness value of the collected/re-acquired face image falls within the preset brightness value range may include: determining that the brightness value of the collected/re-acquired face image is greater than or equal to the first preset threshold and less than the second preset threshold, which indicates that the brightness of the environment in which the terminal is currently located is suitable for face recognition. The user can thus be assured that the ambient brightness is neither too dark nor too bright, avoiding the problem that the accuracy of face image recognition is reduced by light that is too dark or too bright.
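To tie the above steps together, the following control-flow sketch mirrors the loop of FIG. 4 (steps S400 to S410). The callables capture_frame, recognize, enhance_brightness, and prompt_too_bright are hypothetical stand-ins for the camera, the recognition backend, and the UI actions, and the max_attempts cap is a simplification added for the example; frame_brightness is the earlier sketch.

```python
def acquire_until_recognizable(capture_frame, recognize, thresholds,
                               enhance_brightness, prompt_too_bright, max_attempts=5):
    """Re-acquire the face image until its brightness falls in range, then recognise it.
    All callables are hypothetical; frame_brightness is the sketch defined earlier."""
    first, second = thresholds                  # S402: thresholds matched to the device
    for _ in range(max_attempts):
        frame = capture_frame()                 # S400 / S409: collect a face image
        v = frame_brightness(frame)             # S403: brightness from the HSV V channel
        if v < first:                           # S404 -> S405-S407: environment too dark
            enhance_brightness()                # raise screen brightness, add the white mask
        elif v > second:                        # S404 -> S408: environment too bright
            prompt_too_bright()                 # ask the user to adjust the angle/region
        else:                                   # S404 -> S410: brightness within range
            return recognize(frame)
    return None                                 # no usable frame after several attempts
```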
  • with the technical solution of the embodiment of the present invention, the brightness value of the collected face image is calculated; when the brightness value is less than the first preset threshold, the brightness of the light emitted from the display screen of the terminal is enhanced to the target brightness value, and then the face image is re-acquired and the corresponding brightness value is calculated; when the brightness value of the re-acquired face image falls within the preset brightness value range, face recognition is automatically performed according to the re-acquired face image. In this way, unqualified face images are filtered out at the data source, improving the success rate of face recognition.
  • furthermore, when the brightness value of the face image is greater than the second preset threshold, prompt information indicating that the light is too bright is generated and displayed, so that when the ambient light is not ideal, the user can be actively guided to perform the correct operations (such as adjusting the acquisition angle and/or region) and is actively informed of the reason why face image verification is unsuccessful, greatly improving the user experience.
  • in addition, when the light is seriously insufficient, a button for enhancing brightness is generated and displayed to indicate whether the enhanced brightness mode has been turned on; after the mode is turned on, a mask layer is added to the target area of the terminal display interface, effectively improving the success rate of face image recognition in a completely dark environment and making the usage scenarios of face image recognition more extensive.
  • FIG. 7 is a schematic diagram showing an optional functional structure of a face image processing terminal according to an embodiment of the present invention.
  • the face image processing terminal 700 includes: an acquisition module 701, a calculation module 702, an enhancement module 703, and an identification module 704. The function of each module is explained below.
  • the collecting module 701 is configured to collect a face image in response to the face recognition operation instruction
  • the calculating module 702 is configured to calculate a brightness value of the collected face image
  • the enhancement module 703 is configured to: when the brightness value is less than the first preset threshold, enhance the brightness of the light emitted from the display screen of the terminal to the target brightness value, re-collect the face image and calculate the corresponding brightness value;
  • the identification module 704 is configured to perform face recognition according to the re-acquired face image when the brightness value of the re-acquired face image conforms to the preset brightness value range.
  • when calculating the brightness value of the collected face image, the calculation module 702 is specifically configured to: convert the collected face image, according to the color space model used by the terminal, into a color space model based on hue, saturation, and brightness, and extract the corresponding brightness value from the brightness channel of the converted face image.
  • if the operating system type of the terminal is the Android system, image data in the YUV format is collected by default; in this case, the YUV color space model corresponding to the collected face image is first converted into the RGB color space model, the RGB color space model is then converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image;
  • if the operating system type of the terminal is the iOS system, image data in the RGB format is collected by default; in this case, the RGB color space model corresponding to the collected face image is directly converted into the HSV color space model, and finally the corresponding brightness value is extracted from the brightness channel of the converted face image.
  • when the enhancement module 703 enhances the brightness of the light emitted from the display screen of the terminal to the target brightness value, it is specifically configured to: determine the highest brightness that can be achieved by the display screen of the terminal as the target brightness value, and adjust the brightness of the display screen of the terminal to that highest brightness; or,
  • determine a brightness compensation value as the target brightness value, and adjust the brightness of the display screen of the terminal to the brightness compensation value.
  • FIG. 8 is a schematic diagram of another optional functional structure of a face image processing terminal according to an embodiment of the present invention.
  • the face image processing terminal 700 includes an acquisition module 701, a calculation module 702, an enhancement module 703, and an identification module 704.
  • the face image processing terminal 700 may further include an obtaining module 705 configured to acquire the hardware and/or software version information of the terminal;
  • the obtaining module 705 is further configured to obtain a first preset threshold that matches the hardware and/or software version information of the terminal.
  • the face image processing terminal 700 may further include a prompting module 706, configured to output prompt information when the brightness value of the collected face image is greater than the second preset threshold, where the prompt information is used to prompt the user to adjust the acquisition angle and/or region, and to re-acquire the face image and calculate the brightness value of the re-acquired face image.
  • the obtaining module 705 in the face image processing terminal 700 is further configured to acquire hardware and/or software version information of the terminal;
  • the obtaining module 705 is further configured to acquire a second preset threshold that matches hardware and/or software version information of the terminal.
  • the face image processing terminal 700 may further include a display module 707 configured to display a button for enhancing brightness on the display screen of the terminal before the enhancement module 703 enhances the brightness of the light emitted from the display screen of the terminal to the target brightness value.
  • the button is used to indicate whether the mode for enhancing brightness has been turned on.
  • the face image processing terminal 700 may further include an adding module 708, configured to add a mask layer to the target area of the terminal display interface in response to the selection instruction for the button and after the brightness of the light emitted from the display screen of the terminal has been enhanced to the target brightness value;
  • where the target area includes the area of the display screen of the terminal in which the face image is not displayed, or all areas of the current terminal display interface except the face image acquisition area; and the mask layer added to the target area of the terminal display interface may be an opaque white mask layer, or may be another type of mask layer, which is not limited herein.
  • in practical applications, each of the above program modules may be implemented by a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
  • it should be noted that, when the face image processing terminal provided by the embodiment of the present invention performs face image processing, the division into the above program modules is only used as an example; in practical applications, the above processing may be allocated to different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the processing described above.
  • FIG. 9 only shows an exemplary structure of the face image processing terminal, not its entire structure; part or all of the structure shown in FIG. 9 may be implemented as needed.
  • FIG. 9 is a schematic diagram of an optional hardware structure of a face image processing terminal according to an embodiment of the present invention.
  • in practical applications, the face image processing terminal may be implemented as various terminals that run the application.
  • the face image processing terminal 900 shown in FIG. 9 may include at least one processor 901 such as a CPU, at least one communication bus 902, a user interface 903, at least one network interface 904, a memory 905, a display screen 906, and a camera module 907.
  • the various components in the face image processing terminal 900 are coupled together via a communication bus 902.
  • communication bus 902 is used to implement connection communication between these components.
  • the communication bus 902 includes a power bus, a control bus, and a status signal bus in addition to the data bus.
  • the various buses are all labeled as the communication bus 902 in FIG. 9.
  • the user interface 903 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch panel, or a touch screen.
  • Network interface 904 can include a standard wired interface, a wireless interface such as a WIFI interface.
  • the memory 905 can be a high speed RAM memory or a non-volatile memory, such as at least one disk memory.
  • the memory 905 can also be at least one storage system remote from the processor 901.
  • the memory 905 in the embodiment of the present invention is configured to store various types of data to support the operation of the face image processing terminal 900. Examples of the data include any computer program to be run on the face image processing terminal 900, such as an operating system, a network communication module, a user interface module, and a face recognition program; a program implementing the face image processing method of the embodiment of the present invention may be included in the face recognition program.
  • the face image processing method disclosed in the embodiment of the present invention may be applied to the processor 901 or implemented by the processor 901.
  • the processor 901 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the face image processing method may be completed by an integrated logic circuit of hardware in the processor 901 or an instruction in a form of software.
  • the processor 901 described above may be a general purpose processor, a DSP or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • the processor 901 can implement or perform various methods, steps, and logic blocks provided in the embodiments of the present invention.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the face image processing method provided by the embodiment of the present invention may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium, and the storage medium is located in the memory 905.
  • the processor 901 reads the information in the memory 905 and combines the hardware to complete the steps of the face image processing method provided by the embodiment of the present invention.
  • an embodiment of the present invention further provides a storage medium storing an executable program; when the executable program is executed by the processor, the face image processing method provided by the embodiment of the present invention is implemented, for example the face image processing method shown in FIG. 3 or FIG. 4.
  • the storage medium provided by the embodiment of the present invention may be a storage medium such as an optical disk, a flash memory or a magnetic disk, and may be a non-transitory storage medium.
  • the face image processing terminal 700 or the face image processing terminal 900 in the embodiment of the present invention includes, but is not limited to, electronic devices such as a personal computer, a mobile computer, a tablet computer, a mobile phone, a PDA, a smart TV, a smart watch, smart glasses, and a smart bracelet. It is to be understood that, for the functions of the modules in the face image processing terminal 700 or the face image processing terminal 900, reference may be made to the specific implementations of any of the foregoing method embodiments, and details are not described herein again.
  • the face image processing method provided by the embodiment of the present invention can be applied to an application related to face image recognition.
  • for example, it can be applied to all sensitive operations related to face image recognition in the QQ Security Center, such as changing a password, changing the bound phone, or setting the startup password by face verification.
  • the following describes an implementation process of the face image processing method provided by the embodiment of the present invention, taking the terminal being a mobile phone, the application being the QQ Security Center application, and the face recognition operation instruction being the instruction generated by a face-verification password change as an example.
  • the user opens the QQ Security Center application on the mobile phone and uses the face-verification password change function.
  • the QQ security center application reports the hardware and/or software version information of the mobile phone to the background server to query the light threshold of the mobile phone.
  • the light threshold includes a first preset threshold Threshhold_luminance_min and a second preset threshold Threshhold_luminance_max.
  • the background server may allocate a first preset threshold and a second preset threshold that match the hardware and/or software version information of the mobile phone according to the hardware and/or software version information of the received mobile phone, and return it to the mobile phone.
  • the hardware version information of the mobile phone may include, but is not limited to, hardware information such as a model, a brand, a motherboard, or a display identification chip of the mobile phone;
  • the software version information of the mobile phone may include, but is not limited to, software information such as the operating system version and the application currently requesting face image recognition together with its version information.
  • the QQ Security Center application collects the face image by turning on the camera of the mobile phone.
  • if the operating system of the mobile phone is detected to be the Android system, the collected face image is subjected to YUV-RGB-HSV color space model conversion, and the corresponding brightness value V is extracted from the brightness channel of the converted face image; if the operating system of the mobile phone is detected to be the iOS system, the collected face image is subjected to RGB-HSV color space model conversion, and the corresponding brightness value V is extracted from the brightness channel of the converted face image.
  • if the brightness value V is less than the first preset threshold, the QQ Security Center application does not perform face image recognition immediately, but actively prompts the user, for example with the prompt message "the light is too dark, turn on the enhanced brightness mode" shown in FIG. 5, to inform the user that the light in the current environment of the mobile phone is too dark.
  • if the user clicks the option button for enhancing brightness, the enhanced brightness mode is turned on: all areas outside the face image acquisition area of the QQ Security Center application interface are changed from translucent to an opaque white mask, and the brightness of the light emitted from the display screen of the mobile phone is enhanced to the target brightness value, so that more of the light emitted by the display screen of the mobile phone illuminates the user's face.
  • the target brightness value may be the highest brightness that can be achieved by the display screen of the mobile phone, or the brightness compensation value determined according to the difference between the light brightness of the environment in which the mobile phone is located and the required ambient light brightness, or may be determined by an empirical value.
  • When the brightness value V is greater than Threshhold_luminance_max, the QQ Security Center application generates and displays a prompt message that prompts the user to adjust the angle and/or area of collection.
  • When the brightness value V of the collected face image falls within [Threshhold_luminance_min, Threshhold_luminance_max], face recognition is performed according to the collected face image.
  • The user then enters the interface for modifying the QQ password and performs the password change.
  • In summary, the embodiments of the present invention can achieve, among others, the following beneficial effect:
  • When the light is severely insufficient, a button for enhancing brightness is generated and displayed to indicate whether the enhanced brightness mode has been turned on; after the mode is turned on, a mask layer is added in the target area of the terminal display interface, which effectively improves the success rate of face image recognition in a completely dark environment and broadens the scenarios in which face image recognition can be used.
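
To make the walkthrough above concrete, the following is a minimal Python sketch of the threshold lookup and brightness check it describes. It is illustrative only: the contents of `DEVICE_THRESHOLDS`, the numeric thresholds, and the function names are assumptions and not part of the patent; the mean brightness follows the description's rule that V = MAX(R, G, B)/255.

```python
# Minimal sketch of the threshold lookup and brightness check described above.
# Table contents, numeric thresholds, and function names are illustrative assumptions.
import numpy as np

# Hypothetical server-side table: light thresholds matched to hardware/software version info,
# expressed here as (Threshhold_luminance_min, Threshhold_luminance_max) on a 0..1 scale.
DEVICE_THRESHOLDS = {
    ("BrandX Model1", "Android 7.0"): (0.25, 0.85),
    ("BrandY Model2", "iOS 10"): (0.30, 0.90),
}
DEFAULT_THRESHOLDS = (0.25, 0.85)

def query_light_thresholds(hardware_info, software_info):
    """Return the (min, max) thresholds matching the device, as the background server would."""
    return DEVICE_THRESHOLDS.get((hardware_info, software_info), DEFAULT_THRESHOLDS)

def mean_brightness_value(frame_rgb):
    """Mean HSV brightness V of an RGB frame; the description defines V = MAX(R, G, B) / 255."""
    return float(frame_rgb.astype(np.float32).max(axis=2).mean()) / 255.0

def process_face_frame(frame_rgb, hardware_info, software_info):
    """Decide what to do with a captured frame based on its mean brightness value V."""
    v_min, v_max = query_light_thresholds(hardware_info, software_info)
    v = mean_brightness_value(frame_rgb)
    if v < v_min:
        return "too_dark"    # prompt the user, enhance screen brightness, add the mask layer, recapture
    if v > v_max:
        return "too_bright"  # prompt the user to adjust the capture angle and/or area, recapture
    return "recognize"       # brightness is within range: run face recognition on this frame
```

In this sketch, the "too_dark" branch corresponds to turning on the enhanced brightness mode, the "too_bright" branch to the adjustment prompt, and the "recognize" branch to performing face recognition and proceeding to the password change.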
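
The target brightness value can be obtained either by jumping directly to the screen's maximum brightness or by computing a compensation value from the ambient-light difference. Below is a minimal sketch of the second option; the 0..1 brightness scale, the coefficient 1.2, and the `set_brightness` callback are assumptions used only for illustration.

```python
# Illustrative sketch of the brightness compensation value; scale, coefficient, and
# the set_brightness callback are assumptions, not part of the patent.
def brightness_compensation(current_ambient, required_ambient, coefficient=1.2, max_brightness=1.0):
    """Compensation value = (required - current) ambient brightness, optionally multiplied by a
    coefficient greater than 1 to offset scattering losses, capped at the screen's maximum."""
    difference = max(required_ambient - current_ambient, 0.0)
    return min(difference * coefficient, max_brightness)

def raise_screen_brightness(set_brightness, current_ambient, required_ambient, step=0.1):
    """Gradually adjust the screen brightness up to the compensation value, as described above."""
    target = brightness_compensation(current_ambient, required_ambient)
    level = 0.0
    while level < target:
        level = min(level + step, target)
        set_brightness(level)  # platform-specific callback that applies the brightness (assumed)
    return target
```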

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Telephone Function (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A face image processing method, a terminal, and a storage medium. The method includes: collecting a face image in response to a face recognition operation instruction; calculating a brightness value of the collected face image; when the brightness value is less than a first preset threshold, enhancing the brightness of the light emitted from a display screen of the terminal to a target brightness value, re-collecting a face image, and calculating the corresponding brightness value; and when the brightness value of the re-collected face image falls within a preset brightness value range, performing face recognition according to the re-collected face image.

Description

人脸图像处理方法、终端及存储介质
相关申请的交叉引用
本申请基于申请号为201611046965.X、申请日为2016年11月23日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的内容在此以引入方式并入本申请。
技术领域
本发明涉及计算机领域中的图像技术,尤其涉及一种人脸图像处理方法、终端及存储介质。
背景技术
随着电子技术以及互联网特别是移动互联网的快速发展,电子设备特别是智能移动设备的功能越来越强大,用户可以根据自身的需求在电子设备上安装各种应用程序,以完成各种事务。例如,通过安装在电子设备上的应用程序实现图像识别的过程。
目前,在图像识别技术领域中,指纹、人脸、虹膜等生物识别技术已经广泛应用到移动生活中很多领域。在这些生物特征中,相比于人脸图像识别,指纹识别和虹膜识别对电子设备的硬件要求更高,因此,人脸图像识别的应用相对更广泛些。例如,在电子设备的安全中心上进行敏感操作,比如修改密码、支付金额或修改支付限额等时,需要进行人脸图像识别以实现对用户身份的鉴定。可见,人脸图像识别的成功率直接影响到用户使用体验。而对于如何提高人脸图像识别的成功率,相关技术尚无有效解决方案。
发明内容
有鉴于此,本发明实施例期望提供一种人脸图像处理方法、终端及存储介质,至少用以解决相关技术中难以有效提高人脸图像识别的成功率的问题。
为了解决上述技术问题,本发明实施例的技术方案是这样实现的:
第一方面,本发明实施例提供一种人脸图像处理方法,包括:
响应于人脸识别操作指令,采集人脸图像;
计算采集到的人脸图像的亮度值;
当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
第二方面,本发明实施例提供一种人脸图像处理方法,应用于终端,所述终端包括有一个或多个处理器以及存储器,以及一个或一个以上的程序,其中,所述一个或一个以上的程序存储于存储器中,所述程序可以包括一个或一个以上的每一个对应于一组指令的单元,所述一个或多个处理器被配置为执行指令;所述方法包括:
响应于人脸识别操作指令,采集人脸图像;
计算采集到的人脸图像的亮度值;
当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
第三方面,本发明实施例提供一种人脸图像处理终端,包括:
采集模块,配置为响应于人脸识别操作指令,采集人脸图像;
计算模块,配置为计算采集到的人脸图像的亮度值;
增强模块,配置为当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
识别模块,配置为当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
第四方面,本发明实施例提供一种存储介质,存储有可执行程序,所述可执行程序被处理器执行时,实现本发明实施例提供的人脸图像处理方法。
第五方面,本发明实施例还提供一种人脸图像处理终端,包括:
存储器,配置为存储可执行程序;
处理器,配置为执行所述存储器中存储的可执行程序时,实现本发明实施例提供的人脸图像处理方法。
采用本发明实施例的技术方案,通过计算采集到的人脸图像的亮度值,当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,然后再重新采集人脸图像并计算对应的亮度值;当所重新采集的人脸图像的亮度值符合预设亮度值范围时,才自动根据所重新采集的人脸图像进行人脸识别。如此,可以从数据源头上减少不合格的人脸图像数据源,进而能够有效提高人脸图像识别的成功率。
附图说明
为了更清楚地说明本发明实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来说,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的HSV颜色空间模型示意图;
图2是本发明实施例提供的人脸图像处理方法的一可选的应用场景示意图;
图3是本发明实施例提供的人脸图像处理方法的一可选的流程示意图;
图4是本发明实施例提供的人脸图像处理方法的另一可选的的流程示意图;
图5是本发明实施例提供的开启增强亮度模式的一可选的界面示意图;
图6是本发明实施例提供的在终端显示界面的目标区域中添加遮罩层的一可选的界面示意图;
图7是本发明实施例提供的人脸图像处理终端的一可选的功能结构示意图;
图8是本发明实施例提供的人脸图像处理终端的另一可选的功能结构示意图;
图9是本发明实施例提供的人脸图像处理终端的一可选的硬件结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明部分实施例,而不是全部的实施例。基于本发明实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
对本发明实施例进行进一步详细说明之前,对本发明实施例中涉及的名词和术语进行说明,本发明实施例中涉及的名词和术语适用于如下的解释。
1)RGB颜色空间模型,由红色(Red)、绿色(Green)和蓝色(Blue)三种色光按照不同的比例混合而成的色彩空间模型。其中,色彩空间用数 学方式如几何上的坐标空间来描述颜色集合。每个颜色分量的取值范围为[0,255]。采用IOS操作系统的电子设备如苹果手机的摄像头默认采集的是RGB格式的图像数据。
2)YUV颜色空间模型,由亮度和色度按照不同的比例混合而成的色彩空间模型。其中,Y表示亮度(Luminance或Luma),也就是灰阶值;U和V表示色度(Chrominance或Chroma),用于描述图像的色彩及饱和度。采用安卓(Android)操作系统的电子设备的摄像头默认采集的是YUV格式的图像数据。
YUV颜色空间的特点是亮度信号Y和色度信号U、V分离。若只有Y信号分量而没有U、V信号分量,那么,所表示的图像就是黑白灰度图像。彩色电视采用YUV颜色空间模型,正是为了利用亮度信号Y来解决彩色电视机与黑白电视机的兼容问题,使黑白电视机也能接收彩色电视信号。
相比于RGB颜色空间模型中三个颜色重要程度相同而言,YUV颜色空间模型可以大量压缩颜色分量,从而使存储空间尽量减少;因此,YUV颜色空间模型在网络传输中得到大量应用。
3)HSV颜色空间模型,由色调(Hue)、饱和度(Saturation)和明度(Value)组合而成的色彩空间模型,也称为六角椎体模型。该模型具有颜色的直观特性的优点,易于观察,应用较广泛。参见图1,图1是本发明实施例提供的HSV颜色空间模型示意图,HSV颜色空间模型中表示颜色的参数分量分别为:色调(H)、饱和度(S)和明度(V)。
分量H,颜色的色相,取值为[0,360),整数型。用角度度量,从红色开始按逆时针方向计算,红色为0°,绿色为120°,蓝色为240°。
分量S,颜色的纯度,即颜色接近光谱色的程度,取值为[0,1),浮点数型。光谱色所占比例越大,颜色接近光谱色的程度就越高,颜色的饱和度也就越高。
分量V,颜色明亮的程度,取值为[0,1),浮点数型。对于光源色来说,V与发光体的光亮度有关;对于物体色来说,V与物体的透射比或反射比有关。
为了更好理解本发明实施例提供的人脸图像处理方法及人脸图像处理终端的技术方案,下面对本发明实施例适用的应用场景进行说明。参见图2,图2是本发明实施例提供的人脸图像处理方法的一可选的应用场景示意图,在图2中,用户通过移动终端进行各项操作的过程中,当需要进行人脸识别,例如,运行某应用程序的过程中需要设置密码、进行支付交易等时,移动终端将开启摄像模块如摄像头采集用户的人脸图像,并对所采集的人脸图像进行识别和验证,若验证通过,则可继续后续的操作如设置密码、进行支付交易等;若验证失败,则提示用户此次验证失败,将不能进行后续的操作。
本发明实施例中,移动终端可以包括但不限于移动电话、移动电脑、平板电脑、个人数字助理(PDA,Personal Digital Assistant)、媒体播放器、智能电视、智能手表、智能眼镜、智能手环等具有摄像模块或可以连接摄像模块进行人脸图像采集的电子设备。
基于图2所示的一可选的应用场景示意图,参见图3,图3是本发明实施例提供的人脸图像处理方法的一可选的流程示意图,人脸图像处理方法可以应用于前述所示的各种类型的移动终端,本发明实施例在此不做限定;如图3所示,本发明实施例中人脸图像处理方法的实现流程,可以包括以下步骤:
步骤S300:响应于人脸识别操作指令,采集人脸图像。
在一实施例中,移动终端进行各项操作的过程中,当需要进行人脸识别时,即接收到系统产生或用户输入的人脸识别操作指令时,可以通过开启移动终端的摄像模块采集人脸图像。其中,摄像模块可以包括移动终端 自身具有的摄像头,或者与移动终端连接的电子设备所具有的摄像头。
需要说明的是,本发明实施例的人脸识别操作指令可以是基于用户在移动终端的安全中心上进行敏感操作,例如修改密码、支付金额或修改支付限额等时触发产生的。本发明实施例中的移动终端也可以简称为终端。
步骤S301:计算采集到的人脸图像的亮度值。
在本发明可选实施例中,计算采集到的人脸图像的亮度值,可以采用如下方式实现:根据终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
举例来说,如果终端的操作系统类型为Android系统,则默认采集的是YUV格式的图像数据,此时,将所采集的人脸图像对应的YUV颜色空间模型先转换到RGB颜色空间模型,然后再由RGB颜色空间模型转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值;如果终端的操作系统类型为IOS系统,则默认采集的是RGB格式的图像数据,此时,将所采集的人脸图像对应的RGB颜色空间模型直接转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值。
步骤S302:当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值。
在本发明可选实施例中,增强从终端的显示屏发射出去的光线亮度至目标亮度值,可以采用如下方式实现:
方式1)确定终端的显示屏能够实现的最高亮度为目标亮度值,将终端的显示屏亮度调节至最高亮度。
对于上述方式1)来说,由于不同终端的显示屏分辨率可能不同,导致不同终端的显示屏所能够实现的亮度不同,因此,将终端的显示屏能够实现的最高亮度作为目标亮度值,基于最高亮度一次性地增强终端的显示屏 亮度,即将终端的显示屏亮度调节至最高亮度。这样,无需反复调整终端的显示屏亮度,提升人脸图像识别的效率。
方式2)根据终端所处环境的光线亮度与要求的环境光线亮度的差值,确定亮度补偿值为目标亮度值,将终端的显示屏亮度调节至亮度补偿值。
对于上述方式2)来说,可以根据当前终端所处环境的光线亮度,以及人脸图像识别的理想亮度即所要求的环境光线亮度,来增强终端的显示屏亮度。这里,将所要求的环境光线亮度值减去当前终端所处环境的光线亮度值,可以获得一差值,其中,所获得的差值可以与亮度补偿值等同。这样,就可以使采集的人脸图像满足识别的理想亮度(所要求的环境光线亮度),进而确保基于采集人脸图像识别结果的精度。
另外,可将所获得的差值与大于1的一补偿系数的乘积作为亮度补偿值;根据亮度补偿值逐渐对终端的显示屏亮度进行调节,以将终端的显示屏亮度调节至亮度补偿值。这样,可以有效避免光线的散射对补偿的光线亮度产生的负面影响。
可以理解,除了采用上述方式1)和方式2)的方法来增强从终端的显示屏发射出去的光线亮度,还可以先根据经验值确定目标亮度值,再增强从终端的显示屏发射出去的光线亮度至由经验值确定的目标亮度值。
在本发明可选实施例中,在增强从终端的显示屏发射出去的光线亮度至目标亮度值之前,人脸图像处理方法还包括:在终端的显示屏显示用于增强亮度的按钮,按钮用于提示是否已经开启增强亮度的模式;响应于针对按钮的选中指令,并增强从终端的显示屏发射出去的光线亮度至目标亮度值后,在终端显示界面的目标区域中添加遮罩层;其中,目标区域包括终端的显示屏中未显示人脸图像的区域。
这里,在终端显示界面的目标区域中添加的遮罩层,可以是不透明白色遮罩层,当然还可以是其他类型的遮罩层,本发明实施例在此不做限定。 这样,通过在目标区域中设置遮罩层,可以有效提高在完全黑暗的环境下通过人脸图像识别的成功率,使得人脸图像识别的使用场景更加广泛。
在本发明一可选实施例中,可以通过以下方式获得第一预设阈值:获取终端的硬件和/或软件版本信息;获取与终端的硬件和/或软件版本信息相匹配的第一预设阈值。可见,该方式是直接由终端自身获得第一预设阈值。可以理解,在另一可选实施例中,可以由服务器来获得第一预设阈值,即:终端将自身的硬件和/或软件版本信息发送给服务器,由服务器根据终端的硬件和/或软件版本信息分配相应的第一预设阈值,并将分配的第一预设阈值返回给终端。
其中,终端的硬件版本信息可以包括但不限于终端的机型、品牌、主板或显示识别芯片等硬件信息;终端的软件版本信息可以包括但不限于操作系统版本、当前请求进行人脸图像识别的应用程序及其版本信息等软件信息。
在一实施例中,对于如何判断所采集的人脸图像的亮度值小于第一预设阈值而言,可以直接通过将人脸图像的亮度值与第一预设阈值进行比较来实现,也可以通过判断所采集的人脸图像的亮度值落入哪个预设亮度值范围来实现。可理解的是,第一预设阈值可以为某一亮度值范围a的最小值,那么,当所采集的人脸图像的亮度值落入的亮度值范围b小于亮度值范围a时,即可判断出所采集的人脸图像的亮度值小于第一预设阈值。
当判断出所采集的人脸图像的亮度值小于第一预设阈值时,可以获知当前进行人脸图像识别的环境比较黑暗,此时,需要增强从终端的显示屏发射出去的光线亮度至目标亮度值,从而使得照射在用户脸部的光线亮度增加,然后在重新采集人脸图像时,就可以采集到更加清晰的人脸图像,从而可以更加准确地提取出人脸面部特征。
在本发明可选实施例中,人脸图像处理方法还包括:当所采集的人脸 图像的亮度值大于第二预设阈值时,输出提示信息,提示信息用于提示调整采集的角度和/或区域;重新采集人脸图像,并计算所重新采集到的人脸图像的亮度值。这样,当所采集的人脸图像的亮度值大于第二预设阈值时,生成并显示光线过亮的提示信息,实现了在周围光线不理想时,能主动引导用户进行正确操作(如调整采集的角度和/或区域等),通过主动提醒用户人脸图像验证不成功的原因,大大提高了用户使用体验。
在本发明一可选实施例中,第二预设阈值可以通过以下方式获得:获取终端的硬件和/或软件版本信息;获取与终端的硬件和/或软件版本信息相匹配的第二预设阈值,可见,该方式是直接由终端自身获得第二预设阈值。当然,在另一可选实施例中,可以由服务器来获得第二预设阈值,即:终端将自身的硬件和/或软件版本信息发送给服务器,由服务器根据终端的硬件和/或软件版本信息分配相应的第二预设阈值,并将所分配的第二预设阈值返回给终端。
步骤S303:当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
在一实施例中,所重新采集的人脸图像的亮度值符合预设亮度值范围可以包括:判断出所重新采集的人脸图像的亮度值大于或等于第一预设阈值且小于第二预设阈值,表明当前终端所处环境的光线亮度适合进行人脸识别,那么,用户可以获知当前终端所处环境的光线亮度不会过暗或过亮,避免了因光线亮度过暗或过亮所导致的人脸图像识别的准确性降低的问题。
采用本发明实施例的技术方案,通过响应于人脸识别操作指令,对采集到的人脸图像的亮度值进行计算;当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,然后再重新采集人脸图像并计算对应的亮度值;当所重新采集的人脸图像的亮度值符合预设亮 度值范围时,才自动根据所重新采集的人脸图像进行人脸识别。如此,可以从数据源头上减少不合格的人脸图像数据源,进而能够有效提高人脸图像识别的成功率。
下面以一具体实施例对本发明实施例人脸图像处理方法的具体实现过程做进一步地详细说明。
参见图4,图4是本发明实施例提供的人脸图像处理方法的另一可选的流程示意图,人脸图像处理方法可以应用于前述所示的各种类型的移动终端,本发明实施例在此不做限定;如图4所示,本发明实施例中人脸图像处理方法的具体实现流程,可以包括如下步骤:
步骤S400:根据人脸识别操作指令,采集人脸图像。
在一实施例中,移动终端进行各项操作的过程中,当需要进行人脸识别时,即接收到系统产生或用户输入的人脸识别操作指令时,可以通过开启移动终端的摄像模块采集人脸图像。其中,摄像模块可以包括移动终端自身具有的摄像头,或者与移动终端连接的电子设备所具有的摄像头。
需要说明的是,本发明实施例的人脸识别操作指令可以是基于用户在移动终端的安全中心上进行敏感操作,例如修改密码、支付金额或修改支付限额等时触发产生的。本发明实施例中的移动终端也可以简称为终端。
步骤S401:将终端的硬件和/或软件版本信息发送给服务器。
在一实施例中,终端可以在开启摄像模块采集人脸图像的同时,将自身的硬件和/或软件版本信息发送给服务器;在另一实施例中,可以是在终端采集人脸图像之前,将自身的硬件和/或软件版本信息周期性地发送给服务器,比如按照周期为1天或2天的频率发送终端的硬件和/或软件版本信息,本发明实施例在此不作限定。其中,终端的硬件版本信息可以包括但不限于终端的机型、品牌、主板或显示识别芯片等硬件信息;终端的软件版本信息可以包括但不限于操作系统版本、当前请求进行人脸图像识别的 应用程序及其版本信息等软件信息。
可理解的是,终端可以通过当前请求进行人脸图像识别的应用程序从后台向对应的服务器发送终端的硬件和/或软件版本信息,以获取相匹配的光线阈值。
步骤S402:接收服务器返回的与终端的硬件和/或软件版本信息相匹配的第一预设阈值和第二预设阈值。
在一实施例中,服务器接收到终端的硬件和/或软件版本信息后,即可以从数据库中查找相匹配的用于进行人脸图像识别的光线阈值,光线阈值可以包括第一预设阈值和第二预设阈值。其中,第一预设阈值小于第二预设阈值。服务器将查找到的第一预设阈值和第二预设阈值返回给终端,即终端接收到第一预设阈值和第二预设阈值。
除了采用上述步骤S401和步骤S402的方法来获取第一预设阈值和第二预设阈值之外,在本发明一可选实施例中,还可以通过以下方式获得第一预设阈值或第二预设阈值:获取终端的硬件和/或软件版本信息;获取与终端的硬件和/或软件版本信息相匹配的第一预设阈值和/或第二预设阈值。可见,该方式是直接由终端自身获得第一预设阈值和/或第二预设阈值。
步骤S403:计算采集到的人脸图像的亮度值。
在本发明可选实施例中,计算采集到的人脸图像的亮度值,可以采用如下方式实现:根据终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
举例来说,如果终端的操作系统类型为Android系统,则默认采集的是YUV格式的图像数据,此时,将所采集的人脸图像对应的YUV颜色空间模型先转换到RGB颜色空间模型,然后再由RGB颜色空间模型转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值; 如果终端的操作系统类型为IOS系统,则默认采集的是RGB格式的图像数据,此时,将所采集的人脸图像对应的RGB颜色空间模型直接转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值。
在本发明实施例中,若终端的操作系统默认采集到的图像数据为RGB格式,那么,可以将RGB颜色空间模型转换为HSV颜色空间模型的算法transformRGB2HSV定义如下:
令MAX为R、G、B三个分量的最大值;MIN为三个分量的最小值。
若MAX=MIN,则:
H=0
S=0
V=MAX/255
若MAX≠MIN
当G≥B时
H=(MAX-R’+G’-MIN+B’-MIN)/(MAX-MIN)×60
S=1-MIN/MAX
V=MAX/255
当G<B时
H=360-(MAX-R’+G’-MIN+B’-MIN)/(MAX-MIN)×60
S=1-MIN/MAX
V=MAX/255
若终端的操作系统默认采集到的图像数据为YUV格式,那么,可以先将YUV颜色空间模型转为RGB颜色空间模型的算法transformYUV2RGB定义如下:
R=Y+1.14V
G=Y-0.39U-0.58V
B=Y+2.03U
然后,再通过上述将RGB颜色空间模型转换为HSV颜色空间模型的 算法transformRGB2HSV来计算出采集到的人脸图像的亮度值。
需要说明的是,当采集到的人脸图像为YUV格式时,也可以直接将Y分量作为人脸图像的亮度值。上述颜色空间模型之间的转换算法仅是本发明一可选实施例,本发明实施例不限定如何完成各个颜色空间模型之间的转换。
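As an illustration only, the following Python sketch renders the transformYUV2RGB formulas above and the S and V components of the transformRGB2HSV formulas per pixel. It assumes Y is in [0, 255] and that U and V are signed chroma components centered on zero, and it omits H because the H formula above uses primed components (R', G', B') that the text does not define; these are assumptions for the sketch, not part of the claimed method.

```python
# Illustrative per-pixel rendering of the conversions defined above (assumptions noted in the lead-in).

def transform_yuv2rgb(y, u, v):
    """YUV -> RGB using R = Y + 1.14V, G = Y - 0.39U - 0.58V, B = Y + 2.03U.
    Assumes Y in [0, 255] and signed U, V centered on zero."""
    r = y + 1.14 * v
    g = y - 0.39 * u - 0.58 * v
    b = y + 2.03 * u
    clamp = lambda x: min(max(x, 0.0), 255.0)  # keep components in the valid byte range
    return clamp(r), clamp(g), clamp(b)

def rgb_saturation_and_value(r, g, b):
    """RGB -> (S, V) following S = 1 - MIN/MAX and V = MAX/255 (S = 0 when MAX = MIN)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0, mx / 255.0
    return 1.0 - mn / mx, mx / 255.0

def mean_v_from_yuv_pixels(yuv_pixels):
    """Mean brightness V over an iterable of (Y, U, V) pixels, e.g. from an Android camera frame."""
    values = [rgb_saturation_and_value(*transform_yuv2rgb(y, u, v))[1] for (y, u, v) in yuv_pixels]
    return sum(values) / len(values) if values else 0.0
```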
步骤S404:判断人脸图像的亮度值与第一预设阈值和第二预设阈值之间的关系,若亮度值小于第一预设阈值,则执行步骤S405;若亮度值大于第二预设阈值,则执行步骤S408;否则执行步骤S410。
步骤S405:在终端的显示屏显示用于增强亮度的按钮。
这里,按钮用于提示是否已经开启增强亮度的模式,若已经开启增强亮度的模式,则执行步骤S406;否则,执行步骤S409。
步骤S406:响应于针对按钮的选中指令,增强从终端的显示屏发射出去的光线亮度至目标亮度值。
参见图5,图5是本发明实施例提供的开启增强亮度模式的一可选的界面示意图,在图5中,当终端判断出采集到的人脸图像的亮度值小于第一预设阈值时,获知当前终端所处环境的光线亮度过暗,此时,可以在终端界面中显示“当前光线太暗”或“光线太暗,开启增强亮度模式吧”等提示信息,以提示用户当前终端所处环境的光线亮度过暗,并同时在终端的显示屏显示用于增强亮度的选项按钮,选项按钮用于提示是否已经开启增强亮度的模式。若用户点击选项按钮,那么即表明用户输入了针对选项按钮的选中指令,可开启增强亮度的模式。
在本发明可选实施例中,增强从终端的显示屏发射出去的光线亮度至目标亮度值,可以采用如下方式实现:
方式1)确定终端的显示屏能够实现的最高亮度为目标亮度值,将终端的显示屏亮度调节至最高亮度。
对于上述方式1)来说,由于不同终端的显示屏分辨率可能不同,导致不同终端的显示屏所能够实现的亮度不同,因此,将终端的显示屏能够实现的最高亮度作为目标亮度值,基于最高亮度一次性地增强终端的显示屏亮度,即将终端的显示屏亮度调节至最高亮度。这样,无需反复调整终端的显示屏亮度,提升人脸图像识别的效率。
方式2)根据终端所处环境的光线亮度与要求的环境光线亮度的差值,确定亮度补偿值为目标亮度值,将终端的显示屏亮度调节至亮度补偿值。
对于上述方式2)来说,可以根据当前终端所处环境的光线亮度,以及人脸图像识别的理想亮度即所要求的环境光线亮度,来增强终端的显示屏亮度。这里,将所要求的环境光线亮度值减去当前终端所处环境的光线亮度值,可以获得一差值,其中,所获得的差值可以与亮度补偿值等同。这样,就可以使采集的人脸图像满足识别的理想亮度(所要求的环境光线亮度),进而确保基于采集人脸图像识别结果的精度。
另外,可将所获得的差值与大于1的一补偿系数的乘积作为亮度补偿值;根据亮度补偿值逐渐对终端的显示屏亮度进行调节,以将终端的显示屏亮度调节至亮度补偿值。这样,可以有效避免光线的散射对补偿的光线亮度产生的负面影响。
可以理解,除了采用上述方式1)和方式2)的方法来增强从终端的显示屏发射出去的光线亮度,还可以先根据经验值确定目标亮度值,再增强从终端的显示屏发射出去的光线亮度至由经验值确定的目标亮度值。
步骤S407:在终端显示界面的目标区域中添加遮罩层,然后执行步骤S409。
参见图6,图6是本发明实施例提供的在终端显示界面的目标区域中添加遮罩层的一可选的界面示意图,在图6中,目标区域包括终端的显示屏中未显示人脸图像的区域,或者说目标区域包括当前终端显示界面中人脸 图像采集区域以外的所有区域,当然目标区域可以不包括导航菜单栏和最上方的运行状态栏或下拉菜单栏等区域。在终端显示界面的目标区域中添加的遮罩层,可以是不透明白色遮罩层,当然也可以是其他类型的遮罩层,本发明实施例在此不做限定。这样,通过在目标区域中添加遮罩层可以使终端的显示屏发射出更多的白色光,以照亮用户的面部,有效提高在完全黑暗的环境下通过人脸图像识别的成功率,使得人脸图像识别的使用场景更加广泛。
当判断出所采集的人脸图像的亮度值小于第一预设阈值时,可以获知当前进行人脸图像识别的环境比较黑暗,此时,需要增强从终端的显示屏发射出去的光线亮度至目标亮度值,从而使得照射在用户脸部的光线亮度增加,然后在重新采集人脸图像时,就可以采集到更加清晰的人脸图像,从而可以更加准确地提取出人脸面部特征。
步骤S408:生成并显示提示信息。
这里,提示信息用于提示调整采集的角度和/或区域。
在一实施例中,当终端判断出所采集的人脸图像的亮度值大于第二预设阈值时,即获知当前终端所处环境的光线亮度过亮,而光线亮度过亮同样会影响摄像模块采集到的人脸图像的亮度值。那么,当所采集的人脸图像的亮度值大于第二预设阈值时,终端生成并显示光线过亮的提示信息,例如,“光线过亮,请调整到合适位置”或“光线过亮,请调整到合适角度”等提示信息,实现了在周围光线不理想时,能主动引导用户进行正确操作,通过主动提醒用户人脸图像验证不成功的原因,大大提高了用户使用体验。
步骤S409:重新采集人脸图像,返回步骤S403。
步骤S410:当所采集/重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所采集/重新采集的人脸图像进行人脸识别。
在一实施例中,所采集/重新采集的人脸图像的亮度值符合预设亮度值 范围,可以包括:判断出所采集/重新采集的人脸图像的亮度值大于或等于第一预设阈值且小于第二预设阈值,表明当前终端所处环境的光线亮度适合进行人脸识别,那么,用户可以获知当前终端所处环境的光线亮度不会过暗或过亮,避免了因光线亮度过暗或过亮所导致的人脸图像识别的准确性降低的问题。
采用本发明实施例的技术方案,通过计算采集到的人脸图像的亮度值,当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,然后再重新采集人脸图像并计算对应的亮度值;当所重新采集的人脸图像的亮度值符合预设亮度值范围时,自动根据所重新采集的人脸图像进行人脸识别,从而可以从数据源头上减少不合格的人脸图像数据源,提高人脸识别的成功率;并且,当人脸图像的亮度值大于第二预设阈值时,生成并显示光线过亮的提示信息,实现了在周围光线不理想时,能主动引导用户进行正确操作(如调整采集的角度和/或区域等),通过主动提醒用户人脸图像验证不成功的原因,大大提高了用户使用体验;另外,在光线严重不足时,生成并显示用于增强亮度的按钮以提示是否已经开启增强亮度的模式,开启后在终端显示界面的目标区域中添加遮罩层,有效提高在完全黑暗的环境下通过人脸图像识别的成功率,使得人脸图像识别的使用场景更加广泛。
为了便于更好地实施本发明实施例提供的人脸图像处理方法,本发明实施例还提供了一种人脸图像处理终端,下面结合附图来进行详细说明。参见图7,图7是本发明实施例提供的人脸图像处理终端的一可选的功能结构示意图,人脸图像处理终端700包括:采集模块701、计算模块702、增强模块703和识别模块704,下面对各模块的功能进行说明。
采集模块701,配置为响应于人脸识别操作指令,采集人脸图像;
计算模块702,配置为计算采集到的人脸图像的亮度值;
增强模块703,配置为当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
识别模块704,配置为当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
就计算模块702计算采集到的人脸图像的亮度值来说,具体配置为:根据终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
举例来说,如果终端的操作系统类型为Android系统,则默认采集的是YUV格式的图像数据,此时,将所采集的人脸图像对应的YUV颜色空间模型先转换到RGB颜色空间模型,然后再由RGB颜色空间模型转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值;如果终端的操作系统类型为IOS系统,则默认采集的是RGB格式的图像数据,此时,将所采集的人脸图像对应的RGB颜色空间模型直接转换到HSV颜色空间模型,最后再从转换后的人脸图像的明度通道提取相应的亮度值。
就增强模块703增强从终端的显示屏发射出去的光线亮度至目标亮度值来说,具体配置为:确定终端的显示屏能够实现的最高亮度为目标亮度值,将终端的显示屏亮度调节至最高亮度;或者,
根据终端所处环境的光线亮度与要求的环境光线亮度的差值,确定亮度补偿值为目标亮度值,将终端的显示屏亮度调节至亮度补偿值。
参见图8,图8是本发明实施例提供的人脸图像处理终端的另一可选的功能结构示意图,人脸图像处理终端700除了包括采集模块701、计算模块702、增强模块703和识别模块704之外,人脸图像处理终端700还可以包括:获取模块705,配置为获取终端的硬件和/或软件版本信息;获取模块 705,还配置为获取与终端的硬件和/或软件版本信息相匹配的第一预设阈值。
人脸图像处理终端700还可以包括提示模块706,配置为当所采集的人脸图像的亮度值大于第二预设阈值时,输出提示信息,提示信息用于提示调整采集的角度和/或区域;重新采集人脸图像,并计算所重新采集到的人脸图像的亮度值。
这里,人脸图像处理终端700中的获取模块705,还配置为获取终端的硬件和/或软件版本信息;
获取模块705,还配置为获取与终端的硬件和/或软件版本信息相匹配的第二预设阈值。
这里,人脸图像处理终端700还可以包括显示模块707,配置为当增强模块703增强从终端的显示屏发射出去的光线亮度至目标亮度值之前,在终端的显示屏显示用于增强亮度的按钮,按钮用于提示是否已经开启增强亮度的模式。
人脸图像处理终端700还可以包括添加模块708,配置为响应于针对按钮的选中指令,并增强从终端的显示屏发射出去的光线亮度至目标亮度值后,在终端显示界面的目标区域中添加遮罩层。
其中,目标区域包括终端的显示屏中未显示人脸图像的区域,或者说目标区域包括当前终端显示界面中人脸图像的采集区域以外的所有区域;在终端显示界面的目标区域中添加的遮罩层,可以是不透明白色遮罩层,当然也可以是其他类型的遮罩层,本发明实施例在此不做限定。
在实际应用中,上述各程序模块可由中央处理器(CPU,Central Processing Unit)、微处理器(MPU,Micro Processor Unit)、数字信号处理器(DSP,Digital Signal Processor)、或现场可编程门阵列(FPGA,Field Programmable Gate Array)等实现。
需要说明的是:本发明实施例提供的人脸图像处理终端在进行人脸图像处理时,仅以上述各程序模块的划分进行举例说明,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将装置的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。
现在将参考附图描述实现本发明实施例的人脸图像处理终端,人脸图像处理终端可以以各种形式的终端,如台式机电脑、笔记本电脑或智能手机等来实施。下面对本发明实施例的人脸图像处理终端的硬件结构做进一步说明,可以理解,图9仅仅示出了人脸图像处理终端的示例性结构而非全部结构,根据需要可以实施图9示出的部分结构或全部结构。
参见图9,图9是本发明实施例提供的人脸图像处理终端的一可选的硬件结构示意图,实际应用中可以应用于前述运行应用程序的各种终端。图9所示的人脸图像处理终端900可以包括:至少一个处理器901如CPU、至少一个通信总线902、用户接口903、至少一个网络接口904、存储器905、显示屏906以及摄像模块907。人脸图像处理终端900中的各个组件通过通信总线902耦合在一起。可以理解,通信总线902用于实现这些组件之间的连接通信。通信总线902除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图9中将各种总线都标为通信总线902。
其中,用户接口903可以包括显示器、键盘、鼠标、轨迹球、点击轮、按键、按钮、触感板或者触摸屏等。网络接口904可以包括标准的有线接口、无线接口如WIFI接口。
可以理解,存储器905可以是高速RAM存储器,也可以是非不稳定的存储器(Non-Volatile Memory),例如至少一个磁盘存储器。存储器905还可以是至少一个远离处理器901的存储系统。本发明实施例中的存储器905用于存储各种类型的数据以支持人脸图像处理终端900的操作。这些数据 的示例包括:用于在人脸图像处理终端900上操作的任何计算机程序,如操作系统、网络通信模块、用户接口模块以及人脸识别程序,实现本发明实施例的人脸图像处理方法的程序可以包含在人脸识别程序中。
本发明实施例揭示的人脸图像处理方法可以应用于处理器901中,或者由处理器901实现。处理器901可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,人脸图像处理方法的各步骤可以通过处理器901中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器901可以是通用处理器、DSP或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器901可以实现或者执行本发明实施例中的提供的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本发明实施例所提供的人脸图像处理方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器905,处理器901读取存储器905中的信息,结合其硬件完成本发明实施例提供的人脸图像处理方法的步骤。
在示例性实施例中,本发明实施例还提供了一种存储介质,存储有可执行程序,可执行程序被处理器执行时,实现本发明实施例提供的人脸图像处理方法,例如图3或图4示出的人脸图像处理方法。本发明实施例提供的存储介质可为光盘、闪存或磁盘等存储介质,可选为非瞬间存储介质。
需要说明的是,本发明实施例中的人脸图像处理终端700或人脸图像处理终端900包括但不限于个人计算机、移动电脑、平板电脑、移动电话、PDA、智能电视、智能手表、智能眼镜、智能手环等电子设备。可理解的是,人脸图像处理终端700或人脸图像处理终端900中各模块的功能可对应参考上述各方法实施例中图3或图4任意实施例的具体实现方式,这里不再赘述。
基于图2所示的一可选的应用场景示意图,本发明实施例提供的人脸图像处理方法可应用于与人脸图像识别相关的应用程序中。例如,应用于QQ安全中心的所有与人脸图像识别相关的敏感操作如刷脸改密、刷脸改手机或人脸启动密码等中。下面以终端为手机,以应用程序为QQ安全中心应用程序,以人脸识别操作指令为基于刷脸改密所触发产生的指令为例,对本发明实施例提供的人脸图像处理方法的实现过程进行说明。
一、用户打开手机上的QQ安全中心应用程序,使用刷脸改密功能。
二、QQ安全中心应用程序向后台服务器上报手机的硬件和/或软件版本信息,以查询手机的光线阈值,光线阈值包括第一预设阈值Threshhold_luminance_min和第二预设阈值Threshhold_luminance_max。
后台服务器根据接收到的手机的硬件和/或软件版本信息,可分配与手机的硬件和/或软件版本信息相匹配的第一预设阈值和第二预设阈值,并将其返回给手机。其中,手机的硬件版本信息可以包括但不限于手机的机型、品牌、主板或显示识别芯片等硬件信息;手机的软件版本信息可以包括但不限于操作系统版本、当前请求进行人脸图像识别的应用程序及其版本信息等软件信息。
三、QQ安全中心应用程序通过开启手机的摄像头采集人脸图像,在采集人脸图像的过程中,若检测到手机的操作系统为Android系统,则将所采集的人脸图像进行YUV-RGB-HSV的颜色空间模型转换,并从转换后人脸图像的明度通道提取相应的亮度值V;若检测到手机的操作系统为IOS系统,则将所采集的人脸图像进行RGB-HSV的颜色空间模型转换,并从转换后人脸图像的明度通道提取相应的亮度值V。
四、当亮度值V小于Threshhold_luminance_min时,则此时QQ安全中心应用程序先不进行人脸图像识别,而是主动给用户提示例如图5所示的“光线太暗,开启增强亮度模式吧”的提示信息,以提示用户当前手机 所处环境的光线亮度过暗。同时,增强从手机的显示屏发射出去的光线亮度至目标亮度值,使手机的显示屏发射出去的光线照射到人脸上。如果用户点击用于增强亮度的选项按钮,则就可以开启增强亮度的模式,将QQ安全中心应用程序上人脸图像采集区域外的所有区域由半透明变为不透明白色遮罩层。
目标亮度值可以是手机的显示屏能够实现的最高亮度,或者是根据手机所处环境的光线亮度与要求的环境光线亮度的差值,确定的亮度补偿值,还可以是由经验值确定的。
五、当亮度值V大于Threshhold_luminance_max时,则此时QQ安全中心应用程序将生成并显示提示信息,提示信息用于提示调整采集的角度和/或区域。
六、当所采集的人脸图像的亮度值V在[Threshhold_luminance_min,Threshhold_luminance_max]之间时,根据采集的人脸图像进行人脸识别。用户进入修改QQ密码的界面进行修改密码的操作。
综上所述,本发明实施例可实现以下有益效果:
1)通过计算采集到的人脸图像的亮度值,当亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,然后再重新采集人脸图像并计算对应的亮度值;当所重新采集的人脸图像的亮度值符合预设亮度值范围时,自动根据所重新采集的人脸图像进行人脸识别,从而可以从数据源头上减少不合格的人脸图像数据源,提高人脸识别的成功率。
2)当人脸图像的亮度值大于第二预设阈值时,生成并显示光线过亮的提示信息,实现了在周围光线不理想时,能主动引导用户进行正确操作(如调整采集的角度和/或区域等),通过主动提醒用户人脸图像验证不成功的原因,大大提高了用户使用体验。
3)在光线严重不足时,生成并显示用于增强亮度的按钮以提示是否已经开启增强亮度的模式,开启后在终端显示界面的目标区域中添加遮罩层,有效提高在完全黑暗的环境下通过人脸图像识别的成功率,使得人脸图像识别的使用场景更加广泛。
4)确定终端的显示屏能够实现的最高亮度为目标亮度值,将终端的显示屏亮度调节至最高亮度。基于最高亮度一次性地增强终端的显示屏亮度,即将终端的显示屏亮度调节至最高亮度。这样,无需反复调整终端的显示屏亮度,提升人脸图像识别的效率。
5)将终端所处环境的光线亮度与要求的环境光线亮度的差值,与亮度补偿值等同,这样,就可以使采集的人脸图像满足识别的理想亮度(所要求的环境光线亮度),进而确保基于采集人脸图像识别结果的精度。
6)将终端所处环境的光线亮度与要求的环境光线亮度的差值,与大于1的一补偿系数的乘积作为亮度补偿值,将终端的显示屏亮度调节至亮度补偿值。这样,可以有效避免光线的散射对补偿的光线亮度产生的负面影响。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。

Claims (26)

  1. 一种人脸图像处理方法,包括:
    响应于人脸识别操作指令,采集人脸图像;
    计算采集到的人脸图像的亮度值;
    当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
    当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
  2. 如权利要求1所述的方法,其中,所述计算采集到的人脸图像的亮度值,包括:
    根据所述终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
  3. 如权利要求1所述的方法,其中,还包括:
    获取所述终端的硬件和/或软件版本信息;
    获取与所述终端的硬件和/或软件版本信息相匹配的所述第一预设阈值。
  4. 如权利要求1所述的方法,其中,还包括:
    当所采集的人脸图像的亮度值大于第二预设阈值时,输出提示信息,所述提示信息用于提示调整采集的角度和/或区域;
    重新采集人脸图像,并计算所重新采集到的人脸图像的亮度值。
  5. 如权利要求4所述的方法,其中,还包括:
    获取所述终端的硬件和/或软件版本信息;
    获取与所述终端的硬件和/或软件版本信息相匹配的所述第二预设阈值。
  6. 如权利要求1至5任一项所述的方法,其中,所述增强从终端的显示屏发射出去的光线亮度至目标亮度值,包括:
    确定所述终端的显示屏能够实现的最高亮度为所述目标亮度值,将所述终端的显示屏亮度调节至所述最高亮度;或者,
    根据所述终端所处环境的光线亮度与要求的环境光线亮度的差值,确定亮度补偿值为所述目标亮度值,将所述终端的显示屏亮度调节至所述亮度补偿值。
  7. 如权利要求6所述的方法,其中,还包括:
    所述增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值之前,在所述终端的显示屏显示用于增强亮度的按钮,所述按钮用于提示是否已经开启增强亮度的模式。
  8. 如权利要求7所述的方法,其中,还包括:
    响应于针对所述按钮的选中指令,并增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值后,在所述终端显示界面的目标区域中添加遮罩层;
    其中,所述目标区域包括所述终端的显示屏中未显示所述人脸图像的区域。
  9. 一种人脸图像处理方法,应用于终端,所述终端包括有一个或多个处理器以及存储器,以及一个或一个以上的程序,其中,所述一个或一个以上的程序存储于存储器中,所述程序可以包括一个或一个以上的每一个对应于一组指令的单元,所述一个或多个处理器被配置为执行指令;所述方法包括:
    响应于人脸识别操作指令,采集人脸图像;
    计算采集到的人脸图像的亮度值;
    当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的 光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
    当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
  10. 如权利要求9所述的方法,其中,所述计算采集到的人脸图像的亮度值,包括:
    根据所述终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
  11. 如权利要求9所述的方法,其中,还包括:
    获取所述终端的硬件和/或软件版本信息;
    获取与所述终端的硬件和/或软件版本信息相匹配的所述第一预设阈值。
  12. 如权利要求9所述的方法,其中,还包括:
    当所采集的人脸图像的亮度值大于第二预设阈值时,输出提示信息,所述提示信息用于提示调整采集的角度和/或区域;
    重新采集人脸图像,并计算所重新采集到的人脸图像的亮度值。
  13. 如权利要求12所述的方法,其中,还包括:
    获取所述终端的硬件和/或软件版本信息;
    获取与所述终端的硬件和/或软件版本信息相匹配的所述第二预设阈值。
  14. 如权利要求9至13任一项所述的方法,其中,所述增强从终端的显示屏发射出去的光线亮度至目标亮度值,包括:
    确定所述终端的显示屏能够实现的最高亮度为所述目标亮度值,将所述终端的显示屏亮度调节至所述最高亮度;或者,
    根据所述终端所处环境的光线亮度与要求的环境光线亮度的差值,确 定亮度补偿值为所述目标亮度值,将所述终端的显示屏亮度调节至所述亮度补偿值。
  15. 如权利要求14所述的方法,其中,还包括:
    所述增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值之前,在所述终端的显示屏显示用于增强亮度的按钮,所述按钮用于提示是否已经开启增强亮度的模式。
  16. 如权利要求15所述的方法,其中,还包括:
    响应于针对所述按钮的选中指令,并增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值后,在所述终端显示界面的目标区域中添加遮罩层;
    其中,所述目标区域包括所述终端的显示屏中未显示所述人脸图像的区域。
  17. 一种人脸图像处理终端,包括:
    采集模块,配置为响应于人脸识别操作指令,采集人脸图像;
    计算模块,配置为计算采集到的人脸图像的亮度值;
    增强模块,配置为当所述亮度值小于第一预设阈值时,增强从终端的显示屏发射出去的光线亮度至目标亮度值,重新采集人脸图像并计算对应的亮度值;
    识别模块,配置为当所重新采集的人脸图像的亮度值符合预设亮度值范围时,根据所重新采集的人脸图像进行人脸识别。
  18. 如权利要求17所述的终端,其中,所述计算模块,具体配置为:根据所述终端使用的颜色空间模型,将所采集的人脸图像转换到基于色调、饱和度和明度的颜色空间模型,并从转换后人脸图像的明度通道提取相应的亮度值。
  19. 如权利要求17所述的终端,其中,还包括:
    获取模块,配置为获取所述终端的硬件和/或软件版本信息;
    所述获取模块,还配置为获取与所述终端的硬件和/或软件版本信息相匹配的所述第一预设阈值。
  20. 如权利要求17所述的终端,其中,还包括:
    提示模块,配置为当所采集的人脸图像的亮度值大于第二预设阈值时,输出提示信息,所述提示信息用于提示调整采集的角度和/或区域;重新采集人脸图像,并计算所重新采集到的人脸图像的亮度值。
  21. 如权利要求20所述的终端,其中,还包括:
    获取模块,配置为获取所述终端的硬件和/或软件版本信息;
    所述获取模块,还配置为获取与所述终端的硬件和/或软件版本信息相匹配的所述第二预设阈值。
  22. 如权利要求17至21任一项所述的终端,其中,所述增强模块,具体配置为:
    确定所述终端的显示屏能够实现的最高亮度为所述目标亮度值,将所述终端的显示屏亮度调节至所述最高亮度;或者,
    根据所述终端所处环境的光线亮度与要求的环境光线亮度的差值,确定亮度补偿值为所述目标亮度值,将所述终端的显示屏亮度调节至所述亮度补偿值。
  23. 如权利要求22所述的终端,其中,还包括:
    显示模块,配置为当所述增强模块增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值之前,在所述终端的显示屏显示用于增强亮度的按钮,所述按钮用于提示是否已经开启增强亮度的模式。
  24. 如权利要求23所述的终端,其中,还包括:
    添加模块,配置为响应于针对所述按钮的选中指令,并增强从所述终端的显示屏发射出去的光线亮度至所述目标亮度值后,在所述终端显示界 面的目标区域中添加遮罩层;
    其中,所述目标区域包括所述终端的显示屏中未显示所述人脸图像的区域。
  25. 一种存储介质,存储有可执行程序,所述可执行程序被处理器执行时,实现如权利要求1至8任一项所述的人脸图像处理方法。
  26. 一种人脸图像处理终端,包括:
    存储器,配置为存储可执行程序;
    处理器,配置为执行所述存储器中存储的可执行程序时,实现如权利要求1至8任一项所述的人脸图像处理方法。
PCT/CN2017/111810 2016-11-23 2017-11-20 人脸图像处理方法、终端及存储介质 WO2018095293A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/279,901 US10990806B2 (en) 2016-11-23 2019-02-19 Facial image processing method, terminal, and data storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611046965.X 2016-11-23
CN201611046965.XA CN108090405B (zh) 2016-11-23 2016-11-23 一种人脸识别方法及终端

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/279,901 Continuation US10990806B2 (en) 2016-11-23 2019-02-19 Facial image processing method, terminal, and data storage medium

Publications (1)

Publication Number Publication Date
WO2018095293A1 true WO2018095293A1 (zh) 2018-05-31

Family

ID=62171648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/111810 WO2018095293A1 (zh) 2016-11-23 2017-11-20 人脸图像处理方法、终端及存储介质

Country Status (3)

Country Link
US (1) US10990806B2 (zh)
CN (1) CN108090405B (zh)
WO (1) WO2018095293A1 (zh)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019133991A1 (en) * 2017-12-29 2019-07-04 Wu Yecheng System and method for normalizing skin tone brightness in a portrait image
US10762336B2 (en) * 2018-05-01 2020-09-01 Qualcomm Incorporated Face recognition in low light conditions for unlocking an electronic device
CN109598515B (zh) * 2018-11-29 2020-08-04 阿里巴巴集团控股有限公司 一种支付方法、支付装置及终端设备
CN109995761B (zh) * 2019-03-06 2021-10-19 百度在线网络技术(北京)有限公司 服务处理方法、装置、电子设备及存储介质
CN110519485B (zh) * 2019-09-09 2021-08-31 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111652131A (zh) * 2020-06-02 2020-09-11 浙江大华技术股份有限公司 人脸识别装置及其补光方法、可读存储介质
CN112016495A (zh) * 2020-09-03 2020-12-01 福建库克智能科技有限公司 人脸识别的方法、装置和电子设备
CN112784741A (zh) * 2021-01-21 2021-05-11 宠爱王国(北京)网络科技有限公司 宠物身份识别方法、装置及非易失性存储介质
US11587528B2 (en) * 2021-02-12 2023-02-21 Microsoft Technology Licensing, Llc Optimized facial illumination from adaptive screen content
CN113221774A (zh) * 2021-05-19 2021-08-06 重庆幸尚付科技有限责任公司 人脸识别系统及考勤装置
US11722779B2 (en) 2021-06-22 2023-08-08 Snap Inc. Viewfinder ring flash
US11683592B2 (en) * 2021-06-30 2023-06-20 Snap Inc. Adaptive front flash view
JP2023053733A (ja) * 2021-10-01 2023-04-13 パナソニックIpマネジメント株式会社 撮像誘導装置、撮像誘導方法及びプログラム
CN114220399B (zh) * 2022-01-11 2023-03-24 深圳Tcl数字技术有限公司 背光值的控制方法、装置、存储介质及电子设备
CN115205509B (zh) * 2022-09-16 2022-11-18 上海英立视电子有限公司 一种调整图像立体感的方法及系统
CN117275138A (zh) * 2023-11-21 2023-12-22 建信金融科技有限责任公司 基于自动取款机的身份认证方法、装置、设备和存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080122821A1 (en) * 2006-11-24 2008-05-29 Sony Ericsson Mobile Communications Ab Luminance control for a display
US8836532B2 (en) * 2009-07-16 2014-09-16 Gentex Corporation Notification appliance and method thereof
JP5786254B2 (ja) * 2011-04-29 2015-09-30 華為終端有限公司 端末デバイス中の発光デバイスを制御するための方法と装置、ならびに端末デバイス
US8606011B1 (en) * 2012-06-07 2013-12-10 Amazon Technologies, Inc. Adaptive thresholding for image recognition
CN105744176B (zh) * 2016-02-15 2017-11-21 广东欧珀移动通信有限公司 一种屏幕补光方法、装置及移动终端
US10720126B2 (en) * 2018-08-31 2020-07-21 Apple Inc. Color ambient light sensor with adjustable neutral density filter

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140009639A1 (en) * 2012-07-09 2014-01-09 Samsung Electronics Co. Ltd. Camera control system, mobile device having the system, and camera control method
CN104301598A (zh) * 2013-07-18 2015-01-21 国龙信息技术(上海)有限公司 一种移动终端设置前置摄像头灯光效果的方法
CN104424467A (zh) * 2013-08-21 2015-03-18 中移电子商务有限公司 一种人脸图像采集方法及装置、终端
CN104320578A (zh) * 2014-10-22 2015-01-28 厦门美图之家科技有限公司 一种基于屏幕亮度进行自拍柔光补偿的方法
CN105323382A (zh) * 2015-11-04 2016-02-10 广东欧珀移动通信有限公司 一种屏幕补光方法、装置及移动终端
CN105389572A (zh) * 2015-12-10 2016-03-09 威海北洋电气集团股份有限公司 一种人脸和身份证识别一体机及自动调整亮度的补光方法
CN105744174A (zh) * 2016-02-15 2016-07-06 广东欧珀移动通信有限公司 一种自拍方法、装置及移动终端
CN106056064A (zh) * 2016-05-26 2016-10-26 汉王科技股份有限公司 一种人脸识别方法及人脸识别装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210338388A1 (en) * 2018-10-02 2021-11-04 Dentsply Sirona Inc. Method for incorporating photographic facial images and or films of a person into the planning of odontological and or cosmetic dental treatments and or the preparation of restorations for said person
CN111698432A (zh) * 2019-03-12 2020-09-22 北京猎户星空科技有限公司 一种参数控制方法、装置、设备及介质

Also Published As

Publication number Publication date
CN108090405A (zh) 2018-05-29
CN108090405B (zh) 2020-08-14
US20190188454A1 (en) 2019-06-20
US10990806B2 (en) 2021-04-27

Similar Documents

Publication Publication Date Title
WO2018095293A1 (zh) 人脸图像处理方法、终端及存储介质
US10827126B2 (en) Electronic device for providing property information of external light source for interest object
CN107408168B (zh) 使用显示信息的虹膜认证方法和装置
US10482325B2 (en) User authentication method and electronic device supporting the same
US11281892B2 (en) Technologies for efficient identity recognition based on skin features
US8861847B2 (en) System and method for adaptive skin tone detection
US20160171280A1 (en) Method of updating biometric feature pattern and electronic device for same
US11120536B2 (en) Apparatus and method for determining image sharpness
AU2015201759A1 (en) Electronic apparatus for providing health status information, method of controlling the same, and computer readable storage medium
US11968439B2 (en) Electronic device comprising camera and electronic device control method
US10592759B2 (en) Object recognition apparatus and control method therefor
KR102287109B1 (ko) 피부에 해당하는 이미지 처리 영역을 보정하는 방법 및 전자 장치
KR102359276B1 (ko) 전자 장치의 화이트 밸런스 기능 제어 방법 및 장치
US11416974B2 (en) Image processing method and electronic device supporting the same
CN109618098A (zh) 一种人像面部调整方法、装置、存储介质及终端
US11144197B2 (en) Electronic device performing function according to gesture input and operation method thereof
US20200104471A1 (en) User authentication using variant illumination
US20160239992A1 (en) Image processing method and electronic device for supporting the same
US10440283B2 (en) Electronic device and method for controlling the same
US9684828B2 (en) Electronic device and eye region detection method in electronic device
WO2023093390A1 (zh) 布局方法、可读介质和电子设备
US9563252B2 (en) Display apparatus and display method thereof
WO2019242249A1 (zh) 界面显示方法及电子设备
CN114424520B (zh) 图像处理方法和支持该图像处理方法的电子装置
TW201344593A (zh) 人臉辨識的方法及使用該方法之人臉辨識系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17873392

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17873392

Country of ref document: EP

Kind code of ref document: A1