WO2020078243A1 - Image processing and face image identification method, apparatus and device


Info

Publication number: WO2020078243A1
Application number: PCT/CN2019/110266 (CN2019110266W)
Authority: WO (WIPO / PCT)
Prior art keywords: image, area, face, target object, face image
Other languages: French (fr), Chinese (zh)
Inventors: 郎平, 姚迪狄
Original Assignee: 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Application filed by 阿里巴巴集团控股有限公司
Publication of WO2020078243A1

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06F 18/00: Pattern recognition
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/00: Image analysis
    • G06V 40/168: Human faces; feature extraction; face representation
    • G06V 40/172: Human faces; classification, e.g. identification
    • G06V 40/45: Spoof detection; detection of the body part being alive
    • G06T 2207/10028: Range image; depth image; 3D point clouds
    • G06T 2207/30168: Image quality inspection
    • G06T 2207/30201: Face

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image processing and face image recognition method, device, and equipment.
  • Image quality evaluation is one of the basic techniques in image processing. It mainly analyzes and studies the characteristics of the image, and then evaluates the quality of the image.
  • At present, when the quality evaluation value of an image of an object does not meet the standard, the image is generally fused with other types of images of that object to improve its quality evaluation value.
  • the embodiments of the present specification provide an image processing method for optimizing image quality.
  • An embodiment of this specification also provides an image processing method, including:
  • acquiring a first image and a second image of a target object, the first image and the second image being images collected under different lighting conditions; performing quality evaluation processing on the first image; if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determining a second area in the second image corresponding to the first area; and, based on the image data of the second area, compensating the first area.
  • An embodiment of the present specification also provides a face image recognition method, including:
  • acquiring a first face image and a second face image of a target object, the first face image and the second face image being images collected under different lighting conditions; performing quality evaluation processing on the first face image; if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determining a second area in the second face image corresponding to the first area; and, based on the image data of the second area, compensating the first area and identifying the compensated first face image.
  • An embodiment of this specification also provides an image processing device, including:
  • An obtaining module configured to obtain a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
  • An evaluation module configured to perform quality evaluation processing on the first image
  • a determining module configured to determine a second area in the second image corresponding to the first area if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold
  • the compensation module is used to compensate the first area based on the image data of the second area.
  • An embodiment of this specification also provides a face image recognition device, including:
  • An obtaining module configured to obtain a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
  • An evaluation module configured to perform quality evaluation processing on the first face image
  • a determining module configured to determine a second area in the second face image corresponding to the first area if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold ;
  • the compensation module is configured to compensate the first area based on the image data of the second area and identify the compensated first face image.
  • An embodiment of the present specification also provides a face image recognition device, including: an RGB camera, a black and white camera, an infrared fill light, and a processing chip; wherein:
  • the RGB camera is used to collect RGB face images of the target object
  • the infrared fill light is used to emit infrared light to the face of the target object;
  • the black-and-white camera is used to collect infrared face images of the target object under infrared lighting conditions
  • the processing chip is configured to use one of the RGB face image and the infrared face image as a first face image and the other as a second face image, and to perform the steps of the method according to any one of claims 15 to 17.
  • An embodiment of this specification also provides an electronic device, including:
  • a processor; and a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the image processing method or the face image recognition method as described above.
  • Embodiments of this specification also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the image processing method or the face image recognition method described above are implemented.
  • when either the first image or the second image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, improving the overall image quality.
  • Compared with the prior-art approach of fusing whole images, this solution can locate the areas that require quality compensation and perform image quality compensation region by region, thereby simplifying image quality compensation.
  • FIG. 1a is a schematic diagram of an application scenario provided by this specification.
  • FIG. 1b is a schematic diagram of another application scenario provided by this specification.
  • FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present specification
  • FIG. 3 is a schematic flowchart of the quality evaluation steps provided by this specification;
  • FIG. 4 is a schematic diagram of the brightness distribution data of an image provided by this specification;
  • FIG. 5 is a schematic flowchart of a face image recognition method according to an embodiment of the present specification;
  • FIG. 6 is a schematic flowchart of a face image recognition method according to another embodiment of the present specification.
  • FIG. 7 is a schematic structural diagram of hardware and a system provided by an embodiment of the present specification.
  • FIG. 8 is a schematic diagram of different working modes of a black and white camera provided by an embodiment of the present specification.
  • FIG. 9a is a schematic diagram of the data format of each frame of 2D infrared output provided by an embodiment of the present specification;
  • FIG. 9b is a schematic diagram of the data format of each frame of 3D depth output provided by an embodiment of the present specification;
  • FIG. 10 is a schematic diagram of the data format of each frame of RGB and infrared output provided by an embodiment of the present specification;
  • FIG. 11 is a schematic structural diagram of an image processing device according to an embodiment of the present specification.
  • FIG. 12 is a schematic structural diagram of a face image recognition device according to an embodiment of the present specification.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present specification.
  • FIG. 14 is a schematic structural diagram of an electronic device according to another embodiment of the present specification.
  • To prevent dark light and occlusion from degrading image quality, the prior art generally combines the advantages of image acquisition under different lighting conditions through the steps of preprocessing, registration, and image fusion, and thereby provides image processing support for business scenarios with high security requirements.
  • However, the preprocessing, registration, and image fusion steps are all performed by post-processing algorithms whose computational complexity is very high.
  • To this end, the present invention provides an image processing method: a first image and a second image of the same target object are collected under different lighting conditions and their quality is evaluated region by region; when either the first image or the second image contains a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, improving the overall image quality.
  • this solution can locate the area where quality compensation is required, and then perform quality compensation in a targeted manner, thereby achieving the purpose of simplifying image quality compensation.
  • image quality evaluation is one of the basic techniques in image processing, mainly by analyzing the characteristics of the image, and then assessing the quality of the image (the degree of image distortion).
  • In one example, the first image is an RGB image and the second image is an infrared image;
  • in another example, the first image is an infrared image and the second image is an RGB image.
  • one application scenario may be:
  • the image acquisition device collects an RGB image and an infrared image of the object and inputs them to the image processing terminal;
  • the image processing terminal performs image quality evaluation on the RGB image and the infrared image to determine whether either of them contains an area whose quality evaluation value is less than the quality evaluation threshold; if so, the image data of the corresponding area of the other image is used to perform compensation, yielding a compensated RGB image and infrared image whose quality evaluation values are greater than or equal to the quality evaluation threshold.
  • another application scenario may be:
  • the image collection device collects the user's RGB face image and infrared face image and inputs them to the image processing terminal;
  • the image processing terminal performs image quality evaluation on the RGB face image and the infrared face image to determine whether either of them contains an area whose quality evaluation value is less than the quality evaluation threshold; if so, the image data of the corresponding area of the other image is used to compensate it, yielding a compensated RGB face image and infrared face image whose quality evaluation values are greater than or equal to the quality evaluation threshold, which are then input to the business processing terminal; otherwise the RGB face image and the infrared face image are input to the business processing terminal directly;
  • the service processing terminal verifies the user's identity information and authorization information based on the compensated RGB face image and / or infrared face image; after the verification is passed, the service requested by the user is performed, and the service processing result is output.
  • the image acquisition device can be an integrated machine with the image processing terminal; the image processing terminal and the business processing terminal can be an integrated machine, which can be exemplified by image quality optimization software, banking counters, offline stores, hotel room access control, etc.
  • the image processing terminal and the business processing terminal can each be a PC or a mobile terminal. A mobile terminal (mobile communication terminal) is a computer device that can be used on the move; broadly speaking, it includes mobile phones, notebooks, tablet computers, POS machines, and even in-vehicle computers, but in most cases it refers to smartphones and tablets with multiple application functions.
  • FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present specification. The method may be executed by the image processing terminal in FIG. 1a. Referring to FIG. 2, the method may specifically include the following steps:
  • Step 220 Acquire a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
  • the first image and the second image are preferably images acquired synchronously.
  • In this embodiment, one of the first image and the second image is an RGB image and the other is an infrared image.
  • One implementation of step 220 may be: the first image and the second image are collected by a multi-eye camera;
  • the multi-eye camera may be a binocular camera or a trinocular camera and includes a color camera part and a black-and-white camera part.
  • RGB image refers to the image collected by the color camera part under visible light conditions. It is an image displayed in the RGB color mode.
  • the RGB color mode is an industry color standard in which the red (R), green (G), and blue (B) color channels are varied and superimposed on each other to produce a wide range of colors; RGB stands for the red, green, and blue channels. An infrared image refers to
  • the image collected by the black-and-white camera part under infrared lighting conditions, i.e., an image obtained by capturing the intensity of infrared light from the object.
  • Step 240 Perform quality evaluation processing on the first image; referring to FIG. 3, an implementation manner thereof may be:
  • Step 320 Determine the quality-related parameter distribution data of the first image
  • the quality-related parameters include one or more of sharpness, resolution, and brightness; sharpness refers to the clarity of each detail and its boundary in the image; resolution refers to the fineness of detail in the picture; brightness refers to how light or dark the picture is.
  • the RGB image is taken as an example to illustrate the brightness distribution data in the quality-related parameter distribution data in the first image.
  • In the example of FIG. 4, the brightness of the image gradually decreases from left to right (the columns corresponding to areas 401-406), i.e., the uniform brightness value of each column becomes lower.
  • the area 401 may be a pixel-level area, in which case the brightness value is the brightness of the pixel; it may also be a pre-divided area of a given size, in which case the brightness value is the uniform (average) brightness value of that area.
  • Similarly, the distribution data of sharpness and resolution can be constructed.
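  • As an illustrative sketch only (not part of the original disclosure), the per-region brightness distribution described above can be computed by dividing the image into pre-sized blocks and averaging the pixel brightness inside each block; the block size and the use of NumPy here are assumptions:

```python
import numpy as np

def brightness_distribution(gray_image: np.ndarray, block: int = 64) -> np.ndarray:
    """Return a 2D map of the mean brightness of each block-sized region.

    gray_image: single-channel image (H, W), e.g. the Y channel of an RGB image.
    block: side length of each pre-divided region in pixels (assumed value).
    """
    h, w = gray_image.shape
    rows, cols = h // block, w // block
    dist = np.empty((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            region = gray_image[r * block:(r + 1) * block, c * block:(c + 1) * block]
            dist[r, c] = region.mean()  # uniform brightness value of the region
    return dist
```

Sharpness and resolution distributions could be built the same way by swapping the per-region statistic (for example, the variance of a Laplacian response for sharpness); this choice of statistic is likewise an assumption.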
  • Step 340 Based on the quality-related parameter distribution data, determine the quality evaluation value distribution data of the first image; with reference to FIG. 4, specific examples may be:
  • the brightness, sharpness, resolution, etc. corresponding to any area can be obtained, and then compared with the preset threshold corresponding to each quality-related parameter to obtain a difference, and Based on the difference and preset weight corresponding to each quality-related parameter, the quality evaluation value of the area is calculated, and the quality evaluation values of other areas are obtained by analogy, and then the distribution data of the quality evaluation value of the image is obtained.
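  • A minimal sketch of the weighted-difference computation described above is shown below; the specific parameter names, thresholds, weights, and the linear combination are assumptions for illustration, not values given in this disclosure:

```python
def region_quality_score(params: dict, thresholds: dict, weights: dict) -> float:
    """Combine per-region quality-related parameters into one quality evaluation value.

    params:     measured values for the region, e.g. {"brightness": 90, "sharpness": 0.4}
    thresholds: preset threshold per parameter, e.g. {"brightness": 128, "sharpness": 0.6}
    weights:    preset weight per parameter, e.g. {"brightness": 0.5, "sharpness": 0.5}
    """
    score = 0.0
    for name, value in params.items():
        diff = value - thresholds[name]  # difference from the preset threshold
        score += weights[name] * diff    # weighted contribution of this parameter
    return score

# Example: a region is flagged as a "first area" when its score falls below
# a preset quality evaluation threshold (here 0.0, an assumed value).
is_first_area = region_quality_score(
    {"brightness": 90, "sharpness": 0.4},
    {"brightness": 128, "sharpness": 0.6},
    {"brightness": 0.5, "sharpness": 0.5},
) < 0.0
```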
  • Because the infrared fill light provides uniform illumination, the infrared image has approximately uniform brightness.
  • Therefore, the brightness parameter can be ignored when the first image is an infrared image and its quality evaluation value distribution data is calculated; when the first image is the RGB image and its quality evaluation value distribution data is calculated, the uniform brightness value of the infrared image can be used as the preset brightness threshold for the RGB image.
  • Step 360 Based on the quality evaluation value distribution data, determine an area in the first image whose quality evaluation value is less than a preset quality evaluation threshold.
  • the quality evaluation value of each area in the image is compared with a preset quality evaluation threshold to determine an area where the quality evaluation value is less than the preset quality evaluation threshold, and this area is used as the first area.
  • Step 260 If there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second image corresponding to the first area; an implementation manner thereof can be:
  • if there is a first area in the infrared image, the corresponding second area in the RGB image is determined; if there is a first area in the RGB image, the corresponding second area in the infrared image is determined.
  • Before step 260, the image registration processing may be completed by the multi-eye camera.
  • Step 280 Compensate the first area based on the image data of the second area.
  • If the first image is an RGB image and the second image is an infrared image, one implementation manner of step 280 may be:
  • if the brightness of the first area of the RGB image is less than a preset brightness threshold, the brightness of the first area of the RGB image is compensated based on the image data of the second area of the infrared image; or,
  • if the sharpness of the first area of the RGB image is less than a preset sharpness threshold, the sharpness of the first area of the RGB image is compensated based on the image data of the second area of the infrared image.
  • If the first image is an infrared image and the second image is an RGB image, another implementation manner may be:
  • the image data of the target object in the first area of the infrared image (for example, a face area occluded by reflective glasses) is compensated based on the image data of the second area of the RGB image; or,
  • the resolution of the first area of the infrared image is compensated based on the image data of the second area of the RGB image.
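  • A minimal sketch of the region-level compensation (in either direction) is given below; the alpha-blending of the registered second-area data into the first area is only one possible realisation and is an assumption, since the disclosure states only that the first area is compensated based on the image data of the second area:

```python
import numpy as np

def compensate_region(first_img: np.ndarray, second_img: np.ndarray,
                      box: tuple, alpha: float = 0.6) -> None:
    """Blend registered second-image data into the low-quality region of the first image.

    first_img / second_img: pixel-aligned (registered) single-channel images.
    box:   (top, left, height, width) of the first area / corresponding second area.
    alpha: assumed blending weight for the second-area data.
    """
    t, l, h, w = box
    first = first_img[t:t + h, l:l + w].astype(np.float32)
    second = second_img[t:t + h, l:l + w].astype(np.float32)
    first_img[t:t + h, l:l + w] = (alpha * second + (1.0 - alpha) * first).astype(first_img.dtype)

# Brightness compensation of a dark RGB region from the infrared image, or detail
# compensation of an occluded infrared region from the RGB image, both reduce to
# the same region-level blend once the two images are registered.
rgb_y = np.full((1080, 1920), 40, dtype=np.uint8)   # dark luminance channel
ir = np.full((1080, 1920), 160, dtype=np.uint8)     # evenly lit infrared image
compensate_region(rgb_y, ir, box=(200, 300, 256, 256))
```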
  • FIG. 5 is a schematic flowchart of a face image recognition method provided by an embodiment of the present specification. The method may be jointly executed by the image processing terminal and the business processing terminal in FIG. 1b.
  • Step 502 Acquire a first face image and a second face image of a target object, where the first face image and the second face image are images collected under different lighting conditions;
  • first face image and the second face image are preferably images collected synchronously.
  • One implementation of step 502 may be that the color camera and the black-and-white camera of the binocular camera are both in a normally open state.
  • Another implementation of step 502 may be that the color camera is in a normally open state
  • while the black-and-white camera is in a normally closed state and is woken up only when a face is detected (see step 606 below).
  • Step 504 Perform quality evaluation processing on the first face image
  • Step 506 If there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second face image corresponding to the first area;
  • the preset quality evaluation threshold may be an image quality evaluation threshold required by the target user to handle the business.
  • Step 508 Compensate the first area based on the image data of the second area
  • Steps 504 to 508 correspond to steps 240 to 280 in the embodiment shown in FIG. 2 respectively, and their implementations are also similar. Therefore, they will not be repeated here.
  • Before step 510, the method may also include a step of detecting whether the compensated first face image meets the requirements of face recognition; one implementation manner thereof may be:
  • if the quality evaluation value of the compensated first face image meets the face recognition quality evaluation requirements, recognition of the compensated first face image is allowed; otherwise, the method returns to step 502 until a compensated first face image or compensated second face image whose quality evaluation value meets the standard is obtained.
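  • The acquire-evaluate-compensate-recheck loop around steps 502 to 510 can be sketched as follows; the callables passed in (acquire, evaluate, compensate, recognize) are hypothetical placeholders injected by the caller, and the maximum number of retries is an assumption:

```python
def recognize_with_quality_gate(acquire, evaluate, compensate, recognize,
                                quality_threshold: float, max_rounds: int = 5):
    """Loop: acquire -> evaluate -> compensate -> re-check quality, then recognize.

    acquire():                 returns (first_face_image, second_face_image)
    evaluate(image):           returns a quality evaluation value
    compensate(first, second): returns the compensated first face image
    recognize(image):          returns identity information
    """
    for _ in range(max_rounds):
        first, second = acquire()                  # step 502
        if evaluate(first) < quality_threshold:    # steps 504-506
            first = compensate(first, second)      # step 508
        if evaluate(first) >= quality_threshold:   # pre-510 quality check
            return recognize(first)                # step 510
    return None  # quality never met the face recognition requirement
```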
  • Step 510 Identify the compensated first face image.
  • One of its implementation methods can be:
  • face recognition processing is performed to determine the identity information of the target object.
  • the target object may also be subjected to a living body detection before handling related business.
  • For example, the business requested by the target object is payment; because the security required by the payment business is relatively high, it is necessary to further determine whether the target object is a living body.
  • One of its implementation methods can be:
  • Determining the depth face image of the target object and performing liveness detection on the target object based on the depth face image can be done as follows:
  • first, the most distinguishing features are selected to train a classifier; then, the 3D face data corresponding to the depth face image is used as the input of the trained classifier, and the classifier outputs whether the target object is a living body or a non-living body.
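  • A hedged sketch of such a depth-based liveness classifier is given below; the choice of scikit-learn, an SVM, the synthetic training data, and the simple depth statistics used as the "most distinguishing features" are all assumptions for illustration, since the disclosure does not specify the classifier type or feature set:

```python
import numpy as np
from sklearn.svm import SVC

def depth_features(depth_face: np.ndarray) -> np.ndarray:
    """Assumed features: depth statistics that differ between a real 3D face
    and a flat photo or screen (a plane has near-zero depth variation)."""
    return np.array([depth_face.std(),
                     depth_face.max() - depth_face.min(),
                     np.abs(np.gradient(depth_face.astype(np.float32))[0]).mean()])

# Train on labelled depth face images (1 = living body, 0 = non-living body);
# the data here is synthetic, purely to make the sketch runnable.
rng = np.random.default_rng(0)
live = [rng.normal(600, 30, (64, 64)) for _ in range(50)]                 # varied depth
spoof = [np.full((64, 64), 600.0) + rng.normal(0, 1, (64, 64)) for _ in range(50)]
X = np.array([depth_features(d) for d in live + spoof])
y = np.array([1] * 50 + [0] * 50)
clf = SVC().fit(X, y)

# Inference: feed the 3D face data of the target object to the trained classifier.
is_live = bool(clf.predict([depth_features(rng.normal(600, 30, (64, 64)))])[0])
```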
  • the compensated first face image and the depth map face image can also be fused, and the target object can be subjected to living body detection based on the fused image.
  • Another way to implement the living body detection step can be:
  • living body movements include blinking, shaking head, opening mouth, etc.
  • When either the first face image or the second face image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, improving the overall quality of the face image;
  • in addition, liveness detection is performed on the user based on the depth face image, improving the security of business handling.
  • FIG. 6 is a schematic flowchart of a face image recognition method according to another embodiment of the present specification. Referring to FIG. 6, the method may specifically include the following steps:
  • Step 602 Initialize the hardware and system configuration
  • the hardware and system design part includes: RGB part, black and white part, IR structured light projector, special ISP and SOC processing chip, and infrared fill light, in which:
  • the RGB part is the color camera (Color Camera).
  • the RGB part includes a dedicated lens plus an RGB image sensor and is used to collect normal color images. The resolution is chosen according to actual business needs; for example, for the business needs of new retail scenarios, a 1080P resolution (1920 × 1080) is used, the dedicated lens has a large FOV (such as a 110° diagonal), and the lens is customized according to security specifications to ensure no distortion (required for face recognition).
  • the black-and-white part is the infrared camera (Infrared Camera) and includes a dedicated lens plus a black-and-white image sensor, used to collect grayscale images. Considering the business needs of new retail scenarios and to ensure pixel alignment with the RGB part, the same 1080P resolution (1920 × 1080) is used, the dedicated lens has a large FOV (for example, a 110° diagonal), and the lens is customized according to security specifications to ensure no distortion (required for face recognition).
  • the infrared camera collects infrared grayscale images in two modes:
  • 2D pure infrared image collection, with fill light provided by a narrow-band infrared fill light; and
  • 3D depth (Depth) image collection, using specially coded structured light.
  • Both modes use the same Infrared Camera for image collection, so the two parts need to be processed in a time-shared manner; that is, the two processes are independent, and it is not possible to collect 2D pure infrared images and 3D depth images at the same time.
  • When the 2D pure infrared image acquisition condition is triggered, it is confirmed that the IR structured light projector (IR Laser Projector) is turned off, the custom-band (850 nm / 940 nm) infrared fill light is turned on, and collection of 2D infrared face images starts.
  • When the 3D depth information image acquisition condition is triggered, it is confirmed that the custom-band (850 nm / 940 nm) infrared fill light is turned off, the IR structured light projector (IR Laser Projector) is turned on, and collection of face images with 3D depth information starts.
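  • The time-shared switching between the two modes can be pictured as a small mutually exclusive controller, sketched below; the class and method names are hypothetical, and only the on/off interlock mirrors the description above:

```python
class IrAcquisitionController:
    """Mutually exclusive control of the infrared fill light (2D IR mode)
    and the IR structured light projector (3D depth mode)."""

    def __init__(self):
        self.fill_light_on = False   # custom-band (850 nm / 940 nm) fill light
        self.projector_on = False    # IR structured light projector

    def start_2d_infrared(self):
        self.projector_on = False    # confirm the projector is off first
        self.fill_light_on = True    # then enable the fill light
        return "collecting 2D infrared face images"

    def start_3d_depth(self):
        self.fill_light_on = False   # confirm the fill light is off first
        self.projector_on = True     # then enable the structured light projector
        return "collecting 3D depth face images"

ctrl = IrAcquisitionController()
ctrl.start_2d_infrared()
ctrl.start_3d_depth()
assert not (ctrl.fill_light_on and ctrl.projector_on)  # never both at once
```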
  • the IR structured light projector is a light source with a special structure, such as discrete light spots, fringe light, or coded structured light, used to project onto the object or plane to be detected; when the coded structured light is projected onto surfaces at different depths, the projected pattern deforms accordingly, which allows the depth information to be recovered.
  • the dedicated ISP processes the RGB and grayscale images, for example lens distortion adjustment (LSC), automatic white balance (AWB), automatic exposure (AE), wide dynamic range (WDR), color correction matrix (CCM) adjustment, and 2D noise reduction, optimizing and formatting the RGB and grayscale images (converting the RAW format to the final YUV format).
  • ISP stands for image signal processing and SOC for System-on-a-Chip; together they form the processing chip.
  • the infrared fill light mainly provides infrared fill light for the face (for example at 850 nm and 940 nm) to ensure uniform brightness of the face under different light source scenes.
  • the initial configuration parameters are: the RGB sensor is turned on by default, while the MONO sensor, the infrared fill light, and the IR structured light emitter are turned off, so that only RGB image data is collected in each frame.
  • Step 604 Detect whether there is a human face in each frame of RGB image;
  • if yes, go to step 606; if no, continue to collect RGB image data.
  • the face recognition technology is a relatively mature technology, therefore, the face recognition based on RGB images will not be repeated here.
  • Step 606 Turn on the infrared fill light and the black-and-white image sensor MONO Sensor to collect infrared image data while collecting RGB image data in each frame;
  • this embodiment further integrates the output formats of RGB image data and infrared image data, that is, each frame of data includes both RGB image data and infrared image data. Taking 1080P as an example, the specific single-frame format is shown in FIG. 10.
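  • One way to picture the combined per-frame output (RGB data plus infrared data in every frame) is the hedged sketch below; the exact byte layout of FIG. 10 is not reproduced here, and the field names and the YUV420 assumption are illustrative only:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CombinedFrame:
    """One output frame carrying both RGB (YUV) and infrared grayscale data."""
    rgb_yuv: np.ndarray   # e.g. YUV420 data for a 1920x1080 color image
    ir_gray: np.ndarray   # 1920x1080 single-channel infrared image
    frame_id: int

# Assumed 1080P sizes; YUV420 uses 1.5 bytes per pixel.
frame = CombinedFrame(
    rgb_yuv=np.zeros(int(1920 * 1080 * 1.5), dtype=np.uint8),
    ir_gray=np.zeros((1080, 1920), dtype=np.uint8),
    frame_id=0,
)
```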
  • Step 608 Perform face feature extraction on the infrared image and the RGB image to determine whether the face quality meets the recognition feature extraction requirements;
  • If yes, perform step 610; if not, perform compensation processing on the face image and perform step 608 again; if the recognition feature extraction requirements are still not met after multiple rounds of compensation, return to step 606 to reacquire the infrared image and the RGB image;
  • Step 610 Perform face recognition processing on the face image to determine the user's identity, and then conduct the related business processing; synchronously, turn off the infrared fill light so that the next round returns to step 604;
  • Step 612 Based on the security required by the service requested by the user, determine whether a living body test is required;
  • If yes, go to step 614; if no, the process ends.
  • Step 614 Turn on the IR structure light coding transmitter, and the MONO Sensor collects the depth image and performs 3D calculation;
  • Step 616 Cyclically collect the grayscale image of each frame of RGB + Depth and output a real-time depth map
  • Step 618 Based on the depth map, determine whether to pass the living body detection
  • If yes, go to step 620; if no, go back to step 616;
  • Step 620 Transact the related business; synchronously, turn off the IR structured light coding transmitter so that the next round returns to step 604;
  • In summary, this embodiment compensates the RGB image with the infrared image to ensure uniform image brightness, solving the problem of uneven exposure of the face (such as a half-lit, half-shadowed face) when the RGB image is captured under complex lighting; it compensates the infrared image with the RGB image to solve face occlusion caused by reflective objects such as glasses, ensuring an RGB + IR redundancy strategy; and when performing liveness detection in business scenarios with high security requirements such as payment, the MONO sensor also collects the depth information (point cloud image) returned by the structured light projected by the IR coding emitter, ensuring safety.
  • FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present specification. Referring to FIG. 11, it may specifically include: an acquisition module 111, an evaluation module 112, a determination module 113, and a compensation module 114, where:
  • the obtaining module 111 is used to obtain a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
  • the evaluation module 112 is configured to perform quality evaluation processing on the first image
  • the determining module 113 is configured to determine a second area in the second image corresponding to the first area if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold;
  • the compensation module 114 is configured to compensate the first area based on the image data of the second area.
  • the evaluation module 112 is specifically used for:
  • determining quality-related parameter distribution data of the first image; determining quality evaluation value distribution data of the first image based on the quality-related parameter distribution data; and determining, based on the quality evaluation value distribution data, the area in the first image whose quality evaluation value is less than the preset quality evaluation threshold.
  • the quality-related parameters include one or more of clarity, resolution, and brightness.
  • When the first image is an RGB image and the second image is an infrared image, the compensation module 114 is specifically used to:
  • if the brightness of the first area of the RGB image is less than a preset brightness threshold, compensate the brightness of the first area of the RGB image based on the image data of the second area of the infrared image; or,
  • if the sharpness of the first area of the RGB image is less than a preset sharpness threshold, compensate the sharpness of the first area of the RGB image based on the image data of the second area of the infrared image.
  • When the first image is an infrared image and the second image is an RGB image, the compensation module 114 is specifically used to:
  • the image data of the target object in the first area of the infrared image is compensated based on the image data of the second area of the RGB image; or,
  • compensate the resolution of the first area of the infrared image based on the image data of the second area of the RGB image.
  • the compensation module 114 is specifically used for:
  • the first image and the second image are collected and registered by a binocular camera.
  • the target object includes a human face
  • the first image is a first human face image
  • the second image is a second human face image.
  • the obtaining module 111 is specifically used for:
  • Collect the first image of the target object; when it is determined that a human face is present in the first image, synchronously acquire the first face image and the second face image of the target object.
  • the device further includes:
  • the face recognition module is used to recognize the compensated first face image and determine the identity information of the target object.
  • the device further includes:
  • the living body detection module is configured to determine that the safety of the business requirements of the target object reaches a preset safety threshold, and perform living body detection on the target object.
  • the living body detection module is specifically used for:
  • the device may specifically include: an acquisition module 121, an evaluation module 122, a determination module 123, a compensation module 124, and a recognition module 125, where:
  • the obtaining module 121 is used to obtain a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
  • the evaluation module 122 is configured to perform quality evaluation processing on the first face image
  • the determining module 123 is configured to determine a second area in the second face image corresponding to the first area if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold;
  • the compensation module 124 is configured to compensate the first area based on the image data of the second area
  • the recognition module 125 is used to recognize the compensated first face image.
  • the evaluation module 122 is also used to:
  • the identification module 125 is specifically used for:
  • When either the first face image or the second face image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, improving the overall quality of the face image;
  • in addition, liveness detection is performed on the user based on the depth face image, improving the security of business handling.
  • FIG. 7 is a schematic structural diagram of hardware and a system provided by an embodiment of the present specification.
  • the face image recognition device provided by this embodiment will be described in detail below with reference to FIG. 7.
  • The device includes an RGB camera, a black-and-white camera, an infrared fill light, and a processing chip, where:
  • the RGB camera (Color Camera) is used to collect RGB face images of the target object
  • the infrared fill light is used to emit infrared light to the face of the target object;
  • the black and white camera (Infrared Camera) is used to collect infrared face images of the target object under infrared illumination conditions;
  • the processing chip (for example: an image signal processing unit ISP + integrated circuit SOC) is configured to use one of the RGB face image and the infrared face image as a first face image and the other as a second face image, and to execute the steps of the method described in any one of the embodiments corresponding to FIG. 2, FIG. 5, and FIG. 6 above.
  • the RGB camera and the black-and-white camera belong to the same binocular camera.
  • the RGB camera is in a normally-on state, and the infrared camera and the infrared fill light are in a normally-off state;
  • the processing chip is also used to determine that there is a face in the RGB face image collected by the RGB camera, and to wake up the infrared camera and the infrared fill light so that the infrared camera collects a face image of the target object in synchronization with the RGB camera.
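  • A minimal sketch of this wake-on-face strategy (RGB camera always on, infrared camera and fill light woken only when a face appears) is given below; it uses an OpenCV Haar cascade purely as a stand-in face detector, which is an assumption, since the disclosure does not name a detection algorithm:

```python
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def should_wake_infrared(rgb_frame: np.ndarray) -> bool:
    """Return True when a face is present in the RGB frame, i.e. when the
    normally-off infrared camera and infrared fill light should be woken up."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```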
  • the processing chip is also used to turn off the infrared fill light when identifying the identity information of the target object.
  • the device further includes: a structured light emitter in a normally closed state;
  • the processing chip is also used to wake up the structured light emitter when it is determined that liveness detection needs to be performed on the target object;
  • the structured light emitter is configured to emit structured light to the face of the target object for the black-and-white camera to collect the depth face image of the target object under structured light illumination conditions;
  • the processing chip is also used to turn off the structured light emitter when it is determined that the target object passes live detection based on the depth face image.
  • In this embodiment, the startup/shutdown timing of each hardware device is reorganized across the processes of face detection, quality judgment, feature extraction, and comparison/recognition to ensure an optimal face recognition strategy in various environments.
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present specification.
  • the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory; of course, it may also include other hardware required by the business.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to form an image processing device at a logical level.
  • this application does not exclude other implementations, such as a logic device or a combination of software and hardware, etc., that is to say, the execution body of the following processing flow is not limited to each logical unit, it can also be Hardware or logic device.
  • the network interface, processor and memory can be connected to each other via a bus system.
  • the bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, etc.
  • the bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one bidirectional arrow is used in FIG. 13, but it does not mean that there is only one bus or one type of bus.
  • the memory is used to store programs.
  • the program may include program code, and the program code includes computer operation instructions.
  • the memory may include read-only memory and random access memory, and provide instructions and data to the processor.
  • the memory may include a high-speed random-access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory.
  • the processor is used to execute the program stored in the memory and specifically execute:
  • the first area is compensated.
  • the method executed by the image processing apparatus or the master node disclosed in the embodiment shown in FIG. 11 of the present application may be applied to a processor, or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware decoding processor, or may be executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the image processing apparatus can also execute the methods of FIGS. 2 to 3 and 6 and implement the method executed by the manager node.
  • The embodiments of the present application also provide a computer-readable storage medium that stores one or more programs, the one or more programs including instructions;
  • when the instructions are executed by an electronic device including multiple application programs, the electronic device is caused to execute the image processing method provided in the embodiments corresponding to FIG. 2, FIG. 3, and FIG. 6.
  • FIG. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present specification.
  • the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory; of course, it may also include other hardware required by the business.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to form a face image recognition device at a logical level.
  • this application does not exclude other implementations, such as a logic device or a combination of software and hardware, etc., that is to say, the execution body of the following processing flow is not limited to each logical unit, it can also be Hardware or logic device.
  • the network interface, processor and memory can be connected to each other via a bus system.
  • the bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, etc.
  • the bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one bidirectional arrow is used in FIG. 14, but it does not mean that there is only one bus or one type of bus.
  • the memory is used to store programs.
  • the program may include program code, and the program code includes computer operation instructions.
  • the memory may include read-only memory and random access memory, and provide instructions and data to the processor.
  • the memory may include a high-speed random-access memory (RAM), and may also include a non-volatile memory, for example, at least one disk memory.
  • the processor is used to execute the program stored in the memory and specifically execute:
  • the first area is compensated, and the compensated first face image is identified.
  • the method performed by the face image recognition device or the master node disclosed in the embodiment shown in FIG. 12 of the present application may be applied to a processor, or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • the aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application may be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied and executed by a hardware decoding processor, or may be executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • the face image recognition device can also execute the methods of FIGS. 5 to 6 and implement the method executed by the manager node.
  • The embodiments of the present application also provide a computer-readable storage medium that stores one or more programs, the one or more programs including instructions;
  • when the instructions are executed by an electronic device including multiple application programs, the electronic device is caused to execute the face image recognition method provided in the embodiments corresponding to FIG. 5 and FIG. 6.
  • the embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device
  • thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • the computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
  • the memory may include non-permanent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology.
  • the information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, read-only compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices.
  • As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.

Abstract

Disclosed are an image processing and face image identification method, apparatus and device. The method comprises: acquiring a first image and a second image of a target object under different lighting conditions; performing quality evaluation processing on one of the images; if there is a first region with a quality evaluation value less than a pre-set quality evaluation threshold value, determining a second region, in the other image, corresponding to the first region; and compensating the first region based on image data of the second region.

Description

Image processing and face image recognition method, device and equipment
This application claims priority to Chinese patent application No. 201811222689.7, filed on October 19, 2018 and titled "Image Processing and Face Image Recognition Method, Device and Equipment", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular, to an image processing and face image recognition method, device, and equipment.
Background
Image quality evaluation is one of the basic techniques in image processing. It mainly analyzes and studies the characteristics of the image and then evaluates the quality of the image.
Currently, when the quality evaluation value of an image of an object does not meet the standard, the image is generally fused with other types of images of the object to improve its quality evaluation value.
Therefore, a more reliable solution is needed.
Summary of the Invention
The embodiments of the present specification provide an image processing method for optimizing image quality.
An embodiment of this specification also provides an image processing method, including:
acquiring a first image and a second image of a target object, the first image and the second image being images collected under different lighting conditions;
performing quality evaluation processing on the first image;
if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determining a second area in the second image corresponding to the first area;
compensating the first area based on the image data of the second area.
An embodiment of the present specification also provides a face image recognition method, including:
acquiring a first face image and a second face image of a target object, the first face image and the second face image being images collected under different lighting conditions;
performing quality evaluation processing on the first face image;
if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determining a second area in the second face image corresponding to the first area;
compensating the first area based on the image data of the second area, and identifying the compensated first face image.
本说明书实施例还提供一种图像处理装置,包括:An embodiment of this specification also provides an image processing device, including:
获取模块,用于获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;An obtaining module, configured to obtain a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
评价模块,用于对所述第一图像进行质量评价处理;An evaluation module, configured to perform quality evaluation processing on the first image;
确定模块,用于若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二图像中与所述第一区域对应的第二区域;A determining module, configured to determine a second area in the second image corresponding to the first area if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold;
补偿模块,用于基于所述第二区域的图像数据,补偿所述第一区域。The compensation module is used to compensate the first area based on the image data of the second area.
本说明书实施例还提供一种人脸图像识别装置,包括:An embodiment of this specification also provides a face image recognition device, including:
获取模块,用于获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;An obtaining module, configured to obtain a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
评价模块,用于对所述第一人脸图像进行质量评价处理;An evaluation module, configured to perform quality evaluation processing on the first face image;
确定模块,用于若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域,则确定所述第二人脸图像中与所述第一区域对应的第二区域;A determining module, configured to determine a second area in the second face image corresponding to the first area if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold ;
补偿模块,用于基于所述第二区域的图像数据,补偿所述第一区域,并识别补偿后的所述第一人脸图像。The compensation module is configured to compensate the first area based on the image data of the second area and identify the compensated first face image.
本说明书实施例还提供一种人脸图像识别装置,包括:RGB摄像头、黑白摄像头、红外补光灯以及处理芯片;其中:An embodiment of the present specification also provides a face image recognition device, including: an RGB camera, a black and white camera, an infrared fill light, and a processing chip; wherein:
所述RGB摄像头,用于采集目标对象的RGB人脸图像;The RGB camera is used to collect RGB face images of the target object;
所述红外补光灯,用于对所述目标对象的人脸发射红外光;The infrared fill light is used to emit infrared light to the face of the target object;
所述黑白摄像头,用于采集红外光照条件下的所述目标对象的红外人脸图像;The black-and-white camera is used to collect infrared face images of the target object under infrared lighting conditions;
所述处理芯片，用于将所述RGB人脸图像和所述红外人脸图像中的一个作为第一人脸图像，另一个作为第二人脸图像，并执行如权利要求15至17中任一项所述的方法的步骤。The processing chip is configured to use one of the RGB face image and the infrared face image as the first face image and the other as the second face image, and to perform the steps of the method according to any one of claims 15 to 17.
本说明书实施例还提供一种电子设备,包括:An embodiment of this specification also provides an electronic device, including:
处理器;以及Processor; and
被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器执行如上述图像处理方法或人脸图像识别方法的步骤。A memory arranged to store computer-executable instructions, which when executed, causes the processor to perform the steps of the image processing method or face image recognition method as described above.
本说明书实施例还提供一种计算机可读存储介质，所述计算机可读存储介质上存储有计算机程序，所述计算机程序被处理器执行时实现如上述图像处理方法或人脸图像识别方法的步骤。An embodiment of this specification also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image processing method or the face image recognition method described above.
本说明书实施例采用的上述至少一个技术方案能够达到以下有益效果:The above at least one technical solution adopted by the embodiments of the present specification can achieve the following beneficial effects:
通过获取同一目标对象的不同光照条件下采集的第一图像和第二图像，并对其进行区域性地质量评价，以在检测到第一图像和第二图像中的任一一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高图像整体质量。与现有技术中融合图像以提高图像质量的方案相比，本方案能定位需要进行质量补偿的区域，并区域性地进行图像质量补偿，从而实现简化图像质量补偿的目的。By acquiring a first image and a second image of the same target object collected under different lighting conditions and performing region-level quality evaluation on them, when either of the two images is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the second area in the other image that corresponds to the first area is used to compensate the first area, so as to improve the overall image quality. Compared with the prior-art solution of fusing images to improve image quality, this solution can locate the areas that require quality compensation and perform image quality compensation region by region, thereby simplifying image quality compensation.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:The drawings described here are used to provide a further understanding of the present application and form a part of the present application. The schematic embodiments and descriptions of the present application are used to explain the present application and do not constitute an undue limitation on the present application. In the drawings:
图1a为本说明书提供的一种应用场景的示意图;FIG. 1a is a schematic diagram of an application scenario provided by this specification;
图1b为本说明书提供的另一种应用场景的示意图;FIG. 1b is a schematic diagram of another application scenario provided by this specification;
图2为本说明书一实施例提供的一种图像处理方法的流程示意图;2 is a schematic flowchart of an image processing method according to an embodiment of the present specification;
图3为本说明书提供的质量评价步骤的流程示意图;FIG. 3 is a schematic flowchart of the quality evaluation steps provided by this specification;
图4为本说明书提供的图像的亮度分布数据的示意图;4 is a schematic diagram of brightness distribution data of an image provided by the specification;
图5为本说明书一实施例提供的一种人脸图像识别方法的流程示意图;FIG. 5 is a schematic flowchart of a face image recognition method according to an embodiment of the present specification;
图6为本说明书另一实施例提供的一种人脸图像识别方法的流程示意图;6 is a schematic flowchart of a face image recognition method according to another embodiment of the present specification;
图7为本说明书一实施例提供的硬件及系统的结构示意图;7 is a schematic structural diagram of hardware and a system provided by an embodiment of the present specification;
图8为本说明书一实施例提供的黑白摄像头不同工作模式的示意图;8 is a schematic diagram of different working modes of a black and white camera provided by an embodiment of the present specification;
图9a为本说明书一实施例提供的2D红外输出每帧的数据格式的示意图;9a is a schematic diagram of the data format of each frame of the 2D infrared output provided by an embodiment of the present specification;
图9b为本说明书一实施例提供的3D深度输出每帧的数据格式的示意图;9b is a schematic diagram of the data format of each frame of 3D depth output provided by an embodiment of the present specification;
图10为本说明书一实施例提供的RGB和红外输出的每帧的数据格式的示意图;10 is a schematic diagram of the data format of each frame of RGB and infrared output provided by an embodiment of the present specification;
图11为本说明书一实施例提供的一种图像处理装置的结构示意图;11 is a schematic structural diagram of an image processing device according to an embodiment of the present specification;
图12为本说明书一实施例提供的一种人脸图像识别装置的结构示意图;12 is a schematic structural diagram of a face image recognition device according to an embodiment of the present specification;
图13为本说明书一实施例提供的一种电子设备的结构示意图;13 is a schematic structural diagram of an electronic device according to an embodiment of the present specification;
图14为本说明书另一实施例提供的一种电子设备的结构示意图。14 is a schematic structural diagram of an electronic device according to another embodiment of the present specification.
具体实施方式DETAILED DESCRIPTION
为使本申请的目的、技术方案和优点更加清楚,下面将结合本申请具体实施例及相应的附图对本申请技术方案进行清楚、完整地描述。显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。In order to make the purpose, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be described clearly and completely in conjunction with specific embodiments of the present application and corresponding drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, but not all the embodiments. Based on the embodiments in this application, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of this application.
正如背景技术部分陈述的，现有技术一般是通过预处理-配准-图像融合等步骤，结合不同光照条件下的图像采集优点，防止暗光、遮挡等情况下对图像质量的影响，进而为高安全性要求的业务场景提供图像处理支持。但由于其中的预处理、配准、图像融合等步骤都是基于后期算法进行的，因此导致算法复杂度非常大。As stated in the background section, the prior art generally combines the advantages of image acquisition under different lighting conditions through steps such as preprocessing, registration and image fusion, so as to prevent dark light, occlusion and similar conditions from degrading the image quality, and thereby provides image processing support for business scenarios with high security requirements. However, since the preprocessing, registration and image fusion steps are all performed by post-processing algorithms, the algorithm complexity is very high.
基于此，本发明提供一种图像处理方法，通过采集同一目标对象的不同光照条件下的第一图像和第二图像，并对其进行区域性地质量评价，以在检测到第一图像和第二图像中的一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高图像整体质量。与现有技术相比，本方案能定位需要进行质量补偿的区域，进而有针对性地进行质量补偿，从而实现简化图像质量补偿的目的。Based on this, the present invention provides an image processing method: a first image and a second image of the same target object are collected under different lighting conditions and evaluated region by region, so that when one of the first image and the second image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, thereby improving the overall image quality. Compared with the prior art, this solution can locate the areas that require quality compensation and then perform quality compensation in a targeted manner, thereby simplifying image quality compensation.
其中,图像质量评价是图像处理中的基本技术之一,主要通过对图像进行特性分析研究,然后评估出图像优劣(图像失真程度)。Among them, image quality evaluation is one of the basic techniques in image processing, mainly by analyzing the characteristics of the image, and then assessing the quality of the image (the degree of image distortion).
为便于描述，下面将第一图像具体为RGB图像，第二图像具体为红外图像，或者，将第一图像具体为红外图像，第二图像具体为RGB图像，对本发明的应用场景进行示例性说明。For ease of description, the application scenarios of the present invention are described below by way of example, with the first image specifically being an RGB image and the second image specifically being an infrared image, or with the first image specifically being an infrared image and the second image specifically being an RGB image.
参见图1a,其一种应用场景可以为:Referring to FIG. 1a, one application scenario may be:
图像采集装置采集对象的RGB图像和红外图像,并输入至图像处理终端;The image acquisition device collects the RGB image and infrared image of the object and inputs it to the image processing terminal;
图像处理终端对RGB图像和红外图像进行图像质量评价，以确定RGB图像和红外图像中的一个是否存在质量评价值小于质量评价阈值的区域，若是，则使用另一幅图像的对应区域的图像数据进行补偿，得到补偿后的、质量评价值大于等于质量评价阈值的RGB图像和红外图像。The image processing terminal performs image quality evaluation on the RGB image and the infrared image to determine whether one of them contains an area whose quality evaluation value is less than the quality evaluation threshold; if so, the image data of the corresponding area of the other image is used for compensation, so as to obtain a compensated RGB image and infrared image whose quality evaluation values are greater than or equal to the quality evaluation threshold.
参见图1b,其另一种应用场景可以为:Referring to FIG. 1b, another application scenario may be:
图像采集装置采集用户的RGB人脸图像和红外人脸图像,并输入至图像处理终端;The image collection device collects the user's RGB face image and infrared face image, and inputs it to the image processing terminal;
图像处理终端对RGB人脸图像和红外人脸图像进行图像质量评价，以确定RGB人脸图像和红外人脸图像是否存在质量评价值小于质量评价阈值的区域，若是，则使用另一幅图像的对应区域的图像数据进行补偿，得到补偿后的、质量评价值大于等于质量评价阈值的RGB人脸图像和红外人脸图像，并输入至业务处理终端，否则直接将RGB人脸图像和红外人脸图像输入至业务处理终端；The image processing terminal performs image quality evaluation on the RGB face image and the infrared face image to determine whether either of them contains an area whose quality evaluation value is less than the quality evaluation threshold; if so, the image data of the corresponding area of the other image is used for compensation, and the compensated RGB face image and infrared face image whose quality evaluation values are greater than or equal to the quality evaluation threshold are input to the business processing terminal; otherwise, the RGB face image and the infrared face image are input to the business processing terminal directly;
业务处理终端基于补偿后的RGB人脸图像和/或红外人脸图像验证用户的身份信息及其授权信息;在验证通过后进行该用户请求的业务,并输出业务处理结果。The service processing terminal verifies the user's identity information and authorization information based on the compensated RGB face image and / or infrared face image; after the verification is passed, the service requested by the user is performed, and the service processing result is output.
其中，图像采集装置可以为与图像处理终端的一体机；图像处理终端和业务处理终端可以为一体机，其可以示例为应用于图像质量优化软件、银行业务柜台、线下门店、酒店房间门禁等等。图像处理终端和业务处理终端可以为PC端，也可以为移动终端，或者叫移动通信终端，是指可以在移动中使用的计算机设备，广义地讲包括手机、笔记本、平板电脑、POS机甚至包括车载电脑，但是大部分情况下是指手机或者具有多种应用功能的智能手机以及平板电脑。The image acquisition device may be integrated with the image processing terminal in one machine; the image processing terminal and the business processing terminal may also be an all-in-one machine, examples of which include image quality optimization software, bank counters, offline stores, hotel room access control, and so on. The image processing terminal and the business processing terminal may each be a PC or a mobile terminal. A mobile terminal, or mobile communication terminal, refers to a computer device that can be used while moving; broadly speaking, it includes mobile phones, notebooks, tablet computers, POS machines and even in-vehicle computers, but in most cases it refers to mobile phones, or smartphones and tablets with multiple application functions.
以下结合附图,详细说明本申请各实施例提供的技术方案。The technical solutions provided by the embodiments of the present application will be described in detail below in conjunction with the drawings.
图2为本说明书一实施例提供的一种图像处理方法的流程示意图,该方法可由图1a中的图像处理终端执行,参见图2,具体可以包括如下步骤:FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present specification. The method may be executed by the image processing terminal in FIG. 1a. Referring to FIG. 2, the method may specifically include the following steps:
步骤220、获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;Step 220: Acquire a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
其中,第一图像和第二图像优选为同步采集的图像。The first image and the second image are preferably images acquired synchronously.
为便于描述,下面将本实施例中第一图像和第二图像中的一个示例为RGB图像,另一个示例为红外图像。For ease of description, one example of the first image and the second image in this embodiment is an RGB image, and the other example is an infrared image.
相应地,步骤220的一种实现方式可以为:Accordingly, an implementation manner of step 220 may be:
使用多目摄像头,同步采集目标对象的RGB图像和红外图像。Use multi-eye camera to collect RGB and infrared images of the target object simultaneously.
其中，多目摄像头可以为双目摄像头，也可以为三目摄像头，包括彩色摄像部分和黑白摄像部分。RGB图像是指在可见光光照条件下彩色摄像部分采集的图像，其是用RGB色彩模式来显示的图像，RGB色彩模式是工业界的一种颜色标准，是通过对红（R）、绿（G）、蓝（B）三个颜色通道的变化以及它们相互之间的叠加来得到各式各样的颜色的，RGB即是代表红、绿、蓝三个通道的颜色；红外图像是指红外光照条件下黑白摄像部分采集的图像，其是获取物体红外光的强度而成的图像。The multi-eye camera may be a binocular camera or a trinocular camera, and includes a color imaging part and a black-and-white imaging part. An RGB image is an image collected by the color imaging part under visible-light conditions and displayed in the RGB color mode; the RGB color mode is an industry color standard in which a wide range of colors is obtained by varying and superimposing the red (R), green (G) and blue (B) color channels, RGB denoting the three channels red, green and blue. An infrared image is an image collected by the black-and-white imaging part under infrared lighting conditions, formed by capturing the intensity of the infrared light of the object.
步骤240、对所述第一图像进行质量评价处理;参见图3,其一种实现方式可以为:Step 240: Perform quality evaluation processing on the first image; referring to FIG. 3, an implementation manner thereof may be:
步骤320、确定所述第一图像的质量相关参数分布数据;Step 320: Determine the quality-related parameter distribution data of the first image;
其中，所述质量相关参数包括：清晰度、分辨率、亮度中的一个或多个；清晰度是指影像上各细部影纹及其边界的清晰程度；分辨率是指屏幕图像的精密度；亮度是指画面的明亮程度。The quality-related parameters include one or more of sharpness, resolution and brightness; sharpness refers to how clearly the fine details and their boundaries appear in the image; resolution refers to the precision of the screen image; brightness refers to how bright the picture is.
参见图4，其以RGB图像为例示出的第一图像中质量相关参数分布数据中的亮度分布数据，在该RGB图像中，由于光源部署问题，导致图像的亮度由左向右（401-406所对应的列）逐渐降低，亮度均匀值较低。Referring to FIG. 4, the brightness distribution data among the quality-related parameter distribution data of the first image is shown by taking an RGB image as an example. In this RGB image, due to the deployment of the light source, the brightness of the image gradually decreases from left to right (the columns corresponding to 401-406), and the brightness uniformity value is low.
其中,区域401可以为像素级别区域,亮度取值为像素的亮度;也可以为预划分的区域大小,亮度取值为该区域的亮度均匀值。The area 401 may be a pixel-level area, and the brightness value may be the brightness of the pixel; it may also be a pre-divided area size, and the brightness value may be a uniform brightness value of the area.
同理,可构建清晰度、分辨率的分布数据。Similarly, the distribution data of sharpness and resolution can be constructed.
步骤340、基于所述质量相关参数分布数据,确定所述第一图像的质量评价值分布数据;结合图4,具体可以示例为:Step 340: Based on the quality-related parameter distribution data, determine the quality evaluation value distribution data of the first image; with reference to FIG. 4, specific examples may be:
基于步骤320构建的各质量相关参数分布数据，可得到任一区域对应的亮度、清晰度、分辨率等等，然后将其与各质量相关参数对应的预设阈值进行对比，得到差值，进而基于各质量相关参数对应的差值及其预设权重，计算出该区域的质量评价值，以此类推得到其他区域的质量评价值，进而得到图像的质量评价值分布数据。Based on the distribution data of each quality-related parameter constructed in step 320, the brightness, sharpness, resolution and so on corresponding to any area can be obtained and compared with the preset threshold corresponding to each quality-related parameter to obtain a difference; the quality evaluation value of the area is then calculated based on the difference and the preset weight corresponding to each quality-related parameter, the quality evaluation values of the other areas are obtained in the same way, and the quality evaluation value distribution data of the image is thereby obtained.
另外，由于红外图像能够保证光线均匀，因此，在第一图像为红外图像，计算质量评价值分布数据时，可忽略亮度参数；在第一图像为RGB图像，计算质量评价值分布数据时，可将红外图像的亮度均匀值作为RGB图像的亮度的预设阈值。In addition, since an infrared image can ensure uniform lighting, the brightness parameter may be ignored when the first image is an infrared image and the quality evaluation value distribution data is calculated; when the first image is an RGB image and the quality evaluation value distribution data is calculated, the brightness uniformity value of the infrared image may be used as the preset threshold for the brightness of the RGB image.
步骤360、基于所述质量评价值分布数据,确定所述第一图像中质量评价值小于预设质量评价阈值的区域。结合图4,具体可以示例为:Step 360: Based on the quality evaluation value distribution data, determine an area in the first image whose quality evaluation value is less than a preset quality evaluation threshold. With reference to Figure 4, specific examples can be:
将图像中各区域的质量评价值与预设质量评价阈值进行对比,以确定其中质量评价值小于预设质量评价阈值的区域,并将该区域作为第一区域。The quality evaluation value of each area in the image is compared with a preset quality evaluation threshold to determine an area where the quality evaluation value is less than the preset quality evaluation threshold, and this area is used as the first area.
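For ease of understanding, a minimal illustrative sketch of the region-level quality evaluation described in steps 320 to 360 is given below in Python. The grid division, the use of mean brightness and Laplacian variance as the quality-related parameters, and all thresholds and weights are assumptions made for illustration only and do not limit the embodiment.

```python
# Illustrative sketch of steps 320-360: build per-region quality scores and locate
# regions whose score falls below an (assumed) quality evaluation threshold.
import cv2
import numpy as np

def quality_map(image_bgr, grid=(6, 6),
                brightness_thresh=90.0, sharpness_thresh=100.0,
                weights=(0.5, 0.5), quality_thresh=0.8):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    lap = cv2.Laplacian(gray, cv2.CV_32F)            # local contrast as a sharpness proxy
    h, w = gray.shape
    rows, cols = grid
    scores = np.zeros(grid, dtype=np.float32)
    for i in range(rows):
        for j in range(cols):
            ys = slice(i * h // rows, (i + 1) * h // rows)
            xs = slice(j * w // cols, (j + 1) * w // cols)
            brightness = gray[ys, xs].mean()
            sharpness = lap[ys, xs].var()
            # compare each parameter with its preset threshold, then combine with weights
            b_score = min(brightness / brightness_thresh, 1.0)
            s_score = min(sharpness / sharpness_thresh, 1.0)
            scores[i, j] = weights[0] * b_score + weights[1] * s_score
    first_areas = np.argwhere(scores < quality_thresh)  # candidate "first areas"
    return scores, first_areas
```

Regions returned in `first_areas` play the role of the first area described above; for an infrared first image the brightness term could simply be dropped, as noted in the preceding paragraph.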
步骤260、若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时，确定所述第二图像中与所述第一区域对应的第二区域；其一种实现方式可以为：Step 260: If there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second image corresponding to the first area; one implementation manner thereof may be:
S1、对红外图像和RGB图像进行图像配准处理;具体可以示例为:S1. Perform image registration processing on infrared images and RGB images; specific examples can be:
首先，对两幅图像进行特征提取处理，得到特征点，并通过进行相似性度量处理，找到匹配的特征点对；然后，通过匹配的特征点对得到图像空间坐标变换参数；最后，由坐标变换参数进行图像配准。First, feature extraction is performed on the two images to obtain feature points, and matching feature point pairs are found through similarity measurement; then, the image space coordinate transformation parameters are obtained from the matched feature point pairs; finally, image registration is performed according to the coordinate transformation parameters.
S2、基于配准得到的RGB图像和红外图像之间的空间坐标变换关系,确定与第一区域的坐标对应坐标的第二区域。S2. Based on the spatial coordinate transformation relationship between the RGB image and the infrared image obtained by the registration, determine a second area with coordinates corresponding to the coordinates of the first area.
即,若红外图像中存在第一区域时,则确定RGB图像中的第二区域;若RGB图像 中存在第一区域时,则确定红外图像中的第二区域。That is, if there is a first area in the infrared image, the second area in the RGB image is determined; if there is a first area in the RGB image, the second area in the infrared image is determined.
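The feature extraction, similarity measurement and coordinate transformation of S1, together with the region mapping of S2, may be sketched as follows. ORB features, brute-force Hamming matching and a RANSAC homography are merely one concrete choice used for illustration; the embodiment does not mandate any particular algorithm.

```python
# Illustrative sketch of S1 (registration) and S2 (mapping the first area's coordinates
# into the other image to obtain the second area).
import cv2
import numpy as np

def register(rgb_bgr, infrared_gray):
    """Estimate the homography mapping infrared-image coordinates to RGB-image coordinates."""
    rgb_gray = cv2.cvtColor(rgb_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(infrared_gray, None)
    kp2, des2 = orb.detectAndCompute(rgb_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def corresponding_region(H, first_area_corners):
    """Map the corner points of the first area through H to obtain the second area."""
    pts = np.float32(first_area_corners).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```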
在步骤260的另一种实现方式中,图像配准处理的步骤可以由多目摄像头完成,具体可以示例为:In another implementation manner of step 260, the steps of image registration processing may be completed by a multi-eye camera, and specific examples may be:
获取多目摄像头同步采集的RGB图像和红外图像；基于多目摄像头中RGB摄像部分和红外摄像部分之间的视差，对RGB图像和红外图像进行内部标定，保证RGB图像和红外图像的画面内容完全一致，以完成配准。Obtain the RGB image and the infrared image synchronously collected by the multi-eye camera; based on the parallax between the RGB imaging part and the infrared imaging part of the multi-eye camera, internally calibrate the RGB image and the infrared image so that the picture contents of the RGB image and the infrared image are completely consistent, thereby completing the registration.
步骤280、基于所述第二区域的图像数据,补偿所述第一区域。Step 280: Compensate the first area based on the image data of the second area.
若所述第一图像为RGB图像,所述第二图像为红外图像,其一种实现方式可以为:If the first image is an RGB image and the second image is an infrared image, one implementation manner may be:
若所述RGB图像的第一区域的亮度小于预设亮度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的亮度;或者,If the brightness of the first area of the RGB image is less than a preset brightness threshold, based on the image data of the second area of the infrared image, compensate the brightness of the first area of the RGB image; or,
若所述RGB图像的第一区域的清晰度小于预设清晰度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的清晰度。If the sharpness of the first area of the RGB image is less than a preset sharpness threshold, the sharpness of the first area of the RGB image is compensated based on the image data of the second area of the infrared image.
若所述第一图像为红外图像,所述第二图像为RGB图像,其另一种实现方式可以为:If the first image is an infrared image and the second image is an RGB image, another implementation manner may be:
若所述红外图像的第一区域为被遮挡区域,则基于所述RGB图像的第二区域的图像数据,补偿所述红外图像的第一区域的所述目标对象的图像数据;或者,If the first area of the infrared image is a blocked area, the image data of the target object in the first area of the infrared image is compensated based on the image data of the second area of the RGB image; or,
若所述红外图像的第一区域的分辨率小于预设分辨率阈值,则基于所述RGB图像的第二区域的图像数据,补偿所述红外图像的第一区域分辨率。If the resolution of the first region of the infrared image is less than a preset resolution threshold, the first region resolution of the infrared image is compensated based on the image data of the second region of the RGB image.
步骤280的又一种实现方式可以为:Another implementation of step 280 may be:
融合所述第一区域和所述第二区域的图像数据,并将融合后的图像数据作为补偿后的所述第一区域的图像数据。Fusing the image data of the first area and the second area, and using the fused image data as the compensated image data of the first area.
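As an illustrative sketch of the fusion-style compensation of step 280, the snippet below blends the co-located second-area data into the first area. It assumes the two images have already been registered into a common coordinate system and converted to the same number of channels, and the blending weight is an assumed value.

```python
# Illustrative sketch of step 280: compensate the first area with the image data of the
# corresponding second area (simple weighted fusion; alpha is an assumption).
import numpy as np

def compensate_region(first_img, second_img, region, alpha=0.5):
    """region = (y0, y1, x0, x1) in the shared coordinates of the registered images."""
    y0, y1, x0, x1 = region
    first = first_img.astype(np.float32)
    second = second_img.astype(np.float32)
    fused = alpha * first[y0:y1, x0:x1] + (1.0 - alpha) * second[y0:y1, x0:x1]
    out = first.copy()
    out[y0:y1, x0:x1] = fused                      # write the fused patch back
    return np.clip(out, 0, 255).astype(np.uint8)
```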
可见，本实施例通过获取同一目标对象的不同光照条件下采集的第一图像和第二图像，并对其进行区域性地质量评价，以在检测到第一图像和第二图像中的一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高图像整体质量。与现有技术中融合图像以提高图像质量的方案相比，能准确定位需要质量补偿的区域，并有针对性地进行图像数据的补偿，从而实现简化图像质量补偿的目的。It can be seen that, in this embodiment, a first image and a second image of the same target object collected under different lighting conditions are acquired and evaluated region by region, so that when one of the first image and the second image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, thereby improving the overall image quality. Compared with the prior-art solution of fusing images to improve image quality, the areas requiring quality compensation can be located accurately and the image data can be compensated in a targeted manner, thereby simplifying image quality compensation.
图5为本说明书一实施例提供的一种人脸图像识别方法的流程示意图，该方法可由图1b中的图像处理终端和业务处理终端共同执行，参见图5，具体可以包括如下步骤：FIG. 5 is a schematic flowchart of a face image recognition method provided by an embodiment of the present specification. The method may be jointly executed by the image processing terminal and the business processing terminal in FIG. 1b. Referring to FIG. 5, the method may specifically include the following steps:
步骤502、获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;Step 502: Acquire a first face image and a second face image of a target object, where the first face image and the second face image are images collected under different lighting conditions;
其中,第一人脸图像和第二人脸图像优选为同步采集的图像。Wherein, the first face image and the second face image are preferably images collected synchronously.
为便于描述,下面将本实施例中第一人脸图像和第二人脸图像中的一个示例为RGB图像,另一个示例为红外图像。相应地,需要说明的是,步骤502的一种实现方式可以为:For ease of description, one example of the first face image and the second face image in this embodiment is an RGB image, and the other example is an infrared image. Accordingly, it should be noted that an implementation manner of step 502 may be:
通过多目摄像头同步采集目标对象的RGB人脸图像和红外人脸图像;Simultaneously collect RGB face images and infrared face images of the target object through the multi-eye camera;
其中,双目摄像头的彩色摄像头和黑白摄像头均处于常开状态。Among them, the color camera and the black and white camera of the binocular camera are in the normally open state.
步骤502的另一种实现方式可以为:Another implementation manner of step 502 may be:
通过双目摄像头采集目标对象的RGB图像,并在确定所述RGB图像中存在人脸时,同步采集所述目标对象的RGB人脸图像和红外人脸图像;Collect the RGB image of the target object through a binocular camera, and when it is determined that there is a human face in the RGB image, simultaneously collect the RGB facial image and the infrared facial image of the target object;
其中,彩色摄像头处于常开状态,黑白摄像头处于常闭状态。Among them, the color camera is in a normally open state, and the black and white camera is in a normally closed state.
步骤504、对所述第一人脸图像进行质量评价处理;Step 504: Perform quality evaluation processing on the first face image;
步骤506、若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二人脸图像中与所述第一区域对应的第二区域;Step 506: If there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second face image corresponding to the first area;
其中,预设质量评价阈值可以为目标用户所要办理的业务要求的图像质量评价阈值。Wherein, the preset quality evaluation threshold may be an image quality evaluation threshold required by the target user to handle the business.
步骤508、基于所述第二区域的图像数据,补偿所述第一区域;Step 508: Compensate the first area based on the image data of the second area;
对于步骤504至步骤508，其分别与图2对应的实施例中的步骤240至步骤280相对应，其实现方式也对应相似，故，此处不再对其进行赘述。Steps 504 to 508 correspond to steps 240 to 280 in the embodiment corresponding to FIG. 2, respectively, and their implementation manners are correspondingly similar; therefore, they are not repeated here.
进一步地,在执行步骤510之前,还包括:检测补偿后的第一人脸图像能否满足人脸识别的要求的步骤,其一种实现方式可以为:Further, before step 510 is performed, it also includes the step of detecting whether the compensated first face image can meet the requirements of face recognition, and one implementation manner thereof may be:
对补偿后的所述第一人脸图像进行质量评价处理;Performing quality evaluation processing on the compensated first face image;
若补偿后的所述第一人脸图像的质量评价值达到人脸识别质量评价要求，则允许识别补偿后的所述第一人脸图像；否则，返回步骤502，直至得到质量评价值达标的补偿后的第一图像或补偿后的第二图像。If the quality evaluation value of the compensated first face image meets the face recognition quality evaluation requirement, recognition of the compensated first face image is allowed; otherwise, return to step 502 until a compensated first image or a compensated second image whose quality evaluation value meets the requirement is obtained.
步骤510、识别补偿后的所述第一人脸图像。其一种实现方式可以为:Step 510: Identify the compensated first face image. One of its implementation methods can be:
对补偿后的所述第一人脸图像进行人脸特征提取处理;Performing face feature extraction processing on the compensated first face image;
基于提取出的人脸特征,进行人脸识别处理,以确定所述目标对象的身份信息。Based on the extracted face features, face recognition processing is performed to determine the identity information of the target object.
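The embodiment does not prescribe a particular face feature extractor or matching rule. Purely as an illustration of the two sub-steps above, the sketch below assumes a hypothetical `extract_embedding` function standing in for whatever feature extractor is deployed, and matches the extracted features against an enrolled gallery by cosine similarity.

```python
# Illustrative sketch of step 510: extract face features from the compensated first face
# image and determine the identity of the target object. `extract_embedding` is a
# hypothetical placeholder, and the similarity threshold is an assumption.
import numpy as np

def identify(face_img, gallery, extract_embedding, threshold=0.6):
    """gallery: dict mapping identity -> enrolled embedding (1-D numpy array)."""
    query = extract_embedding(face_img)
    query = query / np.linalg.norm(query)
    best_id, best_score = None, -1.0
    for identity, emb in gallery.items():
        score = float(np.dot(query, emb / np.linalg.norm(emb)))  # cosine similarity
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```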
不难理解的是,确定所述目标对象(目标用户)的身份信息后,可基于目标对象的身份信息办理相关业务。It is not difficult to understand that, after determining the identity information of the target object (target user), related services can be handled based on the identity information of the target object.
进一步地,当确定所述目标对象的业务需求的安全度达到预设安全度阈值时,还可在办理相关业务之前,对所述目标对象进行活体检测。例如:目标对象请求办理的业务 为支付,由于支付业务要求的安全度比较高,因此,还需要进一步判断目标对象是否为活体。其一种实现方式可以为:Further, when it is determined that the security level of the business requirements of the target object reaches a preset security threshold, the target object may also be subjected to a living body detection before handling related business. For example, the business requested by the target object is payment. Because the security required by the payment business is relatively high, it is necessary to further determine whether the target object is a living body. One of its implementation methods can be:
确定所述目标对象的深度人脸图像,并基于深度人脸图像对所述目标对象进行活体检测。具体可以示例为:Determining the depth face image of the target object, and performing live detection on the target object based on the depth face image. Specific examples can be:
首先，选择最具有区分度的特征来训练分类器；然后，将深度人脸图像对应的3D人脸数据作为训练分类器的输入，得到训练分类器输出的目标对象是活体或非活体的结果。First, the most discriminative features are selected to train a classifier; then, the 3D face data corresponding to the depth face image is used as the input of the trained classifier, and the classifier outputs the result of whether the target object is a living body or a non-living body.
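As a hedged illustration of this classifier-based check, the snippet below trains a binary live/non-live classifier on feature vectors derived from the 3D face data; the choice of an SVM and of the probability threshold are assumptions, since the embodiment only states that a classifier is trained on the most discriminative features.

```python
# Illustrative sketch of depth-based living body detection with a trained classifier.
# Feature extraction from the 3D face data is assumed to be done elsewhere.
import numpy as np
from sklearn.svm import SVC

def train_liveness_classifier(features, labels):
    """features: (N, D) descriptors computed from 3D face data; labels: 1 = live, 0 = not live."""
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(features, labels)
    return clf

def is_live(clf, feature_vector, threshold=0.5):
    prob_live = clf.predict_proba(feature_vector.reshape(1, -1))[0, 1]
    return prob_live >= threshold
```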
另外，为进一步提升活体检测效果，还可将补偿后的第一人脸图像与深度人脸图像进行融合，并基于融合后的图像对所述目标对象进行活体检测。In addition, to further improve living body detection, the compensated first face image may also be fused with the depth face image, and living body detection may be performed on the target object based on the fused image.
活体检测步骤的另一种实现方式可以为:Another way to implement the living body detection step can be:
确定所述目标对象的活体动作,并基于所述活体动作确定所述目标对象是否为活体。Determine the living body motion of the target object, and determine whether the target object is a living body based on the living body motion.
其中,活体动作包括眨眼、摇头、张嘴等等。Among them, living body movements include blinking, shaking head, opening mouth, etc.
可见，本实施例通过获取对象的不同光照条件下的第一人脸图像和第二人脸图像，并确定其质量评价分布数据，以在检测到第一人脸图像和第二人脸图像中的任一一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高人脸图像的整体质量，进而提高人脸识别精度；而且，本实施例还基于深度人脸图像对用户进行活体检测，以提高业务办理安全性。It can be seen that, in this embodiment, a first face image and a second face image of the object under different lighting conditions are acquired and their quality evaluation distribution data is determined, so that when either of the two face images is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, thereby improving the overall quality of the face image and, in turn, the accuracy of face recognition; moreover, this embodiment also performs living body detection on the user based on the depth face image, so as to improve the security of business handling.
图6为本说明书另一实施例提供的一种人脸图像识别方法的流程示意图,参见图6,该方法具体可以包括如下步骤:FIG. 6 is a schematic flowchart of a face image recognition method according to another embodiment of the present specification. Referring to FIG. 6, the method may specifically include the following steps:
步骤602、初始化硬件及系统配置;Step 602: Initialize the hardware and system configuration;
结合图7,硬件及系统设计部分包括:RGB部分、黑白部分、IR结构光投射器、专用的ISP和SOC处理芯片、以及红外补光灯,其中:With reference to Fig. 7, the hardware and system design part includes: RGB part, black and white part, IR structured light projector, special ISP and SOC processing chip, and infrared fill light, in which:
RGB部分，即彩色摄像头Color Camera，包括：专用镜头+RGB图像传感器，用于采集正常的色彩图像，考虑到实际业务需求，使用对应规格的分辨率，例如：对于新零售场景的业务需求，使用1080P规格的分辨率(1920*1080)，专用镜头的视场角FOV较大(比如对角线110°)，按照安防规格进行定制，保证无畸变(人脸识别需要)。The RGB part, i.e. the color camera (Color Camera), includes a dedicated lens plus an RGB image sensor and is used to collect normal color images. Considering the actual business requirements, a resolution of the corresponding specification is used; for example, for the business requirements of new retail scenarios, a 1080P resolution (1920*1080) is used. The dedicated lens has a large field of view (for example, 110° diagonal) and is customized according to security specifications to ensure freedom from distortion (required for face recognition).
黑白部分，即红外摄像头Infrared Camera，包括：专用镜头+黑白图像传感器，用于采集灰度图像，考虑新零售场景的业务需求以及保证和RGB像素点对齐，同样使用了1080P规格的分辨率(1920*1080)，专用镜头的视场角FOV较大(比如对角线110°)，按照安防规格进行定制，保证无畸变(人脸识别需要)。The black-and-white part, i.e. the infrared camera (Infrared Camera), includes a dedicated lens plus a black-and-white image sensor and is used to collect grayscale images. Considering the business requirements of new retail scenarios and to ensure pixel alignment with the RGB part, the same 1080P resolution (1920*1080) is used; the dedicated lens likewise has a large field of view (for example, 110° diagonal) and is customized according to security specifications to ensure freedom from distortion (required for face recognition).
结合图8，需要说明的是，红外摄像头采集的是红外灰度图像，2D纯红外的图像采集(通过窄带红外补光灯进行补光)、3D深度Depth的图像采集(通过经特殊编码的结构光)都是用Infrared Camera进行图像采集，所以这两部分需要分时处理，即两个过程是独立的，不可以同时采集2D纯红外的图像和3D深度Depth图像。With reference to FIG. 8, it should be noted that the infrared camera collects infrared grayscale images; both 2D pure-infrared image acquisition (with fill light from the narrow-band infrared fill light) and 3D depth image acquisition (with specially coded structured light) use the Infrared Camera for image capture, so these two parts must be processed in a time-shared manner. That is, the two processes are independent, and 2D pure-infrared images and 3D depth images cannot be collected at the same time.
其中，2D纯红外的图像采集条件触发，确认关闭IR结构光投射器(IR Laser Projector)，打开定制波段(850nm/940nm)的红外补光灯，开始采集2D红外人脸图像；3D深度信息的图像采集条件触发，确认关闭定制波段(850nm/940nm)的红外补光灯，打开IR结构光投射器(IR Laser Projector)，开始采集3D深度信息的人脸图像。When the 2D pure-infrared image acquisition condition is triggered, it is confirmed that the IR structured light projector (IR Laser Projector) is turned off, the infrared fill light of the custom band (850nm/940nm) is turned on, and acquisition of 2D infrared face images starts; when the 3D depth information image acquisition condition is triggered, it is confirmed that the infrared fill light of the custom band (850nm/940nm) is turned off, the IR structured light projector (IR Laser Projector) is turned on, and acquisition of face images with 3D depth information starts.
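A simplified control sketch of this time-shared switching is given below. The driver objects and their enable()/disable() methods are hypothetical placeholders, since the actual hardware interface is not specified here; the sketch only illustrates that the fill light and the structured light projector are never on at the same time.

```python
# Illustrative sketch of time-shared capture control: 2D infrared capture and 3D depth
# capture are mutually exclusive because both use the same black-and-white (IR) sensor.
from enum import Enum

class CaptureMode(Enum):
    IR_2D = "2d_infrared"
    DEPTH_3D = "3d_depth"

def switch_mode(mode, ir_fill_light, ir_projector):
    """ir_fill_light / ir_projector: hypothetical driver objects exposing enable()/disable()."""
    if mode is CaptureMode.IR_2D:
        ir_projector.disable()   # structured-light projector off first
        ir_fill_light.enable()   # 850nm/940nm fill light on; frames carry 2D IR data
    else:
        ir_fill_light.disable()
        ir_projector.enable()    # coded structured light on; frames carry depth data
```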
而且，鉴于不可以同时采集2D纯红外的图像和3D深度Depth图像，所以在最后输出时也要相应进行处理，即在采集2D纯红外的图像流程时输出每帧的灰度是2D红外数据，具体输出格式参见图9a；在采集3D深度信息的图像流程时输出每帧的灰度是3D深度数据，具体输出格式参见图9b。图9a和图9b中“Y”表示明亮度，“U”表示色度，“V”表示浓度，h示例为1080P。Moreover, since 2D pure-infrared images and 3D depth images cannot be collected at the same time, the final output must be handled accordingly: in the 2D pure-infrared acquisition flow, the grayscale portion of each output frame is 2D infrared data (see FIG. 9a for the specific output format); in the flow of acquiring 3D depth information, the grayscale portion of each output frame is 3D depth data (see FIG. 9b for the specific output format). In FIG. 9a and FIG. 9b, "Y" indicates luminance, "U" indicates chrominance, "V" indicates chroma density, and h is exemplified as 1080P.
IR结构光投射器是有特殊结构的光源，比如离散光斑、条纹光、编码结构光等，用于投射到待检测物体或平面上，当经过编码后的结构光投到不同深度的平面上时，光的纹路会发生变化，因此可以通过采集这些纹理变化，计算出不同的深度。The IR structured light projector is a light source with a special structure, such as discrete light spots, fringe light or coded structured light, which is projected onto the object or plane to be detected. When the coded structured light is projected onto planes at different depths, the light pattern changes, so different depths can be calculated by capturing these pattern changes.
专用的图像信号处理(Image Signal Processing,ISP)和SOC处理芯片(System-on-a-Chip)，除了用于计算深度信息，还会对RGB和灰度图像进行数字图像处理，比如：镜头畸变调整LSC、自动白平衡AWB、自动曝光AE、宽动态WDR、颜色矩阵调整CCM、2D降噪等，将RGB和灰度进行优化和格式处理(RAW格式转为最终的YUV格式)。The dedicated image signal processing (ISP) and SOC (System-on-a-Chip) processing chip is used not only for calculating depth information but also for digital image processing of the RGB and grayscale images, such as lens distortion adjustment (LSC), automatic white balance (AWB), automatic exposure (AE), wide dynamic range (WDR), color matrix adjustment (CCM) and 2D noise reduction, optimizing and format-converting the RGB and grayscale data (from the RAW format to the final YUV format).
红外补光灯，主要对IR红外人脸进行特殊红外波段(比如：850nm、940nm)的补光处理，以保证人脸在不同光源场景下的亮度均匀。The infrared fill light mainly performs fill-light processing on the IR infrared face in special infrared bands (for example, 850nm and 940nm), so as to ensure that the brightness of the face is uniform under different light source scenarios.
初始化的配置参数包括:默认开启RGB Sensor,关闭MONO Sensor、红外补光灯以及IR结构光发射器,以在每帧仅采集RGB图像数据。The initial configuration parameters include: RGB sensor is turned on by default, MONO sensor, infrared fill light and IR structure light emitter are turned off to collect only RGB image data in each frame.
步骤604、检测每帧RGB图像中是否存在人脸;Step 604: Detect whether there is a human face in each frame of RGB images;
若是,则执行步骤606;若否,则继续采集RGB图像数据。If yes, go to step 606; if no, continue to collect RGB image data.
其中,人脸识别技术为较为成熟的技术,故,此处不再赘述基于RGB图像的人脸识别。Among them, the face recognition technology is a relatively mature technology, therefore, the face recognition based on RGB images will not be repeated here.
步骤606、开启红外补光灯和黑白图像传感器MONO Sensor,以在每帧采集RGB图 像数据的同时采集红外图像数据;Step 606: Turn on the infrared fill light and the black-and-white image sensor MONO Sensor to collect infrared image data while collecting RGB image data in each frame;
优选的，为保证场景和调整触发后的图像灵活性，本实施例进一步地对RGB图像数据和红外图像数据的输出格式进行了融合，即每帧数据同时包括RGB图像数据和红外图像数据。以h为1080P为例，其具体单帧格式见图10。Preferably, to ensure flexibility of the scene and of the images after triggering of the adjustment, this embodiment further merges the output formats of the RGB image data and the infrared image data, that is, each frame of data includes both RGB image data and infrared image data. Taking h as 1080P as an example, the specific single-frame format is shown in FIG. 10.
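Purely as an illustration of consuming such a fused frame, the sketch below splits a raw frame buffer into its RGB and infrared parts. The assumed layout, a YUV420 RGB portion followed by a single-channel infrared plane of the same resolution, is a simplification and does not claim to reproduce the exact byte layout of FIG. 10.

```python
# Illustrative sketch of splitting one fused frame (assumed layout, see lead-in above).
import numpy as np

def split_fused_frame(buf, w=1920, h=1080):
    yuv_size = w * h * 3 // 2                                   # YUV420 RGB portion
    yuv = np.frombuffer(buf, dtype=np.uint8, count=yuv_size).reshape(h * 3 // 2, w)
    ir = np.frombuffer(buf, dtype=np.uint8, count=w * h, offset=yuv_size).reshape(h, w)
    return yuv, ir
```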
步骤608、对红外图像和RGB图像进行人脸特征提取以判断人脸质量是否达到识别特征提取要求;Step 608: Perform face feature extraction on the infrared image and the RGB image to determine whether the face quality meets the recognition feature extraction requirements;
若是,则执行步骤610;若否,则对人脸图像进行补偿处理,并再次执行步骤608,若多次补偿处理后仍然未达到识别特征提取要求,则返回步骤606,以重新采集红外图像和RGB图像;If yes, perform step 610; if not, perform compensation processing on the face image, and perform step 608 again. If the recognition feature extraction requirements are not met after multiple compensation processing, return to step 606 to reacquire infrared images and RGB image;
其中,不同的业务对应有不同的识别特征提取要求。Among them, different services have different identification feature extraction requirements.
步骤610、对人脸图像进行人脸识别处理,以确定用户身份,进而进行相关业务办理;关闭红外补光灯,以在下一轮返回步骤604;Step 610: Perform face recognition processing on the face image to determine the user's identity, and then conduct related business processing; turn off the infrared fill light to return to step 604 in the next round;
步骤612、基于用户请求的业务所需安全度,判断是否需要进行活体检测;Step 612: Based on the security required by the service requested by the user, determine whether a living body test is required;
若是,则执行步骤614;若否,则流程结束。If yes, go to step 614; if no, the process ends.
其中，对于支付、酒店门禁等安全要求较高的业务，可配置其需要进行活体检测；对于手机套餐办理、银行业务开通等安全要求较低的业务，可配置其不需要进行活体检测。For services with higher security requirements, such as payment and hotel access control, it can be configured that living body detection is required; for services with lower security requirements, such as handling mobile phone plans or opening banking services, it can be configured that living body detection is not required.
步骤614、开启IR结构光编码发射器,由MONO Sensor采集深度图像,并进行3D计算;Step 614: Turn on the IR structure light coding transmitter, and the MONO Sensor collects the depth image and performs 3D calculation;
步骤616、循环采集RGB+Depth每帧的灰度部分图像输出实时深度图;Step 616: Cyclically collect the grayscale image of each frame of RGB + Depth and output a real-time depth map;
步骤618、基于深度图判断是否通过活体检测;Step 618: Based on the depth map, determine whether to pass the living body detection;
若是,则执行步骤620;若否,则返回步骤616;If yes, go to step 620; if no, go back to step 616;
步骤620、办理相关业务；同步地，关闭IR结构光编码发射器，以在下一轮返回步骤604；Step 620: Handle the related business; synchronously, turn off the IR structured light coding emitter, so as to return to step 604 in the next round;
可见，本实施例通过用红外图像补偿RGB图像，以保证图像的亮度均匀，解决RGB受到各类复杂光照射时的人脸曝光不均，比如阴阳脸的问题；通过用RGB图像补偿红外图像，以解决有眼镜等反光物体导致的面部遮挡问题，保证了RGB+IR的冗余策略；在支付类高安全性要求的业务场景进行活体检测时，还通过MONO Sensor采集由IR编码结构光发射器投射返回的深度信息（点云图），从而保证安全性。It can be seen that, in this embodiment, the RGB image is compensated with the infrared image to ensure uniform brightness of the image and to solve the problem of uneven facial exposure of the RGB image under various complex illuminations, such as a half-lit "yin-yang" face; the infrared image is compensated with the RGB image to solve the facial occlusion caused by reflective objects such as glasses, so that an RGB+IR redundancy strategy is guaranteed; and when living body detection is performed in business scenarios with high security requirements such as payment, the MONO Sensor also collects the depth information (point cloud) returned from the projection of the IR coded structured light emitter, thereby ensuring security.
另外，对于上述方法实施方式，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本发明实施方式并不受所描述的动作顺序的限制，因为依据本发明实施方式，某些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施方式均属于优选实施方式，所涉及的动作并不一定是本发明实施方式所必须的。In addition, for simplicity of description, the above method embodiments are expressed as a series of action combinations; however, those skilled in the art should know that the embodiments of the present invention are not limited by the described sequence of actions, because, according to the embodiments of the present invention, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
图11为本说明书一实施例提供的一种图像处理装置的结构示意图,参见图11,具体可以包括:获取模块111、评价模块112、确定模块113和补偿模块114,其中:FIG. 11 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present specification. Referring to FIG. 11, it may specifically include: an acquisition module 111, an evaluation module 112, a determination module 113, and a compensation module 114, where:
获取模块111,用于获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;The obtaining module 111 is used to obtain a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
评价模块112,用于对所述第一图像进行质量评价处理;The evaluation module 112 is configured to perform quality evaluation processing on the first image;
确定模块113,用于若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二图像中与所述第一区域对应的第二区域;The determining module 113 is configured to determine a second area in the second image corresponding to the first area if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold;
补偿模块114,用于基于所述第二区域的图像数据,补偿所述第一区域。The compensation module 114 is configured to compensate the first area based on the image data of the second area.
可选的,评价模块112,具体用于:Optionally, the evaluation module 112 is specifically used for:
确定所述第一图像的质量相关参数分布数据；基于所述质量相关参数分布数据，确定所述第一图像的质量评价值分布数据；基于所述质量评价值分布数据，确定所述第一图像中质量评价值小于预设质量评价阈值的区域。Determining the quality-related parameter distribution data of the first image; determining the quality evaluation value distribution data of the first image based on the quality-related parameter distribution data; and determining, based on the quality evaluation value distribution data, the area in the first image whose quality evaluation value is less than the preset quality evaluation threshold.
其中,所述质量相关参数包括:清晰度、分辨率、亮度中的一个或多个。Wherein, the quality-related parameters include one or more of clarity, resolution, and brightness.
可选的,所述第一图像为RGB图像,所述第二图像为红外图像;补偿模块114,具体用于:Optionally, the first image is an RGB image, and the second image is an infrared image; the compensation module 114 is specifically used to:
若所述RGB图像的第一区域的亮度小于预设亮度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的亮度;或者,If the brightness of the first area of the RGB image is less than a preset brightness threshold, based on the image data of the second area of the infrared image, compensate the brightness of the first area of the RGB image; or,
若所述RGB图像的第一区域的清晰度小于预设清晰度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的清晰度。If the sharpness of the first area of the RGB image is less than a preset sharpness threshold, the sharpness of the first area of the RGB image is compensated based on the image data of the second area of the infrared image.
可选的,所述第一图像为红外图像,所述第二图像为RGB图像;补偿模块114,具体用于:Optionally, the first image is an infrared image, and the second image is an RGB image; the compensation module 114 is specifically used to:
若所述红外图像的第一区域为被遮挡区域,则基于所述RGB图像的第二区域的图像数据,补偿所述红外图像的第一区域的所述目标对象的图像数据;或者,If the first area of the infrared image is a blocked area, the image data of the target object in the first area of the infrared image is compensated based on the image data of the second area of the RGB image; or,
若所述红外图像的第一区域的分辨率小于预设分辨率阈值,则基于所述RGB图像的第二区域的图像数据,补偿所述红外图像的第一区域分辨率。If the resolution of the first region of the infrared image is less than a preset resolution threshold, the first region resolution of the infrared image is compensated based on the image data of the second region of the RGB image.
可选的,补偿模块114,具体用于:Optionally, the compensation module 114 is specifically used for:
融合所述第一区域和所述第二区域的图像数据,并将融合后的图像数据作为补偿后的所述第一区域的图像数据。Fusing the image data of the first area and the second area, and using the fused image data as the compensated image data of the first area.
可选的,所述第一图像和所述第二图像由双目摄像头采集并配准。Optionally, the first image and the second image are collected and registered by a binocular camera.
可选的,所述目标对象包括人脸,所述第一图像为第一人脸图像,所述第二图像为第二人脸图像。Optionally, the target object includes a human face, the first image is a first human face image, and the second image is a second human face image.
可选的,获取模块111,具体用于:Optionally, the obtaining module 111 is specifically used for:
采集目标对象的第一图像;确定所述第一图像中存在人脸时,同步采集所述目标对象的第一人脸图像和第二人脸图像。Collect the first image of the target object; when it is determined that there is a human face in the first image, synchronously acquire the first human face image and the second human face image of the target object.
可选的,装置还包括:Optionally, the device further includes:
人脸识别模块,用于识别补偿后的所述第一人脸图像,确定所述目标对象的身份信息。The face recognition module is used to recognize the compensated first face image and determine the identity information of the target object.
可选的,装置还包括:Optionally, the device further includes:
活体检测模块,用于确定所述目标对象的业务需求的安全度达到预设安全度阈值时,对所述目标对象进行活体检测。The living body detection module is configured to determine that the safety of the business requirements of the target object reaches a preset safety threshold, and perform living body detection on the target object.
可选的,所述活体检测模块,具体用于:Optionally, the living body detection module is specifically used for:
确定所述目标对象的深度人脸图像,并基于深度人脸图像对所述目标对象进行活体检测。Determining the depth face image of the target object, and performing live detection on the target object based on the depth face image.
可选的,所述活体检测模块,具体用于:Optionally, the living body detection module is specifically used for:
融合所述深度人脸图像和补偿后的所述第一人脸图像,并基于融合后的图像对所述目标对象进行活体检测。Fusing the deep face image and the compensated first face image, and performing live detection on the target object based on the fused image.
可选的,所述活体检测模块,具体用于:Optionally, the living body detection module is specifically used for:
确定所述目标对象的活体动作,并基于所述活体动作确定所述目标对象是否为活体。Determine the living body motion of the target object, and determine whether the target object is a living body based on the living body motion.
可见，本实施例通过获取同一目标对象的不同光照条件下采集的第一图像和第二图像，并对其进行区域性地质量评价，以在检测到第一图像和第二图像中的一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高图像整体质量。与现有技术中融合图像以提高图像质量的方案相比，能准确定位需要质量补偿的区域，并有针对性地进行图像数据的补偿，从而实现简化图像质量补偿的目的。It can be seen that, in this embodiment, a first image and a second image of the same target object collected under different lighting conditions are acquired and evaluated region by region, so that when one of the first image and the second image is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, thereby improving the overall image quality. Compared with the prior-art solution of fusing images to improve image quality, the areas requiring quality compensation can be located accurately and the image data can be compensated in a targeted manner, thereby simplifying image quality compensation.
图12为本说明书一实施例提供的一种人脸图像识别装置的结构示意图，参见图12，该装置具体可以包括：获取模块121、评价模块122、确定模块123、补偿模块124以及识别模块125，其中：FIG. 12 is a schematic structural diagram of a face image recognition device provided by an embodiment of the present specification. Referring to FIG. 12, the device may specifically include: an acquisition module 121, an evaluation module 122, a determination module 123, a compensation module 124 and a recognition module 125, wherein:
获取模块121,用于获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;The obtaining module 121 is used to obtain a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
评价模块122,用于对所述第一人脸图像进行质量评价处理;The evaluation module 122 is configured to perform quality evaluation processing on the first face image;
确定模块123，用于若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域，则确定所述第二人脸图像中与所述第一区域对应的第二区域；The determination module 123 is configured to, if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second face image corresponding to the first area;
补偿模块124,用于基于所述第二区域的图像数据,补偿所述第一区域;The compensation module 124 is configured to compensate the first area based on the image data of the second area;
识别模块125,用于识别补偿后的所述第一人脸图像。The recognition module 125 is used to recognize the compensated first face image.
可选的,评价模块122,还用于:Optionally, the evaluation module 122 is also used to:
对补偿后的所述第一人脸图像进行质量评价处理；若补偿后的所述第一人脸图像的质量评价值达到人脸识别质量评价要求，则允许识别补偿后的所述第一人脸图像。Performing quality evaluation processing on the compensated first face image; and, if the quality evaluation value of the compensated first face image meets the face recognition quality evaluation requirement, allowing recognition of the compensated first face image.
可选的,识别模块125,具体用于:Optionally, the identification module 125 is specifically used for:
对补偿后的所述第一人脸图像进行人脸特征提取处理;基于提取出的人脸特征,进行人脸识别处理,以确定所述目标对象的身份信息。Perform face feature extraction processing on the compensated first face image; perform face recognition processing based on the extracted face features to determine the identity information of the target object.
可见，本实施例通过获取对象的不同光照条件下的第一人脸图像和第二人脸图像，并确定其质量评价分布数据，以在检测到第一人脸图像和第二人脸图像中的任一一个存在质量评价值小于质量评价阈值的第一区域时，使用另一个中与第一区域相对应的第二区域的图像数据补偿该第一区域，以提高人脸图像的整体质量，进而提高人脸识别精度；而且，本实施例还基于深度人脸图像对用户进行活体检测，以提高业务办理安全性。It can be seen that, in this embodiment, a first face image and a second face image of the object under different lighting conditions are acquired and their quality evaluation distribution data is determined, so that when either of the two face images is detected to contain a first area whose quality evaluation value is less than the quality evaluation threshold, the image data of the corresponding second area in the other image is used to compensate the first area, thereby improving the overall quality of the face image and, in turn, the accuracy of face recognition; moreover, this embodiment also performs living body detection on the user based on the depth face image, so as to improve the security of business handling.
另外，对于上述装置实施方式而言，由于其与方法实施方式基本相似，所以描述的比较简单，相关之处参见方法实施方式的部分说明即可。而且，应当注意的是，在本发明的装置的各个部件中，根据其要实现的功能而对其中的部件进行了逻辑划分，但是，本发明不受限于此，可以根据需要对各个部件进行重新划分或者组合。In addition, since the above device embodiments are basically similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments. Moreover, it should be noted that the components of the device of the present invention are logically divided according to the functions to be implemented; however, the present invention is not limited thereto, and the components may be re-divided or combined as needed.
图7为本说明书一实施例提供的硬件及系统的结构示意图，下面结合图7，对本实施例提供的人脸图像识别装置进行详细说明，该装置具体可以包括：RGB摄像头、黑白摄像头、红外补光灯以及处理芯片；其中：FIG. 7 is a schematic structural diagram of the hardware and system provided by an embodiment of the present specification. The face image recognition device provided by this embodiment is described in detail below with reference to FIG. 7; the device may specifically include an RGB camera, a black-and-white camera, an infrared fill light and a processing chip, wherein:
所述RGB摄像头(Color Camera),用于采集目标对象的RGB人脸图像;The RGB camera (Color Camera) is used to collect RGB face images of the target object;
所述红外补光灯,用于对所述目标对象的人脸发射红外光;The infrared fill light is used to emit infrared light to the face of the target object;
所述黑白摄像头（Infrared Camera），用于采集红外光照条件下的所述目标对象的红外人脸图像；The black-and-white camera (Infrared Camera) is used to collect infrared face images of the target object under infrared illumination conditions;
所述处理芯片（例如：图像信号处理单元ISP+集成电路SOC），用于将所述RGB人脸图像和所述红外人脸图像中的一个作为第一人脸图像，另一个作为第二人脸图像，并执行如上述图2、图5、图6对应实施例中任一项所述的方法的步骤。The processing chip (for example, an image signal processing unit ISP plus an integrated circuit SOC) is configured to use one of the RGB face image and the infrared face image as the first face image and the other as the second face image, and to perform the steps of the method according to any one of the embodiments corresponding to FIG. 2, FIG. 5 and FIG. 6 above.
可选的,所述RGB摄像头和黑白摄像头属于同一双目摄像头。Optionally, the RGB camera and the black-and-white camera belong to the same binocular camera.
可选的，所述RGB摄像头处于常开状态，所述红外摄像头和所述红外补光灯处于常闭状态；Optionally, the RGB camera is in a normally-on state, and the infrared camera and the infrared fill light are in a normally-off state;
所述处理芯片，还用于确定所述RGB摄像头采集的RGB人脸图像中存在人脸时，唤醒所述红外摄像头和所述红外补光灯，以使所述红外摄像头与所述RGB摄像头同步采集所述目标对象的人脸图像。The processing chip is further configured to, when it is determined that there is a face in the RGB face image collected by the RGB camera, wake up the infrared camera and the infrared fill light, so that the infrared camera collects a face image of the target object synchronously with the RGB camera.
可选的,所述处理芯片,还用于识别出所述目标对象的身份信息时,关闭所述红外补光灯。Optionally, the processing chip is also used to turn off the infrared fill light when identifying the identity information of the target object.
可选的,装置还包括:处于常闭状态的结构光发射器;Optionally, the device further includes: a structured light emitter in a normally closed state;
所述处理芯片,还用于确定需要对所述目标对象进行活体检测时,唤醒所述结构光发射器;The processing chip is also used to determine that the structured light emitter needs to be awakened when it is necessary to perform a live detection on the target object;
所述结构光发射器,用于向所述目标对象的人脸发射结构光,以供所述黑白摄像头采集结构光光照条件下的所述目标对象的深度人脸图像;The structured light emitter is configured to emit structured light to the face of the target object for the black-and-white camera to collect the depth face image of the target object under structured light illumination conditions;
所述处理芯片,还用于基于所述深度人脸图像确定所述目标对象通过活体检测时,关闭所述结构光发射器。The processing chip is also used to turn off the structured light emitter when it is determined that the target object passes live detection based on the depth face image.
可见,本实施例通过定制硬件,在人脸检测、质量判断及特征提取、对比识别等流程对硬件的各器件的启动/关闭时机进行重构,以确保各类环境的最优人脸识别策略。It can be seen that in this embodiment, by customizing the hardware, the startup / shutdown timing of each device of the hardware is reconstructed in the processes of face detection, quality judgment, feature extraction, and comparison recognition to ensure the optimal face recognition strategy in various environments .
图13为本说明书一实施例提供的一种电子设备的结构示意图，参见图13，该电子设备包括处理器、内部总线、网络接口、内存以及非易失性存储器，当然还可能包括其他业务所需要的硬件。处理器从非易失性存储器中读取对应的计算机程序到内存中然后运行，在逻辑层面上形成图像处理装置。当然，除了软件实现方式之外，本申请并不排除其他实现方式，比如逻辑器件抑或软硬件结合的方式等等，也就是说以下处理流程的执行主体并不限定于各个逻辑单元，也可以是硬件或逻辑器件。FIG. 13 is a schematic structural diagram of an electronic device provided by an embodiment of the present specification. Referring to FIG. 13, the electronic device includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and of course may also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming the image processing device at the logical level. Of course, in addition to the software implementation, this application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution body of the following processing flow is not limited to the logical units and may also be a hardware or logic device.
网络接口、处理器和存储器可以通过总线系统相互连接。总线可以是ISA(Industry Standard Architecture,工业标准体系结构)总线、PCI(Peripheral Component Interconnect,外设部件互连标准)总线或EISA(Extended Industry Standard Architecture,扩展工业标准结构)总线等。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图13 中仅用一个双向箭头表示,但并不表示仅有一根总线或一种类型的总线。The network interface, processor and memory can be connected to each other via a bus system. The bus may be an ISA (Industry Standard Architecture, Industry Standard Architecture) bus, a PCI (Peripheral Components Interconnect, Peripheral Component Interconnect Standard) bus, or an EISA (Extended Industry Standard, Extended Industry Standard Architecture) bus, etc. The bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one bidirectional arrow is used in FIG. 13, but it does not mean that there is only one bus or one type of bus.
存储器用于存放程序。具体地,程序可以包括程序代码,所述程序代码包括计算机操作指令。存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。存储器可能包含高速随机存取存储器(Random-Access Memory,RAM),也可能还包括非易失性存储器(non-volatile memory),例如至少1个磁盘存储器。The memory is used to store programs. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include read-only memory and random access memory, and provide instructions and data to the processor. The memory may include a high-speed random access memory (Random-Access Memory, RAM), or may also include a non-volatile memory (non-volatile memory), for example, at least one disk memory.
处理器,用于执行所述存储器存放的程序,并具体执行:The processor is used to execute the program stored in the memory and specifically execute:
获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;Acquiring a first image and a second image of the target object, the first image and the second image being images collected under different lighting conditions;
对所述第一图像进行质量评价处理;Performing quality evaluation processing on the first image;
若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二图像中与所述第一区域对应的第二区域;If there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second image corresponding to the first area;
基于所述第二区域的图像数据,补偿所述第一区域。Based on the image data of the second area, the first area is compensated.
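As a concrete illustration of the steps listed above, the following Python sketch performs a block-wise quality evaluation on the first image and compensates the blocks whose quality evaluation value falls below the threshold with the corresponding pixels of the second image. It is a minimal sketch under explicit assumptions: both images are single-channel and already registered pixel-to-pixel (as they would be after binocular registration), mean brightness is used as the quality-related parameter, and the threshold, block size, and 0.5 blending weight are illustrative values rather than values prescribed by the embodiments.

```python
import numpy as np

def quality_map(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Block-wise quality evaluation: the mean brightness of each block is used
    as its quality evaluation value (sharpness or resolution measures could be
    plugged in the same way). Edge remainders smaller than one block are ignored."""
    h, w = gray.shape
    scores = np.zeros((h // block, w // block), dtype=np.float32)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = gray[i * block:(i + 1) * block,
                                j * block:(j + 1) * block].mean()
    return scores

def compensate(first: np.ndarray, second: np.ndarray,
               threshold: float = 60.0, block: int = 32,
               weight: float = 0.5) -> np.ndarray:
    """Compensate the low-quality blocks of `first` with the corresponding blocks
    of the registered `second` image by weighted fusion."""
    assert first.shape == second.shape, "images must be registered to the same size"
    out = first.astype(np.float32)
    scores = quality_map(first, block)
    for i, j in zip(*np.where(scores < threshold)):        # the "first areas" below threshold
        ys = slice(i * block, (i + 1) * block)
        xs = slice(j * block, (j + 1) * block)
        out[ys, xs] = (1 - weight) * out[ys, xs] + weight * second[ys, xs]
    return np.clip(out, 0, 255).astype(np.uint8)
```

Blending the two blocks, rather than copying the second image outright, follows the weighted-fusion style of compensation described elsewhere in this specification; other compensation rules could be substituted without changing the surrounding flow.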
The method performed by the image processing apparatus or the manager (Master) node disclosed in the embodiment shown in FIG. 11 of the present application may be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Such a processor may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
图像处理装置还可执行图2至图3,图6的方法,并实现管理者节点执行的方法。The image processing apparatus can also execute the methods of FIGS. 2 to 3 and 6 and implement the method executed by the manager node.
Based on the same inventive concept, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to execute the image processing method provided by the embodiments corresponding to FIG. 2 to FIG. 3 and FIG. 6.
FIG. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present specification. Referring to FIG. 14, the electronic device includes a processor, an internal bus, a network interface, an internal memory, and a non-volatile memory, and of course may further include other hardware required by the service. The processor reads the corresponding computer program from the non-volatile memory into the internal memory and runs it, thereby forming a face image recognition apparatus at the logical level. Of course, in addition to the software implementation, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution body of the following processing flow is not limited to the respective logical units, and may also be a hardware or logic device.
The network interface, the processor, and the memory may be connected to one another via a bus system. The bus may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one bidirectional arrow is shown in FIG. 14, but this does not mean that there is only one bus or only one type of bus.
The memory is used to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include a read-only memory and a random access memory, and provides instructions and data to the processor. The memory may include a high-speed random-access memory (RAM), and may further include a non-volatile memory, for example, at least one magnetic disk storage.
处理器,用于执行所述存储器存放的程序,并具体执行:The processor is used to execute the program stored in the memory and specifically execute:
获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;Acquiring a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
对所述第一人脸图像进行质量评价处理;Performing quality evaluation processing on the first face image;
若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域,则确定所述第二人脸图像中与所述第一区域对应的第二区域;If there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second face image corresponding to the first area;
基于所述第二区域的图像数据,补偿所述第一区域,并识别补偿后的所述第一人脸图像。Based on the image data of the second area, the first area is compensated, and the compensated first face image is identified.
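The final step above feeds the compensated first face image into recognition. The sketch below shows one plausible shape of that step: extracting a feature vector from the compensated face image and comparing it with enrolled templates by cosine similarity. The `extract_features` callable is a stand-in for whatever face feature extractor the device actually uses (none is specified in this document), and the 0.6 acceptance threshold is purely illustrative.

```python
import numpy as np
from typing import Callable, Dict, Optional

def recognize_face(compensated_face: np.ndarray,
                   extract_features: Callable[[np.ndarray], np.ndarray],
                   gallery: Dict[str, np.ndarray],
                   threshold: float = 0.6) -> Optional[str]:
    """Return the enrolled identity whose template is most similar to the
    compensated first face image, or None if no similarity passes the threshold."""
    probe = extract_features(compensated_face)
    probe = probe / np.linalg.norm(probe)
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = float(np.dot(probe, template / np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

Returning None here corresponds to the target object not being identified; a real deployment could additionally gate this step on a re-evaluation of the compensated image's quality, as described in the claims below.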
The method performed by the face image recognition apparatus or the manager (Master) node disclosed in the embodiment shown in FIG. 12 of the present application may be applied to a processor, or implemented by a processor. The processor may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. Such a processor may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
人脸图像识别装置还可执行图5至图6的方法,并实现管理者节点执行的方法。The face image recognition device can also execute the methods of FIGS. 5 to 6 and implement the method executed by the manager node.
Based on the same inventive concept, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs which, when executed by an electronic device including a plurality of application programs, cause the electronic device to execute the face image recognition method provided by the embodiments corresponding to FIG. 5 to FIG. 6.
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。The embodiments in this specification are described in a progressive manner. The same or similar parts between the embodiments can be referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the system embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant part can be referred to the description of the method embodiment.
上述对本说明书特定实施例进行了描述。其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,在附图中描绘的过程不一定要求示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。The foregoing describes specific embodiments of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown or sequential order to achieve the desired results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art should understand that the embodiments of the present invention may be provided as methods, systems, or computer program products. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。In a typical configuration, the computing device includes one or more processors (CPUs), input / output interfaces, network interfaces, and memory.
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。The memory may include non-permanent memory, random access memory (RAM) and / or non-volatile memory in computer-readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。Computer-readable media, including permanent and non-permanent, removable and non-removable media, can store information by any method or technology. The information may be computer readable instructions, data structures, modules of programs, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, read-only compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission media can be used to store information that can be accessed by computing devices. According to the definition in this article, computer-readable media does not include temporary computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes the element.
本领域技术人员应明白,本申请的实施例可提供为方法、系统或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art should understand that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present application may take the form of a computer program product implemented on one or more computer usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer usable program code.
以上所述仅为本申请的实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。The above is only an embodiment of the present application, and is not intended to limit the present application. For those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of this application shall be included in the scope of the claims of this application.

Claims (26)

  1. 一种图像处理方法,其特征在于,包括:An image processing method, characterized in that it includes:
    获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;Acquiring a first image and a second image of the target object, the first image and the second image being images collected under different lighting conditions;
    对所述第一图像进行质量评价处理;Performing quality evaluation processing on the first image;
    若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二图像中与所述第一区域对应的第二区域;If there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second image corresponding to the first area;
    基于所述第二区域的图像数据,补偿所述第一区域。Based on the image data of the second area, the first area is compensated.
  2. 根据权利要求1所述的方法,其特征在于,对所述第一图像进行质量评价处理包括:The method according to claim 1, wherein performing quality evaluation processing on the first image includes:
    确定所述第一图像的质量相关参数分布数据;Determining the quality-related parameter distribution data of the first image;
    基于所述质量相关参数分布数据,确定所述第一图像的质量评价值分布数据;Based on the quality-related parameter distribution data, determining the quality evaluation value distribution data of the first image;
    基于所述质量评价值分布数据,确定所述第一图像中质量评价值小于预设质量评价阈值的区域。Based on the quality evaluation value distribution data, an area in the first image whose quality evaluation value is less than a preset quality evaluation threshold is determined.
  3. 根据权利要求2所述的方法,其特征在于,所述质量相关参数包括:清晰度、分辨率、亮度中的一个或多个。The method according to claim 2, wherein the quality-related parameters include one or more of sharpness, resolution, and brightness.
  4. 根据权利要求1所述的方法,其特征在于,所述第一图像为RGB图像,所述第二图像为红外图像;The method according to claim 1, wherein the first image is an RGB image and the second image is an infrared image;
    其中,基于所述第二区域的图像数据,补偿所述第一区域的图像数据包括:Wherein, based on the image data of the second area, compensating the image data of the first area includes:
    若所述RGB图像的第一区域的亮度小于预设亮度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的亮度;或者,If the brightness of the first area of the RGB image is less than a preset brightness threshold, based on the image data of the second area of the infrared image, compensate the brightness of the first area of the RGB image; or,
    若所述RGB图像的第一区域的清晰度小于预设清晰度阈值,则基于所述红外图像的第二区域的图像数据,补偿所述RGB图像的第一区域的清晰度。If the sharpness of the first area of the RGB image is less than a preset sharpness threshold, the sharpness of the first area of the RGB image is compensated based on the image data of the second area of the infrared image.
  5. 根据权利要求1所述的方法,其特征在于,所述第一图像为红外图像,所述第二图像为RGB图像;The method according to claim 1, wherein the first image is an infrared image and the second image is an RGB image;
    其中,基于所述第二区域的图像数据,补偿所述第一区域的图像数据包括:Wherein, based on the image data of the second area, compensating the image data of the first area includes:
    若所述红外图像的第一区域为被遮挡区域,则基于所述RGB图像的第二区域的图像数据,补偿所述红外图像的第一区域的所述目标对象的图像数据;或者,If the first area of the infrared image is a blocked area, the image data of the target object in the first area of the infrared image is compensated based on the image data of the second area of the RGB image; or,
    If the resolution of the first area of the infrared image is less than a preset resolution threshold, the resolution of the first area of the infrared image is compensated based on the image data of the second area of the RGB image.
  6. 根据权利要求1所述的方法,其特征在于,基于所述第二区域的图像数据,补偿所述第一区域包括:The method according to claim 1, wherein, based on the image data of the second area, compensating the first area includes:
    融合所述第一区域和所述第二区域的图像数据,并将融合后的图像数据作为补偿后的所述第一区域的图像数据。Fusing the image data of the first area and the second area, and using the fused image data as the compensated image data of the first area.
  7. 根据权利要求1所述的方法,其特征在于,所述第一图像和所述第二图像由双目摄像头采集并配准。The method according to claim 1, wherein the first image and the second image are collected and registered by a binocular camera.
  8. 根据权利要求1所述的方法,其特征在于,所述目标对象包括人脸,所述第一图像为第一人脸图像,所述第二图像为第二人脸图像。The method of claim 1, wherein the target object includes a human face, the first image is a first human face image, and the second image is a second human face image.
  9. 根据权利要求8所述的方法,其特征在于,获取目标对象的第一图像和第二图像包括:The method according to claim 8, wherein acquiring the first image and the second image of the target object comprises:
    采集目标对象的第一图像;Collect the first image of the target object;
    确定所述第一图像中存在人脸时,同步采集所述目标对象的第一人脸图像和第二人脸图像。When it is determined that there is a face in the first image, the first face image and the second face image of the target object are collected synchronously.
  10. 根据权利要求8所述的方法,其特征在于,还包括:The method of claim 8, further comprising:
    识别补偿后的所述第一人脸图像,确定所述目标对象的身份信息。Identify the compensated first face image and determine the identity information of the target object.
  11. 根据权利要求10所述的方法,其特征在于,还包括:The method according to claim 10, further comprising:
    确定所述目标对象的业务需求的安全度达到预设安全度阈值时,对所述目标对象进行活体检测。When it is determined that the security level of the business requirements of the target object reaches a preset security level threshold, live detection is performed on the target object.
  12. 根据权利要求11所述的方法,其特征在于,对所述目标对象进行活体检测包括:The method according to claim 11, wherein the biopsy of the target object comprises:
    确定所述目标对象的深度人脸图像,并基于深度人脸图像对所述目标对象进行活体检测。Determining the depth face image of the target object, and performing live detection on the target object based on the depth face image.
  13. 根据权利要求12所述的方法,其特征在于,基于深度图像对所述目标对象进行活体检测包括:The method according to claim 12, wherein the living body detection of the target object based on the depth image comprises:
    融合所述深度人脸图像和补偿后的所述第一人脸图像,并基于融合后的图像对所述目标对象进行活体检测。Fusing the deep face image and the compensated first face image, and performing live detection on the target object based on the fused image.
  14. 根据权利要求11所述的方法,其特征在于,对所述目标对象进行活体检测包括:The method according to claim 11, wherein the biopsy of the target object comprises:
    确定所述目标对象的活体动作,并基于所述活体动作确定所述目标对象是否为活体。Determine the living body motion of the target object, and determine whether the target object is a living body based on the living body motion.
  15. 一种人脸图像识别方法,其特征在于,包括:A face image recognition method, characterized in that it includes:
    获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;Acquiring a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
    对所述第一人脸图像进行质量评价处理;Performing quality evaluation processing on the first face image;
    若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域,则确定所述第二人脸图像中与所述第一区域对应的第二区域;If there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold, determine a second area in the second face image corresponding to the first area;
    基于所述第二区域的图像数据,补偿所述第一区域,并识别补偿后的所述第一人脸图像。Based on the image data of the second area, the first area is compensated, and the compensated first face image is identified.
  16. 根据权利要求15所述的方法,其特征在于,在识别补偿后的所述第一人脸图像之前,还包括:The method according to claim 15, wherein before identifying the compensated first face image, further comprising:
    对补偿后的所述第一人脸图像进行质量评价处理;Performing quality evaluation processing on the compensated first face image;
    若补偿后的所述第一人脸图像的质量评价值达到人脸识别质量评价要求,则允许识别补偿后的所述第一人脸图像。If the quality evaluation value of the compensated first face image meets the face recognition quality evaluation requirement, the first face image after compensation is allowed to be recognized.
  17. 根据权利要求15所述的方法,其特征在于,识别补偿后的所述第一人脸图像包括:The method of claim 15, wherein identifying the compensated first face image comprises:
    对补偿后的所述第一人脸图像进行人脸特征提取处理;Performing face feature extraction processing on the compensated first face image;
    基于提取出的人脸特征,进行人脸识别处理,以确定所述目标对象的身份信息。Based on the extracted face features, face recognition processing is performed to determine the identity information of the target object.
  18. 一种图像处理装置,其特征在于,包括:An image processing device, characterized in that it includes:
    获取模块,用于获取目标对象的第一图像和第二图像,所述第一图像和所述第二图像为不同光照条件下采集的图像;An obtaining module, configured to obtain a first image and a second image of the target object, where the first image and the second image are images collected under different lighting conditions;
    评价模块,用于对所述第一图像进行质量评价处理;An evaluation module, configured to perform quality evaluation processing on the first image;
    确定模块,用于若所述第一图像中存在质量评价值小于预设质量评价阈值的第一区域时,确定所述第二图像中与所述第一区域对应的第二区域;A determining module, configured to determine a second area in the second image corresponding to the first area if there is a first area in the first image whose quality evaluation value is less than a preset quality evaluation threshold;
    补偿模块,用于基于所述第二区域的图像数据,补偿所述第一区域。The compensation module is used to compensate the first area based on the image data of the second area.
  19. 一种人脸图像识别装置,其特征在于,包括:A face image recognition device, characterized in that it includes:
    获取模块,用于获取目标对象的第一人脸图像和第二人脸图像,所述第一人脸图像和所述第二人脸图像为不同光照条件下采集的图像;An obtaining module, configured to obtain a first face image and a second face image of the target object, where the first face image and the second face image are images collected under different lighting conditions;
    评价模块,用于对所述第一人脸图像进行质量评价处理;An evaluation module, configured to perform quality evaluation processing on the first face image;
    确定模块,用于若所述第一人脸图像中存在质量评价值小于预设质量评价阈值的第一区域,则确定所述第二人脸图像中与所述第一区域对应的第二区域;A determining module, configured to determine a second area in the second face image corresponding to the first area if there is a first area in the first face image whose quality evaluation value is less than a preset quality evaluation threshold ;
    补偿模块,用于基于所述第二区域的图像数据,补偿所述第一区域;A compensation module for compensating the first area based on the image data of the second area;
    识别模块,用于识别补偿后的所述第一人脸图像。A recognition module is used to recognize the compensated first face image.
  20. 一种人脸图像识别装置,其特征在于,包括:RGB摄像头、黑白摄像头、红外补光灯以及处理芯片;其中:A face image recognition device, characterized by comprising: an RGB camera, a black and white camera, an infrared fill light and a processing chip; wherein:
    所述RGB摄像头,用于采集目标对象的RGB人脸图像;The RGB camera is used to collect RGB face images of the target object;
    所述红外补光灯,用于对所述目标对象的人脸发射红外光;The infrared fill light is used to emit infrared light to the face of the target object;
    所述黑白摄像头,用于采集红外光照条件下的所述目标对象的红外人脸图像;The black-and-white camera is used to collect infrared face images of the target object under infrared lighting conditions;
    The processing chip is configured to use one of the RGB face image and the infrared face image as a first face image and the other as a second face image, and to perform the steps of the method according to any one of claims 15 to 17.
  21. 根据权利要求20所述的装置,其特征在于,所述RGB摄像头和黑白摄像头属于同一双目摄像头。The device according to claim 20, wherein the RGB camera and the monochrome camera belong to the same binocular camera.
  22. The device according to claim 20, wherein the RGB camera is in a normally open state, and the infrared camera and the infrared fill light are in a normally closed state;
    The processing chip is further configured to, upon determining that a face is present in the RGB face image collected by the RGB camera, wake up the infrared camera and the infrared fill light, so that the infrared camera and the RGB camera synchronously collect face images of the target object.
  23. 根据权利要求21所述的装置,其特征在于,The device according to claim 21, characterized in that
    The processing chip is further configured to turn off the infrared fill light when the identity information of the target object has been recognized.
  24. 根据权利要求21所述的装置,其特征在于,还包括:处于常闭状态的结构光发射器;The device according to claim 21, further comprising: a structured light emitter in a normally closed state;
    The processing chip is further configured to wake up the structured light emitter when it determines that liveness detection needs to be performed on the target object;
    The structured light emitter is configured to emit structured light toward the face of the target object, so that the black-and-white camera can capture a depth face image of the target object under structured-light illumination;
    The processing chip is further configured to turn off the structured light emitter when it determines, based on the depth face image, that the target object has passed liveness detection.
  25. 一种电子设备,其特征在于,包括:An electronic device, characterized in that it includes:
    处理器;以及Processor; and
    A memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the steps of the method according to any one of claims 1 to 17.
  26. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 17 are implemented.
PCT/CN2019/110266 2018-10-19 2019-10-10 Image processing and face image identification method, apparatus and device WO2020078243A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811222689.7 2018-10-19
CN201811222689.7A CN111161205B (en) 2018-10-19 2018-10-19 Image processing and face image recognition method, device and equipment

Publications (1)

Publication Number Publication Date
WO2020078243A1 true WO2020078243A1 (en) 2020-04-23

Family

ID=70284412

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110266 WO2020078243A1 (en) 2018-10-19 2019-10-10 Image processing and face image identification method, apparatus and device

Country Status (2)

Country Link
CN (1) CN111161205B (en)
WO (1) WO2020078243A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860428A (en) * 2020-07-30 2020-10-30 上海华虹计通智能系统股份有限公司 Face recognition system and method
CN112132046A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Static living body detection method and system
CN112367470A (en) * 2020-10-29 2021-02-12 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112818732A (en) * 2020-08-11 2021-05-18 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN112819722A (en) * 2021-02-03 2021-05-18 东莞埃科思科技有限公司 Infrared image face exposure method, device, equipment and storage medium
CN112836649A (en) * 2021-02-05 2021-05-25 黑龙江迅锐科技有限公司 Intelligent body temperature detection method and device and electronic equipment
CN112949467A (en) * 2021-02-26 2021-06-11 北京百度网讯科技有限公司 Face detection method and device, electronic equipment and storage medium
CN113158908A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Face recognition method and device, storage medium and electronic equipment
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment
CN113409056A (en) * 2021-06-30 2021-09-17 深圳市商汤科技有限公司 Payment method and device, local identification equipment, face payment system and equipment
CN113436105A (en) * 2021-06-30 2021-09-24 北京百度网讯科技有限公司 Model training and image optimization method and device, electronic equipment and storage medium
CN113505674A (en) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113537028A (en) * 2021-07-09 2021-10-22 中星电子股份有限公司 Control method, apparatus, device and medium for face recognition system
CN113609950A (en) * 2021-07-30 2021-11-05 深圳市芯成像科技有限公司 Living body detection method and system of binocular camera and computer storage medium
CN114694266A (en) * 2022-03-28 2022-07-01 广州广电卓识智能科技有限公司 Silent in-vivo detection method, system, equipment and storage medium
CN114862665A (en) * 2022-07-05 2022-08-05 深圳市爱深盈通信息技术有限公司 Infrared human face image generation method and device and equipment terminal
WO2022166316A1 (en) * 2021-02-05 2022-08-11 深圳前海微众银行股份有限公司 Light supplementing method and apparatus for facial recognition, and facial recognition device and system therefor
US20230377192A1 (en) * 2022-05-23 2023-11-23 Dell Products, L.P. System and method for detecting postures of a user of an information handling system (ihs) during extreme lighting conditions

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053389A (en) * 2020-07-28 2020-12-08 北京迈格威科技有限公司 Portrait processing method and device, electronic equipment and readable storage medium
CN112597886A (en) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Ride fare evasion detection method and device, electronic equipment and storage medium
CN114764775A (en) * 2021-01-12 2022-07-19 深圳市普渡科技有限公司 Infrared image quality evaluation method, device and storage medium
CN113743284A (en) * 2021-08-30 2021-12-03 杭州海康威视数字技术股份有限公司 Image recognition method, device, equipment, camera and access control equipment
CN113965679B (en) * 2021-10-19 2022-09-23 合肥的卢深视科技有限公司 Depth map acquisition method, structured light camera, electronic device, and storage medium
CN114299037B (en) * 2021-12-30 2023-09-01 广州极飞科技股份有限公司 Quality evaluation method and device for object detection result, electronic equipment and computer readable storage medium
CN115511833B (en) * 2022-09-28 2023-06-27 广东百能家居有限公司 Glass surface scratch detecting system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110002557A1 (en) * 2009-07-06 2011-01-06 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method
CN105611205A (en) * 2015-10-15 2016-05-25 惠州Tcl移动通信有限公司 Optimization method of projected image, projection display module and electronic equipment
CN107888898A (en) * 2017-12-28 2018-04-06 盎锐(上海)信息科技有限公司 Image capture method and camera device
CN108090477A (en) * 2018-01-23 2018-05-29 北京易智能科技有限公司 A kind of face identification method and device based on Multi-spectral image fusion

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6950139B2 (en) * 1999-01-22 2005-09-27 Nikon Corporation Image reading device and storage medium storing control procedure for image reading device
JP2004318693A (en) * 2003-04-18 2004-11-11 Konica Minolta Photo Imaging Inc Image processing method, image processor, and image processing program
EP2309449B1 (en) * 2009-10-09 2016-04-20 EPFL Ecole Polytechnique Fédérale de Lausanne Method to produce a full-color smoothed image
JP2014078052A (en) * 2012-10-09 2014-05-01 Sony Corp Authentication apparatus, authentication method, and program
CN107153816B (en) * 2017-04-16 2021-03-23 五邑大学 Data enhancement method for robust face recognition
CN107483811A (en) * 2017-07-28 2017-12-15 广东欧珀移动通信有限公司 Imaging method and electronic installation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110002557A1 (en) * 2009-07-06 2011-01-06 Samsung Electronics Co., Ltd. Image processing apparatus and image processing method
CN105611205A (en) * 2015-10-15 2016-05-25 惠州Tcl移动通信有限公司 Optimization method of projected image, projection display module and electronic equipment
CN107888898A (en) * 2017-12-28 2018-04-06 盎锐(上海)信息科技有限公司 Image capture method and camera device
CN108090477A (en) * 2018-01-23 2018-05-29 北京易智能科技有限公司 A kind of face identification method and device based on Multi-spectral image fusion

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860428A (en) * 2020-07-30 2020-10-30 上海华虹计通智能系统股份有限公司 Face recognition system and method
CN112818732B (en) * 2020-08-11 2023-12-12 腾讯科技(深圳)有限公司 Image processing method, device, computer equipment and storage medium
CN112818732A (en) * 2020-08-11 2021-05-18 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
CN112132046A (en) * 2020-09-24 2020-12-25 天津锋物科技有限公司 Static living body detection method and system
CN112367470A (en) * 2020-10-29 2021-02-12 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN112819722A (en) * 2021-02-03 2021-05-18 东莞埃科思科技有限公司 Infrared image face exposure method, device, equipment and storage medium
CN112836649A (en) * 2021-02-05 2021-05-25 黑龙江迅锐科技有限公司 Intelligent body temperature detection method and device and electronic equipment
WO2022166316A1 (en) * 2021-02-05 2022-08-11 深圳前海微众银行股份有限公司 Light supplementing method and apparatus for facial recognition, and facial recognition device and system therefor
CN112949467A (en) * 2021-02-26 2021-06-11 北京百度网讯科技有限公司 Face detection method and device, electronic equipment and storage medium
CN112949467B (en) * 2021-02-26 2024-03-08 北京百度网讯科技有限公司 Face detection method, device, electronic equipment and storage medium
CN113158908A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Face recognition method and device, storage medium and electronic equipment
CN113255586B (en) * 2021-06-23 2024-03-15 中国平安人寿保险股份有限公司 Face anti-cheating method based on RGB image and IR image alignment and related equipment
CN113255586A (en) * 2021-06-23 2021-08-13 中国平安人寿保险股份有限公司 Human face anti-cheating method based on alignment of RGB (red, green and blue) image and IR (infrared) image and related equipment
CN113505674B (en) * 2021-06-30 2023-04-18 上海商汤临港智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113505674A (en) * 2021-06-30 2021-10-15 上海商汤临港智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113436105A (en) * 2021-06-30 2021-09-24 北京百度网讯科技有限公司 Model training and image optimization method and device, electronic equipment and storage medium
CN113409056A (en) * 2021-06-30 2021-09-17 深圳市商汤科技有限公司 Payment method and device, local identification equipment, face payment system and equipment
CN113537028A (en) * 2021-07-09 2021-10-22 中星电子股份有限公司 Control method, apparatus, device and medium for face recognition system
CN113609950A (en) * 2021-07-30 2021-11-05 深圳市芯成像科技有限公司 Living body detection method and system of binocular camera and computer storage medium
CN114694266A (en) * 2022-03-28 2022-07-01 广州广电卓识智能科技有限公司 Silent in-vivo detection method, system, equipment and storage medium
US20230377192A1 (en) * 2022-05-23 2023-11-23 Dell Products, L.P. System and method for detecting postures of a user of an information handling system (ihs) during extreme lighting conditions
US11836825B1 (en) * 2022-05-23 2023-12-05 Dell Products L.P. System and method for detecting postures of a user of an information handling system (IHS) during extreme lighting conditions
CN114862665A (en) * 2022-07-05 2022-08-05 深圳市爱深盈通信息技术有限公司 Infrared human face image generation method and device and equipment terminal
CN114862665B (en) * 2022-07-05 2022-12-02 深圳市爱深盈通信息技术有限公司 Infrared human face image generation method and device and equipment terminal

Also Published As

Publication number Publication date
CN111161205A (en) 2020-05-15
CN111161205B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
WO2020078243A1 (en) Image processing and face image identification method, apparatus and device
US11010967B2 (en) Three dimensional content generating apparatus and three dimensional content generating method thereof
US11699219B2 (en) System, method, and computer program for capturing an image with correct skin tone exposure
EP3477931B1 (en) Image processing method and device, readable storage medium and electronic device
US10853625B2 (en) Facial signature methods, systems and software
US20180276841A1 (en) Method and system of determining object positions for image processing using wireless network angle of transmission
CN108716983B (en) Optical element detection method and device, electronic equipment, storage medium
EP3520390B1 (en) Recolorization of infrared image streams
US10204432B2 (en) Methods and systems for color processing of digital images
US9536321B2 (en) Apparatus and method for foreground object segmentation
CN108012078A (en) Brightness of image processing method, device, storage medium and electronic equipment
CN109190533B (en) Image processing method and device, electronic equipment and computer readable storage medium
KR20180000696A (en) A method and apparatus for creating a pair of stereoscopic images using least one lightfield camera
JP6922399B2 (en) Image processing device, image processing method and image processing program
US11051001B2 (en) Method and system for generating a two-dimensional and a three-dimensional image stream
CN108629329B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN116437198B (en) Image processing method and electronic equipment
US11509797B2 (en) Image processing apparatus, image processing method, and storage medium
WO2021184303A1 (en) Video processing method and device
EP3238174A1 (en) Methods and systems for color processing of digital images
US11303877B2 (en) Method and system for enhancing use of two-dimensional video analytics by using depth data
CN109447925A (en) Image processing method and device, storage medium, electronic equipment
WO2023105961A1 (en) Image processing device, image processing method, and program
US20230064963A1 (en) Feature Detection Methods and Systems Using Deconstructed Color Image Data
TWI696980B (en) Method, apparatus, medium for interactive image processing using depth engine and digital signal processor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19873773

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19873773

Country of ref document: EP

Kind code of ref document: A1