WO2020114105A1 - A comparison method, apparatus, and electronic device based on multi-frame facial images - Google Patents

A comparison method, apparatus, and electronic device based on multi-frame facial images

Info

Publication number
WO2020114105A1
WO2020114105A1 · PCT/CN2019/111989 · CN2019111989W
Authority
WO
WIPO (PCT)
Prior art keywords
images
face
facial
frame
image
Prior art date
Application number
PCT/CN2019/111989
Other languages
English (en)
French (fr)
Inventor
郑丹丹 (Zheng Dandan)
Original Assignee
阿里巴巴集团控股有限公司 (Alibaba Group Holding Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Ltd.)
Priority to SG11202100924SA priority Critical patent/SG11202100924SA/en
Priority to EP19892558.8A priority patent/EP3812956A4/en
Publication of WO2020114105A1 publication Critical patent/WO2020114105A1/zh
Priority to US17/191,039 priority patent/US11210502B2/en

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F 18/00 Pattern recognition › G06F 18/20 Analysing › G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 7/00 Image analysis › G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 10/00 Arrangements for image or video recognition or understanding › G06V 10/70 Arrangements using pattern recognition or machine learning › G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces › G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries › G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 10/00 Arrangements for image or video recognition or understanding › G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns › G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 20/00 Scenes; Scene-specific elements › G06V 20/40 Scenes; Scene-specific elements in video content › G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands › G06V 40/16 Human faces, e.g. facial parts, sketches or expressions › G06V 40/172 Classification, e.g. identification
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V 40/50 Maintenance of biometric data or enrolment thereof
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30168 Image quality inspection
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30196 Human being; Person › G06T 2207/30201 Face
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 10/00 Arrangements for image or video recognition or understanding › G06V 10/40 Extraction of image or video features › G06V 10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking

Definitions

  • This specification relates to the technical field of computer software, and in particular, to a comparison method, device, and electronic device based on multi-frame facial images.
  • After collecting images, a typical face-scan ("face brush") payment system selects the single face image with the highest quality score and compares it with the enrolled reference face image.
  • However, the images collected by the camera rely solely on a quality-score algorithm to select a single face image, and that algorithm has errors: the selected frame may exhibit defects such as blur or occlusion. In addition, the amount of information in a single frame of face image is limited and cannot maximize comparison accuracy.
  • The purpose of the embodiments of this specification is to provide a comparison method, apparatus, and electronic device based on multi-frame facial images, so as to effectively improve the comparison accuracy of facial images.
  • In a first aspect, a comparison method based on multi-frame facial images is proposed, including: collecting multiple frames of face images of a target object; and selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
  • when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
  • and comparing the face images in the candidate image set with the enrolled reference face image of the target object.
  • In a second aspect, a comparison apparatus based on multi-frame facial images is proposed, including:
  • a collection module, which collects multiple frames of face images of a target object;
  • a first selection module, which selects, from the multi-frame face images, the first face image with the highest quality score and adds it to a candidate image set;
  • a second selection module, which, when the number of images in the candidate image set is less than a preset number, cyclically selects from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adds it to the candidate image set;
  • a comparison module, which compares the face images in the candidate image set with the enrolled reference face image of the target object.
  • In a third aspect, an electronic device is proposed, including: a processor; and
  • a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations: collecting multiple frames of face images of a target object; selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
  • when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
  • and comparing the face images in the candidate image set with the enrolled reference face image of the target object.
  • In a fourth aspect, a computer-readable storage medium is proposed, which stores one or more programs that, when executed by an electronic device including multiple application programs, cause the electronic device to perform the following operations: collecting multiple frames of face images of a target object; selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
  • when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
  • and comparing the face images in the candidate image set with the enrolled reference face image of the target object.
  • FIG. 1 is a schematic diagram of steps of a comparison method based on multi-frame facial images provided by an embodiment of the present specification.
  • FIG. 2 is a schematic diagram of a comparison process based on multi-frame facial images provided by another embodiment of the present specification.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of this specification.
  • FIG. 4 is a schematic structural diagram of a comparison device based on multi-frame facial images provided by an embodiment of the present specification.
  • Referring to FIG. 1, which is a schematic diagram of the steps of a comparison method based on multi-frame facial images provided by an embodiment of this specification.
  • The execution subject of the method may be a comparison apparatus based on multi-frame facial images, which may specifically be a face recognition comparison apparatus, a payment terminal, a self-service terminal, or the like.
  • the comparison method may include the following steps:
  • S102 Collect multiple frames of face images of the target object.
  • In the embodiments of this specification, the multi-frame face images of the target object should be face images of the same target user.
  • These face images can be selected based on different criteria; for example, the face image of the target object may be selected from each frame based on largest-face logic, or based on nearest-face logic.
  • The face image of the target object may also be determined in other ways, which this specification does not limit.
  • In an optional solution, when collecting multiple frames of face images of the target object, S102 may be specifically performed as: tracking each captured frame of face image, and collecting, based on the tracking result, face images that satisfy a quality-score threshold while recording the attributes of each frame.
  • Considering that capture generally lasts about 2 seconds, a tracking method can be used to ensure that every frame is tracked and that all tracked face images belong to the same target user.
  • The tracking method involved can be implemented based on a histogram tracking algorithm, the MeanShift algorithm, and so on.
  • It should be understood that tracking in this specification refers to locating the face in the current frame captured by the camera through image recognition, and tracking that face with the camera so that it stays within the camera's collection field of view.
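As an illustration of the histogram-based tracking idea mentioned above, a face can be treated as still tracked when the intensity histogram of the located face region changes little between consecutive frames. The sketch below follows that assumption; the cosine-similarity measure and the 0.9 threshold are illustrative choices, not taken from the patent.

```python
import math

def hist_similarity(h1, h2):
    """Cosine similarity between two intensity histograms
    (lists of non-negative bin counts). Returns a value in [0, 1]."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def tracking_succeeded(prev_hist, cur_hist, min_sim=0.9):
    """Treat the face as still tracked when the histogram of the
    face region barely changes between consecutive frames."""
    return hist_similarity(prev_hist, cur_hist) >= min_sim
```

A production tracker would operate on back-projected color histograms of the detected face window (as MeanShift-style trackers do), but the accept/reject decision has the same shape.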
  • In the embodiments of this specification, the quality score may be determined according to various evaluation methods; in one achievable solution, for example, the quality score of a frame is determined according to attributes such as the angle and lighting of that face image.
  • The quality-score threshold involved here may be a value determined from empirical values or a corresponding algorithm, used to retain face images of high quality and to filter out face images with abnormal defects such as occlusion or blurred exposure, effectively diluting the error caused by using the quality score alone to pick a single frame.
  • Meanwhile, the attributes of each frame can also be recorded; these attributes can include quality score, angle, brightness, lighting, and so on.
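The per-frame quality scoring and thresholding described above can be sketched as follows. The attribute names (`sharpness`, `frontal_angle`, `lighting`), their weights, and the 0.6 threshold are illustrative assumptions; the patent only requires that a quality score be derived from such attributes and compared against a threshold.

```python
def quality_score(attrs, weights=None):
    """Combine normalized face-image attributes (each in 0..1) into one
    quality score via a weighted average. Attribute names and weights
    are illustrative, not the patent's exact formula."""
    if weights is None:
        weights = {"sharpness": 0.4, "frontal_angle": 0.3, "lighting": 0.3}
    total = sum(weights.values())
    return sum(attrs[k] * w for k, w in weights.items()) / total

def filter_frames(frames, threshold=0.6):
    """Keep only frames clearing the quality-score threshold,
    recording the computed score alongside each frame's attributes."""
    kept = []
    for frame in frames:
        score = quality_score(frame["attrs"])
        if score >= threshold:
            kept.append({**frame, "quality": score})
    return kept
```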
  • Optionally, when collecting a face image satisfying the quality-score threshold based on the tracking result, S102 may be specifically implemented as:
  • if tracking succeeds and the face image of the current frame satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object, where the face image of the current frame is the frame captured when tracking succeeds.
  • Optionally, when collecting face images satisfying the quality-score threshold based on the tracking result, S102 may also be specifically executed as follows:
  • if tracking fails, but the face image of the current frame at the time of failure is successfully compared with the face images already collected before the current frame, and the face image of the current frame satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object, where the face image of the current frame is the frame captured when tracking fails; otherwise, re-tracking and clearing the collected face images.
  • Tracking success can be understood as the face image of the current frame located by the camera being the same face as in the previously located frame, that is, the face has been kept within the camera's collection field of view.
  • Correspondingly, tracking failure can be understood as the face image of the current frame located by the camera not being the same face as in the previously located frame, that is, the face was not kept within the camera's collection field of view.
  • It should be understood that, upon tracking failure, comparing the face image of the current frame with the face images already collected before the current frame may be performed based on the ID or another identifier of the target user (or target object), to determine whether the same target user is still being tracked.
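The collection decision described in the preceding bullets (tracking success, tracking failure with an identity match, or restart) can be sketched as a small state machine. The frame fields (`tracked_ok`, `face_id`, `quality`) and the re-seeding of `target_id` on restart are illustrative assumptions about how an implementation might represent the tracking result.

```python
def collect_frame(state, frame):
    """Decide whether the current frame joins the target object's
    face-image pool. `state` holds the collected pool, the target's
    ID, and the quality-score threshold; returns True when tracking
    of the same target continues."""
    if frame["tracked_ok"]:
        # Tracking succeeded: collect the frame if it clears the threshold.
        if frame["quality"] >= state["threshold"]:
            state["pool"].append(frame)
        return True
    # Tracking failed: accept the frame only if it still matches the
    # target identity established by earlier frames and clears the
    # threshold; otherwise restart tracking and clear the pool.
    if frame["face_id"] == state["target_id"] and frame["quality"] >= state["threshold"]:
        state["pool"].append(frame)
        return True
    state["pool"].clear()                 # re-track: discard collected images
    state["target_id"] = frame["face_id"]  # assumed: re-seed on the new face
    return False
```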
  • S104: Select, from the multi-frame face images, the first face image with the highest quality score and add it to the candidate image set.
  • S106: When the number of images in the candidate image set is less than the preset number, cyclically select from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and add it to the candidate image set.
  • In the embodiments of this specification, the preset parameter may include at least one of quality score, angle, brightness, and lighting; alternatively, the preset parameter may be determined based on at least one of these attributes.
  • It should be understood that the preset parameter may be determined as a weighted average of at least one of the attributes quality score, angle, brightness, and lighting.
  • Here, cyclically selecting the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set can be illustrated by example. Suppose the preset number is 4 and only face image 1, with the highest quality score, has been added to the candidate image set; a further selection is needed. Specifically, the face image 2 with the largest average gap in preset parameters from face image 1 is selected and added to the candidate image set (since the candidate image set contains only face image 1 at this point, the average gap is simply the gap from face image 1).
  • The candidate image set still holds fewer than 4 images, so the face image 3 with the largest average gap in preset parameters from face images 1 and 2 is selected next; looping this operation then selects face image 4, at which point the candidate image set has reached 4 images and the loop can terminate.
  • It should be noted that the attributes of the face images are normalized before calculating the average gap of the preset parameters. This ensures, while the number of candidate face-image frames is increased, that the selected face images differ substantially from one another, which in turn avoids noise and improves the accuracy of subsequent comparison results.
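The selection loop illustrated above can be sketched as a greedy routine: seed the candidate set with the highest-quality frame, then repeatedly add the frame whose preset parameters differ most, on average, from the frames already chosen. The attribute names and the mean-absolute-difference gap are illustrative assumptions; the patent only requires some average gap computed over normalized preset parameters.

```python
def select_candidates(frames, preset, k):
    """Greedy diversity selection: seed with the highest-quality frame,
    then repeatedly add the frame whose average gap in the preset
    parameters to the already-chosen frames is largest, until k frames
    are chosen. `preset` names normalized attribute keys (assumed)."""
    def gap(a, b):
        # Mean absolute difference over the normalized attributes.
        return sum(abs(a[p] - b[p]) for p in preset) / len(preset)

    chosen = [max(frames, key=lambda f: f["quality"])]
    remaining = [f for f in frames if f is not chosen[0]]
    while len(chosen) < k and remaining:
        best = max(remaining,
                   key=lambda f: sum(gap(f, c) for c in chosen) / len(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

In this sketch the seed frame is the quality-score winner (S104) and every later pick maximizes diversity against the running candidate set, mirroring the worked example above.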
  • Optionally, when comparing the face images in the candidate image set with the enrolled reference face image of the target object, S108 may be specifically executed as follows:
  • compare each face image in the candidate image set with the reference face image, and perform weighted fusion of the comparison results; weighted fusion of the comparison results can improve the accuracy of the final result.
  • The reference face images can be stored locally on the system or in a cloud server. If stored locally, each face image in the candidate image set can be compared with the locally stored reference face image of the target object; if stored in a cloud server, the face images in the candidate image set can be uploaded to the cloud server, compared one by one with the reference face image of the target object, and the comparison result returned.
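The per-frame comparison plus weighted fusion described above might be sketched as follows, assuming a `compare` callable that returns a similarity in [0, 1] for one candidate frame against the enrolled reference image. Quality-proportional weights and the 0.7 acceptance threshold are illustrative assumptions, not the patent's exact rule.

```python
def fused_comparison(candidates, compare, threshold=0.7):
    """Compare each candidate face image with the enrolled reference via
    `compare(candidate) -> similarity in [0, 1]`, then fuse the per-frame
    scores with quality-proportional weights. Returns the fused score and
    a pass/fail decision against the (assumed) threshold."""
    scores = [(compare(c), c["quality"]) for c in candidates]
    total_w = sum(w for _, w in scores)
    fused = sum(s * w for s, w in scores) / total_w
    return fused, fused >= threshold
```

Whether fusion runs locally or on a cloud server only changes where `compare` executes; the weighting step is the same either way.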
  • In the embodiments of this specification, the first face image with the highest quality score is selected from the collected multi-frame face images and added to the candidate image set; based on it, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set is cyclically selected from the multi-frame face images and added to the candidate image set until the images in the candidate image set reach the preset number; then the face images in the candidate image set are compared with the enrolled reference face image of the target object. Thus, while the number of candidate face-image frames is increased, it is also ensured that the selected face images differ substantially from one another, which avoids noise and improves the accuracy of the comparison results.
  • It should be noted that the face image in the embodiments of this specification includes a recognizable human face image or a recognizable animal face image.
  • Referring to FIG. 2, the comparison process based on multi-frame facial images may include the following steps.
  • That is, the embodiments of this specification do not exclude possible animal face images.
  • For example, a user's pet face may be enrolled as the reference face image, and the pet face can then be used for comparison during subsequent payment.
  • S206: Determine whether the face image of the current frame and the locally stored face image of the previous frame belong to the same ID; if so, jump to S204; otherwise, clear the face image set and jump to S202.
  • S212: Determine whether the tracking time has reached 2 s; if yes, execute S214; otherwise, jump to S202.
  • S214 Select the face image with the highest quality score from the face image set and add it to the candidate image set.
  • S216: Select, from the face image set, the face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and add it to the candidate image set.
  • This step can refer to the content in S106, which will not be repeated here.
  • S218: Determine whether the images in the candidate image set have reached K frames; if yes, execute S220; otherwise, jump to S216.
  • K may be, for example, about 60 frames.
  • The face images collected within 2 s are rich in information and can improve the accuracy of comparison.
  • S220: Upload the K frames of face images in the candidate image set to the server and compare them, respectively, with the enrolled reference face image of the target object.
  • The weighted average of the comparison results can be used as the final result of this comparison, and based on that result a decision can be made as to whether the face corresponding to the face images passes verification.
  • At the hardware level, the electronic device includes a processor and, optionally, an internal bus, a network interface, and a memory.
  • The memory may include internal memory, such as high-speed random-access memory (RAM), and may also include non-volatile memory, such as at least one disk memory.
  • the electronic device may also include hardware required for other services.
  • The processor, network interface, and memory can be connected to each other through an internal bus, which can be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like.
  • The bus can be divided into an address bus, a data bus, and a control bus. For ease of representation, only one bidirectional arrow is used in FIG. 3, but this does not mean that there is only one bus or one type of bus.
  • the program may include program code, and the program code includes a computer operation instruction.
  • The memory may include internal memory and non-volatile memory, and provides instructions and data to the processor.
  • the processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it, forming a shared resource access control device at a logical level.
  • The processor executes the programs stored in the memory and is specifically configured to perform the following operations: collecting multiple frames of face images of a target object; selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set; when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set; and comparing the face images in the candidate image set with the enrolled reference face image of the target object.
  • the method performed by the comparison device based on multi-frame facial images disclosed in the embodiments shown in FIG. 1 and FIG. 2 of the present specification may be applied to a processor, or implemented by a processor.
  • the processor may be an integrated circuit chip with signal processing capabilities.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
  • The aforementioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present specification may be directly embodied and executed by a hardware decoding processor, or may be executed and completed by a combination of hardware and software modules in the decoding processor.
  • The software module may be located in a storage medium mature in the art, such as random-access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register.
  • the storage medium is located in the memory.
  • the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
  • The electronic device may also execute the method of FIG. 1 and implement the functions of the comparison apparatus based on multi-frame facial images in the embodiments shown in FIG. 1 and FIG. 2, which will not be described again here.
  • Of course, in addition to the software implementation, the electronic device of the embodiments of this specification does not exclude other implementations, such as a logic device or a combination of software and hardware; that is to say, the execution body of the processing flow is not limited to logic units and may also be a hardware or logic device.
  • In the embodiments of this specification, the first face image with the highest quality score is selected from the collected multi-frame face images of the target object and added to the candidate image set; based on it, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set is cyclically selected from the multi-frame face images and added to the candidate image set until the number of images in the candidate image set reaches the preset number; then the face images in the candidate image set are compared with the enrolled reference face image of the target object. Thus, while the number of candidate face-image frames is increased, it is also ensured that the selected face images differ substantially from one another, which avoids noise and improves the accuracy of the comparison results.
  • The embodiments of this specification also provide a computer-readable storage medium storing one or more programs. The one or more programs include instructions which, when executed by a portable electronic device including multiple application programs, enable the portable electronic device to execute the method of the embodiment shown in FIG. 1, and are specifically used to execute the following method:
  • collecting multiple frames of face images of a target object; selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set; when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
  • and comparing the face images in the candidate image set with the enrolled reference face image of the target object.
  • In the embodiments of this specification, the first face image with the highest quality score is selected from the collected multi-frame face images of the target object and added to the candidate image set; based on it, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set is cyclically selected from the multi-frame face images and added to the candidate image set until the number of images in the candidate image set reaches the preset number; then the face images in the candidate image set are compared with the enrolled reference face image of the target object. Thus, while the number of candidate face-image frames is increased, it is also ensured that the selected face images differ substantially from one another, which avoids noise and improves the accuracy of the comparison results.
  • FIG. 4 is a schematic structural diagram of a comparison device 400 based on multi-frame facial images according to an embodiment of the present specification.
  • the comparison device 400 based on multi-frame facial images may include:
  • a collection module 402, which collects multiple frames of face images of a target object;
  • a first selection module 404, which selects, from the multi-frame face images, the first face image with the highest quality score and adds it to a candidate image set;
  • a second selection module 406, which, when the number of images in the candidate image set is less than a preset number, cyclically selects from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adds it to the candidate image set;
  • a comparison module 408, which compares the face images in the candidate image set with the enrolled reference face image of the target object.
  • In the embodiments of this specification, the first face image with the highest quality score is selected from the collected multi-frame face images of the target object and added to the candidate image set; based on it, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set is cyclically selected from the multi-frame face images and added to the candidate image set until the number of images in the candidate image set reaches the preset number; then the face images in the candidate image set are compared with the enrolled reference face image of the target object. Thus, while the number of candidate face-image frames is increased, it is also ensured that the selected face images differ substantially from one another, which avoids noise and improves the accuracy of the comparison results.
  • Optionally, the collection module 402 is specifically configured to: track each captured frame of face image, and collect, based on the tracking result, face images satisfying the quality-score threshold while recording the attributes of each frame.
  • Optionally, when collecting a face image satisfying the quality-score threshold based on the tracking result, the collection module 402 is specifically configured to:
  • if tracking succeeds and the face image of the current frame satisfies the quality-score threshold, collect the face image of the current frame as a face image of the target object.
  • Optionally, when collecting a face image satisfying the quality-score threshold based on the tracking result, the collection module 402 is also specifically configured to:
  • if tracking fails, but the face image of the current frame at the time of failure is successfully compared with the face images already collected before the current frame, and the face image of the current frame satisfies the quality-score threshold, collect the face image of the current frame as a face image of the target object; otherwise, re-track and clear the collected face images.
  • Optionally, the preset parameter includes at least one of quality score, angle, brightness, and lighting; alternatively, the preset parameter is determined based on at least one of the attributes quality score, angle, brightness, and lighting.
  • Optionally, when comparing the face images in the candidate image set with the enrolled reference face image of the target object, the comparison module 408 is specifically configured to compare each face image in the candidate image set with the reference face image and perform weighted fusion of the comparison results.
  • Optionally, the face image includes a recognizable human face image or a recognizable animal face image.
  • The comparison apparatus based on multi-frame facial images in the embodiments of this specification may also execute the method performed by the comparison apparatus (or device) in FIGS. 1-2 and implement the functions of the comparison apparatus (or device) in the embodiments shown in FIG. 1 to FIG. 2, which will not be repeated here.
  • the system, device, module or unit explained in the above embodiments may be specifically implemented by a computer chip or entity, or implemented by a product with a certain function.
  • a typical implementation device is a computer.
  • the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or A combination of any of these devices.
  • Computer-readable media, including permanent and non-permanent, removable and non-removable media, can store information by any method or technology.
  • the information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by computing devices.
  • As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.

Abstract

A comparison method, apparatus, and electronic device based on multi-frame facial images. The method includes: collecting multiple frames of face images of a target object (S102); selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set (S104); when the number of images in the candidate image set is less than a preset number, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set (S106); and comparing the face images in the candidate image set with the enrolled reference face image of the target object (S108).

Description

A comparison method, apparatus, and electronic device based on multi-frame facial images
Technical Field
This specification relates to the field of computer software technology, and in particular to a comparison method, apparatus, and electronic device based on multi-frame facial images.
Background
At present, with payment terminals emerging one after another in payment scenarios, face-scan payment has become a trend.
A typical face-scan payment system, after collecting images, selects the single face image with the highest quality score and compares it with the enrolled reference face image. However, since the images collected by the camera rely solely on a quality-score algorithm to select one frame, and that algorithm has errors, the selected frame may exhibit defects such as blur or occlusion; in addition, the amount of information in a single frame of face image is limited and cannot maximize comparison accuracy.
Therefore, how to effectively improve the comparison accuracy of facial images has become a technical problem to be urgently solved.
Summary
The purpose of the embodiments of this specification is to provide a comparison method, apparatus, and electronic device based on multi-frame facial images, so as to effectively improve the comparison accuracy of facial images.
To solve the above technical problem, the embodiments of this specification are implemented as follows:
In a first aspect, a comparison method based on multi-frame facial images is proposed, including:
collecting multiple frames of face images of a target object;
selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multi-frame face images, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
comparing the face images in the candidate image set with the enrolled reference face image of the target object.
In a second aspect, a comparison apparatus based on multi-frame facial images is proposed, including:
a collection module, which collects multiple frames of face images of a target object;
a first selection module, which selects, from the multi-frame face images, the first face image with the highest quality score and adds it to a candidate image set;
a second selection module, which, when the number of images in the candidate image set is less than a preset number, cyclically selects from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adds it to the candidate image set;
a comparison module, which compares the face images in the candidate image set with the enrolled reference face image of the target object.
In a third aspect, an electronic device is proposed, including:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to perform the following operations:
collecting multiple frames of face images of a target object;
selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multi-frame face images, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
comparing the face images in the candidate image set with the enrolled reference face image of the target object.
In a fourth aspect, a computer-readable storage medium is proposed. The computer-readable storage medium stores one or more programs which, when executed by an electronic device including multiple application programs, cause the electronic device to perform the following operations:
collecting multiple frames of face images of a target object;
selecting, from the multi-frame face images, the first face image with the highest quality score and adding it to a candidate image set;
when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multi-frame face images, the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set, and adding it to the candidate image set;
comparing the face images in the candidate image set with the enrolled reference face image of the target object.
As can be seen from the technical solutions provided by the above embodiments of this specification, by selecting the first face image with the highest quality score from the collected multi-frame face images of the target object and adding it to the candidate image set, and, based on that first face image, cyclically selecting from the multi-frame face images the second face image whose preset parameters have the largest average gap from those of the face images in the candidate image set and adding it to the candidate image set until the images in the candidate image set reach the preset number, and then comparing the face images in the candidate image set with the enrolled reference face image of the target object, the number of candidate face-image frames is increased (ensuring a sufficient amount of information) while it is also ensured that the selected face images differ substantially from one another, thereby avoiding noise and improving the accuracy of the comparison results.
Brief Description of the Drawings
In order to explain the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in this specification, and for a person of ordinary skill in the art, other drawings can be obtained from them without creative effort.
FIG. 1 is a schematic diagram of the steps of a comparison method based on multi-frame facial images provided by an embodiment of this specification.
FIG. 2 is a schematic diagram of a comparison process based on multi-frame facial images provided by another embodiment of this specification.
FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of this specification.
FIG. 4 is a schematic structural diagram of a comparison apparatus based on multi-frame facial images provided by an embodiment of this specification.
Detailed Description

To enable a person skilled in the art to better understand the technical solutions in this specification, the following clearly and completely describes the technical solutions in the embodiments of this specification with reference to the accompanying drawings in the embodiments of this specification. Clearly, the described embodiments are merely some rather than all of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this specification without creative efforts shall fall within the protection scope of this specification.
Embodiment 1

Referring to FIG. 1, a schematic diagram of the steps of a comparison method based on multiple face image frames according to an embodiment of this specification, the method may be performed by a comparison apparatus based on multiple face image frames, which may specifically be a face recognition and comparison apparatus, a payment terminal, a self-service terminal, or the like. The comparison method may include the following steps.

S102: Collect multiple face image frames of a target object.

In this embodiment of this specification, the multiple face image frames of the target object should be face images of the same target user. These face images may be selected based on different criteria: for example, the face image of the target object may be selected from each frame based on largest-face logic, or based on nearest-face logic, or determined in other ways; this specification sets no limitation thereto.
In an optional solution of this embodiment of this specification, when collecting multiple face image frames of the target object, S102 may specifically be performed as:

tracking each captured face image frame; and

collecting, based on the tracking result, face images that satisfy a quality-score threshold, and recording attributes of each face image frame.

Considering that the capture process generally lasts about 2 seconds, a tracking method may be used to ensure that every face image frame is tracked and that all tracked face images belong to the same target user. The tracking method involved may be implemented based on a histogram-based tracking algorithm, the MeanShift algorithm, or the like.

It should be understood that tracking in this specification means locating, through image recognition, the face in the face image of the current frame captured by the camera, and tracking that face with the camera so that it always remains within the camera's field of view.

In this embodiment of this specification, the quality score may be determined by a variety of evaluation methods. In one feasible solution, the quality score of a face image frame is determined based on attributes of that frame such as its angle and lighting. The quality-score threshold involved here may be a quality-score value determined from empirical values or a corresponding algorithm, and is used to select face images of higher quality and filter out face images with abnormal defects such as occlusion, blur, or over-exposure, effectively diluting the error introduced by selecting a single face image frame based solely on the quality score. Meanwhile, the attributes of each face image frame may also be recorded; for example, these attributes may include the quality score, angle, brightness, lighting, and so on.
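One way to read the quality score described above, as a weighted combination of per-frame attributes, can be sketched as follows. This is an illustrative sketch only: the attribute names, weights, and threshold value are assumptions for illustration, since the specification only says the score may be determined from attributes such as angle and lighting.

```python
# Illustrative sketch: a frame quality score as a weighted combination of
# normalized attributes. The attribute set ("angle", "brightness",
# "sharpness"), the weights, and the threshold are assumed values, not
# values fixed by this specification.

def quality_score(attrs, weights=None):
    """attrs: dict mapping attribute name -> value normalized to [0, 1]."""
    weights = weights or {"angle": 0.4, "brightness": 0.3, "sharpness": 0.3}
    return sum(weights[k] * attrs.get(k, 0.0) for k in weights)

def passes_threshold(attrs, threshold=0.6):
    # Filter out frames whose defects (blur, occlusion, over-exposure)
    # drag the combined score below the empirically chosen threshold.
    return quality_score(attrs) >= threshold
```

A frame scoring well on all attributes passes; a blurred, badly lit frame is filtered out before it can reach the candidate set.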
Optionally, when collecting, based on the tracking result, face images that satisfy the quality-score threshold, S102 may specifically be performed as:

if tracking succeeds and the face image of the current frame at the time of successful tracking satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object, where the face image of the current frame is the face image of the current frame at the time of successful tracking.

Optionally, when collecting, based on the tracking result, face images that satisfy the quality-score threshold, S102 may alternatively be performed as:

if tracking fails, the face image of the current frame at the time of the tracking failure is successfully compared with the face images collected before the current frame, and the face image of the current frame satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object, where the face image of the current frame is the face image of the current frame at the time of the tracking failure; or

otherwise, restarting tracking and clearing the collected face images.

Here, successful tracking can be understood as the face image of the current frame located by the camera being the same face as in the located previous frame, that is, the face has been kept within the camera's field of view. Correspondingly, tracking failure can be understood as the face image of the current frame located by the camera not being the same face as in the located previous frame, that is, the face has not been kept within the camera's field of view.

It should be understood that, upon a tracking failure, comparing the face image of the current frame with the face images collected before the current frame may be performed based on the ID or another identifier of the target user (or target object), so as to determine whether the same target user is still being tracked.
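The collection logic above, track each frame, fall back to an identity check on tracking failure, apply the quality threshold, and clear everything on a mismatch, can be sketched roughly as follows. `track()`, `same_identity()`, and `passes_quality()` are stand-ins for the tracking algorithm, the ID comparison, and the quality-score check; the specification does not fix any of them to a particular implementation.

```python
def collect_frames(frames, track, same_identity, passes_quality):
    """Sketch of collecting face frames of one target object from a stream.

    track(frame) -> True if the face stayed in view (tracking succeeded).
    same_identity(frame, collected) -> True if the frame matches the frames
        already collected (e.g. by target-object ID).
    passes_quality(frame) -> True if the frame meets the quality threshold.
    """
    collected = []
    for frame in frames:
        if not track(frame):
            # Tracking failed: keep the frame only if it still matches the
            # previously collected frames; otherwise clear and restart.
            if not (collected and same_identity(frame, collected)):
                collected.clear()
                continue
        if passes_quality(frame):
            collected.append(frame)
        # Frames below the quality threshold are simply discarded.
    return collected
```

For example, a frame that loses tracking but carries the same target ID is still collected if its quality is sufficient, while a frame with a different ID wipes the set and restarts collection, matching the S202/S206 flow described later.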
S104: Select, from the multiple face image frames, a first face image with the highest quality score and add it to a candidate image set.

S106: When the number of images in the candidate image set is less than a preset number, cyclically select, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and add the second face image to the candidate image set.

In this embodiment of this specification, the preset parameter may include at least one of the quality score, angle, brightness, and lighting. Alternatively, the preset parameter may be determined based on at least one of the quality-score, angle, brightness, and lighting attributes.

It should be understood that the preset parameter may be determined from at least one of the quality-score, angle, brightness, and lighting attributes in a weighted-average manner.

Here, cyclically selecting from the multiple face image frames the second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set can be illustrated with an example. Suppose the preset number is 4 and, at this point, only face image 1 with the highest quality score has been added to the candidate image set, so another selection is needed. Specifically, face image 2, whose preset parameter has the largest average difference from that of face image 1 (since the candidate image set currently contains only face image 1, the average difference is simply the difference from face image 1), is selected and added to the candidate image set. The candidate image set still contains fewer than 4 images, so face image 3, whose preset parameter has the largest average difference from those of face image 1 and face image 2, is selected next. This operation is repeated to select face image 4; the candidate image set then contains 4 images and the loop can terminate.

It should be understood that in this step the attributes of the face images are normalized and the average difference of the preset parameter is computed. While the number of candidate face image frames is increased, relatively large differences among the selected face images are also ensured, thereby avoiding noise and improving the accuracy of the subsequent comparison result.
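The selection loop of S104–S106 is essentially a greedy farthest-point strategy over the preset parameter. A minimal sketch under the assumption that the preset parameter of each frame has already been normalized and reduced to a single number (the specification allows it to combine several attributes by weighted average):

```python
def select_candidates(frames, param, quality, preset_count):
    """Greedy diverse-selection sketch of S104-S106.

    frames: list of frame identifiers.
    param: frame -> preset-parameter value (assumed already normalized).
    quality: frame -> quality score.
    preset_count: target size of the candidate image set.
    """
    # S104: seed the candidate set with the highest-quality frame.
    candidates = [max(frames, key=quality)]
    remaining = [f for f in frames if f not in candidates]
    # S106: repeatedly add the frame whose preset parameter differs most,
    # on average, from the frames already in the candidate set.
    while len(candidates) < preset_count and remaining:
        def avg_gap(f):
            return sum(abs(param(f) - param(c)) for c in candidates) / len(candidates)
        best = max(remaining, key=avg_gap)
        candidates.append(best)
        remaining.remove(best)
    return candidates
```

In the four-image example above, the set is seeded with face image 1, then grows one frame at a time by maximum average gap until it reaches the preset number.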
S108: Perform comparison based on the face images in the candidate image set and a retained face image of the target object.

Optionally, when performing comparison based on the face images in the candidate image set and the retained face image of the target object, S108 may specifically be performed as:

performing feature comparison between each face image in the candidate image set and the retained face image of the target object; and

performing weighted-average processing on the preset number of comparison results.

In this way, weighted fusion of the comparison results can improve the accuracy of the comparison result.

It should be understood that the retained face image may be stored locally in the system or on a cloud server. If it is stored locally, each face image in the candidate image set may be compared with the locally stored retained face image of the target object; if it is stored on a cloud server, the face images in the candidate image set may be uploaded to the cloud server for one-by-one comparison with the retained face image of the target object, and the comparison results are returned.
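The fusion step of S108, compare each candidate against the retained image and then combine the per-frame scores, can be sketched as a weighted average. The use of per-frame weights (e.g. quality scores) and the decision threshold are assumptions for illustration; the specification only states that the preset number of comparison results are weighted and averaged:

```python
def fuse_comparison_scores(scores, weights=None):
    """Weighted average of per-frame comparison scores (sketch of S108).

    scores: similarity score of each candidate frame vs. the retained image.
    weights: optional per-frame weights (an assumed choice would be the
        frames' quality scores); defaults to uniform weights, which
        reduces to a plain average.
    """
    if weights is None:
        weights = [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def verify(scores, threshold=0.8, weights=None):
    # Decide pass/fail for this comparison from the fused score; the
    # threshold value here is illustrative.
    return fuse_comparison_scores(scores, weights) >= threshold
```

Fusing several scores this way dilutes the effect of a single noisy frame: one poor comparison among several good ones no longer decides the outcome by itself.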
In the technical solution of this specification, a first face image with the highest quality score is selected from the collected multiple face image frames and added to a candidate image set; based on the first face image, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set is cyclically selected from the multiple face image frames and added to the candidate image set, until the number of images in the candidate image set reaches a preset number; and comparison is then performed based on the face images in the candidate image set and a retained face image of the target object. In this way, while the number of candidate face image frames is increased, relatively large differences among the selected face images are also ensured, thereby avoiding noise and improving the accuracy of the comparison result.

Optionally, in this embodiment of this specification, the face image includes a recognizable human face image or a recognizable animal face image.

It should be understood that the solution of comparison based on multiple face image frames in this specification is applicable to payment scenarios, such as face-scanning payment, or to identity verification scenarios, such as face-scanning access control.

The technical solutions involved in the embodiments of this specification are described in detail below through a specific example.
Referring to FIG. 2, the comparison process based on multiple face image frames may include the following steps.

S202: Track the face image of the current frame based on largest-face selection logic; if tracking succeeds, perform S204; otherwise, perform S206.

This is merely an example; the face may also be determined and tracked based on other face-selection logic. In addition, the embodiments of this specification do not exclude possible animal face images; for example, a user's pet face may serve as the retained face image, and the pet face can then be used for comparison in subsequent payments.

S204: Determine whether the face image of the current frame exceeds the minimum quality-score threshold; if so, perform S208; otherwise, perform S210.

S206: Determine whether the face image of the current frame and the locally stored face image of the previous frame belong to the same ID; if so, jump to S204; otherwise, clear the face image set and jump to S202.

In this step, if the comparison fails, the face image set needs to be cleared and tracking restarted.

S208: Add the image to the local face image set of the target object, and record the attributes of the face image.

S210: Discard the current frame and jump to S202.

S212: Determine whether the tracking time has reached 2 s; if so, perform S214; otherwise, jump to S202.

S214: Select the face image with the highest quality score from the face image set and add it to the candidate image set.

S216: Select, from the face image set, the face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and add it to the candidate image set.

For this step, reference can be made to the content of S106; details are not repeated here.

S218: Determine whether the images in the candidate image set have reached K frames; if so, perform S220; otherwise, jump to S216.

Here, K may be on the order of 60 frames; compared with a single face image frame, 2 s of face images are far richer in information, which can improve comparison accuracy.

S220: Upload the K face image frames in the candidate image set to the server for comparison, frame by frame, with the retained target face image of the target object.

S222: Receive the K comparison results and perform weighted averaging.

At this point, the weighted-average result can be taken as the final result of this comparison, and whether the face corresponding to the face image passes verification is decided based on this comparison result.
Embodiment 2

FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of this specification. Referring to FIG. 3, at the hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The memory may include an internal memory, such as a high-speed random-access memory (RAM), and may further include a non-volatile memory, such as at least one magnetic disk memory. Of course, the electronic device may further include hardware required by other services.

The processor, the network interface, and the memory may be interconnected through the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one double-headed arrow is used in FIG. 3, but this does not mean that there is only one bus or one type of bus.

The memory is configured to store a program. Specifically, the program may include program code, and the program code includes computer operation instructions. The memory may include an internal memory and a non-volatile memory, and provides instructions and data to the processor.

The processor reads the corresponding computer program from the non-volatile memory into the internal memory and then runs it, forming, at the logical level, the comparison apparatus based on multiple face image frames. The processor executes the program stored in the memory and is specifically configured to perform the following operations:
collecting multiple face image frames of a target object;

selecting, from the multiple face image frames, a first face image with the highest quality score and adding it to a candidate image set;

when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and adding the second face image to the candidate image set; and

performing comparison based on the face images in the candidate image set and a retained face image of the target object.
The method performed by the comparison apparatus based on multiple face image frames disclosed in the embodiments shown in FIG. 1 and FIG. 2 of this specification may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip with signal-processing capability. During implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), or the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this specification. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed with reference to the embodiments of this specification may be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.

The electronic device can also perform the method of FIG. 1 and implement the functions of the comparison apparatus based on multiple face image frames in the embodiments shown in FIG. 1 and FIG. 2; details are not repeated here in the embodiments of this specification.

Of course, in addition to the software implementation, the electronic device of the embodiments of this specification does not exclude other implementations, such as a logic device or a combination of software and hardware. In other words, the execution body of the following processing flow is not limited to individual logical units, and may also be hardware or a logic device.

In the technical solution of this specification, a first face image with the highest quality score is selected from the collected multiple face image frames of the target object and added to a candidate image set; based on the first face image, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set is cyclically selected from the multiple face image frames and added to the candidate image set, until the number of images in the candidate image set reaches a preset number; and comparison is then performed based on the face images in the candidate image set and a retained face image of the target object. In this way, while the number of candidate face image frames is increased, relatively large differences among the selected face images are also ensured, thereby avoiding noise and improving the accuracy of the comparison result.
Embodiment 3

An embodiment of this specification further provides a computer-readable storage medium storing one or more programs, the one or more programs including instructions that, when executed by a portable electronic device including multiple applications, enable the portable electronic device to perform the method of the embodiment shown in FIG. 1, and specifically to perform the following method:

collecting multiple face image frames of a target object;

selecting, from the multiple face image frames, a first face image with the highest quality score and adding it to a candidate image set;

when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and adding the second face image to the candidate image set; and

performing comparison based on the face images in the candidate image set and a retained face image of the target object.

In the technical solution of this specification, a first face image with the highest quality score is selected from the collected multiple face image frames of the target object and added to a candidate image set; based on the first face image, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set is cyclically selected from the multiple face image frames and added to the candidate image set, until the number of images in the candidate image set reaches a preset number; and comparison is then performed based on the face images in the candidate image set and a retained face image of the target object. In this way, while the number of candidate face image frames is increased, relatively large differences among the selected face images are also ensured, thereby avoiding noise and improving the accuracy of the comparison result.
实施例四
图4为本说明书的一个实施例提供的基于多帧脸部图像的比对装置400的结构示意图。请参考图4,在一种软件实施方式中,基于多帧脸部图像的比对装置400可包括:
采集模块402,采集目标对象的多帧脸部图像;
第一选择模块404,从所述多帧脸部图像中选择质量分最高的第一脸部图像加入候选图像集合;
第二选择模块406,当所述候选图像集合中的图像个数小于预设数量时,循环从所述多帧脸部图像中选择预设参数与所述候选图像集合中的脸部图像之间的平均差距最大的第二脸部图像,以加入到所述候选图像集合中;
比对模块408,基于所述候选图像集合中的脸部图像与目标对象的留底脸部图像进行比对。
本说明书技术方案,通过从采集到的目标对象的多帧脸部图像中选择质量分最高的第一脸部图像加入候选图像集合,并基于该第一脸部图像,循环从所述多帧脸部图像中选择预设参数与所述候选图像集合中的脸部图像之间的平均差距最大的第二脸部图像,加入到候选图像集合中,直至候选图像集合中图像达到预设数量,然后,基于候选图像集合中的脸部图像与目标对象的留底脸部图像进行比对,从而,在增加候选脸部图像帧数的同时,还保证了选择的脸部图像之间具有较大差异,进而,避免噪声,提升比对结果准确性。
Optionally, in an embodiment, the collection module 402 is specifically configured to:

track each captured face image frame; and

collect, based on the tracking result, face images that satisfy a quality-score threshold, and record attributes of each face image frame.

Optionally, in an embodiment, when collecting, based on the tracking result, face images that satisfy the quality-score threshold, the collection module 402 is specifically configured to:

if tracking succeeds and the face image of the current frame at the time of successful tracking satisfies the quality-score threshold, collect the face image of the current frame as a face image of the target object.

Optionally, in another embodiment, when collecting, based on the tracking result, face images that satisfy the quality-score threshold, the collection module 402 is specifically configured to:

if tracking fails, the face image of the current frame at the time of the tracking failure is successfully compared with the face images collected before the current frame, and the face image of the current frame satisfies the quality-score threshold, collect the face image of the current frame as a face image of the target object; or

otherwise, restart tracking and clear the collected face images.

Optionally, in a specific implementation of the embodiments of this specification, the preset parameter includes at least one of a quality score, an angle, a brightness, and lighting; or the preset parameter is determined based on at least one of the quality-score, angle, brightness, and lighting attributes.

Optionally, in an embodiment, when performing comparison based on the face images in the candidate image set and the retained face image of the target object, the comparison module 408 is specifically configured to:

perform feature comparison between each face image in the candidate image set and the retained face image of the target object; and

perform weighted-average processing on the preset number of comparison results.

Optionally, in an embodiment, the face image includes a recognizable human face image or a recognizable animal face image.

It should be understood that the comparison apparatus based on multiple face image frames of the embodiments of this specification can also perform the methods performed by the comparison apparatus (or device) in FIG. 1 and FIG. 2 and implement the functions of the comparison apparatus (or device) in the embodiments shown in FIG. 1 and FIG. 2; details are not repeated here.
In conclusion, the above descriptions are merely preferred embodiments of this specification and are not intended to limit its protection scope. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this specification shall fall within the protection scope of this specification.

The system, apparatus, module, or unit illustrated in the above embodiments may be specifically implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. Information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.

It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, product, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, product, or device. Without further restriction, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, product, or device that includes the element.

The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is basically similar to the method embodiment, its description is relatively simple, and for relevant parts, reference may be made to the partial description of the method embodiment.

Claims (10)

  1. A comparison method based on multiple face image frames, comprising:
    collecting multiple face image frames of a target object;
    selecting, from the multiple face image frames, a first face image with the highest quality score and adding it to a candidate image set;
    when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and adding the second face image to the candidate image set; and
    performing comparison based on the face images in the candidate image set and a retained face image of the target object.
  2. The method according to claim 1, wherein collecting multiple face image frames of the target object comprises:
    tracking each captured face image frame; and
    collecting, based on the tracking result, face images that satisfy a quality-score threshold, and recording attributes of each face image frame.
  3. The method according to claim 2, wherein collecting, based on the tracking result, face images that satisfy the quality-score threshold comprises:
    if tracking succeeds and the face image of the current frame at the time of successful tracking satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object.
  4. The method according to claim 2, wherein collecting, based on the tracking result, face images that satisfy the quality-score threshold comprises:
    if tracking fails, the face image of the current frame at the time of the tracking failure is successfully compared with the face images collected before the current frame, and the face image of the current frame satisfies the quality-score threshold, collecting the face image of the current frame as a face image of the target object; or
    otherwise, restarting tracking and clearing the collected face images.
  5. The method according to any one of claims 1 to 4, wherein performing comparison based on the face images in the candidate image set and the retained face image of the target object comprises:
    performing feature comparison between the face images in the candidate image set and the retained face image of the target object, respectively; and
    performing weighted-average processing on the preset number of comparison results.
  6. The method according to any one of claims 1 to 4, wherein the preset parameter comprises at least one of a quality score, an angle, a brightness, and lighting; or the preset parameter is determined based on at least one of the quality-score, angle, brightness, and lighting attributes.
  7. The method according to any one of claims 1 to 4, wherein the face image comprises a recognizable human face image or a recognizable animal face image.
  8. A comparison apparatus based on multiple face image frames, comprising:
    a collection module, configured to collect multiple face image frames of a target object;
    a first selection module, configured to select, from the multiple face image frames, a first face image with the highest quality score and add it to a candidate image set;
    a second selection module, configured to: when the number of images in the candidate image set is less than a preset number, cyclically select, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and add the second face image to the candidate image set; and
    a comparison module, configured to perform comparison based on the face images in the candidate image set and a retained face image of the target object.
  9. An electronic device, comprising:
    a processor; and
    a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the following operations:
    collecting multiple face image frames of a target object;
    selecting, from the multiple face image frames, a first face image with the highest quality score and adding it to a candidate image set;
    when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and adding the second face image to the candidate image set; and
    performing comparison based on the face images in the candidate image set and a retained face image of the target object.
  10. A computer-readable storage medium, the computer-readable storage medium storing one or more programs that, when executed by an electronic device including multiple applications, cause the electronic device to perform the following operations:
    collecting multiple face image frames of a target object;
    selecting, from the multiple face image frames, a first face image with the highest quality score and adding it to a candidate image set;
    when the number of images in the candidate image set is less than a preset number, cyclically selecting, from the multiple face image frames, a second face image whose preset parameter has the largest average difference from those of the face images in the candidate image set, and adding the second face image to the candidate image set; and
    performing comparison based on the face images in the candidate image set and a retained face image of the target object.
PCT/CN2019/111989 2018-12-03 2019-10-18 Comparison method and apparatus based on multiple face image frames, and electronic device WO2020114105A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
SG11202100924SA SG11202100924SA (en) 2018-12-03 2019-10-18 Comparison Method And Apparatus Based On A Plurality Of Face Image Frames And Electronic Device
EP19892558.8A EP3812956A4 (en) 2018-12-03 2019-10-18 COMPARISON PROCESS BASED ON MULTIPLE FACIAL IMAGES, APPARATUS AND ELECTRONIC DEVICE
US17/191,039 US11210502B2 (en) 2018-12-03 2021-03-03 Comparison method and apparatus based on a plurality of face image frames and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811468225.4 2018-12-03
CN201811468225.4A CN110020581B (zh) 2018-12-03 2018-12-03 Comparison method and apparatus based on multiple face image frames, and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/191,039 Continuation US11210502B2 (en) 2018-12-03 2021-03-03 Comparison method and apparatus based on a plurality of face image frames and electronic device

Publications (1)

Publication Number Publication Date
WO2020114105A1 true WO2020114105A1 (zh) 2020-06-11

Family

ID=67188587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/111989 WO2020114105A1 (zh) 2018-12-03 2019-10-18 一种基于多帧脸部图像的比对方法、装置和电子设备

Country Status (6)

Country Link
US (1) US11210502B2 (zh)
EP (1) EP3812956A4 (zh)
CN (1) CN110020581B (zh)
SG (1) SG11202100924SA (zh)
TW (1) TWI717834B (zh)
WO (1) WO2020114105A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6787391B2 (ja) * 2016-02-26 2020-11-18 日本電気株式会社 Face matching system, face matching method, and program
CN110020581B (zh) * 2018-12-03 2020-06-09 阿里巴巴集团控股有限公司 Comparison method and apparatus based on multiple face image frames, and electronic device
CN111554006B (zh) * 2020-04-13 2022-03-18 绍兴埃瓦科技有限公司 Smart lock and smart unlocking method
CN112085701A (zh) * 2020-08-05 2020-12-15 深圳市优必选科技股份有限公司 Face blur detection method and apparatus, terminal device, and storage medium
US11921831B2 (en) * 2021-03-12 2024-03-05 Intellivision Technologies Corp Enrollment system with continuous learning and confirmation
CN113505700A (zh) * 2021-07-12 2021-10-15 北京字跳网络技术有限公司 Image processing method, apparatus, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274330A1 (en) * 2010-03-30 2011-11-10 The Johns Hopkins University Automated characterization of time-dependent tissue change
CN103942525A (zh) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face selection method based on video sequences
US9693050B1 (en) * 2016-05-31 2017-06-27 Fmr Llc Automated measurement of mobile device application performance
CN107578017A (zh) * 2017-09-08 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating images
CN110020581A (zh) * 2018-12-03 2019-07-16 阿里巴巴集团控股有限公司 Comparison method and apparatus based on multiple face image frames, and electronic device

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3279913B2 (ja) * 1996-03-18 2002-04-30 株式会社東芝 Person authentication apparatus, feature point extraction apparatus, and feature point extraction method
KR101185243B1 (ko) * 2009-12-15 2012-09-21 삼성전자주식회사 Apparatus and method for registering a plurality of face images for face recognition
JP5075924B2 (ja) * 2010-01-13 2012-11-21 株式会社日立製作所 Classifier training image generation program, method, and system
CN102137077A (zh) * 2010-01-26 2011-07-27 凹凸电子(武汉)有限公司 Access control system and method for controlling access rights using a computer system
WO2012004933A1 (ja) * 2010-07-09 2012-01-12 パナソニック株式会社 Object association apparatus, object association method, program, and recording medium
US9600711B2 (en) * 2012-08-29 2017-03-21 Conduent Business Services, Llc Method and system for automatically recognizing facial expressions via algorithmic periocular localization
CN104462891A (zh) * 2013-09-17 2015-03-25 联想(北京)有限公司 Information processing method and device
US20160019420A1 (en) * 2014-07-15 2016-01-21 Qualcomm Incorporated Multispectral eye analysis for identity authentication
CN105005779A (zh) * 2015-08-25 2015-10-28 湖北文理学院 Face verification anti-spoofing recognition method and system based on interactive actions
WO2017043132A1 (ja) * 2015-09-08 2017-03-16 日本電気株式会社 Face recognition system, face recognition method, display control apparatus, display control method, and display control program
US9747494B2 (en) * 2015-11-16 2017-08-29 MorphoTrak, LLC Facial matching system
WO2017139325A1 (en) * 2016-02-09 2017-08-17 Aware, Inc. Face liveness detection using background/foreground motion analysis
CN109074484B (zh) * 2016-03-02 2022-03-01 蒂诺克股份有限公司 Systems and methods for efficient face recognition
US9990536B2 (en) * 2016-08-03 2018-06-05 Microsoft Technology Licensing, Llc Combining images aligned to reference frame
CN107730483A (zh) * 2016-08-10 2018-02-23 阿里巴巴集团控股有限公司 Mobile device, and method, apparatus, and system for processing facial biometric features
JPWO2018097177A1 (ja) * 2016-11-24 2019-10-17 株式会社ガイア・システム・ソリューション Engagement measurement system
JP6768537B2 (ja) * 2017-01-19 2020-10-14 キヤノン株式会社 Image processing apparatus, image processing method, and program
US11010595B2 (en) * 2017-03-23 2021-05-18 Samsung Electronics Co., Ltd. Facial verification method and apparatus
CN107633209B (zh) * 2017-08-17 2018-12-18 平安科技(深圳)有限公司 Electronic apparatus, method for face recognition in dynamic video, and storage medium
CN108427911B (zh) * 2018-01-30 2020-06-23 阿里巴巴集团控股有限公司 Identity verification method, system, apparatus, and device
CN108765394B (zh) * 2018-05-21 2021-02-05 上海交通大学 Target recognition method based on quality evaluation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110274330A1 (en) * 2010-03-30 2011-11-10 The Johns Hopkins University Automated characterization of time-dependent tissue change
CN103942525A (zh) * 2013-12-27 2014-07-23 高新兴科技集团股份有限公司 Real-time face selection method based on video sequences
US9693050B1 (en) * 2016-05-31 2017-06-27 Fmr Llc Automated measurement of mobile device application performance
CN107578017A (zh) * 2017-09-08 2018-01-12 百度在线网络技术(北京)有限公司 Method and apparatus for generating images
CN110020581A (zh) * 2018-12-03 2019-07-16 阿里巴巴集团控股有限公司 Comparison method and apparatus based on multiple face image frames, and electronic device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3812956A4 *

Also Published As

Publication number Publication date
US20210192190A1 (en) 2021-06-24
EP3812956A4 (en) 2021-12-01
TWI717834B (zh) 2021-02-01
EP3812956A1 (en) 2021-04-28
TW202024992A (zh) 2020-07-01
US11210502B2 (en) 2021-12-28
SG11202100924SA (en) 2021-02-25
CN110020581B (zh) 2020-06-09
CN110020581A (zh) 2019-07-16

Similar Documents

Publication Publication Date Title
WO2020114105A1 (zh) Comparison method and apparatus based on multiple face image frames, and electronic device
TWI716008B (zh) Face recognition method and apparatus
US20200160040A1 (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
TW202006595A (zh) Face recognition method and terminal device
US20200082192A1 (en) Liveness detection method, apparatus and computer-readable storage medium
CN109086734B (zh) Method and apparatus for locating a pupil image in a human eye image
CN107818301B (zh) Method, apparatus, and electronic device for updating biometric templates
CN112333356B (zh) Certificate image collection method, apparatus, and device
CN109102026B (zh) Vehicle image detection method, apparatus, and system
CN115631112B (zh) Building contour correction method and apparatus based on deep learning
CN111079793A (zh) Icon similarity determination method and electronic device
CN113505682A (zh) Liveness detection method and apparatus
WO2018058573A1 (zh) Object detection method, object detection apparatus, and electronic device
JP7121132B2 (ja) Image processing method, apparatus, and electronic device
CN111985438A (zh) Static face processing method, apparatus, and device
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN110019951B (zh) Method and device for generating video thumbnails
CN113051778A (zh) Clothing design method, apparatus, electronic device, and storage medium
CN109376585B (zh) Auxiliary method for face recognition, face recognition method, and terminal device
CN109118506A (zh) Method and apparatus for determining edge points of a pupil image in a human eye image
CN115619832B (zh) Method, system, and related apparatus for multi-target trajectory confirmation with multiple collaborating cameras
CN112883925B (zh) Face image processing method, apparatus, and device
CN112581506A (zh) Face tracking method, system, and computer-readable storage medium
CN114973366A (zh) Video processing method and related device
CN116740082A (zh) Service processing method, apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19892558

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE