WO2022226699A1 - Template collection method, device and system

Template collection method, device and system

Publication number: WO2022226699A1
Authority: WIPO (PCT)
Prior art keywords: image information, template, template image, domain, stored
Application number: PCT/CN2021/089692
Other languages: English (en), French (fr)
Inventors: 王振阳, 赵亚西, 徐文康, 黄为
Original assignee: 华为技术有限公司
Application filed by 华为技术有限公司
Priority to CN202180001486.9A (published as CN113302623A)
Priority to PCT/CN2021/089692 (published as WO2022226699A1)
Priority to EP21938190.2A (published as EP4328796A4)
Publication of WO2022226699A1

Classifications

    • G06V40/172 Human faces: classification, e.g. identification
    • G06F18/21347 Feature extraction based on separation criteria, e.g. independent component analysis, using domain transformations
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06V40/50 Maintenance of biometric data or enrolment thereof
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities

Definitions

  • the present application relates to the field of computer vision and the field of smart cars, and in particular, to a template acquisition method, device and system.
  • Biometric recognition technology based on computer vision, such as facial recognition, is widely applied in terminals such as smart cars and smart phones for identification based on human facial features.
  • Face recognition generally adopts a comparison method. By comparing the similarity between a face image and a pre-collected standard image (also called a template image), it is determined whether the face image and the template image are the same person.
  • the domain of an image can be divided according to any information related to the image, i.e., image-related information. Images in the same domain share at least one piece of image-related information.
  • if the face image and the template image belong to the same domain, that is, same-domain face recognition is performed, the recognition accuracy is relatively high; if the face image and the template image belong to different domains, that is, cross-domain face recognition is performed, the recognition accuracy will decrease.
  • When faced with cross-domain face recognition, prior-art face recognition methods either require the user to register only one template or require the user to register multiple templates. If the user registers only one template, that template is used when identifying face images of all categories, and the recognition accuracy differs significantly between the case where the face image and the template are in the same domain and the cross-domain case. If the user is required to register multiple templates, different templates can be selected for comparison according to the face image, which correspondingly improves the recognition accuracy, but the registration process becomes cumbersome. That is, prior-art face recognition methods cannot simultaneously achieve a simple operation process and improved recognition accuracy.
  • the present application provides a template collection method, device and system, which are used to improve the recognition accuracy while ensuring a simple operation process.
  • A first aspect provides a template acquisition method. The method includes: acquiring first image information from a sensor; and acquiring pre-stored template image information according to the domain to which the first image information belongs, the template image information being used for comparison with the first image information.
  • The sensor can be a camera or a radar. The radar can include lidar, millimeter-wave radar, or ultrasonic radar.
  • the pre-stored template image information may be locally stored and/or cloud-stored template image information.
  • By acquiring pre-stored template image information according to the domain to which the first image information belongs, appropriate template image information can be selected for comparison with the first image information, reducing the impact of domain differences on recognition accuracy and improving the recognition accuracy.
  • the second template image information is preferentially obtained for comparison with the first image information, wherein the second template image information is pre-stored template image information in the same domain as the first image information.
  • Same domain means that the two pieces of image information belong to the same domain.
  • when the second template image information does not exist, the first template image information is obtained for comparison with the first image information, wherein the first template image information is pre-stored template image information that is cross-domain with the first image information.
  • Cross-domain means that the two pieces of image information belong to different domains. A selection rule of this kind is sketched below.
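  • As a concrete reading of this same-domain-first selection, the following sketch (Python; names such as TemplateImageInfo and select_templates are hypothetical, not from the application) prefers pre-stored templates in the domain of the incoming image and falls back to cross-domain templates only when none exist.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TemplateImageInfo:
    user_id: str
    domain: str            # e.g. "RGB", "IR", "grayscale" (assumed labels)
    feature: List[float]   # pre-extracted template feature vector

def select_templates(templates: List[TemplateImageInfo],
                     user_id: str,
                     image_domain: str) -> List[TemplateImageInfo]:
    """Return same-domain templates if any exist; otherwise fall back to
    the user's cross-domain templates."""
    user_templates = [t for t in templates if t.user_id == user_id]
    same_domain = [t for t in user_templates if t.domain == image_domain]
    # "second template image information": same domain as the first image information
    if same_domain:
        return same_domain
    # "first template image information": cross-domain with the first image information
    return user_templates
```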
  • the category of the target object may be determined according to the identity, attribute or characteristic of the target object.
  • the categories of target objects may include one or more of males, females, adults, or children.
  • the template acquisition method further includes: comparing the first image information with the acquired pre-stored template image information, and passing the authentication when the first image information matches the acquired template image information. When the similarity between the target object included in the first image information and the target object included in the acquired template image information is greater than a predetermined threshold, the first image information matches the acquired template image information.
  • the template acquisition method further includes: when the first image information is compared with the acquired first template image information and the authentication is passed, updating the pre-stored template image information according to the first image information.
  • the updating may be adding template image information generated according to the first image information.
  • By updating the pre-stored template image information, face templates for domains in which the user has not registered a template can be collected automatically, keeping the process simple while increasing the probability of same-domain comparison in future authentication, which improves the recognition accuracy. One possible form of this update is sketched below.
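  • A minimal sketch of this update rule, assuming the templates are kept as a list of dictionaries; the field names and the add-only policy are illustrative assumptions rather than the application's wording.

```python
from typing import Dict, List

def add_template_after_cross_domain_pass(templates: List[Dict],
                                         user_id: str,
                                         image_domain: str,
                                         feature: List[float]) -> None:
    """If authentication passed against a cross-domain template and the user has
    no template in the domain of the first image information yet, add one
    generated from that image (one possible reading of the described update)."""
    has_same_domain = any(t["user_id"] == user_id and t["domain"] == image_domain
                          for t in templates)
    if not has_same_domain:
        templates.append({"user_id": user_id,
                          "domain": image_domain,
                          "feature": feature})
```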
  • the template collection method further includes: when authentication fails, sending reminder information to the user, where the reminder information is used to remind the user to register the template.
  • the reminder information may be sent to the user when the user fails to authenticate using the first image information, but passes the authentication using other methods.
  • Other means include one or more of passwords, verification codes, fingerprints, voiceprints, or irises.
  • A second aspect provides a template acquisition method that includes the following steps: acquiring first image information from a sensor; acquiring pre-stored template image information; preferentially comparing second template image information with the first image information, wherein the second template image information is the template image information, among the pre-stored template image information, whose domain is the same as that of the first image information; and, when the second template image information does not exist, comparing first template image information with the first image information, wherein the first template image information is the pre-stored template image information whose domain is different from that of the first image information.
  • the recognition accuracy can be ensured by preferentially comparing the second template image information and the first image information in the same domain.
  • the template acquisition method further includes: when the first image information is compared with the acquired first template image information and the authentication is passed, updating the pre-stored template image information according to the first image information.
  • the updating may be adding template image information generated according to the first image information.
  • By updating the pre-stored template image information, face templates for domains in which the user has not registered a template can be collected automatically, keeping the process simple while increasing the probability of same-domain comparison in future authentication, which improves the recognition accuracy.
  • the template collection method further includes: when authentication fails, sending reminder information to the user, where the reminder information is used to remind the user to register the template.
  • the reminder information may be sent to the user when the user fails to authenticate using the first image information, but passes the authentication using other methods.
  • Other means include one or more of passwords, verification codes, fingerprints, voiceprints, or irises.
  • A third aspect provides a template acquisition method that includes: acquiring collected first image information; acquiring first template image information from pre-stored template image information, wherein the domain to which the first template image information belongs is different from the domain to which the first image information belongs; and, when the first image information matches the first template image information, updating the pre-stored template image information according to the first image information.
  • the first image information may come from a sensor. Sensors include cameras, radar.
  • the updating may be adding template image information generated according to the first image information.
  • the update may also be to replace the template image information used when the authentication is passed.
  • By updating the pre-stored template image information, face templates for domains in which the user has no registered template can be collected automatically, without requiring the user to perform the cumbersome operation of registering a template for each domain, and the probability of performing a high-precision same-domain comparison in the subsequent authentication process is increased. As a result, the recognition accuracy can be improved while the operation flow is kept simple.
  • acquiring first template image information from pre-stored template image information specifically includes: acquiring second template image information from the pre-stored template image information, the second template image information belonging to the same domain as the first image information; and, when the second template image information does not match the first image information, acquiring the first template image information from the pre-stored template image information.
  • the second template image information in the same domain as the first image information is acquired first, and the first template image information in a different domain from the first image information is acquired only when the first image information and the second template image information do not match. Therefore, same-domain matching between the first image information and the second template image information is checked first, which ensures the recognition accuracy.
  • the template acquisition method further includes: acquiring collected second image information; acquiring second template image information from the pre-stored template image information, the second template image information belonging to the same domain as the second image information; and, when the second image information matches the second template image information and the registration time of the second template image information exceeds a time threshold, updating the pre-stored template image information according to the second image information.
  • the second image information may come from a sensor.
  • the sensor that collects the second image information may be the same as or different from the sensor that collects the first image information.
  • the updating may be to replace second template image information that matches the second image information.
  • the pre-stored template image information is updated, which can reduce the recognition rejection rate and improve the recognition accuracy.
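  • One way to express this age-based refresh, with dictionary-based template records and an assumed time threshold; the application does not specify a concrete threshold value, so the names and numbers below are illustrative assumptions.

```python
import time
from typing import Dict, List, Optional

TIME_THRESHOLD_SECONDS = 180 * 24 * 3600   # assumed value (~6 months)

def maybe_refresh_template(matched_template: Dict,
                           new_feature: List[float],
                           now: Optional[float] = None) -> bool:
    """Replace a same-domain template that matched but was registered longer ago
    than the time threshold with one generated from the newly matched image."""
    now = time.time() if now is None else now
    if now - matched_template["registered_at"] > TIME_THRESHOLD_SECONDS:
        matched_template["feature"] = new_feature
        matched_template["registered_at"] = now
        return True
    return False
```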
  • the template acquisition method further includes: acquiring collected second image information; acquiring second template image information from the pre-stored template image information, the second template image information belonging to the same domain as the second image information; and, when the degree of matching between the second image information and the second template image information is greater than a first threshold and less than a second threshold, updating the pre-stored template image information according to the second image information.
  • the characteristics of the target object contained in the second template image information may have some changes compared with the original ones.
  • the pre-stored template image information is updated, which can reduce the recognition rejection rate and improve the recognition accuracy.
  • the template acquisition method further includes: acquiring second image information from the sensor, where the second image information includes a plurality of image information; acquiring pre-stored second template image information, wherein the domain of the second template image information is the same as that of the second image information; and, when the second image information does not match the second template image information and the user passes authentication by another authentication method, updating the pre-stored template image information according to the second image information.
  • the plurality of image information may be image information collected continuously within a certain period of time. In this case, updating the pre-stored template image information generally means adding new template image information.
  • Since the user has already passed authentication by another method, the pre-stored template image information is updated directly without asking for the user's consent, which ensures security while saving the user operation steps.
  • In the template acquisition method, when the matching degree between the first image information and the first template image information is higher than a third threshold, the first image information and the first template image information match; when the matching degree between the second image information and the second template image information is higher than a fourth threshold, the second image information and the second template image information match; and the third threshold is different from the fourth threshold.
  • the matching degree threshold used in the same-domain comparison is different from the matching degree threshold used in the cross-domain comparison, and the target object can be identified appropriately according to the difference between the same domain and the cross-domain, which can reduce the rejection rate and improve the recognition accuracy.
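  • The following sketch applies a different matching threshold depending on whether the comparison is same-domain or cross-domain. The numeric values are assumptions: the application only states that the third (cross-domain) and fourth (same-domain) thresholds differ.

```python
CROSS_DOMAIN_THRESHOLD = 0.70   # "third threshold": assumed value for cross-domain comparison
SAME_DOMAIN_THRESHOLD = 0.80    # "fourth threshold": assumed value for same-domain comparison

def is_match(matching_degree: float, image_domain: str, template_domain: str) -> bool:
    """Apply the same-domain threshold when domains agree, otherwise the cross-domain one."""
    threshold = (SAME_DOMAIN_THRESHOLD if image_domain == template_domain
                 else CROSS_DOMAIN_THRESHOLD)
    return matching_degree > threshold
```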
  • The template acquisition method further includes: acquiring collected third image information; acquiring third template image information from the pre-stored template image information, wherein the domain to which the third template image information belongs is different from the domain to which the third image information belongs and is also different from the domain of the first template image information; and, when the third image information matches the third template image information, updating the pre-stored template image information according to the third image information; wherein, when the matching degree between the third image information and the third template image information is higher than the third threshold, the third image information and the third template image information match.
  • the third image information may originate from a sensor.
  • the sensor that collects the third image information may be the same as or different from the sensor that collects the first image information and the sensor that collects the second image information.
  • the same threshold is used when comparing acquired image information and pre-stored template image information across any two different domains, so that even if the number of domains increases or the domains change, only the threshold needs to be re-selected and the algorithm does not need to be retrained, ensuring the long-term validity of the algorithm.
  • the belonging domain is used to indicate one or more features of the format, color or source of the image information.
  • the belonging domain includes an RGB domain and an IR domain. Images collected in the visible light band and with recorded color information can belong to the RGB domain. Specifically, for example, an RGB image may belong to the RGB domain. Images captured in the infrared band without recorded color information can belong to the IR domain. The belonging domain may also include a grayscale domain. Images captured in the visible wavelength band without recorded color information can belong to the grayscale domain.
  • the domain to which the image belongs is determined according to the color feature of the image, and on this basis, the template image information of the corresponding domain is automatically collected to improve the recognition accuracy.
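  • A rough pixel-based heuristic for assigning an image to a domain by whether it records color information; a real system would more likely rely on sensor or channel metadata, so this is an assumption-laden illustration rather than the application's procedure.

```python
import numpy as np

def classify_domain(image: np.ndarray) -> str:
    """Assign a domain label from the presence of recorded color information."""
    if image.ndim == 2:
        # single channel: no color information (grayscale or IR, depending on the sensor)
        return "grayscale_or_IR"
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    if np.allclose(r, g) and np.allclose(g, b):
        return "grayscale_or_IR"   # three identical channels carry no color
    return "RGB"                   # recorded color information
```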
  • the template collection method further includes: sending prompt information to the user, where the prompt information is used to request the user to agree to update the pre-stored template image information, or to notify the user that the pre-stored template image information has been updated.
  • other authentication methods may be used to verify the user's authority.
  • By requesting the user's consent to the update, the update can be performed more accurately; by notifying the user that the template image information has been updated, the user's right to know is ensured.
  • A fourth aspect provides a template acquisition device, which includes an acquisition module and a processing module. The acquisition module is used to acquire collected first image information; the acquisition module is further used to acquire first template image information from pre-stored template image information, wherein the domain of the first template image information is different from the domain to which the first image information belongs; and the processing module is configured to, when the first image information matches the first template image information, update the pre-stored template image information according to the first image information.
  • The face templates of domains in which the user has no registered template can be collected automatically, without requiring the user to perform the cumbersome operation of registering a template for each domain, and the probability of performing a high-precision same-domain comparison in the subsequent authentication process is increased. Therefore, by adopting this device, the recognition accuracy can be improved while the operation process remains simple.
  • the obtaining module is further configured to obtain second template image information from the pre-stored template image information, where the second template image information and the first image information belong to the same domain; and the obtaining module is further configured to obtain the first template image information from the pre-stored template image information when the second template image information does not match the first image information.
  • the acquiring module is further configured to acquire collected second image information; the acquiring module is further configured to acquire second template image information from the pre-stored template image information, the second template image information belonging to the same domain as the second image information; and the processing module is further configured to, when the second image information matches the second template image information and the registration time of the second template image information exceeds the time threshold, update the pre-stored template image information according to the second image information.
  • the pre-stored template image information is updated, which can reduce the rejection rate and improve the recognition accuracy.
  • the acquiring module is further configured to acquire collected second image information; the acquiring module is further configured to acquire second template image information from the pre-stored template image information, the domain of the second template image information being the same as that of the second image information; and the processing module is further configured to, when the degree of matching between the second image information and the second template image information is greater than the first threshold and less than the second threshold, update the pre-stored template image information according to the second image information.
  • the pre-stored template image information is updated, which can reduce the rejection rate and improve the recognition accuracy.
  • the obtaining module is further configured to obtain second image information from the sensor, where the second image information includes a plurality of image information; the obtaining module is further configured to obtain pre-stored second template image information, wherein the second template image information and the second image information belong to the same domain; and the processing module is further configured to, when the second image information does not match the second template image information and the user passes authentication by another authentication method, update the pre-stored template image information according to the second image information.
  • Since the user has already passed authentication by another method, the pre-stored template image information is updated directly without asking for the user's consent, which ensures security while saving the user operation steps.
  • In the fourth aspect, when the degree of matching between the first image information and the first template image information is higher than a third threshold, the first image information and the first template image information match; when the degree of matching between the second image information and the second template image information is higher than a fourth threshold, the second image information and the second template image information match; and the third threshold is different from the fourth threshold.
  • the matching degree threshold used in the same-domain comparison is different from the matching degree threshold used in the cross-domain comparison, and the target object can be identified appropriately according to the difference between the same domain and the cross-domain, which can reduce the rejection rate and improve the recognition accuracy.
  • The acquisition module is further used to acquire collected third image information; the acquisition module is further used to acquire third template image information from the pre-stored template image information, wherein the domain of the third template image information is different from the domain of the third image information and from the domain of the first template image information; and the processing module is further used to, when the third image information matches the third template image information, update the pre-stored template image information according to the third image information; wherein, when the matching degree between the third image information and the third template image information is higher than the third threshold, the third image information and the third template image information match.
  • the same threshold is used when comparing acquired image information and pre-stored template image information across any two different domains, so that even if the number of domains increases or the domains change, only the threshold needs to be re-selected and the algorithm does not need to be retrained, ensuring the long-term validity of the algorithm.
  • the belonging domain is used to indicate one or more features of the format, color or source of the image information.
  • the belonging domain includes an RGB domain and an IR domain.
  • the domain to which the image belongs is determined according to the color feature of the image, and on this basis, the template image information of the corresponding domain is automatically collected to improve the recognition accuracy.
  • the processing module is further configured to send prompt information to the user, where the prompt information is used to request the user to agree to update the pre-stored template image information, or to notify the user that the pre-stored template image information has been updated.
  • By requesting the user's consent to the update, the update can be performed more accurately; by notifying the user of the update, the user's right to know is ensured.
  • A fifth aspect provides a template acquisition system including a template acquisition device and a server. The template acquisition device is used to send collected first image information; the server is used to receive the first image information from the template acquisition device; the server is further configured to acquire first template image information from pre-stored template image information, wherein the domain of the first template image information is different from the domain to which the first image information belongs; and the server is further configured to, when the first image information matches the first template image information, update the pre-stored template image information according to the first image information.
  • The face templates of domains in which the user has no registered template can be collected automatically, without requiring the user to perform the cumbersome operation of registering a template for each domain, which ensures a simple operation process while improving the recognition accuracy.
  • the server is further configured to acquire second template image information from the pre-stored template image information, where the second template image information and the first image information belong to the same domain ; when the second template image information does not match the first image information, obtain the first template image information from the pre-stored template image information.
  • the template collecting device is further configured to send the collected second image information; the server is further configured to receive the second image information from the template collecting device.
  • the server is further configured to obtain second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain.
  • the server is further configured to: when the second image information matches the second template image information and the registration time of the second template image information exceeds a time threshold, or when the second image information and the second template image When the matching degree of the information is greater than the first threshold and less than the second threshold, the pre-stored template image information is updated according to the second image information.
  • when the degree of matching between the first image information and the first template image information is higher than a third threshold, the first image information and the first template image information match.
  • when the degree of matching between the second image information and the second template image information is higher than a fourth threshold, the second image information and the second template image information match.
  • the third threshold is different from the fourth threshold.
  • the template collecting device is further configured to send the collected third image information; the server is further configured to receive the third image information from the template collecting device.
  • the server is further configured to obtain third template image information from the pre-stored template image information, where the third template image information is different from the domain to which the third image information belongs, and is different from the domain to which the first template image information belongs. different.
  • the server is further configured to update the pre-stored template image information according to the third image information when the third image information matches the third template image information, wherein, between the third image information and the third template image information When the matching degree of the template image information is higher than the third threshold, the third image information matches the third template image information.
  • A sixth aspect provides a template collection system including a template collection device and a server. The template collection device is used to send collected first image information; the server is used to receive the first image information from the template collection device; the server is further configured to acquire first template image information from pre-stored template image information, wherein the domain of the first template image information is different from the domain to which the first image information belongs; the server is further configured to send the first template image information; the template collection device is further configured to receive the first template image information from the server; and the template collection device is further configured to, when the first image information matches the first template image information, send indication information to the server according to the first image information, where the indication information is used to instruct the server to update the pre-stored template image information.
  • the indication information may include template image information generated according to the first image information, and information indicating an update manner.
  • the updating manner includes adding the generated template image information or replacing the first template image information with the generated template image information.
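  • One hypothetical shape for such indication information, expressed as a dataclass; the field names and the two update manners mirror the description above but are assumptions, not the application's exact wording.

```python
from dataclasses import dataclass
from typing import List, Literal, Optional

@dataclass
class UpdateIndication:
    """Message the template collection device could send to instruct the server
    to update the pre-stored template image information."""
    user_id: str
    domain: str                                  # domain of the generated template
    generated_feature: List[float]               # template image information generated from the matched image
    update_manner: Literal["add", "replace"]
    replaced_template_id: Optional[str] = None   # set when update_manner == "replace"
```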
  • the template collection system of the sixth aspect can automatically collect the face templates of the domains where the user does not have a registered template, without requiring the user to perform the cumbersome operation of registering a template for each domain, which ensures a simple operation process while improving the recognition accuracy.
  • the template collecting device is further configured to send the collected second image information; the server is further configured to receive the second image information from the template collecting device.
  • the server is further configured to obtain second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain.
  • the server is further configured to send the second template image information.
  • the template acquisition device is further configured to receive the second template image information from the server.
  • the template acquisition device is further configured to, when the second image information matches the second template image information and the registration time of the second template image information exceeds a time threshold, or when the matching degree between the second image information and the second template image information is greater than the first threshold and less than the second threshold, send indication information to the server according to the second image information, where the indication information is used to instruct the server to update the pre-stored template image information.
  • the indication information may include template image information generated according to the second image information, and information indicating an update manner.
  • the updating manner includes adding the generated template image information or replacing the second template image information with the generated template image information.
  • when the matching degree between the first image information and the first template image information is higher than a third threshold, the first image information and the first template image information match.
  • when the degree of matching between the second image information and the second template image information is higher than a fourth threshold, the second image information and the second template image information match.
  • the third threshold is different from the fourth threshold.
  • the template collecting device is further configured to send the collected third image information; the server is further configured to receive the third image information from the template collecting device.
  • the server is further configured to obtain third template image information from the pre-stored template image information, where the third template image information is different from the domain to which the third image information belongs, and is different from the domain to which the first template image information belongs. different.
  • the server is further configured to send the third template image information.
  • the template acquisition device is further configured to receive the third template image information from the server.
  • the template acquisition device is further configured to, when the third image information matches the third template image information, send instruction information to the server according to the third image information, where the instruction information is used to instruct the server to update the pre-stored template image information.
  • when the degree of matching between the third image information and the third template image information is higher than the third threshold, the third image information and the third template image information match.
  • A seventh aspect provides an electronic device comprising a processor and a memory, wherein the memory stores program instructions that, when executed by the processor, cause the processor to perform the technical solution provided by any one of the first to third aspects or any possible implementation manner thereof.
  • An eighth aspect provides an electronic device including a processor and an interface circuit, wherein the processor is coupled to a memory through the interface circuit, and the processor is configured to execute program code in the memory, so that the processor performs the technical solution provided by any one of the first to third aspects or any possible implementation manner thereof.
  • A ninth aspect provides a computer storage medium comprising computer instructions that, when executed on an electronic device, cause the electronic device to perform the technical solution provided by any one of the first to third aspects or any possible implementation manner thereof.
  • A tenth aspect provides a computer program product that, when run on a computer, enables the computer to execute the technical solution provided by any one of the first to third aspects or any possible implementation manner thereof.
  • FIG. 1 is a schematic flowchart of a template collection method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a template collection method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a template collection method provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a face recognition system according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application scenario of an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of a template collection method provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a template registration/update process provided by an embodiment of the present application.
  • FIG. 8 is a schematic flow chart when a user actively initiates template registration according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of a template collection method provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a template collection device provided by an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of a template collection system provided by an embodiment of the application.
  • FIG. 12 is a schematic structural diagram of a template collection system provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a computing device according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a computing device according to an embodiment of the present application.
  • the template collection method of the present application can be used for biometric identification technology, where the identified object may include human faces, animals, and the like; it is not limited thereto and can also be used for the recognition of various other objects.
  • FIG. 1 is a schematic flowchart of a template collection method provided by an embodiment of the present application. It should be understood that the template acquisition method may be executed by a computing device or an electronic device (for example, a terminal), and may also be executed by a chip or a system on chip (system on chip; SoC) in the electronic device. As shown in Figure 1, the template acquisition method includes the following steps:
  • S110 Acquire first image information from a sensor.
  • The sensors may include cameras and radars; the radar can include lidar, millimeter-wave radar, or ultrasonic radar.
  • the first image information may be RGB (red-green-blue) image information, grayscale image information, or IR (infrared) image information from the camera, or point cloud image information from the radar.
  • the first image information may be the image itself acquired by the sensor, or may be information such as a partial image of the target object, a feature vector, and numbers representing the characteristics of the target object extracted from the image.
  • S120 Acquire pre-stored template image information according to the domain to which the first image information belongs, where the template image information is used for comparison with the first image information.
  • the domains here can be divided according to attributes such as the format, color, or source of the image captured by the sensor. That is, the belonging domain is used to indicate one or more characteristics of the format, color, or source of the image captured by the sensor.
  • the domain to which the image information from the radar belongs is different from the domain to which the image information from the camera belongs.
  • the domain to which the RGB image information from the camera belongs is different from the domain to which the IR image information from the camera belongs.
  • the domain to which the image information from one camera belongs is different from the domain to which the image information from another camera belongs.
  • In this way, appropriate pre-stored template image information can be selected for comparison with the first image information, reducing the impact of domain differences on recognition accuracy and improving the recognition accuracy.
  • the pre-stored template image information may be template image information stored locally or in the cloud, or a combination of local storage and cloud storage.
  • a storage manner of the pre-stored template image information may be a database, a queue, a table, or the like.
  • Local storage can be understood as storage in a local memory of the device.
  • the local memory includes nonvolatile memory and volatile memory. Storing templates in nonvolatile memory occupies relatively large storage capacity but can reduce signaling interaction; storing them in volatile memory reduces the storage cost. A minimal sketch of such a local template store follows.
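  • A minimal sketch of a locally stored template index keyed by (user_id, domain); a real implementation might instead be a database table, a queue, or cloud storage, as the description allows. All names here are assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

class LocalTemplateStore:
    """In-memory index of pre-stored template features, keyed by (user_id, domain)."""
    def __init__(self) -> None:
        self._index: Dict[Tuple[str, str], List[List[float]]] = defaultdict(list)

    def add(self, user_id: str, domain: str, feature: List[float]) -> None:
        self._index[(user_id, domain)].append(feature)

    def get(self, user_id: str, domain: str) -> List[List[float]]:
        return list(self._index[(user_id, domain)])
```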
  • the second template image information may be preferentially acquired for comparison with the first image information, wherein the second template image information is a pre-stored template image in the same domain as the first image information information.
  • when the second template image information does not exist, the first template image information is acquired for comparison with the first image information, wherein the first template image information is pre-stored template image information that is cross-domain with the first image information.
  • Same domain means that the two pieces of image information belong to the same domain.
  • Cross-domain means that the two pieces of image information belong to different domains.
  • the difference between the features extracted from two images in the same domain will be smaller than the difference between the features extracted from two images across domains. Therefore, in the case of same-domain comparison, feature comparison can be performed more accurately and the rejection rate can be reduced. Thus, by preferentially acquiring pre-stored template image information in the same domain for comparison with the first image information, the recognition accuracy can be ensured.
  • the category of the target object contained in the first image information may be first identified, and according to the domain to which the first image information belongs, template image information containing the same category of target objects in the pre-stored template image information is acquired .
  • the category of the target object may be determined according to the identity, attribute or characteristic of the target object.
  • the categories of target objects may include one or more of people, animals, objects, males, females, adults, or children.
  • the pre-stored template image information may be information stored separately according to the category.
  • Preliminary screening can be performed by first identifying the category of the target object and then comparing the first image information with the filtered pre-stored template image information one by one, thereby reducing the number of comparisons and the amount of computation on image information and ensuring real-time performance.
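  • The preliminary screening described above could look like the following filter, which keeps only pre-stored templates whose target-object category and domain match the incoming image; the field names are assumptions.

```python
from typing import Dict, List

def prefilter_templates(templates: List[Dict],
                        target_category: str,
                        image_domain: str) -> List[Dict]:
    """Keep templates whose category (e.g. 'adult', 'child') and domain match
    the first image information, reducing the number of one-by-one comparisons."""
    return [t for t in templates
            if t["category"] == target_category and t["domain"] == image_domain]
```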
  • the template collection method further includes:
  • S130 Compare the first image information with the acquired pre-stored template image information, and pass the authentication when the first image information matches the acquired pre-stored template image information.
  • the target object contained in the first image information may be determined first, and then the target object contained in the first image information and the target object contained in the obtained template image information may be compared. When the similarity is greater than a predetermined threshold, the first image information matches the obtained template image information.
  • the similarity of the two target objects can be determined by comparing the feature points of the target object included in the first image information with the feature points of the target object included in the obtained template image information. All feature points can be compared, or only one or more feature points can be compared. For example, when the target object is a human face, the whole face can be compared, or one or more of the eyes, nose, mouth, pupil, and iris can be compared.
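  • A sketch of comparing only selected facial regions, assuming each region is already represented by a feature vector; the region names, the cosine measure, and the averaging scheme are illustrative assumptions rather than the application's method.

```python
import numpy as np

def region_similarity(face_regions: dict,
                      template_regions: dict,
                      regions=("eyes", "nose", "mouth")) -> float:
    """Average the cosine similarity of the chosen feature-point regions."""
    sims = []
    for region in regions:
        a = np.asarray(face_regions[region], dtype=float)
        b = np.asarray(template_regions[region], dtype=float)
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    return float(np.mean(sims))
```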
  • the template collection method further includes:
  • S140 When the first image information is compared with the acquired first template image information and the authentication is passed, the pre-stored template image information is updated according to the first image information.
  • the updating may be adding template image information generated according to the first image information.
  • the update may also be to replace the template image information used when the authentication is passed. By replacing template image information, storage costs are reduced.
  • the template collection method further includes:
  • S150 When the authentication fails, send reminder information to the user, where the reminder information is used to remind the user to register the template.
  • the reminder information may be sent to the user when the user fails to authenticate using the first image information, but passes the authentication using other methods.
  • Other means include one or more of passwords, verification codes, fingerprints, voiceprints, or irises. Therefore, when the user selects the registration template, there is no need to perform authentication through other means such as password every time thereafter, which reduces the user operation process.
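  • The fallback-and-remind flow described above might look like the sketch below; `send_reminder` is a hypothetical callback and the message text is an assumption.

```python
def authenticate_with_fallback(face_passed: bool,
                               other_factor_passed: bool,
                               send_reminder) -> bool:
    """If face authentication fails but another method (password, verification
    code, fingerprint, voiceprint, iris) succeeds, remind the user to register
    a template for the current domain."""
    if face_passed:
        return True
    if other_factor_passed:
        send_reminder("Face authentication failed in this domain; "
                      "you can register a face template for it.")
        return True
    return False
```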
  • FIG. 2 is a schematic flowchart of a template collection method provided by an embodiment of the present application. It should be understood that the template acquisition method may be executed by a computing device or an electronic device (for example, a terminal), and may also be executed by a chip or a chip system in the electronic device. As shown in Figure 2, the template collection method includes the following steps:
  • S210 Acquire first image information from a sensor.
  • S220 Acquire pre-stored template image information.
  • S230 Preferentially compare the second template image information with the first image information, where the second template image information is the template image information, among the pre-stored template image information, whose domain is the same as that of the first image information.
  • When the second template image information does not exist, the first template image information is compared with the first image information, wherein the first template image information is the pre-stored template image information whose domain is different from that of the first image information.
  • the template collection method further includes S240 and S250.
  • For S240 and S250, reference may be made to S140 and S150 in FIG. 1.
  • FIG. 3 is a schematic flowchart of a template collection method provided by an embodiment of the present application. It should be understood that the template acquisition method may be executed by a computing device or an electronic device (for example, a terminal), and may also be executed by a chip or a chip system in the electronic device. As shown in Figure 3, the template acquisition method includes the following steps:
  • S310 Acquire collected first image information.
  • the first image information may come from a sensor.
  • the sensor may be a sensor such as a camera, a radar, or the like.
  • the belonging field is used to indicate one or more characteristics of the format, color, or origin of the image information.
  • the belonging domain may include the RGB domain and the IR domain. Images collected in the visible light band and with recorded color information can belong to the RGB domain. Specifically, for example, an RGB image may belong to the RGB domain. Images captured in the infrared band without recorded color information can belong to the IR domain.
  • the belonging domain may also include a grayscale domain. Images captured in the visible wavelength band without recorded color information can belong to the grayscale domain.
  • S311 Acquire first template image information from the pre-stored template image information, where the domain of the first template image information is different from the domain to which the first image information belongs.
  • S311 may include: acquiring second template image information from the pre-stored template image information, where the second template image information and the first image information belong to the same domain; and, when the second template image information does not match the first image information, acquiring the first template image information from the pre-stored template image information.
  • S312 When the first image information matches the first template image information, update the pre-stored template image information according to the first image information. Updating may include adding template image information and replacing original template image information.
  • the template collection method further includes:
  • S313 Acquire collected second image information.
  • the second image information may come from a sensor.
  • the source of the second image information may be the same as or different from the source of the first image information. That is, the sensor that collects the second image information may be the same as or different from the sensor that collects the first image information.
  • S314 Obtain second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain;
  • S313-S315 can be executed before S310, or can be executed after S311.
  • the template acquisition method includes:
  • S315c When the degree of matching between the second image information and the second template image information is greater than the first threshold and less than the second threshold, update the pre-stored template image information according to the second image information.
  • When the matching degree is greater than the first threshold and less than the second threshold, although the authentication is successful, the matching degree is relatively low, indicating that the characteristics of the identified target object may have changed somewhat compared with the original. Updating the template image information in this case can reduce the rejection rate and improve the recognition accuracy; a sketch of this rule follows.
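  • A sketch of the two-threshold rule, with assumed values: the application states only that a match between the first and second thresholds passes authentication but indicates the stored template should be refreshed.

```python
FIRST_THRESHOLD = 0.75    # assumed pass threshold
SECOND_THRESHOLD = 0.90   # assumed upper bound below which a refresh is triggered

def should_refresh_template(matching_degree: float) -> bool:
    """Authentication passed, but the match was relatively weak, suggesting the
    target object's characteristics have drifted since registration."""
    return FIRST_THRESHOLD < matching_degree < SECOND_THRESHOLD
```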
  • the template collection method further includes:
  • S316 Acquire second image information from the sensor, where the second image information includes a plurality of image information.
  • the plurality of image information may be image information collected continuously within a certain period of time.
  • S317 Acquire pre-stored second template image information, where the second template image information and the second image information belong to the same domain.
  • When the degree of matching between the first image information and the first template image information is higher than a third threshold, the first image information and the first template image information match; when the degree of matching between the second image information and the second template image information is higher than a fourth threshold, the second image information and the second template image information match; and the third threshold is different from the fourth threshold.
  • the template collection method further includes:
  • S319 Acquire the collected third image information.
  • the source of the third image information may be the same as or different from the source of the first image information and the source of the second image information. That is, the sensor that collects the third image information may be the same as or different from the sensor that collects the first image information and the sensor that collects the second image information.
  • S320 Obtain third template image information from the pre-stored template image information, where the domain of the third template image information is different from the domain of the third image information and from the domain of the first template image information.
  • the template collection method further includes:
  • S322 Send prompt information to the user, where the prompt information is used to request the user to agree to update the pre-stored template image information, or notify the user that the pre-stored template image information has been updated. Updates can be made more accurate by requesting the user's consent to update the template image information. By notifying the user that the template image information has been updated, the user's right to know is ensured.
  • face recognition can be divided into verification applications and retrieval applications.
  • the verification application of face recognition is taken as an example for description.
  • the verification application of face recognition is mainly used to verify the identity of the object and grant corresponding permissions, which is sometimes referred to as face verification in this application.
  • the core steps of current mainstream face recognition algorithms are: feature extraction, feature comparison, and output results.
  • the feature is usually a multi-dimensional vector
  • the feature extracted from the collected face image is called the identification feature
  • the feature vector of the pre-stored template image used for comparison with the face image is also called the template feature.
  • In face verification, the recognition feature and the template feature are generally compared by calculating their similarity; the greater the similarity, the closer the two features are.
  • a similarity threshold is usually selected. When the similarity of the two features is greater than the similarity threshold, it is considered that the faces corresponding to the two features belong to the same person, and the face verification is passed.
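  • A compact sketch of this verification step: compare the identification feature with the template feature by similarity and pass when it exceeds the chosen threshold. The use of cosine similarity and the default threshold value are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(identification_feature: np.ndarray,
           template_feature: np.ndarray,
           similarity_threshold: float = 0.8) -> bool:
    """Face verification passes when the similarity between the identification
    feature and the template feature exceeds the similarity threshold."""
    return cosine_similarity(identification_feature, template_feature) > similarity_threshold
```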
  • the face image and the template image may belong to different domains, that is, cross-domain face verification.
  • the domain referred to here can be divided according to, for example, any information related to an image, that is, image-related information. Images in the same domain share at least one piece of image-related information.
  • the domain to which the image belongs can be divided according to the attribute information of the image. Attributes of an image can include time taken, brightness, color, format, source, and the like. RGB images, grayscale images, and IR images can belong to different domains, respectively.
  • the image from the radar and the image from the camera can belong to different domains. Images from different cameras can belong to different domains. If the face image and the template image cross domains, the features extracted from the images in different domains will be different, which will result in a decrease in the accuracy of face recognition.
  • cross-domain images may be generated in the following situations:
  • IR-CUT (infrared cut, i.e., filtering out infrared light): a camera with an IR-CUT filter has one state that filters out infrared light to collect color images and another state that collects IR images, so the images it produces can be cross-domain.
  • multi-device-based unified face recognition also involves cross-domain issues when logging into the same account on multiple devices (such as smart cars, mobile phones, tablets, laptops, smart watches, etc.).
  • the template collection method provided by the embodiment of the present application can be applied to a face recognition system including one or more devices.
  • FIG. 4 is a schematic structural diagram of a face recognition system 1 provided by an embodiment of the present application.
  • the face recognition system 1 includes a first device 10 .
  • the first device 10 has a first camera 110 , a second camera 120 , a third camera 130 and a first computing unit 140 .
  • the first camera 110, the second camera 120, and the third camera 130 are connected with the first computing unit 140 through wired cables or wireless communication.
  • It should be noted that FIG. 4 is only an example, and the number of cameras provided by the first device 10 is not limited to three, and may be one, two or more.
  • the face recognition system 1 includes a second device 20 .
  • the second device 20 has a fourth camera 210 and a second computing unit 220 .
  • the number of cameras in the second device 20 is not limited to one, and may be two or more.
  • the face recognition system 1 includes a third device 30 .
  • the third device 30 has a fifth camera 310 and a third computing unit 320 .
  • the number of cameras provided in the third device 30 is not limited to one, and may be two or more.
  • the first device 10, the second device 20, and the third device 30 may be, for example, smart cars, mobile phones, tablet computers, notebook computers, smart watches, or smart home terminals such as smart doorbells and smart speakers, respectively. It should be noted that FIG. 4 is only an example, and the number of devices included in the face recognition system 1 is not limited to three, and may be one, two or more. Each device may have a storage unit for pre-storing template images.
  • the face recognition system 1 includes a server 40 .
  • the first device 10 , the second device 20 and the third device 30 are respectively connected to the server 40 in communication.
  • the server 40 may be a physical server, a virtual server or a cloud server.
  • the server 40 may be used for pre-storage of template images, or online storage of registration information.
  • a part or all of the feature extraction, face template registration or face recognition may also be performed by the server 40 .
  • each camera in the face recognition system 1 has one or more states.
  • For example, when using an IR-CUT camera, it has two states: collecting color images and collecting IR images.
  • Each camera in the face recognition system 1 is used to collect face images and/or face template registration images.
  • Each computing unit is used for image processing, feature extraction, face template registration and face recognition.
  • Each device can perform operations such as unlocking, logging in to an account, and making payments based on the results of face recognition.
  • the face recognition system 1 can allow all devices, all cameras, and all camera states to perform face template registration and face recognition, and can also specify some devices, some cameras, and some camera states for face template registration and face recognition.
  • the devices, cameras, and camera states that are allowed to perform recognition include the devices, cameras, and states that are allowed to perform registration.
  • the first application scenario is that the face recognition system 1 only includes the first device 10, and the first device 10 is a smart car.
  • FIG. 5 is a schematic diagram of a first application scenario.
  • the first camera 110, the second camera 120, and the third camera 130 are cameras installed inside and outside the cockpit, and the first computing unit 140 is, for example, a smart cockpit domain controller (CDC, Cockpit Domain Controller).
  • the first camera 110 is, for example, an IR camera, which is installed behind the steering wheel and is mainly used for driver monitoring and also for face recognition.
  • the first camera 110 may also be installed under the A-pillar or at the top of the instrument panel.
  • the second camera 120 is, for example, an IR-CUT camera, which is installed near the main rearview mirror and is mainly used for cockpit monitoring and also for face recognition.
  • the third camera 130 is, for example, an IR camera, which is installed above the door outside the cockpit, and is used to realize the function of unlocking and getting on the vehicle by face recognition. In addition to this, the third camera 130 may also be installed outside the cockpit above the A-pillar, above the B-pillar, or the like.
  • the user can use any camera among the first camera 110 , the second camera 120 , and the third camera 130 to register a face template and perform face recognition.
  • the first camera 110 and the second camera 120 in the cockpit can also perform face recognition at the same time, for mutual verification and enhanced security.
  • the second application scenario is that the face recognition system 1 includes a first device 10 and a second device 20, the first device 10 is a smart car, and the second device 20 is a mobile phone.
  • the user can also perform face template registration and/or face recognition on the second device 20.
  • the second computing unit 220 is, for example, the central processing unit (CPU, Central Processing Unit) of the mobile phone.
  • the third application scenario is that the face recognition system includes multiple devices such as the first device 10 , the second device 20 , and the third device 30 .
  • the first device 10, the second device 20, the third device 30 and other devices are, for example, smart cars, mobile phones, tablet computers, notebook computers, smart watches, or smart home terminals such as smart doorbells and smart speakers.
  • the computing units such as the first computing unit 140, the second computing unit 220, and the third computing unit 320 may be, for example, a smart cockpit domain controller, a central processing unit, a microcontroller unit (MCU, Microcontroller Unit), and the like, respectively. Users can register face templates on all or some devices.
  • FIG. 6 is a schematic flowchart of a method for collecting a face template according to an embodiment of the present application. For example, when the user attempts to perform account login, payment, screen unlocking, or vehicle door unlocking through face verification, and the camera captures a face image and sends the face image to the computing unit, the processing of the flowchart shown in FIG. 6 is executed.
  • step S410 Acquire a face image collected by the camera, and perform face detection. After the face image is acquired, the area where the face is located is determined, and the image of the area is extracted.
  • face detection can be performed using methods such as deep learning.
  • the face image acquired in step S410 corresponds to an example of "first image information", “second image information", and "third image information" in this application.
  • S420 Determine whether a human face is detected in the human face image. According to the face detection result in S410, if a face is detected, execute S430. If no face is detected, this process ends.
  • S430 Align faces and perform feature extraction. According to the extracted image of the area where the face is located, the face is restored to the appropriate orientation and angle.
  • the key point matching method can be used for face alignment. First, the key points of the face are extracted by methods such as deep learning, and then the affine transformation matrix is calculated according to the extracted key points and standard key points, and the face alignment is realized through affine transformation. Based on the aligned faces, feature extraction is performed to obtain identification features.
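  • As a minimal sketch of key-point-based alignment (assuming OpenCV is available; the detected and standard key-point arrays and the output size are illustrative assumptions), the affine transformation matrix can be estimated from the extracted key points and the standard key points, and the face can then be warped accordingly:
```python
import cv2
import numpy as np

def align_face(image: np.ndarray, detected_keypoints: np.ndarray,
               standard_keypoints: np.ndarray, output_size=(112, 112)) -> np.ndarray:
    # Estimate the affine (similarity) transform mapping the detected key points
    # (e.g., eyes, nose tip, mouth corners) onto the standard key points, then warp the image.
    matrix, _ = cv2.estimateAffinePartial2D(detected_keypoints.astype(np.float32),
                                            standard_keypoints.astype(np.float32))
    return cv2.warpAffine(image, matrix, output_size)
```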
  • S440 Determine whether there is a face template in the same domain as the collected face image. If yes, go to S450, otherwise go to S470.
  • the face template information that has been collected or registered before, that is, the pre-stored face template information can be stored, for example, in the storage unit of each device.
  • the pre-stored face template information can be stored in the form of databases, queues, and tables.
  • the pre-stored face template information may be stored separately according to the domain to which it belongs and/or face classification. Face classification can be divided according to gender, age, etc. Face classification can include male, female, adult, child.
  • the face template information includes information of the template content and information of the domain to which the face template belongs. According to the domain to which the face image belongs, it is determined whether there is a face template in the same domain as the face image.
  • the pre-stored face template information corresponds to an example of "pre-stored template image information" in this application.
  • Domains of face images and face templates can be defined based on one or more characteristics of the image's format, color, or source. Divided according to the color of the image, the belonging domain may include, for example, the RGB domain and the IR domain. Images collected in the visible light band and with recorded color information can belong to the RGB domain. Images captured in the infrared band without recorded color information can belong to the IR domain.
  • the domain to which the image and the face template generated according to the image belong is the RGB domain.
  • the domain to which the image and the template image information generated according to the image belong is the IR domain.
  • the same type of images from the same or the same camera can belong to the same domain.
  • For example, the second camera 120 of the first device 10 in FIG. 4 is an IR-CUT camera. It can be defined that all color images captured by it belong to one domain, all IR images captured by it belong to another domain, and the captured color images belong to a different domain than the captured IR images. It can also be defined that the image captured by the first camera 110 and the image captured by the third camera 130 belong to different domains.
  • the domain to which the image belongs is determined at the time of image acquisition and cannot be changed; the domain to which the face template belongs is the same as the domain of the image that generates the template, and cannot be changed.
  • the domains to which images and face templates belong can be discriminated using external information.
  • When a face template is saved, the domain to which it belongs is indicated, which can be embodied in the file name or the file storage location, or by packaging the domain information and the template content into a dictionary, etc.
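  • A minimal sketch of packaging a template with its domain (the record layout and file format below are illustrative assumptions, not a required storage scheme) could look as follows:
```python
import json
import numpy as np

def save_face_template(feature: np.ndarray, domain: str, person_id: str, path: str) -> None:
    # Package the template content together with its ID and the domain it belongs to,
    # so that the domain can later be read back as external information.
    record = {"id": person_id, "domain": domain, "feature": feature.tolist()}
    with open(path, "w") as f:
        json.dump(record, f)

def load_face_template(path: str) -> dict:
    with open(path) as f:
        record = json.load(f)
    record["feature"] = np.asarray(record["feature"], dtype=np.float32)
    return record
```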
  • the video stream from the camera contains source information, which can determine the domain to which the image belongs.
  • the domain to which the image belongs can also be discriminated using coding features.
  • the image in the RGB domain usually has 3 different channels; the image in the IR domain has only 1 channel or 3 identical channels. It is also possible to directly judge the encoded video stream.
  • For example, in a YUV-encoded video stream, the U channel and V channel of an image in the RGB domain have different values at different positions, whereas for an image in the IR domain all positions of the U channel and the V channel have the same default value.
  • the domain of the vectorized face template can generally only be judged based on external information.
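  • A hedged sketch of discriminating the domain from coding features (assuming images are given as NumPy arrays, with 3-channel color images and single-channel or replicated-channel IR images, and assuming separate U/V planes are available for an encoded frame) might be:
```python
import numpy as np

def infer_domain_from_pixels(image: np.ndarray) -> str:
    # A single channel, or three identical channels, is treated as the IR domain;
    # three differing channels are treated as the RGB domain.
    if image.ndim == 2 or image.shape[2] == 1:
        return "IR"
    c0, c1, c2 = image[..., 0], image[..., 1], image[..., 2]
    if np.array_equal(c0, c1) and np.array_equal(c1, c2):
        return "IR"
    return "RGB"

def infer_domain_from_yuv(u_plane: np.ndarray, v_plane: np.ndarray) -> str:
    # Constant default U and V planes indicate an IR-domain image in a YUV stream.
    if u_plane.min() == u_plane.max() and v_plane.min() == v_plane.max():
        return "IR"
    return "RGB"
```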
  • S450 Perform high-precision verification. That is, feature comparison is performed using the same-domain template and the same-domain threshold.
  • the same domain template is a face template belonging to the same domain as the collected face image.
  • the same domain threshold is the similarity threshold used to judge whether the two face features belong to the same person when the face image is compared with the same domain template.
  • the same domain template corresponds to an example of the "second template image information" in this application.
  • the same domain threshold corresponds to an example of the "fourth threshold” in this application.
  • the similarity between the template feature and the identification feature of the same-domain template can be calculated, and the calculated similarity can be compared with the same-domain threshold.
  • the similarity of face features can be evaluated by indicators such as Euclidean distance and cosine similarity.
  • the selection method of the same domain threshold will be described later.
  • S460 is executed after S450.
  • the face image can be compared with all templates of the same domain one by one.
  • primary screening can be performed first. For example, it can be determined whether the face in the face image is male or female, or whether it is an adult or a child.
  • the male face template in the pre-stored face template information can be filtered out first, and then the face image is compared with all the same domain templates in the male face template one by one.
  • S460 Determine whether the high-precision verification is passed. According to the feature comparison result in S450, when the similarity between the template feature and the identification feature extracted from the face image is greater than the same domain threshold, the high-precision verification is passed. If the verification is passed, execute S530; otherwise, execute S470.
  • S470 Determine whether there is a face template that crosses domains (different domains) with the collected face image. The judgment can be made by the same method as above, and will not be repeated here. If yes, execute S480, otherwise execute S500.
  • S480 Perform low-precision verification. That is, feature comparison is performed using the cross-domain template and the cross-domain threshold.
  • the cross-domain template is a face template belonging to a different domain from the collected face image.
  • the cross-domain threshold is the similarity threshold used to judge whether the two face features belong to the same person when comparing the features between the face image and the cross-domain template.
  • the cross-domain template corresponds to an example of the "first template image information" in this application.
  • the cross-domain threshold corresponds to an example of the "third threshold” in this application.
  • the similarity between the template feature and the identification feature of the cross-domain template may be calculated, and the calculated similarity may be compared with the cross-domain threshold.
  • S490 is executed after S480. In this step, primary screening can also be performed as described above.
  • the same-domain threshold and the cross-domain threshold can be selected in the following manner:
  • a separate threshold can be used for each domain, or several domains or all domains can share the same threshold;
  • to ensure that the domain with the highest false recognition rate also meets the requirement, it is necessary to select the minimum value among the thresholds corresponding to these domains under the specified false recognition rate;
  • the thresholds can be calculated separately for each two domains, and the minimum value can be selected; multiple domains can also be mixed together according to a certain proportion in the test;
  • the false recognition rate corresponding to the cross-domain threshold should not be higher than the false recognition rate corresponding to the same-domain threshold.
  • the upper limit of the false recognition rate corresponding to the cross-domain threshold is not higher than the value or upper limit of the false recognition rate corresponding to the same-domain threshold.
  • the value of the same-domain threshold selected according to the above method will be higher than the value of the cross-domain threshold, but other situations are not excluded.
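  • As a hedged sketch of the threshold selection described above (assuming, purely for illustration, that impostor similarity scores have been collected per domain or per domain pair on a test set), the threshold meeting a specified false recognition rate can be computed for each domain and the minimum can then be selected:
```python
import numpy as np

def threshold_at_false_recognition_rate(impostor_similarities: np.ndarray, target_rate: float) -> float:
    # Threshold above which the fraction of impostor similarities does not exceed the target rate.
    return float(np.quantile(impostor_similarities, 1.0 - target_rate))

def select_shared_threshold(per_domain_impostor_scores: dict, target_rate: float) -> float:
    # Compute one threshold per domain (or per domain pair) at the specified false
    # recognition rate, then select the minimum value, as described above.
    return min(threshold_at_false_recognition_rate(scores, target_rate)
               for scores in per_domain_impostor_scores.values())
```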
  • the face recognition algorithm in the above (1) may be an algorithm trained using samples from a single domain, or an algorithm trained using samples from multiple domains, which is not particularly limited in this embodiment of the present application.
  • the accuracy of the algorithm can be improved if image samples from all the domains involved can be used for training.
  • Alternatively, the base version of the algorithm can be trained on one or a few of the most commonly used domains, and then fine-tuned with a smaller amount of data from the other domains.
  • For example, the base version of the algorithm can be trained on the RGB domain first.
  • S490 Determine whether the low-precision verification is passed. According to the comparison result in S480, when the similarity between the template feature and the identification feature is greater than the cross-domain threshold, the low-precision verification is passed. If the verification is passed, execute S540; otherwise, execute S500.
  • S500 Determine whether a template needs to be registered.
  • the computing unit may output information for asking the user whether the face template needs to be registered, obtain the user's reply information, and determine whether the registration template is allowed or not according to the user's reply information.
  • the computing unit can output information in the form of voice, display, etc., for example, through a speaker, a display, and the like provided by the device.
  • the computing unit may acquire the user's operation input information on the display or the user's voice information, and determine whether a template needs to be registered according to the acquired information. If it is determined that the template needs to be registered, S510 is executed; otherwise, the process ends.
  • S510 Obtain user authentication information.
  • other authentication methods are used to verify user rights.
  • the computing unit may acquire information input by the user including one or more of a password, a verification code, a fingerprint, a voiceprint or an iris, so as to verify the user's authority.
  • S520 Determine whether the user authentication is passed. If the authentication is passed, execute S540; otherwise, the process ends.
  • templates can also be automatically updated by the system at fixed intervals, or the templates can be updated randomly after a period of time.
  • the system may automatically update the template, or ask the user whether to update the template, when recognition has previously failed many times or at a high frequency, or when recognition succeeds but the similarity is less than a certain threshold. The reason for updating in the latter case is that this situation indicates that a great change has occurred between the user's captured face and the stored template.
  • FIG. 7 is a sub-process of FIG. 6 , which is a schematic diagram of a specific process of registering or updating a face template in S540. Next, each step of FIG. 7 will be described.
  • S5400 Perform quality inspection on the face image acquired in S410.
  • Quality inspection includes two aspects: inspection of imaging quality, including sharpness, contrast, exposure, etc., which can be realized by hardware; and inspection of face attributes, such as head posture and occlusion, which can be realized by algorithms such as deep learning.
  • For the inspection of imaging quality, for example, gradient thresholds, contrast thresholds, and luminance coefficient thresholds can be set to determine whether the sharpness, contrast, and exposure of the image meet the requirements, thereby checking whether the imaging quality meets the requirements.
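  • A minimal sketch of such an imaging quality check (assuming a grayscale image as a NumPy array; the concrete threshold values are illustrative assumptions, not values specified by this application) could be:
```python
import cv2
import numpy as np

def passes_imaging_quality(gray: np.ndarray,
                           sharpness_min: float = 100.0,
                           contrast_min: float = 30.0,
                           brightness_range=(60.0, 200.0)) -> bool:
    # Sharpness via the variance of the Laplacian (a gradient-based measure),
    # contrast via the standard deviation of pixel values, exposure via the mean brightness.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    contrast = float(gray.std())
    brightness = float(gray.mean())
    return (sharpness >= sharpness_min
            and contrast >= contrast_min
            and brightness_range[0] <= brightness <= brightness_range[1])
```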
  • For the inspection of face attributes, a deep learning model can be trained, such as a model that calculates the head attitude angles (Yaw: azimuth angle; Pitch: pitch angle; Roll: roll angle) and a model that determines whether specific areas of the face (such as the mouth, nose, and eyes) are occluded, and attitude angle thresholds and occlusion judgment rules are set to check whether the face posture meets the requirements.
  • S5401 Determine whether the face image passes the quality inspection. When the sharpness, contrast and exposure of the image meet the requirements and the face pose meets the requirements, the face image passes the quality inspection. If the quality inspection is passed, S5402 is executed. If the quality inspection is not passed, the registration/update fails, and the process ends.
  • S5403 According to the previous face recognition result, confirm whether a corresponding ID (identification; account) has been registered. If registered, use the same ID; if not, assign a new ID; different domain templates of the same person correspond to the same ID, and the template indicates the domain to which they belong.
  • When S540 is executed from S530, it can be considered that the corresponding ID has been registered, and the ID of the same-domain template used for the high-precision verification of S450 can be directly used. When S540 is executed after S490, it can also be considered that the corresponding ID has been registered, and the ID of the cross-domain template used in the low-precision verification of S480 can be directly used.
  • When S540 is executed from S520, it can be considered that the corresponding ID has not been registered, and a new ID is allocated.
  • the face template of the domain to which the face image belongs is registered, that is, a new face template is added.
  • a new ID is allocated to the user, and a face template of the ID is registered according to the face image.
  • the face template is updated according to the face image. The update at this time is to update the face template of the same domain, for example, the original face template of the same domain can be replaced.
  • the pre-stored face template information has changed compared with that before the processing, so it can also be understood that the pre-stored face template information is updated through this processing.
  • the process of S5404 corresponds to an example of "update pre-stored template image information" in this application.
  • the existing template can be directly replaced with the new feature, or the new feature and the existing template can be fused by weighted summation.
  • the moving average method shown in the following formula (1) can be used:
  • V_n = α·V_i + (1-α)·V_t    (1)
  • where V_i is the input feature vector, V_t is the existing template vector, V_n is the new template vector, and α is a custom coefficient.
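  • A minimal sketch of the update per formula (1), with α as the custom coefficient (the function name is illustrative):
```python
import numpy as np

def update_template(input_feature: np.ndarray, existing_template: np.ndarray, alpha: float) -> np.ndarray:
    # Moving-average fusion per formula (1): V_n = alpha * V_i + (1 - alpha) * V_t.
    return alpha * input_feature + (1.0 - alpha) * existing_template
```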
  • different methods may be selected according to different update trigger conditions. For example, when the user actively updates and the similarity is lower than a certain threshold (the threshold is not lower than the same-domain threshold for face recognition), the template is directly replaced, and in other cases, the new feature and the existing template are fused.
  • when the same ID corresponds to multiple templates in the same domain (such as front face, left face, and right face), only the template corresponding to the collected face image can be updated each time.
  • the template features of some of the domains can be fused as a dedicated template vector for cross-domain recognition.
  • the templates of multiple domains can be averaged, and the weighted average can be calculated according to the recognition frequency and importance of each domain.
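  • A hedged sketch of such a fusion (assuming one template vector per domain and a weight per domain, e.g. derived from recognition frequency or importance; all names are illustrative) might be:
```python
import numpy as np

def fuse_domain_templates(domain_templates: dict, domain_weights: dict) -> np.ndarray:
    # Weighted average of the per-domain template vectors, producing a dedicated
    # template vector for cross-domain recognition.
    total_weight = sum(domain_weights[d] for d in domain_templates)
    fused = sum(domain_weights[d] * np.asarray(domain_templates[d], dtype=np.float64)
                for d in domain_templates)
    return fused / total_weight
```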
  • the process then ends. It should be noted that there is a possibility of failure in the process of face template registration or update. For example, a failed image quality inspection or a software error such as a write failure will cause the registration or update to fail. In this case, the update or registration result (success/failure) can also be recorded before ending the process. If it fails, the system will continue to use the existing template. In the absence of a same-domain template, the system will continue to attempt to automatically register the template the next time the user is identified.
  • FIG. 8 is a schematic flowchart when a user actively initiates template registration.
  • After the user actively initiates template registration and the computing unit performs authentication to verify the user's authority, the face template is registered according to the flow of FIG. 8.
  • Each step of FIG. 8 will be described below.
  • S1500 Acquire the face image collected by the camera.
  • S1501 Perform imaging quality inspection on the collected face image.
  • the inspection of image quality includes sharpness, contrast, exposure, etc., which can be realized by hardware.
  • S1502 Determine whether the image passes the imaging quality inspection. If it passes, execute S1503, and if it fails, execute S1509.
  • S1504 Determine whether a human face is detected. If a human face is detected, execute S1505; otherwise, execute S1509.
  • S1505 Perform face attribute inspection. Similar to the above, face attribute inspection, such as head posture, occlusion, etc., can be implemented by algorithms such as deep learning.
  • S1506 Determine whether the face attribute test passes. If it passes, execute S1507, and if it fails, execute S1508.
  • the features extracted from the face image are compared with all existing template features to confirm whether the face has been registered with the corresponding ID. If registered, use the same ID; if not, assign a new ID. Different domain templates of the same person correspond to the same ID. Also, save the template with the domain it belongs to. Then the process ends.
  • If the similarity between the features extracted from the face image and a same-domain template is higher than the same-domain threshold, it is considered that a same-domain template has already been registered, and the user may be prompted to choose to replace, update or retain the template.
  • If the similarity between the features extracted from the face image and all the cross-domain templates of the same ID is higher than the cross-domain threshold, it is considered that the corresponding ID has been registered; if the similarity between the extracted features and all the cross-domain templates of the same ID is not higher than the cross-domain threshold, it is considered that the corresponding ID has not been registered. In the case where the similarity with only some of the cross-domain templates of the same ID is higher than the cross-domain threshold, the user can be asked to confirm whether it is the same person.
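  • A minimal sketch of this ID decision (the return values and function name are illustrative assumptions) could be:
```python
def decide_registration_id(similarities_to_id_templates, cross_domain_threshold: float) -> str:
    # similarities_to_id_templates: similarities between the extracted feature and all
    # cross-domain templates that share one candidate ID.
    above = [s > cross_domain_threshold for s in similarities_to_id_templates]
    if not above or not any(above):
        return "assign_new_id"        # the corresponding ID is considered not registered
    if all(above):
        return "use_existing_id"      # the corresponding ID is considered registered
    return "ask_user_to_confirm"      # only some templates match: confirm with the user
```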
  • S1509 Determine whether the number of attempts or running time of the current registration reaches the upper limit.
  • the total number of times the imaging quality check fails, the face is not detected, and the face attribute check fails can be calculated as the number of attempts.
  • the elapsed time after the start of the flow of FIG. 8 is calculated as the operation time. If the upper limit is reached, the registration fails, and the process ends; otherwise, the processing after S1500 is repeated until the registration succeeds or the upper limit is reached.
  • the face verification is performed in combination with the same-domain high-precision verification and the cross-domain low-precision verification.
  • When the face image does not match a same-domain template or no same-domain template exists, cross-domain low-precision verification is automatically performed, and when the cross-domain low-precision verification is passed, the face template of the corresponding domain is automatically registered according to the face image. In this way, face templates of domains for which the user has not registered templates can be automatically collected, and the user does not need to perform cumbersome operations such as registering a template for each domain. Compared with requiring the user to register multiple different templates for different categories of images, the operation process is simple and the user experience is improved.
  • In addition, since the probability of performing higher-precision same-domain comparison in subsequent authentication is increased, the recognition accuracy can be improved. Therefore, according to the template collection method of the embodiment of the present application, the recognition accuracy can be improved while ensuring a simple operation process.
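  • The following is a simplified, hypothetical sketch of this combined flow (same-domain high-precision verification first, falling back to cross-domain low-precision verification, with automatic registration of the missing same-domain template on success); the function and parameter names are illustrative and not part of this application:
```python
def verify_and_collect(identification_feature, templates, image_domain,
                       same_domain_threshold, cross_domain_threshold,
                       similarity, register_template):
    # templates: pre-stored template records, e.g. dicts like {"domain": ..., "feature": ...}.
    same_domain = [t for t in templates if t["domain"] == image_domain]
    cross_domain = [t for t in templates if t["domain"] != image_domain]

    # High-precision verification: same-domain templates with the same-domain threshold.
    if any(similarity(identification_feature, t["feature"]) > same_domain_threshold
           for t in same_domain):
        return True

    # Low-precision verification: cross-domain templates with the cross-domain threshold;
    # on success, automatically register a template for the domain of the captured image.
    if any(similarity(identification_feature, t["feature"]) > cross_domain_threshold
           for t in cross_domain):
        register_template(identification_feature, image_domain)
        return True

    return False
```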
  • the embodiments of the present application are not limited thereto.
  • In the above embodiment, when the low-precision verification fails, the user is asked whether a template needs to be registered, but the embodiment of the present application is not limited to this.
  • For example, the flowchart shown in FIG. 9 omits S500-S520.
  • In this case, each user must actively trigger face template registration. Compared with the above-mentioned embodiment, the convenience is reduced, but the system security is enhanced.
  • In the above description, the case where the template stored by the system is a feature vector is used as an example, but the embodiment of the present application is not limited to this.
  • For example, the templates stored by the system may also be unprocessed or processed (e.g., face-cropped and aligned) face images.
  • sensors other than cameras may also be used to acquire face images.
  • radar can be used to acquire images of faces.
  • In the above description, the case where the pre-stored template image information is stored in the storage unit of the device to which the computing unit belongs is taken as an example, but the embodiment of the present application is not limited to this.
  • pre-stored template image information may be stored on the server.
  • the computing unit may acquire the pre-stored template image information from the server through the communication unit of the device to which it belongs.
  • each step in the template acquisition method in FIG. 6 may be partially or entirely performed by the computing unit of each device in FIG. 1 , or may be partially or fully performed by the server 40 .
  • the computing unit may send the acquired face image to the server 40, and the server 40 judges whether there is a face template in the same domain as the face image, and sends the judgment result to the computing unit.
  • the high-precision verification and low-precision verification described above may also be performed by the server 40, and the verification result is sent to the computing unit.
  • the server 40 may also register a face template according to the face image.
  • the template acquisition device may be a terminal, or a chip or a chip system inside the terminal, and may implement the template collection method shown in FIG. 3 and the above-mentioned optional embodiments.
  • the template acquisition device 1000 includes: an acquisition module 1100 and a processing module 1200; the acquisition module 1100 is used to acquire the collected first image information; the acquisition module 1100 is also used to obtain the first template image information from the pre-stored template image information, where the first template image information and the first image information belong to different domains; the processing module 1200 is configured to, when the first image information and the first template image information match, update the pre-stored template image information according to the first image information.
  • the acquisition module 1100 is further configured to obtain second template image information from the pre-stored template image information, where the second template image information has the same domain as the first image information; the acquisition module 1100 is further configured to, when the second template image information does not match the first image information, acquire the first template image information from the pre-stored template image information.
  • the acquisition module 1100 is further configured to acquire the collected second image information; the acquisition module 1100 is further configured to acquire the second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain; the processing module 1200 is further configured to, when the second image information matches the second template image information and the registration time of the second template image information exceeds the time threshold, update the pre-stored template image information according to the second image information.
  • the acquisition module 1100 is further configured to acquire the collected second image information; the acquisition module 1100 is further configured to acquire the second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain; the processing module 1200 is further configured to, when the matching degree between the second image information and the second template image information is greater than the first threshold and less than the second threshold, update the pre-stored template image information according to the second image information.
  • the acquisition module 1100 is further configured to acquire the collected third image information; the acquisition module 1100 is further configured to acquire third template image information from the pre-stored template image information, where the third template image information belongs to a domain different from that of the third image information and different from that of the first template image information; the processing module 1200 is further configured to, when the third image information matches the third template image information, update the pre-stored template image information according to the third image information, wherein the third image information and the third template image information match when the matching degree between them is higher than the third threshold.
  • the processing module 1200 is further configured to send prompt information to the user, where the prompt information is used to request the user to agree to update the pre-stored template image information, or to notify the user that the pre-stored template image information has been updated.
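  • Purely as an illustrative sketch of this module split (an acquisition part and a processing part; the class and attribute names are assumptions, not the claimed structure), the apparatus could be organized as:
```python
class TemplateCollectionApparatus:
    def __init__(self, acquisition_module, processing_module):
        self.acquisition_module = acquisition_module  # obtains image info and template image info
        self.processing_module = processing_module    # matches and updates pre-stored templates

    def collect(self):
        first_image_info = self.acquisition_module.get_collected_image()
        first_template = self.acquisition_module.get_cross_domain_template(first_image_info)
        if first_template is not None and self.processing_module.matches(first_image_info, first_template):
            self.processing_module.update_pre_stored_templates(first_image_info)
```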
  • the template collection device in the embodiment of the present application may be implemented by software, for example, a computer program or instructions having the above-mentioned functions. The corresponding computer program or instructions may be stored in the internal memory of the terminal, and the above-mentioned functions are realized by the processor reading the corresponding computer program or instructions from the memory.
  • the image acquisition apparatus in this embodiment of the present application may also be implemented by hardware.
  • the processing module 1200 is a processor
  • the acquiring module 1100 is a transceiver circuit or an interface circuit.
  • the template acquisition apparatus in the embodiment of the present application may also be implemented by a combination of a processor and a software module.
  • FIG. 11 is a schematic structural diagram of a template collection system provided by an embodiment of the present application, where the template collection system executes the template collection method shown in FIG. 3 .
  • the template collection system 200 includes: a template collection device 2100 and a server 2200 .
  • the template acquisition device 2100 is configured to send the acquired first image information.
  • the collected first image information comes from the sensor.
  • the server 2200 is configured to receive the first image information from the template acquisition device.
  • the server 2200 is further configured to acquire first template image information from pre-stored template image information, where the first template image information and the domain to which the first image information belongs are different.
  • the server 2200 is further configured to update the pre-stored template image information according to the first image information when the first image information matches the first template image information.
  • the server 2200 may compare the similarity between the target object included in the first image information and the target object included in the first template image information. When the similarity is greater than a predetermined threshold, the first image information matches the obtained template image information.
  • the server 2200 is further configured to obtain second template image information from the pre-stored template image information, where the second template image information has the same domain as the first image information; in the second template image information When it does not match the first image information, the first template image information is acquired from the pre-stored template image information.
  • the template collection device 2100 is further configured to send the collected second image information.
  • the server 2200 is further configured to receive the second image information from the template acquisition device 2100 .
  • the server 2200 is further configured to perform steps S314 and S315 in the template collection method in FIG. 3 .
  • the server 2200 is further configured to perform step S315c instead of performing step S315.
  • the template collection apparatus 2100 is further configured to send the collected second image information, where the second image information includes a plurality of image information.
  • the server 2200 is further configured to receive the second image information from the template acquisition device 2100 .
  • the server 2200 is further configured to perform steps S317 and S318 in the template acquisition method of FIG. 3 .
  • the template collecting apparatus 2100 is further configured to send the collected third image information.
  • the server 2200 is further configured to receive the third image information from the template acquisition device 2100 .
  • the server 2200 is further configured to perform steps S320 and S321 in the template collection method in FIG. 3 .
  • FIG. 12 is a schematic structural diagram of a template collection system provided by an embodiment of the present application, where the template collection system executes the template collection method shown in FIG. 3 .
  • the template collecting system 200a has a template collecting device 2100a and a server 2200a.
  • the template acquisition device 2100a is configured to send the acquired first image information.
  • the server 2200a is configured to receive the first image information from the template acquisition device 2100a.
  • the server 2200a is further configured to obtain first template image information from pre-stored template image information, where the first template image information and the domain to which the first image information belongs are different.
  • the server 2200a is further configured to send the first template image information.
  • the template acquisition device 2100a is further configured to receive the first template image information from the server 2200a.
  • the template acquisition device 2100a is further configured to send instruction information to the server 2200a according to the first image information when the first image information matches the first template image information, where the instruction information is used to instruct the server 2200a to update the pre-stored template image information.
  • the indication information may include template image information generated according to the first image information, and information indicating an update manner.
  • the updating manner includes adding the generated template image information or replacing the first template image information with the generated template image information.
  • the template acquisition device 2100a may compare the similarity between the target object included in the first image information and the target object included in the first template image information. When the similarity is greater than a predetermined threshold, the first image information matches the obtained template image information.
  • the server 2200a is further configured to update the pre-stored template image information when receiving the indication information from the template acquisition device 2100a.
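  • A hedged sketch of the indication information and the server-side handling (field names and the update-mode encoding are illustrative assumptions) could be:
```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IndicationInfo:
    # Indication information used to instruct the server to update the pre-stored templates.
    template_feature: List[float]   # template image information generated from the collected image
    domain: str                     # domain to which the generated template belongs
    update_mode: str                # "add" the generated template, or "replace" a matched template

def apply_indication(pre_stored: dict, indication: IndicationInfo,
                     matched_template_id: Optional[str] = None) -> None:
    # Server side: apply the requested update manner to the pre-stored template image information.
    if indication.update_mode == "replace" and matched_template_id is not None:
        pre_stored[matched_template_id] = indication.template_feature
    else:
        pre_stored[f"{indication.domain}:{len(pre_stored)}"] = indication.template_feature
```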
  • the template collection device 2100a is further configured to send the collected second image information.
  • the server 2200a is further configured to receive the second image information from the template acquisition device 2100a.
  • the server 2200a is further configured to obtain second template image information from the pre-stored template image information, where the second template image information and the second image information belong to the same domain.
  • the server 2200a is further configured to send the second template image information.
  • the template acquisition device 2100a is further configured to receive the second template image information from the server 2200a.
  • the template acquisition device 2100a is further configured to, when the second image information matches the second template image information and the registration time of the second template image information exceeds a time threshold, or when the matching degree between the second image information and the second template image information is greater than the first threshold and less than the second threshold, send instruction information to the server 2200a, where the instruction information is used to instruct the server 2200a to update the pre-stored template image information.
  • the indication information may include template image information generated according to the second image information, and information indicating an update manner.
  • the updating method includes adding the template image information or replacing the second template image information with the template image information.
  • the template collection device 2100a is further configured to send the collected third image information.
  • the server 2200a is further configured to receive the third image information from the template acquisition device 2100a.
  • the server 2200a is further configured to obtain third template image information from the pre-stored template image information, where the third template image information belongs to a different domain from the third image information and is different from the domain to which the first template image information belongs.
  • the server 2200a is further configured to send the third template image information.
  • the template acquisition device 2100a is further configured to receive the third template image information from the server 2200a.
  • the template acquisition device 2100a is further configured to send instruction information to the server 2200a according to the third image information when the third image information matches the third template image information, where the instruction information is used to instruct the server 2200a to update the pre-stored template image information.
  • the indication information may include template image information generated according to the third image information, and information indicating an update manner.
  • the updating method includes adding the template image information or replacing the third template image information with the template image information.
  • FIG. 13 is a schematic structural diagram of a computing device 1500 provided by an embodiment of the present application.
  • the computing device can be used as a template collecting device to execute the template collecting method shown in FIGS. 1-3 and the above-mentioned optional embodiments.
  • the computing device may be a terminal, or a chip or a chip system inside the terminal.
  • the computing device 1500 includes: a processor 1510 and a memory 1520 .
  • the processor 1510 can be connected with the memory 1520 .
  • the memory 1520 may be used to store program codes and data. Therefore, the memory 1520 may be a storage unit inside the processor 1510, or an external storage unit independent from the processor 1510, or may include both a storage unit inside the processor 1510 and an external storage unit independent from the processor 1510.
  • the computing device 1500 may further include a communication interface.
  • the communication interface can be used to communicate with other devices.
  • the computing device 1500 may further include a bus.
  • the memory 1520 and the communication interface may be connected to the processor 1510 through a bus.
  • the bus may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one line is shown in FIG. 13, but it does not mean that there is only one bus or one type of bus.
  • the processor 1510 may adopt a central processing unit.
  • the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the processor 1510 uses one or more integrated circuits to execute related programs to implement the technical solutions provided by the embodiments of the present application.
  • the memory 1520 may include read only memory and random access memory and provides instructions and data to the processor 1510 .
  • a portion of the memory 1520 may also include non-volatile random access memory.
  • For example, the memory 1520 may also store device type information.
  • the processor 1510 executes the computer-executed instructions in the memory 1520 to execute the operation steps of the above template acquisition method.
  • the computing device 1500 may correspond to the corresponding execution subject of the methods according to the embodiments of the present application, and the above-mentioned and other operations and/or functions of the modules in the computing device 1500 are respectively intended to implement the corresponding processes of the methods in the embodiments, which are not repeated here for brevity.
  • FIG. 14 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • the computing device can be used as an image acquisition device to execute the template acquisition method shown in FIGS. 1-3 and the above-mentioned optional embodiments.
  • the computing device may be a terminal, or a chip or a chip system inside the terminal.
  • the computing device 1600 includes a processor 1610 , and an interface circuit 1620 coupled to the processor 1610 . It should be understood that although only one processor and one interface circuit are shown in FIG. 14, the computing device 1600 may include other numbers of processors and interface circuits.
  • the interface circuit 1620 is used to communicate with other components of the terminal, such as memory or other processors.
  • the processor 1610 is used for signal interaction with other components through the interface circuit 1620 .
  • the interface circuit 1620 may be an input/output interface of the processor 1610 .
  • the processor 1610 reads computer programs or instructions in a memory coupled thereto through the interface circuit 1620, and decodes and executes the computer programs or instructions.
  • these computer programs or instructions may include the above-mentioned terminal function program, and may also include the above-mentioned function program of the image processing apparatus applied in the terminal.
  • the terminal or the image processing apparatus in the terminal can be made to implement the solution in the image processing method provided by the embodiments of the present application.
  • these terminal function programs are stored in a memory outside the computing device 1600 .
  • When the above-mentioned terminal function program is decoded and executed by the processor 1610, part or all of the content of the terminal function program is temporarily stored in the memory.
  • these terminal function programs are stored in the internal memory of the computing device 1600 .
  • the computing device 1600 may be set in the terminal of the embodiment of the present invention.
  • Alternatively, some parts of the terminal function programs are stored in a memory outside the computing device 1600, and other parts of the terminal function programs are stored in a memory inside the computing device 1600.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present application.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
  • The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, and the program, when executed by a processor, is used to execute at least one of the solutions described in the foregoing embodiments.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including, but not limited to, wireless, wire, optical fiber cable, RF (Radio Frequency, radio frequency), etc., or any suitable combination of the above.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN, Local Area Network) or a wide area network (WAN, Wide Area Network), or it can be connected to an external computer (for example, using an Internet service provider to connect via the Internet).
  • the embodiments of the present application further provide a computer program, which, when executed by a computer, causes the computer to execute at least one of the solutions described in the foregoing embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of computer vision and the field of smart vehicles. An embodiment of the present application provides a face template collection method, which includes: acquiring first image information collected by a sensor; obtaining, from pre-stored template image information, template image information in the same domain as the first image information; when the first image information does not match the same-domain template image information, obtaining, from the pre-stored template image information, template image information in a domain different from that of the first image information; and when the first image information matches the different-domain template image information, updating the pre-stored template image information according to the first image information. In this way, the recognition accuracy can be improved while ensuring a simple operation process.

Description

Template collection method, apparatus and system. Technical Field
The present application relates to the field of computer vision and the field of smart vehicles, and in particular, to a template collection method, apparatus, and system.
Background
Biometric recognition technologies based on computer vision, such as facial recognition, are applied in terminals such as smart vehicles and smart phones for identity recognition based on facial features. Face recognition generally uses a comparison method: by comparing the degree of similarity between a face image and a pre-collected standard image (which may also be called a template image), it is determined whether the face image and the template image belong to the same person. Generally, the domain of an image can be divided according to any information related to the image, that is, image-related information. Images in the same domain share at least one piece of identical image-related information. In general, if the face image and the template image belong to the same domain, that is, when face recognition is performed within the same domain, the recognition accuracy is relatively high; if the face image and the template image belong to different domains, that is, when face recognition is performed across domains, the recognition accuracy decreases.
When facing the problem of cross-domain face recognition, face recognition methods in the prior art either require the user to register only one type of template or require the user to register multiple types of templates. If the user is required to register only one type of template, that template is used when recognizing face images of all categories, and the recognition accuracy differs significantly depending on whether the face image and the template are in the same domain or in different domains. If the user is required to register multiple types of templates, different templates can be selected for comparison according to the face image, which can correspondingly improve the recognition accuracy; however, the process of registering multiple templates is cumbersome and troublesome for the user. That is, the face recognition methods in the prior art cannot achieve both a simple operation process and improved recognition accuracy.
Summary
The present application provides a template collection method, apparatus, and system, which are used to improve recognition accuracy while ensuring a simple operation process.
In a first aspect, a template collection method is provided, including: acquiring first image information from a sensor; and obtaining pre-stored template image information according to the domain to which the first image information belongs, where the template image information is used for comparison with the first image information. The sensor may be a camera or a radar. The radar may include lidar, millimeter-wave radar, and ultrasonic radar. The pre-stored template image information may be template image information stored locally and/or in the cloud.
With the template collection method of the first aspect, by obtaining pre-stored template image information according to the domain to which the first image information belongs, suitable pre-stored template image information can be selected for comparison with the first image information, so as to mitigate the impact of domain differences on recognition accuracy and improve recognition accuracy.
As a possible implementation of the first aspect, second template image information is preferentially obtained for comparison with the first image information, where the second template image information is pre-stored template image information in the same domain as the first image information. Being in the same domain means that the domains to which they belong are the same.
In this way, by preferentially obtaining pre-stored template image information in the same domain as the first image information for comparison with the first image information, same-domain comparison is performed preferentially, which ensures recognition accuracy. The reason is that, for the same target object, the difference between the features of two images in the same domain is smaller than the difference between the features of two images in different domains; therefore, in the case of same-domain comparison, feature comparison can be performed more accurately and the rejection rate is reduced.
As a possible implementation of the first aspect, when the second template image information does not exist, first template image information is obtained for comparison with the first image information, where the first template image information is pre-stored template image information in a domain different from that of the first image information. Cross-domain means that the domains to which they belong are different.
In this way, by obtaining the first template image information in a domain different from that of the first image information, cross-domain recognition can be performed, which increases recognition flexibility.
As a possible implementation of the first aspect, the category of the target object contained in the first image information is identified first, and template image information containing a target object of the same category is obtained from the pre-stored template image information according to the domain to which the first image information belongs. The category of the target object may be determined according to the identity, attributes, or features of the target object. The category of the target object may include one or more of male, female, adult, or child.
In this way, when the amount of pre-stored template image information is large, preliminary screening is performed by first identifying the category of the target object, which narrows the range to be compared, reduces the amount of computation when comparing image information, and ensures real-time performance.
As a possible implementation of the first aspect, the template collection method further includes: comparing the first image information with the obtained pre-stored template image information, and when the first image information matches the obtained pre-stored template image information, the authentication is passed. When the similarity between the target object contained in the first image information and the target object contained in the obtained template image information is greater than a specified threshold, the first image information matches the obtained template image information.
As a possible implementation of the first aspect, the template collection method further includes: when the first image information is compared with the obtained first template image information and the authentication is passed, updating the pre-stored template image information according to the first image information. The update may be adding template image information generated according to the first image information.
In this way, when a cross-domain comparison is performed and the authentication is passed, the pre-stored template image information is updated, so that face templates of domains for which the user has not registered templates can be automatically collected, which ensures a simple process; in addition, the probability of performing same-domain comparison in subsequent authentication is increased, which can improve recognition accuracy.
As a possible implementation of the first aspect, the template collection method further includes: when the authentication fails, sending reminder information to the user, where the reminder information is used to remind the user to register a template. The reminder information may be sent to the user when authentication using the first image information fails but authentication using another method passes. The other methods include one or more of a password, a verification code, a fingerprint, a voiceprint, or an iris.
In this way, the user is reminded to register a template; if the user chooses to register the template, there is no need to perform authentication by other methods every time afterwards, which reduces the user's operation steps.
第二方面，提供一种模板采集方法，该模板采集方法包括以下步骤：获取来自传感器的第一图像信息；获取预存储的模板图像信息；优先比较第二模板图像信息与该第一图像信息，其中，该第二模板图像信息为该预存储的模板图像信息中所属域与该第一图像信息相同的模板图像信息。在不存在该第二模板图像信息的情况下，比较第一模板图像信息与该第一图像信息，该第一模板图像信息为预存储的模板图像信息中所属域与该第一图像信息不同的模板图像信息。
采用第二方面的模板采集方法,通过优先比较同域的第二模板图像信息与第一图像信息,能够确保识别精度。
作为第二方面的一种可能的实现方式,该模板采集方法还包括:在比较该第一图像信息与获取到的该第一模板图像信息,且鉴权通过时,根据该第一图像信息,更新预存储的模板图像信息。该更新,可以是添加根据该第一图像信息生成的模板图像信息。
采用该方式,在进行跨域比较且鉴权通过时,更新预存储的模板图像信息,能够自动采集用户没有注册模板的域的人脸模板,确保了流程简单,并且,增加了在以后的鉴权过程中进行同域比对的概率,能够提升识别精度。
作为第二方面的一种可能的实现方式,该模板采集方法还包括:在鉴权失败时,向用户发送提醒信息,该提醒信息用于,提醒用户注册模板。可以在用户使用第一图像信息鉴权失败,但使用其他方式鉴权通过时,向用户发送该提醒信息。其他方式包括密码、验证码、指纹、声纹或虹膜中的一个或多个。
采用该方式,提醒用户注册模板,如果用户选择了注册模板,此后无需每次通过其他方式进行鉴权,减少了用户操作流程。
第三方面,提供了一种模板采集方法,该模板采集方法包括:获取采集到的第一图像信息;从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同;在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,更新该预存储的模板图像信息。该第一图像信息可以来自于传感器。传感器包括摄像头、雷达。该更新,可以是添加根据该第一图像信息生成的模板图像信息。该更新,也可以是替换在鉴权通过时所使用的模板图像信息。
采用第三方面的模板采集方法,在该第一图像信息和该第一模板图像信息匹配时,即,在进行跨域比较且鉴权通过时,更新预存储的模板图像信息,能够自动采集用户没有注册模板的域的人脸模板,而无需用户进行按照每种域注册模板那样的繁琐的操作,并且,增加了在以后的鉴权过程中进行精度较高的同域比对的概率。由此,能够在确保操作流程简单的同时,提升识别精度。
作为第三方面的一种可能的实现方式,从预存储的模板图像信息中获取第一模板图像信息,具体包括:从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第一图像信息的所属域相同;在该第二模板图像信息和该第一图像信息不匹配时,从该预存储的模板图像信息中获取该第一模板图像信息。
采用该方式,先获取与第一图像信息同域的第二模板图像信息,在第一图像信息与第二模板图像信息不匹配时,才获取与第一图像信息不同域的第一模板图像信息,由此,能够优先进行同域的第一图像信息与第二模板图像信息是否匹配的判断,确保识别精度。
作为第三方面的一种可能的实现方式,该模板采集方法还包括:获取采集到的第二图像信息;从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像 信息与该第二图像信息的所属域相同;在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,根据该第二图像信息,更新该预存储的模板图像信息。该第二图像信息可以来自于传感器。采集第二图像信息的传感器,与采集第一图像信息的传感器可以相同,也可以不同。该更新可以是替换与该第二图像信息匹配的第二模板图像信息。
采用该方式,在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,该第二模板图像信息包含的目标对象自身的特征与原来相比,可能发生了一些变化,通过如该方式这样,即使存在该域的模板图像信息,也更新预存储的模板图像信息,能够降低拒识率,提高识别精度。
作为第三方面的一种可能的实现方式,该模板采集方法还包括:获取采集到的第二图像信息;从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同;在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,更新该预存储的模板图像信息。
采用该方式,在该第二图像信息和该第二模板图像信息匹配,但匹配度相对较低时,该第二模板图像信息包含的目标对象自身的特征与原来相比,可能发生了一些变化,通过如该方式这样,即使存在该域的模板图像信息,也更新预存储的模板图像信息,能够降低拒识率,提高识别精度。
作为第三方面的一种可能的实现方式,该模板采集方法还包括:获取来自传感器的第二图像信息,该第二图像信息包括多个图像信息;获取预存储的第二模板图像信息,其中,该第二模板图像信息与该第二图像信息的所属域相同;在该第二图像信息和该第二模板图像信息不匹配,且用户通过其他鉴权方式鉴权通过,其他方式鉴权通过的次数超过一定阈值时,根据该第二图像信息,更新该预存储的模板图像信息。该多个图像信息可以是在一定时间段内连续采集的图像信息。该情况下更新预存储的模板图像信息一般是添加新的模板图像信息。
采用该方式,在使用第二图像信息鉴权失败,但用户多次通过其他鉴权方式鉴权通过时,直接更新预存储的模板图像信息,而无需征求用户同意,由此,可以在保证系统安全性的同时节省用户的操作流程。
作为第三方面的一种可能的实现方式,该模板采集方法还包括:在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配;在该第二图像信息和该第二模板图像信息的匹配度高于第四阈值时,该第二图像信息和该第二模板图像信息匹配;其中,该第三阈值与该第四阈值不同。
采用该方式,同域比较所使用的匹配度阈值不同于跨域比较所使用的匹配度阈值,能够针对同域与跨域的区别,分别适宜地进行目标对象的识别,降低拒识率,提高识别精度。
作为第三方面的一种可能的实现方式,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配,该模板采集方法还包括:获取采集到的第三图像信息;从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板 图像信息的所属域不同;在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,更新该预存储的模板图像信息;其中,在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时,该第三图像信息和该第三模板图像信息匹配。该第三图像信息可以来源于传感器。采集第三图像信息的传感器,与采集第一图像信息的传感器、采集第二图像信息的传感器可以相同,也可以不同。
采用该方式,在多个域中的任意两个域间跨域进行采集的图像信息和预存储的模板图像信息的比较时,使用同一个阈值,由此,即使域的数量增加或改变,也只需重新选取阈值,无需重新训练算法,确保算法长期有效。
作为第三方面的一种可能的实现方式,该所属域用于指示图像信息的格式、颜色或来源中的一个或多个特征。
作为第三方面的一种可能的实现方式,该所属域包括RGB域和IR域。在可见光波段采集的且记录了色彩信息的图像可以属于RGB域。具体例如RGB图像可以属于RGB域。在红外波段采集的且没有记录色彩信息的图像可以属于IR域。该所属域还可以包括灰度域。在可见光波段采集的且没有记录色彩信息的图像可以属于灰度域。
采用该方式,根据图像的颜色特征来确定图像的所属域,在此基础上,自动采集相应域的模板图像信息,提升识别精度。
作为第三方面的一种可能的实现方式,该模板采集方法还包括:向用户发送提示信息,该提示信息用于,请求用户同意更新该预存储的模板图像信息,或者通知用户该预存储的模板图像信息已被更新。在请求用户更新且用户同意更新时,可通过其他鉴权方式验证用户权限。
采用该方式,通过请求用户同意更新模板图像信息,可更准确地进行更新。通过通知用户已更新模板图像信息,可确保用户的知情权。
第四方面,提供了一种模板采集装置,该模板采集装置包括:获取模块以及处理模块;其中,该获取模块,用于获取采集到的第一图像信息;该获取模块,还用于从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同;该处理模块,用于在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,更新该预存储的模板图像信息。
采用第四方面的模板采集装置,能够自动采集用户没有注册模板的域的人脸模板,而无需用户进行按照每种域注册模板那样的繁琐的操作,并且,增加了在以后的鉴权过程中进行精度较高的同域比对的概率。由此,采用该模板采集方法,能够在确保操作流程简单的同时,提升识别精度。
作为第四方面的一种可能的实现方式,该获取模块,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第一图像信息的所属域相同;该获取模块,还用于在该第二模板图像信息和该第一图像信息不匹配时,从该预存储的模板图像信息中获取该第一模板图像信息。
采用该方式,能够优先进行同域的第一图像信息与第二模板图像信息是否匹配的判断,确保识别精度。
作为第四方面的一种可能的实现方式,该获取模块,还用于获取采集到的第二图 像信息;该获取模块,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同;该处理模块,还用于在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,根据该第二图像信息,更新该预存储的模板图像信息。
采用该方式,在第二模板图像信息包含的目标对象自身的特征与原来相比,可能发生了一些变化时,即使存在该域的模板图像信息,也更新预存储的模板图像信息,能够降低拒识率,提高识别精度。
作为第四方面的一种可能的实现方式,该获取模块,还用于获取采集到的第二图像信息;该获取模块,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同;该处理模块,还用于在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,更新该预存储的模板图像信息。
采用该方式,在该第二模板图像信息包含的目标对象自身的特征与原来相比,可能发生了一些变化时,即使存在该域的模板图像信息,也更新预存储的模板图像信息,能够降低拒识率,提高识别精度。
作为第四方面的一种可能的实现方式,该获取模块,还用于获取来自传感器的第二图像信息,该第二图像信息包括多个图像信息;该获取模块,还用于获取预存储的第二模板图像信息,其中,该第二模板图像信息与该第二图像信息的所属域相同;该处理模块,还用于在该第二图像信息和该第二模板图像信息不匹配,且用户通过其他鉴权方式鉴权通过,其他方式鉴权通过的次数超过一定阈值时,根据该第二图像信息,更新该预存储的模板图像信息。
采用该方式,在使用第二图像信息鉴权失败,但用户多次通过其他鉴权方式鉴权通过时,直接更新预存储的模板图像信息,而无需征求用户同意,由此,可以在保证系统安全性的同时节省用户的操作流程。
作为第四方面的一种可能的实现方式,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配;在该第二图像信息和该第二模板图像信息的匹配度高于第四阈值时,该第二图像信息和该第二模板图像信息匹配;其中,该第三阈值与该第四阈值不同。
采用该方式,同域比较所使用的匹配度阈值不同于跨域比较所使用的匹配度阈值,能够针对同域与跨域的区别,分别适宜地进行目标对象的识别,降低拒识率,提高识别精度。
作为第四方面的一种可能的实现方式,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配;该获取模块,还用于获取采集到的第三图像信息;该获取模块,还用于从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板图像信息的所属域不同;该处理模块,还用于在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,更新该预存储的模板图像信息;其中,在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时,该第三图像信息和该第三模板图像信息匹配。
采用该方式,在多个域中的任意两个域间跨域进行采集的图像信息和预存储的模板图像信息的比较时,使用同一个阈值,由此,即使域的数量增加或改变,也只需重新选取阈值,无需重新训练算法,确保算法长期有效。
作为第四方面的一种可能的实现方式,该所属域用于指示图像信息的格式、颜色或来源中的一个或多个特征。
作为第四方面的一种可能的实现方式,该所属域包括RGB域和IR域。
采用该方式,根据图像的颜色特征来确定图像的所属域,在此基础上,自动采集相应域的模板图像信息,提升识别精度。
作为第四方面的一种可能的实现方式,该处理模块,还用于向用户发送提示信息,其中,该提示信息,用于请求用户同意更新该预存储的模板图像信息,或者通知用户该预存储的模板图像信息已被更新。
采用该方式,通过请求用户同意更新模板图像信息,可更准确地进行更新。通过通知用户已更新模板图像信息,可确保用户的知情权。
第五方面,提供了一种模板采集系统,该模板采集系统包括:模板采集装置,以及服务器;该模板采集装置,用于发送采集到的第一图像信息;该服务器,用于接收来自该模板采集装置的该第一图像信息;该服务器,还用于从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同;该服务器,还用于在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,更新该预存储的模板图像信息。
采用第五方面的模板采集系统,能够自动采集用户没有注册模板的域的人脸模板,而无需用户进行按照每种域注册模板那样的繁琐的操作,能够在确保操作流程简单的同时,提升识别精度。
作为第五方面的一种可能的实现方式,该服务器,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第一图像信息的所属域相同;在该第二模板图像信息和该第一图像信息不匹配时,从该预存储的模板图像信息中获取该第一模板图像信息。
作为第五方面的一种可能的实现方式,该模板采集装置,还用于发送采集到的第二图像信息;该服务器,还用于接收来自该模板采集装置的该第二图像信息。该服务器,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同。该服务器,还用于在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,或者,在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,更新该预存储的模板图像信息。
作为第五方面的一种可能的实现方式,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配。在该第二图像信息和该第二模板图像信息的匹配度高于第四阈值时,该第二图像信息和该第二模板图像信息匹配。该第三阈值与该第四阈值不同。
作为第五方面的一种可能的实现方式,该模板采集装置,还用于发送采集到的第 三图像信息;该服务器,还用于接收来自该模板采集装置的该第三图像信息。该服务器,还用于从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板图像信息的所属域不同。该服务器,还用于在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,更新该预存储的模板图像信息,其中,在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时,该第三图像信息和该第三模板图像信息匹配。
第六方面,提供了一种模板采集系统,该模板采集系统包括:模板采集装置,以及服务器;该模板采集装置,用于发送采集到的第一图像信息;该服务器,用于接收来自该模板采集装置的该第一图像信息;该服务器,还用于从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同;该服务器,还用于发送该第一模板图像信息;该模板采集装置,还用于接收来自该服务器的该第一模板图像信息;该模板采集装置,还用于在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,向该服务器发送指示信息,该指示信息用于指示该服务器更新该预存储的模板图像信息。该指示信息中可以包括根据该第一图像信息生成的模板图像信息,以及表示更新方式的信息。更新方式包括添加该生成的模板图像信息或用该生成的模板图像信息替换第一模板图像信息。
采用第六方面的模板采集系统,能够自动采集用户没有注册模板的域的人脸模板,而无需用户进行按照每种域注册模板那样的繁琐的操作,能够在确保操作流程简单的同时,提升识别精度。
作为第六方面的一种可能的实现方式,该模板采集装置,还用于发送采集到的第二图像信息;该服务器,还用于接收来自该模板采集装置的该第二图像信息。该服务器,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同。该服务器,还用于发送该第二模板图像信息。该模板采集装置,还用于接收来自该服务器的该第二模板图像信息。该模板采集装置,还用于在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,或者,在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,向该服务器发送指示信息,该指示信息用于指示该服务器更新该预存储的模板图像信息。该指示信息中可以包括根据该第二图像信息生成的模板图像信息,以及表示更新方式的信息。更新方式包括添加该生成的模板图像信息或用该生成的模板图像信息替换第二模板图像信息。
作为第六方面的一种可能的实现方式,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配。在该第二图像信息和该第二模板图像信息的匹配度高于第四阈值时,该第二图像信息和该第二模板图像信息匹配。该第三阈值与该第四阈值不同。
作为第六方面的一种可能的实现方式,该模板采集装置,还用于发送采集到的第三图像信息;该服务器,还用于接收来自该模板采集装置的该第三图像信息。该服务器,还用于从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板图像信息的所属域不同。该服务 器,还用于发送该第三模板图像信息。该模板采集装置,还用于接收来自该服务器的该第三模板图像信息。该模板采集装置,还用于在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,向服务器发送指示信息,该指示信息用于指示服务器更新该预存储的模板图像信息。在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时,该第三图像信息和该第三模板图像信息匹配。
第七方面,提供了一种电子装置,该电子装置包括处理器和存储器,其中,该存储器存储有程序指令,该程序指令当被该处理器执行时使得该处理器执行第一方面至第三方面任一方面或任一可能的实现方式所提供的技术方案。
第八方面,提供了一种电子装置,该电子装置包括处理器和接口电路,其中,该处理器通过该接口电路与存储器耦合,该处理器用于执行该存储器中的程序代码,以使得该处理器执行第一方面至第三方面任一方面或任一可能的实现方式所提供的技术方案。
第九方面,提供了一种计算机存储介质,该计算机存储介质包括计算机指令,当该计算机指令在电子设备上运行时,使得该电子设备执行第一方面至第三方面任一方面或任一可能的实现方式所提供的技术方案。
第十方面,提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得该计算机执行第一方面至第三方面任一方面或任一可能的实现方式所提供的技术方案。
附图说明
以下参照附图来进一步说明本发明的各个特征和各个特征之间的联系。附图均为示例性的,一些特征并不以实际比例示出,并且一些附图中可能省略了本申请所涉及领域的惯常的且对于本申请非必要的特征,或是额外示出了对于本申请非必要的特征,附图所示的各个特征的组合并不用以限制本申请。另外,在本说明书全文中,相同的附图标记所指代的内容也是相同的。具体的附图说明如下:
图1为本申请实施例提供的一种模板采集方法的流程示意图;
图2为本申请实施例提供的一种模板采集方法的流程示意图;
图3为本申请实施例提供的一种模板采集方法的流程示意图;
图4为本申请实施例提供的一种人脸识别系统的结构示意图;
图5为本申请实施例的一种应用场景示意图;
图6为本申请实施例提供的一种模板采集方法的流程示意图;
图7为本申请实施例提供的一种模板注册/更新处理的流程示意图;
图8为本申请实施例提供的一种用户主动发起模板注册时的流程示意图;
图9为本申请实施例提供的一种模板采集方法的流程示意图;
图10为本申请实施例提供的一种模板采集装置的结构示意图;
图11为本申请实施例提供的一种模板采集系统的结构示意图;
图12为本申请实施例提供的一种模板采集系统的结构示意图;
图13为本申请实施例提供的一种计算装置的结构示意图。
图14为本申请实施例提供的一种计算装置的结构示意图。
具体实施方式
本申请的模板采集方法可用于生物识别技术,该对象可以包括各种物体、人脸、动物等。但不局限于此,例如还可以用于各种物体的识别。
图1是本申请实施例提供的一种模板采集方法的流程示意图。应理解,该模板采集方法可以由计算装置或电子设备(例如,终端)执行,也可以由电子设备内的芯片或芯片系统(system on chip;SoC)执行。如图1所示,该模板采集方法包括以下步骤:
S110:获取来自传感器的第一图像信息。该传感器可以包括摄像头、雷达。雷达可以包括激光雷达、毫米波雷达、超声波雷达。该第一图像信息可以是来自摄像头的RGB(Red-Green-Blue;红-绿-蓝)图像信息、灰度图像信息、IR(infrared;红外线)图像信息、来自雷达的点云图像信息。该第一图像信息可以是传感器获取到的图像本身,也可以是从该图像提取出的包含目标对象的局部图像、特征向量、表示目标对象特征的数字等信息。
S120:根据第一图像信息的所属域,获取预存储的模板图像信息,该模板图像信息用于与第一图像信息进行比较。这里的域可以根据传感器采集的图像的格式、颜色或来源等属性来进行划分。即,该所属域用于指示传感器采集的图像的格式、颜色或来源中的一个或多个特征。根据域的划分方式的不同,可以是,来自雷达的图像信息的所属域不同于来自摄像头的图像信息的所属域。还可以是,来自摄像头的RGB图像信息的所属域不同于来自摄像头的IR图像信息的所属域。在存在多个摄像头时,还可以是,来自一摄像头的图像信息的所属域不同于来自另一摄像头的图像信息的所属域。通过根据第一图像信息的所属域,获取预存储的模板图像信息,能够选择合适的预存储的模板图像信息,用于与第一图像信息进行比较,以减轻所属域的差异对识别精度的影响,提升识别精度。
预存储的模板图像信息可以是本地存储或者云端存储的模板图像信息,或者采用本地存储和云端存储相结合的方式。该预存储的模板图像信息的存储方式可以是数据库、队列、表格等。本地存储可以理解为存储在本地的存储器。该存储器包括非易失性存储器和易失性存储器。存储在非易失性存储器,存储量比较大,但可减少信令交互。存储在易失性存储器,可降低存储成本。
在一些实施例中,可以是优先获取第二模板图像信息,用于与该第一图像信息进行比较,其中,该第二模板图像信息是与该第一图像信息同域的预存储的模板图像信息。在不存在该第二模板图像信息时,获取第一模板图像信息,用于与该第一图像信息进行比较,其中,该第一模板图像信息是与该第一图像信息跨域的预存储的模板图像信息。同域是指所属域相同。跨域是指所属域不同。一般而言,针对同一目标对象, 同域的两个图像上提取的特征之间的差异,会小于跨域的两个图像上提取出的特征之间的差异,因此,同域比较的情况下,更能够准确地进行特征比对,降低拒识率。由此,通过优先获取同域的预存储的模板图像信息,用于与第一图像信息进行比较,能够确保识别精度。
在一些实施例中,可以是先识别该第一图像信息包含的目标对象的类别,根据该第一图像信息的所属域,获取预存储的模板图像信息中的包含相同类别目标对象的模板图像信息。目标对象的类别可以根据目标对象的身份、属性或特征来确定。目标对象的类别可以包括人、动物、物体、男性、女性、成年人或儿童中的一个或多个。该情况下,预存储的模板图像信息可以是按照该类别分别存储的信息。由此,在预存储的模板图像信息的数量较大时,能够通过先识别目标对象的类别来进行初步筛选,然后再与筛选出的预存储的模板图像信息进行逐一比对,由此减少比较图像信息时的运算量,保证实时性。
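As a concrete illustration of this pre-screening step, the sketch below filters a pre-stored template set first by target-object category and then by domain before any one-by-one comparison. The `Template` record and its field names are assumptions introduced for the example only; they are not defined by this application.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Template:
    user_id: str    # account the template belongs to
    domain: str     # e.g. "RGB", "IR", "gray" (illustrative labels)
    category: str   # coarse class of the target object, e.g. "male" / "female" / "child"
    feature: list   # pre-extracted template feature vector

def prefilter(templates: List[Template], probe_domain: str, probe_category: str) -> List[Template]:
    """Return only the stored templates worth a one-by-one comparison:
    keep the templates whose coarse category matches the probe, then prefer
    the ones recorded in the same domain as the probe image."""
    same_category = [t for t in templates if t.category == probe_category]
    same_domain = [t for t in same_category if t.domain == probe_domain]
    # Fall back to cross-domain templates of the same category when the
    # probe's own domain has no enrolled template yet.
    return same_domain if same_domain else same_category
```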
可选的,该模板采集方法还包括:
S130:比较该第一图像信息与获取到的该预存储的模板图像信息,在该第一图像信息与获取到的该预存储的模板图像信息匹配时,鉴权通过。可以先确定该第一图像信息包含的目标对象,然后比对该第一图像信息包含的目标对象与该获取到的模板图像信息包含的目标对象。在该相似度大于规定的阈值时,该第一图像信息与该获取到的模板图像信息匹配。可以通过比对第一图像信息包含的目标对象的特征点和获取到的模板图像信息包含的目标对象的特征点,来判断两个目标对象的相似度。可以比对所有的特征点,也可以仅比对一个或多个特征点。例如,在目标对象为人脸时,可以全脸比对,也可以比较眼睛、鼻子、嘴巴、瞳孔、虹膜中的一个或多个。
可选的,该模板采集方法还包括:
S140:在比较该第一图像信息与获取到的该第一模板图像信息,且鉴权通过时,根据该第一图像信息,更新预存储的模板图像信息。该更新,可以是添加根据该第一图像信息生成的模板图像信息。通过添加模板图像信息,可以获得所属域不同的各种模板图像信息,增加了与采集到的图像信息进行比较时的灵活度。该更新,也可以是替换在鉴权通过时所使用的模板图像信息。通过替换模板图像信息,降低了存储成本。
可选的,该模板采集方法还包括:
S150:在鉴权失败时,向用户发送提醒信息,该提醒信息用于,提醒用户注册模板。可以在用户使用第一图像信息鉴权失败,但使用其他方式鉴权通过时,向用户发送该提醒信息。其他方式包括密码、验证码、指纹、声纹或虹膜中的一个或多个。由此,在用户选择注册模板时,此后无需每次通过密码等其他方式进行鉴权,减少了用户操作流程。
图2是本申请实施例提供的一种模板采集方法的流程示意图。应理解,该模板采集方法可以由计算装置或电子设备(例如,终端)执行,也可以由电子设备内的芯片或芯片系统执行。如图2所示,该模板采集方法包括以下步骤:
S210:获取来自传感器的第一图像信息;
S220:获取预存储的模板图像信息;
S230:优先比较第二模板图像信息与该第一图像信息，其中，该第二模板图像信息为该预存储的模板图像信息中所属域与该第一图像信息相同的模板图像信息。在该预存储的模板图像信息中不存在第二模板图像信息的情况下，比较第一模板图像信息与该第一图像信息，其中，该第一模板图像信息为该预存储的模板图像信息中所属域与该第一图像信息不同的模板图像信息。通过优先比较同域的第二模板图像信息与该第一图像信息，能够确保识别精度。
可选的,该模板采集方法还包括S240、S250,S240、S250可以参考图1中的S140、S150。
应理解,图2所示的模板采集方法中的相关技术细节可以参考图1所示的模板采集方法中的相关说明,在此不再赘述。
图3是本申请实施例提供的一种模板采集方法的流程示意图。应理解,该模板采集方法可以由计算装置或电子设备(例如,终端)执行,也可以由电子设备内的芯片或芯片系统执行。如图3所示,该模板采集方法包括以下步骤:
S310:获取采集到的第一图像信息。该第一图像信息可以来自于传感器。传感器可以是摄像头、雷达等传感器。
S311:从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同。该所属域用于指示图像信息的格式、颜色或来源中的一个或多个特征。该所属域可以包括RGB域和IR域。在可见光波段采集的且记录了色彩信息的图像可以属于RGB域。具体例如RGB图像可以属于RGB域。在红外波段采集的且没有记录色彩信息的图像可以属于IR域。该所属域还可以包括灰度域。在可见光波段采集的且没有记录色彩信息的图像可以属于灰度域。
在一些实施例中,S311可以包括:从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第一图像信息的所属域相同;在该第二模板图像信息和该第一图像信息不匹配时,从该预存储的模板图像信息中获取该第一模板图像信息。
S312:在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,更新该预存储的模板图像信息。更新可以包括添加模板图像信息、替换原有的模板图像信息。
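The combination of steps S310-S312 with the same-domain-first variant of S311 can be sketched as follows. The cosine-similarity matcher, the two illustrative threshold values and the in-memory `store` dictionary are assumptions of this sketch, not a definitive implementation of the claimed method.

```python
import numpy as np

SAME_DOMAIN_THRESHOLD = 0.70   # illustrative values only; in practice the thresholds
CROSS_DOMAIN_THRESHOLD = 0.55  # would be chosen from a FAR target (see the later sketch)

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def collect_template(probe_feature, probe_domain, store):
    """store: dict mapping a domain name to a list of template feature vectors.
    Returns True when authentication succeeds; as a side effect, a probe that
    only matched cross-domain is enrolled as a template of its own domain."""
    # S311 (variant): try the templates of the probe's own domain first.
    for tpl in store.get(probe_domain, []):
        if cosine(probe_feature, tpl) > SAME_DOMAIN_THRESHOLD:
            return True                       # high-precision match, nothing to add
    # Otherwise compare against templates of every other domain (cross-domain).
    for domain, templates in store.items():
        if domain == probe_domain:
            continue
        for tpl in templates:
            if cosine(probe_feature, tpl) > CROSS_DOMAIN_THRESHOLD:
                # S312: cross-domain match, so update the pre-stored templates by
                # adding the probe as a template of the previously missing domain.
                store.setdefault(probe_domain, []).append(list(probe_feature))
                return True
    return False
```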
可选的,该模板采集方法还包括:
S313:获取采集到的第二图像信息。该第二图像信息可以来自于传感器。该第二图像信息的来源,与第一图像信息的来源可以相同,也可以不同。即,采集第二图像信息的传感器,与采集第一图像信息的传感器可以相同,也可以不同。
S314:从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同;
S315:在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,根据该第二图像信息,更新该预存储的模板图像信息。在模板图像信息的注册时间超过时间阈值时,模板图像信息包含的目标对象自身的特征与原来相比,可能发生了一些变化,通过更新预存储的模板图像信息,能够降低拒识 率,提高识别精度。
应理解,S313-S315可以在S310之前执行,也可以在S311之后执行。
在一些实施例中,替代S315,该模板采集方法具有:
S315c:在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,更新该预存储的模板图像信息。在匹配度大于第一阈值且小于第二阈值时,虽然鉴权成功,但匹配度相对较低,表示所识别的目标对象自身的特征与原来相比,可能发生了一些变化,通过更新预存储的模板图像信息,能够降低拒识率,提高识别精度。
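A minimal predicate covering both refresh triggers described above (a template whose registration age exceeds a time threshold, or a passing match score that falls between the first and the second threshold) might look like this; the field names and the numeric values are placeholders.

```python
import time

FIRST_THRESHOLD = 0.70                  # passing score (illustrative)
SECOND_THRESHOLD = 0.85                 # comfortably high score (illustrative)
MAX_TEMPLATE_AGE_S = 180 * 24 * 3600    # e.g. refresh templates older than ~180 days

def should_refresh(match_score, enrolled_at, now=None):
    """Decide whether a same-domain template that just matched should be
    refreshed from the newly captured image (steps S313-S315 / S315c)."""
    now = time.time() if now is None else now
    too_old = (now - enrolled_at) > MAX_TEMPLATE_AGE_S
    passed_but_low = FIRST_THRESHOLD < match_score < SECOND_THRESHOLD
    return too_old or passed_but_low
```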
可选的,该模板采集方法还包括:
S316:获取来自传感器的第二图像信息,该第二图像信息包括多个图像信息。该多个图像信息可以是在一定时间段内连续采集的图像信息。
S317:获取预存储的第二模板图像信息,其中,该第二模板图像信息与该第二图像信息的所属域相同。
S318:在该第二图像信息和该第二模板图像信息不匹配,且用户通过其他鉴权方式鉴权通过,其他方式鉴权通过的次数超过一定阈值时,根据该第二图像信息,更新该预存储的模板图像信息。该情况下更新预存储的模板图像信息一般是添加新的模板图像信息。在用户多次通过其他鉴权方式鉴权通过时,直接更新预存储的模板图像信息,而无需征求用户同意,由此,可以在保证系统安全性的同时节省用户的操作流程。
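Steps S316-S318 can be tracked with a simple per-user counter, as in the hedged sketch below; the counter limit, the `store` layout and the idea of enrolling the most recent captured frame are assumptions made for illustration.

```python
from collections import defaultdict

FALLBACK_SUCCESS_LIMIT = 3   # illustrative: fallback-auth successes tolerated before auto-enrolment
fallback_successes = defaultdict(int)

def on_fallback_auth_success(user_id, recent_probe_features, probe_domain, store):
    """Called when face matching failed but another factor (password, fingerprint,
    verification code, ...) succeeded. Once this has happened more than
    FALLBACK_SUCCESS_LIMIT times, a new template for the probe's domain is
    enrolled directly, without asking the user again (step S318)."""
    fallback_successes[user_id] += 1
    if fallback_successes[user_id] > FALLBACK_SUCCESS_LIMIT and recent_probe_features:
        # Use the most recent of the continuously captured frames as the new template.
        store.setdefault(probe_domain, []).append(list(recent_probe_features[-1]))
        fallback_successes[user_id] = 0
```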
在一些实施例中,在该第一图像信息和该第一模板图像信息的匹配度高于第三阈值时,该第一图像信息和该第一模板图像信息匹配;在该第二图像信息和该第二模板图像信息的匹配度高于第四阈值时,该第二图像信息和该第二模板图像信息匹配;其中,该第三阈值与该第四阈值不同。
可选的,该模板采集方法还包括:
S319:获取采集到的第三图像信息。该第三图像信息的来源,与第一图像信息的来源、第二图像信息的来源可以相同,也可以不同。即,采集第三图像信息的传感器,与采集第一图像信息的传感器、采集第二图像信息的传感器可以相同,也可以不同。
S320:从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板图像信息的所属域不同。
S321:在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,更新该预存储的模板图像信息,其中,在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时,该第三图像信息和该第三模板图像信息匹配。在多个域中的任意两个域间跨域进行采集的图像信息和预存储的模板图像信息的比较时,使用同一个阈值即第三阈值,由此,即使域的数量增加或改变,也只需重新选取阈值,无需重新训练算法,确保算法长期有效。
可选的,该模板采集方法还包括:
S322:向用户发送提示信息,该提示信息用于,请求用户同意更新该预存储的模板图像信息,或者通知用户该预存储的模板图像信息已被更新。通过请求用户同意更新模板图像信息,可更准确地进行更新。通过通知用户已更新模板图像信息,可确保用户的知情权。
应理解,图3所示的模板采集方法中的相关技术细节可以参考图1、图2所示的模板采集方法中的相关说明,在此不再赘述。
下面,参照图4-图10,以应用于人脸识别的情况为例,对本申请实施例提供的一种模板采集方法进行详细说明。
一般而言,人脸识别可分为验证类应用和检索类应用。下面,以人脸识别的验证类应用为例,进行说明。人脸识别的验证类应用主要用于验证对象的身份,并授予相应的权限,本申请中有时也将其称为人脸验证。
当前主流的人脸识别算法的核心步骤为:提取特征、特征比对、输出结果。特征通常为多维向量,从采集到的人脸图像中提取出的特征称为识别特征,用于与该人脸图像进行比对的预存储的模板图像的特征向量也称为模板特征。在人脸验证中,一般采用计算相似度的方式对识别特征和模板特征进行比对,相似度越大表示这两个特征越接近。通常会选择一个相似度阈值,当这两个特征的相似度大于相似度阈值时,认为这两个特征对应的人脸属于同一人,人脸验证通过。
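The three core steps named above (feature extraction, feature comparison, result output) can be wired together as in the following sketch. The `extract_feature` stub merely stands in for a real face-embedding network and is purely an assumption of the example; only the cosine-similarity comparison and the threshold decision mirror the description.

```python
import numpy as np

def extract_feature(face_image):
    """Placeholder for a real embedding model: flatten the crop, force a fixed
    length of 128 and L2-normalise, so the sketch runs end to end."""
    v = np.resize(np.asarray(face_image, float).ravel(), 128)
    return v / (np.linalg.norm(v) + 1e-12)

def verify(face_image, template_feature, threshold=0.7):
    """Extract the recognition feature, compare it with the template feature
    by cosine similarity, and output the pass/fail decision."""
    probe = extract_feature(face_image)
    tpl = np.asarray(template_feature, float)
    tpl = tpl / (np.linalg.norm(tpl) + 1e-12)
    similarity = float(probe @ tpl)
    return similarity > threshold
```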
在进行人脸验证时,可能会出现人脸图像与模板图像属于不同的域,即跨域进行人脸验证的情况。这里所指的域,例如可以根据与图像有关的任意信息即图像相关信息来划分。同一种域的图像具有至少一个相同的图像相关信息。具体地,可以根据图像的属性信息来划分图像所属的域。图像的属性可以包括拍摄时间、亮度、颜色、格式、来源等。RGB图像、灰度图像、IR图像可以分别属于不同的域。来自的雷达的图像和来自摄像头的图像可以属于不同的域。来自不同摄像头的图像可以属于不同的域。如果人脸图像与模板图像跨域,由于不同域的图像上提取的特征有所不同,因此会造成人脸识别精度的下降。
这里,需要提前说明的是,上面说明的关于划分域的方法仅仅是示例,并不用于限定本申请。实际上,当前业界对于跨域的判定并没有一个统一的、定量化的标准。本申请也不对此作出严格的限制。凡是符合以下描述之一,即可认为两种类别不同的图像是跨域的,进而能够采用本申请提出的方法:
(1)当人脸图像和模板图像属于同一类时,人脸识别的精度较高,而人脸图像与模板图像属于不同类时,人脸识别的精度必然明显下降;
(2)在人脸识别算法实现中,对不同类的人脸图像,使用不同的模板图像,且对应的相似度阈值也不同;
(3)在人脸识别算法实现中,训练算法识别不同类别人脸图像的能力,并对不同类别的人脸图像采用不同的处理;
(4)在人脸识别算法实现中，对于人脸图像和模板图像属于相同类别或不同类别的情况，进行不同的处理。
具体地举例说明,例如在应用于智能汽车上的人脸识别中,以下情况均可能产生跨域图像:
(1)智能汽车内外有多个摄像头,这些摄像头在安装位置、类型等方面有所不同;
(2)使用IR-CUT(infrared-cut,滤除红外线)摄像头,这类摄像头在光照充足时 会采集彩色图像,而在光照较弱时则会采集IR图像,因此其在不同时刻采集的图像可能是跨域的;
(3)在手机上使用彩色图像注册人脸模板,在车载终端上使用IR图像进行人脸识别。
此外,在多种设备(如智能汽车、手机、平板电脑、笔记本电脑、智能手表等)上登录同一账号时,基于多设备的统一人脸识别也涉及跨域问题。
下面,对本申请实施例的应用场景进行说明。
本申请实施例提供的模板采集方法可应用于包括一个或多个设备的人脸识别系统。
图4是本申请实施例提供的一种人脸识别系统1的结构示意图。如图4所示，人脸识别系统1包括第一设备10。第一设备10具有第一摄像头110、第二摄像头120、第三摄像头130和第一计算单元140。第一摄像头110、第二摄像头120、第三摄像头130通过有线电缆或无线通信与第一计算单元140连接。图4仅仅是示例，第一设备10具有的摄像头的数量不局限于三个，可以是一个、两个或更多。
可选的,人脸识别系统1包括第二设备20。第二设备20具有第四摄像头210和第二计算单元220。第二设备20具有的摄像头的数量不局限于一个,也可以是两个以上。
可选的,人脸识别系统1包括第三设备30。第三设备30具有第五摄像头310和第三计算单元320。第三设备30具有的摄像头的数量不局限于一个,也可以是两个以上。
第一设备10、第二设备20、第三设备30例如分别可以为智能汽车、手机、平板电脑、笔记本电脑、智能手表、或者智能门铃、智能音箱等智能家居终端。需注意，图4仅仅是示例，人脸识别系统1包括的设备数量不局限于三个，也可以是一个、两个或更多。各设备可以具有用于预存储模板图像的存储单元。
可选的,人脸识别系统1包括服务器40。第一设备10、第二设备20、第三设备30分别与服务器40通信连接。服务器40可以是物理服务器、虚拟服务器或云端服务器。服务器40可以用于预存储模板图像,或者注册信息的在线存储。可选的,还可以由服务器40进行特征提取、人脸模板注册或人脸识别中的一部分或全部处理。
可选的,人脸识别系统1中的每个摄像头具有一个或多个状态。例如,在使用IR-CUT摄像头时,其具有采集彩色图像和采集IR图像两种状态。
人脸识别系统1中的各摄像头用于采集人脸图像和/或人脸模板注册图像。各计算单元用于处理图像、提取特征、进行人脸模板注册和人脸识别。各设备可以根据人脸识别的结果,执行解锁、登录账户、支付等操作。
人脸识别系统1可以允许所有设备、所有摄像头、摄像头的所有状态进行人脸模板注册和人脸识别,也可以指定部分设备、部分摄像头、摄像头的部分状态进行人脸模板注册和人脸识别。其中,允许进行识别的设备、摄像头、摄像头的状态中包含允许注册的设备、摄像头、状态。
根据图4所示的人脸识别系统1,例如可以设想下面的三种具体应用场景。
第一种应用场景为,人脸识别系统1仅包括第一设备10,第一设备10为智能汽 车。图5是第一种应用场景的示意图。如图5所示,第一摄像头110、第二摄像头120、第三摄像头130是安装于座舱内外的摄像头,第一计算单元140例如为智能座舱域控制器(CDC,Cockpit Domain Controller)。第一摄像头110例如为IR摄像头,安装在方向盘后方,主要用于驾驶员监控,兼用于人脸识别。除此以外,第一摄像头110也可以安装于A柱下方或仪表盘顶端等。第二摄像头120例如为IR-CUT摄像头,安装在主后视镜附近,主要用于座舱监控,兼用于人脸识别。第三摄像头130例如为IR摄像头,安装在座舱外的车门上方,用于实现人脸识别解锁上车功能。除此以外,第三摄像头130也可以安装于座舱外的A柱上方、B柱上方等位置。用户可用第一摄像头110、第二摄像头120、第三摄像头130中的任一摄像头注册人脸模板、任一摄像头进行人脸识别。此外,座舱内的第一摄像头110和第二摄像头120还可以同时进行人脸识别,互相验证,增强安全性。
第二种应用场景为,人脸识别系统1包括第一设备10和第二设备20,第一设备10为智能汽车,第二设备20为手机。在该场景下,在第一种应用场景的配置及功能的基础上,用户还可以在第二设备20上进行人脸模板注册和/或人脸识别,第二计算单元220例如为手机的中央处理单元(CPU,Central Processing Unit)。
第三种应用场景为,人脸识别系统包括第一设备10、第二设备20、第三设备30等多个设备。在该场景下,第一设备10、第二设备20、第三设备30等设备例如分别是智能汽车、手机、平板电脑、笔记本电脑、智能手表、或者智能门铃、智能音箱等智能家居终端,第一计算单元140、第二计算单元220以及第三计算单元320等计算单元例如可以分别是智能座舱域控制器、中央处理单元、微控制单元(MCU,Microcontroller Unit)等。用户可以在所有或部分设备上注册人脸模板。
图6为本申请实施例提供的一种人脸模板采集方法的示意流程图。例如在用户试图通过人脸验证进行账号登陆、支付、屏幕解锁或者车门解锁等行为,摄像头拍摄到人脸图像,并将该人脸图像发送给计算单元时,执行图6的流程图所示的处理。
下面依次对图6中的各个步骤进行说明。
S410:获取摄像头采集到的人脸图像,并进行人脸检测。在获取到人脸图像后,确定人脸所在区域,并提取该区域的图像。这里,人脸检测可以利用深度学习等方法来进行。步骤S410中获取到的人脸图像相当于本申请中的“第一图像信息”、“第二图像信息”、“第三图像信息”的一例。
S420:判断在人脸图像中是否检测到人脸。根据S410中的人脸检测结果,如果检测到人脸,执行S430。如果未检测到人脸,本次流程结束。
S430:对齐人脸,并进行特征提取。根据提取到的人脸所在区域的图像,将人脸恢复到合适的方位和角度。人脸对齐例如可以采用关键点匹配法。首先,利用深度学习等方法提取人脸关键点,然后根据提取到的关键点和标准关键点推算仿射变换矩阵,通过仿射变换实现人脸对齐。基于对齐后的人脸,进行特征提取而获得识别特征。
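Key-point alignment of this kind is commonly implemented with an estimated affine (similarity) transform, for example as below. The canonical landmark coordinates, the 112x112 crop size and the use of OpenCV's estimateAffinePartial2D are illustrative assumptions rather than requirements of the method.

```python
import cv2
import numpy as np

# Canonical positions (in a 112x112 crop) for left eye, right eye and nose tip.
# These particular coordinates are only an example.
CANONICAL = np.float32([[38.0, 46.0], [74.0, 46.0], [56.0, 70.0]])

def align_face(image, landmarks, size=(112, 112)):
    """landmarks: 3x2 array of detected key points in the same order as CANONICAL.
    Estimates an affine transform from the detected points to the canonical
    points and warps the face crop before feature extraction."""
    src = np.float32(landmarks)
    matrix, _ = cv2.estimateAffinePartial2D(src, CANONICAL)
    return cv2.warpAffine(image, matrix, size)
```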
S440:判断是否存在与采集到的人脸图像同域的人脸模板。是则执行S450,否则执行S470。
在此之前已经采集或注册的人脸模板信息即预存储的人脸模板信息,例如可以存 储于各设备具有的存储单元。预存储的人脸模板信息可以以数据库、队列、表格等形式存储。预存储的人脸模板信息可以是按照所属域和/或人脸分类来分别存储的。人脸分类可以按照性别、年龄等来划分。人脸分类可以包括男性、女性、成年人、儿童。该人脸模板信息包含模板内容的信息以及人脸模板所属域的信息。根据人脸图像所属域,来判断是否存在与人脸图像同域的人脸模板。预存储的人脸模板信息相当于本申请中的“预存储的模板图像信息”的一例。
可以根据图像的格式、颜色或来源中的一个或多个特征,定义人脸图像和人脸模板的所属域。根据图像的颜色来划分,该所属域例如可以包括RGB域、IR域。在可见光波段采集的且记录了色彩信息的图像可以属于RGB域。在红外波段采集的且没有记录色彩信息的图像可以属于IR域。来自摄像头的图像为RGB图像时,该图像以及根据该图像生成的人脸模板的所属域为RGB域。来自摄像头的图像为IR图像时,该图像以及根据该图像生成的模板图像信息的所属域为IR域。根据图像的来源来划分,来源于同一或同种摄像头的同类图像可以属于同一种域。具体例如,图4中的第一设备10的第二摄像头120为IR-CUT摄像头,可以定义为,其拍摄的所有彩色图像属于同一个域,其拍摄的所有IR图像属于同一个域,其拍摄的彩色图像与IR图像属于不同的域。也可以定义为,第一摄像头110拍摄的图像与第三摄像头130拍摄的图像属于不同的域。
在人脸识别和人脸模板注册的过程中,图像所属域在图像采集时就已确定,且不可更改;人脸模板所属域与生成此模板的图像所属域一致,且不可更改。
图像和人脸模板所属的域可以利用外部信息判别。人脸模板在保存时,会标明所属域,具体可体现在文件名、文件存储位置,或将模板信息与模板内容打包成字典等。来自摄像头的视频流含有来源信息,可以判断图像所属域。
图像所属的域也可利用编码特征判别。视频流解码后,RGB域的图像通常有3个不同的通道;IR域的图像则只有1个通道或3个相同的通道。也可直接对编码视频流进行判断。以YUV(Luminance-Bandwidth-Chrominance,明亮度-带宽-色度)颜色编码格式为例,RGB域的图像的U通道和V通道在各位置的数值各不相同;IR域的图像的U通道和V通道的所有位置都是相同的默认值。
对于向量化的人脸模板的域,一般只能根据外部信息判断。对于视频流的域,根据外部信息或编码特征判断均可,但优选使用外部信息判断。
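Judging the domain from decoded pixels, as described above, can be as simple as checking whether the colour channels actually differ. The helper below is a sketch under that assumption; it ignores the external-information route, which the text prefers when such information is available.

```python
import numpy as np

def image_domain(img):
    """Classify a decoded frame as 'RGB' or 'IR/gray' from its channels: an IR
    (or grayscale) frame has a single channel, or three channels carrying
    identical values, while a colour frame has three distinct channels.
    For an encoded YUV stream the same idea applies: constant U and V planes
    indicate an IR/grayscale frame."""
    img = np.asarray(img)
    if img.ndim == 2 or img.shape[-1] == 1:
        return "IR/gray"
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    if np.array_equal(r, g) and np.array_equal(g, b):
        return "IR/gray"
    return "RGB"
```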
S450:进行高精度验证。即,使用同域模板和同域阈值进行特征比对。同域模板为,与采集到的人脸图像属于同一种域的人脸模板。同域阈值为,在人脸图像与同域模板进行特征比对时,用于判断两个人脸特征是否属于同一个人的相似度阈值。同域模板相当于本申请中的“第二模板图像信息”的一例。同域阈值相当于本申请中的“第四阈值”的一例。具体地,可以计算同域模板的模板特征与识别特征的相似度,将计算出的相似度与同域阈值进行比较。本实施例中,人脸特征的相似度(人脸相似度)可以用欧氏距离、余弦相似度等指标来评价。关于同域阈值的选取方式在后面进行说明。S450之后执行S460。在预存储的人脸模板信息中包含的人脸模板的数量较少时,可以将人脸图像与所有同域模板逐一进行比对。在人脸模板的数量较多时,可以先进行初级筛选。例如可以先判断人脸图像中的人脸是男性还是女性,或者是成年人还是 儿童。例如在是男性的情况下,可以先筛选出预存储的人脸模板信息中的男性的人脸模板,再将人脸图像与男性的人脸模板中的所有同域模板进行逐一比对。
S460:判断高精度验证是否通过。根据S450中的特征比对结果,模板特征与从人脸图像提取的识别特征的相似度大于同域阈值时,高精度验证通过。验证通过则执行S530,否则执行S470。
S470:判断是否存在与采集到的人脸图像跨域(不同域)的人脸模板。可以用与上面同样的方法进行判断,在此不再赘述。是则执行S480,否则执行S500。
S480:进行低精度验证。即,使用跨域模板和跨域阈值进行特征比对。跨域模板为,与采集到的人脸图像属于不同域的人脸模板。跨域阈值为,在人脸图像与跨域模板进行特征比对时,用于判断两个人脸特征是否属于同一个人的相似度阈值。跨域模板相当于本申请中的“第一模板图像信息”的一例。跨域阈值相当于本申请中的“第三阈值”的一例。具体地,可以计算跨域模板的模板特征与识别特征的相似度,将计算出的相似度与跨域阈值进行比较。S480之后执行S490。在该步骤中,同样可以如上述那样进行初级筛选。
如上所述,在S450和S480中,采用不同的人脸模板和相似度阈值进行特征比对。本实施例中,同域阈值和跨域阈值可以采用以下方式来选取:
(1)在具有足够代表性的测试集上测试人脸识别算法,输出相同人脸、不同人脸的相似度;
(2)根据安全要求,确定人脸识别的误识率(FAR;false acceptance rate);
(3)根据误识率和不同人脸相似度的测试结果选择相似度阈值;
(4)至少指定2种相似度阈值:同域阈值和跨域阈值;
(5)对于同域阈值,可以每一种域使用单独的阈值,也可以几种域或所有域共用同一个阈值;
(6)对于多域共用的同域阈值，应使误识率最高的域也满足要求，故需要选择这些域在指定误识率下对应的阈值中的最大值；
(7)对于跨域阈值,可以每两种域使用单独的阈值,也可以几种域或所有域共用同一个阈值,推荐所有域使用同一个阈值;
(8)对于多域共用的跨域阈值，可以每两种域单独测算阈值，选择其中最大值；也可以在测试中按照一定比例将多种域混在一起；
(9)在安全要求相同的情况下,跨域阈值对应的误识率不应高于同域阈值对应的误识率。
根据上述阈值选取方式,在多域共用同一跨域阈值的情况下,跨域阈值对应的误识率的上限值不高于同域阈值对应的误识率的数值或其上限值。另外,一般情况下,根据上述方式选取的同域阈值的数值会高于跨域阈值的数值,但并不排除除此以外的情况。
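The threshold-selection recipe above amounts to reading a quantile of the impostor (different-person) similarity distribution at the target false acceptance rate, separately for same-domain and cross-domain test pairs, and letting a shared threshold follow the most demanding domain. The impostor scores and FAR targets in the sketch are synthetic placeholders.

```python
import numpy as np

def threshold_for_far(impostor_scores, target_far):
    """Similarity threshold at which the fraction of impostor (different-person)
    pairs scoring above it equals the target false acceptance rate (FAR)."""
    scores = np.asarray(impostor_scores, float)
    return float(np.quantile(scores, 1.0 - target_far))

# Synthetic impostor scores for two domains sharing one same-domain threshold:
# the shared value must satisfy the worst domain, hence the max().
rng = np.random.default_rng(0)
impostor_by_domain = {"RGB": rng.normal(0.30, 0.10, 100_000),
                      "IR":  rng.normal(0.35, 0.10, 100_000)}
shared_same_domain_thr = max(threshold_for_far(s, 1e-5)
                             for s in impostor_by_domain.values())
# A single cross-domain threshold at a stricter FAR target.
cross_domain_thr = threshold_for_far(rng.normal(0.25, 0.10, 100_000), 1e-6)
```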
此外,上述(1)中的人脸识别算法可以是使用单一域的样本训练出的算法,也可以是使用多种域的样本训练出的算法,本申请实施例对此没有特别限定。不过,如果能够使所涉及域的图像样本均参与训练,能够提高算法的精度。例如,可以先在最常用的一种或几种域上训练基础版本的算法代码,然后用少量的其他域的数据进行微 调。例如可以先在RGB域上训练基础版本的算法代码。
S490:判断低精度验证是否通过。根据S480中的比较结果,模板特征与识别特征的相似度大于跨域阈值时,低精度验证通过。验证通过则执行S540,否则执行S500。
S500:判断是否需要注册模板。这里，计算单元可以输出用于询问用户是否需要注册人脸模板的信息，并且获取用户的答复信息，根据用户的答复信息来判断是否需要注册模板。计算单元例如可以通过设备具有的扬声器、显示器等以语音、显示等方式输出信息。计算单元例如可以获取用户对显示器的操作输入信息或者用户的语音信息，根据获取到的信息判断是否需要注册模板。若判断为需要注册模板则执行S510，否则流程结束。
S510:获取用户鉴权信息。在该步骤中,采用其他验证方式验证用户权限。计算单元可以获取用户输入的包括密码、验证码、指纹、声纹或虹膜中的一个或多个的信息,用于验证用户权限。
S520:判断用户鉴权是否通过。鉴权通过则执行S540,否则流程结束。
S530:判断是否需要更新模板。需要则执行S540,否则流程结束。
这里,可以输出用于向用户询问是否更新模板的信息,并获取用户输入的答复信息,根据用户的答复信息来决定是否需要更新模板。也可以由系统间隔固定时间自动更新模板,或者间隔一段时间后随机更新模板。或者,也可以在此前曾连续多次/高频度出现识别失败或者识别成功但相似度小于特定阈值时,由系统自动更新模板,或者向用户询问是否更新模板。后一种情况进行更新的理由在于,出现该情况说明用户人脸与模板采集时发生了较大变化。
S540:注册或更新人脸模板。图7是图6的子流程,是S540的注册或更新人脸模板处理的具体流程示意图。下面,对图7的各步骤进行说明。
S5400:对S410中获取到的人脸图像进行质检。质检包含2个方面:成像质量的检验,包含清晰度、对比度、曝光等,可以由硬件实现;人脸属性的检验,如头部姿态、遮挡等,可以利用深度学习等算法实现。在检验成像质量时,例如可以设定梯度阈值、对比度阈值、亮度系数阈值来判断图像的清晰度、对比度以及曝光是否满足要求,由此检验成像质量是否满足要求。在检验人脸属性时,例如可以训练深度学习模型,如计算头部姿态角(Yaw:方位角、Pitch:纵摇角、Roll:横摇角)的模型,判断人脸特定区域(如口、鼻、眼)是否被遮挡的模型,设定姿态角阈值以及遮挡判断规则,来检验人脸姿态是否满足要求。
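The imaging-quality half of this check (sharpness, contrast, exposure) can be approximated with a few global statistics, as sketched below; the concrete limits are placeholders, and the head-pose/occlusion half, which needs a learned model, is not shown.

```python
import cv2
import numpy as np

def passes_image_quality(gray,
                         min_sharpness=100.0,        # variance of the Laplacian
                         min_contrast=30.0,          # standard deviation of pixel values
                         exposure_range=(60.0, 190.0)):
    """gray: single-channel face crop as a NumPy array. All limits are
    illustrative placeholders, not values prescribed by the application."""
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # blurred images give low variance
    contrast = float(np.std(gray))
    brightness = float(np.mean(gray))                   # crude exposure proxy
    return (sharpness >= min_sharpness
            and contrast >= min_contrast
            and exposure_range[0] <= brightness <= exposure_range[1])
```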
S5401:判断人脸图像是否通过质检。在图像的清晰度、对比度以及曝光均满足要求且人脸姿态满足要求时，人脸图像通过质检。通过质检则执行S5402，未通过质检则注册/更新失败，流程结束。
S5402:对于通过质检的图像,提取人脸特征。由于之前的步骤S430中已计算了人脸特征,可以直接使用。
S5403:根据之前的人脸识别结果,确认是否已注册过相应ID(identification;账户)。如果注册过,则使用相同的ID;如果未注册过,则分配一个新的ID;同一人的不同域模板对应相同的ID,且模板标明所属域。
例如,在从S530执行S540的情况下,可以视为注册过相应ID,直接使用S450 的高精度验证所利用的同域模板的ID即可。在执行S490之后执行S540的情况下,可以视为注册过相应ID,直接使用S480的低精度验证所利用的跨域模板的ID即可。从S520执行S540的情况下,可以视为未注册过相应ID,分配一个新的ID。
S5404:进行人脸模板的注册或更新。
在执行S490之后执行S540的情况下,根据人脸图像,注册该人脸图像所属域的人脸模板,即添加新的人脸模板。在执行S520之后执行S540的情况下,为该用户分配一个新的ID,根据人脸图像,注册该ID的人脸模板。在执行S530之后执行S540的情况下,根据人脸图像更新人脸模板。此时的更新,是对同域的人脸模板进行更新,例如可以替换原有的同域的人脸模板。由于经过该S5404的处理后,预存储的人脸模板信息相比该处理之前发生了改变,因此也可以理解为,通过该处理更新了预存储的人脸模板信息。S5404的处理相当于本申请中的“更新预存储的模板图像信息”的一例。
另外,在更新人脸模板时,可以直接用新特征替换现有模板,也可以采用求和的方式融合新特征和现有模板。在融合新特征和现有模板时,可以使用下式(1)所示的滑动平均的方式:
V_n = αV_i + (1-α)V_t    (1)
式中,V_i为输入特征向量,V_t为现有模板向量,V_n为新的模板向量,α为自定义系数。
可选的,可以根据不同的更新触发条件选择不同的方法。例如,对于用户主动更新且相似度低于特定阈值(该阈值不低于用于人脸识别的同域阈值)的情况,直接替换模板,其他情况下则融合新特征和现有模板。
对于同一ID在同一个域对应多个模板(如正脸、左侧脸、右侧脸)的情况,每次可以只更新与采集的人脸图像对应的模板。对于跨多个域的人脸识别,可以将其中部分域的模板特征融合,作为跨域识别专用的模板向量。融合时,可以对多个域的模板求平均,还可根据各个域的识别频率和重要性计算加权平均。
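Equation (1) and the surrounding options (direct replacement versus running-average fusion, plus a weighted average across domains for a dedicated cross-domain template) might be implemented as follows; the coefficient, the similarity cut-off and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def fuse_template(input_feature, current_template, alpha=0.3):
    """Running-average update from equation (1): V_n = alpha*V_i + (1-alpha)*V_t."""
    v_i = np.asarray(input_feature, float)
    v_t = np.asarray(current_template, float)
    return alpha * v_i + (1.0 - alpha) * v_t

def update_template(input_feature, current_template, similarity,
                    user_triggered=False, replace_below=0.80):
    """Replace outright when the user asked for an update and the face has
    drifted far from the stored template; otherwise blend the two."""
    if user_triggered and similarity < replace_below:
        return np.asarray(input_feature, float)
    return fuse_template(input_feature, current_template)

def cross_domain_template(templates_by_domain, weights=None):
    """Weighted average of per-domain template vectors, usable as a dedicated
    template for cross-domain recognition; equal weights by default."""
    domains = list(templates_by_domain)
    mats = np.stack([np.asarray(templates_by_domain[d], float) for d in domains])
    w = np.ones(len(domains)) if weights is None else np.asarray(
        [weights[d] for d in domains], float)
    w = w / w.sum()
    return (w[:, None] * mats).sum(axis=0)
```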
此外,考虑到设备算力有限,也可以先将图像和特征暂存,待设备负载较轻时再进行人脸模板的注册或更新。
S5404之后流程结束。需注意,在人脸模板注册或更新的过程中,是存在失败的可能的。例如,图像质检不合格、写入失败等软件错误等情况会导致注册或更新的失败。此时,也可以在结束流程前记录更新或注册结果(成功/失败)。如果失败,系统将继续使用现有模板。对于缺乏同域模板的情况,系统将在下次识别到此用户时继续尝试自动注册模板。
以上对在进行人脸识别过程中更新或注册人脸模板的情况进行了说明。然而,本实施例中,上述说明中的同域模板或跨域模板也可以是由用户主动发起注册的模板。图8是用户主动发起模板注册时的流程示意图。在用户触发模板注册时,计算单元进行鉴权而验证用户权限之后,按照图8的流程来注册人脸模板。下面对图8的各步骤进行说明。
S1500:获取摄像头采集的人脸图像。
S1501:对采集的人脸图像进行成像质量检验。与上述同样,成像质量的检验包 含清晰度、对比度、曝光等,可以由硬件实现。
S1502:判断图像是否通过成像质量检验。通过则执行S1503,未通过则执行S1509。
S1503:进行人脸检测。
S1504:判断是否检测到人脸。检测到人脸则执行S1505,否则执行S1509。
S1505:进行人脸属性检验。与上述同样,人脸属性检验如包括头部姿态、遮挡等,可以利用深度学习等算法实现。
S1506:判断人脸属性检验是否通过。通过则执行S1507，未通过则执行S1509。
S1507:对齐人脸,并进行特征提取。
S1508:确认ID进行模板注册。
这里,将从人脸图像提取的特征与现有的所有模板特征进行比对,确认人脸是否注册过相应ID。如果注册过,则使用相同的ID;如果未注册过,则分配一个新的ID。同一人的不同域模板对应相同的ID。此外,保存模板时标明其所属域。之后流程结束。
可选的,从人脸图像提取的特征与同域模板的相似度高于同域阈值,认为已注册过同域模板,此时可以提示用户选择替换、更新或保留模板。
可选的,从人脸图像提取的特征与同一个ID的所有跨域模板的相似度都高于跨域阈值,认为已注册过相应ID,提取特征与同一个ID的所有跨域模板的相似度都不高于跨域阈值,认为没有注册过相应ID。对于仅与同一个ID的部分跨域模板的相似度高于跨域阈值的情况,可以询问用户,确认是否是同一人。
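The enrolment-time checks in the two preceding paragraphs can be condensed into one decision helper, sketched below. The return labels, the threshold values and the data layout are assumptions of the example, not part of the application.

```python
import numpy as np

def _cos(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def resolve_enrolment(probe, same_domain_tpls, cross_domain_tpls_by_id,
                      same_thr=0.70, cross_thr=0.55):
    """Decide how a user-initiated enrolment relates to existing templates.
    Returns 'already_enrolled' (offer replace / update / keep), an existing ID,
    'confirm_with_user', or None (allocate a brand-new ID)."""
    if any(_cos(probe, t) > same_thr for t in same_domain_tpls):
        return "already_enrolled"
    for user_id, tpls in cross_domain_tpls_by_id.items():
        hits = [_cos(probe, t) > cross_thr for t in tpls]
        if hits and all(hits):
            return user_id             # every cross-domain template agrees: same person
        if any(hits):
            return "confirm_with_user" # only some agree: ask whether it is the same person
    return None
```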
S1509:判断本次注册的尝试次数或运行时间是否达到上限。这里,可以计算成像质量检验未通过、未检测到人脸、人脸属性检验未通过的合计次数作为尝试次数。或者,计算图8的流程开始后的经过时间来作为运行时间。如果达到上限,注册失败,流程结束,否则执行S1500反复进行之后的处理,直到注册成功或者达到上限为止。
根据以上说明的本实施例,结合同域高精度验证和跨域低精度验证来进行人脸验证,在不存在同域模板时,或者同域高精度验证未通过时,自动进行跨域低精度验证,并且在跨域低精度验证通过时,根据人脸图像来自动注册相应域的人脸模板。由此,能够自动采集用户没有注册过模板的域的人脸模板,而无需用户进行按照每种域注册模板那样的繁琐的操作。与要求用户针对不同类别的图像注册多种不同的模板的情况相比,操作流程简单,提高了用户体验。另外,相比于只要求用户注册一类模板,但在识别不同类别的图像时使用不同的阈值的情况,能够提高识别精度。因此,根据本申请实施例的模板采集方法,能够在确保操作流程简单的同时,提升识别精度。
以上对本实施例提供的人脸模板采集方法进行了说明。但本申请实施例并不局限于此。在本实施例中,在低精度验证未通过时,询问用户是否需要注册模板,但本申请实施例并不局限于此。在其他实施例中,也可以如图9所示,在低精度验证未通过时,不向用户询问是否需要注册模板,而直接结束流程。图9所示的流程图与图6的流程图相比,省略了S500-S520。采用该实施例,每位用户必须主动触发而注册一个人脸模板。与上述实施例相比,便利性有所下降,但系统安全性增强。
在本实施例中,以系统存储的模板为特征向量的情况为例进行了说明,但本申请实施例并不局限于此。在其他实施例中,系统存储的模板也可以是未经处理或经过处理(人脸裁剪、对齐)的人脸图像。
在本实施例中,以基于深度学习的人脸识别算法的情况为例进行了说明,但本申请实施例并不局限于此。在其他实施例中,也可以适用其他的基于比较的人脸识别算法。
在本实施例中,以使用摄像头获取人脸图像的情况为例进行了说明,但本申请实施例并不局限于此。在其他实施例中,也可以使用摄像头以外的传感器来获取人脸图像。例如,可以使用雷达来获取人脸图像。
在本实施例中,以预存储的模板图像信息存储于计算单元所属设备具有的存储单元的情况为例进行了说明,但本申请实施例并不局限于此。在其他实施例中,预存储的模板图像信息可以存储于服务器。计算单元可以通过所属设备的通信单元从服务器获取预存储的模板图像信息。
应理解，图6的模板采集方法中的各步骤可以一部分或全部由图4中的各设备的计算单元执行，也可以一部分或全部由服务器40执行。例如，计算单元可以将获取到的人脸图像发送给服务器40，由服务器40判断是否存在与该人脸图像同域的人脸模板，并将判断结果发送给计算单元。也可以由服务器40进行上面说明的高精度验证、低精度验证，并将验证结果发送给计算单元。也可以由服务器40在人脸验证通过时，根据该人脸图像，注册人脸模板。
图10是本申请实施例提供的一种模板采集装置的结构示意图，该模板采集装置可以是终端，也可以是终端内部的芯片或芯片系统，并且可以实现如图3所示的模板采集方法以及上述各可选实施例。如图10所示，模板采集装置1000包括：获取模块1100以及处理模块1200；其中，获取模块1100，用于获取采集到的第一图像信息；获取模块1100，还用于从预存储的模板图像信息中获取第一模板图像信息，该第一模板图像信息与该第一图像信息的所属域不同；处理模块1200，用于在该第一图像信息和该第一模板图像信息匹配时，根据该第一图像信息，更新该预存储的模板图像信息。
可选的，获取模块1100，还用于从预存储的模板图像信息中获取第二模板图像信息，该第二模板图像信息与该第一图像信息的所属域相同；获取模块1100，还用于在该第二模板图像信息和该第一图像信息不匹配时，从该预存储的模板图像信息中获取该第一模板图像信息。
可选的，获取模块1100，还用于获取采集到的第二图像信息；获取模块1100，还用于从该预存储的模板图像信息中获取第二模板图像信息，该第二模板图像信息与该第二图像信息的所属域相同；处理模块1200，还用于在该第二图像信息和该第二模板图像信息匹配，且该第二模板图像信息的注册时间超过时间阈值时，根据该第二图像信息，更新该预存储的模板图像信息。
可选的，获取模块1100，还用于获取采集到的第二图像信息；获取模块1100，还用于从该预存储的模板图像信息中获取第二模板图像信息，该第二模板图像信息与该第二图像信息的所属域相同；处理模块1200，还用于在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时，根据该第二图像信息，更新该预存储的模板图像信息。
可选的，获取模块1100，还用于获取采集到的第三图像信息；获取模块1100，还用于从该预存储的模板图像信息中获取第三模板图像信息，该第三模板图像信息与该第三图像信息的所属域不同，且与该第一模板图像信息的所属域不同；处理模块1200，还用于在该第三图像信息和该第三模板图像信息匹配时，根据该第三图像信息，更新该预存储的模板图像信息，其中，在该第三图像信息和该第三模板图像信息的匹配度高于该第三阈值时，该第三图像信息和该第三模板图像信息匹配。
可选的，处理模块1200，还用于向用户发送提示信息，其中，该提示信息，用于请求用户同意更新该预存储的模板图像信息，或者通知用户该预存储的模板图像信息已被更新。
应理解的是，本申请实施例中的模板采集装置可以由软件实现，例如，具有上述功能的计算机程序或指令来实现，相应计算机程序或指令可以存储在终端内部的存储器中，通过处理器读取该存储器内部的相应计算机程序或指令来实现上述功能。或者，本申请实施例中的模板采集装置还可以由硬件来实现。其中处理模块1200为处理器，获取模块1100为收发电路或接口电路。或者，本申请实施例中的模板采集装置还可以由处理器和软件模块的结合实现。
应理解的是,本申请实施例中的装置处理细节以及效果可以参考图3的模板采集方法及上述各可选实施例的相关表述,此处不再重复赘述。
图11是本申请实施例提供的一种模板采集系统的结构示意图,该模板采集系统执行图3所示的模板采集方法。如图11所示,模板采集系统200包括:模板采集装置2100,以及服务器2200。模板采集装置2100,用于发送采集到的第一图像信息。该采集到的第一图像信息来自于传感器。服务器2200,用于接收来自该模板采集装置的该第一图像信息。
服务器2200,还用于从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同。服务器2200,还用于在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,更新该预存储的模板图像信息。服务器2200可以比对第一图像信息包含的目标对象与该第一模板图像信息包含的目标对象的相似度。在该相似度大于规定的阈值时,第一图像信息与获取到的模板图像信息匹配。
可选的,服务器2200,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第一图像信息的所属域相同;在该第二模板图像信息和该第一图像信息不匹配时,从该预存储的模板图像信息中获取该第一模板图像信息。
可选的,模板采集装置2100,还用于发送采集到的第二图像信息。服务器2200,还用于接收来自模板采集装置2100的该第二图像信息。服务器2200,还用于执行图3的模板采集方法中的步骤S314、S315。在一些实施例中,服务器2200,还用于执行步骤S315c,来替代执行步骤S315。
可选的,模板采集装置2100,还用于发送采集到的第二图像信息,该第二图像信息包括多个图像信息。服务器2200,还用于接收来自模板采集装置2100的该第二图像信息。服务器2200,还用于执行图3的模板采集方法中的步骤S317、S318。
可选的,模板采集装置2100,还用于发送采集到的第三图像信息。服务器2200, 还用于接收来自模板采集装置2100的该第三图像信息。服务器2200,还用于执行图3的模板采集方法中的步骤S320、S321。
应理解,图11所示的模板采集系统200的技术细节和有益效果可以参考图3所示的模板采集方法及上述各可选实施例中的相关说明,此处不再重复赘述。
图12是本申请实施例提供的一种模板采集系统的结构示意图,该模板采集系统执行图3所示的模板采集方法。如图12所示,该模板采集系统200a具有模板采集装置2100a和服务器2200a。模板采集装置2100a,用于发送采集到的第一图像信息。服务器2200a,用于接收来自模板采集装置2100a的该第一图像信息。服务器2200a,还用于从预存储的模板图像信息中获取第一模板图像信息,该第一模板图像信息与该第一图像信息的所属域不同。
服务器2200a,还用于发送该第一模板图像信息。模板采集装置2100a,还用于接收来自该服务器2200a的该第一模板图像信息。模板采集装置2100a,还用于在该第一图像信息和该第一模板图像信息匹配时,根据该第一图像信息,向服务器2200a发送指示信息,该指示信息用于指示服务器2200a更新该预存储的模板图像信息。该指示信息中可以包括根据该第一图像信息生成的模板图像信息,以及表示更新方式的信息。更新方式包括添加该生成的模板图像信息或用该生成的模板图像信息替换第一模板图像信息。模板采集装置2100a可以比对第一图像信息包含的目标对象与该第一模板图像信息包含的目标对象的相似度。在该相似度大于规定的阈值时,第一图像信息与获取到的模板图像信息匹配。服务器2200a,还用于在接收到来自模板采集装置2100a的该指示信息时,更新该预存储的模板图像信息。
可选的,模板采集装置2100a,还用于发送采集到的第二图像信息。服务器2200a,还用于接收来自模板采集装置2100a的该第二图像信息。服务器2200a,还用于从该预存储的模板图像信息中获取第二模板图像信息,该第二模板图像信息与该第二图像信息的所属域相同。服务器2200a,还用于发送该第二模板图像信息。
模板采集装置2100a,还用于接收来自该服务器2200a的该第二模板图像信息。模板采集装置2100a,还用于在该第二图像信息和该第二模板图像信息匹配,且该第二模板图像信息的注册时间超过时间阈值时,或者,在该第二图像信息和该第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据该第二图像信息,向服务器2200a发送指示信息,该指示信息用于指示服务器2200a更新该预存储的模板图像信息。该指示信息中可以包括根据该第二图像信息生成的模板图像信息,以及表示更新方式的信息。更新方式包括添加该模板图像信息或用该模板图像信息替换第二模板图像信息。
可选的,模板采集装置2100a,还用于发送采集到的第三图像信息。服务器2200a,还用于接收来自模板采集装置2100a的该第三图像信息。服务器2200a,还用于从该预存储的模板图像信息中获取第三模板图像信息,该第三模板图像信息与该第三图像信息的所属域不同,且与该第一模板图像信息的所属域不同。服务器2200a,还用于发送该第三模板图像信息。
模板采集装置2100a,还用于接收来自该服务器2200a的该第三模板图像信息。模板采集装置2100a,还用于在该第三图像信息和该第三模板图像信息匹配时,根据该第三图像信息,向服务器2200a发送指示信息,该指示信息用于指示服务器2200a更新该预存储的模板图像信息。该指示信息中可以包括根据该第三图像信息生成的模板图像信息,以及表示更新方式的信息。更新方式包括添加该模板图像信息或用该模板图像信息替换第三模板图像信息。
应理解,图12所示的模板采集系统200中的技术细节和有益效果可以参考图3所示的模板采集方法及上述各可选实施例的相关说明,此处不再重复赘述。
图13是本申请实施例提供的一种计算装置1500的结构示意图，该计算装置可以作为模板采集装置，执行图1-3所示的模板采集方法以及上述各可选实施例。该计算装置可以是终端，也可以是终端内部的芯片或芯片系统。如图13所示，该计算装置1500包括：处理器1510和存储器1520。
其中,该处理器1510可以与存储器1520连接。该存储器1520可以用于存储程序代码和数据。因此,该存储器1520可以是处理器1510内部的存储单元,也可以是与处理器1510独立的外部存储单元,还可以是包括处理器1510内部的存储单元和与处理器1510独立的外部存储单元的部件。
可选的,该计算装置1500还可以包括通信接口。该通信接口可以用于与其他装置之间进行通信。
可选的,计算装置1500还可以包括总线。存储器1520、通信接口可以通过总线与处理器1510连接。总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。该总线可以分为地址总线、数据总线、控制总线等。为便于表示,图13中仅用一条线表示,但并不表示仅有一根总线或一种类型的总线。
应理解,在本申请实施例中,该处理器1510可以采用中央处理单元。该处理器还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate Array,FPGA)或者其它可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。或者该处理器1510采用一个或多个集成电路,用于执行相关程序,以实现本申请实施例所提供的技术方案。
该存储器1520可以包括只读存储器和随机存取存储器,并向处理器1510提供指令和数据。处理器1510的一部分还可以包括非易失性随机存取存储器。例如,处理器1510还可以存储设备类型的信息。
在计算装置1500运行时,处理器1510执行存储器1520中的计算机执行指令执行上述模板采集方法的操作步骤。
应理解,根据本申请实施例的计算装置1500可以对应于执行根据本申请各实施例的方法中的相应主体,并且计算装置1500中的各个模块的上述和其它操作和/或功能分别为了实现本实施例各方法的相应流程,为了简洁,在此不再赘述。
图14为本申请实施例提供的一种计算装置的结构示意图，该计算装置可以作为模板采集装置，执行图1-3所示的模板采集方法以及上述各可选实施例。该计算装置可以是终端，也可以是终端内部的芯片或芯片系统。如图14所示，计算装置1600包括：处理器1610，与处理器1610耦合的接口电路1620。应理解，虽然图14中仅示出了一个处理器和一个接口电路，但计算装置1600可以包括其他数目的处理器和接口电路。
其中,接口电路1620用于与终端的其他组件连通,例如存储器或其他处理器。处理器1610用于通过接口电路1620与其他组件进行信号交互。接口电路1620可以是处理器1610的输入/输出接口。
例如，处理器1610通过接口电路1620读取与之耦合的存储器中的计算机程序或指令，并译码和执行这些计算机程序或指令。应理解，这些计算机程序或指令可包括上述终端功能程序，也可以包括上述应用在终端内的模板采集装置的功能程序。当相应功能程序被处理器1610译码并执行时，可以使得终端或在终端内的模板采集装置实现本申请实施例所提供的模板采集方法中的方案。
可选的，这些终端功能程序存储在计算装置1600外部的存储器中。当上述终端功能程序被处理器1610译码并执行时，存储器中临时存放上述终端功能程序的部分或全部内容。
可选的,这些终端功能程序存储在计算装置1600内部的存储器中。当计算装置1600内部的存储器中存储有终端功能程序时,计算装置1600可被设置在本发明实施例的终端中。
可选的，这些终端功能程序的部分内容存储在计算装置1600外部的存储器中，这些终端功能程序的其他部分内容存储在计算装置1600内部的存储器中。
应理解,图10至图14中所示的具有相同功能的装置可以互相结合,图10至图14中所示的具有相同功能的装置以及各可选实施例相关设计细节可互相参考,也可以参考图1-3或图9中任一所示的模板采集方法以及各可选实施例相关设计细节。此处不再重复赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到 多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本申请实施例的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
本申请实施例还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时用于执行上述各个实施例所描述的方案中的至少之一。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是,但不限于,电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM,Electrical Programmable Read Only Memory或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括、但不限于无线、电线、光缆、RF(Radio Frequency,射频)等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码，所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如"C"语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络，包括局域网(LAN,Local Area Network)或广域网(WAN,Wide Area Network)，连接到用户计算机，或者，可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
本申请实施例还提供一种计算机程序,当被计算机执行时使得计算机执行上述各个实施例所描述的方案中的至少之一。
应理解,本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
此外,说明书和权利要求书中的“第一”、“第二”、“第三”等类似用语,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
所涉及的表示步骤的标号,如S110、S120……等,并不表示一定会按此步骤执行,在允许的情况下可以互换前后步骤的顺序,或同时执行。关于数量、数值的表述“以上”应解释为包含本数。
说明书和权利要求书中使用的术语“包括”不应解释为限制于其后列出的内容;它不排除其它的元件或步骤。因此,其应当诠释为指定所提到的所述特征、整体、步骤或部件的存在,但并不排除存在或添加一个或更多其它特征、整体、步骤或部件及其组群。因此,表述“包括装置A和B的设备”不应局限为仅由部件A和B组成的设备。
此外,除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。如有不一致,以本说明书中所说明的含义或者根据本说明书中记载的内容得出的含义为准。另外,本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。

Claims (23)

  1. 一种模板采集方法,其特征在于,包括:
    获取采集到的第一图像信息;
    从预存储的模板图像信息中获取第一模板图像信息,所述第一模板图像信息与所述第一图像信息的所属域不同;
    在所述第一图像信息和所述第一模板图像信息匹配时,根据所述第一图像信息,更新所述预存储的模板图像信息。
  2. 根据权利要求1所述的模板采集方法,其特征在于,所述从预存储的模板图像信息中获取第一模板图像信息,具体包括:
    从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第一图像信息的所属域相同;
    在所述第二模板图像信息和所述第一图像信息不匹配时,从所述预存储的模板图像信息中获取所述第一模板图像信息。
  3. 根据权利要求1或2所述的模板采集方法,其特征在于,所述方法还包括:
    获取采集到的第二图像信息;
    从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第二图像信息的所属域相同;
    在所述第二图像信息和所述第二模板图像信息匹配,且所述第二模板图像信息的注册时间超过时间阈值时,根据所述第二图像信息,更新所述预存储的模板图像信息。
  4. 根据权利要求1或2所述的模板采集方法,其特征在于,所述方法还包括:
    获取采集到的第二图像信息;
    从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第二图像信息的所属域相同;
    在所述第二图像信息和所述第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据所述第二图像信息,更新所述预存储的模板图像信息。
  5. 根据权利要求3所述的模板采集方法,其特征在于,
    在所述第一图像信息和所述第一模板图像信息的匹配度高于第三阈值时,所述第一图像信息和所述第一模板图像信息匹配;
    在所述第二图像信息和所述第二模板图像信息的匹配度高于第四阈值时,所述第二图像信息和所述第二模板图像信息匹配;
    其中,所述第三阈值与所述第四阈值不同。
  6. 根据权利要求1-5中任一项所述的模板采集方法,其特征在于,
    在所述第一图像信息和所述第一模板图像信息的匹配度高于第三阈值时,所述第 一图像信息和所述第一模板图像信息匹配,
    所述方法还包括:
    获取采集到的第三图像信息;
    从所述预存储的模板图像信息中获取第三模板图像信息,所述第三模板图像信息与所述第三图像信息的所属域不同,且与所述第一模板图像信息的所属域不同;
    在所述第三图像信息和所述第三模板图像信息匹配时,根据所述第三图像信息,更新所述预存储的模板图像信息;
    其中,在所述第三图像信息和所述第三模板图像信息的匹配度高于所述第三阈值时,所述第三图像信息和所述第三模板图像信息匹配。
  7. 根据权利要求1-6中任一项所述的模板采集方法,其特征在于,所述所属域用于指示图像信息的格式、颜色或来源中的一个或多个特征。
  8. 根据权利要求7所述的模板采集方法,其特征在于,所述所属域包括RGB域和IR域。
  9. 根据权利要求1-8中任一项所述的模板采集方法,其特征在于,所述方法还包括:向用户发送提示信息,所述提示信息用于,请求用户同意更新所述预存储的模板图像信息,或者通知用户所述预存储的模板图像信息已被更新。
  10. 一种模板采集装置,其特征在于,包括:获取模块以及处理模块;
    其中,
    所述获取模块,用于获取采集到的第一图像信息;
    所述获取模块,还用于从预存储的模板图像信息中获取第一模板图像信息,所述第一模板图像信息与所述第一图像信息的所属域不同;
    所述处理模块,用于在所述第一图像信息和所述第一模板图像信息匹配时,根据所述第一图像信息,更新所述预存储的模板图像信息。
  11. 根据权利要求10所述的模板采集装置,其特征在于,所述获取模块,还用于从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第一图像信息的所属域相同;
    在所述第二模板图像信息和所述第一图像信息不匹配时,从所述预存储的模板图像信息中获取所述第一模板图像信息。
  12. 根据权利要求10或11所述的模板采集装置,其特征在于,
    所述获取模块,还用于获取采集到的第二图像信息;
    所述获取模块,还用于从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第二图像信息的所属域相同;
    所述处理模块,还用于在所述第二图像信息和所述第二模板图像信息匹配,且所 述第二模板图像信息的注册时间超过时间阈值时,根据所述第二图像信息,更新所述预存储的模板图像信息。
  13. 根据权利要求10或11所述的模板采集装置,其特征在于,所述获取模块,还用于获取采集到的第二图像信息;
    所述获取模块,还用于从所述预存储的模板图像信息中获取第二模板图像信息,所述第二模板图像信息与所述第二图像信息的所属域相同,
    所述处理模块,还用于在所述第二图像信息和所述第二模板图像信息的匹配度大于第一阈值且小于第二阈值时,根据所述第二图像信息,更新所述预存储的模板图像信息。
  14. 根据权利要求12所述的模板采集装置,其特征在于,
    在所述第一图像信息和所述第一模板图像信息的匹配度高于第三阈值时,所述第一图像信息和所述第一模板图像信息匹配;
    在所述第二图像信息和所述第二模板图像信息的匹配度高于第四阈值时,所述第二图像信息和所述第二模板图像信息匹配;
    其中,所述第三阈值与所述第四阈值不同。
  15. 根据权利要求10-14中任一项所述的模板采集装置,其特征在于,在所述第一图像信息和所述第一模板图像信息的匹配度高于第三阈值时,所述第一图像信息和所述第一模板图像信息匹配,
    所述获取模块,还用于获取采集到的第三图像信息;
    所述获取模块,还用于从所述预存储的模板图像信息中获取第三模板图像信息,所述第三模板图像信息与所述第三图像信息的所属域不同,且与所述第一模板图像信息的所属域不同,
    所述处理模块,还用于在所述第三图像信息和所述第三模板图像信息匹配时,根据所述第三图像信息,更新所述预存储的模板图像信息,
    其中,在所述第三图像信息和所述第三模板图像信息的匹配度高于所述第三阈值时,所述第三图像信息和所述第三模板图像信息匹配。
  16. 根据权利要求10-15中任一项所述的模板采集装置,其特征在于,所述所属域用于指示图像信息的格式、颜色或来源中的一个或多个特征。
  17. 根据权利要求16所述的模板采集装置,其特征在于,
    所述所属域包括RGB域和IR域。
  18. 根据权利要求10-17中任一项所述的模板采集装置,其特征在于,所述处理模块,还用于向用户发送提示信息,其中,所述提示信息,用于请求用户同意更新所述预存储的模板图像信息,或者通知用户所述预存储的模板图像信息已被更新。
  19. 一种模板采集系统,其特征在于,包括:模板采集装置,以及服务器;
    所述模板采集装置,用于发送采集到的第一图像信息;
    所述服务器,用于接收来自所述模板采集装置的所述第一图像信息;
    所述服务器,还用于从预存储的模板图像信息中获取第一模板图像信息,所述第一模板图像信息与所述第一图像信息的所属域不同;
    所述服务器,还用于在所述第一图像信息和所述第一模板图像信息匹配时,根据所述第一图像信息,更新所述预存储的模板图像信息。
  20. 一种电子装置,其特征在于,包括处理器和存储器,
    其中,所述存储器存储有程序指令,所述程序指令当被所述处理器执行时使得所述处理器执行权利要求1-9中任一项所述的模板采集方法。
  21. 一种电子装置,其特征在于,包括处理器和接口电路,其中,所述处理器通过所述接口电路与存储器耦合,所述处理器用于执行所述存储器中的程序代码,以使得所述处理器执行权利要求1-9中任一项所述的模板采集方法。
  22. 一种计算机存储介质,其特征在于,包括计算机指令,当该计算机指令在电子设备上运行时,使得该电子设备执行权利要求1-9中任一项所述的模板采集方法。
  23. 一种计算机程序产品,其特征在于,当该计算机程序产品在计算机上运行时,使得该计算机执行权利要求1-9中任一项所述的模板采集方法。
PCT/CN2021/089692 2021-04-25 2021-04-25 模板采集方法、装置及系统 WO2022226699A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180001486.9A CN113302623A (zh) 2021-04-25 2021-04-25 模板采集方法、装置及系统
PCT/CN2021/089692 WO2022226699A1 (zh) 2021-04-25 2021-04-25 模板采集方法、装置及系统
EP21938190.2A EP4328796A4 (en) 2021-04-25 2021-04-25 METHOD, APPARATUS AND SYSTEM FOR TEMPLATE COLLECTION

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089692 WO2022226699A1 (zh) 2021-04-25 2021-04-25 模板采集方法、装置及系统

Publications (1)

Publication Number Publication Date
WO2022226699A1 true WO2022226699A1 (zh) 2022-11-03

Family

ID=77331315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089692 WO2022226699A1 (zh) 2021-04-25 2021-04-25 模板采集方法、装置及系统

Country Status (3)

Country Link
EP (1) EP4328796A4 (zh)
CN (1) CN113302623A (zh)
WO (1) WO2022226699A1 (zh)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657222A (zh) * 2017-09-12 2018-02-02 广东欧珀移动通信有限公司 人脸识别方法及相关产品
CN109241888A (zh) * 2018-08-24 2019-01-18 北京旷视科技有限公司 神经网络训练与对象识别方法、装置和系统及存储介质
CN110008903A (zh) * 2019-04-04 2019-07-12 北京旷视科技有限公司 人脸识别方法、装置、系统、存储介质和人脸支付方法
CN110458072A (zh) * 2019-08-01 2019-11-15 珠海格力电器股份有限公司 指静脉识别方法、系统、智能门锁及计算机可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4328796A4 *

Also Published As

Publication number Publication date
EP4328796A1 (en) 2024-02-28
EP4328796A4 (en) 2024-06-26
CN113302623A (zh) 2021-08-24

Similar Documents

Publication Publication Date Title
US11163981B2 (en) Periocular facial recognition switching
US10430645B2 (en) Facial recognition operations based on pose
US11693937B2 (en) Automatic retries for facial recognition
KR102299847B1 (ko) 얼굴 인증 방법 및 장치
US10503992B2 (en) Process for updating templates used in facial recognition
US10990806B2 (en) Facial image processing method, terminal, and data storage medium
WO2020135095A1 (zh) 定点授权的身份识别方法、装置及服务器
US11080557B2 (en) Image authentication apparatus, method, and storage medium using registered image
US11113510B1 (en) Virtual templates for facial recognition
JP2017091520A (ja) ユーザ認証のための登録データベースの適応的更新方法及び装置
US10769415B1 (en) Detection of identity changes during facial recognition enrollment process
CN113366487A (zh) 基于表情组别的操作确定方法、装置及电子设备
WO2020135115A1 (zh) 近场信息认证的方法、装置、电子设备和计算机存储介质
WO2020135081A1 (zh) 基于动态栅格化管理的身份识别方法、装置及服务器
US11503021B2 (en) Mobile enrollment using a known biometric
KR101724971B1 (ko) 광각 카메라를 이용한 얼굴 인식 시스템 및 그를 이용한 얼굴 인식 방법
CN106056083B (zh) 一种信息处理方法及终端
TW202232367A (zh) 人臉識別方法、裝置、設備及存儲介質
CN113837006B (zh) 一种人脸识别方法、装置、存储介质及电子设备
JP2005259049A (ja) 顔面照合装置
EP3785166A1 (en) Multiple enrollments in facial recognition
WO2022226699A1 (zh) 模板采集方法、装置及系统
US10311290B1 (en) System and method for generating a facial model
AU2020100218B4 (en) Process for updating templates used in facial recognition
KR20210050649A (ko) 모바일 기기의 페이스 인증 방법

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938190

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2021938190

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021938190

Country of ref document: EP

Effective date: 20231122