WO2020151315A1 - Method and device for generating face recognition fusion model - Google Patents

Method and device for generating face recognition fusion model Download PDF

Info

Publication number
WO2020151315A1
Authority
WO
WIPO (PCT)
Prior art keywords
face recognition
confidence
recognition models
fusion model
models
Prior art date
Application number
PCT/CN2019/117477
Other languages
French (fr)
Chinese (zh)
Inventor
Dai Lei (戴磊)
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd.
Publication of WO2020151315A1 publication Critical patent/WO2020151315A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Definitions

  • This application relates to the technical field of face recognition, and specifically to a method and device for generating a face recognition fusion model.
  • At present, the common method of improving face recognition accuracy exploits the fact that different face recognition models have different recognition characteristics: the outputs of two face recognition models are added half and half to obtain a face recognition fusion model.
  • However, the inventor realized that if the recognition accuracies of the two face recognition models differ greatly, the fusion model obtained in this way may have a higher false-recognition rate than the more accurate of the two models used alone.
  • This application provides a method for generating a face recognition fusion model, which includes the following steps: using two face recognition models, obtain the feature vector of each face image in a positive sample and in a negative sample; from the feature vectors, obtain for each of the two face recognition models the included angles between the feature vectors of the face images in the positive and negative samples, yielding the confidence of the corresponding face recognition model; compare the confidences of the two face recognition models and, according to the comparison result, obtain a combination of weight values for the two models; and compute the weights of the two face recognition models from the combination of weight values to determine the face recognition fusion model.
  • The present application also provides a device for generating a face recognition fusion model, which includes: a feature vector obtaining module, configured to use two face recognition models to obtain the feature vector of each face image in a positive sample and in a negative sample; a confidence acquisition module, configured to obtain, from the feature vectors, the included angles between each model's feature vectors of the face images in the positive and negative samples, yielding the confidence of the corresponding face recognition model; a confidence comparison module, configured to compare the confidences of the two face recognition models and obtain a combination of weight values for the two models according to the comparison result; and a face recognition fusion model determining module, configured to compute the weights of the two face recognition models from the combination of weight values to determine the face recognition fusion model.
  • The present application also provides a computer device, which includes: one or more processors; a memory; and one or more computer programs stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to execute a method for generating a face recognition fusion model, where the method includes: using two face recognition models, obtain the feature vector of each face image in the positive sample and in the negative sample; from the feature vectors, obtain for each of the two face recognition models the included angles between the feature vectors of the face images in the positive and negative samples, yielding the confidence of the corresponding face recognition model; compare the confidences of the two face recognition models and obtain a combination of weight values for the two models according to the comparison result; and compute the weights of the two face recognition models from the combination of weight values to determine the face recognition fusion model.
  • The present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, a method for generating a face recognition fusion model is realized, where the method includes the following steps: using two face recognition models, obtain the feature vector of each face image in the positive sample and in the negative sample; from the feature vectors, obtain for each of the two face recognition models the included angles between the feature vectors of the face images in the positive and negative samples, yielding the confidence of the corresponding face recognition model; compare the confidences of the two face recognition models and obtain a combination of weight values for the two models according to the comparison result; and compute the weights of the two face recognition models from the combination of weight values to determine the face recognition fusion model.
  • The above method and device for generating a face recognition fusion model overcome the failure of the prior art to consider the difference in recognition accuracy between two face recognition models: based on the comparison of the confidences of the two face recognition models, weight values of different sizes are assigned to the two models, ensuring that the face recognition fusion model generated from the two models achieves higher face recognition accuracy than a single face recognition model.
  • FIG. 1 is a flowchart of a method for generating a face recognition fusion model according to an embodiment of the present application;
  • FIG. 2 is a flowchart of a method for generating a face recognition fusion model according to another embodiment of the present application;
  • FIG. 3 is a schematic diagram of an apparatus for generating a face recognition fusion model according to an embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 1 is a flowchart of a method for generating a face recognition fusion model according to an embodiment. The method includes the following steps:
  • S110: Use two face recognition models to obtain the feature vector of each face image in the positive sample and the negative sample, respectively.
  • A positive sample is a set of collected face images that belong to the same person; a negative sample is a set of collected face images that do not belong to the same person.
  • The confidence reflects the recognition accuracy of the corresponding face recognition model; in this embodiment, it measures the model's ability to separate positive samples from negative samples. Specifically, the higher the degree of separation between the positive and negative samples achieved by a face recognition model, the higher its confidence; and the higher the confidence of a face recognition model, the higher its face recognition accuracy.
  • The server uses the positive samples and negative samples as a training set to calculate the confidences of the two face recognition models, and thereby obtains the face recognition accuracy of each of the two models.
  • the two face recognition models are used to generate a face recognition fusion model.
  • the two face recognition models are defined as the face recognition model A and the face recognition model B respectively to distinguish them.
  • The confidence of face recognition model A is denoted S_A, and the confidence of face recognition model B is denoted S_B.
  • the face recognition model A is used to recognize each image in all positive samples and each image in negative samples. When the face recognition model A recognizes each image, it will get the corresponding feature vector. Corresponding to each positive sample or negative sample, a corresponding number of feature vectors are obtained according to the number of positive and negative face images.
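A minimal sketch of this feature-extraction step, assuming each face recognition model exposes an embedding function that maps one image to one feature vector (the function and variable names below are illustrative, not taken from the application):

```python
import numpy as np

def extract_features(model_embed, images):
    """Run one face recognition model over a sample's images and return
    one feature vector per image, matching the number of face images."""
    return [np.asarray(model_embed(img), dtype=float) for img in images]

# Toy stand-in for a real model: flatten the image into a vector.
toy_model_a = lambda img: np.ravel(img)
positive_sample = [np.ones((2, 2)), np.ones((2, 2))]  # two images, same person
vectors = extract_features(toy_model_a, positive_sample)
```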
  • The included angles between the respective feature vectors are then obtained. The smaller the angle formed by the feature vectors of a positive sample, the higher the confidence of the face recognition model A; otherwise, the lower its confidence. Conversely, the larger the angle formed by the feature vectors of a negative sample, the higher the confidence of the face recognition model A; otherwise, the lower its confidence.
  • In this embodiment, both the positive sample and the negative sample include two face images, so each included angle is formed by the two corresponding feature vectors. The confidence of the corresponding face recognition model is calculated as the inner product of the two feature vectors after normalization.
  • The confidence S_B of the face recognition model B is obtained in the same way.
  • Generally, the included angle of the feature vectors obtained by the face recognition model A for a positive sample should be smaller than the included angle of the feature vectors obtained for a negative sample. If the separation between the positive-sample angle and the negative-sample angle is low, or the positive-sample angle is even greater than the negative-sample angle, the image recognition accuracy of the face recognition model A is low; conversely, the greater the separation, the higher the image recognition accuracy of the face recognition model A.
  • the number of positive samples is 10,000
  • the number of negative samples is 10,000.
  • The above numbers of positive and negative samples are chosen only to obtain as many test samples as possible; other sample sizes may be used.
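The application does not fix an exact confidence formula, but one plausible reading of the separation criterion above is the gap between the average positive-pair similarity and the average negative-pair similarity over the test samples. A sketch under that assumption (function name and toy data are hypothetical):

```python
import numpy as np

def model_confidence(pos_cosines, neg_cosines):
    """Confidence of one face recognition model, taken here as the gap
    between the mean cosine over positive pairs (same person, should be
    near 1) and the mean cosine over negative pairs (different people,
    should be small).  A larger gap means better separation."""
    return float(np.mean(pos_cosines) - np.mean(neg_cosines))

# In this toy data, model A separates the samples better than model B:
s_a = model_confidence([0.95, 0.92, 0.97], [0.10, 0.05, 0.12])
s_b = model_confidence([0.80, 0.75, 0.85], [0.40, 0.35, 0.45])
```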
  • To further verify the accuracy of the corresponding face recognition models, before the confidences of the two face recognition models are separately calculated in step S110, one face image of the positive sample and one face image of the negative sample may belong to the same person; the confidences of the two face recognition models on the positive sample and the negative sample are then compared.
  • In this way, the verification basis of the two face recognition models, namely the positive and negative samples, is unified, making it easier to measure the angles between the feature vectors corresponding to the face images and thus to verify the accuracy of the corresponding face recognition model.
  • The server obtains the confidence results of the two face recognition models from the preceding steps and compares them. According to the comparison result, different weight values are assigned to the two face recognition models.
  • The weight value a assigned to the face recognition model A is higher than the weight value b assigned to the face recognition model B.
  • The weight values a and b form the combination (a, b) for the two face recognition models.
  • The weight values a and b satisfy the relationship a + b = 1, where a ∈ [0,1] and b ∈ [0,1].
  • S140: Compute the weights of the two face recognition models according to the combination of weight values, and determine the face recognition fusion model.
  • The server substitutes the values of the combination (a, b) into the weight calculation of the two face recognition models to obtain the fusion model of the two face recognition models.
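The weight calculation here can be read as a weighted sum of the two models' similarity scores under the combination (a, b). A minimal sketch under that assumption, with cosine similarities standing in for the model outputs:

```python
def fused_similarity(cos_a, cos_b, a, b):
    """Fused similarity score for one image pair: a weighted sum of the
    two models' cosine similarities, where (a, b) is the weight-value
    combination and a + b = 1."""
    assert abs(a + b - 1.0) < 1e-9, "weights must sum to 1"
    return a * cos_a + b * cos_b

# The more confident model (here A) receives the larger weight, e.g. (0.7, 0.3):
score = fused_similarity(0.9, 0.6, 0.7, 0.3)
```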
  • The above method for generating a face recognition fusion model overcomes the failure of the prior art to consider the difference in recognition accuracy between the two face recognition models: based on the comparison of the confidences of the two face recognition models, weight values of different sizes are assigned to the two models, ensuring that the face recognition fusion model generated from the two models achieves higher face recognition accuracy than a single face recognition model.
  • In one embodiment, this step includes: presetting combinations of weight values for the two face recognition models according to the result of comparing the confidences of the two models.
  • For the weight-value combinations (a, b) of the two face recognition models, different combinations may first be set, and the best combination of weight values is obtained after calculation and comparison.
  • On the basis of the combinations of weight values set for the two face recognition models above, step S140 may proceed as shown in FIG. 2, a flowchart of another embodiment of the method for generating a face recognition fusion model, which includes the following steps:
  • The face recognition fusion model with the highest confidence within this interval is compared against the face recognition fusion models built from the other (a, b) weight-value combinations, and so on.
  • The face recognition fusion model with the highest confidence is thereby obtained, which is the face recognition fusion model with the highest accuracy.
  • The above step S142 may further include: dividing the value ranges of the two weight values, subject to the fixed value of their sum, into equal scales, obtaining the confidence corresponding to each division point, and taking the highest confidence after comparison.
  • Although the time complexity of each individual evaluation above is constant, O(1), the search over weight-value combinations requires a certain amount of calculation; however, this kind of test is generally performed offline and imposes no computational burden on the running server.
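The iterative equal-scale search described above (step S142) can be sketched as follows. Here `evaluate_confidence(a)` is a hypothetical helper standing in for building the fusion with weights (a, 1 - a) and measuring its confidence on the positive and negative samples; the step and round counts are illustrative, not from the application:

```python
def best_weight(evaluate_confidence, steps=10, rounds=3):
    """Coarse-to-fine search for the weight a (with b = 1 - a) that
    maximizes the fused model's confidence.  Each round divides the
    current interval into equal scales, keeps the best division point,
    and refines the interval around it -- the iterative equal-scale
    division described above."""
    lo, hi = 0.0, 1.0
    best_a = lo
    for _ in range(rounds):
        step = (hi - lo) / steps
        candidates = [lo + i * step for i in range(steps + 1)]
        # Keep the candidate weight whose fusion scores highest.
        best_a = max(candidates, key=evaluate_confidence)
        # Narrow the search interval around the current best point.
        lo, hi = max(0.0, best_a - step), min(1.0, best_a + step)
    return best_a
```

Because the search only ever evaluates `steps + 1` points per round, refining for a few rounds is far cheaper than one dense sweep at the final resolution.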
  • In step S140, the combination of weight values (a, b) corresponding to the determined face recognition fusion model is obtained accordingly.
  • the method for generating a face recognition fusion model provided in this application can be completed through a terminal.
  • The terminal divides the face image data in its memory and/or the face image data captured on site into positive samples and negative samples according to the user's labels. Then, based on the calculated confidences of the two face recognition models, combinations of corresponding weight values are formed, and after calculation and comparison, the face recognition fusion model with the highest confidence is obtained.
  • The terminal then responds to the user's face recognition request according to this face recognition fusion model and completes the corresponding face recognition task.
  • an embodiment of the present application also provides an apparatus for generating a face recognition fusion model, as shown in FIG. 3, including:
  • the feature vector obtaining module 310 is configured to use the two face recognition models to obtain the feature vector of each face image in the positive sample and the negative sample respectively;
  • The confidence acquisition module 320 is configured to obtain, from the feature vectors, the included angles between each face recognition model's feature vectors of the face images in the positive sample and the negative sample, to obtain the confidence of the corresponding face recognition model;
  • the confidence comparison module 330 is configured to compare the confidence results of the two face recognition models, and obtain a combination of weight values of the two face recognition models according to the comparison results;
  • the face recognition fusion model determination module 340 is configured to calculate the weights of the two face recognition models according to the combination of the weight values to determine the face recognition fusion model.
  • FIG. 4 is a schematic diagram of the internal structure of a computer device in an embodiment.
  • the computer device includes a processor 410, a storage medium 420, a memory 430, and a network interface 440 connected through a system bus.
  • the storage medium 420 of the computer device stores an operating system, a database, and computer-readable instructions.
  • the database may store control information sequences.
  • When executed by the processor 410, the computer-readable instructions can cause the processor 410 to implement a method for generating a face recognition fusion model; the processor 410 can also implement the functions of the feature vector obtaining module 310, the confidence acquisition module 320, the confidence comparison module 330, and the face recognition fusion model determination module 340 in the device for generating a face recognition fusion model of the embodiment shown in FIG. 3.
  • the processor 410 of the computer device is used to provide calculation and control capabilities, and supports the operation of the entire computer device.
  • The memory 430 of the computer device may store computer-readable instructions, and when the computer-readable instructions are executed by the processor 410, the processor 410 can be caused to execute a method for generating a face recognition fusion model.
  • the network interface 440 of the computer device is used to connect and communicate with the terminal.
  • The terminal can be any terminal device, including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like. Taking a mobile phone as an example of the terminal:
  • FIG. 5 shows a block diagram of a part of the structure of a mobile phone related to a terminal provided in an embodiment of the present application.
  • The mobile phone includes: a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (Wi-Fi) module 570, a processor 580, and a power supply 590.
  • the memory 520 may be used to store software programs and modules.
  • the processor 580 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 520.
  • the memory 520 may mainly include a storage program area and a storage data area.
  • The storage program area may store an operating system and application programs required by at least one function (such as a voiceprint playback function, an image playback function, etc.); the storage data area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone.
  • The memory 520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 530 can be used to receive inputted digital or character information, obtain and input a face image, and generate signal input related to user settings and function control of the mobile phone.
  • the input unit 530 may include a touch panel 531 and other input devices 532.
  • The touch panel 531, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 531 may include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 580, and can receive and execute commands sent by the processor 580.
  • the touch panel 531 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 530 may also include other input devices 532.
  • the other input device 532 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
  • the display unit 540 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 540 may include a display panel 541.
  • the display panel 541 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), etc.
  • The touch panel 531 can cover the display panel 541. When the touch panel 531 detects a touch operation on or near it, it transmits the operation to the processor 580 to determine the type of the touch event, and the processor 580 then provides corresponding visual output on the display panel 541 according to the type of the touch event.
  • In FIG. 5, the touch panel 531 and the display panel 541 are used as two independent components to implement the input and output functions of the mobile phone, but in some embodiments, the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the mobile phone.
  • The processor 580 is the control center of the mobile phone. It connects the various parts of the entire mobile phone through various interfaces and lines, and executes the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile phone as a whole.
  • The processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may also not be integrated into the processor 580.
  • The processor 580 included in the terminal also has the following functions: obtaining positive samples and negative samples composed of face image comparison pairs and using them to calculate the confidences of the two face recognition models respectively; comparing the confidence results of the two face recognition models and obtaining the combination of weight values of the two face recognition models according to the comparison result; and computing the weights of the two face recognition models according to the combination of weight values to determine the face recognition fusion model.
  • The processor 580 also has the function of executing the method for generating a face recognition fusion model in any of the above embodiments, where the method includes: using two face recognition models, obtaining the feature vector of each face image in the positive sample and in the negative sample; from the feature vectors, obtaining the included angles between each model's feature vectors of the face images in the positive and negative samples to obtain the confidence of the corresponding face recognition model; comparing the confidence results of the two face recognition models and obtaining the combination of weight values of the two face recognition models according to the comparison result; and computing the weights of the two face recognition models according to the combination of weight values to determine the face recognition fusion model. Details are not repeated here.
  • the present application also proposes a storage medium storing computer-readable instructions.
  • the storage medium is a volatile storage medium or a non-volatile storage medium.
  • When the computer-readable instructions stored in the storage medium are executed by one or more processors, the one or more processors are caused to perform the following steps: obtain positive samples and negative samples composed of face image comparison pairs and calculate the confidences of the two face recognition models; compare the confidence results of the two face recognition models and obtain the combination of weight values of the two face recognition models according to the comparison result; and compute the weights of the two face recognition models according to the combination of weight values to determine the face recognition fusion model.
  • The method for generating a face recognition fusion model overcomes the failure of the prior art to consider the difference in recognition accuracy between two face recognition models: based on the result of comparing the confidences of the two face recognition models, weight values of different sizes are assigned to the two models, ensuring that the face recognition fusion model generated from the two models achieves higher face recognition accuracy than a single face recognition model.
  • In addition, this application determines the face recognition fusion model with the highest confidence by calculating and comparing different combinations of the weight values; this fusion model has the highest face recognition accuracy.
  • The value range of the weight values is divided into equal scales to obtain the combination of two weight values with the highest confidence within the set interval; the range around that best combination is then further divided into equal scales, and so on, until the face recognition fusion model with the highest face recognition accuracy is obtained.
  • In summary, this application incorporates the face recognition accuracy of the face recognition models to be fused into the generation of the face recognition fusion model, which solves the prior-art problem that the generated fusion model may have a higher false-recognition rate than the most accurate recognition model used alone.
  • The computer program may be stored in a computer-readable storage medium, and when executed, it may include the procedures of the above method embodiments.
  • the aforementioned storage medium may be a storage medium such as a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

Abstract

A method and device for generating a face recognition fusion model, relating to the technical field of face recognition. The method comprises: using two face recognition models to respectively obtain feature vectors of face images in a positive sample and in a negative sample (S110); respectively obtaining, according to the feature vectors, included angles between the feature vectors of the face images in the positive sample and in the negative sample targeted by the two face recognition models to obtain confidences corresponding to the face recognition models (S120); comparing results of the confidences of the two face recognition models to obtain a combination of weight values of the two face recognition models according to a comparison result (S130); and calculating a weight of the two face recognition models according to the combination of the weight values to determine a face recognition fusion model (S140), so as to ensure that the face recognition accuracy of the face recognition fusion model generated by using the two face recognition models is higher than the accuracy of a single face recognition model.

Description

人脸识别融合模型的生成方法和装置Method and device for generating face recognition fusion model
本申请要求于2019年1月25日提交中国专利局、申请号为201910075750.8,发明名称为“人脸识别融合模型的生成方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of a Chinese patent application filed with the Chinese Patent Office on January 25, 2019, the application number is 201910075750.8, and the invention title is "Method and Apparatus for Generating Face Recognition Fusion Model", the entire content of which is incorporated by reference In this application.
技术领域Technical field
本申请涉及人脸识别技术领域,具体而言,本申请涉及一种人脸识别融合模型的生成方法和装置。This application relates to the technical field of face recognition. Specifically, this application relates to a method and device for generating a face recognition fusion model.
背景技术Background technique
随着人脸识别技术在金融加密解锁领域、安检领域、家居安保领域等安全领域的应用,用户对人脸识别的精确度要求也再不断提高。目前,提高人脸识别精确度的方法是出于人脸识别的不同模型的识别特征不同,通过两个人脸识别模型对半相加,得到人脸识别的融合模型。With the application of face recognition technology in security fields such as financial encryption unlocking, security inspection, and home security, users have continuously improved the accuracy of face recognition. At present, the method to improve the accuracy of face recognition is because of the different recognition characteristics of different models of face recognition. By adding two face recognition models in half, a fusion model of face recognition is obtained.
但发明人意识到上述得到的人脸识别的融合模型,如果两个人脸识别模型的识别精确度相距较大,所生成的融合模型可能存在比单独使用识别精确度最高的识别模型的误识别率高的缺陷。However, the inventor realizes that the face recognition fusion model obtained above, if the recognition accuracy of the two face recognition models are far apart, the generated fusion model may have a false recognition rate than the recognition model with the highest recognition accuracy alone. High defects.
发明内容Summary of the invention
为克服以上技术问题,特别是现有技术中生成的融合模型可能存在比单独使用识别精准度最高的识别模型的误识别率高的缺陷,特提出以下技术方案:In order to overcome the above technical problems, especially the fusion model generated in the prior art may have a higher false recognition rate than the recognition model with the highest recognition accuracy alone, the following technical solutions are proposed:
第一方面,本申请提供一种人脸识别融合模型的生成方法,其包括以下步骤:利用两个人脸识别模型,分别求取正样本和负样本中每个人脸图像的特征向量;根据所述特征向量,分别求取所述两个人脸识别模型对所述正样本和负样本中的人脸图像的特征向量的夹角,得到对应人脸识别模型的置信度;将所述两个人脸识别模型的置信度的结果进行比较,根据比 较结果得到两个人脸识别模型的权重值的组合;根据所述权重值的组合,对所述两个人脸识别模型的权重计算,确定所述人脸识别融合模型。In the first aspect, this application provides a method for generating a face recognition fusion model, which includes the following steps: using two face recognition models to obtain the feature vector of each face image in a positive sample and a negative sample; Feature vectors, respectively obtaining the angles between the two face recognition models and the feature vectors of the face images in the positive and negative samples to obtain the confidence of the corresponding face recognition models; and recognizing the two faces The results of the model’s confidence are compared, and the combination of the weight values of the two face recognition models is obtained according to the comparison results; the weights of the two face recognition models are calculated according to the combination of the weight values to determine the face recognition Fusion model.
In a second aspect, this application further provides an apparatus for generating a face recognition fusion model, comprising: a feature vector obtaining module, configured to use two face recognition models to obtain the feature vector of each face image in a positive sample and a negative sample, respectively; a confidence obtaining module, configured to obtain, from the feature vectors, the angle between the feature vectors of the face images in the positive and negative samples for each of the two face recognition models, thereby obtaining the confidence of the corresponding model; a confidence comparison module, configured to compare the confidence results of the two face recognition models and obtain a combination of weight values for the two models from the comparison result; and a face recognition fusion model determination module, configured to compute the weights of the two face recognition models according to the combination of weight values and determine the face recognition fusion model.
In a third aspect, this application further provides a computer device, comprising: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to execute a method for generating a face recognition fusion model, the method comprising: using two face recognition models, obtaining the feature vector of each face image in a positive sample and a negative sample, respectively; from the feature vectors, obtaining for each of the two face recognition models the angle between the feature vectors of the face images in the positive and negative samples, thereby obtaining the confidence of the corresponding model; comparing the confidence results of the two face recognition models and obtaining a combination of weight values for the two models from the comparison result; and computing the weights of the two face recognition models according to the combination of weight values to determine the face recognition fusion model.
In a fourth aspect, this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements a method for generating a face recognition fusion model, the method comprising the following steps: using two face recognition models, obtaining the feature vector of each face image in a positive sample and a negative sample, respectively; from the feature vectors, obtaining for each of the two face recognition models the angle between the feature vectors of the face images in the positive and negative samples, thereby obtaining the confidence of the corresponding model; comparing the confidence results of the two face recognition models and obtaining a combination of weight values for the two models from the comparison result; and computing the weights of the two face recognition models according to the combination of weight values to determine the face recognition fusion model.
The method and apparatus for generating a face recognition fusion model provided by this application overcome the prior art's failure to account for the difference in recognition accuracy between the two face recognition models: based on the result of comparing the magnitudes of the confidences of the two models, weight values of different magnitudes are assigned to the two models, ensuring that the face recognition fusion model generated from the two models achieves higher face recognition accuracy than either model alone.
Description of the Drawings
Fig. 1 is a flowchart of a method for generating a face recognition fusion model according to an embodiment of this application;
Fig. 2 is a flowchart of a method for generating a face recognition fusion model according to another embodiment of this application;
Fig. 3 is a schematic diagram of an apparatus for generating a face recognition fusion model according to an embodiment of this application;
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of this application;
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description
To solve the above problems, this application provides a method for generating a face recognition fusion model. Reference may be made to Fig. 1, a flowchart of the method for generating a face recognition fusion model according to an embodiment; the method includes the following steps:
S110. Using two face recognition models, obtain the feature vector of each face image in the positive sample and the negative sample, respectively.
In this step, different face images are collected and organized into positive samples and negative samples, which serve as the training set for generating the face recognition fusion model.
A positive sample is a set of collected face images that belong to the same person; a negative sample is a set of collected face images that do not belong to the same person.
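As an illustrative aside (not part of the application; the data layout and function name are hypothetical), positive pairs (same person) and negative pairs (different people) could be assembled from labeled face images along these lines:

```python
# Build positive pairs (two images of the same person) and negative pairs
# (images of two different people) from a mapping person_id -> image ids.
# All names here are illustrative, not taken from the application.
from itertools import combinations

def build_pairs(images_by_person):
    positives, negatives = [], []
    # Positive pairs: every unordered pair of images of one person.
    for person, images in images_by_person.items():
        positives.extend(combinations(images, 2))
    # Negative pairs: one image from each of two different people.
    for p1, p2 in combinations(list(images_by_person), 2):
        for img1 in images_by_person[p1]:
            for img2 in images_by_person[p2]:
                negatives.append((img1, img2))
    return positives, negatives
```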
The confidence reflects the recognition accuracy of the corresponding face recognition model; in this embodiment, it measures the model's ability to judge the similarity of the positive and negative samples. Specifically, the greater the separation the face recognition model achieves between positive and negative samples, the higher its confidence; and the higher a model's confidence, the higher its face recognition accuracy.
In this step, the server uses the positive and negative samples as the training set to compute the confidences of the two face recognition models, thereby obtaining the face recognition accuracy of each. The two face recognition models are the ones used to generate the face recognition fusion model.
In the description of the specific embodiments of this application, the two face recognition models are denoted face recognition model A and face recognition model B to distinguish them. Correspondingly, the confidence of model A is denoted S_A, and the confidence of model B is denoted S_B.
Taking face recognition model A as an example: model A is used to recognize each image in all the positive samples and each image in all the negative samples. When model A recognizes an image, it produces a corresponding feature vector, so for each positive or negative sample a number of feature vectors is obtained equal to the number of face images in that sample.
S120. From the feature vectors, obtain for each of the two face recognition models the angle between the feature vectors of the face images in the positive and negative samples, thereby obtaining the confidence of the corresponding model.
From the feature vectors of the face images in the positive and negative samples obtained in step S110 above, the angle between the respective feature vectors is computed. The smaller the angle formed by the feature vectors of a positive sample, the higher the confidence of face recognition model A; otherwise, its confidence is lower. Conversely, the larger the angle formed by the feature vectors of a negative sample, the higher the confidence of model A; otherwise, its confidence is lower.
If the positive sample and the negative sample each contain two face images, each angle is formed by the two corresponding feature vectors. The confidence score of the corresponding face recognition model is computed as the inner product of the two feature vectors after normalization. Taking model A as an example, the formula is: S_A = (F1/|F1|)·(F2/|F2|), where F1 and F2 are the two feature vectors. The confidence S_B of model B is computed in the same way.
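The normalized inner product above (the cosine of the angle between the two feature vectors) can be sketched as follows; a minimal NumPy illustration, with the function name ours rather than the application's:

```python
import numpy as np

def pair_score(f1, f2):
    """Normalized inner product S = (F1/|F1|) . (F2/|F2|), i.e. the cosine of
    the angle between two feature vectors. A smaller angle (score closer to 1)
    means the two face images are judged more similar."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return float(np.dot(f1 / np.linalg.norm(f1), f2 / np.linalg.norm(f2)))
```

For a positive pair the score should be close to 1 (small angle), and for a negative pair it should be markedly lower (large angle).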
The angle that face recognition model A obtains between the feature vectors of a positive sample should be smaller than the angle it obtains for a negative sample. If the separation between the positive-sample angle and the negative-sample angle is small, or the positive-sample angle is even larger than the negative-sample angle, model A's image recognition accuracy is low; otherwise, its image recognition accuracy is high.
To accurately test the recognition accuracy of face recognition models A and B, multiple groups of positive and negative samples are used for testing. In this embodiment, the number of positive samples is 10,000 and the number of negative samples is 10,000. Note, however, that these numbers are chosen only to take as many test samples as possible; the numbers of positive and negative samples may differ.
To further verify the accuracy of the corresponding face recognition model, before the step of separately computing the confidences of the two face recognition models in step S110, for the case where a face image of the positive sample and one of the face images of the negative sample belong to the same person, the confidences of the two face recognition models on the positive sample and the negative sample are compared respectively.
In this step, the verification basis of the two face recognition models, namely the positive and negative samples, is unified, making it easier to measure the angles between the feature vectors corresponding to the individual face images and thereby verify the accuracy of the corresponding face recognition model.
S130. Compare the confidence results of the two face recognition models and, from the comparison result, obtain a combination of weight values for the two models.
The server takes the confidence results of the two face recognition models obtained in step S110 and compares their magnitudes. According to the comparison result, different weight values are assigned to the two face recognition models.
Specifically, if S_A > S_B, then the weight value a assigned to face recognition model A is higher than the weight value b assigned to face recognition model B. The weight values a and b form the combination (a, b) for the two face recognition models, and satisfy the relation a + b = 1, where a ∈ [0, 1] and b ∈ [0, 1].
S140. Compute the weights of the two face recognition models according to the combination of weight values, and determine the face recognition fusion model.
The server substitutes the values of the combination (a, b) into the weight computation of the two face recognition models, obtaining the fusion model of the two face recognition models.
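A minimal sketch of this weight computation, assuming (as is conventional for such fusion, though the application does not spell out the formula) that the fused similarity score is the weighted sum a·S_A + b·S_B:

```python
def fused_score(score_a, score_b, a, b):
    """Weighted fusion of the two models' similarity scores for one image
    pair, under the constraint a + b = 1 with a, b in [0, 1]. The formula
    a*S_A + b*S_B is an assumption, not quoted from the application."""
    assert abs(a + b - 1.0) < 1e-9 and 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0
    return a * score_a + b * score_b
```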
The method for generating a face recognition fusion model provided by this application overcomes the prior art's failure to account for the difference in recognition accuracy between two face recognition models: based on the result of comparing the magnitudes of the confidences of the two models, weight values of different magnitudes are assigned to the two models, ensuring that the fusion model generated from the two face recognition models achieves higher face recognition accuracy than either model alone.
The step in S130 of obtaining the combination of weight values of the two face recognition models from the comparison result includes: setting the combination of weight values for the two face recognition models according to the result of comparing the magnitudes of their confidences.
Specifically, different combinations (a, b) of weight values may first be set for the two face recognition models, and the best combination of weight values is obtained after computation and comparison.
On the basis of the combinations of weight values set for the two face recognition models as obtained above, for step S140 reference may be made to Fig. 2, a flowchart of a method for generating a face recognition fusion model according to another embodiment, which includes the following steps:
S141. Obtain each combination of weight values, and compute the weights of the two face recognition models.
S142. From the computation results, obtain and compare the confidence of each candidate face recognition fusion model, and determine the face recognition fusion model with the highest confidence.
For the combinations of weight values, for example, first take a = 0.6, b = 0.4 and compare the confidence of the resulting fusion model with that of the fusion model with a = 0.7, b = 0.3, obtaining the fusion model with the higher confidence in this interval. Then the fusion model with the highest confidence so far is compared in turn against the fusion models built from the other (a, b) weight combinations, and so on. Finally, the fusion model with the highest confidence, that is, the fusion model with the highest accuracy, is obtained.
To improve the efficiency of determining the highest-confidence face recognition fusion model and its corresponding best weight combination, step S142 may further include: according to the value ranges of the two weight values and the constraint on their sum, dividing the range into equal ticks, obtaining the corresponding confidence at each tick, and obtaining the highest confidence by comparison.
Specifically, for the combination of weight values, since a ∈ [0, 1], b ∈ [0, 1], and a + b = 1, the problem can be viewed as searching the interval [0, 1] for the value of a that maximizes the AUC of the ROC curve. Assuming the AUC is a unimodal (concave) function of a on [0, 1], the steps to improve efficiency are as follows:
(1) Divide the interval [0, 1] into equal steps of 0.1, test the AUC at each tick point (0, 0.1, 0.2, ..., 0.9, 1), and find the two tick points with the largest values;
(2) Divide the interval between those two tick points into equal steps of 0.01, again test the AUC at each tick point, and find the two tick points with the largest values;
(3) Repeat the above steps until the specified tick precision (for example, 0.001) is reached.
For a fixed test sample set, the time complexity of the above steps is constant, O(1), so a certain amount of computation is required; but this test is generally performed offline and does not burden the server's runtime computation.
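The coarse-to-fine tick search in steps (1)-(3) can be sketched as follows. This assumes the AUC is unimodal in a, and `auc_of` is a placeholder for evaluating the ROC AUC of the fused model at a given weight a (its implementation is outside the scope of this sketch):

```python
def coarse_to_fine_search(auc_of, lo=0.0, hi=1.0, precision=0.001):
    """Search [lo, hi] for the weight a maximizing auc_of(a): repeatedly
    split the interval into ten equal steps, evaluate the AUC at each of
    the eleven tick points, narrow to the two best ticks, and stop once
    the step size falls below the required tick precision."""
    step = (hi - lo) / 10.0
    while step >= precision:
        ticks = [lo + i * step for i in range(11)]
        best_two = sorted(ticks, key=auc_of, reverse=True)[:2]
        lo, hi = min(best_two), max(best_two)
        step = (hi - lo) / 10.0
    return max((lo, hi), key=auc_of)
```

For a unimodal objective, each pass shrinks the search interval tenfold, matching the 0.1 / 0.01 / 0.001 refinement described above.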
After step S140, the combination of weight values (a, b) corresponding to the determined face recognition fusion model is obtained.
The method for generating a face recognition fusion model provided in this application can also be carried out by a terminal. Specifically, the terminal takes face image data from its memory and/or face image data captured on site and, according to the user's labels, sorts it into positive samples and negative samples. Then, based on the computation of the confidences of the two face recognition models, the corresponding combinations of weight values are formed, and after computation and comparison the face recognition fusion model with the highest confidence is obtained. Using this fusion model, the terminal responds to face recognition requests issued by the user through the terminal and completes the corresponding face recognition tasks.
Based on the same inventive concept as the above method for generating a face recognition fusion model, an embodiment of this application further provides an apparatus for generating a face recognition fusion model, as shown in Fig. 3, comprising:
a feature vector obtaining module 310, configured to use the two face recognition models to obtain the feature vector of each face image in the positive sample and the negative sample, respectively;
a confidence obtaining module 320, configured to obtain, from the feature vectors, the angle between the feature vectors of the face images in the positive and negative samples for each of the two face recognition models, thereby obtaining the confidence of the corresponding model;
a confidence comparison module 330, configured to compare the confidence results of the two face recognition models and obtain a combination of weight values for the two models from the comparison result;
a face recognition fusion model determination module 340, configured to compute the weights of the two face recognition models according to the combination of weight values and determine the face recognition fusion model.
Please refer to Fig. 4, a schematic diagram of the internal structure of a computer device in an embodiment. As shown in Fig. 4, the computer device includes a processor 410, a storage medium 420, a memory 430, and a network interface 440 connected through a system bus. The storage medium 420 of the computer device stores an operating system, a database, and computer-readable instructions, and the database may store sequences of control information. When the computer-readable instructions are executed by the processor 410, they cause the processor 410 to implement a method for generating a face recognition fusion model, and the processor 410 can realize the functions of the feature vector obtaining module 310, the confidence obtaining module 320, the confidence comparison module 330, and the face recognition fusion model determination module 340 of the apparatus for generating a face recognition fusion model in the embodiment shown in Fig. 3. The processor 410 of the computer device provides computing and control capabilities and supports the operation of the entire computer device. The memory 430 of the computer device may store computer-readable instructions which, when executed by the processor 410, cause the processor 410 to execute a method for generating a face recognition fusion model. The network interface 440 of the computer device is used to connect and communicate with a terminal. Those skilled in the art will understand that the structure shown in Fig. 4 is only a block diagram of the part of the structure relevant to the solution of this application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
If the computer device is a terminal, as shown in Fig. 5, for ease of description only the parts related to the embodiments of this application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of this application. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like; the following takes a mobile phone as an example:
Fig. 5 shows a block diagram of part of the structure of a mobile phone related to the terminal provided by an embodiment of this application. Referring to Fig. 5, the mobile phone includes: a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a wireless fidelity (Wi-Fi) module 570, a processor 580, and a power supply 590. Those skilled in the art will understand that the mobile phone structure shown in Fig. 5 does not limit the mobile phone, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The components of the mobile phone are described in detail below with reference to Fig. 5:
The memory 520 may be used to store software programs and modules; the processor 580 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a voiceprint playback function, an image playback function, etc.), and the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory 520 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 530 may be used to receive input digital or character information, to acquire and input face images, and to generate signal input related to the user settings and function control of the mobile phone. Specifically, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 531 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 580, and can receive and execute commands sent by the processor 580. In addition, the touch panel 531 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may also include other input devices 532. Specifically, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 540 may be used to display information input by the user or provided to the user, as well as the various menus of the mobile phone. The display unit 540 may include a display panel 541; optionally, the display panel 541 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, it transmits the operation to the processor 580 to determine the type of touch event, and the processor 580 then provides the corresponding visual output on the display panel 541 according to the type of touch event. Although in Fig. 5 the touch panel 531 and the display panel 541 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 531 and the display panel 541 may be integrated to realize the input and output functions of the mobile phone.
The processor 580 is the control center of the mobile phone. It connects the various parts of the entire mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the mobile phone as a whole. Optionally, the processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor need not be integrated into the processor 580.
In the embodiment of the present application, the processor 580 included in the terminal further has the following functions: acquiring and using the positive samples and negative samples of face image comparison pairs to calculate the confidence of each of the two face recognition models; comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result; and performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the face recognition fusion model. That is, the processor 580 is capable of executing the method for generating a face recognition fusion model of any of the above embodiments, where the method includes: using two face recognition models to obtain the feature vector of each face image in the positive samples and negative samples; obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples, to obtain the confidence of the corresponding face recognition model; comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result; and performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the face recognition fusion model. Details are not repeated here.
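As a rough illustration only (not the patent's reference implementation), the angle-based similarity and weighted fusion described above can be sketched as follows. The model interface (a callable returning a feature vector) and the function names are assumptions introduced for this sketch:

```python
import numpy as np

def cosine_score(model, img_a, img_b):
    """Cosine of the angle between the feature vectors a model extracts
    from two face images; a smaller angle means a more likely match."""
    fa, fb = model(img_a), model(img_b)
    return float(np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb)))

def fused_score(model1, model2, w1, w2, img_a, img_b):
    """Weighted fusion of the two models' similarity scores using the
    weight combination (w1, w2) obtained from the confidence comparison."""
    s1 = cosine_score(model1, img_a, img_b)
    s2 = cosine_score(model2, img_a, img_b)
    return w1 * s1 + w2 * s2
```

A fused score above a decision threshold would then be treated as a same-person match, with the more accurate model receiving the larger weight.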
In one embodiment, the present application further proposes a storage medium storing computer-readable instructions, the storage medium being a volatile or non-volatile storage medium. When executed by one or more processors, the computer-readable instructions cause the one or more processors to perform the following steps: acquiring and using the positive samples and negative samples of face image comparison pairs to calculate the confidence of each of the two face recognition models; comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result; and performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the face recognition fusion model.
Taken together, the foregoing embodiments show that the greatest beneficial effects of this application are as follows:
The method for generating a face recognition fusion model provided in this application overcomes the failure of the prior art to account for the difference in recognition accuracy between two face recognition models. Based on the result of comparing the confidence levels of the two face recognition models, weight values of different magnitudes are assigned to the two models, ensuring that the face recognition fusion model generated from the two face recognition models achieves higher face recognition accuracy than either single face recognition model.
This application also provides for calculating and comparing different combinations of the weight values to determine the face recognition fusion model with the highest confidence, so that the resulting fusion model achieves the highest face recognition accuracy. Further, to improve the efficiency of determining the highest-confidence face recognition fusion model and its corresponding optimal weight combination, the weight values are sampled by equal-interval division to find the combination of two weight values with the highest confidence within a set interval; the range around that best combination is then subdivided at equal intervals again, and so on, until the face recognition fusion model with the highest face recognition accuracy is obtained.
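A minimal sketch of this coarse-to-fine search, assuming a scoring function `confidence(w1, w2)` that evaluates a weight combination on the validation samples. The function name, the constraint that the two weights sum to one (the patent also allows a range for the sum), and the refinement factor are assumptions, not taken from the patent:

```python
def best_weights(confidence, lo=0.0, hi=1.0, steps=10, rounds=3):
    """Coarse-to-fine equal-interval search for the weight w1 (with
    w2 = 1 - w1) that maximizes the fusion model's confidence."""
    for _ in range(rounds):
        step = (hi - lo) / steps
        candidates = [lo + i * step for i in range(steps + 1)]
        best = max(candidates, key=lambda w1: confidence(w1, 1.0 - w1))
        # Narrow the interval around the current best and refine again,
        # mirroring the successive equal-interval subdivisions above.
        lo, hi = max(0.0, best - step), min(1.0, best + step)
    return best, 1.0 - best
```

Each round evaluates only `steps + 1` candidates, so three rounds cost far less than exhaustively sampling the full interval at the finest resolution.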
In summary, through the use of the method and device for generating a face recognition fusion model, this application takes the face recognition accuracy of the models to be fused into account when generating the fusion model, resolving the defect in the prior art that a generated fusion model may have a higher false recognition rate than using the most accurate individual recognition model alone.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
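The embodiments above evaluate each model's confidence on positive and negative comparison pairs. The patent does not give a closed-form confidence formula; as one possible reading, confidence can be sketched as the fraction of labeled pairs classified correctly when the cosine of the feature-vector angle is thresholded. The threshold value and the model interface below are assumptions for illustration:

```python
import numpy as np

def pair_confidence(model, positive_pairs, negative_pairs, threshold=0.5):
    """Fraction of comparison pairs a model classifies correctly when a
    cosine similarity above `threshold` counts as a same-person match."""
    def score(a, b):
        fa, fb = model(a), model(b)
        return np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb))
    correct = sum(score(a, b) > threshold for a, b in positive_pairs)
    correct += sum(score(a, b) <= threshold for a, b in negative_pairs)
    return correct / (len(positive_pairs) + len(negative_pairs))
```

Computed separately for each candidate model on the same sample pairs, such a score would support the confidence comparison that drives the weight assignment.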

Claims (20)

  1. A method for generating a face recognition fusion model, comprising the following steps:
    using two face recognition models to obtain the feature vector of each face image in positive samples and negative samples;
    obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples, to obtain the confidence of the corresponding face recognition model;
    comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result;
    performing a weighted calculation on the two face recognition models according to the combination of weight values, to determine the face recognition fusion model.
  2. The method according to claim 1, wherein:
    the face image of the positive sample and one of the face images in the negative sample belong to the same person;
    before the step of obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples to obtain the confidence of the corresponding face recognition model, the method further comprises:
    comparing the confidence of each of the two face recognition models on the positive sample and the negative sample respectively, to verify the accuracy of the corresponding face recognition model.
  3. The method according to claim 1, wherein the step of obtaining a combination of weight values for the two face recognition models according to the comparison result comprises:
    setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models.
  4. The method according to claim 3, wherein the step of performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the fusion model of the face recognition models comprises:
    acquiring each combination of the weight values and performing a weighted calculation on the two face recognition models;
    correspondingly obtaining and comparing, according to the calculation results, the confidence of each face recognition fusion model, and determining the face recognition fusion model with the highest confidence.
  5. The method according to claim 4, wherein after the step of setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models, the method further comprises: obtaining the confidence of each corresponding face recognition fusion model according to the different set combinations of weight values;
    comparing the magnitudes of the confidence of each corresponding face recognition fusion model within a set interval, to obtain the face recognition fusion model with the highest confidence within the set interval;
    comparing the confidence of this highest face recognition fusion model against the face recognition fusion models formed from the other combinations of weight values.
  6. The method according to claim 5, wherein the step of correspondingly obtaining and comparing, according to the calculation results, the confidence of each face recognition fusion model and determining the face recognition fusion model with the highest confidence comprises:
    sampling values by equal-interval division according to the value range of the two weight values and the value range of the sum of the two weight values, obtaining the corresponding confidence, and obtaining the highest confidence after comparison.
  7. The method according to any one of claims 1-6, wherein the step of using the two face recognition models to obtain the feature vector of each face image in the positive samples and the negative samples comprises:
    using the two face recognition models to obtain the feature vector of each face image in several groups of positive samples and negative samples;
    wherein the positive sample and the negative sample each comprise two face images.
  8. A device for generating a face recognition fusion model, comprising:
    a feature vector obtaining module, configured to use the two face recognition models to obtain the feature vector of each face image in the positive samples and the negative samples;
    a confidence obtaining module, configured to obtain, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples, to obtain the confidence of the corresponding face recognition model;
    a confidence comparison module, configured to compare the confidence results of the two face recognition models and obtain a combination of weight values for the two face recognition models according to the comparison result;
    a face recognition fusion model determination module, configured to perform a weighted calculation on the two face recognition models according to the combination of weight values, to determine the face recognition fusion model.
  9. A computer device, comprising:
    one or more processors;
    a memory; and
    one or more computer programs, wherein the one or more computer programs are stored in the memory and configured to be executed by the one or more processors, the one or more computer programs being configured to perform a method for generating a face recognition fusion model,
    wherein the method for generating the face recognition fusion model comprises:
    using two face recognition models to obtain the feature vector of each face image in positive samples and negative samples;
    obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples, to obtain the confidence of the corresponding face recognition model;
    comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result;
    performing a weighted calculation on the two face recognition models according to the combination of weight values, to determine the face recognition fusion model.
  10. The computer device according to claim 9, wherein the face image of the positive sample and one of the face images in the negative sample belong to the same person;
    before the step of obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples to obtain the confidence of the corresponding face recognition model, the method further comprises:
    comparing the confidence of each of the two face recognition models on the positive sample and the negative sample respectively, to verify the accuracy of the corresponding face recognition model.
  11. The computer device according to claim 9, wherein the step of obtaining a combination of weight values for the two face recognition models according to the comparison result comprises:
    setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models.
  12. The computer device according to claim 11, wherein the step of performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the fusion model of the face recognition models comprises:
    acquiring each combination of the weight values and performing a weighted calculation on the two face recognition models;
    correspondingly obtaining and comparing, according to the calculation results, the confidence of each face recognition fusion model, and determining the face recognition fusion model with the highest confidence.
  13. The computer device according to claim 12, wherein after the step of setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models, the method further comprises: obtaining the confidence of each corresponding face recognition fusion model according to the different set combinations of weight values;
    comparing the magnitudes of the confidence of each corresponding face recognition fusion model within a set interval, to obtain the face recognition fusion model with the highest confidence within the set interval;
    comparing the confidence of this highest face recognition fusion model against the face recognition fusion models formed from the other combinations of weight values.
  14. The computer device according to claim 13, wherein the step of correspondingly obtaining and comparing, according to the calculation results, the confidence of each face recognition fusion model and determining the face recognition fusion model with the highest confidence comprises:
    sampling values by equal-interval division according to the value range of the two weight values and the value range of the sum of the two weight values, obtaining the corresponding confidence, and obtaining the highest confidence after comparison.
  15. The computer device according to any one of claims 9-14, wherein the step of using the two face recognition models to obtain the feature vector of each face image in the positive samples and the negative samples comprises:
    using the two face recognition models to obtain the feature vector of each face image in several groups of positive samples and negative samples;
    wherein the positive sample and the negative sample each comprise two face images.
  16. A computer-readable storage medium storing a computer program which, when executed by a processor, implements a method for generating a face recognition fusion model, wherein the method comprises the following steps:
    using two face recognition models to obtain the feature vector of each face image in positive samples and negative samples;
    obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples, to obtain the confidence of the corresponding face recognition model;
    comparing the confidence results of the two face recognition models, and obtaining a combination of weight values for the two face recognition models according to the comparison result;
    performing a weighted calculation on the two face recognition models according to the combination of weight values, to determine the face recognition fusion model.
  17. The computer-readable storage medium according to claim 16, wherein the face image of the positive sample and one of the face images in the negative sample belong to the same person;
    before the step of obtaining, according to the feature vectors, for each of the two face recognition models, the angle between the feature vectors of the face images in the positive samples and negative samples to obtain the confidence of the corresponding face recognition model, the method further comprises:
    comparing the confidence of each of the two face recognition models on the positive sample and the negative sample respectively, to verify the accuracy of the corresponding face recognition model.
  18. The computer-readable storage medium according to claim 16, wherein the step of obtaining a combination of weight values for the two face recognition models according to the comparison result comprises:
    setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models.
  19. The computer-readable storage medium according to claim 18, wherein the step of performing a weighted calculation on the two face recognition models according to the combination of weight values to determine the fusion model of the face recognition models comprises:
    acquiring each combination of the weight values and performing a weighted calculation on the two face recognition models;
    correspondingly obtaining and comparing, according to the calculation results, the confidence of each face recognition fusion model, and determining the face recognition fusion model with the highest confidence.
  20. The computer-readable storage medium according to claim 19, wherein after the step of setting a combination of respective weight values for the two face recognition models according to the result of comparing the magnitudes of the confidence levels of the two face recognition models, the method further comprises: obtaining the confidence of each corresponding face recognition fusion model according to the different set combinations of weight values;
    comparing the magnitudes of the confidence of each corresponding face recognition fusion model within a set interval, to obtain the face recognition fusion model with the highest confidence within the set interval;
    comparing the confidence of this highest face recognition fusion model against the face recognition fusion models formed from the other combinations of weight values.
PCT/CN2019/117477 2019-01-25 2019-11-12 Method and device for generating face recognition fusion model WO2020151315A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910075750.8A CN110008815A (en) 2019-01-25 2019-01-25 The generation method and device of recognition of face Fusion Model
CN201910075750.8 2019-01-25

Publications (1)

Publication Number Publication Date
WO2020151315A1 true WO2020151315A1 (en) 2020-07-30

Family

ID=67165520

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117477 WO2020151315A1 (en) 2019-01-25 2019-11-12 Method and device for generating face recognition fusion model

Country Status (2)

Country Link
CN (1) CN110008815A (en)
WO (1) WO2020151315A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008815A (en) * 2019-01-25 2019-07-12 平安科技(深圳)有限公司 The generation method and device of recognition of face Fusion Model
CN111340090B (en) * 2020-02-21 2023-08-01 每日互动股份有限公司 Image feature comparison method and device, equipment and computer readable storage medium
WO2023121563A2 (en) * 2021-12-24 2023-06-29 Grabtaxi Holdings Pte. Ltd. Method and system for precision face lookup and identification using multilayer ensembles

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020733A (en) * 2012-11-27 2013-04-03 南京航空航天大学 Method and system for predicting single flight noise of airport based on weight
CN106156161A (en) * 2015-04-15 2016-11-23 富士通株式会社 Model Fusion method, Model Fusion equipment and sorting technique
CN110008815A (en) * 2019-01-25 2019-07-12 平安科技(深圳)有限公司 The generation method and device of recognition of face Fusion Model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776768A (en) * 2018-04-19 2018-11-09 广州视源电子科技股份有限公司 Image-recognizing method and device


Also Published As

Publication number Publication date
CN110008815A (en) 2019-07-12


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19911161

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 15.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19911161

Country of ref document: EP

Kind code of ref document: A1