WO2020164264A1 - 人脸图像识别方法、装置及计算机设备 (Face image recognition method, device and computer equipment) - Google Patents

人脸图像识别方法、装置及计算机设备 (Face image recognition method, device and computer equipment)

Info

Publication number
WO2020164264A1
WO2020164264A1 (PCT/CN2019/116359)
Authority
WO
WIPO (PCT)
Prior art keywords
matching degree
face image
user
recognition
image
Prior art date
Application number
PCT/CN2019/116359
Other languages
English (en)
French (fr)
Inventor
徐玲玲 (Xu Lingling)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020164264A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • This application relates to the technical fields of image recognition and face recognition in biometrics. Specifically, this application relates to a face image recognition method, device and computer equipment.
  • Face recognition is an identification method with a high penetration rate, and it has many uses.
  • Existing technical solutions involving face recognition generally compare the user's current image with images pre-entered in a database; if the similarity of the current image is greater than a set similarity value, the user's face recognition passes.
  • The inventor of the present application found in research that, in prior-art schemes, users whose similarity is only slightly greater than the set similarity value also pass face recognition, while users who should pass may fail because of factors such as the face recognition environment. This results in low face recognition accuracy and limits the application of face recognition.
  • The purpose of this application is to solve at least one of the above technical defects, in particular the low accuracy of face recognition.
  • The present application provides a face image recognition method, including: acquiring a user's first face image, and separately calculating the matching degree between the first face image and each reference image in a database; acquiring, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range; if the maximum value of the matching degree is within the threshold range, reacquiring a second face image of the user; and selecting the several reference images with the largest matching degree values, and performing secondary recognition on the second face image.
  • a face image recognition device including: a calculation unit configured to obtain a user's first face image, and respectively calculate the matching degree between the first face image and each reference image in the database;
  • an acquiring unit configured to acquire, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range, and, if the maximum value of the matching degree is within the threshold range, to reacquire a second face image of the user;
  • a recognition unit configured to select several reference images with larger values in the matching degree, and perform secondary recognition on the second face image.
  • This embodiment also provides an electronic device, comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the steps of the face image recognition method, which include: acquiring a first face image of the user, and separately calculating the matching degree between the first face image and each reference image in the database; acquiring, according to the first face image, the recognition environment in which the user is located and the corresponding threshold range; if the maximum value of the matching degree is within the threshold range, reacquiring a second face image of the user; and selecting the several reference images with the largest matching degree values to perform secondary recognition on the second face image.
  • This embodiment also provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal can execute the steps of the face image recognition method.
  • The steps of the face image recognition method include: acquiring a first face image of a user, and separately calculating the matching degree between the first face image and each reference image in the database; acquiring, according to the first face image, the recognition environment in which the user is located and the corresponding threshold range; if the maximum value of the matching degree is within the threshold range, reacquiring a second face image of the user; and selecting the several reference images with the largest matching degree values to perform secondary recognition on the second face image.
  • The face image recognition method of the present application obtains a user's first face image, separately calculates the matching degree between the first face image and each reference image in the database, obtains the user's recognition environment and corresponding threshold range according to the first face image, reacquires a second face image when the maximum matching degree falls within the threshold range, and performs secondary recognition on the second face image against the several reference images with the largest matching degrees. This avoids passing face images whose matching degree is only slightly above the matching degree threshold, effectively improves the accuracy and speed of face recognition, and further enhances the efficiency of the whole face recognition process.
  • FIG. 1 is a schematic diagram of an application scenario of the face image recognition method of an embodiment;
  • FIG. 2 is a method flowchart of the face image recognition method of an embodiment;
  • FIG. 3 shows the effect of face image recognition in an embodiment;
  • FIG. 4 is a flowchart of the method for generating the matching degree in this embodiment;
  • FIG. 5 is a flowchart of the method for generating the threshold range of an embodiment;
  • FIG. 6 is a flowchart of the method for generating the matching degree edge value in an embodiment;
  • FIG. 7 is a flowchart of the method for constructing the matching degree edge value model of an embodiment;
  • FIG. 8 is a flowchart of the method for generating the threshold range according to the face recognition environment in an embodiment;
  • FIG. 9 is a flowchart of the method for pushing prompt information to the user according to the number of secondary recognitions;
  • FIG. 10 is a schematic diagram of the device structure of the face image recognition device of an embodiment.
  • This application relates to the technical fields of image recognition and face recognition in biometrics, and in particular to the technical solution related to face comparison.
  • This application provides a technical solution of a face image recognition method; please refer to FIG. 1. To present the technical solution in more detail, FIG. 1 also exemplarily shows an application environment corresponding to the face image recognition method.
  • the application environment of Fig. 1 includes a face image recognition system running a face image recognition method.
  • the face image recognition system is connected to the user's mobile phone terminal 11 through the network and performs data interaction.
  • the user can use gestures, clicks, input and other interactive methods to transmit corresponding instructions to the facial image recognition system through the mobile phone 11 for facial image recognition.
  • the face image recognition system can also interact with the database through a network connection, and obtain a reference image to compare the matching degree.
  • a face image recognition method provided by this application includes the following steps:
  • Step S21: Obtain the first face image of the user, and separately calculate the matching degree between the first face image and each reference image in the database.
  • Step S22: Acquire, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range. If the maximum value of the matching degree is within the threshold range, reacquire a second face image of the user.
  • Step S23: Select the several reference images with the largest matching degree values, and perform secondary recognition on the second face image.
  • The facial image recognition system can call the camera of the user's mobile phone 11 to obtain the user's first face image by taking photos, recording video, or extracting video frames, and the system obtains multiple reference images from the database.
  • the face image recognition system separately calculates the matching degree between the first face image and each reference image in the database.
  • The matching degree may be the proportion of matched feature points between the first face image and the reference image, or the degree of similarity between the feature points.
  • the face image recognition system obtains the first face image and performs matching recognition of feature points with all reference images in the database (all n reference images from reference image A to reference image N).
  • the facial image recognition system sorts the matching degrees corresponding to multiple reference images, and takes the matching degree corresponding to the reference image with the largest matching degree value as the matching degree of the first face image.
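The ranking step above can be sketched as follows (a minimal illustration; the score dictionary and reference ids are hypothetical, not from the patent):

```python
def best_match(scores: dict):
    """Sort the per-reference matching degrees and return the reference with the
    largest one; that value is taken as the first face image's matching degree."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[0]  # (reference_id, matching_degree)
```

For example, `best_match({"A": 0.82, "B": 0.76, "C": 0.91})` returns `("C", 0.91)`.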
  • If the facial image recognition system determines that the maximum value of the matching degree is within the set threshold range, it reacquires the second face image of the current user through the user's mobile phone 11.
  • The facial image recognition system can call the camera of mobile phone 11 to reacquire the second face image, or it can select, from the video frames, another image corresponding to the original first face image.
  • The facial image recognition system selects, from the reference images sorted by matching degree, the several with the largest matching degree, and performs secondary recognition on the second face image against them.
  • For example, suppose reference image A, reference image B, and reference image C have the largest matching degrees in the first-pass recognition of the first face image. The facial image recognition system can capture the user's second face image again by calling the camera of mobile phone 11, and compute its matching degree against reference image A, reference image B, and reference image C in the database.
  • The facial image recognition system could instead match the second face image against all reference images in the database, but matching only reference image A, reference image B, and reference image C involves much less data and responds faster. Therefore, in this embodiment, to further improve the response speed of the secondary recognition, the second face image may be matched only against the top-ranked reference images (reference image A, reference image B, and reference image C).
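Restricting the second pass to the top-ranked references can be sketched as below (helper names are illustrative; `match_fn` stands in for whatever feature comparison the system uses):

```python
def top_k_references(scores: dict, k: int = 3) -> list:
    """Ids of the k reference images with the largest first-pass matching degree."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def secondary_recognition(second_face, references: dict, scores: dict, match_fn, k: int = 3):
    """Re-match the second face image against only the top-k references,
    which involves far less data than re-scanning the whole database."""
    candidates = top_k_references(scores, k)
    rescored = {rid: match_fn(second_face, references[rid]) for rid in candidates}
    best = max(rescored, key=rescored.get)
    return best, rescored[best]
```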
  • When the matching degree of the first face image falls within the threshold range, the risk that the first face image passes verification incorrectly is relatively high, as is the risk of attacks using similar face images.
  • The lower limit of the threshold range can be understood as the matching degree of the minimum recognition requirement of face recognition, and the upper limit as the matching degree that meets the safety standard. A first face image whose matching degree falls within this threshold range is suspicious, and there is a risk of misrecognition.
  • Because the top-ranked reference images (reference image A, reference image B, and reference image C in this embodiment) are relatively similar to the face image, re-matching against them during secondary recognition is more reliable, which effectively reduces the amount of computation and improves the recognition response speed.
  • The face image recognition system can call the camera of mobile phone 11 again to reacquire the user's face image as the second face image, repeating until the matching degree of the acquired face image is greater than the upper limit of the threshold range.
  • The face image recognition system can then output a recognition result indicating that the current user has passed recognition, or, when recognition fails, a result indicating that the current user has not passed.
  • Step S21, obtaining the user's first face image and separately calculating the matching degree between the first face image and each reference image in the database, includes:
  • Step S41: Invoke the camera to acquire the first face image of the user, and extract the feature points of the first face image.
  • Step S42: Analyze the reference images in the database to obtain the feature points of each reference image.
  • Step S43: Compare the feature points of the first face image with those of each reference image in the database to generate a matching degree.
  • the facial image recognition system extracts the feature points in the first facial image after acquiring the first facial image of the user.
  • the feature points may be contour feature points, facial features feature points, facial three-dimensional feature points, mole feature points, facial expression feature points, etc. for a face image such as the first face image.
  • the face image recognition system acquires multiple reference images in the database, and extracts the feature points in the multiple reference images according to the feature points of the first face image.
  • the face image recognition system respectively performs recognition and matching according to the feature points in each reference image and the feature points of the first face image, and finally generates the matching degree of each reference image.
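One simple reading of "proportion of matched feature points" is a set-overlap ratio; this is an assumption for illustration, as the patent does not fix the exact formula:

```python
def feature_point_matching_degree(face_points: set, reference_points: set) -> float:
    """Fraction of feature points shared between the two images,
    computed as intersection over union of the two point sets."""
    if not face_points or not reference_points:
        return 0.0
    return len(face_points & reference_points) / len(face_points | reference_points)
```

For instance, two images sharing 2 of 4 distinct feature points yield a matching degree of 0.5.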
  • The above technical solution of feature point recognition can also be adopted in the secondary recognition.
  • Before step S22 (acquiring, according to the first face image, the recognition environment in which the user is located and the corresponding threshold range, and reacquiring a second face image of the user if the maximum matching degree is within the threshold range), the method further includes:
  • Step S51: Obtain the matching degree edge value and the matching degree threshold corresponding to the face recognition environment where the user is located.
  • Step S52: Generate a threshold range according to the matching degree edge value and the matching degree threshold, where the matching degree edge value is obtained through testing in advance.
  • the face image recognition system adjusts the matching degree edge value according to different face recognition environments.
  • the edge value of the matching degree changes according to changes in the face recognition environment.
  • the matching degree threshold varies according to the recognition model called by the face image recognition system.
  • the face recognition environment can be divided into several environmental parameter values such as light intensity, light angle, and camera angle according to the impact on face recognition.
  • The face image recognition system can select the matching degree edge value corresponding to the current face recognition environment; the edge values are tested in advance for different environments.
  • the face image recognition system also selects the current corresponding matching degree threshold according to the adopted face recognition model.
  • the face image recognition system generates a threshold range through numerical superposition based on the edge value of the matching degree and the threshold of the matching degree.
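The "numerical superposition" of threshold and edge value, and the resulting three-way decision, can be sketched as follows (function names and the return labels are illustrative assumptions):

```python
def threshold_range(match_threshold: float, edge_value: float):
    """Lower bound: minimum recognition requirement; upper bound:
    matching degree that meets the safety standard."""
    return match_threshold, match_threshold + edge_value

def decide(max_matching_degree: float, rng) -> str:
    lo, hi = rng
    if max_matching_degree < lo:
        return "reject"     # below the minimum recognition requirement
    if max_matching_degree <= hi:
        return "secondary"  # suspicious band: reacquire and re-recognize
    return "accept"         # above the safety standard
```

With a threshold of 80% and an edge value of 5%, a maximum matching degree of 83% falls in the suspicious band and triggers secondary recognition.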
  • The method further includes:
  • Step S61: Acquire several test images and the matching degree threshold, and compare the test images with each reference image in the database to generate test values.
  • Step S62: Screen the test images whose test value is greater than the matching degree threshold as sample images.
  • Step S63: Use the test value corresponding to each wrongly recognized sample image as the similarity.
  • Step S64: Take the difference between the similarity and the matching degree threshold to obtain the matching degree edge value.
  • the face image recognition system can obtain multiple test images from the database, and the face image recognition system compares the test images in the database with the reference images to generate test values.
  • the test value can be calculated by identifying and comparing the feature points on the test image and the reference image.
  • the face image recognition system confirms the matching degree threshold according to the face recognition model currently used for face recognition.
  • The face image recognition system screens, as sample images, the test images that meet the screening condition that the test value is greater than the matching degree threshold.
  • the test value is greater than the matching degree threshold value, it can be determined that the corresponding user passes the test.
  • the wrongly recognized test image is used as a sample image, and the face image recognition system can extract the corresponding test value in the sample image as the similarity.
  • the similarity between the sample image and the reference image is slightly higher than the matching degree threshold. This is due to factors such as the light intensity of the recognition environment, the light angle, or the angle from which the face image is obtained, which leads to misrecognition.
  • the face image recognition system makes a difference between the similarity and the matching degree threshold to obtain the edge value of the matching degree.
  • For example, if the matching degree threshold corresponding to the current face recognition model is 80% and the similarity obtained by the above test is 85%, then the matching degree edge value is 5%.
  • the obtained matching degree edge value is the matching degree edge value corresponding to the current face recognition environment.
  • test images in different face recognition environments can be obtained for the above-mentioned test.
  • the face image recognition system obtains multiple sample images, the corresponding similarity can be averaged using the similarity of multiple sample images.
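Steps S61–S64 thus reduce to averaging the similarities of the wrongly accepted sample images and subtracting the threshold (a sketch; the argument names are assumptions):

```python
def matching_degree_edge_value(misrecognized_similarities, match_threshold: float) -> float:
    """Mean similarity of the misrecognized sample images minus the matching
    degree threshold, for the current face recognition environment."""
    mean_similarity = sum(misrecognized_similarities) / len(misrecognized_similarities)
    return mean_similarity - match_threshold
```

With a threshold of 0.80 and misrecognized similarities of 0.85 and 0.83, the edge value is about 0.04.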
  • An embodiment of the present application provides a technical solution that builds the face recognition environment and the matching degree edge value into a dedicated model so as to adapt to different face recognition environments. Please refer to FIG. 7; after step S64, taking the difference between the similarity and the matching degree threshold to obtain the matching degree edge value, the method further includes:
  • Step S71: Extract the environmental feature information from the test images.
  • Step S72: Associate the matching degree edge value with the environmental feature information, and construct a matching degree edge value model.
  • the face image recognition system can obtain the environmental feature information of the user in the test image by extracting the illumination value and the bright point feature points in the test image.
  • the environmental characteristic information may include characteristic parameters such as light intensity, light angle, and angle of the collected image.
  • the face image recognition system can associate the edge value of the matching degree with the environmental feature information, and construct the edge value model of the matching degree.
  • regression analysis can be performed on the matching degree edge value and the feature parameters in the environmental feature information to obtain the association relationship between the two, and this association relationship is used to construct the matching degree edge value model.
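The patent names regression analysis but not a specific model; a univariate least-squares fit illustrates the idea (the single feature and the data are hypothetical):

```python
def fit_edge_value_model(env_feature, edge_values):
    """Least-squares line edge = a * feature + b relating one environmental
    feature (e.g. light intensity) to the matching degree edge value."""
    n = len(env_feature)
    mx = sum(env_feature) / n
    my = sum(edge_values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(env_feature, edge_values))
    var = sum((x - mx) ** 2 for x in env_feature)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b  # predict edge value for a new environment
```

In practice the model would take multiple features (light intensity, light angle, camera angle), but the association step is the same.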
  • Please refer to FIG. 8. Before step S22 (acquiring, according to the first face image, the recognition environment in which the user is located and its corresponding threshold range, and reacquiring a second face image of the user if the maximum matching degree is within the threshold range), the method further includes:
  • Step S81: Analyze and extract the environmental feature information of the face recognition environment where the user is located.
  • Step S82: Input the environmental feature information into the matching degree edge value model to obtain the matching degree edge value for the current environment.
  • Step S83: Generate a threshold range according to the matching degree threshold and the matching degree edge value.
  • The face image recognition system can call the camera of mobile phone 11 to capture the first face image, and further analyze the feature points in the first face image to obtain the environmental feature information of the current user's environment.
  • The face image recognition system can also obtain factors of the current user's environment, such as light intensity, light angle, and camera distance, by calling the sensors of mobile phone 11 (for example, a light sensor), and then analyze them to obtain the environmental feature information.
  • the face image recognition system inputs the environmental feature information into the matching degree edge value model obtained by the above training, and obtains the matching degree edge value of the current environment, for example, 5%.
  • the face image recognition system determines the matching degree threshold according to the currently used face recognition model, for example, 80%.
  • The face image recognition system generates a threshold range of 80%–85% from the matching degree threshold of 80% and the matching degree edge value of 5%; the value 80% can be understood as the lower limit of the threshold range, and 85% as the upper limit.
  • If the maximum matching degree falls below this range, the face image recognition system can confirm that the current user has not passed verification; if it falls within the range, the system reacquires the second face image for secondary recognition.
  • Before step S23, selecting the several reference images with the largest matching degree values and performing secondary recognition on the second face image, the method further includes:
  • Step S91: Count the number of times secondary recognition has been performed.
  • Step S92: Determine whether the number of secondary recognitions is greater than the number threshold; if so, go to step S93.
  • Step S93: Generate and push to the user a prompt message asking the user to adjust their state or change the face recognition environment.
  • The facial image recognition system counts the number of secondary recognitions, which can be calculated, for example, from the number of times the second face image has been acquired.
  • The face image recognition system judges whether the number of secondary recognitions is greater than the number threshold. For example, if the threshold is 5 and the count reaches 6, the system generates a prompt asking the user to adjust their state or change the face recognition environment, and pushes it to the user's mobile phone 11.
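The retry counter of steps S91–S93 can be sketched as below (the limit of 5 follows the example above; the function name and message text are illustrative):

```python
def check_secondary_count(count: int, limit: int = 5):
    """Return a prompt message once the number of secondary recognitions
    exceeds the limit; otherwise None (keep retrying silently)."""
    if count > limit:
        return "Please adjust your pose or move to a better-lit environment."
    return None
```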
  • The facial image recognition system can also use the sensors on mobile phone 11 to confirm whether the current face recognition environment meets the requirements of face recognition. Further, it can adjust the lighting conditions on the user's face through the front flash of mobile phone 11 or by adjusting the screen brightness.
  • The pushed prompt information may be presented to the user by vibration, pop-up window, floating window, sound broadcast, or other specific means.
  • The facial image recognition system can also prompt the user to adjust in a specific direction, or to adjust the angle at which the user's face is collected, based on information such as the face recognition environment fed back by the sensors on mobile phone 11. For example, if the light sensor on mobile phone 11 detects that the current face recognition environment provides only 10% of the minimum lighting requirement, the system can determine that the phone should be adjusted toward the light source, and prompt the user to keep moving mobile phone 11 in that direction until the ambient light intensity is adequate.
  • The face image recognition system can also judge the posture of the user's hand holding the phone through the gyroscope and acceleration sensors of mobile phone 11, and then prompt the user to adjust the angle of mobile phone 11 so that its camera can better collect the user's face image; mobile phone 11 can guide this adjustment through vibrations in different directions or different vibration modes.
  • The facial image recognition method provided in this application applies not only to mobile terminals such as mobile phones, but also to electronic equipment such as ticket purchasing machines, smart check-in counters, card registration machines, and self-service ticket gates, and to application scenarios such as examination-room face recognition, face recognition payment, identity authentication, and device unlocking.
  • the present application also provides a face image recognition device in this embodiment, including:
  • the calculation unit 101 is configured to obtain the first face image of the user, and respectively calculate the matching degree between the first face image and each reference image in the database;
  • the obtaining unit 102 is configured to obtain, according to the first face image, the recognition environment in which the user is located and the corresponding threshold range, and, if the maximum value of the matching degree is within the threshold range, to reacquire the second face image of the user;
  • the recognition unit 103 is configured to select several reference images with larger values in the matching degree, and perform secondary recognition on the second face image.
  • This application also provides an electronic device in this embodiment, and the electronic device includes:
  • a memory for storing processor executable instructions
  • the processor is configured to perform the steps of the face image recognition method described in any one of the foregoing embodiments.
  • The present application also provides a non-transitory computer-readable storage medium in this embodiment; when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal can perform the face image recognition method of any of the above embodiments.
  • In summary, the face image recognition method of the present application obtains a user's first face image, separately calculates the matching degree between the first face image and each reference image in the database, obtains the user's recognition environment and corresponding threshold range according to the first face image, reacquires a second face image when the maximum matching degree falls within the threshold range, and performs secondary recognition on the second face image against the several reference images with the largest matching degrees. This avoids passing face images whose matching degree is only slightly above the matching degree threshold, effectively reduces the misrecognition rate, improves both the accuracy and the speed of face recognition, and further improves the efficiency of the whole face recognition process.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to the fields of biometrics and face recognition. The face image recognition method of this application includes: acquiring a user's first face image, and separately calculating the matching degree between the first face image and each reference image in a database; acquiring, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range; if the maximum value of the matching degree is within the threshold range, reacquiring a second face image of the user; and selecting the several reference images with the largest matching degree values to perform secondary recognition on the second face image. This application correspondingly provides a face image recognition device, electronic equipment, and a storage medium. The method, device, electronic equipment, and storage medium of this application can avoid situations in which face images with matching degrees only slightly above the matching degree threshold pass face recognition and reduce accuracy, effectively improving the accuracy and speed of face recognition and the efficiency of the whole face recognition process.

Description

Face image recognition method, device and computer equipment
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on February 13, 2019, with application number 201910112937.0 and invention title "人脸图像识别方法、装置及计算机设备" (Face image recognition method, device and computer equipment), the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical fields of image recognition and face recognition in biometrics, and in particular to a face image recognition method, device and computer equipment.
Background
With the rapid development of society, face recognition has become an identification method with a high penetration rate, and it has many uses.
Existing technical solutions involving face recognition generally compare the user's current image with images pre-entered in a database; if the similarity of the current image is greater than a set similarity value, the user's face recognition passes.
The inventor of this application found in research that, in prior-art schemes, users whose similarity is only slightly greater than the set similarity value also pass face recognition, while users who should pass may fail because of factors such as the face recognition environment, resulting in low face recognition accuracy and limiting the application of face recognition.
Summary of the Invention
The purpose of this application is to solve at least one of the above technical defects, in particular the low accuracy of face recognition.
To overcome the technical defects of the prior art, this application provides a face image recognition method, including: acquiring a user's first face image, and separately calculating the matching degree between the first face image and each reference image in a database; acquiring, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range; if the maximum value of the matching degree is within the threshold range, reacquiring a second face image of the user; and selecting the several reference images with the largest matching degree values, and performing secondary recognition on the second face image.
This embodiment also provides a face image recognition device, including: a calculation unit configured to acquire the user's first face image and separately calculate the matching degree between the first face image and each reference image in the database; an acquiring unit configured to acquire, according to the first face image, the recognition environment in which the user is located and the correspondingly set threshold range, and to reacquire a second face image of the user if the maximum matching degree is within the threshold range; and a recognition unit configured to select the several reference images with relatively large matching degree values and perform secondary recognition on the second face image.
This embodiment also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the steps of the face image recognition method described above.
This embodiment also provides a non-transitory computer-readable storage medium; when the instructions in the storage medium are executed by the processor of a mobile terminal, the mobile terminal can execute the steps of the face image recognition method described above.
By acquiring the user's first face image, separately calculating the matching degree between the first face image and each reference image in the database, acquiring the user's recognition environment and correspondingly set threshold range according to the first face image, reacquiring a second face image when the maximum matching degree falls within the threshold range, and performing secondary recognition on the second face image against the several reference images with the largest matching degrees, the face image recognition method of this application avoids situations in which face images with matching degrees only slightly above the matching degree threshold pass face recognition and reduce accuracy, effectively improving the accuracy and speed of face recognition and the efficiency of the whole face recognition process.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an application scenario of the face image recognition method of an embodiment;
Fig. 2 is a flowchart of the face image recognition method of an embodiment;
Fig. 3 illustrates the effect of face image recognition in an embodiment;
Fig. 4 is a flowchart of the method for generating matching degrees in this embodiment;
Fig. 5 is a flowchart of the method for generating the threshold range in an embodiment;
Fig. 6 is a flowchart of the method for generating the matching-degree edge value in an embodiment;
Fig. 7 is a flowchart of the method for building the matching-degree edge-value model in an embodiment;
Fig. 8 is a flowchart of the method for generating the threshold range from the face recognition environment in an embodiment;
Fig. 9 is a flowchart of the method for pushing prompt information to the user according to the number of secondary recognitions;
Fig. 10 is a schematic structural diagram of the face image recognition apparatus of an embodiment.
Detailed Description
Embodiments of this application are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, serve only to explain this application, and shall not be construed as limiting it.
This application relates to the technical fields of image recognition and face recognition within biometrics, and in particular to face comparison. To overcome the technical defect of low face recognition accuracy affecting face recognition, this application provides a face image recognition method. Referring to Fig. 1, to present the solution in more detail, this application also provides, by way of example, an application environment for the face image recognition method.
The application environment of Fig. 1 includes a face image recognition system that runs the face image recognition method. The face image recognition system connects to and exchanges data with the user's mobile phone terminal 11 over a network. The user can send corresponding instructions through the phone 11 to the face image recognition system via gestures, taps, text input, and other interactions to perform face image recognition. The system can also exchange data with a database over a network connection to obtain reference images for matching-degree comparison.
Referring to Fig. 2, the face image recognition method provided by this application includes the steps:
Step S21: acquire a first face image of a user, and separately compute the matching degree between the first face image and each reference image in a database.
Step S22: obtain, from the first face image, the recognition environment the user is in and the threshold range set for that environment.
If the maximum of the matching degrees falls within the threshold range, re-acquire a second face image of the user.
Step S23: select the several reference images with the highest matching degrees, and perform secondary recognition on the second face image.
Referring to Fig. 3, which illustrates the effect of face image recognition in this embodiment: in the above process, the face image recognition system can invoke the camera of the user's phone 11 and obtain the user's first face image by taking a photo, or by recording video and splitting it into frames. The system obtains multiple reference images from the database and separately computes the matching degree between the first face image and each reference image. The matching degree may be the proportion of coinciding feature points between the first face image and a reference image, or the degree of similarity between feature points.
In the effect diagram of Fig. 3, after obtaining the first face image, the system performs feature-point matching against all reference images in the database (all n reference images, from reference image A to reference image N). The system sorts the reference images by their matching degrees and takes the matching degree of the reference image with the largest value as the matching degree of the first face image.
When the system determines that the maximum of the matching degrees falls within the set threshold range, it re-acquires a second face image of the current user through the phone 11. The system may invoke the phone's camera to capture the second face image anew, or select it from other frames of the video from which the first face image was taken.
From the reference images ranked by matching degree during recognition of the first face image, the system selects the several reference images with the highest matching degrees, acquires the second face image, and performs secondary recognition. In this embodiment, for example, the reference images with the highest matching degrees during recognition of the first face image are reference image A, reference image B, and reference image C. The system may invoke the camera of the phone 11 to capture the user's second face image again, and match it against reference images A, B, and C in the database to obtain matching degrees. The system could also match against all reference images in the database, but matching only against reference images A, B, and C involves less data and responds faster. Therefore, in this embodiment, to further improve the response speed of secondary recognition, the second face image is matched against the reference images ranked highest in the first recognition (reference images A, B, and C in this embodiment).
For the same user, when the matching degree of the first face image falls within the threshold range, the risk that the first face image passes verification incorrectly is high, and the risk of attack by a similar face image is correspondingly high. The lower bound of the threshold range can be understood as the matching degree of the minimum recognition requirement, while the upper bound is the matching degree that meets the security-standard recognition requirement. A first face image whose matching degree falls within this range is in doubt and carries a risk of misrecognition. However, for the same user, the reference images ranked highest during recognition of the first face image (reference images A, B, and C in this embodiment) are quite similar to the user's own face, so re-recognizing against these highest-ranked reference images during secondary recognition is highly credible, effectively reduces the amount of computation, and improves recognition response speed.
Of course, in this embodiment, if the matching degree of the second face image again falls within the threshold range during secondary recognition, the system may again invoke the camera of the phone 11 to re-acquire the user's face image as the second face image, until the matching degree of an acquired face image exceeds the upper bound of the threshold range, at which point the system can output a result indicating that the current user passes recognition. In the above process, when the matching degree of an acquired face image is below the lower bound of the threshold range, the system can output a result indicating that the current user fails recognition; when it exceeds the upper bound, the system can output a result indicating that the current user passes recognition.
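The two-stage flow above can be outlined as a short sketch. The capture function, the matching function, the threshold range `[lo, hi]`, the top-3 cutoff, and the retry limit are all illustrative placeholders, not details fixed by this application.

```python
def recognize(capture_image, match, references, lo, hi, max_retries=5):
    """Two-stage face recognition with a doubt interval [lo, hi].

    match(img, ref) returns a matching degree in [0, 1];
    references maps a reference-image name to its data.
    """
    first = capture_image()
    scores = {name: match(first, ref) for name, ref in references.items()}
    best = max(scores.values())
    if best > hi:                       # clearly above the range: pass
        return "pass"
    if best < lo:                       # clearly below the range: fail
        return "fail"
    # In doubt: secondary recognition against the top-ranked references only.
    top = sorted(references, key=scores.get, reverse=True)[:3]
    for _ in range(max_retries):
        second = capture_image()
        best2 = max(match(second, references[name]) for name in top)
        if best2 > hi:
            return "pass"
        if best2 < lo:
            return "fail"
    return "fail"                       # still in doubt after max_retries (a design choice)
```

Matching only against the top-ranked references in the second stage is what keeps the secondary pass cheap relative to re-scanning the whole database.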
To accurately compare the feature points on face images and obtain matching degrees, an embodiment of this application provides a corresponding solution. Referring to Fig. 4, step S21 of acquiring a first face image of a user and separately computing the matching degree between the first face image and each reference image in a database includes:
Step S41: invoke a camera to acquire the first face image of the user, and extract feature points of the first face image.
Step S42: analyze the reference images in the database to obtain feature points of the reference images.
Step S43: compare the feature points of the first face image with those of all reference images in the database to generate the matching degrees.
In the above process, after acquiring the user's first face image, the system extracts its feature points. The feature points may include contour feature points, facial-feature points, three-dimensional facial feature points, mole feature points, expression feature points, and so on, for the first face image and other face images.
Correspondingly, the system obtains multiple reference images from the database and, guided by the feature points of the first face image, extracts feature points from the reference images. The system matches the feature points of each reference image against those of the first face image and finally generates a matching degree for each reference image. Of course, this feature-point approach can also be used when matching the second face image against the reference images.
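As one way to realize the feature-point comparison above, the sketch below scores the fraction of corresponding feature descriptors that agree within a tolerance, matching the "proportion of coinciding feature points" reading of the matching degree. The descriptor layout and the tolerance are illustrative assumptions, not specified by this application.

```python
import numpy as np

def matching_degree(face_pts, ref_pts, tol=0.1):
    """Fraction of corresponding feature descriptors agreeing within tol.

    face_pts, ref_pts: (n, d) arrays, one d-dimensional descriptor per
    feature point (contour, facial-feature, mole, expression points, ...).
    Both images are assumed to expose the same n landmarks in the same order.
    """
    face_pts = np.asarray(face_pts, dtype=float)
    ref_pts = np.asarray(ref_pts, dtype=float)
    # Per-point Euclidean distance between corresponding descriptors.
    dists = np.linalg.norm(face_pts - ref_pts, axis=1)
    return float(np.mean(dists < tol))  # proportion of coinciding points
```

In use, this score would be computed once per reference image, and the largest value taken as the matching degree of the first face image.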
Face recognition scenarios vary widely, and different recognition environments affect the threshold range in this embodiment. An embodiment of this application therefore provides a solution that adapts to different recognition environments. Referring to Fig. 5, before step S22 of obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment, and re-acquiring a second face image of the user if the maximum of the matching degrees falls within the threshold range, the method further includes:
Step S51: obtain the matching-degree edge value and the matching-degree threshold corresponding to the face recognition environment the user is in.
Step S52: generate the threshold range from the matching-degree edge value and the matching-degree threshold, where the matching-degree edge value is obtained in advance through testing.
In the above process, the face image recognition system retrieves different matching-degree edge values for different face recognition environments; in other words, the edge value varies with the environment. The matching-degree threshold differs according to the recognition model the system uses. The face recognition environment can be broken down, by its influence on recognition, into several environment parameter values such as illumination intensity, illumination angle, and camera angle. Based on these parameter values, the system selects the edge value corresponding to the current environment; the edge values are obtained in advance by testing under different environments. The system also selects the matching-degree threshold corresponding to the face recognition model in use, and generates the threshold range by adding the edge value to the threshold.
The technical solution of this embodiment also provides a way to generate the matching-degree edge value. Referring to Fig. 6, before step S52 of generating the threshold range from the matching-degree edge value and the matching-degree threshold, the method further includes:
Step S61: obtain several test images and the matching-degree threshold; each test image is compared with each reference image in the database to generate test values.
Step S62: select test images whose test values exceed the matching-degree threshold as sample images.
Step S63: take the test values corresponding to the misrecognized sample images as the similarity.
Step S64: subtract the matching-degree threshold from the similarity to obtain the matching-degree edge value.
In the above test process, the system can obtain multiple test images from the database and compare them with the reference images to generate test values. A test value can be computed by matching feature points between a test image and a reference image. The system determines the matching-degree threshold from the face recognition model currently in use, and, applying the condition that the test value exceeds the matching-degree threshold, selects the test images meeting that condition as sample images.
During recognition, a user is judged to pass if the test value exceeds the matching-degree threshold. During testing, the misrecognized test images are taken as sample images, and the system extracts their test values as the similarity. In general, the similarity between a sample image and a reference image is only slightly above the matching-degree threshold; the misrecognition is caused by factors such as the illumination intensity and angle of the recognition environment, or the angle at which the face image was captured.
The system then subtracts the matching-degree threshold from the similarity to obtain the matching-degree edge value. For example, if the threshold for the current face recognition model is 80% and the similarity obtained in the above test is 85%, the edge value is 5%. This edge value corresponds to the current face recognition environment; to obtain edge values for multiple environments, test images captured under those environments can be put through the same test. When the system obtains multiple sample images, the similarity can be taken as the average of the similarities of the sample images.
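The edge-value computation just described, the average similarity of the misrecognized passing samples minus the matching-degree threshold, can be sketched as follows; the sample data in the usage note are invented for illustration.

```python
def edge_value(test_results, threshold):
    """Matching-degree edge value for one recognition environment.

    test_results: list of (test_value, correctly_recognized) pairs from
    comparing test images against the database's reference images.
    Only images that passed (test_value > threshold) yet were in fact
    misrecognized contribute; their mean test value is the 'similarity'.
    """
    misrecognized = [v for v, correct in test_results
                     if v > threshold and not correct]
    if not misrecognized:
        return 0.0                      # no misrecognitions observed
    similarity = sum(misrecognized) / len(misrecognized)
    return similarity - threshold
```

With a threshold of 0.80 and misrecognized passes at 0.84 and 0.86, the similarity averages 0.85 and the edge value is 0.05, matching the 80%/85%/5% example in the text.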
To adapt recognition to different face recognition scenarios, an embodiment of this application builds the face recognition environment and the matching-degree edge value into a dedicated model so as to adapt to different environments. Referring to Fig. 7, after step S64 of subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value, the method further includes:
Step S71: extract the environment feature information present in the test images.
Step S72: associate the matching-degree edge value with the environment feature information to build a matching-degree edge-value model.
In the above process, the system can obtain the environment feature information of the user in a test image by extracting illumination values and bright-spot feature points from the image. The environment feature information can include feature parameters such as illumination intensity, illumination angle, and image-capture angle. The system associates the edge values with the environment feature information to build the edge-value model. When building the model, regression analysis can be performed between the edge values and the feature parameters of the environment feature information to obtain the relationship between the two, and this relationship is applied to build the matching-degree edge-value model.
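One simple instantiation of the regression analysis mentioned above is an ordinary least-squares fit from environment feature parameters (e.g. illumination intensity, illumination angle, capture angle) to the measured edge values. The linear form and the choice of features are assumptions for illustration; the application only requires some learned association.

```python
import numpy as np

def fit_edge_model(env_features, edge_values):
    """Least-squares linear model: edge_value ~ w . features + b.

    env_features: (n, d) rows of environment feature parameters.
    edge_values:  n measured matching-degree edge values.
    Returns the weight vector with the intercept as its last entry.
    """
    X = np.asarray(env_features, dtype=float)
    y = np.asarray(edge_values, dtype=float)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # intercept column
    w, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return w

def predict_edge(w, features):
    """Edge value predicted for one environment feature vector."""
    return float(np.dot(np.append(np.asarray(features, dtype=float), 1.0), w))
```

At recognition time, `predict_edge` plays the role of the matching-degree edge-value model of step S82.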
To improve the accuracy of face recognition for users across different recognition environments, this application provides a corresponding solution applying the above matching-degree edge-value model. Referring to Fig. 8, before step S22 of obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment, and re-acquiring a second face image of the user if the maximum of the matching degrees falls within the threshold range, the method further includes:
Step S81: analyze and extract the environment feature information of the face recognition environment the user is in.
Step S82: input the environment feature information into the matching-degree edge-value model to obtain the matching-degree edge value of the current environment.
Step S83: generate the threshold range from the matching-degree threshold and the matching-degree edge value.
In the above process, the system can invoke the camera through the phone 11 to capture the first face image, and further analyze the feature points of the first face image to obtain the environment feature information of the user's current environment. In some embodiments, the system can also read sensors on the phone 11, such as the ambient light sensor, to obtain factors like the illumination intensity and angle of the current environment and the camera distance, and parse out the environment feature information from them.
The system inputs the environment feature information into the trained edge-value model to obtain the matching-degree edge value of the current environment, for example 5%. The system determines the matching-degree threshold from the face recognition model currently in use, for example 80%. From the 80% threshold and the 5% edge value, the system generates a threshold range of 80%–85%, where 80% can be understood as the lower bound of the range and 85% as the upper bound. When the matching degree of the first or second face image against the database's reference images exceeds 85%, the system can confirm that the current user passes verification. When it is below 80%, the system can confirm that the current user fails verification. When the matching degree lies between 80% and 85%, the system re-acquires a second face image for secondary recognition.
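The threshold-range decision in this step reduces to a small classification, sketched below with the 80% model threshold and the 5% environment-specific edge value from the example; the numbers are illustrative.

```python
def decide(matching_degree, model_threshold, env_edge_value):
    """Classify a matching degree against the environment-specific range.

    Returns 'pass', 'fail', or 'retry' (re-acquire a second face image).
    """
    lo = model_threshold                    # minimum recognition requirement
    hi = model_threshold + env_edge_value   # security-standard requirement
    if matching_degree > hi:
        return "pass"
    if matching_degree < lo:
        return "fail"
    return "retry"
```

Only results inside the doubt interval trigger the secondary recognition of step S23, which is how the scheme filters out matches that are just barely above the model threshold.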
An embodiment of this application also provides a solution aimed at reducing the number of secondary recognitions, avoiding repeated secondary recognition that would require the user's face image to be captured many times and slow down face recognition overall. Referring to Fig. 9, before step S23 of selecting the several reference images with the highest matching degrees and performing secondary recognition on the second face image, the method further includes:
Step S91: count the number of secondary recognitions performed.
Step S92: determine whether the number of secondary recognitions exceeds a count threshold. If so, execute step S93.
Step S93: generate prompt information advising the user to adjust their posture or change the face recognition environment, and push it to the user.
In the above process, the system counts the number of secondary recognitions, for example by counting how many times the second face image has been captured, and determines whether that number exceeds a count threshold. For example, if the count threshold is 5 and secondary recognition has run 6 times, the system generates a prompt advising the user to adjust their posture, or advising the user to change the face recognition environment, and pushes it to the user's phone 11. On top of this solution, the system can also use the sensors on the phone 11 to check whether the current environment meets the requirements of face recognition. Further, the system can adjust the current recognition environment, or the lighting on the face being captured, via the front flash of the phone 11 or by adjusting screen brightness.
In the above process, the pushed prompt can be presented to the user via a specific vibration pattern, a pop-up, a floating window, a voice announcement, and so on. The system can also use the face recognition environment information fed back by the sensors on the phone 11 to prompt the user to adjust in a specific direction or to adjust the face-capture angle. For example, if the light sensor on the phone 11 reports that the illumination of the user's current recognition environment is only 10% of the minimum requirement, and the ambient illumination keeps rising as the user moves the phone 11, the system can judge that the phone 11 is being moved toward the light source and prompt the user to keep moving it in that direction until a suitable ambient illumination is reached. As another example, the system can use the gyroscope and accelerometer of the phone 11 to judge how the user is holding the phone and prompt them to adjust the angle of the phone 11 so that its camera can better capture the user's face; the phone 11 can signal the adjustment to the user through vibrations in different directions or in different patterns.
Although this embodiment uses the user's phone 11 as the explanatory example, the face image recognition method provided by this application can be applied not only to mobile terminals such as phones, but also to electronic devices such as ticket machines, smart registration counters, document-issuing machines, and self-service ticket gates, and to scenarios such as exam-hall face recognition, payment face recognition, authentication face recognition, and device unlocking.
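The count-threshold prompt of steps S91–S93 can be sketched as below. The threshold of 5 follows the example in the text, while the message wording and the push channel are left abstract as assumptions.

```python
def check_retries(secondary_count, count_threshold=5):
    """Return prompt text to push to the user's terminal once secondary
    recognition has run more times than the count threshold, else None."""
    if secondary_count > count_threshold:
        return ("Please adjust your posture or move to a better-lit "
                "environment, then try face recognition again.")
    return None
```

The caller would push any returned text over whatever channel the terminal supports (vibration pattern, pop-up, floating window, voice announcement).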
Referring to Fig. 10, to overcome the technical defect of low face recognition accuracy affecting face recognition, this embodiment of the application also provides a face image recognition apparatus, including:
a computation unit 101, configured to acquire a first face image of a user and separately compute the matching degree between the first face image and each reference image in a database;
an acquisition unit 102, configured to obtain, from the first face image, the recognition environment the user is in and the threshold range set for that environment, and, if the maximum of the matching degrees falls within the threshold range, to re-acquire a second face image of the user;
a recognition unit 103, configured to select the several reference images with the higher matching degrees and perform secondary recognition on the second face image.
This embodiment of the application also provides an electronic device, the electronic device including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the face image recognition method of any of the above embodiments.
The processor may perform the following steps:
acquiring a first face image of a user, and separately computing the matching degree between the first face image and each reference image in a database;
obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment; if the maximum of the matching degrees falls within the threshold range, re-acquiring a second face image of the user;
selecting the several reference images with the highest matching degrees, and performing secondary recognition on the second face image.
This embodiment of the application also provides a non-transitory computer-readable storage medium. When the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the steps of the face image recognition method of any of the above embodiments.
The face image recognition method of this application acquires a first face image of a user and separately computes the matching degree between the first face image and each reference image in a database; obtains, from the first face image, the recognition environment the user is in and the threshold range set for that environment; re-acquires a second face image of the user if the maximum of the matching degrees falls within the threshold range; and selects the several reference images with the highest matching degrees to perform secondary recognition on the second face image. This solution avoids accuracy-degrading situations such as a face image whose matching degree is only slightly above the matching-degree threshold passing face recognition, effectively reduces the misrecognition rate, effectively improves the accuracy and speed of face recognition, and further improves the efficiency of the entire face recognition process.
The above are only some of the embodiments of this application. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of this application, and such improvements and refinements shall also be regarded as falling within the scope of protection of this application.

Claims (20)

  1. A face image recognition method, comprising:
    acquiring a first face image of a user, and separately computing the matching degree between the first face image and each reference image in a database;
    obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment; if the maximum of the matching degrees falls within the threshold range, re-acquiring a second face image of the user;
    selecting the several reference images with the highest matching degrees, and performing secondary recognition on the second face image.
  2. The face image recognition method according to claim 1, wherein the step of acquiring a first face image of a user and separately computing the matching degree between the first face image and each reference image in a database comprises:
    invoking a camera to acquire the first face image of the user, and extracting feature points of the first face image;
    analyzing the reference images in the database to obtain feature points of the reference images;
    comparing the feature points of the first face image with those of all reference images in the database to generate the matching degrees.
  3. The face image recognition method according to claim 1, wherein the step of obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment comprises:
    obtaining the matching-degree edge value and the matching-degree threshold corresponding to the face recognition environment the user is in;
    generating the threshold range from the matching-degree edge value and the matching-degree threshold, wherein the matching-degree edge value is obtained in advance through testing.
  4. The face image recognition method according to claim 3, wherein before the step of generating the threshold range from the matching-degree edge value and the matching-degree threshold, the method further comprises:
    obtaining several test images and the matching-degree threshold, each test image being compared with each reference image in the database to generate test values;
    selecting test images whose test values exceed the matching-degree threshold as sample images;
    taking the test values corresponding to the misrecognized sample images as the similarity;
    subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value.
  5. The face image recognition method according to claim 4, wherein after the step of subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value, the method further comprises:
    extracting the environment feature information present in the test images;
    associating the matching-degree edge value with the environment feature information to build a matching-degree edge-value model.
  6. The face image recognition method according to claim 5, wherein before the step of re-acquiring the second face image of the user if the maximum of the matching degrees falls within the threshold range, the method further comprises:
    analyzing and extracting the environment feature information of the face recognition environment the user is in;
    inputting the environment feature information into the matching-degree edge-value model to obtain the matching-degree edge value of the current environment;
    generating the threshold range from the matching-degree threshold and the matching-degree edge value.
  7. The face image recognition method according to claim 1, wherein before the step of selecting the several reference images with the higher matching degrees and performing secondary recognition on the second face image, the method further comprises:
    counting the number of secondary recognitions performed;
    when the number of secondary recognitions exceeds a count threshold, generating prompt information advising the user to adjust their posture or change the face recognition environment, and pushing it to the user.
  8. A face image recognition apparatus, comprising:
    a computation unit, configured to acquire a first face image of a user and separately compute the matching degree between the first face image and each reference image in a database;
    an acquisition unit, configured to obtain, from the first face image, the recognition environment the user is in and the threshold range set for that environment, and, if the maximum of the matching degrees falls within the threshold range, to re-acquire a second face image of the user;
    a recognition unit, configured to select the several reference images with the higher matching degrees and perform secondary recognition on the second face image.
  9. An electronic device, the electronic device comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to perform the steps of a face image recognition method, the steps of the face image recognition method comprising:
    acquiring a first face image of a user, and separately computing the matching degree between the first face image and each reference image in a database;
    obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment; if the maximum of the matching degrees falls within the threshold range, re-acquiring a second face image of the user;
    selecting the several reference images with the highest matching degrees, and performing secondary recognition on the second face image.
  10. The electronic device according to claim 9, wherein the step of acquiring a first face image of a user and separately computing the matching degree between the first face image and each reference image in a database comprises:
    invoking a camera to acquire the first face image of the user, and extracting feature points of the first face image;
    analyzing the reference images in the database to obtain feature points of the reference images;
    comparing the feature points of the first face image with those of all reference images in the database to generate the matching degrees.
  11. The electronic device according to claim 9, wherein the step of obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment comprises:
    obtaining the matching-degree edge value and the matching-degree threshold corresponding to the face recognition environment the user is in;
    generating the threshold range from the matching-degree edge value and the matching-degree threshold, wherein the matching-degree edge value is obtained in advance through testing.
  12. The electronic device according to claim 11, wherein before the step of generating the threshold range from the matching-degree edge value and the matching-degree threshold, the method further comprises:
    obtaining several test images and the matching-degree threshold, each test image being compared with each reference image in the database to generate test values;
    selecting test images whose test values exceed the matching-degree threshold as sample images;
    taking the test values corresponding to the misrecognized sample images as the similarity;
    subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value.
  13. The electronic device according to claim 12, wherein after the step of subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value, the method further comprises:
    extracting the environment feature information present in the test images;
    associating the matching-degree edge value with the environment feature information to build a matching-degree edge-value model.
  14. The electronic device according to claim 13, wherein before the step of re-acquiring the second face image of the user if the maximum of the matching degrees falls within the set threshold range, the method further comprises:
    analyzing and extracting the environment feature information of the face recognition environment the user is in;
    inputting the environment feature information into the matching-degree edge-value model to obtain the matching-degree edge value of the current environment;
    generating the threshold range from the matching-degree threshold and the matching-degree edge value.
  15. A non-transitory computer-readable storage medium, wherein when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the steps of a face image recognition method, the steps of the face image recognition method comprising:
    acquiring a first face image of a user, and separately computing the matching degree between the first face image and each reference image in a database;
    obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment; if the maximum of the matching degrees falls within the threshold range, re-acquiring a second face image of the user;
    selecting the several reference images with the highest matching degrees, and performing secondary recognition on the second face image.
  16. The non-transitory computer-readable storage medium according to claim 15, wherein the step of acquiring a first face image of a user and separately computing the matching degree between the first face image and each reference image in a database comprises:
    invoking a camera to acquire the first face image of the user, and extracting feature points of the first face image;
    analyzing the reference images in the database to obtain feature points of the reference images;
    comparing the feature points of the first face image with those of all reference images in the database to generate the matching degrees.
  17. The non-transitory computer-readable storage medium according to claim 15, wherein the step of obtaining, from the first face image, the recognition environment the user is in and the threshold range set for that environment comprises:
    obtaining the matching-degree edge value and the matching-degree threshold corresponding to the face recognition environment the user is in;
    generating the threshold range from the matching-degree edge value and the matching-degree threshold, wherein the matching-degree edge value is obtained in advance through testing.
  18. The non-transitory computer-readable storage medium according to claim 17, wherein before the step of generating the threshold range from the matching-degree edge value and the matching-degree threshold, the method further comprises:
    obtaining several test images and the matching-degree threshold, each test image being compared with each reference image in the database to generate test values;
    selecting test images whose test values exceed the matching-degree threshold as sample images;
    taking the test values corresponding to the misrecognized sample images as the similarity;
    subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value.
  19. The non-transitory computer-readable storage medium according to claim 18, wherein after the step of subtracting the matching-degree threshold from the similarity to obtain the matching-degree edge value, the method further comprises:
    extracting the environment feature information present in the test images;
    associating the matching-degree edge value with the environment feature information to build a matching-degree edge-value model.
  20. The non-transitory computer-readable storage medium according to claim 19, wherein before the step of re-acquiring the second face image of the user if the maximum of the matching degrees falls within the set threshold range, the method further comprises:
    analyzing and extracting the environment feature information of the face recognition environment the user is in;
    inputting the environment feature information into the matching-degree edge-value model to obtain the matching-degree edge value of the current environment;
    generating the threshold range from the matching-degree threshold and the matching-degree edge value.
PCT/CN2019/116359 2019-02-13 2019-11-07 Face image recognition method, apparatus and computer device WO2020164264A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910112937.0 2019-02-13
CN201910112937.0A CN109977765A (zh) 2019-02-13 Face image recognition method, apparatus and computer device

Publications (1)

Publication Number Publication Date
WO2020164264A1 true WO2020164264A1 (zh) 2020-08-20

Family

ID=67076903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116359 WO2020164264A1 (zh) 2019-02-13 2019-11-07 人脸图像识别方法、装置及计算机设备

Country Status (2)

Country Link
CN (1) CN109977765A (zh)
WO (1) WO2020164264A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861742A (zh) * 2021-02-18 2021-05-28 咪咕文化科技有限公司 Face recognition method and apparatus, electronic device, and storage medium
CN112904451A (zh) * 2021-01-20 2021-06-04 浙江洁特智慧科技有限公司 Presence sensor
CN116524569A (zh) * 2023-05-10 2023-08-01 深圳大器时代科技有限公司 Multi-concurrent face recognition system and method based on a classification algorithm

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977765A (zh) 2019-02-13 2019-07-05 平安科技(深圳)有限公司 Face image recognition method, apparatus and computer device
CN110825765B (zh) * 2019-10-23 2022-10-04 中国建设银行股份有限公司 Face recognition method and apparatus
CN110755847B (zh) * 2019-10-30 2021-03-16 腾讯科技(深圳)有限公司 Method and apparatus for generating a virtual operation object, storage medium, and electronic apparatus
CN110843726A (zh) * 2019-12-19 2020-02-28 北京明朝万达科技股份有限公司 Automobile door lock control system and method, and vehicle
CN111191018B (zh) * 2019-12-30 2023-10-20 华为技术有限公司 Response method and apparatus for a dialogue system, electronic device, and smart device
CN111144357B (zh) * 2019-12-31 2023-12-12 北京明略软件系统有限公司 Face recognition method and apparatus
CN111626161A (zh) * 2020-05-15 2020-09-04 Oppo(重庆)智能科技有限公司 Face recognition method and apparatus, terminal, and readable storage medium
CN112580459A (zh) * 2020-12-07 2021-03-30 平安普惠企业管理有限公司 Biometrics-based service processing method and apparatus, computer device, and medium
CN112818885B (zh) * 2021-02-07 2024-05-28 中国银联股份有限公司 Face recognition method, apparatus, device, and storage medium
CN113744769A (zh) * 2021-09-06 2021-12-03 盐城市聚云网络科技有限公司 Storage device for computer information security products

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050152604A1 (en) * 2004-01-09 2005-07-14 Nucore Technology Inc. Template matching method and target image area extraction apparatus
CN103714347A (zh) * 2013-12-30 2014-04-09 汉王科技股份有限公司 Face recognition method and face recognition apparatus
CN106295672A (zh) * 2015-06-12 2017-01-04 中国移动(深圳)有限公司 Face recognition method and apparatus
CN108960145A (zh) * 2018-07-04 2018-12-07 北京蜂盒科技有限公司 Face image detection method and apparatus, storage medium, and electronic device
CN109117808A (zh) * 2018-08-24 2019-01-01 深圳前海达闼云端智能科技有限公司 Facial recognition method and apparatus, electronic device, and computer-readable medium
CN109977765A (zh) * 2019-02-13 2019-07-05 平安科技(深圳)有限公司 Face image recognition method, apparatus and computer device


Also Published As

Publication number Publication date
CN109977765A (zh) 2019-07-05

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19915067

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 05.10.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19915067

Country of ref document: EP

Kind code of ref document: A1