CN112733601B - Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology - Google Patents

Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology

Info

Publication number
CN112733601B
CN112733601B (application CN202011418718.4A)
Authority
CN
China
Prior art keywords
face
infrared
infrared temperature
characteristic
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011418718.4A
Other languages
Chinese (zh)
Other versions
CN112733601A (en)
Inventor
戚奇平
袁伟栋
卞小军
张开锋
陆志武
戚飞
汤峰
陈华平
陈祥营
刘闯闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Aisuo Electronic Co ltd
Original Assignee
Changzhou Aisuo Electronic Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Aisuo Electronic Co ltd filed Critical Changzhou Aisuo Electronic Co ltd
Priority to CN202011418718.4A priority Critical patent/CN112733601B/en
Publication of CN112733601A publication Critical patent/CN112733601A/en
Application granted granted Critical
Publication of CN112733601B publication Critical patent/CN112733601B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327Short range or proximity payments by means of M-devices
    • G06Q20/3276Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being read by the M-device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/30Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327Short range or proximity payments by means of M-devices
    • G06Q20/3278RFID or NFC payments by means of M-devices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The application discloses a face recognition method based on an AI trinocular imaging technology, belonging to the technical field of face recognition. The method comprises the following steps: acquiring and identifying the infrared temperature characteristics in an infrared thermal imaging picture; judging whether an infrared temperature characteristic unrelated to the face's infrared temperature characteristic function exists; comparing against a preset database to identify the type of shelter; acquiring the infrared temperature characteristics of the sheltered area of the face by using the shelter's temperature characteristic function; generating the infrared temperature characteristics of a virtual face; and carrying out face recognition according to the infrared temperature characteristics of the virtual face. A face recognition device based on the AI trinocular imaging technology is also disclosed. By adding an infrared biometric recognition sensor to the traditional binocular face recognition technology, the application creatively solves the infrared feature recognition of face shelters and the influence of the shelter's infrared features, obtains a virtual face, rapidly recognizes the virtual face features, and greatly improves the accuracy of face recognition.

Description

Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology
Technical Field
The present application relates to the field of face recognition technologies, and in particular, to a face recognition method and apparatus based on an AI trinocular imaging technology.
Background
Most face recognition systems on the market at present are based on binocular imaging technology. Although such systems can basically solve the problem of living-body (photo) recognition, the accuracy of face recognition drops greatly when the face is shielded by a hat, mask, scarf or similar covering, so these systems cannot meet the requirements of application scenes with strict identity recognition requirements, occasions of public health and epidemic-prevention safety, and the like.
The biggest defect of the prior art is that accurate identification of identity becomes very difficult when there is a shelter on the face. In addition, the binocular recognition technology uses only 2 cameras, so the recognition distance and range are limited and identification is restricted to the bare face; identity cannot be recognized under the influence of a shelter, and the comparison accuracy against the facial biometric recognition database is greatly reduced when a shelter is present.
Disclosure of Invention
The application provides a face recognition method and a face recognition device based on an AI trinocular imaging technology, which aim to solve the problems in the background technology.
A face recognition method based on an AI trinocular imaging technology comprises the following steps:
acquiring an infrared thermal imaging picture, and identifying infrared temperature characteristics in the infrared thermal imaging picture, wherein the infrared thermal imaging picture is a human face infrared thermal imaging picture possibly provided with a shelter;
judging whether the face area of the human face has infrared temperature characteristics irrelevant to the infrared temperature characteristic function of the human face;
if the judgment result is yes, judging that a shelter exists on the face of the human face, and acquiring the infrared temperature characteristic of the shelter area;
comparing the infrared temperature characteristics of the sheltering object area with a preset database, and identifying the type of the sheltering object, wherein the preset database comprises temperature characteristic functions of different types of sheltering objects;
acquiring the infrared temperature characteristics of the shielded area of the face part of the human face by using the temperature characteristic function of the shielding object according to the identified type of the shielding object and the current temperature value of the shielding object;
generating an infrared temperature characteristic of a virtual face according to the infrared temperature characteristic of the shielded area of the face part of the face and the infrared temperature characteristic of the unshielded area of the face;
and carrying out face recognition according to the infrared temperature characteristics of the virtual face.
Preferably, the acquiring an infrared thermal imaging picture and identifying an infrared temperature characteristic in the infrared thermal imaging picture includes:
acquiring a face infrared image and obtaining face characteristic infrared temperature data;
removing dead pixels from the face characteristic infrared temperature data;
and carrying out temperature compensation on the face characteristic infrared temperature data from which the dead pixel is removed to obtain real temperature data.
Preferably, the step of removing the dead pixel from the face feature infrared temperature data comprises:
acquiring a matrix pixel point temperature array a, and initializing i to be 0;
judging whether a[i+1] - a[i] is greater than 3;
if the judgment result is yes, a[i+1] = (a[i] + a[i-1])/2 and i = i + 1;
if the judgment result is negative, i = i + 1;
judging whether all the arrays are detected;
if the judgment result is yes, outputting the array with the dead pixel removed;
if the judgment result is negative, continuing to judge.
Preferably, the temperature compensation is performed on the face feature infrared temperature data from which the dead pixel is removed, and obtaining the real temperature data includes:
carrying out temperature compensation on the face characteristic infrared temperature data after dead pixel removal according to a temperature compensation formula, wherein the temperature compensation formula is as follows:
(The temperature compensation formula is given in the original as an image.)
T_ambient is the current ambient temperature value; T_measured is the face characteristic infrared temperature data after dead pixels are removed.
Preferably, the performing face recognition according to the infrared temperature characteristic of the virtual face includes:
carrying out face segmentation on the virtual face;
extracting a blood vessel distribution map of the segmented virtual human face;
extracting blood vessel intersection features in the blood vessel distribution map;
and comparing the blood vessel intersection point characteristics with a face recognition database to perform face recognition.
A face recognition device based on AI trinocular imaging technology comprises:
the first acquisition module is used for acquiring an infrared thermal imaging picture and identifying infrared temperature characteristics in the infrared thermal imaging picture;
the judging module is used for judging whether the infrared temperature characteristics irrelevant to the infrared temperature characteristic function of the human face exist in the human face area;
the judging module is used for judging that a shelter exists on the face of the human face and acquiring the infrared temperature characteristics of the shelter area if the judging result is yes;
the identification module is used for comparing the infrared temperature characteristics of the shelter area with a preset database and identifying the type of the shelter;
the second acquisition module is used for acquiring the infrared temperature characteristics of the shielded area of the face part of the human face by using the temperature characteristic function of the shielding object according to the identified type of the shielding object and the current temperature value of the shielding object;
the generating module is used for generating an infrared temperature characteristic of a virtual face according to the infrared temperature characteristic of the shielded area of the face part of the face and the infrared temperature characteristic of the unshielded area of the face;
and the face recognition module is used for carrying out face recognition according to the infrared temperature characteristics of the virtual face.
Preferably, the first obtaining module includes:
the first acquisition unit is used for acquiring a face infrared image and acquiring face characteristic infrared temperature data;
the dead pixel removing unit is used for removing dead pixels from the face characteristic infrared temperature data;
and the temperature compensation unit is used for performing temperature compensation on the face characteristic infrared temperature data after dead spots are removed to obtain real temperature data.
Preferably, the dead pixel removing unit includes:
the second acquisition unit is used for acquiring a matrix pixel point temperature array a and initializing i to be 0;
a first judgment unit configured to judge whether a[i+1] - a[i] is greater than 3; if the judgment result is yes, a[i+1] = (a[i] + a[i-1])/2 and i = i + 1; if the judgment result is no, i = i + 1;
and the second judgment unit is used for judging whether all the arrays are detected completely, outputting the array with the dead pixel removed if the judgment result is yes, and continuing to judge if the judgment result is no.
Preferably, the face recognition module includes:
the segmentation unit is used for carrying out face segmentation on the virtual face;
the first extraction unit is used for extracting the blood vessel distribution map of the segmented virtual human face;
a second extraction unit, configured to extract a blood vessel intersection feature in the blood vessel distribution map;
and the comparison unit is used for comparing the blood vessel intersection point characteristics with a face recognition database to perform face recognition.
According to the technical scheme, the face recognition method and the face recognition device based on the AI trinocular imaging technology are additionally provided with the infrared biological feature recognition sensor on the basis of the traditional binocular face recognition technology, the infrared feature recognition of the human face shelter and the influence of the infrared feature of the shelter are creatively solved, the virtual face is obtained, the virtual face features are rapidly recognized, and the accuracy of face recognition is greatly improved.
Drawings
Fig. 1 is a flowchart of a face recognition method based on an AI trinocular imaging technology provided in the present application;
FIG. 2 is a schematic flow chart of step S110;
FIG. 3 is a flowchart illustrating the step S220;
fig. 4 is a schematic diagram of a distributed detection fusion system in the face recognition method based on the AI trinocular imaging technology provided by the present application;
FIG. 5 is a block diagram of a decision-level fusion algorithm based on N-P criteria;
FIG. 6 is a schematic flowchart of step S160;
FIG. 7 is a flowchart of the extraction of a blood vessel map;
FIG. 8 is a schematic diagram of the extraction of a blood vessel map;
FIG. 9 is a schematic diagram of blood vessel intersection feature extraction;
fig. 10 is a schematic diagram of a face recognition apparatus based on an AI trinocular imaging technology according to the present application.
Detailed Description
The technical solutions in the embodiments will be described clearly and completely with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
On the basis of mature binocular-imaging biometric recognition, the application adds an uncooled infrared intelligent sensing camera. This infrared camera has far-infrared feature-capturing capability and senses the surface-temperature infrared feature values of the measured biometric feature (the face). At the same time, the biometric data captured by this camera can be compared with the surface biometric feature (the face) obtained by binocular imaging. When comparison of the binocular image against the database cannot identify the facial biometric features, the feature type of the shelter is further identified, and the infrared interference value of the shelter under the current environmental radiation and illumination is deduced by fast iteration. The essential characteristics of the measured biometric feature (the face) are thereby quickly restored; the virtual facial biometric features, with the shelter's infrared influence deducted, are compared against the face database, and the identity of the measured facial biometric feature is accurately identified.
As shown in fig. 1, the face recognition method based on the AI trinocular imaging technology provided by the present application includes the following steps:
step S110: acquiring an infrared thermal imaging picture, and identifying infrared temperature characteristics in the infrared thermal imaging picture, wherein the infrared thermal imaging picture is a human face infrared thermal imaging picture possibly provided with a shelter;
step S120: judging whether the face area of the human face has infrared temperature characteristics irrelevant to the infrared temperature characteristic function of the human face;
if the judgment result is yes, judging that a shelter exists on the face of the human face, and acquiring the infrared temperature characteristic of the shelter area;
step S130: comparing the infrared temperature characteristics of the sheltering object area with a preset database, and identifying the type of the sheltering object, wherein the preset database comprises temperature characteristic functions of different types of sheltering objects;
step S140: acquiring the infrared temperature characteristics of the shielded area of the face part of the human face by using the temperature characteristic function of the shielding object according to the identified type of the shielding object and the current temperature value of the shielding object;
step S150: generating an infrared temperature characteristic of a virtual face according to the infrared temperature characteristic of the shielded area of the face part of the face and the infrared temperature characteristic of the unshielded area of the face;
step S160: and carrying out face recognition according to the infrared temperature characteristics of the virtual face.
Referring to fig. 2, step S110 specifically includes:
step S210: acquiring a human face infrared image through an infrared sensor, and acquiring human face characteristic infrared temperature data;
step S220: removing dead pixels from the face characteristic infrared temperature data. The infrared thermal imager may have damaged or bad pixels; after the temperature calculation process completes, bad pixels yield wrong temperature data. A comparison method is adopted to process the bad data and eliminate points with large differences.
Referring to fig. 3, the method specifically includes:
S221: acquiring a matrix pixel-point temperature array a, and initializing i = 0;
S222: judging whether a[i+1] - a[i] is greater than 3;
S223: if the judgment result is yes, a[i+1] = (a[i] + a[i-1])/2 and i = i + 1;
S224: if the judgment result is negative, i = i + 1;
S225: judging whether the whole array has been checked;
S226: if the judgment result is yes, outputting the array with dead pixels removed;
S227: if the judgment result is negative, returning to S222 to continue the judgment.
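The comparison loop S221-S227 can be sketched in a few lines of Python. This is a sketch only: the 3-degree jump threshold comes from the patent, while starting the scan at i = 1 so that a[i-1] exists is an assumption (the patent initializes i = 0).

```python
def remove_dead_pixels(a, jump=3.0):
    """Dead-pixel removal per S221-S227: a temperature jump larger than
    `jump` between neighboring pixels marks a bad point, which is
    replaced by the mean of the two preceding values (flattened matrix)."""
    for i in range(1, len(a) - 1):            # i starts at 1 here so a[i-1] exists
        if a[i + 1] - a[i] > jump:            # S222: suspicious jump?
            a[i + 1] = (a[i] + a[i - 1]) / 2  # S223: repair from neighbors
        # S224: otherwise just advance i (the loop does this)
    return a                                  # S226: array with dead pixels removed

# e.g. a row of sensor temperatures with one bad spike
print(remove_dead_pixels([36.4, 36.5, 36.6, 48.0, 36.7]))
```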
Step S230: and carrying out temperature compensation on the face characteristic infrared temperature data from which the dead pixel is removed to obtain real temperature data. Specifically, the temperature compensation is performed on the face characteristic infrared temperature data after the dead pixel is removed according to a temperature compensation formula, wherein the temperature compensation formula is as follows:
(The temperature compensation formula is given in the original as an image.)
T_ambient is the current ambient temperature value; T_measured is the face characteristic infrared temperature data after dead pixels are removed.
The method and the device adopt a fusion optimization algorithm based on the N-P (Neyman-Pearson) criterion to judge, under a given infrared temperature sensor decision rule, whether a shelter is present on the face. As shown in FIG. 4, the application employs a distributed detection fusion system comprising a fusion center and a plurality of sensors. Each sensor sends its independent detection decision to the fusion center, and the fusion center fuses the sensor decisions according to a preset rule to obtain the final decision u_0. For the distributed fusion detection system of FIG. 4, H_0 denotes the null hypothesis that no target is present and H_1 the alternative hypothesis that a target is present; X_i denotes the observation of the i-th sensor and x_i a value of X_i; U_i denotes the decision of the i-th sensor and u_i a value of U_i, where u_i = 1 means the i-th sensor judges the target present and u_i = 0 that it is absent. The sensor decisions form a decision vector U = (U_1, U_2, ..., U_i), with u = (u_1, u_2, ..., u_i) a value of U. The fusion center performs a hypothesis test on the decision vector u to obtain the final decision result u_0.
Let P_Di and P_Fi denote the detection probability and false-alarm probability of the i-th sensor, respectively: P_Di is the probability that the target is present and the decision is that the target is present; P_Fi is the probability that the target is absent but the decision is that the target is present. The detection probability P_D and false-alarm probability P_F of the distributed fusion detection system are calculated as:

P_D = Σ_u P(u_0 = 1 | u) · P(u | H_1)    (Equation 1)

P_F = Σ_u P(u_0 = 1 | u) · P(u | H_0)    (Equation 2)
where P(u | H_1) is the probability of decision vector u when the target is present, P(u | H_0) the probability of u when the target is absent, and P(u_0 = 1 | u) the conditional probability that the fusion center decides the target is present given u.
Assuming that the observations of the respective sensors are mutually independent, the following holds by Bayes' theorem:

P(u | H_1) = ∏_i [P_Di]^(u_i) · [1 - P_Di]^(1 - u_i)    (Equation 3)

P(u | H_0) = ∏_i [P_Fi]^(u_i) · [1 - P_Fi]^(1 - u_i)    (Equation 4)

where P_Di and P_Fi are the detection probability and false-alarm probability of the i-th sensor.
For the distributed detection fusion system, the optimization criterion is: subject to the false-alarm constraint P_F ≤ α (0 < α < 1), maximize the detection probability P_D by the N-P criterion. Under an arbitrary false-alarm probability constraint, the optimal fusion rule that maximizes P_D is:

u_0 = 1 if T(u) > λ;  u_0 = 1 with probability W if T(u) = λ;  u_0 = 0 if T(u) < λ    (Equation 5)

T(u) = P(u | H_1) / P(u | H_0)    (Equation 6)

where u_0 is the final decision of the fusion center, T(u) is the likelihood ratio, λ is the decision threshold set by the fusion center, and W is a randomization factor satisfying 0 < W < 1.
Substituting Equation 5 into Equations 1 and 2 yields the false-alarm probability P_F and detection probability P_D of the whole fusion system:

P_F = Σ_{u: T(u) > λ} P(u | H_0) + W · Σ_{u: T(u) = λ} P(u | H_0)    (Equation 7)

P_D = Σ_{u: T(u) > λ} P(u | H_1) + W · Σ_{u: T(u) = λ} P(u | H_1)    (Equation 8)

where P(u | H_1) and P(u | H_0) are obtained from Equations 3 and 4, respectively.
Given the fusion-system false-alarm probability P_F, substituting it into Equation 7 yields the fusion center's decision threshold λ and randomization factor W; substituting λ and W into Equation 8 then yields the detection probability P_D of the fusion system.
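A minimal numerical sketch of Equations 3-8, assuming conditionally independent sensors; the probabilities in the example are illustrative only and do not come from the patent.

```python
import itertools
import numpy as np

def np_fusion_performance(p_d, p_f, lam, w):
    """Fusion-system P_D and P_F (Equations 7-8) by enumerating all
    decision vectors u. p_d/p_f are per-sensor detection and false-alarm
    probabilities, lam the threshold, w the randomization factor."""
    p_d, p_f = np.asarray(p_d, float), np.asarray(p_f, float)
    P_D = P_F = 0.0
    for bits in itertools.product([0, 1], repeat=len(p_d)):
        u = np.array(bits)
        p_u_h1 = np.prod(p_d**u * (1 - p_d)**(1 - u))  # Equation 3
        p_u_h0 = np.prod(p_f**u * (1 - p_f)**(1 - u))  # Equation 4
        t = p_u_h1 / p_u_h0                            # likelihood ratio, Equation 6
        accept = 1.0 if t > lam else (w if t == lam else 0.0)  # Equation 5
        P_D += accept * p_u_h1
        P_F += accept * p_u_h0
    return P_D, P_F

# e.g. three sensors (binocular pair plus infrared), illustrative values
print(np_fusion_performance([0.95, 0.93, 0.90], [0.05, 0.06, 0.08], lam=1.0, w=0.5))
```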
The distributed detection fusion theory based on the N-P criterion is applied to decision-level fusion of infrared images and visible light images, and a fusion system structure is constructed on the basis of ensuring the mutual independence of the infrared images and the visible light images, and the basic idea is as follows:
and in the fusion center of the system, comprehensively processing the judgment results of all the sensors according to a certain preset rule to finally obtain the judgment result of the fusion system. The concrete implementation is as follows:
1. From the features of the target candidate regions in the infrared image and the visible-light image, compute the characteristic condition functions of the two sensors detecting the target, and obtain the target detection probabilities of the infrared-image and visible-light-image target detection systems from the distributed detection decision-level fusion algorithm based on the N-P criterion; denote these P_D1 and P_D2, respectively. Weighting coefficients are determined from the detection probabilities, and the fusion weight is determined from the feature values extracted by the binocular imaging technique. A matching-degree measure (its symbol is given in the original as an image; denoted ρ here) represents the condition that the face features are detected by binocular imaging:

ρ ≥ θ

where θ is a matching-degree threshold. The higher ρ is, the more complete the face-image feature values obtained by the binocular imaging technique, and the higher the fusion weight given to the binocular imaging technique in the final fusion algorithm; otherwise, the infrared imaging technique obtains the higher weight.
2. The fused target is finally determined by weighted summation of the two recognition results (the exact expression is given in the original as an image), where I_1 denotes the recognition result obtained by the binocular imaging technique and I_2 the recognition result obtained by the infrared imaging technique. A block diagram of the decision-level fusion algorithm based on the N-P criterion is shown in FIG. 5.
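As a sketch of this weighted fusion: the exact expression appears in the original only as an image, so weights proportional to the subsystem detection probabilities P_D1 and P_D2 are an assumption here.

```python
def fuse_decisions(i1, i2, pd1, pd2):
    """Decision-level fusion by weighted summation of the binocular
    result I1 and the infrared result I2."""
    w1 = pd1 / (pd1 + pd2)    # weight of binocular imaging
    w2 = pd2 / (pd1 + pd2)    # weight of infrared imaging
    return w1 * i1 + w2 * i2  # fused recognition score
```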
Referring to fig. 6, step S160 specifically includes:
s161: carrying out face segmentation on the virtual face;
the infrared human face segmentation algorithm based on Bayesian classification belongs to a segmentation algorithm based on data driving, and the probability of statistics is used for determining whether a certain pixel point is human face skin or background.
In the pattern classification problem, it is desirable to minimize errors in classification. By using Bayes formula in probability theory, the classification rule with the minimum error rate can be obtained, which is called Bayes decision with minimum error rate. For each pixel point of the face, the Bayesian formula is used for solving the probability that the point is the face, if the point is the face point, the point is represented by f (face), and if the point is the background point, the point is represented by b (background).
Using the face-region probability distribution f(x_j | f) and the background probability distribution f(x_j | b) obtained from training images, the probability that each pixel point of the test image is a face point follows from the Bayesian formula:

P(f | x_j) = f(x_j | f) P(f) / [ f(x_j | f) P(f) + f(x_j | b) P(b) ]
where:

accept x as face if: f(x_j | f) P(f) > f(x_j | b) P(b)

accept x as background if: f(x_j | f) P(f) ≤ f(x_j | b) P(b)
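A minimal sketch of the per-pixel minimum-error-rate decision, assuming the per-pixel feature x_j is the pixel intensity and that the class-conditional densities are stored as 256-bin histograms; the patent specifies neither, so both are assumptions.

```python
import numpy as np

def segment_face(img, hist_face, hist_bg, p_face=0.3):
    """Bayes segmentation of an 8-bit infrared image: mark a pixel as
    face skin when f(x|f)P(f) > f(x|b)P(b)."""
    x = img.astype(int)                   # pixel intensity as feature x_j
    face_score = hist_face[x] * p_face    # f(x|f) P(f)
    bg_score = hist_bg[x] * (1 - p_face)  # f(x|b) P(b)
    return face_score > bg_score          # boolean face-skin mask
```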
s162: extracting a blood vessel distribution map of the segmented virtual human face;
the intersections of the face vascular network are selected as face feature points, which are proven to have unique characteristics. The points correspond to the distribution of the blood vessels of the human face, and the distribution of the blood vessels of the human face cannot be changed, so that the position relations of the characteristic points of the infrared human face image of the same person under different expressions at different times and different places are the same. And using the position relation between the feature point and the midpoint of the two eyes as a feature vector. As shown in fig. 7 and 8: the extraction process of the blood vessel distribution map comprises the following steps: acquiring a virtual face; the erosion algorithm removes small bright details in the image and simultaneously weakens the brightness of the image; the image brightness is increased through expansion operation, and the integral gray value and the large bright area of the image are basically not influenced; all the structural elements of 3-by-3 are 1, and the brighter region of the face part in the graph is face blood vessel distribution; and acquiring a human face blood vessel distribution map.
S163: extracting blood vessel intersection features in the blood vessel distribution map;
As shown in fig. 9, in the thinned binary blood-vessel image, 1 denotes a blood vessel and 0 a non-vessel part. The 8-neighborhood (N_0, N_1, ..., N_7) of each pixel point is examined; points matching one of the following 13 forms are blood-vessel intersections.
(The 13 neighborhood patterns, forms (1) through (13), are given in the original as images.)
Blood-vessel intersection feature extraction steps (a code sketch follows the list):
step 1: thin the blood-vessel image, reducing the face blood vessels to single-pixel width.
step 2: coarse extraction of feature points: extract points whose neighborhood pixel-value sum is greater than 2.
step 3: remove pseudo feature points: check whether each point extracted in step 2 matches one of the 13 forms proposed in the feature-point definition, and remove points that do not.
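A sketch of steps 1-2. Step 3's pattern check is omitted because the 13 forms appear in the original only as images; skeletonization stands in for the unspecified thinning method, and reading "area pixel values" as the 8-neighborhood sum is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def coarse_intersections(binary_vessels):
    """Thin vessels to single-pixel width, then keep skeleton pixels
    whose 8-neighborhood sum exceeds 2 (branch/crossing candidates)."""
    skel = skeletonize(binary_vessels > 0).astype(np.uint8)  # step 1
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])                           # 8-neighborhood mask
    nbr_sum = convolve(skel, kernel, mode='constant', cval=0)
    return (skel == 1) & (nbr_sum > 2)                       # step 2
```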
S164: and comparing the blood vessel intersection point characteristics with a face recognition database to perform face recognition.
After the intersection coordinates are obtained, the positional relation between each feature point and the midpoint of the two eyes is used as the feature vector. Let there be N feature points p_1, p_2, ..., p_n in the face image, with feature point p_n = (x_n, y_n), left-eye feature point (x_le, y_le) and right-eye feature point (x_re, y_re). The distance between the two eyes is:

d = sqrt((x_le - x_re)^2 + (y_le - y_re)^2)

The feature vector of feature point p_n is then the offset of p_n from the midpoint of the two eyes, normalized by d (the exact expression is given in the original as an image):

v_n = ( (x_n - (x_le + x_re)/2) / d , (y_n - (y_le + y_re)/2) / d )
When matching feature vectors, the feature vectors v_1, ..., v_n are used to identify infrared face images.
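A sketch of the feature construction and a simple matching score. The normalized-offset form of v_n follows the reconstruction above; the Euclidean nearest-neighbor comparison is an assumption, since the patent does not name a matching metric.

```python
import numpy as np

def eye_normalized_features(points, left_eye, right_eye):
    """Offsets of intersection points from the eye midpoint, scaled by
    the inter-eye distance d."""
    le, re = np.asarray(left_eye, float), np.asarray(right_eye, float)
    d = np.linalg.norm(le - re)      # distance between the two eyes
    mid = (le + re) / 2.0            # midpoint of the two eyes
    return (np.asarray(points, float) - mid) / d

def match_score(feats_a, feats_b):
    """Mean nearest-neighbor distance between two feature sets;
    lower means more similar."""
    dists = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```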
Referring to fig. 10, the present application further provides a face recognition apparatus based on the AI trinocular imaging technology, including:
the first acquisition module is used for acquiring an infrared thermal imaging picture and identifying infrared temperature characteristics in the infrared thermal imaging picture;
the judging module is used for judging whether the infrared temperature characteristics irrelevant to the infrared temperature characteristic function of the human face exist in the human face area;
the judging module is used for judging that a shelter exists on the face of the human face and acquiring the infrared temperature characteristics of the shelter area if the judging result is yes;
the identification module is used for comparing the infrared temperature characteristics of the shelter area with a preset database and identifying the type of the shelter;
the second acquisition module is used for acquiring the infrared temperature characteristics of the shielded area of the face part of the human face by using the temperature characteristic function of the shielding object according to the identified type of the shielding object and the current temperature value of the shielding object;
the generating module is used for generating an infrared temperature characteristic of a virtual face according to the infrared temperature characteristic of the shielded area of the face part of the face and the infrared temperature characteristic of the unshielded area of the face;
and the face recognition module is used for carrying out face recognition according to the infrared temperature characteristics of the virtual face.
Specifically, the first obtaining module includes:
the first acquisition unit is used for acquiring a face infrared image and acquiring face characteristic infrared temperature data;
the dead pixel removing unit is used for removing dead pixels from the face characteristic infrared temperature data;
and the temperature compensation unit is used for performing temperature compensation on the face characteristic infrared temperature data after dead spots are removed to obtain real temperature data.
Specifically, the dead pixel removing unit includes:
the second acquisition unit is used for acquiring a matrix pixel point temperature array a and initializing i to be 0;
a first judgment unit configured to judge whether a[i+1] - a[i] is greater than 3; if the judgment result is yes, a[i+1] = (a[i] + a[i-1])/2 and i = i + 1; if the judgment result is no, i = i + 1;
and the second judgment unit is used for judging whether all the arrays are detected completely, outputting the array with the dead pixel removed if the judgment result is yes, and continuing to judge if the judgment result is no.
Specifically, the face recognition module includes:
the segmentation unit is used for carrying out face segmentation on the virtual face;
the first extraction unit is used for extracting the blood vessel distribution map of the segmented virtual human face;
a second extraction unit, configured to extract a blood vessel intersection feature in the blood vessel distribution map;
and the comparison unit is used for comparing the blood vessel intersection point characteristics with a face recognition database to perform face recognition.
In addition, the face recognition method based on the AI trinocular imaging technology can conveniently interface with various intelligent consumption modes, such as RFID card-swiping consumption, two-dimensional-code consumption, face consumption and the like.
(1) RFID card-swiping consumption mode: the card-swiping mode currently supports the common Mifare One card and the CPU card, which offers better security.
Card-swiping principle: RFID is a short-range, high-frequency wireless communication technology that allows contactless point-to-point data transmission between electronic devices within about ten centimeters. It is low-cost, convenient, easy to use and intuitive.
The RFID card and the card reader each contain an induction coil. When the RFID card approaches the reader, the induction coupling coil in the reader acts as the primary coil of a transformer and powers the passive RFID identification card, whose coil acts as the secondary coil. Meanwhile, the chip in the RFID card modulates the information stored in it onto the card's coil (the modulation principle is to vary the coil's impedance regularly, so that the load on the primary coil varies regularly); by detecting this impedance-variation pattern, the reader reads the information in the RFID card.
The card swiping consumption implementation mode comprises the following steps:
and the card swiping module is connected with an AI intelligent consumption terminal based on the trinocular imaging identification technology through a serial port to exchange data. When a card is close to the card swiping module, the card information is verified and read by using the secret key, the read data is transmitted to the AI intelligent terminal after the verification is successful, the terminal processes the data, corresponding money deduction and record saving are carried out according to the amount of money consumed by the terminal, and the data are uploaded to a cloud platform and other operations.
(2) Two-dimensional-code consumption: a customer without a card can pay by scanning a two-dimensional code directly. The code uses a custom format, is encrypted and carries a timestamp to prevent copying and fraud, making it safer than an ordinary two-dimensional code.
Two-dimensional-code principle: data symbol information is recorded by black-and-white patterns distributed on a plane (in two dimensions) according to a specific rule, using particular geometric figures. Code construction cleverly uses the '0'/'1' bit-stream concept underlying computer logic, representing character and numeric information with geometric shapes corresponding to binary; the information is read automatically by an image input device or photoelectric scanning device to realize automatic information processing. It shares some commonalities with barcode technology: each code system has its specific character set; each character occupies a certain width; it has certain checking functions, and so on. It can also automatically recognize information in different rows and handle rotation of the graphic.
The two-dimension code consumption implementation mode is as follows:
and the two-dimensional code module is connected with the three-eye imaging face recognition AI intelligent consumption terminal through a serial port. When the AI terminal scans the two-dimensional code, the current two-dimensional code is identified and the identification result is sent to the AI terminal, the scanned two-dimensional code is analyzed by the AI terminal and uploaded to the cloud platform for validity comparison, the cloud platform compares the result, and the account information validity and other results of the two-dimensional code are returned to the AI intelligent terminal. And the terminal carries out corresponding processing according to the result, the consumption is successful or fails, and if the consumption is successful, the AI terminal uploads a consumption success record.
(3) Face consumption: the working principle of face recognition can be divided into the following steps:
detecting the position of a face in the image;
registering the face, positioning the coordinates of key points of facial features, and labeling;
identifying attributes of the face, such as gender, age, posture, expression and the like;
extracting the characteristics of the human face, namely converting a human face image into a series of numerical values with fixed length;
comparing the faces, and measuring the similarity between the two faces;
face verification, namely judging whether the two faces are the same person;
identifying a face, namely identifying the identity corresponding to the input face image;
searching a face, namely searching and inputting a face sequence similar to the face in a face library;
clustering human faces, namely grouping the human faces in a set according to identities;
face liveness detection, which judges whether a face image comes from a real person or from a spoofing artifact such as a printed photo or a video;
The identity authentication process based on face-recognition consumption is implemented as follows.
First, a picture is uploaded to the server. The server extracts the facial features and writes them into a file; the file has a unique identification code, called the face token, which represents an identity. The server responds with the features and the identification as JSON data.
Login verification is a comparison (matching) process: a photo is taken on the terminal or selected from the album and uploaded to the server; the server first extracts the facial features and then compares them with the facial features extracted when the account was registered; if the similarity reaches a certain level, the server considers it the same person, the verification passes, and the account is entered.
Recognizing the identities of everyone in a group photo follows the same principle as login verification, with one extra process: the server detects that there are several people in the picture and feeds back the detected facial features and identification codes as a JSON array; these are matched one by one by traversing the identification codes, and finally the results are aggregated and fed back to the user.
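A sketch of the register-then-verify flow. The endpoint URLs, field names and the 0.8 threshold are all hypothetical; the patent describes only that the server returns the facial features and face token as JSON.

```python
import requests

API = "https://example.com/api/face"   # hypothetical server

def register_face(image_path):
    """Upload a picture; the server extracts features and returns a
    unique face_token plus the features as JSON."""
    with open(image_path, "rb") as f:
        data = requests.post(f"{API}/register", files={"image": f}).json()
    return data["face_token"], data["features"]

def verify_face(image_path, face_token, threshold=0.8):
    """Upload a new photo and compare it against the registered identity."""
    with open(image_path, "rb") as f:
        data = requests.post(f"{API}/compare", files={"image": f},
                             data={"face_token": face_token}).json()
    return data["similarity"] >= threshold   # same person?
```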
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (3)

1. A face recognition method based on AI trinocular imaging technology is characterized by comprising the following steps:
acquiring an infrared thermal imaging picture, and identifying infrared temperature characteristics in the infrared thermal imaging picture, wherein the infrared thermal imaging picture is a human face infrared thermal imaging picture possibly provided with a shelter;
judging whether the face area of the human face has infrared temperature characteristics irrelevant to the infrared temperature characteristic function of the human face;
if the judgment result is yes, judging that a shelter exists on the face of the human face, and acquiring the infrared temperature characteristic of the shelter area;
comparing the infrared temperature characteristics of the sheltering object area with a preset database, and identifying the type of the sheltering object, wherein the preset database comprises temperature characteristic functions of different types of sheltering objects;
acquiring the infrared temperature characteristics of the shielded area of the face part of the human face by using the temperature characteristic function of the shielding object according to the identified type of the shielding object and the current temperature value of the shielding object;
generating an infrared temperature characteristic of a virtual face according to the infrared temperature characteristic of the shielded area of the face part of the face and the infrared temperature characteristic of the unshielded area of the face;
according to the infrared temperature characteristics of the virtual human face, human face recognition is carried out;
the acquiring of the infrared thermal imaging picture and the recognizing of the infrared temperature characteristics in the infrared thermal imaging picture comprise:
acquiring a face infrared image and obtaining face characteristic infrared temperature data;
removing dead pixels from the face characteristic infrared temperature data;
carrying out temperature compensation on the face characteristic infrared temperature data after dead spots are removed to obtain real temperature data;
the step of carrying out temperature compensation on the face characteristic infrared temperature data after dead pixels are removed to obtain real temperature data comprises the following steps:
carrying out temperature compensation on the face characteristic infrared temperature data after dead pixel removal according to a temperature compensation formula, wherein the temperature compensation formula is as follows:
(The temperature compensation formula is given in the original as an image.)
T_ambient is the current ambient temperature value; T_measured is the face characteristic infrared temperature data after dead pixels are removed.
2. The AI-trinocular-imaging-technology-based face recognition method of claim 1, wherein the removing of dead spots from the face-feature infrared temperature data comprises:
acquiring a matrix pixel point temperature array a, and initializing i to be 0;
judging whether a[i+1] - a[i] is greater than 3;
if the judgment result is yes, a[i+1] = (a[i] + a[i-1])/2 and i = i + 1;
if the judgment result is negative, i = i + 1;
judging whether all the arrays are detected;
if the judgment result is yes, outputting the array with the dead pixel removed;
if the judgment result is negative, continuing to judge.
3. The AI-trinocular-imaging-technology-based face recognition method of claim 1, wherein performing face recognition according to the infrared temperature characteristics of the virtual face comprises:
carrying out face segmentation on the virtual face;
extracting a blood vessel distribution map of the segmented virtual human face;
extracting blood vessel intersection features in the blood vessel distribution map;
and comparing the blood vessel intersection point characteristics with a face recognition database to perform face recognition.
CN202011418718.4A 2020-12-07 2020-12-07 Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology Active CN112733601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011418718.4A CN112733601B (en) 2020-12-07 2020-12-07 Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011418718.4A CN112733601B (en) 2020-12-07 2020-12-07 Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology

Publications (2)

Publication Number Publication Date
CN112733601A CN112733601A (en) 2021-04-30
CN112733601B (en)

Family

ID=75598335

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011418718.4A Active CN112733601B (en) 2020-12-07 2020-12-07 Face recognition method and device based on AI (Artificial Intelligence) trinocular imaging technology

Country Status (1)

Country Link
CN (1) CN112733601B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514432B (en) * 2012-06-25 2017-09-01 诺基亚技术有限公司 Face feature extraction method, equipment and computer program product
CN103246883B (en) * 2013-05-20 2016-02-17 中国矿业大学(北京) A kind of underground coal mine thermal infrared images face identification method
CN111598047B (en) * 2020-05-28 2023-06-27 重庆康普达科技有限公司 Face recognition method
CN111860428A (en) * 2020-07-30 2020-10-30 上海华虹计通智能系统股份有限公司 Face recognition system and method

Also Published As

Publication number Publication date
CN112733601A (en) 2021-04-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant