CN116311479B - Face recognition method, system and storage medium for unlocking automobile - Google Patents

Face recognition method, system and storage medium for unlocking automobile

Info

Publication number
CN116311479B
CN116311479B (application CN202310547918.7A)
Authority
CN
China
Prior art keywords: representing, unit, module, OBG, feature map
Prior art date
Legal status
Active
Application number
CN202310547918.7A
Other languages
Chinese (zh)
Other versions
CN116311479A (en)
Inventor
朱文忠
刘峪
包德帅
何鑫
尹鑫淼
李�杰
张智柯
潘磊
何海东
Current Assignee
Sichuan University of Science and Engineering
Original Assignee
Sichuan University of Science and Engineering
Priority date
Filing date
Publication date
Application filed by Sichuan University of Science and Engineering
Priority to CN202310547918.7A
Publication of CN116311479A
Application granted
Publication of CN116311479B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a face recognition method, system and storage medium for unlocking an automobile, belonging to the technical field of face recognition. The face recognition method comprises the steps of obtaining an on-site face image, inputting it into a trained recognition algorithm model, extracting features sequentially through an initial convolution layer and OBG modules, generating a reference vector through a normalization module, and calculating the distance between the reference vector and preset sample vectors. The attention units in the OBG module can calibrate themselves using the original input information, which effectively reduces the interference of the other parts of the model on the attention units and makes the model more robust. Feature information is transmitted through the model in a richer manner, so the model fits the image information of diverse actual scenes better and achieves higher recognition accuracy in actual application environments.

Description

Face recognition method, system and storage medium for unlocking automobile
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a face recognition method, a face recognition system and a storage medium for unlocking an automobile.
Background
At present, face recognition systems are installed in many vehicle models. They can recognize the identity of a user from face information to unlock the vehicle, and can also monitor driver fatigue to improve driving safety. However, because a car is a moving platform, the environment around it changes constantly, so the lighting conditions inside the car are highly uncertain. Although many automobile manufacturers claim that the recognition accuracy of their face recognition systems exceeds 99%, under the lighting conditions of actual application scenes the achieved recognition performance generally falls far below the advertised figures. An improved face recognition method for automobiles, better suited to practical application, is therefore needed.
Disclosure of Invention
Aiming at the above defects of the prior art, the invention provides a face recognition method, system and storage medium for unlocking an automobile, so as to improve the recognition accuracy of on-site face images captured by a vehicle-mounted camera in actual scenes.
In order to achieve the above object, the present invention adopts the following solution: a face recognition method for unlocking an automobile, comprising the steps of:
step 1, acquiring an on-site face image captured by a vehicle-mounted camera, and inputting the on-site face image into a trained recognition algorithm model, wherein the recognition algorithm model comprises an initial convolution layer, an OBG module and a normalization module arranged in sequence;
wherein, the internal operation process of the OBG module is expressed as the following mathematical model:
X1 = F(X)
X2a = C1(X1), X2b = C2(X1)
X3a = A1(X2a, X) ⊙ X2a, X3b = A2(X2b, X) ⊙ X2b
X4 = G(X3a, X3b)

wherein X represents the feature map input into said OBG module; F represents the built-in feature extraction unit, and X1 represents the first-level feature map output by the built-in feature extraction unit; C1 and C2 represent the first compression unit and the second compression unit respectively, X2a represents the first secondary feature map output by the first compression unit, and X2b represents the second secondary feature map output by the second compression unit; A1 and A2 represent the first attention unit and the second attention unit respectively, ⊙ represents the element-wise product operation, and X3a and X3b represent the first tertiary feature map and the second tertiary feature map respectively; G represents the feature fusion unit, and X4 represents the four-level feature map output by the feature fusion unit;
step 2, extracting features from the on-site face image through the initial convolution layer, and outputting a shallow image feature map;
step 3, inputting the shallow image feature map into the OBG module, which after its operations generates and outputs a deep feature map;
step 4, inputting the deep feature map into the normalization module, which then generates a reference vector corresponding to the on-site face image;
step 5, calculating the distance between the reference vector and preset sample vectors; the identity corresponding to the sample vector that is closest to the reference vector, with a distance smaller than a preset threshold, is the identity of the on-site face image.
Further, five end-to-end connected OBG modules are provided in the recognition algorithm model, the shallow image feature map is used as an input of the first OBG module, and the deep feature map is a feature map output by the fifth OBG module.
Further, the built-in feature extraction unit comprises at least one residual block.
Further, the first compression unit comprises a compression convolution layer and a first compression activation layer arranged in sequence, and the stride of the compression convolution layer is 2; the second compression unit comprises a compression pooling layer and a second compression activation layer arranged in sequence, and the stride of the compression pooling layer is 2.
Further, the first attention unit and the second attention unit have the same internal operation process, which is expressed as the following mathematical model:
v1 = Pmax(Y)
v2 = σ1(Pmax(Y) + Pmax(X))
v3 = σ2(v1 ⊙ v2)

wherein Y represents the first secondary feature map or the second secondary feature map; Pmax represents global max pooling applied to each layer of a feature map; v1 represents the first calibration vector; X represents the feature map input into the OBG module; σ1 represents the first calibration activation function, and v2 represents the second calibration vector output by the first calibration activation function; ⊙ represents the element-wise product operation; σ2 represents the second calibration activation function, and v3 represents the third calibration vector output by the first or second attention unit.
In the OBG module, the third calibration vector output by the first attention unit is multiplied element-wise with the first secondary feature map, thereby calibrating the first secondary feature map and generating the first tertiary feature map. Likewise, the third calibration vector output by the second attention unit is multiplied element-wise with the second secondary feature map to calibrate it, generating the second tertiary feature map.
Further, a third attention unit is also provided in the OBG module, and its internal operation process is expressed as the following mathematical model:
v4 = σ3([v2(1); v2(2)])

wherein v2(1) represents the second calibration vector generated in the first attention unit, v2(2) represents the second calibration vector generated in the second attention unit, [ ; ] represents the concatenation operation, σ3 represents the third calibration activation function, and v4 represents the fourth calibration vector output by the third attention unit; the fourth calibration vector is multiplied element-wise with the four-level feature map to calibrate the four-level feature map.
Further, the feature fusion unit comprises a concatenation layer, a fusion convolution layer and a fusion activation layer arranged in sequence.
The invention also provides a face recognition system for unlocking the automobile, which comprises a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the face recognition method for unlocking the automobile by loading the computer program.
The invention also provides a storage medium, wherein the storage medium is stored with a computer program, and the computer program realizes the face recognition method for unlocking the automobile when being executed by a processor.
The beneficial effects of the invention are as follows:
(1) In the invention, a compression convolution layer and a compression pooling layer are arranged in the OBG module, so that the height and width of the feature map are gradually reduced in two differentiated ways, which greatly enhances the flexibility of the model in perceiving different features. The feature maps output by the first compression unit and the second compression unit are calibrated by the first attention unit and the second attention unit respectively. When several OBG modules are arranged in series, the model can more accurately distinguish occluded areas from face image areas while the height and width of the feature map are gradually reduced, and can focus on the face image areas;
(2) In the prior art, many models use an attention module as a calibration mechanism to adjust different features in the hope of improving feature extraction; however, the effect of the attention module is itself influenced by the other parts of the model, its performance is unstable, and it can even have a negative effect. In the present invention, the feature map X input into the OBG module is introduced into the attention units, so that the attention units in the OBG module can calibrate themselves using the original input information, which effectively reduces the interference brought to the attention units by the other parts (the built-in feature extraction unit, the first compression unit and the second compression unit) and makes the model more robust. Furthermore, introducing the feature map X into the attention units lets part of the information at the front end of the OBG module skip the built-in feature extraction unit, the first compression unit and the second compression unit and be transmitted backwards; feature information is thus transmitted through the model in a richer manner, the model fits the image information of diverse actual scenes better, and the recognition accuracy in actual application environments is higher;
(3) With the third attention unit arranged in the OBG module, the second calibration vectors generated in the first attention unit and the second attention unit serve not only as self-calibration information of those attention units but also as calibration information for the four-level feature map. This forms a multi-level, progressive calibration of the feature stream inside the OBG module and realizes all-round regulation of the feature stream. When the input face image is partially occluded (whether by an external object or by uneven illumination), the model can deeply and fully mine the local face image information under the coordinated guidance of the first, second and third attention units. Tests show that with this structure in the OBG module, the face recognition accuracy in actual scenes is effectively improved.
Drawings
FIG. 1 is a schematic flow chart of the recognition algorithm model of the present invention;
FIG. 2 is a schematic diagram of the internal operation flow of the OBG module according to the present invention;
FIG. 3 is a schematic diagram of the internal operation flow of the first attention unit according to the present invention;
FIG. 4 is a schematic diagram of the internal operation flow of the normalization module according to the present invention;
FIG. 5 is a schematic diagram of the internal operation flow of the OBG module of comparative example 1;
FIG. 6 is a schematic diagram of the internal operation flow of the first attention unit of comparative example 1;
FIG. 7 is a schematic diagram of the internal operation flow of the OBG module of comparative example 2;
In the accompanying drawings: 1 - on-site face image; 2 - initial convolution layer; 3 - OBG module; 31 - built-in feature extraction unit; 32 - first compression unit; 33 - second compression unit; 34 - first attention unit; 35 - second attention unit; 36 - third attention unit; 37 - feature fusion unit; 4 - normalization module.
Detailed Description
Examples:
the following describes the face recognition method and the algorithm model internal operation process provided by the invention in more detail by referring to the accompanying drawings.
Fig. 1 shows the flow of the face recognition algorithm model provided by the invention. The size (height × width × channels) of the on-site face image 1 is denoted m × n × v. The initial convolution layer 2 has a 3×3 convolution kernel and uses stride 1; when it performs the convolution operation on the on-site face image 1, only the channel dimension changes, and the output shallow image feature map has size m × n × 24.
The number of OBG modules 3 is set to five. For each OBG module 3, the height and width of the four-level feature map output at its end are half the height and width of the feature map input into the module, and the channel count of the four-level feature map is twice the channel count of the input feature map. The input and output feature map sizes of each OBG module 3 are therefore as shown in Table 1.
Table 1: input and output feature map sizes of each OBG module 3

Module  | Input size          | Output size
OBG 1   | m × n × 24          | m/2 × n/2 × 48
OBG 2   | m/2 × n/2 × 48      | m/4 × n/4 × 96
OBG 3   | m/4 × n/4 × 96      | m/8 × n/8 × 192
OBG 4   | m/8 × n/8 × 192     | m/16 × n/16 × 384
OBG 5   | m/16 × n/16 × 384   | m/32 × n/32 × 768
The internal structure and operation process of each OBG module 3 are the same. As shown in fig. 2, taking the first OBG module 3 as an example, the internal built-in feature extraction unit 31 includes one residual block. The residual block is a conventional structure comprising a first 3×3 convolution layer (stride 1), a first ReLU activation layer, a second 3×3 convolution layer (stride 1) and a second ReLU activation layer arranged in sequence; a residual connection is provided inside the block, through which the feature map at the front end of the residual block is added to the feature map at its rear, and the sum is used as the output of the whole residual block. The convolution layers in the residual block do not change the feature map size, so the feature maps input into and output from the built-in feature extraction unit 31 are both of size m × n × 24. In some other embodiments, to further improve the nonlinear fitting capability of the model, 2 or more residual blocks may be arranged in the built-in feature extraction unit 31, connected end to end in sequence.
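For illustration, the residual block described above can be sketched in PyTorch as follows. This is a minimal sketch rather than code from the patent; the class name and the padding choice (padding = 1, which keeps the 3×3 convolutions size-preserving, as the text requires) are illustrative assumptions:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions (stride 1), each followed by a ReLU, plus a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu2(self.conv2(self.relu1(self.conv1(x))))
        # Residual connection: the feature map at the front end of the block is
        # added to the feature map at the rear and used as the block output.
        return x + out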
In the present embodiment, the first compression unit 32 includes a compression convolution layer (kernel size 3×3, stride 2) and a first compression activation layer (ReLU function) arranged in sequence, and the second compression unit 33 includes a compression pooling layer (pooling window 2×2, stride 2) and a second compression activation layer (ReLU function). The first compression unit 32 and the second compression unit 33 each compress the height and width of the feature map to half of the original. Still taking the first OBG module 3 as an example, the feature maps output by the first compression unit 32 and the second compression unit 33 are each of size m/2 × n/2 × 24.
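Under the same caveat, the two compression units can be sketched as follows; whether the compression pooling layer uses max or average pooling is not stated above, so max pooling is an assumption here:

import torch
import torch.nn as nn

class FirstCompressionUnit(nn.Module):
    """Compression convolution (3x3, stride 2) followed by a ReLU; halves height and width."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

class SecondCompressionUnit(nn.Module):
    """Compression pooling (2x2 window, stride 2) followed by a ReLU; halves height and width."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.MaxPool2d(kernel_size=2, stride=2),  # assumed to be max pooling
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)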
The internal operation of the first attention unit 34 and the second attention unit 35 is the same. As shown in fig. 3, taking the first attention unit 34 in the first OBG module 3 as an example: global max pooling over each layer of the first secondary feature map yields a first calibration vector of size 1 × 1 × 24, and global max pooling over each layer of the feature map X input into the OBG module likewise yields a vector of size 1 × 1 × 24. The two vectors are added and passed through the first calibration activation function σ1 (a sigmoid function) to obtain a second calibration vector of size 1 × 1 × 24. The first calibration vector and the second calibration vector are then multiplied element-wise and passed through the second calibration activation function σ2 (a sigmoid function), generating a third calibration vector of size 1 × 1 × 24. The element-wise product of the third calibration vector and the first secondary feature map assigns a weight of its own to each layer of the first secondary feature map; this calibrates the first secondary feature map and yields a first tertiary feature map of size m/2 × n/2 × 24.
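The self-calibration just described (global max pooling of both the compressed feature map and the module input X, followed by two sigmoid calibration activations) can be sketched as below. Returning the second calibration vector alongside the third anticipates its later use by the third attention unit 36; this interface is an illustrative choice:

import torch
import torch.nn as nn

class SelfCalibratingAttention(nn.Module):
    """First/second attention unit: produces the third calibration vector for a
    compressed feature map y, self-calibrated by the OBG-module input x."""
    def __init__(self):
        super().__init__()
        self.gmp = nn.AdaptiveMaxPool2d(1)  # global max pooling of each layer
        self.sigma1 = nn.Sigmoid()          # first calibration activation function
        self.sigma2 = nn.Sigmoid()          # second calibration activation function

    def forward(self, y: torch.Tensor, x: torch.Tensor):
        v1 = self.gmp(y)                    # first calibration vector, one value per layer
        v2 = self.sigma1(v1 + self.gmp(x))  # second calibration vector
        v3 = self.sigma2(v1 * v2)           # third calibration vector
        return v3, v2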
The first tertiary feature map and the second tertiary feature map are input simultaneously into the feature fusion unit 37. In this embodiment, the feature fusion unit 37 includes a concatenation layer, a fusion convolution layer (kernel size 1×1, stride 1) and a fusion activation layer (ReLU function) arranged in sequence. Taking the first OBG module 3 as an example, the feature fusion unit 37 fuses the first tertiary feature map and the second tertiary feature map into a four-level feature map of size m/2 × n/2 × 48.
In this embodiment, the OBG module 3 is further provided with a third attention unit 36. The second calibration vector generated in the first attention unit 34 and the second calibration vector generated in the second attention unit 35 together serve as the input of the third attention unit 36: the two second calibration vectors are concatenated and passed through the third calibration activation function σ3 (a tanh function), generating a fourth calibration vector of size 1 × 1 × 48 (taking the first OBG module 3 as an example). The fourth calibration vector is multiplied element-wise with the four-level feature map to calibrate it, and the calibrated four-level feature map is the final output of the OBG module 3.
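Putting the units above together, one OBG module 3 can be sketched as follows (this builds on the previous sketches; the assumption that the 1×1 fusion convolution keeps the concatenated channel count follows from the feature map sizes given above):

import torch
import torch.nn as nn

class OBGModule(nn.Module):
    """One OBG module: built-in extraction, two compression branches, three
    attention units and feature fusion, as described in this embodiment."""
    def __init__(self, channels: int):
        super().__init__()
        self.extract = ResidualBlock(channels)       # built-in feature extraction unit 31
        self.comp1 = FirstCompressionUnit(channels)  # first compression unit 32
        self.comp2 = SecondCompressionUnit()         # second compression unit 33
        self.att1 = SelfCalibratingAttention()       # first attention unit 34
        self.att2 = SelfCalibratingAttention()       # second attention unit 35
        self.sigma3 = nn.Tanh()                      # third calibration activation (unit 36)
        self.fuse = nn.Sequential(                   # feature fusion unit 37
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1, stride=1),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1 = self.extract(x)                            # first-level feature map
        x2a, x2b = self.comp1(x1), self.comp2(x1)       # secondary feature maps
        v3a, v2a = self.att1(x2a, x)
        v3b, v2b = self.att2(x2b, x)
        x3a, x3b = x2a * v3a, x2b * v3b                 # tertiary feature maps
        x4 = self.fuse(torch.cat([x3a, x3b], dim=1))    # four-level feature map
        v4 = self.sigma3(torch.cat([v2a, v2b], dim=1))  # fourth calibration vector
        return x4 * v4                                  # calibrated four-level feature map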
In this embodiment, as shown in fig. 4, the normalization module 4 includes a standard compression layer, a standard feedforward layer and a standard activation layer. The standard compression layer is a global pooling layer that performs global average pooling on each layer of the deep feature map, generating a first feature vector of size 1 × 1 × 768. The standard feedforward layer is implemented with a conventional fully connected layer having 768 input nodes (to match the first feature vector) and 100 output nodes. The standard activation layer is implemented with a ReLU function and outputs a reference vector of size 1 × 1 × 100.
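The normalization module 4 and the complete recognition model can then be sketched as follows, continuing the sketches above (the RecognitionModel class and its layer names are illustrative, not terms from the patent):

import torch
import torch.nn as nn

class NormalizationModule(nn.Module):
    """Global average pooling, a 768-to-100 fully connected layer, then a ReLU."""
    def __init__(self, in_channels: int = 768, out_dim: int = 100):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)         # standard compression layer
        self.fc = nn.Linear(in_channels, out_dim)  # standard feedforward layer
        self.act = nn.ReLU()                       # standard activation layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.gap(x).flatten(1)   # first feature vector, size 768
        return self.act(self.fc(v))  # reference vector, size 100

class RecognitionModel(nn.Module):
    """Initial 3x3 convolution (stride 1), five stacked OBG modules taking the
    channel count from 24 to 768, then the normalization module."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, 24, kernel_size=3, stride=1, padding=1)
        self.obgs = nn.Sequential(*[OBGModule(24 * 2 ** i) for i in range(5)])
        self.norm = NormalizationModule(768, 100)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.norm(self.obgs(self.stem(img)))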
The training and testing process of the algorithm model of this embodiment is described below by way of example.
Face images captured by the vehicle-mounted camera in actual scenes were collected, and every face image was labeled with an identity tag to build a training set. The training-set images cover 56 different faces in total, each photographed in at least 5 different scenes. Meanwhile, images of another 10 faces captured by the vehicle-mounted camera in actual scenes were collected and labeled with identity tags to build a verification set; each face in the verification set was likewise photographed in at least 5 different scenes. For each face in the verification set, 2 images were randomly drawn to form a sample library, and the remaining verification-set images formed a test set; the test-set images simulate the on-site face image 1 of an actual application scene.
The recognition algorithm model was trained with the training-set images, using a fixed learning rate and a triplet loss function. The photos in the sample library were then input one by one into the trained recognition algorithm model, which output one vector for each sample-library image; these vectors serve as the preset sample vectors.
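A minimal sketch of one triplet-loss training step and of sample-vector enrollment follows; the optimizer, margin and batch construction are illustrative assumptions, since the text above fixes only the fixed learning rate and the loss type:

import torch
import torch.nn as nn

model = RecognitionModel()
criterion = nn.TripletMarginLoss(margin=1.0)              # triplet (ternary) loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # fixed learning rate

def train_step(anchor, positive, negative):
    """anchor/positive share one identity; negative is a different identity."""
    optimizer.zero_grad()
    loss = criterion(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def enroll(sample_images: torch.Tensor) -> torch.Tensor:
    """Run the sample-library photos through the trained model; the resulting
    rows are the preset sample vectors."""
    model.eval()
    return model(sample_images)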
The images in the test set were input into the trained algorithm model, which output the reference vector of each test-set image. The Euclidean distance between each reference vector and all preset sample vectors was then calculated; the identity corresponding to the sample vector closest to the reference vector, with a distance smaller than the preset threshold, is the recognized identity of the corresponding test-set image. Comparing the recognized identities of the test-set images with their identity labels yields the recognition accuracy of the model.
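The matching rule of step 5 (nearest preset sample vector, accepted only under the preset distance threshold) can be sketched as below; the function name and the rejection value are illustrative:

import torch

@torch.no_grad()
def identify(reference: torch.Tensor, sample_vectors: torch.Tensor,
             identities: list, threshold: float):
    """Return the identity of the closest preset sample vector, or None when
    even the closest vector is farther than the preset threshold."""
    dists = torch.cdist(reference.unsqueeze(0), sample_vectors)[0]  # Euclidean distances
    i = int(torch.argmin(dists))
    return identities[i] if float(dists[i]) < threshold else None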
The test results show that the recognition accuracy of the algorithm model of example 1 is 95.85%. Trained and tested with the same procedure, the existing VGGFace2 and SphereFace models reach recognition accuracies of only 74.16% and 81.27% respectively, markedly lower than the method provided by the invention.
Comparative example 1:
all OBG modules 3 were removed from example 1The feature map introduces the first attention unit 34 and the second attention unit 35 while removing all the third attention unit 36, and modifies the internal structure of the first attention unit 34 and the second attention unit 35 to the structure shown in fig. 6, and the modified OBG module is shown in fig. 5. The rest of the model was identical to example 1, and the training and testing procedure was also consistent with example 1, with test results showing that the recognition accuracy of comparative example 1 on the test set was 77.32%.
Comparative example 2:
on the basis of example 1, only the third attention unit 36 in all OBG modules 3 was removed, and the structure of the OBG module in comparative example 2 is shown in fig. 7. The rest of the model was identical to example 1, and the training and testing procedure was also consistent with example 1, with test results showing that the recognition accuracy on the test set described above for comparative example 2 was 90.40%.
Comparing the recognition results of example 1 and comparative example 2 effectively demonstrates the prominent effect of the third attention unit 36; comparing the recognition results of comparative example 1 and comparative example 2 effectively demonstrates the important role of introducing the feature map X into the first attention unit 34 and the second attention unit 35.
The foregoing examples merely describe specific embodiments of the invention in detail and are not to be construed as limiting its scope. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the invention, all of which fall within the protection scope of the invention.

Claims (9)

1. A face recognition method for unlocking an automobile, characterized by comprising the following steps:
step 1, acquiring an on-site face image captured by a vehicle-mounted camera, and inputting the on-site face image into a trained recognition algorithm model, wherein the recognition algorithm model comprises an initial convolution layer, an OBG module and a normalization module arranged in sequence;
wherein, the internal operation process of the OBG module is expressed as the following mathematical model:
X1 = F(X)
X2a = C1(X1), X2b = C2(X1)
X3a = A1(X2a, X) ⊙ X2a, X3b = A2(X2b, X) ⊙ X2b
X4 = G(X3a, X3b)

wherein X represents the feature map input into said OBG module; F represents the built-in feature extraction unit, and X1 represents the first-level feature map output by the built-in feature extraction unit; C1 and C2 represent the first compression unit and the second compression unit respectively, X2a represents the first secondary feature map output by the first compression unit, and X2b represents the second secondary feature map output by the second compression unit; A1 and A2 represent the first attention unit and the second attention unit respectively, ⊙ represents the element-wise product operation, and X3a and X3b represent the first tertiary feature map and the second tertiary feature map respectively; G represents the feature fusion unit, and X4 represents the four-level feature map output by the feature fusion unit;
step 2, extracting features from the on-site face image through the initial convolution layer, and outputting a shallow image feature map;
step 3, inputting the shallow image feature map into the OBG module, which after its operations generates and outputs a deep feature map;
step 4, inputting the deep feature map into the normalization module, which then generates a reference vector corresponding to the on-site face image;
step 5, calculating the distance between the reference vector and preset sample vectors; the identity corresponding to the sample vector that is closest to the reference vector, with a distance smaller than a preset threshold, is the identity of the on-site face image.
2. The face recognition method for unlocking a car according to claim 1, characterized in that: five end-to-end connected OBG modules are arranged in the recognition algorithm model, the shallow image feature map is used as the input of the first OBG module, and the deep feature map is the feature map output by the fifth OBG module.
3. The face recognition method for unlocking a car according to claim 1, characterized in that: the built-in feature extraction unit comprises at least one residual block.
4. The face recognition method for unlocking a car according to claim 1, characterized in that: the first compression unit comprises a compression convolution layer and a first compression activation layer arranged in sequence, and the stride of the compression convolution layer is 2; the second compression unit comprises a compression pooling layer and a second compression activation layer arranged in sequence, and the stride of the compression pooling layer is 2.
5. The face recognition method for unlocking a car according to claim 1, characterized in that: the first attention unit and the second attention unit have the same internal operation process, which is expressed as the following mathematical model:
v1 = Pmax(Y)
v2 = σ1(Pmax(Y) + Pmax(X))
v3 = σ2(v1 ⊙ v2)

wherein Y represents the first secondary feature map or the second secondary feature map; Pmax represents global max pooling applied to each layer of a feature map; v1 represents the first calibration vector; X represents the feature map input into said OBG module; σ1 represents the first calibration activation function, and v2 represents the second calibration vector output by the first calibration activation function; ⊙ represents the element-wise product operation; σ2 represents the second calibration activation function, and v3 represents the third calibration vector output by the first or second attention unit.
6. The face recognition method for unlocking a car according to claim 5, characterized in that: the OBG module is further provided with a third attention unit, whose internal operation process is expressed as the following mathematical model:
v4 = σ3([v2(1); v2(2)])

wherein v2(1) represents the second calibration vector generated in the first attention unit, v2(2) represents the second calibration vector generated in the second attention unit, [ ; ] represents the concatenation operation, σ3 represents the third calibration activation function, and v4 represents the fourth calibration vector output by the third attention unit; the fourth calibration vector is multiplied element-wise with the four-level feature map to calibrate the four-level feature map.
7. The face recognition method for unlocking a car according to claim 1, characterized in that: the feature fusion unit comprises a concatenation layer, a fusion convolution layer and a fusion activation layer arranged in sequence.
8. A face recognition system for unlocking an automobile, characterized in that: it comprises a processor and a memory, the memory storing a computer program, wherein the processor executes the face recognition method for unlocking an automobile according to any one of claims 1-7 by loading the computer program.
9. A storage medium, characterized in that: the storage medium stores a computer program which, when executed by a processor, implements the face recognition method for unlocking an automobile according to any one of claims 1-7.
CN202310547918.7A 2023-05-16 2023-05-16 Face recognition method, system and storage medium for unlocking automobile Active CN116311479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310547918.7A CN116311479B (en) 2023-05-16 2023-05-16 Face recognition method, system and storage medium for unlocking automobile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310547918.7A CN116311479B (en) 2023-05-16 2023-05-16 Face recognition method, system and storage medium for unlocking automobile

Publications (2)

Publication Number Publication Date
CN116311479A CN116311479A (en) 2023-06-23
CN116311479B (en) 2023-07-21

Family

ID=86798083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310547918.7A Active CN116311479B (en) 2023-05-16 2023-05-16 Face recognition method, system and storage medium for unlocking automobile

Country Status (1)

Country Link
CN (1) CN116311479B (en)


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102645698B1 (en) * 2020-10-28 2024-03-11 Electronics and Telecommunications Research Institute Method and apparatus for face recognition robust to alignment shape of the face
CN112348640B (en) * 2020-11-12 2021-08-13 University of Science and Technology Beijing Online shopping system and method based on facial emotion state analysis

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014001610A1 (en) * 2012-06-25 2014-01-03 Nokia Corporation Method, apparatus and computer program product for human-face features extraction
CN112750082A (en) * 2021-01-21 2021-05-04 Wuhan Institute of Technology Face super-resolution method and system based on fusion attention mechanism
KR20220129463A (en) * 2021-03-16 2022-09-23 Samsung Electronics Co., Ltd. Method and apparatus of face recognition
CN112949565A (en) * 2021-03-25 2021-06-11 Chongqing University of Posts and Telecommunications Single-sample partially-shielded face recognition method and system based on attention mechanism
CN113723332A (en) * 2021-09-07 2021-11-30 Industrial and Commercial Bank of China Ltd. Facial image recognition method and device
CN113887494A (en) * 2021-10-21 2022-01-04 Shanghai University Real-time high-precision face detection and recognition system for embedded platform
CN115880786A (en) * 2022-11-03 2023-03-31 Bank of China Ltd. Method, device and equipment for detecting living human face based on channel attention
CN115661911A (en) * 2022-12-23 2023-01-31 Sichuan University of Science and Engineering Face feature extraction method, device and storage medium
CN115984949A (en) * 2023-03-21 2023-04-18 Weihai Vocational College (Weihai Technical College) Low-quality face image recognition method and device with attention mechanism

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Qihua Xu et al., "Attention Mechanism and Feature Correction Fusion Model for Facial Expression Recognition," 2021 6th International Conference on Inventive Computation Technologies (ICICT), pp. 786-793. *
Wei Qikang et al., "Distracted driving behavior detection based on improved YOLOv4-tiny," Journal of Sichuan University of Science & Engineering (Natural Science Edition), Vol. 36, No. 2, pp. 67-76. *
Xu Jun, "Face recognition algorithm based on deep learning and its implementation on Raspberry Pi," China Masters' Theses Full-text Database, Information Science and Technology, I138-2752. *
Liu Liyuan et al., "Fine-grained expression recognition with attention bilinear pooling based on feature fusion," Journal of Ludong University (Natural Science Edition), No. 2, pp. 38-44. *

Also Published As

Publication number Publication date
CN116311479A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
US10977530B2 (en) ThunderNet: a turbo unified network for real-time semantic segmentation
CN110197146B (en) Face image analysis method based on deep learning, electronic device and storage medium
JP2012529110A (en) Semantic scene segmentation using random multinomial logit
CN114170516B (en) Vehicle weight recognition method and device based on roadside perception and electronic equipment
CN111598182A (en) Method, apparatus, device and medium for training neural network and image recognition
CN111612100A (en) Object re-recognition method and device, storage medium and computer equipment
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
Waqas et al. Vehicle damage classification and fraudulent image detection including moiré effect using deep learning
CN114067294B (en) Text feature fusion-based fine-grained vehicle identification system and method
CN115346068A (en) Automatic generation method for bolt loss fault image of railway freight train
CN111950583A (en) Multi-scale traffic signal sign identification method based on GMM clustering
CN117392615B (en) Anomaly identification method and system based on monitoring video
CN114758113A (en) Confrontation sample defense training method, classification prediction method and device, and electronic equipment
CN116311479B (en) Face recognition method, system and storage medium for unlocking automobile
CN112052829A (en) Pilot behavior monitoring method based on deep learning
CN115861981A (en) Driver fatigue behavior detection method and system based on video attitude invariance
CN116630932A (en) Road shielding target detection method based on improved YOLOV5
CN111931767B (en) Multi-model target detection method, device and system based on picture informativeness and storage medium
CN115546779B (en) Logistics truck license plate recognition method and equipment
CN112288748A (en) Semantic segmentation network training and image semantic segmentation method and device
CN116645727B (en) Behavior capturing and identifying method based on Openphase model algorithm
CN114882449B (en) Car-Det network model-based vehicle detection method and device
CN112699928B (en) Non-motor vehicle detection and identification method based on deep convolutional network
CN116563170B (en) Image data processing method and system and electronic equipment
CN114648529B (en) DPCR liquid drop fluorescence detection method based on CNN network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant