CN112215179B - In-vehicle face recognition method, device, apparatus and storage medium - Google Patents
- Publication number
- CN112215179B (application CN202011119803.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- feature
- prediction frame
- feature matrix
- face prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The invention relates to artificial intelligence and provides an in-vehicle face recognition method, device, and apparatus, and a computer-readable storage medium, comprising: processing an in-vehicle face image into an image to be recognized; extracting face features from the image to be recognized to obtain an original feature matrix; sequentially upsampling the original feature matrix at least three times, and performing feature-weighted summation of the upsampled matrix obtained after each upsampling with the original feature matrix to sequentially obtain corresponding feature matrices; performing convolution operations on the feature matrices to obtain corresponding feature maps; performing face detection and recognition on the feature maps to obtain face prediction frames and the category confidence of each face prediction frame; and de-duplicating the face prediction frames and screening out an optimal face prediction frame as the recognition result. The invention addresses the low accuracy and recall of in-vehicle face recognition in the prior art under difficult scenes such as haze, rain, night, and occlusion.
Description
Technical Field
The present invention relates to artificial intelligence, and more particularly, to an in-vehicle face recognition method, apparatus, device, and computer-readable storage medium.
Background
Face recognition is an important image processing technique of the new era and is widely applied in the traffic field, for example in screening for fugitives and in station-entry security checks.
The deep learning method based on YOLOv is currently one of the popular face recognition algorithms in industry because of its high detection speed. A YOLOv-based face recognition algorithm can achieve real-time detection with high accuracy in simple scenes such as sunny days, daytime, and unobstructed views, but its accuracy and recall are low in difficult scenes such as haze, rain, night, and occlusion.
Disclosure of Invention
In view of the problems in the prior art, the invention provides an in-vehicle face recognition method, an in-vehicle face recognition device, and a computer-readable storage medium. The main idea is to sequentially upsample the extracted original feature matrix at least three times, perform feature-weighted summation of each upsampled matrix with the original feature matrix, and apply convolution operations to obtain at least three feature maps of different sizes; face detection and recognition are then performed on these feature maps through preset target detection frames to obtain face prediction frames, which are de-duplicated to select an optimal face prediction frame as the recognition result. This addresses the low accuracy and recall of in-vehicle face recognition in the prior art under difficult scenes such as haze, rain, night, and occlusion.
In order to achieve the above object, the present invention provides an in-vehicle face recognition method, including:
processing the acquired in-vehicle face image into an image of a preset size to obtain an image to be recognized;
extracting the face features of the image to be recognized through a feature extraction network to obtain an original feature matrix;
sequentially performing at least three upsamplings on the original feature matrix, wherein each upsampling after the first operates on the feature matrix obtained in the previous round, and performing feature-weighted summation of each upsampled matrix with the original feature matrix to sequentially obtain the corresponding feature matrices;
performing convolution operations on the feature matrices to obtain the corresponding feature maps;
performing face detection and recognition on the feature maps through preset target detection frames to obtain face prediction frames and the category confidence of each face prediction frame;
and de-duplicating the face prediction frames according to their category confidences and screening an optimal face prediction frame from them as the recognition result.
In a second aspect, to achieve the above object, the present invention further provides an in-vehicle face recognition device, including:
an image size processing unit, used for processing the acquired in-vehicle face image into an image of a preset size to obtain an image to be recognized;
a feature extraction unit, used for extracting the face features of the image to be recognized through a feature extraction network to obtain an original feature matrix;
a weighted summation processing unit, used for sequentially performing at least three upsamplings on the original feature matrix, wherein each upsampling after the first operates on the feature matrix obtained in the previous round, and performing feature-weighted summation of each upsampled matrix with the original feature matrix to sequentially obtain the corresponding feature matrices;
a convolution processing unit, used for performing convolution operations on the feature matrices to obtain the corresponding feature maps;
a detection and recognition unit, used for performing face detection and recognition on the feature maps through preset target detection frames to obtain face prediction frames and the category confidence of each face prediction frame;
and a face prediction frame de-duplication unit, used for de-duplicating the face prediction frames according to their category confidences and screening an optimal face prediction frame from them as the recognition result.
In a third aspect, to achieve the above object, the present invention further provides an electronic device, including a memory and a processor, wherein the memory stores an in-vehicle face recognition program which, when executed by the processor, implements any step of the in-vehicle face recognition method described above.
In a fourth aspect, to achieve the above object, the present invention also provides a computer-readable storage medium having stored therein an in-vehicle face recognition program which, when executed by a processor, implements any of the steps in the in-vehicle face recognition method described above.
With the in-vehicle face recognition method, device, and computer-readable storage medium of the present invention, the extracted original feature matrix is sequentially upsampled at least three times; each upsampled matrix is feature-weighted-summed with the original feature matrix; convolution operations then yield at least three feature maps of different sizes; face detection and recognition are performed on these feature maps through preset target detection frames to obtain face prediction frames; and de-duplication of the face prediction frames yields the optimal face prediction frame as the recognition result. Weighted summation with the original feature matrix three times effectively reduces information loss and improves the completeness of the extracted features, raising the overall accuracy and recall of in-vehicle face recognition. Obtaining the optimal face prediction frame by de-duplication significantly strengthens feature expression in difficult scenes such as haze, rain, night, and in-vehicle occlusion, further improving the overall accuracy and recall of in-vehicle face recognition.
Drawings
Fig. 1 is a flow chart of a preferred embodiment of the in-vehicle face recognition method of the present invention;
Fig. 2 is a schematic view of an application environment of a preferred embodiment of the in-vehicle face recognition method of the present invention;
Fig. 3 is a schematic block diagram of a preferred embodiment of the in-vehicle face recognition program in fig. 2;
Fig. 4 is a system logic diagram corresponding to the in-vehicle face recognition method of the present invention.
The objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention provides an in-vehicle face recognition method. Referring to fig. 1, a flowchart of a preferred embodiment of the in-vehicle face recognition method of the present invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the in-vehicle face recognition method includes: step S110 to step S160.
Step S110, the acquired in-vehicle face image is processed into an image of a preset size to obtain an image to be recognized.
Specifically, picture size processing converts the in-vehicle face image to the size the model requires, so that the subsequent feature extraction network can conveniently extract the in-vehicle face features. The size of the image to be recognized can be set according to actual needs, for example 512×512 or 1024×1024.
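As a concrete illustration, the resizing step can be sketched as follows; the nearest-neighbor resampling and the 512×512 target are assumptions, since the patent fixes only the existence of a preset size, not the interpolation method:

```python
import numpy as np

def resize_to_preset(image: np.ndarray, size: int = 512) -> np.ndarray:
    """Resize an H x W (x C) image to size x size via nearest-neighbor sampling."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return image[rows][:, cols]

# Hypothetical in-car camera frame at 720p
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
to_recognize = resize_to_preset(frame, 512)
print(to_recognize.shape)  # (512, 512, 3)
```

A production system would typically use a library resize (e.g. bilinear interpolation) instead, but the shape contract toward the feature extraction network is the same.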
Step S120, extracting the face features of the image to be recognized through a feature extraction network to obtain an original feature matrix.
Specifically, the feature extraction network extracts the face features of the image to be recognized at a relatively high speed; it does so by repeatedly applying convolution calculations (multiple convolution layers) to the image to be recognized, yielding a multi-layer original feature matrix.
As a preferred aspect of the present invention, the feature extraction network comprises: an input layer that receives the image to be recognized; a convolution layer that performs convolution operations on that image; a pooling layer that downsamples the first face feature map matrix output by the convolution layer; a fully-connected layer that processes the second face feature map matrix output by the pooling layer; a global average pooling layer that averages the pixels of the face feature map output by the fully-connected layer; and an output layer that outputs the resulting original feature matrix.
Specifically, the feature extraction network is preferably a CSPResNeXt network, which enhances feature expression in difficult scenes such as haze, rain, night, and in-vehicle occlusion, thereby improving the overall accuracy and recall of in-vehicle face recognition. CSPResNeXt is a relatively advanced image classification network. Its structure is as follows: the image to be recognized, processed to a fixed size, enters through the input layer and undergoes convolution in the convolution layers, where the number of convolution layers (execution counts) can be chosen according to the actual situation, for example 3, 4, or 6; the convolution kernel size can likewise be set as needed, for example 7×7 or 3×3. Convolution is performed by several convolution modules, e.g. Conv_Block_1, Conv_Block_2, Conv_Block_3, and Conv_Block_4, which share the same structure (a 1×1 convolution + a 3×3 convolution + a 1×1 convolution) and differ only in channel count (128, 256, 512, and 1024 respectively). The pooling layer then compresses the first face feature map matrix output by the convolution layers, increasing the feature extraction speed; the fully-connected layer processes the second face feature map matrix output by the pooling layer; the average of the pixels of the resulting face feature map is computed; and the output layer finally emits the original feature matrix.
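The pooling and global-average-pooling stages described above can be sketched minimally in NumPy; the 2×2 max-pooling window and the 16×16×128 feature-map shape are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def max_pool2x2(x: np.ndarray) -> np.ndarray:
    """Downsample an H x W x C feature map by 2x2 max pooling (stride 2)."""
    h, w, c = x.shape
    trimmed = x[:h - h % 2, :w - w % 2]  # drop odd trailing row/column
    return trimmed.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def global_average_pool(x: np.ndarray) -> np.ndarray:
    """Average each channel's pixels into a single value, giving a C-vector."""
    return x.mean(axis=(0, 1))

fmap = np.random.rand(16, 16, 128)      # hypothetical feature map
pooled = max_pool2x2(fmap)              # (8, 8, 128)
vector = global_average_pool(pooled)    # (128,)
print(pooled.shape, vector.shape)
```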
Step S130, sequentially performing at least three upsamplings on the original feature matrix, wherein each upsampling after the first operates on the feature matrix obtained in the previous round, and performing feature-weighted summation of each upsampled matrix with the original feature matrix, so as to sequentially obtain the corresponding feature matrices.
Specifically, at least three upsamplings are performed on the original feature matrix (preferably three, though four, five, six, or more are possible). The matrix is enlarged after each upsampling, and each enlarged feature matrix is feature-weighted-summed with the same-size original feature matrix from the feature extraction network, sequentially yielding the corresponding feature matrices.
As a preferred scheme of the present invention, sequentially performing three upsamplings on the original feature matrix, wherein each upsampling after the first operates on the feature matrix obtained in the previous round, and performing feature-weighted summation of each upsampled matrix with the original feature matrix, includes:
sequentially performing gradient disappearance prevention treatment and first upsampling on the original feature matrix to obtain an enlarged feature matrix;
carrying out first feature weighted summation on the expanded feature matrix and the original feature matrix according to a first preset weight parameter to obtain a first feature matrix;
sequentially performing gradient disappearance prevention treatment and second upsampling on the first feature matrix to obtain an enlarged first feature matrix;
carrying out second characteristic weighted summation on the expanded first characteristic matrix and the original characteristic matrix according to a second preset weight parameter to obtain a second characteristic matrix;
sequentially performing gradient disappearance prevention treatment and third upsampling on the second feature matrix to obtain an enlarged second feature matrix;
And carrying out third feature weighted summation on the expanded second feature matrix and the original feature matrix according to a third preset weight parameter to obtain a third feature matrix.
Specifically, upsampling enlarges the obtained original feature matrix so as to reduce information loss. For example, if the input matrix to the upsampling step is 13×13×256, the upsampled matrix is 26×26×256 (i.e., the matrix width and height are enlarged 2×). When an upsampled, enlarged feature matrix is feature-weighted-summed with an original feature matrix from some layer of the feature extraction network, the matrix sizes must match, otherwise the weighted summation cannot be performed. For example, suppose the matrix after the first upsampling (i.e., one input of the Sum) is 26×26×128, and suppose the output matrices of the 120th, 130th, and 140th layers of the feature extraction network are 13×13×128, 26×26×128, and 26×26×256 respectively; then the other input of the weighted summation can only be the output matrix of the 130th layer (the sizes must be identical, both 26×26×128, during the feature-weighted summation), not the feature map matrix of the 120th or 140th layer. The first, second, and third preset weight parameters can be set according to the actual implementation, and the three may be the same or different.
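The upsampling and size-matched weighted summation can be sketched as below; nearest-neighbor upsampling and equal weights of 0.5 are assumptions, since the patent leaves both the interpolation method and the preset weight parameters open:

```python
import numpy as np

def upsample2x(feat: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x upsampling of an H x W x C feature matrix."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def weighted_sum(up: np.ndarray, skip: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Feature-weighted summation; both inputs must have identical shapes."""
    assert up.shape == skip.shape, "matrices must match in size before summation"
    return w * up + (1.0 - w) * skip

x = np.random.rand(13, 13, 256)     # matrix entering the upsampling step
skip = np.random.rand(26, 26, 256)  # same-size matrix taken from the extraction network
fused = weighted_sum(upsample2x(x), skip)
print(fused.shape)  # (26, 26, 256)
```

The assertion inside `weighted_sum` mirrors the size-matching constraint in the text: a 13×13×128 or 26×26×256 partner would be rejected for a 26×26×128 upsampled input.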
And step S140, performing convolution operation processing on the feature matrix to obtain corresponding feature graphs respectively.
Specifically, a corresponding feature map is obtained by performing convolution operation on each feature matrix, and the feature maps corresponding to each feature matrix are different in size.
As a preferred embodiment of the present invention, performing convolution operation on the feature matrix to obtain corresponding feature graphs includes:
sequentially performing gradient disappearance prevention treatment and first convolution operation on the first feature matrix to obtain a first feature map;
sequentially performing gradient disappearance prevention treatment and second convolution operation on the second feature matrix to obtain a second feature map;
sequentially performing gradient disappearance prevention treatment and third convolution operation on the third feature matrix to obtain a third feature map;
Wherein the gradient disappearance prevention treatment includes: convolution calculation processing, batch normalization processing and activation function processing.
Specifically, before the convolution operations are performed on the first, second, and third feature matrices, gradient disappearance prevention processing is applied to prevent gradient vanishing, namely 3×3 convolution + batch normalization + Mish activation function processing; convolution calculation is then performed on each processed feature matrix to obtain the first, second, and third feature maps respectively. The gradient disappearance prevention processing is performed multiple times as needed.
Through the convolution calculation, batch normalization, and Mish activation function processing, the feature matrix of each stage trains faster and the gradient vanishing phenomenon is prevented.
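The "3×3 convolution + batch normalization + Mish" processing can be sketched in NumPy as follows. This is a minimal illustration only: a real implementation would use a deep learning framework, and the normalization would carry learned scale and shift parameters, which are omitted here:

```python
import numpy as np

def mish(x: np.ndarray) -> np.ndarray:
    """Mish activation: x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

def batch_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Per-channel normalization over the spatial dimensions."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conv3x3(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """'Same' 3x3 convolution; kernels has shape (3, 3, C_in, C_out)."""
    h, w, cin = x.shape
    cout = kernels.shape[-1]
    padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, cout))
    for i in range(3):
        for j in range(3):
            # (h, w, cin) @ (cin, cout) accumulated over the 3x3 window
            out += padded[i:i + h, j:j + w, :] @ kernels[i, j]
    return out

def anti_vanishing_block(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """3x3 conv + batch normalization + Mish, as named in the method."""
    return mish(batch_norm(conv3x3(x, kernels)))
```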
And step S150, carrying out face detection and recognition on the feature images through a preset target detection frame to obtain a face prediction frame and category confidence of the face prediction frame.
Specifically, a preset target detection frame (anchor box) is one approach to target detection and recognition. Through the preset target detection frames, face detection and recognition can be performed on the image to be recognized, namely the feature maps in this embodiment of the invention; the detection result is a face prediction frame, and the probability of the face prediction frame is its category confidence.
As a preferred scheme of the invention, the preset target detection frame is stored in the blockchain, and before the feature map is subjected to face detection and recognition through the preset target detection frame to obtain the face prediction frame and the category confidence of the face prediction frame, the method further comprises the following steps:
Acquiring face sample data;
Randomly acquiring a specified number of points from face sample data to serve as face initial sample points;
Clustering the face sample data by adopting a clustering algorithm to obtain the specified number of clusters;
And calculating the center point coordinates of each cluster to serve as a preset target detection frame.
Specifically, a distance function usable in the clustering process is: IOU = I/U, where I is the intersection area of two labeled frames and U is their union area; the clustering algorithm is preferably the kmeans algorithm.
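A minimal sketch of anchor clustering with 1 − IOU as the distance is shown below. The function names, the single random initialization, and the iteration cap are illustrative choices; the patent specifies only kmeans with the IOU = I/U distance:

```python
import numpy as np

def iou_wh(box: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """IOU between one (w, h) box and k cluster (w, h) centers, both anchored at the origin."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster labeled-frame widths/heights using 1 - IOU as the distance function."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dists = np.stack([1.0 - iou_wh(b, clusters) for b in boxes])  # (n, k)
        assign = dists.argmin(axis=1)
        new = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                        else clusters[j] for j in range(k)])           # empty cluster kept as-is
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters  # the k center points serve as preset target detection frames
```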
As a preferred scheme of the invention, carrying out face detection and recognition on the feature map through a preset target detection frame, and obtaining the face prediction frame and the category confidence coefficient of the face prediction frame comprises the following steps:
sliding a preset target detection frame on the feature map, and acquiring the coordinates of the center point of the preset target detection frame on the feature map as first coordinates;
calculating the predicted coordinates of the feature map according to the first coordinates and the original coordinates of the preset target detection frame;
obtaining a face prediction frame according to the prediction coordinates of the feature map;
and calculating the category confidence of the face prediction frame through a categorization algorithm.
Specifically, taking the first feature map as an example: assume its size is 16×16 and that the number of preset target detection frames (anchor boxes) allocated to it is 3. The 3 preset target detection frames slide over the 16×16 = 256 grid cells of the first feature map, and each grid cell predicts the detection frame's coordinates (x, y, w, h, where (x, y) is the center point of the detection frame on the first feature map and (w, h) its width and height) and its category (binary: face or not).
The predicted coordinates and width/height values are relative to the coordinates and width/height of the current 3 preset target detection frames. Suppose 1 preset target detection frame obtained in advance by kmeans clustering has width/height (2, 3), and suppose it slides to the 2nd grid cell of the first feature map (i.e., y1, a 16×16 matrix), so that the preset anchor box's coordinate is (1, 0) and its width/height is (2, 3). If the prediction model predicts raw coordinates (0.3, 0.6) and raw width/height (2.1, 1.3) for this preset target detection frame, then the coordinates of the prediction frame on the first feature map are (1+0.3, 0+0.6) = (1.3, 0.6), and its width/height is (2×e^2.1, 3×e^1.3) ≈ (16.3, 11.0); these are the predicted coordinates on the first feature map. Similar operations are performed on the second and third feature maps at their 2 scales, finally realizing face detection (i.e., coordinates) and face recognition (i.e., classification). The result can also be obtained directly from the detection model's prediction: the probability (a decimal between 0 and 1) that the predicted coordinates contain a face is predicted directly, and if it is greater than a preset probability, for example 0.5, the region is a face; otherwise it is not.
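The worked example above can be reproduced with a few lines of Python; the exp-based width/height decoding follows the arithmetic in the example, and the variable names are my own:

```python
import math

def decode_box(anchor_xy, anchor_wh, pred_xy, pred_wh):
    """Map predicted offsets back onto the feature-map grid:
    center = anchor grid coordinate + predicted offset,
    size   = anchor size * exp(predicted size offset)."""
    x = anchor_xy[0] + pred_xy[0]
    y = anchor_xy[1] + pred_xy[1]
    w = anchor_wh[0] * math.exp(pred_wh[0])
    h = anchor_wh[1] * math.exp(pred_wh[1])
    return x, y, w, h

# Anchor at grid (1, 0) sized (2, 3); predicted offsets (0.3, 0.6) and (2.1, 1.3)
x, y, w, h = decode_box((1, 0), (2, 3), (0.3, 0.6), (2.1, 1.3))
print(round(x, 1), round(y, 1), round(w, 1), round(h, 1))  # 1.3 0.6 16.3 11.0
```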
The face detections (i.e., coordinates) obtained after the three feature maps are predicted through the preset target detection frames are mapped back onto the in-vehicle face image, finally yielding the face prediction frames and the category confidence of each face prediction frame.
And step S160, performing de-duplication processing on the face prediction frames according to the category confidence of the face prediction frames, and screening out the optimal face prediction frames from the face prediction frames to serve as recognition results.
Specifically, de-duplication processing is applied to the face prediction frames obtained through mapping, using the category confidence of each face prediction frame, so as to obtain an optimal face prediction frame, which serves as the recognition result.
As a preferred scheme of the invention, according to the category confidence of the face prediction frame, the face prediction frame is subjected to de-duplication processing, and the optimal face prediction frame is selected from the face prediction frames as a recognition result, and the method comprises the following steps:
acquiring the face prediction frame with the highest category confidence as the highest-confidence face prediction frame, with the rest being the remaining face prediction frames;
calculating the intersection-over-union of each remaining face prediction frame with the highest-confidence face prediction frame through the intersection-over-union formula, wherein,
the intersection-over-union formula is: IOU = I/U, where IOU is the intersection-over-union of the remaining face prediction frame and the highest-confidence face prediction frame, I is their intersection area, and U is their union area;
updating the confidence of the remaining face prediction frames according to the intersection-over-union and a preset confidence update formula, in which IOU is the intersection-over-union of the remaining face prediction frame and the highest-confidence face prediction frame, α is an attenuation coefficient, the IOU threshold is a preset intersection-over-union threshold, and score is the updated confidence of the remaining face prediction frame;
And deleting the residual face prediction frames with the updated confidence coefficient lower than a preset confidence coefficient threshold value to obtain the optimal face prediction frames.
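Since the confidence update formula itself is not reproduced in this text, the sketch below substitutes a simple stand-in rule, multiplying the confidence by the attenuation coefficient α when the IOU exceeds the threshold; that rule, the (x1, y1, x2, y2) box format, and the threshold values are all assumptions. For brevity the remaining frames are compared only against the single highest-confidence frame:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def dedup_frames(boxes, scores, iou_thresh=0.5, alpha=0.5, score_thresh=0.3):
    """Keep the highest-confidence frame; decay the confidence of frames that
    overlap it beyond iou_thresh (assumed linear rule) and drop those whose
    updated confidence falls below score_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    best = order[0]
    kept = [best]
    for i in order[1:]:
        overlap = iou(boxes[best], boxes[i])
        new_score = scores[i] * alpha if overlap > iou_thresh else scores[i]
        if new_score >= score_thresh:
            kept.append(i)
    return kept
```

With one real face and several overlapping detections, the duplicates' confidences decay below the threshold and only the best frame (plus any genuinely separate faces) survives.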
Specifically, suppose a picture actually contains only one face, but in the process of face detection and recognition 4 face detection frames are detected, 3 of which overlap the real face frame (i.e., IOU > 0.5); the above steps are then applied to remove the 3 duplicate frames, finally obtaining the optimal face prediction frame as the recognition result.
The in-vehicle face recognition method provided by the invention is applied to an electronic device 1. Referring to fig. 2, an application environment of a preferred embodiment of the in-vehicle face recognition method of the present invention is shown.
In this embodiment, the electronic device 1 may be a terminal device having an operation function, such as a server, a smart phone, a tablet computer, a portable computer, or a desktop computer.
The electronic device 1 includes: processor 12, memory 11, network interface 13, and communication bus 14.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card, or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments, the readable storage medium may also be an external memory 11 of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 1.
In the present embodiment, the readable storage medium of the memory 11 is generally used to store the in-vehicle face recognition program 10 and the like mounted on the electronic device 1. The memory 11 may also be used to temporarily store data that has been output or is to be output.
The processor 12 may in some embodiments be a central processing unit (Central Processing Unit, CPU), microprocessor or other data processing chip for running program code or processing data stored in the memory 11, for example, for executing the in-vehicle face recognition program 10, etc.
The network interface 13 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used to establish a communication connection between the electronic device 1 and other electronic devices.
The communication bus 14 is used to enable the connection communication between these components.
In the device embodiment shown in fig. 2, an operating system and an in-vehicle face recognition program 10 may be included in a memory 11 as a computer storage medium; the processor 12 executes the in-vehicle face recognition program 10 stored in the memory 11 to implement the following steps:
step S110, processing the acquired in-car face image into an image with a preset size to obtain an image to be identified;
Step S120, extracting facial features of an image to be identified through a feature extraction network to obtain an original feature matrix;
step S130, sequentially performing at least three times of upsampling on the original feature matrix, wherein each upsampling is based on the original feature matrix obtained by the previous upsampling, and performing feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix to sequentially obtain a corresponding feature matrix;
Step S140, carrying out convolution operation processing on the feature matrixes to obtain corresponding feature graphs respectively;
step S150, carrying out face detection and recognition on the feature images through a preset target detection frame to obtain a face prediction frame and category confidence of the face prediction frame;
And step S160, performing de-duplication processing on the face prediction frames according to the category confidence of the face prediction frames, and screening out the optimal face prediction frames from the face prediction frames to serve as recognition results.
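The repeated up-sampling with feature weighted summation of step S130 can be sketched in NumPy as follows. Nearest-neighbour up-sampling, a factor of two per step, the resizing of the original matrix to match scales, and the complementary `w`/`1 - w` weighting are all illustrative assumptions; `weights` stands in for the preset weight parameters:

```python
import numpy as np

def upsample2x(m):
    # nearest-neighbour up-sampling: repeat each element along both axes
    return np.repeat(np.repeat(m, 2, axis=0), 2, axis=1)

def fuse(original, weights):
    """Perform one up-sampling per weight; after each one, take a weighted
    sum of the up-sampled matrix and the (resized) original feature matrix,
    collecting the resulting feature matrix at every scale."""
    current = original
    fused = []
    for w in weights:  # one preset weight parameter per up-sampling
        up = upsample2x(current)
        # resize the original matrix to the new scale so the shapes match
        # (an assumption; the patent does not spell out this resizing step)
        scale = up.shape[0] // original.shape[0]
        orig_up = np.repeat(np.repeat(original, scale, axis=0), scale, axis=1)
        current = w * up + (1 - w) * orig_up
        fused.append(current)
    return fused
```

Calling `fuse` with three weights mirrors the "at least three times" up-sampling of step S130, yielding three feature matrices at successively larger scales for the subsequent convolution step.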
In other embodiments, the in-vehicle face recognition program 10 may also be partitioned into one or more modules, one or more modules being stored in the memory 11 and executed by the processor 12 to complete the present invention.
A module, as used in the present invention, refers to a series of computer program instruction segments capable of performing a specified function. Referring to fig. 3, a block diagram of a preferred embodiment of the in-vehicle face recognition program 10 of fig. 2 is shown. The in-vehicle face recognition program 10 may be divided into: an image size processing module 110, a feature extraction module 120, a weighted summation processing module 130, a convolution processing module 140, a detection and identification module 150, and a face prediction frame deduplication module 160.
The functions or operational steps performed by the modules 110-160 are similar to those described above and are not described in detail herein, for example, wherein:
the image size processing module 110 is configured to process the obtained face image in the vehicle into an image with a preset size, so as to obtain an image to be identified;
the feature extraction module 120 is configured to perform face feature extraction on an image to be identified through a feature extraction network to obtain an original feature matrix;
The weighted summation processing module 130 is configured to sequentially perform at least three times of upsampling on the original feature matrix, where each upsampling is based on the original feature matrix obtained by the previous upsampling, and perform feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix, so as to sequentially obtain a corresponding feature matrix;
The convolution processing module 140 is configured to perform convolution operation processing on the feature matrices to obtain corresponding feature graphs respectively;
the detection and recognition module 150 is configured to detect and recognize a face of the feature map through a preset target detection frame, so as to obtain a face prediction frame and a category confidence of the face prediction frame; it should be emphasized that the preset target detection frame is stored in the blockchain;
The face prediction frame deduplication module 160 is configured to deduplicate the face prediction frame according to the class confidence of the face prediction frame, and screen an optimal face prediction frame from the face prediction frames as a recognition result.
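The detection and recognition module 150 obtains candidate face prediction frames by sliding the preset target detection frame over each feature map. A minimal sketch of that sliding step follows; the center-based (cx, cy, w, h) box representation and the unit stride are illustrative assumptions:

```python
def slide_anchors(feature_h, feature_w, anchor_w, anchor_h, stride=1):
    """Slide a preset target detection frame over a feature map, emitting one
    candidate box (cx, cy, w, h) centred on each visited position."""
    boxes = []
    for cy in range(0, feature_h, stride):
        for cx in range(0, feature_w, stride):
            boxes.append((cx, cy, anchor_w, anchor_h))
    return boxes
```

In the full method, the coordinates gathered at each position are then combined with the detection frame's original coordinates to compute the predicted coordinates of the face prediction frame, and a categorization algorithm assigns each frame its category confidence.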
As shown in fig. 4, in addition, corresponding to the above method, an embodiment of the present invention further provides an in-vehicle face recognition device 400, including: the image size processing unit 410, the feature extraction unit 420, the weighted summation processing unit 430, the convolution processing unit 440, the detection and identification unit 450 and the face prediction frame duplication removal unit 460, wherein the implementation functions of the image size processing unit 410, the feature extraction unit 420, the weighted summation processing unit 430, the convolution processing unit 440, the detection and identification unit 450 and the face prediction frame duplication removal unit 460 are in one-to-one correspondence with the steps of the in-vehicle face recognition method in the embodiment.
An image size processing unit 410, configured to process the obtained face image in the vehicle into an image with a preset size, so as to obtain an image to be identified;
The feature extraction unit 420 is configured to perform face feature extraction on the image to be identified through a feature extraction network, so as to obtain an original feature matrix;
The weighted summation processing unit 430 is configured to sequentially perform at least three upsampling on the original feature matrix, perform each upsampling process based on the original feature matrix obtained by the previous upsampling, and perform feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix, so as to sequentially obtain a corresponding feature matrix;
a convolution processing unit 440, configured to perform convolution operation processing on the feature matrices to obtain corresponding feature graphs respectively;
The detection and recognition unit 450 is configured to perform face detection and recognition on the feature map through a preset target detection frame to obtain a face prediction frame and a category confidence of the face prediction frame, where it is emphasized that the preset target detection frame is stored in the blockchain;
The face prediction frame deduplication unit 460 is configured to deduplicate the face prediction frame according to the class confidence of the face prediction frame, and screen the optimal face prediction frame from the face prediction frames as the recognition result.
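The preset target detection frames used by the detection and recognition unit 450 are derived by clustering face sample data into a specified number of clusters and taking the cluster centre points. A plain k-means sketch of that derivation follows; the (width, height) box representation, the random initialization, and the iteration count are illustrative assumptions:

```python
import random

def anchor_boxes(face_sizes, k, iters=20, seed=0):
    """Cluster (width, height) pairs of face sample boxes with plain k-means
    and return the k cluster centres as preset target detection frame sizes."""
    rng = random.Random(seed)
    centres = rng.sample(face_sizes, k)          # random initial sample points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in face_sizes:
            # assign each sample to the nearest centre (squared distance)
            i = min(range(k), key=lambda c: (w - centres[c][0]) ** 2
                                            + (h - centres[c][1]) ** 2)
            clusters[i].append((w, h))
        for i, members in enumerate(clusters):
            if members:  # recompute each centre as the cluster mean
                centres[i] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return centres
```

The centre coordinates returned here play the role of the preset target detection frames that the claims describe as being computed from clustered face sample data.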
In addition, the embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores an in-vehicle face recognition program, and the in-vehicle face recognition program realizes the following operations when being executed by a processor:
Processing the acquired in-car face image into an image with a preset size to obtain an image to be identified;
extracting facial features of the image to be identified through a feature extraction network to obtain an original feature matrix;
Sequentially performing at least three times of upsampling on the original feature matrix, wherein each upsampling process is based on the original feature matrix obtained by the previous upsampling, and performing feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix to sequentially obtain corresponding feature matrixes;
Performing convolution operation on the feature matrix to obtain corresponding feature graphs respectively;
Carrying out face detection and recognition on the feature images through a preset target detection frame to obtain a face prediction frame and category confidence of the face prediction frame;
And performing de-duplication processing on the face prediction frame according to the category confidence of the face prediction frame, and screening the optimal face prediction frame from the face prediction frame to serve as a recognition result.
The embodiment of the computer readable storage medium of the present invention is substantially the same as the embodiment of the in-vehicle face recognition method and the electronic device, and will not be described herein.
The blockchain referred to in the present invention is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in a process, apparatus, article, or method that comprises the element.
The foregoing embodiment numbers of the present invention are merely for description and do not represent the advantages or disadvantages of the embodiments. From the above description of the embodiments, it will be clear to those skilled in the art that the above-described method may be implemented by means of software plus a necessary general hardware platform, or alternatively by means of hardware, although in many cases the former is preferred. Based on such understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of this description and the accompanying drawings, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (9)
1. An in-vehicle face recognition method applied to an electronic device, the method comprising:
Processing the acquired in-car face image into an image with a preset size to obtain an image to be identified;
extracting facial features of the image to be identified through a feature extraction network to obtain an original feature matrix;
Sequentially performing at least three times of upsampling on the original feature matrix, wherein each upsampling is based on the feature matrix obtained by the previous upsampling, and performing feature weighted summation on the upsampling matrix obtained by each upsampling and the original feature matrix to sequentially obtain corresponding feature matrices;
In the process of carrying out feature weighted summation on an up-sampling matrix obtained after up-sampling and the original feature matrix each time to sequentially obtain corresponding feature matrices, sequentially carrying out gradient disappearance prevention treatment and up-sampling on the corresponding feature matrix obtained in the previous up-sampling to obtain an enlarged feature matrix, and carrying out feature weighted summation on the enlarged feature matrix and the original feature matrix according to preset weight parameters corresponding to the current up-sampling to obtain a feature matrix corresponding to the current up-sampling;
The gradient vanishing prevention treatment includes: convolution calculation processing, batch normalization processing and activation function processing;
performing convolution operation on the feature matrix to obtain corresponding feature graphs respectively;
carrying out face detection and recognition on the feature images through a preset target detection frame to obtain a face prediction frame and category confidence of the face prediction frame;
performing de-duplication processing on the face prediction frame according to the category confidence of the face prediction frame, and screening an optimal face prediction frame from the face prediction frame to serve as a recognition result; wherein, include:
Acquiring a face prediction frame with highest category confidence coefficient of the face prediction frame, wherein the face prediction frame is used as the face prediction frame with the highest confidence coefficient, and the rest is the rest face prediction frames;
calculating the cross ratio of the residual face prediction frame and the face prediction frame with the highest confidence degree through a cross ratio formula, wherein,
The intersection-over-union formula is as follows: IOU = I/U, wherein IOU is the intersection ratio of the residual face prediction frame and the face prediction frame with the highest confidence, I is the area of the intersection of the residual face prediction frame and the face prediction frame with the highest confidence, and U is the area of the union of the residual face prediction frame and the face prediction frame with the highest confidence;
Updating the confidence coefficient of the residual face prediction frame according to the intersection ratio and a preset confidence coefficient updating formula; the preset confidence updating formula is as follows:
score = confidence, if IOU < IOU threshold; score = α × confidence, if IOU ≥ IOU threshold;
wherein the confidence is the category confidence of the residual face prediction frame before updating; the IOU is the intersection ratio of the residual face prediction frame and the face prediction frame with the highest confidence degree; the α is an attenuation coefficient; the IOU threshold is a preset intersection ratio threshold; the score is the confidence of the updated residual face prediction frame;
and deleting the residual face prediction frames with the updated confidence coefficient lower than a preset confidence coefficient threshold value to obtain the optimal face prediction frames.
2. The in-vehicle face recognition method according to claim 1, wherein the feature extraction network includes:
The device comprises an input layer for acquiring the image to be identified, a convolution layer for carrying out convolution operation processing on the image to be identified of the input layer, a pooling layer for carrying out downsampling processing on a first face feature map matrix output by the convolution layer, a fully-connected layer for carrying out full-connection processing on a second face feature map matrix output by the pooling layer, a global average pooling layer for carrying out average value calculation on pixels of a face feature map output by the fully-connected layer and an output layer for outputting an original feature matrix obtained by the global average pooling layer.
3. The in-vehicle face recognition method according to claim 1, wherein the sequentially performing three upsampling on the original feature matrix, each upsampling being based on a feature matrix obtained by a previous upsampling, and performing feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix, and sequentially obtaining a corresponding feature matrix includes:
Sequentially performing gradient disappearance prevention treatment and first upsampling on the original feature matrix to obtain an enlarged feature matrix;
carrying out first feature weighted summation on the expanded feature matrix and the original feature matrix according to a first preset weight parameter to obtain a first feature matrix;
sequentially performing gradient disappearance prevention treatment and second upsampling on the first feature matrix to obtain an enlarged first feature matrix;
Performing second feature weighted summation on the expanded first feature matrix and the original feature matrix according to a second preset weight parameter to obtain a second feature matrix;
Sequentially performing gradient disappearance prevention treatment and third upsampling on the second feature matrix to obtain an enlarged second feature matrix;
and carrying out third feature weighted summation on the expanded second feature matrix and the original feature matrix according to a third preset weight parameter to obtain a third feature matrix.
4. The method for in-vehicle face recognition according to claim 3, wherein the performing convolution operation on the feature matrix to obtain corresponding feature graphs includes:
sequentially performing gradient disappearance prevention treatment and first convolution operation on the first feature matrix to obtain a first feature map;
sequentially performing gradient disappearance prevention treatment and second convolution operation on the second feature matrix to obtain a second feature map;
And sequentially performing gradient disappearance prevention treatment and third convolution operation on the third feature matrix to obtain a third feature map.
5. The in-vehicle face recognition method according to claim 1, wherein the preset target detection frame is stored in a blockchain, and before the face detection and recognition are performed on the feature map through the preset target detection frame, the method further comprises:
Acquiring face sample data;
randomly acquiring a specified number of points from the face sample data to serve as face initial sample points;
clustering the face sample data by adopting a clustering algorithm to obtain the specified number of clusters;
and calculating the center point coordinates of each cluster as the preset target detection frame.
6. The method for in-vehicle face recognition according to claim 1, wherein the performing face detection and recognition on the feature map by a preset target detection frame, and obtaining the category confidence of the face prediction frame and the face prediction frame includes:
Sliding the preset target detection frame on the feature map, and acquiring the coordinate of the central point of the preset target detection frame on the feature map as a first coordinate;
calculating the predicted coordinates of the feature map according to the first coordinates and the original coordinates of the preset target detection frame;
obtaining the face prediction frame according to the prediction coordinates of the feature map;
And calculating the category confidence of the face prediction frame through a categorization algorithm.
7. An in-vehicle face recognition device, the device comprising:
The image size processing unit is used for processing the acquired in-car face image into an image with a preset size to obtain an image to be identified;
The feature extraction unit is used for extracting the face features of the image to be identified through a feature extraction network to obtain an original feature matrix;
The weighted summation processing unit is used for sequentially carrying out at least three times of upsampling on the original feature matrix, wherein each upsampling is based on the feature matrix obtained by the previous upsampling, and carrying out feature weighted summation on the upsampled matrix obtained by each upsampling and the original feature matrix to sequentially obtain a corresponding feature matrix; in the process of carrying out feature weighted summation on an up-sampling matrix obtained after up-sampling and the original feature matrix each time to sequentially obtain corresponding feature matrices, sequentially carrying out gradient disappearance prevention treatment and up-sampling on the corresponding feature matrix obtained in the previous up-sampling to obtain an enlarged feature matrix, and carrying out feature weighted summation on the enlarged feature matrix and the original feature matrix according to preset weight parameters corresponding to the current up-sampling to obtain a feature matrix corresponding to the current up-sampling;
The gradient vanishing prevention treatment includes: convolution calculation processing, batch normalization processing and activation function processing;
The convolution processing unit is used for carrying out convolution operation processing on the feature matrix to obtain corresponding feature graphs respectively;
The detection and recognition unit is used for carrying out face detection and recognition on the feature images through a preset target detection frame to obtain a face prediction frame and category confidence of the face prediction frame;
The face prediction frame de-duplication unit is used for performing de-duplication processing on the face prediction frame according to the category confidence level of the face prediction frame, and screening an optimal face prediction frame from the face prediction frame to serve as a recognition result; the face prediction frame deduplication unit is specifically configured to:
Acquiring a face prediction frame with highest category confidence coefficient of the face prediction frame, wherein the face prediction frame is used as the face prediction frame with the highest confidence coefficient, and the rest is the rest face prediction frames;
calculating the cross ratio of the residual face prediction frame and the face prediction frame with the highest confidence degree through a cross ratio formula, wherein,
The intersection-over-union formula is as follows: IOU = I/U, wherein IOU is the intersection ratio of the residual face prediction frame and the face prediction frame with the highest confidence, I is the area of the intersection of the residual face prediction frame and the face prediction frame with the highest confidence, and U is the area of the union of the residual face prediction frame and the face prediction frame with the highest confidence;
Updating the confidence coefficient of the residual face prediction frame according to the intersection ratio and a preset confidence coefficient updating formula; the preset confidence updating formula is as follows:
score = confidence, if IOU < IOU threshold; score = α × confidence, if IOU ≥ IOU threshold;
wherein the confidence is the category confidence of the residual face prediction frame before updating; the IOU is the intersection ratio of the residual face prediction frame and the face prediction frame with the highest confidence degree; the α is an attenuation coefficient; the IOU threshold is a preset intersection ratio threshold; the score is the confidence of the updated residual face prediction frame;
and deleting the residual face prediction frames with the updated confidence coefficient lower than a preset confidence coefficient threshold value to obtain the optimal face prediction frames.
8. An electronic device, comprising: a memory and a processor, wherein an in-vehicle face recognition program is stored in the memory, and the in-vehicle face recognition program, when executed by the processor, implements the steps of the in-vehicle face recognition method according to any one of claims 1 to 6.
9. A computer-readable storage medium, in which an in-vehicle face recognition program is stored, which when executed by a processor, implements the steps of the in-vehicle face recognition method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011119803.0A CN112215179B (en) | 2020-10-19 | 2020-10-19 | In-vehicle face recognition method, device, apparatus and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112215179A CN112215179A (en) | 2021-01-12 |
CN112215179B true CN112215179B (en) | 2024-04-19 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |