CN116052150A - Vehicle face recognition method for occluded license plates - Google Patents

Vehicle face recognition method for occluded license plates

Info

Publication number: CN116052150A
Application number: CN202310061292.9A
Authority: CN
Prior art keywords: layer, Relu, vector, feature map, convolution
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 邓玉辉, 汤智敏
Current assignee: Jinan University
Original assignee: Jinan University
Events: application filed by Jinan University; priority to CN202310061292.9A; publication of CN116052150A.

Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06V: Image or video recognition or understanding
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/82: Arrangements using neural networks
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54: Surveillance or monitoring of traffic, e.g. cars on the road, trains or boats
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625: License plates
    • G06N: Computing arrangements based on specific computational models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: General tagging of new technological developments and cross-sectional technologies
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02T: Climate change mitigation technologies related to transportation
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The invention discloses a vehicle face recognition method for occluded license plates, addressing the technical problem of identifying vehicles whose plates have been deliberately covered. Color prediction information and picture vector information are obtained from a color prediction module, a global perception module and a detail perception module; the folder corresponding to the predicted color is then located, cosine distances are computed against the picture vectors stored in that folder, and the top 5 results, sorted from largest to smallest, are selected as the recognition result. Compared with traditional methods, the method adds an auxiliary vehicle-color classification flow, performs recognition through the global and detail perception modules, and applies multiple normalisation computations over batch and channel inside the global perception module. Vehicles with occluded plates are thereby retrieved from a vehicle picture library along several dimensions (color appearance, global information and detail information), which improves both retrieval capability and the model's representation of occluded-plate vehicles, so that the matching picture of the same vehicle with an unoccluded plate is successfully found.

Description

A vehicle face recognition method for occluded license plates

Technical Field

The present invention relates to the technical field of computer vision and pattern recognition, and in particular to a vehicle face recognition method for occluded license plates.

Background Art

Cars are now a household means of transportation. In complex traffic environments, however, some drivers commit violations such as speeding or overloading, cover their license plates to evade tracking by the traffic police, and restore the plates and drive normally some time later. This makes traffic enforcement and surveillance investigations difficult. Although roads carry large numbers of cameras that could track offending vehicles, many vehicles share the same model and color, so finding the corresponding target picture in the massive image libraries captured by these non-overlapping cameras is nearly impossible by manpower alone and wastes a great deal of search time. A method is therefore needed that uses deep learning to retrieve, from a vehicle-face picture library and according to vehicle-face features, the pictures in which an occluded-plate vehicle appears with its plate unoccluded, thereby identifying the offending vehicle. Existing deep-learning methods extract only a single global feature representation for each vehicle-face picture and neglect detail feature information. In addition, features extracted by such networks struggle to capture appearance variation, so the ranked recognition results often differ greatly from the actual vehicle in the query picture.

Summary of the Invention

The purpose of the present invention is to overcome the above defects of the prior art by providing a vehicle face recognition method for occluded license plates.

This purpose can be achieved by the following technical solution.

A vehicle face recognition method for occluded license plates, comprising the following steps:

S1. Input a vehicle-face picture P from the dataset into the backbone network to obtain a feature map Fp ∈ R^(C×W×H), where C is the number of feature-map channels, W and H are the width and height of the feature map, and R denotes the real number domain.

S2. Input the feature map Fp into the color prediction module to predict the vehicle color, obtaining the predicted vehicle color result i; this result is used in step S6.

S3. Input the feature map Fp into the global perception module for prediction to obtain the vector Q1, where D denotes the dimension of the column vectors used below; Q1 carries the global information of the vehicle-face picture.

S4. Input the feature map Fp into the detail perception module for prediction to obtain the vector Q2; Q2 carries the detail information of the vehicle-face picture.

S5. Concatenate the vector Q1 and the vector Q2 to obtain the vector Qr ∈ R^(D×1), which contains both the global and the detail information of the vehicle-face picture.

S6. According to the predicted vehicle color result i of step S2, store the vector Qr in the folder named i.

S7. Repeat steps S1 to S6 for every vehicle-face picture in the dataset's picture library, obtaining the vector sets Qi = {Qi,k | 1 ≤ k ≤ In}, where In is the number of vectors in folder i and Qi,k is the vector corresponding to the k-th picture in folder i.

S8. Input the vehicle-face picture to be queried and perform step S1 to obtain the feature map Fq.

S9. Perform step S2 on the feature map Fq to obtain the color prediction result iq; this result is used in step S11.

S10. Perform steps S3, S4 and S5 on the feature map Fq to obtain the concatenated vector Qq ∈ R^(D×1), which contains the global and detail information of the picture to be queried.

S11. Compute the cosine distance between the vector Qq and every vector in folder iq, obtaining the set L = {lg | 1 ≤ g ≤ Inq}, where Inq is the number of vectors in folder iq and lg is the distance to the vector of the g-th picture in that folder. The color pre-classification makes the recognition of vehicle-face pictures more targeted.

S12. Sort the set L by distance value from largest to smallest, and select the pictures corresponding to the top 5 vectors as the prediction result.

Further, the backbone network is connected from the input layer to the output layer in the following order: convolution layer conv1_1, Relu layer conv1_1_relu, convolution layer conv1_2, Relu layer conv1_2_relu, pooling layer max_pooling1, convolution layer conv2_1, Relu layer conv2_1_relu, convolution layer conv2_2, BN layer conv2_2_bn, Relu layer conv2_2_relu, pooling layer max_pooling2, convolution layer conv3_1, Relu layer conv3_1_relu, convolution layer conv3_2, Relu layer conv3_2_relu, convolution layer conv3_3, Relu layer conv3_3_relu, pooling layer max_pooling3, convolution layer conv4_1, Relu layer conv4_1_relu, convolution layer conv4_2, Relu layer conv4_2_relu, convolution layer conv4_3, Relu layer conv4_3_relu, pooling layer max_pooling4, convolution layer conv5_1, Relu layer conv5_1_relu, convolution layer conv5_2, Relu layer conv5_2_relu, convolution layer conv5_3, Relu layer conv5_3_relu, pooling layer max_pooling5, convolution layer fc6, Relu layer fc6_relu, convolution layer fc7, Relu layer fc7_relu, convolution layer conv6_1, Relu layer conv6_1_relu, convolution layer conv6_2, Relu layer conv6_2_relu, convolution layer conv7_1, Relu layer conv7_1_relu, convolution layer conv7_2, Relu layer conv7_2_relu, convolution layer conv8_1, Relu layer conv8_1_relu, convolution layer conv8_2, Relu layer conv8_2_relu, convolution layer conv9_1, Relu layer conv9_1_relu, convolution layer conv9_2, Relu layer conv9_2_relu, convolution layer conv10_1, Relu layer conv10_1_relu, convolution layer conv10_2, Relu layer conv10_2_relu.
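This listing corresponds to a VGG16-style trunk (conv1 through conv5, with fc6 and fc7 realised as convolutions) extended by the additional stages conv6_1 through conv10_2. A minimal PyTorch sketch of this layer ordering follows; the patent names only the layers, so the channel widths, 3×3 kernels, padding and strides used here are assumptions for illustration.

```python
import torch
import torch.nn as nn
from collections import OrderedDict

def make_backbone():
    """Builds the named layer sequence conv1_1 .. conv10_2_relu.
    Channel plan, 3x3 kernels, padding=1 and stride 1 are assumptions."""
    stage_widths = [[64, 64], [128, 128], [256, 256, 256],
                    [512, 512, 512], [512, 512, 512]]
    layers, in_ch = [], 3
    for s, widths in enumerate(stage_widths, start=1):
        for j, w in enumerate(widths, start=1):
            layers.append((f"conv{s}_{j}", nn.Conv2d(in_ch, w, 3, padding=1)))
            if (s, j) == (2, 2):  # the single BN layer conv2_2_bn in the listing
                layers.append((f"conv{s}_{j}_bn", nn.BatchNorm2d(w)))
            layers.append((f"conv{s}_{j}_relu", nn.ReLU(inplace=True)))
            in_ch = w
        layers.append((f"max_pooling{s}", nn.MaxPool2d(2)))
    for name, w in [("fc6", 1024), ("fc7", 1024)]:   # fc6/fc7 as convolutions
        layers += [(name, nn.Conv2d(in_ch, w, 3, padding=1)),
                   (f"{name}_relu", nn.ReLU(inplace=True))]
        in_ch = w
    for s in range(6, 11):                           # conv6_1 .. conv10_2
        for j, w in [(1, 256), (2, 512)]:
            layers += [(f"conv{s}_{j}", nn.Conv2d(in_ch, w, 3, padding=1)),
                       (f"conv{s}_{j}_relu", nn.ReLU(inplace=True))]
            in_ch = w
    return nn.Sequential(OrderedDict(layers))

backbone = make_backbone()
fp = backbone(torch.randn(1, 3, 224, 224))  # feature map Fp; here C=512, W=H=7
```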

Further, the color prediction module is connected from the input layer to the output layer in the following order: pooling layer global_pooling_1, fc layer fc_1, fc layer fc_2.

Further, the global perception module is connected from the input layer to the output layer in the following order: pooling layer global_pooling_2, multi-dimensional normalization layer conv1_1_in_bn, convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2, Relu layer conv11_2_relu; the multi-dimensional normalization layer conv1_1_in_bn consists of the BN layer conv11_1_bn and the IN layer conv11_1_in.

Further, the detail perception module is connected from the input layer to the output layer in the following order: feature detail compression layer horizontal_pooling_1, BN layer conv12_1_bn, convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2, Relu layer conv12_2_relu.

Further, the process of step S2 is as follows:

S21. Input the feature map Fp into the global average pooling layer global_pooling_1 to obtain the feature map Ea ∈ R^(C×1), compressing the information of Fp.

S22. Pass Ea through fc layer fc_1 and fc layer fc_2, then through a softmax function, to obtain the vector Ef ∈ R^(N×1), where N is the number of vehicle colors in the dataset; the two fully connected layers predict the vehicle color.

S23. Take the index i of the highest value in Ef and output it as the vehicle color result.
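A minimal sketch of steps S21 to S23, assuming C = 512 input channels and a hidden width of 256 for fc_1 (neither is specified in the patent); N is the number of vehicle colors (N = 9 in the embodiments):

```python
import torch
import torch.nn as nn

class ColorPrediction(nn.Module):
    """global_pooling_1 -> fc_1 -> fc_2 -> softmax (steps S21-S23).
    The hidden width 256 is an assumption; n_colors corresponds to N."""
    def __init__(self, c_in=512, hidden=256, n_colors=9):
        super().__init__()
        self.global_pooling_1 = nn.AdaptiveAvgPool2d(1)   # Fp -> Ea in R^(C x 1)
        self.fc_1 = nn.Linear(c_in, hidden)
        self.fc_2 = nn.Linear(hidden, n_colors)

    def forward(self, fp):
        ea = self.global_pooling_1(fp).flatten(1)          # S21: compress Fp
        ef = torch.softmax(self.fc_2(self.fc_1(ea)), dim=1)  # S22: Ef in R^(N x 1)
        return ef

color = ColorPrediction()
i = color(torch.randn(1, 512, 7, 7)).argmax(dim=1)  # S23: color result i
```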

Further, the process of step S3 is as follows:

S31. Input the feature map Fp into the global average pooling layer global_pooling_2 to obtain the vector V ∈ R^(D×1), compressing the information of Fp.

S32. Input V into BN layer conv11_1_bn to obtain the vector Vb ∈ R^(D×1); Vb is V normalised along the batch direction.

S33. Input V into IN layer conv11_1_in to obtain the vector Vi ∈ R^(D×1); Vi is V normalised along the channel direction.

S34. Concatenate Vb and Vi to obtain the vector Vc ∈ R^(2D×1).

S35. Pass Vc through convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2 and Relu layer conv11_2_relu to obtain the vector Q1. Q1 therefore combines the batch-normalised and channel-normalised views of V, which avoids overfitting and the loss of style features.
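A minimal sketch of steps S31 to S35. The patent gives no layer hyperparameters, so realising conv11_1 and conv11_2 as 1×1 one-dimensional convolutions and lifting the pooled feature from C to D channels with an extra projection are assumptions of this sketch:

```python
import torch
import torch.nn as nn

class GlobalPerception(nn.Module):
    """global_pooling_2 -> parallel BN and IN -> concat -> two convs with ReLU
    (steps S31-S35). The projection to D channels and the 1x1 Conv1d reading
    of conv11_1/conv11_2 are assumptions; D = 2048 in the embodiments."""
    def __init__(self, c_in=512, d=2048):
        super().__init__()
        self.global_pooling_2 = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(c_in, d, 1)         # assumed lift from C to D
        self.conv11_1_bn = nn.BatchNorm1d(d)      # S32: normalise over the batch
        self.conv11_1_in = nn.InstanceNorm1d(1)   # S33: normalise over channels
        self.conv11_1 = nn.Conv1d(2 * d, d, 1)
        self.conv11_2 = nn.Conv1d(d, d, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, fp):
        v = self.proj(self.global_pooling_2(fp)).flatten(1)   # S31: V
        vb = self.conv11_1_bn(v)                               # S32: Vb
        vi = self.conv11_1_in(v.unsqueeze(1)).squeeze(1)       # S33: Vi
        vc = torch.cat([vb, vi], dim=1).unsqueeze(-1)          # S34: Vc in R^(2D)
        q1 = self.relu(self.conv11_2(self.relu(self.conv11_1(vc))))
        return q1.squeeze(-1)                                  # S35: Q1

q1 = GlobalPerception()(torch.randn(2, 512, 7, 7))  # Q1 of shape (2, 2048)
```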

Further, the process of step S4 is as follows:

S41. Input the feature map Fp into the feature detail compression layer horizontal_pooling_1 for compression, then into BN layer conv12_1_bn for normalisation, obtaining the feature map Js ∈ R^(C×H×1).

S42. Input Js into convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2 and Relu layer conv12_2_relu to obtain the vector Q2.

Further, the feature detail compression layer compresses detail features for subsequent recognition. It works as follows:

Assume the input of the feature detail compression layer is the feature map M ∈ R^(C×W×H) and its output is the feature map Y ∈ R^(C×H×1).

The feature map M is split along its height into a set of H feature maps A* = {Am | 1 ≤ m ≤ H}, where Am ∈ R^(C×W×1) is the m-th feature map of the set A*.

A feature map Az of the set A* is compressed into the result kz by

kz = (1 / (C·W)) · Σ_{y=1}^{C} Σ_{x=1}^{W} Az,x,y    (1)

where Az,x,y is the x-th element along the width of channel y in the feature map Az at position z of the set A*.

Formula (1) is applied to every feature map in A*, and the results are collected into the column vector T ∈ R^(H×1), whose t-th entry is the value computed by formula (1) for the feature map at position t of A*.
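A minimal sketch of the detail perception module (steps S41 and S42) together with formula (1). Averaging over the width axis implements the per-channel reading of the compression and yields Js ∈ R^(C×H×1); the kernel sizes and the output width d_out are assumptions, since the patent names the layers without hyperparameters:

```python
import torch
import torch.nn as nn

class DetailPerception(nn.Module):
    """horizontal_pooling_1 -> conv12_1_bn -> conv12_1 -> ReLU -> conv12_2 -> ReLU
    (steps S41 and S42). Kernel sizes and d_out are assumptions."""
    def __init__(self, c_in=512, h=7, d_out=2048):
        super().__init__()
        self.conv12_1_bn = nn.BatchNorm1d(c_in)
        self.conv12_1 = nn.Conv1d(c_in, d_out, kernel_size=1)
        self.conv12_2 = nn.Conv1d(d_out, d_out, kernel_size=h)  # collapses H to 1
        self.relu = nn.ReLU(inplace=True)

    def forward(self, fp):                  # fp: (batch, C, H, W)
        js = fp.mean(dim=3)                 # horizontal_pooling_1: average over W
        js = self.conv12_1_bn(js)           # S41: normalised Js
        q2 = self.relu(self.conv12_1(js))   # S42
        q2 = self.relu(self.conv12_2(q2))
        return q2.squeeze(-1)               # Q2

q2 = DetailPerception()(torch.randn(2, 512, 7, 7))

# Formula (1) for the column-vector reading: each height slice z is averaged
# over channels and width, k_z = (1/(C*W)) * sum_y sum_x A_{z,x,y}.
m = torch.randn(512, 7, 7)    # M in R^(C x W x H)
t = m.mean(dim=(0, 1))        # T in R^(H x 1)
```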

Further, the cosine distance measures the similarity of two vectors. It is computed as follows:

Assume the cosine distance between vector J and vector B yields the value l, computed as

l = 1 − J·B^T,

where the superscript "T" denotes the vector transpose and J·B^T denotes the inner product of the two (L2-normalised) vectors.
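A minimal sketch of this computation, assuming J and B are row tensors that are L2-normalised so that the inner product J·B^T is their cosine similarity:

```python
import torch
import torch.nn.functional as F

def cosine_distance(j, b):
    """l = 1 - J . B^T for row vectors of shape (1, D); smaller l = more similar."""
    j = F.normalize(j, dim=1)   # L2 normalisation (assumed)
    b = F.normalize(b, dim=1)
    return 1.0 - (j @ b.t()).item()

l = cosine_distance(torch.randn(1, 2048), torch.randn(1, 2048))
```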

Compared with the prior art, the present invention has the following advantages and effects:

(1) The present invention can retrieve from the picture library the 5 vehicle pictures most similar to the occluded-plate picture, among which the matching picture of the same vehicle with an unoccluded plate can be found.

(2) Compared with traditional methods, the present invention improves the normalisation of the neural network and of the feature vectors, strengthening the feature vectors' ability to represent vehicle-face pictures.

(3) Compared with traditional methods, the present invention adds a color prediction module as an aid to subsequent recognition: vehicle faces are classified by color and stored in per-color folders, which makes their recognition more targeted.

Brief Description of the Drawings

The drawings described here provide a further understanding of the present invention and form a part of this application. The exemplary embodiments of the present invention and their descriptions explain the invention and do not unduly limit it. In the drawings:

Fig. 1 is a flow chart of the vehicle face recognition method for occluded license plates disclosed in the present invention.

Fig. 2 is a schematic diagram of the vehicle face recognition method for occluded license plates disclosed in the present invention.

Fig. 3 is a bar chart comparing the accuracy of the present method with other methods on the VehicleID dataset under the conditions of Embodiment 1.

Fig. 4 is a bar chart comparing the accuracy of the present method with other methods on the VeRi dataset under the conditions of Embodiment 2.

Fig. 5 is a structural diagram of the backbone network of the present invention.

Detailed Description

To make the purpose, technical solution and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative work fall within the scope of protection of the present invention.

Embodiment 1

Embodiment 1 uses the VehicleID dataset. Each picture in the dataset carries an id label corresponding to a real-world identity. The training set covers 13164 vehicles with 113346 pictures in total; the test set covers 2400 vehicles with 19777 pictures in total.

The method comprises the following steps:

Step S1. Obtain the number N of vehicle color classes in the dataset; here N = 9. N sets the output dimension of fc layer fc_2 in the color prediction module. In addition, 1000 pictures are drawn at random from the test set according to vehicle id labels as the query picture library.

Step S2. Experimental environment. This embodiment requires only ordinary hardware plus a graphics processing unit (GPU) for accelerated computation. Model construction, training and evaluation are carried out with the PyTorch deep learning framework and the FastReid toolkit, using the Compute Unified Device Architecture (CUDA) so that the GPU can handle the heavy computation. The specific running environment of this embodiment is listed in Table 1.

Table 1. Experimental running-environment configuration of this embodiment.

Step S3. Construct the vehicle face recognition method for occluded license plates; its structure is shown in Fig. 2. The construction steps are as follows:

Step S31. Construct the backbone network. As shown in Fig. 5, its layers are connected from the input layer to the output layer in the order listed above (convolution layer conv1_1 through Relu layer conv10_2_relu).

Step S32. Construct the color prediction module, connected from the input layer to the output layer as: pooling layer global_pooling_1, fc layer fc_1, fc layer fc_2.

Step S33. Construct the global perception module, connected from the input layer to the output layer as: pooling layer global_pooling_2, multi-dimensional normalization layer conv1_1_in_bn, convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2, Relu layer conv11_2_relu; the multi-dimensional normalization layer conv1_1_in_bn consists of BN layer conv11_1_bn and IN layer conv11_1_in.

Step S34. Construct the detail perception module, connected from the input layer to the output layer as: feature detail compression layer horizontal_pooling_1, BN layer conv12_1_bn, convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2, Relu layer conv12_2_relu.

Step S4. Split the training-set pictures by vehicle id into a gallery library and a query library, where each vehicle id appears exactly once in the query library while the gallery may contain several pictures of the same vehicle id. The processed training pictures are first used to train the present method. During training, the optimizer is Adam, the learning rate is 0.00035, the learning-rate momentum is 0.0005, the batch size is 64 and the number of epochs is 60; the learning rate is multiplied by 0.1 at epochs 30 and 50.
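This schedule maps directly onto a PyTorch optimizer and a MultiStepLR scheduler. In the sketch below, model, train_loader and criterion are assumed to be defined elsewhere, and reading the "0.0005 learning-rate momentum" as Adam's weight decay is an assumption:

```python
import torch

# Embodiment 1 schedule: Adam, lr 0.00035, batch size 64, 60 epochs,
# lr multiplied by 0.1 at epochs 30 and 50.
optimizer = torch.optim.Adam(model.parameters(), lr=3.5e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[30, 50], gamma=0.1)

for epoch in range(60):
    for images, labels in train_loader:      # batches of 64 pictures
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```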

Step S5. Input a vehicle-face picture P from the dataset into the backbone network to obtain the feature map Fp ∈ R^(C×W×H), where C is the number of feature-map channels, W and H are the width and height of the feature map, and R is the real number domain.

Step S6. Input the feature map Fp into the color prediction module to predict the vehicle color, obtaining the result i, where 1 ≤ i ≤ N.

The specific process is as follows:

S61. Input Fp into the global average pooling layer global_pooling_1 to obtain the feature map Ea ∈ R^(C×1), compressing the information of Fp.

S62. Pass Ea through fc layer fc_1 and fc layer fc_2, then through a softmax function, to obtain the vector Ef ∈ R^(N×1); the two fully connected layers predict the vehicle color.

S63. Take the index i of the highest value in Ef and output it as the vehicle color result.

Step S7. Input the feature map Fp into the global perception module for prediction to obtain the vector Q1, where D denotes the dimension of the column vector; Q1 carries the global information of the vehicle-face picture. In Embodiment 1, D = 2048.

The specific process is as follows:

S71. Input Fp into the global average pooling layer global_pooling_2 to obtain the vector V ∈ R^(D×1).

S72. Input V into BN layer conv11_1_bn to obtain the vector Vb ∈ R^(D×1).

S73. Input V into IN layer conv11_1_in to obtain the vector Vi ∈ R^(D×1).

S74. Concatenate Vb and Vi to obtain the vector Vc ∈ R^(2D×1).

S75. Pass Vc through convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2 and Relu layer conv11_2_relu to obtain the vector Q1.

Step S8. Input the feature map Fp into the detail perception module for prediction to obtain the vector Q2; Q2 carries the detail information of the vehicle-face picture.

The specific process is as follows:

S41. Input Fp into the feature detail compression layer horizontal_pooling_1 for compression, then into BN layer conv12_1_bn for normalisation, obtaining the feature map Js ∈ R^(C×H×1).

S42. Input Js into convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2 and Relu layer conv12_2_relu to obtain the vector Q2.

The feature detail compression layer works as already described above, applying formula (1) to each height slice of its input.

Step S9. Concatenate the vectors Q1 and Q2 to obtain the vector Qr ∈ R^(D×1), which contains the global and detail information of the vehicle-face picture.

Step S10. According to the color prediction result i of step S6, store the vector Qr in the folder named i.

Step S11. Perform steps S5 to S10 on every vehicle-face picture in the test-set picture library, obtaining the vector sets Qi = {Qi,k | 1 ≤ k ≤ In}, where In is the number of vectors in folder i and Qi,k is the vector of the k-th picture in folder i.

Step S12. Input the vehicle-face picture to be queried and perform step S5 to obtain the feature map Fq.

Step S13. Perform step S6 on the feature map Fq to obtain the color prediction result iq; this result is used in step S15.

Step S14. Perform steps S7, S8 and S9 on the feature map Fq to obtain the concatenated vector Qq ∈ R^(D×1), which contains the global and detail information of the picture to be queried.

Step S15. Compute the cosine distance between the vector Qq and every vector in folder iq, obtaining the set L = {lg | 1 ≤ g ≤ Inq}, where Inq is the number of vectors in folder iq and lg is the distance to the vector of the g-th picture in that folder. The color pre-classification makes the recognition of vehicle-face pictures more targeted.

The cosine distance is computed as defined above: l = 1 − J·B^T.

Step S16. Sort the set L by distance value from largest to smallest, and select the pictures corresponding to the top 5 vectors as the prediction result.

Step S17. Perform steps S12, S13, S14 and S15 on the query picture library of the test set (the overall flow is shown in Fig. 1), obtain all prediction results for the query library, and compute the accuracy.

The comparative results of Embodiment 1 are shown in Fig. 3. On a query library of 1000 pictures, a query is counted as accurate when the matching picture appears among the 5 returned pictures. The present method reaches an accuracy of 88.1%, which is 70.21% higher than BOW-CN, 62.47% higher than LOMO, 27.61% higher than FACT, 27.38% higher than NuFACT, 17.81% higher than VAML and 4.73% higher than QD-DLF, demonstrating the effectiveness of the method, which successfully identifies the corresponding vehicle with an unoccluded license plate.

Embodiment 2

Embodiment 2 uses the public VeRi dataset, which contains more than 50,000 images of 776 vehicles: 37778 training pictures, 11579 gallery pictures and 1678 query pictures. Each picture carries an id label corresponding to a real-world identity.

The method comprises the following steps:

Step S1. Obtain the number N of vehicle color classes in the dataset; here N = 9. N sets the output dimension of fc layer fc_2 in the color prediction module.

Step S2. Experimental environment. As in Embodiment 1, only ordinary hardware plus a GPU is required; model construction, training and evaluation are carried out with the PyTorch framework and the FastReid toolkit under CUDA. Specifically, this embodiment uses: GPU GeForce RTX 2080Ti; CPU Intel(R) Xeon(R) Silver 4216 @ 2.10 GHz; operating system CentOS 8.3.2011; Python 3.6.13; CUDA 11.2.

Step S3. Construct the vehicle face recognition method for occluded license plates with the same structure as in Embodiment 1 (steps S31 to S34; see Fig. 2 and Fig. 5).

Step S4. Split the training-set pictures by vehicle id into a gallery library and a query library, where each vehicle id appears exactly once in the query library while the gallery may contain several pictures of the same vehicle id. The processed training pictures are first used to train the present method. During training, the optimizer is SGD, the learning rate is 0.001, the learning-rate momentum is 0.0005, the batch size is 64 and the number of epochs is 60; the learning rate is multiplied by 0.1 at epochs 30 and 50.

Steps S5 to S16 are then carried out exactly as in Embodiment 1 (with D = 2048): every gallery picture is embedded and its vector Qr is stored in the folder of its predicted color, and each query picture is matched by cosine distance against the folder of its predicted color, the pictures corresponding to the top 5 vectors being returned as the prediction result.

Step S17. Perform steps S12, S13, S14 and S15 on the query picture library of the test set (the overall flow is shown in Fig. 1), obtain all prediction results for the query library, and compute the accuracy.

The comparative results of Embodiment 2 are shown in Fig. 4. On a query library of 1678 pictures, a query is counted as accurate when the matching picture appears among the 5 returned pictures. The present method reaches an accuracy of 97.3%, which is 43.61% higher than BOW-CN, 50.82% higher than LOMO, 24.42% higher than FACT, 5.88% higher than NuFACT, 6.48% higher than VAML and 2.84% higher than QD-DLF, demonstrating the effectiveness of the method, which successfully identifies the corresponding vehicle with an unoccluded license plate.

In summary, the above embodiments disclose a vehicle face recognition method for occluded license plates. The present invention obtains color prediction information and picture vector information from the color prediction module, the global perception module and the detail perception module; it then locates the folder corresponding to the predicted color, computes cosine distances against the picture vectors in that folder, sorts them from largest to smallest and selects the top 5 as the recognition result, thereby addressing the technical problem of recognising occluded license plates. Compared with traditional methods, the present invention adds an auxiliary vehicle-color classification flow, performs recognition through the global and detail perception modules, and applies multiple normalisations over batch and channel inside the global perception module. This improves retrieval capability and the model's representation of occluded-plate vehicles, and successfully finds the corresponding vehicle in its unoccluded-plate state. The high recognition rate of the method is demonstrated by the experimental results of Embodiments 1 and 2.

The above embodiments are preferred implementations of the present invention, but the implementations of the present invention are not limited to them. Any change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention is an equivalent replacement and falls within the scope of protection of the present invention.

Claims (10)

1. The vehicle face recognition method for the license plate shielding is characterized by comprising the following steps of:
s1, inputting a certain face picture P in a data set into a backbone network to obtain a feature map F p ∈R C×E×H Wherein C represents the number of channels of the feature map, W and H are the width and height of the feature map, and R represents the real number domain;
s2, mapping the characteristic diagram F p Input to colorThe method comprises the steps that a prediction module predicts the color of a vehicle to obtain a predicted vehicle color result i;
s3, mapping the characteristic diagram F p Inputting to global perception module for prediction to obtain vector
Figure FDA0004061264920000011
Wherein D is the dimension of the column vector;
S4, inputting the feature map F_p into the detail perception module for prediction to obtain a vector Q_2 ∈ R^{D×1};
S5, splicing the vector Q_1 and the vector Q_2 to obtain a vector Q_r ∈ R^{D×1};
S6, according to the predicted vehicle color result i of step S2, storing the vector Q_r into the folder named i;
S7, repeatedly performing the operations of steps S1 to S6 on all vehicle face pictures in the picture library of the data set to obtain a vector set Q_i = {Q_{i,k} | 1 ≤ k ≤ I_n}, where I_n is the number of vectors in folder i and Q_{i,k} is the vector corresponding to the k-th picture in folder i;
S8, inputting the vehicle face picture to be detected and performing step S1 to obtain a feature map F_q;
S9, performing step S2 on the feature map F_q to obtain a color prediction result i_q;
S10, performing steps S3, S4, and S5 in sequence on the feature map F_q to obtain a spliced vector Q_q ∈ R^{D×1};
S11, performing cosine distance calculation between the vector Q_q and all vectors in folder i_q to obtain a set {l_g | 1 ≤ g ≤ I_{n_q}}, where I_{n_q} is the number of vectors in folder i_q and l_g is the cosine distance between Q_q and the vector corresponding to the g-th picture in folder i_q;
S12, sorting the set by distance from large to small, and selecting the pictures corresponding to the top 5 vectors as the prediction result.
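As a companion to claim 1, the gallery-construction half of the method (steps S1 to S7) can be sketched as follows; the module names are assumptions carried over from the sketch above, not part of the claim.

```python
import collections
import numpy as np

# Sketch of gallery construction per steps S1-S7: each gallery picture is
# encoded once and its spliced vector Q_r is filed under its predicted color i.
def build_gallery(pictures, backbone, color_module, global_module, detail_module):
    folders = collections.defaultdict(list)          # folder i -> [Q_{i,k}, ...]
    for picture in pictures:
        f_p = backbone(picture)                      # S1: feature map F_p
        i = color_module(f_p)                        # S2: predicted color i
        q1 = global_module(f_p)                      # S3: vector Q_1
        q2 = detail_module(f_p)                      # S4: vector Q_2
        folders[i].append(np.concatenate([q1, q2]))  # S5-S6: Q_r into folder i
    return folders
```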
2. The vehicle face recognition method for occluded license plates according to claim 1, wherein the backbone network structure is connected in sequence from the input layer to the output layer as follows: convolution layer conv1_1, Relu layer conv1_1_relu, convolution layer conv1_2, Relu layer conv1_2_relu, pooling layer max_pooling1, convolution layer conv2_1, Relu layer conv2_1_relu, convolution layer conv2_2, BN layer conv2_2_bn, Relu layer conv2_2_relu, pooling layer max_pooling2, convolution layer conv3_1, Relu layer conv3_1_relu, convolution layer conv3_2, Relu layer conv3_2_relu, convolution layer conv3_3, Relu layer conv3_3_relu, pooling layer max_pooling3, convolution layer conv4_1, Relu layer conv4_1_relu, convolution layer conv4_2, Relu layer conv4_2_relu, convolution layer conv4_3, Relu layer conv4_3_relu, pooling layer max_pooling4, convolution layer conv5_1, Relu layer conv5_1_relu, convolution layer conv5_2, Relu layer conv5_2_relu, convolution layer conv5_3, Relu layer conv5_3_relu, pooling layer max_pooling5, convolution layer fc6, Relu layer fc6_relu, convolution layer fc7, Relu layer fc7_relu, convolution layer conv6_1, Relu layer conv6_1_relu, convolution layer conv6_2, Relu layer conv6_2_relu, convolution layer conv7_1, Relu layer conv7_1_relu, convolution layer conv7_2, Relu layer conv7_2_relu, convolution layer conv8_1, Relu layer conv8_1_relu, convolution layer conv8_2, Relu layer conv8_2_relu, convolution layer conv9_1, Relu layer conv9_1_relu, convolution layer conv9_2, Relu layer conv9_2_relu, convolution layer conv10_1, Relu layer conv10_1_relu, convolution layer conv10_2, Relu layer conv10_2_relu.
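Claim 2 fixes only the order of the layers. The condensed PyTorch sketch below shows one plausible instantiation; all channel widths, kernel sizes, and strides are assumed VGG/SSD-style values, not figures given in the patent.

```python
import torch.nn as nn

# Condensed sketch of the claim-2 layer order. Channel widths, kernel sizes,
# and strides are assumed VGG/SSD-style values; the claim names only the order.
def conv_relu(c_in, c_out, k=3, s=1, p=1):
    return [nn.Conv2d(c_in, c_out, k, stride=s, padding=p), nn.ReLU(inplace=True)]

layers = []
layers += conv_relu(3, 64) + conv_relu(64, 64) + [nn.MaxPool2d(2)]       # conv1_x
layers += conv_relu(64, 128) + [nn.Conv2d(128, 128, 3, padding=1),       # conv2_x,
                                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
                                nn.MaxPool2d(2)]                          # conv2_2_bn
for c_in, c_out in [(128, 256), (256, 512), (512, 512)]:                  # conv3_x-5_x
    layers += (conv_relu(c_in, c_out) + conv_relu(c_out, c_out)
               + conv_relu(c_out, c_out) + [nn.MaxPool2d(2)])
layers += conv_relu(512, 1024) + conv_relu(1024, 1024)                    # fc6, fc7
for c_in, c_mid, c_out in [(1024, 256, 512), (512, 128, 256),             # conv6_x
                           (256, 128, 256), (256, 128, 256),              # through
                           (256, 128, 256)]:                              # conv10_x
    layers += conv_relu(c_in, c_mid, k=1, p=0) + conv_relu(c_mid, c_out, s=2)
backbone = nn.Sequential(*layers)
```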
3. The vehicle face recognition method for occluded license plates according to claim 1, wherein the color prediction module structure is connected in sequence from the input layer to the output layer as follows: pooling layer global_pooling_1, fc layer fc_1, fc layer fc_2.
4. The vehicle face recognition method for occluded license plates according to claim 1, wherein the global perception module structure is connected in sequence from the input layer to the output layer as follows: pooling layer global_pooling_2, multidimensional normalization layer conv11_1_in_bn, convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2, Relu layer conv11_2_relu; the multidimensional normalization layer conv11_1_in_bn is composed of a BN layer conv11_1_bn and an IN layer conv11_1_in.
5. The vehicle face recognition method for occluded license plates according to claim 1, wherein the detail perception module structure is connected in sequence from the input layer to the output layer as follows: feature detail compression layer horizontal_compressing_1, BN layer conv12_1_bn, convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2, Relu layer conv12_2_relu.
6. The vehicle face recognition method for occluded license plates according to claim 3, wherein step S2 is performed as follows:
S21, inputting the feature map F_p into the global average pooling layer global_pooling_1 to obtain a feature map E_a ∈ R^{C×1};
S22, inputting the feature map E_a into fc layer fc_1 and fc layer fc_2, and obtaining a vector E_f ∈ R^{N×1} through the softmax function, where N is the number of vehicle colors in the data set;
S23, taking the position index i corresponding to the highest value in the vector E_f and outputting the vehicle color result i.
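Claims 3 and 6 together describe a small classification head. A PyTorch sketch under assumed dimensions (the channel count and the number of colors N are not fixed by the patent) could look like this:

```python
import torch
import torch.nn as nn

# Sketch of the color prediction module (claim 3) executing steps S21-S23.
# The channel count and the number of colors are illustrative assumptions.
class ColorPredictor(nn.Module):
    def __init__(self, channels=512, n_colors=10):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global_pooling_1
        self.fc1 = nn.Linear(channels, channels)     # fc layer fc_1
        self.fc2 = nn.Linear(channels, n_colors)     # fc layer fc_2

    def forward(self, f_p):                          # F_p, shape (B, C, H, W)
        e_a = self.pool(f_p).flatten(1)              # S21: E_a
        e_f = torch.softmax(self.fc2(self.fc1(e_a)), dim=1)  # S22: E_f
        return e_f.argmax(dim=1)                     # S23: color index i
```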
7. The vehicle face recognition method for occluded license plates according to claim 4, wherein step S3 is performed as follows:
S31, inputting the feature map F_p into the global_pooling_2 layer to obtain a vector V ∈ R^{D×1};
S32, inputting the vector V into the BN layer conv11_1_bn to obtain a vector V_b ∈ R^{D×1};
S33, inputting the vector V into the IN layer conv11_1_in to obtain a vector V_i ∈ R^{D×1};
S34, splicing the vector V_b and the vector V_i to obtain a vector V_c ∈ R^{2D×1};
S35, passing the vector V_c through convolution layer conv11_1, Relu layer conv11_1_relu, convolution layer conv11_2, and Relu layer conv11_2_relu to obtain the vector Q_1.
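Claims 4 and 7 describe a dual-normalization branch over the pooled vector. The sketch below assumes BN over the batch axis, IN as per-sample normalization over the D entries, and conv11_1/conv11_2 as 1×1 convolutions; these are readings of the claim, not specified details.

```python
import torch
import torch.nn as nn

# Sketch of the global perception module (claim 4, steps S31-S35). Treating
# conv11_1/conv11_2 as 1x1 convolutions and the IN branch as per-sample
# normalization over the D entries are assumptions, not claimed details.
class GlobalPerception(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)               # global_pooling_2
        self.bn = nn.BatchNorm1d(dim)                     # conv11_1_bn
        self.inorm = nn.InstanceNorm1d(1)                 # conv11_1_in
        self.conv1 = nn.Conv1d(2 * dim, dim, 1)           # conv11_1
        self.conv2 = nn.Conv1d(dim, dim, 1)               # conv11_2
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f_p):                               # F_p, shape (B, C, H, W)
        v = self.pool(f_p).flatten(1)                     # S31: V
        v_b = self.bn(v)                                  # S32: V_b
        v_i = self.inorm(v.unsqueeze(1)).squeeze(1)       # S33: V_i
        v_c = torch.cat([v_b, v_i], dim=1).unsqueeze(-1)  # S34: V_c, (B, 2D, 1)
        q1 = self.relu(self.conv2(self.relu(self.conv1(v_c))))  # S35: Q_1
        return q1.squeeze(-1)
```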
8. The vehicle face recognition method for occluded license plates according to claim 5, wherein step S4 is performed as follows:
S41, inputting the feature map F_p into the feature detail compression layer horizontal_compressing_1 and the BN layer conv12_1_bn to obtain a feature map J_s ∈ R^{C×H×1};
S42, inputting the feature map J_s into convolution layer conv12_1, Relu layer conv12_1_relu, convolution layer conv12_2, and Relu layer conv12_2_relu to obtain the vector Q_2.
9. The vehicle face recognition method for occluded license plates according to claim 5, wherein the feature detail compression layer works as follows:
assuming that the input of the feature detail compression layer is a feature map M ∈ R^{C×W×H} and the output is a feature map Y ∈ R^{C×H×1};
the feature map M is segmented along its height direction into a set of H feature maps A* = {A_m | 1 ≤ m ≤ H}, where A_m ∈ R^{C×W×1} is the feature map at the m-th position of the set A*;
for a feature map A_z in the set A*, compression is performed according to formula (1) to obtain a result k_z (formula (1) is published as an equation image in the original);
where A_{z,x,y} is the x-th row vector with channel number y in the feature map A_z at the z-th position of the set A*;
each feature map in the set A* is processed by formula (1), and the results are collected into a column vector T ∈ R^{H×1}, where the value at the t-th position of T is the result of formula (1) for the feature map at the t-th position of the set A*.
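Since formula (1) is reproduced only as an image, the per-slice compression in the sketch below assumes a simple mean over channels and width; the slicing along the height axis follows the claim.

```python
import numpy as np

# Sketch of the feature detail compression layer of claim 9. The slicing along
# the height axis follows the claim; the per-slice reduction k_z is assumed to
# be a mean over channels and width, since formula (1) is published as an image.
def horizontal_compress(m):                       # m: feature map, shape (C, W, H)
    c, w, h = m.shape
    slices = [m[:, :, z] for z in range(h)]       # A* = {A_m | 1 <= m <= H}
    t = np.array([a_z.mean() for a_z in slices])  # k_z via assumed formula (1)
    return t.reshape(h, 1)                        # column vector T in R^{H x 1}
```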
10. The vehicle face recognition method for occluded license plates according to claim 1, wherein the cosine distance is calculated as follows:
assuming that the cosine distance calculation between a vector J and a vector B yields a value l, the calculation expression is:
l = 1 − J · B^T
where the symbol "·" denotes element-wise multiplication of the vector entries, and the superscript "T" denotes the vector transpose operation.
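Read literally, claim 10 computes l = 1 − J·B^T; the sketch below additionally assumes L2-normalized inputs so that l behaves as a cosine distance.

```python
import numpy as np

# Claim-10 distance, taken literally as l = 1 - J . B^T. L2 normalization of
# the inputs is an added assumption so that l behaves as a cosine distance.
def cosine_distance(j, b):
    j = j / np.linalg.norm(j)
    b = b / np.linalg.norm(b)
    return 1.0 - float(j @ b)   # for 1-D arrays, b.T equals b
```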
CN202310061292.9A 2023-01-18 2023-01-18 Vehicle face recognition method for shielding license plate Pending CN116052150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310061292.9A CN116052150A (en) 2023-01-18 2023-01-18 Vehicle face recognition method for shielding license plate

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310061292.9A CN116052150A (en) 2023-01-18 2023-01-18 Vehicle face recognition method for shielding license plate

Publications (1)

Publication Number Publication Date
CN116052150A true CN116052150A (en) 2023-05-02

Family

ID=86131084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310061292.9A Pending CN116052150A (en) 2023-01-18 2023-01-18 Vehicle face recognition method for shielding license plate

Country Status (1)

Country Link
CN (1) CN116052150A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116740661A (en) * 2023-08-11 2023-09-12 科大国创软件股份有限公司 Method for reversely tracking Mongolian vehicle based on face recognition
CN116740661B (en) * 2023-08-11 2023-12-22 科大国创软件股份有限公司 Method for reversely tracking Mongolian vehicle based on face recognition
CN117745720A (en) * 2024-02-19 2024-03-22 成都数之联科技股份有限公司 Vehicle appearance detection method, device, equipment and storage medium
CN117745720B (en) * 2024-02-19 2024-05-07 成都数之联科技股份有限公司 Vehicle appearance detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Deng et al. A global-local self-adaptive network for drone-view object detection
Zhang et al. Too far to see? Not really!—Pedestrian detection with scale-aware localization policy
Yang et al. Diffusion model as representation learner
Peeples et al. Histogram layers for texture analysis
WO2020247545A1 (en) Lightweight decompositional convolution neural network
CN116052150A (en) Vehicle face recognition method for shielding license plate
Kaddar et al. HCiT: Deepfake video detection using a hybrid model of CNN features and vision transformer
CN114170516A (en) A vehicle re-identification method, device and electronic device based on roadside perception
CN111428730B (en) Weak supervision fine-grained object classification method
Khan et al. Drone-HAT: Hybrid attention transformer for complex action recognition in drone surveillance videos
Gao et al. A transformer-based network for hyperspectral object tracking
Chen et al. SSL-Net: Sparse semantic learning for identifying reliable correspondences
Li et al. NDNet: Spacewise multiscale representation learning via neighbor decoupling for real-time driving scene parsing
CN112465700A (en) Image splicing positioning device and method based on depth clustering
CN115861927A (en) Image identification method and device for power equipment inspection image and computer equipment
Vo et al. Enhanced feature pyramid networks by feature aggregation module and refinement module
Shi et al. Combined channel and spatial attention for YOLOv5 during target detection
CN114005142A (en) Person Re-identification Model and Recognition Method Based on Multi-scale and Attention Feature Aggregation
CN118865433A (en) A pedestrian target detection model training method, device and equipment
Gong et al. An improved YOLO algorithm with multisensing for pedestrian detection
Qiu et al. Graph Convolution and Self Attention Based Non-maximum Suppression
Wu et al. A CNN-Transformer Hybrid Network for Multi-scale object detection
Liu et al. Part-Whole Relational Fusion Towards Multi-Modal Scene Understanding
Samad et al. SCMA: exploring dual-module attention with multi-scale kernels for effective feature extraction
Catalano et al. More than the Sum of Its Parts: Ensembling Backbone Networks for Few-Shot Segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination