CN112308066A - License plate recognition system - Google Patents

License plate recognition system

Info

Publication number
CN112308066A
CN112308066A (application CN202011147535.3A)
Authority
CN
China
Prior art keywords
network
license plate
module
plate recognition
dense connection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011147535.3A
Other languages
Chinese (zh)
Inventor
刘建虢
尹晓雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Cresun Innovation Technology Co Ltd
Original Assignee
Xian Cresun Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Cresun Innovation Technology Co Ltd filed Critical Xian Cresun Innovation Technology Co Ltd
Priority to CN202011147535.3A priority Critical patent/CN112308066A/en
Publication of CN112308066A publication Critical patent/CN112308066A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a license plate recognition system, which comprises: an image acquisition module for acquiring a target license plate image; an image recognition module for performing feature extraction on the target license plate image by utilizing a backbone network of a license plate recognition network to obtain x feature maps, the scales of the x feature maps increasing in sequence, x being a natural number of 4 or more; performing feature fusion on the x feature maps by using an FPN (feature pyramid network) of the license plate recognition network to obtain a prediction result corresponding to the feature map of each scale; and obtaining a detection result of the target license plate image based on the prediction result, wherein the detection result comprises the license plate characters of the target license plate; and a sending module configured to send the detection result. The license plate recognition network is formed by adopting layer pruning, channel pruning and knowledge-distillation-guided network recovery on the basis of a YOLOv3 network. The license plate recognition system provided by the invention can realize accurate license plate recognition at a long distance or in a complex environment.

Description

License plate recognition system
Technical Field
The invention belongs to the field of image processing, and particularly relates to a license plate recognition system.
Background
At present, the number of vehicles on roads is increasing, and with the development of intelligent transportation systems, license plate recognition systems are widely applied in many fields. A Vehicle License Plate Recognition (VLPR) system is an application of computer video image recognition technology to vehicle license plates. License plate recognition technology requires that the license plate of an automobile can be extracted and recognized from a complex background; through license plate extraction, image preprocessing, feature extraction, license plate character recognition and other techniques, information such as the license plate number and color of the automobile is recognized. The technology is widely applied in scenarios such as parking charge management, automatic payment management at highway toll stations, traffic flow control index detection, automatic road overspeed monitoring, vehicle positioning and automobile theft prevention.
However, in the prior art, the technology for recognizing vehicle license plates is limited: when the vehicle is too far away or the natural environment is poor, the recognition rate of the license plate recognition system is affected.
Therefore, how to accurately recognize a license plate at a long distance or in a complex environment is a problem that urgently needs to be solved.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a license plate recognition system. The technical problem to be solved by the invention is realized by the following technical scheme:
the embodiment of the invention provides a license plate recognition system, which comprises:
the image acquisition module is used for acquiring a target license plate image;
the image recognition module is used for extracting the features of the target license plate image by utilizing a backbone network in a dense connection form of a license plate recognition network to obtain x feature maps; the scales of the x feature maps increase in sequence; x is a natural number of 4 or more; performing feature fusion on the x feature maps by using an FPN (feature pyramid network) of the license plate recognition network to obtain a prediction result corresponding to the feature map of each scale; obtaining a detection result of the target license plate image based on the prediction result, wherein the detection result comprises license plate characters of the target license plate;
a sending module, configured to send the detection result;
the license plate recognition network comprises the backbone network in the dense connection form and the FPN network, and is formed by adopting layer pruning, channel pruning and knowledge-distillation-guided network recovery on the basis of a YOLOv3 network; and the license plate recognition network is obtained by training according to sample license plate images and the license plate characters corresponding to the sample license plate images.
Optionally, the backbone network in the dense connection form includes a plurality of dense connection modules and transition modules connected in series at intervals; the number of the dense connection modules is y; each dense connection module comprises a convolutional network module and a dense connection unit group connected in series; the convolutional network module comprises a convolution layer, a BN layer and a Leaky ReLU layer connected in series; the dense connection unit group comprises m dense connection units; wherein y and m are natural numbers of 4 or more, and y is equal to or greater than x.
Optionally, each dense connection unit includes a plurality of convolution network modules connected in a dense connection manner, and a feature map output by the plurality of convolution network modules is fused in a cascade manner.
Optionally, the obtaining x feature maps includes:
and obtaining the characteristic graphs which are output by the x dense connection modules along the input reverse direction and have sequentially increased sizes.
Optionally, the transition module is the convolutional network module.
Optionally, the transition module includes a plurality of convolution network modules and a maximum pooling layer, which are sequentially connected; the input of the convolution network module and the input of the maximum pooling layer are shared, and the feature graph output by the convolution network module and the feature graph output by the maximum pooling layer are fused in a cascading mode.
Optionally, the FPN network includes x prediction branches Y1 to Yx with sequentially increasing scales; wherein the scales of the prediction branches Y1 to Yx correspond one-to-one with the scales of the x feature maps;
Each prediction branch Yi comprises a convolutional network module group and an up-sampling module; the prediction branch Yi obtains the feature map of the corresponding scale from the x feature maps and performs cascade fusion with the up-sampled feature map from the prediction branch Yi-1; wherein i is a natural number of 2 or more and x or less.
Optionally, the license plate recognition network is formed by adopting layer pruning, channel pruning and knowledge distillation to guide network recovery, and the method includes:
on the basis of a YOLOv3 network, in a network obtained by adopting the backbone network in the dense connection form and increasing the extraction scale of the feature map, layer pruning is carried out on the dense connection modules of the backbone network in the dense connection form to obtain a YOLOv3-1 network;
carrying out sparse training on the YOLOv3-1 network to obtain a YOLOv3-2 network with BN layer scaling coefficients in sparse distribution;
performing channel pruning on the YOLOv3-2 network to obtain a YOLOv3-3 network;
and carrying out knowledge distillation on the YOLOv3-3 network to obtain the license plate recognition network.
Optionally, in the network obtained by adding the extraction scale of the feature map to the backbone network in the dense connection form based on the YOLOv3 network, performing layer pruning on the dense connection module of the backbone network in the dense connection form includes:
pruning the number of the dense connection units contained in the dense connection module from m to p; wherein m and p are both natural numbers, and p is less than m.
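The channel pruning step in the claims above relies on sparse training driving many BN scaling coefficients toward zero, after which channels with small coefficients can be removed under a global threshold (the "network slimming" idea). The following numpy sketch illustrates that selection logic only; the threshold rule and the `keep_ratio` parameter are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def channel_prune_masks(bn_gammas, keep_ratio=0.5):
    """Keep channels whose |gamma| is at or above a global threshold chosen
    so that roughly `keep_ratio` of all channels survive pruning."""
    flat = np.sort(np.abs(np.concatenate(bn_gammas)))
    thresh = flat[int(len(flat) * (1.0 - keep_ratio))]
    return [np.abs(g) >= thresh for g in bn_gammas]

# After sparse training, many BN scaling coefficients sit near zero:
gammas = [np.array([0.9, 0.01, 0.5]), np.array([0.02, 0.8, 0.03, 0.7])]
masks = channel_prune_masks(gammas, keep_ratio=0.5)
print([m.tolist() for m in masks])  # [[True, False, True], [False, True, False, True]]
```

In a full pipeline, the surviving masks would be used to slice the corresponding convolution weights, producing the smaller YOLOv3-3 network that is then fine-tuned under knowledge distillation.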
Optionally, the image acquisition module includes a camera, a video camera, a mobile phone, or a monitoring device on the road.
According to the license plate recognition system provided by the embodiment of the invention, when the target license plate image is detected, a residual error module in a main network of a YOLOv3 network in the prior art is replaced by a dense connection module. When the target license plate image features are extracted, an original parallel feature fusion mode is changed into a serial mode by using a dense connection module, and an early obtained feature graph is used as the input of each layer of feature graph behind, so that the feature graph with more information can be obtained, the feature transfer is strengthened, and the detection precision is improved, therefore, the detection precision can be higher in the case of complex conditions such as wind and snow weather or insufficient shooting light; the feature extraction scale with fine granularity is increased, so that smaller objects can be detected, and the detection precision of small targets in the target license plate image can be improved, therefore, the target license plate can be effectively detected under the conditions that the shooting distance is long, the occupied area of the license plate in the obtained target license plate image is small, and the like; based on a target detection method of a network obtained by adopting the trunk network in the dense connection form and increasing the extraction scale of the feature map on the basis of the YOLOv3 network, the license plate recognition network is obtained by performing layer pruning, sparse training, channel pruning and knowledge distillation processing on the original YOLOv3 and selecting optimized processing parameters in each processing process. Because the volume of the network is greatly reduced, most redundant calculation is eliminated, the target detection speed based on the network is greatly improved, and the detection precision can be maintained. 
Particularly, when the method is applied to scenes with few types to be detected, the detection precision can be ensured, the detection speed can be greatly improved, and the license plate can be accurately identified in a long distance or a complex environment.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a schematic structural diagram of a license plate recognition system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a prior art YOLOv3 network;
fig. 3 is a schematic structural diagram of a license plate recognition network according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a transition module according to an embodiment of the present invention;
fig. 5 is a weight distribution diagram of a sparse training parameter combination based on a license plate recognition network according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
In order to accurately recognize license plates at a long distance or in a complex environment, an embodiment of the invention provides a license plate recognition system.
The embodiment of the invention provides a license plate recognition system 100. Next, the license plate recognition system will be described.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a license plate recognition system provided in an embodiment of the present invention, which may include the following modules:
and the image acquisition module 101 is used for acquiring a target license plate image.
The image recognition module 102 is configured to perform feature extraction on a target license plate image by using a backbone network in a dense connection form of a license plate recognition network to obtain x feature maps; the scales of the x feature maps increase in sequence; x is a natural number of 4 or more; perform feature fusion on the x feature maps by using an FPN (feature pyramid network) of the license plate recognition network to obtain a prediction result corresponding to the feature map of each scale; and obtain a detection result of the target license plate image based on the prediction result, wherein the detection result comprises license plate characters of the target license plate.
a sending module 103, configured to send a detection result;
The license plate recognition network comprises the backbone network in the dense connection form and the FPN network, and is formed by adopting layer pruning, channel pruning and knowledge-distillation-guided network recovery on the basis of a YOLOv3 network; the license plate recognition network is obtained by training according to sample license plate images and the license plate characters corresponding to the sample license plate images.
The modules are described below:
(1) the image acquisition module 101:
the image acquisition module 101 may include a camera, a video camera, a mobile phone, or a monitoring device on the road, etc. The image capturing module 101 may be disposed on a street lamp post at the edge of a road, on an overpass, or the like. The image acquisition module 101 captures images of a target license plate of a passing vehicle.
The target license plate image is an image containing a license plate of the target vehicle, which is acquired by the image acquisition module 101.
The acquired target license plate image at least contains a license plate of a target vehicle.
It can be understood that in such a scenario, due to a long shooting distance, the area occupied by the license plate in the acquired target license plate image may be relatively small, or due to problems such as wind and snow weather or insufficient shooting light, the resolution of the acquired target license plate image may be poor.
(2) The image recognition module 102:
in this embodiment, the backbone network in the form of dense connection includes a plurality of dense connection modules and transition modules connected in series at intervals.
The backbone network in the form of dense connection in the embodiment is improved based on the backbone network of the YOLOv3 network. The license plate recognition network is obtained by training according to the sample license plate image and the license plate characters corresponding to the sample license plate image.
In order to facilitate understanding of the network structure of the backbone network in the form of dense connections provided by the embodiments of the present invention, the structure of the YOLOv3 network in the prior art is described. Fig. 2 is a schematic structural diagram of a YOLOv3 network in the prior art.
Referring to fig. 2, the portion within the dashed box is the YOLOv3 network. The part in the dotted line frame is the backbone network of the YOLOv3 network, namely the darknet-53 network; the rest is the Feature Pyramid Network (FPN), which is divided into three prediction branches Y1 to Y3. The scales of the prediction branches Y1 to Y3 correspond one-to-one with the scales of the feature maps output by the three residual modules res4, res8, res8 in the reverse direction of the input. The prediction results of the prediction branches are denoted Y1, Y2 and Y3, and the scales of Y1, Y2 and Y3 increase in sequence.
The backbone network of the YOLOv3 network is formed by connecting a CBL module and a plurality of resn modules in series. The CBL module is a convolutional network module comprising, in series, a conv layer (convolutional layer), a BN (Batch Normalization) layer and a Leaky ReLU layer corresponding to the Leaky ReLU activation function; CBL stands for conv + BN + Leaky ReLU. The resn module is a residual module, where n is a natural number (res1, res2, …, res8, etc.); it comprises a zero-padding layer, a CBL module and a residual unit group connected in series. The residual unit group is denoted res unit*n, meaning it contains n residual units (res unit); each residual unit comprises a plurality of CBL modules connected in the Residual Network (ResNet) connection form, and feature fusion adopts a parallel form, i.e., element-wise addition (add).
Each prediction branch of the FPN network includes a convolutional network module group, specifically includes 5 convolutional network modules, that is, CBL × 5 in fig. 2. In addition, the US (up sampling) module is an up sampling module; concat represents that the feature fusion adopts a cascade mode, and concat is short for concatenate.
For the specific structure of each main module in the YOLOv3 network, please refer to the schematic diagram below the dashed box in fig. 2.
The backbone network in the form of dense connection provided by the embodiment of the present invention differs from the backbone network of the YOLOv3 network in the prior art in that it comprises a plurality of dense connection modules and transition modules connected in series at intervals. Drawing on the connection mode of the densely connected convolutional network DenseNet, a specific dense connection module is proposed to replace the residual module (resn module) in the backbone network of the YOLOv3 network. It is known that ResNet combines features by summation before passing them to subsequent layers, i.e., feature fusion in a parallel manner. The dense connection approach, by contrast, connects all layers (with matching feature map sizes) directly to each other in order to ensure that information flows between layers in the network to the maximum extent. Specifically, for each layer, all feature maps of the preceding layers are used as its input, and its own feature map is used as input for all subsequent layers; that is, feature fusion is in a cascade (concatenation) manner. Therefore, compared with the YOLOv3 network using residual modules, the license plate recognition network in the embodiment of the invention obtains feature maps with a larger amount of information by using dense connection modules instead, and can enhance feature propagation when detecting the target license plate. Meanwhile, because redundant feature maps do not need to be learned again, the number of parameters can be greatly reduced, the amount of calculation is reduced, and the problem of gradient vanishing can be alleviated.
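The difference between the two fusion modes can be seen in a minimal numpy sketch (an illustration, not part of the patent; the shapes and function names are assumptions): residual fusion adds feature maps element-wise, leaving the channel count unchanged, while dense (cascade) fusion concatenates them along the channel axis, so every later layer sees every earlier feature map.

```python
import numpy as np

def residual_fuse(x, fx):
    # ResNet-style parallel fusion: element-wise add; channel count unchanged
    return x + fx

def dense_fuse(feature_maps):
    # DenseNet-style cascade fusion: concatenate along the channel axis,
    # so later layers receive all earlier feature maps as input
    return np.concatenate(feature_maps, axis=0)  # layout (C, H, W)

x = np.ones((8, 13, 13))   # a feature map with 8 channels
fx = np.ones((8, 13, 13))  # output of a convolution applied to x

print(residual_fuse(x, fx).shape)  # (8, 13, 13): add keeps the channel count
print(dense_fuse([x, fx]).shape)   # (16, 13, 13): concat grows the channel count
```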
Referring to fig. 3, fig. 3 is a schematic structural diagram of a license plate recognition network according to an embodiment of the present invention. The backbone network of the dense connection form of the present embodiment is explained below with reference to fig. 3.
In fig. 3, the portion inside the dotted line frame is the license plate recognition network. The license plate recognition network comprises the backbone network in the dense connection form, the FPN network, a classification network and a non-maximum suppression module, and is formed by adopting layer pruning, channel pruning and knowledge-distillation-guided network recovery on the basis of a YOLOv3 network. The license plate recognition network is obtained by training according to sample license plate images and the license plate characters corresponding to the sample license plate images; the training process is described later.
The part inside the dotted line frame in fig. 3 is a backbone network in a dense connection form for feature extraction.
Illustratively, the backbone network in the form of dense connection comprises a plurality of dense connection modules and transition modules which are connected in series at intervals; the densely connected modules are denoted with denm in fig. 3.
The number of the dense connection modules is y; y is a natural number of 4 or more, and y is equal to or more than x, and is exemplified by y being 5 in fig. 3.
The dense connection module comprises a convolutional network module and a dense connection unit group connected in series; the convolutional network module is represented by CBL in fig. 3, and the dense connection unit group is represented by den unit*m in fig. 3, where m is a natural number equal to or greater than 4. The convolutional network module includes a conv layer (convolutional layer), a BN (Batch Normalization) layer and a Leaky ReLU layer (Leaky ReLU being an activation function) connected in series; CBL stands for conv + BN + Leaky ReLU.
The dense connection unit group comprises m dense connection units; each dense connection unit comprises a plurality of convolutional network modules connected in a dense connection manner, and the feature maps output by the convolutional network modules are fused in a cascade manner; the cascade is denoted by concat in fig. 3, concat being short for concatenate.
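A dense connection unit group of this kind can be sketched as follows (a numpy illustration under assumed shapes, not the patent's implementation; the toy `toy_cbl` merely slices channels, where a real CBL module would be a conv + BN + Leaky ReLU stack): each layer consumes the concatenation of all earlier feature maps and contributes its own output to every later layer.

```python
import numpy as np

def toy_cbl(x, growth=4):
    # stand-in for a CBL (conv + BN + Leaky ReLU) module emitting `growth` channels
    return x[:growth] * 0.5

def dense_unit_group(x, m=4, growth=4):
    feats = [x]
    for _ in range(m):
        inp = np.concatenate(feats, axis=0)  # cascade fusion of ALL earlier maps
        feats.append(toy_cbl(inp, growth))   # this output feeds every later layer
    return np.concatenate(feats, axis=0)

out = dense_unit_group(np.ones((8, 13, 13)))
print(out.shape)  # (24, 13, 13): 8 input channels + 4 layers x 4 new channels
```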
Correspondingly, for obtaining x feature maps, the method comprises the following steps:
and obtaining the characteristic graphs which are output by the x dense connection modules along the input reverse direction and have sequentially increased sizes.
As can be understood with reference to fig. 3, y is 5 and x is 4 in fig. 3. The backbone network performs shallow-to-deep feature extraction on the input target license plate image (simply referred to as an image in fig. 3) through the densely connected modules and transition modules, and outputs the extracted feature maps from 4 dense connection modules. That is, the first to fourth dense connection modules denm in the reverse direction of the input each output a corresponding feature map, the scales of which increase in sequence. Specifically, the scales of the feature maps are 13 × 13 × 72, 26 × 26 × 72, 52 × 52 × 72, and 104 × 104 × 72, respectively.
The embodiment of the invention adopts the backbone network in a dense connection mode, can ensure that information flows between layers in the network to the maximum extent, and directly connects all the layers (with matched characteristic diagram sizes). For each layer, all feature maps of the layer before the layer are used as input of the layer, and the feature map of the layer is used as input of all the subsequent layers, namely, the feature fusion adopts a cascade mode (also called a series mode). Therefore, compared with the prior art that a residual error module is used and the parallel connection mode is used for feature fusion, the embodiment of the invention adopts the dense connection module, so that the obtained feature map has more information, and the feature propagation can be enhanced and the detection precision can be improved during target detection. Meanwhile, because the redundant characteristic diagram does not need to be learned again, the number of parameters can be greatly reduced, the calculated amount is reduced, and the problem of gradient disappearance can be reduced. In addition, the invention transfers the characteristic diagram from shallow to deep, extracts the characteristic diagram with at least four scales, and enables the network to detect objects with different scales, namely, increases the characteristic extraction scale with fine granularity. When the subsequent target is detected, the detection precision can be improved aiming at the condition that the area occupied by the license plate in the obtained target license plate image is a small target or in windy and snowy weather, dark light and the like.
Transition modules can be arranged among the added dense connection modules so as to adjust the characteristic diagram of the dense connection.
In an optional first embodiment, the transition module is a convolutional network module. I.e. using the CBL module as a transition module. The network building process can be fast, and the obtained network structure is simple. However, such a transition module only uses convolution layers for transition, that is, the dimension of the feature map is reduced by directly increasing the step size, and this only takes care of the features in the local region, but cannot combine the information of the whole feature map, so that the information in the feature map is lost more.
In a second optional implementation manner, the transition module includes a plurality of convolutional network modules and a maximum pooling layer, which are connected in sequence; the input of the convolution network module and the input of the maximum pooling layer are shared, and the characteristic diagram output by the convolution network module and the characteristic diagram output by the maximum pooling layer are fused in a cascading mode. Referring to fig. 4, a structure of a transition module in this embodiment is shown, and fig. 4 is a schematic structural diagram of a transition module according to an embodiment of the present invention. In this embodiment, the transition module is represented by a tran module, and the MP layer is a max pooling layer (Maxpool, abbreviated MP, meaning max pooling). Further, the step size of the MP layer may be selected to be 2. In this embodiment, the introduced MP layer can perform dimension reduction on the feature map with a larger receptive field; the used parameters are less, so that the calculated amount is not increased too much, the possibility of overfitting can be weakened, and the generalization capability of the network model is improved; and the original CBL module is combined, so that the characteristic diagram can be viewed as being subjected to dimension reduction from different receptive fields, and more information can be reserved.
For the second embodiment, optionally, the number of the convolution network modules included in the transition module is two or three, and a serial connection manner is adopted between each convolution network module. Compared with the method using one convolution network module, the method using two or three convolution network modules connected in series can increase the complexity of the model and fully extract the features.
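The two transition module variants above can be sketched as follows (a numpy illustration under assumed shapes, not the patent's code): the first variant downsamples with a stride-2 "convolution" alone, while the second runs a CBL path and a max pooling path over the shared input and concatenates the two stride-2 outputs, so the feature map is reduced from two different receptive fields.

```python
import numpy as np

def strided_cbl(x):
    # stand-in for a stride-2 CBL convolution (first transition module variant)
    return x[:, ::2, ::2]

def maxpool_2x2(x):
    # MP layer: 2x2 max pooling with stride 2
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def tran_module(x):
    # second variant: both branches share the input; their stride-2 outputs
    # are fused by concatenation, retaining information from both paths
    return np.concatenate([strided_cbl(x), maxpool_2x2(x)], axis=0)

x = np.arange(8 * 52 * 52, dtype=float).reshape(8, 52, 52)
print(strided_cbl(x).shape)  # (8, 26, 26)
print(tran_module(x).shape)  # (16, 26, 26)
```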
Feature fusion is performed on the x feature maps by using the FPN (feature pyramid network) of the license plate recognition network to obtain a prediction result corresponding to the feature map of each scale.
Referring to fig. 3, the rest of the network, other than the backbone network in the dense connection form, the classification network and the non-maximum suppression module, is the FPN (Feature Pyramid Network). The FPN network includes x prediction branches Y1 to Yx with sequentially increasing scales, wherein the scales of the prediction branches Y1 to Yx correspond one-to-one with the scales of the x feature maps.
Referring to fig. 3, the scales of the 4 prediction branches respectively correspond to the scales of the feature maps respectively output by the 4 dense connection modules along the input reverse direction one by one.
The FPN network performs feature fusion on the x feature maps of different scales. Each prediction branch Yi obtains the feature map of the corresponding scale from the x feature maps as its feature map to be fused, Fi; where i = 2, 3, ..., x.

The feature map output by the convolutional network module group of the prediction branch Yi-1 is subjected to convolution and up-sampling to obtain the other feature map to be fused, Fi-1.

The feature maps to be fused Fi and Fi-1 are then fused by cascading (channel concatenation).
The convolutional network module group comprises k convolutional network modules, where k is a natural number; each prediction branch comprises one convolutional network module group, and in the prediction branch Yi the convolutional network module group is arranged after the cascade fusion processing of that branch.
As can be appreciated with reference to fig. 3, the prediction branch Y1 directly acquires the feature map of the corresponding scale, namely the feature map output by the first dense connection module along the reverse direction of the input, and performs convolution processing on it through a convolutional network module group (represented by CBL×k, where CBL denotes the convolution-BN-Leaky ReLU module).
Each prediction branch Yi comprises a convolutional network module group and an up-sampling module; the prediction branch Yi obtains the feature map of the corresponding scale from the x feature maps and performs cascade fusion on it with the up-sampled feature map from the prediction branch Yi-1; where i is a natural number of 2 or more and x or less.
In particular, starting from the prediction branch Y2, each prediction branch Yi obtains feature maps from two sources for feature fusion. On one hand, it obtains the feature map of the corresponding scale from the x feature maps as its feature map to be fused, Fi; for the prediction branch Y2, this means obtaining the feature map output by the second dense connection module along the reverse direction of the input as the feature map to be fused F2. On the other hand, the feature map output by the convolutional network module group of the adjacent smaller-scale prediction branch Yi-1 is subjected to convolution and up-sampling to obtain the feature map to be fused Fi-1; for the prediction branch Y2, the feature map output by the convolutional network module group CBL×k of the prediction branch Y1 is processed by convolution (CBL module) and up-sampling (US module, where US stands for up-sampling) to obtain the feature map to be fused F1.

The prediction branch Yi performs cascade fusion on the feature maps to be fused Fi and Fi-1; for the prediction branch Y2, the feature maps F2 and F1 are fused by cascading. As can be appreciated with reference to fig. 3, the cascade-fused feature map of the prediction branch Y2 is processed by a convolutional network module group CBL×k; the output feature map is used, on one hand, for the subsequent target prediction of this branch and, on the other hand, is convolved and up-sampled for the feature cascade fusion of the prediction branch Y3.

The feature fusion process of the remaining prediction branches is similar to that of the prediction branch Y2 and is not described further here.
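The cascade fusion just described (convolve the smaller-scale branch's map, up-sample it, concatenate with the current branch's map) can be sketched in NumPy; the shapes and channel counts are illustrative assumptions, and the learned CBL convolutions are omitted:

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour 2x up-sampling of a (C, H, W) feature map (the US module)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Illustrative shapes: F_prev comes from branch Y_{i-1} after its convolutional
# module group; F_i is the corresponding-scale map obtained by branch Y_i.
F_prev = np.random.rand(128, 13, 13)   # from Y_{i-1}, before up-sampling
F_i    = np.random.rand(256, 26, 26)   # feature map to be fused for Y_i

# Cascade (channel-concatenation) fusion: spatial sizes match after up-sampling.
fused = np.concatenate([F_i, upsample_2x(F_prev)], axis=0)
print(fused.shape)                     # (384, 26, 26)
```

Cascading keeps both inputs' channels intact (384 = 256 + 128), unlike element-wise addition, which is why the text describes it as retaining more information.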
In this embodiment, feature fusion combines lateral connections with a top-down pathway, in which the feature map of a smaller-scale prediction branch is processed and passed down to the adjacent larger-scale prediction branch, delivering its features downward.

In an embodiment of feature fusion, deep-layer and shallow-layer network features are added and then up-sampled together; after the addition, the feature map is further extracted through a convolutional layer.
And obtaining a detection result of the target license plate image based on the prediction result, wherein the detection result comprises license plate characters of the target license plate.
In this module, the processing can be divided into two steps:
and step S1, processing the prediction result through a classification network and a non-maximum value inhibition module to obtain the target position and the type in the target license plate image.
The targets in the embodiment of the invention include the license plate and rectangular objects similar to the license plate in shape, such as vehicle windows and rearview mirrors. For each target, the detection result takes the form of a vector, including the position of the prediction box, the confidence that a target is present in the prediction box, and the category of the target in the prediction box. The position of the prediction box represents the position of the target in the target license plate image; specifically, the position of each prediction box is represented by four values bx, by, bw and bh: bx and by represent the position of the center point of the prediction box, and bw and bh represent its width and height. Correspondingly, the categories in the target license plate image include license plate, vehicle window, rearview mirror, and the like.
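The patent does not spell out how bx, by, bw and bh are computed from the raw network outputs; a common YOLOv3-style decoding, offered here only as an assumed illustration (grid offsets, anchors and stride are made-up parameters), is:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """YOLOv3-style decoding of raw outputs (tx, ty, tw, th) for the grid
    cell at offset (cx, cy) with anchor size (pw, ph); returns (bx, by, bw, bh)
    in pixels. This mirrors the common YOLOv3 formulation, which the patent
    does not state explicitly, so treat the formulas as an assumption."""
    bx = (sigmoid(tx) + cx) * stride   # center x of the prediction box
    by = (sigmoid(ty) + cy) * stride   # center y of the prediction box
    bw = pw * np.exp(tw)               # box width
    bh = ph * np.exp(th)               # box height
    return bx, by, bw, bh

# Zero raw outputs land the box at the cell center with the anchor's size.
bx, by, bw, bh = decode_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=6, pw=40, ph=14, stride=32)
print(bx, by, bw, bh)                  # 208.0 208.0 40.0 14.0
```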
Optionally, the classification network includes a SoftMax classifier in order to implement mutually exclusive classification of multiple classes. The classification network can also classify using logistic regression to achieve multiple independent two classifications.
The non-maximum suppression module is configured to perform NMS (non-maximum suppression) processing, which removes duplicate detection boxes predicted for the same target by excluding the boxes with relatively low confidence.
For the processing procedure of the classification network and the non-maximum suppression module, please refer to the related prior art, which is not described herein.
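As a reference for the behavior described above, a minimal NumPy sketch of standard greedy, IoU-based NMS (the boxes, scores and threshold are illustrative):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one (x1, y1, x2, y2) box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-confidence box, drop lower-confidence boxes that
    overlap it beyond iou_thresh, repeat; returns kept indices."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 30], [12, 12, 62, 32], [100, 40, 150, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # [0, 2]: the near-duplicate of box 0 is suppressed
```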
In fig. 3, feature maps of four scales, 13×13, 26×26, 52×52 and 104×104, are output by the 4 prediction branches. The smallest 13×13 feature map, having the largest receptive field, is suitable for detecting larger targets; the medium 26×26 feature map, with its medium receptive field, is suitable for detecting medium-sized targets; the larger 52×52 feature map, with a smaller receptive field, is suitable for detecting smaller targets; and the largest 104×104 feature map, with the smallest receptive field, is suitable for detecting the smallest targets. The embodiment of the invention thus divides the image more finely, and the prediction is more targeted at objects of small size, i.e., license plates that occupy a small portion of the image.
And step S2, aiming at the target with the category as the license plate, recognizing the characters of the license plate based on the position of the license plate.
An alternative embodiment may include:
step (1): acquiring the license plate position of a target license plate image;
step (2): carrying out license plate character segmentation on the obtained target license plate image;
and (3): and performing character recognition on each segmented license plate character one by one, and generating a detection result.
By identifying the characters of the target license plate image, the characters of the license plate or the type of the license plate, such as regional information, can be obtained. It is understood that the above step S2 can also be implemented by using other prior arts, and in the embodiment of the present invention, the network corresponding to the step S2 is named as a character recognition network, and particularly refer to fig. 3.
Hereinafter, the training process of the license plate recognition network is briefly introduced.
The license plate recognition network is obtained by training according to the sample license plate image and the license plate characters corresponding to the sample license plate image, and a person skilled in the art can understand that before network training, a license plate recognition network structure as shown in fig. 3 needs to be built. The network training process can be divided into the following steps:
step 1, obtaining a plurality of sample license plate images and license plate characters corresponding to the sample license plate images. The license plate characters can comprise Chinese characters, letters, numbers and the like. It is understood that the color of the license plate may also be defined.
In the process, license plate characters corresponding to each sample license plate image are known, and the license plate characters corresponding to each sample license plate image can be determined in the following manner: by manual recognition, or by other image recognition tools, and the like. And then, the sample license plate image needs to be marked, an artificial marking mode can be adopted, and the non-artificial marking can be carried out by using other artificial intelligence methods.
And 2, training the constructed network by using the license plate images of all samples and the license plate characters corresponding to the license plate images of all samples to obtain the trained license plate recognition network. Specifically, the method comprises the following steps:
(a) taking the license plate characters corresponding to each sample license plate image as a true value corresponding to the sample license plate image, and training each sample license plate image and the corresponding true value through a built network to obtain a training result of each sample license plate image;
(b) comparing the training result of each sample license plate image with the true value corresponding to the sample license plate image to obtain an output result corresponding to the sample license plate image;
(c) calculating a loss value of the network according to the output result corresponding to each sample license plate image;
(d) adjusting the parameters of the network according to the loss value, and repeating steps (a)-(c) until the loss value of the network reaches a convergence condition, i.e., the loss value reaches its minimum; at this point the training result of each sample license plate image is consistent with the true value corresponding to that sample license plate image, the training of the network is completed, and the trained license plate recognition network is obtained.
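The loop (a)-(d) is the standard supervised training cycle. As a toy illustration, with a one-parameter model standing in for the license plate recognition network (everything below is an assumption for demonstration, not the patent's actual network or loss):

```python
import numpy as np

# Toy stand-in for the network: one trainable weight w; samples x with
# ground-truth labels ("true values") y = 3x.
rng = np.random.default_rng(0)
x = rng.random(32)
y_true = 3.0 * x
w = 0.0                               # network parameter, to be adjusted

for step in range(200):
    y_pred = w * x                    # (a) run the samples through the network
    err = y_pred - y_true             # (b) compare training result with true value
    loss = np.mean(err ** 2)          # (c) compute the loss value
    w -= 0.5 * np.mean(2 * err * x)   # (d) adjust parameters from the loss gradient
    if loss < 1e-10:                  # convergence condition: loss (near) minimal
        break

print(round(w, 4))                    # approaches 3.0 once training has converged
```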
In addition, network training requires data in VOC or COCO format, while the labeled data is stored in text documents; a Python script is therefore required to convert the data set annotation format.
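The patent does not include that script; a minimal sketch of one direction of such a conversion (VOC-style pixel-corner boxes to normalized YOLO label lines; the exact field layout is an assumption):

```python
def voc_to_yolo_line(cls_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert one VOC-style box (pixel corners) into a YOLO label line:
    'class x_center y_center width height', all normalized to [0, 1]."""
    cx = (xmin + xmax) / 2.0 / img_w
    cy = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# e.g. a license plate box in a 416x416 image, class id 0 for "license plate"
print(voc_to_yolo_line(0, 100, 200, 260, 240, 416, 416))
```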
In this embodiment, the license plate recognition network is formed after the knowledge distillation guidance network is restored by layer pruning and channel pruning, and includes:
and performing layer pruning on a dense connection module of the backbone network in the dense connection form in the network obtained by increasing the extraction scale of the characteristic diagram on the basis of the YOLOv3 network to obtain the YOLOv3-1 network.
Usually, in the simplification of the network obtained by adopting a backbone network in dense connection form on the basis of the YOLOv3 network and increasing the extraction scale of the feature map, channel pruning is performed directly; in experiments, however, it proved difficult to achieve a rapid speed increase through channel pruning alone. Therefore, a layer pruning step is added before channel pruning.
Specifically, the layer pruning process is as follows: in the network obtained by adopting the backbone network in dense connection form on the basis of the YOLOv3 network and increasing the extraction scale of the feature map, the m dense connection units contained in each dense connection module are pruned to p dense connection units, where m and p are both natural numbers and p is less than m. Preferably, p = m/2. Through layer pruning, the structure of this network is simplified while its parameter quantity and computation are reduced by nearly half, and the speed increases significantly.
And carrying out sparse training on the YOLOv3-1 network to obtain the YOLOv3-2 network with the BN layer scaling coefficients in sparse distribution.
The YOLOv3-1 network is sparsely trained: a scaling factor γ is introduced for each channel of the YOLOv3-1 network, so that the output magnitude of each channel is controlled by its scaling factor. To push most of the scaling factors γ close to 0, sparse regularization on γ must be added during training. The loss function for sparse training is:

L = \sum_{(x,y)} l(f(x, W), y) + \lambda \sum_{\gamma \in \Gamma} g(\gamma)

where the first term \sum_{(x,y)} l(f(x, W), y) is the original loss function of the network, (x, y) denotes the input data and target data of the training process, and W denotes the trainable weights; g(γ) is the penalty function for sparse training of the scaling factors, and λ is its weight. Since the scaling factors γ are to be made sparse, the L1 norm, g(γ) = |γ|, is selected as the penalty function. Because the appropriate proportion of the second term is unknown in advance, the parameter λ is introduced for adjustment.
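The sparse-training objective amounts to adding an L1 penalty on the BN scaling factors to the network's base loss; a one-function NumPy sketch (the loss value, gammas and λ below are illustrative numbers, not trained quantities):

```python
import numpy as np

def sparse_training_loss(base_loss, bn_gammas, lam):
    """Sparse-training objective: the network's original loss plus an L1
    penalty g(gamma) = |gamma| on every BN scaling factor, weighted by
    lambda; this pushes most gammas toward zero during training."""
    return base_loss + lam * np.sum(np.abs(bn_gammas))

gammas = np.array([0.9, -0.05, 0.01, 0.5, -0.002])
total = sparse_training_loss(1.25, gammas, lam=0.01)
print(total)   # base loss 1.25 plus 0.01 * (sum of |gamma| = 1.462)
```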
Because the value of λ is related to the convergence rate of the sparse training, and the application scenario of the embodiment of the invention involves only a small number of target categories to be detected, a relatively large λ can be chosen without slowing the convergence of the sparse training, and convergence can be further accelerated by increasing the model learning rate. However, since overly aggressive parameter choices would cost network model accuracy, the combination of a learning rate of 0.1x and a λ of 1x is finally determined as the preferred parameter combination for sparse training.
Referring to fig. 5, fig. 5 is a graph illustrating the weight distributions for sparse-training parameter combinations of the license plate recognition network according to an embodiment of the present invention, where fig. 5(a) shows the weight deviation and fig. 5(b) the weight overlap. As shown in fig. 5, the combination of a smaller learning rate and a larger weight preferred by the embodiment of the present invention yields a more favorable weight distribution after sparse training and higher network model accuracy.
And performing channel pruning on the YOLOv3-2 network to obtain a YOLOv3-3 network.
After the sparsification training, a network model whose BN layer scaling factors are sparsely distributed is obtained, which makes it convenient to determine which channels are less important. These less important channels can then be pruned by removing their incoming and outgoing connections and the corresponding weights.

Performing channel pruning on the network (pruning a channel essentially means removing all of its incoming and outgoing connections and the corresponding weights) can directly yield a lightweight network without using any special sparse computation packages. In the channel pruning process, the scaling factors act as proxies for channel selection: because they are jointly optimized with the network weights, the network can automatically identify insignificant channels, which can be safely removed without greatly impacting generalization performance.
Specifically, for the YOLOv3-2 network, a channel pruning proportion is set in all channels of all layers, then all BN layer scaling factors in the YOLOv3-2 network are arranged in an ascending order, and channels corresponding to the preceding BN layer scaling factors are pruned according to the channel pruning proportion. Through channel pruning, redundant channels can be deleted, the calculated amount is reduced, and the target detection speed is increased.
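The ascending-sort-and-threshold selection can be sketched as follows (NumPy; the gamma values and ratio are illustrative, whereas in the real procedure the factors come from the trained BN layers of the YOLOv3-2 network):

```python
import numpy as np

def channels_to_prune(bn_gammas, prune_ratio):
    """Global channel selection for pruning: sort all BN scaling factors
    in ascending order of magnitude and mark the smallest `prune_ratio`
    fraction of channels for removal. Returns a boolean mask (True = prune)."""
    n_prune = int(len(bn_gammas) * prune_ratio)
    order = np.argsort(np.abs(bn_gammas))   # least important channels first
    mask = np.zeros(len(bn_gammas), dtype=bool)
    mask[order[:n_prune]] = True
    return mask

gammas = np.array([0.001, 0.8, 0.002, 0.6, 0.0005, 0.4, 0.003, 0.5])
mask = channels_to_prune(gammas, prune_ratio=0.5)
print(mask.sum(), gammas[~mask])   # 4 channels pruned; the large-gamma channels survive
```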
However, after channel pruning, accuracy may drop because of the reduced parameter count. Analyzing the influence of different pruning ratios on network accuracy shows that if the pruning ratio is too large, the network volume is compressed more but the accuracy also drops sharply; therefore, the network compression ratio and the accuracy of the compressed network need to be balanced.

As a preferred mode, the channel pruning ratio for the YOLOv3-2 network is 50%. A 50% channel pruning ratio was chosen for the following reasons:
since the influence of the less numerous types of images to be detected is greater during the network compression process, which directly affects the mAP, it is considered from the aspect of the data set and the network compression ratio. For the processing of the data set, the embodiment of the present invention selects the category with a smaller number of combinations to balance the number of different categories, or directly adopts the data set with a more balanced category distribution, which is consistent with the application scenario of the embodiment of the present invention. In addition, the compression ratio is controlled, and the prediction accuracy of the types with small quantity is ensured not to be reduced too much. According to the mAP simulation result, 50% -60% of compression ratio is the turning point of precision change, so that 50% of compression ratio can be initially selected.
In addition to analyzing the influence of compression on accuracy, the relationship between target detection time and model compression ratio is also considered. The running times of network models pruned at different ratios were simulated on different platforms (a Tesla V100 server and a Jetson TX2 edge device). According to the simulation results, different network compression ratios have very little influence on the network inference time but a large influence on the NMS (non-maximum suppression) time: the detection speed increases with compression until the ratio reaches 50%, but slows down once the ratio exceeds 50%. Thus, the finally selected channel pruning ratio is 50%.
Knowledge distillation is carried out on the YOLOv3-3 network, and a license plate recognition network is generated based on the obtained network.
Through pruning, a more compact Yolov3-3 network model is obtained, and then fine tuning is needed to recover the precision. The strategy of knowledge distillation is introduced here.
Specifically, knowledge distillation is introduced into a YOLOv3-3 network, the YOLOv3 network is used as a teacher network, the YOLOv3-3 network is used as a student network, and the teacher network guides the student network to carry out precision recovery and adjustment so as to obtain the license plate recognition network.
In a preferred embodiment, the output before the Softmax layer of the YOLOv3 network is divided by the temperature coefficient to soften the predicted values finally output by the teacher network; the student network then uses the softened predicted values as labels to assist in training the YOLOv3-3 network, until the accuracy of the YOLOv3-3 network is comparable to that of the YOLOv3 network. The temperature coefficient is a preset value and does not change during network training.

The temperature parameter T is introduced because the classification results of a trained, highly accurate network are substantially consistent with the real labels of the input data. For example, with three classes, if the known training label is [1,0,0], the prediction result may be [0.95,0.02,0.03], which is very close to the true label. Therefore, for the student network, training assisted by the teacher network's classification results would differ little from training directly on the data. The temperature parameter T controls the softening degree of the predicted label, i.e., it increases the deviation of the teacher network's classification result from a hard label.
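The softening by the temperature coefficient can be illustrated with a temperature-scaled softmax (NumPy; the logits below are made up for demonstration):

```python
import numpy as np

def soften(logits, T):
    """Temperature-scaled softmax: dividing the pre-softmax outputs by T > 1
    flattens (softens) the teacher's prediction before it is used as a
    training label for the student."""
    z = logits / T
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([6.0, 1.0, 0.5])        # teacher's raw outputs for three classes
print(np.round(soften(logits, T=1), 3))   # nearly a hard label
print(np.round(soften(logits, T=5), 3))   # softened: the other classes gain probability
```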
Comparing the fine tuning process added with the knowledge distillation strategy with the general fine tuning process, the network precision recovered by the knowledge distillation adjustment is higher than that of the original YOLOv3 network.
(3) The sending module 103:
in this embodiment, the sending module 103 may send the detection result to other devices; for example, it may send the detection result to a terminal display device for display.
For example, the terminal display device may be a display of a parking lot entrance, a display of a road management system, or the like. The terminal display devices can directly display the target license plate image marked with the detection result so as to directly observe the detection result. Of course, it is reasonable that the detection result may be displayed in other text forms, or be supplemented with a voice prompt, etc.
When the license plate recognition system provided by the invention detects a target license plate image, the residual modules in the backbone network of the prior-art YOLOv3 network are replaced by dense connection modules. When extracting the features of the target license plate image, the dense connection modules change the original parallel feature fusion mode into a serial one, and the feature maps obtained earlier serve as the input of each later feature map layer, so that feature maps carrying more information are obtained, feature transfer is strengthened, and the detection precision is improved; therefore, the detection precision remains high even under complex conditions such as wind and snow or insufficient shooting light.

The fine-grained feature extraction scale is increased, so that smaller objects can be detected and the detection precision of small targets in the target license plate image is improved; therefore, the target license plate can be effectively detected even when the shooting distance is long and the license plate occupies only a small area of the obtained target license plate image.

On the basis of the network obtained by adopting the backbone network in dense connection form and increasing the extraction scale of the feature map on the basis of the YOLOv3 network, the license plate recognition network is obtained by performing layer pruning, sparse training, channel pruning and knowledge distillation on the original YOLOv3 and selecting optimized processing parameters at each step. Because the volume of the network is greatly reduced and most redundant computation is eliminated, the target detection speed based on this network is greatly improved while the detection precision is maintained.
Particularly, when the method is applied to scenes with few types to be detected, the detection precision can be ensured, the detection speed can be greatly improved, and the license plate can be accurately identified in a long distance or a complex environment.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (10)

1. A license plate recognition system, comprising:
the image acquisition module is used for acquiring a target license plate image;
the image recognition module is used for extracting the features of the target license plate image by utilizing a backbone network in a dense connection form of a license plate recognition network to obtain x feature maps; the scales of the x feature maps are sequentially increased; x is a natural number of 4 or more; performing feature fusion on the x feature maps by using an FPN (Feature Pyramid Network) of the license plate recognition network to obtain a prediction result corresponding to the feature map of each scale; obtaining a detection result of the target license plate image based on the prediction result, wherein the detection result comprises license plate characters of the target license plate;
a sending module, configured to send the detection result;
the license plate recognition network comprises the trunk network in the dense connection form and the FPN network, and is formed by adopting layer pruning, channel pruning and knowledge distillation guide network recovery on the basis of a YOLOv3 network; and the license plate recognition network is obtained by training according to the sample license plate image and the license plate characters corresponding to the sample license plate image.
2. The license plate recognition system of claim 1, wherein the backbone network in a densely-connected form comprises a plurality of densely-connected modules connected in series at intervals and a transition module; the number of the dense connection modules is y; the dense connection module comprises a convolution network module and a dense connection unit group which are connected in series; the convolution network module comprises a convolution layer, a BN layer and a Leaky relu layer which are connected in series; the dense connection unit group comprises m dense connection units; wherein y and m are natural numbers of 4 or more; y is equal to or greater than x.
3. The license plate recognition system of claim 2, wherein each densely connected unit comprises a plurality of the convolutional network modules connected in a densely connected manner, and a feature map output by the plurality of convolutional network modules is fused in a cascading manner.
4. The license plate recognition system of claim 1, wherein the obtaining x feature maps comprises:
and obtaining the characteristic graphs which are output by the x dense connection modules along the input reverse direction and have sequentially increased sizes.
5. The license plate recognition system of claim 2, wherein the transition module is the convolutional network module.
6. The license plate recognition system of claim 2, wherein the transition module comprises a plurality of the convolutional network modules and a max-pooling layer connected in series; the input of the convolution network module and the input of the maximum pooling layer are shared, and the feature graph output by the convolution network module and the feature graph output by the maximum pooling layer are fused in a cascading mode.
7. The license plate recognition system of claim 1, wherein the FPN network comprises x prediction branches Y1~Yx of successively increasing scale; wherein the scales of the prediction branches Y1~Yx correspond one to one to the scales of the x feature maps;

each prediction branch Yi comprises a convolutional network module group and an up-sampling module; the prediction branch Yi obtains the feature map of the corresponding scale from the x feature maps and performs cascade fusion on it with the up-sampled feature map from the prediction branch Yi-1; wherein i is a natural number of 2 or more and x or less.
8. The license plate recognition system of claim 2, wherein the license plate recognition network is formed after layer pruning and channel pruning and knowledge distillation guided network recovery, comprising:
on the basis of a YOLOv3 network, in a network obtained by adopting the backbone network in the dense connection form and increasing the extraction scale of the feature map, layer pruning is carried out on the dense connection modules of the backbone network in the dense connection form to obtain a YOLOv3-1 network;
carrying out sparse training on the YOLOv3-1 network to obtain a YOLOv3-2 network with BN layer scaling coefficients in sparse distribution;
performing channel pruning on the YOLOv3-2 network to obtain a YOLOv3-3 network;
and carrying out knowledge distillation on the YOLOv3-3 network to obtain the license plate recognition network.
9. The license plate recognition system of claim 8, wherein in a network obtained by increasing an extraction scale of the feature map by using the backbone network in the dense connection form on the basis of the YOLOv3 network, the performing layer pruning on the dense connection module of the backbone network in the dense connection form comprises:
pruning the number of the dense connection units contained in the dense connection module from m to p; wherein m and p are both natural numbers, and p is less than m.
10. The license plate recognition system of claim 1, wherein the image capture module comprises a camera, a video camera, a cell phone, or an on-road monitoring device.
CN202011147535.3A 2020-10-23 2020-10-23 License plate recognition system Withdrawn CN112308066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011147535.3A CN112308066A (en) 2020-10-23 2020-10-23 License plate recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011147535.3A CN112308066A (en) 2020-10-23 2020-10-23 License plate recognition system

Publications (1)

Publication Number Publication Date
CN112308066A true CN112308066A (en) 2021-02-02

Family

ID=74327423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011147535.3A Withdrawn CN112308066A (en) 2020-10-23 2020-10-23 License plate recognition system

Country Status (1)

Country Link
CN (1) CN112308066A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836751A (en) * 2021-02-03 2021-05-25 歌尔股份有限公司 Target detection method and device
CN115861997A (en) * 2023-02-27 2023-03-28 松立控股集团股份有限公司 License plate detection and identification method for guiding knowledge distillation by key foreground features


Similar Documents

Publication Publication Date Title
WO2022083784A1 (en) Road detection method based on internet of vehicles
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN107886073B (en) Fine-grained vehicle multi-attribute identification method based on convolutional neural network
CN111368886B (en) Sample screening-based label-free vehicle picture classification method
CN112464910A (en) Traffic sign identification method based on YOLO v4-tiny
EP3690740B1 (en) Method for optimizing hyperparameters of auto-labeling device which auto-labels training images for use in deep learning network to analyze images with high precision, and optimizing device using the same
EP3690741A2 (en) Method for automatically evaluating labeling reliability of training images for use in deep learning network to analyze images, and reliability-evaluating device using the same
CN110147707B (en) High-precision vehicle identification method and system
CN112101117A (en) Expressway congestion identification model construction method and device and identification method
CN112364719A (en) Method for rapidly detecting remote sensing image target
CN114677507A (en) Street view image segmentation method and system based on bidirectional attention network
CN112837315A (en) Transmission line insulator defect detection method based on deep learning
CN112381763A (en) Surface defect detection method
CN113221852B (en) Target identification method and device
CN113723377A (en) Traffic sign detection method based on LD-SSD network
CN112308066A (en) License plate recognition system
CN112288701A (en) Intelligent traffic image detection method
CN112364864A (en) License plate recognition method and device, electronic equipment and storage medium
CN112288700A (en) Rail defect detection method
CN113255678A (en) Road crack automatic identification method based on semantic segmentation
CN113011308A (en) Pedestrian detection method introducing attention mechanism
CN117152513A (en) Vehicle boundary positioning method for night scene
CN112395953A (en) Road surface foreign matter detection system
CN112084897A (en) Rapid traffic large-scene vehicle target detection method of GS-SSD
CN114821462A (en) Target detection method based on multi-branch parallel hybrid hole coding neural network

Legal Events

Date Code Title Description
PB01 Publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210202
