WO2021218164A1 - Double-row license plate recognition method, apparatus, device, and computer-readable storage medium - Google Patents

Double-row license plate recognition method, apparatus, device, and computer-readable storage medium

Info

Publication number
WO2021218164A1
WO2021218164A1 (PCT/CN2020/135310; CN2020135310W)
Authority
WO
WIPO (PCT)
Prior art keywords
license plate
image
feature
feature extraction
line
Prior art date
Application number
PCT/CN2020/135310
Other languages
English (en)
French (fr)
Inventor
朱文和
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021218164A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1475Inclination or skew detection or correction of characters or of image to be recognised
    • G06V30/1478Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates

Definitions

  • This application relates to the field of image recognition technology, and in particular to a method, device, equipment, and computer-readable storage medium for dual-line license plate recognition.
  • License plate recognition is one of the important components of modern intelligent transportation systems, and it is widely used, such as unattended parking lots, restricted area security control, traffic law enforcement, congestion pricing, automatic charging, etc.
  • license plates include single-row license plates and multi-row license plates.
  • single-row license plates are used for personal cars, police cars, military vehicles, embassy cars, trucks, buses, and so on;
  • double-row license plates are used for buses, passenger cars, trucks, armed-police vehicles, motorcycles, and so on.
  • a two-line license plate recognition method includes: acquiring the license plate image of the double-row license plate to be recognized; performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix; performing feature reorganization on the license plate feature matrix to obtain the target license plate feature; and
  • inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
  • a dual-line license plate recognition device includes:
  • the image acquisition module is used to acquire the license plate image of the double-row license plate to be recognized
  • the feature extraction module is used to perform feature extraction on the license plate image using a preset feature extraction model to obtain a license plate feature matrix
  • the feature reorganization module is used to perform feature reorganization on the license plate feature matrix to obtain the target license plate feature;
  • the license plate recognition module is used to input the characteristics of the target license plate into a two-way recurrent neural network with a preset depth to obtain the license plate recognition result of the dual-line license plate to be recognized.
  • a dual-line license plate recognition device, which includes a memory, a processor, and a dual-line license plate recognition program stored in the memory and executable by the processor, wherein when the dual-line license plate recognition program is executed by the processor, the following steps are implemented: acquiring the license plate image of the double-row license plate to be recognized; performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix; performing feature reorganization on the license plate feature matrix to obtain the target license plate feature; and inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
  • a computer-readable storage medium on which a dual-line license plate recognition program is stored, wherein when the dual-line license plate recognition program is executed by a processor, the same steps are implemented.
  • This application can improve the accuracy of the dual-line license plate recognition result.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in a solution of an embodiment of the application
  • FIG. 2 is a schematic flowchart of the first embodiment of the dual-line license plate recognition method according to the application;
  • Fig. 3 is a schematic diagram of the functional modules of the first embodiment of the dual-line license plate recognition device of this application.
  • FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solution of the embodiment of the application.
  • the dual-line license plate recognition device involved in the embodiment of the present application may be a PC (personal computer), a notebook computer, a server, or another terminal device with display and processing functions.
  • the dual-line license plate recognition device may include a processor 1001, such as a CPU (Central Processing Unit, central processing unit), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.
  • the communication bus 1002 is used to realize the connection and communication between these components;
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wireless-Fidelity, Wi-Fi, interface); the memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory such as disk storage, and the memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
  • the memory 1005 as a computer storage medium in FIG. 1 may include an operating system, a network communication module, and a dual-line license plate recognition program.
  • the network communication module can be used to connect to the server and perform data communication with the server; and the processor 1001 can be used to call the dual-line license plate recognition program stored in the memory 1005 and perform the following operations: acquiring the license plate image of the double-row license plate to be recognized; performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix; performing feature reorganization on the license plate feature matrix to obtain the target license plate feature; and
  • inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
  • the preset feature extraction model includes an input layer, a convolutional layer, a pooling layer, and a normalization layer; wherein,
  • the input layer is used to receive the license plate image of the double-row license plate to be recognized
  • the convolution layer is used to extract the image feature matrix of the license plate image according to a convolution kernel
  • the pooling layer is used to perform pooling processing on the output of the convolutional layer
  • the normalization layer is used to normalize the output of the convolutional layer.
  • the processor 1001 may call the dual-line license plate recognition program stored in the memory 1005 and also perform the following operations: splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix; and
  • combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
  • processor 1001 may call the dual-line license plate recognition program stored in the memory 1005, and also perform the following operations:
  • the license plate recognition result is processed by the connectionist temporal classification (CTC) algorithm to obtain the license plate information of the double-row license plate to be recognized.
  • the processor 1001 may call the dual-line license plate recognition program stored in the memory 1005 and also perform the following operations: performing color statistics on the license plate image and determining the license plate number color according to the statistical result; determining the license plate number pixel coordinates according to the license plate number color, and determining the valid license plate number region according to those pixel coordinates; cropping the license plate image according to the valid region to obtain a license plate number image; and
  • using the preset feature extraction model to perform feature extraction on the license plate number image to obtain the license plate feature matrix.
  • the processor 1001 may call the dual-line license plate recognition program stored in the memory 1005 and also perform the following operations: performing image correction processing on the license plate image to obtain a corrected license plate image; and
  • using the preset feature extraction model to perform feature extraction on the corrected license plate image to obtain the license plate feature matrix.
  • the processor 1001 may call the dual-line license plate recognition program stored in the memory 1005 and also perform the following operations: performing a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image; performing edge detection on the transformed license plate image with a preset edge detection algorithm to obtain the bounding boxes of the license plate characters; establishing the equations corresponding to those bounding boxes by straight-line fitting and obtaining the character inclination angle from the equations; and performing rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
  • This application provides a method for recognizing a double-line license plate.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for recognizing a dual-line license plate according to this application.
  • the dual-line license plate recognition method includes:
  • Step S10 acquiring the license plate image of the double-row license plate to be recognized
  • the dual-line license plate recognition method is implemented by a dual-line license plate recognition device.
  • the dual-line license plate recognition device may be a PC, a notebook computer, a server, etc.
  • the dual-line license plate recognition device is described by taking a server as an example.
  • the license plate image of the double-row license plate to be recognized is first acquired.
  • Step S20 using a preset feature extraction model to perform feature extraction on the license plate image to obtain a license plate feature matrix
  • the preset feature extraction model is used to perform feature extraction on the license plate image to obtain the license plate feature matrix.
  • the type of the preset feature extraction model can be selected as a convolutional neural network model, and the preset feature extraction model includes an input layer, a convolution layer, a pooling layer, and a normalization layer; wherein,
  • the input layer is used to receive the license plate image of the double-row license plate to be recognized
  • the convolution layer is used to extract the image feature matrix of the license plate image according to a convolution kernel
  • the pooling layer is used to perform pooling processing on the output of the convolutional layer
  • the normalization layer is used to normalize the output of the convolutional layer.
  • specifically, the preset feature extraction model (i.e., the convolutional neural network model) includes 1 input layer (Input), 7 convolutional layers (Conv), 4 pooling layers (Pool), and 2 normalization layers (BatchNorm); the dimensions and configuration parameters of each layer are shown in Table 1 below, where the dimensions denote channels x height x width, and K, S, and P in the configuration denote the convolution kernel size, the stride, and the amount of padding, respectively.
  • the input layer is used to receive the license plate image of the double-row license plate to be recognized
  • the convolutional layer is used to extract the image feature matrix of the license plate image according to the convolution kernel
  • the pooling layer is used to pool the output of the convolutional layer, that is, to extract from each image feature matrix the image feature values that best represent the local features of the image;
  • the normalization layer is used to normalize the output of the convolutional layer.
  • in addition, the convolutional neural network model may also add a zero-padding layer, which uses the zero-padding method to pad the boundary of the feature map with zeros, thereby ensuring that the spatial size of the feature map output by that layer remains unchanged.
  • the dimension of the license plate feature matrix obtained is 512x2x12, which is a matrix with a height of 2.
  • Step S30 Perform feature reorganization on the license plate feature matrix to obtain the target license plate feature
  • the feature reorganization is performed on the license plate feature matrix to obtain the target license plate feature.
  • step S30 includes:
  • Step a31 split the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
  • Step a32 combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
  • the license plate feature matrix is split according to its height to obtain the first feature matrix and the second feature matrix. That is, the 512x2x12 license plate feature matrix obtained above is split by height into two feature matrices of dimension 512x1x12; the feature matrix obtained from the first row of the license plate feature matrix is recorded as the first feature matrix, and the one obtained from the second row is recorded as the second feature matrix. Then, the first feature matrix and the second feature matrix are combined and spliced to obtain the target license plate feature.
  • the first feature matrix and the second feature matrix are recombined horizontally, and the dimension of the reorganized matrix is 512x1x24, that is, the target license plate feature with a height of 1 is obtained by reorganization.
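  • as a shape-level sketch of this reorganization (NumPy and the array names are assumed here purely for illustration; the application does not prescribe a library):

```python
import numpy as np

# Assume `plate_features` is the 512x2x12 feature matrix (channels x height x width)
# produced by the preset feature extraction model.
plate_features = np.random.randn(512, 2, 12).astype(np.float32)

# Step a31: split by height into the first-row and second-row feature matrices (512x1x12 each).
first_row = plate_features[:, 0:1, :]
second_row = plate_features[:, 1:2, :]

# Step a32: recombine the two halves horizontally; the reorganized matrix is 512x1x24,
# i.e. a target license plate feature with height 1.
target_feature = np.concatenate([first_row, second_row], axis=2)
print(target_feature.shape)  # (512, 1, 24)
```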
  • step S40 the target license plate feature is input into a predetermined depth bidirectional recurrent neural network to obtain the license plate recognition result of the dual-row license plate to be recognized.
  • the target license plate features are input into the preset depth bidirectional recurrent neural network, and the license plate recognition result of the dual-line license plate to be recognized is obtained.
  • the preset-depth bidirectional recurrent neural network may optionally be a bidirectional long short-term memory network (Bi-directional Long Short-Term Memory, Bi-LSTM). It can be understood that Bi-LSTM can process the feature information from left to right; through context information, the processing result of the current frame is fused with what was previously memorized and forgotten, so that a prediction can be made and passed on to subsequent frames.
  • the double-row license plate is also a character sequence after recombination, with the characteristics of time sequence from left to right, so the two-way recurrent neural network can be used for license plate character recognition.
  • the problem with a one-way recurrent neural network is that only information before time t can be used, but sometimes future information may also need to be used.
  • the two-way RNN model can solve this problem.
  • the two-way RNN maintains two hidden layers at all times, one hidden layer is used to transmit information from left to right, and the other hidden layer is used to record the transmitted information from right to left. Therefore, compared to using a unidirectional recurrent neural network, the embodiment of the present application can improve the accuracy of the recognition result by using a preset depth bidirectional recurrent neural network.
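  • the application does not fix a concrete network implementation; a minimal Bi-LSTM recognizer over the reorganized 24-step feature sequence could be sketched in PyTorch as follows (the hidden size, layer count, and character-set size are assumptions):

```python
import torch
import torch.nn as nn

class BiLSTMRecognizer(nn.Module):
    """Maps a (batch, 24, 512) feature sequence to per-timestep character scores."""
    def __init__(self, feature_dim=512, hidden_dim=256, num_classes=70):  # num_classes assumed
        super().__init__()
        self.bilstm = nn.LSTM(feature_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x):
        out, _ = self.bilstm(x)          # (batch, 24, 2*hidden_dim)
        return self.classifier(out)      # (batch, 24, num_classes)

# The 512x1x24 target license plate feature is read as 24 time steps of 512-dim vectors.
features = torch.randn(1, 512, 1, 24)
sequence = features.squeeze(2).permute(0, 2, 1)   # (1, 24, 512)
logits = BiLSTMRecognizer()(sequence)             # (1, 24, num_classes)
```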
  • This application provides a dual-row license plate recognition method.
  • by acquiring the license plate image of the double-row license plate to be recognized, feature extraction is performed on the license plate image using a preset feature extraction model to obtain a license plate feature matrix; the license plate feature matrix is then reorganized to obtain the target license plate feature; and the target license plate feature is input into the preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
  • in this way, the license plate image of the double-row license plate to be recognized can be input directly as a whole, without segmenting and recognizing each character of the plate; compared with recognizing each license plate character individually as in the prior art, this saves work and improves recognition efficiency.
  • at the same time, in the embodiment of the present application, an error in recognizing the entire license plate caused by a segmentation error on a single character can be avoided; therefore, the embodiment of the present application can improve the accuracy of the double-row license plate recognition result.
  • the two-line license plate recognition method further includes:
  • Step A: process the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain the license plate information of the double-row license plate to be recognized.
  • the CTC (Connectionist Temporal Classification) algorithm processes the license plate recognition result to perform a character alignment operation and obtain the final license plate information.
  • the CTC algorithm can merge repeated elements and remove blanks, and can be used to solve the alignment between input features and output labels.
  • by combining the deep bidirectional recurrent neural network with the CTC algorithm, recognition errors such as missed or duplicated characters in the double-row license plate information can be effectively avoided, thereby further improving the accuracy of the license plate recognition result.
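  • the merge-repeats-then-drop-blanks behaviour of CTC decoding can be illustrated with a greedy best-path sketch (the blank index 0 is an assumption):

```python
def ctc_greedy_decode(per_step_labels, blank=0):
    """Collapse a per-timestep label sequence: merge consecutive repeats, then drop blanks."""
    decoded, previous = [], None
    for label in per_step_labels:
        if label != previous and label != blank:
            decoded.append(label)
        previous = label
    return decoded

# e.g. per-timestep argmax output over a few steps -> final label sequence
print(ctc_greedy_decode([0, 5, 5, 0, 0, 12, 12, 12, 0, 7]))  # [5, 12, 7]
```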
  • before step S20, the method further includes:
  • Step B Perform color statistics on the license plate image, and determine the color of the license plate number according to the statistical result
  • color statistics are computed on the license plate image, and the license plate number color is determined according to the statistical result. Because the license plate image mainly contains two colors, one being the license plate background color and the other the license plate number color, after the color statistics are computed the color corresponding to the highest count in the statistical result is the plate background color, and the color corresponding to the second-highest count is the license plate number color.
  • when computing the color statistics, the RGB (Red Green Blue) value of each pixel of the license plate image can be extracted through a preset image reading function, and the color statistics are then computed from the RGB values of the pixels of the license plate image.
  • optionally, the preset image reading function is the imread() function; reading the license plate image through imread() yields a numpy array (numpy is an open-source scientific computing library for Python) containing the RGB value of each pixel, and the numpy array is then traversed to compute the color statistics and obtain the statistical result.
  • Step C Determine the pixel coordinates of the license plate number according to the color of the license plate number, and determine the effective area of the license plate number according to the pixel coordinates of the license plate number;
  • the pixel coordinates of the license plate number are determined according to the color of the license plate number, and the effective area of the license plate number is determined according to the pixel coordinates of the license plate number. Specifically, the pixel coordinates corresponding to the RGB values of the license plate number color can be obtained, that is, the license plate number pixel coordinates, and then the outermost pixel coordinates are determined based on the license plate number pixel coordinates to form the license plate number valid area.
  • Step D crop the license plate image according to the effective area of the license plate number to obtain the license plate number image
  • the license plate image is cropped according to the effective area of the license plate number to obtain the license plate number image.
  • step S20 includes:
  • Step A21 using a preset feature extraction model to perform feature extraction on the license plate number image to obtain a license plate feature matrix.
  • the preset feature extraction model is used to perform feature extraction on the license plate number image to obtain the license plate feature matrix, and then the subsequent steps are performed.
  • the specific execution process can refer to the above-mentioned embodiment, and will not be repeated here.
  • in this embodiment, color statistics are computed on the license plate image, the valid license plate number region is determined, and the license plate image is cropped accordingly to obtain the license plate number image, on which recognition is then based; in this way the image region to be recognized is reduced, which further improves recognition efficiency. A sketch of this pre-processing path follows.
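```python
import cv2
import numpy as np

def crop_plate_number(image_path):
    """Sketch of Steps B-D, assuming OpenCV/NumPy and exactly matching pixel colours;
    a real photograph would typically need colour quantisation or a tolerance."""
    img = cv2.imread(image_path)        # numpy array of per-pixel values (BGR channel order)
    pixels = img.reshape(-1, 3)

    # Step B: colour statistics; the most frequent colour is the background,
    # the second most frequent is the plate-number colour.
    colours, counts = np.unique(pixels, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1]
    plate_number_colour = colours[order[1]]

    # Step C: pixel coordinates of the plate-number colour define the valid region
    # (the outermost such coordinates form its bounding box).
    mask = np.all(img == plate_number_colour, axis=2)
    ys, xs = np.nonzero(mask)
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()

    # Step D: crop the licence plate image to the valid region.
    return img[top:bottom + 1, left:right + 1]
```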
  • before step S20, the method further includes:
  • Step E Perform image correction processing on the license plate image to obtain a corrected license plate image
  • due to the shooting angle, the license plate number in the license plate image of the double-row license plate to be recognized may be tilted or distorted; therefore, in order to further improve the accuracy of the license plate recognition result, after the license plate image of the double-row license plate to be recognized is obtained, image correction processing may be performed on the license plate image to obtain a corrected license plate image.
  • step E includes:
  • Step E1 Perform a non-linear gray scale transformation on the license plate image through a non-linear index to obtain a transformed license plate image
  • Step E2 Perform edge detection on the transformed license plate image by using a preset edge detection algorithm to obtain an outer frame of license plate characters
  • Step E3 Establish an equation corresponding to the circumscribed frame of the license plate character by straight line fitting, and obtain the inclination angle of the character according to the equation;
  • Step E4 Perform rotation correction on the license plate image based on the inclination angle of the characters to obtain a corrected license plate image.
  • the image correction process is as follows:
  • first, a non-linear gray-scale transformation is applied to the license plate image through a non-linear exponent to obtain the transformed license plate image; the gray-scale transformation stretches the high-gray region of the image so that the gray-level contrast is more pronounced, which facilitates the subsequent recognition processing. Then, edge detection is performed on the transformed license plate image with a preset edge detection algorithm to obtain the bounding boxes of the license plate characters; the preset edge detection algorithm can be the Sobel operator or the Canny algorithm.
  • after the character bounding boxes are obtained, the equations corresponding to the bounding boxes are established by straight-line fitting, and the character inclination angle is obtained from the equations.
  • the inclination angle is computed as follows: because of the shooting conditions, the outer frame of the license plate has a certain curvature, and the long line on which the plate border lies may break into several straight segments with different slopes, so the straight-line fitting yields several equations; each line equation y = ax + b gives an angle value based on the inclination-angle formula tanθ = -b/a, and the character inclination angle is taken as the average of the angle values lying within -10° to 10°.
  • finally, the license plate image is rotated and corrected based on the character inclination angle to obtain the corrected license plate image. A rough sketch of this correction pipeline follows.
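```python
import cv2
import numpy as np

def correct_plate_tilt(img, exponent=1.5):
    """Sketch of Steps E1-E4; the exponent, Canny thresholds and Hough parameters are
    assumptions, and segment angles are taken from the segment endpoints rather than
    from the fitted-line coefficients."""
    # Step E1: non-linear grey-scale transform (a power-law stretch is one possible choice).
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    stretched = np.uint8(255 * np.power(grey, exponent))

    # Step E2: edge detection (Canny here; a Sobel operator would also fit the description).
    edges = cv2.Canny(stretched, 50, 150)

    # Step E3: fit straight line segments and average the angles within -10..10 degrees.
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                               minLineLength=30, maxLineGap=5)
    angles = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
            if -10 <= angle <= 10:
                angles.append(angle)
    tilt = float(np.mean(angles)) if angles else 0.0

    # Step E4: rotate the plate image back by the estimated character tilt angle.
    h, w = img.shape[:2]
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), tilt, 1.0)
    return cv2.warpAffine(img, rotation, (w, h))
```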
  • step S20 includes:
  • Step A22 using a preset feature extraction model to perform feature extraction on the corrected license plate image to obtain a license plate feature matrix.
  • the application also provides a dual-line license plate recognition device.
  • Fig. 3 is a schematic diagram of the functional modules of the first embodiment of the dual-line license plate recognition device of the present application.
  • the dual-line license plate recognition device includes:
  • the image acquisition module 10 is used to acquire the license plate image of the double-row license plate to be recognized
  • the feature extraction module 20 is configured to use a preset feature extraction model to perform feature extraction on the license plate image to obtain a license plate feature matrix;
  • the feature reorganization module 30 is used to perform feature reorganization on the license plate feature matrix to obtain the target license plate feature;
  • the license plate recognition module 40 is used to input the characteristics of the target license plate into a predetermined depth bidirectional recurrent neural network to obtain the license plate recognition result of the dual-line license plate to be recognized.
  • each virtual functional module of the above dual-line license plate recognition device is stored in the memory 1005 of the dual-line license plate recognition device shown in FIG. 1 and is used to implement all functions of the dual-line license plate recognition program; when the modules are executed by the processor 1001, the double-row license plate recognition function can be realized.
  • the preset feature extraction model includes an input layer, a convolutional layer, a pooling layer, and a normalization layer; wherein,
  • the input layer is used to receive the license plate image of the double-row license plate to be recognized
  • the convolution layer is used to extract the image feature matrix of the license plate image according to a convolution kernel
  • the pooling layer is used to perform pooling processing on the output of the convolutional layer
  • the normalization layer is used to normalize the output of the convolutional layer.
  • the feature restructuring module 30 includes:
  • a matrix splitting unit configured to split the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
  • the combined splicing unit is used to combine and splice the first feature matrix and the second feature matrix to obtain the target license plate feature.
  • the dual-line license plate recognition device further includes:
  • the result processing module is used to process the license plate recognition result through the connection time classification CTC algorithm to obtain the license plate information of the double-row license plate to be recognized.
  • the dual-line license plate recognition device further includes:
  • the color statistics module is used to perform color statistics on the license plate image, and determine the color of the license plate number according to the statistical result;
  • An area determination module configured to determine the pixel coordinates of the license plate number according to the color of the license plate number, and determine the effective area of the license plate number according to the pixel coordinates of the license plate number;
  • An image cropping module for cropping the license plate image according to the effective area of the license plate number to obtain the license plate number image
  • the feature extraction module 20 is also used for:
  • a preset feature extraction model is used to perform feature extraction on the license plate number image to obtain a license plate feature matrix.
  • the dual-line license plate recognition device further includes:
  • An image correction module for performing image correction processing on the license plate image to obtain a corrected license plate image
  • the feature extraction module 20 is also used for:
  • a preset feature extraction model is used to perform feature extraction on the corrected license plate image to obtain a license plate feature matrix.
  • the image correction module includes:
  • the gray scale transformation unit is used to perform non-linear gray scale transformation on the license plate image through the non-linear index to obtain the transformed license plate image;
  • An edge detection unit configured to perform edge detection on the transformed license plate image by using a preset edge detection algorithm to obtain an outer frame of license plate characters
  • An angle acquisition unit configured to establish an equation corresponding to the circumscribed frame of the license plate character through straight line fitting, and obtain the inclination angle of the character according to the equation;
  • the image correction unit is configured to perform rotation correction on the license plate image based on the inclination angle of the characters to obtain a corrected license plate image.
  • each module in the above-mentioned dual-line license plate recognition device corresponds to each step in the embodiment of the above-mentioned dual-line license plate recognition method, and the functions and realization processes are not repeated here.
  • the computer-readable storage medium may be volatile or non-volatile.
  • the computer-readable storage medium stores a dual-line license plate recognition program. When the dual-line license plate recognition program is executed by a processor, the following steps are implemented: acquiring the license plate image of the double-row license plate to be recognized; performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix; performing feature reorganization on the license plate feature matrix to obtain the target license plate feature; and
  • inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
  • in the two-line license plate recognition method provided by the present application, in order to further ensure the privacy and security of all the data mentioned above,
  • all the above data may also be stored in the nodes of a blockchain;
  • for example, the license plate image and the license plate feature matrix can be stored in blockchain nodes.
  • a blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Character Discrimination (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

A double-row license plate recognition method, a double-row license plate recognition apparatus, a device, and a computer-readable storage medium, relating to the field of image recognition technology. The method includes: acquiring a license plate image of a double-row license plate to be recognized (S10); performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix (S20); performing feature reorganization on the license plate feature matrix to obtain a target license plate feature (S30); and inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized (S40). The method can improve the recognition efficiency and accuracy of double-row license plate recognition results.

Description

Double-row license plate recognition method, apparatus, device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on April 30, 2020, with application number CN202010371388.1 and entitled "Double-row license plate recognition method, apparatus, device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image recognition technology, and in particular to a double-row license plate recognition method, apparatus, device, and computer-readable storage medium.
Background
License plate recognition is one of the important components of modern intelligent transportation systems and is very widely used, for example in unattended parking lots, restricted-area security control, traffic law enforcement, congestion pricing, and automatic toll collection. According to Chinese traffic regulations, license plates include single-row plates and multi-row plates; single-row plates are used on private cars, police cars, military vehicles, embassy vehicles, trucks, buses, and so on, while double-row plates are used on buses, passenger vehicles, trucks, armed-police vehicles, motorcycles, and so on.
Technical Problem
At present, most license plate recognition methods focus only on the single-row license plate recognition task, and only a few methods consider double-row license plates. The inventor realized that these methods usually need to split the double-row plate into two parts and take them as separate inputs; specifically, they require every character of the plate to be segmented and correctly recognized, so the segmentation and recognition accuracy of each character affects the accuracy of the final recognition result. Existing double-row license plate recognition methods therefore involve a heavy workload and a rather cumbersome procedure, and the accuracy of the recognition result is low.
Technical Solution
A double-row license plate recognition method, the double-row license plate recognition method comprising:
acquiring a license plate image of a double-row license plate to be recognized;
performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
A double-row license plate recognition apparatus, the double-row license plate recognition apparatus comprising:
an image acquisition module, configured to acquire a license plate image of a double-row license plate to be recognized;
a feature extraction module, configured to perform feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
a feature reorganization module, configured to perform feature reorganization on the license plate feature matrix to obtain a target license plate feature;
a license plate recognition module, configured to input the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
A double-row license plate recognition device, the double-row license plate recognition device comprising a memory, a processor, and a double-row license plate recognition program stored in the memory and executable by the processor, wherein when the double-row license plate recognition program is executed by the processor, the following steps are implemented:
acquiring a license plate image of a double-row license plate to be recognized;
performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
A computer-readable storage medium on which a double-row license plate recognition program is stored, wherein when the double-row license plate recognition program is executed by a processor, the following steps are implemented:
acquiring a license plate image of a double-row license plate to be recognized;
performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
This application can improve the accuracy of double-row license plate recognition results.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the device of the hardware operating environment involved in the solutions of the embodiments of this application;
FIG. 2 is a schematic flowchart of a first embodiment of the double-row license plate recognition method of this application;
FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the double-row license plate recognition apparatus of this application.
The realization of the objectives, the functional characteristics, and the advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Description of the Embodiments
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit this application.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of the device of the hardware operating environment involved in the solutions of the embodiments of this application.
The double-row license plate recognition device involved in the embodiments of this application may be a PC (personal computer), a notebook computer, a server, or another terminal device with display and processing functions.
As shown in FIG. 1, the double-row license plate recognition device may include: a processor 1001 such as a CPU (Central Processing Unit), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wireless-Fidelity, Wi-Fi, interface); the memory 1005 may be a high-speed random access memory (RAM) or a stable non-volatile memory such as disk storage, and the memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001. Those skilled in the art can understand that the structure of the double-row license plate recognition device shown in FIG. 1 does not constitute a limitation on the device, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Continuing to refer to FIG. 1, the memory 1005 in FIG. 1, as a computer storage medium, may include an operating system, a network communication module, and a double-row license plate recognition program. In FIG. 1, the network communication module can be used to connect to a server and perform data communication with the server, and the processor 1001 can be used to call the double-row license plate recognition program stored in the memory 1005 and perform the following operations:
acquiring a license plate image of a double-row license plate to be recognized;
performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
Further, the preset feature extraction model includes an input layer, a convolutional layer, a pooling layer, and a normalization layer, wherein:
the input layer is used to receive the license plate image of the double-row license plate to be recognized;
the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
the pooling layer is used to perform pooling processing on the output of the convolutional layer;
the normalization layer is used to perform normalization processing on the output of the convolutional layer.
Further, the processor 1001 may call the double-row license plate recognition program stored in the memory 1005 and also perform the following operations:
splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
Further, the processor 1001 may call the double-row license plate recognition program stored in the memory 1005 and also perform the following operation:
processing the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain the license plate information of the double-row license plate to be recognized.
Further, the processor 1001 may call the double-row license plate recognition program stored in the memory 1005 and also perform the following operations:
performing color statistics on the license plate image, and determining a license plate number color according to the statistical result;
determining license plate number pixel coordinates according to the license plate number color, and determining a valid license plate number region according to the license plate number pixel coordinates;
cropping the license plate image according to the valid license plate number region to obtain a license plate number image;
performing feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
Further, the processor 1001 may call the double-row license plate recognition program stored in the memory 1005 and also perform the following operations:
performing image correction processing on the license plate image to obtain a corrected license plate image;
performing feature extraction on the corrected license plate image with the preset feature extraction model to obtain the license plate feature matrix.
Further, the processor 1001 may call the double-row license plate recognition program stored in the memory 1005 and also perform the following operations:
performing a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image;
performing edge detection on the transformed license plate image with a preset edge detection algorithm to obtain bounding boxes of the license plate characters;
establishing equations corresponding to the bounding boxes of the license plate characters by straight-line fitting, and obtaining a character inclination angle according to the equations;
performing rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
Based on the above hardware structure, the various embodiments of the double-row license plate recognition method of this application are proposed.
This application provides a double-row license plate recognition method.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the double-row license plate recognition method of this application.
In this embodiment, the double-row license plate recognition method includes:
Step S10: acquiring a license plate image of a double-row license plate to be recognized;
In this embodiment, the double-row license plate recognition method is implemented by a double-row license plate recognition device; the device may be a PC, a notebook computer, a server, or similar equipment, and is described here taking a server as an example.
In this embodiment, the license plate image of the double-row license plate to be recognized is first acquired.
Step S20: performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
Then, the preset feature extraction model is used to perform feature extraction on the license plate image to obtain the license plate feature matrix. The type of the preset feature extraction model may be chosen to be a convolutional neural network model, and the preset feature extraction model includes an input layer, a convolutional layer, a pooling layer, and a normalization layer, wherein:
the input layer is used to receive the license plate image of the double-row license plate to be recognized;
the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
the pooling layer is used to perform pooling processing on the output of the convolutional layer;
the normalization layer is used to perform normalization processing on the output of the convolutional layer.
Specifically, the preset feature extraction model (i.e., the convolutional neural network model) includes 1 input layer (Input), 7 convolutional layers (Conv), 4 pooling layers (Pool), and 2 normalization layers (BatchNorm). The dimensions and configuration parameters of each layer are listed in Table 1 below, where the dimensions denote channels x height x width, and K, S, and P in the configuration denote the convolution kernel size, the stride, and the amount of padding, respectively. The input layer receives the license plate image of the double-row license plate to be recognized; the convolutional layers extract the image feature matrices of the license plate image according to convolution kernels; the pooling layers pool the output of the convolutional layers, that is, they extract from each image feature matrix the image feature values that best represent the local features of the image; and the normalization layers normalize the output of the convolutional layers. In addition, the convolutional neural network model may also add a zero-padding layer, which pads the boundary of the feature map with zeros so that the spatial size of the feature map output by that layer remains unchanged.
After feature extraction is performed by the preset feature extraction model, the resulting license plate feature matrix has a dimension of 512x2x12, i.e., it is a matrix with a height of 2.
Network layer type | Dimensions | Configuration
Input              | 3x64x96    |
Conv1              | 64x64x96   | K=3, S=1, P=1
Pool1              | 64x32x48   | K=2, S=2
Conv2              | 128x32x48  | K=3, S=1, P=1
Pool2              | 128x16x24  | K=2, S=2
Conv3              | 256x16x24  | K=3, S=1, P=1
BatchNorm          | 256x16x24  |
Conv4              | 256x16x24  | K=3, S=1, P=1
Pool3              | 256x8x12   | K=2, S=2
Conv5              | 512x8x12   | K=3, S=1, P=1
BatchNorm          | 512x8x12   |
Conv6              | 512x8x12   | K=3, S=1, P=1
Pool4              | 512x4x12   | K=[2,1], S=[2,1]
ZeroPadding        | 512x4x14   | P=[0,1]
Conv7              | 512x2x12   | K=3, S=1
Table 1: Network structure of the preset feature extraction model
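As one way to read Table 1, the layer stack can be transcribed into PyTorch as below; the ReLU activations are an assumption (the table does not list activation functions), while the kernel, stride, and padding values follow the table:

```python
import torch
import torch.nn as nn

# A transcription of Table 1: input 3x64x96 -> output feature matrix 512x2x12.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1), nn.ReLU(),     # Conv1: 64x64x96
    nn.MaxPool2d(kernel_size=2, stride=2),                               # Pool1: 64x32x48
    nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1), nn.ReLU(),   # Conv2: 128x32x48
    nn.MaxPool2d(kernel_size=2, stride=2),                               # Pool2: 128x16x24
    nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(),  # Conv3: 256x16x24
    nn.BatchNorm2d(256),                                                 # BatchNorm
    nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1), nn.ReLU(),  # Conv4: 256x16x24
    nn.MaxPool2d(kernel_size=2, stride=2),                               # Pool3: 256x8x12
    nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(),  # Conv5: 512x8x12
    nn.BatchNorm2d(512),                                                 # BatchNorm
    nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(),  # Conv6: 512x8x12
    nn.MaxPool2d(kernel_size=(2, 1), stride=(2, 1)),                     # Pool4: 512x4x12
    nn.ZeroPad2d((1, 1, 0, 0)),                                          # ZeroPadding: 512x4x14
    nn.Conv2d(512, 512, kernel_size=3, stride=1),                        # Conv7: 512x2x12
)

print(feature_extractor(torch.randn(1, 3, 64, 96)).shape)  # torch.Size([1, 512, 2, 12])
```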
Step S30: performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
After the license plate feature matrix is obtained, feature reorganization is performed on it to obtain the target license plate feature.
Specifically, step S30 includes:
Step a31: splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
Step a32: combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
The license plate feature matrix is split by its height to obtain the first feature matrix and the second feature matrix; that is, the 512x2x12 license plate feature matrix obtained above is split by height into two feature matrices of dimension 512x1x12, the feature matrix obtained from the first row of the license plate feature matrix being recorded as the first feature matrix and the one obtained from the second row as the second feature matrix. The first feature matrix and the second feature matrix are then combined and spliced to obtain the target license plate feature: during the splicing, the two matrices are recombined horizontally, and the reorganized matrix has a dimension of 512x1x24, i.e., a target license plate feature with a height of 1 is obtained.
Step S40: inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized.
Finally, the target license plate feature is input into the preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized. The preset-depth bidirectional recurrent neural network may optionally be a bidirectional long short-term memory network (Bi-directional Long Short-Term Memory, Bi-LSTM). It can be understood that Bi-LSTM can process the feature information from left to right; through context information, the processing result of the current frame is fused with what has previously been memorized and forgotten, so that a prediction can be made and passed on to subsequent frames. After recombination, the double-row license plate is also a character sequence with a left-to-right temporal characteristic, so a bidirectional recurrent neural network can be used for license plate character recognition.
It should be noted that the problem with a unidirectional recurrent neural network (RNN) is that it can only use information before time t, whereas future information may sometimes also be needed. A bidirectional RNN model can solve this problem: a bidirectional RNN maintains two hidden layers at all times, one hidden layer transmitting information from left to right and the other recording the information propagated from right to left. Therefore, compared with using a unidirectional recurrent neural network, the embodiments of this application can improve the accuracy of the recognition result by using a preset-depth bidirectional recurrent neural network.
This application provides a double-row license plate recognition method: the license plate image of the double-row license plate to be recognized is acquired; feature extraction is performed on the license plate image with a preset feature extraction model to obtain a license plate feature matrix; feature reorganization is then performed on the license plate feature matrix to obtain the target license plate feature; and the target license plate feature is input into a preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized. In this embodiment, the license plate image of the double-row license plate to be recognized can be input directly as a whole, without segmenting and recognizing each character of the plate; compared with recognizing each plate character individually as in the prior art, this saves work and improves recognition efficiency. At the same time, in the embodiments of this application an error in recognizing the entire plate caused by a segmentation error on a single character can be avoided, so the embodiments of this application can improve the accuracy of the double-row license plate recognition result.
Further, based on the first embodiment above, a second embodiment of the double-row license plate recognition method of this application is proposed.
In this embodiment, after step S40, the double-row license plate recognition method further includes:
Step A: processing the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain the license plate information of the double-row license plate to be recognized.
In this embodiment, because the number of characters, the font style, or the size in license plate images differs, the output may not correspond one-to-one with each character; therefore, to further ensure the accuracy of the recognition result, the license plate recognition result can be processed with the CTC (Connectionist Temporal Classification) algorithm to perform a character alignment operation and obtain the final license plate information. The CTC algorithm can merge repeated elements and remove blank symbols, and can be used to solve the alignment between input features and output labels.
In this embodiment, by combining the deep bidirectional recurrent neural network with the CTC algorithm, recognition errors such as missed or duplicated characters in the double-row license plate information can be effectively avoided, thereby further improving the accuracy of the license plate recognition result.
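The application does not give training details; one common way to realize the recurrent-network-plus-CTC combination during training is PyTorch's nn.CTCLoss, sketched below with assumed batch size, class count, and label lengths:

```python
import torch
import torch.nn as nn

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)

# Log-probabilities from the recurrent network's output head: (T=24 time steps, batch, classes).
log_probs = torch.randn(24, 2, 70, requires_grad=True).log_softmax(dim=2)
# Target label sequences for the double-row plates in the batch (label 0 is reserved as blank).
targets = torch.randint(1, 70, (2, 8), dtype=torch.long)
input_lengths = torch.full((2,), 24, dtype=torch.long)
target_lengths = torch.full((2,), 8, dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```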
Further, based on the first embodiment above, a third embodiment of the double-row license plate recognition method of this application is proposed.
In this embodiment, before step S20, the method further includes:
Step B: performing color statistics on the license plate image, and determining a license plate number color according to the statistical result;
In this embodiment, after the license plate image of the double-row license plate to be recognized is acquired, color statistics are computed on the license plate image, and the license plate number color is determined according to the statistical result. Because the license plate image mainly contains two colors, one being the license plate background color and the other the license plate number color, after the color statistics are computed the color corresponding to the highest count in the statistical result is the plate background color, and the color corresponding to the second-highest count is the license plate number color.
When computing the color statistics, the RGB (Red Green Blue) value of each pixel of the license plate image can be extracted through a preset image reading function, and the color statistics of the license plate image are then computed from the RGB values of its pixels. Optionally, the preset image reading function is the imread() function; reading the license plate image with imread() yields a numpy array (numpy is an open-source scientific computing library for Python), i.e., the RGB value corresponding to each pixel of the license plate image, and the numpy array is then traversed to compute color statistics on the extracted RGB values and obtain the statistical result.
Step C: determining license plate number pixel coordinates according to the license plate number color, and determining a valid license plate number region according to the license plate number pixel coordinates;
Then, the license plate number pixel coordinates are determined according to the license plate number color, and the valid license plate number region is determined according to those pixel coordinates. Specifically, the pixel coordinates corresponding to the RGB value of the license plate number color can be obtained, i.e., the license plate number pixel coordinates, and the outermost of those pixel coordinates are then determined to form the valid license plate number region.
Step D: cropping the license plate image according to the valid license plate number region to obtain a license plate number image;
The license plate image is then cropped according to the valid license plate number region to obtain the license plate number image.
At this point, step S20 includes:
Step A21: performing feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
After the license plate number image is obtained by cropping, the preset feature extraction model is used to perform feature extraction on the license plate number image to obtain the license plate feature matrix, and the subsequent steps are then performed; for the specific execution process, reference may be made to the above embodiment, which is not repeated here.
In this embodiment, color statistics are computed on the license plate image, the valid license plate number region is then determined, the license plate image is cropped accordingly to obtain the license plate number image, and recognition is then performed based on the license plate number image. In this way, the image region to be recognized can be reduced, which further improves recognition efficiency.
Further, based on the first embodiment above, a fourth embodiment of the double-row license plate recognition method of this application is proposed.
In this embodiment, before step S20, the method further includes:
Step E: performing image correction processing on the license plate image to obtain a corrected license plate image;
In this embodiment, because of the shooting angle, the license plate number in the acquired license plate image of the double-row license plate to be recognized may be tilted or distorted; therefore, in order to further improve the accuracy of the license plate recognition result, after the license plate image of the double-row license plate to be recognized is acquired, image correction processing may be performed on the license plate image to obtain the corrected license plate image.
Specifically, step E includes:
Step E1: performing a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image;
Step E2: performing edge detection on the transformed license plate image with a preset edge detection algorithm to obtain the bounding boxes of the license plate characters;
Step E3: establishing equations corresponding to the bounding boxes of the license plate characters by straight-line fitting, and obtaining a character inclination angle according to the equations;
Step E4: performing rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
The image correction process is as follows:
First, a non-linear gray-scale transformation is performed on the license plate image through a non-linear exponent to obtain the transformed license plate image; the gray-scale transformation stretches the high-gray region of the license plate image so that the gray-level contrast is more pronounced, which facilitates the subsequent recognition processing. Then, edge detection is performed on the transformed license plate image with a preset edge detection algorithm to obtain the bounding boxes of the license plate characters; the preset edge detection algorithm may be the Sobel operator or the Canny algorithm.
After the bounding boxes of the license plate characters are obtained, the equations corresponding to the bounding boxes are established by straight-line fitting, and the character inclination angle is obtained from the equations. The inclination angle is computed as follows: because of the shooting conditions and other factors, the outer frame of the license plate has a certain curvature, and the long line on which the plate border lies may break into several straight segments with different slopes, so the equations established by straight-line fitting for the character bounding boxes also include several lines. The line equations y = ax + b are obtained, and based on the inclination-angle formula tanθ = -b/a a number of angle values can be obtained; the character inclination angle is taken as the average of the angle values whose inclination lies within the range of -10° to 10°.
Finally, rotation correction is performed on the license plate image based on the character inclination angle to obtain the corrected license plate image.
At this point, step S20 includes:
Step A22: performing feature extraction on the corrected license plate image with the preset feature extraction model to obtain the license plate feature matrix.
After the license plate image of the double-row license plate to be recognized is corrected, the preset feature extraction model is used to perform feature extraction on the corrected license plate image to obtain the license plate feature matrix, and the subsequent steps are then performed; for the specific execution process, reference may be made to the above embodiment, which is not repeated here.
In this embodiment, by performing image correction processing on the license plate image and then performing recognition based on the corrected license plate image, the accuracy of the double-row license plate recognition result can be further improved.
This application also provides a double-row license plate recognition apparatus.
Referring to FIG. 3, FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the double-row license plate recognition apparatus of this application.
In this embodiment, the double-row license plate recognition apparatus includes:
an image acquisition module 10, configured to acquire a license plate image of a double-row license plate to be recognized;
a feature extraction module 20, configured to perform feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
a feature reorganization module 30, configured to perform feature reorganization on the license plate feature matrix to obtain a target license plate feature;
a license plate recognition module 40, configured to input the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
The virtual functional modules of the above double-row license plate recognition apparatus are stored in the memory 1005 of the double-row license plate recognition device shown in FIG. 1 and are used to implement all functions of the double-row license plate recognition program; when the modules are executed by the processor 1001, the double-row license plate recognition function can be realized.
Further, the preset feature extraction model includes an input layer, a convolutional layer, a pooling layer, and a normalization layer, wherein:
the input layer is used to receive the license plate image of the double-row license plate to be recognized;
the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
the pooling layer is used to perform pooling processing on the output of the convolutional layer;
the normalization layer is used to perform normalization processing on the output of the convolutional layer.
Further, the feature reorganization module 30 includes:
a matrix splitting unit, configured to split the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
a combining and splicing unit, configured to combine and splice the first feature matrix and the second feature matrix to obtain the target license plate feature.
Further, the double-row license plate recognition apparatus further includes:
a result processing module, configured to process the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain the license plate information of the double-row license plate to be recognized.
Further, the double-row license plate recognition apparatus further includes:
a color statistics module, configured to perform color statistics on the license plate image, and determine a license plate number color according to the statistical result;
a region determination module, configured to determine license plate number pixel coordinates according to the license plate number color, and determine a valid license plate number region according to the license plate number pixel coordinates;
an image cropping module, configured to crop the license plate image according to the valid license plate number region to obtain a license plate number image;
the feature extraction module 20 is further configured to:
perform feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
Further, the double-row license plate recognition apparatus further includes:
an image correction module, configured to perform image correction processing on the license plate image to obtain a corrected license plate image;
the feature extraction module 20 is further configured to:
perform feature extraction on the corrected license plate image with the preset feature extraction model to obtain the license plate feature matrix.
Further, the image correction module includes:
a gray-scale transformation unit, configured to perform a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image;
an edge detection unit, configured to perform edge detection on the transformed license plate image with a preset edge detection algorithm to obtain the bounding boxes of the license plate characters;
an angle acquisition unit, configured to establish equations corresponding to the bounding boxes of the license plate characters by straight-line fitting, and obtain a character inclination angle according to the equations;
an image correction unit, configured to perform rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
The function implementation of each module in the above double-row license plate recognition apparatus corresponds to the steps in the embodiments of the above double-row license plate recognition method, and their functions and implementation processes are not described again here one by one.
This application also provides a computer-readable storage medium, which may be volatile or non-volatile. A double-row license plate recognition program is stored on the computer-readable storage medium, and when the double-row license plate recognition program is executed by a processor, the following steps are implemented:
acquiring a license plate image of a double-row license plate to be recognized;
performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
The specific embodiments of the computer-readable storage medium of this application are substantially the same as the embodiments of the above double-row license plate recognition method and are not described again here.
In another embodiment, in the double-row license plate recognition method provided by this application, in order to further ensure the privacy and security of all the data mentioned above, all the above data may also be stored in nodes of a blockchain; for example, the license plate image and the license plate feature matrix can all be stored in blockchain nodes.
It should be noted that the blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association with one another using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block. A blockchain can include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
It should be noted that, in this document, the terms "comprise", "include", or any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not therefore limit the patent scope of this application; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A double-row license plate recognition method, wherein the double-row license plate recognition method comprises the following steps:
    acquiring a license plate image of a double-row license plate to be recognized;
    performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
    performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
    inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
  2. The double-row license plate recognition method according to claim 1, wherein the preset feature extraction model comprises an input layer, a convolutional layer, a pooling layer, and a normalization layer; wherein
    the input layer is used to receive the license plate image of the double-row license plate to be recognized;
    the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
    the pooling layer is used to perform pooling processing on the output of the convolutional layer;
    the normalization layer is used to perform normalization processing on the output of the convolutional layer.
  3. The double-row license plate recognition method according to claim 1, wherein the step of performing feature reorganization on the license plate feature matrix to obtain the target license plate feature comprises:
    splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
    combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
  4. The double-row license plate recognition method according to any one of claims 1 to 3, wherein after the step of inputting the target license plate feature into the preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized, the method further comprises:
    processing the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain license plate information of the double-row license plate to be recognized.
  5. The double-row license plate recognition method according to any one of claims 1 to 3, wherein before the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix, the method further comprises:
    performing color statistics on the license plate image, and determining a license plate number color according to the statistical result;
    determining license plate number pixel coordinates according to the license plate number color, and determining a valid license plate number region according to the license plate number pixel coordinates;
    cropping the license plate image according to the valid license plate number region to obtain a license plate number image;
    the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix comprises:
    performing feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
  6. The double-row license plate recognition method according to any one of claims 1 to 3, wherein before the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix, the method further comprises:
    performing image correction processing on the license plate image to obtain a corrected license plate image;
    the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix comprises:
    performing feature extraction on the corrected license plate image with the preset feature extraction model to obtain the license plate feature matrix.
  7. The double-row license plate recognition method according to claim 6, wherein the step of performing image correction processing on the license plate image to obtain the corrected license plate image comprises:
    performing a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image;
    performing edge detection on the transformed license plate image with a preset edge detection algorithm to obtain bounding boxes of the license plate characters;
    establishing equations corresponding to the bounding boxes of the license plate characters by straight-line fitting, and obtaining a character inclination angle according to the equations;
    performing rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
  8. A double-row license plate recognition apparatus, wherein the double-row license plate recognition apparatus comprises:
    an image acquisition module, configured to acquire a license plate image of a double-row license plate to be recognized;
    a feature extraction module, configured to perform feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
    a feature reorganization module, configured to perform feature reorganization on the license plate feature matrix to obtain a target license plate feature;
    a license plate recognition module, configured to input the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
  9. A double-row license plate recognition device, wherein the double-row license plate recognition device comprises a memory, a processor, and a double-row license plate recognition program stored in the memory and executable by the processor, wherein when the double-row license plate recognition program is executed by the processor, the following steps are implemented:
    acquiring a license plate image of a double-row license plate to be recognized;
    performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
    performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
    inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
  10. The double-row license plate recognition device according to claim 9, wherein the preset feature extraction model comprises an input layer, a convolutional layer, a pooling layer, and a normalization layer; wherein
    the input layer is used to receive the license plate image of the double-row license plate to be recognized;
    the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
    the pooling layer is used to perform pooling processing on the output of the convolutional layer;
    the normalization layer is used to perform normalization processing on the output of the convolutional layer.
  11. The double-row license plate recognition device according to claim 9, wherein the step of performing feature reorganization on the license plate feature matrix to obtain the target license plate feature comprises:
    splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
    combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
  12. The double-row license plate recognition device according to any one of claims 9 to 11, wherein after the step of inputting the target license plate feature into the preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized, when the double-row license plate recognition program is executed by the processor, the following step is further implemented:
    processing the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain license plate information of the double-row license plate to be recognized.
  13. The double-row license plate recognition device according to any one of claims 9 to 11, wherein before the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix, when the double-row license plate recognition program is executed by the processor, the following steps are further implemented:
    performing color statistics on the license plate image, and determining a license plate number color according to the statistical result;
    determining license plate number pixel coordinates according to the license plate number color, and determining a valid license plate number region according to the license plate number pixel coordinates;
    cropping the license plate image according to the valid license plate number region to obtain a license plate number image;
    the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix comprises:
    performing feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
  14. The double-row license plate recognition device according to any one of claims 9 to 11, wherein before the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix, when the double-row license plate recognition program is executed by the processor, the following steps are further implemented:
    performing image correction processing on the license plate image to obtain a corrected license plate image;
    the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix comprises:
    performing feature extraction on the corrected license plate image with the preset feature extraction model to obtain the license plate feature matrix.
  15. The double-row license plate recognition device according to claim 14, wherein the step of performing image correction processing on the license plate image to obtain the corrected license plate image comprises:
    performing a non-linear gray-scale transformation on the license plate image through a non-linear exponent to obtain a transformed license plate image;
    performing edge detection on the transformed license plate image with a preset edge detection algorithm to obtain bounding boxes of the license plate characters;
    establishing equations corresponding to the bounding boxes of the license plate characters by straight-line fitting, and obtaining a character inclination angle according to the equations;
    performing rotation correction on the license plate image based on the character inclination angle to obtain the corrected license plate image.
  16. A computer-readable storage medium, wherein a double-row license plate recognition program is stored on the computer-readable storage medium, and when the double-row license plate recognition program is executed by a processor, the following steps are implemented:
    acquiring a license plate image of a double-row license plate to be recognized;
    performing feature extraction on the license plate image with a preset feature extraction model to obtain a license plate feature matrix;
    performing feature reorganization on the license plate feature matrix to obtain a target license plate feature;
    inputting the target license plate feature into a preset-depth bidirectional recurrent neural network to obtain a license plate recognition result of the double-row license plate to be recognized.
  17. The computer-readable storage medium according to claim 16, wherein the preset feature extraction model comprises an input layer, a convolutional layer, a pooling layer, and a normalization layer; wherein
    the input layer is used to receive the license plate image of the double-row license plate to be recognized;
    the convolutional layer is used to extract an image feature matrix of the license plate image according to a convolution kernel;
    the pooling layer is used to perform pooling processing on the output of the convolutional layer;
    the normalization layer is used to perform normalization processing on the output of the convolutional layer.
  18. The computer-readable storage medium according to claim 16, wherein the step of performing feature reorganization on the license plate feature matrix to obtain the target license plate feature comprises:
    splitting the license plate feature matrix according to the height of the license plate feature matrix to obtain a first feature matrix and a second feature matrix;
    combining and splicing the first feature matrix and the second feature matrix to obtain the target license plate feature.
  19. The computer-readable storage medium according to any one of claims 16 to 18, wherein after the step of inputting the target license plate feature into the preset-depth bidirectional recurrent neural network to obtain the license plate recognition result of the double-row license plate to be recognized, when the double-row license plate recognition program is executed by a processor, the following step is further implemented:
    processing the license plate recognition result with the connectionist temporal classification (CTC) algorithm to obtain license plate information of the double-row license plate to be recognized.
  20. The computer-readable storage medium according to any one of claims 16 to 18, wherein before the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix, when the double-row license plate recognition program is executed by a processor, the following steps are further implemented:
    performing color statistics on the license plate image, and determining a license plate number color according to the statistical result;
    determining license plate number pixel coordinates according to the license plate number color, and determining a valid license plate number region according to the license plate number pixel coordinates;
    cropping the license plate image according to the valid license plate number region to obtain a license plate number image;
    the step of performing feature extraction on the license plate image with the preset feature extraction model to obtain the license plate feature matrix comprises:
    performing feature extraction on the license plate number image with the preset feature extraction model to obtain the license plate feature matrix.
PCT/CN2020/135310 2020-04-30 2020-12-10 双行车牌识别方法、装置、设备及计算机可读存储介质 WO2021218164A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010371388.1 2020-04-30
CN202010371388.1A CN111582272A (zh) 2020-04-30 2020-04-30 双行车牌识别方法、装置、设备及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021218164A1 true WO2021218164A1 (zh) 2021-11-04

Family

ID=72120754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135310 WO2021218164A1 (zh) 2020-04-30 2020-12-10 双行车牌识别方法、装置、设备及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111582272A (zh)
WO (1) WO2021218164A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463611A (zh) * 2021-12-18 2022-05-10 北京工业大学 一种非可控环境下的鲁棒中文车牌检测与校正方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582272A (zh) * 2020-04-30 2020-08-25 平安科技(深圳)有限公司 双行车牌识别方法、装置、设备及计算机可读存储介质
CN112686252A (zh) * 2020-12-28 2021-04-20 中国联合网络通信集团有限公司 一种车牌检测方法和装置
CN113780113A (zh) * 2021-08-25 2021-12-10 廊坊中油朗威工程项目管理有限公司 管道违章行为识别方法
CN116597437B (zh) * 2023-07-18 2023-10-03 昆明理工大学 融合双层注意力网络的端到端老挝车牌照识别方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413147A (zh) * 2013-08-28 2013-11-27 庄浩洋 一种车牌识别方法和系统
CN108229474A (zh) * 2017-12-29 2018-06-29 北京旷视科技有限公司 车牌识别方法、装置及电子设备
CN109034019A (zh) * 2018-07-12 2018-12-18 浙江工业大学 一种基于行分割线的黄色双行车牌字符分割方法
CN110070085A (zh) * 2019-04-30 2019-07-30 北京百度网讯科技有限公司 车牌识别方法和装置
US20190251369A1 (en) * 2018-02-11 2019-08-15 Ilya Popov License plate detection and recognition system
CN110942071A (zh) * 2019-12-09 2020-03-31 上海眼控科技股份有限公司 一种基于车牌分类和lstm的车牌识别方法
CN111582272A (zh) * 2020-04-30 2020-08-25 平安科技(深圳)有限公司 双行车牌识别方法、装置、设备及计算机可读存储介质


Also Published As

Publication number Publication date
CN111582272A (zh) 2020-08-25


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933711

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933711

Country of ref document: EP

Kind code of ref document: A1