CN110837838B - End-to-end vehicle frame number identification system and identification method based on deep learning - Google Patents


Info

Publication number
CN110837838B
CN110837838B (application CN201911075932.1A)
Authority
CN
China
Prior art keywords
character
frame number
recognition
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911075932.1A
Other languages
Chinese (zh)
Other versions
CN110837838A (en)
Inventor
张发恩
范峻铭
黄家水
唐永亮
Current Assignee
Ainnovation Chongqing Technology Co ltd
Original Assignee
Ainnovation Chongqing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Ainnovation Chongqing Technology Co ltd filed Critical Ainnovation Chongqing Technology Co ltd
Priority to CN201911075932.1A
Publication of CN110837838A
Application granted
Publication of CN110837838B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses an end-to-end vehicle frame number identification system based on deep learning, which comprises: an image input module, used for inputting an image containing the whole frame number character string; an image feature extraction module, connected with the image input module, used for extracting the image features of the image to obtain a corresponding feature map and converting the feature map into a corresponding feature vector; and a character recognition module, connected with the image feature extraction module, used for recognizing the character type of each frame number character in the frame number character string in the image according to the feature vector, finally obtaining a character recognition result for the frame number character string.

Description

End-to-end vehicle frame number identification system and identification method based on deep learning
Technical Field
The invention relates to automatic frame number recognition systems, and in particular to an end-to-end frame number recognition system and recognition method based on deep learning.
Background
The frame number is the unique identification code of a vehicle and is typically formed from a combination of 17 digits and letters. At present, when vehicle dealers count their vehicle stock, they generally identify and record frame numbers manually, but the working efficiency of manual identification is extremely low, so a system capable of automatically identifying and extracting frame numbers is needed to solve this problem.
Disclosure of Invention
The invention aims to provide an end-to-end vehicle frame number identification system based on deep learning so as to solve the technical problems.
To achieve the purpose, the invention adopts the following technical scheme:
the utility model provides an end-to-end car frame number identification system based on degree of depth study for carry out automatic identification to the car frame number, include:
the image input module is used for inputting an image containing the whole frame number character string;
the image feature extraction module is connected with the image input module and is used for extracting image features corresponding to the image to obtain a feature map corresponding to the image and converting the feature map into corresponding feature vectors;
the character recognition module is connected with the image feature extraction module and is used for recognizing the character type of each frame number character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, finally obtaining a character recognition result of the frame number character string.
As a preferable scheme of the invention, the end-to-end vehicle frame number identification system performs convolution on the image through a convolutional neural network to obtain the feature map corresponding to the image.
As a preferable mode of the present invention, the end-to-end vehicle frame number identification system converts the feature map corresponding to the image into the feature vector through the convolutional neural network.
As a preferable mode of the invention, the character recognition module comprises a plurality of character recognition units, each character recognition unit being used for recognizing the character type corresponding to the frame number character on one designated character position in the frame number character string.
Each character recognition unit specifically comprises:
a designated-character-position feature positioning subunit, configured to locate, in the feature vector, the component features corresponding to the frame number character on the designated character position, and obtain a positioning result;
a prediction vector generation subunit, connected with the designated-character-position feature positioning subunit, configured to convert the feature vector into a corresponding prediction vector according to the positioning result;
a prediction probability calculation subunit, connected with the prediction vector generation subunit, configured to calculate the component value corresponding to each component in the prediction vector based on the character recognition model;
and a character type recognition subunit, connected with the prediction probability calculation subunit, configured to identify the character type corresponding to the component with the maximum component value in the prediction vector based on the character recognition model, take the recognized character type as the character type of the frame number character on the designated character position, and output a character type recognition result for the frame number character on the designated character position.
As a preferable mode of the present invention, the number of the character recognition units is 17, and each character recognition unit is used for recognizing the character type corresponding to the frame number character on one of the designated character positions in the frame number character string.
As a preferable scheme of the invention, the end-to-end vehicle frame number recognition system further comprises a character recognition model training module which is connected with the character recognition module and used for training and forming the character recognition model according to the character recognition result.
The invention also provides an end-to-end vehicle frame number identification method based on deep learning, which is realized by applying the end-to-end vehicle frame number identification system and comprises the following steps:
Step S1, the end-to-end frame number recognition system inputs an image containing the whole frame number character string;
Step S2, the end-to-end frame number recognition system extracts the image features corresponding to the image and obtains a feature map corresponding to the image;
Step S3, the end-to-end frame number recognition system converts the feature map into a corresponding feature vector;
Step S4, the end-to-end frame number recognition system simultaneously recognizes the character type of each character in the frame number character string in the image according to the feature vector and based on a preset character recognition model, finally obtaining a character recognition result of the frame number character string.
As a preferred solution of the present invention, in step S4, the process of the end-to-end frame number recognition system recognizing the character type corresponding to each of the characters in the frame number character string specifically includes the following steps:
Step S41, the end-to-end frame number recognition system locates, in the feature vector and based on the preset character recognition model, the components corresponding to the frame number character on each designated character position in the frame number character string, and obtains a plurality of positioning results, one for the frame number character on each designated character position;
Step S42, the end-to-end frame number recognition system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
Step S43, the end-to-end frame number recognition system calculates the component value corresponding to each component in each prediction vector based on the preset character recognition model;
Step S44, the end-to-end frame number recognition system identifies, based on the preset character recognition model, the character type corresponding to the component with the maximum component value in each prediction vector, takes the recognized character type as the character type of the frame number character on the corresponding designated character position, and finally obtains the character recognition result of the frame number character string.
The end-to-end vehicle frame number recognition system provided by the invention can automatically recognize the frame number in an input image containing the frame number character string; the recognition process is quick and efficient, the recognition accuracy is high, and the system solves the technical problems of the traditional manual recognition mode, namely low recognition efficiency and susceptibility to missed and false detections.
Drawings
In order to more clearly illustrate the technical solution of the embodiments of the present invention, the drawings that are required to be used in the embodiments of the present invention will be briefly described below. It is evident that the drawings described below are only some embodiments of the present invention and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of a deep learning based end-to-end car frame number identification system provided by the invention;
FIG. 2 is a schematic diagram of the character recognition module in the deep learning based end-to-end car frame number recognition system provided by the invention;
FIG. 3 is a schematic diagram of a character recognition unit in a character recognition module in a deep learning-based end-to-end car frame number recognition system according to the present invention;
FIG. 4 is a diagram of steps in a method for implementing end-to-end identification of a frame number using the end-to-end frame number identification system provided by the present invention;
FIG. 5 is a step diagram of a preferred method of character type recognition for frame number characters by the end-to-end frame number recognition system provided by the present invention;
FIG. 6 is a network architecture diagram of a convolutional neural network employed by the end-to-end frame number identification system of the present invention to extract feature maps of images containing frame number strings;
FIG. 7 is a network architecture diagram of a convolutional neural network employed by the end-to-end vehicle frame number identification system of the present invention to identify the type of character corresponding to the character on the designated character position in the vehicle frame number string;
FIG. 8 is a diagram of the recognition results of the end-to-end frame number recognition system provided by the invention recognizing a frame number character string.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
The drawings are for illustrative purposes only, are schematic rather than physical, and are not intended to limit the present patent; for the purpose of better illustrating embodiments of the invention, certain elements of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be appreciated by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numbers in the drawings of embodiments of the invention correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings, serve only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; such terms are merely exemplary and should not be construed as limiting the present patent. Their specific meaning may be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present invention, unless explicitly stated and limited otherwise, the term "coupled" and the like should be interpreted broadly: the connection may be fixed, detachable or integral; mechanical or electrical; direct, or indirect through an intermediate medium; it may also denote communication between, or an interaction relationship of, two components. The specific meaning of such terms in the present invention will be understood by those of ordinary skill in the art on a case-by-case basis.
Referring to fig. 1, the end-to-end frame number recognition system based on deep learning provided by the embodiment of the invention is used for automatically recognizing a frame number, and the frame number recognition system comprises:
the image input module 1 is used for inputting an image containing the whole frame number character string;
the image feature extraction module 2 is connected with the image input module 1 and is used for extracting image features corresponding to the image to obtain a feature map corresponding to the image and converting the feature map into corresponding feature vectors;
the character recognition module 3 is connected with the image feature extraction module 2 and is used for carrying out corresponding character type recognition on each frame number character in the frame number character strings in the image according to the feature vector and based on a preset character recognition model, and finally obtaining a character recognition result of the frame number character strings.
In this technical scheme, the end-to-end vehicle frame number recognition system performs convolution on the image through the convolutional neural network to obtain the feature map corresponding to the input image. Referring specifically to fig. 6, the convolutional neural network preferably employs a VGGNet or ResNet network architecture existing in the prior art to extract image features. The network architecture includes convolution layers, ReLU layers and batch normalization layers. The input of the convolutional neural network is an image containing the whole frame number character string, with an image size of 3 x 448 x 448. The input image is first processed by a convolution layer with a 3 x 3 convolution kernel, which outputs a 64 x 448 x 448 feature map; then, after four further stages of convolutional feature extraction, feature maps with sizes of 64 x 224 x 224, 128 x 112 x 112, 256 x 64 x 64 and 512 x 32 x 32 are output in turn. Finally, the 512 x 32 x 32 feature map is compressed into a 1024-dimensional feature vector, which encodes the position and shape features of the frame number character string in the input image; this feature vector is then input into the subsequent character recognition network to carry out the character recognition process for the frame number character string.
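The compression of the final 512 x 32 x 32 feature map into a 1024-dimensional feature vector can be illustrated with a small sketch. The patent does not specify the compression operation, so this example assumes per-channel mean and max pooling (512 channels x 2 statistics = 1024 components) purely for illustration; the names `feature_map` and `compress_to_vector` are hypothetical, and random numbers stand in for real convolution outputs.

```python
import random

random.seed(0)

# Stand-in for the last convolutional stage's output: 512 channels of 32 x 32.
C, H, W = 512, 32, 32
feature_map = [[[random.random() for _ in range(W)] for _ in range(H)]
               for _ in range(C)]

def compress_to_vector(fmap):
    """Flatten a C x H x W feature map into a 2*C-dimensional vector
    by taking each channel's mean and max (an ASSUMED pooling scheme)."""
    vec = []
    for channel in fmap:
        flat = [v for row in channel for v in row]
        vec.append(sum(flat) / len(flat))  # channel mean
        vec.append(max(flat))              # channel max
    return vec

feature_vector = compress_to_vector(feature_map)
print(len(feature_vector))  # 1024
```

Whatever pooling or flattening a real implementation uses, the essential property is the same: a fixed-length vector summarizing the whole image, suitable as shared input to all 17 character recognition units.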
It should be noted that the convolutional-neural-network image feature extraction method adopted by the end-to-end frame number recognition system exists in the prior art and is not within the scope of the claimed invention, so the specific process by which the system extracts the feature map of the input image is not described herein.
In this technical scheme, the end-to-end frame number recognition system also converts the feature map corresponding to the image into the feature vector through the convolutional neural network. The method of converting a feature map into a feature vector with a convolutional neural network likewise exists in the prior art and is not within the scope of the claimed invention, so the detailed conversion process is not described herein.
Referring to fig. 2, the character recognition module 3 includes a plurality of character recognition units 31, each character recognition unit 31 is configured to recognize a character type corresponding to a frame number character on a designated character position in the frame number character string,
referring specifically to fig. 3, each character recognition unit 31 specifically includes:
a designated-character-position feature positioning subunit 311, configured to locate, in the feature vector, the component features corresponding to the frame number character on the designated character position, and obtain a positioning result;
a prediction vector generation subunit 312, connected with the designated-character-position feature positioning subunit 311, configured to convert the feature vector into a corresponding prediction vector according to the positioning result;
a prediction probability calculation subunit 313, connected with the prediction vector generation subunit 312, configured to calculate the component value corresponding to each component in the prediction vector based on the character recognition model;
and a character type recognition subunit 314, connected with the prediction probability calculation subunit 313, configured to identify the character type corresponding to the component with the maximum component value in the prediction vector based on the character recognition model, take the recognized character type as the character type of the frame number character on the designated character position, and output a character type recognition result for the frame number character on the designated character position.
It is emphasized here that one character recognition unit recognizes the frame number character on only one designated character position in the frame number character string. For example, the first character recognition unit recognizes the frame number character on the first character position, the second character recognition unit recognizes the frame number character on the second character position, and so on.
Since the frame number is generally composed of 17 characters, each a letter or a digit, the number of character recognition units 31 is preferably set to 17, each character recognition unit 31 being used for recognizing the character type corresponding to the frame number character on one of the designated character positions in the frame number character string.
The characters cover 36 character types: the 26 English letters and the 10 digits 0 to 9.
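The 36-way character inventory described above can be written down directly. This is an illustrative sketch: the names `CHARSET`, `char_to_index` and `index_to_char` do not come from the patent, and the digits are assumed to be 0 through 9 with letters ordered A through Z.

```python
import string

# 26 English letters followed by 10 digits: 36 character classes in total.
CHARSET = string.ascii_uppercase + string.digits  # "A"-"Z" then "0"-"9"

# Index maps between character classes and prediction-vector components.
char_to_index = {ch: i for i, ch in enumerate(CHARSET)}
index_to_char = {i: ch for ch, i in char_to_index.items()}

print(len(CHARSET), char_to_index["A"], char_to_index["0"])  # 36 0 26
```

With this mapping, the component index of the maximum value in a 36-dimensional prediction vector translates directly into a letter or digit.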
In the above technical scheme, the process of the end-to-end frame number recognition system recognizing the character type of the frame number character is detailed as follows:
Fig. 7 shows a network architecture diagram of the convolutional neural network used by the end-to-end frame number recognition system of the present invention to recognize the character type corresponding to the frame number character on a designated character position in the frame number character string. Referring to fig. 7, the network architecture is composed of a first fully connected layer, a second fully connected layer and a ReLU layer.
The 1024-dimensional feature vector output by the system yields a 36-dimensional prediction vector after feature extraction by the first fully connected layer 100, the second fully connected layer 200 and the ReLU layer 300. The 36 components of the 36-dimensional prediction vector represent the 26 English letters and the 10 digits 0 to 9, respectively. The component value of each of the 36 components represents the predicted probability that the character is the corresponding English letter or digit.
Specifically, the 1024-dimensional feature vector output by the system is fed simultaneously into the 17 character recognition units. Each character recognition unit then extracts the component features of the frame number character on the designated character position it is responsible for, and outputs the prediction vector corresponding to that character. The prediction vector is the 36-dimensional vector described above; the component value corresponding to each of its 36 components is calculated according to the preset character recognition model (that is, the predicted probability that the character is the corresponding English letter or digit), and the character type of the component with the largest component value is output as the prediction result. For example, if that character type is the character "A", the frame number character on the designated character position handled by this character recognition unit is finally recognized as "A".
Here, each character recognition unit can locate, according to the preset character recognition model, the component features associated with the frame number character on its designated character position within the 1024-dimensional feature vector, and ignore the other feature portions of the 1024-dimensional feature vector.
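The data flow just described, two fully connected layers with a ReLU in between, replicated once per character position, can be sketched as follows. This is a toy model: the weights are random and untrained, the hidden width of 64 is an assumption (the patent gives no layer sizes), and the helper names are invented. It only demonstrates how 17 heads map one shared 1024-dimensional feature vector to 17 independent 36-dimensional prediction vectors and decode them by arg-max.

```python
import random
import string

random.seed(1)
CHARSET = string.ascii_uppercase + string.digits  # assumed 36-class charset

def linear(vec, weights):
    """Apply a fully connected layer: weights is a list of output rows."""
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

def relu(vec):
    return [max(0.0, v) for v in vec]

def make_head(in_dim=1024, hidden=64, out_dim=36):
    """One character-recognition head: FC -> ReLU -> FC.
    Random weights stand in for trained parameters."""
    w1 = [[random.gauss(0, 0.05) for _ in range(in_dim)] for _ in range(hidden)]
    w2 = [[random.gauss(0, 0.05) for _ in range(hidden)] for _ in range(out_dim)]
    return lambda fv: linear(relu(linear(fv, w1)), w2)

heads = [make_head() for _ in range(17)]  # one head per character position
feature_vector = [random.random() for _ in range(1024)]

# Each head produces a 36-dim prediction vector from the SHARED feature
# vector; the arg-max component gives that position's character.
vin = "".join(CHARSET[max(range(36), key=head(feature_vector).__getitem__)]
              for head in heads)
print(len(vin))  # 17
```

Because each head is tied to one fixed position, the 17 outputs concatenate directly into an ordered 17-character string with no extra alignment step.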
Preferably, the end-to-end vehicle frame number recognition system further comprises a character recognition model training module 4 connected with the character recognition module 3 and used for training and forming the character recognition model according to the character recognition result.
The loss function used for training the character recognition model is calculated by the following formula:
L = -Σ_{c=1}^{M} y_c log(p_c)
in the above formula, L is used to represent the loss function;
M is used to represent the number of character categories to which each frame number character may belong (namely the 26 English letters and the 10 digits 0 to 9, so M = 36);
y_c is an indicator variable indicating whether the character class predicted by the system is consistent with the true character class: y_c has a value of 1 if they are consistent, and 0 otherwise;
p_c is used to represent the predicted probability that the training sample is of character class c.
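The loss above is the standard categorical cross-entropy, which takes only a few lines to write out. The predicted distribution below is a hypothetical example chosen so the loss equals -log(0.5); it is not data from the patent.

```python
import math

def cross_entropy(y_onehot, p):
    """L = -sum_c y_c * log(p_c) with y one-hot over M classes."""
    return -sum(y * math.log(pc) for y, pc in zip(y_onehot, p) if y)

M = 36
p = [0.5] + [0.5 / (M - 1)] * (M - 1)   # hypothetical predicted probabilities
y = [1] + [0] * (M - 1)                 # true class is index 0

loss = cross_entropy(y, p)
print(round(loss, 4))  # 0.6931  (= -log(0.5))
```

During training, one such loss term is computed per character position and the model parameters are updated to drive the probability of the true class toward 1.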
The invention optimizes the character recognition model by the Adam optimization method existing in the prior art.
The invention also provides an end-to-end vehicle frame number identification method based on deep learning, which is realized by applying the end-to-end vehicle frame number identification system, referring to fig. 4 and 8, and comprises the following steps:
s1, inputting an image containing a character string of the whole frame number by an end-to-end frame number recognition system;
s2, extracting image features corresponding to the images by the end-to-end frame number recognition system, and obtaining a feature map corresponding to the images;
s3, converting the feature map into corresponding feature vectors by the end-to-end vehicle frame number identification system;
and S4, the end-to-end vehicle frame number recognition system performs corresponding character type recognition on each character in the vehicle frame number character strings in the image simultaneously according to the feature vectors and based on a preset character recognition model, and finally recognizes to obtain a character recognition result of the vehicle frame number character strings.
Referring to fig. 5, in step S4, the process of the end-to-end frame number recognition system for recognizing the character type corresponding to each character in the frame number character string specifically includes the following steps:
Step S41, the end-to-end frame number recognition system simultaneously locates, in the feature vector and based on the preset character recognition model, the components corresponding to the frame number character on each designated character position in the frame number character string, obtaining a plurality of positioning results, one for the frame number character on each designated character position;
Step S42, the end-to-end frame number recognition system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
Step S43, the end-to-end frame number recognition system calculates the component value corresponding to each component in each prediction vector based on the preset character recognition model;
Step S44, the end-to-end frame number recognition system identifies, based on the preset character recognition model, the character type corresponding to the component with the maximum component value in each prediction vector, takes the recognized character type as the character type of the frame number character on the corresponding designated character position in the frame number character string, and finally obtains the character recognition result of the frame number character string.
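Steps S43 and S44 can be illustrated in miniature. The patent does not name the normalization that turns component values into probabilities, so a standard softmax is assumed here; the 17 prediction vectors are random placeholders standing in for real network outputs, and all names are illustrative.

```python
import math
import random
import string

random.seed(2)
CHARSET = string.ascii_uppercase + string.digits  # assumed 36-class charset

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# One 36-dim prediction vector per character position (placeholders).
prediction_vectors = [[random.gauss(0, 1) for _ in range(36)]
                      for _ in range(17)]

result = ""
for pv in prediction_vectors:
    probs = softmax(pv)                           # step S43: component values
    best = max(range(36), key=probs.__getitem__)  # step S44: max component
    result += CHARSET[best]

print(len(result))  # 17
```

Since every position is decoded independently, all 17 arg-max operations can run in parallel, which is what makes the recognition single-pass rather than sequential.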
In the above technical solution, in step S41, the positioning method adopted by the system for positioning the component corresponding to the frame number character on the designated character position in the feature vector is obtained by recognition of the pre-trained character recognition model, and the recognition positioning method is a positioning method existing in the prior art, and the positioning method is not the scope of the invention claimed, so the specific method process of positioning the component based on the convolutional neural network by the system is not described herein.
In step S42, the prediction vector is a 36-dimensional vector whose 36 components represent the predicted probabilities that the frame number character on the designated character position is each of the 26 English letters or the 10 digits 0 to 9 (i.e., the possible character classes of the frame number character); each of the 36 components carries the component value for its corresponding character class.
In step S42, the method of locating the position of the character feature corresponding to the frame number character of the designated character bit in the 1024-dimensional feature vector is the existing locating method, which is not described herein, and is not the scope of the invention.
In step S43, the method of calculating the component values corresponding to the components in the 36-dimensional feature vector by the system is the same as the method existing in the prior art, and preferably the calculation is performed by using the convolutional neural network, and the specific calculation process is not described herein.
In addition, it should be emphasized that each character recognition unit only recognizes the frame number character type of its designated character position in the frame number character string. For example, the first character recognition unit only recognizes the character type corresponding to the frame number character on the first designated character position, the second character recognition unit only recognizes the character type corresponding to the frame number character on the second designated character position, and so on. Therefore, in the recognition result obtained when the system performs character recognition on the frame number character string, the characters of the frame number are arranged in order, and no disordered arrangement occurs.
In conclusion, the end-to-end vehicle frame number recognition system provided by the invention automatically recognizes the frame number characters in an input image containing the whole frame number string. The recognition process is fast and efficient with high accuracy, solving the technical problems of low recognition efficiency and susceptibility to missed and false detections in the traditional manual recognition approach.
It should be understood that the above description covers only preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that various modifications, equivalents, and variations can be made to the invention; such modifications fall within the scope of the invention provided they do not depart from its spirit. In addition, some terms used in the specification and claims of the present application are not limiting but are used merely for convenience of description.

Claims (6)

1. An end-to-end vehicle frame number identification system based on deep learning for automatically identifying a vehicle frame number, comprising:
the image input module is used for inputting an image containing the whole frame number character string;
the image feature extraction module is connected with the image input module and is used for extracting image features corresponding to the image to obtain a feature map corresponding to the image and converting the feature map into corresponding feature vectors;
the character recognition module is connected with the image feature extraction module and is used for carrying out corresponding character type recognition on each frame number character in the frame number character string in the image based on a preset character recognition model according to the feature vector, and finally recognizing to obtain a character recognition result of the frame number character string;
the character recognition module comprises a plurality of character recognition units, each character recognition unit is used for recognizing the character type corresponding to the frame number character on one appointed character position in the frame number character string,
each character recognition unit specifically comprises:
a designated character position character feature positioning subunit, configured to position, in the feature vector, a component feature corresponding to the frame number character associated with the designated character position, and obtain a positioning result;
the prediction vector generation subunit is connected with the appointed character bit character feature positioning subunit and is used for converting the feature vector into a corresponding prediction vector according to the positioning result;
a prediction probability calculation subunit, connected to the prediction vector generation subunit, for calculating component values corresponding to components in the prediction vector based on the character recognition model;
and the character type recognition subunit is connected with the prediction probability calculation subunit and is used for recognizing the character type corresponding to the component corresponding to the maximum component value in the prediction vector based on the character recognition model, taking the recognized character type as the character type corresponding to the frame number character on the appointed character bit and outputting a character type recognition result of the frame number character on the appointed character bit.
2. The end-to-end car frame number identification system of claim 1, wherein the end-to-end car frame number identification system convolutionally identifies the image via a convolutional neural network to obtain the feature map corresponding to the image.
3. The end-to-end car frame number identification system of claim 2, wherein the end-to-end car frame number identification system converts the feature map corresponding to the image into the feature vector via the convolutional neural network.
4. The end-to-end frame number recognition system of claim 1, wherein the number of said character recognition units is 17, each of said character recognition units respectively recognizing said character type corresponding to said frame number character at one of said designated character positions in the 17-character frame number string.
5. The end-to-end vehicle frame number identification system of claim 1 further comprising a character recognition model training module coupled to said character recognition module for training to form said character recognition model based on said character recognition result.
6. An end-to-end car frame number identification method based on deep learning, which is realized by applying the end-to-end car frame number identification system according to any one of claims 1-5, and is characterized by comprising the following steps:
step S1, inputting an image containing a whole frame number character string by the end-to-end frame number recognition system;
s2, extracting image features corresponding to the images by the end-to-end frame number recognition system, and obtaining a feature map corresponding to the images;
s3, the end-to-end vehicle frame number identification system converts the feature map into corresponding feature vectors;
step S4, the end-to-end frame number recognition system performs corresponding character type recognition on each character in the frame number character string in the image simultaneously according to the feature vector and based on a preset character recognition model, and finally recognizes and obtains a character recognition result of the frame number character string;
in step S4, the process of the end-to-end frame number recognition system recognizing the character type corresponding to each character in the frame number character string specifically includes the following steps:
step S41, the end-to-end vehicle frame number recognition system locates components corresponding to the vehicle frame number characters on each appointed character position in the vehicle frame number character string in the feature vector based on the preset character recognition model, and obtains a plurality of positioning results related to the vehicle frame number characters on each appointed character position;
step S42, the end-to-end frame number identification system converts the same feature vector into a plurality of corresponding prediction vectors according to each positioning result;
step S43, the end-to-end vehicle frame number recognition system calculates a component value corresponding to each component in each predictive vector based on the preset character recognition model;
step S44, the end-to-end frame number recognition system recognizes a character type corresponding to the component corresponding to the maximum component value in each prediction vector based on the preset character recognition model, takes the recognized character type as a character type corresponding to the frame number character on the corresponding designated character position, and finally recognizes and obtains a character recognition result of the frame number character string.
CN201911075932.1A 2019-11-06 2019-11-06 End-to-end vehicle frame number identification system and identification method based on deep learning Active CN110837838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911075932.1A CN110837838B (en) 2019-11-06 2019-11-06 End-to-end vehicle frame number identification system and identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911075932.1A CN110837838B (en) 2019-11-06 2019-11-06 End-to-end vehicle frame number identification system and identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN110837838A CN110837838A (en) 2020-02-25
CN110837838B true CN110837838B (en) 2023-07-11

Family

ID=69576170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911075932.1A Active CN110837838B (en) 2019-11-06 2019-11-06 End-to-end vehicle frame number identification system and identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN110837838B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215221A (en) * 2020-09-22 2021-01-12 国交空间信息技术(北京)有限公司 Automatic vehicle frame number identification method

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105894045A (en) * 2016-05-06 2016-08-24 电子科技大学 Vehicle type recognition method with deep network model based on spatial pyramid pooling
CN109840524A (en) * 2019-01-04 2019-06-04 平安科技(深圳)有限公司 Kind identification method, device, equipment and the storage medium of text
WO2019177734A1 (en) * 2018-03-13 2019-09-19 Recogni Inc. Systems and methods for inter-camera recognition of individuals and their properties

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US8156116B2 (en) * 2006-07-31 2012-04-10 Ricoh Co., Ltd Dynamic presentation of targeted information in a mixed media reality recognition system
CN102509112A (en) * 2011-11-02 2012-06-20 珠海逸迩科技有限公司 Number plate identification method and identification system thereof
US20140270384A1 (en) * 2013-03-15 2014-09-18 Mitek Systems, Inc. Methods for mobile image capture of vehicle identification numbers
CN105760891A (en) * 2016-03-02 2016-07-13 上海源庐加佳信息科技有限公司 Chinese character verification code recognition method
US10163022B1 (en) * 2017-06-22 2018-12-25 StradVision, Inc. Method for learning text recognition, method for recognizing text using the same, and apparatus for learning text recognition, apparatus for recognizing text using the same
CN107423732A (en) * 2017-07-26 2017-12-01 大连交通大学 Vehicle VIN recognition methods based on Android platform
US10255514B2 (en) * 2017-08-21 2019-04-09 Sap Se Automatic identification of cloned vehicle identifiers
CN107704857B (en) * 2017-09-25 2020-07-24 北京邮电大学 End-to-end lightweight license plate recognition method and device
CN109460765A (en) * 2018-09-25 2019-03-12 平安科技(深圳)有限公司 Driving license is taken pictures recognition methods, device and the electronic equipment of image in natural scene
CN109726715A (en) * 2018-12-27 2019-05-07 信雅达系统工程股份有限公司 A kind of character image serializing identification, structural data output method
CN109829453B (en) * 2018-12-29 2021-10-12 天津车之家数据信息技术有限公司 Method and device for recognizing characters in card and computing equipment
CN110378331B (en) * 2019-06-10 2022-10-04 南京邮电大学 End-to-end license plate recognition system and method based on deep learning

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN105894045A (en) * 2016-05-06 2016-08-24 电子科技大学 Vehicle type recognition method with deep network model based on spatial pyramid pooling
WO2019177734A1 (en) * 2018-03-13 2019-09-19 Recogni Inc. Systems and methods for inter-camera recognition of individuals and their properties
CN109840524A (en) * 2019-01-04 2019-06-04 平安科技(深圳)有限公司 Kind identification method, device, equipment and the storage medium of text

Also Published As

Publication number Publication date
CN110837838A (en) 2020-02-25

Similar Documents

Publication Publication Date Title
Zhang et al. Multi-scale attention with dense encoder for handwritten mathematical expression recognition
CN109543667B (en) Text recognition method based on attention mechanism
CN111428718B (en) Natural scene text recognition method based on image enhancement
CN109993164A (en) A kind of natural scene character recognition method based on RCRNN neural network
CN109522558B (en) Deep learning-based Chinese character-staggering correction method
CN110210433B (en) Container number detection and identification method based on deep learning
CN111274804A (en) Case information extraction method based on named entity recognition
CN113254654B (en) Model training method, text recognition method, device, equipment and medium
CN109977950A (en) A kind of character recognition method based on mixing CNN-LSTM network
CN114998673A (en) Dam defect time sequence image description method based on local self-attention mechanism
CN105117740A (en) Font identification method and device
CN112070114A (en) Scene character recognition method and system based on Gaussian constraint attention mechanism network
CN114299512A (en) Zero-sample small seal character recognition method based on Chinese character etymon structure
CN110837838B (en) End-to-end vehicle frame number identification system and identification method based on deep learning
CN107545281B (en) Single harmful gas infrared image classification and identification method based on deep learning
CN114092815A (en) Remote sensing intelligent extraction method for large-range photovoltaic power generation facility
CN115658955A (en) Cross-media retrieval and model training method, device, equipment and menu retrieval system
CN112926323B (en) Chinese named entity recognition method based on multistage residual convolution and attention mechanism
CN112990196B (en) Scene text recognition method and system based on super-parameter search and two-stage training
CN112949637A (en) Bidding text entity identification method based on IDCNN and attention mechanism
CN111079749B (en) End-to-end commodity price tag character recognition method and system with gesture correction
CN112380861A (en) Model training method and device and intention identification method and device
CN116433934A (en) Multi-mode pre-training method for generating CT image representation and image report
CN115984876A (en) Text recognition method and device, electronic equipment, vehicle and storage medium
CN114548116A (en) Chinese text error detection method and system based on language sequence and semantic joint analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant