CN112070087A - Train number identification method and device with end position and readable storage medium - Google Patents

Train number identification method and device with end position and readable storage medium

Info

Publication number
CN112070087A
Authority
CN
China
Prior art keywords
end position
information
image
position information
train
Prior art date
Legal status
Granted
Application number
CN202010961523.8A
Other languages
Chinese (zh)
Other versions
CN112070087B (en)
Inventor
张渝
彭建平
赵波
王祯
黄炜
章祥
马莉
王小伟
胡继东
史亚利
Current Assignee
Chengdu Leading Software Technology Co ltd
Original Assignee
Chengdu Leading Software Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Leading Software Technology Co ltd
Priority to CN202010961523.8A
Publication of CN112070087A
Application granted
Publication of CN112070087B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63 Scene text, e.g. street names
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention relate to the technical field of image recognition, and in particular disclose a train number identification method and device with end position, and a readable storage medium. The method comprises: acquiring a complete train number image of a train; identifying the train number information and end position information with a deep learning algorithm; identifying the end position information with a dedicated end position identification algorithm; and judging whether the end position information identified by the end position identification algorithm is correct. If it is correct, the train number information is output together with that end position information; if it is wrong, the final end position information is derived from the end position information identified by the deep learning algorithm and by the end position identification algorithm, and is output together with the train number information.

Description

Train number identification method and device with end position and readable storage medium
Technical Field
The invention relates to the technical field of image recognition, and in particular to a train number identification method with end position, a train number identification device with end position, and a readable storage medium.
Background
The locomotive number of a train includes, in addition to a string of letters and digits, end position information consisting of the Roman numerals I and II or the letters A and B. The end position information is located at the end of, above, or below the alphanumeric number. During train operation, the locomotive number is identified by on-board equipment, trackside equipment, ground reading equipment, and the like; the identification result guides train maintenance and is used to look up the position of the actual corresponding components. Daily train maintenance therefore relies on the end position information as well as on the alphanumeric string.
In current railway systems, image recognition is used to identify the locomotive number automatically, which avoids the number becoming unreadable when an electronic tag is damaged or lost. However, image-based number recognition, whether traditional pattern recognition or deep learning, requires clear images shot from consistent angles. Under these objective constraints, and because there are many train types and the end position marks vary greatly in size, position, and printing style, the accuracy of end position identification still needs to be improved.
Disclosure of Invention
In view of the above, the present application provides a train number identification method with end position, a corresponding device, and a readable storage medium, which can solve or at least partially solve the problems described above.
To solve these technical problems, the invention provides a train number identification method with end position, comprising the following steps:
acquiring a complete train number image of a train, wherein the complete train number image comprises train number information and end position information of the train;
recognizing the car number information and the end position information in the complete car number image by adopting a deep learning algorithm;
identifying end position information in the complete car number image by adopting an end position identification algorithm;
judging whether the end position information identified by the end position identification algorithm is correct; in response to the end position information being correct, taking it as the final end position information; in response to the end position information being wrong, generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm;
and outputting the train number information and the final end position information of the train.
Preferably, the method for identifying the end position information in the complete car number image by using an end position identification algorithm includes:
carrying out image graying processing and image smoothing processing on the complete car number image to obtain a preprocessed image;
positioning and cutting an end position information area in the preprocessed image to obtain an end position area image;
carrying out binarization processing on the end position area image, and then carrying out feature extraction to obtain an end position feature image;
and identifying the end position characters in the end position characteristic image to obtain end position information.
Preferably, the method for performing image graying processing and image smoothing processing on the complete car number image to obtain the preprocessed image comprises:
assuming f(i, j) is the gray value of the point with coordinates (i, j) in the complete car number image, and R(i, j), G(i, j), B(i, j) are the values of the red, green, and blue components of that point, assigning different weights to R, G, and B gives the gray value of each point in the complete car number image by the following formula:
f(i,j) = 0.30×R(i,j) + 0.59×G(i,j) + 0.11×B(i,j);
smoothing the complete car number image with median filtering: for the two-dimensional sequence {x(i,j)}, median filtering with a two-dimensional window can be expressed as:
y(i,j) = med{ x(i+r, j+s) : (r, s) ∈ A };
A is the filter window, and A is 3×3 in size.
Preferably, the method for locating and cropping the end position information area in the preprocessed image to obtain the end position area image includes:
performing edge recognition with the Sobel operator, which comprises two 3×3 kernels, one horizontal and one vertical; performing plane convolution of each kernel with the preprocessed image yields the horizontal and vertical brightness difference approximations respectively;
if f represents the preprocessed image, and Gx and Gy represent the detected horizontal and vertical edge images respectively, the formulas are as follows:

     [ -1  0  +1 ]              [ +1  +2  +1 ]
Gx = [ -2  0  +2 ] * f     Gy = [  0   0   0 ] * f
     [ -1  0  +1 ]              [ -1  -2  -1 ]

the horizontal and vertical gradient approximations at each pixel of the edge-detected images are combined by the following formula to calculate the gradient magnitude:
G = √(Gx² + Gy²);
the gradient direction is calculated with the following formula:
Θ = arctan(Gy / Gx).
Preferably, the method for binarizing the end position area image and then extracting features to obtain the end position feature image includes:
obtaining an optimal threshold by an iterative method and binarizing the end position area image to obtain a binarized image, wherein the iterative method is as follows: initialize the threshold to half the sum of the maximum and minimum gray levels; at each iteration, divide the end position area image into foreground and background with the current threshold, compute the average gray level of all foreground pixels and of all background pixels, and set the new threshold to half the sum of these two averages; when the new threshold equals the previously computed one, the iteration has converged and the optimal threshold is obtained;
extracting character features from the proportions of white pixels in the binarized image: the character is divided into 16 equal parts, and the proportion of white pixels of the binarized image in each part gives the first 16 features; the proportions of white pixels in four vertical regions give the last 4 features.
Preferably, the method for identifying the end position characters in the end position feature image to obtain the end position information includes:
identifying the cropped end position feature images, all of the same size, with a neural network; the neural network consists of basic neurons and comprises an input layer, a hidden layer, and an output layer, wherein the input layer has 20 nodes, the output layer has 4 nodes, and there is 1 hidden layer; after training on end position samples, the network yields an end position classifier, through which the end position information is identified.
Preferably, the method for judging whether the end position information identified by the end position identification algorithm is correct, taking it as the final end position information in response to it being correct, and generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm in response to it being wrong, includes:
judging whether the identified head end position information and tail end position information of the same train form a matching pair,
in response to the head end position information matching the tail end position information, taking the first end position information obtained as the train passes as the final end position information;
in response to the head end position information not matching the tail end position information, comparing them with the head and tail end position information identified by the deep learning algorithm: if the two head end position results are the same, the tail end position information is wrong and is assigned the value corresponding to the head end position information; if the two tail end position results are the same, the head end position information is wrong and is assigned the value corresponding to the tail end position information; finally, the first end position information obtained as the train passes is taken as the final end position information.
The invention also provides a train number identification device with an end position, which comprises:
the complete train number image acquisition module is used for acquiring a complete train number image of the train, wherein the complete train number image comprises train number information and end position information of the train;
the vehicle number end position information identification module is used for identifying the vehicle number information and the end position information in the complete vehicle number image by adopting a deep learning algorithm;
the end position information identification module is used for identifying end position information in the complete car number image by adopting an end position identification algorithm;
the end position information judging module is used for judging whether the end position information identified by the end position identification algorithm is correct; in response to the end position information being correct, taking it as the final end position information; in response to the end position information being wrong, generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm;
and the train number end position information output module is used for outputting the train number information and the final end position information of the train.
Preferably, the end position information identification module includes:
the image preprocessing unit is used for carrying out image graying processing and image smoothing processing on the complete car number image to obtain a preprocessed image;
the end position positioning unit is used for positioning and cutting an end position information area in the preprocessed image to obtain an end position area image;
the characteristic extraction unit is used for carrying out binarization processing on the end position area image and then carrying out characteristic extraction to obtain an end position characteristic image;
and the character recognition unit is used for recognizing the end position characters in the end position characteristic image to obtain end position information.
Preferably, the end position information judging module includes:
an end position judging unit, used for judging whether the identified head end position information and tail end position information of the same train form a matching pair,
a matching processing unit, used for, in response to the head end position information matching the tail end position information, taking the first end position information obtained as the train passes as the final end position information;
and a non-matching processing unit, used for, in response to the head end position information not matching the tail end position information, comparing them with the head and tail end position information identified by the deep learning algorithm: if the two head end position results are the same, the tail end position information is wrong and is assigned the value corresponding to the head end position information; if the two tail end position results are the same, the head end position information is wrong and is assigned the value corresponding to the tail end position information; finally, the first end position information obtained as the train passes is taken as the final end position information.
The invention also provides a train number identification device with an end position, which comprises:
a memory for storing a computer program;
and the processor is used for executing the computer program to realize the steps of the train number identification method with the end position.
The invention also provides a readable storage medium, which stores a computer program, and the computer program is executed by a processor to realize the steps of the train number identification method with the end position.
Compared with the prior art, the beneficial effects of the method are as follows. The train number identification method with end position acquires a complete train number image of a train; identifies the train number information and end position information with a deep learning algorithm; identifies the end position information with a dedicated end position identification algorithm; and judges whether the end position information identified by the end position identification algorithm is correct. If it is correct, the train number information is output together with that end position information; if it is wrong, the final end position information is derived from the end position information identified by the deep learning algorithm and by the end position identification algorithm, and is output together with the train number information. Combining the two recognizers in this way improves the identification accuracy of the end position information.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a train number identification method with an end position according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a method for identifying end position information in a complete car number image by using an end position identification algorithm according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for generating final end position information according to end position information identified by a deep learning algorithm and an end position identification algorithm according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a train number identification device with an end position according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without any creative work belong to the protection scope of the present invention.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention will be further described in detail with reference to the accompanying drawings and specific embodiments.
Because there are many types of trains, the height (position), character size, and style of the end position information vary greatly. In practice, the end position photograph may also be unevenly lit, and the captured numbers differ in size, angle, color, and sharpness. For car number image processing, both traditional pattern recognition and deep-learning-based number recognition impose certain requirements on the form and structure of the image. An algorithm adapted to one scene therefore does not suit other scenes, and the current correct identification rate for end position information is not high.
As shown in fig. 1, an embodiment of the present invention provides a train number identification method with an end position, including:
S11: acquiring a complete train number image of the train, wherein the complete train number image comprises train number information and end position information of the train;
S12: identifying the car number information and the end position information in the complete car number image by adopting a deep learning algorithm;
S13: identifying the end position information in the complete car number image by adopting an end position identification algorithm;
S14: judging whether the end position information identified by the end position identification algorithm is correct; in response to it being correct, taking it as the final end position information; in response to it being wrong, generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm;
S15: outputting the train number information and the final end position information of the train.
Specifically, to address the above problems, the invention builds, on top of the currently mature deep learning recognition technology, an algorithm that improves the identification accuracy of the number's end position information; used together with the deep learning number identification algorithm, it improves the identification accuracy of the end position information.
It should be noted that, in S12, the car number information and end position information in the complete car number image are identified by a deep learning algorithm that is mature today; its main steps are image preprocessing, number region localization, and character recognition, and current deep learning algorithms already achieve high accuracy when identifying the number itself, without the end position information.
As shown in fig. 2, it should be noted that the method for identifying the end position information in the complete car number image by using the end position identification algorithm in S13 includes:
S131: carrying out image graying processing and image smoothing processing on the complete car number image to obtain a preprocessed image;
S132: positioning and cutting the end position information area in the preprocessed image to obtain an end position area image;
S133: carrying out binarization processing on the end position area image, and then carrying out feature extraction to obtain an end position feature image;
S134: identifying the end position characters in the end position feature image to obtain the end position information.
Specifically, the method for identifying the end position information in the complete car number image with the end position identification algorithm comprises image preprocessing, number end position localization, end position feature extraction, and character recognition. Image preprocessing is based on image graying and image smoothing, which reduce the influence of noise and poor shooting conditions and highlight the features of the number's end position; the end position area of the number is located with edge detection; the image cropped at the end position is binarized and features are then extracted; a classifier is trained on the feature information to perform end position identification, and a neural network identifies the characters.
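For orientation, the following minimal Python sketch composes these steps end to end. It is an illustration under assumptions, not the patented implementation: all helper names (preprocess, sobel_gradients, binarize, end_position_features, clf) are assumed names defined in the sketches that follow, and crop_end_region stands in for the localization step and is hypothetical.

    import numpy as np

    def identify_end_position(full_image: np.ndarray) -> str:
        """Pipeline sketch; helpers are defined in the sketches below,
        except crop_end_region, a hypothetical localization helper."""
        pre = preprocess(full_image)                  # graying + median filtering
        magnitude, _ = sobel_gradients(pre)           # Sobel edge map
        region = crop_end_region(magnitude, pre)      # hypothetical: locate and crop the area
        binary = binarize(region)                     # iterative-threshold binarization
        feats = end_position_features(binary)         # 20-dimensional feature vector
        return clf.predict(feats.reshape(1, -1))[0]   # neural network classifier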
It should be noted that, in S131, the image graying processing and the image smoothing processing are performed on the complete car number image, and the method for obtaining the preprocessed image includes:
assuming f(i, j) is the gray value of the point with coordinates (i, j) in the complete car number image, and R(i, j), G(i, j), B(i, j) are the values of the red, green, and blue components of that point, assigning different weights to R, G, and B gives the gray value of each point in the complete car number image by the following formula:
f(i,j) = 0.30×R(i,j) + 0.59×G(i,j) + 0.11×B(i,j);
smoothing the complete car number image with median filtering: for the two-dimensional sequence {x(i,j)}, median filtering with a two-dimensional window can be expressed as:
y(i,j) = med{ x(i+r, j+s) : (r, s) ∈ A };
A is the filter window, and A is 3×3 in size.
Specifically, graying preprocessing filters out color information and reduces the amount of computation. The specific processing is as follows: let f(i, j) be the gray value at coordinates (i, j) in the two-dimensional image, and let R(i, j), G(i, j), B(i, j) be the values of the red, green, and blue components of the point at coordinates (i, j). According to the sensitivity of the human eye to R, G, and B, different weights are assigned, and the gray value at each position of the image is obtained by the following formula:
f(i,j) = 0.30×R(i,j) + 0.59×G(i,j) + 0.11×B(i,j).
To accurately identify the edge information of the number's end position, median filtering is used to enhance the image smoothing and to remove outliers and salt-and-pepper noise.
For the two-dimensional sequence {x(i,j)}, median filtering with a two-dimensional window can be expressed as:
y(i,j) = med{ x(i+r, j+s) : (r, s) ∈ A };
A is the filter window; the invention uses a 3×3 filter window.
It should be noted that, in S132, the method for locating and cropping the end position information area in the preprocessed image to obtain the end position area image includes:
performing edge recognition with the Sobel operator, which comprises two 3×3 kernels, one horizontal and one vertical; performing plane convolution of each kernel with the preprocessed image yields the horizontal and vertical brightness difference approximations respectively;
if f represents the preprocessed image, and Gx and Gy represent the detected horizontal and vertical edge images respectively, the formulas are as follows:

     [ -1  0  +1 ]              [ +1  +2  +1 ]
Gx = [ -2  0  +2 ] * f     Gy = [  0   0   0 ] * f
     [ -1  0  +1 ]              [ -1  -2  -1 ]

the horizontal and vertical gradient approximations at each pixel of the edge-detected images are combined by the following formula to calculate the gradient magnitude:
G = √(Gx² + Gy²);
the gradient direction is calculated with the following formula:
Θ = arctan(Gy / Gx).
specifically, the end position of the vehicle number is positioned by adopting edge detection: in order to cut out an accurate car number region, an end position region needs to be identified, namely, the edge of an end position character is identified, the edge detection depends on sudden change of the character and the background in the gradient direction, and a Sobel operator is adopted for edge identification. The Sobel operator includes two 3 × 3 matrixes, which are horizontal and vertical, respectively, and performs planar convolution with the image to obtain horizontal and vertical brightness difference approximations. If f represents the original image, GxAnd GyRepresents the images detected by the transverse and longitudinal edges respectively, and the formula is as follows:
Figure BDA0002680724460000096
Figure BDA0002680724460000097
the transverse and longitudinal gradient approximations for each pixel of the image may be combined using the following formula to calculate the magnitude of the gradient:
Figure BDA0002680724460000101
the gradient direction can then be calculated using the following formula:
Figure BDA0002680724460000102
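Under the same caveat, the gradient computation above can be sketched with SciPy's 2-D convolution; note that arctan2 is used so that the direction is quadrant-aware:

    import numpy as np
    from scipy.signal import convolve2d

    # Horizontal (Gx) and vertical (Gy) Sobel kernels from the formulas above.
    KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    KY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

    def sobel_gradients(gray: np.ndarray):
        """Return (magnitude, direction) of the intensity gradient."""
        gx = convolve2d(gray, KX, mode="same", boundary="symm")
        gy = convolve2d(gray, KY, mode="same", boundary="symm")
        magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
        direction = np.arctan2(gy, gx)           # Θ = arctan(Gy/Gx), quadrant-aware
        return magnitude, direction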
in S133, the method for obtaining the end feature image by performing binarization processing on the end region image and then performing feature extraction includes:
obtaining an optimal threshold value by adopting an iterative method, and carrying out binarization processing on the terminal area image to obtain a binarized image, wherein the method for obtaining the optimal threshold value by adopting the iterative method comprises the following steps: initializing a threshold to be half of the sum of the maximum gray level and the minimum gray level, dividing the end bit region image into a foreground and a background by using the threshold calculated each time, calculating the average gray level of the whole foreground pixel and the average gray level of the whole background pixel, wherein the threshold is half of the sum of the average gray levels of the foreground and the background, and converging the threshold to obtain the optimal threshold if the threshold is equal to the threshold calculated last time;
the method comprises the steps of extracting character features by calculating proportion values of white pixel points of a binary image, dividing the character into 16 equal parts, counting the proportion of the white pixel points of the binary image in the 16 equal parts to be used as 16 feature vectors, and counting the proportion of the white pixel points of four regions in the vertical direction to be used as the last 4 feature vectors.
Specifically, the optimal threshold is obtained by an iterative method and used for binarization: pixels whose gray value is larger than the threshold are set to white, the rest to black. The iterative method for finding the optimal threshold is: initialize the threshold to half the sum of the maximum and minimum gray levels; at each iteration, divide the image into foreground and background with the current threshold and compute the average gray level of all foreground pixels and of all background pixels; the new threshold is half the sum of these two averages; if the new threshold equals the previously computed one, the iteration has converged and the optimal threshold is obtained.
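A sketch of this iterative threshold and the resulting binarization; the convergence tolerance and the degenerate-image guard are assumptions:

    import numpy as np

    def iterative_threshold(gray: np.ndarray) -> float:
        """Iterate T -> (mean(foreground) + mean(background)) / 2 until convergence."""
        t = (float(gray.min()) + float(gray.max())) / 2.0
        while True:
            fg = gray[gray > t]
            bg = gray[gray <= t]
            if fg.size == 0 or bg.size == 0:    # degenerate image: keep current threshold
                return t
            new_t = (fg.mean() + bg.mean()) / 2.0
            if abs(new_t - t) < 0.5:            # convergence tolerance (an assumption)
                return new_t
            t = new_t

    def binarize(gray: np.ndarray) -> np.ndarray:
        t = iterative_threshold(gray)
        return np.where(gray > t, 255, 0).astype(np.uint8)  # white above T, black below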
Character features are extracted by computing the proportions of white pixels in the binarized image. The specific implementation is: divide the character into 16 equal parts and take the proportion of white pixels of the binarized image in each part as the first 16 features; then take the proportions of white pixels in four vertical regions as the last 4 features, giving a 20-dimensional feature vector that matches the 20-node input layer described below.
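The 20-dimensional feature extraction might look as follows; reading the "16 equal parts" as a 4×4 grid is an assumption, since the text does not fix the grid shape:

    import numpy as np

    def end_position_features(binary: np.ndarray) -> np.ndarray:
        """20-dim vector: white-pixel ratios of 16 equal parts (read here as a
        4x4 grid, an assumption) plus of four vertical bands."""
        white = (binary > 0).astype(float)
        h, w = white.shape
        feats = []
        for i in range(4):
            for j in range(4):
                cell = white[i * h // 4:(i + 1) * h // 4, j * w // 4:(j + 1) * w // 4]
                feats.append(cell.mean())       # first 16 features
        for j in range(4):
            band = white[:, j * w // 4:(j + 1) * w // 4]
            feats.append(band.mean())           # last 4 features
        return np.array(feats)                  # feeds the 20-node input layer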
It should be noted that, in S134, the method for identifying the end position characters in the end position feature image to obtain the end position information includes:
identifying the cropped end position feature images, all of the same size, with a neural network; the neural network consists of basic neurons and comprises an input layer, a hidden layer, and an output layer, wherein the input layer has 20 nodes, the output layer has 4 nodes, and there is 1 hidden layer; after training on end position samples, the network yields an end position classifier, through which the end position information is identified.
Specifically, the end position character recognition of the invention uses a neural network to identify the cropped end position images of identical size. The neural network here is a network of basic neurons comprising an input layer, a hidden layer, and an output layer. The input layer has 20 nodes, the output layer has 4 nodes, and after training on car number end position samples, the network with its 1 hidden layer yields a classifier for the number's end position.
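For illustration, a classifier with the stated topology could be set up with scikit-learn as below; the hidden-layer width and the training call are assumptions:

    from sklearn.neural_network import MLPClassifier

    # 20 input features, one hidden layer, 4 output classes (e.g. I, II, A, B).
    # The hidden-layer width (16 here) is an assumption; the patent does not state it.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)

    # Training and prediction (X: (n, 20) feature matrix, y: end position labels):
    # clf.fit(X, y)
    # label = clf.predict(end_position_features(char_img).reshape(1, -1))[0]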
As shown in fig. 3, the method in S14 for judging whether the end position information identified by the end position identification algorithm is correct, taking it as the final end position information in response to it being correct, and generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm in response to it being wrong, includes:
S141: judging whether the identified head end position information and tail end position information of the same train form a matching pair; in response to them matching, proceeding to S145;
S142: in response to the head end position information not matching the tail end position information, comparing them with the head and tail end position information identified by the deep learning algorithm;
S143: if the two head end position results are the same, the tail end position information is wrong; assigning the tail end position information the value corresponding to the head end position information and proceeding to S145;
S144: if the two tail end position results are the same, the head end position information is wrong; assigning the head end position information the value corresponding to the tail end position information and proceeding to S145;
S145: taking the first end position information obtained as the train passes as the final end position information.
Specifically, when determining the train's end positions, the whole passage of the train is captured. End position marks normally exist at both the head and the tail of the train: the head character is I or A and the tail character is II or B, i.e., I pairs with II and A pairs with B. Based on this, a dedicated logic judgment is added to the algorithm to further confirm the end position. The specific method is: after the trained classifier identifies the front and rear end positions, the two identified characters are compared. If they differ as expected (for example, I and II are identified), the end position identification is accurate, and the first end position identified as the train passes is output. If they are the same, there is an identification error at either the head or the tail; the end position results identified by deep learning are then compared with those identified by the classifier. If the head end position results agree, the tail end position was misidentified, and the tail result is assigned the inverse of the head result; if the tail end position results agree, the head end position was misidentified, and the head result is assigned the inverse of the tail result. Finally, the first end position information identified as the train passes is output.
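The reconciliation logic just described can be sketched as follows; the I/II and A/B pairing comes from the text, while the function and variable names are assumptions:

    # Valid head/tail pairings from the text: I at the head pairs with II at the
    # tail, and A pairs with B. Names below are assumptions.
    PAIR = {"I": "II", "II": "I", "A": "B", "B": "A"}

    def final_end_position(head_cls: str, tail_cls: str,
                           head_dl: str, tail_dl: str) -> str:
        """head_cls/tail_cls: classifier results; head_dl/tail_dl: deep learning results."""
        if PAIR.get(head_cls) == tail_cls:
            return head_cls              # valid pair: output the first end position seen
        if head_cls == head_dl:          # both agree on the head, so the tail was wrong
            tail_cls = PAIR[head_cls]
        elif tail_cls == tail_dl:        # both agree on the tail, so the head was wrong
            head_cls = PAIR[tail_cls]
        return head_cls                  # first end position obtained as the train passes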
As shown in fig. 4, an embodiment of the present invention further provides a train number identification device with an end position, where the device includes:
the complete train number image acquisition module 21 is used for acquiring a complete train number image of the train, wherein the complete train number image comprises train number information and end position information of the train;
the vehicle number end position information identification module 22 is used for identifying the vehicle number information and the end position information in the complete vehicle number image by adopting a deep learning algorithm;
the end position information identification module 23 is used for identifying end position information in the complete car number image by adopting an end position identification algorithm;
the end position information judging module 24 is configured to judge whether the end position information identified by the end position identification algorithm is correct, take the end position information as final end position information in response to the end position information being correct, and generate final end position information according to the end position information identified by the deep learning algorithm and the end position identification algorithm in response to the end position information being incorrect;
and the train number end position information output module 25 is used for outputting the train number information and the final end position information of the train.
It should be noted that the end position information identification module 23 includes:
the image preprocessing unit is used for carrying out image graying processing and image smoothing processing on the complete car number image to obtain a preprocessed image;
the end position positioning unit is used for positioning and cutting an end position information area in the preprocessed image to obtain an end position area image;
the characteristic extraction unit is used for carrying out binarization processing on the end position area image and then carrying out characteristic extraction to obtain an end position characteristic image;
and the character recognition unit is used for recognizing the end position characters in the end position characteristic image to obtain end position information.
It should be noted that the end position information judging module 24 includes:
an end position judging unit, used for judging whether the identified head end position information and tail end position information of the same train form a matching pair,
a matching processing unit, used for, in response to the head end position information matching the tail end position information, taking the first end position information obtained as the train passes as the final end position information;
and a non-matching processing unit, used for, in response to the head end position information not matching the tail end position information, comparing them with the head and tail end position information identified by the deep learning algorithm: if the two head end position results are the same, the tail end position information is wrong and is assigned the value corresponding to the head end position information; if the two tail end position results are the same, the head end position information is wrong and is assigned the value corresponding to the tail end position information; finally, the first end position information obtained as the train passes is taken as the final end position information.
The embodiment of the invention also provides a train number identification device with an end position, which comprises: a memory for storing a computer program; and the processor is used for executing a computer program to realize the steps of the train number identification method with the end position.
The embodiment of the invention also provides a readable storage medium, wherein the readable storage medium stores a computer program, and the computer program is executed by a processor to realize the steps of the train number identification method with the end position.
For the description of the features in the embodiment corresponding to fig. 4, reference may be made to the related description of the embodiments corresponding to fig. 1 to fig. 3, which is not repeated here.
The train number identification method with the end position, the train number identification device with the end position and the readable storage medium provided by the embodiment of the invention are described in detail above. The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description. It should be noted that, for those skilled in the art, it is possible to make various improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the scope of the claims of the present invention.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (10)

1. A train number identification method with end positions is characterized by comprising the following steps:
acquiring a complete train number image of a train, wherein the complete train number image comprises train number information and end position information of the train;
recognizing the car number information and the end position information in the complete car number image by adopting a deep learning algorithm;
identifying end position information in the complete car number image by adopting an end position identification algorithm;
judging whether the end position information identified by the end position identification algorithm is correct; in response to the end position information being correct, taking it as the final end position information; in response to the end position information being wrong, generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm;
and outputting the train number information and the final end position information of the train.
2. The method for identifying the train number with the end position according to claim 1, wherein the method for identifying the end position information in the complete train number image by using an end position identification algorithm comprises the following steps:
carrying out image graying processing and image smoothing processing on the complete car number image to obtain a preprocessed image;
positioning and cutting an end position information area in the preprocessed image to obtain an end position area image;
carrying out binarization processing on the end position area image, and then carrying out feature extraction to obtain an end position feature image;
and identifying the end position characters in the end position characteristic image to obtain end position information.
3. The method for identifying the train number with the terminal position according to claim 2, wherein the method for performing image graying processing and image smoothing processing on the complete train number image to obtain the preprocessed image comprises the following steps:
assuming f(i, j) is the gray value of the point with coordinates (i, j) in the complete car number image, and R(i, j), G(i, j), B(i, j) are the values of the red, green, and blue components of that point, assigning different weights to R, G, and B gives the gray value of each point in the complete car number image by the following formula:
f(i,j) = 0.30×R(i,j) + 0.59×G(i,j) + 0.11×B(i,j);
smoothing the complete car number image with median filtering: for the two-dimensional sequence {x(i,j)}, median filtering with a two-dimensional window can be expressed as:
y(i,j) = med{ x(i+r, j+s) : (r, s) ∈ A };
A is the filter window, and A is 3×3 in size.
4. The method for identifying the train number with the end position according to claim 2, wherein the method for locating and cutting the end position information area in the preprocessed image to obtain the end position area image comprises the following steps:
performing edge recognition with the Sobel operator, which comprises two 3×3 kernels, one horizontal and one vertical; performing plane convolution of each kernel with the preprocessed image yields the horizontal and vertical brightness difference approximations respectively;
if f represents the preprocessed image, and Gx and Gy represent the detected horizontal and vertical edge images respectively, the formulas are as follows:

     [ -1  0  +1 ]              [ +1  +2  +1 ]
Gx = [ -2  0  +2 ] * f     Gy = [  0   0   0 ] * f
     [ -1  0  +1 ]              [ -1  -2  -1 ]

the horizontal and vertical gradient approximations at each pixel of the edge-detected images are combined by the following formula to calculate the gradient magnitude:
G = √(Gx² + Gy²);
the gradient direction is calculated with the following formula:
Θ = arctan(Gy / Gx).
5. The method for identifying the train number with the end position as claimed in claim 2, wherein the method for binarizing the end position area image and then extracting features to obtain the end position feature image comprises:
obtaining an optimal threshold by an iterative method and binarizing the end position area image to obtain a binarized image, wherein the iterative method is as follows: initialize the threshold to half the sum of the maximum and minimum gray levels; at each iteration, divide the end position area image into foreground and background with the current threshold, compute the average gray level of all foreground pixels and of all background pixels, and set the new threshold to half the sum of these two averages; when the new threshold equals the previously computed one, the iteration has converged and the optimal threshold is obtained;
extracting character features from the proportions of white pixels in the binarized image: the character is divided into 16 equal parts, and the proportion of white pixels of the binarized image in each part gives the first 16 features; the proportions of white pixels in four vertical regions give the last 4 features.
6. The method for identifying the train number with the end position as claimed in claim 2, wherein the method for identifying the end position characters in the end position feature image to obtain the end position information comprises:
identifying the cropped end position feature images, all of the same size, with a neural network; the neural network consists of basic neurons and comprises an input layer, a hidden layer, and an output layer, wherein the input layer has 20 nodes, the output layer has 4 nodes, and there is 1 hidden layer; after training on end position samples, the network yields an end position classifier, through which the end position information is identified.
7. The method for identifying a train number with an end position according to claim 1, wherein the method for judging whether the end position information identified by the end position identification algorithm is correct, taking it as the final end position information in response to it being correct, and generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm in response to it being wrong, comprises:
judging whether the identified head end position information and tail end position information of the same train form a matching pair,
in response to the head end position information matching the tail end position information, taking the first end position information obtained as the train passes as the final end position information;
in response to the head end position information not matching the tail end position information, comparing them with the head and tail end position information identified by the deep learning algorithm: if the two head end position results are the same, the tail end position information is wrong and is assigned the value corresponding to the head end position information; if the two tail end position results are the same, the head end position information is wrong and is assigned the value corresponding to the tail end position information; finally, the first end position information obtained as the train passes is taken as the final end position information.
8. A train number identification device with end position, characterized in that the device comprises:
the complete train number image acquisition module is used for acquiring a complete train number image of the train, wherein the complete train number image comprises train number information and end position information of the train;
the vehicle number end position information identification module is used for identifying the vehicle number information and the end position information in the complete vehicle number image by adopting a deep learning algorithm;
the end position information identification module is used for identifying end position information in the complete car number image by adopting an end position identification algorithm;
the end position information judging module is used for judging whether the end position information identified by the end position identification algorithm is correct; in response to the end position information being correct, taking it as the final end position information; in response to the end position information being wrong, generating the final end position information according to the end position information identified by the deep learning algorithm and by the end position identification algorithm;
and the train number end position information output module is used for outputting the train number information and the final end position information of the train.
9. A train number identification device with end position, characterized by comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the train number identification method with end position according to any one of claims 1 to 7.
10. A readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the steps of the train number identification method with end position according to any one of claims 1 to 7.
CN202010961523.8A 2020-09-14 2020-09-14 Train number identification method and device with end bit and readable storage medium Active CN112070087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010961523.8A CN112070087B (en) 2020-09-14 2020-09-14 Train number identification method and device with end bit and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010961523.8A CN112070087B (en) 2020-09-14 2020-09-14 Train number identification method and device with end bit and readable storage medium

Publications (2)

Publication Number Publication Date
CN112070087A true CN112070087A (en) 2020-12-11
CN112070087B CN112070087B (en) 2023-06-02

Family

ID=73695887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010961523.8A Active CN112070087B (en) 2020-09-14 2020-09-14 Train number identification method and device with end bit and readable storage medium

Country Status (1)

Country Link
CN (1) CN112070087B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0475187A (en) * 1990-07-17 1992-03-10 Matsushita Electric Ind Co Ltd In-list character recognizing device
JP2002344952A (en) * 2001-05-22 2002-11-29 Fujitsu Ltd License plate recognition device and method
CN105354574A (en) * 2015-12-04 2016-02-24 山东博昂信息科技有限公司 Vehicle number recognition method and device
CN106940884A (en) * 2015-12-15 2017-07-11 北京康拓红外技术股份有限公司 A kind of EMUs operation troubles image detecting system and method comprising depth information
US20190347497A1 (en) * 2017-01-25 2019-11-14 Wuhan Jimu Intelligent Technology Co., Ltd. Road sign recognition method and system
WO2018233038A1 (en) * 2017-06-23 2018-12-27 平安科技(深圳)有限公司 Deep learning-based method, apparatus and device for recognizing license plate, and storage medium
CN109840523A (en) * 2018-12-29 2019-06-04 南京睿速轨道交通科技有限公司 A kind of municipal rail train Train number recognition algorithm based on image procossing
CN110378332A (en) * 2019-06-14 2019-10-25 上海咪啰信息科技有限公司 A kind of container terminal case number (CN) and Train number recognition method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Xiaoli; CAO Ning; REN Jie: "Research on Train Number Recognition System Based on Image Processing" (基于图像处理的列车车号识别系统研究), 电子世界 (Electronics World), no. 11

Also Published As

Publication number Publication date
CN112070087B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN109657632B (en) Lane line detection and identification method
CN109615611B (en) Inspection image-based insulator self-explosion defect detection method
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN102509098B (en) Fisheye image vehicle identification method
CN104778721A (en) Distance measuring method of significant target in binocular image
CN107066933A (en) A kind of road sign recognition methods and system
CN109376740A (en) A kind of water gauge reading detection method based on video
CN104680161A (en) Digit recognition method for identification cards
CN112488046B (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN106407924A (en) Binocular road identifying and detecting method based on pavement characteristics
CN109711268B (en) Face image screening method and device
CN104574401A (en) Image registration method based on parallel line matching
CN111310760A (en) Method for detecting onychomycosis characters by combining local prior characteristics and depth convolution characteristics
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN109635799B (en) Method for recognizing number of character wheel of gas meter
Wu et al. Strong shadow removal via patch-based shadow edge detection
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN114037650B (en) Ground target visible light damage image processing method for change detection and target detection
CN111753749A (en) Lane line detection method based on feature matching
CN108520252B (en) Road sign identification method based on generalized Hough transform and wavelet transform
CN114299383A (en) Remote sensing image target detection method based on integration of density map and attention mechanism
CN115880683B (en) Urban waterlogging ponding intelligent water level detection method based on deep learning
CN112070087B (en) Train number identification method and device with end bit and readable storage medium
CN116363655A (en) Financial bill identification method and system
CN115588178A (en) Method for automatically extracting high-precision map elements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant