CN115713758B - Carriage identification method, system, device and storage medium - Google Patents

Carriage identification method, system, device and storage medium

Info

Publication number
CN115713758B
Authority
CN
China
Prior art keywords
carriage
target
area
car
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211406868.2A
Other languages
Chinese (zh)
Other versions
CN115713758A (en)
Inventor
李洪军
陈国亮
潘超
李泽琦
韩士红
李彦彬
徐茂春
曹重阳
陈晶
林科
姜来福
许童童
郝晨旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoneng Huanghua Port Co ltd
Original Assignee
Guoneng Huanghua Port Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoneng Huanghua Port Co ltd filed Critical Guoneng Huanghua Port Co ltd
Priority to CN202211406868.2A priority Critical patent/CN115713758B/en
Publication of CN115713758A publication Critical patent/CN115713758A/en
Application granted granted Critical
Publication of CN115713758B publication Critical patent/CN115713758B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a carriage identification method, system, device and storage medium, relating to the technical field of train monitoring. The carriage identification method comprises the following steps: acquiring an input image containing a carriage; performing detection processing on the input image to determine a carriage area including an identification target; analyzing the carriage area to determine a target area corresponding to the identification target; determining the highest point of the carriage cargo in the carriage area; calculating the cargo height of the carriage according to the highest point; extracting each single target character in the target area; and identifying the single target characters one by one to obtain the information content of the identification target. Through this identification method, the carriage cargo height can be obtained accurately, the car number information can be acquired accurately, comprehensive automation is realized, and the operation efficiency is improved.

Description

Carriage identification method, system, device and storage medium
Technical Field
The invention belongs to the technical field of train monitoring, and particularly relates to a carriage identification method, a carriage identification system, a carriage identification device and a storage medium.
Background
This section is intended to provide a background or context for the embodiments recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
At a port, a large number of dedicated coal trains wait to be unloaded every day, and the car type and car number of each carriage, as well as the coal loading height of the corresponding carriage, must be known accurately. In conventional operation this is usually checked through surveillance video, with dedicated staff watching the monitors and recording the information manually, which is cumbersome. The prior art therefore combines surveillance with artificial intelligence to realize intelligent monitoring, for example identifying the car number by technical means such as RFID and detecting the height value with infrared monitoring. However, in actual operation tags get damaged and become difficult to match, so dedicated staff are still required to confirm the car type and car number, comprehensive automation cannot be realized, and efficiency is low. Moreover, a single monitoring channel can only identify a single item of content, such as the car number of a carriage, and cannot at the same time judge whether the carriage cargo height is too high, so multiple monitoring systems have to be combined for the judgment, and the operation efficiency is low.
Disclosure of Invention
Aiming at the technical problems, the invention provides a carriage identification method, a system, a device and a storage medium, which can accurately obtain the carriage cargo height and the carriage number information, realize comprehensive automation and improve the operation efficiency.
In order to solve the technical problems, the technical scheme adopted by the invention comprises four aspects.
In a first aspect, a car identification method is provided, including:
acquiring an input image containing a carriage;
performing detection processing on the input image to determine a carriage area including an identification target;
analyzing the carriage area to determine a target area corresponding to an identification target;
determining the highest point of carriage cargoes in the carriage area;
calculating the cargo height of the carriage according to the highest point;
extracting each single target character in the target area;
and identifying a plurality of the single target characters one by one to obtain the information content of the identification targets.
In some embodiments, the performing the detection processing on the input image includes: identifying an input image containing the carriage by a first yolov5 model trained in advance through a deep learning algorithm, so as to obtain the bounding boxes classified as the carriage and the carriage cargo as the carriage area.
In some embodiments, the calculating the car cargo height from the highest point comprises:
projecting the straight line where the highest point is located into a coordinate system with scales to obtain a scale value closest to the straight line;
calculating the pixel distance between the straight line and the acquired scale value;
converting the pixel distance from an image coordinate system to a physical coordinate system to obtain a distance difference between the straight line and the scale value;
and calculating the scale value and the distance difference to obtain the cargo height of the carriage.
In some embodiments, the acquiring comprises acquiring at least one frame of the input image containing the carriage; the calculating the cargo height of the carriage according to the highest point further comprises: determining the carriage cargo height in the carriage area of each of a plurality of frames of input images; and averaging the carriage cargo heights of the frames to determine the output carriage area height.
In some embodiments, the analyzing the carriage area to determine a target area corresponding to the identification target includes:
identifying a carriage area according to a second yolov5 model pre-trained by a deep learning algorithm so as to obtain a surrounding frame classified as an identification target as an identification area;
preprocessing the identification area image;
and positioning the identification target area of the preprocessed identification area image to determine the area where the target character is located as the target area.
In some embodiments, the extracting individual single target characters within the target region includes: and dividing each single target character in the target area to extract each single target character image.
In some embodiments, the segmenting the target region according to the target character includes: inverting the positioned target area, and performing connected-domain analysis on the inverted image; and segmenting a plurality of single target characters according to the connected domains of the target characters.
In some embodiments, the identifying the plurality of single target characters one by one to obtain the information content of the identification target includes: performing normalization processing on each single target character image, and performing binarization processing to obtain target images; and identifying target information by adopting an artificial neural network according to the target image.
In some embodiments, after calculating the height of the cargo in the carriage according to the highest point, judging whether the height of the cargo in the carriage exceeds a set threshold; and judging that the carriage is abnormal when the carriage cargo height exceeds a set threshold value.
In some embodiments, the identification target comprises at least one of a vehicle number, a vehicle model, a load, a volume.
In a second aspect, the present application provides a car identification system comprising:
the image acquisition unit is used for acquiring real-time images of the carriage;
and the computing unit is connected with the image acquisition unit and is used for executing the carriage identification method.
A third aspect provides a car identification device, comprising: a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, performs the car identification method as described above.
A fourth aspect provides a computer readable storage medium storing a computer program executable by one or more processors, the computer program operable to implement the steps of a car identification method as previously described.
One or more embodiments of the above-described solution may have the following advantages or benefits compared to the prior art:
the application provides a carriage identification method, a system, a device and a storage medium, wherein the carriage identification method comprises the steps of acquiring an input image containing a carriage; performing detection processing on the input image to determine a cabin area including an identification target; analyzing the carriage area to determine a target area corresponding to an identification target; determining the highest point of carriage cargoes in the carriage area; calculating the cargo height of the carriage according to the highest point; extracting each single target character in the target area; and identifying a plurality of the single target characters one by one to obtain the information content of the identification targets. According to the identification method, basic information such as the train number, the train number of the carriage, the height of the carriage and the like can be automatically and accurately identified, manual transcription and recording are replaced, the authenticity, timeliness, accuracy and continuity of data are guaranteed, comprehensive automation is achieved, and therefore the operation efficiency is improved, whether the height value of the carriage is higher than the limit height is conveniently determined and judged, the carriage with the ultrahigh height value is monitored in real time, an alarm is given, operation accidents are avoided, and the automatic operation efficiency is guaranteed.
Drawings
The present application will be described in more detail hereinafter based on embodiments and with reference to the accompanying drawings;
fig. 1 is a schematic flow chart of a car identification method in an embodiment of the invention;
FIG. 2 is an exemplary flow chart corresponding to step S3 shown in FIG. 1 in an embodiment of the invention;
FIG. 3 is an exemplary flow chart corresponding to step S5 shown in FIG. 1 in an embodiment of the invention;
FIG. 4 is an exemplary flow chart corresponding to step S7 shown in FIG. 1 in an embodiment of the invention;
fig. 5 is a flowchart of another car identification method according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a car identification system provided in an embodiment of the present invention;
fig. 7 is a schematic block diagram of a car identification device provided in an embodiment of the invention;
fig. 8 is a schematic diagram of a storage medium provided in an embodiment of the invention.
In the drawings, like parts are given like reference numerals, and the drawings are not drawn to scale.
Detailed Description
The present disclosure will be further described with reference to the embodiments shown in the drawings, wherein the embodiments described are merely some, but not all, of the embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the prior art, surveillance is combined with artificial intelligence to realize intelligent monitoring, for example identifying the car number by technical means such as RFID and monitoring the height value with infrared. However, in actual operation a dedicated person is still required to confirm the car type and car number, so comprehensive automation cannot be realized and the process is cumbersome. The application discloses a carriage identification method: by acquiring carriage images and analyzing them, the carriage cargo height can be obtained and the train number and car number can be identified, and this information is passed to the PLC of the port car dumper management system as the initial condition of automated operation, which improves the operating efficiency of the marshalling station and realizes automated management.
The embodiment of the invention discloses a carriage identification method, which is shown in fig. 1 and comprises the following steps: acquiring an input image containing a carriage; performing detection processing on the input image to determine a cabin area including an identification target; analyzing the carriage area to determine a target area corresponding to an identification target; determining the highest point of carriage cargoes in the carriage area; calculating the cargo height of the carriage according to the highest point; extracting each single target character in the target area; and identifying a plurality of the single target characters one by one to obtain the information content of the identification targets.
Some embodiments of the present disclosure also provide a car identification system, a car identification device, and a non-transitory storage medium corresponding to the above car identification method.
Some embodiments of the present disclosure and examples thereof are described in detail below with reference to the attached drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
Referring to fig. 1, the car identification method includes the following steps S1 to S7.
Step S1: an input image containing a carriage is acquired. An image acquisition device (e.g. a camera or a network camera) is mounted on one side of the track to photograph the carriage and send the pictures to the recognition system as input images. An input image generally includes the carriage and the carriage cargo, and at least one frame is obtained; in this embodiment, about 100 frames captured over 2 seconds are taken as an example.
In some embodiments, the input image may be a color image. The color image includes, but is not limited to, a first color channel, a second color channel, and a third color channel. The first color channel is a red (R) channel, the second color channel is a green (G) channel, and the third color channel is a blue (B) channel, i.e., the input image is a color image in RGB format. In other embodiments, the input image may also be a gray scale image.
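Purely as an illustrative sketch (the OpenCV capture source and the frame count below are assumptions, not details from the patent), the acquisition and channel handling might look like this:

```python
import cv2

def acquire_frames(source=0, max_frames=100):
    """Read up to max_frames frames from a trackside camera or video source,
    converting each from OpenCV's BGR order to the RGB format described above."""
    cap = cv2.VideoCapture(source)
    frames = []
    while len(frames) < max_frames:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# Example: roughly 2 seconds of video at ~50 fps gives about 100 input frames.
frames = acquire_frames(source=0, max_frames=100)
```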
Step S2: performing detection processing on the input image to determine a carriage area including an identification target.
in some embodiments, a common target detection algorithm may be employed to detect the cabin region in the input image. Common target detection issues include R-CNN (Region-based Convolutional Neural Networks), SPP-net (Spatial Pyramid Pooling-net), fast R-CNN, R-FCN (Region-based Fully Convolutional Networks), YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), and the like.
In this embodiment, the target detection algorithm is the deep learning yolov5 model. The input image containing the carriage is identified by a first yolov5 model trained in advance, so as to obtain the bounding boxes classified as the carriage and the carriage cargo as the carriage area. The yolov5 model outputs a bounding box result and a classification result, so the bounding boxes classified as the carriage and the carriage cargo can be obtained, which narrows the detection range and eliminates background interference for subsequent detection.
The bounding box result includes the coordinates of the center position of the object and the width and height of the bounding box. During yolov5 training, the width and height values are normalized to the [0,1] interval using the width and height of the image, and the center coordinates are expressed as offsets of the bounding box center relative to the current grid cell, also normalized to the [0,1] interval. Whether the current bounding box contains an object and how accurate its position is are reflected by the confidence, i.e. confidence = P(object) × IOU; if an object is contained in the bounding box, P(object) = 1, otherwise P(object) = 0; IOU is the ratio, computed in pixels and lying in the [0,1] interval, of the intersection area between the predicted bounding box and the real region of the object to the area of their union. Owing to the flexibility of the yolov5 model, it has very strong advantages in rapid deployment and is lightweight, fast and accurate.
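For illustration only, a minimal Python sketch of this detection step is given below, loading a custom-trained YOLOv5 checkpoint through the public ultralytics/yolov5 torch.hub entry point; the weight file name, class indices and confidence threshold are assumptions, not details from the patent.

```python
import torch

# Load a YOLOv5 model with custom-trained weights (file name is hypothetical).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='carriage.pt')

def detect_carriage_region(image_rgb, conf_threshold=0.5):
    """Run the detector and keep bounding boxes classified as carriage or cargo."""
    results = model(image_rgb)
    boxes = []
    # Each row of results.xyxy[0]: x1, y1, x2, y2, confidence, class index.
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if conf >= conf_threshold:
            boxes.append({'class': int(cls), 'confidence': conf,
                          'box': (int(x1), int(y1), int(x2), int(y2))})
    return boxes

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes, a value in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```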
Step S3: and analyzing the carriage area to determine a target area corresponding to the identification target.
In this embodiment, the recognition target includes at least one of a vehicle number, a vehicle type, a load, and a volume. In some embodiments, the identification target is a car number, and the specific location of the car number area (i.e., the target area) is determined by excluding a majority of all pixels in the car area that do not belong to the car number area.
In some embodiments, referring to fig. 2, step S3 includes:
s31, recognizing a carriage area according to a second yolov5 model trained in advance by a deep learning algorithm, so as to obtain a surrounding frame classified as a recognition target as a recognition area; outputting a bounding box (bounding box) result and a classification result through the second yolov5 model, so that a bounding box classified as an identification target (such as a car number) can be obtained, the detection range is reduced, and background interference is eliminated for subsequent detection;
s32, preprocessing the identification area image; preprocessing comprises binarizing the obtained identification area image to obtain a binary image; carrying out morphological processing on the obtained binary image to obtain a preprocessed image;
s33, carrying out area positioning of an identification target (such as a car number) on the preprocessed identification area image; in this embodiment, the image data analysis is performed on the obtained preprocessed image, and the area where the car number character is located (i.e. the target area) is located according to the characteristics of the connected area of the area where the car number character is located. The connected domains generally refer to image areas formed by foreground pixels with the same pixel value and adjacent positions in an image, all the connected domains on the image and rectangular frames outside each connected domain are preprocessed, and then the connected domains with the areas of the external rectangular frames ordered from large to small to second to bottom N are selected as the areas to be selected, so that each area to be selected is verified, and a target area corresponding to an identification target is determined.
Step S4: and determining the highest point of the carriage cargoes in the carriage area.
In some embodiments, after the bounding boxes of the carriage and the carriage cargo are output by the deep learning yolov5 model, the straight line on which the highest point of the carriage cargo lies is obtained.
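A trivial sketch of this step, assuming the cargo bounding box is given as (x1, y1, x2, y2) in image coordinates where y grows downward (the function name is hypothetical):

```python
def cargo_top_line(cargo_box):
    """Image row of the horizontal line through the highest cargo point,
    i.e. the top edge of the cargo bounding box (smallest y)."""
    x1, y1, x2, y2 = cargo_box
    return y1
```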
Step S5: and calculating the cargo height of the carriage according to the highest point.
In some embodiments, referring to fig. 3, step S5 includes:
s51, projecting a straight line where the highest point is located into a coordinate system with scales to obtain a scale value closest to the straight line; the coordinate system with the scales can be a scale with scales carried on one side of a carriage, so that the linear projection in the coordinate system where the scales of the scale are located can be conveniently obtained; in some embodiments, monocular visual ranging may be achieved using corresponding coordinates of the monocular acquisition target in the image coordinate system;
s52, calculating the pixel distance between the straight line and the acquired scale value;
s53, converting the pixel distance from an image coordinate system to a physical coordinate system to obtain a distance difference between the straight line and the scale value;
s54, calculating the scale value and the distance difference to obtain the cargo height of the carriage; specifically, car cargo height = closest scale value above the linear projection-distance difference.
In some embodiments, because of railway train marshalling, each carriage needs to enter the car dumper area for automated operation. However, the automated operation in the car dumper area imposes a height limit on the carriage cargo: if a carriage whose cargo is too high enters the car dumper area and is operated on directly, machine damage accidents are likely to occur, and reprocessing after entering the car dumper area seriously delays the operation time and affects the operation efficiency. Therefore, in this embodiment it is judged whether the determined carriage cargo height exceeds a set threshold. When the carriage cargo height exceeds the set threshold, the carriage is judged to be abnormal and an alarm signal is given to prompt the staff to handle it in time and avoid accidents; when the carriage cargo height is lower than the set threshold, the carriage is judged to be normal and then enters the car dumper area for automated operation, so that the operation has a basis and a guarantee.
In some embodiments, the carriage cargo height in the carriage area of each frame of the input images is determined by steps S51-S54, so step S5 further includes a step S55 of averaging the cargo heights of the frames to determine the output carriage area height. In this embodiment, the carriage cargo height is measured continuously over the roughly 100 frames of input images captured within 2 seconds, the actual carriage cargo height is determined by taking the average value, and this value is output to the PLC controller to facilitate the next action, reduce the detection error and improve the stability of the automated operation.
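A short sketch of step S55 and the threshold check from the preceding paragraph; the limit value and the sample heights are placeholders, not values from the patent:

```python
def average_cargo_height(per_frame_heights):
    """Step S55: average the per-frame cargo heights to get the output height."""
    return sum(per_frame_heights) / len(per_frame_heights)

HEIGHT_LIMIT = 3.2  # hypothetical set threshold, in meters

heights = [3.05, 3.07, 3.06]  # illustrative per-frame measurements (~100 in practice)
mean_height = average_cargo_height(heights)
is_abnormal = mean_height > HEIGHT_LIMIT  # abnormal carriage -> raise an alarm
```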
Step S6: and extracting each single target character in the target area.
In some embodiments, step S6 comprises the steps of:
s61, dividing each single target character in the target area to extract each single target character image. Specific:
inverting the positioned target area, and performing connected-domain analysis on the inverted image;
segmenting a plurality of single target characters according to the connected domains of the target characters.
A connected domain generally refers to an image area formed by adjacent foreground pixels having the same pixel value. All connected domains in the image and the circumscribed rectangle of each connected domain are obtained, the connected domains whose circumscribed-rectangle areas rank from the second largest to the N-th largest are selected as candidate areas, each candidate area is analyzed and verified, and each single target character is segmented according to the characteristics of the connected domain of a single target character.
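The inversion and connected-domain segmentation could be sketched as follows (OpenCV-based; the minimum-area filter is an added assumption to suppress noise, not a detail from the patent):

```python
import cv2

def segment_characters(number_region_binary, min_area=20):
    """Invert the located target area and split it into single-character images
    using connected-domain analysis, returned in left-to-right order."""
    inverted = cv2.bitwise_not(number_region_binary)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(inverted)
    chars = []
    for i in range(1, n_labels):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue  # drop small noise components
        x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
        w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
        chars.append((x, inverted[y:y + h, x:x + w]))
    chars.sort(key=lambda item: item[0])  # reading order
    return [img for _, img in chars]
```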
Step S7: and identifying a plurality of the single target characters one by one to obtain the information content of the identification targets.
In some embodiments, referring to fig. 4, step S7 includes the steps of:
s71, performing normalization processing on each single target character image, and performing binarization processing to obtain target images;
s72, identifying target information by adopting an artificial neural network according to the target image.
In this embodiment, the artificial neural network adopts an ANN, which acquires knowledge, experience and judgment through learning from given sample patterns, so that qualitative and quantitative analysis can be combined effectively and the objectivity of the recognition result is better ensured. Because the raw input values have no unified measurement standard, they are difficult to analyze and compare directly and are unsuited for direct input to the neural network. Therefore, before recognition is performed with the neural network, the input values are first mapped into the interval [0,1] by a membership function, i.e. they are quantified in a standard, dimensionless way and used as the network input, so that the ANN can process both quantitative and qualitative indexes and accurately identify the target information (i.e. the car number of the carriage).
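A minimal sketch of S71-S72 using a generic multilayer-perceptron classifier; the 28x28 input size, the scikit-learn classifier and the training data names (X_train, y_train) are assumptions standing in for the patent's ANN:

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def prepare_character(char_img, size=(28, 28)):
    """S71: normalize a single-character image to a fixed size, binarize it,
    and scale the pixel values into [0, 1] as the network input."""
    resized = cv2.resize(char_img, size, interpolation=cv2.INTER_AREA)
    _, binary = cv2.threshold(resized, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary.astype(np.float32).ravel() / 255.0

# Hypothetical training step: X_train rows are prepared character vectors,
# y_train holds their labels (the digits/letters that make up car numbers).
# ann = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X_train, y_train)

def recognize(char_img, ann):
    """S72: feed the prepared character into the trained ANN and return its label."""
    return ann.predict(prepare_character(char_img).reshape(1, -1))[0]
```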
Still another car identification method is provided in some embodiments, and referring to fig. 5, the car identification method includes the steps of:
s10, acquiring an input image with a carriage;
s20, carrying out detection processing on the input image to determine a carriage area comprising an identification target;
s30, analyzing and processing the carriage area to determine a target area corresponding to the identification target;
s40, extracting each single target character in the target area;
s50, identifying a plurality of single target characters one by one to obtain information content of the identification targets;
s60, determining the highest point of the cargoes in the carriage area;
and S70, calculating the cargo height of the carriage according to the highest point.
The identification method in this embodiment first identifies the information content of the identification target and then detects the carriage cargo height, so that the two results verify each other and the accuracy of detection and identification is improved. In other embodiments, the detection of the carriage cargo height may be carried out first and the parameter information such as the car number of the carriage identified afterwards; the order does not affect the accuracy of the mutual verification, and the order is adjusted according to which data is acquired first.
In some embodiments, the cargo limit height defined for a car can be determined from the parameters of the identification target, namely the car number, car type, load or volume of the car; by determining the carriage cargo height of the car in the input image, the two can be verified against each other to improve the accuracy of over-height car detection, which can reach one hundred percent.
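An illustrative cross-check under assumed data: the mapping from car type to limit height below is a placeholder, not figures from the patent.

```python
# Hypothetical mapping from recognized car type to its defined cargo limit height (m).
TYPE_LIMIT_HEIGHT = {'C70': 3.2, 'C80': 3.3}

def cross_check(car_type, measured_height, tolerance=0.05):
    """Verify the recognized car type against the measured cargo height; a car is
    flagged as over-height when the measurement exceeds its type's limit."""
    limit = TYPE_LIMIT_HEIGHT.get(car_type)
    if limit is None:
        return None  # unknown type: cannot cross-check
    return measured_height > limit + tolerance
```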
At least some embodiments of the present disclosure also provide a car identification system, as shown in fig. 6, comprising:
an image acquisition unit 11 for acquiring a real-time image of the carriage;
a computing unit 12 connected to the image acquisition unit 11 for performing the car identification method as provided in any one of the embodiments of the present disclosure.
The image acquisition unit 11 may be a camera, a network camera or the like, and is arranged on one side of the train track, so that input images containing a carriage can be acquired in real time while the train carriage passes.
the computing unit 12 is connected to the image acquisition unit 11, and in some examples, the computing unit 12 may be a mobile phone, a tablet computer, a server, or the like. In some embodiments, image acquisition unit 11 and computing unit 12 may communicate over a network connection, which may include a wired network, a wireless network, and/or any combination of wireless and wired networks. The network may include a local area network, the internet, a telecommunications network, and the like.
In some embodiments, the car identification system further includes a PLC13, where the PLC13 is connected to the computing unit 12, and may communicate through a network connection to obtain an analysis result fed back by the computing unit 12, that is, real-time basic information including a car cargo height, a car number, and the like, and receive a determination result of the car cargo height, so as to further provide the PLC13 with an initial condition of an automated operation, thereby improving the operation efficiency.
In some embodiments, the car identification system further includes a database 14 and a client 15; the database 14 communicates with the computing unit 12 via a network connection to receive and store output data (including images and image analysis data) of the computing unit 12; the client 15 communicates with the database 14 through network connection to obtain the storage content in the database 14, i.e. the image to be displayed and the image analysis data in real time, so as to facilitate the confirmation of the operator. The client 15 may be a mobile phone, a tablet computer, a notebook computer, etc.
At least some embodiments of the present disclosure also provide a car identification device, as shown in fig. 7, including a memory 21 and a processor 22; the memory 21 has stored thereon a computer program which, when executed by the processor 22, performs the car identification method provided by any of the embodiments of the present disclosure.
The memory 21 and the processor 22 may communicate with each other directly or indirectly, and in some examples, the memory and the processor may communicate through a network connection. Further, one or more computer instructions may be stored on the memory 21 and executed by the processor 22 to perform various functions. Various applications and various data, such as the input image, the video sequence, the corrected carriage area image, the target area image, and various data used and/or generated by the applications, may also be stored in the computer-readable memory 21. The processor 22 may be a Central Processing Unit (CPU), a Tensor Processing Unit (TPU), a Graphics Processing Unit (GPU), or another device having data processing capabilities and/or program execution capabilities.
In some examples, the car identification device includes, but is not limited to, a smart phone, a tablet computer, a server, and the like.
It should be noted that the car identification device provided by the embodiments of the present disclosure is exemplary rather than limiting; according to practical application requirements, the car identification device may further include other conventional components or structures, for example to implement its necessary functions, and those skilled in the art may provide such components or structures according to the specific application scenario, which is not limited by the embodiments of the present disclosure.
At least some embodiments of the present disclosure also provide a computer readable storage medium, as shown in fig. 8, storing a computer program 31 executable by one or more processors, the computer program 31 being operable to implement the steps of the car identification method as provided by any of the embodiments of the present disclosure.
In some embodiments, the storage medium may include a storage component of a tablet computer, a hard disk of a personal computer, Random Access Memory (RAM), Read Only Memory (ROM), Erasable Programmable Read Only Memory (EPROM), Compact Disc Read Only Memory (CD-ROM), flash memory, or any combination of the foregoing, as well as other suitable storage media.
The embodiments of the application disclose a carriage identification method, system, device and storage medium, which can record, monitor and identify car numbers and carriage cargo heights in real time, ensure the accuracy and continuity of the data, improve the working efficiency of the marshalling station, provide transport confirmation information, realize automated management of transport confirmation, and judge from the carriage cargo height whether the cargo is over-height so as to improve the efficiency and operating stability of the car dumper. At the same time, manual transcription of car numbers and car types and manual judgment of whether the carriage cargo height is over-height are replaced, automated operation is achieved, and the operation efficiency can be improved.
The various embodiments in this disclosure are described in a progressive manner, and identical and similar parts of the various embodiments are all referred to each other, and each embodiment is mainly described as different from other embodiments.
The scope of the present disclosure is not limited to the above-described embodiments, and it is apparent that various modifications and variations can be made to the present disclosure by those skilled in the art without departing from the scope and spirit of the disclosure. Such modifications and variations are intended to be included herein within the scope of the following claims and their equivalents.

Claims (12)

1. A car identification method, characterized by comprising:
acquiring an input image containing a carriage;
performing detection processing on the input image to determine a carriage area including an identification target;
analyzing the carriage area to determine a target area corresponding to an identification target;
determining the highest point of carriage cargoes in the carriage area;
calculating the cargo height of the carriage according to the highest point; the method specifically comprises the following steps: projecting the straight line where the highest point is located into a coordinate system with scales to obtain a scale value closest to the straight line; calculating the pixel distance between the straight line and the acquired scale value; converting the pixel distance from an image coordinate system to a physical coordinate system to obtain a distance difference between the straight line and the scale value; calculating the scale value and the distance difference to obtain the cargo height of the carriage;
extracting each single target character in the target area;
and identifying a plurality of the single target characters one by one to obtain the information content of the identification targets.
2. The car identification method according to claim 1, wherein the detecting the input image includes: identifying an input image containing the carriage by a first yolov5 model trained in advance through a deep learning algorithm, so as to obtain the bounding boxes classified as the carriage and the carriage cargo as the carriage area.
3. The car identification method according to claim 1, wherein the acquiring comprises acquiring at least one frame of the input image containing the carriage; the calculating the cargo height of the carriage according to the highest point further comprises: determining the carriage cargo height in the carriage area of each of a plurality of frames of input images; and averaging the carriage cargo heights of the frames to determine the output carriage area height.
4. The car identification method according to claim 1, wherein the analyzing the car area to determine a target area corresponding to an identification target includes:
identifying a carriage area according to a second yolov5 model pre-trained by a deep learning algorithm so as to obtain a surrounding frame classified as an identification target as an identification area;
preprocessing the identification area image;
and positioning the identification target area of the preprocessed identification area image to determine the area where the target character is located as the target area.
5. The car identification method as set forth in claim 4, wherein the extracting individual single target characters in the target area includes: and dividing each single target character in the target area to extract each single target character image.
6. The car identification method according to claim 5, wherein the dividing the target area according to the target character includes: inverting the positioned target area, and performing connected-domain analysis on the inverted image; and segmenting a plurality of single target characters according to the connected domains of the target characters.
7. A car identification method according to claim 1, wherein said identifying a plurality of said single target characters one by one to obtain information contents of said identification targets comprises: performing normalization processing on each single target character image, and performing binarization processing to obtain target images; and identifying target information by adopting an artificial neural network according to the target image.
8. The car identification method according to claim 1, wherein after calculating the car cargo height according to the highest point, it is determined whether the car cargo height exceeds a set threshold; and judging that the carriage is abnormal when the carriage cargo height exceeds a set threshold value.
9. The car identification method according to claim 1, wherein the identification target includes at least one of a vehicle number, a vehicle model, a load, and a volume.
10. A car identification system, comprising:
the image acquisition unit is used for acquiring real-time images of the carriage;
a computing unit connected to the image acquisition unit for performing the car identification method according to any one of claims 1-9.
11. A car identification device, characterized by comprising: a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, performs the car identification method as claimed in any one of claims 1 to 9.
12. A computer readable storage medium storing a computer program executable by one or more processors, the computer program operable to implement the steps of the car identification method of any one of claims 1-9.
CN202211406868.2A 2022-11-10 2022-11-10 Carriage identification method, system, device and storage medium Active CN115713758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211406868.2A CN115713758B (en) 2022-11-10 2022-11-10 Carriage identification method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211406868.2A CN115713758B (en) 2022-11-10 2022-11-10 Carriage identification method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN115713758A CN115713758A (en) 2023-02-24
CN115713758B true CN115713758B (en) 2024-03-19

Family

ID=85232758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211406868.2A Active CN115713758B (en) 2022-11-10 2022-11-10 Carriage identification method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN115713758B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3602638A (en) * 1967-03-07 1971-08-31 Ford Motor Co Graphic data system for scanning and recording two-dimensional contours
US6053595A (en) * 1992-03-09 2000-04-25 Canon Kabushiki Kaisha Multi recording system using monochrome printer
WO2020134324A1 (en) * 2018-12-29 2020-07-02 南京睿速轨道交通科技有限公司 Image-processing based algorithm for recognizing train number of urban rail train
CN110633492A (en) * 2019-08-02 2019-12-31 天津天瞳威势电子科技有限公司 Lane departure early warning method of Android platform of simulation robot
CN110619329A (en) * 2019-09-03 2019-12-27 中国矿业大学 Carriage number and loading state identification method of railway freight open wagon based on airborne vision
CN115038990A (en) * 2020-01-31 2022-09-09 日产自动车株式会社 Object recognition method and object recognition device
CN113378646A (en) * 2021-05-18 2021-09-10 上海平奥供应链管理有限公司 Freight train information identification system and identification method
CN113888621A (en) * 2021-09-29 2022-01-04 中科海微(北京)科技有限公司 Loading rate determining method, loading rate determining device, edge computing server and storage medium
CN114399671A (en) * 2021-11-30 2022-04-26 际络科技(上海)有限公司 Target identification method and device
CN114322799A (en) * 2022-03-14 2022-04-12 北京主线科技有限公司 Vehicle driving method and device, electronic equipment and storage medium
CN114972182A (en) * 2022-04-15 2022-08-30 华为技术有限公司 Object detection method and device
CN114993658A (en) * 2022-06-16 2022-09-02 国能黄骅港务有限责任公司 Carriage presses detection device
CN115240400A (en) * 2022-07-01 2022-10-25 一汽解放汽车有限公司 Vehicle position recognition method and device, and vehicle position output method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Digit Recognition System Based on BP Neural Network (基于BP神经网络的数字识别系统); 李维宇 et al.; 《硅谷》 (Silicon Valley); pp. 81-82 *

Also Published As

Publication number Publication date
CN115713758A (en) 2023-02-24

Similar Documents

Publication Publication Date Title
US20220084186A1 (en) Automated inspection system and associated method for assessing the condition of shipping containers
CN111079747B (en) Railway wagon bogie side frame fracture fault image identification method
CN111310645B (en) Method, device, equipment and storage medium for warning overflow bin of goods accumulation
WO2021051601A1 (en) Method and system for selecting detection box using mask r-cnn, and electronic device and storage medium
CN112037177B (en) Carriage loading rate assessment method and device and storage medium
CN108579094B (en) User interface detection method, related device, system and storage medium
CN115995056A (en) Automatic bridge disease identification method based on deep learning
CN113378648A (en) Artificial intelligence port and wharf monitoring method
CN111553914A (en) Vision-based goods detection method and device, terminal and readable storage medium
US20230049656A1 (en) Method of processing image, electronic device, and medium
CN115471476A (en) Method, device, equipment and medium for detecting component defects
CN114529555A (en) Image recognition-based efficient cigarette box in-and-out detection method
CN115713758B (en) Carriage identification method, system, device and storage medium
CN112749735A (en) Converter tapping steel flow identification method, system, medium and terminal based on deep learning
CN115393791A (en) Foreign matter detection method, device and computer readable storage medium
CN115909151A (en) Method for identifying serial number of motion container under complex working condition
CN115512098A (en) Electronic bridge inspection system and inspection method
CN111402185A (en) Image detection method and device
CN112329770B (en) Instrument scale identification method and device
CN111401104B (en) Classification model training method, classification method, device, equipment and storage medium
CN114399671A (en) Target identification method and device
CN112581472A (en) Target surface defect detection method facing human-computer interaction
CN115131307B (en) Article defect detection method and related device
CN112329783B (en) Image processing-based coupler yoke break identification method
CN112730427B (en) Product surface defect detection method and system based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant