CN113591643A - Underground vehicle station entering and exiting detection system and method based on computer vision - Google Patents


Info

Publication number
CN113591643A
CN113591643A (application CN202110822470.6A)
Authority
CN
China
Prior art keywords
car number
frame
vehicle
module
time
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110822470.6A
Other languages
Chinese (zh)
Inventor
王曰海
陈莞尔
杨建义
吴中伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110822470.6A priority Critical patent/CN113591643A/en
Publication of CN113591643A publication Critical patent/CN113591643A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The invention discloses a computer-vision-based system and method for detecting underground vehicles entering and exiting a station. The system comprises a video acquisition module, a car number frame detection and correction module, a car number identification module, a direction detection module and a time detection module. The video acquisition module acquires video of underground vehicles entering and exiting the station; the car number frame detection and correction module obtains a valid car number frame and corrects it; the car number identification module recognizes the car number within the valid car number frame; the direction detection module determines the running direction of the underground vehicle; and the time detection module determines its arrival and departure times. Compared with conventional techniques, the method is inexpensive, convenient to deploy, fast and accurate, and recognizes the running information of underground vehicles in real time through a software-only deployment.

Description

Underground vehicle station entering and exiting detection system and method based on computer vision
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an underground vehicle station entrance and exit detection system and method based on computer vision.
Background
With rapid economic development and continuously rising living standards, more and more cities operate underground transit to meet travel demand. Underground transit carries a large share of urban public transport capacity, and riding it is a near-daily routine for many people: according to 2019 Chinese underground-transit statistics, the average daily ridership of underground transit in Beijing and in Shanghai each exceeded 10 million passengers.
As underground transportation tasks grow increasingly heavy, there is an urgent need to improve the efficiency of operation scheduling and management. To better direct underground transit scheduling, the running information of each underground vehicle must be determined, chiefly the arrival and departure times and the running direction of a specific car number. An underground vehicle here means any vehicle running underground, including subways, light rail and the like. The running direction is the direction of travel of the underground vehicle relative to the station, and has two cases: leftward and rightward.
Existing license plate recognition systems for ordinary ground motor vehicles cannot be applied to detecting underground vehicles entering and exiting a station, for the following main reasons. The lighting in the running environment of an underground vehicle is poor, and during arrival the vehicle moves from a dark tunnel into the bright waiting-hall environment (the carriage is inside the bright hall while the head is still outside it, in darkness); at the same time the headlights are strong, so in practice the number on the front of the vehicle head reflects light so severely that it cannot be recognized. For example, although an existing dedicated license plate recognition tool such as CN111414890A adopts a degree of illumination compensation, in the running environment of underground vehicles a camera facing the number plate head-on still suffers from reflection, and the problem cannot be solved fundamentally.
In the prior art, the running information of underground vehicles is generally acquired through hardware deployment, but such hardware solutions have high construction costs, high maintenance costs, and are inconvenient to use. The operation-information acquisition method currently common at underground transit stations uses radio frequency technology to identify car numbers and monitor arrivals and departures. For example, Wu Xiao et al. (research on a subway depot car number identification and position-monitoring scheme based on RFID radio frequency technology, Low Carbon World, 2016, issue 19, pp. 203-204) stably identify the identities of vehicles entering and exiting in real time through RFID identification and axle-counting sensors, while displaying the position and status changes of each vehicle in real time on a dispatching platform. This requires placing an expensive radio frequency sensor every 50 meters; the sensor cost runs to tens of thousands per kilometer, so the overall cost is very high, and because hardware is involved, later operation and maintenance costs are also very high, and train operation may be affected while damaged equipment is repaired.
Secondly, no existing pure-software approach acquires all the running information of underground vehicles without adding extra cameras. For example, patent CN112418097A combines deep-learning object detection, machine vision and OCR and can acquire a subway car number relatively quickly, but the running information it provides is still incomplete: the technique cannot acquire the departure time of the subway, nor can it judge its running direction. Moreover, it relies on hardware deployment: a camera must first be installed near the locomotive, so construction and maintenance costs cannot be avoided by reusing existing cameras, and a vehicle sensor must also be deployed to detect arrivals and trigger the camera before head detection and number recognition can proceed. This scheme therefore cannot be realized entirely in software; and because it captures the front of the vehicle head, its recognition results are error-prone in the underground operating environment due to the reflection problem.
Disclosure of Invention
To obtain the running information of underground vehicles entirely through software, to reduce the high construction and operation and maintenance costs of existing radio frequency identification systems, and to solve the inability of existing front-facing vision techniques to acquire the departure time and running direction, the invention discloses a computer-vision-based system and method for identifying the in/out-station state of underground vehicles. The invention requires no additional hardware: exploiting the favorable fact that the car number also appears on the side of the vehicle head, it acquires car number video from the existing camera that films the cab from the side (a camera normally used to check for driver rule violations and to record train traffic). No sensor is needed, the cost is low, and compared with conventional techniques it is convenient to deploy and easy to popularize.
The invention aims to provide a computer-vision-based system and method for detecting underground vehicles entering and exiting a station, which acquire the car number together with the arrival/departure times and direction in real time for use in dispatching, with high accuracy, good real-time performance and low cost.
The following description is not to be construed in any way as limiting the subject matter of the appended claims.
The invention discloses a computer-vision-based method for detecting underground vehicles entering and exiting a station, which detects the in/out-station information in real time, the information comprising the car number, arrival time, departure time and running direction. The method comprises the following steps:
S11, collecting video of the side of the head position of the underground vehicle;
S12, framing the video in real time to obtain a sequence of continuous video frames; obtaining the car number frame coordinates from the current video frame and correcting the car number frame; judging whether a car number frame exists in the corrected frame picture: if so, it is called a valid car number frame and step S13 follows; otherwise step S15 follows. Preferably, an EAST text detector obtains the car number frame coordinates. Preferably, correcting the car number frame comprises filtering invalid frames and correcting coordinate deviations of the car number frame; invalid frames are filtered by setting a region of interest according to the approximate position of the car number when an underground vehicle stops, and by setting the size and aspect-ratio ranges a car number frame must satisfy to be retained, according to the size of the car number. The EAST text detector is a classic text detection model that performs end-to-end text detection directly; end-to-end means the input is raw data and the output is the final result. A region of interest is the region to be processed, delineated from the image with a box, circle, ellipse, irregular polygon or the like in machine vision and image processing.
S13, recognizing the car number in the valid car number frame with a trained car number text recognition model; judging whether the car number is valid by combining the recognition results of two or more adjacent continuous video frames. If the car number can be recognized, it is valid: the in/out-station flag is set to "in station", the arrival time is recorded, and step S14 is executed. If the car number cannot be recognized, it is invalid, and the method returns to step S11;
s14, determining the movement direction of the underground vehicle according to the effective vehicle number frame coordinates;
and S15, when video frames yield no valid car number frame: if more than three continuous video frames have produced no valid car number frame and the in/out-station flag is "in station", the flag is changed to "out of station" and the departure time is recorded; in all other cases the method returns to step S11.
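The arrival/departure bookkeeping of steps S13 and S15 can be sketched as a small state machine. The following is a minimal illustrative version (frame times as integers; the class and attribute names are hypothetical, and whether "more than three" means a threshold of 3 or 4 missed frames is a reading choice to adjust):

```python
class StationStateTracker:
    """Sketch of the S13/S15 logic: record arrival when a valid number
    first appears, and departure after MISS_LIMIT consecutive frames
    without a valid car number frame."""
    MISS_LIMIT = 3  # interpretation of "more than three continuous frames"

    def __init__(self):
        self.in_station = False
        self.misses = 0
        self.events = []  # list of (event_name, frame_time)

    def update(self, frame_time, valid_number):
        if valid_number:
            if not self.in_station:
                self.in_station = True
                self.events.append(("arrival", frame_time))
            self.misses = 0
        else:
            self.misses += 1
            if self.in_station and self.misses >= self.MISS_LIMIT:
                self.in_station = False
                # departure time = first frame of the run of missing frames
                self.events.append(("departure", frame_time - self.misses + 1))
                self.misses = 0
```

Feeding it the per-frame recognition result yields the arrival/departure events without any extra sensor.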
Preferably, the detection method further comprises the step of transmitting the identified car number, the running direction, the arrival time and the departure time to a server and recording the same in a database.
Preferably, the training step of the car number text recognition model is as follows:
s31: establishing a picture data set; the picture data set comprises digital character pictures generated by various common fonts and real in-and-out video frames of underground vehicles; the method for generating the digital character picture comprises the following steps: intercepting a background picture in the video frame, printing digital characters with specified fonts on the background picture, and ensuring reasonable intervals among the characters;
S32: back-slant (skew) processing is applied to the digital character pictures generated in step S31, using formulas (1) and (2):
(Equations (1) and (2) appear as images in the original and are not reproduced here.)
In the above formulas, h and w are respectively the height and width of the digital character picture. n is the back-slant coefficient: a value is randomly chosen per picture as the back-slant coefficient; the larger the coefficient, the stronger the slant, and its value range is 0.05h-0.15h. α is a plane-rotation deviation proportion hyperparameter; adjusting α adjusts the degree of rotational deviation in the two-dimensional plane, its value being
(The value range of α appears as an image in the original.)
x is the abscissa of a point in the original image, y its ordinate, and x' and y' are the abscissa and ordinate of the point after back-slanting. Preferably, one or more of the following data-enhancement methods may additionally be applied to the digital character pictures generated in step S31: random noise, rotation, translation;
S33: training on the pictures obtained in step S32 with a modified CRNN text recognition network; the modifications comprise one or more of the following: deleting the RNN part of the CRNN, modifying the head structure, and modifying the objective function.
S34: transfer learning is performed using real video frames of underground vehicles entering and exiting the station.
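Since the patent's exact equations (1)-(2) are rendered only as images, the back-slant of step S32 can only be illustrated, not reproduced. The sketch below implements one plausible shear consistent with the description: a slant coefficient n drawn from [0.05h, 0.15h] shifts each pixel row horizontally in proportion to its height (the α rotation term is omitted here):

```python
import random
import numpy as np

def backslant(img, rng=None):
    """Illustrative back-slant for a character image (H x W numpy array).
    Rows nearer the top are shifted further right, producing a slanted
    character like one seen on a moving or obliquely filmed car body."""
    rng = rng or random.Random()
    h, w = img.shape[:2]
    n = rng.uniform(0.05 * h, 0.15 * h)  # back-slant coefficient
    out = np.zeros_like(img)
    for y in range(h):
        shift = int(round(n * (h - 1 - y) / max(h - 1, 1)))
        if shift < w:
            out[y, shift:] = img[y, :w - shift]
    return out
```

Applied to every synthetic character picture (optionally combined with the random noise, rotation and translation augmentations mentioned above), this makes the training set resemble the slanted numbers seen in real side-view footage.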
Preferably, the head structure is the network structure in the CRNN text recognition network used to obtain the text recognition result; it makes predictions from the features extracted by the preceding network layers and outputs the recognition result.
Preferably, the objective function computes the error between the prediction result and the ground-truth label and guides the updating of the text recognition model parameters through back-propagation of that error. The objective is to minimize a loss function, and the loss function is the CTC loss.
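At inference time, a model trained with CTC loss needs a decoding step. A minimal greedy CTC decoder (collapse consecutive repeats, then drop blanks) is the standard companion procedure, though the patent does not spell it out:

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame argmax label sequence into the final label
    sequence: merge consecutive repeats, then remove the CTC blank."""
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded
```

For example, the frame-wise output `[0, 3, 3, 0, 3, 2, 2, 0]` (blank = 0) decodes to the digit sequence `[3, 3, 2]` — the repeated 3 survives because a blank separates the two occurrences.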
Preferably, the common fonts include fonts of arial, arialuni and dengb.
Preferably, the movement direction of the underground vehicle in step S14 is determined as follows: judge according to the accumulated displacement of the abscissa of the center point of the valid car number frame; if the accumulated displacement is greater than a positive threshold, the train runs from the left side to the right side of the video acquisition module; if it is smaller than a negative threshold, the train runs from right to left. The absolute values of the two thresholds are 10%-50% of the width of the region of interest; the positive threshold is positive and the negative threshold is negative.
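The accumulated-displacement rule above can be sketched as follows; the 30% threshold ratio is one illustrative choice within the stated 10%-50% range:

```python
def detect_direction(center_xs, roi_width, ratio=0.3):
    """Decide the running direction from the horizontal positions of the
    valid car number frame's centre across successive frames."""
    if len(center_xs) < 2:
        return None
    accumulated = center_xs[-1] - center_xs[0]  # net horizontal displacement
    threshold = ratio * roi_width               # 10%-50% of ROI width
    if accumulated > threshold:
        return "left_to_right"
    if accumulated < -threshold:
        return "right_to_left"
    return None  # not yet decisive
```

Because only the centre abscissa of an already-detected box is used, the direction comes essentially for free once number-frame detection is running.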
Preferably, the train arrival time in step S13 is obtained as follows: if the recognized car number is judged valid, the time at which the valid car number is first recognized is taken as the arrival time. The train departure time in step S15 is obtained as follows: if no valid car number frame is detected in more than three continuous video frames, the time of the first frame of that run is returned as the departure time. An alternative method for obtaining the arrival and departure times in steps S13 and S15 is to detect motion by the frame-difference method: if motion is detected and a valid car number is detected, this indicates arrival and the arrival time is returned; if motion is detected and no valid car number is detected, this indicates departure and the departure time is returned.
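The frame-difference motion test mentioned as the alternative method can be sketched as below; both threshold values are illustrative, not taken from the patent:

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_thresh=25, ratio_thresh=0.01):
    """Frame-difference motion test: count pixels whose grey-level change
    exceeds pixel_thresh; report motion when their fraction of the frame
    exceeds ratio_thresh."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = np.count_nonzero(diff > pixel_thresh)
    return moving / diff.size > ratio_thresh
```

Combining this with the car-number validity flag gives the arrival/departure distinction: motion plus a valid number means arrival, motion without one means departure.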
Preferably, throughout the execution of the method, heartbeat packets are transmitted continuously to detect whether the program is running normally. A heartbeat packet is a self-defined command word that, while the program runs normally and the terminal is successfully connected to the server, the terminal sends to the server at regular intervals, much like a heartbeat. One way to keep the program running throughout execution is to monitor the transmitted heartbeat packets to check that the program is running and that the terminal-server connection is stable; if a problem occurs, a watchdog-like mechanism can automatically restart the program and report the event, ensuring stable and reliable operation.
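A server-side view of the heartbeat mechanism might look like the following sketch (the timeout value and the names are hypothetical; a real deployment would carry the command word over a socket and hook `is_alive` into the watchdog restart logic):

```python
import time

class HeartbeatMonitor:
    """Track the last heartbeat seen from the terminal; if none arrives
    within `timeout` seconds, the program is presumed dead and a restart
    can be triggered."""
    def __init__(self, timeout=5.0, now=time.monotonic):
        self.timeout = timeout
        self.now = now          # injectable clock, eases testing
        self.last_beat = self.now()

    def beat(self):
        """Call whenever a heartbeat packet is received."""
        self.last_beat = self.now()

    def is_alive(self):
        return self.now() - self.last_beat <= self.timeout
```

The injectable clock keeps the liveness logic independent of wall time, which also makes the watchdog behavior easy to unit-test.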
The invention also discloses a computer-vision-based system for detecting underground vehicles entering and exiting a station, which acquires the in/out-station information in real time, the information comprising the car number, arrival time, departure time and running direction. The system comprises a video acquisition module, a car number frame detection and correction module, a car number identification module, a direction detection module and a time detection module, wherein:
the video acquisition module comprises a camera and collects video of the side of the head position where the underground vehicle stops, transmitting the video information to the car number frame detection and correction module in real time; the video acquisition module is positioned to the side of the underground vehicle;
the car number frame detection and correction module is used for carrying out frame processing on the video acquired in the video acquisition module in real time to acquire a plurality of continuous video frames, acquiring the coordinates of the car number frame from the current video frame and correcting the car number frame; judging whether a car number frame exists in the corrected video frame picture, and if so, calling the car number frame as an effective car number frame;
the car number identification module recognizes the car number in the valid car number frame, combining the results over three or more continuous video frames; if the car number is judged valid, it is output as the recognized number;
the direction detection module determines the running direction of the underground vehicle. Preferably, the direction detection module detects the movement direction as follows: judge according to the accumulated displacement of the abscissa of the center point of the valid car number frame; if the accumulated displacement is greater than a positive threshold, the train runs from the left side to the right side of the video acquisition module; if it is smaller than a negative threshold, the train runs from right to left. The absolute values of the two thresholds are 10%-50% of the width of the region of interest; the positive threshold is positive and the negative threshold is negative.
The time detection module determines the arrival and departure times of underground vehicles from the outputs of the car number identification module and the car number frame detection and correction module. The arrival time is the time at which the car number identification module first recognizes the valid car number. The departure time is obtained as follows: when the car number frame detection and correction module detects no valid car number frame in more than three continuous video frames while the in/out-station flag is "in station", the time of the first frame of that run is returned as the departure time.
Preferably, the system further comprises a communication transmission module for transmitting the station entering and exiting information to the server, and the detection result of the system comprises the number of underground vehicles, the station entering and exiting time and the running direction of the underground vehicles.
Preferably, the system detection result further includes a program running signal, which indicates whether the program is still running normally: while the program runs normally the signal is output; when the program is interrupted it is not. The program running signal is transmitted continuously through the communication transmission module during system operation; detection of the signal indicates normal operation, and its absence indicates that the program has stopped.
Preferably, the car number frame detection and correction module comprises an EAST text detector and a car number frame correction module; the car number frame correction module is used for filtering invalid frames and correcting the coordinate deviation of the car number frame so as to improve the speed and the precision; the method for filtering the invalid frame comprises the steps of setting an area of interest according to the approximate position of the car number when an underground vehicle stops, and setting the size and the aspect ratio range of a car number frame to be reserved according to the size of the car number; the input of the car number frame detection and correction module is the video stream collected by the video collection module, and the output information is transmitted to the car number identification module.
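The invalid-frame filtering performed by the car number frame detection and correction module can be sketched as below; the ROI bounds and the size/aspect thresholds are illustrative placeholders to be tuned per station:

```python
def filter_boxes(boxes, roi, min_area=500, max_area=20000,
                 min_aspect=2.0, max_aspect=8.0):
    """Keep only detections that lie inside the region of interest and
    whose area and width/height ratio match a plausible car number.
    boxes and roi are (x1, y1, x2, y2) tuples in pixel coordinates."""
    rx1, ry1, rx2, ry2 = roi
    valid = []
    for (x1, y1, x2, y2) in boxes:
        w, h = x2 - x1, y2 - y1
        if w <= 0 or h <= 0:
            continue
        inside = x1 >= rx1 and y1 >= ry1 and x2 <= rx2 and y2 <= ry2
        area_ok = min_area <= w * h <= max_area
        aspect_ok = min_aspect <= w / h <= max_aspect
        if inside and area_ok and aspect_ok:
            valid.append((x1, y1, x2, y2))
    return valid
```

Running this on the raw EAST detections discards background text and spurious boxes before OCR, which is exactly how the module both confirms a train is present and lightens the recognizer's load.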
Preferably, the car number recognition module is a module for recognizing the car number by using a car number text recognition model, the input of the car number recognition module is from the car number frame detection and correction module, and the output information of the car number recognition module is transmitted to the direction detection module. The output result of the car number identification module can be further judged by combining the detection and identification results of more than two adjacent video frames, so that the validity of the car number identification result is ensured, and the accuracy is improved; the car number text recognition model is a model for training a self-built picture data set by adopting a CRNN text recognition correction network.
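The adjacent-frame validity check described above can be sketched as a simple majority vote; the exact voting rule is not specified in the patent, so this is one plausible form:

```python
from collections import Counter

def vote_car_number(recent_results):
    """Combine OCR outputs from two or more adjacent frames; a number is
    accepted only if a strict majority of the considered frames agree."""
    readings = [r for r in recent_results if r]  # drop empty recognitions
    if not readings:
        return None
    number, count = Counter(readings).most_common(1)[0]
    return number if count * 2 > len(recent_results) else None
```

A single-frame misread (e.g. a 3 confused with an 8) is thereby outvoted by its neighbors, which is the robustness gain the text attributes to using multiple frames.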
Preferably, the training step of the car number text recognition model is as follows:
s31: establishing a picture data set; the picture data set comprises digital character pictures generated by various common fonts and real in-and-out video frames of underground vehicles; the method for generating the digital character picture comprises the following steps: intercepting a background picture in the video frame, printing digital characters with specified fonts on the background picture, and ensuring reasonable intervals among the characters;
S32: back-slant (skew) processing is applied to the digital character pictures generated in step S31, using formulas (1) and (2):
(Equations (1) and (2) appear as images in the original and are not reproduced here.)
In the above formulas, h and w are respectively the height and width of the digital character picture. n is the back-slant coefficient: a value is randomly chosen per picture as the back-slant coefficient; the larger the coefficient, the stronger the slant, and its value range is 0.05h-0.15h. α is a plane-rotation deviation proportion hyperparameter; adjusting α adjusts the degree of rotational deviation in the two-dimensional plane, its value being
(The value range of α appears as an image in the original.)
x is the abscissa of a point in the original image, y its ordinate, and x' and y' are the abscissa and ordinate of the point after back-slanting. Preferably, one or more of the following data-enhancement methods may additionally be applied to the digital character pictures generated in step S31: random noise, rotation, translation;
S33: training on the pictures obtained in step S32 with a modified CRNN text recognition network; the modifications comprise one or more of the following: deleting the RNN part of the CRNN, modifying the head structure, and modifying the objective function.
S34: transfer learning is performed using real video frames of underground vehicles entering and exiting the station.
Preferably, the head structure is the network structure in the CRNN text recognition network used to obtain the text recognition result; it makes predictions from the features extracted by the preceding network layers and outputs the recognition result.
Preferably, the objective function computes the error between the prediction result and the ground-truth label and guides the updating of the text recognition model parameters through back-propagation of that error. The objective is to minimize a loss function, and the loss function is the CTC loss.
Preferably, the common fonts include fonts of arial, arialuni and dengb.
The invention has the following advantages and beneficial effects:
1. the invention realizes the real-time acquisition of the running information of the underground vehicle, reduces the high construction cost and the operation and maintenance cost of the radio frequency identification system of the existing underground vehicle, and solves the problems that the departure time and the running direction of the vehicle cannot be acquired in the existing underground vehicle positively identified by adopting a vision technology.
2. The invention requires no additional hardware equipment: it ingeniously acquires car number video information from the existing camera that films the cab from the side (a camera used only to check for driver rule violations and to record train traffic, never before used for car number recognition), solves the problem of accurate car number recognition, overcomes the headlight-reflection problem, needs no sensor, is low-cost and, compared with conventional techniques, convenient to deploy and easy to popularize.
3. Slant preprocessing of the digital pictures during training of the car number recognition module makes the trained model better suited to recognizing the slanted car numbers of underground vehicles, improving recognition accuracy in underground traffic scenes. In addition, a voting scheme after the car number recognizer determines the validity of the recognition result; using the information of preceding and following frames reduces misjudgments and improves precision and robustness.
4. The number is determined through the number frame detection and correction module and the number identification module, the time and the direction of entering and leaving the station are determined through a plurality of frames of information before and after the priori knowledge is synthesized, only two simple models are involved, the calculation amount of the algorithm is small, the method is convenient and fast, and the method has high precision and real-time performance.
5. The region of interest is arranged in front of the car number frame detection and correction module, and then the invalid detection result is filtered according to the area of the car number frame, the width-to-height ratio of the car number frame and the like to determine whether the valid car number frame exists or not, so that useless interference information is abandoned, on one hand, whether the background exists or not is well determined, on the other hand, the burden of subsequent car number identification is reduced, and the speed and the precision are improved.
6. The car number recognition model is obtained by pre-training on 300,000 digital pictures generated with specific fonts and then performing transfer learning on real car number data.
Drawings
Further advantages of the present invention will become apparent to those skilled in the art after having the benefit of the following detailed description of the preferred embodiments and upon reference to the accompanying drawings in which:
fig. 1 is a general flowchart of a vehicle station entering and exiting detection method based on computer vision according to a first embodiment of the present invention.
Fig. 2 is a flowchart of a car number frame detection and correction module in the first embodiment of the present invention.
Fig. 3 is a schematic diagram of an expanding method used by a rectangular frame of a car number frame detection and correction module in the first embodiment of the present invention.
Fig. 4 is a flowchart of a direction detection module according to a first embodiment of the present invention.
Fig. 5 is a block diagram of the computer vision-based underground vehicle station entering and exiting detection system, which comprises: a video acquisition module, a car number frame detection and correction module, a car number identification module, a direction detection module, a time detection module, a communication transmission module, a terminal and a server.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described with reference to specific embodiments, and other advantages and effects of the present invention will be easily apparent to those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the specific embodiments described herein are only intended to explain the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The following description of various embodiments is not to be construed in any way as limiting the subject matter of the appended claims.
The first embodiment is as follows:
As shown in fig. 5, the computer vision-based underground vehicle station entering and exiting detection system acquires station entering and exiting information in real time, the station entering and exiting information including the car number, arrival time, departure time and running direction of the underground vehicle. The detection system comprises a video acquisition module, a car number frame detection and correction module, a car number identification module, a direction detection module, a time detection module and a communication transmission module. Wherein:
The video acquisition module comprises a camera and is used for acquiring video of the head position where the underground vehicle stops and transmitting it to the car number frame detection and correction module in real time. The camera is mounted offset to the side of the underground vehicle, which eliminates the reflection problem at underground stations, allows the existing camera installation to be reused, and makes the running direction of the vehicle detectable. The camera used in this embodiment is a network camera whose RTSP video stream is transmitted to the development-board terminal over a network cable.
The car number frame detection and correction module is used for carrying out frame processing on the video acquired in the video acquisition module in real time to acquire a plurality of continuous video frames, acquiring the coordinates of the car number frame from the current video frame and correcting the car number frame; judging whether a car number frame exists in the corrected video frame picture, and if so, calling the car number frame as an effective car number frame;
the car number identification module is used for identifying the car number, identifying the car number in the effective car number frame by using more than three continuous video frames, and identifying the car number if the video frame is effective;
the direction detection module is used for determining the running direction of the underground vehicle;
preferably, as shown in fig. 4, the method for determining the train moving direction by the direction detection module is as follows: judging according to the accumulated displacement of the abscissa of the center point of the effective train number frame, and if the accumulated displacement is greater than a positive threshold, determining that the train runs from the left side to the right side of the video acquisition module; if the accumulated displacement is smaller than a negative threshold value, the running direction of the train is from right to left; the absolute values of the positive threshold and the negative threshold are 10% -50% of the width of the region of interest, the positive threshold is positive, and the negative threshold is negative.
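The direction rule described above can be sketched as follows; this is an illustrative reconstruction rather than the patent's own code, and the function name, input format and the 30% threshold (chosen from the stated 10%-50% range) are assumptions.

```python
def detect_direction(center_xs, roi_width, threshold_ratio=0.3):
    """Decide travel direction from the horizontal drift of the car number
    frame's center across consecutive valid frames.

    center_xs: x-coordinates of the valid car number frame center, one per frame.
    roi_width: width of the region of interest in pixels.
    threshold_ratio: fraction of the ROI width used as the decision threshold
                     (the patent states 10%-50%; 0.3 is an assumed choice).
    Returns "left_to_right", "right_to_left", or None if undecided.
    """
    if len(center_xs) < 2:
        return None
    # Accumulated displacement of the center abscissa.
    displacement = sum(b - a for a, b in zip(center_xs, center_xs[1:]))
    threshold = threshold_ratio * roi_width
    if displacement > threshold:    # exceeds the positive threshold
        return "left_to_right"
    if displacement < -threshold:   # below the negative threshold
        return "right_to_left"
    return None
```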
The time detection module is used for determining the arrival and departure times of underground vehicles from the outputs of the car number identification module and the car number frame detection and correction module. The arrival time is the time at which a valid car number is first recognized by the car number identification module. The departure time is obtained as follows: if the car number frame detection and correction module detects no valid car number frame in three or more consecutive video frames while the in/out-station flag is 'in station', the time of the first of those consecutive frames is returned as the departure time.
The station entering and exiting information includes: the car number, arrival time, departure time and running direction of the underground vehicle.
the car number frame detection and correction module, the car number identification module and the direction detection module are specifically deployed on the RK3399ProD development board in this embodiment.
The communication transmission module realizes communication between the terminal and the server and transmits the system detection results to the server, the results including the underground vehicle's car number, its station entering and exiting times and its running direction. In this embodiment, the car number, direction, station entering and exiting times, program running signal and other information are transmitted to the server over the TCP communication protocol and recorded in the database, and the car number, direction and station entering and exiting times are displayed in the hall in real time.
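As one possible sketch of the TCP transmission step, the detection record can be serialized and length-prefixed so the server can recover message boundaries from the byte stream (TCP itself has none); the field names and framing scheme here are illustrative assumptions, not taken from the patent.

```python
import json
import struct

def encode_record(car_number, direction, arrival_time, departure_time):
    """Serialize one detection record as JSON and prepend a 4-byte
    big-endian length, a simple framing scheme for a TCP stream.
    Field names are illustrative, not from the patent."""
    payload = json.dumps({
        "car_number": car_number,
        "direction": direction,
        "arrival_time": arrival_time,
        "departure_time": departure_time,
    }).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_record(frame):
    """Inverse of encode_record, as the server side would apply it."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))
```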
Preferably, the car number frame detection and correction module comprises an EAST text detector and a car number frame correction submodule. The correction submodule filters invalid frames and corrects the coordinate deviation of the car number frame to improve speed and precision. Filtering invalid frames uses one or more of the following: setting a region of interest according to the approximate position of the car number when an underground vehicle stops, and setting the retained car number frame's area and aspect-ratio ranges according to the size of the car number. In this embodiment, the region of interest is set at the approximate position of the head car number and sized 600x300; the retained car number frame area ranges from 10000 to 50000 pixels and the width-to-height ratio from 1 to 3. The module's input is the video stream collected by the video acquisition module, and its output is passed to the car number identification module.
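A minimal sketch of the invalid-frame filter described above, assuming axis-aligned (x, y, w, h) boxes; an EAST detector actually outputs quadrilaterals, which would first be reduced to bounding rectangles. The default ROI, area and aspect-ratio values are the ones given in this embodiment.

```python
def filter_boxes(boxes, roi=(0, 0, 600, 300),
                 area_range=(10000, 50000), ratio_range=(1.0, 3.0)):
    """Keep only detector boxes that plausibly contain a car number.
    Each box is (x, y, w, h) in frame coordinates; a box is kept when its
    center lies in the ROI, its pixel area is within area_range, and its
    width-to-height ratio is within ratio_range."""
    rx, ry, rw, rh = roi
    kept = []
    for x, y, w, h in boxes:
        cx, cy = x + w / 2, y + h / 2
        inside_roi = rx <= cx <= rx + rw and ry <= cy <= ry + rh
        area_ok = area_range[0] <= w * h <= area_range[1]
        ratio_ok = h > 0 and ratio_range[0] <= w / h <= ratio_range[1]
        if inside_roi and area_ok and ratio_ok:
            kept.append((x, y, w, h))
    return kept
```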
Preferably, the car number text recognition module recognizes the car number; its input comes from the car number frame detection and correction module and its output is passed to the direction detection module. The output of the car number identification module is further judged by combining the detection and recognition results of two or more adjacent video frames, ensuring the validity of the recognition result and improving accuracy. The car number text recognition model is obtained by training a modified CRNN text recognition network on a self-built picture data set.
Preferably, the training step of the car number recognition model is as follows:
S31: establishing the picture data set; the picture data set comprises 300,000 digital character pictures generated with various common fonts and 15000 video frames, acquired by the video acquisition module, that contain real underground vehicle car numbers. The digital character pictures are generated by cropping a background picture from a video frame and printing digital characters in a specified font on it, keeping reasonable spacing between the characters.
S32: performing back-skew processing on the digital character pictures generated in step S31 using formulas (1) and (2):

(Formulas (1) and (2) appear as images in the original patent document.)

In these formulas, h and w are respectively the height and width of the digital character picture; n is the back-skew coefficient: a value is randomly selected for each picture, and the larger the coefficient, the greater the degree of back-skew; its value range is 0.05h to 0.15h. Alpha is a plane-rotation deviation ratio hyper-parameter, and adjusting alpha adjusts the degree of rotational deviation in the two-dimensional plane; its value is likewise given as an image in the original document.
x is the abscissa of a point on the original image, y is the ordinate of the point on the original image, x 'is the abscissa of the point after post-skewing, and y' is the ordinate of the point after post-skewing; preferably, one or more of the following data enhancement methods may be superimposed on the digital character picture generated in the picture data set in step S31: random noise, rotation, translation;
S33: the pictures obtained from step S32 are used to train a modified CRNN text recognition network; the modifications include one or more of the following: deleting the RNN part of CRNN, modifying the head structure, and modifying the objective function.
S34: and (4) carrying out transfer learning by using real underground vehicle in-and-out video frames.
Preferably, the head structure is the part of the CRNN text recognition network that produces the text recognition result; it makes a prediction from the features extracted by the preceding layers of the network and outputs the recognized text.
Preferably, the objective function computes the error between the prediction result and the ground-truth label and guides the updating of the text recognition model parameters through back-propagation of that error. Training minimizes this loss function, which here is the CTC loss.
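Since the loss is CTC, inference typically ends with a greedy decode that collapses repeated labels and removes blanks; the patent does not show this step, and the sketch below is a standard reconstruction.

```python
def ctc_greedy_decode(timestep_labels, blank=0):
    """Collapse a per-timestep argmax sequence into the final label
    sequence: merge consecutive repeats, then remove blanks. With CTC,
    e.g. [3, 3, 0, 3, 5, 5] decodes to [3, 3, 5], since the blank
    separates the two genuine 3s."""
    out = []
    prev = None
    for label in timestep_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```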
Preferably, the common fonts include fonts of arial, arialuni and dengb.
Preferably, the communication transmission module realizes communication between the terminal and the server, completing information transmission over a communication protocol. The information includes: car number, direction, station entering and exiting times, and a program running signal. The program running signal indicates whether the program is still running normally: while the program runs normally the signal is output; when the program is interrupted, the signal stops. The program running signal is transmitted continuously through the communication transmission module during the whole system operation; detecting the signal indicates the program is running normally, and failing to detect it indicates the program has been interrupted.
As shown in fig. 1, the invention also discloses a computer vision-based underground vehicle station-entering and station-exiting detection method, which is used for detecting the information of the underground vehicle station-entering and station-exiting in real time, wherein the information of the station-entering and station-exiting comprises: the detection method comprises the following steps of:
s11, collecting a video of the position of a vehicle head at a stop of an underground vehicle;
s12, as shown in figure 2, preprocessing video frames of the video in real time to obtain a plurality of continuous video frames, detecting texts of current video frames by adopting an EAST text detector to obtain car number frame coordinates, and correcting the car number frame by filtering invalid frames and correcting coordinate deviation of the car number frame; judging whether a car number frame exists in the corrected video frame picture, if so, calling the car number frame as an effective car number frame, and entering step S13, otherwise, entering step S15; the method for filtering the invalid frame comprises the steps of setting an area of interest according to the approximate position of the car number when an underground vehicle stops, and setting the size and the aspect ratio range of a car number frame to be reserved according to the size of the car number;
The methods used in this embodiment to filter invalid frames are: setting a region of interest and running detection only within it; filtering by car number frame area, where the specific value depends on the camera installation, so in practice the first train is calibrated to obtain the average pixel area and frames outside a range around that value are discarded; and filtering by the aspect ratio of the car number frame, requiring the detected ratio to lie within a set range. If a valid car number frame is detected, car number recognition must be performed on it; a coordinate deviation in the detected frame degrades subsequent recognition accuracy, so the deviation must be corrected. The method adopted in this embodiment is to slightly expand the rectangular frame so that the detector's rectangle does not cut characters off. As shown in fig. 3, the blue frame is the accurate car number region, the red frame is the region returned by the car number detector (which can be seen to truncate the car number), and the yellow frame is the rectangle after the expansion operation, within which the complete car number region is obtained.
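The expansion operation can be sketched as follows; the 10% margin is an assumed value, since the text only says the rectangle is slightly expanded, and the clamping to frame bounds is an added safeguard.

```python
def expand_box(box, frame_w, frame_h, margin_ratio=0.1):
    """Slightly enlarge a detected car number box (x, y, w, h) so it does
    not clip characters, clamping the result to the frame bounds. The
    margin_ratio of 0.1 is an assumed value, not from the patent."""
    x, y, w, h = box
    dx, dy = int(w * margin_ratio), int(h * margin_ratio)
    x0 = max(0, x - dx)
    y0 = max(0, y - dy)
    x1 = min(frame_w, x + w + dx)
    y1 = min(frame_h, y + h + dy)
    return (x0, y0, x1 - x0, y1 - y0)
```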
S13, recognizing the car number in the effective car number frame with the trained car number text recognition model, and judging whether the car number is valid by combining the recognition results of the previous 3 video frames:
if the number of the vehicle can be identified, the number of the vehicle is valid, the station entering and exiting mark is station entering, the station entering time is recorded, and the step S14 is executed;
if the vehicle number cannot be recognized, the vehicle number is invalid, and the step S11 is returned to;
In the experiment, the car number was judged using 3 consecutive frames; 35,100 car numbers were tested with 36 errors, a car number recognition accuracy of 99.89%. Using the single-frame recognition result directly, the accuracy was 98.10%.
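The multi-frame judgment can be sketched as a majority vote over the last three frame-level results; the strict-majority rule and function shape are an assumed interpretation of the voting method described above.

```python
from collections import Counter

def vote_car_number(recent_results, window=3):
    """Decide the final car number by majority vote over the last
    `window` frame-level recognition results (None means nothing was
    recognized in that frame). Returns the winning number if it has a
    strict majority of the window, else None."""
    votes = [r for r in recent_results[-window:] if r is not None]
    if not votes:
        return None
    number, count = Counter(votes).most_common(1)[0]
    return number if count > window // 2 else None
```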
S14, determining the movement direction of the underground vehicle according to the effective vehicle number frame coordinates and the vehicle number identification result;
S15, when a video frame yields no effective car number frame: if three or more consecutive video frames yield no effective car number frame and the in/out-station flag is 'in station', the flag is changed to 'out of station', the departure time is recorded and step S16 is entered; in all other cases the method returns to step S11.
S16, transmitting the identified car number, the running direction, the station entering time and the station exiting time to a server by using a communication transmission module and recording the car number, the running direction, the station entering time and the station exiting time into a database;
preferably, the training step of the car number text recognition model is as follows:
s31: establishing a picture data set; the picture data set comprises digital character pictures generated by various common fonts and real in-and-out video frames of underground vehicles; the method for generating the digital character picture comprises the following steps: intercepting a background picture in the video frame, printing digital characters with specified fonts on the background picture, and ensuring reasonable intervals among the characters;
S32: performing back-skew processing on the digital character pictures generated in step S31 using formulas (1) and (2):

(Formulas (1) and (2) appear as images in the original patent document.)

In these formulas, h and w are respectively the height and width of the digital character picture; n is the back-skew coefficient: a value is randomly selected for each picture, and the larger the coefficient, the greater the degree of back-skew; its value range is 0.05h to 0.15h. Alpha is a plane-rotation deviation ratio hyper-parameter, and adjusting alpha adjusts the degree of rotational deviation in the two-dimensional plane; its value is likewise given as an image in the original document.
x is the abscissa of a point on the original image, y is the ordinate of the point on the original image, x 'is the abscissa of the point after post-skewing, and y' is the ordinate of the point after post-skewing; preferably, one or more of the following data enhancement methods may be superimposed on the digital character picture generated in the picture data set in step S31: random noise, rotation, translation;
S33: the pictures obtained from step S32 are used to train a modified CRNN text recognition network; the modifications include one or more of the following: deleting the RNN part of CRNN, modifying the head structure, and modifying the objective function.
S34: and (4) carrying out transfer learning by using real underground vehicle in-and-out video frames.
Preferably, the head structure is the part of the CRNN text recognition network that produces the text recognition result; it makes a prediction from the features extracted by the preceding layers of the network and outputs the recognized text.
Preferably, the objective function computes the error between the prediction result and the ground-truth label and guides the updating of the text recognition model parameters through back-propagation of that error. Training minimizes this loss function, which here is the CTC loss.
Preferably, the common fonts include fonts of arial, arialuni and dengb.
Preferably, the method for determining the moving direction of the underground vehicle in the step S14 is as follows: judging according to the accumulated displacement of the abscissa of the center point of the effective train number frame, and if the accumulated displacement is greater than a positive threshold, determining that the train runs from the left side to the right side of the video acquisition module; if the accumulated displacement is smaller than a negative threshold value, the running direction of the train is from right to left; the absolute values of the positive threshold and the negative threshold are 10% -50% of the width of the region of interest, the positive threshold is positive, and the negative threshold is negative.
Preferably, the train arrival time in step S13 is obtained as follows: if the recognized car number is judged valid, the time at which the valid car number was first recognized is taken as the arrival time. The train departure time in step S15 is obtained as follows: if no valid car number frame is detected in three or more consecutive video frames, the time of the first of those frames is returned as the departure time. An alternative way to obtain the arrival and departure times in steps S13 and S15 is to detect motion with a frame-difference method: if motion is detected and a valid car number is detected, an arrival is indicated and the arrival time is returned; if motion is detected and no valid car number is detected, a departure is indicated and the departure time is returned.
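The arrival/departure logic above can be sketched as a small state machine driven by per-frame results; the class and field names are illustrative, and the three-frame absence rule follows the text.

```python
class StationStateMachine:
    """Track arrivals and departures from per-frame detection results.
    update() takes the frame timestamp and the recognized valid car
    number for that frame (None if no valid car number frame)."""

    def __init__(self, absence_frames=3):
        self.absence_frames = absence_frames
        self.in_station = False
        self.missing = []     # timestamps of consecutive empty frames
        self.events = []      # (event, time, car_number)
        self.car_number = None

    def update(self, frame_time, valid_number):
        if valid_number is not None:
            self.missing = []
            if not self.in_station:
                # Arrival time = first frame with a valid car number.
                self.in_station = True
                self.car_number = valid_number
                self.events.append(("arrival", frame_time, valid_number))
        else:
            self.missing.append(frame_time)
            if self.in_station and len(self.missing) >= self.absence_frames:
                # Departure time = first of the consecutive empty frames.
                self.in_station = False
                self.events.append(("departure", self.missing[0], self.car_number))
                self.missing = []
```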
Preferably, during the whole execution of the method, continuously transmitted heartbeat packets are used to detect whether the program is running normally. A heartbeat packet is a self-defined command word that the terminal sends to the server at fixed intervals while the program runs normally and the terminal-server connection is up, analogous to a heartbeat. One way to keep the program running continuously throughout execution is to monitor the transmitted heartbeat packets to check that the program is running and that the terminal is stably connected to the server; if a problem occurs, a watchdog-like mechanism automatically restarts the program and reports the event, ensuring stable and reliable operation.
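The watchdog side of the heartbeat mechanism can be sketched as a liveness check over received heartbeat timestamps; the tolerance factor is an assumed parameter not specified in the patent.

```python
def program_alive(heartbeat_times, now, interval, tolerance=2.5):
    """Watchdog-side check: the terminal sends a heartbeat every
    `interval` seconds; if no heartbeat has arrived within
    tolerance * interval seconds, the program (or the link) is assumed
    dead and a restart would be triggered. `tolerance` is an assumed
    parameter, not from the patent."""
    if not heartbeat_times:
        return False
    return (now - max(heartbeat_times)) <= tolerance * interval
```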
Comparative example:
When, as in patent CN112418097A, the video is shot from directly in front of the underground vehicle's head and used for real-time detection of station entering and exiting information, the recognition rate is very low because of lighting: of 2000 car numbers in the experiment, only 1002 were recognized accurately, a car number recognition accuracy of only 50.13%. Moreover, with front-facing shooting the running direction of the underground vehicle cannot be determined, because the displacement of the car number frame cannot be obtained.

Claims (10)

1. A computer vision-based underground vehicle station entering and exiting detection method, characterized in that the detection method detects underground vehicle station entering and exiting information in real time, the information including the car number, arrival time, departure time and running direction of the underground vehicle; the detection method comprises the following steps:
s11, collecting a video of the side face of the head position of the underground vehicle;
s12, carrying out frame processing on the video in real time to obtain a plurality of continuous video frames, and obtaining the coordinates of the car number frame and correcting the car number frame by taking the current video frame; judging whether a car number frame exists in the corrected video frame picture, if so, calling the car number frame as an effective car number frame, and entering step S13, otherwise, entering step S15; preferably, an EAST text detector is adopted to obtain the coordinates of the car number frame; preferably, the method for correcting the car number frame comprises the steps of filtering an invalid frame and correcting the coordinate deviation of the car number frame; the method for filtering the invalid frame comprises the steps of setting an area of interest according to the approximate position of the car number when an underground vehicle stops, and setting the size and the aspect ratio range of the car number frame to be reserved according to the size of the car number.
S13, identifying the car number in the effective car number frame by using a trained car number text identification model; judging whether the car number is valid or not by combining the car number identification results of more than two adjacent continuous video frames,
if the number of the vehicle can be identified, the number of the vehicle is valid, the station entering and exiting mark is station entering, the station entering time is recorded, and the step S14 is executed;
if the vehicle number cannot be recognized, the vehicle number is invalid, and the step S11 is returned to;
s14, determining the movement direction of the underground vehicle according to the effective vehicle number frame coordinates;
and S15, when the video frames do not acquire the effective vehicle number frame, if the effective vehicle number frame is not acquired by more than three continuous video frames and the in-out station mark is in station, modifying the in-out station mark into out station, recording the out station time, and returning to the step S11 under other conditions.
2. The computer vision-based underground vehicle station entering and exiting detection method of claim 1, further comprising the step of transmitting the identified car number, running direction, station entering time and station exiting time to a server and recording the same in a database.
3. The method for detecting the entrance and the exit of an underground vehicle based on computer vision as claimed in claim 1, wherein the training step of the car number text recognition model is as follows:
s31: establishing a picture data set; the picture data set comprises digital character pictures generated by various common fonts and real in-and-out video frames of underground vehicles; the method for generating the digital character picture comprises the following steps: intercepting a background picture in the video frame, printing digital characters with specified fonts on the background picture, and ensuring reasonable intervals among the characters;
s32: performing back-skew processing on the digital character pictures generated in the picture data set in step S31 using formulas (1) and (2):

(Formulas (1) and (2) appear as images in the original patent document.)

In these formulas, h and w are respectively the height and width of the digital character picture; n is the back-skew coefficient: a value is randomly selected for each picture, and the larger the coefficient, the greater the degree of back-skew; its value range is 0.05h to 0.15h. Alpha is a plane-rotation deviation ratio hyper-parameter, and adjusting alpha adjusts the degree of rotational deviation in the two-dimensional plane; its value is likewise given as an image in the original document.
x is the abscissa of a point on the original image, y is the ordinate of the point on the original image, x 'is the abscissa of the point after post-skewing, and y' is the ordinate of the point after post-skewing; preferably, one or more of the following data enhancement methods may be superimposed on the digital character picture generated in the picture data set in step S31: random noise, rotation, translation;
s33: the picture obtained through the processing of the step S32 is trained by adopting a CRNN text recognition correction network; the correction method comprises one or more of the following: deleting RNN part in CRNN, modifying head structure and modifying target function.
S34: and (4) carrying out transfer learning by using real underground vehicle in-and-out video frames.
4. A method for detecting entrance and exit of underground vehicle based on computer vision as claimed in claim 1, wherein the method for determining the moving direction of underground vehicle in step S14 is: judging according to the accumulated displacement of the abscissa of the center point of the effective train number frame, and if the accumulated displacement is greater than a positive threshold, determining that the train runs from the left side to the right side of the video acquisition module; if the accumulated displacement is smaller than a negative threshold value, the running direction of the train is from right to left; the absolute values of the positive threshold and the negative threshold are 10% -50% of the width of the region of interest, the positive threshold is positive, and the negative threshold is negative.
5. The method for detecting the arrival and departure of underground vehicles based on computer vision as claimed in claim 1, wherein the method for obtaining the arrival time of the train in step S13 is: if the recognized vehicle number is determined to be valid, taking the time of the valid vehicle number recognized for the first time as the arrival time; the method for acquiring the train outbound time in the step S15 is as follows: if the valid vehicle number frame is not detected in more than three continuous video frames, returning the time of the first frame in the continuous video frames as the outbound time; another method for acquiring the inbound and outbound time in steps S13 and S15 is: detecting the motion by a frame difference method, if the motion is detected and a valid vehicle number is detected, indicating the station entry, and returning to the station entry time; if the movement is detected and the valid vehicle number is not detected, the station is out, and the station returns to the station-out time.
6. The computer vision-based underground vehicle station entering and exiting detection method as claimed in claim 1, wherein the method is implemented by continuously transmitting heartbeat packets to detect whether the program is operating normally.
7. A computer vision-based underground vehicle station entering and exiting detection system, characterized in that the detection system acquires underground vehicle station entering and exiting information in real time, the information including the car number, arrival time, departure time and running direction; the system comprises a video acquisition module, a car number frame detection and correction module, a car number identification module, a direction detection module and a time detection module; wherein:
the video acquisition module is used for acquiring videos of the side face of the head position where the underground vehicle stops and transmitting the video information to the car number frame detection and correction module in real time; the video acquisition module is arranged on the side surface of the underground vehicle;
the car number frame detection and correction module performs framing on the video acquired by the video acquisition module in real time to obtain a plurality of consecutive video frames, obtains the coordinates of the car number frame from the current video frame and corrects the car number frame; it then judges whether a car number frame exists in the corrected video frame and, if so, that frame is called a valid car number frame;
the car number recognition module recognizes the car number: it recognizes the car number within the valid car number frame over more than three consecutive video frames, and outputs the car number if the result is determined to be valid;
the direction detection module determines the running direction of the underground vehicle; preferably, the direction detection module detects the running direction as follows: it judges according to the accumulated displacement of the abscissa of the centre point of the valid car number frame; if the accumulated displacement is greater than a positive threshold, the train is running from the left side to the right side of the video acquisition module; if the accumulated displacement is smaller than a negative threshold, the train is running from right to left; the absolute values of the positive and negative thresholds are 10%-50% of the width of the region of interest, the positive threshold being positive and the negative threshold negative.
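A minimal sketch of the accumulated-displacement rule above; the 30% threshold ratio is one choice within the patent's stated 10%-50% range, and the function name is illustrative:

```python
def travel_direction(center_xs, roi_width, thresh_ratio=0.3):
    """Decide running direction from the x-coordinates of the valid car
    number frame's centre across consecutive frames (claim 7's rule).
    thresh_ratio is an assumed value inside the patent's 10%-50% range."""
    # Accumulated displacement: sum of per-frame movements of the centre.
    disp = sum(b - a for a, b in zip(center_xs, center_xs[1:]))
    thresh = thresh_ratio * roi_width
    if disp > thresh:
        return "left_to_right"
    if disp < -thresh:
        return "right_to_left"
    return "undetermined"
```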
the time detection module determines the arrival time and the departure time from the outputs of the car number recognition module and the car number frame detection and correction module; preferably, the arrival time is the time at which a valid car number is first recognized by the car number recognition module; the departure time is obtained as follows: when the car number frame detection and correction module detects no valid car number frame in more than three consecutive video frames while the station entering/exiting flag is "entered", the time of the first frame in those consecutive video frames is returned as the departure time.
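The departure-time rule above (more than three consecutive frames without a valid car number frame) amounts to a small state machine; the class and attribute names below are illustrative, not from the patent:

```python
class DepartureDetector:
    """Returns the departure timestamp once more than `min_missing`
    consecutive frames lack a valid car number frame (claim 7's rule)."""

    def __init__(self, min_missing=3):
        self.min_missing = min_missing
        self.missing_since = None   # timestamp of first frame without a valid box
        self.missing_count = 0

    def update(self, has_valid_box, timestamp):
        """Feed one frame's detection result; returns the departure time
        (the first missing frame's timestamp) once triggered, else None."""
        if has_valid_box:
            self.missing_since = None
            self.missing_count = 0
            return None
        if self.missing_count == 0:
            self.missing_since = timestamp
        self.missing_count += 1
        if self.missing_count > self.min_missing:   # "more than three" frames
            return self.missing_since
        return None
```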
8. The system of claim 7, further comprising a communication transmission module for transmitting the detection results for the underground vehicle to the server, wherein the system detection result comprises the car number of the underground vehicle, its arrival and departure times, and its running direction.
Preferably, the system detection result further comprises a program running signal, which indicates whether the program is still running normally: when the program runs normally, the program running signal is output; when the program is interrupted, no signal is output. The program running signal is transmitted continuously through the communication transmission module while the system is running; detection of the signal indicates that the program is running normally, and absence of the signal indicates that the program has been interrupted.
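On the receiving side, the program-running-signal check can be sketched as a timestamp watchdog; the 5-second timeout and the class name are assumptions, since the patent does not specify a transmission interval:

```python
class HeartbeatMonitor:
    """Server-side liveness check per claim 8: the program is presumed
    interrupted if no heartbeat has arrived within `timeout` seconds.
    The timeout value is an illustrative assumption."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_beat = None   # time of the most recent heartbeat, if any

    def beat(self, now):
        """Record a received heartbeat at time `now`."""
        self.last_beat = now

    def is_alive(self, now):
        """True while the last heartbeat is within the timeout window."""
        return self.last_beat is not None and (now - self.last_beat) <= self.timeout
```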
9. The computer-vision-based underground vehicle station entering and exiting detection system as claimed in claim 7, wherein the car number frame detection and correction module comprises an EAST text detector and a car number frame correction module; the car number frame correction module filters invalid frames and corrects the coordinate deviation of the car number frame so as to improve speed and precision; invalid frames are filtered by setting a region of interest according to the approximate position of the car number when an underground vehicle stops, and by setting the size and aspect-ratio ranges of car number frames to be retained according to the size of the car number; the input of the car number frame detection and correction module is the video collected by the video acquisition module, and its output is transmitted to the car number recognition module.
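The invalid-frame filtering described in claim 9 (region of interest plus size and aspect-ratio ranges) might look like the following; all threshold values and names are illustrative assumptions:

```python
def filter_boxes(boxes, roi, size_range=(20, 200), aspect_range=(2.0, 8.0)):
    """Keep only detected boxes whose centre lies inside the region of
    interest and whose width and width/height ratio fall in plausible
    car-number ranges. All ranges here are illustrative assumptions.
    boxes: list of (x, y, w, h); roi: (x, y, w, h)."""
    rx, ry, rw, rh = roi
    kept = []
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2, y + h / 2
        if not (rx <= cx <= rx + rw and ry <= cy <= ry + rh):
            continue  # centre outside the region of interest
        if not (size_range[0] <= w <= size_range[1]):
            continue  # implausible width for a car number
        if h == 0 or not (aspect_range[0] <= w / h <= aspect_range[1]):
            continue  # car numbers are wide, flat text boxes
        kept.append((x, y, w, h))
    return kept
```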
10. The system of claim 7, wherein the car number recognition module recognizes the car number using a car number text recognition model; its input comes from the car number frame detection and correction module, and its output is transmitted to the direction detection module. The output of the car number recognition module may be further judged by combining the detection and recognition results of more than two adjacent video frames, ensuring the validity of the recognition result and improving accuracy. The car number text recognition model is trained on a self-built picture data set using a CRNN text recognition network with corrections, through the following steps:
S101: build a picture data set; the data set comprises digital character pictures generated with various common fonts and real video frames of underground vehicles entering and leaving the station; a digital character picture is generated by cropping a background picture from a video frame, printing digital characters in a specified font on the background picture, and keeping reasonable spacing between the characters;
S102: apply back-skew processing to the digital character pictures generated in step S101 using formulas (3) and (4):

[Formulas (3) and (4) are published as images in the original document; they map each point (x, y) of the original picture to its back-skewed position (x′, y′) in terms of h, w, n and α.]

In the above formulas, h and w are the height and width of the digital character picture, respectively; n is the back-skew coefficient: a value is randomly selected for each picture, a larger coefficient giving a larger back-skew, with a value range of 0.05h to 0.15h; α is a plane-rotation deviation proportion hyperparameter whose adjustment controls the degree of rotational deviation in the two-dimensional plane [its value is also given as an image in the original document]; x and y are the abscissa and ordinate of a point on the original picture, and x′ and y′ are the abscissa and ordinate of the point after back-skew; preferably, one or more of the following data enhancement methods may be superimposed on the digital character pictures generated in step S101: random noise, rotation, translation;
S103: train on the pictures obtained in step S102 using a CRNN text recognition network with corrections; the corrections comprise one or more of the following: deleting the RNN part of the CRNN, modifying the head structure, and modifying the objective function.
S104: perform transfer learning using real video frames of underground vehicles entering and leaving the station.
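Since formulas (3) and (4) are published only as images, the exact back-skew mapping of step S102 is not recoverable here; the sketch below shows one plausible horizontal-shear form consistent with the surrounding description (points higher in the picture shift further, slant controlled by n, planar offset by α). It is an assumed reconstruction, not the patent's formulas:

```python
def back_skew(x, y, h, n, alpha=0.0):
    """Assumed horizontal shear for step S102: points nearer the top of
    the picture (smaller y) shift further right; n (in [0.05*h, 0.15*h]
    per the patent) controls the slant and alpha adds a small planar
    rotation offset. This is an illustrative reconstruction, NOT the
    patent's exact formulas (3) and (4)."""
    x_new = x + n * (h - y) / h + alpha * y
    y_new = y
    return x_new, y_new
```

Applying this per-pixel (or as an affine warp) to a generated character picture yields the italic-like distortion that the training set is meant to cover.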
CN202110822470.6A 2021-07-21 2021-07-21 Underground vehicle station entering and exiting detection system and method based on computer vision Withdrawn CN113591643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110822470.6A CN113591643A (en) 2021-07-21 2021-07-21 Underground vehicle station entering and exiting detection system and method based on computer vision


Publications (1)

Publication Number Publication Date
CN113591643A true CN113591643A (en) 2021-11-02

Family

ID=78248549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110822470.6A Withdrawn CN113591643A (en) 2021-07-21 2021-07-21 Underground vehicle station entering and exiting detection system and method based on computer vision

Country Status (1)

Country Link
CN (1) CN113591643A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792575A (en) * 2021-07-23 2021-12-14 浙江大学绍兴微电子研究中心 Underground vehicle station entering and exiting detection system and method based on computer vision
CN115056828A (en) * 2022-05-13 2022-09-16 卡斯柯信号有限公司 Intelligent test system and method for train running interval
CN115056828B (en) * 2022-05-13 2024-03-29 卡斯柯信号有限公司 Intelligent test system and method for train operation interval

Similar Documents

Publication Publication Date Title
CN108898044B (en) Loading rate obtaining method, device and system and storage medium
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
CN101510356B (en) Video detection system and data processing device thereof, video detection method
CN102759347B (en) Online in-process quality control device and method for high-speed rail contact networks and composed high-speed rail contact network detection system thereof
CN106541968B (en) The recognition methods of the subway carriage real-time prompt system of view-based access control model analysis
CN105550654B (en) Bullet train image capturing system, real-time license number detection system and method
CN110298300B (en) Method for detecting vehicle illegal line pressing
CN103808723A (en) Exhaust gas blackness automatic detection device for diesel vehicles
CN100435160C (en) Video image processing method and system for real-time sampling of traffic information
CN113591643A (en) Underground vehicle station entering and exiting detection system and method based on computer vision
CN109747681A (en) A kind of train positioning device and method
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
CN111591321A (en) Continuous recognition and correction device and method for contents of track pole number plate
CN114463372A (en) Vehicle identification method and device, terminal equipment and computer readable storage medium
CN115657002A (en) Vehicle motion state estimation method based on traffic millimeter wave radar
CN113011252B (en) Rail foreign matter intrusion detection system and method
CN103778790A (en) Traffic flow square-wave statistical method based on video sequence
CN113792575A (en) Underground vehicle station entering and exiting detection system and method based on computer vision
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium
WO2020194570A1 (en) Sign position identification system and program
CN111179452A (en) ETC channel-based bus fee deduction system and method
CN116682268A (en) Portable urban road vehicle violation inspection system and method based on machine vision
CN113744535B (en) Dynamic coordinate synchronization method and device for RFID (radio frequency identification) tag and video inspection vehicle
CN115824231A (en) Intelligent positioning management system for automobile running
WO2022267266A1 (en) Vehicle control method based on visual recognition, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211102