CN112215089A - Video identification method of subway color light signal machine - Google Patents


Info

Publication number
CN112215089A
CN112215089A
Authority
CN
China
Prior art keywords
area
color
lamp
signal
subway
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010995479.2A
Other languages
Chinese (zh)
Inventor
Wang Siyuan
Jiang Yaodong
Han Hailiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casco Signal Ltd
Original Assignee
Casco Signal Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casco Signal Ltd filed Critical Casco Signal Ltd
Priority to CN202010995479.2A priority Critical patent/CN112215089A/en
Publication of CN112215089A publication Critical patent/CN112215089A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a video identification method for subway color light signals, used by an unmanned vehicle to identify trackside color light signals in a subway tunnel environment. Compared with the prior art, the method offers accurate identification, a long identification distance, fast execution, and good real-time performance.

Description

Video identification method of subway color light signal machine
Technical Field
The invention relates to the field of signal lamp identification, and in particular to a video identification method for subway color light signals.
Background
With the continuous development of automatic driving technology for rail transit, ATO (Automatic Train Operation) is gradually giving way to UTO (Unattended Train Operation), and no driver remains on board to keep a lookout. Some newly designed rail transit vehicles even eliminate the cab altogether, miniaturizing the driver's controls and concealing them from view. This trend has become the mainstream direction of subsequent development.
Unmanned rail transit operation relies mainly on a mature, highly reliable CBTC (Communication-Based Train Control) system, a large and complex system composed of vehicle-mounted equipment, trackside equipment, and a dispatching center, within which driving control information is transmitted over network communication. However, when the CBTC signaling system is degraded or disabled by a fault, the vehicle must still drive strictly according to the indications of the trackside color light signals and must not pass a red signal. On an unattended vehicle, a set of equipment is therefore needed to replace the driver's eyes in identifying trackside signals, ensuring that the train never runs a red light.
At present, signal lamp identification research is mostly concentrated on ground road traffic, where the color identification of red and green lights at intersections is relatively mature. A few papers address the identification of mainline railway signals, but these face the same outdoor conditions as road traffic and do not consider interference factors such as reflections from tunnel walls.
Video identification of color light signals inside a subway tunnel has always been difficult: the signal is subject to severe diffraction and glare in the tunnel, making machine identification hard, and some existing color light identification methods perform poorly and cannot work normally in this environment.
Common signal lamp identification is performed outdoors, where the view is wide, there are no reflective surfaces nearby, and the mounting position of the lamp is relatively fixed. Existing techniques usually train a classifier such as an SVM on features, first locating the signal lamp roughly by detecting the shape of its backplate and then extracting and identifying the color of its light-emitting area. Observation of field video data shows that outdoor lighting conditions are good: even at night, street lighting keeps the outline of the signal lamp clear enough for identification and localization, and the light-emitting area of the color lamp has vivid color that is easy to extract and recognize. In a tunnel, by contrast, the light is relatively dim and the original methods struggle to locate the signal lamp housing; at the same time, because the lamp itself is very bright, obvious halos and flicker appear around it in the captured video, and the light-emitting area at the center of the lamp appears white, which hinders recognition. In addition, because the subway tunnel is narrow, diffuse reflections of the light-emitting area often appear on the walls and ceiling, which complicates localization when the signal itself cannot be seen clearly.
Considering the actual conditions in a tunnel, much equipment hangs on the tunnel walls and there are many curves, both of which can occlude distant signal lamps; moreover, subway trains travel fast. To ensure safe operation, distant signal lamps must be identified quickly and accurately, recognized as soon as the lens captures them, which places high demands on the real-time performance of the method.
Disclosure of Invention
The present invention aims to provide a video identification method for subway color light signals that overcomes the above drawbacks of the prior art.
The purpose of the invention can be realized by the following technical scheme:
A video identification method for subway color light signals, used by an unmanned vehicle to identify trackside color light signals in a subway tunnel environment.
Preferably, the method comprises the steps of:
step 1, a telephoto (far-focus) camera on the vehicle captures video data ahead of the running vehicle in real time;
step 2, segmenting the images in the video data using the HSV color space;
step 3, performing binary classification on the gray-scale image obtained after segmentation, thereby screening out the light-emitting areas in the image;
step 4, based on the screened light-emitting areas, screening out the red or green areas, i.e. the color lamp halo areas, and executing step 5; if there is no screening result, concluding that no potential signal lamp area exists, and ending;
step 5, among the multiple screened luminous color areas in the image, finding the area contours and storing the position information of the three largest connected areas, i.e. the potential signal lamp positions;
step 6, marking the minimum enclosing rectangle of each of the three screened connected areas, binarizing the original image to obtain a black-and-white image, and cropping three candidate areas for identification;
step 7, finding circles in the candidate areas so as to determine the exact area containing the signal lamp, and executing step 8; if no circle can be found, treating the colored area as a color lamp reflection on the wall, and ending;
step 8, finding, among the centers of the recognized circles, the circle closest to the center point of the connected area where the signal lamp is located;
and step 9, marking the locked color lamp area according to the finally determined circle coordinates and radius, and printing the color lamp color information and driving instruction on the screen.
Preferably, the camera in step 1) is a telephoto (far-focus) camera.
Preferably, the binary classification of the image gray-scale map in step 3) is performed using Otsu's method (maximum between-class variance, OTSU).
Preferably, the light-emitting areas in step 3) are high-brightness areas.
Preferably, the color lamp halo area in step 4) is obtained by threshold screening, constructing red and green masks.
Preferably, the area contours in step 5) are found by cv2.findContours in OpenCV.
Preferably, the circles in step 7) are found in the candidate areas by Hough transform.
Preferably, the circle closest to the center point of the connected area where the signal lamp is located in step 8) is found by a programmed Close_to_Center function.
Preferably, the halo in step 8) is roughly symmetrical, and the light-emitting area of the color lamp is located at the center of the halo.
Compared with the prior art, the invention has the following advantages:
1. The method fills the gap in signal lamp identification technology for subways and lays a solid foundation for later development and improvement.
2. For the special environment inside a subway, it solves the influence of dim light on recognizing and locating the signal housing, and the influence of wall reflections on recognizing and locating the color lamp's light-emitting area; starting from the halo produced by the lit signal, accurate positioning is achieved by screening layer by layer with the algorithm.
3. The identification distance is long. Because the appearance of the signal need not be recognized first, the identification distance is greatly increased, and the result can be displayed quickly by capturing the light-emitting area. In tests inside a subway tunnel, on straight unobstructed track, the color light signal and its color state can be accurately identified and located beyond 200 meters, leaving the train sufficient safe braking distance.
4. The execution speed is fast. By dropping the traditional recognition and localization of the signal housing and cropping only the specific areas to be processed, the computational cost is reduced and the algorithm's real-time performance is ensured.
5. Operation is stable. Identification is based entirely on video data captured in real time, runs independently, and is unaffected by other equipment and systems.
Drawings
FIG. 1 is a process flow diagram of the method of the present invention;
FIG. 2 illustrates the red light recognition effect of the present invention;
FIG. 3 illustrates the green light recognition effect of the present invention;
FIG. 4 is the multi-light recognition effect at the turnout of the present invention;
FIG. 5 illustrates the curve recognition effect of the present invention;
FIG. 6 illustrates the remote identification effect of the present invention;
FIG. 7 illustrates the HSV color space partitioning effect of the present invention;
FIG. 8 illustrates the OTSU threshold segmentation effect of the present invention;
FIG. 9 illustrates the red light emitting area extraction effect of the present invention;
FIGS. 10 and 11 illustrate the effect of marking three maximum luminous connected regions according to the present invention;
FIG. 12 illustrates an original binary image effect according to the present invention;
FIG. 13 shows Hough transform circle detection of the present invention, marking the circle closest to the center point.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, shall fall within the scope of protection of the present invention.
The principle of the invention is shown in FIG. 1. Real-time video of the track ahead is captured by a telephoto lens installed in the cab and fed to the algorithm for continuous analysis and identification. Given the special conditions inside the tunnel, the algorithm first performs HSV color space segmentation and then screens the highlight areas in the tunnel with Otsu's method (maximum between-class variance), greatly reducing the area to be examined and the computational cost. Red and green masks are then created by thresholding, filtering out the colored light-emitting areas around the color lamps and those reflected off the walls. The findContours algorithm is called to obtain area contour information; the connected areas are sorted by size in descending order and the top 3 are taken for further analysis (in practice the halo around a signal lamp covers a large area, so taking the top 3 removes the interference of small reflective patches formed by objects in the tunnel and further narrows the range to be examined); if there are fewer than 3 colored light-emitting areas in total, all of them are taken. Circle detection by Hough transform is then applied to the final candidate areas: a circular area is a color lamp's light-emitting area, while a non-circular area is a reflection. Finally, the circular area is marked, accurately locking the position of the signal lamp.
Aiming at the special environment inside the tunnel and the characteristics of the collected video data, the method applies a series of processing steps to the image, ultimately eliminating external interference and identifying the signal lamp accurately and quickly.
The specific processing steps are as follows:
and step S1, the far-focus camera captures video data in front of the running vehicle in real time.
Step S2: segment the image using the HSV color space.
Step S3: classify the gray-scale image into two classes by Otsu's method (maximum between-class variance), and select the high-brightness regions, which are the light-emitting areas in the image.
Step S4: based on the screened light-emitting areas, construct red and green masks by threshold screening and extract the red or green areas, i.e. the color lamp halo areas; if there is no result, conclude that no potential signal lamp area exists and skip the subsequent processing.
Step S5: find the area contours with cv2.findContours in OpenCV and store the position information of the three largest connected areas, i.e. the potential signal lamp positions.
Step S6: mark the minimum enclosing rectangle of each of the three screened connected areas, binarize the original image into a black-and-white image, and crop the three candidate areas for identification.
Step S7: find circles in the candidate areas by Hough transform so as to determine the exact area containing the signal lamp; if no circle can be found, treat the colored area as a color lamp reflection on the wall and skip the subsequent processing.
Step S8: because of light and other environmental factors, the Hough transform may find several circles in the color lamp's light-emitting area, making the positioning inexact; a Close_to_Center function is therefore written to find, among the identified circle centers, the circle closest to the center point of the connected area where the signal lamp is located (the halo is roughly symmetrical, and the color lamp's light-emitting area lies at the center of the halo).
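The Close_to_Center selection of step S8 reduces to a nearest-point search. A pure-Python sketch follows; the function name comes from the patent, but the signature is an assumption:

```python
def close_to_center(circles, center):
    """Among detected circles (x, y, r), return the one whose
    center is nearest to the connected area's center point."""
    cx, cy = center
    return min(circles,
               key=lambda c: (c[0] - cx) ** 2 + (c[1] - cy) ** 2)

# Three candidate circles found by the Hough transform
circles = [(95, 98, 12), (120, 80, 9), (101, 100, 11)]
best = close_to_center(circles, (100, 100))  # -> (101, 100, 11)
```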
Step S9: mark the locked color lamp area according to the finally determined circle coordinates and radius, and print the color lamp color information and driving instruction on the screen.
The invention is described in detail below with reference to the following figures and specific examples:
As shown in FIG. 2 and FIG. 3, the recognition results for red and green signal lamps in the subway show that the light-emitting area of the signal lamp is accurately marked with a circle; a color label can be displayed on the screen to prompt the driver to proceed or stop, and with the train's braking system integrated at a later stage, no driver intervention in braking will be needed. It can also be seen that the single-frame processing time is only 6-9 ms.
As shown in FIG. 4, multiple signal lamps are often placed at a turnout; the algorithm can identify and mark all of them, and in later practical deployment it can be restricted to identify only the signal on the right side of the track, or only the signal nearest to the track, according to the signal positions.
As shown in FIG. 5 and FIG. 6: FIG. 5 shows the algorithm accurately identifying and locating a signal lamp in a curve area that has only just entered the shot; in FIG. 6 the signal is very far away and barely visible to the naked eye, yet the algorithm still identifies and locates it accurately, greatly increasing the warning lead time and ensuring driving safety.
As shown in FIG. 7 and FIG. 8, which respectively show the HSV color space segmentation effect and the OTSU threshold segmentation effect, only the high-brightness regions remain after threshold segmentation, including the signal lamp area, the lights on the wall, and the bright surfaces of the rails, eliminating the interference of other regions and reducing the complexity of image processing.
As shown in FIG. 9, a mask over the HSV red threshold range extracts the red region from the light-emitting areas, further narrowing the image processing range.
As shown in FIG. 10 and FIG. 11, the algorithm marks the three largest connected regions based on the screened red light-emitting area, marking the region containing the signal lamp and the signal lamp's position.
As shown in FIG. 12, in the binary image the light-emitting area of the signal lamp is very distinct, a clear white circle; this processing greatly improves the accuracy of circle detection by the Hough transform.
As shown in FIG. 13, three circles are detected in the largest connected area, and the Close_to_Center function in the algorithm screens out the circle closest to the center point (the yellow marked area), which is the light-emitting area of the signal lamp.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A video identification method for subway color light signals, used by an unmanned vehicle to identify trackside color light signals in a subway tunnel environment, characterized in that, aiming at the special environment in the tunnel and the characteristics of the collected video data, the method applies a series of processing steps to the image, ultimately eliminating external interference and identifying the signal lamp accurately and quickly.
2. The method of claim 1, wherein the method comprises the steps of:
step 1, a telephoto (far-focus) camera on the vehicle captures video data ahead of the running vehicle in real time;
step 2, segmenting the images in the video data using the HSV color space;
step 3, performing binary classification on the gray-scale image obtained after segmentation, thereby screening out the light-emitting areas in the image;
step 4, based on the screened light-emitting areas, screening out the red or green areas, i.e. the color lamp halo areas, and executing step 5; if there is no screening result, concluding that no potential signal lamp area exists, and ending;
step 5, among the multiple screened luminous color areas in the image, finding the area contours and storing the position information of the three largest connected areas, i.e. the potential signal lamp positions;
step 6, marking the minimum enclosing rectangle of each of the three screened connected areas, binarizing the original image to obtain a black-and-white image, and cropping three candidate areas for identification;
step 7, finding circles in the candidate areas so as to determine the exact area containing the signal lamp, and executing step 8; if no circle can be found, treating the colored area as a color lamp reflection on the wall, and ending;
step 8, finding, among the centers of the recognized circles, the circle closest to the center point of the connected area where the signal lamp is located;
and step 9, marking the locked color lamp area according to the finally determined circle coordinates and radius, and printing the color lamp color information and driving instruction on the screen.
3. The video identification method of a subway color light signal as claimed in claim 2, wherein the camera in step 1) is a telephoto (far-focus) camera.
4. The video identification method of a subway color light signal as claimed in claim 2, wherein the binary classification of the image gray-scale map in step 3) is performed using Otsu's method (maximum between-class variance, OTSU).
5. The video identification method of a subway color light signal as claimed in claim 2, wherein the light-emitting areas in step 3) are high-brightness areas.
6. The video identification method of a subway color light signal as claimed in claim 2, wherein the color lamp halo area in step 4) is obtained by threshold screening, constructing red and green masks.
7. The video identification method of a subway color light signal as claimed in claim 2, wherein the area contours in step 5) are found by cv2.findContours in OpenCV.
8. The video identification method of a subway color light signal as claimed in claim 2, wherein the circles in step 7) are found in the candidate areas by Hough transform.
9. The video identification method of a subway color light signal as claimed in claim 2, wherein the circle closest to the center point of the connected area where the signal lamp is located in step 8) is found by a programmed Close_to_Center function.
10. The video identification method of a subway color light signal as claimed in claim 2, wherein the halo in step 8) is roughly symmetrical, and the light-emitting area of the color lamp is located at the center of the halo.
CN202010995479.2A 2020-09-21 2020-09-21 Video identification method of subway color light signal machine Pending CN112215089A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010995479.2A CN112215089A (en) 2020-09-21 2020-09-21 Video identification method of subway color light signal machine


Publications (1)

Publication Number Publication Date
CN112215089A true CN112215089A (en) 2021-01-12

Family

ID=74049991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010995479.2A Pending CN112215089A (en) 2020-09-21 2020-09-21 Video identification method of subway color light signal machine

Country Status (1)

Country Link
CN (1) CN112215089A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108805145A (en) * 2018-06-01 2018-11-13 中铁局集团有限公司 A kind of subway work railcar signal lamp and ambient brightness detecting device
CN109145746A (en) * 2018-07-20 2019-01-04 浙江浩腾电子科技股份有限公司 A kind of signal lamp detection method based on image procossing
CN111486852A (en) * 2020-04-07 2020-08-04 中铁检验认证中心有限公司 Intelligent traffic positioning and identifying system and method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wen Shaobo et al., Southeast University Press *

Similar Documents

Publication Publication Date Title
CN110197589B (en) Deep learning-based red light violation detection method
US8064643B2 (en) Detecting and recognizing traffic signs
CN103366571B (en) The traffic incidents detection method at night of intelligence
US7952490B2 (en) Method for identifying the activation of the brake lights of preceding vehicles
TWI302879B (en) Real-time nighttime vehicle detection and recognition system based on computer vision
CN102556021B (en) Control device for preventing cars from running red light
CN109598187A (en) Obstacle recognition method, differentiating obstacle and railcar servomechanism
CN109299674B (en) Tunnel illegal lane change detection method based on car lamp
CN106909937A (en) Traffic lights recognition methods, control method for vehicle, device and vehicle
CN102509090B (en) A kind of vehicle feature recognition device based on public safety video image in sky net engineering
CN105913041A (en) Pre-marked signal lights based identification method
CN104340109A (en) Driver assistance system and operating procedure for the latter
CN108357418A (en) A kind of front truck driving intention analysis method based on taillight identification
CN107316486A (en) Pilotless automobile visual identifying system based on dual camera
CN110688907B (en) Method and device for identifying object based on night road light source
CN105931467A (en) Target tracking method and device
CN109887276B (en) Night traffic jam detection method based on fusion of foreground extraction and deep learning
CN103287462A (en) Method and system for maintenance shunting signal detection
CN107169422A (en) The method of discrimination of high beam open and-shut mode based on headlamp radiation direction
US8229170B2 (en) Method and system for detecting a signal structure from a moving video platform
CN116234720A (en) Method for operating a lighting device and motor vehicle
KR20220115193A (en) Illegal parking management system of parking area for disabled persons
CN105046223A (en) Device for detecting severity of ''black-hole effect'' at tunnel entrance and method thereof
CN112215089A (en) Video identification method of subway color light signal machine
US20230401875A1 (en) Method for recognizing illumination state of traffic lights, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210112