GB2597561A - Monitoring railway nodes - Google Patents


Publication number
GB2597561A
Authority
GB
United Kingdom
Prior art keywords
railway
video
node
video stream
interest
Prior art date
Legal status
Pending
Application number
GB2102175.3A
Other versions
GB202102175D0 (en)
Inventor
Wing Chan Ho
Fai Lor Sum
Chun Ng Ka
Ching So Tin
Current Assignee
MTR Corp Ltd
Original Assignee
MTR Corp Ltd
Priority date
Filing date
Publication date
Application filed by MTR Corp Ltd filed Critical MTR Corp Ltd
Publication of GB202102175D0
Publication of GB2597561A

Classifications

    • B: Performing operations; transporting
    • B61: Railways
    • B61L: Guiding railway traffic; ensuring the safety of railway traffic
    • B61L 23/00: Control, warning, or like safety means along the route or between vehicles or vehicle trains
    • B61L 23/007: Safety arrangements on railway crossings
    • B61L 23/04: Monitoring the mechanical state of the route
    • B61L 23/041: Obstacle detection
    • B61L 27/00: Central railway traffic control systems; trackside control; communication systems specially adapted therefor
    • B61L 27/20: Trackside control of safe travel of vehicle or vehicle train, e.g. braking curve calculation
    • B61L 27/50: Trackside diagnosis or maintenance, e.g. software upgrades
    • B61L 27/53: Trackside diagnosis or maintenance for trackside elements or systems, e.g. trackside supervision of trackside control system conditions
    • B61L 29/00: Safety means for rail/road crossing traffic
    • B61L 29/24: Means for warning road traffic that a gate is closed or closing, or that rail traffic is approaching, e.g. for visible or audible warning
    • B61L 29/28: Such warning means, electrically operated
    • B61L 29/30: Supervision, e.g. monitoring arrangements

Abstract

A system comprises a video camera 10 arranged to monitor a railway node 20 (i.e. a stop, crossing or junction) and a video analytics module 30. The module receives a video stream 32 from the video camera, analyses it 34 to generate an alert signal 36 upon determining that it meets an alert condition for a predefined scenario at the railway node, and sends the alert signal to a railway control centre, a public address system or an on-board train management system. The scenario may be a road vehicle inside or approaching a junction, a pedestrian inside a junction, crossing, or dangerous area of a station (Fig.8), a pedestrian exhibiting dangerous behaviour or in distress, or an overcrowded station. The video analytics module may include a classifier to identify an object of interest, a tracker to track its location and movement and a trip wire detector to generate the alert signal if it passes a mapped safety boundary.

Description

MONITORING RAILWAY NODES
FIELD
The present disclosure relates to monitoring a railway node. A system, method, video analytics module and non-transitory computer readable storage medium for monitoring a railway node are disclosed.
BACKGROUND
A railway system may include a plurality of stops at which trains stop to allow passengers to board or disembark, crossings at which pedestrians can cross over the rail track from one side to the other, and junctions at which a road intersects the train line. In this disclosure such stops, crossings and junctions are referred to as railway nodes.
The teachings of the present disclosure may be used in any type of railway, but in some examples may be applied to railway nodes of a light rail transit (LRT) railway. Light rail transit is a form of urban railway which combines features of a tram and a conventional railway. Similar to a tram system, an LRT serves urban areas and the stops are relatively close together. However, unlike a tram system, an LRT has higher speed and higher passenger capacity. The trains of an LRT are sometimes referred to as light rail vehicles (LRVs). While a tram generally runs along an existing road and shares the road space with other vehicles, in most cases an LRT line is reserved for the LRVs and does not overlap with the road, except at junctions where the LRT line crosses a road. Compared to conventional railways, the stops are more frequent and there are a much larger number of junctions and crossings.
In order to maintain safety, warning signs may be placed at railway nodes to deter passengers or vehicles from crossing the railway line when it is unsafe to do so. However, there is a risk that the warning signs may be ignored. While many underground stations have moving doors to block access to the tracks from the platform, many LRT stops have smaller platforms that cannot bear the weight of such door systems. Further, while conventional railways may use physical barriers at a junction (e.g. a level crossing), especially in rural areas, this may not be practical in areas with high traffic density and short headway (the time between successive trains).
SUMMARY
A first aspect of the present disclosure provides a system for monitoring a railway node, wherein the railway node is a stop, crossing or junction. The system comprises at least one video camera arranged to monitor the railway node and a video analytics module. The video analytics module is to receive a video stream from the video camera, analyse the video stream, generate an alert signal in response to determining that the video stream meets an alert condition defined for a predefined scenario at the railway node and send the alert signal to at least one of a railway control centre, a public address system of the railway node or an on-board train management system.
A second aspect of the present disclosure provides a video analytics module comprising: a receiver to receive a video stream of a railway node, the railway node being a station, crossing or junction; an analyser to analyse the video stream and generate an alert signal in response to determining that the video stream meets an alert condition, the alert condition being a predefined scenario at the railway node or an abnormal situation at the railway node; and an output to communicate the alert signal to a system external to the video analytics module.
A third aspect of the present disclosure provides a non-transitory computer readable storage medium storing instructions which are executable by a processor to: receive a video stream of a railway node, the railway node being a station, crossing or junction; analyse the video stream to determine whether the video stream contains a predefined scenario that meets an alert condition or an abnormal situation that meets an alert condition; and generate an alert in response to determining that the video stream includes the predefined scenario that meets the alert condition.
A fourth aspect of the present disclosure provides a method of monitoring a railway node, wherein the railway node is a station, crossing or junction, the method comprising: receiving, by a video analytics module, a video stream of a railway node generated by a video camera; analysing the video stream, by the video analytics module, to determine whether the video stream contains a predefined scenario that meets an alert condition or an abnormal situation that meets an alert condition; generating, by the analytics module, an alert signal in response to determining that the video stream includes the predefined scenario or abnormal situation; and communicating the alert signal to at least one of a railway control centre, a public address system of the railway node, an on-board train management system or a video management system.
BRIEF DESCRIPTION OF THE DRAWINGS
Examples of the present disclosure will be explained below with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram of an example system for monitoring a railway node according to the present disclosure;
Fig. 2 is a schematic diagram of an example system for monitoring a railway node according to the present disclosure;
Fig. 3 is an example method of monitoring a railway node according to the present disclosure;
Fig. 4A is an example of a video analytics module for monitoring a railway node according to the present disclosure;
Fig. 4B is an example of a video analytics module for monitoring a railway node according to the present disclosure;
Fig. 5 is an example of a railway node in the form of a railway stop according to the present disclosure;
Fig. 6 is an example of a railway node in the form of a railway junction according to the present disclosure;
Fig. 7A is an example method of operation of a video analytics module with respect to a safety boundary according to the present disclosure;
Fig. 7B is an example method of operation of a video analytics module with respect to a restricted area according to the present disclosure;
Fig. 8 is an example of an image from a post-analysed video stream of a railway stop generated by a video analytics module according to the present disclosure;
Fig. 9 is an example of an image from a post-analysed video stream of a railway junction generated by a video analytics module according to the present disclosure;
Fig. 10 is an example method of operation of a video analytics module with respect to detecting an overcrowded railway node according to the present disclosure;
Fig. 11 is an example method of operation of a video analytics module with respect to identifying objects of interest according to the present disclosure;
Fig. 12 is an example method of operation of a video analytics module with respect to passenger state according to the present disclosure;
Fig. 13 is an example method of machine learning by a video analytics module according to the present disclosure;
Fig. 14 is a schematic diagram showing an example structure of an example video analytics module according to the present disclosure.
DETAILED DESCRIPTION
Various examples of the disclosure are discussed below. While specific implementations are discussed, it should be understood that this is done for illustrative purposes, and variations with other components and configurations may be used without departing from the scope of the disclosure as defined by the appended claims.
Fig. 1 shows an example system 1 for monitoring a railway node according to the present disclosure. The railway node 20 is a stop, crossing or junction. The system comprises at least one video camera 10 arranged to monitor the railway node 20 and a video analytics module (VAM) 30. The VAM 30 is configured to receive 32 a video stream from the at least one video camera 10, analyse 34 the video stream and generate 36 an alert signal in response to determining that the video stream meets an alert condition defined for a predefined scenario at the railway node. The VAM 30 is configured to send the alert signal to at least one of a railway control centre, a public address system (PAS) of the railway node or an on-board train management system.
The VAM 30 may comprise a memory to store images included in the video stream received from the at least one video camera and at least one processor to analyze images of the video stream.
The at least one processor may include a general purpose processor, a hardware accelerator and/or a specialized graphics processor. The at least one processor may utilize a combination of hardware and/or machine readable instructions to implement artificial intelligence, such as a neural network or machine learning algorithm, in order to analyze the images of the video stream. The machine readable instructions may include predetermined rules for detecting certain pre-defined scenarios. The VAM may be implemented as a dedicated computing device, a general computing device executing software, a physical or virtual server, a cloud computing service etc. In some examples, the VAM 30 is a computing device located at the railway node, so as to reduce latency and increase speed of analysis and response.
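As a rough illustration of the receive/analyse/alert flow described above, the following sketch models the VAM's core loop in Python. All names here (`Alert`, `analyse_stream`, `detect_scenario`, `node_id`) are hypothetical, and the keyword-matching "detector" merely stands in for the neural-network or rule-based analysis the patent describes.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    scenario: str   # e.g. "person_on_track" (illustrative label)
    node_id: str    # identifier of the monitored railway node

def analyse_stream(frames, detect_scenario, node_id="node-1"):
    """Yield an Alert for each frame in which `detect_scenario`
    (any callable returning a scenario name or None) fires."""
    for frame in frames:
        scenario = detect_scenario(frame)
        if scenario is not None:
            yield Alert(scenario=scenario, node_id=node_id)

# Toy usage: strings stand in for decoded video frames.
frames = ["clear", "person on track", "clear"]
alerts = list(analyse_stream(
    frames,
    lambda f: "person_on_track" if "person" in f else None))
```

In a real deployment the alert would then be forwarded over the first or second interface to the PAS, control centre or on-board system rather than collected in a list.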
The system as described above is able to provide quick notice to any or all of the railway control center, train driver (via the on-board train management system) and pedestrians, vehicles or passengers at the railway node (via the PAS).
Fig. 2 shows another example of a system 1 for monitoring a railway node according to the present disclosure. The system is similar to the system of Fig 1 and like reference numerals denote like parts.
The VAM 30 may be local to the railway node, e.g. installed at or within a few metres of the junction. The VAM includes a first interface 35 to communicate with the PAS and the on-board train system. The first interface 35 may be a local interface for communicating with devices local to the railway node. For example, the first interface may be a wired in/out port for sending a signal (e.g. a digital or analogue signal). For instance, the first interface may be an Ethernet port, universal serial bus (USB) port, digital I/O, dry contact, RS-232, RS-485 etc. The VAM includes a second interface 37 to communicate with the video camera 10, the railway control centre 50 and the video management server (VMS) 90 over a network 40, such as a local area network (LAN), a metropolitan area network (MAN), an intranet, the Internet, a virtual private network (VPN) or a telecommunication network. The second interface 37 may for example be a network interface, such as a LAN interface, an Ethernet interface or a telecommunication network interface. In other examples the VAM 30 may be at a remote site away from the railway node, in which case the first interface 35 may be a network interface similar to the second interface 37, or there may be no first interface 35, with communications with the PAS and on-board train system being carried out via the second interface 37.
The Public Address System (PAS) 22 may be an audio broadcast system located at the railway node. The PAS may for example comprise a non-transitory machine readable storage medium storing a pre-recorded safety message and a speaker to broadcast the pre-recorded safety message. The alert signal from the VAM may cause the PAS to play the pre-recorded safety message, or a specified one of a plurality of pre-recorded safety messages stored on the storage medium. The pre-recorded safety message is an audio message. The pre-recorded safety message(s) may for example include one or more of the following: a message reminding passengers not to perform a particular action, such as stepping past a boundary such as a yellow line of a platform of the stop; a message asking passengers or vehicles to exit the track area; or a message requesting passengers to seek help from a railway staff member, etc.

The on-board train management system 82 is a management system of a train which resides on the train 80 itself. For example, the on-board train management system 82 may include a display panel for the driver, a user interface by which the driver may control the train and/or automated driving or braking functions. The alert sent to the on-board train management system may cause the on-board train management system to display messages to a driver of the train, deliver a visual or audio alert to the driver, activate automatic braking of the train and/or other automated driving functions.
The first interface 35 of the VAM 30 may send the alert signal to a transmitter 70 for wireless transmission to the on-board train management system. The transmitter 70 may be a track side device. In one example, in response to detecting a pre-defined scenario, such as man on track, the VAM 30 may send an alert signal to the on-board train management system to notify the driver of the pre-defined scenario and may further cause automatic braking of the train if no action is taken by the driver within a predetermined period of time. In this way the system may be able to directly notify the train driver, rather than relying on the railway control center to notify the train driver in the event of a potentially dangerous situation at the railway node.
In one example the transmitter 70 is an RFID tag: the alert signal sends data to the RFID tag and the RFID tag transmits the data to an RFID reader of the on-board train management system 82. Thus it will be appreciated that the system 1 for monitoring the railway node may include an RFID tag and the VAM 30 may be configured to transmit the alert signal to an RFID reader of an on-board train management system 82 via the RFID tag. The on-board management system may be configured to automatically stop the train in response to emergency criteria being met. For example, the emergency criteria may include receiving an alert signal from the VAM 30. In addition to receiving the alert signal, the emergency criteria may include the train speed and the distance of the train from the railway node, such that the train is automatically stopped if an alert signal is received and the train distance and speed are within predetermined criteria.
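The emergency criteria described above (alert received, plus speed and distance within predetermined limits) can be sketched as a simple predicate. The function name and threshold values below are illustrative placeholders, not figures from the patent.

```python
def should_auto_brake(alert_received, speed_kmh, distance_m,
                      max_speed_kmh=40.0, max_distance_m=200.0):
    """Return True when the on-board system should automatically stop
    the train: an alert signal has been received AND the train's speed
    and its distance from the railway node fall within predetermined
    limits (illustrative defaults)."""
    return (alert_received
            and speed_kmh <= max_speed_kmh
            and distance_m <= max_distance_m)
```

An actual implementation would also account for the driver-response timeout mentioned in the text, braking only if no action is taken within the predetermined period.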
The railway control centre 50 may be a centre comprising a number of computer workstations for displaying information relating to the railway. For example the railway control centre may display data from the railway signalling system, status of railway lines, location of trains and their direction of travel, information relating to junctions etc. The alert signal sent to the railway control centre may cause an alert to be displayed on a display at the railway control centre. The alert may include textual information, a visual indicator and/or an audio alert and may convey information about what is happening at the railway node.
The video management server (VMS) 90 is a system configured for at least one of storing, displaying and playing back a video stream of the railway node generated by the at least one video camera 10. The VMS 90 may for example comprise a hardware server, a virtual server or a cloud based virtual machine etc. The VMS may form part of a CCTV system and may receive video streams from a plurality of video cameras at one or more railway junctions or other areas. The video streams may be received over the network 40.
While in the example of Fig. 2 the video camera 10 sends a video stream to the VAM 30 over a network 40 (e.g. via an Ethernet port), in other examples the video camera 10 may be connected directly to the VAM 30, rather than over a network, so as to decrease latency. For example, the VAM 30 may include an audio/video (A/V) port, a High-Definition Multimedia Interface (HDMI) port, a Digital Visual Interface (DVI) port, a universal serial bus (USB) port or a Video Graphics Array (VGA) interface that connects with a cable which is connected to the video camera 10. In some examples the video camera 10 may connect to the VAM 30 via a local connection and with the VMS over a network.
While Fig. 2 shows the VAM 30 as having two communications interfaces, the first interface 35 and the second interface 37, in other examples the VAM 30 may have just a single communications interface. In still other examples the VAM 30 may have more than two communications interfaces.
In the example of Fig. 2, the VAM 30 includes a video content analyser 31, which may include modules for receiving the video stream 32, analysing the video stream 34 and generating the alert 36 as in Fig. 1. The video content analyser 31 may include a memory, such as a cache or random access memory (RAM) to store images included in the video stream and at least one processor to analyze images in the video stream as described above.
The VAM 30 may be configured to generate a post analysed video stream and forward the post analysed video stream to the VMS 90. A post analysed video stream is a video stream which has been analysed, e.g. to identify objects such as vehicles, pedestrians, passengers etc. The post-analysed video stream may also include alerts, e.g. indicating that a certain predefined scenario has occurred or indicating the area at which a potentially dangerous situation has arisen.
Fig. 3 shows an example method 300 of monitoring a railway node according to the present disclosure. The railway node is a stop, crossing or junction. The method may for example be implemented by a system such as that shown in Fig. 1 or Fig. 2.
At block 310 a video analytics module receives a video stream of a railway node generated by a video camera.
For example, the video stream may be generated by a video camera 10 which is monitoring the railway node and sent to a video analytics module 30, as shown in the examples of Fig. 1 and Fig. 2.
At block 320 the video stream is analyzed, by the video analytics module, to determine whether the video stream contains a predefined scenario that meets an alert condition, or an abnormal situation that meets an alert condition.
At block 330 the analytics module generates an alert signal in response to determining that the video stream includes the predefined scenario or abnormal situation.
At block 340 the alert signal is sent to at least one of a railway control centre, a public address system of the railway node, an on-board train management system or a video management system.
A predefined scenario is a scenario in which a predefined and potentially dangerous situation occurs (e.g. man on track). An abnormal situation is a scenario which is not predefined, but which deviates from normal conditions at the railway node in a substantial way.
The predefined scenario may, in some examples, be one of: a road vehicle inside a junction, a road vehicle approaching a junction, a pedestrian inside a junction, a pedestrian inside a crossing, a passenger passing a safety boundary in a station, a passenger inside a predefined dangerous area of a station, a passenger exhibiting dangerous behaviour, a passenger in distress and an overcrowded station.
The VAM 30 may be configured to identify objects of interest in the video stream, track the objects of interest and generate an alert signal in response to at least one of: a count of a number of objects of interest in a specified area exceeding a predefined threshold, detecting an object of interest passing a predefined boundary, detecting an object of interest entering a predefined restricted area at a non-safe time, or detecting a posture, gesture or movement of a passenger which is indicative of potential danger.
Figs. 4A and 4B show examples of a video analytics module (VAM) 400 according to the present disclosure. The VAM 400 may be used in the system of Figs 1-2 or the method of Fig. 3 and may have any of the VAM features described above in relation to Figs 1-3.
The VAM 400 of Fig. 4A includes an object classifier 410 to identify an object of interest in the railway node, an object tracker 420 to track the location and movement of the object of interest in the railway node and an alert condition classifier 430. The alert condition classifier may determine, based on the location, movement or characteristics of the objects tracked by the object tracker 420, whether an alert condition exists. An alert condition may be a predefined scenario corresponding to a potentially dangerous situation.
The VAM 400 of Fig. 4B includes an object classifier 410 and object tracker 420 as described for Fig. 4A. The VAM 400 of Fig. 4B further includes a map 440 of the railway node. The map 440 of the railway node may include any or all of the following: at least one safety boundary 442, at least one restricted area 444 and other data 445 relating to the geographic layout of the railway node. The VAM 400 of Fig. 4B also includes an alert condition classifier 430, which may include any or all of the following: a tripwire classifier 432, an intrusion classifier 434, a passenger state classifier 436 and an overcrowd classifier 438.
The various classifiers described above may be implemented by dedicated hardware, software running on a processor or a combination of dedicated hardware and software. The classifiers may utilize artificial intelligence such as a neural network or machine learning algorithm in order to detect, identify and track objects or determine whether the combination of objects match a predefined scenario defined by one of the alert condition classifiers.
Fig. 5 shows a plan view of an example railway stop 500. The railway node has a first video camera 520A and a second video camera 520B, the first video camera facing toward the second video camera and the second video camera facing toward the first video camera whereby a field of view of the first video camera partially overlaps with a field of view of the second video camera.
In this way the video cameras can monitor the railway node while reducing or eliminating blind spots.
The stop 500 includes two opposing platforms 510A, 510B and a railway track area between the platforms. The map 440 in the VAM may define one or more safety boundaries 530, such as but not limited to a line on a platform of the stop which passengers are advised to keep behind. For example, the platforms may have lines near their edge, which passengers are to keep behind except when boarding or disembarking from a train. The trip wire classifier 432 (which may be referred to as a trip wire detector) may be configured to generate an alert signal in response to an object of interest, such as a passenger, passing through a safety boundary 530.
It will be appreciated that a passenger may have to pass the line in order to board or disembark from a train. Accordingly, the trip wire classifier may further comprise a safety condition classifier to classify a safety state of the railway node as safe or unsafe in relation to a safety boundary. For example, the safety condition classifier may be configured to classify the safety state as safe in response to determining that a train has stopped at the station platform. The trip wire classifier may be configured to generate an alert signal in response to the safety state being unsafe and the safety boundary being passed, and not generate the alert signal in response to the safety boundary being passed while the safety state is "safe".
Fig. 7A shows an example method of operation of the VAM with respect to detecting a scenario in which an object of interest passes a safety boundary.
At block 710 a safety boundary is defined. For example one or more safety boundaries may be defined by the railway operator and stored in a railway node map 440 of the VAM when the system is installed or updated.
At block 720 objects of interest in the video stream of the railway node are detected and tracked. For example, this may be performed by an object classifier 410 and an object tracker 420 of the VAM 400.
The VAM is able to track objects of interest in the video stream and also map safety boundaries and restricted areas onto the video stream to create a post-analyzed video stream. Fig. 8 shows an example of an image 800 in a post-analysed video stream of a railway stop. The image 800 includes an object of interest in the form of a passenger 830, which may be identified and tracked, and shows the location of safety boundaries 810 and restricted areas 820. The post-analysed video stream may be sent to the VMS.
At block 730 the VAM determines if a safety boundary has been passed by an object of interest. At block 740 the VAM determines if the safety status of the node is safe or unsafe. If a safety boundary has been passed and the safety status is safe (e.g. a train has stopped at the platform adjacent the safety boundary), then no alert is generated at block 750. However, if the safety status is unsafe and the safety boundary has been passed then an alert is generated at block 760.
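The decision logic of blocks 730 to 760 can be sketched as follows. This is a minimal illustration, assuming the safety boundary is modelled as a single horizontal line and each tracked object is reduced to a point position; the function names are hypothetical.

```python
def crossed_boundary(prev_y, curr_y, boundary_y):
    # The tracked object passed the line if its previous and current
    # positions lie on opposite sides of the boundary.
    return (prev_y - boundary_y) * (curr_y - boundary_y) < 0

def tripwire_alert(prev_y, curr_y, boundary_y, node_is_safe):
    # Blocks 730-760: raise an alert only when the safety boundary has
    # been passed AND the node's safety state is unsafe (e.g. no train
    # is stopped at the adjacent platform).
    return crossed_boundary(prev_y, curr_y, boundary_y) and not node_is_safe
```

A real trip wire detector would test the tracked object's motion segment against an arbitrary polyline from the railway node map 440 rather than a single horizontal line.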
When the railway node is a crossing or junction, the safety boundary may be a line defining a boundary of the crossing or junction and the safety condition classifier may classify the safety state as safe in response to determining that there is no train within a predetermined distance of the railway node.
Fig. 6 shows an example of map of a railway junction 600. The railway junction 600 includes a railway track 620 and a road 610, 615 which crosses over the railway track. In this example a first part of the road 610 runs parallel to the railway track 620, while a second part of the road 615 crosses over the railway track. A plurality of cameras 630A, 630B are positioned at either side of the railway track to monitor the railway junction. In the example of Fig. 6, a pair of trains 660A, 660B are passing through the railway junction. Safety boundaries 650A and 650C are indicated on the map and may be stored in the VAM.
In the case of a railway junction, which is where a road crosses a railway line, both pedestrians and vehicles may be objects of interest. In the case of a railway crossing, which is a path by which pedestrians may cross a railway line, pedestrians are objects of interest. Fig. 9 shows an image 900 from a post-analysed video stream of a railway crossing which is next to a railway junction. The image 900 includes a plurality of objects of interest in the form of pedestrians 930A, 930B, 930C, 930D and vehicles 940A, 940B, 940C and 940D which have been identified and are being tracked by the VAM.
As mentioned above, the map 440 of the railway node in the VAM may include restricted areas. Restricted areas are areas of the railway node which objects of interest should not enter. For example, the restricted area may include a railway track area of the railway node. The VAM may include an intrusion classifier (also referred to as an "intrusion detector") to generate an alert signal in response to an object of interest being detected in the restricted area. Fig. 5 and Fig. 8 show example restricted areas 540 and 820 defined for a railway stop. Fig. 6 and Fig. 9 show example restricted areas 640A, 640C, 910 and 920 for a railway junction and railway crossing.
Fig. 7B shows an example method of operation 900 of a VAM with respect to detecting a scenario in which an object of interest intrudes into a restricted area.
At block 910 a restricted area is defined. For example, one or more restricted areas may be defined by the railway operator and stored in a railway node map 440 of the VAM when the system is installed or updated.
At block 920 objects of interest in the video stream of the railway node are detected and tracked.
For example, this may be performed by an object classifier 410 and an object tracker 420 of the VAM 400.
At block 930 the VAM determines if an object of interest is in a restricted area and if so then at block 940 an alert is generated. While not shown in Fig. 7B, for some safety restricted areas the VAM may determine if the safety status of the node is safe or unsafe and generate the alert in response to an object of interest being in the restricted area and the safety status being unsafe. For example, it may be safe for pedestrians or vehicles to enter a restricted area of a railway junction or railway crossing if there is no approaching train within a predetermined distance of the railway crossing or junction. Other restricted areas, such as the railway tracks of a stop in an area where there is no crossing, may always be considered to be unsafe.
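The two-condition intrusion check of blocks 930 and 940 can be sketched as follows. This is a minimal illustration, not the patented implementation: the rectangular area representation, function names and data structure are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    x: float  # registered center-point coordinates of the tracked object
    y: float

def point_in_area(x, y, area):
    """Return True if (x, y) lies inside a rectangular restricted area.

    `area` is (x_min, y_min, x_max, y_max); a deployed system might use
    arbitrary polygons drawn on the railway node map instead.
    """
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max

def intrusion_alerts(objects, restricted_area, node_is_safe):
    """Return IDs of tracked objects that should trigger an intrusion alert.

    An alert is raised only when an object of interest is inside the
    restricted area AND the node's safety status is unsafe.
    """
    if node_is_safe:
        return []
    return [o.object_id for o in objects
            if point_in_area(o.x, o.y, restricted_area)]
```

For a restricted area that is always unsafe, such as the railway tracks of a stop with no crossing, `node_is_safe` would simply be held at `False`.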
Referring back to Fig. 4B, the VAM may include an overcrowded classifier 438. The overcrowded classifier may determine that the railway node is overcrowded based on a crowd density at the railway node. For example, the VAM may be configured to count a number of objects of interest, such as passengers or pedestrians, in a predetermined area of the railway node and determine that the railway node is overcrowded in response to the counted number of objects of interest exceeding a predetermined threshold. An example method of operation 1000 of the overcrowded classifier is shown in Fig. 10.
At block 1010 objects of interest, such as passengers or pedestrians, are detected.
At block 1020 the objects of interest in a predetermined area of the railway node are counted.
At block 1030 it is determined whether the number of objects of interest in the predetermined area is above a predetermined threshold. The predetermined threshold may have been set by an operator of the railway, as a number which the operator considers to be overcrowded for the railway node in question.
At block 1040, in response to the predetermined threshold being exceeded, an overcrowded alert is generated.
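The counting and threshold logic of blocks 1020 to 1040 can be sketched as below. The rectangular area representation and the function names are simplifying assumptions for illustration; real monitoring zones may be arbitrary polygons.

```python
def count_in_area(object_points, area):
    """Count object-of-interest center points inside a rectangular area
    (block 1020). `area` is (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = area
    return sum(1 for (x, y) in object_points
               if x_min <= x <= x_max and y_min <= y <= y_max)

def overcrowded_alert(object_points, area, threshold):
    """Blocks 1030-1040: compare the count against the operator-set
    threshold and signal an overcrowded alert when it is exceeded."""
    return count_in_area(object_points, area) > threshold
```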
Referring back to Fig. 4B, the VAM may include a potential danger passenger state classifier 436 which is configured to identify at least one of a posture or gesture of a passenger which is indicative of potential danger and generate an alert in response to detecting said at least one of a posture or gesture which is indicative of potential danger.
An example method of operation 1100 of the passenger state classifier is shown in Fig. 11. At block 1110 an object is determined to be a passenger. For example, if an object in the video stream is identified as having legs, arms and a torso, i.e. characteristics of a human, and is present at a railway stop, then it may be determined to be a passenger. At block 1120 body parts of the passenger which are relevant to posture or gesture are identified, for example one or more of the following: legs, arms, torso, shoulders, elbows, wrists, hips, knees, feet and eyes.
At block 1130 the position and motion of body parts of the passenger are determined, for example the orientation of the torso, whether the arms are moving in a fast, waving motion, the distance of the wrists and elbows from the body and the motion of the wrists and elbows. At block 1140 it is determined whether the position and/or motion of any of the body parts is indicative of potential danger. For example, the posture or body language can be estimated by comparing the relative distances and positions of those human body parts. This may be by comparison with test data including positive and negative matches for postures and gestures indicative of potential danger and/or by comparison of the detected position and motion with predetermined rules.
For example, the passenger state classifier may be configured to determine that at least one of a prone posture, a swaying posture and a distress gesture of a passenger is indicative of potential danger. In some examples, the passenger state classifier may be configured to identify an arm waving gesture similar to a predetermined set of arm waving gestures as being indicative of danger.
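As one illustration of such predetermined rules, torso orientation and wrist position and speed derived from detected body-part keypoints could be checked as below. The thresholds, coordinate convention (image y grows downward) and function names are purely illustrative assumptions and would need tuning against real data, as the text's training discussion suggests.

```python
import math

def torso_angle_deg(shoulder_mid, hip_mid):
    """Angle of the torso from vertical, in degrees (0 = upright).
    Points are (x, y) keypoints with y increasing down the image."""
    dx = hip_mid[0] - shoulder_mid[0]
    dy = hip_mid[1] - shoulder_mid[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def is_prone(shoulder_mid, hip_mid, prone_threshold_deg=60.0):
    """Rule-based check: a torso tilted far from vertical suggests a
    prone (fallen) passenger."""
    return torso_angle_deg(shoulder_mid, hip_mid) > prone_threshold_deg

def is_waving(wrist_y, shoulder_y, wrist_speed, speed_threshold=5.0):
    """A wrist held above the shoulder and moving quickly is treated as
    a possible distress (arm-waving) gesture."""
    return wrist_y < shoulder_y and wrist_speed > speed_threshold
```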
It will be appreciated from the above, that the VAM first identifies objects and their location and then determines if an alert should be generated based on rules for generating alerts as described above. For example, the VAM may apply an object identification classifier and object tracker to detect and track objects of interest and then apply other classifiers, such as safety boundary, intrusion, passenger status, overcrowded classifiers etc, to determine if an alert should be generated.
With regard to object identification and tracking, machine learning was used to train the VAM to recognize humans (pedestrians or passengers) and vehicles as objects of interest. Over 10,000 samples were input in the training process, including human samples such as pedestrians, riders on bicycles, people in wheelchairs and babies in baby carts. Vehicle samples included cars, vans, trucks and motorbikes. The training process should be carried out both at day and night and in different weather conditions. In one example the training process was carried out over 2 days (48 hours) of continuous video stream from railway nodes and then honed and refined continuously over subsequent days, weeks and months. By comparing the similarity of pixel patterns and using machine learning, the VAM was able to perform image recognition in a single frame of the video stream to identify objects of interest which appeared in the field of view. Each object of interest identified by the VAM may be assigned a unique ID, and the VAM may determine a center point of the object and register that center point as the current location of the object. The object tracker of the VAM is able to track each object's movement in the real-time video by comparing the pixel shift and differences between the sequence of images.
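A simplified version of tracking registered center points across consecutive frames might look like the greedy nearest-centroid matcher below. This is a sketch under stated assumptions: production trackers typically use Hungarian assignment or Kalman filtering, and all names here are illustrative.

```python
import math

def match_objects(prev, curr, max_shift=50.0):
    """Greedy nearest-centroid matching between consecutive frames.

    `prev` maps object ID -> (x, y) center from the previous frame;
    `curr` is the list of detected centers in the current frame.
    Returns a dict of ID -> matched center for shifts within `max_shift`
    pixels. Unmatched previous objects are dropped here; a real tracker
    would keep them alive for a few frames and assign new IDs to
    unmatched detections.
    """
    assigned = {}
    unused = list(curr)
    for oid, (px, py) in prev.items():
        if not unused:
            break
        best = min(unused, key=lambda c: math.hypot(c[0] - px, c[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_shift:
            assigned[oid] = best
            unused.remove(best)
    return assigned
```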
Fig. 12 shows an example method 1200 of identifying an object of interest by a video analytics module according to the present disclosure. For example the method 1200 may be used by an object classifier, such as the object classifier 410 and object tracker 420 of Figs. 4A and 4B.
At block 1210 a video stream is pre-processed to facilitate the image object identification process.
For example, the pre-processing may include modifying images in the video stream to prepare for the visual analysis process. For example, the images may be modified to correct for environmental factors, to filter out noise and/or rectify images such that objects are of consistent size. For example scaling factors may be used to adjust the size of objects in the video stream to a predetermined size to facilitate recognition.
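A minimal pre-processing sketch along these lines is shown below: grayscale conversion, nearest-neighbour rescaling to a fixed size, and intensity normalisation. The target size, NumPy-only approach and function name are assumptions for illustration; a real VAM would also denoise and compensate for lighting and weather.

```python
import numpy as np

def preprocess_frame(frame, target_shape=(240, 320)):
    """Convert a frame to grayscale, rescale it to `target_shape` by
    nearest-neighbour sampling, and normalise intensities to [0, 1] so
    downstream classifiers see consistent input."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame.astype(float)
    h, w = gray.shape
    th, tw = target_shape
    rows = np.arange(th) * h // th  # nearest-neighbour row indices
    cols = np.arange(tw) * w // tw
    resized = gray[rows][:, cols]
    rng = resized.max() - resized.min()
    return (resized - resized.min()) / rng if rng > 0 else resized * 0.0
```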
At block 1220 an object is detected in the video stream, e.g. from the edges, pixel patterns or other features of the object. The object detection may for example compare the similarity of pixel patterns in blocks of an image of the video stream with predetermined criteria or samples of various types of object to determine whether an object is present.
At block 1230 features of the object are extracted. For example, this may include a center point of the object or features of the object which may distinguish the object from other types of object.
At block 1240 the object is classified as a particular type of object, e.g. a human passenger or pedestrian, human on a bicycle/wheelchair/baby cart, or a road vehicle, by comparing the features with predetermined criteria and/or a data set of representative target objects.
Blocks 1220, 1230 and 1240 may for example be carried out by a neural network and/or image recognition and machine learning algorithms. It will be appreciated that while blocks 1220, 1230 and 1240 are described as distinct processes above, they may all be carried out in a different order or combined as part of the same process and carried out together. Further, Fig. 12 is just one example method of object recognition and the present disclosure is not limited thereto.
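Blocks 1210 to 1240 could be wired together as a composable pipeline, with each stage passed in as a function so that it can be a classical routine or a neural network. The stage interfaces and the toy stand-in stages below are assumptions made purely for illustration.

```python
def run_pipeline(frame, preprocess, detect, extract, classify):
    """Run the Fig. 12 stages in sequence; in practice stages may be
    reordered or fused, as the text notes."""
    prepared = preprocess(frame)
    detections = detect(prepared)
    return [classify(extract(prepared, d)) for d in detections]

# Toy stand-in stages: "detect" bright pixels in a tiny 2x2 "frame"
# and classify each by its intensity.
frame = [[0, 0], [0, 9]]
pre = lambda f: f
det = lambda f: [(r, c) for r, row in enumerate(f)
                 for c, v in enumerate(row) if v > 5]
ext = lambda f, d: f[d[0]][d[1]]
cls = lambda feat: "object" if feat > 5 else "background"
labels = run_pipeline(frame, pre, det, ext, cls)  # -> ["object"]
```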
It will be appreciated that the various classifiers of the VAM of the present disclosure may utilize machine learning in order to identify and classify objects and scenarios in the video stream.
Fig. 13 shows an example method 1300 of using machine learning, which may be employed to train the classifiers of the video analytics module.
At block 1310 images in the video stream are pre-filtered in preparation for the image recognition process. For example, noise may be reduced or removed, environmental factors compensated for and images scaled to a size suitable for the image recognition.
At block 1320 features are extracted. The features may be extracted from the pre-filtered images and may for example include edges, colours, wavelets etc. Alternatively, the features may be features of identified objects, such as body parts or parts of a vehicle, or features of a scenario, such as the type, location and/or movement of identified objects within the railway node.
At block 1330 an object or scenario is classified by a classifier 1340. The classifier 1340 may compare the extracted features with a matching set 1342 of objects or scenarios matching the classifier criteria and a non-matching set 1344 of objects or scenarios which do not match the classifier criteria.
At block 1340 a result of the classification in block 1330 is output. For example the result may be a match of the classifier criteria or a non-match. Thus, for example, an object may be classified as a particular type or a scenario as matching a predefined scenario. The result may be output for further processing, e.g. for a next stage of the VAM analysis in the case of an intermediate result, such as object identification, or to generate an alert in the case of a predefined scenario being matched. The result may also be output to the classifier 1340 to enrich the classifier data set.
At block 1360 the result may be validated. For example a human user may view the classification result and the image and determine whether or not they agree with the classification result. Their validation may be output to the classifier to further enhance the classifier accuracy. This may result in the matching and non-matching sets being modified as any previous incorrect classifications may be corrected.
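The human-validation feedback of block 1360 might be sketched as a correction of the matching and non-matching sets, as below. The set-based representation and function name are assumptions; real systems would store labelled images or feature vectors.

```python
def update_training_sets(sample, predicted_match, human_agrees,
                         matching_set, non_matching_set):
    """Apply a human validation verdict to the classifier's matching /
    non-matching training sets, correcting any previous misclassification
    of the same sample so it no longer pollutes the training data."""
    truly_matches = predicted_match if human_agrees else not predicted_match
    target = matching_set if truly_matches else non_matching_set
    other = non_matching_set if truly_matches else matching_set
    other.discard(sample)  # undo an earlier wrong placement, if any
    target.add(sample)
```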
The process of Fig. 13 was applied to images to build up an initial set of training data with matching and non-matching sets for various classifiers, as described herein, and then to many hours of live video streams to train the machine learning to recognize objects of interest. The tracking of objects of interest to detect and track their movement may also rely on recognition of the objects and may use machine learning. The initial training for object identification as shown in Fig. 13 should include a large number of image samples (e.g. 10,000+ samples), which may include images of humans in different postures and different types of vehicle. The image samples may be taken from publicly available sources of images and video. It is also recommended to supplement these initial stock image samples with live video stream from one or more railway nodes over at least 48 hours, including both day and night time. Further continuous training may then be used to fine-tune the object detection and allow it to cope with changes in weather, background, different models of road vehicle etc. Detection of the predefined scenarios, such as a road vehicle inside a junction, a road vehicle approaching a junction, a pedestrian inside a junction, a pedestrian inside a crossing, a passenger passing a safety boundary in a station, a passenger inside a predefined dangerous area of a station and an overcrowded station, may be achieved by applying rules based on defined tripwires and restricted areas to the object locations (as determined by the object recognition and tracking). Thus detection of these predefined scenarios may be achieved without machine learning except for the object recognition and tracking. However, in some examples, machine learning may be used to assist with detection of these predefined scenarios. Detection of a passenger exhibiting dangerous behavior or a passenger in distress, e.g. based on a posture and/or gesture of the passenger, may be achieved by machine learning or a combination of predefined rules and machine learning.
Fig. 14 shows an example hardware structure of a VAM 1400 according to the present disclosure.
A video analytics module (VAM) 1400 comprises a processor 1410, a non-transitory machine readable medium 1420 and an input/output interface 1430 to communicate with systems or devices external to the VAM. The machine readable medium stores machine readable instructions which are executable by the processor. The machine readable instructions include instructions 1422 to receive a video stream of a railway node, instructions 1424 to analyse the video stream and instructions 1426 to generate an alert signal in response to determining that the video stream meets an alert condition, the alert condition being a predefined scenario at the railway node or an abnormal situation at the railway node. For example, the alert signal may be output via the input/output interface 1430. The VAM may be configured to carry out any of the methods described above and may for example comprise machine readable instructions stored on the storage medium for carrying out any of these methods.
Another aspect of the present disclosure provides a non-transitory computer readable storage medium storing instructions which are executable by a processor to: receive a video stream of a railway node, the railway node being a station, crossing or junction; analyse the video stream to determine whether the video stream contains a predefined scenario that meets an alert condition or an abnormal situation that meets an alert condition; and generate an alert in response to determining that the video stream includes the predefined scenario that meets the alert condition.
The non-transitory computer readable storage medium may further include instructions to identify objects of interest in the video stream, track the objects of interest and generate an alert signal in response to at least one of: a count of a number of objects of interest in a specified area exceeding a predefined threshold, detecting an object of interest passing a predefined boundary, detecting an object of interest entering a predefined restricted area at a non-safe time, or detecting a posture, gesture or movement of a passenger which is indicative of potential danger. The storage medium may further comprise instructions for carrying out any of the methods described above.
It is to be understood that any feature described in relation to any one example may be used alone, or in combination with other features described, and may also be used in combination with any features of any other of the examples, or any combination of any other of the examples.

Claims (4)

  1. A system for monitoring a railway node, wherein the railway node is a stop, crossing or junction, the system comprising: at least one video camera arranged to monitor the railway node; and a video analytics module to receive a video stream from the video camera, analyse the video stream, generate an alert signal in response to determining that the video stream meets an alert condition defined for a predefined scenario at the railway node and send the alert signal to at least one of a railway control centre, a public address system of the railway node or an on-board train management system.
  2. The system of claim 1 wherein the video analytics module is to generate a post-analysed video stream and forward the post-analysed video stream to a video management system.
  3. The system of claim 1 or 2 wherein the predefined scenario is one of: a road vehicle inside a junction, a road vehicle approaching a junction, a pedestrian inside a junction, a pedestrian inside a crossing, a passenger passing a safety boundary in a station, a passenger inside a predefined dangerous area of a station, a passenger exhibiting dangerous behaviour, a passenger in distress and an overcrowded station.
  4. The system of claim 1 or 2 wherein the video analytics module is configured to identify objects of interest in the video stream, track the objects of interest and generate an alert signal in response to at least one of: a count of a number of objects of interest in a specified area exceeding a predefined threshold, detecting an object of interest passing a predefined boundary, detecting an object of interest entering a predefined restricted area at a non-safe time, or detecting a posture, gesture or movement of a passenger which is indicative of potential danger.
  5. The system of any one of the above claims wherein the video analytics module includes an object classifier to identify an object of interest in the railway node and an object tracker to track the location and movement of the object of interest in the railway node.
  6. The system of claim 5 wherein the video analytics module further comprises a map of the railway node defining at least one safety boundary and a trip wire detector to generate an alert signal in response to an object of interest passing through the safety boundary.
  7. The system of claim 6 further comprising a safety condition classifier to classify a safety state of the railway node as safe or unsafe in relation to the safety boundary and wherein the trip wire detector is to generate the alert signal in response to the safety state being unsafe and the boundary being passed and not generate the alert signal in response to the safety state being safe.
  8. The system of claim 7 wherein the railway node is a stop, the safety boundary is a line on a platform of the station and the safety condition classifier is to classify the safety state as safe in response to determining that a train has stopped at the station platform.
  9. The system of claim 7 wherein the railway node is a crossing or junction, the safety boundary is a line defining a boundary of the crossing or junction and the safety condition classifier is to classify the safety state as safe in response to determining that there is no train within a predetermined distance of the railway node.
  10. The system of claim 5 wherein the video analytics module further comprises a map of the railway node defining at least one restricted area of the railway node and an intrusion detector to generate an alert signal in response to an object of interest being detected in the restricted area.
  11. The system of claim 10 wherein the restricted area includes a railway track area of the railway node.
  12. The system of claim 5 wherein the video analytics module further comprises a passenger state classifier to identify at least one of a posture or gesture of a passenger which is indicative of potential danger and generate an alert in response to detecting said at least one of a posture or gesture which is indicative of potential danger.
  13. The system of claim 12 wherein the passenger state classifier is to determine that at least one of a prone posture or a swaying posture of a passenger is indicative of potential danger.
  14. The system of claim 12 wherein the passenger state classifier is to identify an arm waving gesture similar to a predetermined set of arm waving gestures as being indicative of potential danger.
  15. The system of claim 5 wherein the video analytics module comprises an overcrowded classifier that is to count a number of objects of interest in a predetermined area of the railway node and determine that the railway node is overcrowded in response to the counted number of objects of interest exceeding a predetermined threshold.
  16. The system of claim 1 wherein the video analytics module is trained to detect an abnormal situation at the railway node and generate an alert in response to detecting the abnormal situation.
  17. The system of any one of the above claims including an RFID tag and in which the video analytics module is to transmit the alert signal to an RFID reader of an on-board train management system via the RFID tag.
  18. The system of any one of the above claims further comprising a train with an on-board management system, the on-board management system being configured to automatically stop the train in response to emergency criteria being met, the emergency criteria including receiving an alert signal from the video analytics module.
  19. The system of any one of the above claims further comprising a public address system of a railway node, the public address system including a non-transitory machine readable storage storing a pre-recorded safety message and a speaker to broadcast the pre-recorded safety message.
  20. The system of any one of the above claims wherein the at least one video camera includes a first video camera and a second video camera, the first video camera facing toward the second video camera and the second video camera facing toward the first video camera whereby a field of view of the first video camera partially overlaps with a field of view of the second video camera.
  21. The system of claim 20 wherein the video management system is configured to at least one of: display, playback and store the post-analysed video stream.
  22. A video analytics module comprising: a receiver to receive a video stream of a railway node, the railway node being a station, crossing or junction; an analyser to analyse the video stream and generate an alert signal in response to determining that the video stream meets an alert condition, the alert condition being a predefined scenario at the railway node or an abnormal situation at the railway node; and an output to communicate the alert signal to a system external to the video analytics module.
  23. The video analytics module of claim 22 further comprising the features of the video analytics module described in any one of claims 1-16.
  24. A non-transitory computer readable storage medium storing instructions which are executable by a processor to: receive a video stream of a railway node, the railway node being a station, crossing or junction; analyse the video stream to determine whether the video stream contains a predefined scenario that meets an alert condition or an abnormal situation that meets an alert condition; and generate an alert in response to determining that the video stream includes the predefined scenario that meets the alert condition.
  25. The non-transitory computer readable storage medium of claim 24 including instructions to identify objects of interest in the video stream, track the objects of interest and generate an alert signal in response to at least one of: a count of a number of objects of interest in a specified area exceeding a predefined threshold, detecting an object of interest passing a predefined boundary, detecting an object of interest entering a predefined restricted area at a non-safe time, or detecting a posture, gesture or movement of a passenger which is indicative of potential danger.
  26. A method of monitoring a railway node, wherein the railway node is a station, crossing or junction, the method comprising: receiving, by a video analytics module, a video stream of a railway node generated by a video camera; analysing the video stream, by the video analytics module, to determine whether the video stream contains a predefined scenario that meets an alert condition or an abnormal situation that meets an alert condition; generating, by the video analytics module, an alert signal in response to determining that the video stream includes the predefined scenario or abnormal scenario; and communicating the alert signal to at least one of a railway control centre, a public address system of the railway node, an on-board train management system or a video management system.
GB2102175.3A 2020-02-18 2021-02-16 Monitoring railway nodes Pending GB2597561A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
HK32020002930 2020-02-18

Publications (2)

Publication Number Publication Date
GB202102175D0 GB202102175D0 (en) 2021-03-31
GB2597561A true GB2597561A (en) 2022-02-02

Family

ID=75338842

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2102175.3A Pending GB2597561A (en) 2020-02-18 2021-02-16 Monitoring railway nodes

Country Status (1)

Country Link
GB (1) GB2597561A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009070839A1 (en) * 2007-12-04 2009-06-11 Sti-Global Ltd Improved railroad crossing
EP2397386A1 (en) * 2010-06-21 2011-12-21 Hitachi, Ltd. Status monitoring apparatus of railway vehicle
DE102018206593A1 (en) * 2018-04-27 2019-10-31 Siemens Aktiengesellschaft Mobile recognition of passengers and objects on platforms

Also Published As

Publication number Publication date
GB202102175D0 (en) 2021-03-31


Legal Events

Date Code Title Description
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40059754

Country of ref document: HK