GB2556942A - Transport passenger monitoring systems - Google Patents

Publication number: GB2556942A
Application number: GB1620099.0A
Other versions: GB201620099D0
Authority: GB (United Kingdom)
Prior art keywords: vehicle, scene, output, objects, image
Legal status: Withdrawn
Inventors: Plamen Parvanov Angelov; Gruff Morris; Howard James Parkinson
Current assignee: Lancaster University
Original assignee: Lancaster University
Application filed by Lancaster University
Priority application: GB1620099.0A, published as GB201620099D0 and GB2556942A
Related application: PCT/GB2017/053586, published as WO2018096371A1

Classifications

    • B61L15/0072 On-board train data handling
    • B61L15/009 On-board display devices
    • B61L23/041 Obstacle detection
    • B61L27/40 Handling position reports or trackside vehicle data
    • B61B1/02 General arrangement of stations and platforms including protection devices for the passengers
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads


Abstract

The invention concerns providing information and advice to drivers, staff and passengers at boarding areas of public transport, in particular train and metro stations. A vehicle passenger monitoring system comprises an imaging sensor 18 (e.g. a camera) arranged to capture images of a scene including vehicle access points 16 (e.g. doors). An image processor receives the output from the imaging sensor, and has an object detector for determining the presence of discrete objects 14 in the scene based on variation of visual parameters within the captured images. An object identifier/classifier determines an object type for the detected objects, e.g. from a list that includes both animate and inanimate objects. A scene analyzer identifies one or more conditions for the imaged scene based on the object types. A signal generator sends an output including information derived from said condition to a communication system for communication to individuals. The system may be used to generate a passenger information display 22 and/or to monitor and report on adverse conditions such as overcrowding or hazards (e.g. baggage 24A,B).

Description

(71) Applicant(s): University of Lancaster (Incorporated in the United Kingdom), University House, Bailrigg, LANCASTER, LA1 4YW, United Kingdom
(72) Inventor(s): Plamen Parvanov Angelov; Gruff Morris; Howard James Parkinson
(56) Documents Cited: EP 2423708 A1; EP 2093698 A1; US 5176082 A; US 20080106599 A1; US 20160125248 A1; Velastin, Sergio A., Boghos A. Boghossian, and Maria Alicia Vicencio-Silva, "A motion-based image processing system for detecting potentially dangerous situations in underground railway stations", Transportation Research Part C: Emerging Technologies 14.2 (2006): 96-113
(58) Field of Search: INT CL B61B, G06K, G06T, H01B; Other: WPI, EPODOC
(74) Agent and/or Address for Service: Adamson Jones, BioCity Nottingham, Pennyfoot Street, NOTTINGHAM, Nottinghamshire, NG1 1GF, United Kingdom
(54) Title of the Invention: Transport passenger monitoring systems
(57) Abstract Title: Passenger monitoring system providing information on possible hazards
[Drawings: Figures 1 to 7, sheets 1/4 to 4/4]
TITLE OF THE INVENTION
Transport Passenger Monitoring Systems
BACKGROUND OF THE INVENTION
This disclosure concerns vehicle passenger monitoring systems, for example including passenger-related information systems and associated methods.
As railway and mass transit systems continue to develop, there is a requirement to move more people faster. One bottleneck within such systems is the time taken to stop at a waypoint, allowing people to alight and board a vehicle, and then to commence or restart a journey.
As is well known, deceleration and acceleration times for railway vehicles may be shortened by using suitable motive technology, for example via the use of powerful electric (rather than diesel) motors. However there remains for transport operators the important unsolved problem of arranging for people to alight from and board vehicles rapidly, together with luggage of various types. Such problems are particularly prevalent for railway vehicles but also apply to other passenger vehicle types.
This problem is further complicated by health and safety considerations. Various different passenger scenarios and behaviour can cause wildly varying health and safety risks in the vicinity of a moving or stationary vehicle. In the UK the number of passengers killed or injured during a railway journey is currently very small. However the transition of passengers to and from railway vehicles is hazardous and the pressure to increase passenger throughput has the potential adverse effect of increasing safety risks.
Surveillance of railway stations and surveillance inside trains by video cameras is known. Generally the video stream may be made available to railway staff via display screens for review, and may be recorded for retrieval and inspection post hoc in the event of an incident.
In railways such as metros where many of the trains are the same length, mirrors and/or video screens may be provided at stations as aids to drivers. These may be positioned adjacent to the driver’s stopping point. Such video screens are generally driven by at least one camera looking obliquely along the platform side of the train. These aids allow the driver a better view of passengers alighting and boarding. However they require a train to be stationary and thus do not assist a driver during approach to the platform or else whilst leaving the platform. Furthermore the output of a number of video cameras is required to adequately capture a scene. The use of numerous video feeds causes a risk that a driver may miss some pertinent information in one feed whilst concentrating on another video feed. The use of multiple screens/feeds in itself can be overwhelming for less experienced viewers.
It is known for certain trains to be driverless. Since there is no driver present, such systems require means of checking safety without relying on manual inspection.
Computer vision based systems have been known since the 1980s and have been used to detect objects in a video scene. Existing systems are typically used to detect moving objects in a static scene. For example the use of background subtraction is known for detecting moving objects in a scene that is viewed using a static camera. There also exist some examples of using a vehicle mounted camera to detect static objects. Such systems have been used on dedicated inspection trains for the purpose of monitoring track quality, for example as described in US 2012/0274772.
Patent applications US2013180426 (A1) and JP2012066617 (A) teach the location of cameras on the outside of a train in order respectively to control a bridge to span the gap between the train and platform, and to detect the stopping position of a train relative to a platform.
Such examples concern the operation of the vehicle and/or associated track in an engineering sense. In contrast, in transport environments video surveillance recordings are generally used “as is” with little or no additional processing. Patent application JP2012001191 (A) teaches the location of a plurality of cameras on a railway train in order to provide a composite image to the driver of the train.
It is an object of the present invention to provide a passenger transport monitoring system which mitigates one or more of the above problems. It may be considered an additional aim to enhance rapid and safe alighting and boarding of passenger vehicles via provision of assistive information, e.g. to waiting passengers and/or vehicle operators.
BRIEF SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a vehicle passenger monitoring system comprising: an imaging sensor arranged to capture images of a scene including one or more vehicle access point; an image processor arranged to receive the output of the imaging sensor, the image processor having an object detector for determining the presence of discrete objects in the scene based on variation of one or more visual parameter within the captured images, an object identifier/classifier arranged to determine an object type for the detected objects from a list of object types comprising animate and inanimate objects, and a scene analyser for identifying one or more condition for the imaged scene based on the identified object types; the system further comprising a signal generator for generating an output comprising information derived from said condition and transmitting said output to a communication system for communication to one or more individual.
The invention may allow beneficial information to be communicated to relevant parties, such as the vehicle operator/driver and/or passengers located either on-board the vehicle or waiting to board the vehicle. The information may take the form of any, or any combination, of a condition/status indication for the scene under surveillance, an alert and/or an instruction or announcement.
The output may take the form of an audio and/or visual output. The output may comprise one or more augmented image derived from the images captured by the imaging sensors. The output may comprise a video stream, e.g. in which one or more condition is indicated or highlighted.
The condition may comprise a risk rating or score for the scene or a portion thereof. The condition may comprise one or more individual identified risk or hazard within the scene. Any such individual risk/hazard may be identified according to any or any combination of an object’s size, shape, orientation, location in the scene, location relative to another object, and/or speed/direction of movement within the scene. For example, an object’s proximity to and/or movement towards a hazard may be determined. The hazard may comprise a moving vehicle or an identifiable edge/barrier/passageway or the like within the scene. A hazard may comprise an object obstructing a passageway.
Additionally or alternatively, the condition may comprise a combination or summation of objects, object types or individual risks/hazards identified within the scene. A count of the number of people and/or objects in the scene may provide an indication of crowding, overcrowding or a general level of activity within the scene. A count of certain object types may provide an indication of a risk or adverse scenario.
A plurality of conditions may be determined for different portions of a scene, or for a plurality of different scenes relating to a common vehicle. The different portions of the scene may comprise different vehicle access points or different sections of a vehicle, such as carriages in the case of a railroad vehicle. The different portions of the scene may be individually ranked or rated. The output by the communication system may comprise an indication of a condition for one or more vehicle access point, e.g. indicating its suitability for use in boarding/alighting the vehicle.
The output may comprise a coding, ranking or other indication of congestion, crowding or other risk associated with the vehicle or one or more vehicle access point.
The one or more imaging sensor typically comprises a camera, such as a video camera.
A plurality of imaging sensors may be used. One or more imaging sensor may be based on-board the vehicle, e.g. in transit. Additionally or alternatively, one or more imaging sensor may be located at or near or around a static location where it is intended that passengers will board or alight the vehicle and/or at the side(s) of the track traversed by the vehicle. The sensor(s) may capture images of the vehicle interior and/or exterior. One or more imaging sensor may capture images of an external scene comprising the vehicle exterior.
The one or more imaging sensor may capture images of a scene which includes at least a portion of the vehicle.
The captured images may be recorded for later retrieval and processing/inspection.
The image processor and/or signal generator may be located on the vehicle. The output may be communicated to an audio/visual display on the vehicle, e.g. in a vehicle cabin. Additionally or alternatively the output may be communicated off the vehicle, e.g. to a location at which passengers are to board or alight the vehicle and/or to a control/monitoring centre. The communication system may comprise a wired/wireless data transmitter and/or a device for output of the information to an end user.
The vehicle access point may comprise a vehicle platform, vehicle door, passage, gangway, ramp or the like, or a passenger waiting area. The vehicle access point may be located on the vehicle or adjacent the vehicle, e.g. when the vehicle is at a standstill.
A plurality of vehicle access points may be imaged by one or more imaging sensor in one or more scene. A comparison between the conditions for each vehicle access point may be made by the processor or signal generator.
The signal generator may receive data from one or more further information source. The signal generator may or may not be operationally linked therewith. The further information source may comprise at least one of a vehicle identification system, a vehicle geographical positioning system, a vehicle speed system, a vehicle loading/capacity system and/or a vehicle seat reservation system. Any said systems may or may not be located on the vehicle. The signal generator may process said data so as to generate augmented vehicle output data for transmission via the communication system.
The image processor may receive a video input and may process the video input to produce the one or more condition for the scene in real time. The signal generator may generate the signal output in real time. The signal output may be generated at a rate less than, or substantially equal to, the rate of receipt of the captured image data by the image processor.
The communication system may comprise a passenger information system, public announcement system or the like.
The object detector may process image pixel data. The visual parameter may comprise any or any combination of colour, brightness/density or the rate of change thereof over an image, e.g. between pixels. The object detector may process static images or frames, for example identifying objects/pixel clusters in individual images.
The object detector may identify pixel clusters according to the visual parameter, e.g. according to a degree of similarity of one or more visual parameters between adjacent pixels. Pixel clusters may be identified as objects, e.g. having determinable shape, size and colour/brightness properties. An edge detection technique may be used.
The object identifier/classifier may classify objects according to any or any combination of object size/area (e.g. in pixels), shape/geometry (e.g. relative dimensions), relative location in the scene, colour, brightness and/or movement. Each object type in a predetermined list of object types may have associated therewith any or any combination of said parameters, for example including one or more threshold parameter value. Thus a detected object in an image can be correlated to a predetermined object type. Different parameters may carry different weighting in determining object type.
The object identifier/classifier may process video and/or individual images/frames, e.g. processing object motion.
For an identified or detected object, a plurality of points may be identified as object markers. Movement of the object may be identified by tracking the position of the markers (e.g. the relative position and/or the position in the scene). Object movement can be logged and understood in a computationally efficient manner, such that it can inform object identification/classification and/or scene analysis in a practical and/or time-efficient manner.
The scene analyser may determine one or more condition for the scene based on one or more parameter for the identified object types, such as for example a count of objects of one or more specified type, proximity of objects of the same/different type, identification of an object as a hazard, direction of movement of one or more object relative to another object or hazard, and/or movement speed or acceleration of one or more object.
The scene analyser may discount one or more object or object type from the condition determination. For example, fixtures/fittings and certain other permanent/static objects may be discounted.
The scene analyser may identify one or more object or portion of the scene as a hazard, e.g. according to object type or a combination of object type with one or more further object parameter identified above, such as movement or proximity to another object or the vehicle.
The image processor may apply a pixel density estimation process, e.g. with respect to a plurality of neighbouring pixels. Density estimation may be performed in a plurality of directions. The image processor may apply a filter to the density estimation output.
The output may comprise a visual/video display output. The output may comprise an augmented visual output, e.g. comprising one or more images or a video feed from the imaging sensor(s). The augmented output may comprise indicia in addition to the images of the scene, wherein the indicia relate to a condition, hazard or instruction for the scene resulting from the image processing. The indicia may comprise text, highlighting, colourcoding or the like to identify any such feature in the scene.
An augmented video output for the scene may be particularly beneficial in helping a user identify key hazards in the scene, or else for providing clear instructions for passengers. Additionally or alternatively, an audio output may be provided, such as an alert signal or message, or a visual output of a different kind. Light projection may be used to indicate locations/portions of the scene, e.g. to highlight to passengers boarding locations and/or hazards.
According to a second aspect of the invention there is provided a vehicle and passenger monitoring method corresponding to the system of the first aspect.
According to a third aspect of the invention, there is provided a data carrier comprising machine readable instructions for the control of one or more processor to perform the image/data processing corresponding to the system or method of the first or second aspects.
According to a fourth aspect of the invention, there is provided apparatus for providing information to intending passengers waiting in a waiting area for transport on a vehicle, wherein the apparatus comprises: a plurality of imaging cameras on the vehicle operationally linked to at least one vehicle-based image processing unit so as to provide thereto images of a scene surrounding the vehicle, and a vehicle-based output unit that creates output data from the processed image data and transmits the output data from the vehicle to at least one off-vehicle communication system; wherein the communication system comprises one or more information display for the waiting area and at least one waiting area display generation unit receiving the output data and processing it into a format for communication via the information display.
Wireless transmission is typically used between the on-board vehicle components of the system and the off-vehicle components.
By the use of advanced image/video processing and communication techniques the invention may provide information that is otherwise unobtainable or impractical to derive by manual inspection.
Wherever practicable, any of the essential or preferable features defined in relation to any one aspect of the invention may be applied to any further aspect. Accordingly, the invention may comprise various alternative configurations of the features defined above.
BRIEF DESCRIPTION OF THE DRAWINGS
Practicable embodiments of the invention are described in further detail below by way of example only with reference to the accompanying drawings, of which:
Fig. 1 shows a schematic view from above of a vehicle passenger monitoring system according to an example of the invention;
Fig. 2 shows a schematic of the different components of a vehicle passenger monitoring system and the flow of information according to an example of the invention;
Fig. 3 shows a schematic of the elements of an image processing unit according to an example of the invention;
Fig. 4 shows a schematic line drawing of augmented image/video output of a vehicle passenger monitoring system according to an example of the invention;
Fig. 5 shows a schematic line drawing of a further example of an augmented video output;
Fig. 6 shows a schematic line drawing of another example of a visual output; and,
Fig. 7 shows a schematic line drawing of lighting along a platform being used to provide passenger information.
DETAILED DESCRIPTION OF THE INVENTION
Embodiments of the present invention are described below which incorporate advanced image/video processing and/or communication techniques. The information generated can be used by passengers waiting to board/alight a vehicle and/or by the operators of transport and related services, for example:
• Personnel within vehicles (for example drivers and/or guards and/or conductors);
• Personnel in areas where people are waiting to board;
• Personnel in operator locations such as station management facilities, line control facilities (“signal boxes”) and the like;
• Technical and engineering personnel; and/or
• Emergency service personnel
Figure 1 shows schematically the elements of an embodiment of the present invention. In Fig. 1, a railway vehicle 10 is shown in conjunction with a vehicle access point, which in this example takes the form of a station platform 12 and/or doorways 16 on the vehicle indicated by arrows. A plurality of passengers 14 on the platform 12 typically comprise individuals waiting to board the train 10 and/or alighting the train.
The vehicle 10 has mounted thereon a video camera 18 arranged to capture the scene indicated by the field of view 20, which includes the platform 12 and preferably at least a portion of the exterior of vehicle 10. Whilst a single camera 18 is shown for simplicity, it will be appreciated that a plurality of cameras will typically be provided for capturing the external scene surrounding the vehicle. At least one such camera may be forward-facing or rear-facing, providing images of the track and its environs.
Additionally or alternatively, one or more camera within the vehicle interior, e.g. within one or more carriage and/or having sight of one or more doorway, may be provided. Furthermore, it is possible that the system may comprise one or more further camera mounted off the vehicle, for example at the station, platform or other vehicle access point but which may be in communication with image processing equipment on-board the vehicle.
Whilst it is considered an important feature that one or more camera is mounted to the vehicle whilst in motion, e.g. to capture a scene pertaining to vehicle passenger access points whilst the vehicle is approaching/departing a passenger waiting area, the precise location of the image processing equipment may be less critical, provided suitable communication means can be implemented to ensure a reliable and fast data feed between the cameras and image processing equipment.
A passenger information display 22 is provided at the passenger waiting area on the platform and typically comprises one or more screen. The passenger information system may additionally or alternatively comprise one or more speaker and/or announcement system.
Turning now to Fig. 2, there is described below the flow of information via a system according to an example of the invention.
A transport vehicle 100 comprises at least one video camera 101 operably connected to at least one vehicle real-time video processing unit (VPU) 103.
The cameras 101 may be analogue cameras, providing an analogue video stream, or may be digital cameras providing video as a series of pixelated frames.
Where a camera 101 is an analogue camera, an additional processing step (not shown) to convert from analogue to digital format is implemented by the VPU or by other means as would be generally known to the person skilled in the art.
The VPU takes as input one or more video streams and analyses them. In particular the VPU identifies objects within the internal space of the vehicle 100. The VPU may identify and/or count objects including without limitation objects such as:
• People,
• Animals,
• Items of luggage,
• Bicycles,
• Wheelchairs, and/or
• Pushchairs and prams
Importantly the VPU identifies both static and moving people and objects. The VPU may also identify unusual and/or dangerous situations within the vehicle 100, such as:
• A person lying down,
• An obstruction in an aisle and/or doorway,
• Smoke and/or fire, and/or
• Groups of people
The VPU may also receive images from one or more cameras mounted on the vehicle, where the image is of a location outside the vehicle. Such cameras may for example provide images of the sides of the vehicle, including doors, or may be forward-facing or rear-facing cameras, providing images of the track and its environs. Forward-facing cameras may be used for example for detection of obstacles on the track, both in stations and between stations.
The VPU 103 outputs structured data, e.g. intermittently, and sends it to a vehicle data unit (VDU), 105. The VDU 105 may optionally also collect data from other systems 107 within the vehicle 100, without limitation such as:
• Vehicle identification system (which may identify the vehicle and/or the service),
• Vehicle geographical position system,
• Vehicle speed system, and/or
• Vehicle passenger reservation system.
The VDU 105 formats the data that it (105) has collected along with any associated/derived data as vehicle output data (VOD). The VDU 105 sends VOD wirelessly to a vehicle data receiving system (VDRS) 215 located externally to the vehicle. The VDU 105 may also supply VOD to a vehicle display generation unit (VDGU) 109 within the vehicle which may use the VOD to create content and communicate it via an information output 111 mounted on the exterior and/or interior of the vehicle 100. The output unit may comprise a passenger display and/or vehicle driver/operator display.
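The patent does not prescribe a concrete format for the VOD. Purely as an illustration, a minimal sketch of how such a record might be assembled by the VDU and serialised for wireless transmission is given below (Python, JSON payload assumed); all field names and example values are invented for the sketch, not taken from the specification.

```python
import json
import time
from dataclasses import dataclass, field, asdict


@dataclass
class VehicleOutputData:
    """Hypothetical vehicle output data (VOD) record assembled by the VDU 105."""
    vehicle_id: str                 # from the vehicle identification system
    service_id: str                 # identifies the service being operated
    position: tuple                 # (latitude, longitude) from the positioning system
    speed_kmh: float                # from the vehicle speed system
    door_congestion: dict = field(default_factory=dict)  # door id -> object counts from the VPU
    hazards: list = field(default_factory=list)          # structured hazard reports from the VPU
    timestamp: float = field(default_factory=time.time)

    def to_wire(self) -> bytes:
        """Serialise for wireless transmission to the VDRS 215."""
        return json.dumps(asdict(self)).encode("utf-8")


# Example VOD message for a vehicle with two monitored doorways.
vod = VehicleOutputData(
    vehicle_id="390-123", service_id="1A23",
    position=(54.01, -2.79), speed_kmh=12.5,
    door_congestion={"door_1": {"people": 14, "luggage": 3},
                     "door_2": {"people": 4, "luggage": 0}},
    hazards=[{"type": "obstruction", "location": "door_1"}],
)
print(vod.to_wire())
```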
The VDRS 215 may comprise a data coordinating unit for amassing data and/or controlling the dissemination of data through the relevant communication network to the relevant processing/display units. It may be located on or off the vehicle. A secure local (e.g. ad-hoc) network may be established between VDU 105 and a ground/station based data unit 305 (to be described below) by performing a conventional handshake or similar security data exchange to permit communication there-between.
In Fig. 2 the area where intending passengers assemble while waiting for the arrival of a transport vehicle 100 is designated 300. This area may for example comprise a platform at a railway station as described above.
The VDRS 215 may communicate with other systems 213, for example in order to determine which specific vehicle 100 will arrive at which specific waiting area 300 or to access other vehicle related information held in ground based systems. Such systems 213 may include without limitation:
• Vehicle routing systems;
• Vehicle/station timetabling systems;
• Network control systems; and/or
• Signalling systems.
The VDRS can thus supply the relevant data between one or more vehicles and one or more off-vehicle information system, e.g. at a station. The VDRS 215 supplies data to a waiting area data unit (WADU) 305. The data comprise at least the VOD from the respective vehicle 100. These data may optionally include additional data from the other systems 213.
Within the waiting area 300 there may also be at least one video camera 301 operably connected to at least one waiting area real-time video processing unit (“WAPU” 303).
The cameras 301 may be analogue cameras, providing an analogue video stream, or may be digital cameras providing video as a series of pixelated frames.
Where a camera 301 is an analogue camera, an additional processing step (not shown) to convert from analogue to digital format is implemented by the WAPU or by other means as would be known to the person skilled in the art.
WAPU 303 takes as input one or more video streams and analyses them. In particular, WAPU identifies objects within the waiting area 300. WAPU may identify and/or count objects such as, without limitation, the object types listed above for the VPU 103.
WAPU 303 may also identify information corresponding to the vehicle 100. For example, several train operating companies run services with different types of vehicles, potentially having the same or different numbers of cars/carriages and/or carriages of differing length and therefore different passenger capacity. WAPU 303 may identify any or any combination of vehicle type, length/size, passenger capacity and/or passenger access points.
Identification of the type of vehicle allows look-up of information useful to waiting passengers (which may be signalled to them), such as:
• Expected congestion level at each boarding point;
• Identification of boarding points with nearby luggage storage;
• Bicycle boarding points;
• Accessible boarding points; and/or
• Location of boarding points relative to the platform.
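By way of example only, such a look-up could be a simple table keyed on the vehicle type identified by the WAPU; the vehicle classes and boarding details below are invented placeholders rather than data from the patent.

```python
# Hypothetical boarding-information table keyed on the identified vehicle type.
BOARDING_INFO = {
    "class_390_9car": {"carriages": 9,
                       "bicycle_doors": [1, 18],      # doors with nearby bicycle storage
                       "accessible_doors": [9],       # step-free access points
                       "luggage_doors": [1, 9, 18]},
    "class_331_3car": {"carriages": 3,
                       "bicycle_doors": [2],
                       "accessible_doors": [2],
                       "luggage_doors": [1, 6]},
}


def boarding_advice(vehicle_type: str) -> dict:
    """Return boarding information to be signalled to waiting passengers."""
    return BOARDING_INFO.get(vehicle_type, {})


print(boarding_advice("class_390_9car"))
```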
WAPU 303 may also identify unusual and/or dangerous situations within the waiting area 300, such as:
• A person or object in an unexpected and/or dangerous location;
• A person lying down in the waiting area;
• A person or object located too close to a vehicle;
• A person or object located too close to the edge of the waiting area;
• A person or object on the permanent way (e.g. railroad tracks or rails);
• A person or object located or moving in front of a vehicle; and/or
• Smoke and/or fire
The WAPU 303 sends structured data, e.g. intermittently, to the WADU 305.
The WADU 305 supplies consolidated data to a waiting area display generation unit (WADGU) 309, which uses the data to generate content for at least one information output unit 311, e.g. comprising a visual display, within the waiting area 300.
The WADU 305 may also supply data to one or more external systems 211. Such systems 211 may comprise without limitation, systems to provide data to:
• a driver of the vehicle;
• a manager, guard, conductor or similar of the vehicle;
• an automatic control system of the vehicle (for example to apply the brakes);
• a power control system;
• a signaller (for example controlling movement of the vehicle);
• managers of the transport organisation;
• personnel located in the waiting area;
• engineering and/or technical personnel; and/or
• emergency service personnel.
As an example, in certain embodiments, a signal may be used to modify the movement of the vehicle. This may be for example modification of speed, for example by application of brakes and/or modification of traction power.
In certain embodiments using electrical traction, safety rules may be implemented so as automatically to switch off track power, for example to conducting rail or overhead lines.
Data provided as input to such systems 211 may comprise any suitable data, including without limitation:
• real time vehicle operation data; and/or
• expected congestion or hazard data at locations within the vehicle and/or waiting area.
Output provided by such systems 211 may comprise any suitable format, including without limitation:
• Messages such as text messages;
• Pre-recorded audio announcements;
• Information montages such as web pages;
• Software applications;
• Video feed data, intelligently selected; and/or
• Augmented video data with intelligent highlighting.
The data supplied in the waiting area 300 via the means of display 311 located in the waiting area 300 may comprise without limitation:
• Expected location of boarding points;
• Expected location of special boarding points (for example bicycle loading, step-free access points, etc);
• Expected congestion of each boarding point;
• Expected level of occupancy of the vehicle in the vicinity of the boarding point;
• Boarding points that are out of use; and/or
• Expected arrival time of a vehicle.
The data supplied via the information output unit 111 located on the vehicle 100 may comprise without limitation:
• Expected congestion at each boarding point;
• Expected arrival time of the vehicle; and/or
• Connection information (such as times and locations of connecting services)
The information output units 111 and/or 311 may comprise without limitation:
• Projector(s) or other light emitters projecting images onto the waiting area, e.g. onto a wall or the ground, or vehicle access points;
• Projector(s) projecting images onto the vehicle;
• Display screen(s);
• Light-emitting means (e.g. lighting) mounted within the waiting area;
• Light-emitting means (e.g. lighting) located at a vehicle access point on/adjacent to the vehicle; and/or
• Speaker(s) or other sound emitting means
The information output units 111 or 311 may provide information in a range of formats, for example without limitation:
• Textual information
• Graphics (such as arrows, no-entry signs, etc)
• Colour coded information
• Light intensity information
• Sound information
There will now be described several practical embodiments of the present invention, embodying both the overall principles and a plurality of options and variations for implementation within the scope of the overall invention. Some of the component units and systems of the present invention may comprise standard means of computation, e.g. running conventional operating systems and/or firmware, but operated to produce novel outputs in accordance with the present invention. Suitable wired and/or wireless communication techniques/protocols may be used to convey information between the different components of the system.
Figure 3 shows schematically the elements of a real-time video processing unit (e.g. 103 or 303) of an example of the present invention. The implementation of this unit in respect of either or both of the vehicle and the passenger waiting area may be substantially similar.
Image data 401, typically in the form of a video data stream, is received from a camera 101,301 by a video processing or object detection/identification unit 501. In the examples of the present invention described below, the video processing unit uses image data processing techniques based on an adapted form of an edge flow technique (as described for example in the paper Morris & Angelov, “Edge Flow”, Systems Man and Cybernetics (SMC), IEEE International Conference on, pp. 200-208, 2015).
In the context of the present invention a real-time processing methodology is implemented, where 'real-time' means that incoming video data is processed and the relevant scene analysis is carried out to produce derived outputs at a speed that is substantially equal to or faster than the rate at which video data arrives. The output will by necessity have a small time delay relative to the time at which the image data is captured, but the relative speed of processing image data is important to ensure that the time delay does not substantially increase over time when using the system. Thus any time lag in providing the output information is predictable and capped to the extent that it will not significantly detract from the timeliness of the information provided. The image processing time delay will thus be a fraction of a second and typically a tenth of a second or less, e.g. measurable in the order of hundredths of a second or milliseconds.
The image processing techniques used in the examples described below can be summarised as a combination of optical flow, edge detection and texture analysis, e.g. using a Sobel operator. In examples of the invention the image processing may be characterised by use of a windowed density estimation process.
The video processing unit 501 receives video 401 as input and derives from it an output 403 comprising candidate objects and identifiers/descriptors of such objects. This process may be enhanced by the use of feedback 409 from a later stage of the process.
The image processing steps are explained in detail by way of the following pseudo-code for object identification:
• Acquire a digital video frame comprising pixels, or generate this by digitising an analogue video stream.
• For each pixel, determine the density of the features in comparison to the pixels around the current pixel. The density of the features is determined by calculating feature density change in two orthogonal directions, i.e. in the Cartesian X and Y directions.
o A parameter is used to determine how many pixels around the current pixel to consider as part of the density determination equation (minimum 2).
It can be fixed at system initialisation. The parameter may vary with the feedback loop - e.g. based on the type of objects desired to detect. The system autonomously adapts this parameter. Large values of this parameter lead to smooth density changes and accentuated wider density changes. Smaller values of the parameter lead to normalisation of all changes in the frame. This yields two output data spaces (one for each orthogonal direction) of the rate of change of the feature descriptors (i.e. feature velocity). The output can be considered the edge profile of each texture within the frame.
• Apply a Sobel filter to the density field in both the X data space and Y data space. A horizontal Sobel filter is applied to the X data space. A vertical filter is applied to the Y data space.
o A parameter which determines the size of the Sobel filter is defined. It is fixed at system initialisation. A 7x7 filter is a normal starting size for this application. It may vary with the feedback loop based on the type of objects desired to detect. The system autonomously adapts this parameter. A larger value of this parameter leads the Sobel filter to process a larger area of pixels (yielding a wider range of variance). A smaller value leads to the filter processing a smaller area of pixels (yielding a tighter range of variance). A small Sobel filter is sensitive to local changes whereas a larger Sobel filter is sensitive to more global changes. The output is two data spaces of the rate of change of density (rate of change of change of the feature descriptors is acceleration).
• The two output Sobel data spaces are constructively combined to yield a single data space showing the rate of change of the density in a single data space.
• Each pixel from the Sobel data space is assessed for similarity to its neighbouring pixels. Two further parameters are used:
o Sufficient rate of change of density to form a candidate object. Pixels outside this parameter are not considered to be candidate objects.
o Similarity range to neighbouring pixels. If the neighbouring pixels are within the similarity range they are flagged as sufficiently similar to form a candidate object. Outside the similarity range no flag is set.
• Flagged pixels are grouped into candidate objects based on their flag identifier, such as a flag number.
• Further parameters are used to determine the minimum and maximum candidate object size. Any candidate object that is smaller or larger than the range specified is not grouped as a candidate object.
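The sketch below illustrates the general shape of these steps using common image-processing primitives (Python with OpenCV and NumPy). It is an approximation for orientation only: simple box filtering stands in for the windowed density estimation, connected-component labelling stands in for the similarity flag-and-group step, and all parameter values are arbitrary; it is not the edge-flow implementation referenced above.

```python
import numpy as np
import cv2


def detect_candidate_objects(frame_gray: np.ndarray,
                             density_window: int = 5,   # pixels around the current pixel (minimum 2)
                             sobel_size: int = 7,       # 7x7 is the suggested starting size
                             min_change: float = 10.0,  # sufficient rate of change of density
                             min_area: int = 50,
                             max_area: int = 50000):
    """Approximate the object-identification steps in the pseudo-code (sketch only)."""
    f = frame_gray.astype(np.float32)

    # Feature density in two orthogonal (Cartesian X and Y) directions.
    kernel_x = np.ones((1, density_window), np.float32) / density_window
    kernel_y = np.ones((density_window, 1), np.float32) / density_window
    density_x = cv2.filter2D(f, -1, kernel_x)
    density_y = cv2.filter2D(f, -1, kernel_y)

    # Rate of change of density: horizontal Sobel on the X space, vertical on the Y space.
    ddx = cv2.Sobel(density_x, cv2.CV_32F, 1, 0, ksize=sobel_size)
    ddy = cv2.Sobel(density_y, cv2.CV_32F, 0, 1, ksize=sobel_size)

    # Constructively combine the two Sobel data spaces into a single data space.
    change = cv2.magnitude(ddx, ddy)

    # Keep only pixels with a sufficient rate of change of density ...
    mask = (change > min_change).astype(np.uint8)

    # ... and group neighbouring flagged pixels into candidate objects.
    num_labels, labels = cv2.connectedComponents(mask)
    candidates = []
    for lbl in range(1, num_labels):
        component = labels == lbl
        area = int(component.sum())
        if min_area <= area <= max_area:     # discard under- and over-sized candidates
            ys, xs = np.nonzero(component)
            candidates.append({"label": lbl,
                               "num_pixels": area,
                               "bbox": (int(xs.min()), int(ys.min()),
                                        int(xs.max()), int(ys.max()))})
    return candidates
```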
The image processor thus identifies candidate objects in the imaged scene. Certain features are then extracted from the candidate objects, e.g. both physical and mathematical features for the objects, which can feed into the object assessment/characterisation process 503. Various data features/parameters associated with those objects are used at a later stage for object classification process 505, to be described below, which makes reference to predetermined object models or templates that are accessible to the image classifier.
Motion of a candidate object may be used as part of the characterisation process. An optical flow process is applied to a plurality of pixels, typically a predetermined number of pixels which represents a very small subset of the pixels within the candidate object. In this example a centre pixel is identified, together with a plurality of pixels spaced from the centre pixel, for optical flow analysis of each candidate object. The number of pixels selected and/or a spacing parameter could be set to achieve suitable results.
In one particular example of the invention a central pixel for the object is used together with a plurality of further pixels spaced about the edge of the object within the 2D image. For example, selecting four pixels located on the extreme boundary of the object, in addition to the central pixel, can be used to define 3D motion of the object.
Motion vectors for each pixel are determined and a resultant optical flow vector is normalised across the selected pixels. A magnitude and direction of motion is assigned to each candidate object. Thus the motion of candidate objects can now be used to identify object type in conjunction with the geometric, optical and/or mathematical features determined by the image processor. When four pixels at the extreme boundaries of the object are used in combination with a central pixel, this can beneficially be used to assess eight degrees of freedom of motion for the object.
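A minimal sketch of this motion estimation step is given below, assuming OpenCV's sparse Lucas-Kanade optical flow for the per-pixel tracking (the patent does not name a particular optical flow algorithm) and a binary mask for the candidate object:

```python
import numpy as np
import cv2


def object_motion(prev_gray, next_gray, obj_mask):
    """Assign a motion magnitude and direction to one candidate object.

    Illustrative only: the centre pixel and four pixels on the extreme
    boundary of the object are tracked between frames, and the resulting
    motion vectors are averaged into a single resultant vector.
    """
    ys, xs = np.nonzero(obj_mask)
    centre = (xs.mean(), ys.mean())
    extremes = [(xs.min(), ys[xs.argmin()]), (xs.max(), ys[xs.argmax()]),
                (xs[ys.argmin()], ys.min()), (xs[ys.argmax()], ys.max())]
    points = np.array([centre] + extremes, dtype=np.float32).reshape(-1, 1, 2)

    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None)
    vectors = (moved - points).reshape(-1, 2)[status.ravel() == 1]
    if len(vectors) == 0:
        return 0.0, 0.0                          # no reliable flow found
    resultant = vectors.mean(axis=0)             # normalised across the selected pixels
    magnitude = float(np.hypot(resultant[0], resultant[1]))
    direction = float(np.degrees(np.arctan2(resultant[1], resultant[0])))
    return magnitude, direction
```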
Turning back to Fig. 3, candidate objects 403 are output from the edge flow process 501 to a characterisation process 503. Here further features are extracted through calculation of a range of characteristics for each of the candidate objects 403. Such characteristics comprise without limitation any combination or all of: length, width, area, number of pixels, size ratio, mean change in density, standard deviation of change in density, mean density, standard deviation of density, mean pixel descriptors, standard deviation of pixel descriptors, and/or x and y location.
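As an illustration, those characteristics could be computed from a candidate object's pixel mask together with the density fields produced at the detection stage, along the following lines (the exact feature definitions are assumptions, not the patent's formulas):

```python
import numpy as np


def characterise(obj_mask, density, density_change):
    """Extract the candidate-object characteristics listed above (sketch only)."""
    ys, xs = np.nonzero(obj_mask)
    length = int(ys.max() - ys.min() + 1)
    width = int(xs.max() - xs.min() + 1)
    return {
        "length": length,
        "width": width,
        "area": length * width,                          # bounding-box area
        "num_pixels": int(len(xs)),
        "size_ratio": length / width,
        "mean_density": float(density[obj_mask].mean()),
        "std_density": float(density[obj_mask].std()),
        "mean_change": float(density_change[obj_mask].mean()),
        "std_change": float(density_change[obj_mask].std()),
        "x": float(xs.mean()),
        "y": float(ys.mean()),
    }
```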
Further details of suitable techniques for pixel density estimation are disclosed in US8250004B2 and US9390265B2 (both to Angelov).
Using an optional training phase, the characterised objects 403 could be filtered further at this stage by user selection to provide prototype objects of interest.
Candidate object characterisation is performed by density based clustering - based on a chosen selection of features. The choice of features is dependent on the criteria relevant to a specific embodiment. The object definitions are applied later (optionally using a classifier), when the characteristics of candidate objects are determined.
For example, detection of people uses as principal criteria the size ratio and movement magnitude and direction (i.e. people are generally elongate and tend to move around), whereas detection of luggage uses the mean and standard deviation features (i.e. because luggage is typically static but can be variously sized, although the texture of an individual item of luggage is usually fairly uniform). Based on this disclosure, it will be evident to skilled persons how such principles may be extended to the detection of further classes or sub-classes of object. Classes of object that may be detected include without limitation: people, luggage (rigid and soft), bicycles, wheelchairs, prams and/or pushchairs, animals, doors on the vehicle, and/or type of rolling-stock comprising the vehicle.
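A toy rule-based classifier following these criteria might look as follows; the thresholds are invented for illustration, whereas the patent envisages tuning them (or learning object prototypes by density-based clustering) for a specific deployment:

```python
def classify(features, motion_magnitude):
    """Assign an object type from the characteristics and motion (sketch only)."""
    elongate = features["size_ratio"] > 1.8           # people are generally elongate
    moving = motion_magnitude > 1.0                   # pixels/frame; people tend to move around
    uniform_texture = features["std_density"] < 8.0   # luggage texture is usually fairly uniform

    if elongate and moving:
        return "person"
    if not moving and uniform_texture:
        return "luggage"
    return "unclassified"
```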
The application specific features required and the range appropriate to the application (i.e. threshold values) are fed back at 409 to the object detection process 501. Using a feedback process of this type, the parameters described in the object identification and/or feature extraction process may be tuned such that the detections are optimised to the required application objects. This may be semi-supervised or generally automated, based on the characteristics of the desired candidate objects. The system can adjust them itself given object detection criteria, or alternatively may be manipulated by an operator to force particular detection criteria. In either case, the adjustment of each parameter results in the described effect (at each parameter point). For example, the objects may only ever be detected with a particular range of change in density. Thus, the candidate object formation parameters can be adapted to reflect this.
The object characterisation tool 503 outputs at 405 characterised objects to an intelligent identification process 505, which may for example be a classifier. The intelligent identification process 505 defines the objects that are required to be detected (are of interest) for a specific embodiment of the present invention. Operation may be automated or (at least during a training phase) may have semi-supervised input from a human operator to determine this.
In general, embodiments of the invention will allow for identification of certain classes of objects that do not contribute to assessment of the dynamic nature of the scene. For example, fixtures and fittings, when determined to be correctly located on the platform, may be discounted in the determination of time-dependent safety factors.
Various embodiments will then be able to use different parameters or combinations of parameters in assessing status on-board the vehicle and/or in the vicinity of the passenger waiting area. In some examples, a simple count of the number of objects or people may provide some meaningful information on the level of crowding, which could trigger a simple alert or announcement. The ratio of objects/luggage to people could be used as another measure. Certain object types and/or parameters may contribute to the analysis of a scene to a greater extent than others. For example, counting of a number of bikes, wheelchairs and/or pushchairs may provide an indication of likely delays/bottlenecks in boarding the vehicle. Additionally or alternatively, a single large/oversized object, multiple such objects, or individuals carrying multiple suitcases could contribute to an assessment of the ease of boarding the vehicle.
In this manner, simple thresholds for individual object types, object parameters or counts of objects/types may be implemented in order to identify adverse conditions. In other examples, compound criteria involving multiple such parameters can be implemented and/or suitable thresholds set. Any such determined factors/conditions affecting the status of the scene and having a likelihood of causing adverse conditions for passenger safety or boarding time may be amassed. Different conditions/factors can be weighted/scored and a rating for the scene as a whole may be generated, e.g. by summation of the identified scores for the individual factors. Thus an overall ranking may be output in addition to or instead of identification of individual adverse conditions.
For example, the system may deduce overcrowding if a certain number of people (or more) are trying to leave a vehicle via the same door. The threshold number used may vary between types of vehicle, time of day (for example commuters generally know exactly where they are going) and quantity of luggage. For example the number may be twenty.
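A sketch of such a threshold test is given below; the base value of twenty comes from the example above, while the adjustments for vehicle type, time of day and luggage are illustrative assumptions:

```python
def overcrowding_suspected(people_at_door: int, vehicle_type: str,
                           peak_time: bool, luggage_at_door: int) -> bool:
    """Deduce overcrowding at a door from a people count (sketch only)."""
    threshold = 20                            # example base threshold from the text
    if vehicle_type == "metro":
        threshold += 5                        # assumed: wider doorways, faster flow
    if peak_time:
        threshold += 3                        # assumed: commuters board/alight faster
    threshold -= luggage_at_door // 2         # assumed: heavy luggage slows the flow
    return people_at_door >= threshold
```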
Such tools may be developed over time based on semantic knowledge or experience. Certain factors may be machine learned, for example by recording time taken for boarding/alighting vehicles, along with the parameters identified for the scene and determining the probability of certain parameters or combinations of parameters adversely affecting conditions.
Certain individual hazards (i.e. hazardous objects, combinations of objects, or events) may be identified by the system and an alert or report generated and output in response thereto. It will be appreciated that a number of such hazards can readily be identified using the tools described above, such as: overcrowding; objects being in close proximity to a platform edge or moving vehicle; objects falling between the platform and vehicle; objects being thrown or propelled; people moving erratically, tripping, falling or running; unattended luggage; and/or people climbing on objects or located in unauthorised areas.
Examples of use
A first usage example provides augmented video at a platform-train interface. Augmentation data 407 is output from the object classifier 505 and passed to a video composition unit 507 that also inputs a video stream 401 from the cameras. The video composition unit 507 outputs a composite video stream 427 with features and/or objects (static or dynamic as appropriate) highlighted for a human viewer.
From the video feed 401 the system (comprising 501,503 and 505) detects and identifies in each frame: people, the platform edge, and the train. It assesses the pixel proximity of each person-object to the platform edge object, and whether a train 100 is present or not. The information is calibrated to adjust pixel distance based on camera perspective. The distance of each person-object from the platform edge object is compared to a safety requirement to determine whether any person/object constitutes a significant danger.
In this embodiment any danger is highlighted in real-time by a human-comprehensible signal, for example by an augmented video feed 427. In an augmented video feed, any person or luggage object or other object located dangerously near the platform edge or train may be highlighted by editing pixels in the video feed. This may be for example by filling or outlining the pixels representing the object in a distinctive colour or pattern (or a time-varying technique such as flashing). For example the pixels may be filled with solid colour chosen automatically to contrast with the background.
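The proximity test and the pixel-editing step might be sketched as follows, assuming a simplified calibration in which the platform edge appears as a horizontal image line and a single pixels-per-metre factor stands in for the full perspective correction described above:

```python
import cv2


def highlight_dangers(frame, objects, platform_edge_y, pixels_per_metre,
                      safe_distance_m=1.0):
    """Outline any object whose calibrated distance to the platform edge is unsafe.

    Sketch only; `objects` is a list of dicts with a "bbox" entry (x0, y0, x1, y1).
    """
    out = frame.copy()
    for obj in objects:
        x0, y0, x1, y1 = obj["bbox"]
        distance_m = abs(platform_edge_y - y1) / pixels_per_metre
        if distance_m < safe_distance_m:
            # Edit the pixels of the feed: outline the object in a contrasting
            # alert colour (solid red, BGR order) for the augmented output 427.
            cv2.rectangle(out, (x0, y0), (x1, y1), (0, 0, 255), thickness=3)
    return out
```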
Fig. 4 shows an example in which proximity to a vehicle doorway causes different visual indicia, e.g. to indicate different levels of risk, for different sets of luggage 24. In Fig. 4 a train 10 is shown with an open set of doors 16 whilst stationary at a platform 12. Both passengers 14 and luggage 24 are detected in the scene. The system described herein assesses the first set of luggage 24A, accompanying a passenger at a distance from the vehicle doors, as posing no significant current risk. Thus the luggage 24A is not highlighted in the display. In contrast, the two sets of luggage 24B located immediately adjacent the doors 16 are identified as a potential risk of obstruction or a trip hazard and are highlighted, e.g. using colour-coding or the like, in the visual display output. In this instance an alert colour such as amber may be used to shade or outline the luggage indicating the posed level of risk.
Turning now to Fig. 5 a hazard alert is generated for an instance in which an individual has fallen in the vicinity of a vehicle 10. The doors 16 are closed on arrival or departure from the platform 12. A plurality of people 14A standing a safe distance from the platform edge are identified and represented unaltered within the output video feed. However an individual 14B who has fallen in the vicinity of the doors 16 is highlighted as a hazard based on the orientation of the individual and proximity to the platform edge/doors 16. In this instance the fallen individual 14B may be highlighted in red, e.g. for identification by the train driver or other personnel. As well as indicating the potential hazard, the system may also output information to passengers to suggest that an alternative door is used for embarking or alighting the vehicle.
When a hazard situation arises, the system may also send signals in a range of other formats (see the lists above) to a range of personnel of the transport operating organisation.
In a second usage example of the present invention, the intelligent identification/classification process 505 generates real-time counts of specific objects 425 both in the vehicle 100 and optionally the waiting area 300, and supplies these to other systems, for example to the display generation units 109 and 309 on the vehicle 100 and in the waiting area 300 respectively. Using these counts, the display generation units 309 calculate a score representing congestion for each door on the vehicle 100. This may be as simple as the number of people in the proximity of each door (summed on the platform and in the vehicle). Optionally it may be a weighted score based on detecting objects such as luggage, wheelchairs, animals, bicycles, etc, as described above.
By comparing the score against pre-defined reference values, the display generation units 309 generate a visual representation of the predicted congestion at each door. For example this may take the form of an image to be projected onto the platform in the area of each door, or onto the vehicle, for example onto and/or around each door. This may, for example, be a colour-coded wash, using green for low predicted congestion and red for high predicted congestion. It will also be understood in this example that relevant alerts and/or output information can be generated at appropriate points in the communication system and need not be co-located with the image processor.
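The scoring and comparison described above could, for example, be as simple as the following sketch; the object weights and reference values are illustrative assumptions only:

```python
# Illustrative weights for each detected object type near a door (values are assumptions).
WEIGHTS = {"person": 1.0, "luggage": 0.5, "wheelchair": 2.0, "bicycle": 2.0, "animal": 0.5}

def congestion_score(platform_counts, vehicle_counts):
    """Weighted sum of the objects counted near one door, on the platform and inside the vehicle."""
    return sum(WEIGHTS.get(obj, 1.0) * n
               for counts in (platform_counts, vehicle_counts)
               for obj, n in counts.items())

def congestion_colour(score, low=5.0, high=12.0):
    """Map the score onto a traffic-light colour against pre-defined reference values."""
    if score < low:
        return "green"
    if score < high:
        return "amber"
    return "red"

# e.g. congestion_colour(congestion_score({"person": 6, "luggage": 2}, {"person": 4})) -> "amber"
```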
In a third usage example a plurality of lighting elements are fixed to the platform edge, or projectors are directed thereto. Such lights may be off, on, or at an intermediate brightness, and/or may be flashing, and when illuminated may take any one of a range of colours. Such lights may be used for a range of purposes, for example: to indicate where the train doors will arrive (for example on and green); to distinguish between carriages with different classes of service (for example on and yellow for first class doors, green for other class(es)); to indicate whether doors are expected to be congested or not (for example via a traffic-light scale); or to indicate doors that are out of action (i.e. will not open), for example by being off or flashing red.
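A minimal sketch of how such lighting purposes might be encoded, assuming each door is described by a small record; the field names and colour scheme below are illustrative assumptions only:

```python
def light_state(door):
    """Map a door description to an (on, colour, flashing) tuple for the edge lights in front of it."""
    if door.get("out_of_action"):
        return (True, "red", True)        # flashing red: this door will not open
    if door.get("service_class") == "first":
        return (True, "yellow", False)    # distinguish first-class doors
    congestion = door.get("congestion", "low")
    colour = {"low": "green", "medium": "amber", "high": "red"}[congestion]
    return (True, colour, False)          # traffic-light scale for expected congestion

# e.g. light_state({"service_class": "standard", "congestion": "high"}) -> (True, "red", False)
```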
An example is shown in Fig. 6, in which colour-coded regions 26 are designated either on a screen or projected onto the platform itself in the vicinity of the different sets of doors 16. These regions indicate an expected level of congestion at the respective doors, thereby indicating which doors are most appropriate for use by passengers. For example, the less-congested region 26A may be coloured green, whereas the more-congested region 26B may be coloured red.
A variation of this example may comprise moving light patterns along the platform edge, to indicate to waiting passengers how to move to less congested boarding points.
Additionally or alternatively, a display screen on the platform may indicate suitable boarding locations or instructions.
In the example of Fig. 7 a train 10 with multiple sets of closed doors 16 is stationary on arrival at a platform 12. On the platform 12, e.g. towards the platform edge in the vicinity of the train 10, is provided a plurality of light emitting elements 28. The elements 28A spaced from the doors 16 are not illuminated. The elements 28B and 28C in front of, or adjacent to, a door 16 are illuminated, for example so as to indicate the expected level of congestion at the respective door 16. The lights 28B may be illuminated in a green colour, or an amber colour whilst the doors remain closed, to indicate a suitable boarding location. The lights 28C may be illuminated red to indicate unsuitable/congested locations.
In a further example, the intelligent identification/classification process 505 identifies specific objects in the waiting area 300 and determines the appropriateness of their position. For example, bicycles may be identified. If any bicycle is not at a suitable location to board the vehicle 100, the display generation units 309 generate information to assist the person with the bicycle to move to a suitable location to expedite boarding.
In a still further example, the system identifies the class of vehicle(s) at a platform. Optionally this may be done as the vehicle(s) arrive(s). This identification may be performed, for example, by recognising a serial number or by image comparison. Once the vehicle has stopped, the locations and types of its boarding points are determined by lookup and signalled by any of the means discussed above, for example by light patterns.
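The lookup step could be as simple as a table keyed by the recognised vehicle class; the class names, door positions and types below are hypothetical examples rather than data from the embodiment:

```python
# Hypothetical lookup table: vehicle class -> (position in metres from the stopping mark, door type).
BOARDING_POINTS = {
    "class-390": [(5.0, "standard"), (30.0, "standard"), (55.0, "first")],
    "class-158": [(2.0, "standard"), (20.0, "standard")],
}

def boarding_points_for(vehicle_class):
    """Once the vehicle class has been recognised (e.g. from its serial number), return the
    locations and types of its boarding points so they can be signalled on the platform."""
    return BOARDING_POINTS.get(vehicle_class, [])
```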
Still further examples comprise combinations of all or some of the features of the embodiments above. Any visual output of the system may be accompanied by, or replaced by, an audio announcement or alert indicating the identified condition or instructions to passengers.

Claims (28)

CLAIMS:
1. A vehicle passenger monitoring system comprising:
an imaging sensor arranged to capture images of a scene including one or more vehicle access point;
an image processor arranged to receive the output of the imaging sensor, the image processor having an object detector for determining the presence of discrete objects in the scene based on variation of one or more visual parameter within the captured images, an object identifier arranged to determine an object type for each of the detected objects according to one or more determined object feature, and a scene analyser for identifying one or more condition for the imaged scene based on the determined object types; and a signal generator for generating an output comprising information derived from said condition and transmitting said output to a communication system for communication to one or more individual.
2. A system according to claim 1, wherein the output comprises one or more augmented image derived from the images captured by the imaging sensors, wherein the one or more condition is indicated or highlighted in the augmented image.
3. A system according to claim 2, wherein the imaging sensor comprises a video camera and the condition comprises a hazard within the scene, the augmented image comprising an output video stream in which the hazard is tracked.
4. A system according to any preceding claim, wherein the output comprises an indication of the suitability of one or more vehicle access point for boarding or alighting the vehicle.
5. A system according to claim 4, wherein the communication system comprises a light emitter for projecting light onto a surface in the vicinity of the vehicle or vehicle access point or onto a surface of the vehicle itself, said projected light providing a visual indication to passengers.
6. A system according to any preceding claim, wherein the condition comprises a rating for an object or a portion of the scene, the image processor determining a plurality of conditions for the scene and the output comprises a summation of said ratings.
7. A system according to any preceding claim, wherein the condition comprises one or more identified risk or hazard within the scene according to any or any combination of: an object’s geometry, orientation, relative proximity to one or more further object or object type, and/or a count of one or more predetermined object type in the scene.
8. A system according to any preceding claim, wherein the condition comprises the speed and/or direction of movement of an object within the scene, for example relative to one or more further object.
9. A system according to any preceding claim, wherein the object identifier identifies animate and inanimate objects as separate object types based at least in part on object motion in the captured images.
10. A system according to any preceding claim, wherein the condition comprises a sum of the number of objects in the scene and the signal generator generates an output indicative of overcrowding when the number of objects meets or exceeds a predetermined threshold.
11. A system according to any preceding claim, wherein a plurality of imaging sensors are provided, one or more imaging sensor being located on-board the vehicle, e.g. for use in transit, and one or more further imaging sensor being located at or near a static location where it is intended that passengers will board or alight the vehicle.
12. A system according to claim 11, wherein the plurality of sensors capture images of the vehicle access point from both the interior and exterior of the vehicle.
13. A system according to any preceding claim, comprising an imaging sensor mounted to the exterior of the vehicle and arranged to capture images of a scene which includes at least a portion of the vehicle.
14. A system according to any preceding claim, wherein a plurality of vehicle access points are imaged by one or more imaging sensor in one or more scene and a comparison between the conditions for each vehicle access point is made by the image processor or signal generator.
15. A system according to any preceding claim, wherein the image processor operates to identify the one or more condition at a rate less than, or substantially equal to, the rate of receipt of the captured image data by the image processor.
16. A system according to any preceding claim, wherein the communication system comprises a passenger information system.
17. A system according to any preceding claim, wherein the object detector processes image pixel data and the visual parameter comprises any or any combination of colour, brightness, density or the rate of change thereof over an image.
18. A system according to claim 17, wherein the object detector identifies pixel clusters according to a degree of similarity of one or more visual parameters between a first pixel and a plurality of further pixels in the vicinity thereof.
19. A system according to any preceding claim, wherein the object identifier classifies objects by comparison with a plurality of predetermined object type models or templates, each model or template having associated therewith a combination of said parameters, including one or more threshold parameter value.
20. A system according to any preceding claim, wherein the image processor identifies a plurality of points or pixels in the identified objects as object markers and movement of the object is identified by tracking the position of the markers within the received images.
21. A system according to any preceding claim, wherein the scene analyser discounts one or more object or object type.
22. A system according to any preceding claim, wherein the scene analyser identifies one or more object or portion of the scene as a hazard according to object type or a combination of object type with object movement, orientation and/or proximity to another object, such as a platform edge.
23. A system according to any preceding claim, wherein the image processor applies a pixel density estimation process.
24. A system according to claim 23, wherein the image processor applies a filter to the density estimation output, e.g. for determination of object surface texture.
25. A system according to any preceding claim, wherein the image processor comprises a feedback loop between the object identifier or the scene analyser and the object detector, whereby the visual parameters used by the object detector are updated, e.g. within an instance of use, according to the object features that result in positive object type identification for a scene or the object types that result in one or more positive condition determination for a scene.
26. A vehicle and passenger monitoring method corresponding to the system of any one of claims 1 to 25.
27. A data carrier comprising machine readable instructions for the control of one or more processor to perform the image/data processing corresponding to the system of any one of claims 1 to 25 or the method of claim 26.
28. Apparatus for providing information to intending passengers waiting in a waiting area for transport on a vehicle, wherein the apparatus comprises: a plurality of imaging cameras on the vehicle operationally linked to at least one vehicle-based image processing unit so as to provide images of a scene comprising one or more vehicle access point thereto, and a vehicle-based output unit that creates output data from the processed image data and transmits the output data from the vehicle to at least one off-vehicle communication system; wherein the communication system comprises one or more information display for the waiting area and at least one waiting area display generation unit receiving the output data and processing it into a format for communication via the information display.
GB1620099.0A 2016-11-28 2016-11-28 Transport passenger monitoring systems Withdrawn GB2556942A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1620099.0A GB2556942A (en) 2016-11-28 2016-11-28 Transport passenger monitoring systems
PCT/GB2017/053586 WO2018096371A1 (en) 2016-11-28 2017-11-28 Passenger transport monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1620099.0A GB2556942A (en) 2016-11-28 2016-11-28 Transport passenger monitoring systems

Publications (2)

Publication Number Publication Date
GB201620099D0 GB201620099D0 (en) 2017-01-11
GB2556942A true GB2556942A (en) 2018-06-13

Family

ID=58073537

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1620099.0A Withdrawn GB2556942A (en) 2016-11-28 2016-11-28 Transport passenger monitoring systems

Country Status (2)

Country Link
GB (1) GB2556942A (en)
WO (1) WO2018096371A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110769175B (en) 2018-07-27 2022-08-09 华为技术有限公司 Intelligent analysis system, method and device
WO2020045166A1 (en) * 2018-08-27 2020-03-05 株式会社日立国際電気 Image display system and image display method
JP7020560B2 (en) 2018-08-30 2022-02-16 日本電気株式会社 Notification device, notification control device, notification system, notification method and program
CN110395271B (en) * 2019-07-26 2020-06-26 中国安全生产科学研究院 Rail transit platform shielding door system and using method thereof
WO2021026855A1 (en) * 2019-08-15 2021-02-18 深圳市大疆创新科技有限公司 Machine vision-based image processing method and device
DE102020201309A1 (en) * 2020-02-04 2021-08-05 Siemens Mobility GmbH Method and system for monitoring a means of transport environment
CN113256924A (en) * 2020-02-12 2021-08-13 中车唐山机车车辆有限公司 Monitoring system, monitoring method and monitoring device for rail train
IT202000007789A1 (en) * 2020-04-14 2021-10-14 Sunland Optics Srl Visual system for automatic management of entry into commercial establishments and / or public offices or offices open to the public
CN111619614B (en) * 2020-06-05 2022-05-27 上海应用技术大学 System and method for monitoring and dredging passenger crowding in carriage
WO2022177567A1 (en) * 2021-02-18 2022-08-25 Hitachi America, Ltd. Dependability assessment framework for railroad asset management
CN113495009B (en) * 2021-05-24 2022-11-04 柳州龙燊汽车部件有限公司 Quality detection method and system for matching manufacturing of carriage
IT202200016290A1 (en) * 2022-08-01 2024-02-01 FER CONSULTING Srl Device to assist the driving of rolling stock, particularly when carrying out manoeuvring activities
EP4365057A1 (en) * 2022-11-07 2024-05-08 Hitachi Rail STS S.p.A. Vehicle for public transport of passengers, in particular railway vehicle, provided with a safety system for road crossing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5176082A (en) * 1991-04-18 1993-01-05 Chun Joong H Subway passenger loading control system
US20080106599A1 (en) * 2005-11-23 2008-05-08 Object Video, Inc. Object density estimation in video
EP2093698A1 (en) * 2008-02-19 2009-08-26 British Telecommunications Public Limited Company Crowd congestion analysis
EP2423708A1 (en) * 2010-08-31 2012-02-29 Faiveley Transport System and method for detecting a target object
US20160125248A1 (en) * 2014-11-05 2016-05-05 Foundation Of Soongsil University-Industry Cooperation Method and service server for providing passenger density information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004047258A1 (en) * 2004-09-24 2006-04-13 Siemens Ag Device for determining the utilization of vehicles
US8195598B2 (en) * 2007-11-16 2012-06-05 Agilence, Inc. Method of and system for hierarchical human/crowd behavior detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Velastin, Sergio A., Boghos A. Boghossian, and Maria Alicia Vicencio-Silva. "A motion-based image processing system for detecting potentially dangerous situations in underground railway stations." Transportation Research Part C: Emerging Technologies 14.2 (2006): 96-113 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018006405A1 (en) * 2018-08-14 2020-02-20 Arkadiy Zolotarov Information system for passengers
EP3674171A1 (en) * 2018-12-26 2020-07-01 Nabtesco Corporation Door control device
US20200208465A1 (en) * 2018-12-26 2020-07-02 Nabtesco Corporation Door control device
CN111376935A (en) * 2018-12-26 2020-07-07 纳博特斯克有限公司 Door control device
TWI718798B (en) * 2018-12-26 2021-02-11 日商納博特斯克股份有限公司 Door control device
EP3936408A4 (en) * 2019-03-04 2022-05-11 Hitachi Kokusai Electric Inc. Train monitoring system
US11414110B2 (en) 2019-03-04 2022-08-16 Hitachi Kokusai Electric Inc. Train monitoring system
EP4281935A4 (en) * 2021-08-19 2024-08-07 Samsung Electronics Co Ltd Method and system for generating an animation from a static image
US12094044B2 (en) 2021-08-19 2024-09-17 Samsung Electronics Co., Ltd. Method and system for generating an animation from a static image
WO2023222475A1 (en) * 2022-05-19 2023-11-23 Siemens Mobility GmbH Creating and issuing exit information in a rail vehicle

Also Published As

Publication number Publication date
GB201620099D0 (en) 2017-01-11
WO2018096371A1 (en) 2018-05-31

Similar Documents

Publication Publication Date Title
GB2556942A (en) Transport passenger monitoring systems
JP6829165B2 (en) Monitoring system and monitoring method
Zhang et al. Automated detection of grade-crossing-trespassing near misses based on computer vision analysis of surveillance video data
Bonnin et al. Pedestrian crossing prediction using multiple context-based models
KR102453627B1 (en) Deep Learning based Traffic Flow Analysis Method and System
CN108289203B (en) Video monitoring system for rail transit
KR101745551B1 (en) Apparatus and method for monitoring overspeed-vehicle using CCTV image information
CN112907981B (en) Shunting device for shunting traffic jam vehicles at intersection and control method thereof
WO2020024552A1 (en) Road safety monitoring method and system, and computer-readable storage medium
CN110188644A (en) A kind of staircase passenger's hazardous act monitoring system and method for view-based access control model analysis
CN111517204A (en) Escalator safety monitoring method, device, equipment and readable storage medium
Sabnis et al. A novel object detection system for improving safety at unmanned railway crossings
CN111178286A (en) Attitude trajectory prediction method and device and electronic equipment
CN102254401A (en) Intelligent analyzing method for passenger flow motion
Hwang et al. Hierarchical probabilistic network-based system for traffic accident detection at intersections
CN111062238A (en) Escalator flow monitoring method and system based on human skeleton information and multi-target tracking
Sheikh et al. Visual monitoring of railroad grade crossing
KR102109648B1 (en) System for detecting free riding of public transportation using face recognition of deep learning and method thereof
CN112382068A (en) Station waiting line crossing detection system based on BIM and DNN
CN111055890A (en) Intelligent detection method and detection system for railway vehicle anti-slip
Dharmadasa et al. Video-based road accident detection on highways: A less complex YOLOv5 approach
Akikawa et al. Smartphone-based risky traffic situation detection and classification
Liu et al. Intelligent video analysis system for railway station
KR101936004B1 (en) Intelligent pedestrian detecting method
JP2004287605A (en) Determining device, situation determining system, method and program

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)