WO2017196515A1 - Detection and counting of pedestrians at a traffic intersection based on the location of vehicle zones


Info

Publication number
WO2017196515A1
Authority
WO
WIPO (PCT)
Prior art keywords
pedestrian
view
zone
field
traffic
Prior art date
Application number
PCT/US2017/028662
Other languages
English (en)
Inventor
Michael Whiting
Yan Gao
Dilip SWAMINATHAN
Shashank Shivakumar
Robert Hwang
Todd KRETER
Original Assignee
Iteris, Inc.
Priority date
Filing date
Publication date
Priority claimed from US15/150,267 (US9460613B1)
Priority claimed from US15/150,258 (US9449506B1)
Priority claimed from US15/150,280 (US9607402B1)
Application filed by Iteris, Inc.
Publication of WO2017196515A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G06V40/25 - Recognition of walking or running movements, e.g. gait recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 - Summing image-intensity values; Histogram projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • the present invention relates to the field of traffic detection. Specifically, the present invention relates to determining a region in a field of view of the traffic detection zone used by one or more pedestrians, and identifying and counting pedestrians in a traffic detection zone for intersection traffic control.
  • Sensors such as magnetometers may be placed either in the road itself, at the side of a roadway, or positioned higher above traffic to observe and detect vehicles in a desired area.
  • Each type of sensor provides information used to determine a presence of vehicles in specific traffic lanes, to provide information for proper actuation of traffic signals.
  • Outputs are sent to external devices or locations for use or storage, such as for example to a traffic signal controller, which performs control and timing functions based on the information provided. These outputs also provide traffic planners and engineers with information on the volume of traffic at key points in a traffic network. This information is important for comparing volumes over periods of time to help with accurate adjustment of signal timing and managing traffic flow.
  • Current systems and methods of traffic detection provide data that results only from a count of the total number of vehicles, which may or may not include bicycles or other road users, and there is therefore no way of differentiating between different types of vehicles. As the need for modified signal timing to accommodate bicyclists, pedestrians and others becomes more critical for proper traffic management, a method for separating the count of all modes of use on a thoroughfare is needed to improve the ability to accurately manage traffic environments.
  • Traffic planners and engineers require data on the volume of pedestrian traffic at key points in a traffic network. This data is important for comparing volumes over periods of time to help with accurate adjustment of signal timing. No current method for automatic count and data collection for pedestrian activity exists in a traffic detection system. As the need for modified signal timing to accommodate roadway users such as pedestrians becomes more critical for proper traffic management, a method for accurately identifying and counting pedestrians using a roadway intersection would greatly improve the ability to efficiently manage traffic environments.
  • Yet another objective of the present invention is to automatically calibrate a traffic detection system by calculating pedestrian speed in a field of view for improved traffic intersection control.
  • a further objective is to provide a system and method of identifying pedestrian incidents in a traffic detection zone, and triggering an alarm based on pedestrian incidents. It is still a further objective of the present invention to combine vehicle detection, bicycle detection, and pedestrian detection in a whole scene analysis of a field of view for traffic intersection control.
  • the present invention provides systems and methods of identifying a presence, volume, velocity and trajectory of pedestrians in a region of interest in a field of view of a traffic detection zone. These systems and methods present an approach to traffic intersection control that includes, in one embodiment, both identification of a pedestrian detection zone in the field of view and identification of individual pedestrians in the pedestrian detection zone. This approach, styled as a pedestrian zone detection, identification and counting framework, enables improved pedestrian counting in the pedestrian detection zone, and increased accuracy in various aspects of roadway management.
  • Identification of individual pedestrians in the pedestrian detection zone in the present invention is performed, in one embodiment, by comparing a part-based object recognition analysis with a model of a single walking pedestrian to differentiate individual pedestrians from groups of moving pedestrians. Such a comparison analyzes image characteristics to separate groups of pedestrians for improved count accuracy.
  • the present invention also includes calibration of pedestrian speed in traffic intersection control.
  • In this embodiment, the pedestrian zone detection, identification and counting framework locates a region of interest based on locations of intersection or pavement markings and lane structures, such as a stop bar and lane lines, and computes features of an image inside the region of interest to calculate the pedestrian speed.
  • the present invention also includes incident detection in traffic intersection control.
  • the pedestrian zone detection, identification and counting framework learns a background of the pedestrian detection zone, and looks for changes in the background to identify non-moving objects such as prone objects or pedestrians, or unauthorized vehicles. Identification of such non-moving objects initiates an alarm for responsible authorities to improve emergency response and efficient intersection performance.
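The background-learning and incident-alarm idea above can be sketched in simplified form. This is an illustrative sketch only, not the patented implementation: the learning rate, difference threshold, and persistence window are assumed values, and a real system operates on 2-D images rather than the 1-D pixel rows used here.

```python
# Illustrative sketch (assumed constants, 1-D pixel rows for brevity): learn a
# per-pixel background via an exponential moving average, then flag pixels that
# stay different from the background across many consecutive frames -- a
# persistent, non-moving change such as a prone pedestrian or a stalled vehicle.
ALPHA = 0.05          # background learning rate (assumed value)
DIFF_THRESHOLD = 30   # intensity difference treated as "changed" (assumed)
PERSIST_FRAMES = 5    # frames a change must persist to raise an alarm (assumed)

def update_background(background, frame, alpha=ALPHA):
    """Blend the new frame into the running background estimate."""
    return [b * (1 - alpha) + f * alpha for b, f in zip(background, frame)]

def detect_incident(background, frames):
    """Return True if any pixel differs from the background in every recent frame."""
    persistent = [True] * len(background)
    for frame in frames[-PERSIST_FRAMES:]:
        for i, (b, f) in enumerate(zip(background, frame)):
            if abs(f - b) < DIFF_THRESHOLD:
                persistent[i] = False
    return any(persistent)

# Example: a flat background; a bright object appears at pixel 2 and never moves.
bg = [10.0] * 5
frames = [[10, 10, 200, 10, 10]] * PERSIST_FRAMES
print(detect_incident(bg, frames))  # True
```

A transient change (an object that passes through) clears the `persistent` flags and raises no alarm, matching the text's focus on non-moving objects.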
  • FIG. 1 is a system diagram for pedestrian zone detection, identification and counting according to one aspect of the present invention.
  • FIG. 2 is a flowchart of steps performed for pedestrian zone detection, identification and counting according to one aspect of the present invention.
  • FIG. 3 is a flowchart of steps performed for calibrating a pedestrian speed in pedestrian zone detection, identification and counting according to another aspect of the present invention.
  • FIG. 4 is a flowchart of steps performed for incident detection in the pedestrian zone detection and pedestrian identification and counting according to one embodiment of the present invention.
  • FIG. 5 is an exemplary representation of a field of view in a traffic detection zone, showing in particular a region of interest for pedestrian detection according to the present invention.
  • FIG. 6 is another exemplary representation of a field of view in a traffic detection zone, showing vehicular, bicycle, and pedestrian detection zones according to one embodiment of the present invention.
  • FIG. 7 is a further exemplary representation of a field of view in a traffic detection zone, showing accumulated tracks of vehicles and pedestrians according to one embodiment of the present invention.
  • FIG. 1 is a system diagram illustrating elements of a pedestrian tracking and counting framework 100, according to one aspect of the present invention.
  • the pedestrian tracking and counting framework 100 is performed within one or more systems and/or methods that include several components, each of which defines distinct activities for defining an area used by pedestrians 102 in a traffic detection zone 114, and accurately counting pedestrians 102 in such an area, for traffic intersection control.
  • FIGS. 5-7 are exemplary screenshot images 111 of a field of view 112 in a traffic detection zone 114.
  • a region of interest 103 is highlighted for a pedestrian detection zone 104, and pedestrians 102 are shown present therein.
  • the pedestrian detection zone 104 is shown below user-drawn vehicular and bicycle detection zones 105 in the field of view 112.
  • arrows indicating accumulated tracks 106 of moving objects 107 are shown therein, as are pedestrian tracks 108 in the region of interest 103.
  • FIGS. 5-7 also show standard intersection roadway markings and lane structures 109.
  • the pedestrian tracking and counting framework 100 ingests, receives, requests, or otherwise obtains input data 110 that represents a field of view 112 of the traffic detection zone 114.
  • Input data 110 is collected from the one or more sensors 120, which may be positioned in or near a roadway area for which the traffic detection zone 114 is identified and drawn.
  • the one or more sensors 120 include video systems 121 such as cameras, thermal cameras, radar systems 122, magnetometers 123, acoustic sensors 124, and any other devices or systems 125 which are capable of detecting a presence of objects within a traffic intersection environment.
  • the input data 110 is applied to a plurality of data processing components 140 within a computing environment 130 that also includes one or more processors 132, a plurality of software and hardware components, and one or more modular software and hardware packages configured to perform specific processing functions.
  • the one or more processors 132, plurality of software and hardware components, and one or more modular software and hardware packages are configured to execute program instructions to perform algorithms for various functions within the pedestrian tracking and counting framework 100 that are described in detail herein, and embodied in the one or more data processing modules 140.
  • the plurality of data processing components 140 include a data ingest component 141 configured to ingest, receive, request, or otherwise obtain input data 110 as noted above, and a pedestrian zone detection and counting initialization component 142 configured to initialize the pedestrian tracking and counting framework 100 and retrieve input data 110 for performing the various functions of the present invention.
  • the plurality of data processing modules 140 also include a pedestrian zone identification component 143, an image processing and pedestrian detection learning component 144, a speed calibration component 145, an incident detection component 146, and a counting component 147.
  • Output data 180 may include a pedestrian count, generated by the counting component 147 according to one or more embodiments of the present invention.
  • Output data 180 may also include a calibrated pedestrian speed, generated by the speed calibration component 145 according to another embodiment of the present invention.
  • Output data 180 may further include an alarm indicating an incident detected in a pedestrian detection zone 104, generated by the incident detection component 146 according to still another embodiment of the present invention.
  • Output data 180 may also be provided for additional analytics and processing in one or more third party or external applications 190. These may include a traffic management tool 191, a zone and lane analysis component 192, a traffic management system 193, and a signal controller 194.
  • the pedestrian zone identification component 143 is configured to define a pedestrian detection zone 104 in the field of view 112 of the traffic detection zone 114 for subsequent counting of pedestrians 102 therein. Different analytical approaches 160 may be applied to achieve this determination. In one embodiment, the pedestrian zone identification component 143 applies a zone position analysis 161 that determines the pedestrian detection zone 104 based on locations of one or more of vehicle and bicycle detection zones 105 in the field of view 112.
  • Vehicle and bicycle detection zones 105 are typically drawn in various places in the field of view 112 depending on user requirements. In most situations, the user requires detection at or near the stop bar. Detection zones 105 are usually drawn above the stop bar, and an algorithm is applied to identify the detection zones 105 nearest to the stop bar. An area comprising a pedestrian strip is created up to the top line of these zones 105, extending from the left to right edge of the field of view 112 below the top lines of the zones 105. The pedestrian strip height is determined by a calculation of the vehicle and bicycle zone heights, and may be extended to cover a larger area that is more likely to be used by all pedestrians 102.
  • the zone position analysis 161 therefore accomplishes defining a pedestrian detection zone 104 by identifying a position of at least one vehicle detection zone 105 and at least one bicycle detection zone 105 in nearest proximity to a stop bar, with each of the at least one vehicle detection zone 105 and the at least one bicycle detection zone 105 having a height that extends to or near to the stop bar.
  • the zone position analysis 161 calculates a height of a pedestrian strip in the field of view 112 from the height of the at least one vehicle detection zone 105 and the height of the at least one bicycle detection zone 105, and extends a length of the pedestrian strip to a leftmost edge of the field of view 112, and a rightmost edge of the field of view 112.
  • the zone position analysis 161 may also extend the height of the pedestrian strip into a portion of the at least one vehicle detection zone 105 and into a portion of the at least one bicycle detection zone 105.
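The zone position analysis can be sketched as simple rectangle geometry. The function name, the rectangle convention, and the overlap fraction below are illustrative assumptions; the patent does not specify an exact height formula.

```python
# Illustrative sketch (names and the height heuristic are assumptions, not the
# patented formula): derive a full-width pedestrian strip from vehicle/bicycle
# detection-zone rectangles. Image coordinates: y grows downward, so a zone's
# "top line" is its smallest y value.
def pedestrian_strip(zones, image_width, overlap_fraction=0.25):
    """zones: list of (left, top, right, bottom) detection-zone rectangles.

    Returns a (left, top, right, bottom) strip spanning the whole image width,
    starting at the zones' top lines and whose height is computed from the
    zone heights, extended by overlap_fraction up into the zones themselves,
    as the text describes.
    """
    top_lines = [top for (_, top, _, _) in zones]
    heights = [bottom - top for (_, top, _, bottom) in zones]
    avg_height = sum(heights) / len(heights)
    strip_top = min(top_lines)                     # top line of the zones
    strip_bottom = strip_top + avg_height * (1 + overlap_fraction)
    return (0, strip_top, image_width, strip_bottom)

# Example: two zones drawn at/above the stop bar in a 640-pixel-wide view.
zones = [(100, 200, 220, 300), (240, 210, 360, 300)]
print(pedestrian_strip(zones, 640))  # (0, 200, 640, 318.75)
```

Note the strip's left and right edges are pinned to the image edges, matching the text's "leftmost edge ... rightmost edge of the field of view".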
  • the pedestrian zone identification component 143 applies an object movement analysis 162 that determines the pedestrian detection zone 104 based on movement of various objects within the field of view 112, such as vehicles, bicycles, and other pedestrians 102.
  • This analysis 162 does not rely upon any other data, such as the locations of vehicle and bicycle detection zones 105 in the field of view 112, or user drawing of such zones 105.
  • the object movement analysis 162 determines the area of the field of view 112 where pedestrians 102 typically enter the roadway, by identifying and differentiating pedestrians 102 from other roadway users and tracking their position as they move through the field of view 112.
  • Standard intersection roadway markings and lane structures 109 may also be used to identify areas where pedestrians 102 should be traveling.
  • once the pedestrian zone identification component 143 identifies normal pedestrian tracks 108 in the field of view 112, a boundary box is created, and the area can then be used to collect additional data from various analytics, such as determining count, speed, trajectory, and grouping of pedestrians 102.
  • the pedestrian zone identification component 143 obtains accumulated tracks 106 of moving objects 107 in the field of view 112. This enables refining the boundary of the pedestrian detection zone 104, as well as other detection zones 105.
  • the object movement analysis 162 therefore accomplishes defining a pedestrian detection zone 104 by ascertaining a region of interest 103 in the field of view 112 for pedestrian tracks 108, based on at least one of lane structures and intersection road markings 109 and movement of pixels representing moving objects 107 relative to those lane structures and intersection road markings 109. Accumulated tracks 106 of moving objects 107 are determined in the field of view 112 by analyzing motion strength and frequency of activity of each pixel.
  • the present invention also tracks pedestrian characteristics in the region of interest 103 to distinguish the accumulated tracks 106 of the moving objects 107 from the pedestrian tracks 108.
  • Analyzing motion strength of pixels in the object movement analysis 162 may include computing a binary thresholded image defining a histogram of oriented gradient features that further define a pedestrian contour, and updating the histogram as pixel activity occurs in the changing image.
  • Analyzing a frequency of pixel activity may include computing an activity frequency threshold and finding accumulated tracks 106 from pixel frequency activity.
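The per-pixel activity-frequency idea can be sketched as follows. The motion threshold, the frequency fraction, and the 1-D pixel rows are illustrative assumptions standing in for the binary thresholded images the text describes.

```python
# Illustrative sketch (assumed threshold and frequency fraction): accumulate
# per-pixel activity across frame-to-frame differences, then keep the pixels
# whose activity frequency exceeds a threshold -- those high-frequency pixels
# form the accumulated tracks of moving objects.
MOTION_THRESHOLD = 25   # frame-difference magnitude counted as motion (assumed)

def accumulated_tracks(frames, freq_fraction=0.5):
    """frames: list of equal-length pixel rows (grayscale intensities).

    Returns the set of pixel indices active in at least freq_fraction of the
    frame-to-frame differences.
    """
    n_pixels = len(frames[0])
    activity = [0] * n_pixels
    for prev, curr in zip(frames, frames[1:]):
        for i in range(n_pixels):
            if abs(curr[i] - prev[i]) >= MOTION_THRESHOLD:
                activity[i] += 1
    n_diffs = len(frames) - 1
    return {i for i, count in enumerate(activity) if count >= freq_fraction * n_diffs}

# Example: pixel 1 flickers in every frame (a track); pixel 3 changes once (noise).
frames = [[0, 0, 0, 0], [0, 100, 0, 50], [0, 0, 0, 50], [0, 100, 0, 50]]
print(accumulated_tracks(frames))  # {1}
```

In a full system, each pixel index would be a 2-D coordinate and the surviving pixels would be grouped into track regions.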
  • the image processing and pedestrian detection learning component 144 is configured to detect one or more pedestrians 102 in the pedestrian zone 104 from similarities of a single walking pedestrian model with part-based object recognition of individual pedestrians 102, and increment a count for the counting module 147. Multiple analytical approaches 170 may therefore be applied to detect the one or more pedestrians 102 for the counting module 147.
  • the image processing and pedestrian detection learning component 144 applies a part-based object recognition analysis 171 and image analysis using a histogram of oriented gradient features 172 to develop a model 173 of the single walking pedestrian.
  • the image processing and pedestrian detection learning component 144 applies these analytical approaches 170 by, in one aspect of the present invention, analyzing portions of the field of view 112 by moving a sliding window through the pedestrian detection zone 104 in the field of view 112, and computing features of current pixel content identified in the sliding window by identifying part-based features that define an individual pedestrian 102.
  • the part-based features include one or more of body structure combinations, body shape, body width and walking gestures.
  • the image processing and pedestrian detection learning component 144 also determines a width and a height of one or more object parts, compares body structure combinations with one or more predetermined templates, and applies one or more geometric constraints to separate the part-based features.
  • the image processing and pedestrian detection learning component 144 then proceeds with developing the model 173 of a single walking pedestrian to separate each individual pedestrian in a group of moving pedestrians in the field of view 112. This is accomplished by computing a histogram of oriented gradient pedestrian features 172 based on pixels defining a pedestrian contour.
  • the image processing and pedestrian detection learning component 144 next determines a matching confidence between an individual pedestrian and a group of moving pedestrians by calculating a mathematical similarity between the computed features of current pixel content and the model of the single walking pedestrian 173. Where a matching confidence is high, this indicates that an individual pedestrian has been identified, and the present invention increments a pedestrian count in the counting component 147. Where a matching confidence is low, the present invention analyzes the next portion of the field of view 112 by moving the sliding window to the next position in the field of view 112 for further image processing.
  • FIG. 2 is a flowchart illustrating steps in a process 200 for performing the pedestrian tracking and counting framework 100, according to certain embodiments of the present invention.
  • Such a process 200 may include one or more algorithms for pedestrian zone identification within the component 143, and for image processing and pedestrian detection learning within the component 144, and for the various analytical approaches applied within each such component.
  • Pedestrian zone identification and counting in the process 200 are initialized at step 210 by retrieving input data 110 representing a field of view 112 for a traffic detection zone 114. The process 200 then detects and defines the pedestrian zone 104, using one of the analytical approaches 160, in either step 220 or 230.
  • the process 200 determines and defines a pedestrian zone 104 using existing positions of one or more vehicle and bicycle detection zones 105 in the traffic detection zone 114. These zones, as noted above, may be either manually drawn by users or automatically determined, and the process at step 220 proceeds by identifying a position of at least one of the vehicle detection zones and bicycle detection zones 105 in nearest proximity to a stop bar, and calculating a height of a pedestrian strip in the field of view 112 from the height of the vehicle detection zone(s) 105 and the height of the bicycle detection zone(s) 105. It should be noted that the process 200 does not require both vehicle detection zones and bicycle detection zones 105, and therefore the pedestrian zone 104 may be calculated using one or both of these types of zones 105. Additionally, one or more of each zone may be used to determine and define the pedestrian zone 104 according to this embodiment of the present invention.
  • the process 200 applies the analytical approach 162 to determine and define pedestrian zones 104 at step 230, using movement of one or more objects 107 in the field of view 112.
  • this approach 162 ascertains a region of interest 103 in the field of view 112 for pedestrian tracks 108, based on lane structures and intersection road markings 109 and movement of pixels representing moving objects 107.
  • the process 200 identifies a region of interest 103 in the form of a pedestrian detection zone 104 for further processing of images to detect and count pedestrians 102.
  • pixels in the region of interest 103 are processed to analyze pixel content, using a combination of analytical approaches 170 that examine characteristics of a person to separate groups of people and improve count accuracy.
  • One such analytical approach 170 is a part-based object recognition approach 171, which identifies an individual person from a group by using local features, which are less affected by occlusion than global features.
  • a single object, in this case a human pedestrian 102, can be thought of as having many individual parts, such as a head, arms, torso, and legs, and each of those parts can be assigned a standard representative pixel size. Identification of these parts, and the relationship between them, can be used to recognize a person from a group, even if partly occluded.
  • the head can be approximated as a circular shaped feature, and the shoulders may be approximated as an arc in the image, using, for example, an edge feature space technique.
  • predetermined templates may be used to identify these parts using template matching techniques, such as for example edge intensity template matching.
  • Geometric constraints on the relationship between the parts may also be applied. For example, a constraint that the head cannot be next to the torso may be used to remove false matches. Additionally, other techniques such as boosted cascade classifiers with edgelet features may be applied to learn part detection.
  • parts can include full body, head, torso, shoulder, legs, head-shoulders and many other combinations of such parts.
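The geometric-constraint filtering can be sketched as follows. The constraint values and part positions are illustrative assumptions; the text's example (a head cannot be next to a torso) is encoded as a vertical-ordering and horizontal-alignment check.

```python
# Illustrative sketch (constraint values are assumptions): filter candidate
# part detections with simple geometric constraints, e.g. a head must sit
# above -- not beside -- its torso, as the text's false-match example notes.
def plausible_pedestrian(parts, max_horizontal_offset=10):
    """parts: dict mapping part name to its (x, y) image position (y grows down).

    Returns True only if head, torso and legs are stacked vertically with
    roughly the same horizontal position.
    """
    head, torso, legs = parts["head"], parts["torso"], parts["legs"]
    vertically_ordered = head[1] < torso[1] < legs[1]
    aligned = (abs(head[0] - torso[0]) <= max_horizontal_offset
               and abs(torso[0] - legs[0]) <= max_horizontal_offset)
    return vertically_ordered and aligned

good = {"head": (50, 20), "torso": (52, 60), "legs": (51, 110)}
bad = {"head": (120, 60), "torso": (52, 60), "legs": (51, 110)}  # head beside torso
print(plausible_pedestrian(good), plausible_pedestrian(bad))  # True False
```

A full part-based detector would apply such checks to every candidate combination of detected parts to discard false matches before counting.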
  • Another analytical approach 170 employed at step 250 is to develop a model 173 of a single walking pedestrian using a histogram of oriented gradient features 172. Because pedestrians 102 often travel in groups, the ability to count pedestrians 102 accurately may degrade. The present invention therefore uses various characteristics of pedestrians, such as height, width, body shape, head shape, speed and location, to separate each individual that may be in a group.
  • the process 200 creates a complex model 173 for the single walking pedestrian, based on all the 'single walking pedestrians' that have been identified.
  • the model 173 therefore continually evolves as more data is collected within the present invention.
  • the single walking pedestrian model 173 comprises histogram of oriented gradient (HoG) features 172 that include head-torso-leg body structure, body shape, body width, walking gestures, and others to define a pedestrian contour.
  • the process 200 computes this by analyzing portions of the field of view 112 in a sliding window that moves through grouped pedestrians in the image 111 to separate individual pedestrians from the grouped pedestrians, based on the matching confidence between the single walking pedestrian model 173 and the computed features of the current content in the portions of the field of view 112 in the sliding window.
  • the matching confidence is the mathematical interpretation of the similarity between the single walking pedestrian model 173 and the computed features of the current content of the sliding window.
  • if the matching confidence is high, the process 200 concludes that a single walking pedestrian is found. If it is low, the analysis proceeds to the next portion of the field of view 112 by moving the sliding window to the next position and performing the comparison again, until it reaches the end of the grouped pedestrians.
  • pedestrian detection using a HoG approach 172 and a single walking pedestrian model 173 in step 250 therefore takes an image 111 from input data 110, and may create a multi-scale image pyramid as the process 200 slides a moving window through the image to compute HoG features.
  • the process 200 may also apply one or more statistical classifiers, such as an SVM, to detect a pedestrian using these HoG features.
  • the process 200 learns by fusing results of these statistical classifiers across all portions of the field of view 112 in sliding window positions and different image scales, and develops the model 173 to detect pedestrians 102.
  • the pedestrian speed calibration component 145 is configured to calibrate a pedestrian speed with a region of interest 103 in the field of view 112 for more accurate detection and counting of pedestrians 102 in traffic intersection control. It is to be noted that pedestrian speed calibration may be performed manually by a user or automatically using one or more image processing steps as discussed below.
  • the pedestrian speed calibration component 145 performs automatic calibration of pedestrian speed with a region of interest 103 in the field of view 112 through a transformation of image pixels to actual distance traveled of a pedestrian 102 in the image. Because of the constant possibility of movement of sensors 120 such as cameras, and other changes such as focal length in the case of video cameras, the pedestrian speed calibration component 145 attempts a recalibration.
  • the pedestrian speed calibration component 145 uses the intersection pavement markings and lane structures 109 to determine the speed at which a pedestrian 102 is moving in the field of view 112. Based on the position of vehicle and bicycle detection zones 105 in the field of view 112, the present invention detects the horizontal stop bar and lane lines to locate the stop bar.
  • a stop bar finding algorithm may also be applied to identify one or more horizontally straight white lines in an image, by finding a peak in the horizontal projection.
  • the layout of the traffic detection zone 114 may also be used to find the stop bar, as the bottom zones of each lane are typically close to the stop bar. Once the stop bar is found, the present invention attempts to find lane lines which intersect with the stop bar.
  • Zone coordinates are also utilized to find the most vertically oriented lane lines, either to the left or to the right of a vehicle detection zone 105.
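The stop-bar finding step can be sketched as a horizontal projection peak search. The white-pixel threshold and the toy image are illustrative assumptions; a deployed system would also verify line straightness and use zone layout as the text describes.

```python
# Illustrative sketch (assumed threshold): find the stop bar row by summing
# bright pixels along each image row (the horizontal projection) and taking
# the peak -- a horizontal white line yields one row of high intensity.
WHITE_THRESHOLD = 200   # grayscale value treated as "white paint" (assumed)

def find_stop_bar_row(image):
    """image: list of rows of grayscale pixel values.

    Returns the index of the row whose horizontal projection (count of
    white pixels) is largest.
    """
    projection = [sum(1 for px in row if px >= WHITE_THRESHOLD) for row in image]
    return max(range(len(projection)), key=projection.__getitem__)

# Example: a 5x6 image with a solid white line on row 3 and one stray bright pixel.
image = [
    [10, 10, 10, 10, 10, 10],
    [10, 220, 10, 10, 10, 10],       # stray bright pixel, not a line
    [10, 10, 10, 10, 10, 10],
    [230, 240, 235, 250, 245, 230],  # the stop bar
    [10, 10, 10, 10, 10, 10],
]
print(find_stop_bar_row(image))  # 3
```

Lane lines intersecting the detected stop bar row can then be sought with an analogous vertical projection.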
  • the pedestrian speed calibration component 145 therefore performs automatic calibration of pedestrian speed with a region of interest 103 in the field of view 112 by initially identifying a location of one or more of a stop bar and lane lines in the field of view 112, and determining an intersection of the lane lines with the stop bar to develop coordinates of the region of interest 103.
  • the pedestrian speed calibration component 145 also identifies a vertical orientation of the lane lines relative to the stop bar.
  • the pedestrian speed calibration component 145 then computes features of an image 111 inside the region of interest 103 to differentiate between image pixels.
  • Features analyzed may include edge gradients, thresholded grayscale pixels, and feature projections.
  • the present invention measures an inter-lane distance between the image pixels using a known lane line width and the vertical orientation of the lane lines relative to the stop bar to map the image pixels to an actual distance traveled of a pedestrian 102 in the region of interest 103.
  • using this measurement and mapping, the pedestrian speed calibration component 145 calculates a pedestrian speed from the actual distance traveled that is calibrated with the region of interest 103.
  • the calculation includes computing the number of feet or meters traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by the pedestrian 102.
  • the calibrated pedestrian speed may then be provided to a traffic controller system as output data 180, or other external devices or location for storage or use.
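The pixel-to-distance mapping and speed calculation above can be sketched as follows. The 3.6 m lane width and the simple linear scale are illustrative assumptions; the actual calibration would account for perspective distortion in the field of view:

```python
ASSUMED_LANE_WIDTH_M = 3.6  # illustrative standard lane width

def pixels_per_meter(left_line_px, right_line_px,
                     lane_width_m=ASSUMED_LANE_WIDTH_M):
    """Derive an image scale from the measured inter-lane pixel distance."""
    return abs(right_line_px - left_line_px) / lane_width_m

def pedestrian_speed_mps(pixels_traveled, elapsed_s, scale_px_per_m):
    """Map pixels traveled per unit time to meters per second."""
    return (pixels_traveled / scale_px_per_m) / elapsed_s

scale = pixels_per_meter(100, 280)            # 180 px across 3.6 m -> 50 px/m
speed = pedestrian_speed_mps(75, 1.0, scale)  # ~1.5 m/s, a typical walking pace
```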
  • FIG. 3 is a flowchart illustrating steps in a process 300 for performing the calibration of pedestrian speed in the pedestrian tracking and counting framework 100, according to another embodiment of the present invention.
  • Such a process 300 may include one or more algorithms for pedestrian speed calibration in a region of interest 103 within the component 145.
  • the process 300 is initialized at step 310 by retrieving input data 110 representing a field of view 112 in the traffic detection zone 114.
  • the process 300 analyzes this input data 110 to ascertain, at step 320, a region of interest 103 in which pedestrians 102 may use the roadway within the traffic detection zone 114.
  • the region of interest 103 may or may not be the specific pedestrian detection zone 104 referenced above with respect to other aspects of the present invention.
  • the process 300 determines the region of interest 103 using one or more of pavement or intersection markings and lane structures 109, positions of other detection zones 105, movement of objects 107 in the field of view 112, or some combination of these approaches.
  • the process 300 attempts to identify positions of both a stop bar and lane lines for vehicles and bicycles in the region of interest 103. Using this information, the process 300 develops positional coordinates of the region of interest 103 at step 340. This may be performed in combination with the zone coordinates.
  • these zonal coordinates are used to further identify vertical orientations of the lane lines relative to the stop bar, so that those lane lines with the most vertical orientations relative to the detected stop bar are used for further computations of pedestrian speed as noted below.
  • the process 300 attempts to ascertain a relationship between an actual distance traveled by a pedestrian 102 and image pixels in the input data 110 at step 350. This involves measuring an inter-lane distance between the image pixels at step 360, and mapping image pixels to the actual distance traveled. This is performed using standard lane widths, so that once the most vertically-oriented lane lines are established in step 340, the transformation from image pixels to actual distance traveled by a pedestrian 102 can be accomplished.
  • the process 300 concludes at step 370 by calculating pedestrian speed. This is performed as noted above by computing the distance, in feet or meters, traveled relative to lane lines and stop bar markings, and the distance per unit of time traveled by the pedestrian 102.
  • the pedestrian speed is therefore calibrated to the region of interest 103 for appropriate traffic intersection control, and the speed is provided as output data 180 to one or more of a traffic management tool 191, traffic management system 193, intersection signal controller 194, additional analytics 192, or any other additional or external applications 190.
  • the incident detection component 146 is configured to detect various pedestrian incidents and to provide an alarm as output data 180 when a pedestrian incident is determined.
  • Incidents may include non-moving objects within the pedestrian detection zone 104, or within the field of view 112 generally, that can cause abnormal pedestrian and vehicle movements. Incidents may also include prone objects or pedestrians 102 within the pedestrian detection zone 104, for example pedestrians 102 who have fallen to the pavement. Other types of incidents include a presence of unauthorized vehicles in the pedestrian detection zone 104.
  • the incident detection component 146 learns the background of the pedestrian detection zone 104 and continually searches for parts of that area that differ from the learned background. If a change in the background has been present for some amount of time, and/or tracked moving vehicles (or even other walking pedestrians) are avoiding the changed area to avoid contact, the incident detection component 146 concludes that non-moving objects are in the pedestrian detection zone 104 and generates a warning signal.
  • Non-moving objects may include fallen pedestrians, stalled vehicles, objects that have fallen from moving vehicles, motorcyclists or bicyclists who are down, or objects placed in the pedestrian detection zone 104 by someone.
  • the incident detection component 146 tracks walking pedestrians 102 as they move all the way through the pedestrian detection zone 104, from the entry point through to the exit point. If a pedestrian 102 stops in the middle of the pedestrian detection zone 104 for some time, and does not move forward or backward and continues to be present in the zone 104, then the present invention can issue an alarm signaling "pedestrian down in the roadway" to alert the responsible authorities.
  • the incident detection component 146 may also track movement of vehicles, bicycles, motorcycles, and other objects 107 in the field of view 112. Where it detects that an object 107 has entered the pedestrian detection zone 104 and stopped there without proceeding for some time, the incident detection component 146 may signal that an unauthorized vehicle is present in the pedestrian detection zone 104 to alert authorities for further investigation.
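The background-change test described above can be sketched as a persistence check: the same region must differ from the learned background across many consecutive frames before a non-moving object is declared. The frame count and the set-based mask representation are illustrative assumptions:

```python
def persistent_change(change_masks, min_frames=30):
    """Flag a non-moving object when the same changed pixels remain
    different from the learned background across consecutive frames.

    change_masks: list of sets of (row, col) pixels differing from the
    background, newest frame last.
    """
    if len(change_masks) < min_frames:
        return False
    # Pixels that were changed in every one of the most recent frames.
    persistent = set.intersection(*change_masks[-min_frames:])
    return len(persistent) > 0
```

A stalled object keeps the same pixels in every mask and trips the check; a moving object's mask shifts each frame, so the intersection empties out.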
  • FIG. 4 is a flow diagram illustrating steps in a process 400 of incident detection in the pedestrian tracking and counting framework 100 according to one embodiment of the present invention.
  • a process 400 may include one or more algorithms for incident detection within the component 146.
  • the present invention receives an image 111 representing the field of view 112 in step 405 and thereby initializes the incident detection component 146.
  • the present invention performs pedestrian detection using one or more methods as described herein, and if a pedestrian 102 is identified at step 420, proceeds with tracking the pedestrian 102 at step 430, together with updating the identification, location, speed, and other characteristics. If no pedestrian 102 is identified at step 420, the algorithm loops back to begin processing a new image 111 representing the field of view 112.
  • the algorithm for incident detection in the component 146 determines if the pedestrian 102 is moving at step 440. If the pedestrian is found to be in motion, the process 400 returns to begin processing a new image 111 representing the field of view 112. If, however, the pedestrian 102 is not in motion at step 440, the algorithm for incident detection proceeds to determine how long the pedestrian 102 has been stationary at step 450. If the pedestrian 102 is not in motion in excess of a certain amount of time, a pedestrian down alarm is generated at step 470 as output data 180.
  • the certain amount of time may be preset by a user, and may also be learned by the process 400 as pedestrians 102 and other objects 107 are identified and tracked.
  • a timer may be updated at step 460 for determining whether a pedestrian 102 is not in motion for a certain amount of time, and this value is returned to the beginning of the algorithm. In this manner, if the incident detection component determines that pedestrians 102 are not in motion for some specific reason (for example, a blockage in traffic), then this value can be stored and used by the process 400.
  • the pedestrian tracking and counting framework 100 may be configured to provide a separate output 180 to a traffic signal controller 194 when a group of pre-determined people is identified to enable additional functions to be performed.
  • a user may set a sample size for this output 180 using the traffic management tool 191, or it may be automatically determined within the present invention.
  • upon receiving an identified group as an output 180, the traffic signal controller 194 may extend the walk time or hold a red light for vehicles to allow safe passage through the intersection.
  • the present invention may use an identified group of people to further identify periods of high pedestrian traffic for better intersection efficiency. It is therefore to be understood that many uses of output data 180 in applications for traffic intersection signal control are possible and within the scope of the present invention.
  • the pedestrian tracking and counting framework 100 of the present invention may be applied in many different circumstances.
  • the present invention may be used to identify pedestrians 102 during adverse weather conditions when physical danger may increase due to reduced visibility.
  • the present invention may therefore perform pedestrian detection in low-light, fog, or other low-contrast conditions for improved roadway and intersection safety.
  • the present invention may be used to identify the difference between a pedestrian 102 and the pedestrian's shadow.
  • the pedestrian detection is improved through rejection of pedestrian shadows to ensure improved accuracy in pedestrian detection and counting.
  • the present invention may be used to determine a normal or average crossing speed for pedestrians 102 in a detection zone 104. This may then be used to identify slow-moving pedestrians 102, such as the elderly, children, and disabled or wheelchair-bound persons, to extend and/or adjust a signal timing for crossing the intersection for safer passage. It may also be used to identify faster-moving intersection users, such as pedestrians 102 using hoverboards, skateboards, or other such devices in the pedestrian detection zone 104.
  • the present invention may further be used to identify late arrivals in the pedestrian detection zone 104, to extend and/or adjust signal timing for safe intersection passage.
  • the present invention may also receive and use additional input from the traffic signal controller to identify when a pedestrian 102 starts to cross the intersection after a certain percentage of the crossing time has expired.
  • the present invention may also be utilized to compute a crosswalk occupancy, for example to determine a pedestrian density in the detection zone 104.
  • the pedestrian tracking and counting framework 100 may be utilized in combination with existing approaches to determining vehicle and bicycle detection zones 105, and may be therefore performed using the existing field of view 112 in a traffic detection zone 114 that is designed to detect vehicles, bicycles and other road users needing the traffic signal to cross an intersection.
  • the present invention may use an existing vehicle detection status, such as speed or saturation, to dynamically change the sensitivity of pedestrian detection.
  • a known vehicular status may be applied to increase the likelihood of pedestrian crossing when a stopped vehicle is detected, or when no vehicle is present. Conversely, it may be used to decrease the likelihood of pedestrian crossing while vehicular traffic is freely flowing. Therefore, the present invention uses knowledge of either stopped or moving vehicles or bicycles in the respective other detection zones 105 to improve pedestrian detection accuracy.
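This dynamic sensitivity adjustment can be sketched as a simple threshold rule. The status labels and scale factors here are assumptions for illustration, not values from the disclosure:

```python
def detection_threshold(base, vehicle_status):
    """Hypothetical sensitivity rule: be more sensitive to pedestrians
    when vehicles are stopped or absent, less sensitive when traffic
    flows freely."""
    if vehicle_status in ("stopped", "none"):
        return base * 0.8   # crossing more likely -> lower threshold
    if vehicle_status == "free_flow":
        return base * 1.2   # crossing less likely -> higher threshold
    return base

t = detection_threshold(0.5, "stopped")  # ~0.4, more sensitive
```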
  • the present invention may be part of a whole scene analysis that combines vehicular, bicycle, and pedestrian detection to identify different moving objects 107, such as vehicles, motorcycles, bicycles and pedestrians.
  • Each object type has its own unique characteristics, and the present invention is configured to automatically learn these unique characteristics and apply them to identify the different types.
  • Output data 180 from such a whole scene analysis provides traffic engineers, responsible authorities, and the public with a complete picture of street and intersection activity (for example, who is using what and at what time and for how long) for improved roadway management.
  • the pedestrian tracking and counting framework 100 may be configured to learn features of a traffic intersection, such as the background, using the image processing paradigms discussed herein. This may further include one or more approaches for learning roadway lane structures for improving accuracy in identifying vehicles, bicycles, pedestrians, and other objects 107 in a traffic detection zone 114.
  • detection and false call rates are key metrics used to measure accuracy; keeping missed and false calls low improves the overall performance and efficiency of a traffic management system.
  • the present invention may include an approach that incorporates a highly robust model that learns roadway structures to improve sensing accuracy of a traffic management system.
  • a roadway structure model provides a high confidence that learned structures correspond to physical roadway structures, and is robust to short-term changes in lighting conditions, such as shadows cast by trees, buildings, clouds, vehicles, rain, fog, etc.
  • the model also adaptively learns long-term appearance changes of the roadway structures caused by natural wear and tear, seasonal changes in lighting (winter vs. summer), etc.
  • the model also exhibits a low decision time (in milliseconds) for responding to occlusions caused by fast moving traffic, and low computational complexity capable of running on an embedded computing platform.
  • the present invention looks at user-drawn zones 105 to initialize and establish borders for regions of interest 103 for various detection zones. Images 111 are processed to compute features inside borders for the region of interest 103, and find roadway structures using these computed features. The model is then developed to learn background structures from these features to detect an occlusion, and learn the relationship between structure occlusions and detection zones 105.
  • roadway structures such as lane lines, curbs, and medians are generally found adjoining detection zone boundaries.
  • roadway structures exhibit strong feature patterns that can be generalized. For example, they contain strong edges and are relatively bright in grayscale. Such structures can be effectively described by overlapping projector peaks of positive edges, negative edges and thresholded grayscale pixels. These structures are also persistent, and their feature signatures can be learned over time to detect occlusions and draw inferences regarding the presence of vehicles in the neighboring zones.
  • every zone requires the computation of a left and a right border region of interest 103. If two zones are considered horizontal neighbors, then they will share a border region of interest 103, and the area between the zones is established as the border region of interest 103. If a zone has no neighboring zones to the left or right, then the boundary of the corresponding side is extended by an area proportionate to the zone width, and this extended area serves as the border region of interest 103 for the zone. Also, each border region of interest 103 may be sub-divided into tile regions of interest based on the size of the user-drawn zones. A larger zone provides a larger border area, allowing the model to work with smaller tiles that provide a more localized knowledge of structures and occlusion.
  • Features are computed in the border region of interest 103 by computing edges from projecting positive and negative edges across rows, and finding peak segments from each projected positive and negative edge. Additionally, the peak segments may be determined by computing a gray histogram and a cumulative histogram from image pixels, determining a gray threshold image, and projecting resulting pixels across rows.
  • Roadway structures are learned from each computed feature by finding overlapping feature segment locations, accumulating peak segment locations of overlapping features in a histogram, and finding peaks in the feature background histograms. The model of roadway structures is therefore established using feature histogram peak locations. This is used to identify an occlusion by finding overlapping positive edge peak segments, negative edge peak segments, and gray threshold peak segments with the background histogram. Matching scores are computed for each of these overlaps and compared to threshold values to differentiate between a visible structure and an occlusion.
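The matching-score comparison described above can be sketched as follows: the learned model is reduced to a list of peak locations, and a structure is judged occluded when too few of those peaks reappear in the current frame. The pixel tolerance and the 0.5 cut-off are illustrative assumptions:

```python
def matching_score(background_peaks, current_peaks, tolerance_px=2):
    """Fraction of learned structure peak locations that reappear in
    the current frame's feature peaks; tolerance_px absorbs jitter."""
    if not background_peaks:
        return 1.0
    matched = sum(
        1 for p in background_peaks
        if any(abs(p - q) <= tolerance_px for q in current_peaks)
    )
    return matched / len(background_peaks)

def structure_occluded(background_peaks, current_peaks, threshold=0.5):
    """A structure is judged occluded when too few of its learned
    peaks are visible in the current frame."""
    return matching_score(background_peaks, current_peaks) < threshold
```

A vehicle passing over a lane line suppresses its edge peaks, dropping the score below the threshold and letting the model infer a vehicle in the neighboring zone.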
  • the systems and methods of the present invention may be implemented in many different computing environments. For example, they may be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, electronic or logic circuitry such as discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, PAL, and any comparable means. In general, any means capable of performing the data processing functions described herein can be used to implement the present invention.
  • Exemplary hardware that can be used for the present invention includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other such hardware. Some of these devices include processors (e.g., a single or multiple microprocessors or general processing units), memory, nonvolatile storage, input devices, and output devices.
  • alternative software implementations including, but not limited to, distributed processing, parallel processing, or virtual machine processing can also be configured to perform the methods described herein.
  • the systems and methods of the present invention may also be wholly or partially implemented in software that can be stored on a non-transitory computer-readable storage medium, executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like.
  • the systems and methods of this invention can be implemented as a program embedded on a mobile device or personal computer through such mediums as an applet, JAVA® or CGI script, as a resource residing on one or more servers or computer workstations, as a routine embedded in a dedicated measurement system, system component, or the like.
  • the system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
  • the data processing functions disclosed herein may be performed by one or more program instructions stored in or executed by such memory, and further may be performed by one or more modules configured to carry out those program instructions. Modules are intended to refer to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, expert system or combination of hardware and software that is capable of performing the data processing functionality described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

According to the present invention, pedestrian counting and detection for traffic intersection control analyzes characteristics of a field of view of a traffic detection zone to determine a size and location of a pedestrian zone, and applies protocols for evaluating pixel content in the field of view to individually identify pedestrians. The size and location of a pedestrian zone are determined based either on locations of bicycle and vehicle detection zones, or on the movement of various objects in the field of view. Automatic calibration of pedestrian speed with a region of interest for pedestrian detection is performed using lane and other intersection markings in the field of view. The counting and detection further include identifying a presence, volume, speed, and trajectory of pedestrians in the pedestrian zone of the traffic detection zone.
PCT/US2017/028662 2016-05-09 2017-04-20 Pedestrian detection and counting at a traffic intersection based on location of vehicle zones WO2017196515A1 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US15/150,267 US9460613B1 (en) 2016-05-09 2016-05-09 Pedestrian counting and detection at a traffic intersection based on object movement within a field of view
US15/150,258 2016-05-09
US15/150,258 US9449506B1 (en) 2016-05-09 2016-05-09 Pedestrian counting and detection at a traffic intersection based on location of vehicle zones
US15/150,267 2016-05-09
US15/150,280 2016-05-09
US15/150,280 US9607402B1 (en) 2016-05-09 2016-05-09 Calibration of pedestrian speed with detection zone for traffic intersection control
US15/470,627 US9805474B1 (en) 2016-05-09 2017-03-27 Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control
US15/470,627 2017-03-27

Publications (1)

Publication Number Publication Date
WO2017196515A1 true WO2017196515A1 (fr) 2017-11-16

Family

ID=60267419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/028662 WO2017196515A1 (fr) 2016-05-09 2017-04-20 Pedestrian detection and counting at a traffic intersection based on location of vehicle zones

Country Status (1)

Country Link
WO (1) WO2017196515A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110335479A (zh) * 2019-07-02 2019-10-15 华人运通(上海)自动驾驶科技有限公司 虚拟斑马线投影控制方法、装置和虚拟斑马线投影系统
CN110490030A (zh) * 2018-05-15 2019-11-22 保定市天河电子技术有限公司 一种基于雷达的通道人数统计方法及系统
CN111047857A (zh) * 2019-04-25 2020-04-21 泰州悦诚科技信息咨询中心 智慧城市实时管控系统
CN114998826A (zh) * 2022-05-12 2022-09-02 西北工业大学 密集场景下的人群检测方法
CN115918569A (zh) * 2022-12-13 2023-04-07 重庆市畜牧技术推广总站 一种畜牧业统计监测系统
US11657613B2 (en) 2020-08-11 2023-05-23 Analog Devices International Unlimited Company Zone based object tracking and counting

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140362222A1 (en) * 2013-06-07 2014-12-11 Iteris, Inc. Dynamic zone stabilization and motion compensation in a traffic management apparatus and system
CN104299426A (zh) * 2014-09-19 2015-01-21 辽宁天久信息科技产业有限公司 一种基于对行人检测计数统计的交通信号控制系统及方法
CN104318263A (zh) * 2014-09-24 2015-01-28 南京邮电大学 一种实时高精度人流计数方法
CN104318760A (zh) * 2014-09-16 2015-01-28 北方工业大学 一种基于似物性模型的路口违章行为智能检测方法及系统
US20150084791A1 (en) * 2013-09-23 2015-03-26 Electronics And Telecommunications Research Institute Apparatus and method for managing safety of pedestrian at crosswalk
US20150178571A1 (en) * 2012-09-12 2015-06-25 Avigilon Corporation Methods, devices and systems for detecting objects in a video

Similar Documents

Publication Publication Date Title
US9805474B1 (en) Pedestrian tracking at a traffic intersection to identify vulnerable roadway users for traffic signal timing, pedestrian safety, and traffic intersection control
US9460613B1 (en) Pedestrian counting and detection at a traffic intersection based on object movement within a field of view
US9449506B1 (en) Pedestrian counting and detection at a traffic intersection based on location of vehicle zones
WO2017196515A1 (fr) Détection et comptage de piétons au niveau d'une intersection de trafic basé sur l'emplacement de zones de véhicule
CN110178167B (zh) 基于摄像机协同接力的路口违章视频识别方法
US10311719B1 (en) Enhanced traffic detection by fusing multiple sensor data
US20160148058A1 (en) Traffic violation detection
CN109284674B (zh) 一种确定车道线的方法及装置
US10081308B2 (en) Image-based vehicle detection and distance measuring method and apparatus
US10713500B2 (en) Identification and classification of traffic conflicts using live video images
Cheng et al. Intelligent highway traffic surveillance with self-diagnosis abilities
CN101877058B (zh) 人流量统计的方法及系统
US20120148094A1 (en) Image based detecting system and method for traffic parameters and computer program product thereof
Salvi An automated nighttime vehicle counting and detection system for traffic surveillance
KR20080036016A (ko) 영상 분석을 위한 방법 및 이미지 평가 유닛
Mithun et al. Video-based tracking of vehicles using multiple time-spatial images
US10643465B1 (en) Dynamic advanced traffic detection from assessment of dilemma zone activity for enhancement of intersection traffic flow and adjustment of timing of signal phase cycles
CN109389016B (zh) 一种人头计数的方法及系统
CN106023650A (zh) 基于交通路口视频及计算机并行处理的实时行人预警方法
Maurya et al. Deep learning based vulnerable road user detection and collision avoidance
Malinovskiy et al. Model‐free video detection and tracking of pedestrians and bicyclists
Kanhere Vision-based detection, tracking and classification of vehicles using stable features with automatic camera calibration
Buslaev et al. On problems of intelligent monitoring for traffic
KR20150002040A (ko) 호그 연속기법에 기반한 칼만 필터와 클러스터링 알고리즘을 이용하여 실시간으로 보행자를 인식하고 추적하는 방법
Kamijo et al. Development and evaluation of real-time video surveillance system on highway based on semantic hierarchy and decision surface

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17796549

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17796549

Country of ref document: EP

Kind code of ref document: A1