WO2021022759A1 - Systems and methods for traffic violation detection - Google Patents

Systems and methods for traffic violation detection Download PDF

Info

Publication number
WO2021022759A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
objects
target
moving
candidate
Application number
PCT/CN2019/126617
Other languages
English (en)
French (fr)
Inventor
Jun Lin
Yayun Wang
Original Assignee
Zhejiang Dahua Technology Co., Ltd.
Application filed by Zhejiang Dahua Technology Co., Ltd.
Priority to EP19940751.1A (EP3983931A4)
Publication of WO2021022759A1
Priority to US17/647,976 (US11790699B2)

Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 40/25: Recognition of walking or running movements, e.g. gait recognition
    • G06F 18/2413: Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54: Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
    • G06V 20/625: Text of license plates
    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V 40/172: Human faces: classification, e.g. identification
    • G08G 1/0133: Traffic data processing for classifying traffic situation
    • G08G 1/0141: Measuring and analyzing of parameters relative to traffic conditions for traffic information dissemination
    • G08G 1/0145: Measuring and analyzing of parameters relative to traffic conditions for active traffic flow control
    • G08G 1/0175: Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G08G 1/052: Detecting movement of traffic with provision for determining speed or overspeed
    • G08G 1/056: Detecting movement of traffic with provision for distinguishing direction of travel
    • G06T 2207/30236: Traffic on road, railway or crossing
    • G06T 2207/30241: Trajectory
    • G06V 2201/08: Detecting or categorising vehicles

Definitions

  • the disclosure generally relates to traffic monitoring technology, and more particularly, relates to systems and methods for detecting a traffic violation behavior.
  • However, angles of a lane and a zebra crossing in an image captured by a monitoring device may vary in different installation scenarios, resulting in an inaccurate determination of a moving direction of a moving object.
  • In addition, a wandering pedestrian may be wrongly determined as a pedestrian who wants to cross a crosswalk when detecting uncourteous driving behaviors, which may result in an inaccurate detection result. Therefore, it is desired to provide systems and methods for detecting traffic violation behaviors, thereby improving the detection efficiency and/or accuracy.
  • a method is provided.
  • the method may be implemented on at least one computing device, each of which may include at least one processor and a storage device.
  • the method may include determining a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects, the one or more moving objects may be objects moving in a predetermined region; and when it is determined that a moving direction of the target object is a moving direction towards a middle of a road and a target vehicle passing through the predetermined region is not courteous to the target object, marking a state of the target vehicle as a violation state.
  • the determining a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects may include determining an average angle between at least two lane lines captured by a monitoring device and a horizontal direction; determining a displacement of a first moving object included in the one or more moving objects based on two image frames captured by the monitoring device; and determining the first moving object as the target object which should be given precedence with respect to a vehicle when it is determined that the average angle and the displacement satisfy the following formulas:
  • $\left|\arctan\left(\Delta y / \Delta x\right) - \left(\bar{\theta} - \theta_{0}\right)\right| < \theta$ (1)
  • $\sqrt{(\Delta x)^{2} + (\Delta y)^{2}} > s$ (2)
  • where $\theta$ refers to a threshold angle; $s$ refers to a threshold distance; $\bar{\theta}$ refers to the average angle; $\theta_{0}$ refers to a constant value; $\Delta x$ refers to a first displacement in the horizontal direction included in the displacement; and $\Delta y$ refers to a second displacement in the vertical direction included in the displacement.
  • the determining that a moving direction of the target object is a moving direction towards a middle of a road may include determining a first image frame and a second image frame sequentially captured by the monitoring device, wherein the second image frame is an image captured at a time point which is after a predetermined time interval from a time point when the first image frame is captured, and the first image frame and the second image frame are images including the target object; and determining that the moving direction of the target object is towards the middle of the road based on an orientation of a first position relative to a second position, wherein the first position is a location of the target object included in the first image frame and the second position is a location of the target object included in the second image frame.
  • the method may further include controlling a signal lamp to perform a prompting operation after determining that the one or more moving objects are in the predetermined region.
  • the method may further include establishing a target binding relationship between the target object and the target vehicle and assigning a target ID to the target binding relationship.
  • the marking the state of the target vehicle as the violation state may include marking the state of the target vehicle corresponding to the target ID as the violation state.
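  • By way of illustration only, the target binding relationship and its target ID can be pictured as a small registry that maps each assigned ID to an (object, vehicle, state) record. The sketch below is a minimal, hypothetical Python rendering of this bookkeeping; the names `Binding`, `BindingRegistry`, `bind`, `mark_violation`, and `cancel_violation` are assumptions, not the disclosure's implementation.

```python
from dataclasses import dataclass


@dataclass
class Binding:
    """A binding relationship between a target object and a target vehicle."""
    object_id: int
    vehicle_id: int
    state: str = "normal"  # set to "violation" when the vehicle is marked


class BindingRegistry:
    """Assigns a target ID to each binding and marks states by that ID."""

    def __init__(self) -> None:
        self._next_id = 0
        self._bindings: dict[int, Binding] = {}

    def bind(self, object_id: int, vehicle_id: int) -> int:
        """Establish a binding relationship and return its target ID."""
        target_id = self._next_id
        self._next_id += 1
        self._bindings[target_id] = Binding(object_id, vehicle_id)
        return target_id

    def mark_violation(self, target_id: int) -> None:
        """Mark the state of the vehicle corresponding to the target ID."""
        self._bindings[target_id].state = "violation"

    def cancel_violation(self, target_id: int) -> None:
        """Cancel the violation state, e.g., when the target object gets in."""
        self._bindings[target_id].state = "normal"
```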
  • the method may further include determining that a door of the target vehicle captured by the monitoring device is in an open state and that the target object disappears within a predetermined period from a time point when the door is determined to be in the open state; and canceling the violation state of the target vehicle.
  • the method may further include performing at least one of operations on the target vehicle, the operations including a license plate recognition, a speed detection, a window detection, or a face recognition; and recording a processing result.
  • a device may include a determination module configured to determine a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects, wherein the one or more moving objects are objects moving in a predetermined region; and a marking module configured to mark a state of a target vehicle passing through the predetermined region as a violation state when it is determined that a moving direction of the target object is a moving direction towards a middle of a road and the target vehicle is not courteous to the target object.
  • a non-transitory computer readable medium storing at least one set of instructions is provided.
  • When executed by at least one processor of a computing device, the at least one set of instructions may direct the at least one processor to perform a method.
  • the method may include determining a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects, wherein the one or more moving objects are objects moving in a predetermined region; and when it is determined that a moving direction of the target object is a moving direction towards a middle of a road and a target vehicle passing through the predetermined region is not courteous to the target object, marking a state of the target vehicle as a violation state.
  • an electronic device may include at least one storage device storing executable instructions, and at least one processor in communication with the at least one storage device.
  • the at least one processor may be directed to cause the system to perform operations including determining a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects, wherein the one or more moving objects are objects moving in a predetermined region; and when it is determined that a moving direction of the target object is a moving direction towards a middle of a road and a target vehicle passing through the predetermined region is not courteous to the target object, marking a state of the target vehicle as a violation state.
  • a system may include at least one storage device storing executable instructions, and at least one processor in communication with the at least one storage device. When executing the executable instructions, the at least one processor may cause the system to perform one or more of the following operations.
  • the system may identify one or more candidate objects within a predetermined region associated with a crosswalk at a road. For each of the one or more candidate objects, the system may further determine a moving direction of the candidate object with respect to the road.
  • the system may further determine one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects.
  • the system may further identify one or more vehicles within a predetermined range of the crosswalk.
  • the system may further obtain, from a camera, one or more images associated with the vehicle at one or more predetermined positions.
  • the system may further determine whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
  • the system may perform one or more of the following operations.
  • the system may determine an initial position of the candidate object within the predetermined region based on an initial image captured at an initial time point.
  • the system may further determine whether the initial position of the candidate object is in the vicinity of an outer edge of the predetermined region.
  • the system may determine a plurality of intermediate positions of the candidate object within the predetermined region based on a plurality of sequential images associated with the candidate object.
  • the plurality of sequential images may be sequentially captured at a plurality of intermediate time points immediately after the initial time point.
  • the system may determine a plurality of intermediate moving directions of the candidate object based on a plurality of positions including the initial position and the plurality of intermediate positions. Each of the plurality of intermediate moving directions may correspond to two adjacent positions of the plurality of positions. The system may determine the moving direction of the candidate object based at least in part on the plurality of intermediate moving directions.
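  • As a rough illustration, the intermediate moving directions can be computed by pairing each tracked position with the next one and recording the displacement and its angle to the horizontal. The sketch below assumes positions are (x, y) pixel coordinates and the function name is hypothetical.

```python
import math


def intermediate_directions(positions):
    """Compute one intermediate moving direction per pair of adjacent positions.

    positions: time-ordered list of (x, y) pixel coordinates, starting with
    the initial position followed by the intermediate positions.
    Returns a list of (dx, dy, angle_deg) triples, one per adjacent pair.
    """
    directions = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        dx, dy = x1 - x0, y1 - y0
        angle_deg = math.degrees(math.atan2(dy, dx))  # angle to the horizontal
        directions.append((dx, dy, angle_deg))
    return directions
```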
  • the system may perform one or more of the following operations.
  • the system may select candidate intermediate moving directions satisfying a predetermined condition from the plurality of intermediate moving directions.
  • the system may determine the moving direction of the candidate object based on the candidate intermediate moving directions.
  • the system may perform one or more of the following operations.
  • the system may identify one or more lane lines associated with the crosswalk.
  • the system may determine an average angle between the one or more lane lines and a horizontal direction.
  • For each of the plurality of intermediate moving directions, the system may determine an angle between the intermediate moving direction and the horizontal direction.
  • the system may determine a reference angle based on the average angle and the angle between the intermediate moving direction and the horizontal direction.
  • the system may determine a moving distance connecting the two adjacent positions corresponding to the intermediate moving direction.
  • the system may determine whether the reference angle is less than a threshold angle and whether the moving distance is larger than a threshold distance. In response to a determination that the reference angle is less than the threshold angle and the moving distance is larger than the threshold distance, the system may determine the intermediate moving direction as a candidate intermediate moving direction.
  • the system may determine an average moving direction of the candidate intermediate moving directions as the moving direction of the candidate object.
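  • A minimal sketch of this selection and averaging step is given below. It assumes the reference angle is the absolute difference between an intermediate direction's angle and the crossing direction implied by the average lane-line angle (here taken as the lane-line angle minus 90°), and that the moving distance is the Euclidean length of the displacement; the correction term and the threshold values are assumptions, not the disclosure's exact formula.

```python
import math


def average_crossing_direction(directions, avg_lane_angle_deg,
                               angle_threshold_deg=30.0,
                               distance_threshold_px=5.0):
    """Select candidate intermediate moving directions and average them.

    directions: list of (dx, dy, angle_deg) triples for adjacent positions.
    avg_lane_angle_deg: average angle between the lane lines and the horizontal.
    Returns the average moving direction in degrees, or None if no
    intermediate direction passes both checks (e.g., a wandering object).
    """
    crossing_angle = avg_lane_angle_deg - 90.0  # assumed angle correction
    candidates = []
    for dx, dy, angle_deg in directions:
        reference_angle = abs(angle_deg - crossing_angle)
        moving_distance = math.hypot(dx, dy)
        if (reference_angle < angle_threshold_deg
                and moving_distance > distance_threshold_px):
            candidates.append(angle_deg)
    if not candidates:
        return None
    return sum(candidates) / len(candidates)
```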
  • the system may determine whether the moving direction is a direction away from the road or a direction crossing the road. In response to a determination that the moving direction is the direction crossing the road, the system may designate the candidate object as a target object.
  • the one or more predetermined positions may include a first line within a first predetermined distance range of a side of a crosswalk area, a second line within the crosswalk area, and a third line within a third predetermined distance range of the other side of the crosswalk area.
  • the system may determine whether three images associated with the vehicle at the first line, the second line, and the third line are captured within a first predetermined time period. In response to a determination that three images associated with the vehicle at the first line, the second line, and the third line are captured within the first predetermined time period, the system may determine that the vehicle has a traffic violation behavior associated with the one or more target objects.
  • the system may determine whether a door of the vehicle is open and whether at least one of the one or more target objects disappears in a second predetermined time period based on the one or more images associated with the vehicle. In response to a determination that the door of the vehicle is open and the at least one of the one or more target objects disappears, the system may determine that the vehicle does not have a traffic violation behavior associated with the one or more target objects.
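  • Read together, the two determinations above amount to a small decision over the captured evidence: three captures at the three lines within the first predetermined time period indicate a violation, unless the door-open/disappearance check suggests the target object got into the vehicle. The sketch below mirrors only that logic; the timestamp inputs and boolean signals are hypothetical stand-ins for the actual detection results.

```python
def has_violation(capture_times, first_period_s, door_open, target_disappeared):
    """Decide whether a vehicle has the traffic violation behavior.

    capture_times: timestamps (in seconds) of the images captured at the
        first, second, and third lines; None for a line never reached.
    first_period_s: the first predetermined time period, in seconds.
    door_open, target_disappeared: results of the door and disappearance
        checks within the second predetermined time period.
    """
    if any(t is None for t in capture_times):
        return False  # not captured at all three predetermined positions
    if max(capture_times) - min(capture_times) > first_period_s:
        return False  # the three captures are too far apart in time
    if door_open and target_disappeared:
        return False  # the target object presumably got into the vehicle
    return True
```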
  • In response to a determination that the vehicle has a traffic violation behavior associated with the one or more target objects, the system may detect at least one of a license plate, a speed, a face, or a driving behavior associated with the vehicle.
  • The system may further cause a signal device to generate a reminder signal in response to determining the one or more target objects.
  • In response to a determination that the vehicle has a traffic violation behavior associated with the one or more target objects, the system may transmit information associated with the traffic violation behavior to a target device.
  • the target device may include at least one of a traffic management department, a data center, an alarm center or a terminal device associated with the vehicle.
  • a method is provided.
  • the method may be implemented on at least one computing device, each of which may include at least one processor and a storage device.
  • the method may include identifying one or more candidate objects within a predetermined region associated with a crosswalk at a road; for each of the one or more candidate objects, determining a moving direction of the candidate object with respect to the road; determining one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects; identifying one or more vehicles within a predetermined range of the crosswalk; and for each of the one or more vehicles, obtaining, from a camera, one or more images associated with the vehicle at one or more predetermined positions; and determining whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
  • a system implemented on a computing device may have a processor, a storage medium, and a communication platform connected to a network.
  • the system may include an identification module configured to identify one or more candidate objects within a predetermined region associated with a crosswalk at a road and one or more vehicles within a predetermined range of the crosswalk.
  • the system may further include a moving direction determination module configured to determine a moving direction of each of the one or more candidate objects with respect to the road.
  • the system may further include a target object determination module configured to determine one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects.
  • the system may further include a traffic violation behavior determination module configured to obtain, from a camera, one or more images associated with each of the one or more vehicles at one or more predetermined positions, and determine whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
  • a non-transitory computer readable medium storing at least one set of instructions is provided.
  • When executed by at least one processor of a computing device, the at least one set of instructions may direct the at least one processor to perform a method.
  • the method may include identifying one or more candidate objects within a predetermined region associated with a crosswalk at a road; for each of the one or more candidate objects, determining a moving direction of the candidate object with respect to the road; determining one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects; identifying one or more vehicles within a predetermined range of the crosswalk; and for each of the one or more vehicles, obtaining, from a camera, one or more images associated with the vehicle at one or more predetermined positions; and determining whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
  • FIG. 1 is a schematic diagram illustrating an exemplary traffic monitoring system according to some embodiments of the present disclosure
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
  • FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary terminal device according to some embodiments of the present disclosure
  • FIG. 4 is a flowchart illustrating an exemplary process for marking a violation state of a vehicle according to some embodiments of the present disclosure
  • FIG. 5 is a flowchart illustrating an exemplary process for capturing images associated with a vehicle that is uncourteous to a target object on a crosswalk according to some embodiments of the present disclosure
  • FIG. 6 is a flowchart illustrating an exemplary process for determining whether a vehicle is not courteous to a moving object according to some embodiments of the present disclosure
  • FIG. 7 is a block diagram illustrating an exemplary violation state marking device according to some embodiments of the present disclosure.
  • FIG. 8 is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
  • FIG. 9 is a flowchart illustrating an exemplary process for determining whether a vehicle has a traffic violation behavior according to some embodiments of the present disclosure
  • FIG. 10 is a flowchart illustrating an exemplary process for determining a moving direction of a candidate object with respect to a road according to some embodiments of the present disclosure
  • FIG. 11 is a flowchart illustrating an exemplary process for selecting a candidate intermediate moving direction according to some embodiments of the present disclosure
  • FIG. 12 is a schematic diagram illustrating exemplary angles between lane lines and a horizontal direction according to some embodiments of the present disclosure
  • FIG. 13 is a schematic diagram illustrating an exemplary process for determining a candidate intermediate moving direction corresponding to two adjacent positions according to some embodiments of the present disclosure
  • FIG. 14 is a flowchart illustrating an exemplary process for determining whether a vehicle has a traffic violation behavior according to some embodiments of the present disclosure.
  • FIG. 15 illustrates an exemplary image associated with a vehicle at a predetermined position captured by a camera according to some embodiments of the present disclosure.
  • The term “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions.
  • a module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or any other storage device.
  • a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts.
  • Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution) .
  • Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device.
  • Software instructions may be embedded in a firmware, such as an erasable programmable read-only memory (EPROM) .
  • modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors.
  • the modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks, but may be represented in hardware or firmware.
  • the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof.
  • The terms “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose.
  • The terms “top,” “bottom,” “upper,” “lower,” “vertical,” “lateral,” “above,” “below,” “upward(s),” “downward(s),” “left-hand side,” “right-hand side,” “horizontal,” and other such spatial reference terms are used in a relative sense to describe the positions or orientations of certain surfaces/parts/components of a vehicle with respect to other such features of the vehicle when the vehicle is in a normal operating position, and may change if the position or orientation of the vehicle changes.
  • The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may be implemented out of order. Conversely, the operations may be implemented in an inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
  • the systems and methods may identify one or more candidate objects within a predetermined region associated with a crosswalk at a road. For each of the one or more candidate objects, the systems and methods may determine a moving direction of the candidate object with respect to the road. The systems and methods may further determine one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects. The systems and methods may also identify one or more vehicles within a predetermined range of the crosswalk. For each of the one or more vehicles, the systems and methods may obtain, from a camera, one or more images associated with the vehicle at one or more predetermined positions. The systems and methods may determine whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
  • an angle correction (e.g., as described in FIG. 4) may be performed to improve a determination accuracy of the moving direction.
  • the moving direction of the candidate object may be determined based on a plurality of intermediate positions of the candidate object during a time period. This may be used to identify a candidate object that is wandering on the road and avoid misjudgment caused by such a candidate object.
  • the systems and methods may determine whether a door of the vehicle is open and whether at least one of the one or more target objects disappears in a predetermined time period based on the one or more images associated with the vehicle. If the door of the vehicle is open and the at least one of the one or more target objects disappears, the systems and methods may determine that the at least one of the one or more target objects gets into the vehicle, and the vehicle does not have the traffic violation behavior associated with the one or more target objects. Accordingly, the systems and methods may accurately determine whether a vehicle has the traffic violation behavior and capture and/or collect evidence of the traffic violation behavior without any human cost.
  • FIG. 1 is a schematic diagram illustrating an exemplary traffic monitoring system according to some embodiments of the present disclosure.
  • the traffic monitoring system 100 may include a server 110, a network 120, an image acquisition device 130, a terminal device 140, and a storage device 150.
  • the server 110 may be a single server or a server group.
  • the server group may be centralized or distributed (e.g., the server 110 may be a distributed system) .
  • the server 110 may be local or remote.
  • the server 110 may access information and/or data stored in the image acquisition device 130, the terminal device 140, and/or the storage device 150 via the network 120.
  • the server 110 may be directly connected to the image acquisition device 130, the terminal device 140, and/or the storage device 150 to access stored information and/or data.
  • the server 110 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
  • the server 110 may include a processing device 112.
  • the processing device 112 may process data and/or information relating to traffic monitoring to perform one or more functions described in the present disclosure. For example, based on one or more images captured by the image acquisition device 130 mounted near a crosswalk, the processing device 112 may determine whether a traffic violation behavior exists at the crosswalk (e.g., whether a vehicle does not stop for pedestrians at the crosswalk) .
  • the processing device 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
  • the processing device 112 may include a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field-programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
  • the server 110 may be unnecessary and all or part of the functions of the server 110 may be implemented by other components (e.g., the image acquisition device 130, the terminal device 140) of the traffic monitoring system 100.
  • the processing device 112 may be integrated into the image acquisition device 130 or the terminal device 140, and the functions (e.g., monitoring a traffic behavior of a vehicle) of the processing device 112 may be implemented by the image acquisition device 130 or the terminal device 140.
  • the network 120 may facilitate the exchange of information and/or data for the traffic monitoring system 100.
  • One or more components (e.g., the server 110, the image acquisition device 130, the terminal device 140, or the storage device 150) of the traffic monitoring system 100 may exchange information and/or data with other component(s) of the traffic monitoring system 100 via the network 120.
  • the server 110 may obtain/acquire images from the image acquisition device 130 via the network 120.
  • As another example, the image acquisition device 130 may transmit images to the storage device 150 for storage via the network 120.
  • the network 120 may be any type of wired or wireless network, or combination thereof.
  • the network 120 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, an Internet, a local area network (LAN) , a wide area network (WAN) , a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof.
  • the image acquisition device 130 may be configured to acquire at least one image (the “image” herein refers to a single image or a frame of a video) .
  • the image acquisition device 130 may be mounted near a crosswalk of a road.
  • the image acquisition device 130 may acquire images associated with one or more moving objects near the crosswalk and/or vehicles passing through the crosswalk, wherein the images may be used to detect traffic violation behaviors.
  • the image acquisition device 130 may include a camera 130-1, a video recorder 130-2, a sensor 130-3, etc.
  • the camera 130-1 may include a gun camera, a dome camera, an integrated camera, a monocular camera, a binocular camera, a multi-view camera, or the like, or any combination thereof.
  • the video recorder 130-2 may include a PC Digital Video Recorder (DVR) , an embedded DVR, or the like, or any combination thereof.
  • the sensor 130-3 may include an acceleration sensor (e.g., a piezoelectric sensor), a velocity sensor (e.g., a Hall sensor), a distance sensor (e.g., a radar, an infrared sensor), a steering angle sensor (e.g., a tilt sensor), a traction-related sensor (e.g., a force sensor), or the like, or any combination thereof.
  • the at least one image acquired by the image acquisition device 130 may be a two-dimensional image, a three-dimensional image, a four-dimensional image, etc.
  • the image acquisition device 130 may include a plurality of components each of which can acquire an image.
  • the image acquisition device 130 may include a plurality of sub-cameras that can capture images or videos simultaneously.
  • the image acquisition device 130 may transmit the acquired image to one or more components (e.g., the server 110, the terminal device 140, and/or the storage device 150) of the traffic monitoring system 100 via the network 120.
  • the terminal device 140 may be configured to receive information and/or data from the server 110, the image acquisition device 130, and/or the storage device 150 via the network 120. For example, the terminal device 140 may receive images and/or videos from the image acquisition device 130. As another example, the terminal device 140 may transmit instructions to the image acquisition device 130 and/or the server 110. In some embodiments, the terminal device 140 may provide a user interface via which a user may view information and/or input data and/or instructions to the traffic monitoring system 100. For example, the user may view, via the user interface, information associated with a traffic violation behavior of a vehicle obtained from the server 110. As another example, the user may input, via the user interface, an instruction to set a traffic monitoring parameter of the image acquisition device 130.
  • the terminal device 140 may include a mobile device 140-1, a computer 140-2, a wearable device 140-3, or the like, or any combination thereof.
  • the terminal device 140 may include a display that can display information in a human-readable form, such as text, image, audio, video, graph, animation, or the like, or any combination thereof.
  • the display of the terminal device 140 may include a cathode ray tube (CRT) display, a liquid crystal display (LCD) , a light emitting diode (LED) display, a plasma display panel (PDP) , a three dimensional (3D) display, or the like, or a combination thereof.
  • the terminal device 140 may be connected to one or more components (e.g., the server 110, the image acquisition device 130, and/or the storage device 150) of the traffic monitoring system 100 via the network 120.
  • the storage device 150 may be configured to store data and/or instructions.
  • the data and/or instructions may be obtained from, for example, the server 110, the image acquisition device 130, and/or any other component of the traffic monitoring system 100.
  • the storage device 150 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure.
  • the storage device 150 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage devices may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memories may include a random access memory (RAM) .
  • Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc.
  • Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the storage device 150 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the storage device 150 may be connected to the network 120 to communicate with one or more other components (e.g., the server 110, the image acquisition device 130, or the terminal device 140) of the traffic monitoring system 100.
  • One or more components in the traffic monitoring system 100 may access the data or instructions stored in the storage device 150 via the network 120.
  • the storage device 150 may be directly connected to or communicate with one or more other components (e.g., the server 110, the image acquisition device 130, or the terminal device 140) of the traffic monitoring system 100.
  • the storage device 150 may be part of another component of the traffic monitoring system 100, such as the server 110, the image acquisition device 130, or the terminal device 140.
  • the traffic monitoring system 100 may include one or more additional components and/or one or more components of the traffic monitoring system 100 described above may be omitted. Additionally or alternatively, two or more components of the traffic monitoring system 100 may be integrated into a single component. A component of the traffic monitoring system 100 may be implemented on two or more sub-components.
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
  • the computing device 200 may be used to implement any component of the traffic monitoring system 100 as described herein.
  • the processing device 112 may be implemented on the computing device 200, via its hardware, software program, firmware, or a combination thereof.
  • Although only one such computer is shown for convenience, the computer functions relating to traffic monitoring as described herein may be implemented in a distributed fashion on a number of similar platforms to distribute the processing load.
  • the computing device 200 may include COM ports 250 connected to and from a network connected thereto to facilitate data communications.
  • the computing device 200 may include a transmission device (not shown) via which the computing device 200 may transmit information and/or data to external components.
  • the transmission device may include a Network Interface Controller (NIC) connected to an external network device.
  • the transmission device may include a Radio Frequency (RF) module configured to communicate with a network (e.g., the network 120) via a wireless connection.
  • the computing device 200 may also include a processor (e.g., a processor 220) , in the form of one or more processors (e.g., logic circuits) , for executing program instructions.
  • the processor 220 may include interface circuits and processing circuits therein.
  • the interface circuits may be configured to receive electronic signals from a bus 210, wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus 210.
  • the computing device 200 may further include one or more storages configured to store various data files (e.g., program instructions) to be processed and/or transmitted by the computing device 200.
  • the one or more storages may include a high-speed random access memory (not shown) , a non-volatile memory (e.g., one or more magnetic storage devices, flash memories, or other non-volatile solid-state memory) (not shown) , a disk 270, a read-only memory (ROM) 230, or a random-access memory (RAM) 240, or the like, or any combination thereof.
  • the one or more storages may further include a remote storage corresponding to the processor 220. The remote storage may connect to the computing device 200 via the network 120.
  • the computing device 200 may also include program instructions stored in the one or more storages (e.g., the ROM 230, RAM 240, and/or another type of non-transitory storage medium) to be executed by the processor 220.
  • the methods and/or processes of the present disclosure may be implemented as the program instructions.
  • the computing device 200 may also include an I/O component 260, supporting input/output between the computing device 200 and other components.
  • the computing device 200 may also receive programming and data via network communications.
  • Multiple processors 220 are also contemplated; thus, operations and/or method steps performed by one processor 220 as described in the present disclosure may also be jointly or separately performed by the multiple processors.
  • For example, if the processor 220 of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two different processors 220 jointly or separately in the computing device 200 (e.g., a first processor executes operation A and a second processor executes operation B, or the first and second processors jointly execute operations A and B).
  • FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary terminal device according to some embodiments of the present disclosure.
  • the terminal device 140 may be implemented on the mobile device 300 shown in FIG. 3.
  • the mobile device 300 may include a communication platform 310, a display 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390.
  • any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
  • an operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications (apps) 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
  • the applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to traffic monitoring or other information from the processing device 112. User interactions with the information stream may be achieved via the I/O 350 and provided to the processing device 112 and/or other components of the traffic monitoring system 100 via the network 120.
  • FIG. 4 is a flowchart illustrating an exemplary process for marking a violation state of a vehicle according to some embodiments of the present disclosure.
  • the process 400 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
  • the processor 220 and/or the modules in FIG. 7 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules may be configured to perform the process 400.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 400 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 400 are illustrated in FIG. 4 and described below is not intended to be limiting.
  • a target object which should be given precedence with respect to a vehicle may be determined from one or more moving objects by performing an angle correction on the one or more moving objects.
  • the one or more moving objects may include one or more objects staying or moving in a predetermined region on a road.
  • the one or more moving objects may include a pedestrian (e.g., a person traveling on foot or by a wheelchair, a scooter, or a skateboard) , a bicyclist, a driver of a non-motorized vehicle (e.g., a motorbike) , or the like, or any combination thereof.
  • the predetermined region refers to a waiting region in which a moving object should stay to wait to cross the road. For example, a region including one or more zebra crossings at an end of a crosswalk may be determined as the predetermined region.
  • a region within a certain distance (e.g., 1 m, 1.5 m, 2 m, or 2.5 m) from the outermost zebra crossing (e.g., the one closest to a road curb) at an end of a crosswalk may be determined as the predetermined region.
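  • By way of illustration only, if the predetermined region is modeled as a band within a fixed distance of the outermost zebra-crossing line, membership of a detected object's foot point could be tested as below; the segment representation, pixel units, and function name are assumptions rather than the disclosure's implementation.

```python
import math


def in_waiting_region(point, line_p1, line_p2, max_dist):
    """Check whether `point` lies within `max_dist` of the outermost
    zebra-crossing line segment running from `line_p1` to `line_p2`.
    All coordinates are (x, y) pixels; `max_dist` is in pixels."""
    px, py = point
    x1, y1 = line_p1
    x2, y2 = line_p2
    seg_dx, seg_dy = x2 - x1, y2 - y1
    seg_len2 = seg_dx ** 2 + seg_dy ** 2
    # Project the point onto the segment, clamping to the endpoints.
    t = 0.0 if seg_len2 == 0 else max(
        0.0, min(1.0, ((px - x1) * seg_dx + (py - y1) * seg_dy) / seg_len2))
    cx, cy = x1 + t * seg_dx, y1 + t * seg_dy
    return math.hypot(px - cx, py - cy) <= max_dist
```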
  • An angle correction performed on a moving object refers to correcting an angle of a moving direction of the moving object with respect to a reference direction.
  • a two dimensional (2D) image including the moving object may be acquired by a monitoring device, and the 2D image may be represented in a 2D coordinate system including an X-axis and a Y-axis.
  • the reference direction may be the X-axis direction (or referred to as a horizontal direction herein) .
  • the monitoring device may also be referred to as a camera as described in connection with operation 910.
  • a vehicle passing through the predetermined region may refer to a vehicle passing through a crosswalk at the road where the predetermined region is located.
  • the target object in the predetermined region may be determined by correcting the moving angle of each of the one or more moving objects, thereby ensuring an accurate determination of the moving angle of the target object.
  • the target vehicle that is not courteous to the target object may be marked only when it is determined that the target object is moving towards the middle of the road, thereby avoiding wrongly marking the state of the target vehicle as the violation state when the target object is wandering. Accordingly, problems in the prior art that violation behaviors cannot be effectively marked may be solved, a vehicle that has a traffic violation behavior may be detected accurately, and the vehicle that has the traffic violation behavior may be effectively marked.
  • the above operations may be executed by a device including but not limited to the monitoring device.
  • one or more operations described below may be performed.
  • An average angle between at least two lane lines captured by the monitoring device and a horizontal direction (i.e., an X-axis direction of an image including the moving object) may be determined.
  • a displacement of a first moving object included in the one or more moving objects may be determined based on two image frames captured by the monitoring device.
  • the first moving object may be determined as the target object when it is determined that the average angle and the displacement satisfy Equation (1) and Equation (2) as below:
  • $\left|\arctan\left(\Delta y / \Delta x\right) - \left(\bar{\theta} - \theta_{0}\right)\right| < \theta$ (1)
  • $\sqrt{(\Delta x)^{2} + (\Delta y)^{2}} > s$ (2)
  • where $\bar{\theta}$ refers to the average angle; $\theta_{0}$ refers to a predetermined angle (i.e., a constant value, e.g., 85°, 87°, 89°, 90°, 92°, etc.); $\Delta x$ refers to a first displacement along the horizontal direction included in the displacement; $\Delta y$ refers to a second displacement along the vertical direction included in the displacement; $\theta$ refers to a threshold angle; and $s$ refers to a threshold distance.
  • the vertical direction may be perpendicular to the horizontal direction as aforementioned. For example, if the horizontal direction is an X-axis of a 2D image, the vertical direction may be a Y-axis of the 2D image.
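  • A direct transcription of Equations (1) and (2) into code might look as follows; since the equation form is reconstructed from the symbol definitions above, the correction term should be treated as an assumption.

```python
import math


def is_target_object(dx, dy, avg_angle_deg, const_angle_deg=90.0,
                     threshold_angle_deg=30.0, threshold_distance=5.0):
    """Return True when a displacement (dx, dy), with dx and dy positive,
    satisfies Equation (1) (corrected moving angle below the threshold
    angle) and Equation (2) (moving distance above the threshold)."""
    moving_angle_deg = math.degrees(math.atan2(dy, dx))
    # Equation (1): movement roughly perpendicular to the lane lines.
    eq1 = abs(moving_angle_deg - (avg_angle_deg - const_angle_deg)) < threshold_angle_deg
    # Equation (2): displacement long enough to rule out jitter.
    eq2 = math.hypot(dx, dy) > threshold_distance
    return eq1 and eq2
```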
  • the at least two lane lines may be detected using a LaneNet algorithm (i.e., an exemplary lane line detection algorithm) or manually.
  • the one or more moving objects (e.g., a pedestrian, a non-motorized vehicle, etc.) and vehicles in image frames may be identified using a You Only Look Once version 3 (YOLOv3) model (i.e., an exemplary object detection algorithm).
  • the one or more moving objects and the vehicles may be tracked using a kernelized correlation filters (KCF) algorithm (i.e., an exemplary target tracking algorithm) .
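  • By way of example only, the tracking step could be realized with OpenCV's KCF implementation, feeding it a bounding box produced by the detector (e.g., a YOLOv3 detection); this is an illustrative sketch rather than the disclosure's pipeline, and depending on the OpenCV build the factory may be `cv2.TrackerKCF_create` or `cv2.legacy.TrackerKCF_create`.

```python
import cv2


def track_centers(video_path, init_box):
    """Track one detected object across frames and collect its centers.

    init_box: (x, y, w, h) bounding box of the object in the first frame,
    e.g., taken from an object detector. Returns a list of (cx, cy)
    positions usable for the displacement computations described above.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    tracker = cv2.TrackerKCF_create()  # cv2.legacy.TrackerKCF_create on some builds
    tracker.init(frame, init_box)
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        if ok:
            x, y, w, h = box
            centers.append((x + w / 2.0, y + h / 2.0))
    cap.release()
    return centers
```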
  • Since an application scenario of the monitoring device may have an angular offset, the angle correction may be performed to determine the moving direction of each of the one or more moving objects (e.g., a pedestrian, a non-motorized vehicle) and to determine whether one of the one or more moving objects is a target object which should be given precedence with respect to a vehicle.
  • FIG. 12 is a diagram illustrating exemplary angles between lane lines and the horizontal direction according to some embodiments of the present disclosure. More descriptions about the angles between lane lines and the horizontal direction may be found in FIG. 12.
  • the displacements ⁇ x and ⁇ y of one of the one or more moving objects may be determined based on two image frames, wherein ⁇ x and ⁇ y may both be positive values.
• whether the one of the one or more moving objects is an object which should be given precedence with respect to a vehicle may be determined according to Equation (1) by setting a threshold angle θ. Specifically, when Equation (1) is satisfied, the one of the one or more moving objects may be determined as the target object. When Equation (1) is not satisfied, the one of the one or more moving objects may not be determined as the target object.
  • a first image frame and a second image frame sequentially captured by the monitoring device may be obtained.
  • the second image frame may be an image captured at a time point that is after a predetermined time interval from a time point when the first image frame is captured.
  • the first image frame and the second image frame may be images including the target object.
  • Whether the moving direction of the target object is a moving direction towards the middle of the road may be determined based on an orientation of a first position relative to a second position.
  • the first position may be a location of the target object included in the first image frame
  • the second position may be a location of the target object included in the second image frame.
• the first position may be a position of the target object in a first image frame that includes the target object (e.g., an image frame captured when the target object enters a monitoring region of the monitoring device, such as the predetermined region).
  • a preliminary moving direction of the target object may be determined based on the first position of the target object in the first image frame. For example, if the first position of the target object is in the vicinity of an outer edge of the predetermined region, the preliminary moving direction of the target object may be determined as a direction towards the middle of the road.
  • the second position of the target object in the second image frame may be determined by recording a position of the target object after a predetermined time interval. The moving direction of the target object may be determined based on the first position and the second position.
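• A minimal sketch of this two-frame direction test, assuming image coordinates in which the waiting region lies on one side of the road so that a horizontal displacement towards the other side means moving towards the middle of the road; names and coordinates are illustrative.

```python
def moving_direction(first_pos, second_pos, waiting_side="left"):
    """Classify a target object's moving direction from two positions.

    first_pos, second_pos: (x, y) image coordinates of the object in the
    first and second image frames. waiting_side says on which side of
    the road the waiting region lies ("left" or "right" in the image).
    """
    dx = second_pos[0] - first_pos[0]
    # If the waiting region is on the left of the road, moving right
    # (dx > 0) means moving towards the middle of the road.
    towards_middle = dx > 0 if waiting_side == "left" else dx < 0
    return "towards_middle" if towards_middle else "away_from_road"

# Example: a pedestrian in a left-side waiting region moving right.
print(moving_direction((100, 420), (112, 421)))  # -> "towards_middle"
```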
• a value indicating whether the target object should be given precedence with respect to a vehicle may be assigned to the target object based on the moving direction of the target object.
  • a signal lamp may be controlled to generate a prompt after determining that the one or more moving objects are in the predetermined region.
  • regions located on two ends of the crosswalk may be designated as waiting regions.
  • a region covering the crosswalk may be designated as a walking region according to different scenarios.
  • Each of the one or more moving objects (e.g., a pedestrian, a non-motorized vehicle) in the waiting region may be labeled with an identification number. If a moving object is in the waiting region, a signal light may be used to remind the vehicles about the moving object.
  • the signal light may emit a predetermined light, for example, a red light, a light that flashes with a predetermined frequency, or the like.
• the signal light may be used to generate a sound or light, and the sound and/or the light may remind the target object to pay attention to vehicles.
• a binding relationship between the target object and the target vehicle may be established, and an identifier (ID) may be assigned to the binding relationship.
• marking the state of the target vehicle as the violation state may include marking the state of the target vehicle corresponding to the ID associated with the target vehicle and the target object as the violation state.
  • the target object may be bound to a plurality of vehicles identified by the monitoring device to establish a plurality of binding relationships. When establishing the binding relationships, the target object may be bound to each of the vehicles.
  • the vehicles may include vehicles in all image frames captured by the monitoring device when the target object is in the waiting region.
  • an ID may be generated, and a state attribute of the ID (or corresponding to the target object and the vehicle) may be updated.
  • a state attribute associated with an ID may indicate whether a moving object associated with the ID should be given precedence with respect to a vehicle.
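• The binding relationship, its ID, and its state attribute could be represented as follows; this is a sketch under assumed data structures, not the disclosed implementation.

```python
import itertools
from dataclasses import dataclass, field

_id_counter = itertools.count(1)

@dataclass
class Binding:
    """One binding relationship between a moving object and a vehicle."""
    object_id: int
    vehicle_id: int
    binding_id: int = field(default_factory=lambda: next(_id_counter))
    # State attribute: whether the bound object should currently be
    # given precedence with respect to the bound vehicle.
    precedence: bool = False
    violation: bool = False

def bind_all(object_ids, vehicle_ids):
    """Bind each moving object to each identified vehicle."""
    return [Binding(o, v) for o in object_ids for v in vehicle_ids]

bindings = bind_all([7], [101, 102])
for b in bindings:
    b.precedence = True   # the target object starts crossing
```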
  • FIG. 15 is an exemplary image including a vehicle 1570 at a predetermined position at a crosswalk captured by a camera according to some embodiments of the present disclosure.
• a pedestrian 1560 is in a state in which the pedestrian should be given precedence with respect to the vehicle 1570.
  • the vehicle 1570 may be marked as a violation state if three images indicating that the vehicle 1570 crosses lines 1530, 1540, and 1550 are captured, which suggests that the vehicle 1570 does not stop for the pedestrian at the crosswalk.
  • the violation state of the target vehicle may be canceled if, according to images captured by the monitoring device, it is determined that a door of the target vehicle is opened and the target object disappears after a predetermined period after the door is opened (which suggests that the target object gets into the target vehicle) .
• a count of lanes between the target vehicle and the target object may be determined based on a distance between the target vehicle and the target object in a direction in which a moving object moves across the road (e.g., a direction as denoted by an arrow N in FIG. 13). If it is determined that the count of lanes between the target vehicle and the target object is zero, it may be determined that the target object disappears. It should be noted that any other methods may be used to identify the disappearance of the target object and determine whether the target object gets into the vehicle.
  • At least one of a license plate recognition operation, a speed detection operation, a window detection operation, or a face detection operation may be performed on the target vehicle, and a processing result may be recorded.
  • the recorded processing result may be used to acquire subsequent information and/or be compared with the subsequent information.
  • the subsequent information may include, for example, license plate information, driver information, etc., in a database (e.g., a vehicle database, a face recognition database, etc. ) .
  • the license plate recognition operation may be configured to upload license plate information of the target vehicle to a vehicle database and compare the license plate information of the target vehicle with license plate information in the vehicle database.
  • the speed detection operation may be configured to determine a current speed of the target vehicle based on a displacement of the target vehicle between two consecutive image frames. If the speed of the target vehicle exceeds a defined speed limit of the current road, the state of the target vehicle may be recorded as the violation state.
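• A sketch of such a displacement-based speed check, assuming a calibrated meters-per-pixel scale for the road plane and an illustrative speed limit; all names and values are hypothetical.

```python
def estimate_speed_kmh(pos_prev, pos_curr, frame_interval_s, meters_per_pixel):
    """Estimate a vehicle's speed from its displacement between two
    consecutive image frames.

    meters_per_pixel is assumed to come from a camera calibration that
    maps image distances to road-plane distances.
    """
    dx = (pos_curr[0] - pos_prev[0]) * meters_per_pixel
    dy = (pos_curr[1] - pos_prev[1]) * meters_per_pixel
    speed_ms = (dx ** 2 + dy ** 2) ** 0.5 / frame_interval_s
    return speed_ms * 3.6

# A vehicle moving 3 m in 0.2 s is travelling 54 km/h.
speed = estimate_speed_kmh((200, 300), (230, 300), 0.2, 0.1)
if speed > 50:   # illustrative defined speed limit of the current road
    print(f"violation: {speed:.0f} km/h")
```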
  • the window detection operation may be configured to detect a front windshield of the target vehicle. Additionally or alternatively, other violation behaviors (e.g., not wearing a seat belt) may be identified and marked.
• the face recognition operation may be configured to compare the face of the driver of the target vehicle with faces stored in a face recognition database to obtain profile information of the driver of the target vehicle.
  • FIG. 5 is a flowchart illustrating an exemplary process for capturing images associated with a vehicle that is uncourteous to a target object on a crosswalk according to some embodiments of the present disclosure.
  • the process 500 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
  • the processor 220 and/or the modules in FIG. 7 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules in FIG. 7 may be configured to perform the process 500.
• the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500 as illustrated in FIG. 5 and described below is not intended to be limiting.
  • a region at each end of a crosswalk on a road may be determined as a waiting region, and a region covering the crosswalk may be determined as a walking region.
  • one or more lane lines may be detected using a LaneNet algorithm or manually.
• one or more moving objects (e.g., a pedestrian, a non-motorized vehicle) and one or more vehicles in a plurality of image frames may be identified using a YOLO 3 model.
  • the one or more moving objects and vehicles may be tracked using a KCF algorithm for target tracking.
  • the image frames may be captured by a monitoring device near the crosswalk.
• since an application scenario of the monitoring device may have an angular offset, an angle correction may be performed to determine a moving direction of each of the one or more moving objects (e.g., a pedestrian, a non-motorized vehicle) and determine whether one of the one or more moving objects is a target object which should be given precedence with respect to a vehicle.
• displacements Δx and Δy of one of the one or more moving objects may be determined based on two image frames, wherein Δx and Δy may both be positive values.
• whether the one of the one or more moving objects is an object which should be given precedence with respect to a vehicle may be determined according to Equation (1) described in FIG. 4 by setting a threshold angle θ.
  • each of the one or more moving objects may be bound to each vehicle identified in the image frames.
• An ID may be assigned to each pair of a bound moving object and vehicle.
• a state attribute of each ID (or corresponding to each pair of a bound moving object and vehicle) may be updated simultaneously. If the state attribute of an ID indicates that the corresponding moving object should be given precedence with respect to a vehicle, and three images of the vehicle are captured at a front line, a middle line, and a back line, respectively, such as lines 1530, 1540, and 1550 shown in FIG. 15, the vehicle corresponding to the ID may be recorded as a vehicle in a violation state. In some embodiments, a vehicle being in a violation state may also be referred to as a target vehicle.
• each of the one or more moving objects (e.g., a pedestrian, a non-motorized vehicle) may be labeled with an identification number. If a moving object is in the waiting region, vehicles may be prompted by a signal light.
• an initial position of a specific moving object may be determined in an initial image frame including the specific moving object (e.g., an image frame captured when the specific moving object enters the waiting region), and a preliminary moving direction of the specific moving object may be determined based on the initial image frame.
  • a current position of the specific moving object may be recorded.
  • a moving direction of the specific moving object may be determined based on the current position and a previous position of the specific moving object in a previous image frame.
• a value indicating whether the specific moving object should be given precedence with respect to a vehicle may be assigned to the specific moving object based on the moving direction of the specific moving object. Then, the specific moving object may be bound to each vehicle identified in image frames including the specific moving object and the vehicle(s), and an ID may be assigned to each pair of a bound specific moving object and vehicle. A state attribute of each ID may be updated simultaneously.
• a vehicle in a violation state may also be referred to as a target vehicle.
  • FIG. 6 is a flowchart illustrating an exemplary process for determining whether a vehicle is not courteous to a moving object according to some embodiments of the present disclosure. As illustrated in FIG. 6, the process 600 may include one or more following operations.
  • the process 600 may be started for a predetermined region.
  • a target ID may be generated by binding a moving object with a vehicle.
  • the state attribute of the moving object and the corresponding vehicle may be updated simultaneously.
  • whether the vehicle satisfies a capturing condition may be determined.
• the capturing condition refers to a condition that the vehicle is not courteous to the moving object when the vehicle passes through the predetermined region.
  • the state of the vehicle may be recorded as a violation state.
  • the process 600 for the predetermined region may be ended.
  • a post-processing may be performed on the target vehicle.
  • the post-processing may include a license plate recognition, a speed detection, a window detection, a face detection, or the like, or any combination thereof.
  • the recorded face information of the driver of the target vehicle may be compared with information in a database of face information to obtain profile information of the driver of the target vehicle.
• a count of lanes between the target object (e.g., a pedestrian or a non-motorized vehicle) and the target vehicle may be determined based on a distance between the target object and the target vehicle in a direction in which a moving object crosses the road (e.g., a direction as denoted by an arrow N in FIG. 13). If it is determined that the count of lanes between the target object and the target vehicle is zero, a state (e.g., open or closed) of a door of the target vehicle may be detected. An alarm and/or the violation state of the target vehicle may be canceled if it is determined that the door of the target vehicle is opened and the target object disappears after a predetermined period after the door is opened (which suggests that the target object gets into the target vehicle).
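• The cancellation logic described above might be sketched as follows, assuming timestamps for the door-open and last-seen events and an illustrative grace period; all names and values are hypothetical.

```python
def should_cancel_violation(lane_count, door_opened_at, object_last_seen_at,
                            grace_period_s=10.0):
    """Decide whether to cancel a target vehicle's violation state.

    lane_count: count of lanes between the target vehicle and the target
        object along the crossing direction.
    door_opened_at / object_last_seen_at: timestamps in seconds; None if
        the event was not observed. grace_period_s is the predetermined
        period after the door opens.
    """
    if lane_count != 0:
        return False            # the object is not next to the vehicle
    if door_opened_at is None or object_last_seen_at is None:
        return False
    # The object disappeared within the grace period after the door
    # opened, which suggests it got into the vehicle.
    return 0 <= object_last_seen_at - door_opened_at <= grace_period_s
```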
• by performing the license plate recognition, the speed detection, the window detection, and/or the face detection on the target vehicle, it may be convenient to obtain subsequent information (e.g., face information of the driver of the target vehicle) associated with the target vehicle and compare the information with information in a database.
  • FIG. 7 is a block diagram illustrating an exemplary violation state marking device according to some embodiments of the present disclosure.
  • the violation state marking device 700 may include a determination module 772 and a marking module 774.
  • one or more components of the violation state marking device 700 may be integrated into the processing device 112.
  • the determination module 772 may be configured to determine a target object which should be given precedence with respect to a vehicle from one or more moving objects by performing an angle correction on the one or more moving objects.
• the one or more moving objects may include one or more objects staying or moving in a predetermined region on a road.
  • the predetermined region may be referred to as a waiting region as described elsewhere in the present disclosure (e.g., FIG. 4 and the descriptions thereof) .
  • the marking module 774 may be configured to mark a state of a target vehicle as a violation state when it is determined that a moving direction of the target object is a moving direction towards the middle of the road and the target vehicle passing through the predetermined region is not courteous to the target object.
  • the determination module 772 may determine the target object which should be given precedence with respect to a vehicle from the one or more moving objects by performing the angle correction on the one or more moving objects according to one or more operations described below.
  • An average angle between at least two lane lines captured by a monitoring device and a horizontal direction (e.g., an X-axis direction of an image including the moving object (s) ) may be determined.
  • a displacement of a first moving object included in the one or more moving objects may be determined based on two image frames captured by the monitoring device.
  • the first moving object may be determined as the target object which should be given precedence with respect to a vehicle when it is determined that the average angle and the displacement satisfy Equation (1) described in FIG. 4.
  • the marking module 774 may determine that the moving direction of the target object is a moving direction towards the middle of the road according to one or more operations described below.
  • a first image frame and a second image frame sequentially captured by the monitoring device may be obtained.
  • the second image frame may be an image captured at a time point that is after a predetermined time interval from a time point when the first image frame is captured.
  • the first image frame and the second image frame may be images including the target object.
  • Whether the moving direction of the target object is a moving direction towards the middle of the road may be determined based on an orientation of a first position relative to a second position.
  • the first position may be a location of the target object included in the first image frame
• the second position may be a location of the target object included in the second image frame.
  • the violation state marking device 700 may further be configured to control a signal lamp to generate a prompt after determining that the one or more moving objects are in the predetermined region.
• the violation state marking device 700 may further be configured to establish a binding relationship between the target object and the target vehicle and assign an ID to the binding relationship. When marking the state of the target vehicle as the violation state, the violation state marking device 700 may mark the state of the target vehicle corresponding to the target ID as the violation state.
  • the violation state marking device 700 may further be configured to cancel the violation state of the target vehicle if, according to images captured by the monitoring device, it is determined that a door of the target vehicle is opened and the target object disappears after a predetermined period after the door is opened.
  • the violation state marking device 700 may further be configured to perform a post-processing on the target vehicle.
  • the post-processing may include a license plate recognition, a speed detection, a window detection, a face recognition, or the like, or any combination thereof.
  • the violation state marking device 700 may record one or more results of the post-processing and/or compare the one or more results with information in one or more databases.
  • the modules in the violation state marking device 700 may be connected to or communicate with each other via a wired connection or a wireless connection.
  • the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
• the wireless connection may include a Local Area Network (LAN), a Wide Area Network (WAN), a Bluetooth connection, a ZigBee connection, a Near Field Communication (NFC) connection, or the like, or any combination thereof.
  • FIG. 8 is a block diagram illustrating an exemplary processing device 112 according to some embodiments of the present disclosure. As illustrated in FIG. 8, the processing device 112 may include an identification module 810, a moving direction determination module 820, a target object determination module 830, and a traffic violation behavior determination module 840.
• the identification module 810 may be configured to identify one or more candidate objects within a predetermined region associated with a crosswalk at a road. For example, the identification module 810 may identify the one or more candidate objects from one or more images (or image frames of a video) including the one or more candidate objects captured by a camera (e.g., the image acquisition device 130). The identification module 810 may further be configured to identify one or more vehicles within a predetermined range of the crosswalk. For example, the identification module 810 may identify the one or more vehicles from one or more images (or image frames of a video) including the one or more vehicles captured by the camera (e.g., the image acquisition device 130). In some embodiments, the identification module 810 may identify the one or more candidate objects and/or the one or more vehicles using an object detection algorithm (e.g., a YOLO 3 algorithm).
  • the moving direction determination module 820 may be configured to determine a moving direction of each of the one or more candidate objects with respect to the road. In some embodiments, the moving direction determination module 820 may determine whether an initial position of the candidate object is in the vicinity of an outer edge of the predetermined region. In response to a determination that the initial position is not in the vicinity of the outer edge of the predetermined region, the moving direction determination module 820 may determine a preliminary moving direction for the candidate object. In some embodiments, the moving direction determination module 820 may determine a plurality of intermediate positions of the candidate object within the predetermined region based on a plurality of sequential images associated with the candidate object, the plurality of sequential images being sequentially captured at a plurality of intermediate time points immediately after the initial time point.
  • the moving direction determination module 820 may determine a plurality of intermediate moving directions of the candidate object based on a plurality of positions including the initial position and the plurality of intermediate positions, wherein each of the plurality of intermediate moving directions corresponds to two adjacent positions of the plurality of positions.
  • the moving direction determination module 820 may determine the moving direction of the candidate object based at least in part on the plurality of intermediate moving directions. For example, the moving direction determination module 820 may determine the moving direction of the candidate object based on the preliminary moving direction and the plurality of intermediate moving directions.
  • the target object determination module 830 may be configured to determine one or more target objects from the one or more candidate objects based on one or more moving directions corresponding to the one or more candidate objects.
  • the target object determination module 830 may determine whether the moving direction of the candidate object is a direction away from the road or a direction crossing the road. In response to a determination that the moving direction is a direction crossing the road, the target object determination module 830 may designate the candidate object as a target object. In some embodiments, in response to a determination that the moving direction of the candidate object is a direction away from the road, the target object determination module 830 may designate the candidate object as a negative object.
• the traffic violation behavior determination module 840 may be configured to obtain, from a camera, one or more images associated with each of the one or more vehicles at one or more predetermined positions. In some embodiments, for each of the one or more target objects, the traffic violation behavior determination module 840 may generate a binding relationship between each of the one or more vehicles and the target object. In some embodiments, the traffic violation behavior determination module 840 may generate a binding relationship between each of the one or more vehicles and each negative object. The traffic violation behavior determination module 840 may obtain the one or more images associated with each of the one or more vehicles based on the binding relationship.
  • the traffic violation behavior determination module 840 may further be configured to determine whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images. In some embodiments, for a vehicle, the traffic violation behavior determination module 840 may determine whether one or more images associated with the vehicle are captured within a first predetermined time period. In response to a determination that the one or more images associated with the vehicle are captured within the first predetermined time period, the traffic violation behavior determination module 840 may determine that the vehicle has a traffic violation behavior associated with the one or more target objects.
• the traffic violation behavior determination module 840 may determine whether a door of the vehicle is open and whether at least one of the one or more target objects disappears within a second predetermined time period. If the door of the vehicle is open and the at least one of the one or more target objects disappears, the traffic violation behavior determination module 840 may speculate that the at least one target object gets on the vehicle, and determine that the vehicle does not have the traffic violation behavior associated with the one or more target objects.
  • the modules in the processing device 112 may be connected to or communicate with each other via a wired connection or a wireless connection.
  • the wired connection may include a metal cable, an optical cable, a hybrid cable, or the like, or any combination thereof.
• the wireless connection may include a Local Area Network (LAN), a Wide Area Network (WAN), a Bluetooth connection, a ZigBee connection, a Near Field Communication (NFC) connection, or the like, or any combination thereof.
  • the moving direction determination module 820 and the target object determination module 830 may be integrated into a single module which may determine both the one or more moving directions of the one or more candidate objects and the one or more target objects.
  • the processing device 112 may further include a transmission module configured to transmit signals (e.g., an electrical signal, an electromagnetic signal) to one or more components (e.g., the terminal device 140) of the traffic monitoring system 100.
  • the processing device 112 may include a storage module (not shown) used to store information and/or data (e.g., a map of a region, position information of a plurality of crosswalks and/or a plurality of monitoring devices in the region) associated with the traffic monitoring system 100.
  • FIG. 9 is a flowchart illustrating an exemplary process for determining whether a vehicle has a traffic violation behavior according to some embodiments of the present disclosure.
  • process 900 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
  • the processor 220 and/or the modules in FIG. 8 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules in FIG. 8 may be configured to perform the process 900.
• the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 900 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 900 as illustrated in FIG. 9 and described below is not intended to be limiting.
  • the processing device 112 may identify one or more candidate objects within a predetermined region associated with a crosswalk at a road.
  • a candidate object refers to an object that is located in the predetermined region.
  • the candidate object may include a pedestrian (e.g., a person traveling on foot or by a wheelchair, a scooter, or a skateboard) , a bicyclist, a driver of a non-motorized vehicle (e.g., a motorbike) , or the like, or any combination thereof.
  • the predetermined region associated with the crosswalk refers to a waiting region in which the candidate object (s) should stay to wait to cross the road.
  • the predetermined region may be a region including one or more zebra crossings at an end of the crosswalk.
  • the processing device 112 may identify the one or more candidate objects from one or more images (or image frames of a video) including the one or more candidate objects captured by a camera (e.g., the image acquisition device 130) .
• the camera may be surveillance equipment mounted near the crosswalk and configured to capture images of the crosswalk at regular or irregular intervals.
• the camera may capture the image(s) at a regular interval (e.g., every 0.1 seconds, 0.2 seconds, 0.3 seconds, 0.4 seconds, 0.5 seconds, 0.6 seconds, etc.).
  • the processing device 112 may identify the one or more candidate objects according to an object detection algorithm (also referred to as a first object detection algorithm) .
  • the processing device 112 may determine a position of the one or more candidate objects at the time point when the image of the candidate object (s) is captured.
  • Exemplary first object detection algorithms may include a YOLO model (e.g., YOLO 1, YOLO 2, or YOLO 3) , a region-convolutional neural network (R-CNN) algorithm, a scale normalization for image pyramids (SIPN) algorithm, a detection with enriched semantics (DES) algorithm, a scale-transferrable detection network (STDN) algorithm, a fast R-CNN algorithm, a faster R-CNN algorithm, a single shot multi-box detector (SSD) algorithm, or the like, or any combination thereof.
  • the processing device 112 may track the candidate object based on an image sequence that includes the candidate object using a target tracking algorithm.
  • the image sequence of the candidate object may include a plurality of images including the candidate object sequentially captured by the camera as aforementioned.
  • the processing device 112 may determine a position of the candidate object based on each image in the image sequence of the candidate object, so as to determine a moving trajectory of the candidate object.
  • Exemplary target tracking algorithms may include a kernelized correlation filters (KCF) algorithm, a multiple instance learning (MIL) algorithm, a minimum output sum of squared error (MOSSE) algorithm, a tracking-learning detection (TLD) algorithm, a structured output tracking with kernels (Struck) algorithm, a circulant structure of tracking-by-detection with kernels (CSK) algorithm, a hierarchical convolutional features (HCF) algorithm, a multi-domain convolutional neural networks (MDNet) algorithm, or the like, or any combination thereof.
  • the processing device 112 may determine a moving direction of each of the one or more candidate objects with respect to the road.
  • a moving direction of a candidate object with respect to the road may indicate a movement of the candidate object on the road.
  • the moving direction of the candidate object with respect to the road may include a direction away from the middle of the road (also referred to as a direction away from the road) , a direction towards the middle of the road (also referred to as a direction crossing the road) , a direction along the road (i.e., a direction parallel with the extension direction of the road) , or the like.
  • the processing device 112 may determine a plurality of positions of the candidate object within the predetermined region based on a plurality of images including the candidate object.
  • the plurality of images of the candidate object may be captured at a plurality of time points within a time interval after the candidate object enters the predetermined region.
  • the processing device 112 may determine a plurality of intermediate moving directions of the candidate object based on the plurality of positions, wherein each of the plurality of intermediate moving directions may correspond to two adjacent positions of the plurality of positions.
  • the processing device 112 may select one or more candidate intermediate moving directions satisfying a predetermined condition from the plurality of intermediate moving directions.
  • the processing device 112 may determine the moving direction of the candidate object based at least in part on the plurality of candidate intermediate moving directions. More descriptions regarding the selection of the candidate intermediate moving directions may be found elsewhere in the present disclosure (e.g., FIG. 11 and the descriptions thereof) .
  • the processing device 112 may determine an initial position of the candidate object within the predetermined region based on an initial image of the candidate object captured at an initial time point. The processing device 112 may determine a preliminary moving direction of the candidate object based on the initial position of the candidate object. In some embodiments, the processing device 112 may determine a plurality of intermediate positions of the candidate object within the predetermined region based on a plurality of sequential images associated with the candidate object. The processing device 112 may further determine a plurality of intermediate moving directions of the candidate object based on a plurality of positions including the initial position and the plurality of intermediate positions. The processing device 112 may then determine the moving direction of the candidate object based at least in part on the plurality of intermediate moving directions. More descriptions regarding the determination of the moving direction of a candidate object may be found elsewhere in the present disclosure (e.g., FIG. 10 and the descriptions thereof) .
  • the processing device 112 may determine one or more target objects from the one or more candidate objects based on one or more moving directions of the one or more candidate objects.
  • the processing device 112 may determine whether the moving direction of the candidate object is a direction away from the road or a direction crossing the road. In response to a determination that the moving direction is a direction crossing the road, the processing device 112 may designate the candidate object as a target object.
  • the processing device 112 may cause a signal device to generate a reminder signal.
  • the reminder signal may be configured to remind a driver of a vehicle that is about to pass through the crosswalk to slow down (and/or stop) , and/or remind the target object (s) to be careful.
  • the signal device may include a signal light, a display, a microphone, or the like, or any combination thereof.
  • the reminder signal may include a light, a sound (e.g., a beep) , a message, or the like, or any combination thereof.
• in response to identifying the one or more candidate objects, the processing device 112 may also cause the signal device to generate the reminder signal.
  • the processing device 112 may designate the candidate object as a negative object.
  • the processing device 112 may delete information associated with negative objects in traffic monitoring at a regular or irregular interval to reduce data computational burdens of the processing device 112.
• the processing device 112 may delete information of negative objects at a regular interval (e.g., every 3 seconds, 5 seconds, 7 seconds, 10 seconds, 15 seconds, etc.).
  • the processing device 112 may delete information of negative objects if a total amount of data is greater than a threshold (e.g., 70%, 75%, 80%, 85%, 90%, etc. of the capacity of a storage device that stores the data) .
  • the processing device 112 may delete information of a target object after the target object has passed the road.
  • the processing device 112 may identify one or more vehicles within a predetermined range of the crosswalk.
• the predetermined range may include at least a portion of a region that can be captured by the camera. In some embodiments, the predetermined range may be set according to a default setting of the traffic monitoring system 100 or set manually by a user or operator via the terminal device 140.
  • the processing device 112 may identify the one or more vehicles from a plurality of images of the vehicle (s) using a second object detection algorithm.
  • the second object detection algorithm may be different from or the same as the first object detection algorithm.
  • the first object detection algorithm may be a YOLO 3 algorithm
  • the second object detection algorithm may be an R-CNN algorithm.
  • both the first object detection algorithm and the second object detection algorithm may be the YOLO 3 algorithm.
  • the first object detection algorithm and the second object detection algorithm are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • the processing device 112 may direct the camera to capture one or more images associated with the vehicle at one or more predetermined positions.
• the one or more images associated with a vehicle at the one or more predetermined positions may include one or more images that are the same as or different from the one or more images including the one or more candidate objects as described in connection with FIG. 9.
  • the one or more predetermined positions may be set according to a default setting of the traffic monitoring system 100 or set manually by a user or operator via the terminal device 140.
  • the one or more predetermined positions may be defined by one or more lines in the predetermined range, each of which may be perpendicular to (or substantially perpendicular to) the crosswalk and have a distinctive position relative to the crosswalk.
  • three predetermined positions may be defined by a first line, a second line, and a third line, each of which is within the predetermined range and perpendicular to the crosswalk.
  • the first line may be within a first predetermined distance range of a side of a crosswalk area (e.g., a side where vehicles enter the crosswalk area) .
  • the second line may be within the crosswalk area.
  • the third line may be within a third predetermined distance range of the other side of the crosswalk area (e.g., a side where vehicles exit the crosswalk area) .
  • the crosswalk area refers to a walking region on which a moving object can walk to cross the road.
• a solid box 1510 represents a crosswalk area where a pedestrian 1560 is located.
  • Lines 1530, 1540, and 1550 represent the first line, the second line, and the third line, respectively.
  • the camera may automatically capture a first image of the vehicle 1570 at the line 1530, a second image of the vehicle 1570 at the line 1540, and a third image of the vehicle 1570 at the line 1550.
  • the camera may capture images (e.g., a video) of the crosswalk area 1510 continuously or intermittently (e.g., periodically) , and the first, second, and third images of the vehicle 1570 may be selected from the images by the processing device 112 or another computing device (e.g., a processor of the camera) .
  • the processing device 112 may generate a binding relationship between each of the one or more vehicles and the target object.
  • the processing device 112 may generate a binding relationship between each of the one or more vehicles and each negative object as described in connection with operation 930.
• a binding relationship between a candidate object (e.g., a target object or a negative object) and a vehicle may be generated by assigning an attribute value to the candidate object. The attribute value assigned to the candidate object may indicate a type of the candidate object with respect to the vehicle, for example, being a target object or a negative object.
  • the processing device 112 may update the binding relationship in real-time or periodically.
  • the processing device 112 may update the attribute value of the candidate object.
  • the processing device 112 may generate an ID for each binding relationship between a candidate object and a vehicle.
• the processing device 112 may obtain the one or more images of the vehicle according to the corresponding ID and the corresponding attribute value of the candidate object (e.g., a target or negative object).
• the processing device 112 may direct the camera to capture the one or more images associated with the vehicle at the one or more predetermined positions.
  • the camera may capture images (e.g., a video) of the vehicle continuously or intermittently (e.g., periodically) , and a first particular image, a second particular image, and a third particular image of the vehicle may be selected from the images of the vehicle by the processing device 112 or another computing device (e.g., a processor of the camera) .
• the camera may transmit the image(s) corresponding to the binding relationship with a specific ID to a storage device (e.g., the storage device 150), a terminal device (e.g., the terminal device 140), etc. A user may query the image(s) based on the specific ID.
  • the processing device 112 may determine whether the vehicle has a traffic violation behavior associated with the one or more target objects based on the one or more images.
• the processing device 112 may determine whether the one or more images associated with the vehicle at the one or more predetermined positions are captured within a first predetermined time period.
  • the first predetermined time period may be set according to a default setting of the traffic monitoring system 100 or set manually by a user or operator via the terminal device 140.
  • the processing device 112 may determine that the vehicle has a traffic violation behavior associated with the one or more target objects.
• a first image of the vehicle at the first line, a second image of the vehicle at the second line, and a third image of the vehicle at the third line may be sequentially captured within the first predetermined time period, which suggests that the vehicle passes through the crosswalk without stopping for the one or more target objects.
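• A sketch of this three-line capture test, assuming per-line capture timestamps and an illustrative first predetermined time period; the names and values are hypothetical.

```python
def has_violation(capture_times, first_predetermined_period_s=4.0):
    """Check whether images of a vehicle at the first, second, and third
    lines were all captured, in order, within the first predetermined
    time period, which suggests the vehicle crossed the crosswalk
    without stopping for the target object(s).

    capture_times: dict mapping line name -> capture timestamp (s),
    missing or None for lines the vehicle has not crossed yet.
    """
    times = [capture_times.get(k) for k in ("first", "second", "third")]
    if any(t is None for t in times):
        return False                      # not all three images exist
    if not times[0] <= times[1] <= times[2]:
        return False                      # images not in crossing order
    return times[2] - times[0] <= first_predetermined_period_s

print(has_violation({"first": 12.0, "second": 13.1, "third": 14.4}))  # True
```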
  • the processing device 112 may determine that the vehicle has a traffic violation behavior associated with the one or more target objects and is deemed as a target vehicle.
  • the processing device 112 may further verify whether the target object gets into the vehicle.
  • the processing device 112 may determine whether a door of the vehicle is open and whether at least one of the one or more target objects disappears within a second predetermined time period.
  • the second predetermined time period may be set according to a default setting of the traffic monitoring system 100 or set manually by a user or operator via the terminal device 140.
  • the second predetermined time period may be the same as or different from the first predetermined time period. For example, the second predetermined time period may be longer than the first predetermined time period.
• the processing device 112 may speculate that the at least one target object gets on the vehicle, and determine that the vehicle does not have the traffic violation behavior associated with the one or more target objects. More descriptions for determining whether a vehicle has a traffic violation behavior may be found elsewhere in the present disclosure (e.g., FIG. 14 and the descriptions thereof).
  • the processing device 112 may perform one or more additional operations with respect to the vehicle.
• the additional operation(s) may include, for example, marking a state of the vehicle as a violation state, detecting a license plate, a speed, a window, a face, or a driving behavior associated with the vehicle based on the one or more images associated with the vehicle, or the like, or any combination thereof.
  • the processing device 112 may compare a result of an additional operation with information in one or more databases.
• the processing device 112 may determine face information of the driver of the vehicle by analyzing the image(s) associated with the vehicle, and compare the face information with information in a driver database, so as to obtain profile information of the driver.
• the processing device 112 may determine license plate information of the vehicle by analyzing the image(s) associated with the vehicle, and compare the license plate information of the vehicle with information in a vehicle database, so as to obtain vehicle and/or driver information.
  • the processing device 112 may transmit a processing result regarding a vehicle (e.g., a determination result as to whether the vehicle has a traffic violation behavior, a result of an additional operation) to a user device (e.g., the terminal device 140) or a storage device (e.g., the storage device 150) .
  • a processing result may be transmitted to a terminal device or a storage device of a traffic management department, a traffic data center, an alarm center, the terminal device 140 or the like, or any combination thereof.
  • one or more other optional operations may be added elsewhere in the process 900, and/or one or more operations described above may be omitted.
  • the processing device 112 may store information and/or data (e.g., the predetermined region, the one or more images associated with a vehicle, etc. ) associated with the traffic monitoring system 100 in a storage device (e.g., the storage device 150) disclosed elsewhere in the present disclosure.
  • operation 920 and operation 930 may be combined into a single operation in which the processing device 112 may both determine the moving direction of each of the one or more candidate objects and determine the one or more target objects.
  • FIG. 10 is a flowchart illustrating an exemplary process for determining a moving direction of a candidate object with respect to a road according to some embodiments of the present disclosure.
  • process 1000 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
• the processing device 112 (e.g., the processor 220, the modules in FIG. 8) may execute the set of instructions, and when executing the instructions, the processing device 112 may be configured to perform the process 1000.
  • the operations of the illustrated process presented below are intended to be illustrative.
• the process 1000 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1000 as illustrated in FIG. 10 and described below is not intended to be limiting. In some embodiments, one or more operations of the process 1000 may be performed to achieve at least part of operation 920 as described in connection with FIG. 9. For example, for each of the one or more candidate objects as described in connection with 920, the process 1000 may be performed to determine the moving direction of the candidate object with respect to a road.
  • the processing device 112 may determine an initial position of the candidate object within a predetermined region associated with a crosswalk based on an initial image captured at an initial time point.
  • the predetermined region may be a waiting region at a road where the crosswalk is located as described in elsewhere of the present disclosure (e.g., FIG. 4 and FIG. 9 and the descriptions thereof) .
• the initial image captured at the initial time point refers to the first image that is captured by a camera and includes the candidate object after the candidate object enters the predetermined region. For example, if the camera captures an image every 0.2 seconds and the candidate object enters the predetermined region at 10:00, the initial image may be captured at 0.2 seconds after 10:00.
• the initial position of the candidate object may be a position of the candidate object at 0.2 seconds after 10:00.
  • the processing device 112 may determine whether the initial position of the candidate object is in the vicinity of an outer edge of the predetermined region.
  • the outer edge of the predetermined region refers to an edge of the predetermined region that is further from the centerline of the road than any other edge (e.g., an inner edge) of the predetermined region.
  • a candidate object may be regarded as being in the vicinity of the outer edge of the predetermined region if, for example, its distance to the outer edge is smaller than its distance to an edge opposite to the outer edge.
  • the predetermined region may be a quadrilateral waiting area on the left side of the road.
  • the left side of the predetermined region may be further from the centerline of the road than any other sides, and regarded as an outer edge of the predetermined region. If a distance between a candidate object and the left side is smaller than a distance between the candidate object and the right side of the predetermined region, the candidate object may be regarded as being in the vicinity of the outer edge of the predetermined region.
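• A minimal sketch of this vicinity test for a left-side quadrilateral waiting region, comparing distances to the outer (left) and inner (right) edges; the coordinates are illustrative.

```python
def near_outer_edge(x, outer_edge_x, inner_edge_x):
    """Decide whether a candidate object is in the vicinity of the outer
    edge of a quadrilateral waiting region on the left side of the road.

    x: horizontal image coordinate of the candidate object's initial
    position; outer_edge_x / inner_edge_x: x-coordinates of the region's
    left (outer) and right (inner) edges, with outer_edge_x < inner_edge_x.
    """
    return abs(x - outer_edge_x) < abs(x - inner_edge_x)

# An object entering at x=105 in a region spanning x=100..180 is near
# the outer edge, so no preliminary direction is assigned.
print(near_outer_edge(105, 100, 180))  # True
```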
• in response to a determination that the initial position is in the vicinity of the outer edge of the predetermined region, the process 1000 may proceed to operation 1040. In response to a determination that the initial position is not in the vicinity of the outer edge of the predetermined region, the process 1000 may proceed to operation 1030.
  • the processing device 112 may determine a preliminary moving direction of the candidate object.
  • the preliminary moving direction may be a direction away from the road.
  • the processing device 112 may determine a plurality of intermediate positions of the candidate object within the predetermined region based on a plurality of sequential images associated with the candidate object.
  • the plurality of sequential images may be sequentially captured by the camera at a plurality of intermediate time points immediately after the initial time point. Each of the plurality of intermediate time points may correspond to an intermediate position of the candidate object.
  • the plurality of sequential images may be images captured within a predetermined time period, such as 1 second, 1.5 seconds, 2 seconds, 2.5 seconds, etc. For example, the camera may capture an image every 0.2 seconds within 2 seconds after the initial time point, thereby generating 10 images.
  • the processing device 112 may determine a plurality of intermediate moving directions of the candidate object based on a plurality of positions including the initial position and the plurality of intermediate positions.
  • Each of the plurality of intermediate moving directions may correspond to two adjacent positions of the plurality of positions.
• each pair of adjacent positions of the plurality of positions may include a first position corresponding to an earlier time point and a second position corresponding to a later time point.
  • the processing device 112 may determine a direction from the first position to the second position as the intermediate moving direction corresponding to the pair of adjacent positions.
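• A sketch of computing the intermediate moving directions from adjacent positions, assuming (x, y) image coordinates ordered by capture time; the coordinates are illustrative.

```python
def intermediate_directions(positions):
    """Compute an intermediate moving direction for every pair of
    adjacent positions (earlier position first).

    positions: list of (x, y) tuples ordered by capture time.
    Returns a list of (dx, dy) direction vectors.
    """
    return [
        (x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(positions, positions[1:])
    ]

# Five positions (initial + four intermediate) give four directions.
print(intermediate_directions([(0, 0), (3, 1), (5, 1), (4, 2), (8, 3)]))
```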
  • the processing device 112 may determine the moving direction of the candidate object based at least in part on the plurality of intermediate moving directions.
• the processing device 112 may select one or more candidate intermediate moving directions satisfying a predetermined condition from the plurality of intermediate moving directions. For example, an intermediate moving direction may be selected as a candidate intermediate moving direction if an angle between the intermediate moving direction and a certain direction (e.g., a direction as denoted by an arrow N in FIG. 13) is less than a threshold angle, such as 50°, 45°, 40°, 35°, 25°, 15°, etc. More descriptions regarding selecting a candidate intermediate moving direction may be found elsewhere in the present disclosure (e.g., FIG. 11 and the descriptions thereof).
  • the processing device 112 may classify the candidate intermediate moving direction as a target moving direction crossing the road or a negative moving direction away from the road.
  • the predetermined region may be on the left side of the road. If for a certain pair of adjacent positions, the second position is on the right side of the first position, the processing device 112 may determine the candidate intermediate moving direction as the target moving direction crossing the road. If the second position is on the left side of the first position, the processing device 112 may determine the candidate intermediate moving direction as the negative moving direction away from the road.
• the processing device 112 may determine the moving direction of the candidate object based on a classification result of the candidate intermediate moving directions (i.e., the target moving direction(s) and the negative moving direction(s)). For example, the processing device 112 may determine a first count of target moving directions and a second count of negative moving directions, and designate a direction corresponding to the larger count among the first and second counts as the moving direction of the candidate object. As another example, a first value (e.g., 1) may be assigned to a target moving direction and a second value (e.g., 0) may be assigned to a negative moving direction. The processing device 112 may determine a weighted average value of all of the candidate intermediate moving directions.
  • the processing device 112 may determine a direction crossing the road as the moving direction of the candidate object if the weighted average value is greater than a threshold value.
  • the classification result of the candidate intermediate moving directions may also refer to an average moving direction of the candidate intermediate moving directions.
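• A sketch that fuses the candidate intermediate moving directions using the vote/weighted-average scheme described above; the uniform weighting and the threshold value are illustrative choices.

```python
def classify_moving_direction(candidate_dirs, waiting_side="left",
                              threshold=0.5):
    """Fuse candidate intermediate moving directions into one decision.

    candidate_dirs: list of (dx, dy) vectors that already passed the
    angle filter. For a left-side waiting region, dx > 0 means the
    second position is on the right of the first (crossing the road).
    """
    sign = 1 if waiting_side == "left" else -1
    # Assign 1 to a target (crossing) direction, 0 to a negative one,
    # then take the (uniformly weighted) average of the values.
    votes = [1 if sign * dx > 0 else 0 for dx, _ in candidate_dirs]
    weighted_average = sum(votes) / len(votes)
    return "crossing" if weighted_average > threshold else "away"

print(classify_moving_direction([(3, 1), (2, 0), (-1, 1), (4, 2)]))  # crossing
```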
  • the processing device 112 may determine the moving direction of the candidate object based on the intermediate moving directions and the preliminary moving direction in a similar manner as how the moving direction is determined based on the intermediate moving directions as aforementioned.
  • one or more operations may be omitted and/or one or more additional operations may be added.
  • operations 1010, 1020 and 1030 may be omitted.
  • two or more operations may be integrated into a single operation.
  • operation 1040 and operation 1050 may be combined into a single operation.
  • FIG. 11 is a flowchart illustrating an exemplary process for selecting a candidate intermediate moving direction according to some embodiments of the present disclosure.
  • process 1100 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
  • the processor 220 and/or the modules in FIG. 8 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules in FIG. 8 may be configured to perform the process 1100.
  • the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 1100 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1100 as illustrated in FIG. 11 and described below is not intended to be limiting.
  • the candidate intermediate moving direction described elsewhere in the present disclosure may be selected from the intermediate moving directions by performing the process 1100. For example, for each of the intermediate moving directions as described in connection with operation 1060, the process 1100 may be performed to determine whether the intermediate moving direction can be selected as a candidate intermediate moving direction.
  • the processing device 112 may identify one or more lane lines associated with a crosswalk.
  • the processing device 112 may identify the one or more lane lines based on an image of a candidate object including the crosswalk captured by a camera (e.g., the image acquisition device 130) .
  • the processing device 112 may identify the one or more lane lines using a lane line detection algorithm.
  • exemplary lane line detection algorithms may include a LaneNet model, a spatial convolution neural network (SCNN) , a vanishing point guided network (VPGNET) , or the like.
  • the one or more lane lines may be determined manually. For example, a user or operator may mark the lane line (s) from the image via a user interface of a terminal (e.g., the terminal device 140) .
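  • As a purely illustrative sketch, a classical Hough-transform detector may stand in for the learned models named above (the LaneNet, SCNN, and VPGNET interfaces are not part of this disclosure); the OpenCV parameters below are assumed defaults, not values from the disclosure:

      import cv2
      import numpy as np

      def detect_lane_lines(image: np.ndarray):
          """Return candidate lane-line segments as (x1, y1, x2, y2) tuples."""
          gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 50, 150)
          segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                     threshold=80, minLineLength=60,
                                     maxLineGap=10)
          return [] if segments is None else [tuple(s[0]) for s in segments]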
  • the processing device 112 may determine an average angle between the one or more lane lines and a horizontal direction of the image of the candidate object.
  • the processing device 112 may determine an angle between each of the one or more lane lines and the horizontal direction.
  • the processing device 112 may further determine the average angle based on the angle (s) between the one or more lane lines and the horizontal direction.
  • the horizontal direction may be a direction as indicated by an arrow A in FIG. 12.
  • angles between lane lines 1212, 1214, and 1216 and the horizontal direction may be denoted as α1, α2, and α3, respectively.
  • An average angle ᾱ may be determined as (α1 + α2 + α3)/3.
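  • A minimal sketch of this average-angle computation, assuming each lane line is given by two endpoints in image coordinates:

      import math
      from typing import List, Tuple

      def average_lane_angle(lines: List[Tuple[float, float, float, float]]) -> float:
          """Average the angles between the lane lines and the horizontal axis."""
          angles = []
          for x1, y1, x2, y2 in lines:
              # Fold into [0, 180) degrees so the endpoint order of a
              # line does not change its angle.
              angles.append(math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0)
          return sum(angles) / len(angles)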
  • the processing device 112 may determine an angle between an intermediate moving direction of the candidate object and the horizontal direction of the image of the candidate object.
  • the intermediate moving direction of the candidate object may correspond to two adjacent positions of a plurality of positions as described in connection with operation 1050 in FIG. 10.
  • the intermediate moving direction may be determined based on a pair of adjacent positions including a first position (x1, y1) corresponding to an earlier time point and a second position (x2, y2) corresponding to a later time point.
  • the processing device 112 may determine an angle between the intermediate moving direction and the horizontal direction as arctan(Δy/Δx), where Δx = x2 − x1 and Δy = y2 − y1, such as an angle β in FIG. 13.
  • the processing device 112 may determine a reference angle based on the average angle between the one or more lane lines and the horizontal direction and the angle between the intermediate moving direction and the horizontal direction. For example, the processing device 112 may determine the reference angle as |β − ᾱ − θ|, where ᾱ denotes the average angle, β denotes the angle of the intermediate moving direction, and θ denotes a predetermined angle.
  • the predetermined angle θ may be set according to a default setting of the traffic monitoring system 100 or manually by a user via the terminal device 140. For example, θ may be equal to one of 85°, 87°, 89°, 90°, 92°, etc.
  • the processing device 112 may determine a moving distance between the two adjacent positions corresponding to the intermediate moving direction in a direction crossing a road where the crosswalk is located (e.g., a direction as denoted by an arrow N in FIG. 13) .
  • the processing device 112 may determine the moving distance between the first position (x1, y1) and the second position (x2, y2) as the length of the projection of the displacement (Δx, Δy) onto the direction N (e.g., |Δy·cos ᾱ − Δx·sin ᾱ| when the direction N is perpendicular to the average lane direction).
  • the moving distance may be an actual distance in the physical space or a pixel distance in an image.
  • the processing device 112 may determine whether the reference angle is less than a threshold angle and whether the moving distance is larger than a threshold distance.
  • the threshold angle and/or the threshold distance may be determined by the processing device 112 (or another computing device) according to different scenarios.
  • the threshold angle may be associated with an orientation of the camera when capturing images, a field of view (FOV) of the camera, a distance between the camera and the candidate object, etc.
  • the threshold distance may be associated with a resolution of the camera.
  • the threshold distance may be a threshold pixel distance denoted by a count of pixels.
  • the threshold pixel distance may correspond to a certain distance (e.g., 0.1 m, 0.2 m, 0.4 m, 0.6 m, 0.8 m, 1 m, etc. ) in the physical space.
  • the certain distance may correspond to a greater threshold pixel distance if the camera has a higher resolution.
  • the threshold angle and/or the threshold distance may be a default setting of the processing device 112 or set by a user via the terminal device 140.
  • the threshold angle may be set as one of 15°, 25°, 35°, 40°, 45°, etc.
  • the threshold distance may be set as one of 30 pixels, 40 pixels, 50 pixels, 60 pixels, etc.
  • In response to a determination that the reference angle is less than the threshold angle and the moving distance is larger than the threshold distance, the processing device 112 may determine the intermediate moving direction as a candidate intermediate moving direction in 1170.
  • Otherwise, the processing device 112 may determine that the intermediate moving direction is not a candidate intermediate moving direction in 1180.
  • By selecting the candidate intermediate moving direction(s) and determining the moving direction of the candidate object based on the candidate intermediate moving direction(s), the accuracy and reliability of the determined moving direction may be improved.
  • For example, an intermediate moving direction whose reference angle is greater than the threshold angle (e.g., an intermediate moving direction that is along the extension direction of the road) may be excluded, since it does not indicate a movement crossing the road.
  • If the moving distance corresponding to the intermediate moving direction is shorter than the threshold distance and greater than 0, it may suggest that the camera has a vibration, and the intermediate moving direction may be removed. Removing such an intermediate moving direction may reduce or eliminate the effect of equipment error and improve the determination accuracy, as illustrated in the sketch below.
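  • A consolidated sketch of this candidate check (operations 1120 through 1180), assuming positions in pixel coordinates; the reference-angle expression |β − ᾱ − θ| and the projection used for the moving distance are reconstructions from the surrounding description rather than formulas quoted from the disclosure, and the default thresholds follow the example values above:

      import math

      def is_candidate_direction(p1, p2, avg_lane_angle_deg,
                                 theta_deg=90.0,
                                 max_ref_angle_deg=35.0,
                                 min_distance_px=40.0) -> bool:
          """Check one pair of adjacent positions (earlier p1, later p2)."""
          dx, dy = p2[0] - p1[0], p2[1] - p1[1]
          if dx == 0 and dy == 0:
              return False
          # Angle beta between the intermediate moving direction and the
          # horizontal direction of the image.
          beta = math.degrees(math.atan2(dy, dx))
          # Deviation of the moving direction from the direction roughly
          # perpendicular to the lane lines, folded into [0, 180].
          ref_angle = abs((beta - avg_lane_angle_deg - theta_deg + 180.0)
                          % 360.0 - 180.0)
          # Moving distance projected onto the direction N crossing the
          # road, taken here as perpendicular to the average lane direction.
          alpha = math.radians(avg_lane_angle_deg)
          distance = abs(dy * math.cos(alpha) - dx * math.sin(alpha))
          return ref_angle < max_ref_angle_deg and distance > min_distance_px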
  • FIG. 12 is a schematic diagram illustrating exemplary angles between lane lines and a horizontal direction according to some embodiments of the present disclosure.
  • lines 1212, 1214, and 1216 represent three lane lines on a road.
  • Line 1220 represents a line along the horizontal direction denoted by an arrow A.
  • α1, α2, and α3 represent the angles between the three lane lines (i.e., lines 1212, 1214, and 1216) and the horizontal direction, respectively.
  • An average angle ᾱ between the three lane lines and the horizontal direction may be determined as (α1 + α2 + α3)/3.
  • FIG. 13 is a schematic diagram illustrating an exemplary process for determining a candidate intermediate moving direction corresponding to two adjacent positions according to some embodiments of the present disclosure.
  • lines 1332, 1334, and 1336 represent three lane lines on a road that includes a crosswalk.
  • An X-axis represents the horizontal direction of an image including the crosswalk and a Y-axis represents the vertical direction of the image.
  • a first point 1310 represents a first position (x1, y1) of a candidate object within a predetermined region (i.e., a region denoted by a dashed box 1340) of the crosswalk at a first time point.
  • a second point 1320 represents a second position (x2, y2) of the candidate object within the predetermined region at a second time point after the first time point.
  • the first position and the second position may be regarded as two adjacent positions of the candidate object.
  • a direction (denoted as an arrow M) from the first point 1310 to the second point 1320 may be determined as an intermediate moving direction of the candidate object.
  • An angle between the intermediate moving direction and the horizontal direction (i.e., the X-axis) of the image is denoted as an angle β and may be determined as arctan(Δy/Δx).
  • An average angle ᾱ between the three lane lines 1332, 1334, and 1336 and the horizontal direction may be determined, together with a direction crossing the crosswalk (i.e., a direction denoted as arrow N).
  • A reference angle (i.e., |β − ᾱ − θ|) and a moving distance (i.e., the projection of the displacement between the two positions onto the arrow N) may further be determined to decide whether the direction M is selected as a candidate intermediate moving direction.
  • FIG. 14 is a flowchart illustrating an exemplary process for determining whether a vehicle has a traffic violation behavior according to some embodiments of the present disclosure.
  • process 1400 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150, the ROM 230 or RAM 240) .
  • the processor 220 and/or the modules in FIG. 8 may execute the set of instructions, and when executing the instructions, the processor 220 and/or the modules in FIG. 8 may be configured to perform the process 1400.
  • the operations of the illustrated process presented below are intended to be illustrative.
  • the process 1400 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 1400 as illustrated in FIG. 14 and described below is not intended to be limiting. In some embodiments, one or more operations of the process 1400 may be performed to achieve at least part of operation 960 as described in connection with FIG. 9. For example, for each of the one or more vehicles as described in connection with operation 960, the process 1400 may be performed to determine whether the vehicle has a traffic violation behavior.
  • the processing device 112 may direct a camera to capture one or more images associated with a vehicle corresponding to a target object at one or more predetermined positions.
  • the one or more predetermined positions may be associated with a crosswalk area of a crosswalk.
  • the one or more predetermined positions may be defined by one or more lines in the predetermined range, each of which may be perpendicular to (or substantially perpendicular to) the crosswalk and have a distinct position relative to the crosswalk.
  • three predetermined positions may be defined by a first line, a second line, and a third line, each of which is within the predetermined range and perpendicular to the crosswalk.
  • the first line may be within a first predetermined distance range of a side of a crosswalk area (e.g., a side where vehicles enter the crosswalk area) .
  • the second line may be within the crosswalk area.
  • the third line may be within a third predetermined distance range of the other side of the crosswalk area (e.g., a side where vehicles exit the crosswalk area) .
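  • A simplified sketch of these predetermined positions, modeling each line as a coordinate along the vehicle's direction of travel (the one-dimensional "progress" coordinate is an illustrative assumption; the disclosure defines the lines geometrically in the image):

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class CaptureLineTrigger:
          """Fire one capture as the vehicle passes each predetermined line."""
          line_positions: List[float]  # e.g., [entry line, in-area line, exit line]
          captured: List[bool] = field(init=False)

          def __post_init__(self):
              self.captured = [False] * len(self.line_positions)

          def update(self, vehicle_progress: float) -> List[int]:
              """Return indices of lines the vehicle has just crossed."""
              fired = []
              for i, position in enumerate(self.line_positions):
                  if not self.captured[i] and vehicle_progress >= position:
                      self.captured[i] = True
                      fired.append(i)
              return fired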
  • the target object may be an object in a predetermined region associated with the crosswalk whose moving direction is a direction crossing a road where the crosswalk is located.
  • the processing device 112 may determine whether the one or more images associated with the vehicle are captured within a first predetermined time period.
  • the first predetermined time period may be a default setting of the processing device 112 or set manually by a user or operator via the terminal device 140.
  • In response to a determination that the one or more images associated with the vehicle are captured within the first predetermined time period, the process 1400 may proceed to operation 1430. In response to a determination that the one or more images associated with the vehicle are not captured within the first predetermined time period, the process 1400 may proceed to operation 1450, in which the processing device 112 may determine that the vehicle does not have a traffic violation behavior associated with the target object.
  • the processing device 112 may determine whether a door of the vehicle is open and whether the target object disappears in a second predetermined time period based on the one or more images associated with the vehicle.
  • the processing device 112 may determine a count/number of lanes between the vehicle and the target object. For example, the processing device 112 may determine a distance between the vehicle and the target object in a direction in which the target object crosses the road (e.g., a direction as denoted by an arrow N in FIG. 13). The processing device 112 may determine the count/number of lanes between the vehicle and the target object based on the distance. If the count/number of lanes is equal to 0, the processing device 112 may determine whether the door of the vehicle is open and whether the target object disappears in the second predetermined time period. If the count/number of lanes is greater than 0, the processing device 112 may determine that the door of the vehicle is closed and/or the target object does not disappear in the second predetermined time period.
  • In response to a determination that the door of the vehicle is open and the target object disappears in the second predetermined time period, the process 1400 may proceed to 1450, in which the processing device 112 may determine that the vehicle does not have a traffic violation behavior associated with the target object.
  • In response to a determination that the door of the vehicle is not open or the target object does not disappear in the second predetermined time period, the process 1400 may proceed to 1440, in which the processing device 112 may determine that the vehicle has a traffic violation behavior associated with the target object.
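  • The decision flow of the process 1400 may be sketched as follows (all inputs are assumed to be produced by upstream detection steps: image timestamps, the count of lanes between the vehicle and the target object, a door-open flag, and whether the target object disappeared within the second time period; the helper itself is hypothetical):

      from typing import Iterable

      def has_violation(capture_times: Iterable[float], t_start: float,
                        first_window_s: float, lanes_between: int,
                        door_open: bool, target_disappeared: bool) -> bool:
          """Return True if the vehicle is judged to have a violation."""
          # Operation 1420: every image must be captured within the first
          # predetermined time period, otherwise no violation is recorded.
          if not all(t_start <= t <= t_start + first_window_s
                     for t in capture_times):
              return False
          # Operation 1430: with no lane between the vehicle and the target,
          # an open door plus a disappearing target suggests the target
          # boarded the vehicle, so no violation is recorded.
          if lanes_between == 0 and door_open and target_disappeared:
              return False
          return True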
  • the processing device 112 may determine that the target object gets into the target vehicle based on any other technique, such as an object detection technique (e.g., using a region convolutional neural network (R-CNN) algorithm) .
  • One or more other optional operations (e.g., a storing operation) may be added elsewhere in the process 1400.
  • the processing device 112 may store information and/or data (e.g., the one or more images associated with the vehicle, the predetermined positions, information about the behavior of the vehicle, etc.) associated with the traffic monitoring system 100 in a storage device (e.g., the storage device 150) disclosed elsewhere in the present disclosure.
  • operation 1430 may be omitted.
  • FIG. 15 illustrates an exemplary image associated with a vehicle at a predetermined position captured by a camera according to some embodiments of the present disclosure.
  • a solid box 1510 represents a crosswalk area associated with a crosswalk.
  • Dotted boxes 1522 and 1524 represent two predetermined regions at two ends of the crosswalk, i.e., two waiting regions of the crosswalk.
  • a driver 1560 of a non-motorized vehicle is waiting to cross the crosswalk at the predetermined region 1522.
  • Lines 1530, 1540, and 1550 represent three lines perpendicular to the crosswalk.
  • the line 1530 is located within a first predetermined distance range of a side of the crosswalk area where the vehicle enters the crosswalk area 1510; the line 1540 is within the crosswalk area 1510; the line 1550 is within a third predetermined distance range of the other side of the crosswalk area where the vehicle exits the crosswalk area 1510.
  • the vehicle 1570 passes through the crosswalk without stopping for the driver 1560, thereby having a traffic violation behavior associated with the driver 1560.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran, Perl, COBOL, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
  • the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.”
  • “about,” “approximate,” or “substantially” may indicate a ±20% variation of the value it describes, unless otherwise stated.
  • the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment.
  • the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
PCT/CN2019/126617 2019-08-08 2019-12-19 Systems and methods for traffic violation detection WO2021022759A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19940751.1A EP3983931A4 (en) 2019-08-08 2019-12-19 SYSTEMS AND PROCEDURES FOR DETECTING TRAFFIC VIOLATIONS
US17/647,976 US11790699B2 (en) 2019-08-08 2022-01-13 Systems and methods for traffic violation detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910731311.8A CN110490108B (zh) 2019-08-08 2019-08-08 Violation state marking method, apparatus, storage medium, and electronic apparatus
CN201910731311.8 2019-08-08

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/647,976 Continuation US11790699B2 (en) 2019-08-08 2022-01-13 Systems and methods for traffic violation detection

Publications (1)

Publication Number Publication Date
WO2021022759A1 true WO2021022759A1 (en) 2021-02-11

Family

ID=68550342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126617 WO2021022759A1 (en) 2019-08-08 2019-12-19 Systems and methods for traffic violation detection

Country Status (4)

Country Link
US (1) US11790699B2 (zh)
EP (1) EP3983931A4 (zh)
CN (1) CN110490108B (zh)
WO (1) WO2021022759A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11610320B2 (en) * 2020-02-19 2023-03-21 Object Video Labs, LLC Remote camera-assisted robot guidance
CN111275008B (zh) * 2020-02-24 2024-01-16 浙江大华技术股份有限公司 Method and apparatus for detecting target vehicle abnormality, storage medium, and electronic apparatus
US20220114888A1 (en) * 2020-10-14 2022-04-14 Deka Products Limited Partnership System and Method for Intersection Navigation
JP2022134409A (ja) * 2021-03-03 2022-09-15 トヨタ自動車株式会社 Map data utilization system, map data utilization method, and program
CN113420714B (zh) * 2021-07-12 2023-08-22 浙江大华技术股份有限公司 Captured-image reporting method, apparatus, and electronic device
CN115116219B (zh) * 2022-06-07 2024-04-05 启迪设计集团股份有限公司 Method for determining the need to set up a non-motor-vehicle waiting area
CN115346373A (zh) * 2022-08-16 2022-11-15 白犀牛智达(北京)科技有限公司 Traffic light recognition method and apparatus
CN115240435A (zh) * 2022-09-21 2022-10-25 广州市德赛西威智慧交通技术有限公司 Vehicle violation driving detection method and apparatus based on AI technology

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103183027A (zh) * 2011-12-28 2013-07-03 华为技术有限公司 Vehicle collision avoidance method and apparatus
CN104361747A (zh) * 2014-11-11 2015-02-18 杭州新迪数字工程系统有限公司 Automatic capture system and recognition method for motor vehicles failing to yield to pedestrians at zebra crossings
CN109741608A (zh) * 2019-01-29 2019-05-10 浙江浩腾电子科技股份有限公司 Deep-learning-based analysis and capture system and method for right-turning motor vehicles yielding to pedestrians
CN110490108A (zh) * 2019-08-08 2019-11-22 浙江大华技术股份有限公司 Violation state marking method, apparatus, storage medium, and electronic apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005283531A (ja) * 2004-03-31 2005-10-13 Equos Research Co Ltd In-vehicle device and data creation device
US7671725B2 (en) * 2006-03-24 2010-03-02 Honda Motor Co., Ltd. Vehicle surroundings monitoring apparatus, vehicle surroundings monitoring method, and vehicle surroundings monitoring program
US8825350B1 (en) * 2011-11-22 2014-09-02 Kurt B. Robinson Systems and methods involving features of adaptive and/or autonomous traffic control
CN103198659B (zh) * 2013-04-08 2016-03-23 长安大学 Capture device and capture method for vehicles contending with pedestrians for the right of way when passing through a crosswalk
WO2014172708A1 (en) * 2013-04-19 2014-10-23 Polaris Sensor Technologies, Inc. Pedestrian right of way monitoring and reporting system and method
US9275286B2 (en) * 2014-05-15 2016-03-01 Xerox Corporation Short-time stopping detection from red light camera videos
CN105448094B (zh) * 2015-12-31 2017-12-05 招商局重庆交通科研设计院有限公司 Wrong-way driving warning and risk avoidance method based on vehicle-road cooperation technology
US9672734B1 (en) * 2016-04-08 2017-06-06 Sivalogeswaran Ratnasingam Traffic aware lane determination for human driver and autonomous vehicle driving system
CN107161147A (zh) * 2017-05-04 2017-09-15 广州汽车集团股份有限公司 Vehicle anti-collision cruise control system and control method thereof
WO2018235154A1 (ja) * 2017-06-20 2018-12-27 株式会社日立製作所 Travel control system
CN107248291A (zh) * 2017-07-14 2017-10-13 深圳云天励飞技术有限公司 Method, apparatus, monitoring processing device, and monitoring system for determining motor vehicle right-of-way contention
US10217354B1 (en) * 2017-10-02 2019-02-26 Bertram V Burke Move over slow drivers cell phone technology
CN109035795A (zh) 2018-08-16 2018-12-18 武汉元鼎创天信息科技有限公司 Pedestrian-yielding detection system
US10475338B1 (en) * 2018-09-27 2019-11-12 Melodie Noel Monitoring and reporting traffic information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11790699B2 (en) 2019-08-08 2023-10-17 Zhejiang Dahua Technology Co., Ltd. Systems and methods for traffic violation detection
CN114724363A (zh) * 2022-03-29 2022-07-08 北京万集科技股份有限公司 Vehicle management and control method, apparatus, device, storage medium, and program product
CN114758511A (zh) * 2022-06-14 2022-07-15 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system and method, electronic device, and storage medium
CN114758511B (zh) * 2022-06-14 2022-11-25 深圳市城市交通规划设计研究中心股份有限公司 Sports car overspeed detection system and method, electronic device, and storage medium

Also Published As

Publication number Publication date
US11790699B2 (en) 2023-10-17
CN110490108B (zh) 2022-02-08
EP3983931A1 (en) 2022-04-20
CN110490108A (zh) 2019-11-22
EP3983931A4 (en) 2022-08-03
US20220139217A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
US11790699B2 (en) Systems and methods for traffic violation detection
KR102441085B1 (ko) Apparatus and method for providing guidance information using crosswalk recognition results
JP7052663B2 (ja) Object detection device, object detection method, and computer program for object detection
EP3462377B1 (en) Method and apparatus for identifying driving lane
US11967052B2 (en) Systems and methods for image processing
EP3441909B1 (en) Lane detection method and apparatus
US20190122059A1 (en) Signal light detection
US11315026B2 (en) Systems and methods for classifying driver behavior
US20190272435A1 (en) Road detection using traffic sign information
WO2020248248A1 (en) Systems and methods for object tracking
WO2022088886A1 (en) Systems and methods for temperature measurement
CN102997900A (zh) External environment recognition method, apparatus, and vehicle system
US11367287B2 (en) Methods and systems for video surveillance
US11250240B1 (en) Instance segmentation using sensor data having different dimensionalities
US20220139090A1 (en) Systems and methods for object monitoring
JP5898001B2 (ja) Vehicle periphery monitoring device
US10769420B2 (en) Detection device, detection method, computer program product, and information processing system
CN105825495A (zh) Object detection device and object detection method
CN113240756B (zh) Pose change detection method, device, and storage medium for vehicle-mounted BSD cameras
US20220189297A1 (en) Systems and methods for traffic monitoring
KR102306789B1 (ko) Method and apparatus for recognizing abnormal vehicles on multi-lane two-way roads
Haris et al. Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm
KR20160015091A (ko) Method for pedestrian detection and behavior-pattern tracking in a traffic signal control system
CN110705495A (zh) Vehicle detection method and apparatus, electronic device, and computer storage medium
KR20150002040A (ko) Method for recognizing and tracking pedestrians in real time using a Kalman filter and a clustering algorithm based on sequential HOG features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19940751

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019940751

Country of ref document: EP

Effective date: 20220113

NENP Non-entry into the national phase

Ref country code: DE