CN113112828A - Intersection monitoring method, device, equipment, storage medium and program product

Info

Publication number
CN113112828A
CN113112828A
Authority
CN
China
Prior art keywords
target
intersection
time
image
determining
Prior art date
Legal status
Pending
Application number
CN202110404652.1A
Other languages
Chinese (zh)
Inventor
车正平
姜波
史雪凤
刘亚书
唐剑
Current Assignee
Beijing Voyager Technology Co Ltd
Original Assignee
Beijing Voyager Technology Co Ltd
Priority date: 2021-04-15
Filing date: 2021-04-15
Application filed by Beijing Voyager Technology Co Ltd
Priority to CN202110404652.1A
Publication of CN113112828A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/07 Controlling traffic signals
    • G08G1/08 Controlling traffic signals according to detected number or speed of vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

According to embodiments of the present disclosure, a method, an apparatus, a device, a storage medium, and a program product for intersection monitoring are provided. The method proposed herein comprises: acquiring an intersection image captured by a camera associated with a collection tool, the collection tool being located within a predetermined range of a target intersection; determining, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being the time at which the camera captured the intersection image; and updating a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of the number of target vehicles passing through the target intersection. According to embodiments of the present disclosure, the traffic situation of a target vehicle at a specific intersection can be accurately monitored.

Description

Intersection monitoring method, device, equipment, storage medium and program product
Technical Field
Implementations of the present disclosure relate to the field of intelligent transportation, and more particularly, to methods, apparatuses, devices, storage media, and program products for intersection monitoring.
Background
An intersection is a region where various traffic flows interweave and is a frequent site of traffic safety accidents. Intersection accidents have many causes: in addition to the geometric design of the intersection, the clarity of signal lights and road markings, and the rule compliance and safety awareness of drivers and pedestrians, the times and frequency at which specific types of vehicles pass through the intersection are also important factors affecting its safety risk.
For example, large vehicles (including large trucks, buses, muck trucks, and the like) generally have tall, long bodies that easily block the view of other vehicles, creating visual blind zones; together with the danger zone produced by the inner-wheel difference of a long vehicle when turning, this makes accidents such as collisions and run-overs more likely. In addition, large vehicles require long safe braking distances and, when cornering, are prone to dangerous events such as cargo scattering and rollover that endanger other vehicles and pedestrians. Furthermore, non-motor vehicles (including electric motorcycles, bicycles, and the like) are prone to disregarding traffic lights or lane markings, which can also cause intersection accidents.
Disclosure of Invention
Embodiments of the present disclosure provide a solution for intersection monitoring.
In a first aspect of the disclosure, a method for intersection monitoring is provided. The method comprises the following steps: acquiring an intersection image captured by a camera associated with an acquisition tool, the acquisition tool being located within a predetermined range of a target intersection; determining, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the camera captures the intersection image; and updating a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of the number of target vehicles passing through the target intersection.
In a second aspect of the present disclosure, an apparatus for intersection monitoring is provided. The device includes: an image acquisition module configured to acquire an intersection image captured by a camera associated with an acquisition tool, the acquisition tool being located within a predetermined range of a target intersection; a number determination module configured to determine, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the intersection image was captured by the camera; and an update module configured to update a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of the number of target vehicles passing through the target intersection.
In a third aspect of the present disclosure, there is provided an electronic device comprising: a memory and a processor; wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to the first aspect of the disclosure.
In a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, a computer program product is provided comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method according to the first aspect of the present disclosure.
According to various embodiments of the present disclosure, traffic information of a target vehicle (e.g., a large vehicle) at a specific intersection can be accurately monitored, so that safety management or traffic control of the intersection can be facilitated.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow diagram of an example method of intersection monitoring, in accordance with some embodiments of the present disclosure;
FIG. 3 illustrates a flow diagram of an example method of determining a number according to some embodiments of the present disclosure;
FIG. 4 illustrates a schematic block diagram of an apparatus for intersection monitoring in accordance with some embodiments of the present disclosure; and
FIG. 5 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also appear below.
As discussed above, intersections are among the most complex scenarios in the road network, as they tend to concentrate many different types of traffic elements, e.g., motor vehicles, non-motor vehicles, pedestrians, etc. In addition, when driving through an intersection, vehicles are often also constrained by traffic lights or dedicated turning lanes. Intersections have therefore become accident-prone locations in current traffic environments.
In addition, some specific types of vehicles are important factors affecting intersection safety. For example, some large vehicles may easily create visual blind zones for other vehicles, thereby increasing the probability of intersection accidents; some electric motorcycles may not obey traffic lights, which may also increase the probability of intersection accidents. Therefore, it is desirable to know the traffic situation of a specific type of vehicle at an intersection so as to facilitate safety control or traffic scheduling there.
Some conventional solutions implement intersection traffic monitoring by deploying roadside equipment (e.g., overhead cameras). However, such roadside equipment is costly to deploy and inflexible. In addition, it may itself obstruct intersection traffic and introduce new traffic hazards.
In view of this, embodiments of the present disclosure provide a solution for intersection monitoring. In this scheme, first, an intersection image captured by a camera associated with a collection tool is acquired, where the collection tool is located within a predetermined range of a target intersection. Then, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time is determined, where the target time is the time at which the camera captured the intersection image. Further, a time distribution table is updated with the target time and the number, where the time distribution table indicates a distribution over time of the number of target vehicles passing through the target intersection.
According to such an approach, embodiments of the present disclosure can acquire intersection images with a collection tool (e.g., an ordinary vehicle traveling on the road) and generate the traffic distribution of a target vehicle type (e.g., large vehicles or non-motor vehicles) at the intersection based on those images. Such information can facilitate safety management or traffic control for the intersection.
Some example embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Example Environment
Referring initially to FIG. 1, an example environment 100 in which embodiments of the present disclosure may be implemented is schematically illustrated. As shown in fig. 1, the environment 100 can include a collection tool 120 traveling near the target intersection 110; the collection tool 120 can be, for example, a vehicle in motion. In the example of fig. 1, the target intersection 110 is shown as a crossroads. It should be understood that this type is merely illustrative, and that the target intersection 110 may be of any other suitable type, examples of which include, but are not limited to: crossroads, T-shaped intersections, roundabouts, Y-shaped intersections, expressway ramp merges, elevated-road entrances and exits, and the like.
As shown in fig. 1, a camera 125 associated with the collection tool 120 can capture an intersection image 140 and send it to a computing device 150. Illustratively, the collection tool 120 may be a motor vehicle in motion. The camera 125 may be, for example, a driving recorder (dashcam) carried by the motor vehicle; alternatively, the camera 125 may be a terminal device that is being used for navigation of the collection tool 120. For example, the driver of the collection tool 120 may position a smartphone at a particular location in the vehicle so that it can capture images in front of the collection tool 120.
As another example, the collection tool 120 may also be a non-motor vehicle in motion, such as an electric motorcycle. Accordingly, the camera 125 may be, for example, a driving recorder mounted on the non-motor vehicle; alternatively, the camera 125 may be a camera fixed on the rider's helmet to capture images ahead.
As shown in fig. 1, the collection tool 120 can periodically upload location information (e.g., a GPS position) to the computing device 150. When the computing device 150 determines that the collection tool 120 is within a predetermined range of the target intersection 110, the computing device 150 can send an instruction to the collection tool 120 or to a terminal device associated with it (e.g., the driver's smartphone) to cause the camera 125 to capture the intersection image 140.
As shown in fig. 1, after the camera 125 captures the intersection image 140, the computing device 150 can acquire the captured intersection image 140 and analyze the traffic situation of the target vehicle at the target intersection 110 based on the intersection image 140. In some implementations, the computing device 150 may be, for example, a server device with greater computing power for performing the analysis of the intersection image 140. It should be understood that the computing device 150 may also include suitable computing devices located elsewhere, such as edge computing devices near the target intersection 110, and the like.
In this disclosure, a target vehicle represents a particular type of vehicle rather than one specific vehicle. Illustratively, the target vehicle may comprise a large vehicle having a size greater than a predetermined threshold, examples of which include, but are not limited to: trucks, buses, muck trucks, and the like. Alternatively, the target vehicle may comprise a non-motor vehicle, examples of which include, but are not limited to: electric motorcycles, bicycles, and the like. In the example of FIG. 1, the target vehicles may include, for example, target vehicles 130-1 and 130-2 (referred to individually or collectively as target vehicles 130), which are schematically illustrated as trucks.
As shown in fig. 1, the computing device 150 can determine a number of target vehicles 130 included in the intersection image based on the intersection image 140 and update the time distribution table 160 based on the number. As shown in fig. 1, the time distribution table 160 can indicate the distribution over time of the number of target vehicles passing through a target intersection. The specific process for determining the number and updating the time distribution table 160 will be discussed in detail below, and will not be described in detail here.
It should be understood that the number and type of vehicles, the layout of the intersection, and the specific form of the time distribution table shown in fig. 1 are merely illustrative.
Example procedure
The process of intersection monitoring according to embodiments of the present disclosure will be described in detail below with reference to figs. 2 to 3. Fig. 2 shows a flowchart of a process 200 for intersection monitoring according to some embodiments of the present disclosure. For ease of discussion, the process is described with reference to FIG. 1. Process 200 may be performed, for example, at the computing device 150 shown in fig. 1, although it may also be performed by other suitable devices; for ease of description, the computing device 150 is used as the example below. It should be understood that process 200 may also include blocks not shown and/or may omit blocks shown. The scope of the present disclosure is not limited in this respect.
As shown in fig. 2, at block 202, the computing device 150 acquires an intersection image 140 captured by the camera 125 associated with the collection tool 120, where the collection tool 120 is located within a predetermined range of the target intersection 110.
The target intersection 110 can be any suitable intersection, examples of which include, but are not limited to: crossroads, T-shaped intersections, roundabouts, Y-shaped intersections, expressway ramp merges, elevated-road entrances and exits, and the like.
In some implementations, the target intersection 110 can be determined from a set of candidate intersections based on historical accident information. For example, intersections where accidents have occurred may be selected, from among the intersections in a predetermined area, as target intersections 110. Alternatively, the target intersection 110 can be an intersection determined to have a higher risk based on an appropriate risk prediction model.
As discussed above, the collection tool 120 may be a vehicle in motion, such as a motor vehicle or a non-motor vehicle. Accordingly, the camera 125 may be, for example, a driving recorder mounted on the collection tool 120. Alternatively, the camera 125 may also be a smart device associated with the collection tool 120, such as a smartphone mounted at a suitable location in the vehicle, smart glasses worn by the driver, a camera mounted on the rider's helmet, and so forth. Such a camera 125 is capable of capturing images in front of the collection tool 120.
In some implementations, the collection tool 120 may be, for example, a vehicle providing travel services. Such vehicles are typically better standardized; for example, they have a driving recorder installed at a fixed position to serve as the camera 125 for capturing the intersection image 140. Accordingly, the computing device 150 may be, for example, a server device for providing travel services.
In some implementations, the collection tool 120, or a terminal device associated with it (e.g., the driver's smartphone or smart glasses), can determine location information and periodically upload that location information to the computing device 150.
The camera 125 may be caused to capture the intersection image 140 when the computing device 150, the collection tool 120, or the terminal device determines that the collection tool 120 is within the predetermined range of the target intersection 110. Illustratively, the computing device 150 can determine that the collection tool 120 has traveled within the predetermined range of the target intersection 110 based on a comparison of the location information uploaded by the collection tool 120 with the location of the target intersection 110.
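For illustration only, the following Python sketch (not part of the original disclosure) shows one way such a proximity check might be implemented; the 200 m radius and all names are assumptions.

```python
# Minimal sketch of the proximity check: compare the collection tool's
# uploaded GPS position against the target intersection's location.
# The 200 m radius and the helper names are illustrative assumptions.
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in meters between two WGS-84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def within_predetermined_range(tool_pos: tuple, intersection_pos: tuple,
                               radius_m: float = 200.0) -> bool:
    """True if the collection tool is inside the predetermined range."""
    return haversine_m(*tool_pos, *intersection_pos) <= radius_m
```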
In some implementations, the camera 125 can be caused to capture the intersection image 140 whenever it is determined that the collection tool 120 has traveled within the predetermined range of the target intersection 110. Alternatively, when it is determined that the collection tool 120 is located within the predetermined range of the target intersection 110, whether the camera 125 needs to capture the intersection image 140 may further be decided according to how historical intersection images have been collected.
In some implementations, the computing device 150 can determine a first number of historical intersection images of the target intersection 110 that have previously been acquired, and cause the camera 125 to capture the intersection image 140 in response to the first number being less than a predetermined threshold.
For example, if 1000 historical intersection images of the target intersection 110 have been acquired in the past month, the camera 125 need not capture a new intersection image. Conversely, if the number of historical intersection images is less than 1000, the camera 125 may be enabled to capture the intersection image 140. In this way, redundant collection after enough intersection images have already been gathered can be avoided, preventing waste of computing and communication resources.
In other implementations, the computing device 150 can determine the historical time of the most recently acquired historical intersection image of the target intersection 110 and cause the camera 125 to capture the intersection image 140 in response to the current time differing from that historical time by no less than a predetermined threshold.
For example, if another collection tool captured an intersection image of the target intersection 110 five minutes ago, the camera 125 need not capture a new intersection image.
In another example, the comparison may be made within hour-long periods: if another collection tool captured an intersection image of the target intersection five minutes ago within the same hour period, the camera 125 may not capture a new intersection image. For example, if the historical intersection image was captured at 2:01 pm and the current time is 2:06 pm, the camera 125 may not capture a new intersection image. Conversely, if the historical intersection image was captured in another hour period, e.g., at 1:58 pm, the camera 125 may still be enabled to capture the intersection image 140 even though the current time is only 2:01 pm.
In still other implementations, the computing device 150 can determine a second number of historical intersection images of the target intersection 110 that have been collected during a target time period corresponding to the current time, and cause the camera 125 to capture the intersection image 140 in response to the second number being less than a predetermined threshold.
For example, the target time period may be the hour period corresponding to the current time: the 24 hours of a day may be divided into 24 time periods, and if 50 historical intersection images have already been acquired for the hour period (2 pm to 3 pm) corresponding to the current time (e.g., 2:06 pm), the camera 125 may not capture a new intersection image. Conversely, if the number of historical intersection images is less than 50, the camera 125 may be enabled to capture the intersection image 140. In this way, it is ensured that sufficient data for analyzing the traffic situation of the target vehicle is available for each time period.
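The three gating strategies above can be combined into a single capture decision. The following sketch is a minimal illustration under assumed thresholds (1000 images overall, a 5-minute gap, 50 images per hour-of-day bucket); the data layout and names are not from the patent.

```python
# Sketch combining the three capture-gating strategies described above.
# All thresholds and the data layout are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class CaptureGate:
    capture_times: list = field(default_factory=list)  # times of stored historical images

    def should_capture(self, now: datetime,
                       max_total: int = 1000,                      # strategy 1 threshold
                       min_gap: timedelta = timedelta(minutes=5),  # strategy 2 threshold
                       max_per_hour: int = 50) -> bool:            # strategy 3 threshold
        if len(self.capture_times) >= max_total:
            return False                                  # enough images overall
        if self.capture_times and now - max(self.capture_times) < min_gap:
            return False                                  # captured too recently
        in_bucket = sum(1 for t in self.capture_times if t.hour == now.hour)
        return in_bucket < max_per_hour                   # hour-of-day bucket not yet full
```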
After the camera 125 captures the intersection image 140, the image can be sent by the collection tool 120, the camera 125, or another suitable relay device to the computing device 150 for subsequent analysis.
At block 204, the computing device 150 determines, based on the intersection image 140, a number of target vehicles 130 traveling through the target intersection 110 at a target time, where the target time is the time at which the intersection image 140 was captured by the camera 125.
In some implementations, the computing device 150 can utilize an object detection algorithm to determine the number of target vehicles 130. The detailed process of block 204 will be described below with reference to fig. 3, which illustrates a flowchart of an example process of determining the number according to some embodiments of the present disclosure.
As shown in fig. 3, at block 302, the computing device 150 may determine a set of candidate boxes associated with vehicles from the intersection image 140. The computing device 150 can, for example, utilize an appropriate object detection model to detect the vehicles included in the intersection image 140. Examples of such object detection models may include, but are not limited to: YOLOv3, CenterNet, EfficientDet, and the like.
At block 304, the computing device 150 may determine a target box corresponding to the target vehicle from the set of candidate boxes. That is, once the computing device 150 has determined candidate boxes that may correspond to vehicles in the intersection image 140, it may further select, from the set of candidate boxes, the target boxes that correspond to the target vehicle type.
In some implementations, the computing device 150 may first exclude some noisy candidate boxes based on the drivable region. Specifically, the computing device 150 can determine a drivable region associated with the target intersection 110 based on the intersection image 140. For example, the computing device 150 may utilize a deep-learning segmentation model (e.g., DeepLabV3) to determine road boundaries, and may determine the drivable region in conjunction with, for example, modeling of the driving recorder's mounting angle and field of view.
Further, the computing device 150 may filter out, from the set of candidate boxes, candidate boxes located outside the drivable region, and determine the target box based on the filtered set of candidate boxes. For example, the computing device 150 may exclude candidate boxes located in parking spaces, to avoid counting target vehicles parked at the roadside.
In some implementations, the computing device 150 may determine the target box from the set of candidate boxes, or from the filtered set of candidate boxes, based on an appearance feature of the target vehicle, the appearance feature including at least one of: size, color, or shape. For example, the computing device 150 may further select the target box corresponding to the target vehicle 130 according to the size, shape, or color characteristics of the target vehicle 130.
At block 306, the computing device 150 may determine the number of target vehicles 130 based on the target boxes. Taking fig. 1 as an example, the computing device 150 may determine that two target boxes corresponding to target vehicles 130 are included in the intersection image 140, and thus that the number of target vehicles 130 is 2.
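For concreteness, the following sketch condenses blocks 302 to 306 into a single function. The detector and segmenter are abstracted as callables (standing in for, e.g., a YOLOv3-style detector and a DeepLabV3-style segmenter); the box format, the bottom-center test point, and the size threshold are illustrative assumptions rather than details fixed by the disclosure.

```python
# Condensed sketch of blocks 302-306: detect candidate boxes, drop boxes
# outside the drivable region, screen the rest by size, and count them.
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

def count_target_vehicles(image,
                          detect_vehicles: Callable[[object], List[Box]],
                          drivable_mask_of: Callable[[object], List[List[bool]]],
                          min_box_area: float = 15_000.0) -> int:
    candidates = detect_vehicles(image)   # block 302: candidate boxes
    mask = drivable_mask_of(image)        # per-pixel drivable-region mask

    def on_road(box: Box) -> bool:
        # Test the box's bottom-center point, roughly where wheels meet road.
        cx, by = int((box[0] + box[2]) / 2), int(box[3])
        return 0 <= by < len(mask) and 0 <= cx < len(mask[0]) and mask[by][cx]

    targets = [b for b in candidates
               if on_road(b)                                       # block 304: filtering
               and (b[2] - b[0]) * (b[3] - b[1]) >= min_box_area]  # size screening
    return len(targets)                   # block 306: the number of target vehicles
```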
With continued reference to fig. 2, at block 206, the computing device 150 updates the time distribution table 160 with the target time and the number, where the time distribution table 160 indicates a distribution over time of the number of target vehicles 130 passing through the target intersection 110.
In some implementations, as shown in fig. 1, the time distribution table 160 can indicate the number of target vehicles 130 passing through the target intersection 110 over multiple time periods of a day, where the multiple time periods have the same length. Illustratively, the time distribution table 160 can represent the number of target vehicles 130 passing through the target intersection 110 for each of the 24 hours of a day.
In some implementations, the number in the time distribution table 160 can be determined, for example, as the average of the historical numbers of target vehicles determined from historical intersection images and the number determined based on the intersection image 140. For example, before updating the time distribution table 160, the computing device 150 has already acquired, say, 9 historical intersection images, has accordingly determined 9 historical numbers, and has stored their average (e.g., 2) in the previous time distribution table. Accordingly, upon determining that the current number is 2 based on the intersection image 140, the computing device 150 calculates the average of the current number and the 9 historical numbers, and saves that average in the time distribution table 160.
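The averaging scheme described above amounts to maintaining a running mean per time bucket. Below is a minimal sketch, assuming 24 hour-of-day buckets; the storage layout is not prescribed by the disclosure.

```python
# Minimal sketch of the table update at block 206, assuming 24 hour-of-day
# buckets, each storing a running average of per-image target-vehicle counts.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimeDistributionTable:
    buckets: list = field(default_factory=lambda: [(0.0, 0)] * 24)  # (mean, samples)

    def update(self, target_time: datetime, number: int) -> None:
        mean, n = self.buckets[target_time.hour]
        # Incremental mean: equivalent to averaging the new number with all
        # historical numbers for this bucket.
        self.buckets[target_time.hour] = ((mean * n + number) / (n + 1), n + 1)

    def average_for_hour(self, hour: int) -> float:
        return self.buckets[hour][0]

# e.g. a count of 2 target vehicles captured at 2:06 pm updates the 2 pm bucket:
# table = TimeDistributionTable(); table.update(datetime(2021, 4, 15, 14, 6), 2)
```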
In some implementations, updating the time distribution table 160 may also include creating a new time distribution table 160. For example, the target intersection 110 may be a newly monitored intersection for which no historical intersection images have been acquired. Accordingly, the computing device 150 may create a new time distribution table 160 and store the target time in association with the determined number.
In the manner discussed above, embodiments of the present disclosure can utilize ordinary vehicles to collect intersection images and then generate, through image analysis, a time distribution table for the target vehicle at the target intersection. Such a time distribution table can effectively assist safety management or traffic control for the target intersection.
In some implementations, the computing device 150 can also adjust a target navigation path associated with the target intersection 110 in response to the time distribution table 160 indicating that the number of target vehicles passing through the target intersection 110 over a predetermined period of time is greater than a predetermined threshold, wherein the target navigation path indicates that the vehicle to be navigated is expected to pass through the target intersection 110 within the predetermined period of time.
Illustratively, the computing device 150 or another device providing navigation services may adjust the navigation path plans of other vehicles according to the time distribution table. For example, suppose an initial navigation path from a starting point to an end point is expected to pass through the target intersection 110 during a certain period (e.g., between 9 pm and 10 pm), and the time distribution table 160 indicates that the number of target vehicles (e.g., large vehicles) passing through the target intersection 110 during that predetermined period (9 pm to 10 pm) exceeds a predetermined threshold. This indicates that the navigated vehicle is likely to encounter a large number of target vehicles when passing through the target intersection, which may pose a certain traffic risk.
In some implementations, based on such a determination, the computing device 150 or other device may adjust the initial navigation path such that the adjusted navigation path bypasses the target intersection 110 to reduce possible traffic risks.
In some implementations, the computing device 150 can also send an early warning associated with the target vehicle to a predetermined vehicle in response to the time distribution table 160 indicating that the number of target vehicles passing through the target intersection 110 over a predetermined time period is greater than a predetermined threshold, where the predetermined vehicle is expected to pass through the target intersection 110 within the predetermined time period.
Illustratively, the computing device 150 or another suitable device may determine that a predetermined vehicle is expected to pass through the target intersection 110 between 9 pm and 10 pm, while the time distribution table 160 indicates that the number of target vehicles (e.g., large vehicles) passing through the target intersection 110 during that predetermined period (9 pm to 10 pm) exceeds a predetermined threshold. This indicates that the predetermined vehicle is likely to encounter a large number of target vehicles when passing through the target intersection, which may pose a certain traffic risk. Accordingly, the computing device 150 or another suitable device can send an early-warning message to the predetermined vehicle or a terminal device associated with it, alerting that a large number of target vehicles (e.g., large vehicles) are likely to be driving through the target intersection 110, so that the driver drives carefully, thereby reducing the probability of an intersection accident.
Alternatively, such a predetermined vehicle may also be an autonomous vehicle, and the early-warning message may be used to instruct the autonomous vehicle to adjust its path plan or decision model for the target intersection 110, for example, so that it passes through the intersection more cautiously, thereby reducing the probability of an intersection accident.
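The rerouting and early-warning reactions described above share the same threshold check on the time distribution table. The following sketch illustrates that shared logic; send_warning and reroute_around are hypothetical stand-ins for a push channel and a route planner, and the threshold value is an assumption.

```python
# Sketch of the threshold-driven reactions: warn and/or reroute when the
# table predicts heavy target-vehicle traffic. The two helpers below are
# hypothetical stand-ins, not APIs defined by the patent.
def send_warning(vehicle_id: str, message: str) -> None:
    print(f"[warning -> {vehicle_id}] {message}")             # stand-in for a push channel

def reroute_around(vehicle_id: str, intersection_id: str) -> None:
    print(f"[reroute {vehicle_id}] avoid {intersection_id}")  # stand-in for a route planner

def react(table: "TimeDistributionTable", hour: int, vehicle_id: str,
          intersection_id: str, threshold: float = 5.0) -> None:
    if table.average_for_hour(hour) > threshold:  # heavy large-vehicle traffic expected
        send_warning(vehicle_id,
                     f"Many large vehicles expected at {intersection_id} around {hour}:00")
        reroute_around(vehicle_id, intersection_id)
```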
In some implementations, the computing device 150 can also train a risk prediction model using the time distribution table 160 and historical accident information associated with the target intersection, where the risk prediction model is configured to determine a risk level for an intersection based on that intersection's time distribution table.
In some implementations, the computing device 150 can, for example, train a machine learning model with the time distribution table 160 and historical accident information, so that the model can predict the probability of an accident occurring at an intersection based on the intersection's time distribution table, and in turn identify higher-risk intersections. Such information may be further provided, for example, to traffic management departments to enhance traffic management at those intersections. Alternatively, such information may be pushed to drivers to alert them to the risk level of an intersection so that they can navigate it more carefully.
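As a hedged illustration of this training step, each intersection might contribute its 24 hourly averages as a feature vector together with a binary accident label. The disclosure does not fix a model family; the sketch below uses scikit-learn's logistic regression purely for concreteness, reusing the TimeDistributionTable sketch above.

```python
# Illustrative training sketch (assumes scikit-learn is available; the patent
# does not specify a model family). Features: 24 hourly averages per
# intersection; label: whether an accident was recorded there.
from sklearn.linear_model import LogisticRegression

def train_risk_model(tables, had_accident):
    """tables: list of TimeDistributionTable; had_accident: list of 0/1 labels."""
    X = [[t.average_for_hour(h) for h in range(24)] for t in tables]
    model = LogisticRegression(max_iter=1000)
    model.fit(X, had_accident)
    return model  # model.predict_proba(X_new)[:, 1] gives an accident probability
```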
Example apparatus and devices
Embodiments of the present disclosure also provide corresponding apparatuses for implementing the above methods or processes. Fig. 4 shows a schematic block diagram of an apparatus 400 for intersection monitoring according to some embodiments of the present disclosure.
As shown in fig. 4, the apparatus 400 may include an image acquisition module 410 configured to acquire an intersection image captured by a camera associated with an acquisition tool, the acquisition tool being located within a predetermined range of a target intersection. The apparatus 400 further includes a number determination module 420 configured to determine, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the intersection image was captured by the camera. Further, the apparatus 400 includes an update module 430 configured to update a time distribution table with the target time and number, the time distribution table indicating a distribution over time of a number of target vehicles passing through the target intersection.
In some implementations, the image acquisition module 410 includes: a first number determination module configured to determine a first number of historical intersection images of the target intersection that have previously been acquired; a first enabling module configured to cause the camera to capture the intersection image in response to the first number being less than a predetermined threshold; and a first acquisition module configured to acquire the captured intersection image.
In some implementations, the image acquisition module 410 includes: a historical time determination module configured to determine a historical time of the most recently acquired historical intersection image of the target intersection; a second enabling module configured to cause the camera to capture the intersection image in response to the difference between the current time and the historical time being no less than a predetermined threshold; and a second acquisition module configured to acquire the captured intersection image.
In some implementations, the image acquisition module 410 includes: a second number determination module configured to determine a second number of historical intersection images of the target intersection that have been collected during a target time period corresponding to the current time; a third enabling module configured to cause the camera to capture the intersection image in response to the second number being less than a predetermined threshold; and a third acquisition module configured to acquire the captured intersection image.
In some implementations, the number determination module 420 includes: a candidate box determination module configured to determine a set of candidate boxes associated with vehicles from the intersection image; a target box determination module configured to determine a target box corresponding to the target vehicle from the set of candidate boxes; and a counting module configured to determine the number of target vehicles based on the target box.
In some implementations, the target box determination module includes: a drivable region determination module configured to determine a drivable region associated with the target intersection based on the intersection image; a filtering module configured to filter out candidate boxes located outside the drivable region from the set of candidate boxes; and a target box screening module configured to determine the target box based on the filtered set of candidate boxes.
In some implementations, the target box determination module includes: an appearance screening module configured to determine the target box from the set of candidate boxes based on an appearance feature of the target vehicle, the appearance feature including at least one of: size, color, or shape.
In some implementations, the target vehicle comprises a large vehicle having a size greater than a predetermined threshold.
In some implementations, the apparatus 400 further includes: a training module configured to train a risk prediction model using the time distribution table and historical accident information associated with the target intersection, the risk prediction model being configured to determine a risk level for an intersection based on that intersection's time distribution table.
In some implementations, the apparatus 400 further includes: a navigation module configured to adjust a target navigation path associated with the target intersection in response to the time distribution table indicating that the number of target vehicles passing through the target intersection over a predetermined period is greater than a predetermined threshold, the target navigation path indicating that a vehicle to be navigated is expected to pass through the target intersection within the predetermined period.
In some implementations, the apparatus 400 further includes: an early warning module configured to send an early warning associated with the target vehicle to a predetermined vehicle in response to the time distribution table indicating that the number of target vehicles passing through the target intersection over a predetermined period is greater than a predetermined threshold, the predetermined vehicle being expected to pass through the target intersection within the predetermined period.
In some implementations, the time distribution table indicates the number of target vehicles passing through the target intersection over a plurality of time periods of a day, the plurality of time periods having the same length of time.
In some implementations, the target intersection is determined from a set of candidate intersections based on historical accident information.
In some implementations, the acquisition tool is a vehicle in motion and the camera is a driving recorder carried by the vehicle.
The elements included in apparatus 400 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more of the units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the elements in apparatus 400 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
Fig. 5 illustrates a block diagram of a computing device/server 500 in which one or more embodiments of the present disclosure may be implemented. It should be appreciated that the computing device/server 500 illustrated in FIG. 5 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein.
As shown in fig. 5, computing device/server 500 is in the form of a general purpose computing device. Components of computing device/server 500 may include, but are not limited to, one or more processors or processing units 510, memory 520, storage 530, one or more communication units 540, one or more input devices 550, and one or more output devices 560. The processing unit 510 may be a real or virtual processor and may be capable of performing various processes according to programs stored in the memory 520. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of computing device/server 500.
Computing device/server 500 typically includes a number of computer storage media. Such media may be any available media that is accessible by computing device/server 500, including, but not limited to, volatile and non-volatile media, removable and non-removable media. Memory 520 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 530 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data) and that can be accessed within computing device/server 500.
Computing device/server 500 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 520 may include a computer program product 525 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 540 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device/server 500 may be implemented in a single computing cluster or multiple computing machines capable of communicating over a communications connection. Thus, computing device/server 500 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 550 may be one or more input devices such as a mouse, keyboard, trackball, or the like. Output device 560 may be one or more output devices such as a display, speakers, printer, or the like. Computing device/server 500 may also communicate with one or more external devices (not shown), such as storage devices, display devices, etc., as desired through communication unit 540, with one or more devices that enable a user to interact with computing device/server 500, or with any device (e.g., network card, modem, etc.) that enables computing device/server 500 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.
Example implementation
TS 1. A method for intersection monitoring, comprising:
acquiring an intersection image captured by a camera associated with a capture tool, the capture tool being located within a predetermined range of a target intersection;
determining, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the intersection image was captured by the camera; and
updating a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of a number of the target vehicles passing through the target intersection.
TS 2. The method of TS 1, wherein acquiring the intersection image comprises:
determining a first number of historical intersection images of the target intersection that have previously been acquired;
in response to the first number being less than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
TS 3. The method of TS 1, wherein acquiring the intersection image comprises:
determining a historical time of the most recently acquired historical intersection image of the target intersection;
in response to a difference between a current time and the historical time being no less than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
TS 4. The method of TS 1, wherein acquiring the intersection image comprises:
determining a second number of historical intersection images of the target intersection that have been collected during a target time period corresponding to the current time;
in response to the second number being less than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
TS 5. The method of TS 1, wherein determining the number of target vehicles traveling through the target intersection at the target time comprises:
determining a set of candidate boxes associated with vehicles from the intersection image;
determining a target box corresponding to the target vehicle from the set of candidate boxes; and
determining the number of the target vehicles based on the target box.
TS 6. The method of TS 5, wherein determining a target box corresponding to the target vehicle from the set of candidate boxes comprises:
determining a drivable region associated with the target intersection based on the intersection image;
filtering out candidate boxes located outside the drivable region from the set of candidate boxes; and
determining the target box based on the filtered set of candidate boxes.
TS 7. The method of TS 5, wherein determining a target box corresponding to the target vehicle from the set of candidate boxes comprises:
determining the target box from the set of candidate boxes based on an appearance feature of the target vehicle, the appearance feature including at least one of: size, color, or shape.
TS 8. The method of TS 1, wherein the target vehicle comprises a large vehicle having a size greater than a predetermined threshold.
TS 9. The method according to TS 1, further comprising:
training a risk prediction model using the time distribution table and historical accident information associated with the target intersection, the risk prediction model being configured to determine a risk level for an intersection based on the time distribution table of the intersection.
TS 10. The method according to TS 1, further comprising:
in response to the time distribution table indicating that the number of target vehicles passing through the target intersection over a predetermined period of time is greater than a predetermined threshold, adjusting a target navigation path associated with the target intersection, the target navigation path indicating that a vehicle to be navigated is expected to pass through the target intersection within the predetermined period of time.
TS 11. The method according to TS 1, further comprising:
in response to the time distribution table indicating that the number of target vehicles passing through the target intersection over a predetermined period of time is greater than a predetermined threshold, sending an early warning associated with the target vehicle to a predetermined vehicle that is expected to pass through the target intersection within the predetermined period of time.
TS 12. The method of TS 1, wherein the time distribution table indicates the number of target vehicles passing through the target intersection over a plurality of time periods of a day, the plurality of time periods having the same length of time.
TS 13. The method of TS 1, wherein the target intersection is determined from a set of candidate intersections based on historical accident information.
TS 14. The method of TS 1, wherein the capture tool is a vehicle in motion and the camera is a driving recorder carried by the vehicle.
TS 15. An apparatus for intersection monitoring, comprising:
an image acquisition module configured to acquire an intersection image captured by a camera associated with a capture tool, the capture tool being located within a predetermined range of a target intersection;
a number determination module configured to determine, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the intersection image was captured by the camera; and
an update module configured to update a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of a number of the target vehicles passing through the target intersection.
TS 16. An electronic device, comprising:
a memory and a processor;
wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions are to be executed by the processor to implement the method according to any one of TS 1 to 14.
TS 17. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method according to any one of TS 1 to 14.
TS 18. A computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method according to any one of TS 1 to 14.

Claims (10)

1. A method for intersection monitoring, comprising:
acquiring an intersection image captured by a camera associated with a capture tool, the capture tool being located within a predetermined range of a target intersection;
determining, based on the intersection image, a number of target vehicles traveling through the target intersection at a target time, the target time being a time at which the intersection image was captured by the camera; and
updating a time distribution table with the target time and the number, the time distribution table indicating a distribution over time of a number of the target vehicles passing through the target intersection.
2. The method of claim 1, wherein acquiring the intersection image comprises:
determining a first number of historical intersection images previously acquired for the target intersection;
in response to the first number being less than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
3. The method of claim 1, wherein acquiring the intersection image comprises:
determining a historical time at which a historical intersection image of the target intersection was most recently acquired;
in response to a difference between a current time and the historical time being greater than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
4. The method of claim 1, wherein acquiring the intersection image comprises:
determining a second number of historical intersection images of the target intersection acquired in a target time period corresponding to the current time;
in response to the second number being less than a predetermined threshold, causing the camera to capture the intersection image; and
acquiring the captured intersection image.
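Claims 2 to 4 give three alternative gates for triggering a new capture: too few images overall, a stale latest image, or too few images for the period matching the current time of day. A combined sketch, assuming `history` is a list of past capture datetimes for the target intersection and all threshold values are hypothetical (claim 3 is read as triggering when the last capture is older than the threshold):

```python
from datetime import datetime, timedelta

def should_capture(history, now: datetime,
                   min_total=50,                      # claim 2 threshold
                   max_age=timedelta(minutes=30),     # claim 3 threshold
                   min_per_period=5) -> bool:         # claim 4 threshold
    """Decide whether the camera should capture a new intersection image."""
    # Claim 2: fewer historical images than required overall.
    if len(history) < min_total:
        return True
    # Claim 3: the most recently acquired image is stale.
    if now - max(history) > max_age:
        return True
    # Claim 4: too few images in the period corresponding to the current time
    # (hour-of-day is used here as the period key for simplicity).
    same_period = [t for t in history if t.hour == now.hour]
    return len(same_period) < min_per_period
```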
5. The method of claim 1, wherein determining a number of target vehicles traveling through the target intersection at a target time comprises:
determining a set of candidate boxes associated with vehicles from the intersection image;
determining a target box corresponding to the target vehicle from the set of candidate boxes; and
determining the number of target vehicles based on the target box.
6. The method of claim 5, wherein determining a target box corresponding to the target vehicle from the set of candidate boxes comprises:
determining a drivable area associated with the target intersection based on the intersection image;
filtering, from the set of candidate boxes, candidate boxes located outside the drivable area; and
determining the target box based on the filtered set of candidate boxes.
7. The method of claim 5, wherein determining a target box corresponding to the target vehicle from the set of candidate boxes comprises:
determining the target box from the set of candidate boxes based on an appearance feature of the target vehicle, the appearance feature including at least one of: size, color, or shape.
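Claims 5 to 7 turn generic detector output into a count: detect candidate boxes, drop boxes outside the drivable area, match appearance features, then count. A sketch under assumed interfaces; the box and area methods are hypothetical, and any object detector could supply the candidate boxes:

```python
def count_target_vehicles(candidate_boxes, drivable_area, appearance):
    """Claims 5-7: filter candidate boxes down to target-vehicle boxes and
    return their count. `appearance` carries the target vehicle's size,
    color, and/or shape features."""
    # Claim 6: keep only boxes whose center lies inside the drivable area.
    inside = [b for b in candidate_boxes if drivable_area.contains(b.center())]
    # Claim 7: match at least one appearance feature (one possible reading).
    targets = [b for b in inside
               if b.matches_size(appearance)
               or b.matches_color(appearance)
               or b.matches_shape(appearance)]
    # Claim 5: the number of target vehicles is the number of target boxes.
    return len(targets)
```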
8. An electronic device, comprising:
a memory and a processor;
wherein the memory is configured to store one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method of any one of claims 1 to 7.
9. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 7.
10. A computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method of any one of claims 1 to 7.
CN202110404652.1A 2021-04-15 2021-04-15 Intersection monitoring method, device, equipment, storage medium and program product Pending CN113112828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110404652.1A CN113112828A (en) 2021-04-15 2021-04-15 Intersection monitoring method, device, equipment, storage medium and program product

Publications (1)

Publication Number Publication Date
CN113112828A 2021-07-13

Family

ID=76717061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110404652.1A Pending CN113112828A (en) 2021-04-15 2021-04-15 Intersection monitoring method, device, equipment, storage medium and program product

Country Status (1)

Country Link
CN (1) CN113112828A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205388826U (en) * 2016-03-09 2016-07-20 郑永春 Vehicle recognition cameras
CN107085953A (en) * 2017-06-06 2017-08-22 郑州云海信息技术有限公司 A kind of Intelligent traffic management systems and method based on cloud computing
CN107560622A (en) * 2016-07-01 2018-01-09 板牙信息科技(上海)有限公司 A kind of method and apparatus based on driving image-guidance
CN107813830A (en) * 2016-08-31 2018-03-20 法乐第(北京)网络科技有限公司 A kind of method and device for aiding in vehicle drive
CN108230670A (en) * 2016-12-22 2018-06-29 株式会社日立制作所 Predict the method and apparatus for giving the moving body number of place appearance in given time period
JP2018180912A (en) * 2017-04-12 2018-11-15 トヨタ自動車株式会社 Information collection system, vehicle, information collection method, program, and storage medium
CN109272175A (en) * 2018-11-15 2019-01-25 山东管理学院 A kind of data collection system and method based on Urban Migrant network
CN109785627A (en) * 2019-02-22 2019-05-21 魏一凡 A kind of crossroad access flux monitoring system
CN111383455A (en) * 2020-03-11 2020-07-07 上海眼控科技股份有限公司 Traffic intersection object flow statistical method, device, computer equipment and medium
CN112634611A (en) * 2020-12-15 2021-04-09 北京百度网讯科技有限公司 Method, device, equipment and storage medium for identifying road conditions

Similar Documents

Publication Publication Date Title
US8452524B2 (en) Method and device for identifying traffic-relevant information
US9008359B2 (en) Detection of static object on thoroughfare crossings
US20180211117A1 (en) On-demand artificial intelligence and roadway stewardship system
CN106611512B (en) Method, device and system for processing starting of front vehicle
US8666117B2 (en) Video-based system and method for detecting exclusion zone infractions
CN110619747A (en) Intelligent monitoring method and system for highway road
CN106571046B (en) Vehicle-road cooperative driving assisting method based on road surface grid system
US11315026B2 (en) Systems and methods for classifying driver behavior
CN111183465B (en) Adaptive traffic control using vehicle trajectory data
AU2021254525A1 (en) On-demand artificial intelligence and roadway stewardship system
Fernández-Caballero et al. Road-traffic monitoring by knowledge-driven static and dynamic image analysis
CN110032947B (en) Method and device for monitoring occurrence of event
CN108491782A (en) A kind of vehicle identification method based on driving Image Acquisition
CN109643488B (en) Traffic abnormal event detection device and method
CN104282154A (en) Vehicle overload monitoring system and method
CN110942038A (en) Traffic scene recognition method, device, medium and electronic equipment based on vision
CN111094095A (en) Automatically receiving a travel signal
CN114387785A (en) Safety management and control method and system based on intelligent highway and storable medium
CN112349087B (en) Visual data input method based on holographic perception of intersection information
CN111339996A (en) Method, device and equipment for detecting static obstacle and storage medium
CN113112828A (en) Intersection monitoring method, device, equipment, storage medium and program product
CN114693722B (en) Vehicle driving behavior detection method, detection device and detection equipment
CN115797880A (en) Method and device for determining driving behavior, storage medium and electronic device
CN115019242A (en) Abnormal event detection method and device for traffic scene and processing equipment
Robert Bringing richer information with reliability to automated traffic monitoring from the fusion of multiple cameras, inductive loops and road maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination