CN112750300B - Method and device for acquiring delay index data of a road intersection


Info

Publication number: CN112750300B
Authority
CN
China
Prior art keywords: camera, time, vehicle, shooting range, road intersection
Legal status: Active
Application number: CN201911037467.2A
Other languages: Chinese (zh)
Other versions: CN112750300A (en)
Inventor: 慎东辉
Current Assignee: Apollo Zhilian Beijing Technology Co Ltd
Original Assignee: Apollo Zhilian Beijing Technology Co Ltd
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority claimed from application CN201911037467.2A
Publication of CN112750300A
Application granted
Publication of CN112750300B

Classifications

    • G PHYSICS; G08 SIGNALLING; G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/052 Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G08G1/123 Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams

Abstract

The application discloses a method and a device for acquiring delay index data of a road intersection. The scheme is as follows: a first camera and a second camera are arranged at a traffic light of a road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction; the video from each camera is recognized in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range; the actual passing time each vehicle takes to pass through the intersection is determined from those positions and speeds; and the average delay time of the intersection is acquired from a pre-measured reference passing time and the actual passing times.

Description

Method and device for acquiring delay index data of road intersection
Technical Field
The application relates to the field of image processing and to intelligent transportation technology, and in particular to a method and a device for acquiring delay index data of a road intersection, as well as an intelligent transportation system, an electronic device, and a storage medium.
Background
The urban road intersection delay index data is a key component in an advanced traffic information management system and is an important index for evaluating intersection service level and vehicle passing efficiency.
In the related art, the measured delay of a signalized intersection is usually obtained through an on-site survey using a point-sampling method: the number of stopped vehicles on each approach lane at the intersection entrance is observed over consecutive time intervals, from which the queuing time on that approach is derived. For example, 3-4 observers are needed at each approach: one timekeeper, one or two observers, and one recorder. The observation interval is typically 15 seconds, giving four intervals per minute: 0-15 s, 15-30 s, 30-45 s, and 45-60 s.
Once observation starts, the timekeeper calls out the time every 15 seconds using a stopwatch. At each call, the first observer counts the vehicles stopped behind the approach stop line and reports them to the recorder item by item, while a second observer counts the approach traffic volume in each minute, recording stopped and non-stopped vehicles separately. Observation continues until the required sample size or the specified duration is reached. The intersection's delay index data is then calculated from the sample data using a delay-index formula.
The current problems are: the point-sampling method requires arranging substantial manpower in the real scene for early-stage data collection, and cannot provide data at signal-cycle granularity or across different time windows. The method is also very limited: the sampled results easily over-fit, diverging from what actually happens at the intersection, so the method does not generalize well to large numbers of scenes.
Disclosure of Invention
The present application aims to solve at least to some extent one of the above mentioned technical problems.
Therefore, a first object of the application is to provide a method for acquiring delay index data of a road intersection.
A second object of the application is to provide a device for acquiring delay index data of a road intersection.
A third object of the present application is to provide an intelligent transportation system.
A fourth object of the present application is to provide an electronic device.
A fifth object of the present application is to propose a storage medium.
A sixth object of the present application is to propose a computer program product.
To achieve the above objects, an embodiment of a first aspect of the present application provides a method for acquiring delay index data of a road intersection, in which a first camera and a second camera are disposed at a traffic light of the road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction. The method includes: recognizing the first camera's video in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range; recognizing the second camera's video in real time to obtain the real-time position and moving speed of each vehicle within the second camera's shooting range; determining the actual passing time each vehicle takes to pass through the intersection according to those positions and speeds; determining a pre-measured reference passing time for a vehicle passing through the intersection; and acquiring the average delay time of the intersection from the reference passing time and the actual passing times.
According to an embodiment of the application, the real-time position and moving speed of each vehicle in the shooting range of the first camera are obtained by identifying the first camera in real time, and the method comprises the following steps: carrying out vehicle identification on the video data acquired by the first camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data; and generating the three-dimensional longitude and latitude position coordinate and the moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinate of the first camera and the two-dimensional plane position coordinate of each vehicle.
According to an embodiment of the present application, determining an actual passing time used by each vehicle when passing through the road intersection according to the real-time position and moving speed of each vehicle within the first camera photographing range and the real-time position and moving speed of each vehicle within the second camera photographing range includes: calculating the time used by each vehicle when passing through the shooting range of the first camera according to the real-time position and the moving speed of each vehicle in the shooting range of the first camera; calculating the time used by each vehicle when passing through the shooting range of the second camera according to the real-time position and the moving speed of each vehicle in the shooting range of the second camera; determining the dead zone time used when each vehicle passes through the dead zone between the first camera and the second camera; and determining the actual passing time of each vehicle when passing through the road intersection according to the time of each vehicle when passing through the first camera shooting range, the time of each vehicle when passing through the second camera shooting range and the blind area time.
According to the embodiment of the application, determining the blind zone time used when each vehicle passes through the blind zone between the first camera and the second camera comprises: acquiring a first video acquired by the first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera; determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera; summing the time spent by all the slow vehicles when the slow vehicles pass through the shooting range of the first camera; and determining the ratio of the obtained sum value to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
According to one embodiment of the application, determining the pre-measured reference passing time for a vehicle passing through the road intersection includes: recognizing and analyzing the video images acquired by the first camera and the second camera within a preset time period to obtain the free-flow speeds for entering and exiting the road intersection; and determining, according to the free-flow speed, the first camera's shooting-range distance, and the second camera's shooting-range distance, the reference passing time for a vehicle passing through the intersection in a free-flow scene.
According to an embodiment of the present application, acquiring an average delay time of the road intersection based on the reference passing time and an actual passing time taken by each vehicle when passing through the road intersection includes: determining the delay time of each vehicle when passing through the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection; and acquiring the average delay time of the road intersection by using the delay time when each vehicle passes through the road intersection and the number of all vehicles passing through the road intersection.
An embodiment of a second aspect of the present application provides a device for acquiring delay index data of a road intersection, in which a first camera and a second camera are disposed at a traffic light of the road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction. The device includes: a perception and recognition module for recognizing the first camera's video in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range, and recognizing the second camera's video in real time to obtain the same for the second camera's shooting range; a first determining module for determining the actual passing time each vehicle takes to pass through the intersection according to those positions and speeds; a second determining module for determining a pre-measured reference passing time for a vehicle passing through the intersection; and a delay index data acquisition module for acquiring the average delay time of the intersection from the reference passing time and the actual passing times.
An embodiment of a third aspect of the present application provides an intelligent transportation system, including: a first camera and a second camera disposed at a traffic light of a road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction; and a server for recognizing both cameras' video in real time to obtain the real-time position and moving speed of each vehicle within each camera's shooting range, determining the actual passing time each vehicle takes to pass through the intersection according to those positions and speeds, and acquiring the average delay time of the intersection from a pre-measured reference passing time and the actual passing times.
An embodiment of a fourth aspect of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the method for obtaining delay indicator data of a road intersection according to the embodiment of the first aspect of the present application.
An embodiment of a fifth aspect of the present application provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the method for acquiring delay index data of a road intersection according to the embodiment of the first aspect of the present application.
An embodiment of a sixth aspect of the present application provides a computer program product; when the computer program is executed by a processor, the method for acquiring delay index data of a road intersection according to the embodiment of the first aspect of the present application is implemented.
One embodiment of the above application has the following advantages or benefits. A first camera and a second camera are arranged at each traffic light of the road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction. By recognizing the two cameras' video, the real-time position and moving speed of each vehicle within each camera's shooting range are obtained; from these, the actual passing time each vehicle takes to pass through the intersection is determined; and from the pre-measured reference passing time and the actual passing times, the average delay time of the intersection is acquired. Because the average delay is computed from video collected by cameras at the traffic light, no manual on-site sampling is needed at any stage, greatly saving labor cost. Moreover, camera video can be analyzed at signal-cycle granularity or over different time windows, ensuring the diversity of the data and the usability of the method.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic view of the installation positions of a first camera and a second camera according to an embodiment of the present application;
FIG. 2 is a schematic illustration according to a first embodiment of the present application;
FIG. 3 is a schematic diagram according to a second embodiment of the present application;
FIG. 4 is a schematic illustration according to a third embodiment of the present application;
FIG. 5 is a schematic illustration of a fourth embodiment according to the present application;
FIG. 6 is a schematic illustration according to a fifth embodiment of the present application;
FIG. 7 is a schematic illustration according to a sixth embodiment of the present application;
fig. 8 is a block diagram of an electronic device for implementing the method for acquiring delay indicator data at a road intersection according to the embodiment of the present application.
Detailed Description
The following description of the exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments of the application for the understanding of the same, which are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
First, it should be noted that in the embodiments of the present application a first camera and a second camera are disposed at a traffic light of a road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction. For example, as shown in fig. 1, three cameras can be arranged for one traffic-light-controlled flow direction: two second cameras facing the front-left and front-right exiting lanes, and one first camera facing backward toward the entering lane. The cameras collect real-time video of the scenes within their respective shooting ranges; recognition is performed on this video, and the average delay of the urban road intersection is calculated from the recognition results.
Example one
As shown in fig. 2, the method for acquiring delay indicator data of a road intersection may include:
Step 210: recognize the first camera's video in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range.
Optionally, vehicle identification is performed on video data acquired by the first camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data, and a three-dimensional longitude and latitude position coordinate and a moving speed of each vehicle in a shooting range of the first camera are generated according to the three-dimensional longitude and latitude position coordinate of the first camera and the two-dimensional plane position coordinate of each vehicle.
That is, a computer perception module can identify, from the video stream captured by the first camera, the two-dimensional plane coordinates of each vehicle (e.g., within a 1080x720 frame), and then convert these, together with the first camera's actual three-dimensional longitude-latitude position, into the vehicle's three-dimensional longitude-latitude position coordinates and moving speed.
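As a minimal sketch of this conversion (not the patent's implementation): it assumes the camera has been calibrated into a 3x3 homography `H` mapping image pixels onto a local ground plane in metres, and that a tracker supplies timestamped pixel positions for each vehicle. `H`, the function names, and the track format are all illustrative assumptions.

```python
# Hypothetical sketch: pixel coordinates -> ground-plane position and speed.
# H is an assumed, pre-calibrated 3x3 homography (image pixels -> metres on
# a local ground plane); the patent does not specify the conversion method.

def pixel_to_ground(H, u, v):
    """Map an image pixel (u, v) to ground-plane coordinates (x, y)."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def speed_from_track(H, track):
    """track: chronological [(t_seconds, u, v), ...] detections of one
    vehicle. Returns its average ground speed in m/s over the track."""
    pts = [(t, *pixel_to_ground(H, u, v)) for t, u, v in track]
    dist = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (_, x1, y1), (_, x2, y2) in zip(pts, pts[1:]))
    dt = pts[-1][0] - pts[0][0]
    return dist / dt if dt > 0 else 0.0
```

The ground-plane (x, y) positions can then be georeferenced into longitude-latitude coordinates using the camera's known position.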
Step 220: recognize the second camera's video in real time to obtain the real-time position and moving speed of each vehicle within the second camera's shooting range.
Optionally, vehicle identification is performed on the video data acquired by the second camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data, and a three-dimensional longitude and latitude position coordinate and a moving speed of each vehicle within a shooting range of the second camera are generated according to the three-dimensional longitude and latitude position coordinate of the second camera and the two-dimensional plane position coordinate of each vehicle.
Step 230: determine the actual passing time each vehicle takes to pass through the road intersection according to the real-time position and moving speed of each vehicle within the first camera's shooting range and within the second camera's shooting range.
Optionally, a first time taken by each vehicle to pass through the first camera's shooting range and a second time taken to pass through the second camera's shooting range are calculated from the vehicles' real-time positions and moving speeds; the blind-area time taken to pass through the blind area between the first camera and the second camera is estimated; and the actual passing time through the intersection is determined from the first time, the second time, and the blind-area time. Including an estimated blind-area time alongside the two observed transit times makes the calculation match the real situation more closely, greatly improving the accuracy of the result.
Step 240: determine a pre-measured reference passing time for a vehicle passing through the road intersection. In the embodiment of the present application, the reference passing time indicates the time a vehicle should take to pass through the intersection in a free-flow scene.
Optionally, the video images acquired by the first and second cameras within a preset time period are recognized and analyzed to obtain the free-flow speeds for entering and exiting the intersection; the reference passing time for a vehicle passing through the intersection in the free-flow scene can then be determined from the free-flow speed, the first camera's shooting-range distance, and the second camera's shooting-range distance.
As an example, the preset time period may be 5 to 7 a.m. The video images acquired by the two cameras during that window are recognized and analyzed to obtain the free-flow speeds of vehicles entering and exiting the intersection; the time a vehicle should take to pass through the intersection in the free-flow scene is then calculated from the free-flow speed and the two cameras' shooting-range distances, and that time is taken as the reference passing time.
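A minimal sketch of this reference-time computation, under the assumption that the free-flow speed is taken as the mean of the speeds observed during the low-traffic window (the patent only says the window's video is recognized and analyzed, so the averaging and all names here are illustrative):

```python
def free_flow_speed(observed_speeds_mps):
    """Free-flow speed estimated as the mean speed observed during a
    low-traffic window (e.g. 5-7 a.m.). Averaging is an assumption."""
    return sum(observed_speeds_mps) / len(observed_speeds_mps)

def reference_passing_time(v_free_mps, first_range_m, second_range_m):
    """Time a vehicle should take at the free-flow speed to cover the
    two cameras' shooting-range distances (metres), per this example."""
    return (first_range_m + second_range_m) / v_free_mps
```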
Step 250: acquire the average delay time of the road intersection according to the reference passing time and the actual passing time each vehicle takes to pass through the intersection.
In the embodiment of the application, the delay time of each vehicle passing through the intersection is determined from the reference passing time and that vehicle's actual passing time, and the average delay time of the intersection is obtained from the per-vehicle delay times and the total number of vehicles passing through the intersection.
For example, subtracting the reference passing time from each vehicle's actual passing time gives that vehicle's delay time, i.e., delay time = actual passing time - reference passing time. The per-vehicle delay times are then summed and the sum is divided by the number of all vehicles passing through the intersection; the resulting value is the intersection's average delay time, namely
average delay time = [ sum over i = 1..N of (actual passing time_i - reference passing time) ] / N, where N is the number of all vehicles passing through the road intersection.
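This averaging step can be sketched directly (function and parameter names are illustrative):

```python
def average_delay(actual_times_s, reference_time_s):
    """Average delay of the intersection: per-vehicle delay is the actual
    passing time minus the reference passing time; the average is the sum
    of these delays divided by the number of vehicles observed."""
    delays = [t - reference_time_s for t in actual_times_s]
    return sum(delays) / len(delays)
```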
With the method for acquiring delay index data of a road intersection, a first camera and a second camera are arranged at each traffic light of the road intersection, the first camera facing the entering lane of the flow direction controlled by the traffic light and the second camera facing the exiting lane of that flow direction. By recognizing the two cameras' video, the real-time position and moving speed of each vehicle within each camera's shooting range are obtained; the actual passing time each vehicle takes to pass through the intersection is determined from these; and the average delay time of the intersection is obtained from the pre-measured reference passing time and the actual passing times. Because the average delay is computed from video collected by cameras at the traffic light, no manual on-site sampling is needed, greatly saving labor cost; and camera video can reach signal-cycle granularity or different time-window granularities, ensuring the diversity of the data and the usability of the method.
Example two
Fig. 3 is a schematic diagram according to a second embodiment of the present application. As shown in fig. 3, the method for acquiring delay indicator data of a road intersection may include:
Step 310: recognize the first camera's video in real time to obtain the real-time position and moving speed of each vehicle within the first camera's shooting range.
Step 320: recognize the second camera's video in real time to obtain the real-time position and moving speed of each vehicle within the second camera's shooting range.
It should be noted that, in the embodiment of the present application, the implementation of steps 310 and 320 may refer to the description of steps 210 and 220, and is not repeated here.
Step 330: calculate the time each vehicle takes to pass through the first camera's shooting range according to its real-time position and moving speed within that range.
Optionally, the moving distance of each vehicle within the first camera's shooting range is calculated from its real-time positions, and the time taken to pass through the range is calculated from that moving distance and the vehicle's moving speed. For example, the moving distance can be computed from the position where the vehicle enters the shooting range and the position where it leaves, and the transit time then follows from that distance and the moving speed.
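A minimal sketch of this distance-over-speed step; the entry and exit positions would come from the recognition results, and the names and units here are assumptions:

```python
import math

def time_through_range(entry_pos_m, exit_pos_m, speed_mps):
    """Time to cross a camera's shooting range: straight-line distance
    between the entry and exit positions (metres), divided by the
    vehicle's average moving speed (m/s)."""
    dx = exit_pos_m[0] - entry_pos_m[0]
    dy = exit_pos_m[1] - entry_pos_m[1]
    return math.hypot(dx, dy) / speed_mps
```

The same computation applies symmetrically to the second camera's shooting range in step 340.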
Step 340: calculate the time each vehicle takes to pass through the second camera's shooting range according to its real-time position and moving speed within that range.
Optionally, the moving distance of each vehicle within the second camera's shooting range is calculated from its real-time positions, and the time taken to pass through the range is calculated from that moving distance and the vehicle's moving speed.
Step 350: determine the blind-area time each vehicle takes to pass through the blind area between the first camera and the second camera.
Optionally, for the blind area between the first camera and the second camera that cannot be identified, the time taken by the slow vehicles participating in queuing to pass through the shooting range of the first camera can be used to estimate the blind area time required for the vehicles to pass through the blind area.
As an example, as shown in fig. 4, the specific implementation process of determining the blind area time used by each vehicle to pass through the blind area between the first camera and the second camera may be as follows:
And step 410, acquiring a first video acquired by the first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera.
And step 420, determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera.
And step 430, summing the time spent by all the slow vehicles passing through the shooting range of the first camera.
And step 440, determining the ratio of the obtained sum to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
In this way, the times taken by the slow vehicles participating in queuing to pass through the shooting range of the first camera are summed, and the sum is divided by the total number of all vehicles; the resulting ratio is the blind area time used when each vehicle passes through the blind area between the first camera and the second camera. Because the blind area time is estimated from the slow vehicles that actually participate in queuing, the estimation result is closer to the real situation.
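Steps 410 to 440 reduce to a short computation; a minimal sketch (the function name and input shapes are illustrative, not the patent's) is:

```python
def blind_zone_time(slow_vehicle_times, total_vehicle_count):
    """Estimate the blind-area transit time (steps 410-440).

    slow_vehicle_times: time (s) each queued slow vehicle took to cross the
    first camera's shooting range; total_vehicle_count: the number of all
    vehicles seen by the first camera. The ratio of the sum to the count is
    taken as the blind-area time for every vehicle.
    """
    return sum(slow_vehicle_times) / total_vehicle_count

# Three queued slow vehicles taking 10 s, 12 s and 8 s, out of 6 vehicles
# total, give a blind-area time of 5 s.
print(blind_zone_time([10.0, 12.0, 8.0], 6))
```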
And step 360, determining the actual passing time used when each vehicle passes through the road intersection according to the time used when each vehicle passes through the first camera shooting range, the time used when each vehicle passes through the second camera shooting range and the blind area time.
As an example, the actual passing time of each vehicle is the time taken by each vehicle when passing through the first camera shooting range + the blind area time + the time taken by each vehicle when passing through the second camera shooting range.
In step 370, a reference transit time when the vehicle passes through the road intersection, which is measured in advance, is determined.
And 380, acquiring the average delay time of the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection.
It should be noted that, in the embodiment of the present application, the implementation process of step 370 and step 380 may refer to the description of the implementation process of step 240 and step 250, and is not described herein again.
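Steps 370 and 380 can likewise be sketched in a few lines (a hedged illustration; the helper name and its inputs are assumptions):

```python
def average_delay(actual_transit_times, reference_transit_time):
    """Steps 370-380: each vehicle's delay is its actual transit time minus
    the pre-measured reference transit time; the intersection's average delay
    divides the summed delays by the number of vehicles observed."""
    delays = [t - reference_transit_time for t in actual_transit_times]
    return sum(delays) / len(delays)

# Three vehicles taking 20 s, 30 s and 25 s against a 15 s free-flow
# reference give an average delay of 10 s.
print(average_delay([20.0, 30.0, 25.0], 15.0))
```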
The method for obtaining delay index data of a road intersection according to the embodiment of the application calculates the time taken by each vehicle to pass through the shooting range of the first camera according to the real-time position and moving speed of each vehicle in that range, calculates the time taken to pass through the shooting range of the second camera in the same way, and determines the blind area time taken by each vehicle to pass through the blind area between the first camera and the second camera. The actual passing time used when each vehicle passes through the road intersection is then determined from the time taken in the first camera shooting range, the blind area time, and the time taken in the second camera shooting range. By estimating the blind area time for the region the cameras cannot see and adding it to the two camera-range times, the calculation result is more consistent with the real situation, and the accuracy of the result is greatly improved.
Fig. 5 is a schematic diagram according to a fourth embodiment of the present application. As shown in fig. 5, the delay index data acquisition device 500 for a road intersection may include: a perception identification module 510, a first determination module 520, a second determination module 530, and a delay index data acquisition module 540.
Specifically, the perception identification module 510 is configured to obtain a real-time position and a moving speed of each vehicle within a shooting range of the first camera by performing real-time identification on the first camera, and obtain a real-time position and a moving speed of each vehicle within a shooting range of the second camera by performing real-time identification on the second camera.
As an example, the sensing and recognizing module 510 performs vehicle recognition on the video data collected by the first camera, obtains two-dimensional plane position coordinates of each vehicle in the video data, and generates three-dimensional longitude and latitude position coordinates and moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinates of the first camera and the two-dimensional plane position coordinates of each vehicle.
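The patent does not fix a particular mapping from two-dimensional image coordinates to ground positions; one common approach consistent with this description is a planar homography obtained from the camera's calibrated mounting position, with speed derived from successive projected positions. A minimal sketch follows (the homography H, the frame interval, and all names are assumptions):

```python
import math

def pixel_to_ground(H, u, v):
    """Map a pixel (u, v) to ground-plane metres with a 3x3 homography H.

    H would come from calibrating the camera against its known mounting
    position; converting ground metres to longitude/latitude would then use
    the camera's own geodetic coordinates. H here is an assumed calibration.
    """
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def speed_between_frames(p0, p1, dt_seconds):
    """Mean speed (m/s) between two projected positions dt_seconds apart."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt_seconds

# With an identity homography (a toy calibration), a track covering 10 m in
# 2 s between frames yields 5 m/s.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p0 = pixel_to_ground(identity, 0.0, 0.0)
p1 = pixel_to_ground(identity, 6.0, 8.0)
print(speed_between_frames(p0, p1, 2.0))
```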
The first determining module 520 is used for determining the actual passing time of each vehicle when passing through the road intersection according to the real-time position and moving speed of each vehicle within the first camera shooting range and the real-time position and moving speed of each vehicle within the second camera shooting range.
As an example, as shown in fig. 6, the first determining module 520 may include: a first calculation unit 521, a second calculation unit 522, a first determination unit 523, and a second determination unit 524. The first calculation unit 521 is used for calculating the time taken by each vehicle when the vehicle passes through the first camera shooting range according to the real-time position and the moving speed of each vehicle in the first camera shooting range; the second calculating unit 522 is used for calculating the time taken by each vehicle to pass through the second camera shooting range according to the real-time position and the moving speed of each vehicle in the second camera shooting range; the first determining unit 523 is configured to determine a blind area time used when each vehicle passes through a blind area between the first camera and the second camera; the second determining unit 524 is configured to determine the actual passing time taken for each vehicle to pass through the road intersection, based on the time taken for each vehicle to pass through the first camera shooting range, the time taken for each vehicle to pass through the second camera shooting range, and the blind zone time.
In the embodiment of the present application, a specific implementation process of the first determining unit 523 determining the blind area time used when each vehicle passes through the blind area between the first camera and the second camera may be as follows: acquiring a first video acquired by a first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera; determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera; summing the time spent by all the slow cars passing through the shooting range of the first camera; and determining the ratio of the obtained sum value to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
The second determination module 530 is used to determine a reference transit time when a vehicle passes through a road intersection, which is measured in advance. As an example, the second determining module 530 performs recognition analysis on video images captured by the first camera and the second camera within a preset time period to obtain a free flow speed corresponding to the entrance and exit of the road intersection, and then determines a reference passing time when the vehicle passes through the road intersection in a free flow scene according to the free flow speed, the first camera shooting range distance and the second camera shooting range distance.
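The free-flow reference time reduces to the distance covered by the camera ranges divided by the free-flow speed; a minimal sketch (the optional blind-zone length parameter is our assumption, not stated in this paragraph) is:

```python
def reference_transit_time(free_flow_speed_mps, first_range_m, second_range_m,
                           blind_zone_m=0.0):
    """Reference (free-flow) transit time for the intersection.

    free_flow_speed_mps is measured from off-peak video; the distance covered
    is the first and second camera-range distances, plus an optional
    blind-zone length (an assumption here, not given by the patent).
    """
    return (first_range_m + second_range_m + blind_zone_m) / free_flow_speed_mps

# 50 m + 40 m + 10 m covered at a 10 m/s free-flow speed: 10 s reference.
print(reference_transit_time(10.0, 50.0, 40.0, 10.0))
```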
The delay index data obtaining module 540 is configured to obtain an average delay time of the road intersection according to the reference passing time and an actual passing time used when each vehicle passes through the road intersection. As an example, the delay index data obtaining module 540 determines a delay time when each vehicle passes through the road intersection based on the reference passing time and the actual passing time used when each vehicle passes through the road intersection, and then obtains an average delay time of the road intersection using the delay time when each vehicle passes through the road intersection and the number of all vehicles passing through the road intersection.
In the device for acquiring delay index data of a road intersection according to the embodiment of the application, a first camera and a second camera are arranged at each traffic light of the road intersection, where the first camera faces the entering lane of the traffic-light-controlled flow direction and the second camera faces the exiting lane. By identifying the first camera and the second camera respectively, the real-time position and moving speed of each vehicle within each camera's shooting range are obtained; the actual passing time used when each vehicle passes through the road intersection is then determined from these positions and speeds, and the average delay time of the road intersection is obtained from the pre-measured reference passing time and the actual passing time. Because the cameras are arranged at the traffic lights and the average delay time is calculated from the video they collect, the whole process requires no manual sampling or early-stage data collection in the real scene, which greatly saves labor cost. In addition, video collection can reach the signal-cycle level or different time-window levels, which ensures the diversity of the data and the usability of the method.
Fig. 7 is a schematic diagram according to a sixth embodiment of the present application. As shown in fig. 7, the intelligent transportation system 700 may include: a first camera 710, a second camera 720, and a server 730. The first camera 710 and the second camera 720 may be disposed at a traffic light at a road intersection; the first camera 710 faces the traffic light to control the incoming lane, and the second camera 720 faces the traffic light to control the outgoing lane.
The server 730 may be configured to obtain a real-time position and a moving speed of each vehicle within a shooting range of the first camera 710 by performing real-time recognition on the first camera 710, obtain a real-time position and a moving speed of each vehicle within a shooting range of the second camera 720 by performing real-time recognition on the second camera 720, determine an actual transit time for each vehicle to pass through the road intersection according to the real-time position and the moving speed of each vehicle within the shooting range of the first camera 710 and the real-time position and the moving speed within the shooting range of the second camera 720, and obtain an average delay time for the road intersection according to a reference transit time measured in advance when the vehicle passes through the road intersection and the actual transit time for each vehicle to pass through the road intersection.
That is, the server according to the embodiment of the present application may implement the method for acquiring delay indicator data at a road intersection according to any one of the embodiments described above.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 8 is a block diagram of an electronic device for the method for acquiring delay index data of a road intersection according to the embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 8, the electronic apparatus includes: one or more processors 801, a memory 802, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, if desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 8 illustrates an example with one processor 801.
The memory 802 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by at least one processor, so that the at least one processor executes the method for acquiring the delay index data of the road intersection provided by the application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the delay index data acquisition method for a road intersection provided by the present application.
The memory 802 is a non-transitory computer-readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the delay index data acquisition method for a road intersection in the embodiment of the present application (for example, the perception identification module 510, the first determination module 520, the second determination module 530, and the delay index data acquisition module 540 shown in fig. 5). The processor 801 executes various functional applications of the server and data processing by running non-transitory software programs, instructions, and modules stored in the memory 802, that is, implements the method for obtaining delay indicator data of a road intersection in the above-described method embodiments.
The memory 802 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created from use of the electronic device of the method for acquiring delay index data of a road intersection, and the like. Further, the memory 802 may include high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 802 may optionally include memories remotely located from the processor 801, and these remote memories may be connected, via a network, to the electronic device of the method for acquiring delay index data of a road intersection. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method for acquiring delay index data of a road intersection may further include: an input device 803 and an output device 804. The processor 801, the memory 802, the input device 803, and the output device 804 may be connected by a bus or other means, as exemplified by the bus connection in fig. 8.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device of the method for acquiring delay index data of a road intersection; examples include a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, and a joystick. The output device 804 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device; when executed by a processor, the one or more computer programs implement the method for acquiring delay index data of a road intersection described in the embodiments above.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, a camera is arranged at each traffic light of the road intersection, and the average delay time of the road intersection is calculated based on the video collected by the camera. The whole process requires no manual sampling or early-stage data collection in the real scene, which greatly saves labor cost. Moreover, collecting video with the camera can reach the signal-cycle level or different time-window levels, which ensures the diversity of the data and the usability of the method.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (11)

1. A method for acquiring delay index data of a road intersection is characterized in that a first camera and a second camera are arranged at a traffic light of the road intersection, wherein the first camera faces an entering lane in the traffic light control flow direction, and the second camera faces an exiting lane in the traffic light control flow direction, and the method comprises the following steps:
the real-time position and the moving speed of each vehicle in the shooting range of the first camera are obtained by identifying the first camera in real time;
the real-time position and the moving speed of each vehicle in the shooting range of the second camera are obtained by identifying the second camera in real time;
determining the actual passing time used by each vehicle when the vehicle passes through the road intersection according to the real-time position and moving speed of each vehicle in the first camera shooting range and the real-time position and moving speed of each vehicle in the second camera shooting range;
determining a reference transit time when a vehicle passes through the road intersection, which is measured in advance;
acquiring the average delay time of the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection;
the time used when each vehicle passes through the shooting range of the first camera is calculated according to the real-time position and the moving speed of each vehicle in the shooting range of the first camera;
calculating the time used by each vehicle when passing through the shooting range of the second camera according to the real-time position and the moving speed of each vehicle in the shooting range of the second camera;
determining the dead zone time used when each vehicle passes through the dead zone between the first camera and the second camera;
determining the actual passing time of each vehicle when passing through the road intersection according to the time taken by each vehicle when passing through the first camera shooting range, the time taken when passing through the second camera shooting range and the blind area time;
determining a blind zone time for each vehicle to pass through a blind zone between the first camera and the second camera, comprising:
acquiring a first video acquired by the first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera;
determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera;
summing the time spent by all the slow vehicles passing through the shooting range of the first camera;
and determining the ratio of the obtained sum to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
2. The method of claim 1, wherein the obtaining of the real-time position and moving speed of each vehicle within the shooting range of the first camera by real-time identification of the first camera comprises:
carrying out vehicle identification on the video data acquired by the first camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data;
and generating the three-dimensional longitude and latitude position coordinate and the moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinate of the first camera and the two-dimensional plane position coordinate of each vehicle.
3. The method of claim 1, wherein determining a pre-measured reference transit time for a vehicle to pass through the road intersection comprises:
identifying and analyzing video images acquired by the first camera and the second camera within a preset time period to obtain the free flow speed of the corresponding driving-in and driving-out of the road intersection;
and determining the reference passing time when the vehicle passes through the road intersection under the free flow scene according to the free flow speed, the first camera shooting range distance and the second camera shooting range distance.
4. The method according to any one of claims 1 to 3, wherein obtaining the average delay time of the road intersection based on the reference transit time and an actual transit time taken by each vehicle when passing the road intersection includes:
determining the delay time of each vehicle when passing through the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection;
and acquiring the average delay time of the road intersection by using the delay time when each vehicle passes through the road intersection and the number of all vehicles passing through the road intersection.
5. A device for acquiring delay index data of a road intersection, characterized in that a first camera and a second camera are arranged at a traffic light of the road intersection, wherein the first camera faces an entering lane in the traffic light control flow direction, and the second camera faces an exiting lane in the traffic light control flow direction, and the device comprises:
the perception identification module is used for identifying the first camera in real time to obtain the real-time position and the moving speed of each vehicle within the shooting range of the first camera, and identifying the second camera in real time to obtain the real-time position and the moving speed of each vehicle within the shooting range of the second camera;
the first determining module is used for determining the actual passing time used by each vehicle when the vehicle passes through the road intersection according to the real-time position and the moving speed of each vehicle in the first camera shooting range and the real-time position and the moving speed of each vehicle in the second camera shooting range;
the second determination module is used for determining the reference passing time when the vehicle passes through the road intersection, which is measured in advance;
the delay index data acquisition module is used for acquiring the average delay time of the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection;
the first determining module includes:
the first calculation unit is used for calculating the time taken by each vehicle when the vehicle passes through the shooting range of the first camera according to the real-time position and the moving speed of the vehicle in the shooting range of the first camera;
the second calculation unit is used for calculating the time taken by each vehicle when the vehicle passes through the shooting range of the second camera according to the real-time position and the moving speed of each vehicle in the shooting range of the second camera;
a first determination unit configured to determine a blind area time used when each vehicle passes through a blind area between the first camera and the second camera;
a second determining unit configured to determine an actual passing time taken for each vehicle to pass through the road intersection, based on the time taken for each vehicle to pass through the first camera shooting range, the time taken for each vehicle to pass through the second camera shooting range, and the blind area time;
the first determining unit is specifically configured to:
acquiring a first video acquired by the first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera;
determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera;
summing the time spent by all the slow vehicles passing through the shooting range of the first camera;
and determining the ratio of the obtained sum to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
6. The apparatus of claim 5, wherein the perceptual identification module is specifically configured to:
carrying out vehicle identification on the video data acquired by the first camera to obtain a two-dimensional plane position coordinate of each vehicle in the video data;
and generating the three-dimensional longitude and latitude position coordinate and the moving speed of each vehicle according to the three-dimensional longitude and latitude position coordinate of the first camera and the two-dimensional plane position coordinate of each vehicle.
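The claim does not specify how the camera's position and the per-vehicle pixel coordinates are combined. One common approach, offered here purely as an illustrative assumption, is a precomputed ground-plane homography that maps image pixels to geographic coordinates:

```python
import numpy as np


def pixel_to_lonlat(pixel_xy, homography):
    """Map a 2-D pixel coordinate onto ground-plane (lon, lat) via a
    3x3 homography. The homography is a hypothetical calibration
    artifact, e.g. fitted from surveyed reference points; the claim
    itself does not name this method."""
    u, v = pixel_xy
    p = homography @ np.array([u, v, 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

With such a mapping, the moving speed follows from successive positions divided by the frame interval.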
7. The apparatus of claim 5, wherein the second determining module is specifically configured to:
identifying and analyzing video images acquired by the first camera and the second camera within a preset time period to obtain the free-flow speeds for entering and exiting the road intersection;
and determining the reference passing time when the vehicle passes through the road intersection under the free flow scene according to the free flow speed, the first camera shooting range distance and the second camera shooting range distance.
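The free-flow reference time in claim 7 reduces to covered distance over free-flow speed. A minimal sketch, assuming a single combined free-flow speed and ignoring any blind-area gap (the claim names only the two shooting-range distances):

```python
def reference_passing_time(free_flow_speed, first_range_dist, second_range_dist):
    """Reference (free-flow) passing time: the total distance covered by
    the two camera shooting ranges divided by the free-flow speed.
    Distances in metres, speed in m/s, result in seconds (units are an
    assumption, not stated in the claim)."""
    return (first_range_dist + second_range_dist) / free_flow_speed
```

For instance, shooting ranges of 30 m and 20 m at a free-flow speed of 10 m/s give a 5 s reference passing time.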
8. The apparatus according to any one of claims 5 to 7, wherein the delay indicator data obtaining module is specifically configured to:
determining the delay time of each vehicle when passing through the road intersection according to the reference passing time and the actual passing time used when each vehicle passes through the road intersection;
and acquiring the average delay time of the road intersection by using the delay time when each vehicle passes through the road intersection and the number of all vehicles passing through the road intersection.
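Claim 8's averaging step can be sketched directly: each vehicle's delay is its actual passing time minus the reference passing time, and the intersection's average delay is the mean over all vehicles. The function name and list representation are illustrative assumptions:

```python
def average_delay(reference_time, actual_times):
    """Average delay of the intersection: per-vehicle delay is
    (actual passing time - reference passing time); the result is the
    sum of delays divided by the number of vehicles observed."""
    delays = [t - reference_time for t in actual_times]
    return sum(delays) / len(actual_times)
```

For example, with a 5 s reference time and two vehicles taking 6 s and 8 s, the average delay is 2 s.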
9. An intelligent transportation system, comprising:
the system comprises a first camera and a second camera which are arranged at a traffic light of a road intersection, the first camera facing the entry lane of the flow direction controlled by the traffic light, and the second camera facing the exit lane of that flow direction;
a server configured to: identify video from the first camera in real time to obtain the real-time position and moving speed of each vehicle in the shooting range of the first camera, and identify video from the second camera in real time to obtain the real-time position and moving speed of each vehicle in the shooting range of the second camera; determine the actual passing time of each vehicle through the road intersection according to the real-time positions and moving speeds of the vehicles in the first camera shooting range and in the second camera shooting range; and acquire the average delay time of the road intersection according to a pre-measured reference passing time for passing through the road intersection and the actual passing time of each vehicle;
wherein determining the actual passing time comprises: calculating the time used by each vehicle when passing through the shooting range of the first camera according to the real-time position and the moving speed of each vehicle in the shooting range of the first camera;
calculating the time used by each vehicle when passing through the shooting range of the second camera according to the real-time position and the moving speed of each vehicle in the shooting range of the second camera;
determining the dead zone time used when each vehicle passes through the dead zone between the first camera and the second camera;
determining the actual passing time of each vehicle when passing through the road intersection according to the time taken by each vehicle when passing through the first camera shooting range, the time taken when passing through the second camera shooting range and the blind area time;
determining the dead zone time used when each vehicle passes through the dead zone between the first camera and the second camera, including:
acquiring a first video acquired by the first camera in real time, and identifying the first video to determine the number of all vehicles passing through the shooting range of the first camera;
determining all slow vehicles in each vehicle according to the moving speed of each vehicle in the shooting range of the first camera;
summing the time spent by all the slow vehicles passing through the shooting range of the first camera;
and determining the ratio of the obtained sum to the number of all vehicles as the blind area time used when each vehicle passes through the blind area between the first camera and the second camera.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of obtaining delay indicator data for a road intersection as claimed in any one of claims 1 to 4.
11. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the delay indicator data acquisition method for a road intersection according to any one of claims 1 to 4.
CN201911037467.2A 2019-10-29 2019-10-29 Method and device for acquiring delay index data of road intersection Active CN112750300B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911037467.2A CN112750300B (en) 2019-10-29 2019-10-29 Method and device for acquiring delay index data of road intersection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911037467.2A CN112750300B (en) 2019-10-29 2019-10-29 Method and device for acquiring delay index data of road intersection

Publications (2)

Publication Number Publication Date
CN112750300A CN112750300A (en) 2021-05-04
CN112750300B true CN112750300B (en) 2022-09-27

Family

ID=75640564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911037467.2A Active CN112750300B (en) 2019-10-29 2019-10-29 Method and device for acquiring delay index data of road intersection

Country Status (1)

Country Link
CN (1) CN112750300B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311943B (en) * 2023-03-24 2023-12-26 阿波罗智联(北京)科技有限公司 Method and device for estimating average delay time of intersection

Citations (6)

Publication number Priority date Publication date Assignee Title
CN1868212A (en) * 2003-09-24 2006-11-22 杨伯翰大学 Automated estimation of average stopped delay at signalized intersections
CN101777259A (en) * 2010-01-22 2010-07-14 同济大学 Method for acquiring mean delay of urban road junction
CN104751650A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Method and equipment for controlling road traffic signals
KR20160105255A (en) * 2015-02-27 2016-09-06 아주대학교산학협력단 Smart traffic light control apparatus and method for preventing traffic accident
CN107331169A (en) * 2017-09-01 2017-11-07 山东创飞客交通科技有限公司 Urban road intersection signal time distributing conception evaluation method and system under saturation state
CN108615376A (en) * 2018-05-28 2018-10-02 安徽科力信息产业有限责任公司 A kind of integrative design intersection schemes evaluation method based on video detection

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN101364347A (en) * 2008-09-17 2009-02-11 同济大学 Detection method for vehicle delay control on crossing based on video
CN202841374U (en) * 2012-08-21 2013-03-27 深圳市科为时代科技有限公司 Double camera panoramic camera
CN104750963B (en) * 2013-12-31 2017-12-01 中国移动通信集团公司 Intersection delay duration method of estimation and device
CN104900070B (en) * 2015-05-18 2018-08-24 东莞理工学院 A kind of modeling of intersection wagon flow and self-adaptation control method and system
CN108074403A (en) * 2018-01-18 2018-05-25 徐晓音 Control method and system that the traffic lights of crossing vehicle congestion is shown are prevented based on bidirectional camera shooting head
CN109003442B (en) * 2018-06-22 2020-08-21 安徽科力信息产业有限责任公司 Road delay time calculation and traffic jam situation determination method and system

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN1868212A (en) * 2003-09-24 2006-11-22 杨伯翰大学 Automated estimation of average stopped delay at signalized intersections
CN101777259A (en) * 2010-01-22 2010-07-14 同济大学 Method for acquiring mean delay of urban road junction
CN104751650A (en) * 2013-12-31 2015-07-01 中国移动通信集团公司 Method and equipment for controlling road traffic signals
KR20160105255A (en) * 2015-02-27 2016-09-06 아주대학교산학협력단 Smart traffic light control apparatus and method for preventing traffic accident
CN107331169A (en) * 2017-09-01 2017-11-07 山东创飞客交通科技有限公司 Urban road intersection signal time distributing conception evaluation method and system under saturation state
CN108615376A (en) * 2018-05-28 2018-10-02 安徽科力信息产业有限责任公司 A kind of integrative design intersection schemes evaluation method based on video detection

Non-Patent Citations (2)

Title
Error statistics of intersection control-delay extraction in a dual-camera environment; Zhang Huiling et al.; Journal of Shijiazhuang Tiedao University (Natural Science Edition); 2010-12-25 (No. 04); full text *
Acquisition method of control delay at signalized intersections in a surveillance environment; Zhang Huiling et al.; Journal of Beijing Jiaotong University; 2010-12-15 (No. 06); full text *

Also Published As

Publication number Publication date
CN112750300A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
JP7480823B2 (en) Information processing device, information processing method, and program
CN113593262B (en) Traffic signal control method, traffic signal control device, computer equipment and storage medium
US10180326B2 (en) Staying state analysis device, staying state analysis system and staying state analysis method
CN111815675B (en) Target object tracking method and device, electronic equipment and storage medium
CN110910665B (en) Signal lamp control method and device and computer equipment
Khan et al. Unmanned aerial vehicle-based traffic analysis: A case study to analyze traffic streams at urban roundabouts
EP2913798A2 (en) Method, device and system for obtaining the number of persons
CN112041848A (en) People counting and tracking system and method
CN112101339B (en) Map interest point information acquisition method and device, electronic equipment and storage medium
US20150169954A1 (en) Image processing to derive movement characteristics for a plurality of queue objects
CN112132113A (en) Vehicle re-identification method and device, training method and electronic equipment
CN106295598A (en) A kind of across photographic head method for tracking target and device
CN113538911B (en) Intersection distance detection method and device, electronic equipment and storage medium
EP2709058A1 (en) Calibration of camera-based surveillance systems
CN110796865B (en) Intelligent traffic control method and device, electronic equipment and storage medium
CN113011323B (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
JP7200207B2 (en) Map generation method, map generation device, electronic device, non-transitory computer-readable storage medium and computer program
CN111693064A (en) Road condition information processing method, device, equipment and medium
CN110706258A (en) Object tracking method and device
CN112750300B (en) Method and device for acquiring delay index data of road intersection
CN111339877B (en) Method and device for detecting length of blind area, electronic equipment and storage medium
CN111601013B (en) Method and apparatus for processing video frames
CN111291681A (en) Method, device and equipment for detecting lane line change information
CN110796864B (en) Intelligent traffic control method, device, electronic equipment and storage medium
CN112735147B (en) Method and device for acquiring delay index data of road intersection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211014

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant