CN111127877A - Road condition information monitoring method and device - Google Patents

Road condition information monitoring method and device

Info

Publication number
CN111127877A
Authority
CN
China
Prior art keywords
vehicle
lane
road
preset area
vehicles
Legal status
Pending
Application number
CN201911133735.0A
Other languages
Chinese (zh)
Inventor
谢才东
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201911133735.0A
Publication of CN111127877A
Priority to PCT/CN2020/097888 (published as WO2021098211A1)

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, of vehicle lights or traffic lights

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method and a device for monitoring road condition information are provided. The method first obtains a video of a road intersection, then determines from the video the vehicles traveling on each lane and the driving parameters of each vehicle in each video frame, such as the position, driving speed, or driving direction of the vehicle, and accordingly obtains the road condition information of the road from the number of vehicles traveling on each lane or the driving parameters of those vehicles. Only the video of the road intersection needs to be acquired and processed to obtain the road condition information; no physical coil is needed, so the road surface is not damaged, no physical device can be crushed, and the cost of monitoring the road condition can be reduced.

Description

Road condition information monitoring method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for monitoring road condition information.
Background
With the popularization of vehicles, more and more vehicles run on roads, and therefore, the road condition of the roads needs to be monitored so as to regulate and control the vehicles running on the roads in time. For example, when the road condition of the road is monitored to be in a congestion state, the number of vehicles entering the road can be reduced by increasing the duration of the red light of the traffic signal lamp at the road entrance.
Taking the road condition as the traffic flow of the road as an example, one way to monitor the road condition is: a physical coil is laid at the entrance of a road, when a vehicle passes through the physical coil, the inductance value of the physical coil can be changed, the frequency of an oscillating circuit in a detector connected with the physical coil is further changed, and the vehicle flow entering the road can be judged by counting the frequency change times of the oscillating circuit.
However, laying a physical coil at the entrance of a road damages the road surface, and the physical coil is easily damaged by being crushed by a vehicle many times, resulting in a high cost of the method.
Disclosure of Invention
The application provides a road condition information monitoring method and device, which are used for reducing the cost of monitoring road conditions.
In a first aspect, a method for monitoring road condition information is provided, in which a video of a road intersection is first obtained, the video includes a plurality of video frames, and then, a driving parameter of a vehicle included in each video frame is obtained according to the video, the driving parameter of each vehicle at least includes a position of the vehicle, and may further include a driving speed and/or a driving direction. Because the road may include a plurality of lanes, after the driving parameters of each vehicle are obtained, the lane where the vehicle is located is determined according to the position of the vehicle, and the road condition information of the road is obtained according to the number of vehicles driving on each lane or the driving parameters of the vehicles driving on each lane.
According to the above technical scheme, only the video of the road intersection needs to be acquired and processed to obtain the road condition information of the road; no physical coil needs to be laid, so the road surface is not damaged, no physical device can be crushed, and the cost of monitoring the road condition of the road can be reduced.
In one possible design, a coordinate range of a preset area of each lane of a plurality of lanes included in the road is obtained first, and then a first preset area to which the vehicle belongs is determined according to the position of the vehicle and the coordinate range of the preset area of each lane, so that the lane corresponding to the first preset area is determined as the lane to which the vehicle belongs.
According to the above technical scheme, the lane where the vehicle is located is determined by comparing the position of the vehicle with the coordinate range of the preset area of each lane, which is simple and easy to implement.
In one possible design, the road condition information includes at least one of the traffic flow of the road, the headway distance between vehicles on the road, and whether there is a wrong-way vehicle on the road.
In the above technical solution, a plurality of parameters for representing the traffic information may be obtained, and the above parameters are only examples and are not limited in the embodiment of the present application.
In one possible design, obtaining the traffic flow of the road according to the number of vehicles traveling on each lane may include, but is not limited to, the following:
acquiring the entrance traffic flow of the road according to a first total number of vehicles traveling on at least one entrance lane; and/or acquiring the exit traffic flow of the road according to a second total number of vehicles traveling on at least one exit lane.
In the above technical scheme, after the number of vehicles traveling on each lane is obtained, the entrance traffic flow and the exit traffic flow of the road can further be obtained according to the type of each lane, that is, whether the lane is an entrance lane or an exit lane.
In one possible design, obtaining the headway distance between vehicles on the road according to the driving parameters of the vehicles traveling on each lane may include, but is not limited to, the following way:
determining the headway distance according to the time difference between the moments at which two adjacent vehicles on each lane enter the preset area of the lane and the driving speed of the first vehicle, that is, the one of the two vehicles that enters the preset area of the lane later.
In one possible design, determining whether there is a wrong-way vehicle on the road according to the driving parameters of the vehicles traveling on each lane may include, but is not limited to, the following ways:
the first mode is as follows:
if the first lane where the vehicle is located is determined to be an entrance lane, the first lane is provided with two preset areas, namely a first preset area close to the intersection stop line of the first lane and a second preset area far away from the intersection stop line. In that case, whether the vehicle is located in the first preset area at a first moment is determined according to the position of the vehicle. If so, whether the vehicle is located in the second preset area at a second moment is determined, where the time interval between the first moment and the second moment is less than or equal to a preset duration; in other words, it is checked whether the vehicle enters the second preset area within the preset duration after the first moment. If so, it is determined that there is a wrong-way vehicle in the first lane.
The second mode is as follows:
if the second lane where the vehicle is located is determined to be an exit lane, the second lane is provided with two preset areas, namely a third preset area far away from the intersection stop line of the second lane and a fourth preset area close to the intersection stop line. In that case, whether the vehicle is located in the third preset area at a third moment is determined according to the position of the vehicle. If so, whether the vehicle is located in the fourth preset area at a fourth moment is determined, where the time interval between the third moment and the fourth moment is less than or equal to a preset duration; in other words, it is checked whether the vehicle enters the fourth preset area within the preset duration after the third moment. If so, it is determined that there is a wrong-way vehicle in the second lane.
In the above technical scheme, two preset areas are provided on each lane, so that whether there is a wrong-way vehicle on the lane can be monitored, which enriches the road condition information that can be monitored.
In one possible design, the method may further include obtaining radar data of the road intersection, the radar data including a plurality of sets of driving parameters. The position of the vehicle in each video frame may then be determined from the obtained video, and the driving parameters corresponding to each vehicle may be determined from the plurality of sets of driving parameters according to the position of the vehicle in each video frame and the acquisition time of each video frame.
In the above technical scheme, the driving parameters of the vehicles can be acquired by the radar and then matched to the vehicles in each video frame, so that the driving parameters of each vehicle are obtained. This reduces the processing load, and because radar data is accurate, the accuracy of the obtained road condition information can be improved.
A second aspect provides a traffic information monitoring device, where the traffic information monitoring device includes an obtaining unit and a processing unit, and the units may execute corresponding functions executed in any of the design examples of the first aspect, specifically:
the system comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a video of a road intersection, and the video comprises a plurality of video frames;
the processing unit is used for acquiring the driving parameters of the vehicle included in each video frame according to the video, wherein the driving parameters of the vehicle include the driving speed and/or the driving direction and the position of the vehicle; determining a lane in which the vehicle is located according to the position of the vehicle, wherein the road comprises a plurality of lanes; and acquiring the road condition information of the road according to the number of the vehicles running on each lane or the running parameters of the vehicles running on each lane.
In one possible design, the processing unit is specifically configured to:
acquiring a coordinate range of a preset area of each lane in the plurality of lanes;
determining a first preset area to which the vehicle belongs according to the position of the vehicle and the coordinate range of the preset area of each lane;
and determining the lane corresponding to the first preset area as the lane where the vehicle is located.
In one possible design, the road condition information includes at least one of the traffic flow of the road, the headway distance between vehicles on the road, and whether there is a wrong-way vehicle on the road.
In one possible design, the road condition information is the traffic flow of the road, and the processing unit is specifically configured to:
acquire the entrance traffic flow of the road according to a first total number of vehicles traveling on at least one entrance lane; and/or acquire the exit traffic flow of the road according to a second total number of vehicles traveling on at least one exit lane.
In one possible design, the road condition information is the headway distance between two adjacent vehicles on the road, and the processing unit is specifically configured to:
determine the headway distance according to the time difference between the moments at which two adjacent vehicles on each lane enter a preset area of the lane and the driving speed of a first vehicle of the two vehicles, where the first vehicle is the one that enters the preset area of the lane later.
In a possible design, the road condition information is whether there is a wrong-way vehicle on the road, and the processing unit is specifically configured to:
determining that a first lane where a vehicle is located is an entrance lane;
determining whether the vehicle is located in a first preset area of a first lane at a first moment according to the position of the vehicle, wherein the first lane comprises the first preset area and a second preset area, the first preset area is an area close to an intersection stop line of the first lane, and the second preset area is an area far away from the intersection stop line;
if so, determining whether the vehicle is located in the second preset area at a second moment, wherein the time interval between the first moment and the second moment is less than or equal to a preset time length;
if so, determining that there is a wrong-way vehicle in the first lane.
In a possible design, the road condition information is whether there is a wrong-way vehicle on the road, and the processing unit is specifically configured to:
determining that a second lane where the vehicle is located is an exit lane;
determining whether the vehicle is located in a third preset area of the second lane at a third moment according to the position of the vehicle, wherein the second lane comprises the third preset area and a fourth preset area, the third preset area is an area far away from an intersection stop line of the second lane, and the fourth preset area is an area close to the intersection stop line;
if so, determining whether the vehicle is located in a fourth preset area at a fourth moment, wherein the time interval between the third moment and the fourth moment is less than or equal to a preset time length;
if so, determining that there is a wrong-way vehicle in the second lane.
In one possible design, the obtaining unit is further configured to:
acquiring radar data of a road intersection, wherein the radar data comprises multiple groups of driving parameters;
the processing unit is further to:
determining the position of the vehicle in each video frame; and determining the driving parameters corresponding to each vehicle from the multiple groups of driving parameters according to the position of the vehicle in each video frame and the acquisition time of each video frame.
In a third aspect, a traffic information monitoring device is provided, which includes at least one processor coupled to at least one memory; the at least one processor is configured to execute the computer program or instructions stored in the at least one memory to cause the apparatus to perform the method described in the first aspect above.
In one possible design, the at least one processor, when executing the computer program or instructions stored in the at least one memory, performs the steps of:
acquiring a video of a road intersection, wherein the video comprises a plurality of video frames;
acquiring driving parameters of a vehicle included in each video frame according to the video, wherein the driving parameters of the vehicle include driving speed and/or driving direction and position of the vehicle;
determining a lane in which the vehicle is located according to the position of the vehicle, wherein the road comprises a plurality of lanes;
and acquiring the road condition information of the road according to the number of the vehicles running on each lane or the running parameters of the vehicles running on each lane.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program comprising program instructions that, when executed by a computer, cause the computer to perform the method of any one of the first aspects.
In a fifth aspect, the present application provides a computer program product, which stores a computer program, the computer program comprising program instructions, which, when executed by a computer, cause the computer to perform the method of any one of the first aspect.
In a sixth aspect, the present application provides a chip system, which includes a processor and may further include a memory, and is configured to implement the method of the first aspect. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
For the advantageous effects of the second to sixth aspects and implementations thereof, reference may be made to the description of the advantageous effects of the method of the first aspect and implementations thereof.
Drawings
FIG. 1 is a schematic diagram of an example of an application scenario to which the present application is directed;
FIG. 2 is a block diagram of an exemplary monitoring system provided herein;
fig. 3 is a flowchart of a method for monitoring road condition information provided in the present application;
FIG. 4A is a schematic diagram of one example of obtaining a position of a vehicle via radar as provided herein;
FIG. 4B is a schematic diagram of one example of obtaining a position of a vehicle via video only as provided herein;
FIG. 4C is a schematic diagram illustrating an example of obtaining the driving speed of a vehicle through video only, according to the present disclosure;
fig. 5 is a schematic diagram of an example of arranging a virtual coil on a lane in the present application;
fig. 6 is a schematic diagram of another example of arranging a virtual coil on a lane in the present application;
FIG. 7 is a flow chart illustrating the method of determining whether there is a vehicle traveling in a wrong direction on a road according to the present application;
fig. 8 is a schematic structural diagram of an example of the road condition information monitoring device provided in the present application;
fig. 9 is a schematic structural diagram of another example of the road condition information monitoring device provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
In the embodiments of the present application, "a plurality" means two or more, and in view of this, "a plurality" may also be understood as "at least two". "At least one" is to be understood as meaning one or more, for example one, two, or more. For example, "including at least one" means including one, two, or more, without limiting which ones are included; for example, if at least one of A, B, and C is included, then what is included may be A, B, C, A and B, A and C, B and C, or A and B and C. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the character "/" generally indicates that the preceding and following related objects are in an "or" relationship, unless otherwise specified.
Unless stated to the contrary, the embodiments of the present application refer to the ordinal numbers "first", "second", etc., for distinguishing between a plurality of objects, and do not limit the sequence, timing, priority, or importance of the plurality of objects.
First, technical features related to the present application will be explained.
The method in the embodiment of the present application may be applied to a road monitoring scenario, for example, monitoring the traffic flow of the road A at the intersection shown in fig. 1, or to other road monitoring scenarios, which are not listed here one by one. Please refer to fig. 1, which is a schematic diagram illustrating a prior-art method for measuring the traffic flow of a road. In fig. 1, a physical coil is laid on the ground of road A; the physical coil is connected to a coil detector and, together with a capacitive device inside the coil detector, constitutes an oscillation circuit. When no vehicle passes over the physical coil in road A, the inductance value of the physical coil is L1, and the oscillation frequency of the oscillation circuit satisfies the following formula:
$$f_1 = \frac{1}{2\pi\sqrt{L_1 C}}$$
where f1 is the oscillation frequency of the oscillation circuit when no vehicle is present and C is the capacitance value of the capacitive device of the coil detector.
When a vehicle passes over the physical coil, the inductance value of the physical coil changes from L1 to L2, so that the oscillation frequency of the oscillation circuit becomes:
$$f_2 = \frac{1}{2\pi\sqrt{L_2 C}}$$
Therefore, the coil detector may determine whether a vehicle passes through road A by detecting the variation of the oscillation frequency of the oscillation circuit, for example, when the variation of the oscillation frequency is greater than or equal to Δf, where Δf = |f1 − f2|. The number of times the oscillation frequency variation is greater than or equal to Δf within a preset duration is then counted, so as to obtain the traffic flow of road A.
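For illustration, the following is a minimal sketch of this counting principle; the inductance and capacitance values are hypothetical, since no concrete component values are given here:

```python
import math

def oscillation_frequency(L, C):
    # Standard LC oscillator formula: f = 1 / (2 * pi * sqrt(L * C)).
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

C = 1e-6     # capacitance of the detector, in farads (hypothetical value)
L1 = 1.0e-4  # coil inductance with no vehicle present, in henries (hypothetical)
L2 = 0.9e-4  # coil inductance while a vehicle passes (hypothetical)

f1 = oscillation_frequency(L1, C)
f2 = oscillation_frequency(L2, C)
delta_f = abs(f1 - f2)

# A vehicle is counted each time the measured frequency shift reaches delta_f;
# the count within a preset duration gives the traffic flow of road A.
print(f"f1 = {f1:.1f} Hz, f2 = {f2:.1f} Hz, delta_f = {delta_f:.1f} Hz")
```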
However, the above method for obtaining the traffic flow of a road requires a physical coil to be laid in the road; laying the coil damages the road surface, and the coil itself is easily crushed by vehicles passing over it many times, resulting in a high cost of the method.
In view of this, the present application provides a road condition monitoring system. Please refer to fig. 2, which is a schematic structural diagram of the monitoring system. As shown in fig. 2, the monitoring system includes a shooting device 21 and a processing device 22, wherein the shooting device 21 is configured to shoot a video of a monitored road intersection, the video including a vehicle traveling on the road, and then transmit the obtained video to the processing device 22.
The photographing device 21 may be a camera that acquires a video of the monitored road intersection at a preset acquisition frequency. Alternatively, the photographing device 21 may be a traffic-enforcement (e-police) camera, which can acquire the position information and attribute information of each vehicle in each video frame at the acquisition time, where the attribute information may be a license plate number, a vehicle type, a color, or the like. Alternatively, the photographing device 21 may be a combination of a camera and a radar; the radar can detect whether there is a vehicle on the road by scanning a beam and, when a vehicle is detected, can detect the traveling speed, traveling direction, and position information of the vehicle at that time. The position information may be understood as spatial coordinates based on a world coordinate system or a geocentric coordinate system, for example, World Geodetic System 1984 (WGS-84) coordinates. The specific form of the photographing device 21 is not limited in the embodiment of the present application.
When the processing device 22 receives the video and/or radar data of the road intersection sent by the shooting device 21, it can obtain the driving parameters of the vehicles included therein, where the driving parameters may include the driving speed and/or driving direction, the position information of the vehicle, and the like. Then, the lane where each vehicle is located is determined according to the position information of the vehicle, for example, whether the vehicle is on an entrance lane or an exit lane of the road, so that the road condition information of the road is obtained according to the driving parameters of the vehicles traveling on each lane.
The processing device 22 may be a smart terminal box, a server cluster, or a cloud server, among others.
In the monitoring system described in fig. 2, it is described that the monitoring system includes the shooting device 21 and the processing device 22 as an example, it is understood that the monitoring system may further include other devices, for example, a video forwarding device for forwarding a video acquired by the shooting device 21 may also be included. In addition, the number of the photographing devices 21 and the processing devices 22 in the monitoring system is not limited, and for example, one photographing device 21 and one processing device 22 may be included in the monitoring system, or a plurality of photographing devices 21 and one processing device 22 may be included, or a plurality of photographing devices 21 and a plurality of processing devices 22 may also be included.
The following describes the monitoring method of the traffic information provided by the present application with reference to the monitoring system shown in fig. 2.
Please refer to fig. 3, which is a flowchart of a method for monitoring road condition information provided in the present application, and the flowchart is described as follows:
s31, the photographing device 21 acquires the monitoring information of the monitored road.
The photographing device 21 may be provided at a road entrance of a monitored road, which may be a road B as shown in fig. 4A. The photographing device 21 may collect monitoring information of the monitored road at a preset frequency, for example, a frequency of 20 frames/second.
As an example, the shooting device 21 only includes a camera, and the monitoring information may be a video frame captured by the camera at each capturing moment, so as to obtain a video of the monitored road in the monitoring period, and the video may include a plurality of video frames. As another example, the photographing device 21 includes a camera and a radar, and the monitoring information may be a video frame acquired by the camera at each acquisition time, and a plurality of sets of driving parameters of the vehicle detected by the radar at each acquisition time, each set of driving parameters including information such as a driving speed and/or a driving direction, and a position of the vehicle.
Next, a description will be given of a manner in which, when the photographing device 21 includes a radar, the radar acquires the traveling speed and/or traveling direction of the vehicle at each acquisition time, and the position of the vehicle.
For the position of the vehicle:
as an example, coordinate information of the radar itself is stored in advance, and then, the distance and direction of each vehicle from the radar are determined according to the scanning beam in the radar, so that the position of each vehicle is determined. For example, the monitored road is a road B shown in fig. 4A, the radar is arranged on one side of the road B, the coordinate of the position where the radar is located is (250000,420000), the radar determines that the distance between the vehicle running on the road B and the radar is 10 meters by scanning beams, the included angle between the connecting line of the vehicle and the radar and the horizontal line of the position where the radar is located is 53.13 °, and therefore the radar can calculate that the horizontal distance between the vehicle and the radar is 6 meters, the vertical distance is 8 meters, and the coordinate of the vehicle at the current acquisition time is (250006,420008).
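The following is a minimal sketch of this coordinate calculation, assuming the angle is measured against the horizontal line through the radar; the function and variable names are illustrative, not taken from the application:

```python
import math

def vehicle_position(radar_xy, distance_m, angle_deg):
    # Decompose the radar-to-vehicle distance into horizontal and vertical
    # offsets and add them to the radar's own stored coordinates.
    dx = distance_m * math.cos(math.radians(angle_deg))
    dy = distance_m * math.sin(math.radians(angle_deg))
    return (radar_xy[0] + dx, radar_xy[1] + dy)

# Values from the example above: 10 m away at 53.13 degrees -> offsets (6, 8).
print(vehicle_position((250000, 420000), 10, 53.13))  # ~(250006.0, 420008.0)
```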
For the running speed:
the radar can determine the running speed of the vehicle according to the acquired positions of the vehicle at two adjacent acquisition moments and the time interval of the two acquisition moments. For example, the coordinate information of the radar vehicle at the first acquisition time and the second acquisition time are (250006,420008) and (250010,420008), respectively, and assuming that the radar transmits a scanning beam at a frequency of 20 frames/second, the acquisition time interval of the first acquisition time and the second acquisition time is 1/20-0.05 second, so that the traveling speed of the vehicle is determined to be 4/0.05-80 m/s.
For the direction of travel:
the radar can determine the driving direction of the vehicle according to the positions of the vehicle at two adjacent acquisition moments. In the above example, the vehicle moves along the horizontal axis at the adjacent acquisition time, and the value of the coordinate of the horizontal axis increases, and therefore, the traveling direction of the vehicle is determined as traveling rightward along the horizontal axis.
Of course, the radar may also obtain the driving parameters of each vehicle in the video frame in other manners, or the radar may also obtain other driving parameters, which is not limited herein.
S32, the photographing apparatus 21 transmits the acquired monitoring information to the processing apparatus 22.
If the monitoring information includes only the video of the monitored road, the shooting device 21 sends the acquired video to the processing device 22; if the monitoring information includes the video of the monitored road and the sets of driving parameters, the photographing apparatus 21 transmits the video together with the sets of driving parameters to the processing apparatus 22.
Specifically, the photographing apparatus 21 may periodically transmit all the monitoring information acquired in a period to the processing apparatus 22, for example, the monitoring information includes 10 video frames acquired in a period, and the photographing apparatus 21 transmits the 10 video frames to the processing apparatus 22. The photographing apparatus 21 may also transmit the monitoring information of each acquisition time to the processing apparatus 22 after acquiring the monitoring information of the acquisition time. For example, if the monitoring information includes a video, the shooting device 21 sends the first video frame to the processing device 22 after acquiring the first video frame at the first capturing time, then acquires the second video frame at the second capturing time, and sends the second video frame to the processing device 22, and so on. The manner in which the photographing apparatus 21 transmits the monitoring information is not limited herein.
S33, the processing device 22 obtains the driving parameters of the vehicle included in the monitoring information corresponding to each collection time.
In the embodiment of the present application, the driving parameters may include, but are not limited to, a position, a driving speed, and a driving direction, and for convenience of description, the driving parameters including a position, a driving speed, and a driving direction are taken as an example below.
Depending on the content of the monitoring information, the processing device 22 may obtain the driving parameters of the vehicles included in the monitoring information corresponding to each collection time in ways including, but not limited to, the following two modes:
in a first mode, the monitoring information only includes the video of the monitored road:
for the location information:
as an example, the processing device 22 may store a map of the monitored road in advance, the map including coordinate information where a plurality of specific objects are actually located, and when acquiring the position of the vehicle, first acquire the position relationship between the vehicle and the specific object in the video frame, and then determine the actual position of the vehicle according to the position relationship and the coordinate information where the specific object is actually located. For example, the monitored road is a road C shown in fig. 4B, and the map of the road C includes coordinate information of the road line of the road C. The photographing apparatus 21 determines the position of the vehicle in the video frame as: the center of the vehicle is spaced from the lane line 1 by 30 pixels. And in the video frame, the total width between the lane line 1 and the lane line 2 is 50 pixels, and then the actual width between the lane line 1 and the lane line 2 is determined according to the ordinate of the actual position of the lane line 1 and the lane line 2, for example, the abscissa of the lane line 1 is 420005, the abscissa of the lane line 2 is 420000, and the actual width between the lane line 1 and the lane line 2 is 5 meters, so that the actual distance from the center of the vehicle to the lane line 1 is (30/50) × 5 ═ 3 meters, and the value of the ordinate of the vehicle is 420002. And determining the value of the abscissa of the vehicle in the same manner, which is not described herein again.
For the running speed:
as one example, processing device 22 may determine a travel speed of the vehicle based on a position of the vehicle determined from the video frames corresponding to the two capture times. The specific manner is similar to the manner in which the radar acquires the running speed of the vehicle in step S31, and is not described herein again.
It should be noted that, when the processing device 22 determines that a new vehicle appears in a video frame for the first time, the driving speed of the vehicle can be estimated. Because the vehicle is not included in the video frame acquired at the previous capture time, it may be assumed that, within one capture interval, the vehicle traveled from the edge of the road area that the shooting device 21 can capture to its current position, as indicated by the arrow in fig. 4C. The travel distance of the vehicle within one capture interval is therefore determined from that edge position and the position of the vehicle, and the travel speed of the vehicle in the video frame is estimated according to the capture frequency. Subsequently, the driving speeds of the vehicle at different acquisition times can be updated in real time from later video frames, and their average value can be taken as the driving speed of the vehicle.
For the direction of travel:
the processing device 22 may determine the driving direction of the vehicle in each video frame in the same manner as the radar determines the driving direction of the vehicle in step S31, or may determine the driving direction of the vehicle by capturing the orientation corresponding to the head of the vehicle in the video frame. Alternatively, the driving direction of the vehicle may be determined based on the position of the vehicle in the video frame. For example, in the first video frame, the coordinate information of the vehicle in the video frame is (10, 20); in the second video frame after the first video frame, the coordinate information of the vehicle in the video frame is (22,20), and the driving direction of the vehicle can be determined as driving to the right along the horizontal axis.
It should be noted that, the processing device 22 may acquire the driving parameters of the vehicle included in each video frame, and process the video frame to obtain the driving parameters after acquiring each video frame, or may process several video frames after acquiring them, which is not limited herein.
In a second mode, the monitoring information comprises videos of the monitored road and a plurality of groups of running parameters of vehicles running on the road at each acquisition time:
in this case, the processing device 22 may first determine the position of the vehicle in the video frame corresponding to each capturing time from the video of the monitored road, and then determine the driving parameter corresponding to each vehicle from the acquired sets of driving parameters according to the position corresponding to each vehicle and the capturing time.
For example, the processing device 22 obtains that the video frame corresponding to the vehicle at the first capturing time includes the first vehicle and the second vehicle, and determines that the position of the first vehicle is (250006,420008) and the position of the second vehicle is (250010,420008) according to the video frame. Then, the processing device 22 determines a running parameter corresponding to the first collection time from the plurality of sets of acquired running parameters, determines a set of running parameters including the position of the first vehicle in the running parameters corresponding to the first collection time, that is, the running parameter of the first vehicle, and determines a set of running parameters including the position of the second vehicle as the running parameter of the second vehicle. For example, the first collection time comprises a first set of driving parameters and a second set of driving parameters, the first set of driving parameters is { (position: 250006,420008) (driving speed: 80 m/s) (driving direction: horizontal axis to the right) }, the second set of driving parameters is { (position: 250010,420008) (driving speed: 60 m/s) (driving direction: horizontal axis to the right) }, and the first set of driving parameters comprises the position of the first vehicle, so that the first set of driving parameters is determined as the driving parameters of the first vehicle, and the second set of driving parameters is determined as the driving parameters of the second vehicle in the same way.
In this way, the processing device 22 does not need to process the video frames any more, and the amount of computation by the processing device 22 can be reduced.
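A minimal sketch of this matching step, assuming positions are compared with a small tolerance (the tolerance and all names are illustrative):

```python
def match_driving_parameters(frame_vehicles, radar_params, tol_m=0.5):
    # frame_vehicles: {vehicle_id: (x, y)} extracted from one video frame.
    # radar_params: the sets of driving parameters detected by the radar at
    # the same acquisition time.
    matches = {}
    for vid, (vx, vy) in frame_vehicles.items():
        for params in radar_params:
            px, py = params["position"]
            if abs(px - vx) <= tol_m and abs(py - vy) <= tol_m:
                matches[vid] = params  # position match -> same vehicle
                break
    return matches

frame_vehicles = {"first vehicle": (250006, 420008), "second vehicle": (250010, 420008)}
radar_params = [
    {"position": (250006, 420008), "speed_mps": 80, "direction": "right"},
    {"position": (250010, 420008), "speed_mps": 60, "direction": "right"},
]
print(match_driving_parameters(frame_vehicles, radar_params))
```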
When there are multiple vehicles in a video frame, each vehicle may be numbered to distinguish between the multiple vehicles. For example, the first video frame includes 3 vehicles, which may be labeled as vehicle 1, vehicle 2, and vehicle 3, respectively. For vehicles in different video frames, the shooting device 21 may determine whether the vehicle in the video frame is a newly-appeared vehicle according to the association relationship between the vehicles in two adjacent video frames, renumber the newly-appeared vehicle if the vehicle is a newly-appeared vehicle, and continue to use the number of the vehicle in the previous video frame if the vehicle is a vehicle already appeared in the previous video frame.
As an example, in the first video frame, the coordinates of the position where vehicle 1 is located are (250006,420008) and the driving speed of the vehicle is 80 km/h. Assuming that the shooting device 21 captures video frames at a frequency of 20 frames per second, the coordinates of vehicle 1 in the next video frame will be about (250007.1,420008). Therefore, when the shooting device 21 acquires the second video frame, if the coordinates of the position of any vehicle in the second video frame are determined to be (250007.1,420008), that vehicle is the same as vehicle 1 in the first video frame and is also labeled as vehicle 1. If the coordinates of a vehicle differ greatly from those of every vehicle in the previous video frame, the vehicle can be considered a newly appearing vehicle in the second video frame and is renumbered, that is, marked as vehicle 4.
As another example, if the shooting device 21 is a combination of a radar and an electric warning camera, since the electric warning camera can acquire attribute information of each vehicle, after the radar acquires driving parameters of each vehicle, it can determine whether the vehicle in the video frame is a new vehicle according to the attribute information of the vehicle acquired by the electric warning camera, for example, if the attribute information of a certain vehicle in the second video frame is the same as the attribute information of the vehicle 1 in the first video frame, the vehicle in the second video frame is marked as the vehicle 1, and if the attribute information of a certain vehicle in the second video frame is different from the attribute information of any one vehicle in the first video frame, the vehicle is renumbered.
After the serial number of the vehicle in each video frame is obtained, a mapping relationship between the vehicle and the driving parameter in the video frame may be established, for example, the driving parameters of the vehicles 1 to 3 in the first video frame are sequentially the driving parameters 1 to 3, the driving parameter of the vehicle 1 in the second video frame is the driving parameter 4, and the driving parameter of the vehicle 4 is the driving parameter 5. Any one of the travel parameters 1 to 5 is a set of a position, a travel speed, and a travel direction corresponding to the vehicle.
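A minimal sketch of this numbering scheme, using position continuity only (the 2-meter matching bound is an assumption for a vehicle moving about 1.1 meters per frame, as in the example above):

```python
def renumber(prev_frame, curr_positions, next_number, max_step_m=2.0):
    # prev_frame: {number: (x, y)} from the previous video frame.
    # curr_positions: list of (x, y) positions detected in the current frame.
    # A detection close to a previously numbered vehicle keeps that number;
    # anything else is treated as a newly appearing vehicle.
    numbered = {}
    for x, y in curr_positions:
        match = None
        for num, (px, py) in prev_frame.items():
            if abs(x - px) <= max_step_m and abs(y - py) <= max_step_m:
                match = num
                break
        if match is None:
            match = next_number
            next_number += 1
        numbered[match] = (x, y)
    return numbered, next_number

# Vehicle 1 moved about 1.1 m along x; the second detection is new vehicle 4.
prev = {1: (250006, 420008), 2: (250020, 420008), 3: (250034, 420012)}
curr = [(250007.1, 420008), (250040, 420020)]
print(renumber(prev, curr, next_number=4))
```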
S34, the processing device 22 determines the lane in which each vehicle is located according to the position of each vehicle.
In the embodiment of the present application, the processing device 22 stores in advance the coordinate information of the preset area of each lane in the road, so as to determine the lane in which the vehicle is located according to the coordinate information of the preset area of each lane and the position in which each vehicle is located.
As an example, the preset area may be a virtual coil provided on each lane, and the position of one lane is represented by the virtual coil, and if the position of the vehicle is located in the virtual coil, it indicates that the vehicle is located on the lane. For example, in fig. 5, the road includes 4 lanes, which are lane 1 to lane 4, a virtual coil is set in advance for each lane, and coordinate information corresponding to the virtual coil is stored in the processing device 22. To facilitate understanding of those skilled in the art, the process of setting the virtual coil in the present application will be described below.
The virtual coil may have various shapes, for example, a rectangle or an ellipse; in the embodiment of the present application, a rectangular virtual coil is taken as an example. First, the corner points of the virtual coil are calibrated according to the lane lines and the intersection stop line of each lane. The selection principle of the corner points is as follows: the two inner intersections of the intersection stop line with the two lane lines of the lane are taken as corner points; then, starting from each of these inner intersections, a point is taken on the lane line extending in the direction away from the intersection stop line, so as to obtain 4 corner points in total, and the rectangle formed by connecting the 4 corner points is the virtual coil of the lane. For example, in fig. 5, from bottom to top, 6 lane lines are sequentially included, respectively marked as lane line 1 to lane line 6, together with an intersection stop line shared by all lanes. Two corner points of the virtual coil of lane 1 are determined as the inner intersections of lane line 1 and lane line 2 with the intersection stop line, marked as corner point A and corner point B. Then, taking corner point A as a starting point, a certain length is extended along lane line 1 in the direction away from the intersection stop line, where the length may be the length of a vehicle or a fixed value, for example, 10 meters, to obtain corner point C; corner point D is obtained on lane line 2 in the same manner. Connecting corner points A to D yields the virtual coil corresponding to lane 1. The coordinate range of the virtual coil is xA ≤ x ≤ xB and yC ≤ y ≤ yA, where x denotes the abscissa, y denotes the ordinate, xA denotes the abscissa of corner point A, yC denotes the ordinate of corner point C, and so on. The coordinate information of each corner point is stored in the processing device 22. As an example, the coordinate information of each corner point is shown in Table 1.
TABLE 1

Corner point    Abscissa x      Ordinate y
A               2561326.752     418357.2016
B               2561342.1       418357.2016
C               2561326.752     418346.6662
D               2561342.1       418346.6662
In the same manner as described above, the coordinate information of the virtual coil of each lane is determined and stored in the processing device 22 in advance.
In this way, when the processing device 22 acquires the position of each vehicle, it is possible to determine which lane the vehicle is located in, and then determine the lane in which the vehicle is located, based on the coordinate information of the virtual coil of each lane stored in advance.
For example, the processing device 22 acquires from driving parameter 1 of vehicle 1 in the first video frame that the coordinates of vehicle 1 are (2561330.75, 418349.7). Because 2561326.752 < 2561330.75 < 2561342.1 and 418346.6662 < 418349.7 < 418357.2016, vehicle 1 is located in the virtual coil corresponding to lane 1, and it is determined that vehicle 1 is located in lane 1. The lane where the vehicle is located in each video frame is determined in the same manner, and the specific process is not repeated herein.
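A minimal sketch of this lookup; the coordinate range of lane 1 comes from Table 1, while the other lanes would be stored in the same way:

```python
# Each virtual coil is stored as (x_min, x_max, y_min, y_max),
# built from the corner points calibrated for the lane.
VIRTUAL_COILS = {
    "lane 1": (2561326.752, 2561342.1, 418346.6662, 418357.2016),
    # "lane 2": ..., "lane 3": ..., "lane 4": ... (stored the same way)
}

def lane_of(position):
    x, y = position
    for lane, (x_min, x_max, y_min, y_max) in VIRTUAL_COILS.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return lane
    return None  # the vehicle is not inside any virtual coil

print(lane_of((2561330.75, 418349.7)))  # "lane 1"
```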
In addition, in the embodiment of the present application, there is a certain distance between corner point A and corner point C of the virtual coil, so the problem of a missed detection caused by relying on a single detection point can be avoided, and the accuracy of detection can be ensured. For example, the distance between corner point A and corner point C is 10 meters, the upper limit of the driving speed of vehicles on the road is assumed to be 80 km/h, and the shooting device 21 acquires video frames at a frequency of 20 frames/second. The same vehicle therefore moves only about 1.1 meters between two adjacent acquisition times, and because the length of the virtual coil is 10 meters, the shooting device 21 can acquire video frames that include the vehicle at about 9 acquisition times. That is, even if the vehicle is not detected at a certain acquisition time, there are still about 8 chances to detect it, so the accuracy can be improved.
In the foregoing example, the vehicles in each lane are detected by one virtual coil corresponding to the lane; in other embodiments, a plurality of virtual coils, corresponding to a plurality of preset areas, may be provided on each lane.
For example, two preset areas, namely a first preset area and a second preset area, may be set on each lane, with each preset area serving as a virtual coil. As shown in fig. 6, the virtual coil close to the intersection stop line is labeled virtual coil A and the other is labeled virtual coil B. The arrangement of virtual coil A is similar to that of the virtual coil in step S34 and is not described in detail here. After corner points A to D of virtual coil A are determined, corner point C may be extended by a certain length along lane line 1 in the direction away from the intersection stop line, where the length may be a fixed value, for example, 5 meters, to obtain corner point E; corner point F is obtained on lane line 2 in the same manner. Then, in the same manner, corner point E and corner point F are extended along lane line 1 and lane line 2, respectively, to obtain corner point G and corner point H. Connecting corner points E to H yields virtual coil B corresponding to lane 1. The coordinate information of each virtual coil is then determined and stored in the processing device 22 in advance. The arrangement of virtual coil A and virtual coil B in the other lanes is similar to that in lane 1 and is not described here again.
In this case, when the processing device 22 acquires the position of each vehicle from the photographing device 21, it may be determined which lane the vehicle is located in, based on the coordinate information of the virtual coil of each lane stored in advance, and the lane in which the vehicle is located. In the embodiment of the present application, one lane corresponds to two virtual coils, and as long as the vehicle is located in any one of the two virtual coils, the vehicle is considered to be located on the lane. The specific determination method is similar to the foregoing, and is not described herein again.
S35, the processing device 22 obtains the road condition information of the road according to the vehicle included in each lane or the driving parameter of each vehicle.
In this embodiment, the road condition information may include the traffic flow of the road, the headway distance between vehicles on the road, or the parking duration of vehicles on the road, and the traffic flow may include the traffic flow of the entrance lanes and the traffic flow of the exit lanes. Of course, the road condition information may also include other parameters, which are not listed here. Next, the process by which the processing device 22 acquires the road condition information will be described, taking the traffic flow, the headway distance, and the parking duration as examples.
The road condition information is traffic flow:
as an example, the processing device 22 determines that the vehicles 1 and 2 are located in the lane 1, the vehicle 3 is located in the lane 3, and the vehicle 1 in the second video frame is located in the lane 1 and the vehicle 4 is located in the lane 1, based on the acquired coordinate information of the vehicle 1 to the vehicle 3 in the first video frame, the position of the vehicle 1 in the second video frame, and the coordinate information of the virtual coil of each lane. Therefore, the total number of the vehicles in each lane is counted, and the traffic flow of each lane can be obtained. When the traffic flow is counted, the repeated vehicles in the video frames can be excluded through the serial numbers of the vehicles in each video frame. For example, the first video frame includes vehicles 1 to 3, and the second video frame includes vehicles 1 and 4, and it can be seen that the vehicle 1 in the first video frame and the vehicle 1 in the second video frame are the same vehicle, so when the traffic flow of each lane is counted, the vehicle 1 in the second video frame may not be counted, so that the number of vehicles included in the lane 1 in the acquisition time period of the two video frames is 3, the number of vehicles included in the lane 2 in the acquisition time period of the two video frames is 0, the number of vehicles included in the lane 3 in the acquisition time period of the two video frames is 1, and the number of vehicles included in the lane 4 in the acquisition time period of the two time-frequency frames is 0. Alternatively, for more accuracy, the processing device 22 may count the number of vehicles located in each lane in more video frames collected within a preset time period (e.g., 1 second), which is not described herein again.
In addition, the 4 lanes included in the road may be divided into two types, i.e., an entrance lane and an exit lane, and then the traffic flow of each type of lane may be counted. For example, in the road shown in fig. 5, lane 1 and lane 2 are entrance lanes, and lane 3 and lane 4 are exit lanes, so that after the traffic flow of each lane is obtained, the traffic flow of lane 1 and lane 2 can be added to obtain the traffic flow of the entrance lane; and adding the traffic flow of the lane 3 and the traffic flow of the lane 4 to obtain the traffic flow of the exit lane.
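A minimal sketch of this counting, deduplicating by vehicle number and then aggregating by lane type (the lane lists are those of fig. 5):

```python
def traffic_flow(frames, entrance_lanes, exit_lanes):
    # frames: one {vehicle_number: lane} mapping per video frame. A vehicle
    # number is counted once per lane, so a vehicle that stays in a lane
    # across several frames is not double-counted.
    seen = {}
    for frame in frames:
        for number, lane in frame.items():
            seen.setdefault(lane, set()).add(number)
    per_lane = {lane: len(nums) for lane, nums in seen.items()}
    entrance = sum(per_lane.get(l, 0) for l in entrance_lanes)
    exit_flow = sum(per_lane.get(l, 0) for l in exit_lanes)
    return per_lane, entrance, exit_flow

frames = [
    {1: "lane 1", 2: "lane 1", 3: "lane 3"},  # first video frame
    {1: "lane 1", 4: "lane 1"},               # second video frame
]
print(traffic_flow(frames, ["lane 1", "lane 2"], ["lane 3", "lane 4"]))
# ({'lane 1': 3, 'lane 3': 1}, 3, 1): entrance flow 3, exit flow 1
```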
The road condition information is the headway distance between vehicles:
the processing device 22 may determine the headway of the adjacent vehicles based on the time difference between the virtual coils of the two adjacent vehicles entering the lane (or the time difference between the virtual coils of the exiting lane), and the traveling speed of the vehicles.
Following the above example, the processing device 22 determines that in the first video frame, vehicle 1 and vehicle 2 are located in lane 1, and in the second video frame, a new vehicle, namely vehicle 4, appears in lane 1; therefore, it can be determined that vehicle 2 and vehicle 4 are adjacent. Vehicle 2 and vehicle 4 appear in the virtual coil of lane 1 at successive acquisition times, and the video frames are acquired by the shooting device 21 at a frequency of 20 frames per second, so the time difference between vehicle 2 and vehicle 4 entering the virtual coil of lane 1 can be taken as 1/20 seconds. The headway distance between vehicle 2 and vehicle 4 is then determined according to the driving speed of vehicle 4, for example 80 km/h (about 22 m/s): 22 × (1/20) = 1.1 meters.
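A minimal sketch of this headway calculation (the names are illustrative):

```python
def headway_m(frames_between_entries, following_speed_mps, frame_rate_hz=20):
    # Headway = time gap between the two vehicles entering the virtual coil
    # multiplied by the speed of the vehicle that enters later.
    time_gap_s = frames_between_entries / frame_rate_hz
    return following_speed_mps * time_gap_s

# Vehicle 4 enters one frame after vehicle 2, at 80 km/h (about 22 m/s).
print(headway_m(1, 22))  # 1.1 meters
```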
The road condition information is the parking duration of a vehicle:
The parking duration is understood to mean the total duration for which the vehicle is parked within a preset time period.
As an example, if the processing device 22 determines that the positions of vehicle 1 in the first video frame and in the second video frame differ by less than a threshold (for example, 1 meter), vehicle 1 is considered to have been parked for one acquisition interval, and that interval is added to the parking duration of vehicle 1; since the video frames are acquired by the shooting device 21 at 20 frames per second, one acquisition interval is 1/20 second. The positions of vehicle 1 in the second and third video frames are then compared in the same manner, and so on, until all video frames acquired within the preset time period have been processed, yielding the total parking duration of vehicle 1 within the preset time period. The total parking duration of other vehicles is processed similarly to that of vehicle 1 and is not described again here.
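A minimal sketch of this accumulation, assuming the per-frame positions of one vehicle are already available in road coordinates (meters); the 1-meter threshold and 20 fps rate follow the example, while the trajectory itself is made up.

import math

FRAME_RATE = 20          # frames per second
PARK_THRESHOLD_M = 1.0   # displacement below which the vehicle counts as parked

def total_parking_time(positions):
    # positions: (x, y) of one vehicle in consecutive frames, in meters.
    # Each near-zero displacement adds one acquisition interval (1/20 s).
    parked_s = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if math.hypot(x1 - x0, y1 - y0) < PARK_THRESHOLD_M:
            parked_s += 1.0 / FRAME_RATE
    return parked_s

# Three near-stationary intervals followed by a real move: 3 * 1/20 = 0.15 s.
print(total_parking_time([(0, 0), (0.2, 0), (0.3, 0), (0.5, 0), (5.0, 0)]))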
In the embodiment of the application, the road condition information may include the traffic flow of a road, the headway distance of vehicles in the road, or the parking duration of vehicles in the road; it is also possible to detect whether a vehicle is traveling in the wrong direction (retrograde) on the road. Detecting retrograde driving requires at least two virtual coils to be set in each lane; the process of determining whether a vehicle is traveling in the wrong direction on the road is described below taking the virtual coils shown in fig. 6 as an example.
Please refer to fig. 7, which is a flowchart of a method for determining whether there is a retrograde vehicle on the road.
S701, determining the type of the lane in which the vehicle is located.
The type may be the entrance lane type or the exit lane type. If the lane in which the vehicle is located is an entrance lane, steps S702 to S705 are executed; if it is an exit lane, steps S706 to S709 are executed.
S702, determining whether the vehicle is located in a first preset area of the lane at a first moment. If not, executing step S703; if so, executing step S704.
The first preset area is an area close to the intersection stop line of the lane, such as the virtual coil A shown in fig. 6. It should be noted that, in the embodiment of the present application, whether a vehicle is located in a lane is determined according to whether the vehicle is located in a preset virtual coil of the lane; therefore, when the vehicle is determined to be located in the lane but not in the virtual coil A at the first moment, the vehicle can be considered to be located in the virtual coil B at the first moment.
S703, determining that the vehicle is a non-retrograde vehicle.
S704, determining whether the vehicle enters a second preset area of the lane at a second moment.
The time interval between the first moment and the second moment is less than or equal to a preset duration, which may be, for example, 10 seconds or 1 minute and is not limited here. The second preset area is an area far away from the intersection stop line, such as the virtual coil B shown in fig. 6.
S705, if yes, determining that the vehicle is a retrograde vehicle; if not, determining that the vehicle is a non-retrograde vehicle.
S706, determining whether the vehicle is located in a third preset area of the lane at a third moment. If not, executing step S707; if so, executing step S708.
The third preset area is an area far away from the intersection stop line of the lane, such as the virtual coil B shown in fig. 6.
S707, determining that the vehicle is a non-retrograde vehicle.
S708, determining whether the vehicle enters a fourth preset area of the lane at a fourth moment.
The time interval between the third moment and the fourth moment is less than or equal to a preset duration. The fourth preset area is an area close to the intersection stop line, such as the virtual coil A shown in fig. 6.
S709, if yes, determining that the vehicle is a retrograde vehicle; if not, determining that the vehicle is a non-retrograde vehicle.
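Steps S701 to S709 compress into a single predicate per vehicle. The sketch below assumes that the sequence of virtual coils a vehicle occupies within the preset duration has already been assembled from the video frames, so enforcing the time bound is left to that assembly step.

ENTRANCE, EXIT = "entrance", "exit"

def is_retrograde(lane_type, coil_sequence):
    # coil_sequence: coils occupied in time order, "A" near the stop line,
    # "B" far from it, e.g. ["B", "A"].
    # Entrance lane: A followed later by B -> retrograde (S702-S705).
    # Exit lane:     B followed later by A -> retrograde (S706-S709).
    first, later = ("A", "B") if lane_type == ENTRANCE else ("B", "A")
    for i, coil in enumerate(coil_sequence):
        if coil == first and later in coil_sequence[i + 1:]:
            return True
    return False

# Matches the worked example below: vehicle 1 (entrance lane, B then A) is
# normal, vehicle 4 (exit lane, B then A) is retrograde.
print(is_retrograde(ENTRANCE, ["B", "A"]))  # False
print(is_retrograde(EXIT, ["B", "A"]))      # True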
As an example, based on the acquired coordinate information of vehicles 1 to 3 in the first video frame, the positions of vehicles 1 and 4 in the second video frame, the position of vehicle 4 in the third video frame, and the virtual coils of each lane, the processing device 22 determines that, in the first video frame, vehicle 1 is located in virtual coil B of lane 1, vehicle 2 is located in virtual coil A of lane 1, and vehicle 3 is located in virtual coil A of lane 3; that, in the second video frame, vehicle 1 is located in virtual coil A of lane 1 and vehicle 4 is located in virtual coil B of lane 4; and that, in the third video frame, vehicle 4 is located in virtual coil A of lane 4.
Since lane 1, in which vehicle 1 is located, is an entrance lane and vehicle 1 is located in virtual coil B at the first moment, vehicle 1 is determined to be a non-retrograde vehicle. Vehicle 2 is located in virtual coil A of lane 1 in the first video frame, but does not appear in the second video frame and is not detected in the subsequent 10 video frames, so vehicle 2 can also be determined to be a non-retrograde vehicle.
Since lane 3, in which vehicle 3 is located, is an exit lane and vehicle 3 is located in virtual coil A, vehicle 3 is determined to be a non-retrograde vehicle. Lane 4, in which vehicle 4 is located, is an exit lane; vehicle 4 is located in virtual coil B of lane 4 in the second video frame and in virtual coil A of lane 4 in the third video frame, so vehicle 4 is determined to be a retrograde vehicle.
In summary, there is no retrograde vehicle in lanes 1 to 3, while lane 4 contains a retrograde vehicle, namely vehicle 4.
If the processing device 22 is connected to the intersection signal controller, then after acquiring the road condition information of the road, the processing device may convert the road condition information into a signal adapted to the intersection signal controller according to its interface input requirements (for example, an RS485 signal) and send it to the intersection signal controller. The intersection signal controller can then regulate the signal lights of the intersection according to the road condition information of the road, or provide signal light control data for a downstream intersection, and the like. For example, when the traffic flow of the entrance lanes of the road is less than a threshold, fewer vehicles are traveling on the road, so the intersection signal controller can shorten the green light duration of the corresponding phase to reduce wasted green time.
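As an illustration of the green-light regulation just described, here is a hedged sketch; the threshold, the lower bound on green time, and the reduction step are assumptions, and the RS485 framing toward the signal controller is omitted.

LOW_FLOW_THRESHOLD = 10   # vehicles per counting period (assumed)
MIN_GREEN_S = 15          # lower bound on green duration in seconds (assumed)
GREEN_STEP_S = 5          # reduction step in seconds (assumed)

def adjust_green_time(entrance_flow, current_green_s):
    # Shorten the green phase when entrance-lane flow is low, so that
    # green time is not wasted on an empty approach.
    if entrance_flow < LOW_FLOW_THRESHOLD:
        return max(MIN_GREEN_S, current_green_s - GREEN_STEP_S)
    return current_green_s

print(adjust_green_time(entrance_flow=4, current_green_s=30))  # 25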
In the above technical solution, the road condition information of the road can be acquired without laying a physical coil, so that neither the road surface nor any physical device is damaged, and the cost of monitoring the road condition can be reduced. Moreover, road condition information such as whether there is a retrograde vehicle on the road can be detected from the position information of each vehicle and the coordinate information of the virtual coils included in the lanes, which increases the diversity of road condition detection.
In other embodiments, a plurality of virtual coils may further be disposed in each lane to detect other road condition information, for example, the queuing length in each lane, which is not described again here.
In the embodiments provided in the present application, in order to implement the functions in the methods provided above, the road condition information monitoring device may include a hardware structure and/or a software module, and each function may be implemented in the form of a hardware structure, a software module, or a combination of the two. Whether a given function is implemented as a hardware structure, a software module, or both depends on the particular application and the design constraints imposed on the technical solution.
Fig. 8 shows a schematic structural diagram of a road condition information monitoring device 800. The device 800 may be used to implement the functions of the processing device 22 in the embodiment shown in fig. 3. It may be a hardware structure, a software module, or a combination of a hardware structure and a software module, and may be implemented by a chip system. In the embodiment of the present application, the chip system may consist of a chip, or may include a chip and other discrete devices.
The road condition information monitoring device 800 may include an obtaining unit 801 and a processing unit 802.
The obtaining unit 801 may be used to perform step S32 in the embodiment shown in fig. 3, and/or to support other processes of the techniques described herein. In a possible implementation, the obtaining unit 801 may be configured to communicate with the processing unit 802, or to enable the road condition information monitoring device 800 to communicate with other modules; it may be a circuit, a device, an interface, a bus, a software module, a transceiver, or any other apparatus capable of implementing communication.
The processing unit 802 may be used to perform steps S33 to S35 in the embodiment shown in fig. 3, and/or to support other processes of the techniques described herein.
For all relevant content of the steps in the above method embodiment, reference may be made to the functional description of the corresponding functional module; details are not repeated here.
The division of the modules in the embodiment shown in fig. 8 is schematic and is merely a division by logical function; in actual implementation, other division manners are possible. In addition, the functional modules in the embodiments of the present application may be integrated in one processor, may exist alone physically, or two or more modules may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Fig. 9 shows a road condition information monitoring device 900 according to an embodiment of the present application; the device 900 may be used to implement the functions of the processing device 22 in the embodiment shown in fig. 3. The device 900 may be a chip system. In the embodiment of the present application, the chip system may consist of a chip, or may include a chip and other discrete devices.
The road condition information monitoring device 900 includes at least one processor 920, which is used to implement or support the device 900 in implementing the functions of the processing device 22 in the embodiment shown in fig. 3. For example, the processor 920 may determine the lane in which each vehicle is located according to the position of each vehicle, and determine the road condition information of the road according to the number of vehicles included in each lane or the driving parameters of each vehicle; details are given in the method examples and are not repeated here.
The device 900 may further include at least one memory 930 for storing program instructions and/or data, the memory 930 being coupled to the processor 920. The coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units, or modules, which may be electrical, mechanical, or in another form, and is used for information exchange between the devices, units, or modules. The processor 920 may operate in conjunction with the memory 930 and may execute the program instructions stored in the memory 930. At least one of the at least one memory may be included in the processor.
The device 900 may further include an interface 910 for communicating with other devices through a transmission medium, so that the device 900 can communicate with other devices; for example, the other device may be a computing module. The processor 920 may send and receive data through the interface 910.
The specific connection medium among the interface 910, the processor 920, and the memory 930 is not limited in the embodiments of the present application. In fig. 9, the memory 930, the processor 920, and the interface 910 are connected by a bus 940, represented by a thick line; the connection manner between other components is merely illustrative and not limiting. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean that there is only one bus or one type of bus.
In the embodiments of the present application, the processor 920 may be a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiment of the present application, the memory 930 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory such as a random-access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory in the embodiments of the present application may also be a circuit or any other apparatus capable of performing a storage function, for storing program instructions and/or data.
Also provided in an embodiment of the present application is a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform the method performed by the processing device 22 in the embodiment shown in fig. 3.
Also provided in an embodiment of the present application is a computer program product including instructions that, when executed on a computer, cause the computer to perform the method performed by the processing device 22 in the embodiment shown in fig. 3.
The embodiment of the present application provides a chip system, which includes a processor and may further include a memory, and is used to implement the functions of the processing device 22 in the foregoing method. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
An embodiment of the present application further provides a road condition information monitoring system, which includes a shooting device and the above road condition information monitoring device.
The methods provided by the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network appliance, a user device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server, or data center to another over a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) network. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., SSD).

Claims (16)

1. A method for monitoring road condition information, characterized by comprising the following steps:
acquiring a video of a road intersection, wherein the video comprises a plurality of video frames;
acquiring driving parameters of a vehicle included in each video frame according to the video, wherein the driving parameters of the vehicle include driving speed and/or driving direction and position of the vehicle;
determining a lane in which the vehicle is located according to the position of the vehicle, wherein the road comprises a plurality of lanes;
and acquiring the road condition information of the road according to the number of the vehicles running on each lane or the running parameters of the vehicles running on each lane.
2. The method of claim 1, wherein determining the lane in which the vehicle is located based on the position of the vehicle comprises:
acquiring a coordinate range of a preset area of each lane in the plurality of lanes;
determining a first preset area to which the vehicle belongs according to the position of the vehicle and the coordinate range of the preset area of each lane;
and determining the lane corresponding to the first preset area as the lane where the vehicle is located.
3. The method according to claim 1 or 2, wherein the road condition information is the traffic flow of the road, and the obtaining of the road condition information of the road according to the number of vehicles traveling on each lane comprises:
acquiring the entrance traffic flow of the road according to a first total number of vehicles traveling on at least one entrance lane; and/or acquiring the exit traffic flow of the road according to a second total number of vehicles traveling on at least one exit lane.
4. The method according to any one of claims 1 to 3, wherein the road condition information is the headway distance of vehicles in the road, and the obtaining of the road condition information of the road according to the driving parameters of the vehicles traveling on each lane comprises:
determining the headway distance according to the time difference between two adjacent vehicles on each lane entering a preset area of the lane and the traveling speed of a first vehicle of the two vehicles, wherein the first vehicle is the one of the two vehicles that enters the preset area of the lane later.
5. The method according to any one of claims 1 to 4, wherein the road condition information is whether there is a retrograde vehicle on the road, and the obtaining of the road condition information of the road according to the driving parameters of the vehicles traveling on each lane comprises:
determining that a first lane where a vehicle is located is an entrance lane;
determining whether the vehicle is located in a first preset area of a first lane at a first moment according to the position of the vehicle, wherein the first lane comprises the first preset area and a second preset area, the first preset area is an area close to an intersection stop line of the first lane, and the second preset area is an area far away from the intersection stop line;
if so, determining whether the vehicle is located in the second preset area at a second moment, wherein the time interval between the first moment and the second moment is less than or equal to a preset time length;
if so, determining that a retrograde vehicle exists in the first lane.
6. The method according to any one of claims 1 to 5, wherein the road condition information is whether there is a retrograde vehicle on the road, and the obtaining of the road condition information of the road according to the driving parameters of the vehicles traveling on each lane comprises:
determining that a second lane where the vehicle is located is an exit lane;
determining whether the vehicle is located in a third preset area of the second lane at a third moment according to the position of the vehicle, wherein the second lane comprises the third preset area and a fourth preset area, the third preset area is an area far away from an intersection stop line of the second lane, and the fourth preset area is an area close to the intersection stop line;
if so, determining whether the vehicle is located in a fourth preset area at a fourth moment, wherein the time interval between the third moment and the fourth moment is less than or equal to a preset time length;
if so, determining that a retrograde vehicle exists in the second lane.
7. The method according to any one of claims 1-6, further comprising:
acquiring radar data of a road intersection, wherein the radar data comprises multiple groups of driving parameters;
acquiring the driving parameters of the vehicle included in each video frame according to the video, wherein the driving parameters comprise:
determining the position of the vehicle in each video frame;
and determining the driving parameters corresponding to each vehicle from the multiple groups of driving parameters according to the position of the vehicle in each video frame and the acquisition time of each video frame.
8. A road condition information monitoring device, characterized by comprising:
an obtaining unit, configured to acquire a video of a road intersection, wherein the video comprises a plurality of video frames; and
a processing unit, configured to acquire driving parameters of a vehicle included in each video frame according to the video, wherein the driving parameters of the vehicle include a driving speed and/or a driving direction and a position of the vehicle; determine a lane in which the vehicle is located according to the position of the vehicle, wherein the road comprises a plurality of lanes; and acquire road condition information of the road according to the number of vehicles traveling on each lane or the driving parameters of the vehicles traveling on each lane.
9. The apparatus according to claim 8, wherein the processing unit is specifically configured to:
acquiring a coordinate range of a preset area of each lane in the plurality of lanes;
determining a first preset area to which the vehicle belongs according to the position of the vehicle and the coordinate range of the preset area of each lane;
and determining the lane corresponding to the first preset area as the lane where the vehicle is located.
10. The apparatus according to claim 8 or 9, wherein the road condition information is the traffic flow of the road, and the processing unit is specifically configured to:
acquire the entrance traffic flow of the road according to a first total number of vehicles traveling on at least one entrance lane; and/or acquire the exit traffic flow of the road according to a second total number of vehicles traveling on at least one exit lane.
11. The apparatus according to any one of claims 8 to 10, wherein the road condition information is the headway distance of vehicles on the road, and the processing unit is specifically configured to:
determine the headway distance according to the time difference between two adjacent vehicles on each lane entering a preset area of the lane and the traveling speed of a first vehicle of the two vehicles, wherein the first vehicle is the one of the two vehicles that enters the preset area of the lane later.
12. The apparatus according to any one of claims 8 to 11, wherein the road condition information is whether there is a retrograde vehicle on the road, and the processing unit is specifically configured to:
determining that a first lane where a vehicle is located is an entrance lane;
determining whether the vehicle is located in a first preset area of a first lane at a first moment according to the position of the vehicle, wherein the first lane comprises the first preset area and a second preset area, the first preset area is an area close to an intersection stop line of the first lane, and the second preset area is an area far away from the intersection stop line;
if so, determining whether the vehicle is located in the second preset area at a second moment, wherein the time interval between the first moment and the second moment is less than or equal to a preset time length;
if so, determine that a retrograde vehicle exists in the first lane.
13. The apparatus according to any one of claims 8 to 12, wherein the road condition information is whether there is a retrograde vehicle on the road, and the processing unit is specifically configured to:
determining that a second lane where the vehicle is located is an exit lane;
determining whether the vehicle is located in a third preset area of the second lane at a third moment according to the position of the vehicle, wherein the second lane comprises the third preset area and a fourth preset area, the third preset area is an area far away from an intersection stop line of the second lane, and the fourth preset area is an area close to the intersection stop line;
if so, determining whether the vehicle is located in a fourth preset area at a fourth moment, wherein the time interval between the third moment and the fourth moment is less than or equal to a preset time length;
if so, determine that a retrograde vehicle exists in the second lane.
14. The apparatus according to any of claims 8-13, wherein the obtaining unit is further configured to:
acquiring radar data of a road intersection, wherein the radar data comprises multiple groups of driving parameters;
the processing unit is further to:
determining the position of the vehicle in each video frame; and determining the driving parameters corresponding to each vehicle from the multiple groups of driving parameters according to the position of the vehicle in each video frame and the acquisition time of each video frame.
15. A road condition information monitoring device, characterized by comprising at least one processor coupled to at least one memory, wherein the at least one processor is configured to execute computer programs or instructions stored in the at least one memory, so as to cause the device to perform the method according to any one of claims 1 to 7.
16. A computer-readable storage medium, having stored thereon a computer program or instructions, which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 7.
CN201911133735.0A 2019-11-19 2019-11-19 Road condition information monitoring method and device Pending CN111127877A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911133735.0A CN111127877A (en) 2019-11-19 2019-11-19 Road condition information monitoring method and device
PCT/CN2020/097888 WO2021098211A1 (en) 2019-11-19 2020-06-24 Road condition information monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911133735.0A CN111127877A (en) 2019-11-19 2019-11-19 Road condition information monitoring method and device

Publications (1)

Publication Number Publication Date
CN111127877A true CN111127877A (en) 2020-05-08

Family

ID=70495772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911133735.0A Pending CN111127877A (en) 2019-11-19 2019-11-19 Road condition information monitoring method and device

Country Status (2)

Country Link
CN (1) CN111127877A (en)
WO (1) WO2021098211A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241415A (en) * 2021-12-16 2022-03-25 海信集团控股股份有限公司 Vehicle position monitoring method, edge calculation device, monitoring device and system
CN114280609B (en) * 2021-12-28 2023-10-13 上海恒岳智能交通科技有限公司 77GHz millimeter wave signal detection processing method and system
CN114882709B (en) * 2022-04-22 2023-05-30 四川云从天府人工智能科技有限公司 Vehicle congestion detection method, device and computer storage medium
CN116878572B (en) * 2023-07-11 2024-03-15 中国人民解放军国防大学联合勤务学院 Multisource geographic information data analysis method based on armored car test environment
CN117523858B (en) * 2023-11-24 2024-05-14 邯郸市鼎舜科技开发有限公司 Road electronic bayonet detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127877A (en) * 2019-11-19 2020-05-08 华为技术有限公司 Road condition information monitoring method and device

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201170927Y (en) * 2007-10-16 2008-12-24 余亚莉 Intelligent dust type traffic sensor and signal control network and message transmission system thereof
CN102332209A (en) * 2011-02-28 2012-01-25 王志清 Automobile violation video monitoring method
CN102509490A (en) * 2011-10-13 2012-06-20 北京工业大学 Traffic information video acquisition experimental teaching method of modular design
CN103164988A (en) * 2011-12-09 2013-06-19 爱信艾达株式会社 Traffic information notifying system, program and method
CN102789686A (en) * 2012-07-10 2012-11-21 华南理工大学 Road traffic flow detecting method based on road surface brightness composite mode recognition
WO2014080978A1 (en) * 2012-11-22 2014-05-30 三菱重工業株式会社 Traffic information processing system, server device, traffic information processing method, and program
CN103456172A (en) * 2013-09-11 2013-12-18 无锡加视诚智能科技有限公司 Traffic parameter measuring method based on videos
CN105046954A (en) * 2015-06-18 2015-11-11 无锡华通智能交通技术开发有限公司 Crossing-traffic-state dynamic detection system based on video intelligence analysis and method thereof
CN107301776A (en) * 2016-10-09 2017-10-27 上海炬宏信息技术有限公司 Track road conditions processing and dissemination method based on video detection technology
CN106781537A (en) * 2016-11-22 2017-05-31 武汉万集信息技术有限公司 A kind of overspeed of vehicle grasp shoot method and system
CN108122408A (en) * 2016-11-29 2018-06-05 中国电信股份有限公司 A kind of road condition monitoring method, apparatus and its system for monitoring road conditions
CN107146419A (en) * 2017-06-26 2017-09-08 黑龙江八农垦大学 The automatic monitoring system of high speed moving vehicle
CN107862873A (en) * 2017-09-26 2018-03-30 三峡大学 A kind of vehicle count method and device based on relevant matches and state machine
CN109003439A (en) * 2018-08-30 2018-12-14 新华三技术有限公司 A kind of peccancy detection method and device
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video
CN109615862A (en) * 2018-12-29 2019-04-12 南京市城市与交通规划设计研究院股份有限公司 Road vehicle movement of traffic state parameter dynamic acquisition method and device
CN109686088A (en) * 2018-12-29 2019-04-26 重庆同济同枥信息技术有限公司 A kind of traffic video alarm method, equipment and system
CN109887281A (en) * 2019-03-01 2019-06-14 北京云星宇交通科技股份有限公司 A kind of method and system monitoring traffic events

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021098211A1 (en) * 2019-11-19 2021-05-27 华为技术有限公司 Road condition information monitoring method and device
CN112541465A (en) * 2020-12-21 2021-03-23 北京百度网讯科技有限公司 Traffic flow statistical method and device, road side equipment and cloud control platform
CN112699747A (en) * 2020-12-21 2021-04-23 北京百度网讯科技有限公司 Method and device for determining vehicle state, road side equipment and cloud control platform
CN114418468A (en) * 2022-03-29 2022-04-29 成都秦川物联网科技股份有限公司 Smart city traffic scheduling strategy control method and Internet of things system
CN114418468B (en) * 2022-03-29 2022-07-05 成都秦川物联网科技股份有限公司 Smart city traffic scheduling strategy control method and Internet of things system
US11587436B1 (en) 2022-03-29 2023-02-21 Chengdu Qinchuan Iot Technology Co., Ltd. Methods for controlling traffic scheduling strategies in smart cities and Internet of Things (IoT) systems thereof
CN116910629A (en) * 2023-09-13 2023-10-20 四川公路桥梁建设集团有限公司 Road surface detection method and device based on big data
CN116910629B (en) * 2023-09-13 2023-12-15 四川公路桥梁建设集团有限公司 Road surface detection method and device based on big data

Also Published As

Publication number Publication date
WO2021098211A1 (en) 2021-05-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200508