WO2021098211A1 - Method and device for monitoring road condition information - Google Patents

Method and device for monitoring road condition information

Info

Publication number
WO2021098211A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
lane
road
preset area
vehicles
Prior art date
Application number
PCT/CN2020/097888
Other languages
English (en)
French (fr)
Inventor
谢才东
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021098211A1

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G 1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Definitions

  • This application relates to the field of image processing technology, and in particular to a method and device for monitoring road condition information.
  • For example, the duration of the red light of the traffic signal at the entrance of a road can be increased to reduce the number of vehicles entering the road.
  • At present, one way to monitor road conditions is to lay a physical coil at the entrance of the road. When a vehicle passes over it, the inductance value of the physical coil changes, which in turn changes the oscillation frequency of the oscillation circuit in the detector connected to the coil; from this frequency change, the traffic volume entering the road can be judged.
  • the present application provides a method and device for monitoring road condition information to reduce the cost of monitoring road conditions.
  • a method for monitoring road condition information is provided.
  • In the method, a video of a road intersection is first obtained, the video including a plurality of video frames; then the driving parameters of the vehicles included in each video frame are obtained from the video.
  • The driving parameters of each vehicle include at least the position of the vehicle, and may also include the driving speed and/or the driving direction. Since the road may include multiple lanes, after the driving parameters of each vehicle are obtained, the lane in which the vehicle is located is determined from the position of the vehicle, and the road condition information of the road is obtained from the number of vehicles driving on each lane or from the driving parameters of the vehicles driving on each lane.
  • the lane where the vehicle is located is determined by comparing the position of the vehicle with the coordinate range of a preset area, which is simple and easy to implement.
  • the road condition information includes at least one of the traffic volume of the road, the headway distance between vehicles on the road, and whether there is a wrong-way vehicle on the road.
  • obtaining the traffic flow of the road according to the number of vehicles driving on each lane can include but is not limited to the following methods:
  • the import traffic flow and the exit traffic flow of the road can be further obtained according to the type of each lane, for example, whether it is an entry lane or an exit lane.
  • obtaining the headway distance of vehicles on the road may include, but is not limited to, the following method:
  • the headway distance is determined according to the time difference between two adjacent vehicles in a lane entering the preset area of that lane, and the driving speed of the second of the two vehicles to enter the preset area.
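As a minimal sketch of this computation (the function name and values are illustrative, not from the patent), the headway follows directly from the entry-time gap and the trailing vehicle's speed:

```python
def headway_distance(t_lead_enter, t_follow_enter, follow_speed_mps):
    """Headway (metres) between two successive vehicles in the same lane:
    the gap between their entry times into the lane's preset area,
    multiplied by the trailing vehicle's driving speed."""
    dt = t_follow_enter - t_lead_enter
    if dt < 0:
        raise ValueError("the trailing vehicle must enter after the leading one")
    return dt * follow_speed_mps

# Example: the trailing vehicle enters 2.5 s after the leading one at 20 m/s.
print(headway_distance(10.0, 12.5, 20.0))   # 50.0
```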
  • obtaining whether there are wrong-way vehicles on the road may include, but is not limited to, the following methods:
  • if the first lane where the vehicle is located is an entry lane, the first lane is provided with two preset areas: the first preset area is the area close to the intersection stop line of the lane, and the second preset area is the area far from the stop line. According to the position of the vehicle, it is determined whether the vehicle is located in the first preset area at a first moment.
  • if the second lane where the vehicle is located is an exit lane, the second lane is provided with two preset areas, namely a third preset area and a fourth preset area: the third preset area is the area far from the intersection stop line of the lane, and the fourth preset area is the area close to the stop line. According to the position of the vehicle, it is determined whether the vehicle is located in the third preset area at a third moment; if so, it is determined whether the vehicle is located in the fourth preset area at a fourth moment, where the time interval between the third moment and the fourth moment is less than or equal to a preset duration, that is, whether the vehicle reaches the fourth preset area within the preset duration after the third moment. If so, it is determined that there is a wrong-way vehicle in the second lane.
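The two-zone check can be sketched as follows (names and the tolerance are illustrative assumptions): a vehicle observed in the two preset areas in the wrong temporal order, within the preset duration, is flagged as wrong-way.

```python
def is_wrong_way(zone_entries, first_zone, second_zone, max_gap_s):
    """zone_entries: chronological (timestamp, zone_id) records for one vehicle.
    A vehicle seen in first_zone and then in second_zone within max_gap_s
    seconds is flagged as driving against the lane's direction (for an entry
    lane, first_zone is the area near the stop line; for an exit lane, it is
    the area far from the stop line)."""
    for i, (t1, z1) in enumerate(zone_entries):
        if z1 != first_zone:
            continue
        for t2, z2 in zone_entries[i + 1:]:
            if z2 == second_zone and 0 <= t2 - t1 <= max_gap_s:
                return True
    return False

# Wrong way on an entry lane: seen near the stop line first, then far from it.
print(is_wrong_way([(0.0, "near"), (1.0, "far")], "near", "far", 3.0))   # True
# Normal flow: far area first, then near the stop line.
print(is_wrong_way([(0.0, "far"), (1.0, "near")], "near", "far", 3.0))   # False
```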
  • optionally, the method may also obtain radar data including multiple sets of driving parameters at the road intersection. The position of the vehicles in each video frame is then determined from the video, and the driving parameters corresponding to each vehicle are determined from the multiple sets of driving parameters according to the position of the vehicles in each video frame and the collection time of each video frame.
  • in this way, the driving parameters of the vehicles are obtained by radar and then fitted to the vehicles in each video frame, which reduces the processing load on the video. Moreover, the radar data is more accurate, which can improve the accuracy of the acquired road condition information.
  • a device for monitoring road condition information includes an acquiring unit and a processing unit. These units can perform the corresponding functions performed in any of the design examples of the first aspect, specifically:
  • An acquiring unit configured to acquire a video of a road intersection, where the video includes a plurality of video frames
  • the processing unit is configured to obtain, according to the video, the driving parameters of the vehicles included in each video frame, the driving parameters of a vehicle including the driving speed and/or the driving direction, and the position of the vehicle; determine, according to the position of the vehicle, the lane where the vehicle is located, the road including multiple lanes; and acquire the road condition information of the road according to the number of vehicles traveling on each lane or the driving parameters of the vehicles traveling on each lane.
  • the processing unit is specifically configured to:
  • when the position of the vehicle is within a first preset area, determine that the lane corresponding to the first preset area is the lane where the vehicle is located.
  • the road condition information includes at least one of the traffic volume of the road, the headway distance between vehicles on the road, and whether there is a wrong-way vehicle on the road.
  • the road condition information is the traffic volume of the road
  • the processing unit is specifically configured to:
  • the road condition information is the head distance of vehicles on the road
  • the processing unit is specifically configured to:
  • the headway distance is determined according to the time difference between two adjacent vehicles in each lane entering the preset area of the lane and the driving speed of the first vehicle, where the first vehicle is the one of the two vehicles that enters the preset area later.
  • the road condition information is whether there is a wrong-way vehicle on the road
  • the processing unit is specifically configured to:
  • determine, according to the position of the vehicle, whether the vehicle is located in the first preset area of the first lane at the first moment, where the first lane includes the first preset area and the second preset area;
  • the first preset area is the area close to the intersection stop line of the first lane, and
  • the second preset area is the area far from the intersection stop line.
  • the road condition information is whether there is a wrong-way vehicle on the road
  • the processing unit is specifically configured to:
  • the second lane includes the third preset area and the fourth preset area, where
  • the third preset area is the area far from the intersection stop line of the second lane, and
  • the fourth preset area is the area close to the intersection stop line.
  • the acquiring unit is also used for:
  • the processing unit is also used for:
  • In another aspect, a device for monitoring road condition information is provided, including at least one processor coupled with at least one memory; the at least one processor is configured to execute the computer program or instructions stored in the at least one memory, so that the device executes the method described in the first aspect.
  • obtain the driving parameters of the vehicles included in each video frame, where the driving parameters of a vehicle include the driving speed and/or the driving direction, and the position of the vehicle;
  • In another aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program. The computer program includes program instructions that, when executed by a computer, cause the computer to execute the method described in any design of the first aspect.
  • In another aspect, an embodiment of the present application provides a computer program product storing a computer program. The computer program includes program instructions that, when executed by a computer, cause the computer to execute the method described in any design of the first aspect.
  • the present application provides a chip system.
  • the chip system includes a processor and may also include a memory for implementing the method described in the first aspect.
  • the chip system can be composed of chips, or it can include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of an example of an application scenario involved in this application
  • FIG. 2 is a structural block diagram of an example of the monitoring system provided by this application.
  • FIG. 3 is a flowchart of a method for monitoring road condition information provided by this application.
  • FIG. 4A is a schematic diagram of an example of obtaining the position of a vehicle through radar provided in this application;
  • FIG. 4B is a schematic diagram of an example of obtaining the position of a vehicle only through video provided by this application;
  • FIG. 4C is a schematic diagram of an example of acquiring the driving speed of a vehicle only through video provided by this application;
  • FIG. 5 is a schematic diagram of an example of setting a virtual coil on the lane in this application.
  • FIG. 6 is a schematic diagram of another example of setting a virtual coil on the lane in this application.
  • FIG. 7 is a flowchart of judging whether there is a wrong-way vehicle on the road in this application.
  • FIG. 8 is a schematic structural diagram of an example of a monitoring device for road condition information provided by this application.
  • FIG. 9 is a schematic structural diagram of another example of a monitoring device for road condition information provided by this application.
  • “multiple” refers to two or more than two. In view of this, “multiple” may also be understood as “at least two” in the embodiments of the present application. “At least one” can be understood as one or more, for example, one, two or more. For example, including at least one refers to including one, two or more, and does not limit which ones are included. For example, including at least one of A, B, and C, then the included can be A, B, C, A and B, A and C, B and C, or A and B and C.
  • ordinal numbers such as “first” and “second” mentioned in the embodiments of the present application are used to distinguish multiple objects, and are not used to limit the order, timing, priority, or importance of multiple objects.
  • FIG. 1 is a schematic diagram of a method of measuring the traffic flow on a road in the prior art.
  • a physical coil is laid on the ground of road A in advance, and the physical coil is connected to the coil detector and forms an oscillation circuit with the capacitive device inside the coil detector.
  • assuming the inductance value of the physical coil is L1, the oscillation frequency of the oscillation circuit satisfies the following formula: f = 1 / (2π√(L1·C))
  • where f is the oscillation frequency of the oscillating circuit
  • and C is the capacitance value of the capacitive device of the coil detector.
  • When a vehicle passes over the coil, the inductance value of the physical coil changes from L1 to L2, so that the oscillation frequency of the oscillation circuit becomes: f′ = 1 / (2π√(L2·C))
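The frequency shift can be checked numerically; the component values below are illustrative assumptions, not figures from the patent:

```python
import math

def oscillation_frequency(L, C):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of the coil detector's
    oscillation circuit, with inductance L (henries) and capacitance C (farads)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: a 100 uH loop with a 100 nF capacitor.
f1 = oscillation_frequency(100e-6, 100e-9)   # no vehicle over the coil
# A vehicle over the coil lowers the effective inductance (L1 -> L2),
# so the oscillation frequency rises; the detector counts these shifts.
f2 = oscillation_frequency(90e-6, 100e-9)
print(round(f1), round(f2))
assert f2 > f1
```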
  • the embodiment of the present application provides a road state monitoring system.
  • the monitoring system includes a photographing device 21 and a processing device 22, where the photographing device 21 is used to shoot a video of the monitored road intersection, the video including the vehicles driving on the road; the photographing device then sends the acquired video to the processing device 22.
  • the photographing device 21 may be a camera, and acquires a video of the monitored road intersection according to a preset acquisition frequency.
  • the photographing device 21 may also be an electronic police ("e-police") camera, which can obtain the position information and attribute information of each vehicle at the collection time of each video frame; the attribute information may be the license plate number, vehicle type, or color.
  • the photographing device 21 may also be a combination of a radar and a camera. The radar detects whether there is a vehicle on the road by means of its scanning beam, and when it detects a vehicle, it can measure the driving speed, the driving direction, and the position of the vehicle at that moment.
  • the location information can be understood as spatial coordinates based on a world coordinate system or a geocentric coordinate system,
  • for example, the World Geodetic System 1984 (WGS-84).
  • the specific form of the photographing device 21 is not limited.
  • after the processing device 22 receives the video and/or radar data of the road intersection sent by the photographing device 21, it can acquire the driving parameters of the vehicles included in the video and/or radar data; the driving parameters may include the driving speed and/or driving direction, and the location information of the vehicle. The processing device 22 then determines the lane in which each vehicle is located according to its location information, for example whether the vehicle is on an entry lane or an exit lane of the road, so as to obtain the road condition information from the driving parameters of the vehicles on each lane.
  • the processing device 22 may be a smart terminal box, a server cluster, a cloud server, or the like.
  • the monitoring system includes a photographing device 21 and a processing device 22 as an example for description. It is understandable that the monitoring system may also include other devices, for example, it may also include A video forwarding device that forwards the video acquired by the shooting device 21, etc.
  • the number of photographing devices 21 and processing devices 22 in the monitoring system is not limited.
  • the monitoring system may include one photographing device 21 and one processing device 22, or may include multiple photographing devices 21 and one processing device 22, Or, multiple photographing devices 21 and multiple processing devices 22 may also be included.
  • FIG. 3 is a flowchart of a method for monitoring road condition information provided by this application.
  • the description of the flowchart is as follows:
  • the photographing device 21 acquires monitoring information of the monitored road.
  • the photographing device 21 may be set at the entrance of the monitored road, and the monitored road may be a road B as shown in FIG. 4A.
  • the photographing device 21 may collect monitoring information of the monitored road at a preset frequency, for example, a frequency of 20 frames per second.
  • if the photographing device 21 only includes a camera, the monitoring information may be the video frame collected by the camera at each collection moment, so as to obtain a video of the monitored road during the monitoring period; the video may include multiple video frames.
  • if the photographing device 21 includes a camera and a radar, the monitoring information may be the video frames collected by the camera at each collection time and the multiple sets of driving parameters of the vehicles detected by the radar at each collection time; each set of driving parameters includes information such as the driving speed and/or driving direction, as well as the position of the vehicle.
  • taking the case where the photographing device 21 includes a radar as an example, the manner in which the radar obtains the driving speed and/or driving direction and the position of each vehicle at each collection time is described below.
  • the radar stores its own coordinate information in advance, and then determines the distance and direction of each vehicle from the radar according to the scanning beam in the radar, thereby determining the position of each vehicle.
  • for example, assume the monitored road is road B shown in FIG. 4A, the radar is set on one side of road B, and the coordinates of the radar's position are (250000, 420000). The radar's scanning beam determines that a vehicle driving on road B is 10 meters away from the radar, and that the angle between the line from the vehicle to the radar and the horizontal line through the radar's position is 53.13°. The radar can therefore calculate that the horizontal distance of the vehicle from the radar is 6 meters and the vertical distance is 8 meters, so the coordinates of the vehicle at the current collection time are (250006, 420008).
  • the radar can determine the driving speed of the vehicle according to the acquired position of the vehicle at two adjacent acquisition moments and the time interval between the two acquisition moments.
  • for example, the coordinates of the vehicle at the first collection time and the second collection time are (250006, 420008) and (250010, 420008) respectively. If the radar sends scanning beams at a frequency of 20 frames per second, the interval between the two collection times is 0.05 seconds, during which the vehicle moved 4 meters, giving a driving speed of 80 m/s.
  • the radar can determine the driving direction of the vehicle from its positions at two adjacent collection times. Following the above example, the vehicle moved along the horizontal axis between the two collection times and the horizontal coordinate increased, so the driving direction of the vehicle is to the right along the horizontal axis.
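The radar geometry above can be sketched in a few lines (function names are illustrative; the numbers are the text's own example):

```python
import math

def radar_target_position(radar_xy, dist_m, angle_deg):
    """Map a radar range/bearing return to coordinates: angle_deg is measured
    between the vehicle-radar line and the horizontal line at the radar."""
    x = radar_xy[0] + dist_m * math.cos(math.radians(angle_deg))
    y = radar_xy[1] + dist_m * math.sin(math.radians(angle_deg))
    return (x, y)

def speed_and_direction(p1, p2, dt_s):
    """Driving speed (m/s) and unit direction vector from the positions at
    two adjacent collection times dt_s seconds apart."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0, (0.0, 0.0)
    return dist / dt_s, (dx / dist, dy / dist)

# The text's example: radar at (250000, 420000), vehicle 10 m away at 53.13
# degrees -> about (250006, 420008); two scans 0.05 s apart, 4 m moved
# -> 80 m/s, heading right along the horizontal axis.
pos = radar_target_position((250000.0, 420000.0), 10.0, 53.13)
speed, heading = speed_and_direction((250006.0, 420008.0), (250010.0, 420008.0), 0.05)
```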
  • the radar can also obtain the driving parameters of each vehicle in the video frame in other ways, or the radar can also obtain other driving parameters, which is not limited here.
  • the photographing device 21 sends the acquired monitoring information to the processing device 22.
  • if the monitoring information includes only the video of the monitored road, the photographing device 21 sends the acquired video to the processing device 22; if the monitoring information includes the video of the monitored road and the multiple sets of driving parameters, the photographing device 21 sends the video together with the multiple sets of driving parameters to the processing device 22.
  • the photographing device 21 may periodically send all the monitoring information acquired in a period to the processing device 22; for example, if the monitoring information includes 10 video frames acquired in one period, the photographing device 21 sends those 10 video frames to the processing device 22 together. Alternatively, the photographing device 21 may send the monitoring information of each collection time to the processing device 22 as soon as it is acquired; for example, if the monitoring information includes video, after the photographing device 21 obtains the first video frame at the first collection moment, it sends the first video frame to the processing device 22; after it obtains the second video frame at the second collection moment, it sends the second video frame to the processing device 22; and so on. The manner in which the photographing device 21 sends monitoring information is not restricted here.
  • the processing device 22 acquires the driving parameters of the vehicle included in the monitoring information corresponding to each collection moment.
  • the driving parameters may include, but are not limited to, the position, the driving speed, and the driving direction.
  • the following takes the driving parameters including the position, the driving speed, and the driving direction as an example.
  • the processing device 22 acquiring the driving parameters of the vehicle included in the monitoring information corresponding to each collection time may include, but is not limited to, the following two ways:
  • the monitoring information only includes the video of the monitored road:
  • the processing device 22 may pre-store a map of the monitored road.
  • the map includes the actual coordinate information of a plurality of specific targets.
  • for example, if the total width between lane line 1 and lane line 2 in the video frame is 50 pixels, and the actual coordinates of lane line 1 and lane line 2 are 420005 and 420000 respectively, then the actual width between lane line 1 and lane line 2 is 5 meters, that is, 0.1 meters per pixel. The pixel distance of the vehicle's center from lane line 1 can therefore be converted into an actual offset, from which one coordinate of the vehicle is obtained; the other coordinate of the vehicle is determined in a similar manner, which will not be repeated here.
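This pixel-to-coordinate mapping is a simple linear interpolation between two calibrated lane lines (the function name is illustrative; the numbers are the text's own example):

```python
def pixel_to_coordinate(pixel_col, line1_px, line2_px, line1_coord, line2_coord):
    """Linearly interpolate a pixel column into a real-world coordinate using
    two lane lines whose pixel columns and real coordinates are both known."""
    metres_per_pixel = (line2_coord - line1_coord) / (line2_px - line1_px)
    return line1_coord + (pixel_col - line1_px) * metres_per_pixel

# Lane lines 50 px apart map to coordinates 420005 and 420000, i.e. 5 m,
# or 0.1 m per pixel; a vehicle centre 25 px from lane line 1 sits midway.
print(pixel_to_coordinate(25, 0, 50, 420005.0, 420000.0))   # 420002.5
```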
  • the processing device 22 may determine the driving speed of the vehicle according to the position of the vehicle determined from the video frames corresponding to the two acquisition moments.
  • the specific method is similar to the radar obtaining the driving speed of the vehicle in step S31, and will not be repeated here.
  • if the processing device 22 determines that a new vehicle appears in a video frame for the first time, it can estimate the driving speed of the vehicle. For example, since the vehicle was not included in the video frame acquired at the previous collection time, it can be assumed that the vehicle traveled from the edge of the road area that the photographing device 21 can capture to its current position within one collection interval, as shown by the path indicated by the arrow in FIG. 4C. The driving distance of the vehicle within one collection interval is determined from that edge position and the position of the vehicle, and the speed of the vehicle is then estimated from the collection frequency of the video frames. Afterwards, according to multiple subsequent video frames, the driving speed of the vehicle at different collection times can be updated in real time, and an average value can be taken as the driving speed of the vehicle.
  • the processing device 22 can determine the driving direction of the vehicle in each video frame in the same way as the radar in step S31, or it can determine the direction of the vehicle's head in the video frame. Alternatively, the driving direction can be determined from the positions of the vehicle in successive video frames. For example, if the coordinates of the vehicle in a first video frame are (10, 20), and its coordinates in the second video frame after the first are (22, 20), it can be determined that the driving direction of the vehicle is to the right along the horizontal axis.
  • the processing device 22 may process each video frame as soon as it is obtained to acquire the driving parameters of the vehicles it contains, or it may process several video frames together after acquiring them; this is not limited here.
  • the monitoring information includes the video of the monitored road and multiple sets of driving parameters of the vehicles driving on the road at each collection time:
  • in this manner, the processing device 22 may first determine the position of each vehicle in the video frame corresponding to each acquisition time according to the video of the monitored road, and then, according to the position and acquisition time corresponding to each vehicle, determine the driving parameters corresponding to each vehicle from the multiple sets of driving parameters.
  • for example, the processing device 22 determines that the video frame at the first acquisition moment includes a first vehicle and a second vehicle, and determines from the video frame that the position of the first vehicle is (250006, 420008) and the position of the second vehicle is (250010, 420008). The processing device 22 then selects the driving parameters corresponding to the first acquisition time from the acquired sets of driving parameters: the set of driving parameters that includes the position of the first vehicle is the first vehicle's driving parameters, and the set that includes the position of the second vehicle is the second vehicle's driving parameters.
  • for example, the first collection time includes a first set and a second set of driving parameters: the first set is {(position: 250006, 420008), (driving speed: 80 m/s), (driving direction: right along the horizontal axis)} and the second set is {(position: 250010, 420008), (driving speed: 60 m/s), (driving direction: right along the horizontal axis)}. Since the first set of driving parameters includes the first vehicle's position, the first set is determined to be the driving parameters of the first vehicle, and the second set is determined to be the driving parameters of the second vehicle.
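A nearest-position match is one plausible way to implement this fitting step (the function name, dict layout, and tolerance are illustrative assumptions; the data is the text's own example):

```python
import math

def match_driving_params(video_positions, radar_params, tol_m=1.0):
    """Attach each vehicle detected in a video frame to the radar parameter
    set (from the same collection time) whose reported position is nearest,
    within tol_m metres. video_positions: {vehicle_id: (x, y)};
    radar_params: list of dicts like {"pos": (x, y), "speed": ..., "dir": ...}."""
    matches = {}
    for vid, vpos in video_positions.items():
        best, best_d = None, tol_m
        for params in radar_params:
            d = math.hypot(vpos[0] - params["pos"][0], vpos[1] - params["pos"][1])
            if d <= best_d:
                best, best_d = params, d
        matches[vid] = best
    return matches

# The text's example: two vehicles and two radar parameter sets.
radar = [
    {"pos": (250006.0, 420008.0), "speed": 80.0, "dir": "right"},
    {"pos": (250010.0, 420008.0), "speed": 60.0, "dir": "right"},
]
m = match_driving_params({"first": (250006.0, 420008.0),
                          "second": (250010.0, 420008.0)}, radar)
```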
  • in this way, the processing device 22 does not need to further process the video frames, and the computation load of the processing device 22 can be reduced.
  • each vehicle can be numbered to distinguish multiple vehicles. For example, if the first video frame includes three vehicles, the three vehicles may be marked as vehicle 1, vehicle 2, and vehicle 3.
  • the photographing device 21 can determine, according to the association between the vehicles in two adjacent video frames, whether a vehicle in a video frame is a newly appearing vehicle. A newly appearing vehicle is given a new number, while a vehicle that appeared in the previous video frame keeps its number from the previous video frame.
  • for example, in the first video frame, the coordinates of vehicle 1 are (250006, 420008) and its driving speed is 80 kilometers per hour. At a capture frequency of 20 frames per second, the coordinates of vehicle 1 in the next video frame captured by the photographing device 21 should be (250007.1, 420008).
  • if, in the second video frame, the coordinates of a certain vehicle are (250007.1, 420008), it means that the vehicle is the same as vehicle 1 in the first video frame, and the vehicle is also marked as vehicle 1. If the coordinates of a detected vehicle differ greatly from those of every vehicle in the previous video frame, the vehicle can be considered to be newly appearing in the video frame and is given a new number, for example vehicle 4.
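This numbering scheme amounts to predicting each vehicle's next position and matching detections against the predictions (names and the tolerance are illustrative; the numbers follow the text's 80 km/h example at 20 frames per second):

```python
import math

def assign_vehicle_numbers(prev_vehicles, detections, dt_s, tol_m=1.0):
    """Carry vehicle numbers across frames. Each previously numbered vehicle's
    position is advanced by speed * dt_s along its unit heading; a detection
    within tol_m of a prediction keeps that number, any other detection gets
    the next free number. prev_vehicles: {number: {"pos", "speed", "dir"}}."""
    next_number = max(prev_vehicles, default=0) + 1
    numbered, used = {}, set()
    for det in detections:
        match = None
        for num, v in prev_vehicles.items():
            if num in used:
                continue
            pred = (v["pos"][0] + v["speed"] * dt_s * v["dir"][0],
                    v["pos"][1] + v["speed"] * dt_s * v["dir"][1])
            if math.hypot(det[0] - pred[0], det[1] - pred[1]) <= tol_m:
                match = num
                used.add(num)
                break
        if match is None:
            match = next_number
            next_number += 1
        numbered[match] = det
    return numbered

# Vehicle 1 at (250006, 420008) doing 80 km/h (~22.2 m/s) to the right should
# appear near (250007.1, 420008) one 0.05 s frame later; a detection far from
# every prediction becomes vehicle 2.
prev = {1: {"pos": (250006.0, 420008.0), "speed": 22.2, "dir": (1.0, 0.0)}}
out = assign_vehicle_numbers(prev, [(250007.1, 420008.0), (250030.0, 420008.0)], 0.05)
```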
  • if the photographing device 21 is a combination of a radar and an e-police camera, the e-police camera can obtain the attribute information of each vehicle. After the radar obtains the driving parameters of each vehicle, the acquired attribute information can be used to determine whether a vehicle in the video frame is a newly appearing vehicle. For example, if the attribute information of a certain vehicle in the second video frame is the same as that of vehicle 1 in the first video frame, the vehicle in the second video frame is marked as vehicle 1; if its attribute information differs from that of every vehicle in the first video frame, the vehicle is given a new number.
  • the mapping relationship between the vehicle in the video frame and the driving parameter can be established.
  • the driving parameters of vehicle 1 to vehicle 3 in the first video frame are driving parameter 1 in turn ⁇ Driving parameter 3.
  • the driving parameter of vehicle 1 is driving parameter 4
  • the driving parameter of vehicle 4 is driving parameter 5.
  • Any one of the driving parameter 1 to the driving parameter 5 refers to a collection of the position, the driving speed, and the driving direction corresponding to the vehicle.
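The frame-to-frame association above (predict a vehicle's next position from its speed and direction, reuse the old number on a close match, otherwise assign a new one) can be sketched in Python. This is a hypothetical illustration, not the patent's implementation; the data layout, the 0.5 m matching tolerance, and the function name are assumptions.

```python
FRAME_INTERVAL = 1 / 20  # seconds, for a 20 frame/s capture rate
MATCH_TOLERANCE = 0.5    # metres; illustrative threshold, not from the patent

def match_vehicles(prev_frame, detections, next_number):
    """prev_frame: {number: (x, y, speed_mps, (dx, dy))} from the last frame,
    where (dx, dy) is a unit travelling-direction vector.
    detections: list of (x, y) positions found in the new frame.
    Returns ({number: (x, y)} for the new frame, next unused number)."""
    labelled = {}
    unmatched = list(detections)
    for number, (x, y, speed, (dx, dy)) in prev_frame.items():
        # Predict where this vehicle should be one frame later.
        px = x + dx * speed * FRAME_INTERVAL
        py = y + dy * speed * FRAME_INTERVAL
        for pos in unmatched:
            if abs(pos[0] - px) <= MATCH_TOLERANCE and abs(pos[1] - py) <= MATCH_TOLERANCE:
                labelled[number] = pos          # same vehicle: reuse old number
                unmatched.remove(pos)
                break
    for pos in unmatched:                        # newly appearing vehicles
        labelled[next_number] = pos
        next_number += 1
    return labelled, next_number
```

At 80 km/h and a 1/20 s frame interval the predicted displacement is about 1.1 m, which matches the (250006, 420008) to (250007.1, 420008) example above.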
  • the processing device 22 determines the lane where each vehicle is located according to the location of each vehicle.
  • the processing device 22 pre-stores the coordinate information of the preset area of each lane in the road, so that the lane where a vehicle is located can be determined from the coordinate information of each lane's preset area and the vehicle's position.
  • the preset area may be a virtual coil set on each lane, with the lane's position represented by the virtual coil. If the vehicle's position falls within the virtual coil, the vehicle is located in that lane.
  • the road in FIG. 5 includes 4 lanes, lane 1 to lane 4, and a virtual coil is set for each lane in advance, and the coordinate information corresponding to the virtual coil is stored in the processing device 22. To facilitate the understanding of those skilled in the art, the process of setting the virtual coil in the present application will be described below.
  • the virtual coil may take multiple shapes, for example rectangular, elliptical, or others. The embodiment of this application takes a rectangular virtual coil as an example.
  • the corner points of the virtual coil are calibrated according to the lane lines of each lane and the intersection stop line.
  • the corner points of the virtual coil are selected as follows: at the intersections of the intersection stop line with the lane lines, take the two inner intersection points as corner points; from each inner corner point, extend along the lane line in the direction opposite to the intersection stop line and take another point on the inner side of the reverse extension line as a corner point, obtaining 4 corner points in total. The rectangle formed by connecting the 4 corner points is the virtual coil of the lane.
  • in FIG. 5 there are six lane lines from bottom to top, marked as lane line 1 to lane line 6, plus the intersection stop line shared by all lanes. The inner intersections of lane line 1 and lane line 2 with the intersection stop line are determined
  • as the two corner points of the virtual coil of lane 1, denoted corner point A and corner point B. Then, taking corner point A as the starting point, extend along lane line 1 in the direction opposite to the intersection stop line by a certain length.
  • The length can be the length of a vehicle or a fixed value, such as 10 meters, giving corner point C; corner point D is obtained on lane line 2 in the same way. Connecting corner points A to D gives the virtual coil corresponding to lane 1.
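The corner construction just described can be expressed as a small geometric helper. This is a sketch under assumed names; the patent gives no code, and the `away` unit vector (pointing from the stop line back along the lane) is an assumption about how the reverse extension would be represented.

```python
# Illustrative construction of a rectangular virtual coil from the two inner
# intersections of a lane's boundary lines with the stop line (corner A on one
# lane line, corner B on the other).

def virtual_coil(corner_a, corner_b, away, length=10.0):
    """Return the four corners (A, B, C, D) of the coil; C and D lie `length`
    metres from A and B along the reverse extension of the lane lines."""
    ax, ay = corner_a
    bx, by = corner_b
    dx, dy = away  # unit vector pointing away from the intersection stop line
    corner_c = (ax + dx * length, ay + dy * length)
    corner_d = (bx + dx * length, by + dy * length)
    return corner_a, corner_b, corner_c, corner_d
```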
  • the coordinate range of the virtual coil is x_B ≤ x ≤ x_A and y_C ≤ y ≤ y_A, where x denotes an abscissa, y denotes an ordinate, x_B denotes the abscissa of corner point B, y_C denotes the ordinate of corner point C, and so on; the coordinate information of each corner point is stored in the processing device 22.
  • the coordinate information of each corner point is shown in Table 1.
  • the coordinate information of the virtual coil of each lane is determined and stored in the processing device 22 in advance.
  • after the processing device 22 obtains the position of each vehicle, it can determine within which lane's virtual-coil coordinate range the vehicle falls according to the pre-stored coordinate information of each lane's virtual coil, and then determine the lane where the vehicle is located.
  • the processing device 22 obtains, from driving parameter 1 of vehicle 1 in the first video frame, the coordinate information of vehicle 1 as (2561330.75, 418349.7). Because 2561326.752 ≤ 2561330.75 ≤ 2561342.1 and 418346.6662 ≤ 418349.7 ≤ 418357.2016, vehicle 1 is located in the virtual coil corresponding to lane 1, and it is thereby determined that vehicle 1 is located in lane 1. The lane where each vehicle is located in each video frame is determined in the same way, and the specific process is not repeated here.
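The coordinate-range comparison performed by the processing device 22 amounts to a point-in-rectangle test. A minimal sketch, assuming each virtual coil is stored as an axis-aligned coordinate range; the lane-1 values mirror the example above, and the names are invented.

```python
COILS = {  # lane -> (x_min, x_max, y_min, y_max); values are illustrative
    1: (2561326.752, 2561342.1, 418346.6662, 418357.2016),
}

def lane_of(position, coils=COILS):
    """Return the lane whose virtual coil contains `position`, else None."""
    x, y = position
    for lane, (x_min, x_max, y_min, y_max) in coils.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return lane
    return None  # vehicle is not inside any virtual coil
```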
  • the distance between corner point A and corner point C is 10 meters.
  • the shooting device 21 captures video frames at a frequency of 20 frames/second, so the positions of the same vehicle in video frames from two adjacent capture moments are about 1.1 meters apart. Since the virtual coil is 10 meters long, the shooting device 21 can capture video frames including the vehicle at about 9 capture moments. That is, even if the shooting device 21 does not detect the vehicle at a certain capture moment, it still has 8 more chances to detect the vehicle, which can improve accuracy.
  • a virtual coil corresponding to each lane is used to detect vehicles in the lane.
  • multiple virtual coils may also be set on each lane.
  • the multiple virtual coils correspond to multiple preset areas.
  • two preset areas can be set on each lane, namely the first preset area and the second preset area.
  • taking each preset area as a virtual coil as an example, as shown in FIG. 6, the virtual coil near the intersection stop line is marked as virtual coil A, and the other virtual coil is marked as virtual coil B.
  • the setting method of the virtual coil A is similar to the setting method of the virtual coil in step S34, and will not be repeated here.
  • starting from corner point C, extend along lane line 1 in the direction opposite to the intersection stop line by a certain length.
  • The length can be a fixed value, for example 5 meters, giving corner point E; corner point F is obtained on lane line 2 in the same way. Then, likewise, extend from corner point E and corner point F along lane line 1 and lane line 2 respectively to obtain corner point G and corner point H, and connect corner points E to H to obtain the virtual coil B corresponding to lane 1. The coordinate information of each virtual coil is then determined and stored in the processing device 22 in advance. The setting of virtual coil A and virtual coil B in the other lanes is similar to that of lane 1 and is not repeated here.
  • after the processing device 22 obtains the position of each vehicle from the shooting device 21, it can determine within which lane's virtual-coil coordinate range the vehicle falls according to the pre-stored coordinate information of each lane's virtual coils, and then determine the lane where the vehicle is located. In the embodiment of this application, one lane corresponds to two virtual coils, and as long as the vehicle is located in either of the two virtual coils, the vehicle is considered to be located in that lane.
  • the specific determination method is similar to the foregoing content, and will not be repeated here.
  • the processing device 22 obtains road condition information of the road according to the vehicles included in each lane or the driving parameters of each vehicle.
  • the road condition information may include the traffic flow of the road, the headway between vehicles on the road, or the parking duration of vehicles on the road, etc., where the traffic flow may include the traffic flow of the entrance lanes and the traffic flow of the exit lanes.
  • the road condition information may also include other parameters, which are not listed here. Below, taking the three parameters of traffic flow, headway, and parking duration as examples of road condition information, the process by which the processing device 22 obtains the road condition information of the road is described.
  • the road condition information is the traffic flow:
  • the processing device 22 acquires the position of vehicle 1 to the position of vehicle 3 in the first video frame, the position of vehicle 1 in the second video frame, the position of vehicle 4, and the virtual coil of each lane.
  • vehicle 1 and vehicle 2 are located in lane 1
  • vehicle 3 is located in lane 3
  • vehicle 1 in the second video frame is located in lane 1
  • vehicle 4 is located in lane 1.
  • the traffic flow of each lane can be obtained.
  • the numbers of the vehicles in each video frame can be used to exclude duplicate vehicles across video frames.
  • vehicle 1 to vehicle 3 are included in the first video frame
  • vehicle 1 and vehicle 4 are included in the second video frame.
  • vehicle 1 in the two frames is the same vehicle, so when the traffic flow of each lane is counted, vehicle 1 in the second video frame is not counted again, and the number of vehicles in lane 1 during the collection time period of the two video frames is 3;
  • the number of vehicles in lane 2 during the collection time period of the two video frames is 0, the number of vehicles in lane 3 during that period is 1, and the number of vehicles in lane 4 during that period is 0.
  • the processing device 22 may count the number of vehicles located in each lane in more video frames collected within a preset period of time (for example, 1 second), which will not be repeated here.
  • the four lanes included in the road can be divided into two types: entrance lanes and exit lanes, and then the traffic volume of each type of lane can be counted.
  • lane 1 and lane 2 are entrance lanes
  • lane 3 and lane 4 are exit lanes. Therefore, after the traffic flow of each lane is obtained, the traffic flows of lane 1 and lane 2 can be added to obtain the traffic flow of the entrance lanes, and the traffic flows of lane 3 and lane 4 added to obtain the traffic flow of the exit lanes.
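The counting just described, de-duplicating by vehicle number and then summing per lane type, can be sketched as follows. The names and the observation format are assumptions; the lane typing follows the example above, lanes 1 and 2 as entrance lanes and lanes 3 and 4 as exit lanes.

```python
ENTRANCE_LANES = {1, 2}
EXIT_LANES = {3, 4}

def traffic_flow(observations):
    """observations: iterable of (vehicle_number, lane) over the counting
    window. Vehicle numbers already de-duplicate repeats across frames, so
    per-lane flow is the number of distinct vehicles seen in each lane."""
    per_lane = {}
    for number, lane in observations:
        per_lane.setdefault(lane, set()).add(number)
    counts = {lane: len(v) for lane, v in per_lane.items()}
    entrance = sum(counts.get(l, 0) for l in ENTRANCE_LANES)
    exit_flow = sum(counts.get(l, 0) for l in EXIT_LANES)
    return counts, entrance, exit_flow
```

With the example frames above (vehicles 1 to 3 in frame one, vehicles 1 and 4 in frame two), the entrance flow is 3 and the exit flow is 1.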
  • the road condition information is the headway of the vehicle:
  • the processing device 22 may determine the headway between adjacent vehicles based on the time difference between two adjacent vehicles entering the lane's virtual coil (or the time difference between them leaving it) and the travelling speed of the vehicles.
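As a sketch of this calculation (a hypothetical helper, not from the patent): the headway is the coil-entry time gap multiplied by the following vehicle's speed.

```python
def headway(t_lead_enters, t_follow_enters, follow_speed_mps):
    """Headway in metres: time gap between two adjacent vehicles entering the
    lane's virtual coil, times the speed of the later (following) vehicle.
    Both timestamps in seconds; speed in metres per second."""
    return (t_follow_enters - t_lead_enters) * follow_speed_mps
```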
  • the road condition information is the parking time of the vehicle:
  • the parking duration can be understood as the total parking duration of the vehicle within a preset duration.
  • if the processing device 22 determines that the difference between the positions of vehicle 1 in the first video frame and the second video frame is less than a threshold, which can be 1 meter, etc., then vehicle 1 is considered to be parked during that collection interval.
  • one collection interval is then added to the parking duration of vehicle 1; since the shooting device 21 captures video frames at a frequency of 20 frames per second, one collection interval is 1/20 second.
  • the total parking duration of the other vehicles is processed in a manner similar to that of vehicle 1 and is not repeated here.
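The accumulation rule above can be sketched as follows, assuming per-frame positions for one vehicle. The 1 m threshold and 1/20 s interval follow the example; the function name is invented.

```python
import math

CAPTURE_INTERVAL = 1 / 20   # seconds, at 20 frames per second
PARK_THRESHOLD = 1.0        # metres; movement below this counts as parked

def accumulate_parking(positions):
    """positions: list of (x, y) for one vehicle, one entry per frame.
    Adds one capture interval for each adjacent-frame pair whose displacement
    is below the threshold, and returns the total parking duration."""
    parked = 0.0
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        if math.hypot(x1 - x0, y1 - y0) < PARK_THRESHOLD:
            parked += CAPTURE_INTERVAL
    return parked
```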
  • the road condition information may include not only the traffic flow of the road, the headway between vehicles, or the parking duration of vehicles on the road, but also whether there is a vehicle going the wrong way on the road. At least two virtual coils must be set in each lane to detect a wrong-way vehicle. Taking the virtual coils shown in FIG. 6 as an example, the process of determining whether a vehicle is going the wrong way on the road is described below.
  • Figure 7 is a flowchart for judging whether there is a wrong-way vehicle on the road.
  • the type can be an entrance lane type or an exit lane type. If the type of the lane where the vehicle is located is the entrance lane type, then step S702 to step S705 are executed; if the type of the lane where the vehicle is located is the exit lane type, then step S706 to step S709 are executed.
  • step S702 Determine whether the vehicle is located in the first preset area of the lane at the first moment. If it is not, go to step S703; if it is, go to step S704.
  • the first preset area is an area close to the intersection stop line of the first lane, such as virtual coil A shown in FIG. 6. It should be noted that the embodiment of this application judges whether a vehicle is located in a lane according to whether the vehicle is located in a virtual coil preset in the lane. Therefore, when it is determined that the vehicle is located in the lane but the vehicle is not located in virtual coil A at the first moment, it can be considered that the vehicle is located in virtual coil B at the first moment.
  • the time interval between the first moment and the second moment is less than or equal to a preset time length, and the preset time length may be 10 seconds or 1 minute, etc., which is not limited here.
  • the second preset area is an area far away from the intersection stop line, such as virtual coil B shown in FIG. 6.
  • step S706 Determine whether the vehicle is located in the third preset area of the lane at the third moment. If it is not, go to step S707; if it is, go to step S708.
  • the third preset area is an area far away from the intersection stop line of the second lane, such as virtual coil B shown in FIG. 6.
  • the time interval between the third time and the fourth time is less than or equal to the preset time length.
  • the fourth preset area is an area close to the intersection stop line, such as a virtual coil A as shown in FIG. 6.
  • the processing device 22 determines, according to the positions of vehicle 1 to vehicle 3 in the first video frame, the positions of vehicle 1 and vehicle 4 in the second video frame, the position of vehicle 4 in the third video frame, and the coordinate information of each lane's virtual coils, that in the first video frame vehicle 1 is located in virtual coil B of lane 1, vehicle 2 is located in virtual coil A of lane 1, and vehicle 3 is located in virtual coil A of lane 3; and that in the second video frame vehicle 1 is located in virtual coil A of lane 1 and vehicle 4 is located in virtual coil B of lane 4.
  • in the third video frame, vehicle 4 is located in virtual coil A of lane 4.
  • since lane 1 where vehicle 1 is located is an entrance lane and vehicle 1 is located in virtual coil B (not virtual coil A) at the first moment, vehicle 1 is determined to be a non-wrong-way vehicle. Vehicle 2 is located in virtual coil A of lane 1 in the first video frame but does not appear in the second video frame, and is not detected in the subsequent 10 video frames either, so vehicle 2 can also be determined to be a non-wrong-way vehicle.
  • lane 4 where vehicle 4 is located is an exit lane. In the second video frame vehicle 4 is located in virtual coil B of lane 4, and in the third video frame it is located in virtual coil A of lane 4, so vehicle 4 is determined to be a wrong-way vehicle.
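The decision logic of FIG. 7, as exercised by vehicles 1, 2, and 4 above, can be condensed into a sketch. Names are assumed: coil "A" is the one near the stop line, "B" the far one, and the 10 s window is one of the example preset durations.

```python
PRESET_WINDOW = 10.0  # seconds; the description allows e.g. 10 s or 1 min

def is_wrong_way(lane_type, first_coil, t_first, second_coil, t_second):
    """A vehicle is going the wrong way if, within the preset window, it is
    seen in coil A then coil B on an entrance lane, or coil B then coil A on
    an exit lane."""
    if t_second - t_first > PRESET_WINDOW:
        return False
    if lane_type == "entrance":
        return first_coil == "A" and second_coil == "B"
    if lane_type == "exit":
        return first_coil == "B" and second_coil == "A"
    raise ValueError("lane_type must be 'entrance' or 'exit'")
```

For vehicle 4 (exit lane, coil B then coil A within one frame interval) this returns True; for vehicle 1 (entrance lane, coil B then coil A) it returns False.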
  • the processing device 22 can convert the road condition information into a signal adapted to the intersection signal controller based on the controller's interface input requirements, for example an RS485 signal, and send it to the intersection signal controller. The intersection signal controller can control the signal lights of the intersection according to the road condition information, or can provide signal-light control data for downstream intersections, etc.
  • the intersection signal controller can reduce the green-light duration of the signal light for that direction to reduce idle green time.
  • the road condition information of the road can be obtained without installing physical coils, so the road surface is not damaged and no physical device can be worn out, which reduces the cost of monitoring road conditions. Moreover, whether there is a wrong-way vehicle on the road can be detected from the position information of each vehicle and the coordinate information of the virtual coils in the lanes, which diversifies road condition detection.
  • multiple virtual coils may also be set in each lane to realize the detection of other road condition information, for example, to detect the queue length in each lane, etc., which will not be repeated here.
  • the device for monitoring road condition information may include a hardware structure and/or a software module, and the above functions may be implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is executed as a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and the design constraints of the technical solution.
  • FIG. 8 shows a schematic structural diagram of a monitoring device 800 for road condition information.
  • the device 800 for monitoring road condition information may be used to implement the function of the processing device 22 in the embodiment shown in FIG. 3.
  • the device 800 for monitoring road condition information may be a hardware structure, a software module, or a hardware structure plus a software module.
  • the device 800 for monitoring road condition information can be implemented by a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the device 800 for monitoring road condition information may include an acquiring unit 801 and a processing unit 802.
  • the obtaining unit 801 may be used to perform step S32 in the embodiment shown in FIG. 3, and/or used to support other processes of the technology described herein.
  • the acquisition unit 801 can be used to communicate with the processing unit 802, or can be used by the road condition information monitoring device 800 to communicate with other modules; it can be a circuit, a device, an interface, a bus, a software module, a transceiver, or any other apparatus that can realize communication.
  • the processing unit 802 may be used to execute step S33 to step S35 in the embodiment shown in FIG. 3, and/or used to support other processes of the technology described herein.
  • the division of modules in the embodiment shown in FIG. 8 is schematic, and is only a logical function division. In actual implementation, there may be other division methods.
  • the functional modules in the various embodiments of this application may be integrated into one processor, may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • FIG. 9 shows a road condition information monitoring device 900 provided by an embodiment of this application.
  • the road condition information monitoring device 900 may be used to implement the function of the processing device 22 in the embodiment shown in FIG. 3.
  • the monitoring device 900 of the road condition information may be a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the monitoring device 900 for road condition information includes at least one processor 920, configured to implement, or to support the device 900 in implementing, the function of the processing device 22 in the embodiment shown in FIG. 3.
  • the processor 920 may determine the lane in which each vehicle is located according to the position of each vehicle, and determine the road condition information of the road according to the number of vehicles in each lane or the driving parameters of each vehicle. For details, refer to the detailed description in the method example; it is not repeated here.
  • the device 900 for monitoring road condition information may further include at least one memory 930 for storing program instructions and/or data.
  • the memory 930 and the processor 920 are coupled.
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units or modules, and may be in electrical, mechanical or other forms, and is used for information exchange between devices, units or modules.
  • the processor 920 may cooperate with the memory 930 to operate.
  • the processor 920 may execute program instructions stored in the memory 930. At least one of the at least one memory may be included in the processor.
  • the device 900 for monitoring road condition information may further include an interface 910 for communicating with the processor 920 or for communicating with other devices through a transmission medium, so that the device 900 for monitoring road condition information can communicate with other devices.
  • the other device may be a computing module.
  • the processor 920 may use the interface 910 to send and receive data.
  • the specific connection medium between the aforementioned interface 910, processor 920, and memory 930 is not limited in the embodiment of the present application.
  • the memory 930, the processor 920, and the interface 910 are connected by a bus 940.
  • the bus 940 is represented by a thick line in FIG. 9; the manner of connection between other components is merely schematic and is not limited thereto. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of presentation, only one thick line is used in FIG. 9, but this does not mean that there is only one bus or one type of bus.
  • the processor 920 may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement Or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the memory 930 may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, for example a random-access memory (RAM).
  • the memory is any other medium that can be used to carry or store desired program codes in the form of instructions or data structures and that can be accessed by a computer, but is not limited to this.
  • the memory in the embodiments of the present application may also be a circuit or any other device capable of realizing a storage function for storing program instructions and/or data.
  • An embodiment of the present application also provides a computer-readable storage medium, including instructions, which when run on a computer, cause the computer to execute the method executed by the processing device 22 in the embodiment shown in FIG. 3.
  • An embodiment of the present application also provides a computer program product, including instructions, which when run on a computer, cause the computer to execute the method executed by the processing device 22 in the embodiment shown in FIG. 3.
  • the embodiment of the present application provides a chip system, which includes a processor and may also include a memory, configured to implement the functions of the processing device 22 in the foregoing method.
  • the chip system can be composed of chips, or it can include chips and other discrete devices.
  • the embodiment of the present application provides a road condition information monitoring system.
  • the monitoring system includes a photographing device and the aforementioned road condition information monitoring device.
  • the methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • software When implemented by software, it can be implemented in the form of a computer program product in whole or in part.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, network equipment, user equipment, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by the computer or a data storage device such as a server, data center, etc. integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, an SSD), etc.


Abstract

A method and apparatus for monitoring road condition information. The method includes: obtaining a video of a road intersection; determining, from the video, the vehicles travelling in each lane and the driving parameters of each vehicle in each video frame, for example the vehicle's position, travelling speed, or travelling direction; and obtaining the road condition information of the road according to the number of vehicles travelling in each lane or the driving parameters of the vehicles travelling in each lane. Only the video of the road intersection needs to be obtained and processed to obtain the road condition information of the road; no physical coil needs to be installed, so the road surface is not damaged and no physical device can be worn out, reducing the cost of monitoring road conditions.

Description

Method and Apparatus for Monitoring Road Condition Information. Technical Field
This application relates to the field of image processing technologies, and in particular, to a method and apparatus for monitoring road condition information.
Background
With the popularization of vehicles, more and more vehicles travel on roads. Road conditions therefore need to be monitored so that the vehicles travelling on a road can be regulated in time. For example, when a road is monitored as congested, the red-light duration of the traffic signal at the road's entrance can be increased to reduce the number of vehicles entering the road.
Taking a road's traffic flow as an example of a road condition, one way to monitor it is to lay a physical coil at the road entrance. When a vehicle passes over the physical coil, the coil's inductance value changes, which in turn changes the frequency of the oscillation circuit in the detector connected to the coil; by counting the number of times the frequency of the oscillation circuit changes, the traffic flow entering the road can be determined.
However, laying a physical coil at the road entrance damages the road surface, and the physical coil is easily damaged by being repeatedly rolled over by vehicles, making this method costly.
Summary
This application provides a method and apparatus for monitoring road condition information, so as to reduce the cost of monitoring road conditions.
In a first aspect, a method for monitoring road condition information is provided. In the method, a video of a road intersection is first obtained, the video including multiple video frames; then the driving parameters of the vehicles included in each video frame are obtained from the video, where the driving parameters of each vehicle include at least the vehicle's position and may further include the travelling speed and/or travelling direction. Since the road may include multiple lanes, after the driving parameters of each vehicle are obtained, the lane where the vehicle is located is determined from the vehicle's position, and the road condition information of the road is obtained according to the number of vehicles travelling in each lane or the driving parameters of the vehicles travelling in each lane.
In the above technical solution, only the video of the road intersection needs to be obtained and processed to obtain the road condition information of the road. No physical coil needs to be installed, so the road surface is not damaged and no physical device can be worn out, which can reduce the cost of monitoring road conditions.
In one possible design, the coordinate range of the preset area of each of the multiple lanes included in the road is first obtained; then, according to the vehicle's position and the coordinate range of each lane's preset area, the first preset area to which the vehicle belongs is determined, and the lane corresponding to the first preset area is determined to be the lane where the vehicle is located.
In the above technical solution, the lane where the vehicle is located is determined by comparing the vehicle's position with the coordinate ranges of the preset areas, which is simple and easy to implement.
In one possible design, the road condition information includes at least one of the traffic flow of the road, the headway between vehicles on the road, and whether there is a wrong-way vehicle on the road.
In the above technical solution, multiple parameters characterizing the road condition information can be obtained; the above parameters are merely examples and are not limited in the embodiments of this application.
In one possible design, obtaining the traffic flow of the road according to the number of vehicles travelling in each lane may include, but is not limited to, the following:
obtaining the entrance traffic flow of the road according to a first total number of vehicles travelling in at least one entrance lane; and/or obtaining the exit traffic flow of the road according to a second total number of vehicles travelling in at least one exit lane.
In the above technical solution, after the number of vehicles travelling in each lane is obtained, the entrance traffic flow and the exit traffic flow of the road can be further obtained according to the type of each lane, for example, whether it is an entrance lane or an exit lane.
In one possible design, obtaining the headway between vehicles on the road according to the driving parameters of the vehicles travelling in each lane may include, but is not limited to, the following:
determining the headway according to the time difference between two adjacent vehicles in a lane entering the preset area of the lane and the travelling speed of the first vehicle, that is, the later of the two vehicles to enter the preset area of the lane.
In one possible design, determining whether there is a wrong-way vehicle on the road according to the driving parameters of the vehicles travelling in each lane may include, but is not limited to, the following:
First manner:
If it is determined that the first lane where the vehicle is located is an entrance lane, and the first lane is provided with two preset areas, namely a first preset area close to the intersection stop line of the first lane and a second preset area far from the stop line, then it is determined from the vehicle's position whether the vehicle is located in the first preset area at a first moment. If so, it is determined whether the vehicle is located in the second preset area at a second moment, where the time interval between the first moment and the second moment is less than or equal to a preset duration; in other words, whether the vehicle is located in the second preset area within the preset duration after the first moment. If so, it is determined that there is a wrong-way vehicle in the first lane.
Second manner:
If it is determined that the second lane where the vehicle is located is an exit lane, and the second lane is provided with two preset areas, namely a third preset area far from the intersection stop line of the second lane and a fourth preset area close to the stop line, then it is determined from the vehicle's position whether the vehicle is located in the third preset area at a third moment. If so, it is determined whether the vehicle is located in the fourth preset area at a fourth moment, where the time interval between the third moment and the fourth moment is less than or equal to the preset duration; in other words, whether the vehicle is located in the fourth preset area within the preset duration after the third moment. If so, it is determined that there is a wrong-way vehicle in the second lane.
In the above technical solution, by setting two preset areas in each lane, whether there is a wrong-way vehicle in the lane can be monitored, which enriches the content of road condition monitoring.
In one possible design, the method may further obtain radar data of the road intersection including multiple sets of driving parameters; then, the position of the vehicle in each video frame may be determined from the obtained video, and the driving parameters corresponding to each vehicle may be determined from the multiple sets of driving parameters according to the position of the vehicle in each video frame and the capture moment of each video frame.
In the above technical solution, the driving parameters of the vehicles can be obtained by radar and then fitted to the vehicles in each video frame to obtain the driving parameters of each vehicle, which can reduce the processing load; moreover, radar data is relatively precise, which can improve the accuracy of the obtained road condition information.
In a second aspect, an apparatus for monitoring road condition information is provided. The apparatus includes an obtaining unit and a processing unit, and these units can perform the corresponding functions performed in any of the design examples of the first aspect above. Specifically:
the obtaining unit is configured to obtain a video of a road intersection, the video including multiple video frames;
the processing unit is configured to obtain, from the video, the driving parameters of the vehicles included in each video frame, the driving parameters of a vehicle including the travelling speed and/or travelling direction as well as the vehicle's position; determine, from the vehicle's position, the lane where the vehicle is located, the road including multiple lanes; and obtain the road condition information of the road according to the number of vehicles travelling in each lane or the driving parameters of the vehicles travelling in each lane.
In one possible design, the processing unit is specifically configured to:
obtain the coordinate range of the preset area of each of the multiple lanes;
determine, according to the vehicle's position and the coordinate range of each lane's preset area, the first preset area to which the vehicle belongs; and
determine the lane corresponding to the first preset area to be the lane where the vehicle is located.
In one possible design, the road condition information includes at least one of the traffic flow of the road, the headway between vehicles on the road, and whether there is a wrong-way vehicle on the road.
In one possible design, the road condition information is the traffic flow of the road, and the processing unit is specifically configured to:
obtain the entrance traffic flow of the road according to a first total number of vehicles travelling in at least one entrance lane; and/or obtain the exit traffic flow of the road according to a second total number of vehicles travelling in at least one exit lane.
In one possible design, the road condition information is the headway between vehicles on the road, and the processing unit is specifically configured to:
determine the headway according to the time difference between two adjacent vehicles in a lane entering the preset area of the lane and the travelling speed of a first vehicle, the first vehicle being the later of the two vehicles to enter the preset area of the lane.
In one possible design, the road condition information is whether there is a wrong-way vehicle on the road, and the processing unit is specifically configured to:
determine that the first lane where the vehicle is located is an entrance lane;
determine, from the vehicle's position, whether the vehicle is located in a first preset area of the first lane at a first moment, the first lane including the first preset area and a second preset area, the first preset area being an area close to the intersection stop line of the first lane, and the second preset area being an area far from the stop line;
if so, determine whether the vehicle is located in the second preset area at a second moment, the time interval between the first moment and the second moment being less than or equal to a preset duration;
if so, determine that there is a wrong-way vehicle in the first lane.
In one possible design, the road condition information is whether there is a wrong-way vehicle on the road, and the processing unit is specifically configured to:
determine that the second lane where the vehicle is located is an exit lane;
determine, from the vehicle's position, whether the vehicle is located in a third preset area of the second lane at a third moment, the second lane including the third preset area and a fourth preset area, the third preset area being an area far from the intersection stop line of the second lane, and the fourth preset area being an area close to the stop line;
if so, determine whether the vehicle is located in the fourth preset area at a fourth moment, the time interval between the third moment and the fourth moment being less than or equal to the preset duration;
if so, determine that there is a wrong-way vehicle in the second lane.
In one possible design, the obtaining unit is further configured to:
obtain radar data of the road intersection, the radar data including multiple sets of driving parameters;
and the processing unit is further configured to:
determine the position of the vehicle in each video frame; and determine, from the multiple sets of driving parameters, the driving parameters corresponding to each vehicle according to the position of the vehicle in each video frame and the capture moment of each video frame.
In a third aspect, an apparatus for monitoring road condition information is provided. The apparatus includes at least one processor coupled to at least one memory; the at least one processor is configured to execute the computer program or instructions stored in the at least one memory, so that the apparatus performs the method described in the first aspect above.
In one possible design, when executing the computer program or instructions stored in the at least one memory, the at least one processor performs the following steps:
obtaining a video of a road intersection, the video including multiple video frames;
obtaining, from the video, the driving parameters of the vehicles included in each video frame, the driving parameters of a vehicle including the travelling speed and/or travelling direction as well as the vehicle's position;
determining, from the vehicle's position, the lane where the vehicle is located, the road including multiple lanes;
obtaining the road condition information of the road according to the number of vehicles travelling in each lane or the driving parameters of the vehicles travelling in each lane.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program, the computer program including program instructions that, when executed by a computer, cause the computer to perform the method of any one of the designs of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program product storing a computer program, the computer program including program instructions that, when executed by a computer, cause the computer to perform the method of any one of the designs of the first aspect.
In a sixth aspect, this application provides a chip system, which includes a processor and may further include a memory, configured to implement the method described in the first aspect. The chip system may be composed of chips, or may include chips and other discrete devices.
For the beneficial effects of the second to sixth aspects and their implementations, refer to the description of the beneficial effects of the method of the first aspect and its implementations.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an example of an application scenario involved in this application;
FIG. 2 is a structural block diagram of an example of the monitoring system provided by this application;
FIG. 3 is a flowchart of a method for monitoring road condition information provided by this application;
FIG. 4A is a schematic diagram of an example of obtaining a vehicle's position by radar provided by this application;
FIG. 4B is a schematic diagram of an example of obtaining a vehicle's position from video alone provided by this application;
FIG. 4C is a schematic diagram of an example of obtaining a vehicle's travelling speed from video alone provided by this application;
FIG. 5 is a schematic diagram of an example of setting a virtual coil on a lane in this application;
FIG. 6 is a schematic diagram of another example of setting virtual coils on a lane in this application;
FIG. 7 is a flowchart of judging whether a vehicle is travelling the wrong way on the road in this application;
FIG. 8 is a schematic structural diagram of an example of the apparatus for monitoring road condition information provided by this application;
FIG. 9 is a schematic structural diagram of another example of the apparatus for monitoring road condition information provided by this application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.
In the embodiments of this application, "multiple" means two or more; in view of this, "multiple" may also be understood as "at least two". "At least one" may be understood as one or more, for example one, two, or more. For example, "including at least one" means including one, two, or more, without limiting which ones are included; for example, including at least one of A, B, and C may mean including A, B, C, A and B, A and C, B and C, or A and B and C. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean the three cases of A alone, both A and B, and B alone. In addition, unless otherwise specified, the character "/" generally indicates an "or" relationship between the associated objects before and after it.
Unless stated otherwise, ordinal terms such as "first" and "second" mentioned in the embodiments of this application are used to distinguish multiple objects and are not intended to limit the order, timing, priority, or importance of the multiple objects.
First, the technical features involved in this application are described.
本申请实施例中的方法可以应用于道路监控场景,例如,监控如图1所示的十字路口中道路A的车流量,或者也可以是其他道路监控场景,在此不一一举例。请参考图1,为现有技术中测量道路的车流量的一种方式的示意图。在图1中,预先在道路A的地面铺设物理线圈,该物理线圈与线圈检测器连接,并与线圈检测器内部的电容器件构成振荡电路。在道路A中没有车辆经过物理线圈时,物理线圈的电感值为L₁,则该振荡电路的振荡频率满足如下公式:
f₁ = 1/(2π√(L₁C))
其中,f₁为此时振荡电路的振荡频率,C为线圈检测器的电容器件的电容值。
当有车辆经过该物理线圈时,该物理线圈的电感值由L₁变为L₂,从而该振荡电路的振荡频率变为:
f₂ = 1/(2π√(L₂C))
因此,线圈检测器可以通过检测振荡电路的振荡频率的变化值,来确定是否有车辆经过该道路A,例如,当振荡频率的变化值大于或等于Δf时,则确定有车辆经过道路A,其中,Δf=|f₁-f₂|。然后统计在预设时长内,振荡频率变化值大于或等于Δf的次数,即可以获取道路A的车流量。
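上述由电感变化引起的振荡频率变化及其计数过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,其中的电感、电容和阈值取值均为为便于说明而假设的示例值):

```python
import math

def osc_freq(L, C):
    """LC振荡电路的振荡频率: f = 1 / (2π·√(L·C))"""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def count_vehicles(inductance_samples, C, delta_f):
    """统计振荡频率相对无车基准频率的变化值首次达到delta_f的次数,作为车流量。"""
    f1 = osc_freq(inductance_samples[0], C)  # 假设首个采样为无车状态,电感为L1
    count, occupied = 0, False
    for L in inductance_samples[1:]:
        over = abs(osc_freq(L, C) - f1) >= delta_f
        if over and not occupied:  # 频率变化值刚超过阈值,记为一辆车经过
            count += 1
        occupied = over
    return count
```

例如,以无车电感1e-3亨、有车电感0.8e-3亨、电容1e-7法为假设取值,`count_vehicles` 会对每一次频率突变计数一次。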
但是,采用上述方法获取道路的车流量,必须在道路铺设物理线圈,而铺设物理线圈会损坏路面,且容易损坏,从而造成该方法的成本较高。
鉴于此,本申请实施例提供一种道路状态的监测系统。请参考图2,为该监测系统的结构示意图。如图2所示,该监测系统包括拍摄设备21和处理设备22,其中,拍摄设备21用于拍摄所监测的道路路口的视频,该视频中包括该道路上行驶的车辆,然后将获取的视频发送给处理设备22。
拍摄设备21可以是摄像机,按照预设的采集频率获取所监测的道路路口的视频。或者,拍摄设备21也可以是电警摄像机,电警摄像机可以获取每个视频帧中车辆在采集时刻的位置信息和属性信息,该属性信息可以是车牌号、车型或者颜色等。或者,拍摄设备21也可以是雷达和摄像机的组合,雷达可以通过扫描波束检测该道路上是否有车辆,以及在检测到该道路上有车辆时,可以检测出该车辆在该时刻的行驶速度、行驶方向,以及该车辆的位置信息。该位置信息可以理解为基于世界坐标系或者地心坐标系的空间坐标,该地心坐标系,例如,1984年世界大地坐标系统(world geodetic system-1984 coordinate system)等。在本申请实施例中,不对拍摄设备21的具体形式进行限制。
当处理设备22接收到拍摄设备21发送的该道路路口的视频和/或雷达数据后,则可以获取该视频和/或雷达数据中包括的车辆的行驶参数,该行驶参数可以包括行驶速度和/或行驶方向,以及车辆的位置信息等。然后,根据每个车辆的位置信息确定该车辆所在的车道,例如,确定该车辆在该道路的进车道或者出车道上,从而根据每条车道上所行驶的车辆的行驶参数,获取该道路的路况信息。
处理设备22可以是智能终端盒、服务器集群或者云端服务器等。
在图2所述的监测系统中,是以监测系统包括拍摄设备21和处理设备22为例进行说明的,可以理解的是,该监测系统中还可以包括其他设备,例如,还可以包括用于转发拍摄设备21获取的视频的视频转发设备等。另外,不限制监测系统中拍摄设备21和处理设备22的数量,例如,该监测系统中可以包括一个拍摄设备21和一个处理设备22,或者,可以包括多个拍摄设备21和一个处理设备22,或者也可以包括多个拍摄设备21和多个处理设备22。
下面结合图2所示的监测系统,介绍本申请提供的路况信息的监测方法。
请参考图3,为本申请提供的一种路况信息的监测方法的流程图,该流程图的描述如下:
S31、拍摄设备21获取被监测道路的监控信息。
该拍摄设备21可以设置在被监测道路的道路入口处,该被监测道路可以如图4A中所示的道路B。拍摄设备21可以以预设频率,例如,20帧/秒的频率,采集被监测道路的监控信息。
作为一种示例,该拍摄设备21只包括摄像机,则该监控信息可以是摄像机在每个采集时刻采集的视频帧,从而得到该被监测道路在监控时段内的视频,该视频可以包括多个视频帧。作为另一种示例,该拍摄设备21包括摄像机和雷达,则该监控信息可以是摄像机在每个采集时刻采集的视频帧,以及雷达检测到的车辆在各个采集时刻的多组行驶参数,每组行驶参数包括行驶速度和/或行驶方向,以及该车辆的位置等信息。
下面,对拍摄设备21包括雷达时,雷达获取车辆在各个采集时刻的行驶速度和/或行驶方向,以及该车辆的位置的方式进行说明。
针对车辆的位置:
作为一种示例,雷达中预先存储自身的坐标信息,然后,根据雷达中的扫描波束来确定每个车辆与雷达的距离以及方向,从而确定出每个车辆的位置。例如,该被监测道路为图4A所示的道路B,雷达设置在道路B的一边,且雷达所在的位置的坐标为(250000,420000),雷达通过扫描波束确定行驶在道路B中的车辆与雷达的距离为10米,车辆与雷达的连线与雷达所在位置的水平线的夹角为53.13°,从而,雷达可以计算出车辆距离雷达的水平距离为6米,垂直距离为8米,得到车辆在当前采集时刻的坐标为(250006,420008)。
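上述由距离和夹角换算车辆坐标的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,函数名与参数均为便于说明而假设):

```python
import math

def radar_to_world(radar_xy, distance, angle_deg):
    """根据目标与雷达的距离及二者连线与水平线的夹角,计算目标的世界坐标。"""
    dx = distance * math.cos(math.radians(angle_deg))  # 水平距离
    dy = distance * math.sin(math.radians(angle_deg))  # 垂直距离
    return (radar_xy[0] + dx, radar_xy[1] + dy)
```

按上文示例,`radar_to_world((250000, 420000), 10, 53.13)` 得到的坐标约为(250006,420008)。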
针对行驶速度:
雷达可以根据获取的该车辆在相邻的两个采集时刻的位置,以及这两个采集时刻的时间间隔,确定车辆的行驶速度。例如,雷达获取的车辆在第一个采集时刻和第二个采集时刻的坐标信息分别为(250006,420008)和(250010,420008),假设雷达以20帧/秒的频率发送扫描波束,则第一个采集时刻和第二个采集时刻的时间间隔为1/20=0.05秒,从而确定车辆的行驶速度为4/0.05=80米/秒。
针对行驶方向:
雷达可以根据该车辆在相邻的两个采集时刻的位置,确定车辆的行驶方向。沿用上述例子,该车辆在相邻的采集时刻沿横轴发生了移动,且横轴的坐标取值增大,因此,确定该车辆的行驶方向为沿横轴向右行驶。
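根据相邻两个采集时刻的位置计算行驶速度与行驶方向的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,方向的判断方式为便于说明而简化的假设):

```python
import math

def speed_and_direction(p1, p2, fps=20):
    """根据同一车辆在相邻两个采集时刻的位置p1、p2以及采集频率fps,
    计算行驶速度(米/秒)与大致的行驶方向。"""
    dt = 1.0 / fps                      # 相邻两个采集时刻的时间间隔
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    speed = math.hypot(dx, dy) / dt
    if abs(dx) >= abs(dy):              # 以位移较大的坐标轴判断方向
        direction = "横轴向右" if dx > 0 else "横轴向左"
    else:
        direction = "纵轴向上" if dy > 0 else "纵轴向下"
    return speed, direction
```

按上文示例,两个采集时刻的坐标为(250006,420008)和(250010,420008)时,计算得到速度80米/秒、方向为沿横轴向右。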
当然,雷达也可以通过其他方式获取视频帧中每个车辆的行驶参数,或者,雷达还可以获取其他行驶参数,在此不作限制。
S32、拍摄设备21将获取的监控信息发送给处理设备22。
需要说明的是,若监控信息只包括该被监测道路的视频,则拍摄设备21将获取的视频发送给处理设备22;若监控信息包括被监测道路的视频以及该多组行驶参数,则拍摄设备21将该视频和多组行驶参数一起发送给处理设备22。
具体来讲,拍摄设备21可以周期性向处理设备22发送该周期内获取的所有的监控信息,例如,该监控信息包括在一个周期内获取的10个视频帧,则拍摄设备21将该10个视频帧发送给处理设备22。拍摄设备21也可以在获取每一个采集时刻的监控信息之后,便向处理设备22发送该采集时刻的监控信息。例如,该监控信息包括视频,则拍摄设备21在获取第一个采集时刻的第一个视频帧后,则将该第一个视频帧发送给处理设备22,然后再获取第二个采集时刻的第二个视频帧,再向处理设备22发送该第二个视频帧,以此类推。在此不对拍摄设备21发送监控信息的方式进行限制。
S33、处理设备22获取每个采集时刻对应的监控信息中所包括的车辆的行驶参数。
在本申请实施例中,行驶参数可以包括但不限于位置、行驶速度以及行驶方向,为方便说明,下面以行驶参数包括位置、行驶速度以及行驶方向为例。
根据监控信息的不同,处理设备22获取每个采集时刻对应的监控信息中所包括的车辆的行驶参数可以包括但不限于如下两种方式:
第一种方式,监控信息只包括该被监测道路的视频:
针对位置信息:
作为一种示例,处理设备22可以预先存储该被监测道路的地图,该地图中包括多个特定目标实际所在的坐标信息。在获取车辆的位置时,首先获取在该视频帧中车辆与特定目标的位置关系,然后根据该位置关系及特定目标实际所在的坐标信息,确定该车辆的实际位置。例如,该被监测道路为图4B所示的道路C,在道路C的地图中包括道路C的各个车道线所在的坐标信息。处理设备22确定车辆在视频帧中的位置为:车辆的中心距离车道线1的距离为30像素,且在视频帧中,车道线1和车道线2之间的总宽度为50像素。根据车道线1和车道线2实际所在的位置的纵坐标,可以确定出车道线1和车道线2之间的实际宽度,例如,车道线1的纵坐标为420005,车道线2的纵坐标为420000,则车道线1和车道线2之间的实际宽度为5米,从而得到车辆的中心距离车道线1的实际距离为(30/50)*5=3米,进而得到车辆的纵坐标的取值为420005-3=420002。采用相同的方式,可以确定车辆的横坐标的取值,在此不再赘述。
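上述按像素比例把视频帧中的距离换算为世界坐标的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,函数名与参数均为便于说明而假设,且假设车道线2位于车道线1坐标取值较小的一侧):

```python
def pixel_offset_to_world(lane1_coord, lane2_coord, pixel_offset, pixel_width):
    """按视频帧中的像素比例,把车辆中心到车道线1的像素距离换算为世界坐标。
    lane1_coord/lane2_coord: 两条车道线实际所在的坐标;
    pixel_offset: 车辆中心到车道线1的像素距离;
    pixel_width: 两条车道线在视频帧中的总宽度(像素)。"""
    real_width = abs(lane1_coord - lane2_coord)            # 两条车道线的实际间距
    real_offset = pixel_offset / pixel_width * real_width  # 车辆到车道线1的实际距离
    # 车辆位于车道线1靠车道线2的一侧,故从车道线1的坐标中减去该偏移
    return lane1_coord - real_offset
```

按上文示例,车道线坐标为420005和420000、像素距离30、总宽度50像素时,换算结果为420002。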
针对行驶速度:
作为一种示例,处理设备22可以根据从两个采集时刻对应的视频帧确定的车辆的位置,确定该车辆的行驶速度。具体方式与步骤S31中雷达获取车辆的行驶速度相似,在此不再赘述。
需要说明的是,当处理设备22确定视频帧中首次出现新的车辆时,则可以预估该车辆的行驶速度。例如,由于上一个采集时刻所获取的视频帧中不包括该车辆,则可以假设该车辆在一个采集时刻内由该拍摄设备21所能拍摄到的道路的边缘位置行驶到当前位置,如图4C中箭头所指示的路径,从而根据该拍摄设备21所能拍摄到的道路的边缘位置和该车辆所在的位置,确定其在一个采集时刻内的行驶距离,然后根据视频帧的采集频率,预估该车辆在该视频帧的行驶速度。之后,可以根据后续多个视频帧,实时更新该车辆在不同的采集时刻对应的行驶速度,取其平均值作为该车辆的行驶速度。
针对行驶方向:
处理设备22可以采用步骤S31中雷达确定车辆的行驶方向的相同方式,确定在每个视频帧中车辆的行驶方向,或者,也可以通过采集视频帧中车辆的头部所对应的朝向,确定车辆的行驶方向。或者,也可以根据车辆在视频帧的位置,确定车辆的行驶方向。例如,在第一个视频帧中,车辆在视频帧的坐标信息为(10,20);在第一个视频帧之后的第二个视频帧中,车辆在视频帧的坐标信息为(22,20),则可以确定该车辆的行驶方向为沿横轴向右行驶。
需要说明的是,处理设备22获取每个视频帧中所包括的车辆的行驶参数,可以是在获取每一个视频帧后,则对该视频帧进行处理得到行驶参数,或者,也可以是获取若干个视频帧之后,再对该若干个视频帧分别进行处理,在此不作限制。
第二种方式,监控信息包括该被监测道路的视频以及该道路上行驶的车辆在各个采集时刻的多组行驶参数:
在这种情况下,处理设备22可以首先根据被监测道路的视频,确定每个采集时刻对应的视频帧中车辆的位置,然后根据每个车辆所对应的位置以及采集时刻,从获取的多组行驶参数中确定与每个车辆对应的行驶参数。
例如,处理设备22获取的第一个采集时刻对应的视频帧中包括第一车辆和第二车辆,并根据该视频帧确定出第一车辆的位置为(250006,420008),第二车辆的位置为(250010,420008)。然后,处理设备22从获取的多组行驶参数中确定与第一个采集时刻对应的行驶参数:其中包括第一车辆的位置的一组行驶参数即为第一车辆的行驶参数,包括第二车辆的位置的一组行驶参数即为第二车辆的行驶参数。例如,第一个采集时刻包括第一组行驶参数和第二组行驶参数,第一组行驶参数为{(位置:250006,420008)(行驶速度:80米/秒)(行驶方向:横轴向右)},第二组行驶参数为{(位置:250010,420008)(行驶速度:60米/秒)(行驶方向:横轴向右)},由于第一组行驶参数中包括第一车辆的位置,因此确定第一组行驶参数为第一车辆的行驶参数;采用相同的方式,确定第二组行驶参数即为第二车辆的行驶参数。
这样,处理设备22不用再对视频帧进行处理,可以减少处理设备22的计算量。
当一个视频帧中有多个车辆时,可以对每个车辆进行编号,以区分多个车辆。例如,第一个视频帧中包括3个车辆,可以将这3个车辆分别标记为车辆1、车辆2以及车辆3。对于不同视频帧中的车辆,拍摄设备21可以根据两个相邻的视频帧中各个车辆之间的关联关系,确定视频帧中的车辆是否是新出现的车辆,若是新出现的车辆,则对该新出现的车辆重新编号,若是上一个视频帧中已经出现的车辆,则沿用上一个视频帧中该车辆的编号。
作为一种示例,在第一个视频帧中,车辆1所在的位置的坐标信息为(250006,420008),该车辆的行驶速度为80千米/小时(约22米/秒),假设拍摄设备21以每秒20帧的频率采集视频帧,则车辆1在拍摄设备21采集的下一个视频帧中的坐标信息约为(250007.1,420008)。因此,当拍摄设备21获取第二个视频帧后,若确定第二个视频帧中某一个车辆所在的位置的坐标信息为(250007.1,420008),则说明该车辆与第一个视频帧中的车辆1相同,则将该车辆也标记为车辆1;若检测到有一个车辆的坐标与上一个视频帧中任意一个车辆的坐标都有较大的差距,则可以认为该车辆为第二个视频帧中新出现的车辆,从而对该车辆重新编号,即将该车辆标记为车辆4。
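上述根据预测位置为车辆沿用或分配编号的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,匹配方式与容差取值均为便于说明而假设):

```python
import math

def associate(prev_vehicles, detections, fps=20, tol=0.5):
    """prev_vehicles: {车辆编号: (位置, 行驶速度(米/秒), 单位方向向量)};
    detections: 当前视频帧中检测到的车辆位置列表。
    根据上一帧的位置和速度预测当前位置,在容差tol(米)内匹配则沿用编号,
    否则视为新出现的车辆并分配新编号。"""
    next_id = max(prev_vehicles, default=0) + 1
    current = {}
    for pos in detections:
        matched = None
        for vid, (p, speed, direction) in prev_vehicles.items():
            if vid in current:          # 已匹配过的编号不再复用
                continue
            pred = (p[0] + speed * direction[0] / fps,
                    p[1] + speed * direction[1] / fps)  # 预测的当前位置
            if math.hypot(pos[0] - pred[0], pos[1] - pred[1]) <= tol:
                matched = vid
                break
        if matched is None:             # 新出现的车辆,分配新编号
            matched, next_id = next_id, next_id + 1
        current[matched] = pos
    return current
```

按上文示例,车辆1以约22米/秒沿横轴向右行驶时,其预测位置约为(250007.1,420008),与第二个视频帧中的检测结果匹配,从而沿用编号1;与任何预测位置都相差较大的检测结果则获得新编号。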
作为另一种示例,若拍摄设备21是雷达和电警摄像机的组合,由于电警摄像机可以获取每个车辆的属性信息,则在雷达获取每个车辆的行驶参数后,可以根据电警摄像机中获取的车辆的属性信息,确定视频帧中的车辆是否是新出现的车辆,例如,在第二个视频帧中某一个车辆的属性信息与第一个视频帧中车辆1的属性信息相同,则将第二个视频帧中该车辆标记为车辆1,若第二个视频帧中某一个车辆的属性信息与第一个视频帧中任意一个车辆的属性信息均不相同,则对该车辆重新编号。
在获取每个视频帧中车辆的编号后,则可以建立视频帧中的车辆与行驶参数之间的映射关系,例如,第一个视频帧中车辆1~车辆3的行驶参数依次为行驶参数1~行驶参数3,第二个视频帧中车辆1的行驶参数为行驶参数4,车辆4的行驶参数为行驶参数5。该行驶参数1~行驶参数5中的任意一个行驶参数是指与该车辆对应的位置、行驶速度以及行驶方向的集合。
S34、处理设备22根据每个车辆的位置确定每个车辆所在的车道。
在本申请实施例中,处理设备22中预先存储该道路中每个车道的预设区域的坐标信息,从而根据每个车道的预设区域的坐标信息和每个车辆所在的位置,确定车辆所在的车道。
作为一种示例,该预设区域可以是在每个车道上设置的虚拟线圈,通过虚拟线圈来表示一个车道的位置,若车辆所在的位置位于该虚拟线圈内,则说明该车辆位于该车道上。例如,在图5中道路中包括4个车道,分别为车道1~车道4,预先为每个车道设置虚拟线圈,并将虚拟线圈对应的坐标信息存储在处理设备22中。为方便本领域技术人员理解,下面对本申请设置虚拟线圈的过程进行说明。
虚拟线圈可以包括多种形状,例如,可以为矩形、椭圆或者其他形状等,在本申请实施例中,以虚拟线圈为矩形为例。首先,根据每个车道的车道线以及路口停止线,标定虚拟线圈的角点。虚拟线圈的角点的选择原则如下:在路口停止线与车道线的交叉处,取两个内侧的交叉点作为角点;再将这两个内侧交叉点分别沿车道线向与路口停止线相反的方向延伸,在每条反向延伸线上再取一个点作为角点,从而得到4个角点,连接该4个角点所形成的矩形即为该车道的虚拟线圈。例如,在图5中从下往上依次包括6条车道线,分别标记为车道线1~车道线6,以及所有的车道共用的路口停止线,确定车道线1和车道线2与路口停止线的内侧交叉点为车道1的虚拟线圈对应的两个角点,记为角点A和角点B,然后,以角点A为起点,沿车道线1向与路口停止线相反的方向延伸一定长度,该长度可以为车辆的长度或者一个固定值,例如为10米,得到角点C,采用相同的方式在车道线2上得到角点D,连接角点A~角点D,即可得到与车道1对应的虚拟线圈。该虚拟线圈的坐标范围即为x_{A,x}≤x≤x_{B,x},y_{C,y}≤y≤y_{A,y},其中,x表示横坐标,y表示纵坐标,x_{A,x}表示角点A的横坐标,x_{B,x}表示角点B的横坐标,y_{C,y}表示角点C的纵坐标,y_{A,y}表示角点A的纵坐标,并在处理设备22中存储各个角点的坐标信息。作为一种示例,各个角点的坐标信息如表1所示。
表1
角点 横坐标x 纵坐标y
A 2561326.752 418357.2016
B 2561342.1 418357.2016
C 2561326.752 418346.6662
D 2561342.1 418346.6662
采用上述相同的方式,确定每个车道的虚拟线圈的坐标信息,并预先保存在处理设备22中。
这样,当处理设备22获取每个车辆的位置后,则可以根据预先存储的每个车道的虚拟线圈的坐标信息,确定该车辆位于哪一个车道对应的虚拟线圈的坐标范围内,进而确定出该车辆所在的车道。
例如,处理设备22从第一个视频帧的车辆1的行驶参数1中,获取第一个视频帧中车辆1的坐标信息为(2561330.75,418349.7),由于2561326.752<2561330.75<2561342.1,且418346.6662<418349.7<418357.2016,即车辆1位于与车道1对应的虚拟线圈内,从而确定车辆1位于车道1中。采用相同的方式,确定每个视频帧中车辆所在的车道,具体过程在此不再赘述。
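上述根据虚拟线圈的坐标范围判断车辆所在车道的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,坐标范围取自上文表1,数据结构为便于说明而假设):

```python
LANE_COILS = {
    # 车道编号 -> 虚拟线圈的坐标范围 (x_min, x_max, y_min, y_max),取值来自表1
    1: (2561326.752, 2561342.1, 418346.6662, 418357.2016),
}

def locate_lane(position, coils=LANE_COILS):
    """若车辆位置落在某个车道的虚拟线圈坐标范围内,返回该车道编号,否则返回None。"""
    x, y = position
    for lane, (x_min, x_max, y_min, y_max) in coils.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return lane
    return None
```

按上文示例,车辆1的坐标(2561330.75,418349.7)落在车道1的虚拟线圈内,`locate_lane` 返回1。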
另外,在本申请实施例中,组成该虚拟线圈的角点A和角点C之间存在一定的距离,这样,可以避免单点检测存在丢失而导致检测失败的问题,可以保证检测的准确性。例如,角点A和角点C之间的距离为10米,假设该道路中车辆的行驶速度上限为80千米/小时,拍摄设备21以20帧/秒的频率采集视频帧,从而,通过相邻的两个采集时刻获取的视频帧,得到该同一个车辆的位置相差约1.1米,而虚拟线圈的长度为10米,从而拍摄设备21可以在9个采集时刻获取包括该车辆的视频帧,也就是说,即使拍摄设备21在某一个采集时刻未检测到该车辆,该拍摄设备21还可以有8次机会检测到该车辆,可以提高准确性。
在前述示例中,是通过与每个车道对应的一个虚拟线圈来对该车道中的车辆进行检测,在其他实施例中,还可以在每个车道上设置多个虚拟线圈。该多个虚拟线圈对应多个预设区域。
例如,可以在每个车道上设置两个预设区域,分别为第一预设区域和第二预设区域,以每个预设区域为虚拟线圈为例,如图6所示,靠近路口停止线的虚拟线圈标记为虚拟线圈A,另一个虚拟线圈则标记为虚拟线圈B。虚拟线圈A的设置方式与步骤S34中虚拟线圈的设置方式相似,在此不再赘述。当确定虚拟线圈A对应的角点A~角点D之后,可以将角点C沿车道线1向与路口停止线相反的方向延伸一定长度,该长度可以为固定值,例如为5米,得到角点E,采用相同的方式在车道线2上得到角点F。然后,采用相同的方式,将角点E和角点F分别在车道线1和车道线2上延伸,得到角点G和角点H,连接角点E~角点H,即可得到与车道1对应的虚拟线圈B。然后确定每个虚拟线圈的坐标信息,预先保存在处理设备22中。其他车道中虚拟线圈A和虚拟线圈B的设置方式与车道1相似,在此不再赘述。
在这种情况下,当处理设备22从拍摄设备21获取每个车辆的位置后,则可以根据预先存储的每个车道的虚拟线圈的坐标信息,确定该车辆位于哪一个车道对应的虚拟线圈的坐标范围内,进而确定出该车辆所在的车道。在本申请实施例中,一个车道对应两个虚拟线圈,则只要车辆位于这两个虚拟线圈中的任意一个虚拟线圈,则认为该车辆位于该车道上。具体确定方式与前述内容相似,在此不再赘述。
S35、处理设备22根据每个车道所包括的车辆或每个车辆的行驶参数,获取该道路的路况信息。
在本申请实施例中,该路况信息可以包括道路的车流量、该道路中车辆的车头间距或者该道路中车辆的停车时长等,其中,车流量可以包括进口车道的车流量和出口车道的车流量。当然,该路况信息还可以包括其他参数,在此不一一列举。下面,以该路况信息为车流量、车头间距以及停车时长这三种参数为例,对处理设备22获取该道路的路况信息的过程进行说明。
路况信息为车流量:
作为一种示例,处理设备22根据获取的第一个视频帧中车辆1的位置~车辆3的位置,第二个视频帧中车辆1的位置,车辆4的位置,以及每个车道的虚拟线圈的坐标信息,确定第一个视频帧中车辆1和车辆2位于车道1,车辆3位于车道3,第二个视频帧中的车辆1位于车道1,车辆4位于车道1。从而统计每个车道所包括的车辆的总数,即可获取每个车道的车流量。在统计车流量时,可以通过每个视频帧中车辆的编号,排除视频帧中重复的车辆。例如,在第一个视频帧中包括车辆1~车辆3,而在第二个视频帧中包括车辆1和车辆4,可见,第一个视频帧中的车辆1和第二个视频帧中的车辆1是同一个车辆,从而在统计每个车道的车流量时,可以不统计第二个视频帧中的车辆1,从而得到车道1中在两个视频帧的采集时间段内包括的车辆数为3,车道2在两个视频帧的采集时间段内包括的车辆数为0,车道3在两个视频帧的采集时间段内包括的车辆数为1,以及车道4在两个视频帧的采集时间段内包括的车辆数为0。或者,为了更加准确,处理设备22可以统计一个预设时长(例如1秒)内所采集到的更多个视频帧中位于各个车道内的车辆数,在此不再赘述。
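上述按车辆编号去重并按车道统计车流量的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,输入的数据结构为便于说明而假设):

```python
def lane_flow(frames):
    """frames: 每个视频帧表示为 {车辆编号: 车道编号} 的字典。
    按车辆编号去重后,统计每个车道在监控时段内的车辆数,即车流量。"""
    seen = {}                          # 车辆编号 -> 该车辆首次出现时所在的车道
    for frame in frames:
        for vid, lane in frame.items():
            seen.setdefault(vid, lane)  # 同一编号只统计一次,排除重复车辆
    flow = {}
    for lane in seen.values():
        flow[lane] = flow.get(lane, 0) + 1
    return flow
```

按上文示例,第一个视频帧为{车辆1:车道1,车辆2:车道1,车辆3:车道3},第二个视频帧为{车辆1:车道1,车辆4:车道1},统计得到车道1的车辆数为3,车道3的车辆数为1。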
另外,可以将该道路包括的4个车道分为进口车道和出口车道两种类型,然后统计每种类型的车道的车流量。例如,在图5所示的道路中,车道1和车道2为进口车道,车道3和车道4为出口车道,从而,在获取每个车道的车流量之后,可以将车道1和车道2的车流量相加,得到进口车道的车流量;将车道3和车道4的车流量相加,得到出口车道的车流量。
路况信息为车辆的车头间距:
处理设备22可以根据两个相邻的车辆进入车道的虚拟线圈的时间差(或者离开车道的虚拟线圈的时间差),以及车辆的行驶速度,确定相邻的车辆的车头间距。
沿用上述例子,处理设备22确定第一个视频帧中,车辆1和车辆2位于车道1,且在第二个视频帧中,车道1中出现了新的车辆,即车辆4,因此,可以确定车辆2和车辆4相邻。并且,车辆2和车辆4相隔一个采集时间依次出现在车道1的虚拟线圈内,由于拍摄设备21以每秒20帧的频率采集视频帧,则可以认为车辆2和车辆4进入车道1的虚拟线圈的时间差为1/20秒,然后根据车辆4的行驶速度,例如为80千米/小时(即约22米/秒),确定车辆2和车辆4之间的车头间距为:22*(1/20)=1.1米。
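上述车头间距的计算可以用如下Python示意代码表示(该代码并非本申请的一部分,函数名与参数均为便于说明而假设):

```python
def headway(enter_time_front, enter_time_rear, rear_speed):
    """车头间距 = 后车行驶速度(米/秒) × 两车先后进入虚拟线圈的时间差(秒)。
    enter_time_front/enter_time_rear: 前车、后车进入虚拟线圈的时刻。"""
    return rear_speed * (enter_time_rear - enter_time_front)
```

按上文示例,后车速度为22米/秒、时间差为1/20秒时,车头间距为1.1米。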
路况信息为车辆的停车时长:
停车时长可以理解为在一个预设时长内车辆的停车总时长。
作为一种示例,处理设备22确定车辆1在第一个视频帧和第二个视频帧中的位置相差小于阈值,该阈值可以为1米等,则认为车辆1在一个采集时间内处于停车状态,从而将车辆1的停车时长加上一个采集时间,由于拍摄设备21以每秒20帧的频率采集视频帧,则一个采集时间为1/20秒。然后,采用相同的方式对第二个视频帧和第三个视频帧中车辆1的位置进行处理,直到对该预设时长内获取的所有的视频帧均完成上述处理,得到该预设时长内车辆1的停车总时长。对其他车辆的停车总时长的处理方式与车辆1相似,在此不再赘述。
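上述逐帧累加停车时长的过程,可以用如下Python示意代码表示(该代码并非本申请的一部分,阈值与采集频率取值均为便于说明而假设):

```python
import math

def stop_duration(positions, fps=20, threshold=1.0):
    """positions: 同一车辆在预设时长内连续视频帧中的位置列表。
    相邻两帧位置相差小于threshold(米)时,认为该采集间隔内处于停车状态,
    将停车时长累加一个采集时间(1/fps秒)。"""
    total = 0.0
    for p1, p2 in zip(positions, positions[1:]):
        if math.hypot(p2[0] - p1[0], p2[1] - p1[1]) < threshold:
            total += 1.0 / fps
    return total
```

例如,某车辆在4个连续视频帧中仅前两个采集间隔内位置几乎不变,则其停车总时长为2×(1/20)=0.1秒。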
在本申请实施例中,该路况信息除了可以包括道路的车流量、该道路中车辆的车头间距或者该道路中车辆的停车时长外,还可以检测该道路中是否有车辆逆行。检测车辆是否逆行需要在每个车道中设置至少两个虚拟线圈,以图6所示的虚拟线圈为例,对确定该道路是否有车辆逆行的过程进行说明。
请参考图7,为判断该道路中是否有车辆逆行的流程图。
S701、确定车辆所在的车道的类型。
该类型可以是进口车道类型或出口车道类型。若该车辆所在的车道的类型为进口车道类型,则执行步骤S702~步骤S705;若该车辆所在的车道的类型为出口车道类型,则执行步骤S706~步骤S709。
S702、确定该车辆在第一时刻是否位于该车道的第一预设区域。若为否,则执行步骤S703;若为是,则执行步骤S704。
该第一预设区域为靠近所述第一车道的路口停止线的区域,如图6所示的虚拟线圈A。需要说明的是,本申请实施例是根据车辆是否位于车道预设的虚拟线圈来判断车辆是否位于该车道,因此,当确定该车辆位于该车道,且该车辆在第一时刻不位于虚拟线圈A,则可以认为该车辆在第一时刻位于虚拟线圈B。
S703、确定该车辆为非逆行车辆。
S704、确定该车辆是否在第二时刻进入该车道的第二预设区域。
该第一时刻与该第二时刻的时间间隔小于或等于预设时长,该预设时长可以为10秒或者1分钟等,在此不作限制。该第二预设区域为远离所述路口停止线的区域,如图6所示的虚拟线圈B。
S705、若为是,则确定该车辆为逆行车辆;若为否,则该车辆为非逆行车辆。
S706、确定该车辆是否在第三时刻位于该车道的第三预设区域。若为否,则执行步骤S707;若为是,则执行步骤S708。
该第三预设区域为远离所述第二车道的路口停止线的区域,如图6所示的虚拟线圈B。
S707、确定该车辆为非逆行车辆。
S708、确定该车辆是否在第四时刻进入该车道的第四预设区域。
该第三时刻与该第四时刻的时间间隔小于或等于预设时长。第四预设区域为靠近所述路口停止线的区域,如图6所示的虚拟线圈A。
S709、若为是,则确定该车辆为逆行车辆;若为否,则该车辆为非逆行车辆。
作为一种示例,处理设备22根据获取的第一个视频帧中车辆1的位置~车辆3的位置,第二个视频帧中车辆1的位置,车辆4的位置,第三个视频帧中车辆4的位置,以及每个车道的虚拟线圈的坐标信息,确定第一个视频帧中车辆1位于车道1的虚拟线圈B,车辆2位于车道1的虚拟线圈A,车辆3位于车道3的虚拟线圈A,第二个视频帧中的车辆1位于车道1的虚拟线圈A,车辆4位于车道4的虚拟线圈B。第三个视频帧中的车辆4位于车道4的虚拟线圈A。
由于车辆1所在的车道1为进口车道,且车辆1所在的虚拟线圈为虚拟线圈B,从而确定车辆1为非逆行车辆。车辆2在第一个视频帧中位于车道1的虚拟线圈A,但是在第二个视频帧中已经不存在该车辆2,且后续10个视频帧中均没有检测到该车辆2,从而可以确定车辆2为非逆行车辆。
由于车辆3所在的车道3为出口车道,且车辆3所在的虚拟线圈为虚拟线圈A,因此,确定车辆3为非逆行车辆。车辆4所在的车道4为出口车道,在第二个视频帧中车辆4位于车道4的虚拟线圈B,且第三个视频帧中该车辆4位于车道4的虚拟线圈A,从而确定车辆4为逆行车辆。
综上,确定出车道1~车道3中不存在逆行车辆,车道4中存在逆行车辆,且逆行车辆为车辆4。
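图7所示的逆行判断逻辑可以用如下Python示意代码表示(该代码并非本申请的一部分,其中车道类型与虚拟线圈的表示方式为便于说明而假设):

```python
def is_reverse(lane_type, first_zone, second_zone):
    """lane_type: 'in'表示进口车道,'out'表示出口车道;
    first_zone/second_zone: 车辆在预设时长内先后所处的虚拟线圈,
    'A'为靠近路口停止线的线圈,'B'为远离停止线的线圈。"""
    if lane_type == 'in':
        # 进口车道的正常行驶方向为 B -> A,先A后B即为逆行
        return first_zone == 'A' and second_zone == 'B'
    # 出口车道的正常行驶方向为 A -> B,先B后A即为逆行
    return first_zone == 'B' and second_zone == 'A'
```

按上文示例,出口车道4中的车辆4先位于虚拟线圈B、后进入虚拟线圈A,判定为逆行;进口车道1中的车辆1先位于虚拟线圈B、后进入虚拟线圈A,判定为非逆行。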
若该处理设备22与路口信号机连接,则当处理设备22获取该道路的路况信息后,可以基于路口信号机的接口输入要求,将该路况信息转换为与路口信号机适配的信号,例如,可以为RS485信号,发送给路口信号机,路口信号机可以根据该道路的路况信息对路口的信号灯进行调控,或者可以对下游路口提供信号灯控制数据等。例如,当道路的进口车道的车流量小于阈值时,说明该道路上行驶的车辆较少,从而路口信号机可以减少该方向信号灯中绿灯的时长,以减少绿灯空放。
在上述技术方案中,不用设置物理线圈即可获取道路的路况信息,从而既不会破坏路面,也不存在物理器件损坏的问题,可以降低对道路路况进行监测的成本。并且,可以通过每个车辆的位置信息以及车道中包括的虚拟线圈的坐标信息,检测出道路中是否存在逆行车辆等路况信息,可以增加道路路况检测的多样化。
在其他实施例中,还可以在每个车道中设置多个虚拟线圈,来实现其他路况信息的检测,例如,检测每个车道中的排队长度等,在此不再赘述。
上述本申请提供的实施例中,为了实现上述本申请实施例提供的方法中的各功能,路况信息的监测装置可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
图8示出了一种路况信息的监测装置800的结构示意图。其中,路况信息的监测装置800可以用于实现图3所示的实施例中处理设备22的功能。路况信息的监测装置800可以是硬件结构、软件模块、或硬件结构加软件模块。路况信息的监测装置800可以由芯片系统实现。本申请实施例中,芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
路况信息的监测装置800可以包括获取单元801以及处理单元802。
获取单元801可以用于执行图3所示的实施例中的步骤S32,和/或用于支持本文所描述的技术的其它过程。一种可能的实现方式,获取单元801可以用于与处理单元802通信,或者,获取单元801可以用于路况信息的监测装置800和其它模块进行通信,其可以是电路、器件、接口、总线、软件模块、收发器或者其它任意可以实现通信的装置。
处理单元802可以用于执行图3所示的实施例中的步骤S33~步骤S35,和/或用于支持本文所描述的技术的其它过程。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
图8所示的实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
如图9所示为本申请实施例提供的路况信息的监测装置900,其中,路况信息的监测装置900可以用于实现图3所示的实施例中处理设备22的功能。该路况信息的监测装置900可以为芯片系统。本申请实施例中,芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
路况信息的监测装置900包括至少一个处理器920,用于实现或用于支持路况信息的监测装置900实现图3所示的实施例中处理设备22的功能。示例性地,处理器920可以根据每个车辆的位置确定每个车辆所在的车道,以及根据每个车道所包括的车辆数或每个车辆的行驶参数,确定道路的路况信息,具体参见方法示例中的详细描述,此处不做赘述。
路况信息的监测装置900还可以包括至少一个存储器930,用于存储程序指令和/或数据。存储器930和处理器920耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性,机械或其它的形式,用于装置、单元或模块之间的信息交互。处理器920可能和存储器930协同操作。处理器920可能执行存储器930中存储的程序指令。所述至少一个存储器中的至少一个可以包括于处理器中。
路况信息的监测装置900还可以包括接口910,用于与处理器920通信,或者用于通过传输介质和其它设备进行通信,从而用于路况信息的监测装置900可以和其它设备进行通信。示例性地,该其它设备可以是计算模块。处理器920可以利用接口910收发数据。
本申请实施例中不限定上述接口910、处理器920以及存储器930之间的具体连接介质。本申请实施例在图9中以存储器930、处理器920以及接口910之间通过总线940连接,总线在图9中以粗线表示,其它部件之间的连接方式,仅是进行示意性说明,并不引以为限。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图9中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
在本申请实施例中,处理器920可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器930可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
本申请实施例中还提供一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行图3所示的实施例中处理设备22执行的方法。
本申请实施例中还提供一种计算机程序产品,包括指令,当其在计算机上运行时,使得计算机执行图3所示的实施例中处理设备22执行的方法。
本申请实施例提供了一种芯片系统,该芯片系统包括处理器,还可以包括存储器,用于实现前述方法中处理设备22的功能。该芯片系统可以由芯片构成,也可以包含芯片和其他分立器件。
本申请实施例提供了一种路况信息的监测系统,该监测系统包括拍摄设备和前述的路况信息的监测装置。
本申请实施例提供的方法中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,简称DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,简称DVD))、或者半导体介质(例如,SSD)等。

Claims (16)

  1. 一种路况信息的监测方法,其特征在于,包括:
    获取道路路口的视频,所述视频包括多个视频帧;
    根据所述视频获取每个视频帧包括的车辆的行驶参数,所述车辆的行驶参数包括行驶速度和/或行驶方向,以及所述车辆的位置;
    根据所述车辆的位置确定所述车辆所在的车道,所述道路包括多条车道;
    根据每条车道上行驶的车辆数或所述每条车道上行驶的车辆的行驶参数,获取所述道路的路况信息。
  2. 根据权利要求1所述的方法,其特征在于,根据所述车辆的位置确定所述车辆所在的车道,包括:
    获取所述多条车道中每条车道的预设区域的坐标范围;
    根据所述车辆的位置以及所述每条车道的预设区域的坐标范围,确定所述车辆所属的第一预设区域;
    确定与所述第一预设区域对应的车道为所述车辆所在的车道。
  3. 根据权利要求1或2所述的方法,其特征在于,所述路况信息为所述道路的车流量,根据每条车道上行驶的车辆数,获取所述道路的路况信息,包括:
    根据至少一个进口车道上行驶的车辆的第一车辆总数,获取所述道路的进口车流量;和/或,根据至少一个出口车道上行驶的车辆的第二车辆总数,获取所述道路的出口车流量。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述路况信息为所述道路中车辆的车头间距,根据每条车道上行驶的车辆的行驶参数,获取所述道路的路况信息,包括:
    根据每条车道上相邻的两个车辆进入所述车道的预设区域的时间差,以及所述两个车辆中的第一车辆的行驶速度,确定所述车头间距,所述第一车辆为所述两个车辆中后进入所述车道的预设区域的车辆。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述路况信息为所述道路是否存在逆行车辆,根据每条车道上行驶的车辆的行驶参数,获取所述道路的路况信息,包括:
    确定车辆所在的第一车道为进口车道;
    根据所述车辆的位置,确定所述车辆是否在第一时刻位于所述第一车道的第一预设区域,所述第一车道包括所述第一预设区域和第二预设区域,所述第一预设区域为靠近所述第一车道的路口停止线的区域,所述第二预设区域为远离所述路口停止线的区域;
    若为是,则确定所述车辆是否在第二时刻位于所述第二预设区域,所述第一时刻与所述第二时刻的时间间隔小于或等于预设时长;
    若为是,则确定所述第一车道中存在逆行车辆。
  6. 根据权利要求1-5中任一项所述的方法,其特征在于,所述路况信息为所述道路是否存在逆行车辆,根据每条车道上行驶的车辆的行驶参数,获取所述道路的路况信息,包括:
    确定车辆所在的第二车道为出口车道;
    根据所述车辆的位置,确定所述车辆是否在第三时刻位于所述第二车道的第三预设区域,所述第二车道包括所述第三预设区域和第四预设区域,所述第三预设区域为远离所述第二车道的路口停止线的区域,所述第四预设区域为靠近所述路口停止线的区域;
    若为是,则确定所述车辆是否在第四时刻位于所述第四预设区域,所述第三时刻与所述第四时刻的时间间隔小于或等于预设时长;
    若为是,则确定所述第二车道中存在逆行车辆。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    获取道路路口的雷达数据,所述雷达数据包括多组行驶参数;
    根据所述视频获取每个视频帧包括的车辆的行驶参数,包括:
    确定每个视频帧中车辆的位置;
    根据所述每个视频帧中车辆的位置以及所述每个视频帧的采集时刻,从所述多组行驶参数中确定与每个车辆对应的行驶参数。
  8. 一种路况信息的监测装置,其特征在于,包括:
    获取单元,用于获取道路路口的视频,所述视频包括多个视频帧;
    处理单元,用于根据所述视频获取每个视频帧包括的车辆的行驶参数,所述车辆的行驶参数包括行驶速度和/或行驶方向,以及所述车辆的位置;根据所述车辆的位置确定所述车辆所在的车道,所述道路包括多条车道;以及,根据每条车道上行驶的车辆数或所述每条车道上行驶的车辆的行驶参数,获取所述道路的路况信息。
  9. 根据权利要求8所述的装置,其特征在于,所述处理单元具体用于:
    获取所述多条车道中每条车道的预设区域的坐标范围;
    根据所述车辆的位置以及所述每条车道的预设区域的坐标范围,确定所述车辆所属的第一预设区域;
    确定与所述第一预设区域对应的车道为所述车辆所在的车道。
  10. 根据权利要求8或9所述的装置,其特征在于,所述路况信息为所述道路的车流量,所述处理单元具体用于:
    根据至少一个进口车道上行驶的车辆的第一车辆总数,获取所述道路的进口车流量;和/或,根据至少一个出口车道上行驶的车辆的第二车辆总数,获取所述道路的出口车流量。
  11. 根据权利要求8-10中任一项所述的装置,其特征在于,所述路况信息为所述道路中车辆的车头间距,所述处理单元具体用于:
    根据每条车道上相邻的两个车辆进入所述车道的预设区域的时间差,以及所述两个车辆中的第一车辆的行驶速度,确定所述车头间距,所述第一车辆为所述两个车辆中后进入所述车道的预设区域的车辆。
  12. 根据权利要求8-11中任一项所述的装置,其特征在于,所述路况信息为所述道路是否存在逆行车辆,所述处理单元具体用于:
    确定车辆所在的第一车道为进口车道;
    根据所述车辆的位置,确定所述车辆是否在第一时刻位于所述第一车道的第一预设区域,所述第一车道包括所述第一预设区域和第二预设区域,所述第一预设区域为靠近所述第一车道的路口停止线的区域,所述第二预设区域为远离所述路口停止线的区域;
    若为是,则确定所述车辆是否在第二时刻位于所述第二预设区域,所述第一时刻与所述第二时刻的时间间隔小于或等于预设时长;
    若为是,则确定所述第一车道中存在逆行车辆。
  13. 根据权利要求8-12中任一项所述的装置,其特征在于,所述路况信息为所述道路是否存在逆行车辆,所述处理单元具体用于:
    确定车辆所在的第二车道为出口车道;
    根据所述车辆的位置,确定所述车辆是否在第三时刻位于所述第二车道的第三预设区域,所述第二车道包括所述第三预设区域和第四预设区域,所述第三预设区域为远离所述第二车道的路口停止线的区域,所述第四预设区域为靠近所述路口停止线的区域;
    若为是,则确定所述车辆是否在第四时刻位于所述第四预设区域,所述第三时刻与所述第四时刻的时间间隔小于或等于预设时长;
    若为是,则确定所述第二车道中存在逆行车辆。
  14. 根据权利要求8-13中任一项所述的装置,其特征在于,所述获取单元还用于:
    获取道路路口的雷达数据,所述雷达数据包括多组行驶参数;
    所述处理单元还用于:
    确定每个视频帧中车辆的位置;以及,根据所述每个视频帧中车辆的位置以及所述每个视频帧的采集时刻,从所述多组行驶参数中确定与每个车辆对应的行驶参数。
  15. 一种路况信息的监测装置,其特征在于,包括至少一个处理器,所述至少一个处理器与至少一个存储器耦合;所述至少一个处理器,用于执行所述至少一个存储器中存储的计算机程序或指令,以使得所述装置执行如权利要求1至7中任一项所述的方法。
  16. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序或指令,当计算机读取并执行所述计算机程序或指令时,使得计算机执行如权利要求1至7中任意一项所述的方法。
PCT/CN2020/097888 2019-11-19 2020-06-24 一种路况信息的监测方法及装置 WO2021098211A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911133735.0A CN111127877A (zh) 2019-11-19 2019-11-19 一种路况信息的监测方法及装置
CN201911133735.0 2019-11-19


Also Published As

Publication number Publication date
CN111127877A (zh) 2020-05-08

