CN107665332B - Intersection occupancy and vehicle flow calculation method and device - Google Patents


Info

Publication number: CN107665332B
Application number: CN201710712634.3A
Authority: CN (China)
Prior art keywords: stop line, lane, intersection, frame, detection area
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN107665332A
Inventors: 雷帮军, 徐光柱
Current assignee: Hubei Jiugan Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: China Three Gorges University (CTGU)
Application filed by China Three Gorges University (CTGU); priority to CN201710712634.3A; published as CN107665332A; application granted and published as CN107665332B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G06V 20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Abstract

The invention provides a method and a device for calculating intersection occupancy and vehicle flow. The method comprises the following steps: S1, acquiring video of an intersection stop line, shot from above and behind the lane facing the stop line, over a preset historical time period, and detecting vehicles near the stop line in the video; and S2, calculating the intersection occupancy and/or the vehicle flow from the detection result. The algorithm of the invention is simple and achieves high accuracy.

Description

Intersection occupancy and vehicle flow calculation method and device
Technical Field
The invention relates to the field of intelligent traffic, and in particular to a method and a device for calculating intersection occupancy and vehicle flow.
Background
In intelligent transportation, accurate vehicle detection is the most basic requirement. Current vehicle detection methods mainly include inductive loop coils, video, geomagnetic sensors, ultrasonic waves, radar, and infrared.
Among these, the inductive loop coil has traditionally been the most widely used, but it requires extensive excavation of the road surface, causes considerable environmental damage, and is complicated and costly to maintain, so it is gradually being replaced by other methods. Ultrasonic detection is easily affected by wind speed, infrared detection has low accuracy, and radar imposes strict installation requirements that call for professional installation; moreover, the data indices these three methods can measure are limited. Accordingly, geomagnetic and video methods are gaining increasing recognition.
In the video method, a video vehicle detector is installed at the intersection to acquire, in real time, information such as the flow and occupancy of the intersection in each direction, from which lane saturation is calculated and traffic lights are optimally controlled. The greatest advantage of adaptive traffic-light control is that it can reasonably adjust to the on-site traffic conditions: green-light time is increased in directions with heavy traffic and decreased in directions with relatively light traffic. This avoids green phases on empty approaches, reduces stalling, improves road throughput, and optimizes urban traffic.
For example, one intelligent traffic-light control system divides the day into peak, off-peak, and valley periods according to changes in traffic flow and, on the basis of cooperative control of traffic lights at several intersections, automatically adjusts a light's duration when the weighted traffic flow of a certain lane is repeatedly the maximum; however, it does not describe how the flow of people and vehicles is obtained from the captured images. Another video-based traffic flow detection method detects the side boundary and tail position of each target in a preset virtual detection area by binary segmentation in order to decide whether the target is a vehicle, but boundaries obtained by binary segmentation make this decision inaccurate. A further patent on video-based traffic parameter measurement computes parameters such as traffic flow and vehicle speed on the basis of vehicle detection and continuous tracking; because the tracking algorithm is complex and susceptible to dynamic scene changes, illumination changes, and camouflage colors against complex backgrounds, its measurement accuracy is not high.
Disclosure of Invention
To overcome the problems that existing video-based traffic parameter calculation methods are complex and not highly accurate, or at least to partially solve them, the invention provides a method and a device for calculating intersection occupancy and vehicle flow.
According to a first aspect of the present invention, there is provided a method for calculating intersection occupancy and vehicle flow, comprising:
S1, acquiring video of an intersection stop line, shot from above and behind the lane facing the stop line, over a preset historical time period, and detecting vehicles near the stop line in the video;
and S2, calculating the intersection occupancy and/or the vehicle flow from the detection result.
Specifically, step S1 comprises:
S11, acquiring, in each frame of the video, the detection area containing the intersection stop line and the lanes, and extracting a first edge feature of the stop line from the detection area;
S12, for each lane in each frame, if the first edge feature of the lane's stop line is judged to be incomplete, further judging whether the vehicle in the lane's detection area is moving;
correspondingly, step S2 comprises:
calculating the lane's intersection occupancy and/or vehicle flow from the result of judging whether the vehicle is moving.
Specifically, acquiring the detection area containing the intersection stop line in step S11 comprises:
extracting a second edge feature of the stop line from a reference frame of the video with an edge detection algorithm;
horizontally projecting the second edge feature to obtain the vertical coordinates of the stop line;
obtaining the length of the stop line from the second edge feature;
and obtaining the detection area of the stop line by scanning left and right, based on the stop line's vertical coordinates and length.
Specifically, step S12 comprises:
for each lane in each frame of the video, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than a first preset threshold, further judging whether the vehicle in the lane's detection area is moving.
Specifically, for each lane in each frame of the video, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than the first preset threshold, the step of further judging whether the vehicle in the lane's detection area is moving comprises:
extracting the complete edge feature of the stop line from the detection area of a reference frame of the video with an edge detection algorithm;
and for each lane in each frame, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than a preset multiple of the number of pixels in the lane's complete edge feature, further judging whether the vehicle in the lane's detection area is moving, the preset multiple being smaller than 1.
Specifically, judging in step S12 whether the vehicle in the lane's detection area is moving comprises:
computing a first distance between the gray levels of the lane's detection area in the current frame and in the reference frame, and second distances between the gray levels of the lane's detection area in each neighboring frame and in the reference frame;
if the difference between the first distance and every second distance is judged to be smaller than a second preset threshold, concluding that the vehicle in the lane's detection area is stopped; or,
if the difference between the first distance and any second distance is judged to be greater than or equal to the second preset threshold, concluding that the vehicle in the lane's detection area is moving.
Specifically, step S2 further comprises:
for each lane, counting the first number of frames in the video in which a vehicle is stopped in the lane's detection area;
and dividing this first frame count by the total number of frames in the video to obtain the lane's intersection occupancy.
Specifically, step S2 further comprises:
for each lane, counting the second number of frames in which a vehicle is moving in the lane's detection area;
and dividing this second frame count by the total number of frames in the video to obtain the lane's vehicle flow.
According to a second aspect of the present invention, there is provided a device for calculating intersection occupancy and vehicle flow, comprising:
a detection unit for acquiring video of an intersection stop line, shot from above and behind the lane facing the stop line, over a preset historical time period, and detecting vehicles near the stop line in the video;
and an acquisition unit for calculating the intersection occupancy and/or the vehicle flow from the detection result.
According to a third aspect of the invention, there is provided a non-transitory computer-readable storage medium storing a computer program implementing the method described above.
The invention provides a method and a device for calculating intersection occupancy and vehicle flow: by detecting vehicles near the stop line in video of the intersection stop line, the occupancy and/or flow of each lane is obtained with a simple and accurate algorithm.
Drawings
FIG. 1 is a schematic flow chart of the method for calculating intersection occupancy and vehicle flow according to an embodiment of the present invention;
FIG. 2 is a schematic view of the scene in which the intersection stop line is photographed from above and behind the lane, in the method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the designation of the intersection stop line, its detection area, and the lanes in the method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another lane designation in the method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a reference frame in the method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a device for calculating intersection occupancy and vehicle flow according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another device for calculating intersection occupancy and vehicle flow according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings and examples. The following examples illustrate the invention but do not limit its scope.
An embodiment of the present invention provides a method for calculating intersection occupancy and vehicle flow, and fig. 1 is a schematic flow chart of the method. The method comprises: S1, acquiring video of an intersection stop line, shot from above and behind the lane facing the stop line, over a preset historical time period, and detecting vehicles near the stop line in the video; and S2, calculating the intersection occupancy and/or the vehicle flow from the detection result.
Specifically, fig. 2 shows the scene in which the intersection stop line is photographed from above and behind the lane. In S1, vehicles near the stop line are detected in this video. When a vehicle stops near the stop line or passes through it, the stop line in the video is partially occluded; this is how the presence of a vehicle near the stop line is judged. The preset historical time period is a period before the preset current time, and the video for that period is selected from the captured footage. It may be one continuous period or several discontinuous periods, for example all video from 2017-03-01 to 2017-05-01, or only the video from 7:00 to 9:00 am and 5:00 to 7:00 pm on each day in that range. In S2, the intersection occupancy and/or the vehicle flow is calculated from the vehicles detected near the stop line. The intersection occupancy is the ratio of the time the intersection is occupied within the preset historical time period to the length of that period. The vehicle flow is the number of vehicles passing the stop line per unit time within that period.
In this embodiment, the occupancy and/or flow of each lane is obtained by detecting vehicles near the intersection stop line in video shot from above and behind the stop line; the algorithm is simple and its accuracy is high.
On the basis of the foregoing embodiment, step S1 in this embodiment specifically comprises: S11, acquiring, in each frame of the video, the detection area containing the intersection stop line and the lanes, and extracting a first edge feature of the stop line from the detection area; S12, for each lane in each frame, if the first edge feature of the lane's stop line is judged to be incomplete, further judging whether the vehicle in the lane's detection area is moving. Correspondingly, step S2 specifically comprises: calculating the lane's intersection occupancy and/or vehicle flow from the result of judging whether the vehicle is moving.
Specifically, in S11, the detection area may be specified manually or acquired automatically, while the lanes are specified manually; the positions of the stop line, the detection area, and the lanes are the same in every frame of the video. Fig. 3 illustrates the designation of the stop line, the detection area, and the lanes, and fig. 4 illustrates another way of designating lanes; these are two possible ways of designating lanes, but the designation is not limited to them. The first edge feature of the stop line is then extracted from the detection area of each frame, using an edge detection algorithm combined with the color and shape characteristics of the stop line; the edge detection algorithm may be the Sobel or Prewitt operator, among others. In S12, for each lane in each frame, it is judged whether the edge feature of the stop line in that frame is complete, that is, whether the lane's stop line is occluded. Completeness is judged relative to an unoccluded stop line: if the first edge feature of the lane's stop line in the frame is incomplete, the stop line is occluded, so a vehicle is near it, and it is then further judged whether the vehicle in the lane's detection area in that frame is moving.
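The Sobel/Prewitt extraction mentioned above can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the function name, the kernel application via shifted sums, and the gradient threshold are all assumptions.

```python
import numpy as np

def prewitt_edges(gray, threshold=100.0):
    """Boolean edge map from 3x3 Prewitt gradients.

    The stop line is a horizontal white bar, so its top and bottom
    borders produce strong vertical (y-direction) gradients.
    """
    g = gray.astype(float)
    pad = np.pad(g, 1, mode="edge")
    # Horizontal gradient (responds to vertical edges).
    gx = (pad[:-2, 2:] + pad[1:-1, 2:] + pad[2:, 2:]
          - pad[:-2, :-2] - pad[1:-1, :-2] - pad[2:, :-2])
    # Vertical gradient (responds to horizontal edges such as the stop line).
    gy = (pad[2:, :-2] + pad[2:, 1:-1] + pad[2:, 2:]
          - pad[:-2, :-2] - pad[:-2, 1:-1] - pad[:-2, 2:])
    return np.hypot(gx, gy) > threshold
```

On a frame with a bright horizontal stripe, only the rows along the stripe's top and bottom borders are marked as edges; counting these marked pixels gives the edge-pixel counts used in the occlusion test below.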
Step S2 then counts, from the result of judging whether the vehicle in each lane's detection area is moving in each frame, the number of frames in which the vehicle is stopped and/or the number of frames in which it is moving, for each lane.
In this embodiment, using video shot from above and behind the stop line, whenever a lane's stop-line edge feature is judged incomplete, that is, whenever a vehicle is near the stop line, it is further judged whether the vehicle in the lane's detection area is moving, and the lane's occupancy and/or flow is obtained from that result; the algorithm is simple and its accuracy is high.
On the basis of the foregoing embodiment, acquiring the detection area containing the intersection stop line in step S11 specifically comprises: extracting a second edge feature of the stop line from a reference frame of the video with an edge detection algorithm; horizontally projecting the second edge feature to obtain the vertical coordinates of the stop line; obtaining the length of the stop line from the second edge feature; and obtaining the detection area of the stop line by scanning left and right, based on the stop line's vertical coordinates and length.
Specifically, the edge detection algorithm may be the Sobel or Prewitt operator, among others; combined with the color and shape characteristics of the stop line, it extracts the second edge feature from a reference frame of the video. The reference frame is a frame in which the stop line is not occluded at all, as shown in fig. 5; any frame of the video satisfying this condition may be selected. The second edge feature is projected horizontally, and the two peaks of the projection are the vertical coordinates of the stop line. Dividing the number of pixels in the second edge feature by 2 gives the length of the stop line. Since the stop line is white, pixels whose gray level exceeds a preset threshold are found by scanning left and right, and the region they span is taken as the detection area of the stop line.
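The projection-and-scan procedure above can be sketched as follows. The function names, the brightness threshold, and the return conventions are illustrative assumptions; the patent only fixes the idea (projection peaks give the vertical coordinates, half the edge-pixel count gives the length, a left-right scan for bright pixels bounds the area).

```python
import numpy as np

def locate_stop_line(edge_map):
    """Locate the stop line from a reference-frame edge map: the two
    peaks of the row-wise (horizontal) projection are its top and
    bottom y-coordinates, and half the edge-pixel count is its length
    (each of the two borders contributes roughly one length's worth).
    """
    projection = edge_map.sum(axis=1)              # edge pixels per row
    top, bottom = sorted(int(i) for i in np.argsort(projection)[-2:])
    length = int(edge_map.sum()) // 2
    return top, bottom, length

def detection_area(gray, top, bottom, brightness=200):
    """Scan the stop-line rows left and right for bright (white)
    pixels; the spanned columns delimit the detection area."""
    band = gray[top:bottom + 1]
    cols = np.where((band > brightness).any(axis=0))[0]
    return top, bottom, int(cols[0]), int(cols[-1])
```

Given an edge map whose two border rows are the projection peaks, `locate_stop_line` recovers the stop line's vertical extent, and `detection_area` then bounds it horizontally on the reference frame's gray image.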
In this embodiment, the second edge feature of the stop line is first extracted from the reference frame, and the detection area of the stop line is then obtained automatically by left-right scanning based on the feature's vertical coordinates and length.
On the basis of any of the foregoing embodiments, step S12 specifically comprises: for each lane in each frame of the video, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than a first preset threshold, further judging whether the vehicle in the lane's detection area is moving.
Specifically, for each lane in each frame, it is judged whether the number of pixels in the first edge feature of the lane's stop line is greater than the first preset threshold. If the count is greater than or equal to the threshold, no vehicle is near the stop line. If the count is smaller than the threshold, a vehicle is stopped at or passing the stop line, and in that case it is further judged whether the vehicle in the lane's detection area in the frame is moving.
In this embodiment, when the number of pixels in the first edge feature of the lane's stop line in a frame is judged to be smaller than the first preset threshold, that is, when a vehicle is present, it is further judged whether the vehicle is moving, and occupancy and flow are calculated from that result; the algorithm is simple and its accuracy is high.
On the basis of the foregoing embodiment, in this embodiment, for each lane in each frame, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than the first preset threshold, the step of further judging whether the vehicle in the lane's detection area is moving specifically comprises: extracting the complete edge feature of the stop line from the detection area of a reference frame of the video with an edge detection algorithm; and for each lane in each frame, if the number of pixels in the first edge feature of the lane's stop line is judged to be smaller than a preset multiple of the number of pixels in the lane's complete edge feature, further judging whether the vehicle in the lane's detection area is moving, the preset multiple being smaller than 1.
Specifically, the complete edge feature of the stop line is extracted from the detection area of the reference frame with an edge detection algorithm; because the stop line in the reference frame is not occluded by any vehicle, the edge feature extracted there is complete. The preset multiple is smaller than 1, for example 0.7, and the first preset threshold is set as this multiple of the pixel count of the lane's complete edge feature in the reference frame.
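A minimal sketch of this occlusion test follows; the helper name and the example multiple of 0.7 are illustrative, taken only as one value the description permits.

```python
import numpy as np

def stop_line_occluded(edge_map_roi, full_edge_pixels, multiple=0.7):
    """Occlusion test: the stop line is considered blocked (a vehicle
    is present) when the lane's current edge-pixel count drops below a
    preset multiple (< 1) of the count measured in the unoccluded
    reference frame, i.e. below the first preset threshold."""
    return int(edge_map_roi.sum()) < multiple * full_edge_pixels
```

With a reference count of 40 edge pixels and a multiple of 0.7, the first preset threshold is 28: a frame with 20 surviving edge pixels is judged occluded, while one with 35 is not.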
In this embodiment, when the pixel count of the lane's first edge feature in a frame is judged to be smaller than the preset multiple of the pixel count of the complete edge feature, that is, when a vehicle is present, it is further judged whether the vehicle is moving; judging against the complete edge feature's pixel count gives high accuracy.
On the basis of any of the above embodiments, judging in step S12 whether the vehicle in the lane's detection area is moving specifically comprises: computing a first distance between the gray levels of the lane's detection area in the current frame and in the reference frame, and second distances between the gray levels of the lane's detection area in each neighboring frame and in the reference frame; if the difference between the first distance and every second distance is judged to be smaller than a second preset threshold, concluding that the vehicle in the lane's detection area in the frame is stopped; or, if the difference between the first distance and any second distance is judged to be greater than or equal to the second preset threshold, concluding that the vehicle in the lane's detection area in the frame is moving.
Specifically, the number of neighboring frames may be set according to actual conditions. For each lane in each frame, the gray level of the lane's detection area in the current frame and in its neighboring frames is compared with that of the same area in the reference frame. When the difference between the first distance and every second distance is smaller than the second preset threshold, the vehicle in the lane's detection area is in a stopped state; when the difference between the first distance and any second distance is greater than or equal to the threshold, it is in a moving state.
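The description does not fix a particular gray-scale distance metric, so the sketch below assumes the mean absolute difference; the function name and the threshold value are likewise illustrative assumptions.

```python
import numpy as np

def vehicle_moving(cur_roi, neighbor_rois, ref_roi, threshold=10.0):
    """Motion test: compare the gray-scale distance of the current
    frame's detection area to the reference frame (first distance)
    with the distances of the neighboring frames to the reference
    frame (second distances).  If every difference stays below the
    second preset threshold the vehicle is stopped; otherwise moving.
    """
    def dist(a, b):
        # Mean absolute gray-level difference, used here as the distance.
        return float(np.abs(a.astype(float) - b.astype(float)).mean())

    d1 = dist(cur_roi, ref_roi)
    return any(abs(d1 - dist(n, ref_roi)) >= threshold
               for n in neighbor_rois)
```

Intuitively, a stopped vehicle occludes the stop line the same way in consecutive frames, so all distances to the reference frame agree; a passing vehicle changes the occlusion pattern, so the distances differ by at least the threshold.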
In this embodiment, the gray levels of the lane's detection area in the current frame and its neighboring frames are compared with that of the same area in the reference frame to judge whether the vehicle in the detection area is moving; the algorithm is simple and its accuracy is high.
On the basis of the foregoing embodiments, step S2 in this embodiment further comprises: for each lane, counting the first number of frames in the video in which a vehicle is stopped in the lane's detection area; and dividing this first frame count by the total number of frames in the video to obtain the lane's intersection occupancy.
Specifically, since the interval between any two adjacent frames of the camera's video is the same, the intersection occupancy of each lane equals the number of frames in which a vehicle is stopped in the lane's detection area divided by the total number of frames in the video; the algorithm is simple and its accuracy is high.
On the basis of the foregoing embodiment, step S2 in this embodiment further comprises: for each lane, counting the second number of frames in which a vehicle is moving in the lane's detection area; and dividing this second frame count by the total number of frames in the video to obtain the lane's vehicle flow.
Specifically, since the interval between any two adjacent frames of the camera's video is the same, the vehicle flow of each lane equals the number of frames in which a vehicle is moving in the lane's detection area divided by the total number of frames in the video; the algorithm is simple and its accuracy is high.
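The two final ratios can be sketched together; the function and parameter names are illustrative, and the frame counts are assumed to come from the occlusion and motion tests described above.

```python
def lane_statistics(stopped_frames, moving_frames, total_frames):
    """Final ratios: because video frames are evenly spaced in time,
    occupancy is the fraction of frames with a stopped vehicle in the
    lane's detection area, and the (relative) flow is the fraction of
    frames with a moving vehicle."""
    occupancy = stopped_frames / total_frames
    flow = moving_frames / total_frames
    return occupancy, flow
```

For example, if a lane's detection area shows a stopped vehicle in 450 frames and a moving vehicle in 150 frames out of 600, the occupancy is 0.75 and the flow measure is 0.25.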
Another embodiment of the present invention provides a device for calculating intersection occupancy and vehicle flow. Fig. 6 is a schematic structural diagram of the device, which comprises a detection unit 1 and an acquisition unit 2, wherein:
the detection unit 1 is configured to acquire video of the intersection stop line, shot from above and behind the lane facing the stop line, over a preset historical time period, and to detect vehicles near the stop line in the video; the acquisition unit 2 is configured to calculate the intersection occupancy and/or the vehicle flow from the detection result.
Specifically, the detection unit 1 detects vehicles near the stop line in the video shot from above and behind the lane. When a vehicle stops near the stop line or passes through it, the stop line in the video is partially occluded; this is how the presence of a vehicle near the stop line is judged. The video for the preset historical time period, a period before the preset current time, is selected from the captured footage; it may be one continuous period or several discontinuous periods. The acquisition unit 2 calculates the intersection occupancy and/or the vehicle flow from the vehicles detected near the stop line. The intersection occupancy is the ratio of the time the intersection is occupied within the preset historical time period to the length of that period; the vehicle flow is the number of vehicles passing the stop line per unit time within that period.
In this embodiment, the intersection occupancy and/or the vehicle flow of each lane are obtained by detecting vehicles near the intersection stop line in a video of the stop line captured from above and behind the intersection stop line. The algorithm is simple and highly accurate.
On the basis of the above embodiment, the detection unit in this embodiment includes a determination subunit and a judgment subunit, wherein:
the determination subunit is configured to acquire, in each frame of the video, the detection area containing the intersection stop line and the lanes, and to acquire a first edge feature of the intersection stop line from the detection area; the judgment subunit is configured to, for each lane in each frame, if the first edge feature of the intersection stop line of the lane is judged to be incomplete, further judge whether the vehicle in the detection area containing the intersection stop line in the lane is moving; correspondingly, the acquisition unit is specifically configured to acquire the intersection occupancy and/or the vehicle flow of the lane according to the result of judging whether the vehicle is moving.
On the basis of the foregoing embodiment, in this embodiment, the determination subunit is specifically configured to: acquire a second edge feature of the intersection stop line from a reference frame of the video using an edge detection algorithm; project the second edge feature horizontally to obtain the vertical coordinate of the intersection stop line; acquire the length of the intersection stop line from the second edge feature; and acquire the detection area of the intersection stop line by scanning left and right according to the vertical coordinate and the length of the intersection stop line.
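A minimal sketch of this localization step, under stated assumptions: a simple vertical-gradient threshold stands in for the (unspecified) edge detection algorithm, and the horizontal projection of the edge map yields the stop line's vertical coordinate and extent. All names, thresholds, and the synthetic frame are illustrative:

```python
import numpy as np

def locate_stop_line(reference_frame, edge_thresh=50):
    """Find the row (vertical coordinate) and length of a horizontal stop line.

    A vertical intensity gradient stands in for an edge detector;
    the horizontal projection (per-row edge count) peaks at the
    stop line, since it is the dominant horizontal structure.
    """
    grad = np.abs(np.diff(reference_frame.astype(int), axis=0))
    edges = grad > edge_thresh                 # binary edge map
    projection = edges.sum(axis=1)             # horizontal projection
    row = int(projection.argmax())             # vertical coordinate of the peak
    cols = np.flatnonzero(edges[row])          # scan left/right along that row
    length = int(cols[-1] - cols[0] + 1) if cols.size else 0
    return row, length

# Synthetic reference frame: dark road with a bright stop line on row 10,
# spanning columns 5..24 (length 20).
frame = np.zeros((20, 30), dtype=np.uint8)
frame[10, 5:25] = 255
row, length = locate_stop_line(frame)
print(row, length)  # the projection peak sits at the stop line's edge rows
```

The recovered row and length would then bound the lane-wise detection areas around the stop line.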
On the basis of the foregoing embodiment, in this embodiment, the judgment subunit is specifically configured to: for each lane in each frame, if the number of pixels of the first edge feature of the intersection stop line of the lane in the frame is judged to be smaller than a first preset threshold, further judge whether the vehicle in the detection area containing the intersection stop line in the lane is moving in that frame.
On the basis of the foregoing embodiment, in this embodiment, the judgment subunit is further specifically configured to: acquire the complete edge feature of the intersection stop line from the detection area of a reference frame of the video using an edge detection algorithm; and, for each lane in each frame, if the number of pixels of the first edge feature of the intersection stop line of the lane is judged to be smaller than a preset multiple of the number of pixels of the complete edge feature of the intersection stop line in the lane, further judge whether the vehicle in the detection area containing the intersection stop line in the lane is moving, the preset multiple being smaller than 1.
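The preset-multiple test can be sketched as a comparison of edge-pixel counts; the binary edge maps, the function name, and the default multiple of 0.8 are assumptions for illustration:

```python
import numpy as np

def stop_line_occluded(current_edges, full_edges, preset_multiple=0.8):
    """Judge whether a lane's stop-line segment is partially occluded.

    current_edges / full_edges are binary edge maps of the lane's
    detection area (current frame vs. unoccluded reference frame).
    The stop line counts as incomplete when its current edge-pixel
    count falls below preset_multiple (< 1) times the full count.
    """
    return np.count_nonzero(current_edges) < preset_multiple * np.count_nonzero(full_edges)

full = np.zeros((5, 20), dtype=bool)
full[2, :] = True                   # 20 edge pixels on the complete stop line
occluded = full.copy()
occluded[2, 8:] = False             # a vehicle hides the right part: 8 pixels left
print(stop_line_occluded(occluded, full))  # True: 8 < 0.8 * 20
print(stop_line_occluded(full, full))      # False: 20 is not < 16
```

Using a multiple of the reference count rather than a fixed pixel threshold keeps the test robust to stop lines of different lengths.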
On the basis of the foregoing embodiments, in this embodiment, the judgment subunit is specifically configured to: acquire a first distance between the grayscale of the detection area containing the intersection stop line in the lane in the frame and that in the reference frame, and second distances between the grayscale of the same detection area in the frames adjacent to the frame and that in the reference frame; if the difference between the first distance and each second distance is judged to be smaller than a second preset threshold, determine that the vehicle in the detection area containing the intersection stop line in the lane is stationary in the frame; or, if the difference between the first distance and each second distance is judged to be greater than or equal to the second preset threshold, determine that the vehicle in the detection area containing the intersection stop line in the lane is moving in the frame.
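The stop/motion decision can be sketched as below. The patent does not fix the grayscale distance measure, so mean absolute difference against the reference (empty-road) frame is assumed here, as are the function names, threshold, and synthetic areas:

```python
import numpy as np

def vehicle_moving(frame_area, neighbor_areas, reference_area, second_thresh=5.0):
    """Decide whether the vehicle occluding the stop line is moving.

    first_dist  : grayscale distance of this frame's detection area
                  from the reference (empty-road) frame.
    second_dists: the same distance for the adjacent frames.
    If every |first_dist - second_dist| stays below the threshold,
    the occluding vehicle is judged stationary; otherwise moving.
    """
    def dist(a, b):
        return float(np.mean(np.abs(a.astype(int) - b.astype(int))))

    first_dist = dist(frame_area, reference_area)
    second_dists = [dist(n, reference_area) for n in neighbor_areas]
    return any(abs(first_dist - d) >= second_thresh for d in second_dists)

ref = np.zeros((4, 4), dtype=np.uint8)          # empty-road reference area
parked = np.full((4, 4), 100, dtype=np.uint8)   # same appearance in all frames
passing = np.full((4, 4), 180, dtype=np.uint8)  # appearance changes between frames
print(vehicle_moving(parked, [parked, parked], ref))   # False: stationary
print(vehicle_moving(parked, [passing, parked], ref))  # True: moving
```

Comparing each frame to a fixed reference, rather than directly to its neighbors, makes the distances comparable across the whole clip.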
On the basis of the foregoing embodiments, in this embodiment, the acquisition unit is further specifically configured to: for each lane, acquire a first frame number, which is the number of frames of the video in which a vehicle is present in the detection area containing the intersection stop line in the lane; and divide the first frame number by the total frame number of the video to obtain the intersection occupancy of the lane.
On the basis of the foregoing embodiment, in this embodiment, the acquisition unit is further specifically configured to: for each lane, acquire a second frame number, which is the number of frames in which the vehicle in the detection area in the lane is moving; and divide the second frame number by the total frame number of the video to obtain the vehicle flow of the lane.
This embodiment provides a device for calculating intersection occupancy and vehicle flow. Fig. 7 is a schematic structural diagram of a device for calculating intersection occupancy and vehicle flow according to an embodiment of the present invention. The device includes: at least one processor 71, at least one memory 72, and a bus 73, wherein
the processor 71 and the memory 72 communicate with each other through the bus 73;
the memory 72 stores program instructions executable by the processor 71, and the processor calls the program instructions to execute the method provided by the above method embodiments, the method including, for example: S1, acquiring, for a preset historical time period, a video of the intersection stop line captured from above and behind the lane side of the intersection stop line, and detecting vehicles near the intersection stop line in the video; and S2, acquiring the intersection occupancy and/or the vehicle flow according to the detection result.
This embodiment further provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method provided by the above method embodiments, the method including, for example: S1, acquiring, for a preset historical time period, a video of the intersection stop line captured from above and behind the lane side of the intersection stop line, and detecting vehicles near the intersection stop line in the video; and S2, acquiring the intersection occupancy and/or the vehicle flow according to the detection result.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be completed by program instructions directing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
The above-described embodiments of the device for calculating intersection occupancy and vehicle flow are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or parts thereof.
Finally, the above is only a preferred embodiment of the present application and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A method for calculating intersection occupancy and vehicle flow, comprising:
S1, acquiring, for a preset historical time period, a video of the intersection stop line captured from above and behind the lane side of the intersection stop line, and detecting vehicles near the intersection stop line in the video;
S2, acquiring the intersection occupancy and/or the vehicle flow according to the detection result;
wherein the step S1 specifically comprises:
S11, acquiring, in each frame of the video, the detection area containing the intersection stop line and the lanes, and acquiring a first edge feature of the intersection stop line from the detection area;
S12, for each lane in each frame of the video, if the first edge feature of the intersection stop line of the lane is judged to be incomplete, further judging whether the vehicle in the detection area containing the intersection stop line in the lane is moving;
correspondingly, the step S2 specifically comprises:
acquiring the intersection occupancy and/or the vehicle flow of the lane according to the result of judging whether the vehicle is moving;
wherein the step of judging whether the vehicle in the detection area containing the intersection stop line in the lane is moving in step S12 specifically comprises:
acquiring a first distance between the grayscale of the detection area containing the intersection stop line in the lane in the frame and that in the reference frame, and second distances between the grayscale of the same detection area in the frames adjacent to the frame and that in the reference frame;
if the difference between the first distance and each second distance is judged to be smaller than a second preset threshold, determining that the vehicle in the detection area containing the intersection stop line in the lane is stationary; or,
if the difference between the first distance and each second distance is judged to be greater than or equal to the second preset threshold, determining that the vehicle in the detection area containing the intersection stop line in the lane is moving.
2. The method according to claim 1, wherein the step of acquiring the detection area containing the intersection stop line in step S11 specifically comprises:
acquiring a second edge feature of the intersection stop line from a reference frame of the video using an edge detection algorithm;
projecting the second edge feature of the intersection stop line horizontally to obtain the vertical coordinate of the intersection stop line;
acquiring the length of the intersection stop line from the second edge feature;
and acquiring the detection area of the intersection stop line by scanning left and right according to the vertical coordinate and the length of the intersection stop line.
3. The method according to claim 1 or 2, wherein the step S12 specifically includes:
and for each lane in each frame in the video, if the number of pixels of the first edge feature of the intersection stop line of the lane is smaller than a first preset threshold value, continuously judging whether the vehicle in the detection area containing the intersection stop line in the lane moves.
4. The method according to claim 3, wherein the step of, for each lane in each frame of the video, if the number of pixels of the first edge feature of the intersection stop line of the lane is judged to be smaller than the first preset threshold, further judging whether the vehicle in the detection area containing the intersection stop line in the lane is moving specifically comprises:
acquiring the complete edge feature of the intersection stop line from the detection area of a reference frame of the video using an edge detection algorithm;
for each lane in each frame of the video, if the number of pixels of the first edge feature of the intersection stop line of the lane is judged to be smaller than a preset multiple of the number of pixels of the complete edge feature of the intersection stop line in the lane, further judging whether the vehicle in the detection area containing the intersection stop line in the lane is moving, the preset multiple being smaller than 1.
5. The method according to claim 1 or 2, wherein the step S2 further specifically includes:
for each lane, acquiring a first frame number, which is the number of frames of the video in which a vehicle is present in the detection area containing the intersection stop line in the lane;
and dividing the first frame number by the total frame number of the video to obtain the intersection occupancy of the lane.
6. The method according to claim 1 or 2, wherein the step S2 further specifically includes:
for each lane, acquiring a second frame number, which is the number of frames in which the vehicle in the detection area in the lane is moving;
and dividing the second frame number by the total frame number of the video to obtain the vehicle flow of the lane.
7. A device for calculating intersection occupancy and vehicle flow, comprising:
a detection unit, configured to acquire, for a preset historical time period, a video of the intersection stop line captured from above and behind the lane side of the intersection stop line, and to detect vehicles near the intersection stop line in the video;
an acquisition unit, configured to acquire the intersection occupancy and/or the vehicle flow according to the detection result;
wherein the detection unit comprises a determination subunit and a judgment subunit, wherein:
the determination subunit is configured to acquire, in each frame of the video, the detection area containing the intersection stop line and the lanes, and to acquire a first edge feature of the intersection stop line from the detection area;
the judgment subunit is configured to, for each lane in each frame, if the first edge feature of the intersection stop line of the lane is judged to be incomplete, further judge whether the vehicle in the detection area containing the intersection stop line in the lane is moving;
correspondingly, the acquisition unit is specifically configured to acquire the intersection occupancy and/or the vehicle flow of the lane according to the result of judging whether the vehicle is moving;
wherein the judgment subunit is specifically configured to: acquire a first distance between the grayscale of the detection area containing the intersection stop line in the lane in the frame and that in the reference frame, and second distances between the grayscale of the same detection area in the frames adjacent to the frame and that in the reference frame;
if the difference between the first distance and each second distance is judged to be smaller than a second preset threshold, determine that the vehicle in the detection area containing the intersection stop line in the lane is stationary in the frame; or,
if the difference between the first distance and each second distance is judged to be greater than or equal to the second preset threshold, determine that the vehicle in the detection area containing the intersection stop line in the lane is moving in the frame.
8. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 6.
CN201710712634.3A 2017-08-18 2017-08-18 Intersection occupancy and vehicle flow calculation method and device Active CN107665332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710712634.3A CN107665332B (en) 2017-08-18 2017-08-18 Intersection occupancy and vehicle flow calculation method and device


Publications (2)

Publication Number Publication Date
CN107665332A CN107665332A (en) 2018-02-06
CN107665332B true CN107665332B (en) 2020-08-04

Family

ID=61096984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710712634.3A Active CN107665332B (en) 2017-08-18 2017-08-18 Intersection occupancy and vehicle flow calculation method and device

Country Status (1)

Country Link
CN (1) CN107665332B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110149500A (en) * 2019-05-24 2019-08-20 深圳市珍爱云信息技术有限公司 Processing method, device, equipment and the storage medium of monitor video

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101777263B (en) * 2010-02-08 2012-05-30 长安大学 Traffic vehicle flow detection method based on video
US8933818B2 (en) * 2013-03-13 2015-01-13 Iteris, Inc. Bicycle presence detection in a roadway using video data analytics
CN103714703A (en) * 2013-12-17 2014-04-09 重庆凯泽科技有限公司 Vehicle flow detection algorithm based on video image processing
CN106355903B (en) * 2016-09-13 2019-03-15 枣庄学院 Multilane traffic volume detection method based on video analysis

Also Published As

Publication number Publication date
CN107665332A (en) 2018-02-06

Similar Documents

Publication Publication Date Title
JP7106664B2 (en) Intelligent driving control method and device, electronic device, program and medium
JP6833630B2 (en) Object detector, object detection method and program
CN105260713B (en) A kind of method for detecting lane lines and device
US9721460B2 (en) In-vehicle surrounding environment recognition device
US9679196B2 (en) Object sensing device
JP5997276B2 (en) Three-dimensional object detection device and foreign object detection device
US9704060B2 (en) Method for detecting traffic violation
JP2021185548A (en) Object detection device, object detection method and program
CN106647776B (en) Method and device for judging lane changing trend of vehicle and computer storage medium
JP5977827B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
KR20210078530A (en) Lane property detection method, device, electronic device and readable storage medium
CN110718061B (en) Traffic intersection vehicle flow statistical method and device, storage medium and electronic equipment
CN106203272B (en) The method and apparatus for determining the movement of movable objects
US20130202155A1 (en) Low-cost lane marker detection
US7646311B2 (en) Image processing for a traffic control system
CN111652060A (en) Laser radar-based height-limiting early warning method and device, electronic equipment and storage medium
KR101667835B1 (en) Object localization using vertical symmetry
JP7305965B2 (en) Video surveillance system parameter setting method, device and video surveillance system
CN109840463A (en) A kind of Lane detection method and apparatus
CN117392423A (en) Laser radar-based true value data prediction method, device and equipment for target object
CN107665332B (en) Intersection occupancy and vehicle flow calculation method and device
Taubel et al. A lane departure warning system based on the integration of the optical flow and Hough transform methods
JP5783319B2 (en) Three-dimensional object detection apparatus and three-dimensional object detection method
KR101934345B1 (en) Field analysis system for improving recognition rate of car number reading at night living crime prevention
Sharma et al. An optical flow and hough transform based approach to a lane departure warning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231109

Address after: No. 57-5 Development Avenue, No. 6015, Yichang Area, China (Hubei) Free Trade Zone, Yichang City, Hubei Province, 443005

Patentee after: Hubei Jiugan Technology Co.,Ltd.

Address before: 443002, China Three Gorges University, 8, University Road, Hubei, Yichang

Patentee before: CHINA THREE GORGES University
