CN114333356A - Road plane intersection traffic volume statistical method based on video multi-region marks - Google Patents


Info

Publication number
CN114333356A
CN114333356A
Authority
CN
China
Prior art keywords
vehicle
area
road plane
plane intersection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111446985.7A
Other languages
Chinese (zh)
Other versions
CN114333356B (en)
Inventor
熊文磊
王丽园
马天奕
李正军
罗丰
杨晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CCCC Second Highway Consultants Co Ltd
Original Assignee
CCCC Second Highway Consultants Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CCCC Second Highway Consultants Co Ltd filed Critical CCCC Second Highway Consultants Co Ltd
Priority to CN202111446985.7A priority Critical patent/CN114333356B/en
Publication of CN114333356A publication Critical patent/CN114333356A/en
Application granted granted Critical
Publication of CN114333356B publication Critical patent/CN114333356B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to a road plane intersection traffic volume statistical method based on video multi-region marks, which comprises the following steps: collecting a road plane intersection video; calibrating marking area numbers to obtain the marking areas of the road plane intersection; setting vehicle detection areas; creating a road plane intersection traffic volume statistical summary table; creating a marked area vehicle information table set; preprocessing; tracking each vehicle that has undergone the vehicle calibration operation and obtained a vehicle number; performing the in-and-out inspection operation; performing the screening operation; filling the vehicle information and marking area numbers into the road plane intersection traffic volume statistical summary table; and outputting the road plane intersection traffic volume statistical summary table. The method improves the efficiency of traffic volume investigation at road plane intersections and removes the influence of subjective factors on investigation results; it also avoids the excessive computing-resource consumption caused by extracting whole-course vehicle trajectory data, which greatly reduces the difficulty of software implementation.

Description

Road plane intersection traffic volume statistical method based on video multi-region marks
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a method for counting traffic volume of a road plane intersection based on video multi-region marks.
Background
An intersection is an important node of urban traffic and a bottleneck that affects the smooth flow of urban traffic. Road traffic frequently diverges, converges, and crosses at plane intersections, so traffic conditions there are particularly complex; urban traffic congestion is often most pronounced at intersections, and solving the intersection problem is key to solving the urban traffic problem. Scientific and reasonable intersection traffic volume surveys provide basic data for intersection optimization design and allow traffic conditions and problem symptoms to be comprehensively grasped.
In the prior art, most intersection traffic surveys are performed by traditional manual field investigation to obtain traffic data, that is, the most rudimentary manual data acquisition and manual statistics; manual operation has the advantages of being simple and feasible, without a high technical threshold.
In the prior art, there are also methods for counting road plane intersection traffic volume based on different types of video, which extract whole-course vehicle trajectory data from intersection video and then perform graphical statistics; these methods have the advantage that automatic statistics by computer can be realized without depending on manual work.
The defects of the prior art are as follows:
1. The manual investigation method in the prior art is time-consuming and labor-intensive, requires workers to be trained in advance, and is strongly influenced by subjective factors, so its results are not completely objective and accurate;
2. The manual investigation method in the prior art requires a large amount of manpower over a long period, and long-term continuous traffic surveys in particular incur a large economic cost;
3. The prior-art intersection traffic volume investigation methods based on whole-course vehicle trajectory data extracted from video demand large computing power and are complex to implement, so development cost is high, systems are bug-prone, and application and popularization are consequently limited.
Disclosure of Invention
In view of these problems, the invention provides a road plane intersection traffic volume statistical method based on video multi-region marks, aiming to improve the efficiency of traffic volume investigation at road plane intersections and remove the influence of subjective factors on investigation results, while avoiding the excessive computing-resource consumption caused by extracting whole-course vehicle trajectory data and greatly reducing the difficulty of software implementation.
In order to solve the problems, the technical scheme provided by the invention is as follows:
the method for counting the traffic volume of the road plane intersection based on the video multi-region mark comprises the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the road plane intersection video comprises an unmanned aerial vehicle aerial video and a road plane intersection monitoring bayonet video;
s200, respectively and independently calibrating each inlet and outlet of each traffic direction of the road plane intersection in the road plane intersection video to obtain a unique marking area number, so as to obtain a marking area of the road plane intersection; the road intersection marking area comprises an entrance marking area and an exit marking area; the marking region numbers comprise inlet marking region numbers for characterizing the position of the inlet marking regions and outlet marking region numbers for characterizing the position of the outlet marking regions;
then, vehicle detection areas for vehicle detection are arranged at each entrance and each exit in each traveling direction; two adjacent vehicle detection areas are mutually independent and isolated; each vehicle detection area is a rectangular frame arranged at the intersection in one vehicle running direction of the road plane intersection; one side of the vehicle detection area coincides with the stop line, and the opposite side lies on the side away from the road plane intersection and the stop line; the lengths of the two sides adjacent to the stop line are preset manually, and these two sides coincide respectively with the leftmost edge of the leftmost lane and the rightmost edge of the rightmost lane in the same direction; each vehicle detection area covers all traffic lanes in the same direction in the area;
s300, creating a road plane intersection traffic volume statistical summary table for recording traffic volumes of different vehicle types in each traffic direction of the road plane intersection;
s400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables used for storing vehicle information of vehicles passing through the marked area of the road intersection; the marked area vehicle information table and the marked area of the road plane intersection are in one-to-one correspondence; the vehicle information comprises a vehicle type and a vehicle number; the vehicle type includes character strings "Car", "Bus", and "Truck"; the vehicle numbers and the vehicles are in one-to-one correspondence;
s500, identifying and calibrating newly-entering vehicles for the road plane intersection video, and specifically comprising the following steps:
s510, dividing the road plane intersection video into road plane intersection video frame streams at a manually preset acquisition frequency; the road plane intersection video frame stream comprises video frames which are arranged in a time-increasing manner;
s520, establishing a current check frame pointer; the current check frame pointer points to the storage address of the video frame, and the initial value of the offset of the address value of the current check frame pointer is 0;
s530, the video frame pointed by the current check frame pointer is checked, and the video frame is calibrated into an initial frame, a new target appearing frame and a common frame one by one according to the following standards:
when the initial value of the offset of the address value of the current check frame pointer is 0, the pointed video frame is calibrated as the initial frame;
a video frame in which a newly entering vehicle appears is calibrated as a new target appearing frame; a newly entering vehicle is a vehicle that does not exist in the immediately preceding video frame relative to the video frame pointed to by the current check frame pointer;
a video frame that is calibrated neither as the initial frame nor as a new target appearing frame is calibrated as a common frame;
s540, according to the calibration result, the following operations are carried out:
if the video frame pointed by the current check frame pointer is the normal frame, directly executing S600;
if the video frame pointed by the current check frame pointer is the initial frame or the new target appearing frame, respectively defining a vehicle boundary box for each new entering vehicle in the video frame pointed by the current check frame pointer; the vehicle boundary frame is a rectangle framed along the outer edge of the vehicle and moves synchronously along with the vehicle; then, a vehicle type identifying operation for designating one vehicle type for each vehicle and a vehicle designating operation for assigning one vehicle number for each vehicle are performed once for each of the newly entered vehicles;
s600, carrying out in-out inspection operation on the road plane intersection video, and specifically comprising the following steps:
s610, tracking each vehicle which has undergone the vehicle calibration operation and successfully obtains the vehicle number;
then checking whether the vehicle boundary box of each vehicle which has been subjected to the vehicle calibration operation and successfully obtains the vehicle number in the video frame pointed by the current check frame pointer overlaps with the road intersection marking area, and performing the following operations according to the checking result:
if no vehicle bounding box of any vehicle overlaps a road plane intersection marking area, adding 1 to the address value of the current check frame pointer; then returning to and executing S530 again;
if the vehicle bounding box of a vehicle overlaps a road plane intersection marking area, recording the vehicle number of the vehicle corresponding to that vehicle bounding box into the marked area vehicle information table corresponding to the overlapped road plane intersection marking area; then executing S700;
S700, performing a screening operation on the marked area vehicle information tables to pick out the vehicle information of each vehicle that is recorded both in a marked area vehicle information table corresponding to an entrance marking area and in a marked area vehicle information table corresponding to an exit marking area;
S800, filling the vehicle information of each vehicle obtained by the screening in S700, the entrance marking area number of the recorded entrance marking area corresponding to each vehicle, and the exit marking area number of the recorded exit marking area corresponding to each vehicle into the road plane intersection traffic volume statistical summary table;
s900, outputting the traffic volume statistical summary table of the road plane intersection processed in the S800 in real time, and then adding 1 to the address value of the pointer of the current inspection frame; then go back to and execute S530 again;
and the traffic volume statistical summary table of the road plane intersection output in real time is the final result obtained by the method.
Preferably, the step in S200 of setting a vehicle detection area for vehicle detection at each entrance and exit in each traveling direction specifically includes the following steps:
s210, counting the number of entrances in the road plane intersection, and recording the number as the total number of the entrance mark areas; counting the number of exits in the road plane intersection and recording the number as the total number of exit marking areas;
s220, marking each inlet mark area and each outlet mark area, and setting the vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding areas, and two adjacent vehicle detection areas are mutually independent and isolated;
and S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals of manually preset widths.
Preferably, the traffic volume statistical summary table of the road plane intersection comprises the vehicle type, the marked area number and traffic volume data of corresponding vehicle types used for representing corresponding directions in the road plane intersection; the initial value of the traffic data is 0.
Preferably, the marked area vehicle information table set further includes an entrance attribute characterizing the entrance marking area through which a vehicle passes and an exit attribute characterizing the exit marking area through which the vehicle passes.
Preferably, in S610, checking whether the vehicle bounding box of each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number overlaps with a road plane intersection marking area in the video frame pointed to by the current check frame pointer specifically includes the following steps:
S611, calculating the frame overlap degree between the vehicle bounding box and the border of the road plane intersection marking area; the frame overlap degree is defined by a formula that appears only as an image in the original publication, with the following notation:
the frame overlap degree is written as a function of vehicle_id and x; vehicle_id is the vehicle number; x is the sequential index of the video frame in the road plane intersection video frame stream, with an initial value of 1; [r1, c1], [r2, c2], [r3, c3], [r4, c4] respectively represent the coordinates of the 4 vertices of the vehicle bounding box in a rectangular coordinate system, where r denotes the row coordinate and c denotes the column coordinate; [rw1, cl1], [rw2, cl2] respectively represent the coordinates of the vertices of the road plane intersection marking area in the rectangular coordinate system, where rw denotes the row coordinate and cl denotes the column coordinate;
S612, judging one by one whether the relation between the frame overlap degrees of 2 adjacent video frames simultaneously satisfies the first condition formula (shown only as an image in the original publication);
then, performing the following operation according to the result of the judgment:
if so, writing the vehicle information into the marked area vehicle information table corresponding to the road plane intersection marking area;
if not, judging whether the relation between the frame overlap degrees of the two adjacent video frames satisfies the second condition formula (shown only as an image in the original publication);
then, performing the following operation according to the result of the judgment:
if so, writing the vehicle information into the marked area vehicle information table corresponding to the road plane intersection marking area;
if not, not writing the vehicle information into the marked area vehicle information table corresponding to the road plane intersection marking area; the value of x is then incremented by 1.
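Because the overlap formula of S611 and the two condition formulas of S612 survive only as images, the following is a minimal sketch of one plausible reading, not the patent's actual definitions: the frame overlap degree taken as the intersected area of the vehicle bounding box and the marking-area rectangle normalized by the bounding-box area, and a vehicle recorded when that degree turns positive between two adjacent frames. All function names are hypothetical.

```python
# Hedged sketch of S611/S612. The patent's formulas are only available as
# images, so the overlap degree below (intersection area / bounding-box area)
# and the enter-condition are plausible reconstructions, not the original.

def frame_overlap(box, mark):
    """box  = ((r1, c1), (r2, c2)): opposite corners of the vehicle bounding box;
    mark = ((rw1, cl1), (rw2, cl2)): opposite corners of the marking area."""
    (br1, bc1), (br2, bc2) = box
    (mr1, mc1), (mr2, mc2) = mark
    dr = min(br2, mr2) - max(br1, mr1)   # overlap extent along rows
    dc = min(bc2, mc2) - max(bc1, mc1)   # overlap extent along columns
    if dr <= 0 or dc <= 0:
        return 0.0                       # rectangles are disjoint
    return (dr * dc) / ((br2 - br1) * (bc2 - bc1))

def entered_mark(prev_overlap, cur_overlap):
    # Record the vehicle when it goes from not touching to touching the
    # marking area between 2 adjacent video frames (one reading of S612).
    return prev_overlap == 0.0 and cur_overlap > 0.0
```

Normalizing by the bounding-box area (rather than the union, as in IoU) means the degree reaches 1.0 when the vehicle is fully inside the marking area, which matches the per-vehicle nature of the check.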
Preferably, the step S700 of performing a screening operation on the marked area vehicle information tables specifically includes the following steps:
S710, establishing an entrance vehicle information traversal pointer; the entrance vehicle information traversal pointer points to the storage address of a marked area vehicle information table corresponding to an entrance marking area, and the initial value of the offset of its address value is 0;
S720, taking out the vehicle number from the vehicle information in the storage space of the marked area vehicle information table corresponding to the exit marking area in which the most recent record update occurred;
S730, with that vehicle number as the retrieval keyword, searching the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, and then performing the following operations according to the retrieval result:
if the vehicle number can be retrieved in the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, taking out the exit marking area number corresponding to the exit marking area with the most recent record update and the entrance marking area number corresponding to the entrance marking area of the table pointed to by the entrance vehicle information traversal pointer; then adding 1 to the address value of the entrance vehicle information traversal pointer; then executing S740;
if the vehicle number cannot be retrieved in the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, adding 1 to the address value of the entrance vehicle information traversal pointer; then executing S740;
S740, checking whether the address value of the entrance vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, and performing the following operations according to the checking result:
if the address value of the entrance vehicle information traversal pointer is not greater than the address value of the last marked area vehicle information table, returning to and executing S730 again;
if the address value of the entrance vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, executing S800.
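The traversal of S710–S740 amounts to matching each exit-table vehicle number against every entrance table. A minimal sketch of that matching follows; the table layout (dicts mapping marking-area numbers to sets of vehicle numbers) is an illustrative assumption, not the patent's storage format.

```python
# Sketch of the screening operation S700: for each vehicle number recorded in
# an exit table, traverse every entrance table looking for the same number
# (S730) and keep the matched (vehicle, entrance, exit) triples.

def screen(entrance_tables, exit_tables):
    matches = []
    for exit_id, exit_vehicles in exit_tables.items():
        for vehicle in sorted(exit_vehicles):        # deterministic order
            for entrance_id, entrance_vehicles in entrance_tables.items():
                if vehicle in entrance_vehicles:     # retrieval keyword hit
                    matches.append((vehicle, entrance_id, exit_id))
    return matches
```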
Preferably, in S800, the step of filling the vehicle information of each vehicle obtained in S700 and the marking area numbers of all the road plane intersection marking areas it passed through into the road plane intersection traffic volume statistical summary table further includes the following step:
S910, after the vehicle information of one vehicle and the marking area numbers of all the road plane intersection marking areas it passed through have been filled into the road plane intersection traffic volume statistical summary table, adding 1 to the value of the corresponding traffic volume data.
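As a small illustration of S800/S910, each screened vehicle can bump a counter keyed by its entrance number, exit number, and vehicle type; the flat-dict layout below is an assumption for illustration only.

```python
# Sketch of S800/S910: fill one screened vehicle into the summary table and
# add 1 to the matching traffic volume datum (which starts at 0).

def fill_summary(summary, entrance_id, exit_id, vehicle_type):
    key = (entrance_id, exit_id, vehicle_type)
    summary[key] = summary.get(key, 0) + 1
    return summary
```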
Preferably, in S510, the road intersection video is segmented into the road intersection video frame stream by using a pre-constructed convolutional neural network at an artificially preset acquisition frequency.
Preferably, in S540, a vehicle type identification operation for designating one vehicle type for each vehicle and a vehicle designation operation for assigning one vehicle number to each vehicle are performed once for each new entering vehicle using a pre-constructed convolutional neural network.
Preferably, in S610, a Deep Sort algorithm is adopted to track each vehicle that has undergone the vehicle calibration operation and successfully obtained the vehicle number in each video frame.
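The patent names the Deep Sort algorithm for this tracking step. As a hedged, self-contained illustration of only the association half of such a tracker, the greedy IoU matcher below links each tracked vehicle to the closest new detection; real Deep SORT additionally uses a Kalman motion model and appearance embeddings, and would normally come from an existing implementation rather than be rewritten.

```python
# Simplified stand-in for the detection-to-track association inside a
# Deep SORT-style tracker. Boxes are (r1, c1, r2, c2) tuples; this is an
# illustration, not the Deep SORT algorithm itself.

def iou(a, b):
    dr = min(a[2], b[2]) - max(a[0], b[0])
    dc = min(a[3], b[3]) - max(a[1], b[1])
    if dr <= 0 or dc <= 0:
        return 0.0
    inter = dr * dc
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """tracks: {vehicle_number: last box}. Greedily match each track to the
    unclaimed detection with the highest IoU at or above the threshold."""
    assigned, free = {}, list(detections)
    for number, box in tracks.items():
        best = max(free, key=lambda d: iou(box, d), default=None)
        if best is not None and iou(box, best) >= threshold:
            assigned[number] = best
            free.remove(best)
    return assigned
```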
Compared with the prior art, the invention has the following advantages:
1. Because the method does not require a large number of personnel to carry out long, high-intensity field work, road plane intersection traffic data can be obtained simply by processing panoramic intersection video captured by an unmanned aerial vehicle or acquired directly from a traffic police department, which greatly improves the efficiency of road plane intersection traffic investigation, removes the influence of subjective factors on investigation results, and yields objective and accurate results;
2. The invention judges the driving direction from the entrance and exit marking areas a vehicle passes through at the road plane intersection, without the whole-course vehicle trajectory data of the prior art, thereby avoiding the excessive computing-resource consumption caused by extracting whole-course driving trajectory data; the algorithm flow is simple and clear, greatly reducing the difficulty of software implementation.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
fig. 2 is a schematic view of a road intersection according to an embodiment of the present invention.
Detailed Description
The present invention is further illustrated by the following examples, which are intended to be purely exemplary and are not intended to limit the scope of the invention, as various equivalent modifications of the invention will occur to those skilled in the art upon reading the present disclosure and fall within the scope of the appended claims.
As shown in fig. 1, the method for counting traffic at a road intersection based on video multi-zone markers comprises the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the video of the road plane intersection comprises an unmanned aerial vehicle aerial video and a road plane intersection monitoring bayonet video.
It should be noted that the pictures of the road plane intersection video must simultaneously satisfy the following requirements:
a high shooting angle with no occlusion; a fixed shooting angle; complete coverage of the road plane intersection; and vehicle outlines in the video that are clear enough to distinguish.
S200, respectively and independently calibrating each inlet and outlet of each traffic direction of the road plane intersection in the road plane intersection video to obtain a unique marking area number, so as to obtain a marking area of the road plane intersection; the road plane intersection marking area comprises an entrance marking area and an exit marking area; the mark region numbers include an inlet mark region number for characterizing a position of the inlet mark region and an outlet mark region number for characterizing a position of the outlet mark region.
Then, vehicle detection areas for vehicle detection are arranged at each inlet and each outlet in each traveling direction; two adjacent vehicle detection areas are mutually independent and isolated; the vehicle detection area is a rectangular frame arranged at an intersection in each vehicle running direction of the road plane intersection; one side of the vehicle detection area is superposed with the stop line, and the other side opposite to the stop line is positioned at one side far away from the road plane intersection and the stop line; the side lengths of two adjacent sides of the vehicle detection area and the stop line are preset manually and are respectively superposed with the leftmost side of the left first lane and the rightmost side of the right first lane in the same direction; each vehicle detection zone covers all traffic lanes in the same direction in the area.
As shown in FIG. 2, in this embodiment the entrance marking areas are numbered R_i (1 ≤ i ≤ m) and the exit marking areas are numbered S_j (1 ≤ j ≤ n); wherein: m represents the number of entrance marking areas of the intersection; n represents the number of exit marking areas of the intersection; R denotes an entrance marking area; S denotes an exit marking area; i and j are the respective counters identifying the area numbers.
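The numbering scheme above can be sketched directly; the string form "R1"/"S1" is just one convenient encoding of the R_i/S_j labels.

```python
# Sketch of the Fig. 2 numbering: entrances get R_i (1 <= i <= m), exits get
# S_j (1 <= j <= n). Plain strings stand in for the marking-area numbers.

def number_mark_areas(m, n):
    entrances = ["R%d" % i for i in range(1, m + 1)]
    exits = ["S%d" % j for j in range(1, n + 1)]
    return entrances, exits
```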
It should be noted that the inlet marking area number and the outlet marking area number are manually marked by a user using the method.
It should be further explained that, to facilitate use by subsequent users, each road plane intersection marking area is given a unique marking area number and, in this specific embodiment, also a unique marking area name; the marking area name is user-defined and can be edited, to facilitate memory and information transfer.
In this specific embodiment, each of the inlets and outlets in each of the traveling directions is provided with a vehicle detection area for vehicle detection, which specifically includes the following steps:
s210, counting the number of entrances in the road plane intersection, and recording the number as the total number of the entrance mark areas; and counting the number of exits in the road plane intersection, and recording the number as the total number of exit marking areas.
S220, marking each inlet mark area and each outlet mark area, and setting a vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding areas, and the two adjacent vehicle detection areas are mutually independent and isolated.
And S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals of manually preset widths.
The function of S230 is to fine-tune the borders [[rw1, cl1], [rw2, cl2], ...] of adjacent vehicle detection areas in consideration of the vehicle bounding box [[r1, c1], [r2, c2], [r3, c3], [r4, c4]], so that a certain interval exists between the borders of adjacent vehicle detection areas and misjudgment of the vehicle driving area is avoided.
It should be further explained that the vehicle detection areas are set perpendicular to the road direction; the entrance marking areas are placed near the corresponding stop lines and the exit marking areas near the corresponding zebra crossings, and no marking area should lie within the vehicle conflict area inside the intersection, so as to avoid misjudgment of the vehicle driving area.
It should be further noted that, because the vehicle boundary frame is wider and longer than the actual vehicle size, in order to avoid the misjudgment of the vehicle driving area, the frame of the adjacent vehicle detection area needs to be adjusted, and the interval between the vehicle detection areas is enlarged on the basis of ensuring that the marking area can cover each driving lane.
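One way to realize the border adjustment of S230 is to inset each detection-area rectangle inward by half the manually preset gap, so that two areas that originally shared an edge end up separated by the full gap; the symmetric inward shrink is an assumption for illustration.

```python
# Sketch of S230: shrink a detection-area rectangle inward by half of the
# manually preset gap width, so adjacent areas become separated rather than
# touching. Rectangles are ((r_min, c_min), (r_max, c_max)) in (row, col).

def inset_rect(rect, gap):
    (r1, c1), (r2, c2) = rect
    h = gap / 2.0
    return ((r1 + h, c1 + h), (r2 - h, c2 - h))
```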
S300, a road plane intersection traffic volume statistical summary table for recording traffic volumes of different vehicle types in each traffic direction of the road plane intersection is created.
The road plane intersection traffic volume statistical summary table comprises vehicle types, mark area numbers and traffic volume data of corresponding vehicle types used for representing corresponding directions in the road plane intersection; the initial value of the traffic data is 0.
In this embodiment, the road plane intersection traffic volume statistical summary table is a four-dimensional data table, expressed as RESULT(R_i, S_j, T_y, N_ijy), wherein: the meanings of R_i and S_j are described in S200 and not repeated here; note that R_i and S_j together form the total set of all marking area numbers; T_y is the vehicle type, a subset of the vehicle information, detailed in S400 below; N_ijy is the traffic volume data;
It is further noted that in N_ijy, i and j align with the numbers of the road plane intersection marking areas through which the vehicle enters and exits, and y aligns with the vehicle type; for example, N_121 denotes the traffic volume of vehicles that enter the intersection from R_1, exit the intersection from S_2, and are of type "Car".
S400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables used for storing vehicle information of vehicles passing through the marked area of the road intersection; the marked area vehicle information table and the marked area of the road plane intersection are in one-to-one correspondence; the vehicle information includes a vehicle type and a vehicle number; the vehicle type includes character strings "Car", "Bus", and "Truck"; the vehicle numbers and the vehicles are in one-to-one correspondence.
It should be noted that the value of y corresponds to a vehicle type, where: the y value is 1 to represent that the vehicle type is Car, the y value is 2 to represent that the vehicle type is Bus, and the y value is 3 to represent that the vehicle type is Truck; the value of y is automatically marked by the program.
As shown in fig. 2, for a road intersection having m entrances and n exits, the number of the marked area vehicle information tables in the marked area vehicle information table set is m + n, and the marked area vehicle information tables correspond to m + n road intersection marked areas.
It should be noted that the vehicle number is a unique serial number in the video sequence, and is automatically marked by the program.
It should be noted that, to facilitate use by subsequent users, and because the marked area vehicle information tables correspond one-to-one with the road plane intersection marking areas, in this specific embodiment the naming rule for a marked area vehicle information table is the marking area name plus the string "vehicle information table"; the table name can be user-defined and edited to facilitate memory and information transfer, or generated automatically by the system from the marking area name to reduce user effort; the two methods can be switched at any time, which is very flexible.
The marked area vehicle information table set also includes an entrance attribute characterizing the entrance marking area traversed by a vehicle and an exit attribute characterizing the exit marking area traversed by the vehicle.
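The table set of S400, with its entrance/exit attributes and naming rule, can be sketched as a small data structure; the dataclass fields are illustrative assumptions, not the patent's storage format.

```python
# Sketch of the marked area vehicle information table set: one table per
# marking area, tagged with an "entrance" or "exit" attribute and named by
# the rule "marking area name" + "vehicle information table".

from dataclasses import dataclass, field

@dataclass
class MarkAreaTable:
    name: str                 # e.g. "R1 vehicle information table"
    kind: str                 # "entrance" or "exit" attribute
    vehicles: dict = field(default_factory=dict)  # vehicle number -> type

def build_table_set(m, n):
    tables = [MarkAreaTable("R%d vehicle information table" % i, "entrance")
              for i in range(1, m + 1)]
    tables += [MarkAreaTable("S%d vehicle information table" % j, "exit")
               for j in range(1, n + 1)]
    return tables
```

For a road plane intersection with m entrances and n exits this yields the m + n tables described in the embodiment.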
S500, identifying and calibrating newly-entering vehicles for the road plane intersection video, and specifically comprising the following steps:
s510, dividing the road plane intersection video into road plane intersection video frame streams at a manually preset acquisition frequency; the stream of video frames at the intersection of the road plane contains video frames arranged in time increments.
In the embodiment, the road intersection video is divided into the road intersection video frame streams by adopting the pre-constructed convolutional neural network at the manually preset acquisition frequency.
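The frame-sampling rule in S510 (keep frames at a manually preset acquisition frequency) can be sketched as follows; the function name and the rounding policy are assumptions made for illustration, not a definitive implementation:

```python
# Hypothetical sketch of S510: choosing which frame indices to keep when
# splitting a video into a frame stream at a preset acquisition frequency.
def sampled_frame_indices(total_frames: int, video_fps: float, sample_fps: float) -> list:
    """Return the indices of the frames kept when sampling at sample_fps."""
    step = max(1, round(video_fps / sample_fps))  # frames skipped between samples
    return list(range(0, total_frames, step))
```

For example, sampling a 30 fps video at 10 fps keeps every third frame.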
S520, establishing a current check frame pointer; the current check frame pointer points to the storage address of the video frame, and the initial value of the offset of the address value of the current check frame pointer is 0.
S530, checking the video frame pointed by the current check frame pointer, and calibrating the video frame as an initial frame, a new target appearing frame and a common frame one by one according to the following standards:
and when the initial value of the offset of the address value of the current check frame pointer is 0, the pointed video frame is marked as an initial frame.
The video frame in which a newly entering vehicle appears is calibrated as a new target appearing frame; a newly entering vehicle is a vehicle that does not exist in the video frame immediately preceding the video frame pointed to by the current check frame pointer.
A video frame that qualifies as neither the initial frame nor a new target appearing frame is calibrated as a normal frame.
S540, according to the calibration result, the following operations are carried out:
if the video frame pointed by the current check frame pointer is a normal frame, S600 is directly performed.
If the video frame pointed to by the current check frame pointer is an initial frame or a new target appearing frame, a vehicle boundary box is defined for each newly entering vehicle in that video frame; the vehicle boundary box is a rectangle framed along the outer edge of the vehicle and moves synchronously with the vehicle; then, a vehicle type identification operation that assigns one vehicle type to each vehicle and a vehicle calibration operation that assigns one vehicle number to each vehicle are performed once for each newly entering vehicle;
in this embodiment, a vehicle type identification operation for designating one vehicle type for each vehicle and a vehicle calibration operation for assigning one vehicle number to each vehicle are performed once for each newly entering vehicle using a previously constructed convolutional neural network.
It should be noted that, in this embodiment, for a vehicle that has appeared in an initial frame, vehicle detection and vehicle type identification operations can be completed for the vehicle in the initial frame; and for the new target vehicle appearing in the subsequent frame, the vehicle detection and vehicle type identification operation can be completed in the video frame where the new target vehicle appears for the first time.
It should be further explained that the method of the present invention divides vehicles into three categories: Car (small passenger car), Bus (bus/coach), and Truck; the subsequent per-type traffic volume statistics are established on the basis of this vehicle classification rule.
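The frame-calibration rule of S530 (initial frame at offset 0, new target appearing frame when a vehicle absent from the immediately preceding frame appears, otherwise a normal frame) can be sketched as below; the function name and the use of vehicle-ID sets are illustrative assumptions:

```python
# Hypothetical sketch of the S530 calibration rule, comparing the set of
# tracked vehicle IDs in the current frame with the preceding frame.
def calibrate_frame(frame_index: int, ids_now: set, ids_prev: set) -> str:
    if frame_index == 0:
        return "initial"          # offset of the current check frame pointer is 0
    if ids_now - ids_prev:        # at least one newly entering vehicle
        return "new_target"
    return "normal"
```

In the full method, a "new_target" result triggers the vehicle type identification and vehicle calibration operations for each newly entering vehicle.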
S600, carrying out in-out inspection operation on the video of the road plane intersection, and specifically comprising the following steps:
s610, tracking each vehicle which has been subjected to the vehicle calibration operation and successfully obtains the vehicle number.
Then it is checked whether the vehicle boundary box of each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number overlaps with a road plane intersection marking area in the video frame pointed to by the current check frame pointer, and the following operations are performed according to the check result:
if no vehicle boundary box of the vehicle is overlapped with the marking area of the road intersection, adding 1 to the address value of the pointer of the current check frame; and then returns to and performs S530 again.
If the vehicle boundary box of the vehicle is overlapped with the marking area of the road plane intersection, recording the vehicle number of the vehicle corresponding to the vehicle boundary box overlapped with the marking area of the road plane intersection into a marking area vehicle information table corresponding to the overlapped marking area of the road plane intersection; then S700 is performed.
In S610, it is checked whether a vehicle boundary box of each vehicle, which has undergone vehicle calibration operation and successfully obtained a vehicle number, in the video frame pointed by the current check frame pointer overlaps with a road intersection marking area, which specifically includes the following steps:
S611, calculating the frame overlap degree between the vehicle boundary box and the road plane intersection marking area; taking the overlap degree as the area of the intersection of the two rectangles, it is expressed by formula (1):

$D_{vehicle\_id}^{x} = \max\big(0,\ \min(r_{\max}, rw_2) - \max(r_{\min}, rw_1)\big) \times \max\big(0,\ \min(c_{\max}, cl_2) - \max(c_{\min}, cl_1)\big)$  (1)

wherein: $D_{vehicle\_id}^{x}$ is the frame overlap degree; vehicle_id is the vehicle number; x is a sequential code representing the position of the video frame in the road plane intersection video frame stream, with an initial value of 1; $[r_1,c_1]$, $[r_2,c_2]$, $[r_3,c_3]$, $[r_4,c_4]$ respectively represent the coordinates of the 4 vertices of the vehicle boundary box in a rectangular coordinate system, $r_{\min}$, $r_{\max}$, $c_{\min}$, $c_{\max}$ denote the minimum and maximum row and column coordinates among these vertices, r represents a row coordinate, and c represents a column coordinate; $[rw_1,cl_1]$, $[rw_2,cl_2]$ respectively represent the coordinates of the two diagonal vertices of the road plane intersection marking area in the rectangular coordinate system, where rw represents a row coordinate and cl represents a column coordinate.
S612, judging one by one whether the frame overlap degrees of 2 adjacent video frames simultaneously satisfy the conditions of formulas (2) and (3):

$D_{vehicle\_id}^{x-1} = 0$  (2)

$D_{vehicle\_id}^{x} > 0$  (3)

Then, the following operation is performed according to the result of the judgment:

If yes, the vehicle information is written into the marking area vehicle information table corresponding to the road plane intersection marking area.

If not, it is judged again whether the frame overlap degree satisfies the condition of formula (4):

$x = 1$ and $D_{vehicle\_id}^{x} > 0$  (4)

Then, the following operation is performed according to the result of the judgment:

If yes, the vehicle information is written into the marking area vehicle information table corresponding to the road plane intersection marking area.

If not, the vehicle information is not written into the marking area vehicle information table corresponding to the road plane intersection marking area; the value of x is then incremented by 1.
In this specific embodiment, a Deep Sort algorithm is adopted to track each vehicle which has undergone vehicle calibration operation and successfully obtained a vehicle number in each video frame.
It should be noted that, except in the initial frame (i.e. x = 1), the operation of writing the vehicle information into the corresponding marking area vehicle information table is executed only when the frame overlap degree $D_{vehicle\_id}^{x}$ between the vehicle boundary box and the vehicle detection area changes abruptly from 0 to a non-zero value; when $D_{vehicle\_id}^{x}$ changes from a non-zero value to 0, the write operation is not executed. In the initial frame, if $D_{vehicle\_id}^{x}$ is greater than 0, the vehicle information is written directly into the corresponding marking area vehicle information table.
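Under the assumption (made for illustration only) that the frame overlap degree is the intersection area of the axis-aligned vehicle boundary box and the rectangular marking area, formulas (1)–(4) and the 0 → non-0 write trigger can be sketched as:

```python
# Hypothetical sketch of formulas (1)-(4): overlap degree as the intersection
# area of two axis-aligned rectangles, plus the 0 -> non-0 write trigger.
def overlap_degree(box, area):
    """box: (r_min, c_min, r_max, c_max) of the vehicle boundary box;
    area: (rw1, cl1, rw2, cl2), the diagonal corners of the marking area."""
    r1, c1, r2, c2 = box
    rw1, cl1, rw2, cl2 = area
    dr = min(r2, rw2) - max(r1, rw1)   # row-direction overlap length
    dc = min(c2, cl2) - max(c1, cl1)   # column-direction overlap length
    return max(0, dr) * max(0, dc)     # 0 when the rectangles are disjoint

def should_write(x, d_curr, d_prev):
    """Write vehicle info when D jumps from 0 to non-0 (formulas (2)/(3)),
    or, in the initial frame x == 1, when D is already non-zero (formula (4))."""
    if x == 1:
        return d_curr > 0
    return d_prev == 0 and d_curr > 0
```

The non-0 → 0 transition (vehicle leaving the area) deliberately triggers no write, matching the note above.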
S700, screening the marked area vehicle information table, and screening out vehicle information of vehicles which are recorded in both the marked area vehicle information table corresponding to the inlet marked area and the marked area vehicle information table corresponding to the outlet marked area.
The method for screening the vehicle information table in the marked area specifically comprises the following steps:
s710, establishing an import vehicle information traversal pointer; and the imported vehicle information traversal pointer points to the storage address of the marking area vehicle information table corresponding to the imported marking area, and the initial value of the offset of the address value of the imported vehicle information traversal pointer is 0.
And S720, taking out the vehicle number in the vehicle information stored in the marked area vehicle information table corresponding to the exit marked area in which the record update occurred most recently.
S730, with the vehicle number as a retrieval key word, traversing the vehicle information table of the marked area pointed by the traversing pointer of the vehicle information of the inlet, and then performing the following operations according to the retrieval result:
if the vehicle number can be retrieved from the vehicle information table of the marked area pointed by the traversal pointer of the import vehicle information, taking out the exit marked area number corresponding to the exit marked area with the latest record update and the import marked area number corresponding to the import marked area corresponding to the vehicle information table of the marked area pointed by the traversal pointer of the import vehicle information; then adding 1 to the address value of the traversing pointer of the information of the imported vehicle; then S740 is performed.
If the vehicle number cannot be retrieved from the vehicle information table of the marked area pointed by the inlet vehicle information traversal pointer, adding 1 to the address value of the inlet vehicle information traversal pointer; then S740 is performed.
S740, checking whether the address value of the import vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, and performing the following operations according to the check result:
If the address value of the import vehicle information traversal pointer is not greater than the address value of the last marked area vehicle information table, go back to and execute S730 again.
If the address value of the import vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, S800 is performed.
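The screening loop S710–S740 amounts to intersecting the vehicle numbers of each entrance table with those of the most recently updated exit table. A minimal sketch (data layout and function name are illustrative assumptions):

```python
# Hypothetical sketch of S700: keep only vehicles recorded both in some
# entrance table and in the exit table that was updated most recently.
def screen_vehicles(entrance_tables, exit_table):
    """entrance_tables: {entrance_area_number: set_of_vehicle_numbers};
    exit_table: (exit_area_number, set_of_vehicle_numbers).
    Returns (vehicle_number, entrance_area_number, exit_area_number) triples."""
    exit_no, exit_vehicles = exit_table
    matches = []
    for ent_no, vehicles in entrance_tables.items():  # the traversal-pointer loop
        for v in sorted(exit_vehicles & vehicles):    # vehicle number as search key
            matches.append((v, ent_no, exit_no))
    return matches
```

Each returned triple directly supplies the entrance and exit marked area numbers needed for the summary table in S800.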
And S800, filling the vehicle information of each vehicle obtained by screening in the S700, the number of the entrance marking area corresponding to the entrance marking area with the record corresponding to each vehicle and the number of the exit marking area corresponding to the exit marking area with the record corresponding to each vehicle into a traffic volume statistical summary table of the road plane intersection.
When the vehicle information is written into the road plane intersection traffic volume statistical summary table, it indicates that the vehicle has passed successively through an entrance marking area and an exit marking area of the road plane intersection, completing its passage through the intersection; on this basis, the driving direction of the vehicle can be obtained by extracting its entrance marking area number and exit marking area number.
In this embodiment, the method further comprises the following steps:
and S810, adding 1 to the traffic data value after successfully filling the vehicle information of one vehicle and the marking area numbers passing through the marking areas of all the road plane intersections into the traffic volume statistical summary table of the road plane intersections.
In S810, the $R_i$, $S_j$, $T_y$ and vehicle number obtained in S700 are matched against RESULT($R_i$, $S_j$, $T_y$, $N_{ijy}$); the matching rule is that the match succeeds if the $R_i$, $S_j$, $T_y$ obtained in S700 are identical to the $R_i$, $S_j$, $T_y$ in RESULT($R_i$, $S_j$, $T_y$, $N_{ijy}$); on the basis of a successful match, $N_{ijy}$ is incremented by 1.
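Treating the summary table RESULT as a mapping keyed by (entrance area $R_i$, exit area $S_j$, vehicle type $T_y$), the increment of $N_{ijy}$ in S810 can be sketched as follows (names are hypothetical):

```python
# Hypothetical sketch of S810: the summary table keyed by (R_i, S_j, T_y),
# with N_ijy incremented on each successful association match.
from collections import defaultdict

def make_summary():
    # keys: (entrance_area, exit_area, vehicle_type); values: traffic count N_ijy
    return defaultdict(int)

def record_passage(summary, r_i, s_j, t_y):
    """Increment and return the traffic count N_ijy for the matched cell."""
    summary[(r_i, s_j, t_y)] += 1
    return summary[(r_i, s_j, t_y)]
```

Because every (direction, vehicle type) cell starts at 0, the table can be output in real time after each increment, as required by S900.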
S900, outputting the traffic volume statistical summary table of the road plane intersection processed in the S800 in real time, and then adding 1 to the address value of the pointer of the current inspection frame; and then returns to and performs S530 again.
And the traffic volume statistical summary table of the road plane intersection output in real time is the final result obtained by the method.
It should be noted that after the video data of all the road plane intersections are processed according to the method of the present invention, the traffic statistical data of the road plane intersections according to the directions and the vehicle types can be obtained, which is very convenient for providing data support for various statistical calibers.
It should be noted that, when the method of the present invention is adopted, panoramic video playback and traffic volume statistics for the road plane intersection can proceed synchronously, so investigators can check the rationality and reliability of the statistical result in real time. Meanwhile, since extraction of whole-course vehicle trajectory data is not involved, the more complex the geometric structure and traffic flow of the surveyed road plane intersection, the more obvious the advantage of the method over conventional video-analysis-based traffic volume statistics.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. To those skilled in the art; various modifications to these embodiments will be readily apparent, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Finally, it should be noted that the above embodiments are merely representative examples of the present invention. It is obvious that the invention is not limited to the above-described embodiments, but that many variations are possible. Any simple modification, equivalent change and modification made to the above embodiments in accordance with the technical spirit of the present invention should be considered to be within the scope of the present invention.
Here, it should be noted that the description of the above technical solutions is exemplary, the present specification may be embodied in different forms, and should not be construed as being limited to the technical solutions set forth herein. Rather, these descriptions are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Furthermore, the technical solution of the present invention is limited only by the scope of the claims.
The shapes, sizes, ratios, angles, and numbers disclosed to describe aspects of the specification and claims are examples only, and thus, the specification and claims are not limited to the details shown. In the following description, when a detailed description of related known functions or configurations is determined to unnecessarily obscure the focus of the present specification and claims, the detailed description will be omitted.
Where the terms "comprising", "having" and "including" are used in this specification, there may be another part or parts unless otherwise stated, and the terms used may generally be in the singular but may also be in the plural.
It should be noted that although the terms "first," "second," "top," "bottom," "side," "other," "end," "other end," and the like may be used and used in this specification to describe various components, these components and parts should not be limited by these terms. These terms are only used to distinguish one element or section from another element or section. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, with the top and bottom elements being interchangeable or switchable with one another, where appropriate, without departing from the scope of the present description; the components at one end and the other end may be of the same or different properties to each other.
Further, in constituting the component, although it is not explicitly described, it is understood that a certain error region is necessarily included.
In describing positional relationships, for example, when positional sequences are described as being "on.. above", "over.. below", "below", and "next", unless such words or terms are used as "exactly" or "directly", they may include cases where there is no contact or contact therebetween. If a first element is referred to as being "on" a second element, that does not mean that the first element must be above the second element in the figures. The upper and lower portions of the member will change depending on the angle of view and the change in orientation. Thus, in the drawings or in actual construction, if a first element is referred to as being "on" a second element, it can be said that the first element is "under" the second element and the first element is "over" the second element. In describing temporal relationships, unless "exactly" or "directly" is used, the description of "after", "subsequently", and "before" may include instances where there is no discontinuity between steps. The features of the various embodiments of the present invention may be partially or fully combined or spliced with each other and performed in a variety of different configurations as would be well understood by those skilled in the art. Embodiments of the invention may be performed independently of each other or may be performed together in an interdependent relationship.

Claims (10)

1. A road plane intersection traffic volume statistical method based on video multi-region marks, characterized by comprising the following steps:
s100, collecting a road plane intersection video of a road plane intersection to be counted; the road plane intersection video comprises an unmanned aerial vehicle aerial video and a road plane intersection monitoring bayonet video;
s200, respectively and independently calibrating each inlet and outlet of each traffic direction of the road plane intersection in the road plane intersection video to obtain a unique marking area number, so as to obtain a marking area of the road plane intersection; the road intersection marking area comprises an entrance marking area and an exit marking area; the marking region numbers comprise inlet marking region numbers for characterizing the position of the inlet marking regions and outlet marking region numbers for characterizing the position of the outlet marking regions;
then, vehicle detection areas for vehicle detection are arranged at each entrance and each exit in each traveling direction; two adjacent vehicle detection areas are mutually independent and isolated; the vehicle detection area is a rectangular frame arranged at the intersection in each vehicle traveling direction of the road plane intersection; one side of the vehicle detection area coincides with the stop line, and the opposite side is located on the side of the stop line away from the road plane intersection; the lengths of the two sides of the vehicle detection area adjacent to the stop line are preset manually, and these two sides respectively coincide with the leftmost side of the first lane on the left and the rightmost side of the first lane on the right in the same direction; each vehicle detection area covers all traffic lanes in the same direction in the area;
s300, creating a road plane intersection traffic volume statistical summary table for recording traffic volumes of different vehicle types in each traffic direction of the road plane intersection;
s400, creating a marked area vehicle information table set; the marked area vehicle information table set comprises a plurality of marked area vehicle information tables used for storing vehicle information of vehicles passing through the marked area of the road intersection; the marked area vehicle information table and the marked area of the road plane intersection are in one-to-one correspondence; the vehicle information comprises a vehicle type and a vehicle number; the vehicle type includes character strings "Car", "Bus", and "Truck"; the vehicle numbers and the vehicles are in one-to-one correspondence;
s500, identifying and calibrating newly-entering vehicles for the road plane intersection video, and specifically comprising the following steps:
s510, dividing the road plane intersection video into road plane intersection video frame streams at a manually preset acquisition frequency; the road plane intersection video frame stream comprises video frames which are arranged in a time-increasing manner;
s520, establishing a current check frame pointer; the current check frame pointer points to the storage address of the video frame, and the initial value of the offset of the address value of the current check frame pointer is 0;
s530, the video frame pointed by the current check frame pointer is checked, and the video frame is calibrated into an initial frame, a new target appearing frame and a common frame one by one according to the following standards:
when the initial value of the offset of the address value of the current check frame pointer is 0, the pointed video frame is calibrated as the initial frame;
the video frame in which a newly entering vehicle appears is calibrated as the new target appearing frame; the newly entering vehicle is a vehicle that does not exist in the video frame immediately preceding the video frame pointed to by the current check frame pointer;
a video frame that qualifies as neither the initial frame nor the new target appearing frame is calibrated as the common frame;
s540, according to the calibration result, the following operations are carried out:
if the video frame pointed by the current check frame pointer is the normal frame, directly executing S600;
if the video frame pointed to by the current check frame pointer is the initial frame or the new target appearing frame, respectively defining a vehicle boundary box for each newly entering vehicle in the video frame pointed to by the current check frame pointer; the vehicle boundary box is a rectangle framed along the outer edge of the vehicle and moves synchronously with the vehicle; then, a vehicle type identification operation that assigns one vehicle type to each vehicle and a vehicle calibration operation that assigns one vehicle number to each vehicle are performed once for each of the newly entering vehicles;
s600, carrying out in-out inspection operation on the road plane intersection video, and specifically comprising the following steps:
s610, tracking each vehicle which has undergone the vehicle calibration operation and successfully obtains the vehicle number;
then checking whether the vehicle boundary box of each vehicle which has been subjected to the vehicle calibration operation and successfully obtains the vehicle number in the video frame pointed by the current check frame pointer overlaps with the road intersection marking area, and performing the following operations according to the checking result:
adding 1 to the address value of the current inspection frame pointer if no vehicle bounding box of a vehicle overlaps the intersection marking area; then go back to and execute S530 again;
if the vehicle boundary box of a vehicle overlaps with a road plane intersection marking area, recording the vehicle number of the vehicle corresponding to the vehicle boundary box into the marking area vehicle information table corresponding to the road plane intersection marking area with which the vehicle boundary box overlaps; then, S700 is executed;
s700, screening the marked area vehicle information table to screen out the vehicle information of the vehicle which is recorded in the marked area vehicle information table corresponding to the entrance marked area and the marked area vehicle information table corresponding to the exit marked area;
s800, filling the vehicle information of each vehicle obtained by screening in S700, the number of the entrance marking area corresponding to the entrance marking area with the record corresponding to each vehicle and the number of the exit marking area corresponding to the exit marking area with the record corresponding to each vehicle into a traffic volume statistical summary table of the road plane intersection;
s900, outputting the traffic volume statistical summary table of the road plane intersection processed in the S800 in real time, and then adding 1 to the address value of the pointer of the current inspection frame; then go back to and execute S530 again;
and the traffic volume statistical summary table of the road plane intersection output in real time is the final result obtained by the method.
2. The video multi-region marker-based road plane intersection traffic volume statistical method according to claim 1, characterized in that: in S200, a vehicle detection area for vehicle detection is then provided at each entrance and exit in each traveling direction, and the method specifically includes the following steps:
s210, counting the number of entrances in the road plane intersection, and recording the number as the total number of the entrance mark areas; counting the number of exits in the road plane intersection and recording the number as the total number of exit marking areas;
s220, marking each inlet mark area and each outlet mark area, and setting the vehicle detection area; the vehicle detection areas cover all traffic lanes in the corresponding areas, and two adjacent vehicle detection areas are mutually independent and isolated;
and S230, adjusting the frames of the vehicle detection area to enable the frames to have intervals of manually preset widths.
3. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 2, characterized in that: the road plane intersection traffic volume statistical summary table comprises the vehicle types, the marked area numbers and traffic volume data of corresponding vehicle types used for representing corresponding directions in the road plane intersection; the initial value of the traffic data is 0.
4. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 3, characterized in that: the set of tagged region vehicle information tables also includes an ingress attribute for the ingress tagged region characterizing a vehicle approach and an egress attribute for the egress tagged region characterizing a vehicle approach.
5. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 4, characterized in that: in S610, the checking whether the vehicle bounding box of each of the video frames pointed by the current check frame pointer, which has undergone the vehicle calibration operation and successfully obtained the vehicle number, overlaps with the intersection marking area specifically includes the following steps:
S611, calculating a frame overlap degree between the vehicle boundary box and the road plane intersection marking area; taking the overlap degree as the area of the intersection of the two rectangles, it is expressed as follows:

$D_{vehicle\_id}^{x} = \max\big(0,\ \min(r_{\max}, rw_2) - \max(r_{\min}, rw_1)\big) \times \max\big(0,\ \min(c_{\max}, cl_2) - \max(c_{\min}, cl_1)\big)$

wherein: $D_{vehicle\_id}^{x}$ is the frame overlap degree; vehicle_id is the vehicle number; x is a sequential code representing the position of the video frame in the road plane intersection video frame stream, with an initial value of 1; $[r_1,c_1]$, $[r_2,c_2]$, $[r_3,c_3]$, $[r_4,c_4]$ respectively represent the coordinates of the 4 vertices of the vehicle boundary box in a rectangular coordinate system, $r_{\min}$, $r_{\max}$, $c_{\min}$, $c_{\max}$ denote the minimum and maximum row and column coordinates among these vertices, r represents a row coordinate, and c represents a column coordinate; $[rw_1,cl_1]$, $[rw_2,cl_2]$ respectively represent the coordinates of the two diagonal vertices of the road plane intersection marking area in the rectangular coordinate system, where rw represents a row coordinate and cl represents a column coordinate;
S612, judging one by one whether the frame overlap degrees of 2 adjacent video frames simultaneously satisfy the following conditions:

$D_{vehicle\_id}^{x-1} = 0$ and $D_{vehicle\_id}^{x} > 0$

then, the following operation is performed according to the result of the judgment:

if yes, writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area;

if not, judging again whether the frame overlap degree satisfies the following condition:

$x = 1$ and $D_{vehicle\_id}^{x} > 0$

then, the following operation is performed according to the result of the judgment:

if yes, writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area;

if not, not writing the vehicle information into the marking area vehicle information table corresponding to the road plane intersection marking area; the value of x is then incremented by 1.
6. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 5, characterized in that: s700, screening the marked area vehicle information table, and specifically comprising the following steps:
S710, establishing an entrance vehicle information traversal pointer; the entrance vehicle information traversal pointer points to the storage address of the marked area vehicle information table corresponding to an entrance marked area, and the initial value of the offset of its address value is 0;
S720, taking out the vehicle number from the vehicle information in the storage space of the marked area vehicle information table corresponding to the exit marked area in which a record update occurred most recently;
S730, with the vehicle number as the retrieval keyword, retrieving the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, and then performing the following operations according to the retrieval result:
if the vehicle number can be retrieved from the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, taking out the exit marked area number corresponding to the exit marked area in which the record update occurred most recently and the entrance marked area number corresponding to the entrance marked area of that table; then adding 1 to the address value of the entrance vehicle information traversal pointer; then executing S740;
if the vehicle number cannot be retrieved from the marked area vehicle information table pointed to by the entrance vehicle information traversal pointer, adding 1 to the address value of the entrance vehicle information traversal pointer; then executing S740;
S740, checking whether the address value of the entrance vehicle information traversal pointer is greater than the address value of the last marked area vehicle information table, and performing the following operations according to the check result:
if it is not greater, returning to and executing S730 again;
if it is greater, executing S800.
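The pointer arithmetic of S710–S740 amounts to a linear scan of the entrance ("import") marked area tables for the vehicle number taken from the most recently updated exit table. A sketch, with a list of (region number, records) pairs standing in for the per-area tables and the list index standing in for the traversal pointer (names are illustrative):

```python
def screen_entry_for_exit(vehicle_id, entrance_tables):
    """S710-S740 as a sketch: entrance_tables is an ordered list of
    (entrance_region_no, records) pairs standing in for the marked area
    vehicle information tables. Returns the entrance marked area number
    whose table contains vehicle_id, or None if no table does."""
    pointer = 0                            # S710: offset starts at 0
    while pointer < len(entrance_tables):  # S740: stop past the last table
        region_no, records = entrance_tables[pointer]
        if any(rec["vehicle_id"] == vehicle_id for rec in records):  # S730
            return region_no
        pointer += 1                       # advance the traversal pointer
    return None
```

Pairing the returned entrance area number with the exit area number from S720 yields the vehicle's turning movement through the intersection.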
7. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 6, characterized in that: in S800, the step of filling the vehicle information of each vehicle obtained in S700, together with the numbers of all the road plane intersection marked areas it passed through, into the road plane intersection traffic volume statistical summary table further comprises the following step:
S910, after the vehicle information of one vehicle and the numbers of all the road plane intersection marked areas it passed through have been filled into the road plane intersection traffic volume statistical summary table, adding 1 to the value of the traffic volume data.
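The summary table of S800/S910 can be modeled as a counter keyed by turning movement, with each fill operation incrementing the corresponding traffic volume by 1. A sketch (the triple layout is an assumption; the patent does not fix the table schema here):

```python
from collections import Counter


def fill_summary(movements):
    """S800/S910 sketch: movements is a list of
    (vehicle_type, entrance_region_no, exit_region_no) triples.
    Each fill increments the traffic volume for that movement by 1."""
    summary = Counter()
    for vehicle_type, entry_no, exit_no in movements:
        summary[(vehicle_type, entry_no, exit_no)] += 1
    return summary
```

Grouping by (entrance, exit) pair is what lets the method report per-direction turning volumes rather than a single aggregate count.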
8. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 7, characterized in that: in S510, the road plane intersection video is divided, by means of a pre-constructed convolutional neural network, into the road plane intersection video frame stream at a manually preset acquisition frequency.
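Setting aside the convolutional network mentioned in claim 8, the arithmetic of sampling a video recorded at one frame rate into a frame stream at a manually preset acquisition frequency reduces to picking evenly spaced frame indices. A sketch (frame decoding itself, e.g. with OpenCV, is omitted):

```python
def sampled_frame_indices(total_frames, video_fps, sample_hz):
    """Which frame indices to keep when a video recorded at video_fps
    is sampled at a manually preset acquisition frequency sample_hz."""
    step = video_fps / sample_hz  # source frames per sampled frame
    indices, t = [], 0.0
    while round(t) < total_frames:
        indices.append(int(round(t)))
        t += step
    return indices
```

For example, sampling a 30 fps video at 10 Hz keeps every third frame; accumulating a float step rather than an integer stride avoids drift when the two rates do not divide evenly.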
9. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 8, characterized in that: in S540, a pre-constructed convolutional neural network is used to perform, for each newly entering vehicle, one vehicle type identification operation that assigns a vehicle type to the vehicle and one vehicle calibration operation that assigns a vehicle number to the vehicle.
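The vehicle calibration operation of S540 assigns each newly appearing vehicle a unique incremental number alongside its identified type. A minimal sketch of that bookkeeping (the detector producing the type string is assumed and not shown):

```python
class VehicleCalibrator:
    """S540 sketch: register each newly entering vehicle once, pairing
    the vehicle type from the detector with a unique incremental
    vehicle number."""

    def __init__(self):
        self.next_id = 1
        self.registry = {}  # vehicle_id -> vehicle_type

    def calibrate(self, vehicle_type):
        """Assign the next free vehicle number to a new vehicle."""
        vehicle_id = self.next_id
        self.next_id += 1
        self.registry[vehicle_id] = vehicle_type
        return vehicle_id
```

The returned vehicle number is what the later steps (tracking in S610, screening in S700) use as the retrieval keyword.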
10. The video multi-zone marker-based road plane intersection traffic volume statistical method according to claim 9, characterized in that: in S610, the Deep SORT algorithm is adopted to track, in each video frame, each vehicle that has undergone the vehicle calibration operation and successfully obtained a vehicle number.
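Deep SORT associates detections with existing tracks using appearance features, a Kalman-predicted box, and a matching cascade; the geometric core of that association can be sketched as greedy IoU matching (a simplification, not the full algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (r1, c1, r2, c2)."""
    ir = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ic = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ir * ic
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def associate(tracks, detections, min_iou=0.3):
    """Greedily match existing tracks {vehicle_id: box} to new-frame
    detections [box]. Returns (matches {vehicle_id: det_index},
    unmatched detection indices). Unmatched detections would become
    newly entering vehicles handed to the S540 calibration step."""
    matches, used = {}, set()
    for vid, tbox in tracks.items():
        best, best_iou = None, min_iou
        for i, dbox in enumerate(detections):
            if i in used:
                continue
            v = iou(tbox, dbox)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matches[vid] = best
            used.add(best)
    unmatched = [i for i in range(len(detections)) if i not in used]
    return matches, unmatched
```

This keeps the vehicle number stable across frames for matched boxes, which is all the per-region in/out bookkeeping of S612 requires.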
CN202111446985.7A 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking Active CN114333356B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111446985.7A CN114333356B (en) 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111446985.7A CN114333356B (en) 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking

Publications (2)

Publication Number Publication Date
CN114333356A true CN114333356A (en) 2022-04-12
CN114333356B CN114333356B (en) 2023-12-15

Family

ID=81049556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111446985.7A Active CN114333356B (en) 2021-11-30 2021-11-30 Road plane intersection traffic volume statistical method based on video multi-region marking

Country Status (1)

Country Link
CN (1) CN114333356B (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014059655A (en) * 2012-09-14 2014-04-03 Toshiba Corp Road situation-monitoring device, and road situation-monitoring method
CN103733237A (en) * 2011-07-05 2014-04-16 高通股份有限公司 Road-traffic-based group, identifier, and resource selection in vehicular peer-to-peer networks
CN103730015A (en) * 2013-12-27 2014-04-16 株洲南车时代电气股份有限公司 Method and device for detecting traffic flow at intersection
CN104794907A (en) * 2015-05-05 2015-07-22 江苏大为科技股份有限公司 Traffic volume detection method using lane splitting and combining
CN105069407A (en) * 2015-07-23 2015-11-18 电子科技大学 Video-based traffic flow acquisition method
CN106128103A (en) * 2016-07-26 2016-11-16 北京市市政工程设计研究总院有限公司 A kind of intersection Turning movement distribution method based on recursion control step by step and device
KR101696881B1 (en) * 2016-01-06 2017-01-17 주식회사 씨티아이에스 Method and apparatus for analyzing traffic information
CN107545742A (en) * 2017-09-14 2018-01-05 四川闫新江信息科技有限公司 Monitor the traffic controller of the adjust automatically of vehicle flowrate
CN110033620A (en) * 2019-05-17 2019-07-19 东南大学 A kind of intersection flux and flow direction projectional technique based on Traffic monitoring data
CN110111575A (en) * 2019-05-16 2019-08-09 北京航空航天大学 A kind of Forecast of Urban Traffic Flow network analysis method based on Complex Networks Theory
WO2020189475A1 (en) * 2019-03-19 2020-09-24 株式会社Ihi Moving body monitoring system, control server for moving body monitoring system, and moving body monitoring method
CN112802348A (en) * 2021-02-24 2021-05-14 辽宁石化职业技术学院 Traffic flow counting method based on mixed Gaussian model
CN112907978A (en) * 2021-03-02 2021-06-04 江苏集萃深度感知技术研究所有限公司 Traffic flow monitoring method based on monitoring video
CN113192336A (en) * 2021-05-28 2021-07-30 三峡大学 Road congestion condition detection method taking robust vehicle target detection as core
CN113257005A (en) * 2021-06-25 2021-08-13 之江实验室 Traffic flow statistical method based on correlation measurement
CN113269768A (en) * 2021-06-08 2021-08-17 中移智行网络科技有限公司 Traffic congestion analysis method, device and analysis equipment
KR102323437B1 (en) * 2021-06-01 2021-11-09 시티아이랩 주식회사 Method, System for Traffic Monitoring Using Deep Learning
CN113688717A (en) * 2021-08-20 2021-11-23 云往(上海)智能科技有限公司 Image recognition method and device and electronic equipment


Also Published As

Publication number Publication date
CN114333356B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN111462275B (en) Map production method and device based on laser point cloud
Yang et al. Hierarchical extraction of urban objects from mobile laser scanning data
WO2018068653A1 (en) Point cloud data processing method and apparatus, and storage medium
CN108734105B (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
WO2020052530A1 (en) Image processing method and device and related apparatus
JP6781711B2 (en) Methods and systems for automatically recognizing parking zones
CN111542860A (en) Sign and lane creation for high definition maps for autonomous vehicles
CN112880693A (en) Map generation method, positioning method, device, equipment and storage medium
CN111931627A (en) Vehicle re-identification method and device based on multi-mode information fusion
CN109800321B (en) Bayonet image vehicle retrieval method and system
Huang et al. Spatial-temproal based lane detection using deep learning
CN113469075A (en) Method, device and equipment for determining traffic flow index and storage medium
CN115841649A (en) Multi-scale people counting method for urban complex scene
CN105574485A (en) Vehicle information identification method and system
CN112132853A (en) Method and device for constructing ground guide arrow, electronic equipment and storage medium
Bu et al. A UAV photography–based detection method for defective road marking
KR20210018493A (en) Lane property detection
CN109635701B (en) Lane passing attribute acquisition method, lane passing attribute acquisition device and computer readable storage medium
CN114898243A (en) Traffic scene analysis method and device based on video stream
CN111598069B (en) Highway vehicle lane change area analysis method based on deep learning
Zhang et al. Image-based approach for parking-spot detection with occlusion handling
CN114333356A (en) Road plane intersection traffic volume statistical method based on video multi-region marks
CN111462490A (en) Road network visualization method and device based on multistage subregion division
CN113945222B (en) Road information identification method and device, electronic equipment, vehicle and medium
CN104809438A (en) Method and device for detecting electronic eyes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant