CN111243281A - Road multi-video joint detection system and detection method - Google Patents
- Publication number
- CN111243281A CN111243281A CN201811333100.0A CN201811333100A CN111243281A CN 111243281 A CN111243281 A CN 111243281A CN 201811333100 A CN201811333100 A CN 201811333100A CN 111243281 A CN111243281 A CN 111243281A
- Authority
- CN
- China
- Prior art keywords
- target
- vehicle
- target vehicle
- information
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
An embodiment of the invention provides a road multi-video joint detection system and detection method. The method is applied to a data processing device in the road multi-video joint detection system; the system further comprises a plurality of video monitoring devices arranged at intervals along the roadside of a target road, with the monitoring areas of two adjacent video monitoring devices overlapping. The method comprises the following steps: continuously receiving target information sent by the plurality of video monitoring devices, where whenever any video monitoring device acquires a vehicle image it determines target information based on that image and sends the determined target information to the data processing device, the target information relating to each target vehicle in the vehicle image; determining a travel track of each target vehicle based on the received plurality of target information; and, for each target vehicle, determining the travel information of the target vehicle based on its travel track.
Description
Technical Field
The invention relates to the field of traffic, in particular to a road multi-video joint detection system and a detection method.
Background
Video monitoring devices have become widely used in the traffic field, where they typically serve to monitor and detect vehicles, pedestrians, and the like on roads.
In practical applications, to better understand the driving state of a vehicle, it is often necessary to detect the vehicle's driving information over a section of road, such as its travel track.
How to effectively detect the driving information of vehicles on a section of road using video monitoring devices is therefore a problem in urgent need of a solution.
Disclosure of Invention
An object of the embodiments of the invention is to provide a road multi-video joint detection system and detection method, so as to effectively detect the driving information of vehicles on a section of road using video monitoring devices. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a road multi-video joint detection method, where the method is applied to a data processing device in a road multi-video joint detection system, the system further includes a plurality of video monitoring devices, the video monitoring devices are arranged at intervals on a road side of a target road, and monitoring areas of two adjacent video monitoring devices are overlapped; the method comprises the following steps:
continuously receiving target information sent by a plurality of video monitoring devices; when any video monitoring device acquires a vehicle image, determining target information based on the vehicle image, and sending the determined target information to a data processing device; the target information is related to each target vehicle in the vehicle image;
determining a running track of each target vehicle based on the received plurality of target information;
for each target vehicle, the travel information of the target vehicle is determined based on the travel track of the target vehicle.
Optionally, the determining the driving track of each target vehicle based on the received multiple target information includes:
for each piece of received target information, determining the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information; the position information of any target vehicle is used for representing the position of the target vehicle on the target road;
when the position information of each target vehicle in the multiple vehicle images is obtained, determining each position information of the same target vehicle in the multiple vehicle images based on the position information of each target vehicle in the multiple vehicle images;
aiming at each target vehicle, arranging the position information of the target vehicle according to a preset arrangement mode to form a running track of the target vehicle; the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
Optionally, the target information includes a vehicle image acquired by the video monitoring device and a timestamp of the vehicle image;
the determining, for each piece of received target information, the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information includes:
for each piece of received target information, position information of each target vehicle is determined in a vehicle image included in the target information.
Optionally, the target information includes: the video monitoring equipment determines the position information of each target vehicle and the time stamp of the vehicle image from the acquired vehicle image;
the determining, for each piece of received target information, the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information includes:
for each piece of received target information, position information of the respective target vehicle is extracted from the target information.
Optionally, when the position information of each target vehicle in the multiple vehicle images is obtained, determining each position information of the same target vehicle in the multiple vehicle images based on the position information of each target vehicle in the multiple vehicle images includes:
when the position information of each target vehicle in the plurality of vehicle images is obtained, for any two pieces of position information meeting a first condition, determining the two pieces of position information as position information of the same target vehicle when the distance between the areas corresponding to them is smaller than a first distance threshold; wherein the first condition comprises: the two pieces of position information are extracted from different vehicle images that were collected by the same video monitoring device and have adjacent timestamps;
for any two pieces of position information meeting a second condition, determining the two pieces of position information as position information of the same target vehicle when the distance between the areas corresponding to them is smaller than a second distance threshold; wherein the second condition comprises: the two pieces of position information are extracted from different vehicle images that were collected by adjacent video monitoring devices and have the same timestamp.
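The two association conditions above can be sketched as a single predicate. This is an assumed illustration, not the patent's implementation: the threshold values are placeholders, camera adjacency is modeled as integer indices differing by one, and records are simplified to (camera_id, timestamp, plane coordinate):

```python
import math

def same_vehicle(info_a, info_b, first_threshold=2.0, second_threshold=1.0):
    """Decide whether two position records describe the same target vehicle.

    Each record is (camera_id, timestamp, (x, y)) in road-plane coordinates.
    First condition: same camera, different (adjacent) frames
        -> distance must be below first_threshold.
    Second condition: adjacent cameras, same timestamp
        -> distance must be below second_threshold.
    Thresholds and the adjacency tests are illustrative placeholders.
    """
    cam_a, t_a, (xa, ya) = info_a
    cam_b, t_b, (xb, yb) = info_b
    dist = math.hypot(xa - xb, ya - yb)
    if cam_a == cam_b and t_a != t_b:           # first condition
        return dist < first_threshold
    if abs(cam_a - cam_b) == 1 and t_a == t_b:  # second condition
        return dist < second_threshold
    return False
```

The second condition exploits the overlapping monitoring areas: in the overlap, the same vehicle appears in both cameras at the same time at (nearly) the same road-plane position, which is what lets tracks be stitched across cameras.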
Optionally, the determining, for each piece of received target information, the position information of each target vehicle in the vehicle image included in the target information includes:
determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
determining a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a mapping relation between a preset pixel point coordinate and the plane coordinate;
and determining the position information of each target vehicle based on the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle.
Optionally, the determining, by using a mapping relationship between preset pixel coordinates and plane coordinates, the plane coordinates corresponding to the reference pixel coordinates of each target vehicle includes:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
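Applying a perspective transformation matrix to map a pixel coordinate to a road-plane coordinate works as sketched below. The matrix shown is a toy example (pure scaling); the patent's matrix would be calibrated per camera, e.g. from four known pixel/ground point pairs:

```python
def pixel_to_plane(H, u, v):
    """Map a pixel coordinate (u, v) to a road-plane coordinate via a 3x3
    perspective (homography) matrix H: [x', y', w'] = H @ [u, v, 1],
    then (x, y) = (x'/w', y'/w')."""
    xp = H[0][0] * u + H[0][1] * v + H[0][2]
    yp = H[1][0] * u + H[1][1] * v + H[1][2]
    wp = H[2][0] * u + H[2][1] * v + H[2][2]
    return xp / wp, yp / wp

# Toy matrix: 0.05 m per pixel, no tilt. A real matrix could be obtained
# with e.g. OpenCV's getPerspectiveTransform from calibrated point pairs.
H = [[0.05, 0.0, 0.0],
     [0.0, 0.05, 0.0],
     [0.0, 0.0, 1.0]]
x, y = pixel_to_plane(H, 400, 200)
```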
Optionally, the determining, for each target vehicle, the position information of the target vehicle based on the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle includes:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
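Comparing a vehicle's plane coordinate against the lane edge-line coordinates to obtain a lane identifier can be sketched as below, assuming lane edges run parallel to the road axis so only the lateral coordinate matters. The boundary values are illustrative:

```python
import bisect

def lane_of(x, lane_boundaries):
    """Return a 1-based lane identifier for a vehicle whose lateral
    road-plane coordinate is x, given the lateral coordinates of the lane
    edge lines in increasing order; None if off the carriageway."""
    i = bisect.bisect_right(lane_boundaries, x)
    if 0 < i < len(lane_boundaries):
        return i
    return None

# A three-lane road with 3.5 m lanes (illustrative edge-line coordinates).
boundaries = [0.0, 3.5, 7.0, 10.5]
```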
Optionally, the determining, for each target vehicle, the driving information of the target vehicle based on the driving track of the target vehicle includes:
aiming at each target vehicle, calculating the corresponding instantaneous speed of the target vehicle by using the distance between two plane coordinates in the running track of the target vehicle and the difference value of the timestamps corresponding to the two plane coordinates;
determining, for each target vehicle, a first timestamp and a second timestamp; calculating the difference between the first timestamp and the second timestamp; and calculating the interval speed of the target vehicle using the length of the target road and the difference; wherein the first timestamp is the timestamp corresponding to the plane coordinate in the target vehicle's travel track that matches the start point of the target road, and the second timestamp is the timestamp corresponding to the plane coordinate in the travel track that matches the end point of the target road.
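The two speed computations above reduce to simple ratios: distance between two track points over their timestamp difference (instantaneous speed), and road length over the start-to-end timestamp difference (interval speed). A minimal sketch, with illustrative units of metres and seconds:

```python
import math

def instantaneous_speed(p1, t1, p2, t2):
    """Instantaneous speed from two adjacent track points: distance between
    the two plane coordinates over the timestamp difference."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / (t2 - t1)

def interval_speed(road_length, t_start, t_end):
    """Interval speed: target-road length over the difference between the
    timestamps at the road's start and end points."""
    return road_length / (t_end - t_start)

v_inst = instantaneous_speed((5.0, 3.5), 1.0, (15.0, 3.5), 1.5)  # 10 m in 0.5 s
v_avg = interval_speed(500.0, 10.0, 35.0)                        # 500 m in 25 s
```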
Optionally, the determining, for each target vehicle, the driving information of the target vehicle based on the driving track of the target vehicle includes:
for each target vehicle, determining a target track of the target vehicle in a preset time period from the running tracks of the target vehicle;
and judging whether the target vehicle has abnormal lane changing behavior or not according to the plurality of lane marks in the target track.
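One plausible reading of the lane-change check above is counting transitions in the track's lane-identifier sequence within the preset time window; the threshold below is an assumption, since the patent does not fix a criterion for "abnormal":

```python
def lane_changes(lane_ids):
    """Count lane changes in the lane-identifier sequence of a target track,
    ignoring repeated samples in the same lane."""
    changes = 0
    for prev, cur in zip(lane_ids, lane_ids[1:]):
        if cur != prev:
            changes += 1
    return changes

def is_abnormal(lane_ids, max_changes=2):
    """Flag abnormal lane-changing behavior (e.g. weaving) when the number
    of lane changes in the window exceeds max_changes (illustrative)."""
    return lane_changes(lane_ids) > max_changes
```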
Optionally, the method further includes:
counting the number of target vehicles passing through a preset section of the target road in a preset time period based on a plurality of target information sent by target video monitoring equipment and received in the preset time period, and calculating section flow by using the counted number of the target vehicles and the preset time period; and the target video monitoring equipment is the video monitoring equipment at the preset section position.
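The section-flow computation above is a count of crossings in a time window divided by the window length. A sketch under the assumption that the camera at the cross-section reports one crossing timestamp per target vehicle:

```python
def section_flow(crossing_timestamps, window_start, window_end):
    """Section flow in vehicles per hour: the number of target vehicles
    whose crossing timestamp falls inside the preset time window, divided
    by the window length (timestamps in seconds)."""
    count = sum(window_start <= t < window_end for t in crossing_timestamps)
    hours = (window_end - window_start) / 3600.0
    return count / hours

flow = section_flow([10.0, 55.0, 130.0, 400.0], 0.0, 300.0)  # 3 vehicles in 5 min
```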
Optionally, the method further includes:
acquiring road network data; the road network data comprises plane coordinates of a plurality of road sections of the region where the target road is located;
and determining the road sections matched with the target vehicles according to the position relation between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicles and the plane coordinates of each road section for each target vehicle.
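Matching a vehicle's plane coordinate to a road section in the road-network data can be done by nearest point-to-segment distance. This is an assumed sketch with straight-line segments and invented identifiers:

```python
import math

def nearest_segment(point, segments):
    """Match a vehicle's plane coordinate to the nearest road section, where
    each section is (segment_id, (x1, y1), (x2, y2)). Distance is the
    point-to-segment distance in the road plane."""
    px, py = point

    def dist(seg):
        _, (x1, y1), (x2, y2) = seg
        dx, dy = x2 - x1, y2 - y1
        length_sq = dx * dx + dy * dy
        # Clamp the projection parameter to stay on the segment.
        t = 0.0 if length_sq == 0 else max(
            0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / length_sq))
        return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

    return min(segments, key=dist)[0]

segs = [("A", (0.0, 0.0), (100.0, 0.0)), ("B", (0.0, 50.0), (100.0, 50.0))]
```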
In a second aspect, an embodiment of the present invention provides a road multi-video joint detection method, where the method is applied to the video monitoring devices in a road multi-video joint detection system, the system includes a plurality of video monitoring devices and a data processing device, the video monitoring devices are arranged at intervals on the roadside of a target road, and the monitoring areas of two adjacent video monitoring devices overlap; the method comprises the following steps:
determining target information based on a vehicle image every time a vehicle image is acquired; wherein the target information is related to each target vehicle in the vehicle image;
transmitting the determined target information to a data processing apparatus so that the data processing apparatus determines a travel track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
Optionally, the determining target information based on the vehicle image includes:
taking the vehicle image and a timestamp of the vehicle image as target information; or
And determining the position information of each target vehicle in the vehicle image, and taking the determined position information and the timestamp of the vehicle image as target information.
Optionally, the determining the position information of each target vehicle in the vehicle image includes:
determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
determining a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a mapping relation between a preset pixel point coordinate and the plane coordinate;
and determining the position information of each target vehicle based on the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle.
Optionally, the determining, by using a mapping relationship between preset pixel coordinates and plane coordinates, the plane coordinates corresponding to the reference pixel coordinates of each target vehicle includes:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
Optionally, the determining, for each target vehicle, the position information of the target vehicle based on the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle includes:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
In a third aspect, an embodiment of the present invention provides a road multi-video joint detection apparatus, where the apparatus is applied to a data processing device in a road multi-video joint detection system, the system further includes a plurality of video monitoring devices, the video monitoring devices are arranged at intervals on a road side of a target road, and monitoring areas of two adjacent video monitoring devices overlap; the device comprises:
the receiving module is used for continuously receiving the target information sent by the plurality of video monitoring devices; when any video monitoring device acquires a vehicle image, determining target information based on the vehicle image, and sending the determined target information to a data processing device; the target information is related to each target vehicle in the vehicle image;
the first determination module is used for determining the running track of each target vehicle based on the received target information;
and the second determination module is used for determining the running information of each target vehicle based on the running track of the target vehicle.
Optionally, the first determining module includes:
the first determining submodule is used for determining the position information of each target vehicle in the vehicle image corresponding to the target information according to the received target information and based on the target information; the position information of any target vehicle is used for representing the position of the target vehicle on the target road;
the second determining submodule is used for determining each piece of position information of the same target vehicle in the plurality of vehicle images based on the position information of each target vehicle in the plurality of vehicle images when the position information of each target vehicle in the plurality of vehicle images is obtained;
the arrangement submodule is used for arranging the position information of each target vehicle according to a preset arrangement mode aiming at each target vehicle to form a running track of the target vehicle; the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
In a fourth aspect, an embodiment of the present invention provides a road multi-video joint detection apparatus, where the apparatus is applied to video monitoring devices in a road multi-video joint detection system, the system includes a plurality of the video monitoring devices and a data processing device, the plurality of the video monitoring devices are arranged at intervals on a road side of a target road, and monitoring areas of two adjacent video monitoring devices are overlapped; the device comprises:
the determining module is used for determining target information based on a vehicle image every time the vehicle image is acquired; wherein the target information is related to each target vehicle in the vehicle image;
a transmitting module, configured to transmit the determined target information to a data processing device, so that the data processing device determines a driving track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
In a fifth aspect, an embodiment of the present invention provides an electronic device, which is a data processing device and includes a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to implement the steps of the road multi-video joint detection method provided by the first aspect of the embodiment of the present invention when executing the program stored in the memory.
In a sixth aspect, an embodiment of the present invention provides an electronic device, which is a video monitoring device, including a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to implement the steps of the road multi-video joint detection method provided by the second aspect of the embodiment of the present invention when executing the program stored in the memory.
In a seventh aspect, an embodiment of the present invention provides a road multi-video joint detection system, where the system includes a plurality of video monitoring devices and a data processing device, the video monitoring devices are arranged at intervals on the roadside of a target road, and the monitoring areas of two adjacent video monitoring devices overlap; wherein,
any video monitoring device is the video monitoring device of the sixth aspect;
the data processing apparatus is the data processing apparatus of the fifth aspect.
In the scheme provided by the embodiment of the invention, a plurality of video monitoring devices are arranged at intervals at the road side of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. Therefore, for each target vehicle, the plurality of video monitoring devices can acquire a plurality of continuous vehicle images containing the target vehicle, so as to determine a plurality of target information; and the data processing device can determine the running track of the target vehicle on the target road based on the received plurality of target information, and then can determine the running information of the target vehicle according to the running track. Therefore, the aim of effectively detecting the driving information of the vehicles on the section of road by using the video monitoring equipment can be fulfilled by applying the embodiment of the invention.
Of course, it is not necessary for any product or method of practicing the invention to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a road multi-video joint detection method provided from the perspective of a data processing device according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a road multi-video joint detection method provided from the perspective of a video monitoring device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a road multi-video joint detection apparatus provided from the perspective of a data processing device according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a road multi-video joint detection apparatus provided from the perspective of a video monitoring device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a road multi-video joint detection system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to achieve the purpose of effectively detecting the driving information of vehicles on a section of road by using video monitoring equipment, the embodiment of the invention provides a road multi-video joint detection method, a road multi-video joint detection device, electronic equipment and a storage medium from the perspective of data processing equipment and video monitoring equipment respectively, and provides a road multi-video joint detection system.
In a first aspect, from the perspective of a data processing device, an embodiment of the present invention provides a road multi-video joint detection method. The method is applied to data processing equipment in a road multi-video combined detection system. The system further comprises a plurality of video monitoring devices, the video monitoring devices are arranged at intervals at the roadside of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. The video monitoring device may be a camera or the like.
It should be noted that an execution subject of the road multi-video joint detection method provided by the embodiment of the present invention may be a road multi-video joint detection apparatus, and the apparatus may be run in a data processing device; the data processing device may be a device with a computing function, such as a server.
As shown in fig. 1, from the perspective of a data processing device, a road multi-video joint detection method provided by an embodiment of the present invention may include the following steps:
s101, continuously receiving target information sent by a plurality of video monitoring devices;
when any video monitoring device acquires a vehicle image, determining target information based on the vehicle image, and sending the determined target information to a data processing device; the target information is related to each target vehicle in the vehicle image.
The target information may include the vehicle image, a position of each target vehicle on the target road determined from the vehicle image, a time when the vehicle image is acquired, and the like. Of course, the contents of the target information are not limited to the above.
For clarity of layout and ease of understanding, the processing procedure of the video surveillance equipment in this step is described in the following description of a road multi-video joint detection method provided from the perspective of the video surveillance equipment.
S102, determining the running track of each target vehicle based on the received target information;
for the data processing device, the target information sent by the plurality of video monitoring devices can be continuously received, and any target information is related to each target vehicle in the corresponding vehicle image. It should be noted that, for a target vehicle traveling from the starting point of the target road to the end point of the target road, during this process, each video monitoring device along the way sequentially acquires a vehicle image containing the target vehicle, and sends corresponding target information to the data processing device. Because the monitoring areas of two adjacent video monitoring devices are overlapped, when the target vehicle runs in the monitoring overlapping area, the target vehicle can be simultaneously present on the vehicle images of the two adjacent video monitoring devices, and the position of the target vehicle on the vehicle images of the two adjacent video monitoring devices is consistent at the same time.
Therefore, the data processing device can track a plurality of positions of the target vehicle sequentially appearing on the target road in the received plurality of target information by using at least the condition of the same time and the same position, so that the running track of the target vehicle is determined from the determined plurality of positions.
For clarity of layout and ease of understanding of the solution, the step of determining the travel track of each target vehicle based on the received plurality of target information will be described hereinafter.
And S103, determining the running information of each target vehicle based on the running track of the target vehicle.
The data processing device may determine, for each target vehicle, travel information of the target vehicle based on the travel track of the target vehicle. For example, the travel track of the target vehicle can itself be used as the travel information of the target vehicle. Alternatively, two positions in the travel track of the target vehicle may be selected and the position difference between them obtained; the times at which the vehicle images corresponding to the two positions were acquired may then be determined and the time difference between them obtained; finally, the ratio of the position difference to the time difference gives an instantaneous vehicle speed of the target vehicle, which may be used as the travel information of the target vehicle; and so on.
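The instantaneous-speed calculation described above can be sketched as follows. This is a minimal illustration only; the function name, plane coordinates in metres, and timestamps in seconds are assumptions, not part of the claimed scheme:

```python
import math

def instantaneous_speed(p1, t1, p2, t2):
    """Estimate an instantaneous speed (m/s) from two travel-track points.

    p1, p2 are (x, y) plane coordinates in metres; t1, t2 are the
    timestamps (seconds) of the vehicle images they were taken from.
    """
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # position difference
    return dist / dt                                  # ratio = speed
```

For instance, a vehicle that moves 10 metres along the road in 0.5 seconds would be assigned an instantaneous speed of 20 m/s.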
In the scheme provided by the embodiment of the invention, a plurality of video monitoring devices are arranged at intervals at the road side of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. The data processing device can continuously receive target information which is sent by a plurality of video monitoring devices and is determined based on the acquired vehicle images and is related to each target vehicle in the vehicle images; then, based on the received target information, the running track of each target vehicle can be determined; then, for each target vehicle, the travel information of the target vehicle may be determined based on the travel track of the target vehicle. Therefore, the aim of effectively detecting the driving information of the vehicles on the section of road by using the video monitoring equipment can be fulfilled by applying the embodiment of the invention.
The following describes a procedure of determining the travel locus of each target vehicle based on the plurality of pieces of target information received in step S102. Optionally, the step may include step a 1-step a 3:
step a 1: for each piece of received target information, determining the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information;
step a 2: when the position information of each target vehicle in the multiple vehicle images is obtained, determining each position information of the same target vehicle in the multiple vehicle images based on the position information of each target vehicle in the multiple vehicle images;
step a 3: and aiming at each target vehicle, arranging the position information of the target vehicle according to a preset arrangement mode to form a running track of the target vehicle.
The steps a 1-a 3 are described in detail below:
1) for step a 1:
the position information of any target vehicle is used for representing the position of the target vehicle on the target road.
The step a1 has a corresponding implementation process according to the content of the received target information, and the implementation process of the step a1 is described below with reference to different contents of the target information. Specifically, the method comprises the following steps:
in a first optional manner, the target information includes a vehicle image acquired by a video monitoring device and a timestamp of the vehicle image. Wherein the timestamp of the vehicle image is the time at which the vehicle image was acquired.
Then, with respect to the above target information, the specific implementation procedure of step a1 may be step a 11:
step a 11: for each piece of received target information, position information of each target vehicle is determined in a vehicle image included in the target information.
It can be understood that, in this manner, after each vehicle image is acquired by any video monitoring device, the vehicle image is not processed, but the vehicle image and the timestamp of the vehicle image are directly used as target information, and the target information is sent to the data processing device.
Correspondingly, the data processing device processes each piece of received target information and determines the position information of each target vehicle in any vehicle image. It is understood that, in the implementation process of step a11, the data processing device has a vehicle recognition function, such as a vehicle feature recognition function, a license plate recognition function, a vehicle trajectory tracking function, and so on. The data processing device may determine the position information of each target vehicle using the vehicle recognition function, such as determining the position information of target vehicle 1 as x1, the position information of target vehicle 2 as x2, and so on.
For clarity of layout and ease of understanding of the solution, a specific implementation process for determining the position information of each target vehicle in any vehicle image is described hereinafter.
In another alternative, the target information includes: the video monitoring device determines the position information of each target vehicle and the time stamp of the vehicle image from the acquired vehicle image.
Then, with respect to the above target information, the specific implementation procedure of step a1 may be step a 12:
step a 12: for each piece of received target information, position information of the respective target vehicle is extracted from the target information.
It can be understood that, in this manner, each time any video monitoring device acquires a vehicle image, it needs to process the vehicle image to determine the position information of each target vehicle in that vehicle image; the determined position information of each target vehicle, together with the timestamp of the vehicle image, is then taken as the target information, and the target information is transmitted to the data processing device.
Then, at this time, any video monitoring device has a vehicle identification function, such as a vehicle feature identification function, a license plate identification function, a vehicle trajectory tracking function, and the like. Any one of the video surveillance devices determines the position information of each target vehicle using the vehicle recognition function.
In this way, the video monitoring device completes the processing procedure of determining the position information of each target vehicle in any vehicle image. Thus, the data processing apparatus can extract the position information of each target vehicle directly from the target information without performing the above-described processing, and the calculation process of the data processing apparatus can be simplified.
By combining the two modes, the determination of the position information of each target vehicle in any vehicle image can be completed by the data processing device or the video monitoring device. The following describes in detail the implementation process of determining the position information of each target vehicle in any vehicle image, which may include steps b 1-b 3:
step b 1: determining the reference pixel point coordinates of each target vehicle in the vehicle image;
the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image; the reference pixel point coordinates may be coordinates of a pixel point in the vehicle image and in the image area where the target vehicle is located, for example, the reference pixel point coordinates may be coordinates of a central point of a detection frame including the target vehicle in the vehicle image. The detection frame may be rectangular, circular, or trapezoidal.
The process of determining the coordinates of the reference pixel points of each target vehicle in the vehicle image may be implemented manually or by an automated tool in the prior art, and is not described in detail herein.
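As an illustration of the rectangular-detection-frame case mentioned above, the centre point of a bounding box can be computed as below. The `(x_min, y_min, x_max, y_max)` box representation is an assumed convention, not specified by the scheme:

```python
def reference_pixel(box):
    """Centre point of a rectangular detection box.

    box: (x_min, y_min, x_max, y_max) pixel coordinates of the
    detection frame containing the target vehicle.
    """
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
```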
Step b 2: determining a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a mapping relation between a preset pixel point coordinate and the plane coordinate;
and the plane coordinates corresponding to the reference pixel point coordinates are used for representing the position of the vehicle on the target road.
Optionally, a predetermined perspective transformation matrix may be used to calculate the plane coordinates corresponding to the reference pixel point coordinates of each target vehicle. The perspective transformation matrix is one form of the mapping relation between pixel point coordinates and plane coordinates: it represents how pixel point coordinates map to plane coordinates. For clarity of layout, the determination process of the perspective transformation matrix is described later.
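Applying a 3x3 perspective (homography) matrix to a pixel coordinate can be sketched as follows. The matrix itself would come from the calibration procedure described later in the text; the nested-list representation is an illustrative assumption:

```python
def apply_perspective(h, pixel):
    """Map a pixel coordinate to a plane coordinate with a 3x3
    perspective transformation matrix h, given as nested lists
    [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]].
    """
    u, v = pixel
    # homogeneous multiplication [x, y, w]^T = H * [u, v, 1]^T
    x = h[0][0] * u + h[0][1] * v + h[0][2]
    y = h[1][0] * u + h[1][1] * v + h[1][2]
    w = h[2][0] * u + h[2][1] * v + h[2][2]
    if w == 0:
        raise ValueError("pixel maps to a point at infinity")
    return (x / w, y / w)  # de-homogenize to get plane coordinates
```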
Step b 3: and determining the position information of each target vehicle based on the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle.
As an alternative implementation, step b3 may be step b 31:
step b 31: and taking the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle as the position information of the target vehicle.
As another alternative implementation, step b3 may include step b 32:
step b 32: and aiming at each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of preset lane plane coordinates, determining the lane identification of the target vehicle according to the comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as the position information of the target vehicle. The lane marks are added in the position information, so that accurate and rich position information of the target vehicle can be obtained, and accurate and rich running information can be obtained subsequently based on the position information.
The following illustrates the process of determining the lane identification of a target vehicle. Assume that the plane coordinate corresponding to the reference pixel point coordinate of a target vehicle is (x0, y0), and that the target road has two lanes, lane 1 and lane 2, each lane having a left sideline and a right sideline. For each lane, the predetermined plurality of lane plane coordinates may be: the plane coordinates corresponding to the left sideline of the lane and the plane coordinates corresponding to the right sideline of the lane.
It will be appreciated that the planar coordinates on each edge form a line segment, and then there are three edges for the two lanes, say edges 1, 2 and 3 (the two edges for lane 1 are edges 1 and 2, and the two edges for lane 2 are edges 2 and 3). The perpendicular distances of (x0, y0) from edges 1, 2, and 3, respectively, can be calculated, resulting in perpendicular distances d1, d2, and d3, respectively. If both d1 and d2 are less than the width of lane 1, then the target vehicle is determined to be on lane 1, and the lane identification of the target vehicle may be determined to be 1; similarly, if both d2 and d3 are less than the width of lane 2, then the target vehicle is determined to be on lane 2 and the lane identification of the target vehicle may be determined to be 2. Of course, the process of determining the lane identification of a target vehicle is not limited to that described above.
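A simplified sketch of the lane test above, assuming for illustration that the lane sidelines run parallel to the y axis, so the perpendicular distance to an edge reduces to a difference in x. The edge positions and lane widths in the test are hypothetical values:

```python
def lane_of(point, edges, lane_widths):
    """Return the 1-based lane index of a plane coordinate, or None.

    edges: x positions of the lane sidelines (edge i and edge i+1
    bound lane i+1); lane_widths: width of each lane in metres.
    """
    x0, _ = point
    for i in range(len(edges) - 1):
        d_left = abs(x0 - edges[i])       # perpendicular distance to left edge
        d_right = abs(x0 - edges[i + 1])  # perpendicular distance to right edge
        # both distances smaller than the lane width -> point lies in this lane
        if d_left < lane_widths[i] and d_right < lane_widths[i]:
            return i + 1
    return None
```

With three edges at x = 0, 3.5 and 7.0 metres (two 3.5 m lanes), a vehicle at x = 1.0 falls in lane 1 and one at x = 5.0 falls in lane 2, matching the d1/d2/d3 reasoning in the text.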
The lane marks are used to distinguish lanes, and the lane marks may be numerical values related to lanes, for example, 1 represents a first lane, and the like, and of course, the lane marks may also be other characters.
The lane plane coordinates are two-dimensional coordinates corresponding to lane sidelines of the target road. For clarity of layout, the determination process of the lane plane coordinates will be described later.
2) For step a 2:
step a2 may include step c1 and step c 2:
Step c1: when the position information of each target vehicle in the multiple vehicle images is obtained, for two pieces of position information meeting a first condition, if the distance value between the areas corresponding to the two pieces of position information is smaller than a first distance threshold, determining the two pieces of position information as position information of the same target vehicle;
wherein the first condition comprises: the two position information are extracted from different vehicle images that are captured by the same video surveillance device with adjacent timestamps.
With step c1, the data processing device may determine the same target vehicle among a plurality of location information of the same video monitoring device; and for each target vehicle, a plurality of location information of the target vehicle with respect to the same video surveillance device may be determined.
Specifically, in this step, the position information may be the plane coordinate corresponding to a reference pixel point coordinate. That is to say, for the plane coordinates corresponding to two reference pixel point coordinates that come from the same video monitoring device at adjacent moments, if the distance between the two plane coordinates is smaller than the first distance threshold, it can be determined that they are the plane coordinates corresponding to the reference pixel point coordinates of the same target vehicle.
The first distance threshold may be set according to an empirical value, for example, the first distance threshold may be 2 meters.
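The step-c1 comparison can be sketched as a simple threshold test on the Euclidean distance between two plane coordinates; the 2-metre default reflects the empirical value mentioned above, and the function name is illustrative:

```python
import math

FIRST_DISTANCE_THRESHOLD = 2.0  # metres, empirical value from the text

def same_vehicle_successive(pos_a, pos_b, threshold=FIRST_DISTANCE_THRESHOLD):
    """Decide whether two plane coordinates, taken from the same video
    monitoring device at adjacent timestamps, belong to the same
    target vehicle (step c1)."""
    return math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]) < threshold
```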
Step c2: for two pieces of position information meeting a second condition, when the distance value between the areas corresponding to the two pieces of position information is smaller than a second distance threshold, determining the two pieces of position information as position information of the same target vehicle.
Wherein the second condition comprises: the two position information are extracted from different vehicle images, and the different vehicle images are collected by adjacent video monitoring equipment and have the same timestamp.
It is understood that, by using step c2, the same target vehicle can be determined from the plurality of location information of two adjacent video monitoring devices; and for each target vehicle, a plurality of position information of the target vehicle about the two adjacent video monitoring devices can be determined.
Specifically, in step c2, the position information may be a plane coordinate corresponding to the reference pixel point coordinate; that is, for the plane coordinates z1(z1 is from the video monitoring apparatus 1) and z2(z2 is from the video monitoring apparatus 2) corresponding to the two reference pixel point coordinates from the adjacent two video monitoring apparatuses 1 and 2 at the same time, if the distance between z1 and z2 is smaller than the second distance threshold, then z1 and z2 can be determined as the position information of the same target vehicle.
The second distance threshold may be set according to an empirical value, for example, the second distance threshold may be 5 meters.
It should be noted that the first distance threshold and the second distance threshold are related to a time interval of capturing the vehicle image by the video monitoring device.
Optionally, in step c2, the position information may further include a lane identification. That is, for the above example, if the distance between z1 and z2 is less than the second distance threshold and the corresponding lane identifications are the same, then z1 and z2 may be determined to be the position information of the same target vehicle. Using the lane identification to assist the judgment based on the plane coordinates corresponding to the reference pixel point coordinates can improve the accuracy of determining the same target vehicle among the position information of two adjacent video monitoring devices.
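A sketch of the step-c2 comparison across two adjacent video monitoring devices, including the optional lane-identification check; function and parameter names are illustrative, and the 5-metre default is the empirical value quoted above:

```python
import math

SECOND_DISTANCE_THRESHOLD = 5.0  # metres, empirical value from the text

def same_vehicle_across_cameras(z1, z2, lane1=None, lane2=None,
                                threshold=SECOND_DISTANCE_THRESHOLD):
    """Decide whether two same-timestamp plane coordinates from adjacent
    video monitoring devices belong to the same target vehicle (step c2).
    When lane identifications are supplied, they must also agree."""
    close = math.hypot(z2[0] - z1[0], z2[1] - z1[1]) < threshold
    if lane1 is not None and lane2 is not None:
        return close and lane1 == lane2  # lane ID assists the judgment
    return close
```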
It should be noted that, if the position information of each target vehicle is determined by the video monitoring device, an alternative implementation manner of step a2 may be:
and c3, when the data processing device obtains the position information of each target vehicle in the images of the multiple vehicles, determining the position information corresponding to the same vehicle identification in the multiple target information of the same video monitoring device as the position information of the same target vehicle.
Wherein, step c3 is an alternative implementation of step c 1.
Step c4: for two pieces of position information meeting the second condition, when the distance value between the areas corresponding to the two pieces of position information is smaller than the second distance threshold, determining the two pieces of position information as position information of the same target vehicle.
Wherein the second condition comprises: the two position information are extracted from different vehicle images, and the different vehicle images are collected by adjacent video monitoring equipment and have the same timestamp.
Wherein step c4 is the same as step c 2.
The above steps c3 to c4 are specifically explained below. Because the shooting interval between successive vehicle images of the same video monitoring device is short, the position change of the same target vehicle in two adjacent vehicle images is small. Any video monitoring device can therefore identify and track each target vehicle across its own vehicle images using the vehicle recognition function, determining in each vehicle image the vehicle identification and corresponding position information of each target vehicle. For example, in the first vehicle image the position information of target vehicle 1 is determined to be x1 and that of target vehicle 2 to be x2; in the second vehicle image the position information of target vehicle 1 is determined to be y1 and that of target vehicle 2 to be y2; and so on. That is, in the target information sent to the data processing device by any video monitoring device, in addition to the position information and the timestamp of the vehicle image, the target information may also include the vehicle identification of the target vehicle corresponding to the position information. It should be noted that each video monitoring device identifies and tracks the target vehicles in its own vehicle images independently of the other devices.
Then, after receiving each piece of target information, the data processing device may determine, for a plurality of pieces of target information of the same video monitoring device, position information corresponding to the same vehicle identifier as position information of the same target vehicle according to the vehicle identifier in each piece of target information, and then may extract the position information of each target vehicle from the plurality of pieces of position information. That is, the same target vehicle may be tracked using the vehicle identification for multiple location information of the same video surveillance device.
For the plurality of location information of two adjacent video surveillance devices, since the vehicle identifications of the respective video surveillance devices are not correlated, the data processing device still needs to determine the same target vehicle according to the step c 2.
3) For step a 3:
the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
It is understood that the travel track of a target vehicle is a plurality of position information arranged in the sequence of the time stamps corresponding to the position information. Specifically, the driving track of a target vehicle is a plane coordinate corresponding to the coordinates of a plurality of reference pixels of the target vehicle arranged according to the sequence of the timestamps, and of course, the driving track may further include a plurality of lane identifications of the target vehicle. And the plane coordinates corresponding to the reference pixel point coordinates correspond to the lane marks one by one.
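Arranging the position information by timestamp can be sketched as a plain sort; the tuple layout `(timestamp, plane_coordinate, ...)` is an assumed representation of one observation:

```python
def build_track(observations):
    """Form a travel track from one target vehicle's observations.

    observations: list of (timestamp, plane_coordinate[, lane_id])
    tuples, possibly received out of order; the travel track is simply
    the same tuples arranged in timestamp order.
    """
    return sorted(observations, key=lambda obs: obs[0])
```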
Optionally, in an implementation manner, the data processing device determines, for each target vehicle, the travel information of the target vehicle based on the travel track of the target vehicle, and may include steps d and e:
step d, aiming at each target vehicle, calculating the corresponding instantaneous speed of the target vehicle by using the distance between two plane coordinates in the running track of the target vehicle and the difference value of the time stamps corresponding to the two plane coordinates;
specifically, a ratio of the distance to the difference may be obtained to obtain an instantaneous vehicle speed corresponding to the target vehicle.
If each instantaneous vehicle speed corresponding to the target vehicle is smaller than a preset vehicle speed over a period of time, it can be judged that the target vehicle has continuously driven at low speed during that period.
Step e, determining a first time stamp and a second time stamp for each target vehicle; calculating a difference between the first timestamp and the second timestamp; and calculating the section speed of the target vehicle by using the length of the target road and the difference value.
Wherein the first timestamp is: the timestamp corresponding to the plane coordinate, in the travel track of the target vehicle, that belongs to the start point coordinate of the target road; and the second timestamp is: the timestamp corresponding to the plane coordinate, in the travel track of the target vehicle, that belongs to the end point coordinate of the target road.
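A minimal sketch of the section-speed calculation in step e, assuming the travel track is already sorted by timestamp so that its first and last entries correspond to the start and end points of the target road:

```python
def section_speed(track, road_length):
    """Section speed (m/s) of one target vehicle over the target road.

    track: travel track as (timestamp, plane_coordinate) tuples in time
    order, first/last entries at the road's start/end points;
    road_length: length of the target road in metres.
    """
    first_ts = track[0][0]    # first timestamp (start point of road)
    second_ts = track[-1][0]  # second timestamp (end point of road)
    dt = second_ts - first_ts
    if dt <= 0:
        raise ValueError("track must span a positive time interval")
    return road_length / dt
```

For example, a 1000-metre road traversed in 50 seconds yields a section speed of 20 m/s.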
Optionally, in another implementation manner, the data processing device determines, for each target vehicle, the travel information of the target vehicle based on the travel track of the target vehicle, and may include step f and step g:
step f, aiming at each target vehicle, determining a target track of the target vehicle in a preset time period from the running tracks of the target vehicle;
specifically, for each target vehicle, two timestamps corresponding to a preset time period may be determined according to a timestamp corresponding to each piece of position information in the travel track of the target vehicle, and then the travel track between the two timestamps in the travel track of the target vehicle is determined to be the target track.
And g, judging whether the target vehicle has abnormal lane changing behaviors or not according to the plurality of lane marks in the target track.
Specifically, it may be determined whether, within the detection time period, the target vehicle continuously changes lanes, for example directly changing from the first lane to the third lane; it may also be determined whether, within the detection time period, the target vehicle weaves between lanes, for example changing from the first lane to the second lane and then back to the first lane. If either behavior occurs, it can be determined that the target vehicle has abnormal lane-change behavior. In practical applications, the detection time period may be a short time period set according to an empirical value, such as 5 minutes.
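A sketch of the two checks above on the sequence of lane identifications in a target track. Consecutive duplicates are collapsed first, since a vehicle reports the same lane across many frames; the detection logic is an illustrative reading of the text, not the claimed algorithm:

```python
def abnormal_lane_change(lane_ids):
    """Flag a jump across more than one lane (e.g. lane 1 directly to
    lane 3) or a change that immediately reverses (e.g. 1 -> 2 -> 1)
    in a sequence of numeric lane identifications."""
    # collapse consecutive duplicates into the sequence of lanes visited
    visited = [lane_ids[0]] if lane_ids else []
    for lane in lane_ids[1:]:
        if lane != visited[-1]:
            visited.append(lane)
    for prev, cur in zip(visited, visited[1:]):
        if abs(cur - prev) >= 2:   # skipped over an intermediate lane
            return True
    for a, b, c in zip(visited, visited[1:], visited[2:]):
        if a == c and a != b:      # weaving: out and straight back
            return True
    return False
```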
In addition, optionally, the data processing apparatus is further configured to:
counting the number of target vehicles passing through a preset section of the target road in a preset time period based on a plurality of target information sent by target video monitoring equipment and received in the preset time period, and calculating section flow by using the counted number of the target vehicles and the preset time period;
and the target video monitoring equipment is the video monitoring equipment at the preset section position.
For example, a road section at the end point of the target road may be used as a preset section. And taking the video monitoring equipment at the end position as target video monitoring equipment. Then, through the above process, the number of target vehicles passing through the end point of the target road within the preset time can be detected, and the ratio of the counted number of target vehicles to the preset time is obtained to obtain the cross-sectional flow.
Of course, the preset cross section may be any cross section of the target road, for example, a road cross section at the starting point of the target road or a road cross section at the midpoint of the target road, and so on, which is reasonable.
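The cross-section flow computation can be sketched as counting distinct vehicle identifications reported by the target video monitoring device within the preset time period. The `(device_id, timestamp, vehicle_id)` tuple layout is an assumed simplification of the target information:

```python
def section_flow(target_infos, window_start, window_end, target_device_id):
    """Cross-section flow (vehicles per second).

    Counts distinct target vehicles reported by the video monitoring
    device at the preset cross-section within [window_start, window_end),
    then divides by the window length.
    target_infos: iterable of (device_id, timestamp, vehicle_id) tuples.
    """
    window = window_end - window_start
    if window <= 0:
        raise ValueError("time window must have positive length")
    vehicles = {vid for dev, ts, vid in target_infos
                if dev == target_device_id and window_start <= ts < window_end}
    return len(vehicles) / window
```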
Therefore, in the optional multiple modes, the scheme provided by the embodiment of the invention can extract multiple traffic parameters such as instantaneous vehicle speed, interval vehicle speed, section flow and abnormal driving behaviors related to lane change and the like based on the running track of the target vehicle, and can provide a large amount of data bases for traffic detection.
In addition, optionally, the data processing device is further configured to determine a road segment matched with the target vehicle in combination with the road network data, so as to implement vehicle positioning, tracking, road segment navigation, and the like. The implementation steps of the process are exemplarily described below in two embodiments.
Optionally, the first embodiment of determining the road segment matched with the target vehicle by combining the road network data may include the following steps:
firstly, obtaining road network data;
the road network data contains plane coordinates of a plurality of road sections of the region where the target road is located.
It is to be understood that the road network data may be data used for various navigation maps, and the road network data includes plane coordinates of each road segment in the area where the target road is located.
And secondly, determining the road sections matched with the target vehicles according to the position relation between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicles and the plane coordinates of each road section for each target vehicle.
Specifically, for each target vehicle, when a plane coordinate corresponding to a reference pixel point coordinate of the target vehicle is obtained, the plane coordinate corresponding to the reference pixel point coordinate may be compared with the plane coordinates of each road segment, and a position relationship between the plane coordinate corresponding to the reference pixel point coordinate and the plane coordinate of each road segment is determined; determining the road section to which the plane coordinate corresponding to the reference pixel point coordinate belongs according to the determined position relation, and then determining the road section currently matched with the target vehicle; and the specific position of the target vehicle on the currently matched road section can be determined according to the plane coordinate corresponding to the reference pixel point coordinate.
In this embodiment, for each target vehicle, the process of determining the road section matched with the target vehicle based on the positional relationship between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle and the plane coordinates of each road section may be as follows: first, the distance between the plane coordinates corresponding to the reference pixel point coordinates and each road section is calculated, and the included angle between the line segment formed by the plane coordinates corresponding to two reference pixel point coordinates and the line segment corresponding to each road section is calculated; then, the obtained distances and included angles are used to determine the positional relationship between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle and each road section, so as to determine the road section matched with the target vehicle.
The following illustrates the process, and it is assumed that the plane coordinate corresponding to the current reference pixel coordinate of the target vehicle is a1, and the plane coordinate corresponding to the reference pixel coordinate before a1 is a 0. There are three road segments X, Y and Z in the road network data. It will be appreciated that X, Y and Z are line segments made up of a plurality of planar coordinates.
First, the perpendicular distances d of A1 from X, Y and Z, respectively, can be calculated.
Then, the included angle θ between the line segment A0-A1 and each of X, Y and Z, respectively, can be calculated.
Then, one implementation may be: judging whether there is a road segment whose d and θ satisfy the following conditions: d is less than a preset distance and θ is less than a preset angle; if so, it is determined that A1 belongs to that road segment and the target vehicle matches that road segment. The preset distance and the preset angle may be set according to empirical values; for example, the preset distance may be 30 meters and the preset angle may be 45 degrees.
Optionally, another implementation may be: judging whether a road segment satisfies the following conditions: d is the minimum and θ is less than the preset angle; if so, it is determined that A1 belongs to that road segment and the target vehicle matches that road segment.
Optionally, another implementation may be: judging whether a road segment satisfies the following conditions: d is the minimum and θ is the minimum; if so, it is determined that A1 belongs to that road segment and the target vehicle matches that road segment.
Optionally, in other implementations, the road segment with the smallest d may also be determined as the road segment to which A1 belongs, that is, the road segment matched with the target vehicle; or it is also reasonable to determine the road segment with the smallest included angle θ as the road segment to which A1 belongs, that is, the road segment matched with the target vehicle.
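A minimal sketch of the first distance-and-angle implementation, with road segments represented as endpoint pairs and the 30-metre / 45-degree thresholds taken from the empirical values quoted in the text. The point-to-line distance uses each segment's supporting line, a simplifying assumption:

```python
import math

def match_segment(a0, a1, segments, max_dist=30.0, max_angle_deg=45.0):
    """Match the vehicle heading A0 -> A1 to a road segment.

    segments: list of ((x1, y1), (x2, y2)) endpoint pairs; returns the
    index of the first segment whose perpendicular distance to A1 is
    below max_dist and whose included angle with the heading is below
    max_angle_deg, or None if no segment qualifies.
    """
    hx, hy = a1[0] - a0[0], a1[1] - a0[1]          # heading vector
    for i, ((x1, y1), (x2, y2)) in enumerate(segments):
        sx, sy = x2 - x1, y2 - y1                  # segment direction
        seg_len = math.hypot(sx, sy)
        # perpendicular distance from A1 to the segment's supporting line
        d = abs(sx * (a1[1] - y1) - sy * (a1[0] - x1)) / seg_len
        # included angle between heading and segment direction (0..90 deg)
        cos_angle = abs(hx * sx + hy * sy) / (math.hypot(hx, hy) * seg_len)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        if d < max_dist and angle < max_angle_deg:
            return i
    return None
```

In the X/Y/Z example above, a vehicle heading east past a perpendicular segment would fail the angle test but match a nearby parallel segment.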
Optionally, the second embodiment of determining the road segment matched with the target vehicle by combining the road network data may include the following steps:
the method comprises the steps that firstly, for each target vehicle, video monitoring equipment corresponding to a vehicle image of the target vehicle is determined;
and secondly, determining the road section matched with the target vehicle according to the corresponding relation between the predetermined video monitoring equipment and the road section in the road network data.
The corresponding relation between the video monitoring equipment and the road sections in the road network data comprises the following steps: and road sections in the road network data corresponding to the video monitoring equipment. The correspondence may be determined during testing of the sample vehicle. The corresponding relation may be a table or the like. For clarity of layout, the test procedure is described hereinafter.
The above steps are exemplified: the current vehicle image of the target vehicle 1 is from the video monitoring device 3, and the corresponding relation between the video monitoring device and the road section in the road network data is found, and if the road section corresponding to the video monitoring device 3 is X, it is determined that the target vehicle 1 is currently running on the road section X.
Of course, if the specific position of the target vehicle on the matched road section is to be determined, the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle needs to be determined.
It can be seen that, compared with the first embodiment, the second embodiment can easily and quickly determine the road section matched with a target vehicle in the road network data.
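A minimal sketch of the lookup described in the second embodiment, using the device 3 / road segment X example from above; the table contents and identifiers are hypothetical:

```python
# Hypothetical correspondence table, determined during the sample-vehicle test:
# video monitoring device ID -> road segment ID in the road network data.
DEVICE_TO_SEGMENT = {
    "device_1": "segment_W",
    "device_2": "segment_W",
    "device_3": "segment_X",
}

def match_segment_by_device(device_id):
    """Return the road segment a target vehicle is travelling on, given the
    video monitoring device that captured its current vehicle image."""
    return DEVICE_TO_SEGMENT[device_id]
```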
For the above two embodiments, the plane coordinate corresponding to the reference pixel point coordinate may be displayed on the matched road segment on the navigation map, for example, the plane coordinate may be displayed in a dot or arrow manner. It can be understood that the sequentially obtained plane coordinates corresponding to the coordinates of each reference pixel point of the target vehicle can be displayed on the respective matched road segments in real time, so that the real-time and continuous driving track formed by the plane coordinates corresponding to the coordinates of each reference pixel point of the target vehicle can be displayed on the navigation map.
According to the scheme provided by the embodiment of the invention, the target vehicle in the vehicle image can be matched to the road section of the navigation map, the traffic microscopic data extraction is realized, and the driving track of the target vehicle is unified to the road network data, so that a good data basis can be provided for the subsequent fusion of multiple data sources.
The test procedure using a sample vehicle according to an exemplary embodiment of the present invention is now described as a supplement. A sample test vehicle can be used to determine the perspective transformation matrix of each video monitoring device and to determine the lane plane coordinates. It should be noted that the determination of the perspective transformation matrix and the lane plane coordinates may be performed by the data processing device or by a video monitoring device. Specifically, the method comprises the following steps:
First, the conditions required for the test procedure are: a straight target road; a test time with good visibility; a test vehicle equipped with a GPS device; and time synchronization between the clocks of the video monitoring devices and the GPS data of the test vehicle.
It should be noted that the testing time may be chosen during the daytime, on a sunny day, and when there are few vehicles on the road, so as to obtain a better testing effect. The higher the positioning accuracy of the GPS device, the better the coordinate conversion effect of the coordinate mapping matrix; in the embodiment of the present invention, the GPS device needs at least sub-meter positioning accuracy.
Specifically, the test process may be: and the test vehicle runs from the starting point of the target road to the end point of the target road, each video monitoring device continuously shoots a test image about the test vehicle during running, and the perspective transformation matrix and the lane plane coordinate of each video monitoring device are determined based on the shot test image.
It should be noted that, in order to obtain valid matching points for calculating the perspective transformation matrix, the test vehicle cannot stay in the same lane for the entire drive. As an alternative implementation, the test vehicle may complete a full pass in the left lane and then complete a full pass in the right lane.
Specifically, the process of determining the perspective transformation matrix may include steps a to D:
Step A, at a plurality of test time points during the test vehicle's drive on the target road, respectively acquiring the GPS coordinates of the test vehicle and, simultaneously, a test image;
wherein the positions of the test vehicles at the plurality of test time points are not collinear.
Note that the GPS coordinates may be acquired from a GPS device of the test vehicle, and the GPS coordinates are three-dimensional coordinates.
Step B, converting each GPS coordinate into a corresponding plane coordinate by using a coordinate projection method;
the coordinate projection method is an existing method and can convert a three-dimensional GPS coordinate into a two-dimensional plane coordinate. Wherein the planar coordinates are used to characterize a location on the target road.
Step C, determining the reference pixel point coordinates of the test vehicle in each test image;
wherein, the reference pixel point coordinate is: and the coordinates of the pixel points corresponding to the central points of the preset rectangular frames containing the test vehicles are identified in the test images.
The data processing equipment or any video monitoring equipment has an image recognition function, and can recognize the position of a preset rectangular frame containing a test vehicle from a test image; and then determining the coordinates of the pixel points corresponding to the central points of the preset rectangular frames in each test image, and taking the coordinates as the coordinates of the reference pixel points of the test vehicle.
Of course, in the embodiment of the present invention, the shape of the detection frame including the test vehicle is not limited to a rectangle, and may be a circle, a trapezoid, or the like.
Step D, taking the plane coordinate of each test time point and the corresponding reference pixel point coordinate as a group of matching points, and calculating the perspective transformation matrix by using the plurality of groups of matching points.
The perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates. The number of groups of matching points is at least four.
The process of calculating the perspective transformation matrix using multiple sets of matching points is prior art and will not be described here.
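Although the document defers the computation to prior art, the perspective transformation matrix of step D can be recovered from the matching points with a least-squares solve; the eight-parameter formulation below is a standard sketch, not necessarily the exact method used:

```python
import numpy as np

def perspective_matrix(pixel_pts, plane_pts):
    """Least-squares solve for the 3x3 perspective matrix H mapping pixel
    coordinates to plane coordinates, from >= 4 matching point pairs
    (no three of which may be collinear)."""
    A, b = [], []
    for (x, y), (u, v) in zip(pixel_pts, plane_pts):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), and likewise for v.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_perspective(H, x, y):
    """Map one pixel coordinate to a plane coordinate."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```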
An own perspective transformation matrix may be determined for each video surveillance device and the determined perspective transformation matrix may be stored in the video surveillance device or in a data processing device.
After a test image is acquired, the lane plane coordinates may be determined. The determining process of the lane plane coordinates may include steps E and F:
Step E, determining the plane coordinates corresponding to the lane pixel point coordinates by using the mapping relationship between preset pixel point coordinates and plane coordinates;
The lane pixel point coordinates are: coordinates obtained by manually calibrating pixel points of the lane edge lines on a test image of the target road. During calibration, two pixel points may be selected from the pixel points of a lane edge line for coordinate calibration. In the embodiment of the present invention, if the target road is a straight segment, the coordinates of all pixel points of the lane edge line may be determined from the coordinates of two pixel points, for example from the pixel point coordinates corresponding to the start point and the end point of the target road, using the linear relationship between them.
In this step, the mapping relationship between the preset pixel coordinates and the plane coordinates may be determined by other tools.
It can be understood that, if the target road is curved, the coordinates of each pixel point on the side line of the lane can be calibrated, and then the plane coordinates corresponding to the pixel point coordinates of each lane are determined by using the mapping relationship between the preset pixel point coordinates and the plane coordinates.
Step F, taking the plane coordinates corresponding to the determined lane pixel point coordinates as the lane plane coordinates.
It is understood that two lane plane coordinates may be determined first, and a plurality of lane plane coordinates may then be calculated from those two using the linear relationship.
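The interpolation described here, deriving a plurality of lane plane coordinates from two calibrated endpoint coordinates via the linear relationship, can be sketched as:

```python
def lane_plane_coordinates(start, end, n):
    """Interpolate n evenly spaced lane plane coordinates between the two
    calibrated endpoint coordinates of a straight lane edge line."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i / (n - 1),
             y0 + (y1 - y0) * i / (n - 1)) for i in range(n)]
```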
In a second aspect, from the perspective of a video monitoring device, an embodiment of the present invention provides a road multi-video joint detection method. The method is applied to video monitoring equipment in a road multi-video combined detection system. The system further comprises a plurality of video monitoring devices, the video monitoring devices are arranged at intervals at the roadside of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped.
It should be noted that an execution subject of the road multi-video joint detection method provided by the embodiment of the present invention may be a road multi-video joint detection apparatus, and the apparatus may be operated in a video monitoring device; the video monitoring device may be a camera or the like.
As shown in fig. 2, from the perspective of a video monitoring device, a road multi-video joint detection method provided by the embodiment of the present invention may include the following steps:
s201, determining target information based on a vehicle image every time a vehicle image is acquired;
In the embodiment of the invention, any video monitoring device may be a camera such as a dome camera, a bullet camera, or the like, or a capture camera such as a checkpoint camera, a high-point monitoring camera, or the like; of course, any device having a video monitoring function may serve as a video monitoring device in the embodiment of the present invention, for example an electronic police camera. The plurality of video monitoring devices may be spaced apart on a roadside of the target road, and may be disposed on the same side of the target road or on different sides. The spacing between two adjacent video monitoring devices may be the same or different. It is emphasized, however, that the monitoring areas of two adjacent video monitoring devices overlap.
The setting mode of each video monitoring device may be: installing each video monitoring device on a special erection device at the roadside of the target road; of course, for convenience of installation, the video monitoring device may be installed on a lamp post on the road side of the target road.
In the embodiment of the present invention, each video monitoring device continuously captures vehicles on the target road to obtain vehicle images; for example, the time interval between successive captures may be 100 milliseconds, and one captured vehicle image may contain a plurality of vehicles. The value of 100 milliseconds is an empirical value determined based on the detection requirements.
For each video monitoring device, each time a vehicle image is acquired, target information is determined based on the vehicle image, and the determined target information is sent to the data processing device.
Wherein the target information is related to each target vehicle in the vehicle image; the target information may include the vehicle image, a position of each target vehicle on the target road determined from the vehicle image, a time when the vehicle image is acquired, and the like. Of course, the contents of the target information are not limited to the above.
In this step, the determining of the target information based on the vehicle image may include the following two implementation processes:
the first implementation process comprises the following steps: and taking the vehicle image and the timestamp of the vehicle image as target information.
The second implementation process comprises the following steps: and determining the position information of each target vehicle in the vehicle image, and taking the determined position information and the timestamp of the vehicle image as target information.
In the implementation process, the video monitoring device determines the position information of each target vehicle in any vehicle image, and takes the determined position information and the timestamp of the vehicle image as target information.
For the implementation process of determining the position information of each target vehicle in any vehicle image, reference may be made to the detailed description of the corresponding content in the foregoing, and details are not repeated here.
The video monitoring device may adopt any one of the above-described implementation manners, and determine target information based on a vehicle image every time a vehicle image is acquired.
S202, sending the determined target information to a data processing device so that the data processing device determines the running track of each target vehicle based on the received target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
For the processing procedure of the data processing device, reference may be made to the related contents of the road multi-video joint detection method provided from the perspective of the data processing device, which are not repeated here.
In the scheme provided by the embodiment of the invention, a plurality of video monitoring devices are arranged at intervals at the road side of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. Any video monitoring device can determine target information related to each target vehicle in a vehicle image based on the vehicle image when acquiring the vehicle image; then sending the determined target information to a data processing device so that the data processing device determines the running track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle. Therefore, the aim of effectively detecting the driving information of the vehicles on the section of road by using the video monitoring equipment can be fulfilled by applying the embodiment of the invention.
In a third aspect, in comparison with a road multi-video joint detection method provided from the perspective of data processing equipment, an embodiment of the present invention further provides a road multi-video joint detection apparatus. The device is applied to data processing equipment in a road multi-video combined detection system, the system comprises a plurality of video monitoring equipment and data processing equipment, the video monitoring equipment is arranged at the roadside of a target road at intervals, and monitoring areas of two adjacent video monitoring equipment are overlapped; as shown in fig. 3, the apparatus includes:
a receiving module 301, configured to continuously receive target information sent by multiple video monitoring devices; when any video monitoring device acquires a vehicle image, determining target information based on the vehicle image, and sending the determined target information to a data processing device; the target information is related to each target vehicle in the vehicle image;
a first determining module 302, configured to determine a driving track of each target vehicle based on the received target information;
the second determining module 303 is configured to determine, for each target vehicle, the traveling information of the target vehicle based on the traveling track of the target vehicle.
Optionally, in an embodiment of the present invention, the first determining module 302 includes:
the first determining submodule is used for determining, for each piece of received target information and based on that target information, the position information of each target vehicle in the vehicle image corresponding to the target information; the position information of any target vehicle is used for representing the position of the target vehicle on the target road;
the second determining submodule is used for determining, when the position information of each target vehicle in a plurality of vehicle images has been obtained, which pieces of position information in the plurality of vehicle images belong to the same target vehicle;
the arrangement submodule is used for arranging the position information of each target vehicle according to a preset arrangement mode aiming at each target vehicle to form a running track of the target vehicle; the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
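The grouping and timestamp-ordered arrangement performed by the second determining submodule and the arrangement submodule can be sketched as follows; the record layout (vehicle identifier, timestamp, plane coordinate) is an assumption for illustration:

```python
from collections import defaultdict

def build_trajectories(records):
    """records: iterable of (vehicle_id, timestamp_ms, plane_coord), where
    timestamp_ms is the timestamp of the vehicle image the position came from.
    Returns {vehicle_id: [plane_coord, ...]} ordered by timestamp, i.e. the
    travel track of each target vehicle."""
    by_vehicle = defaultdict(list)
    for vid, ts, coord in records:
        by_vehicle[vid].append((ts, coord))
    return {vid: [c for _, c in sorted(pts)] for vid, pts in by_vehicle.items()}
```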
Optionally, in the embodiment of the present invention, the target information includes a vehicle image acquired by a video monitoring device and a timestamp of the vehicle image;
the first determination submodule includes:
and a first determining unit configured to determine, for each piece of received target information, position information of each target vehicle in a vehicle image included in the target information.
Optionally, in this embodiment of the present invention, the target information includes: the video monitoring equipment determines the position information of each target vehicle and the time stamp of the vehicle image from the acquired vehicle image;
the first determination submodule includes:
and a second determination unit configured to extract, for each piece of the received target information, position information of the respective target vehicle from the target information.
Optionally, in an embodiment of the present invention, the second determining sub-module is specifically configured to:
when the position information of each target vehicle in the plurality of vehicle images is obtained, aiming at two pieces of position information meeting a first condition, when the distance value between areas corresponding to the two pieces of position information is smaller than a first distance threshold value, the two pieces of position information are determined as the position information of the same target vehicle; wherein the first condition comprises: the two pieces of position information are extracted from different vehicle images, the different vehicle images are collected by the same video monitoring equipment, and the timestamps are adjacent;
for two pieces of position information meeting a second condition, when the distance value between the areas corresponding to the two pieces of position information is smaller than a second distance threshold value, determining the two pieces of position information as the position information of the same target vehicle; wherein the second condition comprises: the two position information are extracted from different vehicle images, and the different vehicle images are collected by adjacent video monitoring equipment and have the same timestamp.
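A minimal sketch of the two threshold tests above; the threshold values are assumptions chosen for illustration, since the embodiment only states that a first and a second distance threshold exist:

```python
import math

# Assumed threshold values, for illustration only.
SAME_CAMERA_THRESHOLD_M = 3.0   # adjacent-timestamp frames from the same device
CROSS_CAMERA_THRESHOLD_M = 1.5  # same-timestamp frames from adjacent devices

def same_vehicle(pos_a, pos_b, same_camera):
    """Decide whether two position records (plane coordinates) belong to the
    same target vehicle, using the threshold for the applicable condition."""
    threshold = SAME_CAMERA_THRESHOLD_M if same_camera else CROSS_CAMERA_THRESHOLD_M
    return math.dist(pos_a, pos_b) < threshold
```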
Optionally, in an embodiment of the present invention, the first determining unit includes:
the first determining subunit is used for determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
the second determining subunit is used for determining the plane coordinates corresponding to the reference pixel point coordinates of each target vehicle by using the mapping relation between the preset pixel point coordinates and the plane coordinates;
and the third determining subunit is used for determining the position information of each target vehicle based on the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle.
Optionally, in an embodiment of the present invention, the second determining subunit is specifically configured to:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
Optionally, in an embodiment of the present invention, the third determining subunit is specifically configured to:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
Optionally, in this embodiment of the present invention, the second determining module 303 is specifically configured to:
aiming at each target vehicle, calculating the corresponding instantaneous speed of the target vehicle by using the distance between two plane coordinates in the running track of the target vehicle and the difference value of the timestamps corresponding to the two plane coordinates;
determining, for each target vehicle, a first timestamp and a second timestamp; calculating a difference between the first timestamp and the second timestamp; and calculating the interval speed of the target vehicle by using the length of the target road and the difference. The first timestamp is the timestamp corresponding to the plane coordinate, in the travel track of the target vehicle, that belongs to the start point of the target road; the second timestamp is the timestamp corresponding to the plane coordinate, in the travel track of the target vehicle, that belongs to the end point of the target road.
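The two speed computations can be sketched as follows, assuming timestamps in milliseconds and plane coordinates in metres (the units are assumptions):

```python
import math

def instantaneous_speed(p1, t1_ms, p2, t2_ms):
    """Instantaneous speed (m/s) from two plane coordinates on the travel
    track and the timestamps of the vehicle images they came from."""
    return math.dist(p1, p2) / ((t2_ms - t1_ms) / 1000.0)

def interval_speed(road_length_m, start_ts_ms, end_ts_ms):
    """Interval speed (m/s): target road length divided by the time between
    the start-point and end-point timestamps of the travel track."""
    return road_length_m / ((end_ts_ms - start_ts_ms) / 1000.0)
```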
Optionally, in this embodiment of the present invention, the second determining module 303 is specifically configured to:
for each target vehicle, determining a target track of the target vehicle in a preset time period from the running tracks of the target vehicle;
and judging whether the target vehicle has abnormal lane changing behavior or not according to the plurality of lane marks in the target track.
Optionally, in an embodiment of the present invention, the apparatus further includes:
the calculation module is used for counting the number of target vehicles passing through a preset section of the target road in a preset time length based on a plurality of target information sent by the target video monitoring equipment in the preset time length and received, and calculating section flow by using the counted number of the target vehicles and the preset time length; and the target video monitoring equipment is the video monitoring equipment at the preset section position.
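The cross-section flow computation performed by the calculation module can be sketched as follows; reporting in vehicles per hour is an assumption for illustration:

```python
def section_flow(pass_timestamps_ms, window_start_ms, window_end_ms):
    """Cross-section flow (vehicles per hour): count the target vehicles whose
    passing timestamps at the preset cross-section fall inside the preset time
    window, and divide by the window length."""
    count = sum(window_start_ms <= t < window_end_ms for t in pass_timestamps_ms)
    hours = (window_end_ms - window_start_ms) / 3_600_000
    return count / hours
```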
Optionally, in an embodiment of the present invention, the apparatus further includes:
the acquisition module is used for acquiring road network data; the road network data comprises plane coordinates of a plurality of road sections of the region where the target road is located;
and the matching module is used for determining the road sections matched with the target vehicles according to the position relation between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicles and the plane coordinates of each road section.
In the scheme provided by the embodiment of the invention, a plurality of video monitoring devices are arranged at intervals at the road side of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. The data processing device can continuously receive target information which is sent by a plurality of video monitoring devices and is determined based on the acquired vehicle images and is related to each target vehicle in the vehicle images; then, based on the received target information, the running track of each target vehicle can be determined; then, for each target vehicle, the travel information of the target vehicle may be determined based on the travel track of the target vehicle. Therefore, the aim of effectively detecting the driving information of the vehicles on the section of road by using the video monitoring equipment can be fulfilled by applying the embodiment of the invention.
In a fourth aspect, in comparison with the road multi-video joint detection method provided from the perspective of video monitoring equipment, the embodiment of the invention also provides a road multi-video joint detection device. The device is applied to video monitoring equipment in a road multi-video combined detection system, the system comprises a plurality of video monitoring equipment and data processing equipment, the video monitoring equipment is arranged at the roadside of a target road at intervals, and the monitoring areas of two adjacent video monitoring equipment are overlapped; as shown in fig. 4, the apparatus includes:
the determining module 401 is configured to determine target information based on a vehicle image each time the vehicle image is acquired; wherein the target information is related to each target vehicle in the vehicle image;
a sending module 402, configured to send the determined target information to a data processing device, so that the data processing device determines a driving track of each target vehicle based on the received multiple pieces of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
Optionally, in this embodiment of the present invention, the determining module 401 includes:
a first determination sub-module for taking the vehicle image and a time stamp of the vehicle image as target information; or
And the second determining submodule is used for determining the position information of each target vehicle in the vehicle image and taking the determined position information and the time stamp of the vehicle image as target information.
Optionally, in an embodiment of the present invention, the second determining sub-module includes:
the first determining unit is used for determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
the second determining unit is used for determining the plane coordinates corresponding to the reference pixel point coordinates of each target vehicle by utilizing the mapping relation between the preset pixel point coordinates and the plane coordinates;
and the third determining unit is used for determining the position information of each target vehicle based on the plane coordinate corresponding to the reference pixel point coordinate of the target vehicle.
Optionally, in an embodiment of the present invention, the second determining unit is specifically configured to:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
Optionally, in an embodiment of the present invention, the third determining unit is specifically configured to:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
In the scheme provided by the embodiment of the invention, a plurality of video monitoring devices are arranged at intervals at the road side of the target road, and the monitoring areas of two adjacent video monitoring devices are overlapped. Any video monitoring device can determine target information related to each target vehicle in a vehicle image based on the vehicle image when acquiring the vehicle image; then sending the determined target information to a data processing device so that the data processing device determines the running track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle. Therefore, the aim of effectively detecting the driving information of the vehicles on the section of road by using the video monitoring equipment can be fulfilled by applying the embodiment of the invention.
In a fifth aspect, with respect to a road multi-video joint detection method provided from the perspective of a data processing device, an embodiment of the present invention further provides an electronic device, which is a data processing device, as shown in fig. 5, the data processing device may include a processor 501 and a memory 502, wherein,
the memory 502 is used for storing computer programs;
the processor 501 is configured to implement the steps of the road multi-video joint detection method provided by the embodiment of the invention from the perspective of a data processing device when executing the program stored in the memory 502.
The memory may include RAM (Random Access Memory) or NVM (Non-Volatile Memory), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Through the above-mentioned electronic device, the following can be realized: a plurality of video monitoring devices are arranged at intervals on the road side of a target road, and the monitoring areas of two adjacent video monitoring devices overlap. The data processing device can continuously receive target information, sent by the plurality of video monitoring devices and determined based on the acquired vehicle images, that is related to each target vehicle in the vehicle images; then determine the travel track of each target vehicle based on the received target information; and then determine, for each target vehicle, the travel information of the target vehicle based on the travel track of the target vehicle. Therefore, by applying the embodiment of the invention, the driving information of vehicles on a section of road can be effectively detected using video monitoring devices.
In a sixth aspect, with respect to a road multi-video joint detection method provided from the perspective of a video monitoring device, an embodiment of the present invention further provides an electronic device, which is a video monitoring device, as shown in fig. 6, the video monitoring device may include a processor 601 and a memory 602, wherein,
the memory 602 is used for storing computer programs;
the processor 601 is configured to implement the steps of the road multi-video joint detection method provided by the embodiment of the invention from the perspective of a video monitoring device when executing the program stored in the memory 602.
The Memory may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
With the above electronic device, the following can be achieved: a plurality of video monitoring devices are arranged at intervals on the road side of a target road, and the monitoring areas of two adjacent video monitoring devices overlap. Each time any video monitoring device acquires a vehicle image, it can determine, based on that image, target information related to each target vehicle in the image, and then send the determined target information to the data processing device, so that the data processing device determines the travel track of each target vehicle based on the received target information and, for each target vehicle, determines the travel information of that vehicle based on its travel track. Therefore, by applying the embodiment of the present invention, the driving information of vehicles on the road section can be effectively detected using video monitoring equipment.
In a seventh aspect, corresponding to the road multi-video joint detection method provided from the perspective of a data processing device, an embodiment of the present invention further provides a computer-readable storage medium, which can be applied to the data processing device. The computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the road multi-video joint detection method provided by an embodiment of the present invention from the perspective of a data processing device.
The above computer-readable storage medium stores a program that, when executed, performs the road multi-video joint detection method provided by the embodiment of the present invention from the perspective of a data processing device, and can therefore achieve the following: a plurality of video monitoring devices are arranged at intervals on the road side of a target road, and the monitoring areas of two adjacent video monitoring devices overlap. The data processing device can continuously receive target information that is sent by the plurality of video monitoring devices, is determined based on the acquired vehicle images, and is related to each target vehicle in those images; it can then determine the travel track of each target vehicle based on the received target information and, for each target vehicle, determine the travel information of that vehicle based on its travel track. Therefore, by applying the embodiment of the present invention, the driving information of vehicles on the road section can be effectively detected using video monitoring equipment.
In an eighth aspect, corresponding to the road multi-video joint detection method provided from the perspective of a video monitoring device, an embodiment of the present invention further provides a computer-readable storage medium, which can be applied to the video monitoring device. The computer-readable storage medium has stored therein a computer program which, when executed by a processor, implements the steps of the road multi-video joint detection method provided by an embodiment of the present invention from the perspective of a video monitoring device.
The above computer-readable storage medium stores a program that, when executed, performs the road multi-video joint detection method provided by the embodiment of the present invention from the perspective of a video monitoring device, and can therefore achieve the following: a plurality of video monitoring devices are arranged at intervals on the road side of a target road, and the monitoring areas of two adjacent video monitoring devices overlap. Each time any video monitoring device acquires a vehicle image, it can determine, based on that image, target information related to each target vehicle in the image, and then send the determined target information to the data processing device, so that the data processing device determines the travel track of each target vehicle based on the received target information and, for each target vehicle, determines the travel information of that vehicle based on its travel track. Therefore, by applying the embodiment of the present invention, the driving information of vehicles on the road section can be effectively detected using video monitoring equipment.
For the embodiments of the electronic device and the computer-readable storage medium, since their content is substantially similar to that of the foregoing method embodiments, the description is relatively brief; for relevant details, reference may be made to the corresponding parts of the method embodiments.
In a ninth aspect, corresponding to the road multi-video joint detection methods provided from the perspective of the data processing device and from the perspective of the video monitoring device, an embodiment of the present invention further provides a road multi-video joint detection system. In the system, a plurality of video monitoring devices are arranged at intervals on the road side of the target road, and the monitoring areas of two adjacent video monitoring devices overlap.
As shown in fig. 7, the road multi-video joint detection system 700 includes a plurality of video monitoring devices 710 and a data processing device 720. Here, only one data processing device 720 is shown in fig. 7 as an example, but the number of the data processing devices 720 in the road multi-video joint detection system is not limited to one. Wherein:
each video monitoring device 710 is configured to determine target information based on a vehicle image each time a vehicle image is acquired, and to send the determined target information to the data processing device 720;
and the data processing device 720 is configured to continuously receive the target information sent by the plurality of video monitoring devices, determine the travel track of each target vehicle based on the received target information, and determine, for each target vehicle, the travel information of the target vehicle based on its travel track.
For the processing procedure of the data processing device, reference may be made to the related contents of the foregoing road multi-video joint detection method provided from the perspective of the data processing device; for the processing procedure of the video monitoring device, reference may be made to the related contents of the road multi-video joint detection method provided from the perspective of the video monitoring device, and details are not repeated here.
In the scheme provided by the embodiment of the present invention, a plurality of video monitoring devices are arranged at intervals on the road side of the target road, and the monitoring areas of two adjacent video monitoring devices overlap. Therefore, for each target vehicle, the plurality of video monitoring devices can acquire a series of consecutive vehicle images containing the target vehicle and thereby determine a plurality of pieces of target information; the data processing device can determine the position information of the target vehicle based on the received target information, determine the travel track of the target vehicle on the target road from that position information, and then determine the travel information of the target vehicle from the travel track. A target vehicle spanning multiple video monitoring devices can thus be tracked through the overlapping areas of the vehicle images. The method is simple, effective, and easy to implement in engineering practice; the monitoring range can cover the entire target road section, enabling whole-course detection of the target road. Therefore, by applying the embodiment of the present invention, the driving information of vehicles on the road section can be effectively detected using video monitoring equipment.
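The overall flow at the data processing device described above (aggregate target information, group it per vehicle, order it by timestamp, and derive travel information) can be sketched as follows. This is an illustrative simplification, not the patented implementation: here each record already carries a vehicle identifier, whereas the embodiments derive vehicle identity by matching positions across images, and average speed stands in for the various kinds of travel information.

```python
import math
from collections import defaultdict

def process(target_infos):
    """target_infos: list of (vehicle_id, timestamp, (x, y)) records
    aggregated from all video monitoring devices (assumed layout)."""
    tracks = defaultdict(list)
    for vid, ts, xy in target_infos:   # group records per target vehicle
        tracks[vid].append((ts, xy))
    travel_info = {}
    for vid, recs in tracks.items():
        recs.sort()                    # order each track by timestamp
        (t0, p0), (t1, p1) = recs[0], recs[-1]
        dist = math.dist(p0, p1)       # straight-line distance, for brevity
        travel_info[vid] = dist / (t1 - t0) if t1 > t0 else 0.0
    return travel_info

infos = [('car1', 0.0, (0.0, 0.0)), ('car1', 10.0, (150.0, 0.0))]
print(process(infos))  # {'car1': 15.0}
```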
It is noted that, herein, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (23)
1. A road multi-video joint detection method is characterized in that the method is applied to data processing equipment in a road multi-video joint detection system, the system also comprises a plurality of video monitoring equipment, the video monitoring equipment are arranged at the roadside of a target road at intervals, and the monitoring areas of two adjacent video monitoring equipment are overlapped; the method comprises the following steps:
continuously receiving target information sent by the plurality of video monitoring devices; wherein, each time any video monitoring device acquires a vehicle image, it determines target information based on the vehicle image and sends the determined target information to the data processing device; and the target information is related to each target vehicle in the vehicle image;
determining a running track of each target vehicle based on the received plurality of target information;
determining, for each target vehicle, travel information of the target vehicle based on the travel track of the target vehicle.
2. The method of claim 1, wherein determining a travel trajectory for each target vehicle based on the received plurality of target information comprises:
for each piece of received target information, determining the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information; the position information of any target vehicle is used for representing the position of the target vehicle on the target road;
when the position information of each target vehicle in a plurality of vehicle images is obtained, determining, based on the position information of the target vehicles in the plurality of vehicle images, which pieces of position information across the plurality of vehicle images belong to the same target vehicle;
aiming at each target vehicle, arranging the position information of the target vehicle according to a preset arrangement mode to form a running track of the target vehicle; the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
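The arrangement step above (forming a vehicle's travel track by ordering its position information according to the timestamps of the source images) can be sketched as follows; the tuple-based record layout is an assumption for illustration:

```python
def build_travel_track(position_records):
    """position_records: list of (timestamp, (x, y)) pairs for one target
    vehicle, where each timestamp is that of the vehicle image the
    position information belongs to. Returns coordinates in time order."""
    return [coord for _, coord in sorted(position_records, key=lambda r: r[0])]

records = [(3.0, (30.0, 5.0)), (1.0, (10.0, 5.0)), (2.0, (20.0, 5.0))]
print(build_travel_track(records))  # [(10.0, 5.0), (20.0, 5.0), (30.0, 5.0)]
```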
3. The method of claim 2, wherein the target information comprises a vehicle image captured by a video surveillance device and a timestamp of the vehicle image;
the determining, for each piece of received target information, the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information includes:
for each piece of received target information, position information of each target vehicle is determined in a vehicle image included in the target information.
4. The method of claim 2, wherein the target information comprises: the video monitoring equipment determines the position information of each target vehicle and the time stamp of the vehicle image from the acquired vehicle image;
the determining, for each piece of received target information, the position information of each target vehicle in the vehicle image corresponding to the target information based on the target information includes:
for each piece of received target information, position information of the respective target vehicle is extracted from the target information.
5. The method of claim 2, wherein determining the respective location information of the same target vehicle in the plurality of vehicle images based on the location information of the respective target vehicle in the plurality of vehicle images when obtaining the location information of the respective target vehicle in the plurality of vehicle images comprises:
when the position information of each target vehicle in the plurality of vehicle images is obtained, aiming at two pieces of position information meeting a first condition, when the distance value between areas corresponding to the two pieces of position information is smaller than a first distance threshold value, the two pieces of position information are determined as the position information of the same target vehicle; wherein the first condition comprises: the two pieces of position information are extracted from different vehicle images, the different vehicle images are collected by the same video monitoring equipment, and the timestamps are adjacent;
for two pieces of position information meeting a second condition, when the distance value between the areas corresponding to the two pieces of position information is smaller than a second distance threshold value, determining the two pieces of position information as the position information of the same target vehicle; wherein the second condition comprises: the two position information are extracted from different vehicle images, and the different vehicle images are collected by adjacent video monitoring equipment and have the same timestamp.
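The two association conditions above can be sketched as follows. This is illustrative only and not part of the claims: the record layout, the numeric "adjacent camera" test, and the threshold values are all assumptions.

```python
import math

def same_vehicle(rec_a, rec_b, first_threshold, second_threshold):
    """Decide whether two position records belong to the same target vehicle.
    Each record is a dict: {'camera': index, 'ts': timestamp, 'xy': (x, y)}.
    Condition 1: same camera, different (adjacent) timestamps.
    Condition 2: adjacent cameras, same timestamp (overlap region)."""
    dist = math.dist(rec_a['xy'], rec_b['xy'])
    if rec_a['camera'] == rec_b['camera'] and rec_a['ts'] != rec_b['ts']:
        return dist < first_threshold      # condition 1
    if abs(rec_a['camera'] - rec_b['camera']) == 1 and rec_a['ts'] == rec_b['ts']:
        return dist < second_threshold     # condition 2
    return False

a = {'camera': 1, 'ts': 0.0, 'xy': (0.0, 0.0)}
b = {'camera': 1, 'ts': 0.1, 'xy': (1.0, 0.0)}
print(same_vehicle(a, b, 2.0, 2.0))  # True: same camera, close positions
```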
6. The method according to claim 3, wherein the determining, for each piece of received target information, position information of each target vehicle in a vehicle image included in the target information comprises:
determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
determining a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a mapping relation between a preset pixel point coordinate and the plane coordinate;
and determining the position information of each target vehicle based on the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle.
7. The method according to claim 6, wherein the determining the plane coordinates corresponding to the reference pixel coordinates of each target vehicle by using the mapping relationship between the preset pixel coordinates and the plane coordinates comprises:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
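The perspective-transformation mapping above can be sketched as a standard homography application; the 3x3 matrix is assumed to have been calibrated offline from known pixel/plane point pairs, and the nested-list representation is an illustration:

```python
def pixel_to_plane(H, u, v):
    """Map a pixel coordinate (u, v) to road-plane coordinates using a
    predetermined 3x3 perspective transformation (homography) matrix H,
    given as nested lists: homogeneous multiply, then divide by w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Identity matrix as a sanity check: a pixel maps to itself.
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(pixel_to_plane(H, 320.0, 240.0))  # (320.0, 240.0)
```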
8. The method of claim 6, wherein the determining, for each target vehicle, the position information of the target vehicle based on the plane coordinates corresponding to the coordinates of the reference pixel of the target vehicle comprises:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
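The comparison above (plane coordinate against lane-boundary plane coordinates) can be sketched as follows; representing the lane sidelines as sorted lateral offsets is a simplifying assumption, valid only for a straight road segment:

```python
def lane_id_for(x, lane_boundaries):
    """Assign a lane identifier by comparing the vehicle's lateral plane
    coordinate x with the lane-boundary coordinates (assumed sorted
    ascending). Returns a 1-based lane index, or None if off the road."""
    for i in range(len(lane_boundaries) - 1):
        if lane_boundaries[i] <= x < lane_boundaries[i + 1]:
            return i + 1
    return None

boundaries = [0.0, 3.5, 7.0, 10.5]  # three 3.5 m lanes (illustrative)
print(lane_id_for(5.0, boundaries))  # 2
```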
9. The method of claim 6, wherein determining the travel information of each target vehicle based on the travel track of the target vehicle comprises:
aiming at each target vehicle, calculating the corresponding instantaneous speed of the target vehicle by using the distance between two plane coordinates in the running track of the target vehicle and the difference value of the timestamps corresponding to the two plane coordinates;
determining, for each target vehicle, a first timestamp and a second timestamp; calculating a difference between the first timestamp and the second timestamp; calculating the interval vehicle speed of the target vehicle by using the length of the target road and the difference value; wherein the first timestamp is: the time stamp corresponding to the plane coordinate belonging to the start point coordinate of the target road in the travel track of the target vehicle is: and the time stamp corresponds to the plane coordinate of the terminal point coordinate of the target road in the running track of the target vehicle.
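The two speed computations above can be sketched directly; units are illustrative (meters and seconds giving m/s):

```python
import math

def instantaneous_speed(p1, t1, p2, t2):
    """Instantaneous speed from two plane coordinates in the travel track
    and the difference of their corresponding timestamps."""
    return math.dist(p1, p2) / (t2 - t1)

def interval_speed(road_length, t_start, t_end):
    """Interval speed over the target road: road length divided by the
    difference between the end-point and start-point timestamps."""
    return road_length / (t_end - t_start)

print(instantaneous_speed((0.0, 0.0), 0.0, (30.0, 0.0), 2.0))  # 15.0
print(interval_speed(1000.0, 10.0, 60.0))                      # 20.0
```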
10. The method of claim 8, wherein determining the travel information of each target vehicle based on the travel track of the target vehicle comprises:
for each target vehicle, determining a target track of the target vehicle in a preset time period from the running tracks of the target vehicle;
and judging whether the target vehicle has abnormal lane changing behavior or not according to the plurality of lane marks in the target track.
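One possible reading of the judgment above is counting lane-identifier transitions within the target track; the claim does not fix a concrete criterion, so the "more than max_changes changes" rule here is purely an assumption:

```python
def abnormal_lane_change(lane_ids, max_changes=2):
    """Flag abnormal lane-changing behavior from the sequence of lane
    identifiers in the target track for the preset time period.
    Counts consecutive transitions; the threshold is illustrative."""
    changes = sum(1 for a, b in zip(lane_ids, lane_ids[1:]) if a != b)
    return changes > max_changes

print(abnormal_lane_change([1, 1, 2, 1, 2, 3]))  # True (4 changes)
print(abnormal_lane_change([1, 1, 2, 2, 2]))     # False (1 change)
```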
11. The method of claim 6, further comprising:
counting the number of target vehicles passing through a preset section of the target road in a preset time period based on a plurality of target information sent by target video monitoring equipment and received in the preset time period, and calculating section flow by using the counted number of the target vehicles and the preset time period; and the target video monitoring equipment is the video monitoring equipment at the preset section position.
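The cross-section flow computation above can be sketched as follows; representing each vehicle's passage by a single timestamp from the camera at the preset cross-section is an assumed simplification:

```python
def section_flow(timestamps, period_start, period_end):
    """Cross-section flow: the number of target vehicles whose passage
    timestamps fall within the preset time period, divided by the
    period length (vehicles per second)."""
    count = sum(1 for t in timestamps if period_start <= t < period_end)
    return count / (period_end - period_start)

ts = [1.0, 5.0, 9.0, 12.0, 59.0, 61.0]
print(section_flow(ts, 0.0, 60.0))  # 5 vehicles over 60 s
```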
12. The method of claim 6, further comprising:
acquiring road network data; the road network data comprises plane coordinates of a plurality of road sections of the region where the target road is located;
and determining the road sections matched with the target vehicles according to the position relation between the plane coordinates corresponding to the reference pixel point coordinates of the target vehicles and the plane coordinates of each road section for each target vehicle.
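The road-network matching above can be sketched as a nearest-section search; representing each road section by sampled plane coordinates, and matching to the nearest sample point, are simplifying assumptions (production map matching would use point-to-segment distance):

```python
import math

def match_road_section(vehicle_xy, sections):
    """Match a vehicle's plane coordinate to the nearest road section.
    sections: {section_id: [(x, y), ...]} plane coordinates sampled
    along each road section of the region (assumed road network layout)."""
    best_id, best_dist = None, math.inf
    for sec_id, points in sections.items():
        d = min(math.dist(vehicle_xy, p) for p in points)
        if d < best_dist:
            best_id, best_dist = sec_id, d
    return best_id

net = {'A': [(0.0, 0.0), (100.0, 0.0)], 'B': [(0.0, 50.0), (100.0, 50.0)]}
print(match_road_section((40.0, 4.0), net))  # 'A'
```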
13. A road multi-video joint detection method is characterized in that the method is applied to video monitoring equipment in a road multi-video joint detection system, the system comprises a plurality of video monitoring equipment and data processing equipment, the video monitoring equipment is arranged at the roadside of a target road at intervals, and the monitoring areas of two adjacent video monitoring equipment are overlapped; the method comprises the following steps:
determining target information based on a vehicle image every time a vehicle image is acquired; wherein the target information is related to each target vehicle in the vehicle image;
transmitting the determined target information to a data processing apparatus so that the data processing apparatus determines a travel track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
14. The method of claim 13, wherein the determining target information based on the vehicle image comprises:
taking the vehicle image and a timestamp of the vehicle image as the target information; or
determining the position information of each target vehicle in the vehicle image, and taking the determined position information and the timestamp of the vehicle image as the target information.
15. The method of claim 14, wherein the determining the location information of each target vehicle in the vehicle image comprises:
determining the reference pixel point coordinates of each target vehicle in the vehicle image; the coordinates of the reference pixel points are used for representing the position of the target vehicle in the vehicle image;
determining a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a mapping relation between a preset pixel point coordinate and the plane coordinate;
and determining the position information of each target vehicle based on the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle.
16. The method according to claim 15, wherein the determining the plane coordinates corresponding to the reference pixel coordinates of each target vehicle by using the mapping relationship between the preset pixel coordinates and the plane coordinates comprises:
calculating a plane coordinate corresponding to the reference pixel point coordinate of each target vehicle by using a predetermined perspective transformation matrix; the perspective transformation matrix is used for representing the mapping relation between the pixel point coordinates and the plane coordinates.
17. The method of claim 15, wherein the determining, for each target vehicle, the position information of the target vehicle based on the plane coordinates corresponding to the coordinates of the reference pixel of the target vehicle comprises:
for each target vehicle, comparing the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle with a plurality of lane plane coordinates determined in advance, determining lane identification of the target vehicle according to a comparison result, and taking the lane identification and the plane coordinates corresponding to the reference pixel point coordinates of the target vehicle as position information of the target vehicle;
and the lane plane coordinate is a two-dimensional coordinate corresponding to a lane sideline of the target road.
18. A road multi-video combined detection device is characterized in that the device is applied to data processing equipment in a road multi-video combined detection system, the system further comprises a plurality of video monitoring equipment, the video monitoring equipment are arranged at the roadside of a target road at intervals, and the monitoring areas of two adjacent video monitoring equipment are overlapped; the device comprises:
the receiving module is used for continuously receiving the target information sent by the plurality of video monitoring devices; when any video monitoring device acquires a vehicle image, determining target information based on the vehicle image, and sending the determined target information to a data processing device; the target information is related to each target vehicle in the vehicle image;
the first determination module is used for determining the running track of each target vehicle based on the received target information;
and the second determination module is used for determining the running information of each target vehicle based on the running track of the target vehicle.
19. The apparatus of claim 18, wherein the first determining module comprises:
the first determining submodule is used for determining the position information of each target vehicle in the vehicle image corresponding to the target information according to the received target information and based on the target information; the position information of any target vehicle is used for representing the position of the target vehicle on the target road;
the second determining submodule is used for determining each piece of position information of the same target vehicle in the plurality of vehicle images based on the position information of each target vehicle in the plurality of vehicle images when the position information of each target vehicle in the plurality of vehicle images is obtained;
the arrangement submodule is used for arranging the position information of each target vehicle according to a preset arrangement mode aiming at each target vehicle to form a running track of the target vehicle; the preset arrangement mode is a mode of arranging according to the sequence of the timestamps corresponding to the position information, and the timestamp corresponding to any position information is the timestamp of the vehicle image to which the position information belongs.
20. A road multi-video joint detection device is characterized in that the device is applied to video monitoring equipment in a road multi-video joint detection system, the system comprises a plurality of video monitoring equipment and data processing equipment, the video monitoring equipment is arranged at the roadside of a target road at intervals, and the monitoring areas of two adjacent video monitoring equipment are overlapped; the device comprises:
the determining module is used for determining target information based on a vehicle image every time the vehicle image is acquired; wherein the target information is related to each target vehicle in the vehicle image;
a transmitting module, configured to transmit the determined target information to a data processing device, so that the data processing device determines a driving track of each target vehicle based on the received plurality of target information; and determining, for each target vehicle, travel information of the target vehicle based on the travel trajectory of the target vehicle.
21. An electronic device, being a data processing device, comprising a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to implement the method steps of any one of claims 1-12 when executing the program stored in the memory.
22. An electronic device, the electronic device being a video surveillance device, comprising a processor and a memory, wherein,
the memory is used for storing a computer program;
the processor is configured to perform the method steps of any one of claims 13-17 when executing the program stored in the memory.
23. A road multi-video joint detection system is characterized by comprising a plurality of video monitoring devices and data processing devices, wherein the video monitoring devices are arranged at intervals at the road side of a target road, and the monitoring areas of two adjacent video monitoring devices are overlapped; wherein,
each video monitoring device is the video monitoring device of claim 22;
and the data processing device is the data processing device of claim 21.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811333100.0A CN111243281A (en) | 2018-11-09 | 2018-11-09 | Road multi-video joint detection system and detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111243281A true CN111243281A (en) | 2020-06-05 |
Family
ID=70879169
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811333100.0A Pending CN111243281A (en) | 2018-11-09 | 2018-11-09 | Road multi-video joint detection system and detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111243281A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112365722A (en) * | 2020-09-22 | 2021-02-12 | 浙江大华系统工程有限公司 | Road monitoring area identification method and device, computer equipment and storage medium |
CN113132493A (en) * | 2021-04-29 | 2021-07-16 | 任勇金 | Method and system for monitoring vehicles across regions based on positioning information |
CN113191305A (en) * | 2021-05-17 | 2021-07-30 | 武汉中科通达高新技术股份有限公司 | Data management method and device based on Flink framework |
CN113380039A (en) * | 2021-07-06 | 2021-09-10 | 联想(北京)有限公司 | Data processing method and device and electronic equipment |
CN114067554A (en) * | 2020-08-05 | 2022-02-18 | 北京万集科技股份有限公司 | Road side terminal abnormality detection method and device, computer equipment and storage medium |
WO2022061725A1 (en) * | 2020-09-25 | 2022-03-31 | 华为技术有限公司 | Traffic element observation method and apparatus |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201266436Y (en) * | 2008-08-26 | 2009-07-01 | 河海大学 | Multi-video united detection device for mobile vehicle |
CN102509457A (en) * | 2011-10-09 | 2012-06-20 | 青岛海信网络科技股份有限公司 | Vehicle tracking method and device |
CN102622886A (en) * | 2012-03-23 | 2012-08-01 | 长安大学 | Video-based method for detecting violation lane-changing incident of vehicle |
CN102819952A (en) * | 2012-06-29 | 2012-12-12 | 浙江大学 | Method for detecting illegal lane change of vehicle based on video detection technique |
CN103729892A (en) * | 2013-06-20 | 2014-04-16 | 深圳市金溢科技有限公司 | Vehicle positioning method and device and processor |
CN105719483A (en) * | 2016-01-15 | 2016-06-29 | 同济大学 | Method for acquiring whole-sample data of vehicle traveling track by combination of videos and GPS (Global Positioning System) |
CN106971593A (en) * | 2017-04-01 | 2017-07-21 | 深圳市元征科技股份有限公司 | Lane recognition method and device |
JP2017138660A (en) * | 2016-02-01 | 2017-08-10 | トヨタ自動車株式会社 | Object detection method, object detection device and program |
CN107305627A (en) * | 2016-04-22 | 2017-10-31 | 杭州海康威视数字技术股份有限公司 | A kind of automobile video frequency monitoring method, server and system |
CN108109132A (en) * | 2016-11-25 | 2018-06-01 | 杭州海康威视数字技术股份有限公司 | A kind of image analysis method and device |
CN108154146A (en) * | 2017-12-25 | 2018-06-12 | 陈飞 | A kind of car tracing method based on image identification |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN201266436Y (en) * | 2008-08-26 | 2009-07-01 | 河海大学 | Multi-video united detection device for mobile vehicle |
CN102509457A (en) * | 2011-10-09 | 2012-06-20 | 青岛海信网络科技股份有限公司 | Vehicle tracking method and device |
CN102622886A (en) * | 2012-03-23 | 2012-08-01 | 长安大学 | Video-based method for detecting violation lane-changing incident of vehicle |
CN102819952A (en) * | 2012-06-29 | 2012-12-12 | 浙江大学 | Method for detecting illegal lane change of vehicle based on video detection technique |
CN103729892A (en) * | 2013-06-20 | 2014-04-16 | 深圳市金溢科技有限公司 | Vehicle positioning method and device and processor |
CN105719483A (en) * | 2016-01-15 | 2016-06-29 | 同济大学 | Method for acquiring whole-sample data of vehicle traveling track by combination of videos and GPS (Global Positioning System) |
JP2017138660A (en) * | 2016-02-01 | 2017-08-10 | トヨタ自動車株式会社 | Object detection method, object detection device and program |
CN107305627A (en) * | 2016-04-22 | 2017-10-31 | 杭州海康威视数字技术股份有限公司 | Vehicle video monitoring method, server and system |
CN108109132A (en) * | 2016-11-25 | 2018-06-01 | 杭州海康威视数字技术股份有限公司 | Image analysis method and device |
CN106971593A (en) * | 2017-04-01 | 2017-07-21 | 深圳市元征科技股份有限公司 | Lane recognition method and device |
CN108154146A (en) * | 2017-12-25 | 2018-06-12 | 陈飞 | Vehicle tracking method based on image recognition |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067554A (en) * | 2020-08-05 | 2022-02-18 | 北京万集科技股份有限公司 | Road side terminal abnormality detection method and device, computer equipment and storage medium |
CN114067554B (en) * | 2020-08-05 | 2023-06-20 | 北京万集科技股份有限公司 | Road side terminal abnormality detection method, device, computer equipment and storage medium |
CN112365722A (en) * | 2020-09-22 | 2021-02-12 | 浙江大华系统工程有限公司 | Road monitoring area identification method and device, computer equipment and storage medium |
WO2022061725A1 (en) * | 2020-09-25 | 2022-03-31 | 华为技术有限公司 | Traffic element observation method and apparatus |
CN113132493A (en) * | 2021-04-29 | 2021-07-16 | 任勇金 | Method and system for monitoring vehicles across regions based on positioning information |
CN113191305A (en) * | 2021-05-17 | 2021-07-30 | 武汉中科通达高新技术股份有限公司 | Data management method and device based on Flink framework |
CN113380039A (en) * | 2021-07-06 | 2021-09-10 | 联想(北京)有限公司 | Data processing method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111243281A (en) | Road multi-video joint detection system and detection method | |
CN110174093B (en) | Positioning method, device, equipment and computer readable storage medium | |
Sochor et al. | Comprehensive data set for automatic single camera visual speed measurement | |
US8238610B2 (en) | Homography-based passive vehicle speed measuring | |
CN102279406B (en) | Fence identification method using global positioning system (GPS) to position tracks | |
CN111275960A (en) | Traffic road condition analysis method, system and camera | |
TWI534764B (en) | Apparatus and method for vehicle positioning | |
CN108694882A (en) | Method, apparatus and equipment for marking map | |
CN109785637B (en) | Analysis and evaluation method and device for vehicle violation | |
CN102359780B (en) | Ground target positioning method applied into video monitoring system | |
CN111402286B (en) | Target tracking method, device and system and electronic equipment | |
CN104021588A (en) | System and method for recovering three-dimensional true vehicle model in real time | |
CN113465608B (en) | Road side sensor calibration method and system | |
CN107784038A (en) | A kind of mask method of sensing data | |
Sochor et al. | Brnocompspeed: Review of traffic camera calibration and comprehensive dataset for monocular speed measurement | |
CN105488485A (en) | Lane line automatic extraction method based on vehicle trajectory | |
CN111009139A (en) | Signal lamp timing information processing method and device and vehicle-road cooperative system | |
CN112446915B (en) | Picture construction method and device based on image group | |
US9942450B2 (en) | Automatic time signature-based video matching for a camera network | |
CN113850237A (en) | Internet vehicle target detection and evaluation method and system based on video and track data | |
CN112633035A (en) | Driverless vehicle-based lane line coordinate true value acquisition method and device | |
CN105488127B (en) | Pile No. correlating method and system | |
CN111435565A (en) | Road traffic state detection method, road traffic state detection device, electronic equipment and storage medium | |
TWI811954B (en) | Positioning system and calibration method of object location | |
US11885640B2 (en) | Map generation device and map generation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200605 |