CN111462503B - Vehicle speed measuring method and device and computer readable storage medium
- Publication number: CN111462503B
- Application number: CN201910059660.XA
- Authority: CN (China)
- Prior art keywords: target vehicle, coordinate system, determining, vehicle, splicing
- Legal status: Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/052—Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Abstract
The invention discloses a vehicle speed measuring method and device and a computer readable storage medium, and belongs to the field of intelligent traffic. The method comprises the following steps: determining the track of a target vehicle in a splicing coordinate system at the current moment; selecting, from that track, two speed measuring reference points that belong to the monitoring pictures of different cameras; and determining the running speed of the target vehicle according to the actual distance between the two speed measuring reference points and the difference between the acquisition times of the video frame images to which the two speed measuring reference points respectively belong. Because the two selected speed measuring reference points come from the monitoring pictures of different cameras, the distance between them is generally large, and the difference between the acquisition times of their video frame images is correspondingly large, so the error of the determined running speed of the target vehicle is small and the accuracy is high.
Description
Technical Field
The invention relates to the field of intelligent traffic, in particular to a vehicle speed measuring method and device and a computer readable storage medium.
Background
With the increasing number of vehicles in China, traffic accidents caused by overspeed driving have become an increasingly serious problem, so vehicle speed measurement has become an indispensable part of traffic control.
In the related art, a vehicle speed measurement method is provided, which includes: collecting video frame images through a binocular camera to obtain a left video frame image and a right video frame image; determining the disparity map of the left and right video frame images; determining the actual three-dimensional coordinates of the target vehicle at the previous moment according to the disparity map and the coordinates of the target vehicle in the image coordinate system at the previous moment; determining the actual three-dimensional coordinates of the target vehicle at the current moment according to the disparity map and the coordinates of the target vehicle in the image coordinate system at the current moment; determining the actual running distance of the target vehicle between the two adjacent moments according to the two sets of actual three-dimensional coordinates; and determining the quotient of the actual running distance and the time difference between the two adjacent moments as the running speed of the target vehicle. The target vehicle is any vehicle whose speed is currently being measured, and the image coordinate system is a coordinate system established for the acquired video frame image.
However, the time difference between two adjacent moments is generally small, so the actual running distance of the target vehicle between them is short. Errors are therefore amplified because the time difference is too small and the actual running distance is too short, and the accuracy of the determined running speed is low.
Disclosure of Invention
The embodiments of the invention provide a vehicle speed measuring method and device and a computer readable storage medium, which can solve the problem in the related art that the accuracy of the determined running speed is low because errors are amplified by factors such as the time difference between two adjacent speed measuring moments being too small and the actual running distance being too short. The technical scheme is as follows:
In a first aspect, a vehicle speed measurement method is provided and is applied to a multi-view camera, where the monitoring ranges of two adjacent cameras among the plurality of cameras included in the multi-view camera have an overlapping portion along the direction of the road, and the method includes:
determining the track of a target vehicle in a splicing coordinate system at the current moment, wherein the target vehicle is any vehicle whose speed is being measured, and the splicing coordinate system is a coordinate system used for drawing the complete track of the target vehicle within the monitoring range of the plurality of cameras;
selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment;
determining the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system;
and determining the running speed of the target vehicle according to the actual distance and a first time difference, wherein the first time difference is a difference value between the acquisition times of the video frame images to which the two speed measurement reference points belong respectively.
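To make the last step above concrete, the sketch below computes the running speed as the actual distance between the two speed measuring reference points divided by the first time difference. The function name, unit choices and error handling are illustrative assumptions rather than part of the patent:

```python
def running_speed(actual_distance_m: float,
                  capture_time_a_s: float,
                  capture_time_b_s: float) -> float:
    """Speed in m/s from the actual distance between the two speed measuring
    reference points and the capture times (in seconds) of the video frame
    images to which the two points respectively belong."""
    first_time_difference = abs(capture_time_b_s - capture_time_a_s)
    if first_time_difference == 0:
        raise ValueError("the two reference points must come from frames "
                         "captured at different times")
    return actual_distance_m / first_time_difference

# Example: reference points 25 m apart, frames captured 1.2 s apart
# -> about 20.8 m/s, i.e. 75 km/h.
print(running_speed(25.0, 10.0, 11.2) * 3.6)
```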
Optionally, the determining an actual distance between the two speed measurement reference points according to the stitching coordinates of the two speed measurement reference points in the stitching coordinate system includes:
determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
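A hedged sketch of this optional computation follows: the abscissa difference is scaled by the ratio of the real lane line spacing to the lane line spacing in the splicing coordinate system, the ordinate difference by the analogous calibration line ratio, and the two component distances are then combined. The patent does not spell out the combination formula at this point, so the Euclidean combination below is an assumption:

```python
import math

def actual_distance(point_a, point_b,
                    lane_spacing_real_m, lane_spacing_stitched,
                    calib_spacing_real_m, calib_spacing_stitched):
    """point_a, point_b: (x, y) splicing coordinates of the two speed
    measuring reference points."""
    (x1, y1), (x2, y2) = point_a, point_b
    # First distance: across the road, from the abscissas.
    d1 = abs(x2 - x1) * lane_spacing_real_m / lane_spacing_stitched
    # Second distance: along the road, from the ordinates.
    d2 = abs(y2 - y1) * calib_spacing_real_m / calib_spacing_stitched
    # Combine the two component distances (Euclidean combination assumed).
    return math.hypot(d1, d2)
```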
Optionally, the determining the track of the target vehicle in the stitching coordinate system at the current time includes:
acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
when at least one video frame image in the plurality of video frame images comprises the target vehicle, determining vehicle information of the target vehicle in the at least one video frame image to obtain at least one piece of vehicle information, wherein each piece of vehicle information comprises a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one stitching coordinate in the stitching coordinate system according to the number of at least one camera, wherein the at least one camera is used for acquiring the at least one video frame image;
determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the at least one piece of vehicle information;
when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system to obtain the track of the target vehicle in the splicing coordinate system at the current moment.
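The sketch below illustrates the track drawing step under simplified assumptions: tracks are kept per license plate, and a new splicing coordinate extends the polyline only when it differs from the previous one, appending a vertex playing the role of "connecting a line" between the two coordinates. The data structure is an assumption, not the patent's implementation:

```python
from collections import defaultdict

# license plate number -> ordered list of splicing coordinates (the track)
tracks: dict = defaultdict(list)

def update_track(plate: str, current_coord: tuple) -> list:
    """Extend the track of the vehicle identified by `plate` with its current
    splicing coordinate, but only when the coordinate has changed."""
    track = tracks[plate]
    if not track or track[-1] != current_coord:
        track.append(current_coord)
    return track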
Optionally, the determining the vehicle information of the target vehicle in the at least one video frame image to obtain at least one piece of vehicle information includes:
performing vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image;
determining license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
determining image coordinates of the target vehicle in an image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle to obtain at least one image coordinate;
and converting the at least one image coordinate into the calibration coordinate system to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
Optionally, the serial numbers of the plurality of cameras increase one by one from 0;
the converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one stitching coordinate in the stitching coordinate system according to the number of the at least one camera includes:
and taking the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as the abscissa of the target vehicle in the splicing coordinate system, and correspondingly adding the ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information and the number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
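Read literally, this conversion keeps the abscissa and adds the camera's serial number to the ordinate, so each camera's calibration coordinate system is stacked one unit higher along the road. A minimal sketch, assuming the calibrated field of view of one camera is one unit tall in the ordinate direction:

```python
def to_stitching_coordinate(calib_x: float, calib_y: float,
                            camera_number: int) -> tuple:
    """Convert a calibration coordinate from the camera numbered
    `camera_number` (0, 1, 2, ...) into a splicing coordinate."""
    return calib_x, calib_y + camera_number

# Example: the same calibration coordinate seen by camera 2 lands two units
# further up the ordinate axis of the splicing coordinate system.
print(to_stitching_coordinate(1.5, 0.3, 2))  # (1.5, 2.3)
```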
Optionally, the determining, according to the license plate information of the target vehicle and the at least one stitching coordinate in the at least one piece of vehicle information, a current stitching coordinate of the target vehicle in the stitching coordinate system includes:
when the number of the at least one splicing coordinate is two, and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determining the splicing coordinate corresponding to a first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, where, compared with the other cameras in the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and characters of license plate numbers included in license plate information of the target vehicle corresponding to the at least one splicing coordinate are not identical, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the at least one splicing coordinate;
and when the distance between the at least one splicing coordinate is smaller than a preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
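The following sketch shows one way to implement this disambiguation when a vehicle appears in the overlap of two cameras at the same moment. The plate matching metric, the matching degree range and the distance threshold are illustrative assumptions; the patent only requires that such a range and threshold be preset:

```python
def match_degree(plate_a: str, plate_b: str) -> float:
    """Assumed metric: fraction of character positions that agree."""
    if not plate_a or len(plate_a) != len(plate_b):
        return 0.0
    return sum(a == b for a, b in zip(plate_a, plate_b)) / len(plate_a)

def current_stitching_coord(det_a: dict, det_b: dict,
                            degree_range=(0.6, 1.0),
                            preset_distance=0.2):
    """det_a, det_b: {'plate': str, 'coord': (x, y), 'pixel_area': int}.
    Returns the coordinate from the "first camera" (the one in whose frame
    the vehicle occupies the largest pixel area) when the two detections are
    judged to be the same vehicle, otherwise None."""
    first = det_a if det_a['pixel_area'] >= det_b['pixel_area'] else det_b
    if det_a['plate'] == det_b['plate']:        # characters completely same
        return first['coord']
    degree = match_degree(det_a['plate'], det_b['plate'])
    if degree_range[0] <= degree <= degree_range[1]:
        (x1, y1), (x2, y2) = det_a['coord'], det_b['coord']
        if ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 < preset_distance:
            return first['coord']
    return None                                  # treated as two vehicles
```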
Optionally, before determining the trajectory of the target vehicle in the stitching coordinate system at the current time, the method further includes:
acquiring a reference image through each camera in the plurality of cameras, wherein each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by two adjacent cameras share one identical calibration line;
and establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
Optionally, the establishing a stitching coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images includes:
according to the serial numbers of the cameras, the same calibration lines in the reference images shot by two adjacent cameras in the cameras are overlapped, and the leftmost lane lines in the reference images are connected to obtain a vertical connecting line;
determining the direction of the serial numbers of the cameras from small to large as the direction of the vertical connecting line;
acquiring a lowermost calibration line in a reference image shot by a camera with the smallest number, and determining the horizontal rightward direction as the direction of the lowermost calibration line;
and taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
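Schematically, the construction above yields a coordinate system whose vertical axis is the connected leftmost lane lines (pointing from small to large camera numbers), whose horizontal axis is the lowest calibration line of the camera numbered 0, and whose unit lengths are fixed by the lane line and calibration line spacings. A minimal sketch of such a description, with assumed names:

```python
from dataclasses import dataclass

@dataclass
class StitchingCoordinateSystem:
    lane_spacing: float   # spacing of adjacent lane lines (x direction)
    calib_spacing: float  # spacing of adjacent calibration lines (y direction)

    def grid_point(self, lane_index: int, calib_index: int) -> tuple:
        """Splicing coordinate of the crossing of the lane_index-th lane line
        (counted from the leftmost) and the calib_index-th calibration line
        (counted from the lowest line of camera 0)."""
        return lane_index * self.lane_spacing, calib_index * self.calib_spacing
```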
Optionally, after determining the running speed of the target vehicle according to the actual distance and the first time difference, the method further includes:
capturing an image of a vehicle driven on the road by the target vehicle;
and sending the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to a server.
Optionally, after determining the track of the target vehicle in the stitching coordinate system at the current time, the method further includes:
when the target vehicle is determined to have lane changing behavior according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system, acquiring a lane changing process diagram of the target vehicle;
when the lane changing behavior of the target vehicle is determined to be finished according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system, determining the duration of the lane changing behavior of the target vehicle;
when the duration is less than a reference duration and the target vehicle runs in at least three different lanes within the duration, determining that the lane changing behavior of the target vehicle is a continuous lane changing behavior;
after determining that the lane changing behavior of the target vehicle is the continuous lane changing behavior, determining a continuous lane changing event evidence obtaining graph of the target vehicle according to the collected lane changing process graph;
and sending the vehicle information of the target vehicle and the continuous lane change event evidence obtaining graph to a server.
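As a worked example of the rule above, the check below flags a continuous lane change when the lane changing behavior lasts less than a reference duration and the vehicle has driven in at least three different lanes within that time. The lane indices and the reference duration value are assumed inputs:

```python
def is_continuous_lane_change(lane_sequence, duration_s: float,
                              reference_duration_s: float = 8.0) -> bool:
    """lane_sequence: lane index of the target vehicle at successive track
    points during the lane changing behavior."""
    return duration_s < reference_duration_s and len(set(lane_sequence)) >= 3

# Example: moving through lanes 1 -> 2 -> 3 within 5 s qualifies.
print(is_continuous_lane_change([1, 1, 2, 3], 5.0))  # True
```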
Optionally, the determining a continuous lane change event forensics map of the target vehicle according to the collected lane change process map includes:
selecting a first process diagram from the acquired lane changing process diagrams, wherein the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagram;
intercepting a license plate expansion area from the first process diagram, and determining the intercepted license plate expansion area as a vehicle sketch map of the target vehicle, wherein the license plate expansion area is an area, expanded from the license plate area, that contains the head or the tail of the target vehicle;
and determining the collected lane change process graph and the vehicle sketch graph as a continuous lane change event evidence obtaining graph of the target vehicle.
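A sketch of assembling the evidence obtaining graph under stated assumptions: pick the process diagram in which the vehicle occupies the largest pixel area, crop an expanded region around the license plate as the vehicle sketch map, and return it together with the collected process diagrams. The expansion factors and data layout are assumptions:

```python
def forensics_maps(process_maps: list) -> dict:
    """process_maps: dicts with 'image' (list of pixel rows),
    'vehicle_area_px' (int) and 'plate_box' = (x, y, w, h)."""
    # First process diagram: the one with the largest vehicle pixel area.
    first = max(process_maps, key=lambda m: m['vehicle_area_px'])
    x, y, w, h = first['plate_box']
    pad_w, pad_h = 2 * w, 4 * h   # assumed expansion around the plate
    sketch = [row[max(0, x - pad_w): x + w + pad_w]
              for row in first['image'][max(0, y - pad_h): y + h + pad_h]]
    return {'lane_change_process_maps': [m['image'] for m in process_maps],
            'vehicle_sketch_map': sketch}
```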
In a second aspect, a vehicle speed measuring device is provided, which is applied to a multi-view camera, where there is an overlapping portion in the monitoring ranges of two adjacent cameras in a plurality of cameras included in the multi-view camera along the direction of a road, and the device includes:
the system comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining the track of a target vehicle in a splicing coordinate system at the current moment, the target vehicle is any vehicle for measuring speed, and the splicing coordinate system is a coordinate system used for drawing the complete track of the target vehicle in the monitoring range of the cameras;
the selecting module is used for selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment;
the second determining module is used for determining the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system;
and the third determining module is used for determining the running speed of the target vehicle according to the actual distance and a first time difference, wherein the first time difference is a difference value between the acquisition times of the video frame images to which the two speed measurement reference points belong respectively.
Optionally, the second determining module includes:
the first determining submodule is used for determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
the second determining submodule is used for determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and the third determining submodule is used for determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
Optionally, the first determining module includes:
the acquisition submodule is used for acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
a fourth determining sub-module, configured to determine vehicle information of the target vehicle in at least one of the video frame images to obtain at least one piece of vehicle information when the target vehicle is included in at least one of the video frame images, where each piece of vehicle information includes a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one splicing coordinate in the splicing coordinate system according to the serial number of at least one camera, and the at least one camera is used for acquiring the at least one video frame image;
a fifth determining submodule, configured to determine a current stitching coordinate of the target vehicle in the stitching coordinate system according to the license plate information of the target vehicle and the at least one stitching coordinate in the at least one piece of vehicle information;
and the track drawing sub-module is used for connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system when the current splicing coordinate and the previous splicing coordinate of the target vehicle in the splicing coordinate system are different so as to obtain the track of the target vehicle in the splicing coordinate system at the current moment.
Optionally, the fourth determining sub-module includes:
a vehicle detection unit for performing vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image;
the first determining unit is used for determining license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
the second determining unit is used for determining the image coordinates of the target vehicle in the image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle so as to obtain at least one image coordinate;
and the conversion unit is used for converting the at least one image coordinate into the calibration coordinate system so as to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
Optionally, the numbers of the plurality of cameras are increased from 0 one by one;
the conversion sub-module is further configured to use the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as the abscissa of the target vehicle in the stitching coordinate system, and to correspondingly add the ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information and the serial number of the at least one camera to obtain the ordinate of the target vehicle in the stitching coordinate system.
Optionally, the fifth determining sub-module includes:
a third determining unit, configured to determine, when the number of the at least one stitching coordinate is two and characters of a license plate number included in license plate information of a target vehicle corresponding to the at least one stitching coordinate are completely the same, a stitching coordinate corresponding to a first camera as a current stitching coordinate of the target vehicle in the stitching coordinate system, where a size of a pixel area occupied by the target vehicle in a video frame image captured by the first camera is the largest compared to other cameras in the at least one camera;
a fourth determining unit, configured to determine a matching degree between license plate numbers included in the license plate information of the target vehicle corresponding to the at least one mosaic coordinate when the number of the at least one mosaic coordinate is two and characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one mosaic coordinate are not identical;
the fifth determining unit is used for determining the distance between the at least one splicing coordinate when the determined matching degree is within a preset matching degree range;
and the sixth determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between the at least one splicing coordinate is smaller than the preset distance.
Optionally, the apparatus further comprises:
the first acquisition module is used for acquiring a reference image through each camera in the multiple cameras, each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line exists in the reference images shot by the two adjacent cameras;
and the establishing module is used for establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
Optionally, the establishing module includes:
the calibration line superposition submodule is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the serial numbers of the cameras, and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
the sixth determining submodule is used for determining the direction of the serial numbers of the cameras from small to large as the direction of the vertical connecting line;
the seventh determining submodule is used for acquiring the lowest calibration line in the reference image shot by the camera with the smallest serial number and determining the horizontal right direction as the direction of the lowest calibration line;
and the establishing submodule is used for taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
Optionally, the apparatus further comprises:
the snapshot module is used for snapshot of a vehicle image of the target vehicle running on the road;
the first sending module is used for sending the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to a server.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a lane change process diagram of the target vehicle when the lane change behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system;
the fourth determination module is used for determining the duration of the lane-changing behavior of the target vehicle when the lane-changing behavior of the target vehicle is determined to be finished according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of the lane line on the road in the splicing coordinate system;
the fifth determining module is used for determining that the lane changing behavior of the target vehicle is a continuous lane changing behavior when the duration is less than a reference duration and the target vehicle runs in at least three different lanes within the duration;
the sixth determining module is used for determining a continuous lane changing event evidence obtaining graph of the target vehicle according to the collected lane changing process graph after determining that the lane changing behavior of the target vehicle is the continuous lane changing behavior;
and the second sending module is used for sending the vehicle information of the target vehicle and the continuous lane change event evidence obtaining graph to a server.
Optionally, the sixth determining module includes:
the selection submodule is used for selecting a first process diagram from the acquired lane changing process diagrams, and the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagrams;
an eighth determining submodule, configured to intercept a license plate extension area from the first process map, and determine the intercepted license plate extension area as a vehicle sketch map of the target vehicle, where the license plate extension area is an area that includes a head or a tail of the target vehicle after being extended according to the license plate area;
and the ninth determining submodule is used for determining the collected lane changing process diagram and the vehicle sketch diagram as a continuous lane changing event evidence obtaining diagram of the target vehicle.
In a third aspect, there is provided a vehicle speed measuring device, the device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
In a fourth aspect, a computer-readable storage medium is provided, having instructions stored thereon, which when executed by a processor, implement the steps of any of the methods of the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
The technical scheme provided by the embodiment of the invention can at least bring the following beneficial effects:
firstly, the track of the target vehicle in the splicing coordinate system at the current moment is determined, and then two speed measuring reference points belonging to the monitoring pictures of different cameras are selected from that track. Finally, the running speed of the target vehicle is determined according to the actual distance between the two points and the difference between the acquisition times of the video frame images to which they respectively belong. Because the two selected speed measuring reference points belong to the monitoring pictures of different cameras, the distance between them is generally larger than that between two reference points in the same monitoring picture, and the difference between the acquisition times of their video frame images is also larger. This avoids the error amplification caused by a small distance between the reference points and a small difference between acquisition times, greatly reduces the error of converting a distance in image coordinates into an actual distance on the road, and makes the error of the determined running speed of the target vehicle smaller and the accuracy higher.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the invention.
Fig. 2 is a schematic structural diagram of a multi-view camera 101 according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of another multi-view camera 101 according to an embodiment of the present invention.
Fig. 4 is a flowchart of a vehicle speed measuring method according to an embodiment of the present invention.
Fig. 5 is a flowchart of another method for measuring a speed of a vehicle according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of establishing a calibration coordinate system according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a multi-view camera mount according to an embodiment of the invention.
Fig. 8 is a schematic diagram of an imaging principle of a camera according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of establishing a stitching coordinate system according to an embodiment of the present invention.
Fig. 10 is a flowchart of a method for detecting a continuous lane change of a vehicle according to an embodiment of the present invention.
Fig. 11 is a block diagram of a vehicle speed measuring device according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of a vehicle speed measuring device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present invention.
Before explaining the embodiments of the present invention in detail, the implementation environment of the embodiments of the present invention is described:
fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention, and referring to fig. 1, the implementation environment includes a multi-view camera 101 and a server 102. The multi-view camera 101 and the server 102 are connected through a network. Fig. 2 is a schematic diagram of a configuration of the multi-view camera 101. Referring to fig. 2, the multi-view camera 101 may be an intelligent multi-view camera, and the multi-view camera 101 includes a camera 1, …, a camera i, …, a camera n, a driving circuit, an FPGA (Field-Programmable Gate Array), a DSP (Digital Signal Processing), a DDR (Double Data Rate) memory module, and a Flash memory module.
Each of the plurality of cameras includes a CCD (Charge-coupled Device) for acquiring a video frame image and transmitting an analog signal of the acquired video frame image to the FPGA. Wherein, the monitoring range of two adjacent cameras in the plurality of cameras has the overlap portion along the trend of road.
The driving circuit is used for driving the CCD to collect video frame images.
The FPGA comprises a signal conversion module and a first preprocessing module.
The signal conversion module can convert the analog signal of the video frame image transmitted by the CCD into the digital signal of the video frame image, and then transmits the digital signal of the video frame image to the first preprocessing module.
The first preprocessing module can perform first preprocessing on the digital signal of the video frame image, and then store the first video frame image obtained after the first preprocessing to the DDR memory module.
Fig. 3 is another schematic structural diagram of the multi-view camera, and referring to fig. 3, the DSP includes a foreground detection module, a vehicle detection module, a license plate recognition module, a vehicle tracking module, a coordinate conversion module, a coordinate stitching module, a video speed measuring and capturing module, and a continuous lane change detection module.
The foreground detection module can acquire a first video frame image from the DDR storage module, perform second preprocessing on the first video frame image, and store a second video frame image obtained after the second preprocessing into the DDR storage module. In addition, the foreground detection module can also perform foreground extraction and background calculation on the second video frame image to acquire foreground information and background information of the second video frame image, and then store the foreground information and the background information of the second video frame image to the Flash storage module.
The vehicle detection module can acquire a second video frame image from the DDR storage module, acquire foreground information and background information of the second video frame image from the Flash storage module, detect the position and/or type of a vehicle in the second video frame image according to the foreground information and the background information of the second video frame image, and then store the position and/or type of the vehicle in the Flash storage module.
The license plate recognition module can acquire a first video frame image from the DDR storage module, acquire foreground information, background information and a vehicle position of a second video frame image from the Flash storage module, recognize a license plate of a vehicle at the vehicle position in the first video frame image to obtain license plate information, and store the license plate information into the Flash storage module.
The vehicle tracking module can acquire a second video frame image from the DDR storage module, acquire the position and license plate information of the vehicle from the Flash storage module, and take the second video frame image and the position and license plate information of the vehicle as the attribute information of each vehicle. It tracks the vehicles according to the attribute information of each vehicle to form the track of each vehicle, and then stores the track in the Flash storage module. In addition, the vehicle tracking module may associate each vehicle with its license plate information and establish an Identity-sc (smart-camera identification) for the vehicle to distinguish it from other vehicles.
The coordinate conversion module can acquire the position of the vehicle from the Flash storage module, then establish a calibration coordinate system by combining the lane line and the calibration line, determine the calibration coordinate of the position of the vehicle in the calibration coordinate system, and store the calibration coordinate in the Flash storage module.
The coordinate splicing module can acquire calibration coordinates from the Flash storage module, splice the calibration coordinate systems corresponding to the cameras in sequence by combining the serial numbers of the cameras to obtain a spliced coordinate system, convert the calibration coordinates into the spliced coordinates, acquire the track of the vehicle in the spliced coordinate system, and store the spliced coordinates and the track of the vehicle in the spliced coordinate system into the Flash storage module.
The video speed measuring and capturing module can acquire the track of the target vehicle in the splicing coordinate system from the Flash storage module, select two speed measuring reference points belonging to the monitoring pictures of different cameras, determine the actual distance between the two reference points according to their splicing coordinates in the splicing coordinate system, and obtain the running speed of the target vehicle according to the actual distance and the difference between the acquisition times of the video frame images to which the two speed measuring reference points respectively belong. It can also capture a vehicle image of the target vehicle running on the road and send the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to the server.
The continuous lane change detection module can acquire the track of the target vehicle in the splicing coordinate system from the Flash storage module and, by combining the positions of lane lines on the road in the splicing coordinate system, acquire a lane change process diagram of the target vehicle when it is determined that the target vehicle has continuous lane changing behavior, determine a continuous lane change event evidence obtaining diagram of the target vehicle according to the acquired lane change process diagram, and send the vehicle information of the target vehicle and the continuous lane change event evidence obtaining diagram of the target vehicle to the server.
It should be noted that the resolution of the video frame image acquired by the CCD is usually high, and the first preprocessing performed by the first preprocessing module of the FPGA mainly consists of smoothing, noise reduction and the like, which do not affect the resolution of the video frame image. Therefore, a first video frame image with high resolution is obtained after the first preprocessing. For the license plate recognition module, since a license plate usually occupies only a small proportion of the area of a video frame image, a high-resolution image is required to recognize the license plate of the vehicle at the vehicle position, so the license plate recognition module acquires the high-resolution first video frame image from the DDR storage module for license plate recognition. Because the high-resolution first video frame image remains clear after being enlarged, the accuracy of license plate recognition by the multi-view camera is high.
In addition, the second preprocessing performed on the first video frame image by the foreground detection module mainly includes format conversion, down-sampling and the like. Format conversion converts the first video frame image into a format that can be used subsequently, and down-sampling improves the efficiency of processing the video frame image by reducing its sampling points; the resolution of the image obtained after down-sampling is usually lower than before. The second video frame image obtained after the second preprocessing therefore has a different format and a lower resolution. For the vehicle detection module and the vehicle tracking module, because a vehicle usually occupies a larger proportion of the area of a video frame image and its position is more conspicuous than the license plate, these modules can acquire the lower-resolution second video frame image from the DDR storage module for vehicle detection and tracking. The lower-resolution second video frame image occupies less storage space in the multi-view camera, so the multi-view camera runs faster and vehicle detection and tracking are more efficient.
The steps in which the first preprocessing module of the FPGA performs the first preprocessing on the video frame image acquired by the CCD to obtain the first video frame image, and the foreground detection module performs the second preprocessing on the first video frame image to obtain the second video frame image, are optional. That is, after the signal conversion module of the FPGA converts the analog signal of the video frame image acquired by the CCD into a digital signal, foreground extraction and background calculation, as well as vehicle detection, license plate recognition and vehicle tracking, can be performed directly on the digital-signal video frame image without the first and second preprocessing.
The server 102 is a server providing background services for the multi-view camera 101, and may be one server, a server cluster composed of a plurality of servers, or a cloud computing server center, which is not limited in the embodiment of the present invention. In the embodiment of the present invention, a server 102 is illustrated. Server 102 includes a user data storage area.
The following explains the vehicle speed measuring method provided by the embodiment of the invention in detail.
Fig. 4 is a flowchart of a vehicle speed measuring method provided in an embodiment of the present invention, and referring to fig. 4, the method is applied to a multi-view camera, where monitoring ranges of two adjacent cameras in a plurality of cameras in the multi-view camera have an overlapping portion along a road, and the method includes:
step 401: and determining the track of the target vehicle in a spliced coordinate system at the current moment, wherein the target vehicle is any vehicle for measuring speed, and the spliced coordinate system is a coordinate system used for drawing the complete track of the target vehicle in the monitoring range of the cameras.
Step 402: and selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment.
Step 403: and determining the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system.
Step 404: and determining the running speed of the target vehicle according to the actual distance and a first time difference, wherein the first time difference is a difference value between the acquisition times of the video frame images to which the two speed measurement reference points belong respectively.
In the embodiment of the invention, the track of the target vehicle in the splicing coordinate system at the current moment is determined, and then two speed measurement reference points belonging to the monitoring pictures of different cameras are selected from that track. Finally, the running speed of the target vehicle is determined according to the actual distance between the two points and the difference between the acquisition times of the video frame images to which they respectively belong. Because the two selected speed measuring reference points belong to the monitoring pictures of different cameras, the distance between them is generally larger than that between two reference points in the same monitoring picture, and the difference between the acquisition times of their video frame images is also larger. This avoids the error amplification caused by a small distance between the reference points and a small difference between acquisition times, greatly reduces the error of converting a distance in image coordinates into an actual distance on the road, and makes the error of the determined running speed of the target vehicle smaller and the accuracy higher.
Optionally, determining an actual distance between the two speed measurement reference points according to the stitching coordinates of the two speed measurement reference points in the stitching coordinate system includes:
determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
Optionally, determining the track of the target vehicle in the stitching coordinate system at the current moment includes:
acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
when at least one video frame image in the plurality of video frame images comprises a target vehicle, determining vehicle information of the target vehicle in the at least one video frame image to obtain at least one piece of vehicle information, wherein each piece of vehicle information comprises a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one stitching coordinate in the stitching coordinate system according to the number of the at least one camera, wherein the at least one camera is used for acquiring the at least one video frame image;
determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle in the at least one piece of vehicle information and the at least one splicing coordinate;
and when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system to obtain the track of the target vehicle in the splicing coordinate system at the current moment.
Optionally, the determining the vehicle information of the target vehicle in the at least one video frame image to obtain at least one vehicle information includes:
performing vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image;
determining license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
determining image coordinates of the target vehicle in an image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle to obtain at least one image coordinate;
and converting the at least one image coordinate into the calibration coordinate system to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
Optionally, the serial numbers of the plurality of cameras increase one by one from 0;
the step of converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one stitching coordinate in the stitching coordinate system according to the number of the at least one camera comprises the following steps:
and taking the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as the abscissa of the target vehicle in the splicing coordinate system, and correspondingly adding the ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information and the serial number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
Optionally, the determining the current stitching coordinate of the target vehicle in the stitching coordinate system according to the license plate information of the target vehicle in the at least one piece of vehicle information and the at least one stitching coordinate includes:
when the number of the at least one splicing coordinate is two, and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system, where, compared with the other cameras in the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and characters of license plate numbers included in license plate information of the target vehicle corresponding to the at least one splicing coordinate are not identical, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the at least one splicing coordinate;
and when the distance between the at least one splicing coordinate is smaller than the preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
Optionally, before determining the trajectory of the target vehicle in the stitching coordinate system at the current time, the method further includes:
acquiring a reference image through each camera in the plurality of cameras, wherein each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and the reference images shot by two adjacent cameras share one identical calibration line;
and establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
Optionally, the establishing the stitching coordinate system according to the serial numbers of the plurality of cameras, and two calibration lines and at least two lane lines included in each of the plurality of collected reference images includes:
according to the serial numbers of the cameras, the same calibration lines in the reference images shot by two adjacent cameras in the cameras are overlapped, and the leftmost lane lines in the reference images are connected to obtain a vertical connecting line;
determining the directions of the numbers of the cameras from small to large as the direction of the vertical connecting line;
acquiring a bottommost calibration line in a reference image shot by a camera with the smallest serial number, and determining the horizontal right direction as the direction of the bottommost calibration line;
and taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
Optionally, after determining the traveling speed of the target vehicle according to the actual distance and the first time difference, the method further includes:
capturing a vehicle image of the target vehicle driving on the road;
the vehicle image of the target vehicle, the traveling speed of the target vehicle, and the vehicle information of the target vehicle are transmitted to the server.
Optionally, after determining the trajectory of the target vehicle in the stitching coordinate system at the current time, the method further includes:
when the lane changing behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system, acquiring a lane changing process diagram of the target vehicle;
when the end of the lane changing behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of the lane line on the road in the splicing coordinate system, determining the duration of the lane changing behavior of the target vehicle;
when the duration is less than the reference duration and the target vehicle runs in at least three different lanes within the duration, determining that the lane change behavior of the target vehicle is a continuous lane change behavior;
after determining that the lane change behavior of the target vehicle is the continuous lane change behavior, determining a continuous lane change event evidence obtaining graph of the target vehicle according to the collected lane change process graph;
and sending the vehicle information of the target vehicle and the continuous lane change event forensics graph to a server.
Optionally, the determining a continuous lane change event forensics map of the target vehicle according to the collected lane change process map includes:
selecting a first process diagram from the acquired lane changing process diagrams, wherein the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagram;
intercepting a license plate expansion area from the first process diagram, and determining the intercepted license plate expansion area as a vehicle close-up map of the target vehicle, wherein the license plate expansion area is an area which, after being expanded from the license plate area, includes the head or the tail of the target vehicle;
and determining the collected lane change process map and the vehicle close-up map as a continuous lane change event evidence obtaining map of the target vehicle.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
The embodiment of the invention further provides a flow chart of a vehicle speed measuring method, which expands on the embodiment shown in fig. 4. Referring to fig. 5, the method is applied to a multi-view camera, the multi-view camera comprises a plurality of cameras, monitoring ranges of two adjacent cameras in the plurality of cameras have overlapped parts along the direction of a road, and the method comprises the following steps:
step 501: and determining the track of the target vehicle in the splicing coordinate system at the current moment by the multi-view camera.
It should be noted that the multi-view camera includes a plurality of cameras, and monitoring ranges of two adjacent cameras in the plurality of cameras have an overlapping portion along a road direction. In addition, the target vehicle is any vehicle that performs speed measurement. And moreover, the splicing coordinate system is a coordinate system used for drawing a complete track of the target vehicle in the monitoring range of the plurality of cameras.
Specifically, the operation of step 501 can be realized by steps 5011 to 5015 as follows:
step 5011: the multi-view camera acquires video frame images through a plurality of cameras to obtain a plurality of video frame images.
It should be noted that the video frame image is a single frame image in the video captured by the multiple cameras.
Optionally, after the multi-view camera acquires a plurality of video frame images, the plurality of video frame images may be preprocessed to obtain a plurality of preprocessed video frame images.
Specifically, the multi-view camera may preprocess the obtained multiple video frame images through preprocessing methods such as smoothing, noise reduction, format conversion, and downsampling. The sharpness of the captured video frame images may be relatively low, and the video frame images may be interfered by noise during transmission, resulting in a poor visual effect, so the multi-view camera may perform smoothing, noise reduction and similar preprocessing on the video frame images to improve their visual effect. In addition, since the format of the video frame images subsequently used by the multi-view camera may differ from the format of the acquired video frame images, the multi-view camera may convert them into a format that can be used subsequently. Downsampling reduces the number of sampling points of a video frame image in order to improve the efficiency with which the multi-view camera processes it.
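For illustration, the preprocessing pass can be sketched as follows, assuming OpenCV-style primitives; the kernel sizes, target format and downsampling factor are illustrative choices and are not prescribed by this embodiment:

```python
import cv2

def preprocess_frame(frame, downsample_factor=2):
    """Smooth, denoise, convert and downsample one captured video frame.

    All parameter values are illustrative; the embodiment does not fix
    specific kernels, formats or sampling rates.
    """
    # Smoothing and noise reduction offset interference picked up while
    # the frame was transmitted, improving its visual effect.
    smoothed = cv2.GaussianBlur(frame, (3, 3), 0)
    denoised = cv2.medianBlur(smoothed, 3)
    # Format conversion, e.g. to grayscale if later stages expect it.
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    # Downsampling reduces the number of pixels processed later on.
    h, w = gray.shape[:2]
    return cv2.resize(gray, (w // downsample_factor, h // downsample_factor))
```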
It should be noted that preprocessing the plurality of acquired video frame images is an optional step; that is, after the multi-view camera acquires the plurality of video frame images, the vehicle information can be determined directly from them without preprocessing. The following description takes video frame images without preprocessing as an example.
Step 5012: when at least one video frame image in the plurality of video frame images comprises the target vehicle, determining vehicle information of the target vehicle in the at least one video frame image to obtain at least one piece of vehicle information.
It should be noted that each piece of vehicle information in the at least one piece of vehicle information includes the calibration coordinates of the target vehicle in the calibration coordinate system and the license plate information of the target vehicle.
Specifically, the operation of step 5012 can be realized by the following steps (1) to (4):
step (1): the multi-view camera performs vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image.
It should be noted that the multi-view camera may determine foreground information and background information in at least one video frame image by performing foreground extraction and background calculation on the at least one video frame image, so as to perform vehicle detection on the at least one video frame image according to the foreground information and the background information.
In the at least one video frame image, the target vehicle may be used as a foreground, and other objects except the target vehicle may be used as a background, and the foreground and the background may be distinguished by two colors with strong contrast, so that foreground information and background information of the at least one video frame image may be obtained.
The multi-view camera can detect the vehicle position of the target vehicle by adopting a deep learning method.
In one possible implementation, the multi-view camera may further detect the vehicle type of the target vehicle from the at least one video frame image by using a deep learning method.
Step (2): and the multi-view camera determines the license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle.
The license plate information comprises a license plate position and a license plate number. Optionally, the license plate information may further include a license plate type. The multi-view camera can determine the license plate position and the license plate number of the target vehicle from each video frame image included in the at least one video frame image by adopting a deep learning method, and determine the license plate type by adopting a color-based license plate recognition algorithm.
It should be noted that, when determining the license plate type, the multi-view camera may use a color-based license plate recognition algorithm to recognize the license plate ground color and the color of the license plate number characters, and match the recognized colors against the license plate type rule; the matching result is the license plate type of the vehicle. The license plate type rule comprises the correspondence between the license plate ground color and character color on the one hand and the license plate type on the other. For example, if the identified ground color is blue and the character color is white, that is, a blue plate with white characters, the corresponding license plate type is the license plate of an ordinary small vehicle; if the identified ground color is yellow and the character color is black, that is, a yellow plate with black characters, the corresponding license plate type is the license plate of a large vehicle.
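The license plate type rule lends itself to a simple lookup from (ground color, character color) pairs to plate types. A minimal sketch, containing only the two pairings named above, might be:

```python
# Hypothetical encoding of the license plate type rule: the correspondence
# between (license plate ground color, character color) and plate type.
PLATE_TYPE_RULE = {
    ("blue", "white"): "ordinary small vehicle plate",
    ("yellow", "black"): "large vehicle plate",
}

def match_plate_type(ground_color, char_color):
    """Return the plate type matching the recognized colors, or None."""
    return PLATE_TYPE_RULE.get((ground_color, char_color))
```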
In addition, in a possible implementation manner, the multi-view camera can detect the vehicle type while detecting the vehicle position through the step (1), and at this time, since the vehicle type is already determined, the license plate information may include only the license plate position and the license plate number when determining the license plate information. In another possible implementation manner, the multi-view camera may detect only the vehicle position and not the vehicle type in step (1), and at this time, since the vehicle type is not yet determined, when determining the license plate information, the license plate information may include the license plate position, the license plate number and the license plate type, and the vehicle type is determined by the license plate type. The embodiment of the present invention is not limited thereto. When the vehicle type is determined according to the license plate type, the vehicle type corresponding to the license plate type can be determined according to the corresponding relation between the license plate type and the vehicle type, for example, the license plate of a normal small vehicle corresponds to a normal small vehicle, and the license plate of a large vehicle corresponds to a large vehicle.
And (3): and the multi-view camera determines the image coordinates of the target vehicle in the image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle so as to obtain at least one image coordinate.
In a possible implementation manner, an image coordinate system may be established according to the video frame images acquired by each camera in the multi-view camera, that is, the video frame images acquired by different cameras correspond to different image coordinate systems. Then, according to the vehicle position of the target vehicle, determining a pixel point of the target vehicle in each video frame image included in the at least one video frame image, and determining the coordinate of the pixel point in the image coordinate system of each video frame image as the image coordinate of the target vehicle in each video frame image.
The left lower corner of the video frame image acquired by each camera can be used as a coordinate origin, the edge at the lowest side of the video frame image acquired by each camera is used as a horizontal axis, and the edge at the leftmost side of the video frame image acquired by each camera is used as a vertical axis, so that an image coordinate system is established. Thus, for the same camera, the video frame images acquired by the camera correspond to the same image coordinate system. Certainly, the image coordinate system is only an implementation manner, and in practical application, an image coordinate system may also be established by using other points in the video frame image acquired by each camera as an origin and using other edges as a horizontal axis and a vertical axis, which is not exemplified in the embodiment of the present invention.
It should be noted that, because the vehicle body of the target vehicle has a certain area, the target vehicle generally occupies a plurality of pixel points in each video frame image. For convenience of calculation, the embodiment of the present invention may determine the pixel point at the center position among the plurality of pixel points as the pixel point where the target vehicle is located in each video frame image. Of course, any other one of the plurality of pixel points may also be used as the pixel point where the target vehicle is located in each video frame image.
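Assuming the detected vehicle position is available as a bounding box (left, bottom, width, height) in an image coordinate system with its origin at the lower-left corner of the frame, reducing the vehicle to its center pixel can be sketched as:

```python
def vehicle_image_coordinate(bbox):
    """Reduce a detected vehicle to one representative pixel.

    bbox = (x_left, y_bottom, width, height), expressed in an image
    coordinate system whose origin is the lower-left corner of the frame.
    The center of the box is one convenient choice; any other point of
    the box could serve instead.
    """
    x_left, y_bottom, width, height = bbox
    return (x_left + width / 2.0, y_bottom + height / 2.0)
```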
And (4): and converting the at least one image coordinate into a calibration coordinate system to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
It should be noted that, when a calibration coordinate system is established, the multi-view camera may randomly select one video frame image from the plurality of collected video frame images as a reference image, where the reference image includes two calibration lines distributed up and down and at least two lane lines distributed left and right; then, the leftmost lane line, directed vertically upward, is taken as the vertical axis of the calibration coordinate system, and the bottommost calibration line, directed horizontally rightward, is taken as the horizontal axis; and the calibration coordinate system is established according to the distance between the two calibration lines and the distance between two adjacent lane lines.
It is noted that after the calibration coordinate system is established, the calibration coordinate system can be used not only to determine the calibration coordinates of the target vehicle, but also to determine the calibration coordinates of other vehicles in the video frame image.
The following describes a process of establishing a calibration coordinate system and a process of determining calibration coordinates of the target vehicle in the calibration coordinate system.
As shown in fig. 6, fig. 6 is a schematic diagram of establishing a calibration coordinate system, where two calibration lines and three lane lines are taken as an example, the two calibration lines are respectively calibration line 1 and calibration line 2, the three lane lines are respectively lane line 1, lane line 2 and lane line 3, a width between lane line 1 and lane line 2 is a width of a first lane, and a width between lane line 2 and lane line 3 is a width of a second lane. In the calibration coordinate system, the coordinates of the intersection between the horizontal axis of the calibration coordinate system and the vertical axis of the calibration coordinate system are (0,0), the coordinates of the intersection between the lane line 3 and the horizontal axis of the calibration coordinate system are (1,0), the coordinates of the intersection between the calibration line 2 and the vertical axis of the calibration coordinate system are (0,1), and the coordinates of the intersection between the lane line 3 and the calibration line 2 are (1, 1). The dashed line is lane line 2 and the bounding box is used to represent a plurality of different vehicles including the target vehicle.
After the calibration coordinate system is established, the multi-view camera may represent the target vehicle by the lower boundary center point of the boundary frame of the target vehicle, and determine the calibration coordinates of the target vehicle through the following steps:
1. and determining the abscissa of the target vehicle in the calibration coordinate system.
The multi-view camera can determine the abscissa of the target vehicle according to the total number of lanes, the distance between the left lane line of the lane where the target vehicle is located and the target vehicle, and the width of the lane where the target vehicle is located, through the following formula one:

The formula one: x = (i - 1 + a/b) / n

wherein x is the abscissa of the target vehicle, n is the total number of lanes, i is the index, counted from the left, of the lane where the target vehicle is located (an integer not greater than n), a is the distance between the left lane line of the lane where the target vehicle is located and the target vehicle, and b is the width of the lane where the target vehicle is located.
2. And determining the ordinate of the target vehicle in the calibration coordinate system.
Fig. 7 is a schematic view of the setup of the multi-view camera. Fig. 7 corresponds to fig. 6, i.e., the viewing angle of fig. 7 is the viewing angle from the right side of fig. 6, and fig. 7 illustrates the determination of the ordinate of a target vehicle located between the lane line 1 and the lane line 2. In fig. 7, point O is the position of the multi-view camera, point A is the point on the ground where the upright of the multi-view camera stands, OA is the length of the upright, point C corresponds to the intersection of the lane line 2 and the calibration line 1 in fig. 6, point G corresponds to the intersection of the lane line 2 and the calibration line 2 in fig. 6, and point D corresponds to the intersection of the lane line 2 and the horizontal line where the target vehicle is located in fig. 6. L1 is the distance AC, L2 is the distance AG, L is the distance AD, and d is the distance CG.
According to fig. 7, the multi-view camera can determine the ordinate of the target vehicle in the calibration coordinate system according to the following three steps:
(1): according to the vertical distance between the calibration line 1 and the upright of the multi-view camera and the distance between the calibration line 1 and the calibration line 2, a first parameter is determined through the following formula two:

The formula two: k = L1 / d

where k is the first parameter, L1 is the vertical distance between the calibration line 1 and the multi-view camera upright, and d is the distance between the calibration line 1 and the calibration line 2.
(2): determining a second parameter according to the imaging width of the distance between the first intersection point and the second intersection point, the imaging width of the distance between the third intersection point and the fourth intersection point, and the imaging width of the distance between the fifth intersection point and the sixth intersection point, through the following formula three:

The formula three: m = (p × q) / (r × (p - q))

wherein m is the second parameter, p is the imaging width of the distance between the first intersection point and the second intersection point, q is the imaging width of the distance between the third intersection point and the fourth intersection point, and r is the imaging width of the distance between the fifth intersection point and the sixth intersection point. The first intersection point is the intersection point of the calibration line 1 and the lane line 1, and the second intersection point is the intersection point of the calibration line 1 and the lane line 2. The third intersection point is the intersection point of the calibration line 2 and the lane line 1, and the fourth intersection point is the intersection point of the calibration line 2 and the lane line 2. The fifth intersection point is the intersection point of the horizontal line of the target vehicle and the lane line 1, and the sixth intersection point is the intersection point of the horizontal line of the target vehicle and the lane line 2.
It should be noted that the distance between the first intersection point and the second intersection point, the distance between the third intersection point and the fourth intersection point, and the distance between the fifth intersection point and the sixth intersection point are the same when actually measured on the road; however, since the distances from the multi-view camera to the calibration line 1, to the calibration line 2, and to the target vehicle all differ, the imaging widths of these three distances in the multi-view camera are different.
It should be further noted that before determining the second parameter through step (2), the multi-view camera may also relate the imaging widths p, q and r to the distances L1, L2 and L through the following formula four to formula nine:
The formula four: OC × p = OG × q = OD × r

wherein OC is the distance between the position of the multi-view camera and the point C, OG is the distance between the position of the multi-view camera and the point G, and OD is the distance between the position of the multi-view camera and the point D. Since the length OA of the upright is generally small relative to L1, L2 and L, the distances OC, OG and OD are approximately proportional to L1, L2 and L, which gives formula five to formula seven:

The formula five: L1 × p = L2 × q

The formula six: L1 × p = L × r

The formula seven: L2 × q = L × r

The formula eight: L2 = L1 + d = kd + d

The formula nine: L = L1 + CD = kd + yd

wherein CD is the distance between the point C and the point D, and y is the ordinate of the target vehicle in the calibration coordinate system.
It should be noted that the formula four can be obtained according to the imaging principle of the camera. As shown in fig. 8, fig. 8 is a schematic diagram of the imaging principle of the camera. In fig. 8, W is the actual width of the object, f is the focal length of the lens of the multi-view camera, S is the distance between the center of the lens of the camera and the object, and z is the imaging width of the object in the camera, and W × f = S × z is satisfied.
(3): according to the first parameter and the second parameter, the ordinate of the target vehicle in the calibration coordinate system is determined through the following formula ten:

The formula ten: y = m - k

wherein y is the ordinate of the target vehicle in the calibration coordinate system. Substituting formulas five and six into formula three gives m = k + y, from which formula ten follows.
This completes the determination of the calibration coordinates of the target vehicle.
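Taken together, formulas one, two, three and ten amount to the following computation. This is a sketch under the reconstruction given above (in particular the proportionality assumption behind formulas five to seven); all inputs are assumed to come from detection and from the road survey:

```python
def calibration_abscissa(i, n, a, b):
    """Formula one: i is the 1-based index, counted from the left, of the
    lane the target vehicle is in, n the total number of lanes, a the
    distance from the lane's left lane line to the vehicle, and b the
    width of that lane."""
    return (i - 1 + a / b) / n

def calibration_ordinate(L1, d, p, q, r):
    """Formulas two, three and ten: k = L1/d, m = p*q/(r*(p - q)),
    y = m - k. p, q and r are the imaging widths of the same lane
    segment at calibration line 1, calibration line 2 and the
    horizontal line of the target vehicle."""
    k = L1 / d                    # first parameter (formula two)
    m = p * q / (r * (p - q))     # second parameter (formula three)
    return m - k                  # ordinate (formula ten)
```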
Step 5013: and the multi-view camera converts the calibration coordinates of the target vehicle in the at least one piece of vehicle information into at least one corresponding splicing coordinate in a splicing coordinate system according to the number of the at least one camera.
Wherein the plurality of cameras are numbered consecutively starting from 0.
It should be noted that the at least one camera refers to the camera or cameras that acquired the at least one video frame image.
In addition, each camera in the plurality of cameras is provided with a unique number. The unique number of each camera may be carried in the video frame image captured by each camera, so that when the multi-view camera obtains the video frame image, the number of the camera capturing the video frame image may be obtained at the same time, and of course, the multi-view camera may also obtain the number of each camera in other manners, which is not limited in the embodiment of the present invention.
The process of converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into the corresponding at least one stitching coordinate in the stitching coordinate system may be: the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information is taken as the abscissa of the target vehicle in the splicing coordinate system, then the ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information is correspondingly added with the serial number of the at least one camera, and finally the ordinate of the target vehicle in the splicing coordinate system is obtained.
For example, if the calibration coordinate of the target vehicle in the video frame image captured by camera No. 0 is (0.5, 0.6), the stitching coordinate after conversion into the stitching coordinate system is (0.5, 0.6). If the calibration coordinate of the target vehicle in the video frame image acquired by camera No. 1 is (0.5, 0.6), the stitching coordinate after conversion is (0.5, 1.6). If the calibration coordinate of the target vehicle in the video frame image acquired by camera No. 2 is (0.5, 0.6), the stitching coordinate after conversion is (0.5, 2.6), and so on.
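A sketch of this conversion, with coordinates held as plain (x, y) tuples:

```python
def to_stitching_coordinate(calib_coord, camera_no):
    """Keep the abscissa and add the camera number (counted from 0)
    to the ordinate to enter the stitching coordinate system."""
    x, y = calib_coord
    return (x, y + camera_no)

# (0.5, 0.6) seen by camera No. 1 becomes (0.5, 1.6):
x, y = to_stitching_coordinate((0.5, 0.6), 1)
assert (x, round(y, 6)) == (0.5, 1.6)
```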
Step 5014: and the multi-view camera determines the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the at least one piece of vehicle information.
It should be noted that the target vehicle may be located in an overlapping portion of the monitoring ranges of two adjacent cameras in the multi-view camera, that is, the two adjacent cameras in the multi-view camera can simultaneously capture the target vehicle located in the overlapping portion, and at this time, the number of at least one stitching coordinate may be one, and may also be two. When the at least one stitching coordinate is one, the stitching coordinate may be directly determined as the current stitching coordinate of the target vehicle in the stitching coordinate system. When the at least one stitching coordinate is two, the current stitching coordinate of the target vehicle in the stitching coordinate system needs to be determined according to the license plate information of the target vehicle and the at least one stitching coordinate in the at least one piece of vehicle information.
When the number of the at least one stitching coordinate is two, the two stitching coordinates may be coordinates of two different vehicles, or may be two coordinates of the same vehicle, namely the target vehicle. In this case, the multi-view camera can judge according to the license plate information and the distance between the two stitching coordinates. When the multi-view camera judges that the at least one stitching coordinate comprises the coordinates of two different vehicles, it stores the at least one stitching coordinate. When the multi-view camera judges that the at least one stitching coordinate comprises two coordinates of the target vehicle, it can determine one of the stitching coordinates as the current stitching coordinate of the target vehicle in the stitching coordinate system.
Specifically, the operation of step 5014 can be realized by the following steps (1) to (4):
step (1): when the number of the at least one stitching coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one stitching coordinate are completely the same, the stitching coordinate corresponding to the first camera is determined as the current stitching coordinate of the target vehicle in the stitching coordinate system, wherein, compared with the other cameras in the at least one camera, the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest.
When the characters of the license plate number included in the license plate information of the target vehicle corresponding to at least one splicing coordinate are completely the same, it can be stated that the at least one splicing coordinate is the splicing coordinate of the target vehicle. Therefore, the multi-view camera can select one splicing coordinate from the at least one splicing coordinate as the current splicing coordinate of the target vehicle in the splicing coordinate system.
It should be noted that, the larger the size of the pixel area occupied by the target vehicle in the video frame image is, the larger the area of the license plate number of the target vehicle is, and the clearer the license plate number of the target vehicle is. Therefore, the multi-view camera can select the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system.
Step (2): and when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate.
The at least one camera may fail to capture the license plate, or the captured license plate may be severely tilted or too small. Under these circumstances, the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one stitching coordinate may not be completely the same, that is, the multi-view camera cannot determine the current stitching coordinate of the target vehicle in the stitching coordinate system from the license plate characters alone. Therefore, the multi-view camera can determine the matching degree between the license plate numbers in the license plate information of the target vehicle, and further determine the current stitching coordinate of the target vehicle in the stitching coordinate system according to the determined matching degree.
And (3): and when the determined matching degree is within the preset matching degree range, determining the distance between the at least one splicing coordinate.
The multi-view camera can preset a preset matching degree range, and when the determined matching degree is within the preset matching degree range, the multi-view camera can further determine the distance between at least one splicing coordinate in order to further determine the current splicing coordinate of the target vehicle in the splicing coordinate system.
It should be noted that when the determined matching degree is outside the preset matching degree range, it is indicated that the matching degree between the license plate numbers in the license plate information of the target vehicle corresponding to at least one stitching coordinate is small, and at this time, it can be determined that the at least one stitching coordinate is a stitching coordinate of two different vehicles, and then the at least one stitching coordinate is stored.
And (4): and when the distance between the at least one splicing coordinate is smaller than the preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
When the distance between the at least one stitching coordinate is smaller than the preset distance, it can be stated that the at least one stitching coordinate is a stitching coordinate of the target vehicle. Therefore, the multi-view camera can determine the stitching coordinate corresponding to the first camera as the current stitching coordinate of the target vehicle in the stitching coordinate system.
It should be noted that, when the distance between at least one of the stitching coordinates is greater than or equal to the preset distance, it may be stated that the at least one of the stitching coordinates is a stitching coordinate of two different vehicles. At this time, the multi-view camera may store the at least one stitching coordinate.
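Steps (1) to (4) can be collected into one decision routine. The sketch below assumes each observation carries its stitching coordinate, recognized plate number and the pixel area the vehicle occupies; the similarity metric is a stand-in, since the embodiment does not fix how the matching degree is computed:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    stitch_coord: tuple    # (x, y) in the stitching coordinate system
    plate_number: str      # recognized license plate number
    pixel_area: float      # pixel area the vehicle occupies in its frame

def plate_similarity(p1, p2):
    """Assumed matching degree: fraction of aligned positions whose
    characters agree (a stand-in metric)."""
    n = max(len(p1), len(p2))
    return sum(c1 == c2 for c1, c2 in zip(p1, p2)) / n if n else 1.0

def resolve_current_coordinate(a, b, match_range, merge_distance):
    """Steps (1)-(4) of step 5014 for two simultaneous observations.

    Returns the current stitching coordinate of the target vehicle, or
    None when the two coordinates belong to two different vehicles and
    should both be stored.
    """
    # The "first camera" is the one whose frame shows the largest pixel
    # area for the vehicle, hence the clearest license plate.
    first = a if a.pixel_area >= b.pixel_area else b
    if a.plate_number == b.plate_number:                        # step (1)
        return first.stitch_coord
    degree = plate_similarity(a.plate_number, b.plate_number)   # step (2)
    if match_range[0] <= degree <= match_range[1]:              # step (3)
        dx = a.stitch_coord[0] - b.stitch_coord[0]
        dy = a.stitch_coord[1] - b.stitch_coord[1]
        if (dx * dx + dy * dy) ** 0.5 < merge_distance:         # step (4)
            return first.stitch_coord
    return None
```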
In addition, the parameters of two adjacent cameras can influence whether the two stitching coordinates of the target vehicle at the overlapped part are the same. If the parameters of two adjacent cameras are completely consistent, the two stitching coordinates of the target vehicle at the overlapped part are the same. If the parameters of the two adjacent cameras are not completely consistent, the two splicing coordinates of the target vehicle at the overlapping part are not completely the same, and at this time, the multi-view camera can determine the current splicing coordinate of the target vehicle in the splicing coordinate system according to the method in the step, namely according to the license plate information of the target vehicle in the at least one piece of vehicle information and the at least one splicing coordinate.
Step 5015: and when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system to obtain the track of the target vehicle in the splicing coordinate system at the current moment.
It should be noted that, when the current stitching coordinate of the target vehicle in the stitching coordinate system is different from the previous stitching coordinate, it is indicated that the target vehicle is still running, and therefore, the trajectory of the target vehicle in the stitching coordinate system at the current moment can be obtained by connecting the previous stitching coordinate with the current stitching coordinate.
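A sketch of this update, with the trajectory kept as a list of stitching coordinates:

```python
def update_trajectory(track, current_coord):
    """Step 5015: extend the trajectory only when the current stitching
    coordinate differs from the previous one; connecting the last point
    to the new point draws the new track segment."""
    if not track or track[-1] != current_coord:
        track.append(current_coord)
    return track
```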
Further, before step 501, a stitching coordinate system may also be established through steps a-B:
step A: the multi-view camera acquires a reference image through each camera in the plurality of cameras.
It should be noted that the reference image is an image randomly acquired by the cameras, each reference image includes two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line is located in the reference images shot by two adjacent cameras.
And B: and establishing a splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
Specifically, the multi-view camera can coincide the same calibration line in the reference images shot by two adjacent cameras in the cameras according to the serial numbers of the cameras, and connect the leftmost lane line in the reference images to obtain a vertical connecting line. And determining the direction of the serial numbers of the plurality of cameras from small to large as the direction of the vertical connecting line. And acquiring the lowest calibration line in the reference image shot by the camera with the smallest serial number, and determining the horizontal right direction as the direction of the lowest calibration line. And establishing a splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
Because the reference images shot by the two adjacent cameras have the same calibration line, the area corresponding to the same calibration line can be shot by the two cameras at the same time, that is, the area corresponding to the same calibration line is the overlapped part of the monitoring ranges of the two adjacent cameras. Therefore, in order to avoid the problem that the mosaic coordinate system is not accurately established due to the overlapped part when the mosaic coordinate system is established, the embodiment of the invention enables the same calibration line in the reference images shot by two adjacent cameras in the multi-view camera to be overlapped.
It should be noted that, after determining the horizontal axis and the vertical axis of the stitching coordinate system, the multi-view camera may also keep other lane lines in the reference image that are not taken as the vertical axis of the stitching coordinate system, and other calibration lines in the reference image that are not taken as the horizontal axis of the stitching coordinate system.
As shown in fig. 9, fig. 9 is a schematic diagram of establishing a stitching coordinate system. Fig. 9 is a stitching coordinate system established from the reference images (not shown) taken by two adjacent cameras, camera No. 0 and camera No. 1. The calibration line 3 is one of the calibration lines in the reference image shot by camera No. 0, and the horizontal rightward direction is the direction of the calibration line 3; the calibration line 4 is the calibration line shared by the reference images shot by camera No. 0 and camera No. 1; and the calibration line 5 is one of the calibration lines in the reference image shot by camera No. 1. The lane line 4 is the vertical connecting line obtained by connecting the leftmost lane lines in the two reference images, and the direction from camera No. 0 toward camera No. 1 is the direction of the vertical connecting line. The lane line 5 is obtained by connecting the middle lane lines of the two reference images, and the lane line 6 is obtained by connecting the rightmost lane lines of the two reference images.
In fig. 9, the lane line 4, with its direction, is the vertical axis of the stitching coordinate system, and the calibration line 3, with its direction, is the horizontal axis of the stitching coordinate system. The coordinates of the intersection of the calibration line 4 and the vertical axis of the stitching coordinate system are (0,1), and the coordinates of the intersection of the calibration line 5 and the vertical axis of the stitching coordinate system are (0,2). The coordinates of the intersection of the lane line 6 and the horizontal axis of the stitching coordinate system are (1,0), the coordinates of the intersection of the lane line 6 and the calibration line 4 are (1,1), and the coordinates of the intersection of the lane line 6 and the calibration line 5 are (1,2). The dashed line is the lane line 5, and the bounding boxes represent a number of different vehicles.
Step 502: and selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment.
It should be noted that, compared with two speed measurement reference points in the same monitoring picture, the distance between the two speed measurement reference points belonging to the monitoring pictures of different cameras is generally longer, and when the running speed of the target vehicle is determined through the two speed measurement reference points with longer distance, the accuracy of determining the running speed can be improved. Therefore, the embodiment of the invention can select two speed measurement reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment.
The track of the target vehicle in the stitching coordinate system is formed by connecting a plurality of discrete points, and these discrete points are determined by the positions of the target vehicle in the video frame images acquired by the plurality of cameras. For ease of description, the discrete points are referred to as track points. In one possible implementation, the intersection points between the track of the target vehicle in the stitching coordinate system at the current moment and a first straight line and a second straight line can be determined. When both intersection points are track points, the two intersection points can be determined as the two speed measurement reference points. When one intersection point is a track point and the other is not, for example when the intersection with the second straight line is a track point while the intersection with the first straight line is not, the intersection with the second straight line can be determined as one speed measurement reference point, and the track point that is closest to the intersection with the first straight line and whose distance from the second straight line is greater than a reference distance threshold is determined as the other speed measurement reference point. When neither intersection point is a track point, the two track points respectively closest to the two intersection points can be selected from the track, provided that the distance between the two selected track points is greater than or equal to the reference distance threshold, and the two selected track points are determined as the two speed measurement reference points. The first straight line and the second straight line are both parallel to the horizontal axis of the stitching coordinate system, are located in the monitoring pictures of different cameras, and are separated by a distance greater than or equal to the reference distance threshold.
Of course, the embodiment of the present invention may also select two speed measurement reference points belonging to monitoring pictures of different cameras in other manners, which is not limited in the embodiment of the present invention.
For example, camera No. 0 and camera No. 1 are two adjacent cameras, the first straight line is located in the monitoring range of camera No. 0, and the second straight line is located in the monitoring range of camera No. 1. Assuming the reference distance threshold is 1, the first straight line passes through (0,0.75) parallel to the horizontal axis, and the second straight line passes through (0,1.75) parallel to the horizontal axis. It can then be determined whether the two intersection points between the track of the target vehicle in the stitching coordinate system and the two straight lines are track points; if both are, the two intersection points can be determined as the two speed measurement reference points.
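A simplified sketch of the selection logic follows, treating the trajectory as its time-ordered list of track points (x, y, t) and the two straight lines as horizontal levels y1 and y2 in the stitching coordinate system. The tolerance test and the single nearest-point fallback, which collapses the mixed cases described above, are assumptions:

```python
def select_reference_points(track, y1, y2, ref_distance, tol=1e-6):
    """Pick two speed measurement reference points from the track points.

    track: time-ordered list of (x, y, t) track points; y1 and y2 are the
    levels of the first and second straight lines (both parallel to the
    horizontal axis, in different cameras' pictures, with
    |y2 - y1| >= ref_distance). A track point counts as an intersection
    when its ordinate matches a line level within tol.
    """
    on_line_1 = [pt for pt in track if abs(pt[1] - y1) <= tol]
    on_line_2 = [pt for pt in track if abs(pt[1] - y2) <= tol]
    if on_line_1 and on_line_2:
        # Both intersections are track points: use them directly.
        return on_line_1[0], on_line_2[0]
    # Fallback: nearest track points to each line, kept only if their
    # separation still meets the reference distance threshold.
    p1 = min(track, key=lambda pt: abs(pt[1] - y1))
    p2 = min(track, key=lambda pt: abs(pt[1] - y2))
    return (p1, p2) if abs(p2[1] - p1[1]) >= ref_distance else None
```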
Step 503: and the multi-view camera determines the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system.
Specifically, the operation of step 503 can be realized through steps 5031 to 5033 as follows:
step 5031: the multi-view camera determines a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system.
Specifically, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the stitching coordinate system are determined first. The former is then divided by the latter to obtain a first ratio. The abscissas of the two speed measurement reference points in the stitching coordinate system are subtracted to obtain a first value, and the first value is multiplied by the first ratio to obtain the first distance.
Step 5032: and the multi-view camera determines a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system.
Specifically, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the stitching coordinate system are determined first. The former is then divided by the latter to obtain a second ratio. The ordinates of the two speed measurement reference points in the stitching coordinate system are subtracted to obtain a second value, and the second value is multiplied by the second ratio to obtain the second distance.
Step 5033: and the multi-view camera determines the actual distance between the two speed measuring reference points according to the first distance and the second distance.
The first distance is the component of the actual distance between the two speed measurement reference points across the road, in the direction of the calibration lines, and the second distance is the component along the road, in the direction of the lane lines. According to the Pythagorean theorem, the actual distance between the two speed measurement reference points can therefore be calculated by adding the square of the first distance to the square of the second distance and taking the square root of the sum.
For example, suppose the distance between two adjacent lane lines in the stitching coordinate system is 0.5, the distance between two adjacent calibration lines in the stitching coordinate system is 1, the actual distance between two adjacent lane lines on the road is 3 m, and the actual distance between two adjacent calibration lines on the road is 50 m. Dividing 3 m by 0.5 gives a first ratio of 6, and dividing 50 m by 1 gives a second ratio of 50. If the coordinate of the first speed measurement reference point in the stitching coordinate system is (0.7,1.6) and the coordinate of the second is (0.2,0.8), subtracting the abscissas gives 0.7 - 0.2 = 0.5, and multiplying 0.5 by the first ratio 6 gives an across-the-road component of 3 m; subtracting the ordinates gives 1.6 - 0.8 = 0.8, and multiplying 0.8 by the second ratio 50 gives an along-the-road component of 40 m. According to the Pythagorean theorem, the actual distance between the two speed measurement reference points is the square root of the sum of the squares of 3 m and 40 m, which is about 40.11 m.
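Steps 5031 to 5033 reduce to the following computation; the assert replays the worked example:

```python
import math

def actual_distance(pt1, pt2, lane_ratio, calib_ratio):
    """pt1/pt2 are stitching coordinates of the two reference points.
    lane_ratio: metres of real lane spacing per unit of abscissa spacing;
    calib_ratio: metres of real calibration-line spacing per unit of
    ordinate spacing."""
    first = abs(pt1[0] - pt2[0]) * lane_ratio     # across-the-road component
    second = abs(pt1[1] - pt2[1]) * calib_ratio   # along-the-road component
    return math.hypot(first, second)              # Pythagorean theorem

# Worked example: ratios 3 m / 0.5 = 6 and 50 m / 1 = 50.
d = actual_distance((0.7, 1.6), (0.2, 0.8), 6, 50)
assert abs(d - 40.11) < 0.01   # sqrt(3**2 + 40**2) is about 40.11 m
```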
Step 504: and determining the running speed of the target vehicle by the multi-view camera according to the actual distance and the first time difference.
It should be noted that the first time difference is a difference between the acquisition times of the video frame images to which the two speed measurement reference points respectively belong.
Since speed equals distance divided by time, the running speed of the target vehicle is obtained as the quotient of the actual distance and the first time difference.
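As a sketch, with frame acquisition times in seconds and the factor 3.6 converting m/s to km/h:

```python
def travel_speed_kmh(distance_m, t1, t2):
    """Step 504: running speed = actual distance / first time difference,
    where t1 and t2 are the acquisition times of the video frame images
    the two speed measurement reference points belong to."""
    return distance_m / abs(t2 - t1) * 3.6
```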
Further, after the driving speed of the target vehicle is determined, the multi-view camera can capture an image of the vehicle driven by the target vehicle on the road, and send the image of the vehicle of the target vehicle, the driving speed of the target vehicle and the vehicle information of the target vehicle to the server, so that the server can judge whether the target vehicle is in overspeed violation and can obtain evidence of the overspeed violation target vehicle. That is, after step 504, steps 505-506 may also be performed:
step 505: the multi-view camera captures an image of a vehicle driven on a road by a target vehicle.
Step 506: the multi-view camera transmits a vehicle image of the target vehicle, a traveling speed of the target vehicle, and vehicle information of the target vehicle to the server.
It should be noted that, after the multi-view camera sends the vehicle image of the target vehicle, the traveling speed of the target vehicle, and the vehicle information of the target vehicle to the server, the server may compare the traveling speed of the target vehicle with a preset value. If the traveling speed of the target vehicle is greater than the preset value, the server may determine that the target vehicle is speeding, and the vehicle image and the vehicle information of the target vehicle may serve as evidence of the speeding violation.
In the embodiment of the invention, the track of the target vehicle in the stitching coordinate system at the current moment is determined, and two speed measurement reference points belonging to the monitoring pictures of different cameras are then selected from that track. Finally, the running speed of the target vehicle is determined according to the actual distance between the two reference points and the difference between the acquisition times of the video frame images to which they respectively belong. Because the two selected reference points belong to the monitoring pictures of different cameras, their separation is generally larger than that of two reference points within the same monitoring picture, and the difference between the acquisition times of their video frame images is also larger. This avoids the error amplification caused by a small reference-point separation and a small acquisition-time difference, greatly reduces the error introduced when converting distances in image coordinates into actual distances on the road, and therefore yields a running speed with a smaller error and higher accuracy. In addition, since the two cameras corresponding to the two speed measurement reference points belong to the same multi-view camera, they share the same time system; time synchronization is thus guaranteed, and speed errors caused by unsynchronized clocks are avoided.
In the embodiment of the present invention, after step 501, not only the running speed of the target vehicle can be determined through steps 502 to 506 and the image of the vehicle on the road where the target vehicle runs can be captured, but also the continuous lane change behavior of the vehicle can be detected through steps 1001 to 1006 as follows, referring to fig. 10.
Step 1001: and determining the track of the target vehicle in the splicing coordinate system at the current moment by the multi-view camera.
It should be noted that, since step 1001 is the same as step 501 in the foregoing embodiment, further description is omitted in this embodiment of the present invention.
Step 1002: when the multi-view camera determines that the target vehicle has lane changing behavior according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on a road in the splicing coordinate system, a lane changing process diagram of the target vehicle is collected.
The lane lines are drawn in the stitching coordinate system according to the positions of the lane lines on the road in the stitching coordinate system. If the track of the target vehicle in the stitching coordinate system at the current moment intersects a lane line drawn in the stitching coordinate system, it indicates that the target vehicle has driven from one lane to another, and it can be determined that the target vehicle exhibits a lane change behavior. At this time, a lane change process map of the target vehicle may be collected.
It should be noted that the lane change process map may be a video frame image acquired by the multi-view camera when the target vehicle is traveling in the previous lane, a video frame image acquired by the multi-view camera when the target vehicle is traveling on at least one lane line between the previous lane and the current lane, a video frame image acquired by the multi-view camera when the target vehicle is traveling in the current lane, and the like, that is, the lane change process map refers to a video frame image that can indicate that the target vehicle has a lane change behavior.
Step 1003: when the multi-view camera determines that the lane changing behavior of the target vehicle is finished according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of the lane line on the road in the splicing coordinate system, the duration of the lane changing behavior of the target vehicle is determined.
If the track of the target vehicle in the splicing coordinate system at the current moment and the lane line drawn in the splicing coordinate system do not have intersection points any more, it can be determined that the target vehicle has finished lane changing behavior. At this time, the duration between the start time and the end time of the lane change behavior of the target vehicle may be determined as the duration of the lane change behavior of the target vehicle.
Step 1004: and when the duration is less than the reference duration and the target vehicle runs in at least three different lanes within the duration, determining the lane changing behavior of the target vehicle as a continuous lane changing behavior by the multi-view camera.
The continuous lane change behavior is a behavior in which, within a short time, a vehicle changes continuously from one lane to another lane that is separated from it by at least one further lane.
It is to be noted that, when the duration is not less than the reference duration, or the target vehicle does not travel in at least three different lanes within the duration, it is determined that the lane change behavior of the target vehicle is not a continuous lane change behavior, and the multi-view camera may then delete the acquired lane change process map of the target vehicle.
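For illustration, the rule can be sketched as follows; deriving the visited lanes from the track abscissas and the lane line positions in the stitching coordinate system is an assumed convenience, not a prescribed procedure:

```python
import bisect

def lane_index(x, lane_line_xs):
    """1-based index of the lane that abscissa x falls in, given the
    sorted abscissas of the lane lines in the stitching coordinate
    system."""
    return bisect.bisect_right(lane_line_xs, x)

def is_continuous_lane_change(duration, reference_duration, lane_sequence):
    """The lane change counts as continuous when it completes within the
    reference duration and visits at least three different lanes."""
    return duration < reference_duration and len(set(lane_sequence)) >= 3
```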
Step 1005: after determining that the lane-change behavior of the target vehicle is continuous lane changing, the multi-view camera determines the continuous-lane-change forensic map of the target vehicle from the collected lane-change process maps.
It should be noted that the continuous-lane-change forensic map of the target vehicle may include, from the collected lane-change process maps, a video frame image of the target vehicle in the first lane, a video frame image of the target vehicle on the lane line between the first lane and the second lane, a video frame image of the target vehicle in the second lane, a video frame image of the target vehicle on the lane line between the second lane and the third lane, a video frame image of the target vehicle in the third lane, and the like; that is, the continuous-lane-change forensic map consists of video frame images that can show that the target vehicle has continuous lane-change behavior.
Specifically, the operation of step 1005 may be implemented by the following steps (1) to (3):
Step (1): the multi-view camera selects a first process map from the collected lane-change process maps.
It should be noted that the first process map is the collected lane-change process map in which the pixel area occupied by the target vehicle is the largest.
In addition, the larger the pixel area occupied by the target vehicle, the larger the region of its license plate in the image and the clearer the license plate number.
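A minimal sketch of this selection, assuming each collected process map carries the target vehicle's bounding box; the dictionary keys are illustrative names, not taken from the patent.

```python
def select_first_process_map(process_maps):
    # Pick the map whose vehicle bounding box covers the most pixels,
    # so that the license plate is likely to be the clearest.
    def box_area(m):
        x0, y0, x1, y1 = m["vehicle_box"]
        return max(0, x1 - x0) * max(0, y1 - y0)
    return max(process_maps, key=box_area)
```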
Step (2): the multi-view camera crops the license plate expansion region from the first process map and takes the cropped region as the vehicle close-up image of the target vehicle.
It should be noted that the license plate expansion region is the region obtained by expanding the detected license plate region until it also covers the head or the tail of the target vehicle.
The multi-view camera may locate the license plate region in the first process map and detect features of the target vehicle around it. If the detected features match the head of the target vehicle, the license plate is at the head, and a region covering both the plate and the head is cropped as the license plate expansion region. If the detected features match the tail of the target vehicle, the license plate is at the tail, and a region covering both the plate and the tail is cropped as the license plate expansion region.
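The crop itself can be sketched as below, assuming the frame is an OpenCV/NumPy image and the plate box is in pixel coordinates; the expansion factor is an illustrative choice, not a value from the patent.

```python
def crop_plate_expansion(frame, plate_box, expand=2.5):
    # Grow the plate box around its center so the crop also covers the
    # head or tail of the vehicle, then clip the crop to the frame.
    h, w = frame.shape[:2]
    x0, y0, x1, y1 = plate_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half_w = (x1 - x0) * expand / 2.0
    half_h = (y1 - y0) * expand / 2.0
    xa, xb = int(max(0, cx - half_w)), int(min(w, cx + half_w))
    ya, yb = int(max(0, cy - half_h)), int(min(h, cy + half_h))
    return frame[ya:yb, xa:xb]
```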
Step (3): the multi-view camera takes the collected lane-change process maps together with the vehicle close-up image as the continuous-lane-change forensic map of the target vehicle.
The multi-view camera may combine the collected lane-change process maps and the vehicle close-up image into one image, and use the combined image as the continuous-lane-change forensic map of the target vehicle. In one possible implementation, the multi-view camera first scales the collected lane-change process maps, then combines the scaled maps and the vehicle close-up image into one image. In another possible implementation, it scales both the lane-change process maps and the vehicle close-up image before combining them. The embodiment of the present invention is not limited in this respect.
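One way to realize the first implementation, sketched with Pillow; the row-plus-footer layout and the thumbnail size are assumptions made for illustration.

```python
from PIL import Image

def compose_forensic_map(process_maps, close_up, thumb=(480, 270)):
    # Scale the process maps to thumbnails, tile them in one row,
    # and paste the vehicle close-up image underneath.
    thumbs = [im.resize(thumb) for im in process_maps]
    width = max(thumb[0] * len(thumbs), close_up.width)
    canvas = Image.new("RGB", (width, thumb[1] + close_up.height))
    for i, t in enumerate(thumbs):
        canvas.paste(t, (i * thumb[0], 0))
    canvas.paste(close_up, (0, thumb[1]))
    return canvas
```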
Step 1006: the multi-view camera sends the vehicle information of the target vehicle and the continuous-lane-change forensic map to the server.
In the embodiment of the invention, the track of the target vehicle in the splicing coordinate system at the current moment is determined. When lane-change behavior of the target vehicle is detected from this track and the positions of the lane lines on the road in the splicing coordinate system, lane-change process maps of the target vehicle are collected. When the lane-change behavior is determined to have finished, its duration is determined. When the duration is less than the reference duration and the target vehicle has traveled in at least three different lanes within that duration, the lane-change behavior is determined to be continuous lane changing. The continuous-lane-change forensic map of the target vehicle is then determined from the collected lane-change process maps, and finally the vehicle information of the target vehicle and the forensic map are sent to the server. Because the track of the target vehicle in the splicing coordinate system is stitched together from the monitoring pictures of multiple cameras, the lane lines it covers are generally long, and the observable duration of a lane-change behavior is generally long as well. The collected lane-change process maps can therefore show completely whether the target vehicle changed lanes continuously across the monitoring pictures of the multiple cameras, so the continuous-lane-change forensic map can be determined accurately and sent to the server as evidence of the behavior.
Fig. 11 is a block diagram of a vehicle speed measuring device according to an embodiment of the present invention. The device is applied to a multi-view camera in which the monitoring ranges of two adjacent cameras overlap along the direction of the road. Referring to fig. 11, the device comprises a first determining module 1101, a selecting module 1102, a second determining module 1103 and a third determining module 1104.
The first determining module 1101 is configured to determine a track of a target vehicle in a stitching coordinate system at the current time, where the target vehicle is any vehicle that performs speed measurement, and the stitching coordinate system is a coordinate system used for drawing a complete track of the target vehicle within a monitoring range of the multiple cameras;
a selecting module 1102, configured to select two speed measurement reference points belonging to monitoring pictures of different cameras from a track of a target vehicle in the stitching coordinate system at a current time;
a second determining module 1103, configured to determine an actual distance between the two speed measurement reference points according to the splicing coordinates of the two speed measurement reference points in the splicing coordinate system;
and a third determining module 1104, configured to determine a driving speed of the target vehicle according to the actual distance and a first time difference, where the first time difference is a difference between acquisition times of video frame images to which the two speed measurement reference points belong respectively.
Optionally, the second determining module 1103 includes:
the first determining submodule is used for determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
the second determining submodule is used for determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and the third determining submodule is used for determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
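The first, second and third determining submodules above correspond to a per-axis scaling followed by a combination. A minimal sketch is given below, assuming the first distance is taken along the horizontal (lane-line) axis and the second along the vertical (calibration-line) axis, and that the two are combined by the Pythagorean relation, which the text leaves implicit.

```python
import math

def actual_distance(p1, p2, lane_gap_m, lane_gap_u, calib_gap_m, calib_gap_u):
    # p1, p2: stitching coordinates of the two speed-measurement reference
    # points. Each axis is scaled by the ratio of the real spacing (meters)
    # to the spacing in stitching-coordinate units.
    d1 = abs(p2[0] - p1[0]) * lane_gap_m / lane_gap_u    # first distance
    d2 = abs(p2[1] - p1[1]) * calib_gap_m / calib_gap_u  # second distance
    return math.hypot(d1, d2)
```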
Optionally, the first determining module 1101 includes:
the acquisition submodule is used for acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
the fourth determining submodule is used for determining vehicle information of the target vehicle in at least one video frame image when the target vehicle is included in at least one video frame image in the plurality of video frame images so as to obtain at least one piece of vehicle information, and each piece of vehicle information comprises a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one splicing coordinate in the splicing coordinate system according to the serial number of the at least one camera, and the at least one camera is used for acquiring the at least one video frame image;
the fifth determining submodule is used for determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle in the at least one piece of vehicle information and the at least one splicing coordinate;
and the track drawing submodule is used for connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system when the current splicing coordinate and the previous splicing coordinate of the target vehicle in the splicing coordinate system are different so as to obtain the track of the target vehicle in the splicing coordinate system at the current moment.
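Seen as code, the track-drawing step reduces to appending a point only when the vehicle has moved; in the sketch below the track is held as a list of stitching coordinates, with consecutive entries implicitly connected by line segments, and the names are illustrative.

```python
def update_track(track, current_xy):
    # Append the current stitching coordinate only when it differs from
    # the previous one; otherwise the track is left unchanged.
    if not track or track[-1] != current_xy:
        track.append(current_xy)
    return track
```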
Optionally, the fourth determining sub-module includes:
the vehicle detection unit is used for carrying out vehicle detection on the at least one video frame image so as to determine the vehicle position of the target vehicle in each video frame image;
the first determining unit is used for determining the license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
the second determining unit is used for determining the image coordinates of the target vehicle in the image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle so as to obtain at least one image coordinate;
and the conversion unit is used for converting the at least one image coordinate into the calibration coordinate system so as to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
Optionally, the plurality of cameras are numbered consecutively upward from 0;
the conversion sub-module is further configured to take the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as the abscissa of the target vehicle in the stitching coordinate system, and to add the ordinate of that calibration coordinate to the number of the corresponding camera to obtain the ordinate of the target vehicle in the stitching coordinate system.
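Under that numbering, the conversion is a one-line shift of the ordinate. The sketch below assumes the calibration ordinate is normalized so that one camera's field of view spans one unit along the road, which is what makes simple addition of the camera number meaningful.

```python
def to_stitching_coords(calib_xy, camera_no):
    # Keep the abscissa; shift the ordinate by the camera's number
    # (cameras are numbered 0, 1, 2, ... along the road).
    x, y = calib_xy
    return (x, y + camera_no)
```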
Optionally, the fifth determining sub-module includes:
the third determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the number of the at least one splicing coordinate is two and the characters of the license plate number included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are completely the same, and compared with other cameras in the at least one camera, the size of the pixel area occupied by the target vehicle in the video frame image shot by the first camera is the largest;
the fourth determining unit is used for determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate when the number of the at least one splicing coordinate is two and the characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate are not completely the same;
a fifth determining unit, configured to determine a distance between the at least one stitching coordinate when the determined matching degree is within a preset matching degree range;
and the sixth determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between the at least one splicing coordinate is smaller than the preset distance.
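Taken together, the third to sixth determining units implement a small decision procedure for the overlap region where two cameras see the same vehicle. The following sketch is one reading of it; modeling the matching degree as character-level agreement between the two recognized plate numbers, the thresholds, and the fall-through behavior are all assumptions for illustration.

```python
import math

def current_stitching_coord(cands, match_lo=0.6, max_gap=0.5):
    # cands: up to two dicts with keys 'coord', 'plate', 'pixel_area',
    # one per camera that currently sees the target vehicle.
    if len(cands) == 1:
        return cands[0]["coord"]
    a, b = cands
    by_area = max(cands, key=lambda c: c["pixel_area"])
    if a["plate"] == b["plate"]:          # identical plate characters
        return by_area["coord"]
    same = sum(p == q for p, q in zip(a["plate"], b["plate"]))
    degree = same / max(len(a["plate"]), len(b["plate"]))
    if degree >= match_lo and math.dist(a["coord"], b["coord"]) < max_gap:
        return by_area["coord"]           # likely the same vehicle
    return None                           # treat as different vehicles
```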
Optionally, the apparatus further comprises:
the first acquisition module is used for acquiring a reference image through each camera in the plurality of cameras, each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line is arranged in the reference images shot by two adjacent cameras;
and the establishing module is used for establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
Optionally, the establishing module includes:
the calibration line superposition submodule is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the serial numbers of the cameras and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
the sixth determining submodule is used for determining the direction from small to large of the serial numbers of the cameras as the direction of the vertical connecting line;
the seventh determining submodule is used for acquiring the lowest calibrated line in the reference image shot by the camera with the smallest serial number and determining the horizontal right direction as the direction of the lowest calibrated line;
and the establishing submodule is used for taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
Optionally, the apparatus further comprises:
the snapshot module is used for snapshot of a vehicle image of the target vehicle running on the road;
the first sending module is used for sending the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to the server.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a lane change process diagram of the target vehicle when the lane change behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system;
the fourth determining module is used for determining the duration of the lane changing behavior of the target vehicle when the lane changing behavior of the target vehicle is determined to be finished according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of the lane line on the road in the splicing coordinate system;
the fifth determining module is used for determining that the lane changing behavior of the target vehicle is a continuous lane changing behavior when the duration is less than the reference duration and the target vehicle runs in at least three different lanes within the duration;
the sixth determining module is used for determining a continuous lane changing event evidence obtaining graph of the target vehicle according to the collected lane changing process graph after determining that the lane changing behavior of the target vehicle is the continuous lane changing behavior;
and the second sending module is used for sending the vehicle information of the target vehicle and the continuous lane change event evidence obtaining graph to the server.
Optionally, the sixth determining module includes:
the selection submodule is used for selecting a first process diagram from the acquired lane changing process diagrams, and the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagrams;
the eighth determining submodule is used for intercepting a license plate expansion area from the first process diagram and determining the intercepted license plate expansion area as a vehicle sketch map of the target vehicle, wherein the license plate expansion area is an area which comprises the head or the tail of the target vehicle after being expanded according to the license plate area;
and the ninth determining submodule is used for determining the collected lane change process diagram and the vehicle close-up image as a continuous lane change event evidence obtaining diagram of the target vehicle.
In the embodiment of the invention, the track of the target vehicle in the splicing coordinate system at the current moment is determined, and two speed measurement reference points belonging to the monitoring pictures of different cameras are then selected from this track. The actual distance between the two reference points is determined from their splicing coordinates, and finally the running speed of the target vehicle is determined from that distance and the difference between the acquisition times of the video frame images to which the two reference points respectively belong. Because the two selected reference points belong to the monitoring pictures of different cameras, the distance between them is generally far larger than between two reference points within a single monitoring picture, and so is the difference between the acquisition times of their video frame images. This avoids the error amplification caused by a small distance and a small time difference, greatly reduces the error of converting image-coordinate distances into actual road distances, and thus makes the determined running speed of the target vehicle more accurate.
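The final computation is then straightforward; a sketch, with the distance already converted to meters:

```python
def travel_speed_kmh(distance_m, t1_s, t2_s):
    # Speed from the actual distance between the two reference points and
    # the capture times (in seconds) of the frames they were observed in.
    dt = abs(t2_s - t1_s)
    if dt == 0:
        raise ValueError("reference points must come from different frames")
    return distance_m / dt * 3.6  # m/s -> km/h
```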
It should be noted that: when the vehicle speed measuring device provided by the above embodiment is used for measuring the speed of the vehicle, only the division of the functional modules is taken as an example, in practical application, the function distribution can be completed by different functional modules according to needs, that is, the internal structure of the vehicle speed measuring device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the vehicle speed measuring device and the vehicle speed measuring method provided by the embodiment belong to the same concept, and the specific implementation process is described in the method embodiment in detail and is not described herein again.
Fig. 12 is a schematic structural diagram of a vehicle speed measuring device 1200 according to an embodiment of the present invention. The vehicle speed measuring device 1200 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the memory 1202 stores at least one instruction that is loaded and executed by the processor 1201. Of course, the vehicle speed measuring device 1200 may further include a wired or wireless network interface, a keyboard, an input/output interface and other components to facilitate input and output, as well as other components for implementing the functions of the device, which are not described herein again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, is also provided that includes instructions executable by a processor in a multi-view camera to perform the method for measuring vehicle speed in the above-described embodiments. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (20)
1. A vehicle speed measurement method, applied to a multi-view camera, wherein monitoring ranges of two adjacent cameras among a plurality of cameras included in the multi-view camera overlap along the direction of the road, the method comprising the following steps:
determining the track of a target vehicle in a splicing coordinate system at the current moment, wherein the target vehicle is any vehicle for measuring speed, and the splicing coordinate system is a coordinate system used for drawing the complete track of the target vehicle in the monitoring range of the plurality of cameras;
selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment;
determining the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system;
determining the running speed of the target vehicle according to the actual distance and a first time difference, wherein the first time difference is a difference value between the acquisition times of the video frame images to which the two speed measurement reference points belong respectively;
the determining the track of the target vehicle in the stitching coordinate system at the current moment comprises the following steps:
acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
when at least one video frame image in the plurality of video frame images comprises the target vehicle, determining vehicle information of the target vehicle in the at least one video frame image to obtain at least one piece of vehicle information, wherein each piece of vehicle information comprises a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one stitching coordinate in the stitching coordinate system according to the number of at least one camera, wherein the at least one camera is used for acquiring the at least one video frame image;
determining the current splicing coordinate of the target vehicle in the splicing coordinate system according to the license plate information of the target vehicle and the at least one splicing coordinate in the at least one piece of vehicle information;
when the current splicing coordinate of the target vehicle in the splicing coordinate system is different from the previous splicing coordinate, connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system to obtain the track of the target vehicle in the splicing coordinate system at the current moment;
the number of the plurality of cameras is increased from 0 one by one, and the step of converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into the corresponding at least one stitching coordinate in the stitching coordinate system according to the number of the at least one camera comprises the steps of:
and taking the abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as the abscissa of the target vehicle in the splicing coordinate system, and correspondingly adding the ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information and the number of the at least one camera to obtain the ordinate of the target vehicle in the splicing coordinate system.
2. The method of claim 1, wherein said determining an actual distance between said two speed reference points from their stitching coordinates in said stitching coordinate system comprises:
determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
3. The method of claim 1, wherein said determining vehicle information of the target vehicle in the at least one video frame image to obtain at least one vehicle information comprises:
performing vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image;
determining license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
determining image coordinates of the target vehicle in an image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle to obtain at least one image coordinate;
and converting the at least one image coordinate into the calibration coordinate system to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
4. The method of claim 1, wherein the determining the current stitching coordinates of the target vehicle in the stitching coordinate system according to the license plate information of the target vehicle in the at least one piece of vehicle information and the at least one stitching coordinate comprises:
when the number of the at least one splicing coordinate is two, and characters of a license plate number included in license plate information of a target vehicle corresponding to the at least one splicing coordinate are completely the same, determining a splicing coordinate corresponding to a first camera as a current splicing coordinate of the target vehicle in the splicing coordinate system, and comparing with other cameras in the at least one camera, wherein the size of a pixel area occupied by the target vehicle in a video frame image shot by the first camera is the largest;
when the number of the at least one splicing coordinate is two and characters of license plate numbers included in license plate information of the target vehicle corresponding to the at least one splicing coordinate are not identical, determining the matching degree between the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one splicing coordinate;
when the determined matching degree is within a preset matching degree range, determining the distance between the at least one splicing coordinate;
and when the distance between the at least one splicing coordinate is smaller than a preset distance, determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system.
5. The method of claim 1, wherein determining the trajectory of the target vehicle in the stitching coordinate system at the current time is preceded by:
acquiring a reference image through each camera in the plurality of cameras, wherein each reference image comprises two calibration lines distributed top to bottom and at least two lane lines distributed left to right, and the reference images captured by two adjacent cameras share one identical calibration line;
and establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
6. The method according to claim 5, wherein the establishing the stitching coordinate system according to the serial numbers of the cameras and the two calibration lines and at least two lane lines included in each of the collected reference images comprises:
according to the serial numbers of the cameras, the same calibration lines in the reference images shot by two adjacent cameras in the cameras are overlapped, and the leftmost lane lines in the reference images are connected to obtain a vertical connecting line;
determining the direction of the serial numbers of the cameras from small to large as the direction of the vertical connecting line;
acquiring a lowermost calibration line in a reference image shot by a camera with the smallest number, and determining the horizontal rightward direction as the direction of the lowermost calibration line;
and taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
7. The method of claim 1, wherein after determining the travel speed of the target vehicle based on the actual distance and the first time difference, further comprising:
capturing a vehicle image of the target vehicle traveling on the road;
and sending the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to a server.
8. The method of claim 1, wherein after determining the trajectory of the target vehicle in the stitching coordinate system at the current time, further comprising:
when the target vehicle is determined to have lane changing behavior according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system, acquiring a lane changing process diagram of the target vehicle;
when the lane changing behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system, determining the duration of the lane changing behavior of the target vehicle;
when the duration is less than a reference duration and the target vehicle runs in at least three different lanes within the duration, determining that the lane changing behavior of the target vehicle is a continuous lane changing behavior;
after determining that the lane changing behavior of the target vehicle is the continuous lane changing behavior, determining a continuous lane changing event evidence obtaining graph of the target vehicle according to the collected lane changing process graph;
and sending the vehicle information of the target vehicle and the continuous lane change event evidence obtaining graph to a server.
9. The method of claim 8, wherein determining a continuous lane change event forensics map for the target vehicle from the collected lane change process map comprises:
selecting a first process diagram from the acquired lane changing process diagrams, wherein the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagram;
intercepting a license plate expansion area from the first process diagram, and determining the intercepted license plate expansion area as a vehicle close-up image of the target vehicle, wherein the license plate expansion area is an area which comprises the head or the tail of the target vehicle after being expanded according to the license plate area;
and determining the collected lane change process graph and the vehicle close-up image as a continuous lane change event evidence obtaining graph of the target vehicle.
10. A vehicle speed measuring device, applied to a multi-view camera, wherein monitoring ranges of two adjacent cameras among a plurality of cameras included in the multi-view camera overlap along the direction of the road, the device comprising:
the system comprises a first determining module, a second determining module and a third determining module, wherein the first determining module is used for determining the track of a target vehicle in a splicing coordinate system at the current moment, the target vehicle is any vehicle for measuring speed, and the splicing coordinate system is a coordinate system used for drawing the complete track of the target vehicle in the monitoring range of the cameras;
the selecting module is used for selecting two speed measuring reference points belonging to monitoring pictures of different cameras from the track of the target vehicle in the splicing coordinate system at the current moment;
the second determining module is used for determining the actual distance between the two speed measuring reference points according to the splicing coordinates of the two speed measuring reference points in the splicing coordinate system;
the third determining module is used for determining the running speed of the target vehicle according to the actual distance and a first time difference, wherein the first time difference is a difference value between the acquisition times of the video frame images to which the two speed measurement reference points belong respectively;
the first determining module includes:
the acquisition submodule is used for acquiring video frame images through the plurality of cameras to obtain a plurality of video frame images;
a fourth determining sub-module, configured to determine vehicle information of the target vehicle in at least one of the video frame images to obtain at least one piece of vehicle information when the target vehicle is included in at least one of the video frame images, where each piece of vehicle information includes a calibration coordinate of the target vehicle in a calibration coordinate system and license plate information of the target vehicle;
the conversion submodule is used for converting the calibration coordinates of the target vehicle in the at least one piece of vehicle information into corresponding at least one splicing coordinate in the splicing coordinate system according to the serial number of at least one camera, and the at least one camera is used for acquiring the at least one video frame image;
a fifth determining submodule, configured to determine a current stitching coordinate of the target vehicle in the stitching coordinate system according to the license plate information of the target vehicle and the at least one stitching coordinate in the at least one piece of vehicle information;
the track drawing sub-module is used for connecting a line between the previous splicing coordinate and the current splicing coordinate in the splicing coordinate system when the current splicing coordinate and the previous splicing coordinate of the target vehicle in the splicing coordinate system are different so as to obtain the track of the target vehicle in the splicing coordinate system at the current moment;
the serial numbers of the cameras are increased one by one from 0, and the conversion sub-module is further configured to take an abscissa of the calibration coordinate of the target vehicle in the at least one piece of vehicle information as an abscissa of the target vehicle in the stitching coordinate system, and add a ordinate of the calibration coordinate of the target vehicle in the at least one piece of vehicle information and the serial number of the at least one camera correspondingly to obtain an ordinate of the target vehicle in the stitching coordinate system.
11. The apparatus of claim 10, wherein the second determining module comprises:
the first determining submodule is used for determining a first distance according to the abscissa of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent lane lines on the road and the distance between two adjacent lane lines in the splicing coordinate system;
the second determining submodule is used for determining a second distance according to the vertical coordinates of the two speed measuring reference points in the splicing coordinate system, the actual distance between two adjacent calibration lines on the road and the distance between two adjacent calibration lines in the splicing coordinate system;
and the third determining submodule is used for determining the actual distance between the two speed measuring reference points according to the first distance and the second distance.
12. The apparatus of claim 10, wherein the fourth determination submodule comprises:
a vehicle detection unit for performing vehicle detection on the at least one video frame image to determine a vehicle position of the target vehicle in each video frame image;
the first determining unit is used for determining license plate information of the target vehicle from each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle;
the second determining unit is used for determining the image coordinates of the target vehicle in the image coordinate system of each video frame image included in the at least one video frame image according to the determined vehicle position of the target vehicle so as to obtain at least one image coordinate;
and the conversion unit is used for converting the at least one image coordinate into the calibration coordinate system so as to obtain a calibration coordinate of the target vehicle in the calibration coordinate system.
13. The apparatus of claim 10, wherein the fifth determination submodule comprises:
a third determining unit, configured to determine, when the number of the at least one stitching coordinate is two and characters of a license plate number included in license plate information of a target vehicle corresponding to the at least one stitching coordinate are completely the same, a stitching coordinate corresponding to a first camera as a current stitching coordinate of the target vehicle in the stitching coordinate system, where a size of a pixel area occupied by the target vehicle in a video frame image captured by the first camera is the largest compared to other cameras in the at least one camera;
a fourth determining unit, configured to determine a matching degree between license plate numbers included in the license plate information of the target vehicle corresponding to the at least one mosaic coordinate when the number of the at least one mosaic coordinate is two and characters of the license plate numbers included in the license plate information of the target vehicle corresponding to the at least one mosaic coordinate are not identical;
the fifth determining unit is used for determining the distance between the at least one splicing coordinate when the determined matching degree is within a preset matching degree range;
and the sixth determining unit is used for determining the splicing coordinate corresponding to the first camera as the current splicing coordinate of the target vehicle in the splicing coordinate system when the distance between the at least one splicing coordinate is smaller than the preset distance.
14. The apparatus of claim 10, wherein the apparatus further comprises:
the first acquisition module is used for acquiring a reference image through each camera in the multiple cameras, each reference image comprises two calibration lines distributed up and down and at least two lane lines distributed left and right, and one identical calibration line exists in the reference images shot by the two adjacent cameras;
and the establishing module is used for establishing the splicing coordinate system according to the serial numbers of the cameras, and two calibration lines and at least two lane lines included in each reference image in the collected multiple reference images.
15. The apparatus of claim 14, wherein the establishing module comprises:
the calibration line superposition submodule is used for superposing the same calibration lines in the reference images shot by two adjacent cameras in the cameras according to the serial numbers of the cameras, and connecting the leftmost lane lines in the reference images to obtain a vertical connecting line;
the sixth determining submodule is used for determining the direction of the serial numbers of the cameras from small to large as the direction of the vertical connecting line;
the seventh determining submodule is used for acquiring the lowest calibration line in the reference image shot by the camera with the smallest serial number and determining the horizontal right direction as the direction of the lowest calibration line;
and the establishing submodule is used for taking the vertical connecting line with the direction as a longitudinal axis of the splicing coordinate system, taking the calibration line with the direction at the lowest edge as a transverse axis of the splicing coordinate system, and establishing the splicing coordinate system according to the distance between two adjacent calibration lines and the distance between two adjacent lane lines.
16. The apparatus of claim 10, wherein the apparatus further comprises:
the snapshot module is used for snapshot of a vehicle image of the target vehicle running on the road;
the first sending module is used for sending the vehicle image of the target vehicle, the running speed of the target vehicle and the vehicle information of the target vehicle to a server.
17. The apparatus of claim 10, wherein the apparatus further comprises:
the second acquisition module is used for acquiring a lane change process diagram of the target vehicle when the lane change behavior of the target vehicle is determined according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of a lane line on the road in the splicing coordinate system;
the fourth determination module is used for determining the duration of the lane-changing behavior of the target vehicle when the lane-changing behavior of the target vehicle is determined to be finished according to the track of the target vehicle in the splicing coordinate system at the current moment and the position of the lane line on the road in the splicing coordinate system;
the fifth determining module is used for determining that the lane changing behavior of the target vehicle is a continuous lane changing behavior when the duration is less than a reference duration and the target vehicle runs in at least three different lanes within the duration;
the sixth determining module is used for determining a continuous lane changing event evidence obtaining graph of the target vehicle according to the collected lane changing process graph after determining that the lane changing behavior of the target vehicle is the continuous lane changing behavior;
and the second sending module is used for sending the vehicle information of the target vehicle and the continuous lane change event evidence obtaining graph to a server.
18. The apparatus of claim 17, wherein the sixth determining module comprises:
the selection submodule is used for selecting a first process diagram from the acquired lane changing process diagrams, and the first process diagram is the lane changing process diagram with the largest pixel area size occupied by the target vehicle in the acquired lane changing process diagrams;
an eighth determining submodule, configured to intercept a license plate extension area from the first process map, and determine the intercepted license plate extension area as a vehicle close-up image of the target vehicle, where the license plate extension area is an area that includes the head or the tail of the target vehicle after being extended according to the license plate area;
and the ninth determining submodule is used for determining the collected lane changing process diagram and the vehicle close-up image as a continuous lane changing event evidence obtaining diagram of the target vehicle.
19. A vehicle speed measuring device, characterized in that the device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of any one of claims 1-9.
20. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910059660.XA CN111462503B (en) | 2019-01-22 | 2019-01-22 | Vehicle speed measuring method and device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910059660.XA CN111462503B (en) | 2019-01-22 | 2019-01-22 | Vehicle speed measuring method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111462503A CN111462503A (en) | 2020-07-28 |
CN111462503B true CN111462503B (en) | 2021-06-08 |
Family
ID=71682222
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910059660.XA Active CN111462503B (en) | 2019-01-22 | 2019-01-22 | Vehicle speed measuring method and device and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111462503B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112798811B (en) * | 2020-12-30 | 2023-07-28 | 杭州海康威视数字技术股份有限公司 | Speed measurement method, device and equipment |
CN112686806B (en) * | 2021-01-08 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Image splicing method and device, electronic equipment and storage medium |
CN112949470A (en) * | 2021-02-26 | 2021-06-11 | 上海商汤智能科技有限公司 | Method, device and equipment for identifying lane-changing steering lamp of vehicle and storage medium |
US11708075B2 (en) * | 2021-04-08 | 2023-07-25 | Ford Global Technologies, Llc | Enhanced adaptive cruise control |
CN114820819B (en) * | 2022-05-26 | 2023-03-31 | 广东机电职业技术学院 | Expressway automatic driving method and system |
CN116542858B (en) * | 2023-07-03 | 2023-09-05 | 众芯汉创(江苏)科技有限公司 | Data splicing analysis system based on space track |
CN116884235B (en) * | 2023-08-09 | 2024-01-30 | 广东省交通运输规划研究中心 | Video vehicle speed detection method, device and equipment based on wire collision and storage medium |
CN118015850B (en) * | 2024-04-08 | 2024-06-28 | 云南省公路科学技术研究院 | Multi-target vehicle speed synchronous estimation method, system, terminal and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101187671A (en) * | 2007-12-27 | 2008-05-28 | 北京中星微电子有限公司 | Method and device for determining automobile driving speed |
CN101777263A (en) * | 2010-02-08 | 2010-07-14 | 长安大学 | Traffic vehicle flow detection method based on video |
US8294595B1 (en) * | 2009-09-21 | 2012-10-23 | The Boeing Company | Speed detector for moving vehicles |
WO2014186642A2 (en) * | 2013-05-17 | 2014-11-20 | International Electronic Machines Corporation | Operations monitoring in an area |
CN107507298A (en) * | 2017-08-11 | 2017-12-22 | 南京阿尔特交通科技有限公司 | A kind of multimachine digital video vehicle operation data acquisition method and device |
CN108806269A (en) * | 2018-06-22 | 2018-11-13 | 安徽科力信息产业有限责任公司 | A kind of method and device of record motor vehicle continuous transformation track illegal activities |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |