CN111081031B - Vehicle snapshot method and system

Info

Publication number
CN111081031B
Authority
CN
China
Prior art keywords
vehicle
information
detection
snapshot
detector
Prior art date
Legal status
Active
Application number
CN201911368921.2A
Other languages
Chinese (zh)
Other versions
CN111081031A (en)
Inventor
房颜明
李智
马春香
Current Assignee
Beijing Wanji Technology Co Ltd
Original Assignee
Beijing Wanji Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Wanji Technology Co Ltd
Priority to CN201911368921.2A
Publication of CN111081031A
Application granted
Publication of CN111081031B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/01: Detecting movement of traffic to be counted or controlled
    • G08G 1/052: Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed
    • G08G 1/054: Detecting movement of traffic to be counted or controlled with provision for determining speed or overspeed, photographing overspeeding vehicles
    • G08G 1/04: Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Abstract

The invention provides a vehicle snapshot method and a vehicle snapshot system. Vehicles entering and leaving the detection area are detected by a first vehicle detector and a second vehicle detector, which ensures that vehicle detection is timely; a first camera and a second camera are triggered to capture the vehicle head information and the vehicle tail information respectively, so that the vehicle is captured completely; and a complete snapshot record is then formed from the snapshot information of the two cameras. This solves the problem in the related art that capturing only the vehicle head or the vehicle tail by radar and video leads to a high capture error rate, and improves the efficiency and accuracy of vehicle capture.

Description

Vehicle snapshot method and system
Technical Field
The invention relates to the field of intelligent transportation, in particular to a vehicle snapshot method and system.
Background
At present, a commonly used highway capture and recognition system generally comprises a radar speed-measurement module and a video recognition module: the radar module measures the travelling speed of a vehicle, and the video module recognizes the license plate of an overspeeding vehicle. Because the radar module is limited by the running state of the vehicle (such as its speed, close following, parallel driving and vehicle length), either a head-capture mode or a tail-capture mode is adopted; in applications with high requirements on vehicle capture and license plate recognition, video recognition equipped with a radar speed-measurement module therefore cannot capture and recognize every vehicle over the whole road cross-section. Moreover, such systems capture and recognize either the head or the tail of the vehicle, not both at the same time. In particular, when a trailer is towed the license plate numbers of the head and the tail are not the same, so the head picture and the trailer picture cannot be matched accurately; the tail plate is often dirty and the tail carries more occlusions, so the tail recognition rate is low; and in scenes that rely on video recognition alone, trucks are easily recognized more than once and vehicles with occluded plates are easily missed.
In addition, in highway scenes that require both the head and the tail of a vehicle to be captured, the arrival and departure of the vehicle are usually detected by ground induction coils. However, coil detection is affected by the vehicle type (especially trailers), the travelling speed and the travelling position of the vehicle. First, the trigger signal may not be given in time, so the front and rear capture areas are widely scattered; second, multiple triggers are produced, so the head snapshot picture and the tail snapshot picture cannot be paired; third, when a vehicle straddles two lanes the coil easily misses the trigger. Furthermore, coil triggering cannot provide accurate capture position information, so the left and right capture areas are also widely scattered and misidentification easily occurs when the vehicle drives across lanes. Finally, coils have a short service life and place high demands on the road surface at installation, and cracks or ruts in the pavement greatly reduce their effectiveness.
In view of the above problems in the related art, no effective solution exists at present.
Disclosure of Invention
Embodiments of the invention provide a vehicle snapshot method and system, which at least solve the problem in the related art that capturing only the vehicle head or only the vehicle tail by radar and video leads to a high capture error rate.
According to an embodiment of the present invention, there is provided a vehicle snapshot method, including: detecting an entering vehicle by a first vehicle detector to obtain first detection information, wherein the first detection information at least comprises: position information of the license plate at the vehicle head and the number of the lane in which the vehicle is travelling; triggering a first camera to capture the vehicle based on the first detection information to obtain first snapshot information, wherein the first snapshot information at least comprises: the license plate information of the vehicle head, image information captured while the vehicle travels, and the lane number of the vehicle; detecting the departing vehicle by a second vehicle detector corresponding to the lane number in which the vehicle travels to obtain second detection information, wherein the second detection information at least comprises: position information of the license plate at the vehicle tail and the lane number of the vehicle; triggering a second camera to capture the vehicle based on the second detection information to obtain second snapshot information, wherein the second snapshot information at least comprises: the license plate information of the vehicle tail, image information captured while the vehicle travels, and the lane number of the vehicle; and forming a snapshot record of the vehicle from the first snapshot information and the second snapshot information.
According to another embodiment of the present invention, there is provided a vehicle snapshot system, including: a first vehicle detector, configured to detect an entering vehicle to obtain first detection information, wherein the first detection information at least comprises: position information of the license plate at the vehicle head and the number of the lane in which the vehicle is travelling; a first camera, configured to capture the vehicle based on the first detection information to obtain first snapshot information, wherein the first snapshot information at least comprises: the license plate information of the vehicle head, image information captured while the vehicle travels, and the lane number of the vehicle; a second vehicle detector, configured to detect the vehicle departing in that lane number to obtain second detection information, wherein the second detection information at least comprises: position information of the license plate at the vehicle tail and the lane number of the vehicle; a second camera, configured to capture the vehicle based on the second detection information to obtain second snapshot information, wherein the second snapshot information at least comprises: the license plate information of the vehicle tail, image information captured while the vehicle travels, and the lane number of the vehicle; and a processor, configured to form a snapshot record of the vehicle from the first snapshot information and the second snapshot information.
According to the invention, vehicles entering and leaving the detection area are detected by the first vehicle detector and the second vehicle detector, which ensures that vehicle detection is timely; the first camera and the second camera are triggered to capture the vehicle head information and the vehicle tail information, so that the vehicle is captured completely; and a complete snapshot record is finally formed from the snapshot information of the two cameras. This solves the problem in the related art that capturing only the vehicle head or the vehicle tail by radar and video leads to a high capture error rate, and improves the efficiency and accuracy of vehicle capture.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a flowchart of a vehicle snapshot method according to an embodiment of the invention;
fig. 2 is a schematic diagram of vehicle travel according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
In the present embodiment, a vehicle snapshot method is provided. Fig. 1 is a flowchart of a vehicle snapshot method according to an embodiment of the present invention; as shown in fig. 1, the flow includes the following steps:
step S102, detecting the entering vehicle by a first vehicle detector to obtain first detection information, wherein the first detection information at least comprises: position information of the license plate at the vehicle head and the number of the lane in which the vehicle is travelling;
step S104, triggering a first camera to capture the vehicle based on the first detection information to obtain first snapshot information, wherein the first snapshot information at least comprises: the license plate information of the vehicle head, image information captured while the vehicle travels, and the lane number of the vehicle;
step S106, detecting the departing vehicle by a second vehicle detector corresponding to the lane number in which the vehicle travels to obtain second detection information, wherein the second detection information at least comprises: position information of the license plate at the vehicle tail and the lane number of the vehicle;
step S108, triggering a second camera to capture the vehicle based on the second detection information to obtain second snapshot information, wherein the second snapshot information at least comprises: the license plate information of the vehicle tail, image information captured while the vehicle travels, and the lane number of the vehicle;
and step S110, forming a snapshot record of the vehicle from the first snapshot information and the second snapshot information.
Through steps S102 to S110, vehicles entering and leaving are detected by the first vehicle detector and the second vehicle detector, which guarantees that detection is timely; the first camera and the second camera are triggered to capture the vehicle head information and the vehicle tail information, so that the vehicle is captured completely; and a complete snapshot record is finally formed from the snapshot information of the two cameras. This solves the problem in the related art that capturing only the vehicle head or the vehicle tail by radar and video leads to a high capture error rate, and improves the efficiency and accuracy of vehicle capture.
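Purely for illustration (this is not part of the claimed invention), the sketch below shows one way the detection information and snapshot information listed in steps S102 to S110 could be organized in software and merged into a record in step S110; all class names and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DetectionInfo:
    """Output of a vehicle detector (first or second), as listed in steps S102/S106."""
    plate_position: tuple      # position of the head or tail license plate
    lane_number: int           # lane in which the vehicle is travelling
    detector_id: str           # identification of the triggering detector
    detect_time: float         # detection time
    direction: str             # travelling / snapshot direction ("head" or "tail" side)
    vehicle_id: int            # vehicle ID assigned by the detector

@dataclass
class SnapshotInfo:
    """Output of a camera (first or second), as listed in steps S104/S108."""
    plate_number: str          # recognized license plate (head or tail)
    images: list               # images captured while the vehicle passes
    lane_number: int
    vehicle_id: int
    snap_direction: str        # "head" or "tail"

def form_record(first: SnapshotInfo, second: SnapshotInfo) -> dict:
    """Step S110: merge head and tail snapshot information into one record.

    The two snapshots are assumed to carry the same vehicle ID."""
    assert first.vehicle_id == second.vehicle_id
    return {
        "vehicle_id": first.vehicle_id,
        "lane": first.lane_number,
        "head_plate": first.plate_number,
        "tail_plate": second.plate_number,
        "head_images": first.images,
        "tail_images": second.images,
    }
```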
It should be noted that the first vehicle detector and the second vehicle detector in this embodiment are preferably laser sensors; millimeter-wave radar sensors may also be used, and other sensors capable of performing the detection also fall within the scope of protection of this application.
In an optional implementation of this embodiment, detecting the entering vehicle by the first vehicle detector to obtain the first detection information in step S102 may further include:
step S102-11, detecting all lanes on the road surface through a detection cross-section formed by the first vehicle detector in a direction perpendicular to the driving direction, wherein the number of first vehicle detectors is one or more, each first vehicle detector corresponds to one detection cross-section, the detection range of each cross-section covers a plurality of lanes, and the distance between adjacent cross-sections is smaller than a preset threshold;
step S102-12, when a vehicle is detected entering the detection cross-section, detecting the vehicle to obtain the first detection information, wherein the first detection information further comprises: identification information of the first vehicle detector, detection time information, and vehicle travelling direction information.
That is, the number of first vehicle detectors in this embodiment may be one or more. If there is one, all lanes are detected by the detection cross-section of that single detector; if there are two, all lanes are detected by the cross-sections formed by the two detectors in combination, and so on. In a specific application scenario, for example, the current road is a one-way four-lane road and there are two first vehicle detectors, each covering the four lanes, with the two detection cross-sections spaced apart along the driving direction by a distance within the preset threshold. In a specific application scenario, the vehicle detectors may be mounted on a gantry over the road.
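As a hedged illustration of the arrangement just described (detection cross-sections covering several lanes, with adjacent cross-sections spaced below a preset threshold), the following sketch maps a lateral plate position measured on a cross-section to a lane number and checks the spacing constraint; the lane width, the 1-based lane numbering and the function names are assumptions, not values taken from the specification.

```python
def lane_from_lateral_position(lateral_m: float, lane_width_m: float = 3.75,
                               num_lanes: int = 4) -> int:
    """Map a lateral position (metres from the road edge) to a 1-based lane number."""
    lane = int(lateral_m // lane_width_m) + 1
    return max(1, min(lane, num_lanes))

def spacing_ok(section_positions_m: list, preset_threshold_m: float) -> bool:
    """Check that adjacent detection cross-sections are spaced below the preset threshold."""
    positions = sorted(section_positions_m)
    return all(b - a < preset_threshold_m for a, b in zip(positions, positions[1:]))

# Example: two cross-sections 8 m apart, threshold taken as a 10 m shortest body length.
print(lane_from_lateral_position(6.0))   # -> 2
print(spacing_ok([0.0, 8.0], 10.0))      # -> True
```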
Further, as described in step S102, the first detection information may include: position information of the license plate at the vehicle head, the lane number in which the vehicle travels, identification information of the first vehicle detector, detection time information, and vehicle travelling direction information. The license plate position information of the vehicle head is used to recognize the head license plate accurately when the vehicle is subsequently captured.
In another optional implementation of this embodiment, triggering the first camera to capture the vehicle based on the first detection information to obtain the first snapshot information in step S104 may further include:
step S104-11, triggering the first camera to capture the vehicle based on the lane number information in the first detection information, to obtain first snapshot information consisting of first image information and a first recognition result; wherein the first image information includes: images of the vehicle as it travels, identification information of the first vehicle detector, detection time information, lane number information and vehicle travelling direction information; and the first recognition result includes: the license plate information of the vehicle head, the license plate color of the vehicle head, images of the vehicle as it travels, identification information of the first vehicle detector, detection time information, lane number information and vehicle travelling direction information.
It should be noted that, since the first vehicle detector detects the lane number of the entering vehicle, the first camera corresponding to that lane number can be triggered to capture; that is, different lane numbers correspond to different cameras, so each lane is provided with its own camera. The first snapshot information of the first camera consists of first image information and a first recognition result. The first image information is the image-related information of the vehicle as it travels and may be a sequence of continuously shot images. The identification information of the first vehicle detector carried with the images is used to identify which detector triggered the detection, so that the position of the vehicle can be determined, and the detection time information is used to determine when the snapshot was taken. In addition to the information it shares with the image information, the first recognition result mainly includes the license plate information of the vehicle head and the license plate color.
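To make the lane-to-camera routing concrete, here is a minimal sketch, assuming one head camera per lane and a hypothetical camera interface, of how a trigger carrying a lane number might select which camera fires; it is illustrative only and not the patent's implementation.

```python
class Camera:
    def __init__(self, lane_number: int, direction: str):
        self.lane_number = lane_number
        self.direction = direction      # "head" or "tail"

    def snapshot(self, detection: dict) -> dict:
        # Placeholder for the real capture and plate-recognition pipeline.
        return {"lane": self.lane_number, "direction": self.direction,
                "vehicle_id": detection["vehicle_id"], "images": [], "plate": None}

# One head camera per lane (assumed four-lane configuration).
head_cameras = {lane: Camera(lane, "head") for lane in range(1, 5)}

def trigger_head_camera(detection: dict) -> dict:
    """Route a first-detector trigger to the camera of the detected lane."""
    camera = head_cameras[detection["lane_number"]]
    return camera.snapshot(detection)

record = trigger_head_camera({"lane_number": 2, "vehicle_id": 17})
```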
In another optional implementation of this embodiment, detecting the departing vehicle by the second vehicle detector corresponding to the lane number in which the vehicle travels to obtain the second detection information in step S106 may further include:
step S106-11, based on the identification information of the first vehicle detector, triggering the detection cross-section formed by the second vehicle detector in a direction perpendicular to the driving direction to detect the vehicle departing in that lane number, to obtain the second detection information;
wherein the number of second vehicle detectors is one or more, each second vehicle detector corresponds to one detection cross-section, the detection range of each cross-section covers a plurality of lanes, and the distance between adjacent cross-sections is smaller than the preset threshold; the second detection information further includes: identification information of the second vehicle detector, detection time information, and vehicle travelling direction information.
It should be noted that the second vehicle detector detects the departing vehicle in the same lane number as detected by the first vehicle detector; the second detection information therefore includes the license plate position information of the vehicle tail and the lane number information, and may further include the identification information of the second vehicle detector, the detection time information and the vehicle travelling direction information.
That is, the number of second vehicle detectors in this embodiment may also be one or more. If there is one, all lanes are detected by the detection cross-section of that single detector; if there are two, all lanes are detected by the two detectors in combination, and so on. In a specific application scenario, for example, the current road is a one-way four-lane road and there are two second vehicle detectors, each covering the four lanes, with the two detection cross-sections spaced apart along the driving direction by a distance within the preset threshold. In a specific application scenario, the vehicle detectors may be mounted on a gantry over the road.
In addition, the preset threshold in this embodiment refers to a vehicle body length; the preferred value is the length of the shortest vehicle body on the market or, if this embodiment is mainly used for detecting trucks, the length of the shortest truck body. Of course, this preset threshold is only an example and may be adjusted according to the actual situation; it is not limited in this embodiment.
In addition, in current practical applications the matching of the vehicle head and the vehicle tail depends on the license plate number. When the plate is misrecognized or stained, when the tail plate of a truck is difficult to recognize, or when the head and tail plates of a trailer are inherently different, the plates cannot be matched reliably. Alternatively, time-based estimation using the vehicle speed is adopted, but this estimate has a large error. The purpose of setting the distance between the detection cross-sections to be smaller than the preset threshold in this embodiment is therefore that the two corresponding vehicle detectors can continuously track the same vehicle, detect its arrival and its departure, and complete unique and accurate matching of the head and tail snapshot information through the vehicle identification (ID) assigned by the vehicle detectors.
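The continuous-tracking idea can be illustrated with the following hedged sketch: because the cross-sections are spaced closer than the shortest vehicle body, a passing vehicle always occupies at least one of them, so a single vehicle ID can be held per lane from head arrival to tail departure. The occupancy-sample input, the one-vehicle-per-lane assumption and all names are assumptions, not details from the specification.

```python
class ContinuousTracker:
    """Track one vehicle per lane from the front cross-section to the rear one,
    keeping a single vehicle ID so head and tail snapshots can be paired.

    A minimal sketch assuming the cross-sections are spaced closer than the
    shortest vehicle body, so the vehicle always covers at least one section
    while passing; per-lane occupancy booleans are the assumed input."""

    def __init__(self):
        self.next_id = 1
        self.state = {}     # lane -> (vehicle_id, "front" | "rear")

    def update(self, lane: int, front_occupied: bool, rear_occupied: bool):
        """Feed one occupancy sample for a lane; returns a trigger event or None."""
        if lane not in self.state:
            if front_occupied:                       # head arrives at the front section
                self.state[lane] = (self.next_id, "front")
                self.next_id += 1
                return ("head_trigger", self.state[lane][0])
            return None

        vehicle_id, phase = self.state[lane]
        if phase == "front" and rear_occupied:       # vehicle has reached the rear section
            self.state[lane] = (vehicle_id, "rear")
        elif phase == "rear" and not rear_occupied:  # tail leaves the rear section
            del self.state[lane]
            return ("tail_trigger", vehicle_id)
        return None
```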
In another optional implementation of this embodiment, triggering the second camera to capture the vehicle based on the second detection information to obtain the second snapshot information in step S108 may further include: triggering the second camera to capture the vehicle based on the lane number information in the second detection information, to obtain second snapshot information consisting of second image information and a second recognition result;
wherein the second image information includes: images of the vehicle as it travels, identification information of the second vehicle detector, detection time information, lane number information and vehicle travelling direction information; and the second recognition result includes: the license plate information of the vehicle tail, the license plate color of the vehicle tail, images of the vehicle as it travels, identification information of the second vehicle detector, detection time information, lane number information and vehicle travelling direction information.
It should be noted that, since the second vehicle detector detects the lane number of the departing vehicle, the second camera corresponding to that lane number can be triggered to capture; that is, different lane numbers correspond to different cameras, so each lane is provided with its own camera.
In addition, the second snapshot information of the second camera consists of second image information and a second recognition result. The second image information is the image-related information of the vehicle as it travels and may be a sequence of continuously shot images. The identification information of the second vehicle detector carried with the images is used to identify which detector triggered the detection, so that the position of the vehicle can be determined, and the detection time information is used to determine when the snapshot was taken. In addition to the information it shares with the image information, the second recognition result mainly includes the license plate information of the vehicle tail and the license plate color.
The present application is described below with reference to an alternative embodiment in which the vehicle detectors are laser sensors.
This alternative embodiment provides a method for capturing and recognizing the vehicle head and tail based on laser detection. Before the method is performed, as shown in fig. 2, a detection cross-section formed by one or more laser sensors (corresponding to the first vehicle detector) is set at a certain distance in front of the installation cross-section, perpendicular to the driving direction, to detect the arrival of the vehicle head; a detection cross-section formed by one or more laser sensors (corresponding to the second vehicle detector) is set at a certain distance behind the installation cross-section, perpendicular to the driving direction, to detect the departure of the vehicle tail; and a laser sensor may be set directly below the installation cross-section to guarantee continuity between the front and rear detection cross-sections. The spacing between adjacent detection cross-sections along the driving direction is not greater than the length of the shortest trailer body, which guarantees that the laser sensors track a passing trailer continuously throughout the detection area.
That is, a head high-definition recognition camera and a tail high-definition recognition camera are installed for each lane or each cross-section, and one or more laser sensors are installed at each detection cross-section. All of the equipment is mounted on the same gantry; the laser sensors are connected to one controller, the controller is connected to the head and tail high-definition recognition cameras via RS-485, and the cameras are connected to one industrial personal computer (the controller and the industrial personal computer may be integrated into a single unit).
According to the information frames sent by the laser sensors and received in real time, the laser controller distinguishes the number of vehicles in the detection area and identifies their positions in real time. When it detects that a vehicle has just triggered the laser scanning cross-section in front of the installation cross-section, the laser controller outputs a vehicle-head position snapshot signal (snapshot direction, lane number, vehicle ID assigned by the detector, trigger time and license plate position) to the camera of the corresponding lane. After receiving the trigger signal, the high-definition recognition camera determines from the snapshot direction and the lane number whether it should respond, locks the recognition area around the license plate position according to a set threshold, performs license plate recognition, and finally outputs a snapshot picture (at least comprising the picture content, the vehicle ID, the trigger time, the lane number and the snapshot direction) and a recognition result (at least comprising the license plate number, the license plate color, the vehicle ID, the trigger time, the lane number and the snapshot direction). When it detects that a vehicle has just left the laser scanning cross-section behind the installation cross-section, the controller likewise outputs a vehicle-tail position snapshot signal (snapshot direction, lane number, vehicle ID, trigger time and license plate position) to the camera of the corresponding lane; after receiving the trigger signal, the high-definition recognition camera determines from the snapshot direction and the lane number whether it should respond, locks the recognition area around the license plate position according to the set threshold, performs license plate recognition, and finally outputs a snapshot picture (at least comprising the picture content, the vehicle ID, the trigger time, the lane number and the snapshot direction) and a recognition result (at least comprising the license plate number, the license plate color, the vehicle ID, the trigger time, the lane number and the snapshot direction) to the industrial personal computer.
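The camera-side behaviour described above can be sketched as follows, under assumptions about the trigger-signal fields (direction, lane number, vehicle ID, trigger time, plate position) and with stubbed capture and recognition calls; it is an illustrative sketch, not the actual camera firmware.

```python
from typing import Optional

def capture_frame() -> bytes:
    """Stub: in a real system this grabs a frame from the HD recognition camera."""
    return b""

def recognize_plate(frame: bytes, roi: tuple) -> tuple:
    """Stub: in a real system this runs plate recognition inside the locked region."""
    return "UNKNOWN", "blue"

def handle_trigger(camera_lane: int, camera_direction: str, signal: dict,
                   region_threshold_px: int = 120) -> Optional[dict]:
    """React to a trigger signal; return a snapshot picture plus recognition result,
    or None if this camera should not respond."""
    if signal["direction"] != camera_direction or signal["lane_number"] != camera_lane:
        return None                                   # signal addressed to another camera

    # Lock the recognition area around the plate position given by the laser controller.
    x, y = signal["plate_position"]
    roi = (x - region_threshold_px, y - region_threshold_px,
           x + region_threshold_px, y + region_threshold_px)

    picture = capture_frame()
    plate, color = recognize_plate(picture, roi)
    common = {k: signal[k] for k in ("vehicle_id", "trigger_time",
                                     "lane_number", "direction")}
    return {"picture": {"content": picture, **common},
            "result": {"plate": plate, "plate_color": color, **common}}
```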
The industrial personal computer receives from the cameras the snapshot pictures and recognition results produced by laser triggering. According to the vehicle ID assigned by the detectors and the trigger time, it matches the head snapshot picture with the head license plate recognition result produced by the same vehicle trigger, and the tail snapshot picture with the tail license plate recognition result; according to the vehicle ID and the snapshot direction, it then matches the head snapshot picture with the tail snapshot picture, finally forming several head-and-tail forensic records for the vehicle. By tracking the vehicle position with the laser sensors and triggering the camera snapshots precisely, the system's vehicle capture capability can be effectively improved, and the accuracy and consistency of head-to-tail matching, the vehicle recognition efficiency and the license plate recognition accuracy are improved.
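As a hedged sketch of the matching performed by the industrial personal computer (grouping by the vehicle ID assigned by the laser detectors, pairing each picture with its recognition result by trigger time and direction, then pairing head with tail by vehicle ID), the following assumes a simple dictionary message format; names and fields are illustrative only.

```python
from collections import defaultdict

def build_forensic_records(pictures: list, results: list) -> list:
    """Match snapshot pictures with recognition results by (vehicle ID, trigger time,
    direction), then pair head and tail by vehicle ID to form forensic records.

    pictures: dicts with vehicle_id, trigger_time, direction, lane, content
    results:  dicts with vehicle_id, trigger_time, direction, lane, plate, color
    """
    result_index = {(r["vehicle_id"], r["trigger_time"], r["direction"]): r
                    for r in results}
    per_vehicle = defaultdict(dict)
    for pic in pictures:
        key = (pic["vehicle_id"], pic["trigger_time"], pic["direction"])
        rec = result_index.get(key)
        if rec is None:
            continue                                   # no recognition result for this picture
        per_vehicle[pic["vehicle_id"]][pic["direction"]] = (pic, rec)

    forensic = []
    for vehicle_id, sides in per_vehicle.items():
        if "head" in sides and "tail" in sides:        # pair head snapshot with tail snapshot
            (head_pic, head_rec), (tail_pic, tail_rec) = sides["head"], sides["tail"]
            forensic.append({
                "vehicle_id": vehicle_id,
                "lane": head_pic["lane"],
                "head": {"picture": head_pic["content"], "plate": head_rec["plate"]},
                "tail": {"picture": tail_pic["content"], "plate": tail_rec["plate"]},
            })
    return forensic
```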
Based on this, the laser-detection-based method for capturing and recognizing the vehicle head and tail in this alternative embodiment comprises the following steps:
step S202, the laser sensors perform vehicle detection and obtain the time at which the vehicle head passes the laser scanning cross-section in front of the installation cross-section, the vehicle head position information, and the vehicle ID assigned by the detector;
step S204, determining, from the vehicle head position information, the lane number of the camera that needs to be triggered to capture, and forming vehicle head trigger position information;
wherein the vehicle head trigger position information includes: the lane number, the vehicle ID, the trigger time, the license plate position and the snapshot direction;
step S206, the head snapshot camera corresponding to the lane number and snapshot direction in the vehicle head trigger position information responds to the trigger signal;
step S208, according to the license plate position in the vehicle head trigger position information, the head snapshot camera locks the recognition area according to a set threshold and performs license plate recognition;
step S210, after the head high-definition recognition camera has responded to the laser head trigger position information, it outputs the head snapshot picture information and the head license plate recognition information to the industrial personal computer;
wherein the snapshot picture information includes: the picture information, the vehicle ID, the trigger time, the lane number and the snapshot direction;
and the license plate recognition information includes: the license plate information, the vehicle ID, the trigger time, the lane number and the snapshot direction;
step S212, according to the vehicle ID and the trigger time, the industrial personal computer matches the head snapshot picture with the head license plate recognition result produced by the same vehicle trigger, and the tail snapshot picture with the tail license plate recognition result, to form a vehicle snapshot record;
and step S214, matching the different snapshot records of the same vehicle according to the vehicle ID to finally form a vehicle forensic record.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a vehicle snapshot system is further provided. The system is used to implement the above embodiments and preferred implementations, and descriptions that have already been given are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
The present embodiment provides a vehicle snapshot system, the system including:
(1) a first vehicle detector, configured to detect an entering vehicle to obtain first detection information, wherein the first detection information at least comprises: position information of the license plate at the vehicle head and the number of the lane in which the vehicle is travelling;
(2) a first camera, configured to capture the vehicle based on the first detection information to obtain first snapshot information, wherein the first snapshot information at least comprises: the license plate information of the vehicle head, image information captured while the vehicle travels, and the lane number of the vehicle;
(3) a second vehicle detector, configured to detect the vehicle departing in that lane number to obtain second detection information, wherein the second detection information at least comprises: position information of the license plate at the vehicle tail and the lane number of the vehicle;
(4) a second camera, configured to capture the vehicle based on the second detection information to obtain second snapshot information, wherein the second snapshot information at least comprises: the license plate information of the vehicle tail, image information captured while the vehicle travels, and the lane number of the vehicle;
(5) a processor, configured to form a snapshot record of the vehicle from the first snapshot information and the second snapshot information.
Optionally, the first vehicle detector in this embodiment is further configured to detect all lanes on the road surface through a detection cross-section formed in a direction perpendicular to the driving direction and, when a vehicle is detected entering the detection cross-section, to detect the vehicle to obtain the first detection information;
wherein the number of first vehicle detectors is one or more, each first vehicle detector corresponds to one detection cross-section, the detection range of each cross-section covers a plurality of lanes, and the distance between adjacent cross-sections is smaller than a preset threshold; the first detection information further includes: identification information of the first vehicle detector, detection time information, and vehicle travelling direction information.
Optionally, the first camera in this embodiment is further configured to capture the vehicle based on the lane number information in the first detection information, to obtain first snapshot information consisting of first image information and a first recognition result;
wherein the first image information includes: images of the vehicle as it travels, identification information of the first vehicle detector, detection time information, lane number information and vehicle travelling direction information; and the first recognition result includes: the license plate information of the vehicle head, the license plate color of the vehicle head, images of the vehicle as it travels, identification information of the first vehicle detector, detection time information, lane number information and vehicle travelling direction information.
Optionally, the second vehicle detector in this embodiment is further configured to detect, based on the identification information of the first vehicle detector, the vehicle departing in that lane number through a detection cross-section formed in a direction perpendicular to the driving direction, to obtain the second detection information; the number of second vehicle detectors is one or more, each second vehicle detector corresponds to one detection cross-section, the detection range of each cross-section covers a plurality of lanes, and the distance between adjacent cross-sections is smaller than the preset threshold; the second detection information further includes: identification information of the second vehicle detector, detection time information, and vehicle travelling direction information.
Optionally, the second camera in this embodiment is further configured to capture the vehicle based on the lane number information in the second detection information, to obtain second snapshot information consisting of second image information and a second recognition result;
wherein the second image information includes: images of the vehicle as it travels, identification information of the second vehicle detector, detection time information, lane number information and vehicle travelling direction information; and the second recognition result includes: the license plate information of the vehicle tail, the license plate color of the vehicle tail, images of the vehicle as it travels, identification information of the second vehicle detector, detection time information, lane number information and vehicle travelling direction information.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A method of capturing a vehicle, comprising:
detecting an entering vehicle by a first vehicle detector to obtain first detection information, wherein the first detection information at least comprises: vehicle head license plate position information and vehicle running lane number information;
triggering a first camera to capture the vehicle based on the first detection information to obtain first capture information, wherein the first capture information at least comprises: the license plate information of the vehicle head, the image information captured in the running process of the vehicle and the lane number information of the running vehicle;
detecting the vehicle which is driven away by a second vehicle detector corresponding to the lane number where the vehicle runs to obtain second detection information, wherein the second detection information at least comprises: vehicle license plate position information of the tail of the vehicle and lane number information of the vehicle running;
triggering a second camera to capture the vehicle based on the second detection information to obtain second capture information, wherein the second capture information at least comprises: license plate information of the vehicle tail, image information captured in the vehicle running process and lane number information of the vehicle running;
forming a snapshot record of the vehicle through the first snapshot information and the second snapshot information;
forming a snapshot record of the vehicle by the first snapshot information and the second snapshot information includes: matching, based on the vehicle ID and the trigger time of the first detection information, the vehicle head license plate information included in the first snapshot information with the image information captured during vehicle running included in the first snapshot information; matching, based on the vehicle ID and the trigger time of the second detection information, the vehicle tail license plate information included in the second snapshot information with the image information captured during vehicle running included in the second snapshot information; matching, based on the vehicle ID and the snapshot direction, the image information captured during vehicle running included in the first snapshot information with the image information captured during vehicle running included in the second snapshot information; and determining the snapshot record based on the matching results.
2. The method of claim 1, wherein detecting the incoming vehicle by a first vehicle detector yields first detection information comprising:
detecting all lanes on the road surface through a detection section formed by the first vehicle detector along the direction perpendicular to the driving direction; the number of the first vehicle detectors is one or more, each first vehicle detector corresponds to one detection section, the detection range of each detection section covers a plurality of lanes, and the distance between every two detection sections is smaller than a preset threshold value;
under the condition that a vehicle is detected to drive into the detection section, detecting the vehicle to obtain the first detection information, wherein the first detection information further comprises: the identification information, the detection time information and the vehicle driving direction information of the first vehicle detector.
3. The method of claim 2, wherein triggering a first camera to snap the vehicle based on the first detection information to obtain first snap information comprises:
triggering the first camera to capture the vehicle based on the lane number information of the vehicle running in the first detection information to obtain first capture information consisting of first image information and a first recognition result;
wherein the first image information includes: an image of a vehicle running process, identification information of the first vehicle detector, detection time information, lane number information, vehicle running direction information;
the first recognition result includes: the vehicle detection device comprises vehicle head license plate information, vehicle head license plate color, a vehicle running process image, identification information of the first vehicle detector, detection time information, lane number information and vehicle running direction information.
4. The method according to claim 2, wherein detecting the vehicle that is driven away by a second vehicle detector corresponding to a lane number on which the vehicle is driven, to obtain second detection information includes:
detecting, based on the identification information of the first vehicle detector, the vehicle which is driving away on the lane number through a detection section formed by the second vehicle detector along the direction perpendicular to the driving direction, to obtain the second detection information; the number of the second vehicle detectors is one or more, each second vehicle detector corresponds to one detection section, the detection range of each detection section covers a plurality of lanes, and the distance between every two detection sections is smaller than the preset threshold value; the second detection information further includes: identification information of the second vehicle detector, detection time information, and vehicle traveling direction information.
5. The method of claim 4, wherein triggering a second camera to snap the vehicle based on the second detection information to obtain second snap information comprises:
triggering the second camera to snapshot the vehicle based on the lane number information of the vehicle running in the second detection information to obtain second snapshot information consisting of second image information and a second recognition result;
wherein the second image information includes: an image of a vehicle running process, identification information of the second vehicle detector, detection time information, lane number information, vehicle running direction information;
the second recognition result includes: the vehicle detector comprises vehicle tail license plate information, vehicle tail license plate color, a vehicle running process image, identification information of the second vehicle detector, detection time information, lane number information and vehicle running direction information.
6. A snapshot system of a vehicle, comprising:
a first vehicle detector, configured to detect an entering vehicle to obtain first detection information, wherein the first detection information at least comprises: vehicle head license plate position information and vehicle running lane number information;
a first camera, configured to capture the vehicle based on the first detection information to obtain first capture information, where the first capture information at least includes: the license plate information of the vehicle head, the image information captured in the running process of the vehicle and the lane number information of the running vehicle;
a second vehicle detector, configured to detect the vehicle that has left on the lane number to obtain second detection information, where the second detection information at least includes: license plate position information of the tail of the vehicle and lane number information of the running vehicle;
a second camera, configured to capture the vehicle based on the second detection information to obtain second capture information, where the second capture information at least includes: license plate information of the vehicle tail, image information captured in the vehicle running process and lane number information of the vehicle running;
the processor is used for forming a snapshot record of the vehicle through the first snapshot information and the second snapshot information;
the processor forms the snapshot record of the vehicle through the first snapshot information and the second snapshot information in the following way: matching, based on the vehicle ID and the trigger time of the first detection information, the vehicle head license plate information included in the first snapshot information with the image information captured during vehicle running included in the first snapshot information; matching, based on the vehicle ID and the trigger time of the second detection information, the vehicle tail license plate information included in the second snapshot information with the image information captured during vehicle running included in the second snapshot information; matching, based on the vehicle ID and the snapshot direction, the image information captured during vehicle running included in the first snapshot information with the image information captured during vehicle running included in the second snapshot information; and determining the snapshot record based on the matching results.
7. The system of claim 6,
the first vehicle detector is used for detecting all lanes on the road surface by forming a detection section along the direction perpendicular to the driving direction; under the condition that a vehicle is detected to enter a detection section, detecting the vehicle to obtain first detection information;
the number of the first vehicle detectors is one or more, each first vehicle detector corresponds to one detection section, the detection range of each detection section covers a plurality of lanes, and the distance between every two detection sections is smaller than a preset threshold value;
the first detection information further includes: the identification information, the detection time information and the vehicle driving direction information of the first vehicle detector.
8. The system of claim 7,
the first camera is also used for triggering the first camera to snapshot the vehicle based on the lane number information of the vehicle running in the first detection information to obtain first snapshot information consisting of first image information and a first recognition result;
wherein the first image information includes: the image of the vehicle running process, the identification information of the first vehicle detector, the snapshot time, the lane number information and the vehicle running direction information; the first recognition result includes: vehicle head license plate information, vehicle head license plate color, a vehicle running process image, identification information of the first vehicle detector, detection time information, lane number information and vehicle running direction information.
9. The system of claim 6,
the second vehicle detector is used for detecting, based on the identification information of the first vehicle detector, the vehicle which is driving away on the lane number through a detection section formed along a direction perpendicular to the driving direction, to obtain second detection information;
the number of the second vehicle detectors is one or more, each second vehicle detector corresponds to one detection section, the detection range of each detection section covers a plurality of lanes, and the distance between every two detection sections is smaller than a preset threshold value; the second detection information further includes: identification information of the second vehicle detector, detection time information, and vehicle traveling direction information.
10. The system of claim 9,
the second camera is further used for capturing the vehicle based on the lane number information of the vehicle running in the second detection information to obtain second capturing information consisting of second image information and a second recognition result;
wherein the second image information includes: the image of the vehicle running process, the identification information of the second vehicle detector, the detection time information, the lane number information and the vehicle running direction information;
the second recognition result includes: the vehicle-mounted vehicle detector comprises vehicle tail license plate information, vehicle tail license plate color, a vehicle running process image, identification information of the second vehicle detector, snapshot time, lane number information and vehicle running direction information.
CN201911368921.2A 2019-12-26 2019-12-26 Vehicle snapshot method and system Active CN111081031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911368921.2A CN111081031B (en) 2019-12-26 2019-12-26 Vehicle snapshot method and system


Publications (2)

Publication Number Publication Date
CN111081031A CN111081031A (en) 2020-04-28
CN111081031B true CN111081031B (en) 2022-06-17

Family

ID=70318691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911368921.2A Active CN111081031B (en) 2019-12-26 2019-12-26 Vehicle snapshot method and system

Country Status (1)

Country Link
CN (1) CN111081031B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463988B (en) * 2020-11-09 2023-10-03 浙江宇视科技有限公司 Image acquisition method, device, electronic equipment and storage medium
CN113112798B9 (en) * 2021-04-09 2023-04-07 苏庆裕 Vehicle overload detection method, system and storage medium
CN113763722B (en) * 2021-08-17 2023-04-18 杭州海康威视数字技术股份有限公司 Snapshot method and snapshot device
CN115273485A (en) * 2022-07-18 2022-11-01 广东泓胜科技股份有限公司 Method and device for recognizing lane crossing and line pressing driving of vehicle weighing without stopping and related equipment


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424804B (en) * 2013-09-10 2016-08-10 上海弘视通信技术有限公司 Single radar multilane Intelligent speed-measuring method and system thereof
CN205068792U (en) * 2015-08-14 2016-03-02 武汉万集信息技术有限公司 Laser vehicles detection device
KR101829820B1 (en) * 2015-12-28 2018-02-19 대구대학교 산학협력단 Illegal vehicle Detection system using RFID tag
CN206133929U (en) * 2016-11-02 2017-04-26 南京慧尔视智能科技有限公司 Device that multilane tested speed and bayonet socket triggers based on microwave
CN207624159U (en) * 2017-12-06 2018-07-17 北京万集科技股份有限公司 A kind of vehicle positioning system
CN208256096U (en) * 2017-12-21 2018-12-18 北京万集科技股份有限公司 A kind of preceding license plate and tail license plate synchronous identifying system applied in non-at-scene enforcement system
CN208781398U (en) * 2018-08-11 2019-04-23 安徽恒心云数据科技有限公司 Number plate is the same as vehicle identifying system before and after a kind of trailer and tractor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204229642U (en) * 2014-12-05 2015-03-25 成都华安视讯科技有限公司 A kind of parking offense automatic snapshot system
CN105654733A (en) * 2016-03-08 2016-06-08 博康智能网络科技股份有限公司 Front and back vehicle license plate recognition method and device based on video detection
CN105893953A (en) * 2016-03-30 2016-08-24 上海博康智能信息技术有限公司 Method and system for detecting two license plates of one vehicle


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant