CN110111582B - Multi-lane free flow vehicle detection method and system based on TOF camera - Google Patents

Info

Publication number: CN110111582B
Application number: CN201910447708.4A
Authority: CN (China)
Prior art keywords: image information, vehicle, target, information, road
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN110111582A
Inventors: 胡攀攀 (Hu Panpan), 蔡鄂 (Cai E), 李康 (Li Kang)
Assignee (current and original): Wuhan Wanji Information Technology Co., Ltd.
Application filed by Wuhan Wanji Information Technology Co., Ltd.
Priority claimed to CN201910447708.4A
Published as CN110111582A (application) and CN110111582B (grant)

Classifications

    • G — PHYSICS
    • G07 — CHECKING-DEVICES
    • G07B — TICKET-ISSUING APPARATUS; FARE-REGISTERING APPARATUS; FRANKING APPARATUS
    • G07B 15/00 — Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points
    • G07B 15/02 — Arrangements or apparatus for collecting fares, tolls or entrance fees at one or more control points taking into account a variable factor such as distance or time, e.g. for passenger transport, parking systems or car rental systems
    • G — PHYSICS
    • G08 — SIGNALLING
    • G08G — TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 — Traffic control systems for road vehicles
    • G08G 1/01 — Detecting movement of traffic to be counted or controlled
    • G08G 1/017 — Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G 1/0175 — Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Abstract

The invention provides a multi-lane free-flow vehicle detection method and system based on a TOF camera. The method comprises the following steps: acquiring a set of first image information obtained by a time-of-flight (TOF) camera arranged on a road shooting a first detection area within a predetermined time period; each time the TOF camera captures a piece of first image information, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, so that the TOF camera triggers a set of second image information within the predetermined time period; determining target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road; acquiring charging information of the target vehicle collected within the predetermined time period by a road side unit (RSU) arranged on the road; and matching the target vehicle obtained from the target image information against the charging information of the target vehicle to obtain the vehicle information of the target vehicle. The invention solves the problem that the driving condition of a vehicle on the road cannot be acquired in real time.

Description

Multi-lane free flow vehicle detection method and system based on TOF camera
Technical Field
The invention relates to the field of communication, in particular to a method and a system for detecting a multi-lane free flow vehicle based on a TOF camera.
Background
At present, non-stop highway toll collection systems have been established in many provinces and cities of China and have improved highway throughput to some extent. However, the existing systems require vehicles fitted with on-board units (OBUs) to pass at low speed through a single lane equipped with an RSU antenna. When traffic volume is high, vehicle congestion still occurs and communication efficiency suffers.
Multi-lane free-flow technology builds on existing highway toll collection technology and allows vehicles to pass freely, at normal driving speed, across the multiple lanes of an ordinary road or expressway. Current multi-lane free-flow systems mainly rely on fusing and matching information from multiple sensors to handle vehicle positioning, vehicle transactions, snapshot evidence and the like; such a system generally comprises a vehicle detection and positioning subsystem, a snapshot subsystem and a transaction subsystem. However, the vehicle position information provided by current detection and positioning subsystems is usually limited to one or a few instants and cannot give the real-time position of a vehicle over a period of time. Under multi-lane free flow, lane changes, lane straddling and occlusion between vehicles can therefore cause vehicle information to be matched incorrectly, and the matching rate is low; as a result, a vehicle may pass without its toll being collected.
No effective solution has yet been proposed in the related art for the problem that information about vehicles travelling on a road is acquired inefficiently.
Disclosure of Invention
The embodiments of the invention provide a multi-lane free-flow vehicle detection method and system based on a TOF camera, so as to at least solve the problem in the related art that information about vehicles travelling on a road is acquired inefficiently.
According to one embodiment of the invention, a multi-lane free-flow vehicle detection method based on a TOF camera is provided, comprising the following steps: acquiring a set of first image information obtained by a time-of-flight (TOF) camera arranged on a road shooting a first detection area within a predetermined time period; each time a piece of first image information is captured by the TOF camera, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, so that the TOF camera triggers a set of second image information within the predetermined time period; determining target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road; acquiring charging information of the target vehicle collected within the predetermined time period by a road side unit (RSU) arranged on the road; and matching the target vehicle obtained from the target image information against the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
Optionally, determining the target image information from the set of second image information includes: determining, from the set of first image information, third image information captured at a plurality of consecutive shooting times within the predetermined time period, wherein the image indicated by the third image information includes the vehicle-presence area where the target vehicle is located; determining the travel trajectory of the target vehicle from the changes in depth information and horizontal offset of the vehicle-presence area across the third image information; when the travel trajectory indicates that the target vehicle has moved from a first lane to a second lane of the road, determining that the set of first image information indicates this movement; and determining, from the set of second image information, target image information obtained by shooting a target lane in the second detection area, wherein the target lane comprises some or all of the lanes passed by the target vehicle while moving from the first lane to the second lane.
Optionally, acquiring the set of first image information obtained by the time-of-flight TOF camera arranged on the road shooting the first detection area within the predetermined time period includes: acquiring a set of real-time depth image information obtained by the TOF camera shooting the first detection area at a plurality of consecutive shooting times within the predetermined time period; extracting the position information of the target vehicle at those shooting times from the set of real-time depth image information; and, when the position information at the consecutive shooting times indicates that the target vehicle has changed lanes and passed through at least two first detection areas, setting the set of first image information to include the depth image information, within the set of real-time depth image information, captured by the TOF cameras covering those at least two first detection areas.
Optionally, before the vehicle information of the target vehicle is obtained from the target image information, the method further includes: stretching or shrinking the pictures represented by some or all of the set of first image information and/or by some or all of the set of second image information, so that after processing the two sets of pictures have the same resolution.
Optionally, acquiring the set of first image information obtained by the time-of-flight TOF camera arranged on the road shooting the first detection area within the predetermined time period includes: acquiring a set of real-time depth image information obtained by the TOF camera shooting the first detection area at a plurality of consecutive shooting times within the predetermined time period; and performing image differencing and image segmentation on background image information and the set of real-time depth image information to determine the vehicle-presence area in the first detection area and the contour information of the vehicle in that area, wherein the set of first image information is set to include the vehicle-presence area and the contour information, and the background image information is depth image information obtained by the TOF camera shooting the first detection area while no vehicle is present in it.
According to another embodiment of the invention, a multi-lane free-flow vehicle detection apparatus based on a TOF camera is provided, including: a first acquisition module, configured to acquire a set of first image information obtained by a time-of-flight (TOF) camera arranged on a road shooting a first detection area within a predetermined time period; a triggering module, configured to trigger, each time a piece of first image information is captured by the TOF camera, a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, so that the TOF camera triggers a set of second image information within the predetermined time period; a determination module, configured to determine target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road; a second acquisition module, configured to acquire charging information of the target vehicle collected within the predetermined time period by a road side unit (RSU) arranged on the road; and a matching module, configured to match the target vehicle obtained from the target image information against the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
According to another embodiment of the invention, a TOF-camera-based multi-lane free-flow vehicle detection system is provided, comprising: the TOF camera, installed on a cross bar or gantry above the road and configured to shoot a first detection area within a predetermined time period to obtain a set of first image information; a snapshot camera, configured to shoot a second detection area to obtain second image information; and a data processing unit comprising a microprocessor, a data storage unit and an external interface unit, configured to trigger the target snapshot camera arranged on the road to shoot the second detection area each time a piece of first image information is captured by the TOF camera, so that the TOF camera triggers a set of second image information within the predetermined time period. The data processing unit is further configured to determine target image information from the set of second image information, wherein the target image information indicates that a target vehicle is present on the road; to acquire charging information of the target vehicle collected within the predetermined time period by a road side unit (RSU) arranged on the road; and to match the target vehicle obtained from the target image information against the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
Optionally, the second detection area is greater than or equal to the first detection area, and the second detection area covers the first detection area.
Optionally, in the driving direction, the detection area of the RSU is greater than or equal to the second detection area, and an area where the detection area of the RSU intersects with the second detection area is greater than or equal to half of the detection area of the RSU.
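As a rough illustration of this layout constraint, the check below treats both detection areas as intervals along the driving direction; the function name and the interval representation are assumptions made for the sketch, not part of the patent.

```python
def rsu_layout_ok(rsu, snap):
    """Check the claimed constraints along the driving direction:
    the RSU detection interval must be at least as long as the snapshot
    camera's interval, and their overlap must cover at least half of the
    RSU interval. Intervals are (start, end) positions in metres; this is
    a hypothetical helper, not taken from the patent text."""
    rsu_len = rsu[1] - rsu[0]
    snap_len = snap[1] - snap[0]
    overlap = max(0.0, min(rsu[1], snap[1]) - max(rsu[0], snap[0]))
    return rsu_len >= snap_len and overlap >= rsu_len / 2

print(rsu_layout_ok((0, 20), (5, 20)))   # True: overlap 15 m >= half of 20 m
print(rsu_layout_ok((0, 20), (18, 30)))  # False: overlap only 2 m
```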
Optionally, when the detection areas of a single TOF camera and a single target snapshot camera cannot cover all lanes of the road, at least two TOF cameras and at least two target snapshot cameras are arranged on the road.
Optionally, the TOF camera is arranged a first distance in front of the target snapshot camera in the driving direction, or a second distance behind it.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a set of first image information is obtained by the TOF camera arranged on the road shooting the first detection area; after each TOF capture, the target snapshot camera arranged on the road is triggered to shoot the second detection area, yielding a set of second image information; target image information indicating that a target vehicle is present on the road is determined from the set of second image information; charging information of the target vehicle, collected within a predetermined time period by the road side unit (RSU) arranged on the road, is acquired; and the target vehicle obtained from the target image information is matched against the charging information to obtain the vehicle information of the target vehicle. The problem of inefficient acquisition of information about vehicles travelling on the road is thereby solved, improving the efficiency with which such information is acquired.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of the hardware structure of a mobile terminal running the multi-lane free-flow vehicle detection method based on a TOF camera according to an embodiment of the invention;
FIG. 2 is a flow chart of a multi-lane free-flow vehicle detection method based on TOF cameras according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating steps executed by a multi-lane free-flow vehicle detection method based on TOF cameras according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-lane free-flow vehicle detection system based on TOF cameras according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of another TOF camera-based multi-lane free-flow vehicle detection system according to an alternative embodiment of the present invention;
FIG. 6 is a schematic block diagram of another TOF camera-based multi-lane free-flow vehicle detection system according to an alternative embodiment of the present invention;
fig. 7 is a block diagram of the structure of a multi-lane free-flow vehicle detection apparatus based on a TOF camera according to an embodiment of the invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking operation on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running the TOF-camera-based multi-lane free-flow vehicle detection method according to an embodiment of the invention. As shown in fig. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input-output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the TOF camera-based multi-lane free-flow vehicle detection method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
This embodiment provides a multi-lane free-flow vehicle detection method based on a TOF camera, running on the mobile terminal described above. Fig. 2 is a flowchart of the method according to an embodiment of the invention; as shown in fig. 2, the flow includes the following steps:
step S202, a group of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road in a preset time period is acquired;
step S204, under the condition that each piece of first image information is obtained by shooting through the TOF camera, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, wherein a group of second images obtained by shooting through the TOF camera is triggered within the preset time period;
step S206, determining target image information from the group of second image information, wherein the target image information shows that a target vehicle exists on the road;
step S208, obtaining charging information of the target vehicle, which is obtained by a Road Side Unit (RSU) arranged on the road within a preset time period;
step S210, matching the target vehicle acquired from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
Through the above steps, a set of first image information is obtained by the TOF camera arranged on the road shooting the first detection area; after each TOF capture, the target snapshot camera arranged on the road is triggered to shoot the second detection area, yielding a set of second image information; target image information indicating that a target vehicle exists on the road is determined from the set of second image information; charging information of the target vehicle, collected within a predetermined time period by the road side unit (RSU) arranged on the road, is acquired; and the target vehicle obtained from the target image information is matched against its charging information to obtain the vehicle information of the target vehicle. The problem of inefficient acquisition of information about vehicles travelling on the road is thereby solved, improving the efficiency with which such information is acquired.
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
In an alternative embodiment, determining the target image information from the set of second image information comprises: when the set of first image information indicates that the target vehicle exists on the road at a target trigger time, determining the target image information shot at that target trigger time from the set of second image information. In this embodiment, the first images acquired by the TOF camera include both frames with a vehicle and frames without one; since every second image captured by the snapshot camera is triggered by a TOF capture, each second image corresponds to a first image. The frames in which the target vehicle appears are identified among the first images, and the corresponding second images, found via the times at which the snapshot camera was triggered, are determined to be the target images.
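The correspondence described above — keep only the snapshot-camera images whose trigger time matches a TOF frame containing a vehicle — can be sketched as follows; the dictionary layout keyed by trigger time is an assumption for the sketch.

```python
def target_images(first_infos, second_infos):
    """Pick, from the snapshot-camera images, those taken at trigger times
    whose corresponding TOF frame shows a vehicle. Both arguments map a
    trigger time to image info; the dict layout is an illustrative
    assumption, not taken from the patent."""
    return {t: second_infos[t]
            for t, info in first_infos.items()
            if info.get("vehicle") and t in second_infos}

tof = {1: {"vehicle": True}, 2: {"vehicle": False}, 3: {"vehicle": True}}
snaps = {1: "img1", 2: "img2", 3: "img3"}
print(target_images(tof, snaps))  # {1: 'img1', 3: 'img3'}
```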
In an alternative embodiment, before the target image information captured at the target trigger time is determined from the set of second image information, the method further comprises: acquiring the position information of the target vehicle captured at a first shooting time within the set of first image information, wherein the predetermined time period includes the first shooting time; and, when the position information of the target vehicle is acquired, setting the first shooting time as the target trigger time and determining that the set of first image information indicates that the target vehicle exists on the road at that trigger time. In this embodiment, the trigger time at which the snapshot camera is fired is determined from the position of the target vehicle, which in turn establishes that the first image information at that trigger time contains the target vehicle.
In an alternative embodiment, determining the target image information from the set of second image information comprises: when the set of first image information indicates that the target vehicle has moved from a first lane to a second lane of the road, determining, from the set of second image information, target image information obtained by shooting a target lane in the second detection area, wherein the target lane comprises some or all of the lanes passed by the target vehicle while moving from the first lane to the second lane. In this embodiment, the TOF camera shoots at a fixed frequency, and its depth images reflect the travel trajectory of the target vehicle on the road; the first image information thus records that the target vehicle has changed lanes, for example the whole process of moving from the first lane to a third lane. The target image information is taken from the set of second image information captured by the snapshot camera for the target lane, where the target lane may be the first, second and third lanes (all lanes passed during the lane change) or only the first and third lanes (part of the lanes), which is not limited herein.
In an alternative embodiment, before the target image information obtained by shooting the target lane in the second detection area is determined from the set of second image information, the method further includes: determining, from the set of first image information, third image information captured at a plurality of consecutive shooting times within the predetermined time period, wherein the image indicated by the third image information includes the vehicle-presence area where the target vehicle is located; determining the travel trajectory of the target vehicle from the changes in depth information and horizontal offset of the vehicle-presence area across the third image information; and, when the travel trajectory indicates that the target vehicle has moved from a first lane to a second lane of the road, determining that the set of first image information indicates this movement. In this embodiment, an image of the target vehicle is obtained from the set of first image information captured by the TOF camera. A TOF image encodes the distance between each object and the camera; this distance is the depth information, and together with the horizontal offset of the vehicle in the image it determines the travel trajectory. For example, if across a set of first images only the depth information of the target vehicle changes, only its distance to the TOF camera has changed and no lane change has occurred; when the horizontal offset changes beyond a certain range, a lane-change manoeuvre can be concluded.
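The trajectory test described above can be sketched as follows; the lane width, the simple flooring rule, and all names are illustrative assumptions, not the patented algorithm.

```python
def detect_lane_change(track, lane_width=3.5):
    """Infer a lane change from a vehicle track, following the idea in the
    text: each frame contributes (depth, horizontal offset). A change of
    lane index derived from the horizontal offset marks a lane change;
    lane_width and the flooring rule are assumptions for the sketch."""
    lanes = [int(x // lane_width) for _, x in track]  # lane index per frame
    return lanes[0] != lanes[-1], lanes[0], lanes[-1]

# Offsets drift from lane 0 into lane 2 while the depth (distance to the
# camera along the road) decreases frame by frame.
track = [(40.0, 1.7), (35.0, 3.9), (30.0, 6.2), (25.0, 8.8)]
print(detect_lane_change(track))  # (True, 0, 2)
```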
In an alternative embodiment, obtaining the vehicle information of the target vehicle from the target image information includes: acquiring, from the target image information, the vehicle information of the target vehicle at the position corresponding to the position information of the target vehicle, where that position information is the position captured at the first shooting time in the set of first image information. In this embodiment, the first image may contain no vehicle or several vehicles; the target vehicle is identified by its position in the first image, and its vehicle information is then acquired. For instance, if the first image contains three vehicles located at the upper left, the middle and the lower left, the target vehicle is singled out by its position in the first image before its vehicle information is read.
In an optional embodiment, after the vehicle information of the target vehicle is obtained from the target image information, the method further comprises: acquiring a set of charging information obtained by the road side unit (RSU) arranged on the road charging a set of on-board units (OBUs) within the predetermined time period; and outputting prompt information when the charging information of the target vehicle is not included in the set of charging information, the prompt information indicating that the charging of the target vehicle is abnormal. In this embodiment, a vehicle fitted with an OBU is charged through the RSU on the road; whether the target vehicle has actually been charged is determined by comparing its observed passage with the deduction records, and if no fee has been deducted, a prompt message is issued to confirm that the target vehicle was not charged.
In an alternative embodiment, before the prompt information is output, the method further includes: determining that the set of charging information does not include the charging information of the target vehicle when no license plate information in the set of charging information matches the license plate in the vehicle information of the target vehicle; or determining the same when matching license plate information exists but the vehicle type recorded against that plate in the set of charging information is inconsistent with the vehicle type in the vehicle information of the target vehicle. In this embodiment, charging is generally keyed to the license plate: if the charging records contain no deduction for the target vehicle's plate, the vehicle has not paid; if they contain a deduction for the plate but the recorded vehicle type disagrees with the type observed by the TOF and snapshot cameras, the deduction is treated as abnormal.
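The two anomaly cases can be sketched as a small check; all field names (`plate`, `vehicle_type`, `fee`) are illustrative assumptions.

```python
def charging_anomaly(vehicle, records):
    """Return a reason string if the vehicle's toll record is missing or
    inconsistent, mirroring the two cases in the text: no record for the
    plate, or a record whose vehicle type disagrees with the type observed
    by the cameras. Returns None when the record is consistent."""
    rec = records.get(vehicle["plate"])
    if rec is None:
        return "no charging record for plate"
    if rec["vehicle_type"] != vehicle["vehicle_type"]:
        return "vehicle type mismatch"
    return None  # record present and consistent: no prompt needed

records = {"B777": {"vehicle_type": "truck", "fee": 25}}
print(charging_anomaly({"plate": "B777", "vehicle_type": "car"}, records))
# vehicle type mismatch
print(charging_anomaly({"plate": "C001", "vehicle_type": "car"}, records))
# no charging record for plate
```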
In an alternative embodiment, the acquiring of a set of first image information obtained by a time of flight (TOF) camera arranged on a road shooting a first detection area within a predetermined time period includes: acquiring a set of real-time depth image information obtained by the TOF camera shooting the first detection area at a plurality of consecutive shooting times within the predetermined time period; extracting position information of the target vehicle at the plurality of consecutive shooting times from the set of real-time depth image information; and, in the case that the position information of the target vehicle at the plurality of consecutive shooting times indicates that the target vehicle has changed lanes and passed through at least two first detection areas, setting the set of first image information to include the depth image information, within the set of real-time depth image information, captured by the TOF cameras covering the at least two first detection areas.
In an optional embodiment, before obtaining the vehicle information of the target vehicle from the target image information, the method further comprises: and performing stretching or shrinking processing on the pictures represented by part or all of the group of first image information and/or the pictures represented by part or all of the group of second image information, so that the resolution of the pictures represented by part or all of the group of first image information after processing is consistent with the resolution of the pictures represented by part or all of the group of second image information. In this embodiment, since the resolutions of the images taken by the TOF camera and the snapshot camera may not be the same, the resolutions of the images taken by the TOF camera and the snapshot camera may be made the same through stretching or shrinking processing.
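As a minimal sketch of the stretching/shrinking step above, assuming nearest-neighbour sampling (the patent does not specify an interpolation method), one image can be resized to the other's resolution before pixel matching:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Stretch or shrink a 2-D image to (out_h, out_w) by nearest-neighbour
    sampling, so that a TOF depth image and a snapshot picture end up with
    the same resolution."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[rows][:, cols]
```

For instance, a low-resolution depth image could be stretched to the snapshot picture's resolution, or the snapshot picture shrunk to the depth image's resolution, whichever direction the pixel-matching step prefers.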
In an alternative embodiment, the acquiring of a set of first image information obtained by a time of flight (TOF) camera arranged on a road shooting a first detection area within a predetermined time period comprises: acquiring a set of real-time depth image information obtained by the TOF camera shooting the first detection area at a plurality of consecutive shooting times within the predetermined time period; and carrying out image difference and image segmentation processing on background image information and the set of real-time depth image information to determine a vehicle-presence area in the first detection area and contour information of a vehicle in the vehicle-presence area, wherein the set of first image information is set to comprise the vehicle-presence area and the contour information, and the background image information is depth image information obtained by the TOF camera shooting the first detection area when no vehicle appears in the first detection area. In this embodiment, the area of the image containing a vehicle and the contour information of the vehicle can be determined by carrying out difference and image segmentation processing on the depth image information captured by the TOF camera and the background image information, and the contour information of the vehicle includes the vehicle type information.
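A minimal sketch of the background-difference step, using a plain NumPy threshold in place of a full segmentation pipeline; the threshold value and function name are illustrative assumptions, not from the patent:

```python
import numpy as np

def find_vehicle_region(depth_frame, background, diff_thresh=0.5):
    """Difference a live depth frame against the empty-road background
    and return the bounding box of the vehicle-presence area, or None.

    Pixels whose depth differs from the background by more than
    diff_thresh are assumed to belong to a vehicle above the road surface.
    """
    mask = np.abs(depth_frame - background) > diff_thresh
    if not mask.any():
        return None                      # no vehicle in the detection area
    ys, xs = np.nonzero(mask)
    # (top, left, bottom, right) of the vehicle-presence area; the box's
    # extent gives rough contour (vehicle width/length) information
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```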
In an alternative embodiment, the second detection area is greater than or equal to the first detection area and the second detection area covers the first detection area. In this embodiment, the detection area where the snapshot camera performs the snapshot is greater than or equal to the detection area where the TOF camera performs the shooting.
In an alternative embodiment, in the driving direction, the detection area of the RSU is greater than or equal to the second detection area, and the area where the detection area of the RSU intersects with the second detection area is greater than or equal to half of the detection area of the RSU.
In an alternative embodiment, at least two TOF cameras and at least two target snapshot cameras are arranged on the road in the case that the detection areas of one TOF camera and one target snapshot camera cannot cover all lanes of the road. In this embodiment, a single TOF camera and target snapshot camera may be provided, or several may be provided; this is not limited herein.
In an alternative embodiment, the TOF camera is arranged in front of the target capture machine in the direction of travel and at a first distance or the TOF camera is arranged behind the target capture machine in the direction of travel and at a second distance.
The application is illustrated below by means of a specific example.
The main purpose of the application is to provide a method and a system for detecting a multi-lane free flow vehicle based on a TOF camera, the position of the free flow vehicle is detected and tracked in real time through the TOF camera, accurate vehicle position information is provided, and the position information is accurately matched with a snapshot subsystem and a transaction subsystem, so that the problems of low matching efficiency, matching error and the like in the multi-lane free flow detection technology in the prior art are solved.
In order to achieve the above object, a method for detecting a multi-lane free flow vehicle based on a TOF camera is provided, and fig. 3 is a diagram of steps executed by the method for detecting a multi-lane free flow vehicle based on a TOF camera according to an embodiment of the invention, mainly including the following steps:
step S1, acquiring real-time depth image information and background image information (the depth image information and the background image information correspond to first image information) of a detection region (corresponding to a first detection region) by a TOF camera, and sending to a data processing unit;
step S2, the data processing unit extracts vehicle position information and vehicle outline information from the real-time depth image information, sends triggering information to the snapshot machine to snapshot the vehicle in a snapshot area (corresponding to the second detection area), and the snapshot machine sends the snapshot vehicle picture information to the data processing unit;
step S3, the data processing unit carries out pixel matching processing on the vehicle picture and the depth image information at the moment of sending the trigger information, and records the vehicle running track;
step S4, the on-board unit (OBU) position information and the transaction information are acquired by the RSU detection unit, and the OBU transaction information, the vehicle picture information, and the depth image information are accurately matched by the data processing unit through the time-space matching method, so as to acquire the vehicle traffic information.
Further, in step S1, the background image information refers to depth image information obtained by the TOF camera when no vehicle passes through the detection area, the real-time depth image information and the background image information include detection time and height information, horizontal offset information, and depth information of each pixel point, and the depth information is distance information between the pixel point and the TOF camera.
Further, in step S2, the data processing unit extracts the vehicle position information and the vehicle contour information from the real-time depth image information by performing image difference and image segmentation processing on the real-time depth image information and the background image information to extract the vehicle-presence areas in the detection area, and by analyzing the contour feature information of each vehicle-presence area, including the vehicle type, the vehicle length, the vehicle width, and the vehicle height.
Further, in step S2, in the process of extracting the vehicle position information and the vehicle contour information from the real-time depth image information, the data processing unit sends trigger information to the snapshot machine at least twice to snapshot the vehicle in the snapshot area, and when the vehicle changes lanes and passes through the snapshot areas (corresponding to the second detection areas) of two or more snapshot machines, the data processing unit matches the vehicle picture information captured by the different snapshot machines according to the license plate number, and/or the trigger time and trigger position.
Further, in step S3, the method for the data processing unit to perform pixel matching processing on the depth image information at the time when the trigger information is sent and the vehicle picture includes: stretching or shrinking the vehicle picture to enable the resolution of the processed vehicle picture to be consistent with the resolution of the depth image; or stretching or shrinking the depth image to enable the resolution of the processed depth image to be consistent with the resolution of the vehicle picture.
Further, in step S3, the method for recording the driving track of the vehicle includes: continuously comparing the depth image information at time t-1 with the depth image information at time t, and recording the driving track from the change in depth information and the change in horizontal offset information of the vehicle area in the depth image information; during the time period in which the vehicle passes through the detection areas (corresponding to the first detection areas) of two or more TOF cameras, the data processing unit matches the vehicle depth image information detected by the different TOF cameras according to the detection time, the vehicle position information, and the vehicle contour information.
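The track-recording step can be sketched as follows, assuming the vehicle region is segmented in each frame by background differencing as in step S1. Using one centroid per frame is a simplification of the patent's per-pixel description: the row coordinate stands in for the depth change along the road, and the column coordinate for the horizontal offset change:

```python
import numpy as np

def record_track(frames, background, diff_thresh=0.5):
    """Record a driving track as one (depth_row, horizontal_col) centroid
    per consecutive depth frame; comparing the centroid at time t-1 with
    that at time t gives the depth change and horizontal offset change."""
    track = []
    for frame in frames:
        mask = np.abs(frame - background) > diff_thresh
        if mask.any():
            ys, xs = np.nonzero(mask)
            track.append((float(ys.mean()), float(xs.mean())))
    return track
```

A change in the column coordinate between successive entries indicates a lane change, which is the case in which image information from two or more TOF cameras must be matched.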
Further, in step S4, the method by which the data processing unit precisely matches the OBU transaction information, the vehicle picture information and the depth image information through the time-space matching method is as follows: after the data processing unit acquires the OBU transaction information, if, within a set time threshold, the vehicle-mounted OBU position information and the vehicle position information detected by the TOF camera fall within a set space threshold, the OBU transaction information is matched with the depth image information and the vehicle picture information, and the vehicle traffic information is acquired, comprising: the OBU transaction state information, the license plate information acquired by the OBU, the license plate information acquired by the snapshot machine, the vehicle picture information acquired by the snapshot machine, and the vehicle type information, contour feature information and track information acquired by the TOF camera.
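The time-space matching rule above can be sketched as a nearest-detection search under both thresholds. The threshold values and the record fields (`time`, `x`, `y`) are illustrative assumptions; the patent specifies only that both a time threshold and a space threshold are applied:

```python
def match_obu_transaction(obu_tx, tof_detections,
                          time_thresh=0.5, space_thresh=1.5):
    """Pair an OBU transaction with the TOF-detected vehicle whose detection
    time lies within time_thresh of the transaction time and whose position
    lies within space_thresh of the OBU position; return the closest match,
    or None if no detection satisfies both thresholds."""
    best, best_dist = None, None
    for det in tof_detections:
        if abs(det["time"] - obu_tx["time"]) > time_thresh:
            continue                     # outside the set time threshold
        dist = ((det["x"] - obu_tx["x"]) ** 2 +
                (det["y"] - obu_tx["y"]) ** 2) ** 0.5
        if dist <= space_thresh and (best_dist is None or dist < best_dist):
            best, best_dist = det, dist  # closest detection inside both thresholds
    return best
```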
As shown in fig. 4, the schematic structural diagram of a system for detecting a multi-lane free-flow vehicle based on a TOF camera according to an embodiment of the invention is used to implement the above detection method, and includes:
the TOF camera (10) is installed on a cross bar or a portal above a road, is connected with the data processing unit (30), and is used for acquiring depth image information of the detection area (1) (corresponding to the first detection area) and sending the depth image information to the data processing unit (30);
the snapshot machine (20) is arranged on a cross bar or a portal above a road, is connected with the data processing unit (30), and is used for acquiring vehicle picture information of a snapshot area (2) (corresponding to the second detection area) and sending the vehicle picture information to the data processing unit (30);
the RSU detection unit (40) and the TOF camera (10) are arranged on the same cross bar or gantry and connected with the data processing unit (30), and one RSU detection unit (40) is arranged right above each lane and used for acquiring vehicle-mounted OBU position information and transaction information of the transaction area (3);
and the data processing unit (30) comprises a microprocessor, a data storage unit and an external interface unit, and is used for extracting the vehicle position information and the outline information of the depth image information, sending triggering information to the snapshot machine (20), and matching the depth image information and the vehicle picture information with the transaction information.
It is noted that the multilane free-flow vehicle detection system comprises one or more TOF cameras (10) and one or more snap-shot machines (20);
in the traveling direction, the farthest distance between the detection area (1) of a TOF camera (10) and the TOF camera (10) is 25-30 meters and the nearest distance is 10-15 meters; the union of the detection areas (1) of the one or more TOF cameras can completely cover all lanes, and vehicles pass through the detection area (1) before passing under the portal frame or cross bar in the traveling direction;
the union of the capturing areas (2) of one or more capturing machines (20) is not less than the union of the detection areas (1) of one or more TOF cameras, and the union of the capturing areas (2) or the capturing areas (2) can completely cover the union of the detection areas (1) or the detection areas (1).
It should be further noted that the capturing area (2) of the capturing machine (20) is not smaller than the detection area (1), and the capturing area can completely cover the detection area (1).
It should be further noted that, in the driving direction, the transaction area (3) of the RSU detection unit (40) is not smaller than the snapshot area (2), and the transaction area (3) and the snapshot area (2) have an intersection, and the intersection area is not smaller than one half of the transaction area (3).
Fig. 5 is a schematic structural diagram of another multi-lane free-flow vehicle detection system based on TOF cameras according to the present invention. Considering the fields of view of the TOF camera (10) and the snapshot machine (20) and their optimal detection areas, when the field of view of the TOF camera and snapshot machine used cannot cover all lanes, the layout of fig. 5 is adopted, with multiple TOF cameras and multiple snapshot machines; the system layout features are the same as those of fig. 4, and the detection method is the same as the above detection method.
Fig. 6 is a schematic structural diagram of another multi-lane free-flow vehicle detection system based on a TOF camera according to the invention. Considering that the detection distances of the TOF camera (10) and the snapshot machine (20) differ, when the detection area of the TOF camera used does not coincide with the capture distance of the snapshot machine, the installation positions of the TOF camera and the snapshot machine are adjusted, as in the layout of fig. 6; the detection method is consistent with the above detection method.
The beneficial effects of this application are the following two: (1) the driving track of a multi-lane free-flow vehicle can be detected, the position of the vehicle at each moment during its passage can be accurately located, and the vehicle image information and the transaction information can be accurately matched; (2) through multi-sensor information fusion and matching, complete vehicle information is obtained, providing complete evidence against illegal and fee-evading vehicles and leaving them nowhere to escape.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
In this embodiment, a device for detecting a multi-lane free flow vehicle based on a TOF camera is also provided, and the device is used to implement the above embodiments and preferred embodiments, which have already been described and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 7 is a block diagram of the structure of a multi-lane free-flow vehicle detecting apparatus based on a TOF camera according to an embodiment of the present invention. As shown in fig. 7, the apparatus includes: a first acquisition module 72, configured to acquire a set of first image information obtained by a time of flight (TOF) camera disposed on a road shooting a first detection area within a predetermined time period; a triggering module 74, configured to, each time a piece of first image information is obtained by the TOF camera, trigger a target snapshot camera arranged on the road to shoot a second detection area so as to obtain second image information, wherein a group of second image information is obtained by the target snapshot camera under triggering by the TOF camera within the predetermined time period; a determining module 76, configured to determine target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road; a second obtaining module 78, configured to obtain charging information of the target vehicle acquired by a road side unit (RSU) disposed on the road within the predetermined time period; and a matching module 710, configured to match the target vehicle acquired from the target image information with the charging information of the target vehicle, so as to obtain vehicle information of the target vehicle.
In an alternative embodiment, the determining module 76 determines the target image information from the set of second image information, wherein the set of first image information indicates that the target vehicle is present on the road at the target trigger time, and the target image information captured at the target trigger time is determined from the set of second image information.
In an alternative embodiment, the apparatus is further configured to acquire the position information of the target vehicle captured at a first capturing time from the set of first image information before determining the target image information captured at the target trigger time from the set of second image information, where the predetermined time period includes the first capturing time; and under the condition that the position information of the target vehicle is acquired, setting the first shooting time as the target trigger time, and determining that the group of first image information represents that the target vehicle exists on the road at the target trigger time.
In an alternative embodiment, the determining module 76 is further configured to determine the target image information from the set of second image information, where the set of first image information indicates that the target vehicle moves from a first lane to a second lane of the road, and the target image information obtained by shooting the target lane in the second detection area is determined from the set of second image information, where the target lane includes a part of lanes or all of lanes through which the target vehicle moves from the first lane to the second lane.
In an alternative embodiment, the apparatus is further configured to determine, before determining target image information obtained by capturing a target lane in the second detection area from the set of second image information, third image information obtained by capturing a plurality of consecutive capturing times within the predetermined time period from the set of first image information, wherein an image indicated by the third image information includes a vehicle-presence area where the target vehicle is located; determining the running track of the target vehicle according to the depth information change and the horizontal offset information change of the vehicle-bearing area in the third image information; in a case where the travel track indicates that the target vehicle moves from a first lane to a second lane of the road, it is determined that the set of first image information indicates that the target vehicle moves from the first lane to the second lane of the road.
In an alternative embodiment, the second obtaining module 78 obtains the vehicle information of the target vehicle from the target image information, and obtains the vehicle information of the target vehicle at a position corresponding to the position information of the target vehicle from the target image information, wherein the position information of the target vehicle is the position information of the target vehicle captured at the first capturing time obtained in the group of first image information.
The apparatus is further configured to acquire, after acquiring the vehicle information of the target vehicle from the target image information, a set of charging information obtained by the RSU arranged on the road charging a set of on-board units (OBUs) within the predetermined time period; and to output prompt information in the case that the charging information of the target vehicle is not included in the set of charging information, wherein the prompt information is used to prompt that the charging of the target vehicle is abnormal.
In an optional embodiment, the apparatus is further configured to determine that the group of charging information does not include the charging information of the target vehicle when, before the prompt information is output, there is no target license plate information in the group of charging information that is consistent with the license plate information in the vehicle information of the target vehicle; or determining that the charging information of the target vehicle is not included in the group of charging information under the condition that the group of charging information has target license plate information which is consistent with license plate information in the vehicle information of the target vehicle but target vehicle type information which is corresponding to the target license plate information in the group of charging information is inconsistent with vehicle type information in the vehicle information of the target vehicle.
In an alternative embodiment, the first obtaining module 72 acquires a set of first image information obtained by a TOF camera shooting a first detection area within a predetermined time period, and acquires a set of real-time depth image information obtained by the TOF camera shooting the first detection area at multiple consecutive shooting times within the predetermined time period; extracts position information of the target vehicle at the multiple consecutive shooting times from the set of real-time depth image information; and, in the case that the position information of the target vehicle at the multiple consecutive shooting times indicates that the target vehicle has changed lanes and passed through at least two first detection areas, sets the set of first image information to include the depth image information, within the set of real-time depth image information, captured by the TOF cameras covering the at least two first detection areas.
In an optional embodiment, the above apparatus is further configured to, before acquiring the vehicle information of the target vehicle from the target image information, perform stretching or shrinking processing on the pictures represented by part or all of the set of first image information and/or the pictures represented by part or all of the set of second image information, so that the resolution of the pictures represented by part or all of the processed set of first image information is consistent with the resolution of the pictures represented by part or all of the set of second image information.
In an optional embodiment, the first obtaining module obtains a set of first image information obtained by shooting a first detection area by a TOF camera during a predetermined time period, and obtains a set of real-time depth image information obtained by shooting the first detection area by the TOF camera during multiple continuous shooting times during the predetermined time period; and carrying out image difference and image segmentation processing on background image information and the set of real-time depth image information to determine a vehicle-presence area in the first detection area and contour information of a vehicle in the vehicle-presence area, wherein the set of first image information is set to comprise the vehicle-presence area and the contour information, and the background image information is depth image information obtained by shooting the first detection area by the TOF camera under the condition that no vehicle appears in the first detection area.
In an alternative embodiment, the second detection area is greater than or equal to the first detection area and the second detection area covers the first detection area.
In an alternative embodiment, in the driving direction, the detection area of the RSU is greater than or equal to the second detection area, and the area where the detection area of the RSU intersects with the second detection area is greater than or equal to half of the detection area of the RSU.
In an alternative embodiment, at least two TOF cameras and at least two target snapshot cameras are arranged on the road in the case that the detection areas of one TOF camera and one target snapshot camera cannot cover all lanes of the road.
In an alternative embodiment, the TOF camera is arranged in front of the target capture machine in the direction of travel and at a first distance or the TOF camera is arranged behind the target capture machine in the direction of travel and at a second distance.
Example 3
There is also provided in this embodiment a system for multi-lane free-flow vehicle detection based on a TOF camera, comprising: the TOF camera, installed on a cross bar or a portal above a road and used for shooting a first detection area within a predetermined time period to obtain a group of first image information; the snapshot camera, used for shooting the second detection area to obtain second image information; and the data processing unit, which comprises a microprocessor, a data storage unit and an external interface unit and is used for triggering a target snapshot camera arranged on the road to shoot the second detection area each time a piece of first image information is obtained by the TOF camera, wherein a group of second image information is obtained by the target snapshot camera under triggering by the TOF camera within the predetermined time period; the data processing unit is further configured to determine target image information from the set of second image information, wherein the target image information indicates that a target vehicle is present on the road; to acquire charging information of the target vehicle acquired by a road side unit (RSU) arranged on the road within the predetermined time period; and to match the target vehicle acquired from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
In an alternative embodiment, the second detection area is greater than or equal to the first detection area and the second detection area covers the first detection area.
In an alternative embodiment, in the driving direction, the detection area of the RSU is greater than or equal to the second detection area, and the area where the detection area of the RSU intersects with the second detection area is greater than or equal to half of the detection area of the RSU.
In an alternative embodiment, at least two TOF cameras and at least two target snapshot cameras are arranged on the road in the case that the detection areas of one TOF camera and one target snapshot camera cannot cover all lanes of the road.
In an alternative embodiment, the TOF camera is arranged in front of the target capture machine in the direction of travel and at a first distance or the TOF camera is arranged behind the target capture machine in the direction of travel and at a second distance.
Example 4
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring a group of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road within a preset time period;
s2, each time a piece of first image information is obtained by the TOF camera, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, wherein a group of second image information is obtained by the target snapshot camera under triggering by the TOF camera within the predetermined time period;
s3, determining target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road;
s4, acquiring the charging information of the target vehicle, which is acquired by a Road Side Unit (RSU) arranged on the road within a preset time period;
and S5, matching the target vehicle acquired from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring a group of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road within a preset time period;
s2, each time a piece of first image information is obtained by the TOF camera, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, wherein a group of second image information is obtained by the target snapshot camera under triggering by the TOF camera within the predetermined time period;
s3, determining target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road;
s4, acquiring the charging information of the target vehicle, which is acquired by a Road Side Unit (RSU) arranged on the road within a preset time period;
and S5, matching the target vehicle acquired from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle.
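Steps S1–S5 above can be summarized as a trigger-then-match pipeline. The following is a minimal Python sketch of that flow; all class names, field names, and the 2-second matching window are illustrative assumptions, not part of the patent (a real system would match on OBU transaction data and lane position rather than a fixed time window):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DepthFrame:      # one piece of "first image information" from the TOF camera
    timestamp: float
    has_vehicle: bool

@dataclass
class Snapshot:        # one piece of "second image information" from the snapshot camera
    timestamp: float
    plate: str

@dataclass
class RsuRecord:       # charging information collected by the RSU
    timestamp: float
    plate: str
    toll: float

def trigger_snapshots(frames: List[DepthFrame], shoot) -> List[Snapshot]:
    """S2: each TOF frame showing a vehicle triggers the snapshot camera once."""
    return [shoot(f.timestamp) for f in frames if f.has_vehicle]

def match_vehicle_to_toll_record(snap: Snapshot, records: List[RsuRecord],
                                 window: float = 2.0) -> Optional[RsuRecord]:
    """S4/S5: pair target image information with the RSU charging record
    closest in time, within a tolerance window (assumed here to be 2 s)."""
    candidates = [r for r in records if abs(r.timestamp - snap.timestamp) <= window]
    return min(candidates, key=lambda r: abs(r.timestamp - snap.timestamp), default=None)
```

In use, the TOF camera's per-frame vehicle flag drives `trigger_snapshots`, and each resulting snapshot is then reconciled against the RSU's transaction log with `match_vehicle_to_toll_record`.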
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be fabricated separately as individual integrated-circuit modules, or multiple of them may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A multi-lane free flow vehicle detection method based on TOF cameras is characterized by comprising the following steps:
acquiring a group of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road within a preset time period;
each time a piece of first image information is captured by the TOF camera, triggering a target snapshot camera arranged on the road to shoot a second detection area to obtain second image information, wherein a group of second image information is obtained within the predetermined time period under the trigger of the TOF camera;
determining target image information from the set of second image information, wherein the target image information indicates the presence of a target vehicle on the road;
acquiring charging information of the target vehicle, which is collected by a Road Side Unit (RSU) arranged on the road within the predetermined time period;
matching the target vehicle obtained from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle;
wherein determining target image information from the group of second image information comprises: determining, from the group of first image information, third image information captured at a plurality of continuous shooting times within the predetermined time period, wherein the images indicated by the third image information comprise the vehicle-presence area where the target vehicle is located; determining the travel trajectory of the target vehicle according to changes in the depth information and in the horizontal offset information of the vehicle-presence area in the third image information; determining that the group of first image information indicates that the target vehicle moves from a first lane to a second lane of the road, in the case that the travel trajectory indicates that the target vehicle moves from the first lane to the second lane; and determining, from the group of second image information, the target image information obtained by shooting a target lane in the second detection area, wherein the target lane comprises some or all of the lanes passed by the target vehicle in moving from the first lane to the second lane.
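The lane-change determination in claim 1 reduces the vehicle-presence area's track to a sequence of horizontal offsets and checks whether the first and last offsets fall in different lanes. A minimal Python sketch, where `lane_edges` (horizontal offsets of the lane boundaries, in metres from the road edge) and the function name are hypothetical assumptions for illustration:

```python
import bisect

def detect_lane_change(horizontal_offsets, lane_edges):
    """Map each horizontal offset of the vehicle-presence area to a lane index
    (0-based, via binary search over the lane-boundary offsets) and return
    (first_lane, last_lane) when the trajectory crosses into another lane,
    or None when the vehicle stays in one lane."""
    lanes = [bisect.bisect_right(lane_edges, x) for x in horizontal_offsets]
    return (lanes[0], lanes[-1]) if lanes[0] != lanes[-1] else None
```

For example, with three 3.5 m lanes (`lane_edges = [3.5, 7.0]`), a track of offsets `[1.0, 2.5, 4.0, 5.5]` maps to lanes `[0, 0, 1, 1]` and reports a change from lane 0 to lane 1, so snapshots of both lanes would be treated as target image information.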
2. The method according to claim 1, wherein the acquiring a set of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road within a predetermined time period comprises:
acquiring a group of real-time depth image information obtained by shooting the first detection area by the TOF camera in a plurality of continuous shooting times within the preset time period;
extracting position information of the target vehicle at the plurality of continuous shooting times from the set of real-time depth image information;
setting the group of first image information to include, from the group of real-time depth image information, the depth image information captured by the TOF cameras covering the at least two first detection areas, in the case where the position information of the target vehicle at the plurality of continuous shooting times indicates that the target vehicle has changed lanes and passed through at least two first detection areas.
3. The method according to claim 1, wherein before acquiring the vehicle information of the target vehicle from the target image information, the method further comprises:
and performing stretching or shrinking processing on the pictures represented by part or all of the group of first image information and/or the pictures represented by part or all of the group of second image information, so that the resolution of the pictures represented by part or all of the group of first image information after processing is consistent with the resolution of the pictures represented by part or all of the group of second image information.
4. The method according to claim 1, wherein the acquiring a set of first image information obtained by shooting a first detection area by a time of flight (TOF) camera arranged on a road within a predetermined time period comprises:
acquiring a group of real-time depth image information obtained by shooting the first detection area by the TOF camera in a plurality of continuous shooting times within the preset time period;
and carrying out image difference and image segmentation processing on background image information and the set of real-time depth image information to determine a vehicle-presence area in the first detection area and contour information of a vehicle in the vehicle-presence area, wherein the set of first image information is set to comprise the vehicle-presence area and the contour information, and the background image information is depth image information obtained by shooting the first detection area by the TOF camera under the condition that no vehicle appears in the first detection area.
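The image-difference step in claim 4 compares each real-time depth frame against the empty-road background frame. A minimal sketch of that idea, assuming an overhead TOF camera (so vehicle pixels are closer, i.e. have smaller depth, than the road); the function names and the 0.3 m threshold are illustrative assumptions, not from the patent:

```python
import numpy as np

def segment_vehicle_region(depth: np.ndarray, background: np.ndarray,
                           min_height: float = 0.3) -> np.ndarray:
    """Image difference: mark pixels at least `min_height` metres above
    (closer than) the empty-road background as vehicle pixels."""
    return (background - depth) > min_height

def contour_box(mask: np.ndarray):
    """Crude contour information: the bounding box (y0, x0, y1, x1) of the
    vehicle-presence area, or None when no vehicle pixels are present."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))
```

A production system would typically add morphological filtering and connected-component labelling to separate multiple vehicles; the thresholded difference above shows only the core background-subtraction idea.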
5. A multi-lane free-flow vehicle detection system based on TOF cameras, comprising:
a TOF camera, installed on a cross bar or gantry above the road, for shooting a first detection area within a predetermined time period to obtain a group of first image information;
a snapshot camera, for shooting a second detection area to obtain second image information;
a data processing unit, comprising a microprocessor, a data storage unit and an external interface unit, for triggering a target snapshot camera arranged on the road to shoot the second detection area each time a piece of first image information is captured by the TOF camera, wherein a group of second image information is obtained by the target snapshot camera within the predetermined time period under the trigger of the TOF camera;
the data processing unit is further configured to determine target image information from the group of second image information, wherein the target image information indicates that a target vehicle is present on the road; acquire charging information of the target vehicle, which is collected by a Road Side Unit (RSU) arranged on the road within the predetermined time period; and match the target vehicle obtained from the target image information with the charging information of the target vehicle to obtain the vehicle information of the target vehicle;
wherein the data processing unit is further configured to determine target image information from the group of second image information by: determining, from the group of first image information, third image information captured at a plurality of continuous shooting times within the predetermined time period, wherein the images indicated by the third image information comprise the vehicle-presence area where the target vehicle is located; determining the travel trajectory of the target vehicle according to changes in the depth information and in the horizontal offset information of the vehicle-presence area in the third image information; determining that the group of first image information indicates that the target vehicle moves from a first lane to a second lane of the road, in the case that the travel trajectory indicates that the target vehicle moves from the first lane to the second lane; and determining, from the group of second image information, the target image information obtained by shooting a target lane in the second detection area, wherein the target lane comprises some or all of the lanes passed by the target vehicle in moving from the first lane to the second lane.
6. The system of claim 5, wherein the second detection area is greater than or equal to the first detection area and the second detection area covers the first detection area.
7. The system of claim 5, wherein the detection zone of the RSU is greater than or equal to the second detection area, and the area where the detection zone of the RSU overlaps the second detection area is, in the direction of travel, greater than or equal to half of the detection zone of the RSU.
8. The system according to claim 5, wherein at least two TOF cameras and at least two target snapshot cameras are arranged on the road in the case that the detection areas of one TOF camera and one target snapshot camera cannot cover all lanes of the road.
9. The system of claim 5, wherein the TOF camera is disposed a first distance in front of the target snapshot camera in the direction of travel, or the TOF camera is disposed a second distance behind the target snapshot camera in the direction of travel.
CN201910447708.4A 2019-05-27 2019-05-27 Multi-lane free flow vehicle detection method and system based on TOF camera Active CN110111582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910447708.4A CN110111582B (en) 2019-05-27 2019-05-27 Multi-lane free flow vehicle detection method and system based on TOF camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910447708.4A CN110111582B (en) 2019-05-27 2019-05-27 Multi-lane free flow vehicle detection method and system based on TOF camera

Publications (2)

Publication Number Publication Date
CN110111582A CN110111582A (en) 2019-08-09
CN110111582B true CN110111582B (en) 2020-11-10

Family

ID=67492511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910447708.4A Active CN110111582B (en) 2019-05-27 2019-05-27 Multi-lane free flow vehicle detection method and system based on TOF camera

Country Status (1)

Country Link
CN (1) CN110111582B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489448A (en) * 2019-09-11 2021-03-12 浙江宇视科技有限公司 Snapshot output filtering method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8686873B2 (en) * 2011-02-28 2014-04-01 Toyota Motor Engineering & Manufacturing North America, Inc. Two-way video and 3D transmission between vehicles and system placed on roadside
CN106778656A (en) * 2016-12-27 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of counting passenger flow of buses system based on ToF cameras

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101350109B (en) * 2008-09-05 2010-08-25 交通部公路科学研究所 Method for locating and controlling multilane free flow video vehicle
CN102353935A (en) * 2011-06-07 2012-02-15 北京万集科技股份有限公司 OBU (on board unit) positioning method, equipment and system based on time measurement
CN103116988B (en) * 2013-01-18 2014-10-08 合肥工业大学 Traffic flow and vehicle type detecting method based on TOF (time of flight) camera
CN103198531B (en) * 2013-04-10 2015-04-22 北京速通科技有限公司 Snapshot method for multilane free stream vehicle image
CN103268640B (en) * 2013-05-10 2016-02-03 北京速通科技有限公司 Based on multilane free-flow electronic toll collection system and the method for multiple-beam antenna
CN204902980U (en) * 2015-05-27 2015-12-23 北京万集科技股份有限公司 Multilane free flow automatic weighing system based on RFID technique
CN204904364U (en) * 2015-08-20 2015-12-23 北京万集科技股份有限公司 Multilane free flow charging system
KR101756555B1 (en) * 2015-12-14 2017-07-11 현대오트론 주식회사 Apparatus for detecting of vehicle pitch angle using time of flight sensor and method therof
CN205451098U (en) * 2015-12-24 2016-08-10 北京万集科技股份有限公司 Motorcycle type automatic identification equipment of charge station based on TOF camera
CN205230344U (en) * 2015-12-24 2016-05-11 北京万集科技股份有限公司 Vehicle positioning system based on TOF camera
CN205451485U (en) * 2015-12-24 2016-08-10 北京万集科技股份有限公司 Vehicle positioning system based on TOF camera
US10841491B2 (en) * 2016-03-16 2020-11-17 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8686873B2 (en) * 2011-02-28 2014-04-01 Toyota Motor Engineering & Manufacturing North America, Inc. Two-way video and 3D transmission between vehicles and system placed on roadside
CN106778656A (en) * 2016-12-27 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of counting passenger flow of buses system based on ToF cameras

Also Published As

Publication number Publication date
CN110111582A (en) 2019-08-09

Similar Documents

Publication Publication Date Title
US9336450B2 (en) Methods and systems for selecting target vehicles for occupancy detection
CN110189424B (en) Multi-lane free flow vehicle detection method and system based on multi-target radar
CN202443514U (en) Multilane free movement electronic road toll collection system
CN107301776A (en) Track road conditions processing and dissemination method based on video detection technology
CN106097722B (en) The system and method for carrying out the automation supervision of trackside parking stall using video
CN104574954A (en) Vehicle checking method and system based on free flow system as well as control equipment
CN108389396B (en) Vehicle type matching method and device based on video and charging system
CN107705564A (en) Express-road vehicle running track identification system and method based on mobile phone positioning
US11182983B2 (en) Same vehicle detection device, toll collection facility, same vehicle detection method, and program
CN108133599A (en) A kind of slag-soil truck video frequency identifying method and system
CN108765975B (en) Roadside vertical parking lot management system and method
CN110880205B (en) Parking charging method and device
CN108932850B (en) Method and device for recording low-speed driving illegal behaviors of motor vehicle
CN105303826A (en) Violating side parking evidence obtaining device and method
CN112447060A (en) Method and device for recognizing lane and computing equipment
CN110111582B (en) Multi-lane free flow vehicle detection method and system based on TOF camera
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
CN109308807A (en) Road violation snap-shooting system based on unmanned plane aerial photography technology
KR102093237B1 (en) Vehicle classification system using non-contact automatic vehicle detectior
CN113112813A (en) Illegal parking detection method and device
KR102267335B1 (en) Method for detecting a speed employing difference of distance between an object and a monitoring camera
KR20020032049A (en) A fare collection a means
KR101363176B1 (en) Electronic toll collecting system and method thereof
CN110189425A (en) Multilane free-flow vehicle detection method and system based on binocular vision
CN111627224A (en) Vehicle speed abnormality detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant