CN112863195A - Vehicle state determination method and device - Google Patents

Vehicle state determination method and device

Info

Publication number
CN112863195A
CN112863195A
Authority
CN
China
Prior art keywords
target
target vehicle
vehicle
determining
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110292935.1A
Other languages
Chinese (zh)
Other versions
CN112863195B (en)
Inventor
牛晨鸣
吴允
吴惠敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202110292935.1A
Publication of CN112863195A
Application granted
Publication of CN112863195B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a vehicle state determination method and device. The method includes: acquiring laser radar data and image data collected from a target detection area, where the timestamp of the laser radar data and the timestamp of the image data satisfy a preset time difference; matching the laser radar data with the image data to obtain target laser radar data and target image data of a target vehicle; and determining the state of the target vehicle according to the target laser radar data and the target image data. The method and device address the problems of low accuracy and low efficiency in vehicle management.

Description

Vehicle state determination method and device
Technical Field
The invention relates to the field of communication, in particular to a method and a device for determining a vehicle state.
Background
As urban populations grow, the number of vehicles has risen sharply, and with it the demand for urban roadside parking and for managing that parking. Roadside parking is typically managed by assigning personnel to patrol a stretch of road, but manpower is limited and cannot fully cover all vehicles over a given distance. Some locations instead use geomagnetic sensors to judge the parking state of a vehicle and notify personnel for follow-up, but the results are poor.
For the problems of low accuracy and low efficiency in vehicle management in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a vehicle state determination method and device, to at least solve the problems of low accuracy and low efficiency in vehicle management in the related art.
According to an embodiment of the present invention, there is provided a vehicle state determination method including: acquiring laser radar data and image data acquired from a target detection area, wherein a time stamp of the laser radar data and a time stamp of the image data meet a preset time difference; matching the laser radar data with the image data to obtain target laser radar data and target image data of a target vehicle; and determining the state of the target vehicle according to the target laser radar data and the target image data.
Optionally, matching the laser radar data and the image data to obtain target laser radar data and target image data of a target vehicle, including: determining a target trajectory frame of the target vehicle in the image data, wherein the target image data includes the target trajectory frame; and determining a contour center point of the laser radar data located in the target track frame as a target contour center point of the target vehicle, wherein the target laser radar data comprises the target contour center point.
Optionally, determining that a contour center point of the laser radar data located in the target track frame is a target contour center point of the target vehicle includes: under the condition that the number of the contour center points of the target track frame in the laser radar data is at least two, constructing a corresponding rectangular area according to a contour corresponding to each contour center point of at least two contour center points; and determining the contour central point corresponding to the rectangular area with the highest overlapping rate of the target track frame as the target contour central point of the target vehicle.
Optionally, constructing a corresponding rectangular region according to the contour corresponding to each of the at least two contour center points includes performing the following operations on each contour center point (referred to as the current contour center point while the operations are performed): determining the minimum of the width and the height of the current contour as the width-height minimum value, where the current contour is the contour corresponding to the current contour center point; moving the current contour center point according to the width-height minimum value to obtain at least two boundary points; and determining the region formed by the at least two boundary points as the rectangular region corresponding to the current contour center point.
Optionally, the determining the state of the target vehicle according to the target lidar data and the target image data includes: determining whether the target vehicle is in a parking space area according to target image data under the condition that the target laser radar data indicates that the speed of the target vehicle is greater than zero; and determining the state of the target vehicle according to whether the target vehicle is in the parking space area.
Optionally, determining the state of the target vehicle according to whether the target vehicle is in the parking space area includes: determining whether the identity of the target vehicle exists in an entry queue in the case that the target vehicle is not in the parking space area; determining that the target vehicle is in a state of entering the parking space area in the case that its identity does not exist in the entry queue; and determining that the target vehicle is in a state of exiting the parking space area in the case that its identity exists in the entry queue.
Optionally, after determining that the target vehicle is in the state of entering the parking space area, the method includes: adding the identity of the target vehicle to a standby queue. After determining that the target vehicle is in the state of exiting the parking space area, the method includes: deleting the identity of the target vehicle from the entry queue once the target vehicle has exited the parking space area by a preset distance.
Optionally, determining the state of the target vehicle according to whether the target vehicle is in the parking space area includes: determining whether the identity of the target vehicle exists in a standby queue in the case that the target vehicle is in the parking space area; determining that the target vehicle is in a state of entering the parking space area in the case that its identity exists in the standby queue; in the case that its identity does not exist in the standby queue, determining that the target vehicle is in a state of exiting the parking space area if its identity exists in the entry queue; and generating alarm information if its identity does not exist in the entry queue either.
Optionally, after determining that the target vehicle is in the state of entering the parking space area, the method includes: deleting the identity of the target vehicle from the standby queue and adding it to the entry queue in the case that the speed of the target vehicle is determined to be zero for a preset time period. After determining that the target vehicle is in the state of exiting the parking space area, the method includes: deleting the identity of the target vehicle from the entry queue once the target vehicle has exited the parking space area by a preset distance.
According to another embodiment of the present invention, there is provided a vehicle state determination device including: an acquisition module configured to acquire laser radar data and image data collected from a target detection area, where the timestamp of the laser radar data and the timestamp of the image data satisfy a preset time difference; a matching module configured to match the laser radar data with the image data to obtain target laser radar data and target image data of a target vehicle; and a determining module configured to determine the state of the target vehicle according to the target laser radar data and the target image data.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory and a processor, the memory having a computer program stored therein, the processor being configured to execute the computer program to perform the steps in any of the method embodiments.
According to the invention, laser radar data and image data collected from the target detection area are acquired, where the timestamp of the laser radar data and the timestamp of the image data satisfy a preset time difference; the laser radar data is matched with the image data to obtain target laser radar data and target image data of a target vehicle; and the state of the target vehicle is determined according to the target laser radar data and the target image data. This solves the problems of low accuracy and low efficiency in vehicle management and achieves the effect of improving both.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a method for determining a vehicle state according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining a vehicle condition according to an embodiment of the present invention;
FIG. 3 is a first schematic view of a camera security scenario in accordance with an alternative embodiment of the present invention;
FIG. 4 is a schematic diagram of a camera security scene two in accordance with an alternative embodiment of the present invention;
FIG. 5 is a schematic diagram of a camera security scene three in accordance with an alternative embodiment of the present invention;
FIG. 6 is a schematic illustration of calibration of a detection zone in accordance with an alternative embodiment of the present invention;
FIG. 7 is a schematic view of the entry-exit determination logic according to an alternative embodiment of the present invention;
fig. 8 is a block diagram showing the configuration of a vehicle state determination device according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Taking execution on a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal for the vehicle state determination method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input-output device 108. Those skilled in the art will understand that the structure shown in fig. 1 is only an illustration and does not limit the structure of the mobile terminal; for example, the mobile terminal 10 may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the method for determining the vehicle state in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, a vehicle state determination method running on the mobile terminal is provided. Fig. 2 is a flowchart of the method for determining a vehicle state according to an embodiment of the present invention; as shown in fig. 2, the flow includes the following steps:
step S202, acquiring laser radar data and image data acquired from a target detection area, wherein a time stamp of the laser radar data and a time stamp of the image data meet a preset time difference;
step S204, matching the laser radar data and the image data to obtain target laser radar data and target image data of a target vehicle;
and step S206, determining the state of the target vehicle according to the target laser radar data and the target image data.
Through the above steps, laser radar data and image data collected from the target detection area are acquired, where the timestamp of the laser radar data and the timestamp of the image data satisfy a preset time difference; the laser radar data is matched with the image data to obtain target laser radar data and target image data of a target vehicle; and the state of the target vehicle is determined according to the target laser radar data and the target image data. This solves the problems of low accuracy and low efficiency in vehicle management and achieves the effect of improving both.
Alternatively, the execution subject of the above steps may be a terminal or the like, but is not limited thereto.
As an alternative implementation, the target detection area may be a traffic scene such as a roadside parking lane or a parking lot, with the laser radar and the camera installed in the target detection area. Fig. 3 is a first schematic view of a camera security scene according to an alternative embodiment of the present invention, fig. 4 is a second, and fig. 5 is a third: fig. 3 is a front view of a roadside detection scene, fig. 4 is a top view of the same scene, and fig. 5 is an overhead view of a parking lot scene. The laser radar is mounted on top of the camera, and both are fixed to a detection pole. So that the video images can reliably capture vehicle attribute information, the detection pole may be installed at a height of 4 to 6 meters and should be stable and free of shaking.
As an optional implementation, the orientations of the camera and the laser radar, and hence their coverage areas, can be adjusted through the terminal device. The boundary of the detection area is determined, and four edge points within the detection area, spaced a predetermined distance apart, are selected as calibration points; the predetermined distance may be set according to the actual situation, for example 5 meters or 3 meters. A corner reflector is placed at each calibration point, as shown in fig. 6, a schematic diagram of calibrating the detection area according to an alternative embodiment of the present invention. After the corner reflectors are placed, the information of the corresponding calibration points is sent to the laser radar for the calibration operation, so that the coordinates output by the laser radar can be mapped into the two-dimensional picture captured by the video.
As an optional implementation, after calibration is completed, the laser radar outputs laser radar data at its sampling frequency, including vehicle contour information derived from the 3D information of the vehicle and the speed information of the vehicle. In this embodiment, at least two vehicles may be present within the target detection area. The video frames collected by the camera are sent to an intelligent recognition algorithm module for attribute recognition, which outputs a track frame containing the vehicle and the vehicle's attribute information; the attribute information may include the car logo, car series, license plate, and so on. The laser radar data includes the contour information B_id and the speed information V_id of the vehicle, and the contour information B_id and speed information V_id collected for the same vehicle can be bound together. The image data includes the track frame T_id and the attribute information A_id of the vehicle, and the track frame T_id and attribute information A_id collected for the same vehicle can likewise be bound together. In this embodiment, the contour information acquired by the laser radar may be a grayscale map representing the contour of the vehicle, and the track frame output from the camera data may be an image of the vehicle captured by the camera.
As an optional implementation, the distances from the detection pole to the lower edge line of the nearest parking space and to the upper edge line of the farthest parking space in the detection area are denoted L_min and L_max, respectively. The laser radar data is filtered so that only points within [L_min − 1, L_max + 2] meters are kept, and points outside this range are discarded. The filtered laser radar data is then grayscale-processed, with points mapped to different gray levels for display according to their distance.
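The distance gating and gray-level mapping described above can be sketched as follows; the function name, the linear distance-to-gray mapping, and the 256-level range are illustrative assumptions rather than details fixed by the patent:

```python
def filter_and_grayscale(points, l_min, l_max, levels=256):
    """Keep only laser radar points whose distance lies in
    [l_min - 1, l_max + 2] meters, then map each retained distance
    linearly onto a gray level (nearer points brighter).
    `points` is a list of distances in meters."""
    lo, hi = l_min - 1.0, l_max + 2.0
    kept = [d for d in points if lo <= d <= hi]
    span = hi - lo
    # Nearest distance -> levels-1 (white), farthest -> 0 (black).
    grays = [round((hi - d) / span * (levels - 1)) for d in kept]
    return kept, grays
```

Any monotone distance-to-gray mapping would serve; the essential point is only that retained points stay within the [L_min − 1, L_max + 2] meter window.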
As an alternative embodiment, the data acquired by the laser radar and the camera may be time-stamped at each acquisition. The timestamps of the laser radar data and of the image data are compared, and laser radar data and image data whose timestamp difference is less than or equal to a predetermined time difference are treated as data collected in the same detection area at the same time; the predetermined time difference may be determined according to the actual situation and may be, for example, ±10 ms. The laser radar data and image data satisfying the preset time difference are bound, and the vehicle information in both is analyzed to determine the state of the vehicle.
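A minimal sketch of this timestamp pairing; the function name, the (timestamp, payload) data layout, and the use of ±10 ms as a default tolerance are assumptions for illustration, not details taken from the patent:

```python
def pair_frames(lidar_frames, image_frames, max_dt=0.010):
    """Pair laser radar and image frames whose timestamps differ by at
    most max_dt seconds. Each frame is a (timestamp, payload) tuple;
    both lists are assumed sorted by timestamp."""
    pairs = []
    j = 0
    for t_lidar, lidar in lidar_frames:
        # Advance past image frames that are too old to ever match.
        while j < len(image_frames) and image_frames[j][0] < t_lidar - max_dt:
            j += 1
        if j < len(image_frames) and abs(image_frames[j][0] - t_lidar) <= max_dt:
            pairs.append((lidar, image_frames[j][1]))
    return pairs
```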
Optionally, matching the laser radar data and the image data to obtain target laser radar data and target image data of a target vehicle, including: determining a target trajectory frame of the target vehicle in the image data, wherein the target image data includes the target trajectory frame; and determining a contour center point of the laser radar data located in the target track frame as a target contour center point of the target vehicle, wherein the target laser radar data comprises the target contour center point.
As an optional implementation, since the detection area may contain multiple vehicles, the collected laser radar data and image data need to be matched to obtain the target laser radar data and target image data of the same vehicle (the target vehicle), after which the state of the target vehicle can be analyzed from them. In this embodiment, vehicle matching may be performed by matching the contour of the vehicle in the laser radar data against the track frame of the vehicle in the image data. Specifically, the laser radar data and the image data may be matched as follows: check whether the contour center point B_id-mid of the vehicle in the laser radar grayscale map falls within the target track frame T_id of the image data. If the contour center point B_id-mid falls within the track frame T_id, the vehicle corresponding to that contour center point in the laser radar data and the vehicle corresponding to that track frame in the image data are considered the same vehicle (the target vehicle); the target laser radar data of the target vehicle can then be determined from the laser radar data, and the target image data from the image data, where the target laser radar data includes the target contour center point of the target vehicle. Concretely, given the coordinates (X_id, Y_id) of the contour center point B_id-mid, it can be judged whether the abscissa X_id is greater than the abscissa X_ul of the upper-left corner of the track frame T_id and the ordinate Y_id is less than the ordinate Y_dr of the lower-right corner of the track frame; if both hold, the contour center point B_id-mid of the vehicle is determined to fall within the target track frame T_id of the image data.
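The in-frame check can be sketched as a point-in-rectangle test. The description calls out the X_id > X_ul and Y_id < Y_dr comparisons; the sketch below, an assumption on our part, checks all four bounds for completeness:

```python
def center_in_track_frame(cx, cy, x_ul, y_ul, x_dr, y_dr):
    """Return True if the contour center (cx, cy) falls inside the track
    frame whose upper-left corner is (x_ul, y_ul) and lower-right corner
    is (x_dr, y_dr), in image coordinates where y grows downward."""
    return x_ul < cx < x_dr and y_ul < cy < y_dr
```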
Optionally, determining that a contour center point of the laser radar data located in the target track frame is a target contour center point of the target vehicle includes: under the condition that the number of the contour center points positioned in the target track frame in the laser radar data is at least two, constructing a corresponding rectangular area according to a contour corresponding to each contour center point in at least two contour center points; and determining the contour central point corresponding to the rectangular area with the highest overlapping rate of the target track frame as the target contour central point of the target vehicle.
As an optional implementation manner, if a plurality of contour center points fall in the same track frame, a rectangular region may be constructed according to the contour center points, and matching may be performed according to an overlapping rate of the rectangular region and the track frame.
Optionally, constructing a corresponding rectangular region according to the contour corresponding to each of the at least two contour center points includes performing the following operations on each contour center point (referred to as the current contour center point while the operations are performed): determining the minimum of the width and the height of the current contour as the width-height minimum value, where the current contour is the contour corresponding to the current contour center point; moving the current contour center point according to the width-height minimum value to obtain at least two boundary points; and determining the region formed by the at least two boundary points as the rectangular region corresponding to the current contour center point.
As an optional implementation, if multiple contour center points fall within the same track frame, a region is built around each center point using the width-height minimum value WH_min of its contour: the distance from the highest point to the lowest point of the contour is defined as the height H_max, the distance from the leftmost point to the rightmost point as the width W_max, and WH_min = min(H_max, W_max). A rectangular region is constructed from the contour center point and WH_min; specifically, the contour center point is translated by half the width-height minimum to obtain at least two boundary points, which form the corresponding rectangular region. For example, shifting both the abscissa and the ordinate of the contour center point B_id-mid left by WH_min/2 gives a first boundary point [B_id-mid(x) − WH_min/2, B_id-mid(y) − WH_min/2], and shifting both right by WH_min/2 gives a second boundary point [B_id-mid(x) + WH_min/2, B_id-mid(y) + WH_min/2]. Taking the first boundary point as the upper-left point of the rectangular region and the second as its lower-right point yields the rectangular region corresponding to the contour center point.
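The square-region construction can be sketched as follows (function name and 4-tuple layout are illustrative):

```python
def contour_square(cx, cy, h_max, w_max):
    """Build the square matching region for a contour: the side length is
    WH_min = min(h_max, w_max); the contour center is shifted by
    -WH_min/2 for the upper-left point and +WH_min/2 for the lower-right
    point. Returns (x_ul, y_ul, x_dr, y_dr)."""
    wh_min = min(h_max, w_max)
    half = wh_min / 2.0
    return (cx - half, cy - half, cx + half, cy + half)
```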
The overlapping rate between each rectangular region and the track frame in the video data is then calculated; the vehicle corresponding to the contour center point whose rectangular region has the highest overlapping rate is taken to be the same target vehicle as the vehicle corresponding to the track frame. The attribute information and the laser radar information are then fused: the four pieces of information B_id, V_id, T_id, and A_id of the target vehicle are unified under the same identity to obtain the fusion information of the target vehicle, where the fusion information includes the target laser radar data (B_id, V_id) and the target image data (T_id, A_id) of the target vehicle.
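A sketch of the overlap-rate selection; defining the overlap rate as intersection area divided by track-frame area is one plausible reading of the description, not a detail the patent fixes, and the names and data layout are illustrative:

```python
def best_matching_center(candidates, track_frame):
    """Among candidate contour squares, pick the center whose square has
    the highest overlap rate with the track frame. `candidates` maps a
    center label to (x_ul, y_ul, x_dr, y_dr); `track_frame` has the same
    4-tuple form."""
    tx1, ty1, tx2, ty2 = track_frame
    frame_area = (tx2 - tx1) * (ty2 - ty1)

    def overlap(rect):
        x1, y1, x2, y2 = rect
        iw = min(x2, tx2) - max(x1, tx1)  # intersection width
        ih = min(y2, ty2) - max(y1, ty1)  # intersection height
        if iw <= 0 or ih <= 0:
            return 0.0
        return (iw * ih) / frame_area

    return max(candidates, key=lambda c: overlap(candidates[c]))
```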
Optionally, the determining the state of the target vehicle according to the target lidar data and the target image data includes: determining whether the target vehicle is in a parking space area according to target image data under the condition that the target laser radar data indicates that the speed of the target vehicle is greater than zero; and determining the state of the target vehicle according to whether the target vehicle is in the parking space area.
As an alternative, the speed of a vehicle can be detected from the laser radar data; a vehicle with nonzero speed is a moving vehicle. For moving vehicles, entry into and exit from parking spaces can be judged, and thus the vehicle's state determined, so that vehicles in the parking area can be managed. Fig. 7 is a schematic diagram of the entry-exit determination logic according to an alternative embodiment of the present invention: whether the vehicle is within the parking space area can be determined from the image data, and the entry-exit state of the vehicle determined according to whether it is in that area.
Optionally, determining the state of the target vehicle according to whether the target vehicle is in the parking space area includes: determining whether the identity of the target vehicle exists in an entry queue in the case that the target vehicle is not in the parking space area; determining that the target vehicle is in a state of entering the parking space area in the case that its identity does not exist in the entry queue; and determining that the target vehicle is in a state of exiting the parking space area in the case that its identity exists in the entry queue.
As an optional implementation, if the vehicle is outside the parking space area and cannot be found in the entry queue, the vehicle may be about to enter a parking space, and its identity is added to the standby queue; if the vehicle can be found in the entry queue, it is a vehicle that was originally parked in a space, i.e., it is about to exit the space.
Optionally, after determining that the state of the target vehicle is entering the parking space area, the method includes: adding the identifier of the target vehicle to a standby queue. After determining that the state of the target vehicle is exiting the parking space area, the method includes: deleting the identifier of the target vehicle from the entry queue after determining that the target vehicle has exited the parking space area by a predetermined distance.
As an optional implementation, when a vehicle found in the entry queue is about to exit its parking space, the vehicle is added to an exit queue, and whether the distance by which the vehicle has left the parking space exceeds a threshold is continuously checked. If the distance exceeds the threshold, the speed is greater than 0, and this condition holds for 5 consecutive frames, an exit event is generated and the identifier of the vehicle is deleted from the exit queue.
Optionally, determining the state of the target vehicle according to whether the target vehicle is in the parking space area includes: when the target vehicle is in the parking space area, determining whether the identifier of the target vehicle exists in the standby queue; when the identifier exists in the standby queue, determining that the state of the target vehicle is entering the parking space area; and when the identifier does not exist in the standby queue, determining that the state of the target vehicle is exiting the parking space area if the identifier exists in the entry queue, and generating alarm information if the identifier does not exist in the entry queue.
As an optional implementation, if the image data indicates that the vehicle is parked in the parking space area, the standby queue is searched for the vehicle identifier; if it is found, the vehicle is added to the entry queue, indicating that it is about to park in the space. If the identifier cannot be found in the standby queue, the vehicle was originally parked in the space, so the entry queue is traversed; if the identifier is found there, the vehicle is determined to be about to exit the space, and its information is updated. If the identifier cannot be found in the entry queue either, an entry alarm is generated. When an entry alarm is generated, whether the vehicle contour crosses the parking space line is checked: if more than 40 pixels of the contour lie beyond the line, the vehicle is considered to be parked across spaces, and a cross-space parking event is additionally reported.
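The 40-pixel cross-line test can be sketched directly from the description: count contour pixels on the far side of the space line. For simplicity the line is modelled here as a vertical line at a given x coordinate; the patent does not specify the line representation, so this geometry (and the function name) is an assumption:

```python
def crosses_space_line(contour_pixels, line_x, pixel_threshold=40):
    """contour_pixels: iterable of (x, y) image coordinates on the
    vehicle contour. Returns True when more than pixel_threshold
    contour pixels lie beyond the parking-space line at x = line_x,
    which the description treats as a cross-space parking event."""
    beyond = sum(1 for (x, _y) in contour_pixels if x > line_x)
    return beyond > pixel_threshold
```

For an arbitrarily oriented space line, the same count would use the sign of the point-to-line distance instead of a plain x comparison.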
Optionally, after determining that the state of the target vehicle is entering the parking space area, the method includes: deleting the identifier of the target vehicle from the standby queue and adding it to the entry queue when the speed of the target vehicle is determined to be zero within a predetermined time period. After determining that the state of the target vehicle is exiting the parking space area, the method includes: deleting the identifier of the target vehicle from the entry queue after determining that the target vehicle has exited the parking space area by a predetermined distance.
As an optional implementation, if the image data indicates that the vehicle is parked in the parking space area, the standby queue is searched for the vehicle; if it is found, the vehicle is about to park in the space and is added to the entry queue. When the target speed is judged to be 0, the target is determined to have parked, the vehicle is deleted from the standby queue, and an entry event is generated.
If the vehicle cannot be found in the standby queue, it was originally parked in the space, so the entry queue is traversed; if the vehicle is found there, it is determined to be about to exit the space, and whether the distance by which it has left the space exceeds a threshold is continuously checked. If the distance exceeds the threshold and the speed is greater than 0, an exit event is generated and the identifier of the vehicle is added to the exit queue.
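The queue transitions for a vehicle observed inside a parking space can be sketched as below. This is a simplified sketch, not the patent's implementation: queues are modelled as sets, the speed-zero hold period and the exit-distance check are omitted, and all names are inventions for this illustration:

```python
def on_vehicle_in_space(vid, speed, standby, entry, exit_q):
    """Transition logic for a vehicle seen inside a parking space:
    standby -> entry when it arrives; entry event once it stops;
    a vehicle found only in the entry queue is starting to leave;
    an unknown parked vehicle raises an entry alarm."""
    events = []
    if vid in standby:
        entry.add(vid)                 # about to park in the space
        if speed == 0:
            standby.discard(vid)       # parked: confirm the entry
            events.append(("enter", vid))
    elif vid in entry:
        exit_q.add(vid)                # was parked, now moving: about to leave
        events.append(("exiting", vid))
    else:
        events.append(("alarm", vid))  # parked vehicle with no history
    return events
```

Keeping the identifier in the entry queue while the vehicle is parked is what lets a later sighting of the same identifier be recognised as the start of an exit rather than a fresh arrival.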
According to the method and the device of the present application, the lidar detection information and the image AI recognition information are bound through calibration and data preprocessing and analysis, achieving information fusion. The lidar provides target contour and speed information, supplementing attributes that video alone cannot detect and improving the detection effect. In addition, cross-space parking can be detected and reported as a supplementary prompt.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a vehicle state determination device is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and the description of the device is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 8 is a block diagram of a vehicle state determination apparatus according to an embodiment of the present invention. As shown in fig. 8, the apparatus includes: an obtaining module 82, configured to obtain lidar data and image data acquired from a target detection area, where the timestamp of the lidar data and the timestamp of the image data satisfy a predetermined time difference; a matching module 84, configured to match the lidar data and the image data to obtain target lidar data and target image data of a target vehicle; and a determining module 86, configured to determine the state of the target vehicle according to the target lidar data and the target image data.
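The obtaining module's predetermined-time-difference condition amounts to pairing each lidar frame with the image frame nearest in time and discarding pairs whose gap is too large. A minimal sketch follows; the function name, frame representation, and the 50 ms tolerance are assumptions for illustration, not values from the patent:

```python
def pair_frames(lidar_frames, image_frames, max_diff_ms=50):
    """Pair each lidar frame with the image frame closest in time,
    keeping only pairs whose timestamp gap is within max_diff_ms.
    Frames are dicts carrying a millisecond timestamp under "ts"."""
    pairs = []
    for lf in lidar_frames:
        best = min(image_frames, key=lambda im: abs(im["ts"] - lf["ts"]), default=None)
        if best is not None and abs(best["ts"] - lf["ts"]) <= max_diff_ms:
            pairs.append((lf, best))
    return pairs
```

Only frames that survive this pairing step are handed to the matching module, which keeps the lidar contours and the image track boxes describing the same physical instant.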
Optionally, the apparatus is further configured to determine a target track frame of the target vehicle in the image data, where the target image data includes the target track frame; and determining a contour center point of the laser radar data located in the target track frame as a target contour center point of the target vehicle, wherein the target laser radar data comprises the target contour center point.
Optionally, the apparatus is further configured to, when the number of contour center points of the lidar data located in the target track frame is at least two, construct a corresponding rectangular region from the contour corresponding to each of the at least two contour center points; and to determine the contour center point whose rectangular region has the highest overlap rate with the target track frame as the target contour center point of the target vehicle.
Optionally, the apparatus is further configured to perform the following operation on each of the at least two contour center points, each of which is referred to as the current contour center point while the operation is performed: determining the smaller of the width and the height of the current contour as the width-height minimum, where the current contour is the contour corresponding to the current contour center point; moving the current contour center point according to the width-height minimum to obtain at least two boundary points; and determining the region formed by the at least two boundary points as the rectangular region corresponding to the current contour center point.
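The two steps above (build a region around each contour center from the width-height minimum, then pick the region that overlaps the track box most) can be sketched as follows. The exact offsets used to move the center point and the overlap metric are not fully specified in the text, so this sketch assumes a square of side equal to the width-height minimum and an intersection-over-contour-area ratio:

```python
def rect_from_contour(cx, cy, width, height):
    """Build a square region around the contour center point, using the
    smaller of the contour's width and height (the width-height minimum)
    as the side length; the corners are the boundary points."""
    half = min(width, height) / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

def overlap_ratio(a, b):
    """Area of the intersection of rectangles a and b, divided by the
    area of a (both as axis-aligned x1,y1,x2,y2 tuples)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    return (ix * iy) / area_a if area_a else 0.0

def match_contour(track_box, contours):
    """Pick the (cx, cy, w, h) contour whose rectangle overlaps the
    image track box most; its center is the target contour center."""
    return max(contours, key=lambda c: overlap_ratio(rect_from_contour(*c), track_box))
```

This resolves the ambiguity when several lidar contours fall inside one image track box: the contour whose footprint best matches the box is taken as belonging to the tracked vehicle.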
Optionally, the apparatus is further configured to determine, according to the target image data, whether the target vehicle is in the parking space area, if the target lidar data indicates that the speed of the target vehicle is greater than zero; and determining the state of the target vehicle according to whether the target vehicle is in the parking space area.
Optionally, the apparatus is further configured to determine whether the identifier of the target vehicle exists in the entry queue when the target vehicle is not in the parking space area; to determine that the state of the target vehicle is entering the parking space area when the identifier does not exist in the entry queue; and to determine that the state of the target vehicle is exiting the parking space area when the identifier exists in the entry queue.
Optionally, the apparatus is further configured to add the identifier of the target vehicle to the standby queue after determining that the state of the target vehicle is entering the parking space area; and, after determining that the state of the target vehicle is exiting the parking space area, to delete the identifier of the target vehicle from the entry queue once the target vehicle has exited the parking space area by a predetermined distance.
Optionally, the apparatus is further configured to determine whether the identifier of the target vehicle exists in the standby queue when the target vehicle is in the parking space area; to determine that the state of the target vehicle is entering the parking space area when the identifier exists in the standby queue; and, when the identifier does not exist in the standby queue, to determine that the state of the target vehicle is exiting the parking space area if the identifier exists in the entry queue, and to generate alarm information if it does not.
Optionally, the apparatus is further configured to delete the identifier of the target vehicle from the standby queue and add it to the entry queue when the speed of the target vehicle is determined to be zero within a predetermined time period after the state of the target vehicle is determined to be entering the parking space area; and, after the state of the target vehicle is determined to be exiting the parking space area, to delete the identifier of the target vehicle from the entry queue once the target vehicle has exited the parking space area by a predetermined distance.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
S1, acquiring lidar data and image data acquired from a target detection area, where the timestamp of the lidar data and the timestamp of the image data satisfy a predetermined time difference;
S2, matching the lidar data and the image data to obtain target lidar data and target image data of a target vehicle;
S3, determining the state of the target vehicle according to the target lidar data and the target image data.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring lidar data and image data acquired from a target detection area, where the timestamp of the lidar data and the timestamp of the image data satisfy a predetermined time difference;
S2, matching the lidar data and the image data to obtain target lidar data and target image data of a target vehicle;
S3, determining the state of the target vehicle according to the target lidar data and the target image data.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device, and they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device; in some cases, the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method of determining a vehicle state, comprising:
acquiring laser radar data and image data acquired from a target detection area, wherein a preset time difference is satisfied between a timestamp of the laser radar data and a timestamp of the image data;
matching the laser radar data with the image data to obtain target laser radar data and target image data of a target vehicle;
and determining the state of the target vehicle according to the target laser radar data and the target image data.
2. The method of claim 1, wherein matching the lidar data and the image data to obtain target lidar data and target image data for a target vehicle comprises:
determining a target trajectory box of the target vehicle in the image data, wherein the target image data includes the target trajectory box;
determining a contour center point of the laser radar data located in the target track frame as a target contour center point of the target vehicle, wherein the target laser radar data includes the target contour center point.
3. The method of claim 2, wherein determining the contour center point of the lidar data at the target trajectory box as the target contour center point of the target vehicle comprises:
under the condition that the number of the contour center points positioned in the target track frame in the laser radar data is at least two, constructing a corresponding rectangular area according to a contour corresponding to each contour center point in at least two contour center points;
and determining the contour central point corresponding to the rectangular area with the highest overlapping rate of the target track frame as the target contour central point of the target vehicle.
4. The method of claim 3, wherein constructing the respective rectangular region from the corresponding contour from each of the at least two contour center points comprises:
performing the following operation on each contour center point of the at least two contour center points, each contour center point being referred to as a current contour center point when the following operation is performed:
determining the smaller of the width and the height of a current contour as a width-height minimum, wherein the current contour is the contour corresponding to the current contour center point;
moving the current contour center point according to the width-height minimum to obtain at least two boundary points;
and determining the region formed by the at least two boundary points as a rectangular region corresponding to the central point of the current contour.
5. The method of any one of claims 1 to 4, wherein said determining a state of the target vehicle from the target lidar data and the target image data comprises:
determining whether the target vehicle is in a parking space area according to the target image data under the condition that the target lidar data indicates that the speed of the target vehicle is greater than zero;
and determining the state of the target vehicle according to whether the target vehicle is in the parking space area.
6. The method of claim 5, wherein determining the status of the target vehicle based on whether the target vehicle is within the parking area comprises:
determining whether the identifier of the target vehicle exists in an entry queue under the condition that the target vehicle is not in the parking space area;
under the condition that the identifier of the target vehicle does not exist in the entry queue, determining that the state of the target vehicle is entering the parking space area;
and under the condition that the identifier of the target vehicle exists in the entry queue, determining that the state of the target vehicle is exiting the parking space area.
7. The method of claim 6,
after determining that the state of the target vehicle is entering the parking space area, the method comprises: adding the identifier of the target vehicle to a standby queue;
after determining that the state of the target vehicle is exiting the parking space area, the method comprises: deleting the identifier of the target vehicle from the entry queue after determining that the target vehicle has exited the parking space area by a predetermined distance.
8. The method of claim 5, wherein determining the status of the target vehicle based on whether the target vehicle is within the parking area comprises:
determining whether the identifier of the target vehicle exists in a standby queue under the condition that the target vehicle is in the parking space area;
under the condition that the identifier of the target vehicle exists in the standby queue, determining that the state of the target vehicle is entering the parking space area;
under the condition that the identifier of the target vehicle does not exist in the standby queue, if the identifier of the target vehicle exists in the entry queue, determining that the state of the target vehicle is exiting the parking space area; and if the identifier of the target vehicle does not exist in the entry queue, generating alarm information.
9. The method of claim 8,
after determining that the state of the target vehicle is entering the parking space area, the method comprises: deleting the identifier of the target vehicle from the standby queue and adding the identifier of the target vehicle to the entry queue under the condition that the speed of the target vehicle is determined to be zero within a predetermined time period;
after determining that the state of the target vehicle is exiting the parking space area, the method comprises: deleting the identifier of the target vehicle from the entry queue after determining that the target vehicle has exited the parking space area by a predetermined distance.
10. A vehicle state determination device characterized by comprising:
the system comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring laser radar data and image data acquired by a target detection area, and a preset time difference is met between a timestamp of the laser radar data and a timestamp of the image data;
the matching module is used for matching the laser radar data with the image data to obtain target laser radar data and target image data of a target vehicle;
a determination module to determine a state of the target vehicle based on the target lidar data and the target image data.
CN202110292935.1A 2021-03-18 2021-03-18 Vehicle state determination method and device Active CN112863195B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110292935.1A CN112863195B (en) 2021-03-18 2021-03-18 Vehicle state determination method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110292935.1A CN112863195B (en) 2021-03-18 2021-03-18 Vehicle state determination method and device

Publications (2)

Publication Number Publication Date
CN112863195A true CN112863195A (en) 2021-05-28
CN112863195B CN112863195B (en) 2022-06-14

Family

ID=75993511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110292935.1A Active CN112863195B (en) 2021-03-18 2021-03-18 Vehicle state determination method and device

Country Status (1)

Country Link
CN (1) CN112863195B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200238982A1 (en) * 2019-01-30 2020-07-30 Mando Corporation Driver assistance system and control method thereof
CN110517530A (en) * 2019-09-03 2019-11-29 吉林大学 A kind of urban road Roadside Parking management-control method based on parking robot
CN111582256A (en) * 2020-04-26 2020-08-25 智慧互通科技有限公司 Parking management method and device based on radar and visual information
CN111739338A (en) * 2020-05-07 2020-10-02 智慧互通科技有限公司 Parking management method and system based on multiple types of sensors
CN112185129A (en) * 2020-11-13 2021-01-05 重庆盘古美天物联网科技有限公司 Parking management method based on urban auxiliary road bayonet snapshot
CN112330968A (en) * 2020-11-19 2021-02-05 朱新培 Urban roadside parking lot data acquisition device based on microwave radar and method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115169452A (en) * 2022-06-30 2022-10-11 北京中盛国芯科技有限公司 System and method for fusing target information based on space-time synchronization queue characteristics
CN117334080A (en) * 2023-12-01 2024-01-02 江苏镭神激光智能系统有限公司 Vehicle tracking method and system based on laser radar and camera identification
CN117334080B (en) * 2023-12-01 2024-02-02 江苏镭神激光智能系统有限公司 Vehicle tracking method and system based on laser radar and camera identification

Also Published As

Publication number Publication date
CN112863195B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN108345822B (en) Point cloud data processing method and device
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
Kumaran et al. Computer vision-guided intelligent traffic signaling for isolated intersections
CN112863195B (en) Vehicle state determination method and device
CN114445803A (en) Driving data processing method and device and electronic equipment
CN112447041A (en) Method and device for identifying operation behavior of vehicle and computing equipment
CN113470206B (en) Expressway inspection method, equipment and medium based on vehicle matching
EP4020428A1 (en) Method and apparatus for recognizing lane, and computing device
CN115035744B (en) Vehicle identification method, device and system based on image analysis and RFID
CN110602446A (en) Garbage recovery reminding method and system and storage medium
CN110021161B (en) Traffic flow direction prediction method and system
CN114332707A (en) Method and device for determining equipment effectiveness, storage medium and electronic device
CN110930715A (en) Method and system for identifying red light running of non-motor vehicle and violation processing platform
CN111929672A (en) Method and device for determining movement track, storage medium and electronic device
CN114611635A (en) Object identification method and device, storage medium and electronic device
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN114677843A (en) Road condition information processing method, device and system and electronic equipment
CN116311015A (en) Road scene recognition method, device, server, storage medium and program product
CN114202919A (en) Method, device and system for identifying shielding of electronic license plate of non-motor vehicle
CN113609317A (en) Image library construction method and device and electronic equipment
CN113744304A (en) Target detection tracking method and device
CN113823095B (en) Method and device for determining traffic state, storage medium and electronic device
CN111243289A (en) Target vehicle tracking method and device, storage medium and electronic device
CN114926973B (en) Video monitoring method, device, system, server and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant