WO2021068210A1 - Method and apparatus for monitoring moving object, and computer storage medium - Google Patents


Info

Publication number
WO2021068210A1
WO2021068210A1 (application PCT/CN2019/110674, CN2019110674W)
Authority
WO
WIPO (PCT)
Prior art keywords
foreground object
background
position information
point
frame
Prior art date
Application number
PCT/CN2019/110674
Other languages
French (fr)
Chinese (zh)
Inventor
李延召 (Li Yanzhao)
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority to PCT/CN2019/110674 priority Critical patent/WO2021068210A1/en
Priority to CN201980031231.XA priority patent/CN112956187A/en
Publication of WO2021068210A1 publication Critical patent/WO2021068210A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the embodiments of the present invention relate to the field of radar, and more specifically, to a method, device, and computer storage medium for monitoring moving objects.
  • the embodiments of the present invention provide a method, a device and a computer storage medium for monitoring a moving object, which can determine the trajectory of the movement of a foreground object and can be applied to various scenes.
  • a method for monitoring moving objects, including: acquiring detection data obtained by the lidar through detection; determining the foreground point cloud of each frame according to background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene; and
  • constructing the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
  • a device for monitoring moving objects including:
  • the acquisition module is configured to acquire the detection data obtained by the lidar through detection;
  • the determining module is configured to determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
  • the construction module is configured to construct the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
  • a device for monitoring a moving object, including a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the method described in the first aspect.
  • a computer storage medium is provided with a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the method described in the first aspect.
  • the point cloud data detected by the lidar is used to construct the background point cloud data; on this basis, the moving foreground object is determined, and the trajectory of its movement is constructed.
  • the embodiments of the present invention can thus realize motion detection of foreground objects through lidar.
  • the method of the embodiments of the present invention has low algorithmic complexity and is easy to implement.
  • the obtained motion detection result has high accuracy and is easy to apply.
  • the method can be applied to various scenes; for example, it is not affected by lighting conditions, so its scene adaptability is strong.
  • FIG. 1 is a schematic flowchart of a method for monitoring a moving object according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the spatial distribution of the depth value of the background according to an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of a foreground point cloud according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of foreground objects determined according to the foreground point cloud according to an embodiment of the present invention.
  • Fig. 5 is a schematic block diagram of an apparatus for monitoring a moving object according to an embodiment of the present invention.
  • FIG. 6 is another schematic block diagram of an apparatus for monitoring a moving object according to an embodiment of the present invention.
  • Fig. 7 is yet another schematic block diagram of an apparatus for monitoring a moving object according to an embodiment of the present invention.
  • the embodiment of the present invention provides a solution for monitoring moving objects based on lidar.
  • FIG. 1 shows a schematic flowchart of a method for monitoring a moving object according to an embodiment of the present invention.
  • the method shown in Figure 1 includes:
  • S110 Acquire the detection data obtained by the lidar through detection;
  • S120 Determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
  • S130 Construct a movement trajectory of the foreground object according to the foreground point clouds of multiple frames.
  • in different scenes, the moving objects may also be different.
  • the moving object may be a vehicle, for example, a car, a bicycle, and so on.
  • the moving objects may be vehicles, or the moving objects may be vehicles and pedestrians.
  • the scene may also be another kind of scene, and correspondingly, the moving object may be another moving object in that scene; for example, the scene is a body of water and the moving object is a fish, and so on.
  • the present invention will not enumerate the possibilities one by one.
  • the embodiment of the present invention mainly takes a vehicle as an example for detailed explanation, and the corresponding scene is a location where the vehicle may appear, such as a parking lot, a road, and the like.
  • S110 may include: S101, constructing background point cloud data.
  • This process can be called a background modeling process, and the background point cloud data can also be called a background point cloud pool.
  • the background point cloud data can be obtained in the following manner: acquiring the background detection data detected by the lidar over a period of time; constructing the background point cloud data according to the detected background detection data.
  • the length of a period of time can be preset according to the scene, for example, 24 hours or 2 days or other length of time, which is not limited in the present invention.
  • the lidar can be turned on for a period of time, and sufficient detection data within this period can be collected, so as to ensure that the entire background in the scene is detected and collected.
  • any non-background object should be absent at some moment during this period, so as to ensure that the collected background detection data contains complete background information. For example, if there is a stationary vehicle in the scene, it should be ensured that it has been removed at some time during this period.
  • constructing the background point cloud data according to the background detection data may include: projecting the scan points of the background detection data onto the projection surface and gridding the projection surface; and, for each grid on the projection surface, determining the background feature of the scan points projected onto that grid.
  • Exemplarily, the background feature of a scan point can be determined according to the depth information and/or reflectivity of the corresponding scan point in each frame of background detection data.
  • the projection surface may be a plane perpendicular to the axis of the radar; of course, the projection surface may also be another plane, for example, one at a certain angle to the axis of the radar.
  • the projection surface can be gridded (also referred to as discretized), and each grid on the gridded projection surface may have scan points projected onto it.
  • the background feature can be determined based on depth information and/or reflectivity.
  • the depth information may include a depth value; for example, the largest depth value among the scan points projected onto a grid may be selected as the background feature of that grid.
  • the depth information may include a distribution density function of the depth value, where the distribution density function represents the probability density with which different depth values are determined to be background.
  • in that case, the distribution density function of the depth values of a grid can be used as its background feature.
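The background-modeling step above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the patent: the grid shape, the cell size, and the choice of the maximum depth per grid as the background feature (one of the options named above) are assumptions.

```python
import numpy as np

def build_background(frames, cell=0.5, grid_shape=(100, 100)):
    """For each grid cell on the projection plane, keep the largest
    depth value observed over all background frames (hypothetical
    helper; each frame is an (N, 3) array of x, y, depth)."""
    bg = np.zeros(grid_shape)
    for pts in frames:
        gx = (pts[:, 0] / cell).astype(int)   # grid (discretize) the projection plane
        gy = (pts[:, 1] / cell).astype(int)
        for x, y, d in zip(gx, gy, pts[:, 2]):
            if 0 <= x < grid_shape[0] and 0 <= y < grid_shape[1]:
                bg[x, y] = max(bg[x, y], d)   # max depth as the background feature
    return bg
```

The distribution-density variant would instead keep a histogram of depths per cell; the max-depth form is shown here only because it is the simplest of the options the text names.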
  • FIG. 2 shows a schematic diagram of the spatial distribution of the depth value of the background.
  • the scene in Fig. 2 is a real scene, such as a highway: the sky is at the top of the picture, and the picture includes roads, road signs, street lights, and trees beside the road.
  • a process of in-plane filtering may also be included.
  • a filtering method can be used to reduce the influence of noise on the background features of the scanned points in the background point cloud data.
  • the filtering method may be average filtering or other filtering methods.
  • the embodiment of the present invention may also update the background point cloud data based on the detection data, in real time or periodically. Specifically, after the background point cloud data is constructed, it is updated according to subsequent detection data. For example, if the depth of a certain grid point in the subsequent detection data is greater than the depth of the corresponding grid point in the background point cloud data, the background depth at that point has changed, and the background feature of that point can be updated based on the detection data.
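The update rule just described (a larger observed depth means the background at that grid point has changed) can be expressed as a tiny helper; the function name and grid representation are illustrative assumptions, not the patent's:

```python
def update_background(bg, gx, gy, depth):
    """Update the background feature of one grid cell when a newly
    detected depth exceeds the stored background depth."""
    if depth > bg[gx][gy]:
        bg[gx][gy] = depth
    return bg
```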
  • background point cloud data can be constructed by means of background modeling. It can be used for subsequent foreground extraction to facilitate the detection of moving objects.
  • S110 may include: acquiring detection data in real time through the lidar, where the lidar is not moved after being fixed; that is, the scene detected by the lidar in S110 is the same as the scene detected by the lidar when the background point cloud data was constructed in S101.
  • S120 may include: projecting the scan points of the detection data onto the projection surface to obtain projection points; and determining whether each scan point belongs to the foreground point cloud by comparing its projection point with the background projection point of the background point cloud data at the corresponding position on the projection surface.
  • the projection surface selected when constructing the background point cloud data in S101 is the same as the projection surface here; for example, both are planes perpendicular to the axis of the radar.
  • the method of gridding (or discretizing) the projection surface is also the same.
  • the projection of the detection data detected in S110 on the projection surface is called the projection point
  • the projection of the background point cloud data in S101 on the projection surface is called the background projection point.
  • the calculated difference may be the difference of attribute values between points, where the attribute value may be depth, reflectivity, or another value.
  • the first preset threshold can be set according to the attributes of the radar, the attributes of the scene, and so on.
  • if the absolute value of the difference is greater than or equal to the first preset threshold, the scan point is determined to be a foreground point; otherwise, it is determined to be a background point.
  • the foreground point cloud can be obtained by aggregating the foreground points.
  • the collection of all foreground points in a frame constitutes the foreground point cloud of that frame.
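As a minimal sketch of S120, assuming depth is the compared attribute and `bg` holds the per-grid background depth (names, cell size and threshold are illustrative assumptions):

```python
def extract_foreground(bg, pts, cell=0.5, thresh=1.0):
    """A scan point is a foreground point when the absolute depth
    difference from the background at its grid cell is at least
    `thresh` (the first preset threshold)."""
    fg = []
    for x, y, d in pts:                       # pts: (x, y, depth) tuples
        gx, gy = int(x / cell), int(y / cell)
        if abs(d - bg[gx][gy]) >= thresh:
            fg.append((x, y, d))
    return fg
```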
  • Fig. 3 is a schematic diagram of the foreground point cloud of one frame.
  • the foreground object in each frame can be determined according to the foreground point cloud of that frame. As shown in FIG. 4, different gray levels are used to represent different foreground objects, including foreground object 1 to foreground object 6.
  • foreground points whose mutual distance is less than the second preset threshold can be regarded as one point cloud cluster belonging to the same foreground object; that is, the distance between every two adjacent foreground points contained in a foreground object is less than the second preset threshold.
  • the second preset threshold may be preset according to the attributes of the radar, for example, according to the density of the points detected by the radar.
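The clustering rule above (points closer than the second preset threshold belong to the same foreground object) can be sketched as a simple breadth-first grouping; this is an illustrative implementation, not the patent's:

```python
def cluster_points(points, dist_thresh):
    """Group points into clusters: any two points closer than
    dist_thresh end up in the same cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            # neighbours of point i that are still unassigned
            near = [j for j in unvisited
                    if sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                    < dist_thresh ** 2]
            for j in near:
                unvisited.discard(j)
            queue.extend(near)
            cluster.extend(near)
        clusters.append(cluster)
    return clusters
```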
  • the geometric attributes of the foreground object can be determined according to all the foreground points contained in the foreground object.
  • the geometric attributes may include at least one of the following: the size, center coordinates, and center of gravity coordinates of the foreground object.
  • the size is the side length or area of a bounding box of all the foreground points contained in the foreground object.
  • alternatively, the size may be the difference between the maximum and minimum coordinate values of all the points in the foreground point cloud contained in the foreground object.
  • the center coordinates are the coordinates of the center point of all the foreground points contained in the foreground object, or the coordinates of the center point of the bounding box.
  • the center of gravity coordinates are the coordinates of the center of gravity of all the foreground points contained in the foreground object, or the coordinates of the center of gravity of the bounding box.
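The three geometric attributes can be computed directly from the cluster's points. A hedged sketch (function name and dictionary layout are assumptions for illustration):

```python
import numpy as np

def geometry(points):
    """Size (bounding-box side lengths), center coordinates (middle of
    the bounding box) and center-of-gravity coordinates (mean of all
    points) of a foreground object."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return {
        "size": hi - lo,               # side lengths of the bounding box
        "center": (lo + hi) / 2.0,     # center point of the bounding box
        "centroid": pts.mean(axis=0),  # center of gravity of the points
    }
```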
  • optionally, the method may also include: identifying the type of the foreground object according to the foreground points contained in the foreground object.
  • for example, the type of the foreground object can be determined according to the shape of the outer contour of all the foreground points contained in the foreground object.
  • the types can include cars, bicycles, pedestrians, and so on.
  • S130 may include: predicting the predicted position information of the foreground object in the current frame according to the position information of the foreground object in historical frames; determining the actual position information of the foreground object in the current frame according to the predicted position information; and constructing the trajectory of the foreground object's movement according to the position information of the foreground object in the historical frames and its actual position information in the current frame.
  • the predicted position information of the foreground object in the current frame can be predicted by fitting.
  • the predicted position information of the (t+1)-th frame can be predicted according to the position information of each frame from the 0th frame (or the (t-n)-th frame) to the t-th frame.
  • Predictive models can be used to make predictions.
  • the predictive model may be a machine learning model.
  • the prediction model may be a function model obtained by curve fitting historical location information.
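For the curve-fitting variant, here is a minimal sketch using a polynomial fit over the historical positions; the degree, names and NumPy-based formulation are illustrative assumptions, not the patent's model:

```python
import numpy as np

def predict_next(history, degree=1):
    """Fit each coordinate of the historical positions as a polynomial
    of the frame index and evaluate it at the next frame index."""
    history = np.asarray(history, dtype=float)  # shape (frames, dims)
    t = np.arange(len(history))
    return np.array([
        np.polyval(np.polyfit(t, history[:, d], degree), len(history))
        for d in range(history.shape[1])
    ])
```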
  • specifically: determine the position information of each moving object in the current frame; then, according to the predicted position information of the foreground object in the current frame and the position information of each moving object in the current frame, determine which moving object is the foreground object and obtain the actual position information of the foreground object in the current frame.
  • for example, the position information of each moving object (i.e., foreground object) in the (t+1)-th frame can be determined according to the actually detected detection data of the (t+1)-th frame, and position matching can determine which foreground object in the (t+1)-th frame is foreground object A.
  • the distance difference between the position information of each moving object and the predicted position information can be calculated; the moving object whose distance difference has the smallest absolute value among those less than the third preset threshold is determined to be the foreground object.
  • for example, if the distances between the positions of several moving objects and the predicted position of foreground object A are all less than the third preset threshold, then the moving object with the smallest distance is determined to be foreground object A.
  • the position information may be center coordinates or center of gravity coordinates; the distance difference can be calculated using either. In general, these two calculations give different results, and the foreground object can then be matched based on the smallest distance difference.
  • when the center coordinates are used as the position information,
  • the m distance differences between the position information of the m moving objects and the predicted position information of foreground object A can be calculated.
  • when the center of gravity coordinates are used as the position information,
  • another m distance differences between the position information of the m moving objects and the predicted position information of foreground object A can be calculated. In other words, a total of 2*m distance differences can be obtained. If the minimum of these 2*m distance differences was obtained using the center coordinates as the position information, then the center coordinates are used as the position information to determine which of the m moving objects is foreground object A. Conversely, if the minimum of the 2*m distance differences was obtained using the center of gravity coordinates, then the center of gravity coordinates are used as the position information to determine which of the m moving objects is foreground object A.
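The matching rule described above (try both center and center-of-gravity coordinates, keep the candidate with the smallest distance under the third preset threshold) can be sketched as follows; the function signature and data layout are assumptions for illustration:

```python
import math

def match_foreground(predicted, candidates, thresh):
    """candidates: list of (center, centroid) position pairs for the
    moving objects in the current frame.  Returns the index of the
    best match, or None when the trajectory is interrupted."""
    best, best_d = None, thresh
    for i, (center, centroid) in enumerate(candidates):
        for pos in (center, centroid):   # 2*m distances in total
            d = math.dist(predicted, pos)
            if d < best_d:
                best, best_d = i, d
    return best
```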
  • in some cases, the foreground object is not found in the current frame; in other words, the foreground object is not detected in the current frame, and its trajectory is interrupted.
  • the predicted position information of the (t+p)-th frame can be predicted based on the position information of each frame from the 0th frame (or the (t-n)-th frame) to the t-th frame.
  • suppose the foreground object is foreground object A,
  • and m1 moving objects are detected in the (t+p)-th frame.
  • suppose m1-1 of the m1 moving objects are matched with foreground objects in the (t+p-1)-th frame, while the m1-th moving object cannot find a matching foreground object in the (t+p-1)-th frame.
  • then it can be judged, based on the predicted position information of foreground object A in the (t+p)-th frame and the detected position information of the m1-th moving object in the (t+p)-th frame, whether the m1-th moving object is foreground object A.
  • the judgment process is similar to the matching process described above.
  • in this way, the position information of each foreground object can be determined based on the detection data detected by the radar, and the position information of the foreground object in each frame can be associated by matching between different frames, so that the trajectory of each foreground object can be constructed from its position information in each frame.
  • the trajectory can characterize the change of position information over time.
  • the information of the trajectory of the movement of the foreground object may be stored.
  • the trajectory of the foreground object may be stored in the form of a list, wherein the list records the position information of the foreground object at each time.
  • the list may have the form of Table 1 as shown below.
  • the embodiment of the present invention does not limit the form and content of the list.
  • the list may be in the form of a stack.
  • the center position and the center of gravity position may be recorded in the list at the same time, and so on.
  • the present invention is not limited to this.
  • point clouds in different frames may be matched to determine which ones belong to the same foreground object. Further, it is also possible to accumulate the foreground point clouds of the foreground object over multiple frames. For example, if the number of points contained in the foreground object in a certain frame is n1, and the number in another frame is n2, then the point clouds of the two frames can be accumulated, specifically according to the shape and posture of the foreground object; assuming that the number of points after accumulation is n3, then n3 ≥ n1, n3 ≥ n2 and n3 ≤ n1+n2. In this way, a denser point cloud of the foreground object can be obtained, which on the one hand characterizes the foreground object better, and on the other hand allows the type of the foreground object to be verified in turn, improving the accuracy of monitoring.
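A hedged sketch of the accumulation step, assuming the per-frame clouds are already aligned by the object's shape and posture; duplicates are merged on a voxel key (the `resolution` parameter is an assumption), so the accumulated count n3 never exceeds n1 + n2:

```python
def accumulate(cloud_a, cloud_b, resolution=0.05):
    """Merge two aligned foreground point clouds, discarding points
    that fall into an already occupied voxel of size `resolution`."""
    seen = {tuple(round(c / resolution) for c in p) for p in cloud_a}
    merged = list(cloud_a)
    for p in cloud_b:
        key = tuple(round(c / resolution) for c in p)
        if key not in seen:
            seen.add(key)
            merged.append(p)
    return merged
```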
  • it may further include: determining a movement parameter of the foreground object according to the trajectory of the movement of the foreground object, where the movement parameter includes at least one of the following: displacement, velocity, and acceleration.
  • the movement parameters of the foreground object in a certain period of time can be determined according to the trajectory.
  • the movement parameter may also be included in the above list.
  • for example, when the foreground object is a vehicle,
  • the speed of the vehicle can be determined and compared with the speed limit in the current scene, so as to determine whether the vehicle is speeding.
  • the lidar can be installed at the traffic light at a road intersection, so as to detect whether vehicles passing through the intersection are speeding and to assist in violation monitoring.
  • the traffic controller or the driver can be warned or reminded in advance to ensure traffic safety and prevent traffic accidents due to speeding.
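The speed check can be sketched directly from the stored trajectory; the frame interval, units and function name below are illustrative assumptions:

```python
import math

def is_speeding(track, dt, limit):
    """track: per-frame (x, y) positions, dt seconds apart.  Returns
    True when the speed over the last frame interval exceeds limit."""
    speed = math.dist(track[-2], track[-1]) / dt
    return speed > limit
```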
  • in some embodiments where the foreground object is a vehicle, statistics on traffic flow information in a specific area can be performed according to the trajectories.
  • in some embodiments where the foreground object is a vehicle and the scene is a parking lot, the parking space occupancy status of the parking lot can be determined according to the detected movement trajectories, and a vehicle to be parked can be guided to an idle parking space.
  • the point cloud data detected by the lidar is used to construct the background point cloud data; on this basis, the moving foreground object is determined, and the trajectory of its movement is constructed.
  • the embodiments of the present invention can thus realize motion detection of foreground objects through lidar.
  • the method of the embodiments of the present invention has low algorithmic complexity and is easy to implement.
  • the obtained motion detection result has high accuracy and is easy to apply.
  • the method can be applied to various scenes; for example, it is not affected by lighting conditions, so its scene adaptability is strong.
  • FIG. 5 is a schematic block diagram of an apparatus for monitoring a moving object according to an embodiment of the present invention.
  • the apparatus 20 shown in FIG. 5 includes an acquisition module 210, a determination module 220, and a construction module 230.
  • the acquiring module 210 is configured to acquire the detection data obtained by the lidar through detection;
  • the determining module 220 is configured to determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
  • the construction module 230 is configured to construct the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
  • the device 20 may further include a background data acquisition module 201 and a background point cloud construction module 202.
  • the background data acquisition module 201 may be used to acquire background detection data detected by the lidar over a period of time.
  • the background point cloud construction module 202 may be used to construct the background point cloud data according to the background detection data.
  • the background point cloud construction module 202 may include a projection sub-module and a feature determination sub-module.
  • the projection sub-module is used to project the scan points of the background detection data onto the projection surface and to grid the projection surface; the feature determination sub-module is used to determine, for each grid on the projection surface, the background features of the scan points projected onto that grid.
  • the projection surface may be a plane perpendicular to the axis of the radar; of course, the projection surface may also be another plane, for example, one at a certain angle to the axis of the radar.
  • the projection surface can be gridded (also referred to as discretized), and each grid on the gridded projection surface may have scan points projected onto it.
  • the feature determination submodule may be specifically configured to determine the background feature of the scan point according to the depth information and/or reflectivity of the corresponding scan point of each frame of background detection data.
  • the depth information includes a distribution density function of the depth value, where the distribution density function represents the probability density with which different depth values are judged to be background.
  • the background point cloud construction module 202 may also be used to reduce the influence of noise on the background features of the scan points in the background point cloud data by adopting a filtering method.
  • the device 20 may further include a background point cloud update module, configured to update the background point cloud data according to the detection data, in real time or periodically. Specifically, after the background point cloud data is constructed, it is updated based on subsequent detection data. For example, if the depth of a certain grid point in the subsequent detection data is greater than the depth of the corresponding grid point in the background point cloud data, the background depth at that point has changed, and the background feature of that point can be updated based on the detection data.
  • the determining module 220 may include a projection unit 2201 and a judgment unit 2202.
  • the projection unit 2201 can be used to project the scan points of the detection data onto a projection surface to obtain projection points; the judging unit 2202 can be used to compare the projection point of each scan point with the background projection point of the background point cloud data at the corresponding position on the projection surface, and judge whether the scan point belongs to the foreground point cloud.
  • the judging unit 2202 may be specifically configured to: determine that the scan point belongs to the foreground point cloud if the absolute value of the difference between the projection point and the background projection point is greater than or equal to a first preset threshold; and determine that the scan point does not belong to the foreground point cloud if the absolute value of the difference is less than the first preset threshold.
  • the determining module 220 may also be used to determine the foreground object according to the foreground point cloud, wherein the distance between every two adjacent foreground points contained in the foreground object is less than the second preset threshold; and to determine the geometric attributes of the foreground object according to all the foreground points included in the foreground object.
  • the geometric attributes include at least one of the following: the size, center coordinates, and center of gravity coordinates of the foreground object.
  • the size is the side length or area of the bounding box of all the foreground points.
  • the center coordinates are the coordinates of the center point of all the foreground points or the coordinates of the center point of the bounding box.
  • the center of gravity coordinates are the coordinates of the center of gravity of all the foreground points or the coordinates of the center of gravity of the bounding box.
  • the construction module 230 may include a prediction unit 2301, a determination unit 2302 and a construction unit 2303.
  • the prediction unit 2301 may be configured to predict the predicted position information of the foreground object in the current frame according to the position information of the foreground object in the historical frame.
  • the determining unit 2302 may be configured to determine the actual position information of the foreground object in the current frame according to the predicted position information of the foreground object in the current frame.
  • the constructing unit 2303 may be configured to construct a trajectory of the foreground object's movement according to the position information of the foreground object in the historical frame and the actual position information of the foreground object in the current frame.
  • the prediction unit 2301 may be specifically configured to predict the predicted position information of the foreground object in the current frame by fitting a historical trajectory constructed based on the position information of the foreground object in the historical frame.
  • the determining unit 2302 may be specifically configured to: determine the position information of each moving object in the current frame; according to the predicted position information of the foreground object in the current frame and each moving object in the current frame To determine which of the moving objects is the foreground object and obtain the actual position information of the foreground object in the current frame.
  • the determining unit 2302 may be specifically configured to: calculate the distance difference between the position information of each moving object and the predicted position information; and determine as the foreground object the moving object whose distance difference has the smallest absolute value among those less than the third preset threshold.
  • the determining unit 2302 may also be specifically configured to: if the absolute value of the distance difference is greater than or equal to the third preset threshold, determine that the foreground object is not found in the current frame.
  • the device 20 may further include a storage module 240, configured to store the trajectory of the foreground object in the form of a list, wherein the list records the position information of the foreground object at each time.
  • the position information includes: the coordinates of the center point or the coordinates of the center of gravity point.
  • the device 20 may further include a parameter determination module, configured to determine a movement parameter of the foreground object according to the trajectory of the foreground object's movement, wherein the movement parameter includes at least one of the following: displacement, velocity, acceleration.
  • the device 20 further includes a post-processing module 250.
  • the post-processing module 250 may be used to accumulate the foreground point clouds of the foreground object over multiple frames.
  • the post-processing module 250 may be used to identify the type of the foreground object according to the foreground point cloud contained in the foreground object.
  • the types include: cars, bicycles, and pedestrians.
  • when the foreground object is a vehicle, the post-processing module 250 may be used to determine whether the vehicle is in an overspeed state.
  • when the foreground object is a vehicle, the post-processing module 250 may be used to perform statistics on traffic flow information in a specific area.
  • when the scene is a parking lot and the foreground object is a vehicle, the post-processing module 250 may be used to determine the parking space occupancy status of the parking lot according to the detected movement trajectory of the foreground object in the parking lot.
  • the post-processing module 250 may be further used to guide the vehicle to be parked to an idle parking space.
  • the point cloud data detected by the lidar is used to construct the background point cloud data, and on this basis the moving foreground object is determined and its movement trajectory is determined.
  • the embodiment of the present invention can realize motion detection of foreground objects through lidar.
  • the method of the embodiment of the present invention has low algorithm complexity and is easy to implement.
  • the obtained motion detection result has high accuracy and is easy to apply.
  • the method can be applied to various scenes; for example, it is not affected by illumination, so its scene adaptability is strong.
  • the device 20 shown in FIG. 5 or FIG. 6 can implement the aforementioned method for monitoring a moving object shown in FIG. 1; to avoid repetition, details are not repeated here.
  • the embodiment of the present invention also provides another device for monitoring a moving object, including a memory, a processor, and a computer program stored in the memory and runnable on the processor.
  • when the processor executes the program, the steps of the method for monitoring a moving object shown in FIG. 1 are implemented.
  • the device 30 may include a memory 310 and a processor 320.
  • the memory 310 stores computer program codes for implementing corresponding steps in the method for monitoring a moving object according to an embodiment of the present invention.
  • the processor 320 is configured to run the computer program code stored in the memory 310 to execute the corresponding steps of the method for monitoring a moving object according to an embodiment of the present invention, and to implement the corresponding modules in the device 20 shown in FIG. 5 or FIG. 6 according to an embodiment of the present invention.
  • the following steps are executed: acquiring detection data obtained by the lidar through detection; determining the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene; and constructing the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
  • the point cloud data detected by the lidar is used to construct the background point cloud data, and on this basis the moving foreground object is determined and its movement trajectory is determined.
  • the embodiment of the present invention can realize motion detection of foreground objects through lidar.
  • the method of the embodiment of the present invention has low algorithm complexity and is easy to implement.
  • the obtained motion detection result has high accuracy and is easy to apply.
  • the method can be applied to various scenes; for example, it is not affected by illumination, so its scene adaptability is strong.
  • the embodiment of the present invention also provides a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the method for monitoring a moving object shown in FIG. 1 can be implemented.
  • the computer storage medium is a computer-readable storage medium.
  • the computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disk read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • one computer-readable storage medium contains computer-readable program code for monitoring moving objects
  • another computer-readable storage medium contains computer-readable program code for post-processing based on the trajectory of the movement.
  • the embodiment of the present invention also provides a computer program or computer program product.
  • the computer program or computer program product can be executed by a processor, and when the code is executed by the processor, it can realize: acquiring detection data obtained by the lidar through detection; determining the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene; and constructing the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
  • the point cloud data detected by the lidar is used to construct the background point cloud data, and on this basis the moving foreground object is determined and its movement trajectory is determined.
  • the embodiment of the present invention can realize motion detection of foreground objects through lidar.
  • the method of the embodiment of the present invention has low algorithm complexity and is easy to implement.
  • the obtained motion detection result has high accuracy and is easy to apply.
  • the method can be applied to various scenes; for example, it is not affected by illumination, so its scene adaptability is strong.
  • the embodiment of the present invention can also determine parameters such as the speed and acceleration of the moving object based on its trajectory, and can also perform statistics on the traffic information of the moving object; such simple and intelligent statistics can become an important part of smart city construction.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from a website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated in a processor, or each unit may exist alone physically, or two or more units may be integrated in one unit.
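The matching and movement-parameter logic listed in the bullets above (the determining unit 2302 and the parameter determination module) can be sketched as follows; a minimal illustration in Python, where the 2-D position tuples, the function names, and the fixed frame interval `dt` are assumptions for illustration, not part of the patent:

```python
import math

def match_foreground(predicted, detections, threshold):
    """Among the moving objects detected in the current frame, pick the one
    whose distance to the predicted position is smallest and whose absolute
    distance difference is below the third preset threshold; return None if
    no detection qualifies (the foreground object is not found)."""
    best, best_dist = None, threshold
    for pos in detections:
        d = math.dist(predicted, pos)
        if d < best_dist:
            best, best_dist = pos, d
    return best

def movement_parameters(trajectory, dt):
    """Derive displacement, speed, and acceleration from the last three
    positions of a stored trajectory, assuming a fixed frame interval dt."""
    (x0, y0), (x1, y1), (x2, y2) = trajectory[-3:]
    displacement = math.dist((x1, y1), (x2, y2))
    speed = displacement / dt
    prev_speed = math.dist((x0, y0), (x1, y1)) / dt
    acceleration = (speed - prev_speed) / dt
    return displacement, speed, acceleration
```

Storing the trajectory as a plain list of per-frame positions, as the storage module 240 describes, is what makes these derivatives cheap to compute incrementally.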

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Provided are a method and apparatus for monitoring a moving object, and a computer storage medium. The method comprises: acquiring detection data obtained by a lidar by means of detection (S110); determining a foreground point cloud of each frame according to background point cloud data and detection data of each frame, wherein the background point cloud data and the detection data are for the same scene (S120); and according to foreground point clouds of multiple frames, constructing the trajectory of the movement of a foreground object (S130). It can be seen that embodiments of the present invention use point cloud data detected by a lidar to construct background point cloud data, and on the basis of same, determine a moving foreground object and determine the trajectory thereof. The embodiments of the present invention may use a lidar to achieve the motion detection of a foreground object. Moreover, the algorithm complexity of the method is low, the difficulty of implementation is small, the obtained motion detection results are highly accurate, the method is easy to apply, may be applied in various scenes, for example, the method will not be affected by light and the like, and the scene adaptability is strong.

Description

移动物体监控的方法、装置及计算机存储介质Method, device and computer storage medium for monitoring moving objects 技术领域Technical field
本发明实施例涉及雷达领域,并且更具体地,涉及一种移动物体监控的方法、装置及计算机存储介质。The embodiments of the present invention relate to the field of radar, and more specifically, to a method, device, and computer storage medium for monitoring moving objects.
背景技术Background technique
目前在对移动物体进行监控时,通常采用的是基于相机的监控。然而,使用相机所采集的图像一般缺乏深度信息或者所估计的深度信息准确度差,导致监控结果不准确。并且,使用相机的监控其环境适应性差,尤其是容易受光照影响的环境等。At present, when monitoring moving objects, camera-based monitoring is usually adopted. However, the images collected by the camera generally lack depth information or the accuracy of the estimated depth information is poor, resulting in inaccurate monitoring results. In addition, the use of cameras for monitoring has poor environmental adaptability, especially in environments that are easily affected by light.
发明内容Summary of the invention
本发明实施例提供了一种移动物体监控的方法、装置及计算机存储介质,能够确定前景物移动的轨迹,且能够应用于各种场景。The embodiments of the present invention provide a method, a device and a computer storage medium for monitoring a moving object, which can determine the trajectory of the movement of a foreground object and can be applied to various scenes.
第一方面,提供了一种移动物体监控的方法,包括:In the first aspect, a method for monitoring moving objects is provided, including:
获取激光雷达通过探测得到的探测数据;Obtain detection data obtained by lidar through detection;
根据背景点云数据以及每一帧的探测数据，确定每一帧的前景点云，其中，所述背景点云数据与所述探测数据针对同一场景；Determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
根据多帧的前景点云，构建前景物移动的轨迹。Construct the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
第二方面,提供了一种移动物体监控的装置,包括:In a second aspect, a device for monitoring moving objects is provided, including:
获取模块,用于获取激光雷达通过探测得到的探测数据;The acquisition module is used to acquire the detection data obtained by the lidar through detection;
确定模块，用于根据背景点云数据以及每一帧的探测数据，确定每一帧的前景点云，其中，所述背景点云数据与所述探测数据针对同一场景；The determining module is configured to determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
构建模块，用于根据多帧的前景点云，构建前景物移动的轨迹。The construction module is configured to construct the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
第三方面,提供了一种移动物体监控的装置,包括存储器、处理器及存储在所述存储器上且在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述第一方面所述方法的步骤。In a third aspect, a device for monitoring a moving object is provided, including a memory, a processor, and a computer program stored in the memory and running on the processor, and the processor implements the foregoing when the computer program is executed. The steps of the method described in the first aspect.
第四方面，提供了一种计算机存储介质，其上存储有计算机程序，所述计算机程序被处理器执行时实现上述第一方面所述方法的步骤。In a fourth aspect, a computer storage medium is provided with a computer program stored thereon, and the computer program, when executed by a processor, implements the steps of the method described in the first aspect.
由此可见，本发明实施例利用激光雷达所探测到的点云数据，构建了背景点云数据，在此基础上确定移动的前景物，并确定前景物移动的轨迹。本发明实施例能够通过激光雷达实现对前景物的移动侦测，再者本发明实施例的方法其算法复杂度低，实现难度不大，所得到的移动侦测的结果准确度高，易于应用，并且能够应用于各种场景，例如不会受光照等的影响，场景适应性强。It can be seen that, in the embodiment of the present invention, the point cloud data detected by the lidar is used to construct the background point cloud data, and on this basis the moving foreground object is determined and its movement trajectory is determined. The embodiment of the present invention can realize motion detection of foreground objects through lidar. Moreover, the method of the embodiment of the present invention has low algorithm complexity and is easy to implement; the obtained motion detection result has high accuracy and is easy to apply; and it can be applied to various scenes, for example, it is not affected by illumination, so its scene adaptability is strong.
附图说明Description of the drawings
为了更清楚地说明本发明实施例的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。In order to explain the technical solutions of the embodiments of the present invention more clearly, the following will briefly introduce the drawings that need to be used in the description of the embodiments or the prior art. Obviously, the drawings in the following description are only some of the present invention. Embodiments, for those of ordinary skill in the art, without creative labor, other drawings can be obtained based on these drawings.
图1是本发明实施例的移动物体监控的方法的一个示意性流程图;FIG. 1 is a schematic flowchart of a method for monitoring a moving object according to an embodiment of the present invention;
图2是本发明实施例的背景的深度值的空间分布的一个示意图;2 is a schematic diagram of the spatial distribution of the depth value of the background according to an embodiment of the present invention;
图3是本发明实施例的前景点云的一个示意图;Fig. 3 is a schematic diagram of a front scenic spot cloud according to an embodiment of the present invention;
图4是本发明实施例的根据前景点云确定的前景物的一个示意图;4 is a schematic diagram of foreground objects determined according to the cloud of front scenic spots according to an embodiment of the present invention;
图5是本发明实施例的移动物体监控的装置的一个示意性框图;Fig. 5 is a schematic block diagram of a mobile object monitoring device according to an embodiment of the present invention;
图6是本发明实施例的移动物体监控的装置的另一个示意性框图;FIG. 6 is another schematic block diagram of a device for monitoring a moving object according to an embodiment of the present invention;
图7是本发明实施例的移动物体监控的装置的又一个示意性框图。Fig. 7 is another schematic block diagram of a mobile object monitoring device according to an embodiment of the present invention.
具体实施方式Detailed ways
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动的前提下所获得的所有其他实施例,都属于本发明保护的范围。The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
随着激光雷达的成本的显著降低,其应用也必将变得逐渐广泛。本发明实施例提供了一种基于激光雷达的移动物体监控的方案。With the significant reduction in the cost of lidar, its application is bound to become increasingly widespread. The embodiment of the present invention provides a solution for monitoring moving objects based on lidar.
如图1所示,是本发明实施例的移动物体监控的方法的一个示意性流程图。图1所示的方法包括:As shown in FIG. 1, it is a schematic flowchart of a method for monitoring a moving object according to an embodiment of the present invention. The method shown in Figure 1 includes:
S110,获取激光雷达通过探测得到的探测数据;S110: Obtain detection data obtained by lidar through detection;
S120，根据背景点云数据以及每一帧的探测数据，确定每一帧的前景点云，其中，所述背景点云数据与所述探测数据针对同一场景；S120: Determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene;
S130，根据多帧的前景点云，构建前景物移动的轨迹。S130: Construct the trajectory of the foreground object's movement according to the foreground point clouds of multiple frames.
其中,针对不同的场景,移动物体也可以不同。作为一例,若场景为停车场,则移动物体可以为车辆,例如,汽车、自行车等。作为另一例,若场景为道路,则移动物体可以为车辆,或者移动物体可以为车辆和行人。本领域技术人员可理解,场景可以为其他,相应地移动物体可以为场景中移动的其他物体,如场景为水域,移动物体为鱼类,等等。本发明不再一一列举。Among them, for different scenes, the moving objects can also be different. As an example, if the scene is a parking lot, the moving object may be a vehicle, for example, a car, a bicycle, and so on. As another example, if the scene is a road, the moving objects may be vehicles, or the moving objects may be vehicles and pedestrians. Those skilled in the art can understand that the scene can be other, and correspondingly, the moving object can be other moving objects in the scene, for example, the scene is water, the moving object is fish, and so on. The present invention will not enumerate one by one.
本发明实施例主要以车辆为例进行详细阐述,相应的场景为车辆可能出现的位置,如停车场、道路等。The embodiment of the present invention mainly takes a vehicle as an example for detailed explanation, and the corresponding scene is a location where the vehicle may appear, such as a parking lot, a road, and the like.
示例性地,在S110之前可以包括:S101,构建背景点云数据。该过程可以称为背景建模的过程,背景点云数据也可以称为背景点云池。Exemplarily, before S110, it may include: S101, constructing background point cloud data. This process can be called a background modeling process, and the background point cloud data can also be called a background point cloud pool.
具体地,可以通过下述方式得到背景点云数据:获取激光雷达在一段时间长度内探测的背景探测数据;根据所探测到的背景探测数据构建背景点云数据。Specifically, the background point cloud data can be obtained in the following manner: acquiring the background detection data detected by the lidar over a period of time; constructing the background point cloud data according to the detected background detection data.
其中,一段时间长度可以根据场景进行预先设定,例如为24小时或2天或其他时长,本发明对此不限定。Wherein, the length of a period of time can be preset according to the scene, for example, 24 hours or 2 days or other length of time, which is not limited in the present invention.
其中，在初始化时，激光雷达固定好之后，可以开启激光雷达一段时间长度，采集这段时间长度内的足够的探测数据，以确定背景中的所有背景均被探测并采集到。可选地，如果该场景中存在当前处于静止状态的可移动物体，那么在这段时间长度内的某个或某些时刻，该物体应该被移除，从而能够保证采集到包括完备背景信息的背景探测数据。例如，若该场景中存在静止的车辆，则应确保其在这段时间长度内曾被移走。Among them, during initialization, after the lidar is fixed, the lidar can be turned on for a period of time, and sufficient detection data within this period can be collected to ensure that everything in the background is detected and collected. Optionally, if there is a movable object currently in a stationary state in the scene, the object should be removed at some moment or moments during this period, so as to ensure that background detection data containing complete background information can be collected. For example, if there is a stationary vehicle in the scene, it should be ensured that it was moved away at some point during this period.
其中，根据背景探测数据构建背景点云数据，可以包括：将背景探测数据的扫描点投影到投影面上并将投影面网格化；针对投影面上的每一个网格，确定投影到网格上的扫描点的背景特征。示例性地，可以根据各帧背景探测数据的对应扫描点的深度信息和/或反射率，确定扫描点的背景特征。Constructing the background point cloud data according to the background detection data may include: projecting the scanning points of the background detection data onto a projection surface and gridding the projection surface; and, for each grid on the projection surface, determining the background feature of the scanning points projected onto that grid. Exemplarily, the background feature of a scanning point can be determined according to the depth information and/or reflectivity of the corresponding scanning point in each frame of background detection data.
可选地，投影面可以是与雷达的轴线垂直的平面，当然投影面也可以是其他的平面，如与雷达的轴线成一定角度。随后，可以将该投影面进行网格化（也可以称为离散化），并且网格化后的投影面上的每一个网格均存在投影上去的扫描点。Optionally, the projection surface may be a plane perpendicular to the axis of the radar; of course, the projection surface may also be another plane, for example one at a certain angle to the axis of the radar. Subsequently, the projection surface can be gridded (also referred to as discretized), and each grid on the gridded projection surface has scanning points projected onto it.
进一步地,可以基于深度信息和/或反射率来确定背景特征。以深度信息为例,针对某个扫描点,深度信息可以包括深度值,则可以选取深度值最大的点作为其背景特征。或者,深度信息可以包括深度值的分布密度函数,分布密度函数表示深度值的概率密度,其表示不同的深度值被判定为背景的概率密度大小。可以将其深度值的分布密度函数作为其背景特征。Further, the background feature can be determined based on depth information and/or reflectivity. Taking depth information as an example, for a certain scanning point, the depth information may include a depth value, and the point with the largest depth value may be selected as its background feature. Alternatively, the depth information may include the distribution density function of the depth value, and the distribution density function represents the probability density of the depth value, which represents the probability density of different depth values being determined as the background. The distribution density function of its depth value can be used as its background feature.
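The max-depth background feature described above can be sketched in a few lines; a minimal illustration in Python, assuming each scan point has already been projected onto the gridded plane and is given as a `(grid_x, grid_y, depth)` triple (a hypothetical data layout, not specified by the patent):

```python
def build_background(frames):
    """Background modeling (S101): for every grid cell on the projection
    plane, keep the largest depth value observed across all background
    frames as that cell's background feature."""
    background = {}  # (gx, gy) -> max depth observed for that cell
    for frame in frames:
        for gx, gy, depth in frame:
            cell = (gx, gy)
            if depth > background.get(cell, float("-inf")):
                background[cell] = depth
    return background
```

The distribution-density variant mentioned in the text would instead keep a histogram of depths per cell; the "max depth wins" rule shown here is the simpler of the two options the paragraph describes.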
作为一例，图2示出了背景的深度值的空间分布的一个示意图。图2中所针对的场景是道路，例如高速路；其中图的上方为天空，图中包括道路以及路侧的路牌、路灯等，另外还包括路旁的树木等等。As an example, FIG. 2 shows a schematic diagram of the spatial distribution of the depth values of the background. The scene addressed in FIG. 2 is a road, such as a highway; the top of the figure is the sky, and the figure includes the road as well as roadside signs and street lights, and also roadside trees.
另外,在S101中,还可以包括面内滤波的过程。具体地,可以采用滤波方法降低噪声对背景点云数据中的扫描点的背景特征的影响。其中,滤波方法可以是均值滤波或其他滤波方式。In addition, in S101, a process of in-plane filtering may also be included. Specifically, a filtering method can be used to reduce the influence of noise on the background features of the scanned points in the background point cloud data. Among them, the filtering method may be average filtering or other filtering methods.
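The in-plane filtering step can likewise be sketched; here a simple mean filter over neighboring grid cells, reusing the hypothetical `(gx, gy) -> depth` background map from the modeling step. The neighborhood radius is an assumed parameter:

```python
def mean_filter(background, radius=1):
    """In-plane filtering: smooth each cell's background depth by averaging
    it with its populated neighbors within `radius` grid cells, reducing the
    influence of noise on the background features."""
    filtered = {}
    for (gx, gy) in background:
        neighbors = [
            background[(gx + dx, gy + dy)]
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)
            if (gx + dx, gy + dy) in background
        ]
        filtered[(gx, gy)] = sum(neighbors) / len(neighbors)
    return filtered
```

Any other smoothing kernel (median, Gaussian) would slot in the same way, as the text notes that filtering methods other than mean filtering may be used.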
可选地，本发明实施例还可以实时地或者定期地根据探测数据对背景点云数据进行更新。具体地，在构建背景点云数据之后，根据后续探测到的探测数据对其进行更新。例如，若后续探测到的探测数据中某个网格的点的深度大于背景点云数据中对应的网格的点的深度，则说明该点的背景深度已经变化，可以根据探测数据对该点的背景特征进行更新。Optionally, the embodiment of the present invention may also update the background point cloud data according to the detection data, in real time or periodically. Specifically, after the background point cloud data is constructed, it is updated according to subsequently detected detection data. For example, if the depth of a point in a certain grid in the subsequently detected detection data is greater than the depth of the point in the corresponding grid in the background point cloud data, it means that the background depth of that point has changed, and the background feature of that point can be updated according to the detection data.
这样,在S101中,通过背景建模的方式可以构建背景点云数据。其可以用于后续进行前景提取,以方便检测出移动物体。In this way, in S101, background point cloud data can be constructed by means of background modeling. It can be used for subsequent foreground extraction to facilitate the detection of moving objects.
示例性地,S110可以包括:由激光雷达进行实时采集,得到探测数据,其中,激光雷达固定好之后未进行移动,也就是说,S110中激光雷达所采集的场景与S101中在构建背景点云数据时激光雷达所采集的场景是一样的。Exemplarily, S110 may include: real-time acquisition by lidar to obtain detection data, where the lidar is not moved after being fixed, that is, the scene collected by lidar in S110 and the background point cloud constructed in S101 The scenes collected by lidar are the same when data is collected.
示例性地，S120可以包括：将探测数据的扫描点投影到投影面上得到投影点；通过将扫描点的投影点与背景点云数据在投影面的对应位置的背景投影点进行比较，判断扫描点是否属于前景点云。Exemplarily, S120 may include: projecting the scanning points of the detection data onto the projection surface to obtain projection points; and judging whether a scanning point belongs to the foreground point cloud by comparing the projection point of the scanning point with the background projection point of the background point cloud data at the corresponding position on the projection surface.
可理解，S101中构建背景点云数据时所选取的投影面与此处的投影面为同一个投影面，例如均为与雷达的轴线垂直的平面。并且，对投影面进行网格化（或称为离散化）的方式也是一样的。It can be understood that the projection surface selected when constructing the background point cloud data in S101 is the same projection surface as the one used here; for example, both are planes perpendicular to the axis of the radar. In addition, the manner of gridding (or discretizing) the projection surface is also the same.
将S110探测到的探测数据在投影面上的投影称为投影点，将S101中的背景点云数据在投影面上的投影称为背景投影点。在判断探测数据中的某个扫描点是否属于前景点云时，可以通过比较投影点与背景投影点进行判断。具体地，若所述投影点与所述背景投影点之间的差值的绝对值大于或等于第一预设阈值，则确定所述扫描点属于前景点云；若所述投影点与所述背景投影点之间的差值的绝对值小于所述第一预设阈值，则确定所述扫描点不属于前景点云。The projection of the detection data detected in S110 onto the projection surface is called a projection point, and the projection of the background point cloud data of S101 onto the projection surface is called a background projection point. When judging whether a certain scanning point in the detection data belongs to the foreground point cloud, the judgment can be made by comparing the projection point with the background projection point. Specifically, if the absolute value of the difference between the projection point and the background projection point is greater than or equal to a first preset threshold, it is determined that the scanning point belongs to the foreground point cloud; if the absolute value of the difference between the projection point and the background projection point is less than the first preset threshold, it is determined that the scanning point does not belong to the foreground point cloud.
其中,计算差值可以是计算像素点之间的属性值的差值,属性值可以是深度、反射率或其他值等。其中,第一预设阈值可以根据雷达的属性、场景的属性等进行设定。Wherein, the calculated difference value may be the difference value of the attribute value between the pixel points, and the attribute value may be depth, reflectance or other values. Wherein, the first preset threshold can be set according to the attributes of the radar, the attributes of the scene, and so on.
也就是说，若探测数据中的某个扫描点的投影点与对应的背景投影点之间的属性值的差值的绝对值大于第一预设阈值，则确定该扫描点为前景点；否则为背景点。That is to say, if the absolute value of the difference in attribute value between the projection point of a certain scanning point in the detection data and the corresponding background projection point is greater than the first preset threshold, the scanning point is determined to be a foreground point; otherwise, it is a background point.
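The foreground/background decision above reduces to one threshold test per grid cell; a minimal sketch, assuming the same hypothetical `(gx, gy, depth)` point layout and `(gx, gy) -> depth` background map as in the background-modeling sketch. How to treat points whose cell has no background entry is not specified in the text, so treating them as foreground here is an assumption:

```python
def extract_foreground(frame, background, threshold):
    """Foreground extraction (S120): a scan point is a foreground point if
    the absolute difference between its depth and the background depth at
    the same grid cell is >= the first preset threshold; otherwise it is a
    background point. Cells with no background entry are kept as foreground
    (an assumption, not specified in the text)."""
    foreground = []
    for gx, gy, depth in frame:
        bg = background.get((gx, gy))
        if bg is None or abs(depth - bg) >= threshold:
            foreground.append((gx, gy, depth))
    return foreground
```

The threshold plays the role of the first preset threshold, which the text says is set according to the radar's and the scene's attributes.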
这样，通过该过程，针对激光雷达探测到的每一帧探测数据，可以得到每一帧中的哪些是前景点。进而可以通过将前景点进行聚合，得到每一帧的前景点云。In this way, through this process, for each frame of detection data detected by the lidar, it is possible to obtain which points in each frame are foreground points. Furthermore, the foreground point cloud of each frame can be obtained by aggregating the foreground points.
其中，可以将一帧中的所有前景点的集合，构建为这一帧的前景点云。如图3所示，为一帧的前景点云的一个示意图。Among them, the set of all foreground points in a frame can be constructed as the foreground point cloud of that frame. FIG. 3 shows a schematic diagram of the foreground point cloud of one frame.
进一步地，可以根据每一帧的前景点云，确定每一帧中的前景物。如图4所示，其中使用不同的灰度代表不同的前景物，其中包括前景物1至前景物6。Further, the foreground objects in each frame can be determined according to the foreground point cloud of each frame. As shown in FIG. 4, different gray levels are used to represent different foreground objects, including foreground object 1 to foreground object 6.
具体地，可以将两两距离小于第二预设阈值的前景点云作为一个点云簇，属于同一个前景物，也就是说，前景物所包含的每两个相邻的前景点之间的距离小于第二预设阈值。其中，第二预设阈值可以根据雷达的属性进行预先设定，例如根据雷达所探测的点的密度进行设定。Specifically, foreground points whose pairwise distance is less than a second preset threshold can be regarded as one point cloud cluster belonging to the same foreground object; that is, the distance between every two adjacent foreground points contained in a foreground object is less than the second preset threshold. The second preset threshold can be preset according to the attributes of the radar, for example, according to the density of the points detected by the radar.
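The clustering rule above (pairwise distance below the second preset threshold) amounts to connected-component grouping; a minimal sketch using a flood fill over 2-D points. The O(n²) neighbor search is a simplification for illustration; a real implementation would likely use a spatial index:

```python
import math

def cluster_points(points, max_gap):
    """Group foreground points into point cloud clusters: two points belong
    to the same cluster (the same foreground object) if a chain of points
    connects them in which every consecutive pair is closer than max_gap
    (the second preset threshold)."""
    clusters, unvisited = [], set(range(len(points)))
    while unvisited:
        stack = [unvisited.pop()]
        cluster = []
        while stack:
            i = stack.pop()
            cluster.append(points[i])
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) < max_gap]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
        clusters.append(cluster)
    return clusters
```

Choosing `max_gap` from the radar's point density, as the text suggests, keeps one physical object from being split into several clusters at long range where points are sparser.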
进一步地，可以根据前景物所包含的所有前景点云，确定前景物的几何属性。Further, the geometric attributes of the foreground object can be determined according to all the foreground points contained in the foreground object.
可选地，几何属性可以包括以下至少一项：前景物的尺寸、中心坐标、重心坐标。示例性地，尺寸为前景物所包含的所有前景点云的包围盒（bounding box）的边长或面积。作为另一例，尺寸可以为前景物所包含的所有前景点云中各点的坐标的最大值与最小值的差值。示例性地，中心坐标为前景物所包含的所有前景点云的中心点的坐标或者包围盒的中心点的坐标。示例性地，重心坐标为前景物所包含的所有前景点云的重心点的坐标或者包围盒的重心点的坐标。Optionally, the geometric attributes may include at least one of the following: the size, center coordinates, and center-of-gravity coordinates of the foreground object. Exemplarily, the size is the side length or area of the bounding box of all foreground points contained in the foreground object. As another example, the size may be the difference between the maximum and minimum coordinates of the points among all foreground points contained in the foreground object. Exemplarily, the center coordinates are the coordinates of the center point of all foreground points contained in the foreground object, or the coordinates of the center point of the bounding box. Exemplarily, the center-of-gravity coordinates are the coordinates of the center of gravity of all foreground points contained in the foreground object, or the coordinates of the center of gravity of the bounding box.
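The geometric attributes above can be computed directly from a cluster's coordinates; a minimal 2-D sketch (the patent works with projected point clouds, and the 2-D tuples are a simplification for illustration):

```python
def geometry(cluster):
    """Compute the bounding-box size, bounding-box center, and centroid
    (center of gravity) of a cluster of 2-D foreground points."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    size = (max(xs) - min(xs), max(ys) - min(ys))                 # bbox side lengths
    center = ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)   # bbox center
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))             # center of gravity
    return size, center, centroid
```

Either `center` or `centroid` can serve as the position information of the foreground object, as the following paragraphs note.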
进一步地，还可以包括：根据前景物所包含的前景点云，识别前景物的类型。例如，可以根据前景物所包含的所有前景点云的外部轮廓的形状，确定前景物的类型。举例来说，类型可以包括汽车、自行车、行人等。Further, the method may also include: identifying the type of the foreground object according to the foreground points contained in the foreground object. For example, the type of the foreground object can be determined according to the shape of the outer contour of all foreground points contained in the foreground object. For example, the types can include cars, bicycles, pedestrians, and so on.
由此可见，针对激光雷达探测到的一帧的探测数据，可以：确定其中的哪些点是前景点，构建前景点云，根据前景点云确定前景物，进一步确定前景物的类型、几何属性。其中，可以将几何属性中的中心坐标或者重心坐标作为前景物的位置信息。对每一帧都执行类似的过程，可以得到每一帧中的前景物。It can be seen that, for one frame of detection data detected by the lidar, it is possible to: determine which points are foreground points, construct the foreground point cloud, determine foreground objects based on the foreground point cloud, and further determine the type and geometric attributes of the foreground objects. Among them, the center coordinates or center-of-gravity coordinates in the geometric attributes can be used as the position information of a foreground object. A similar process is performed for each frame, and the foreground objects in each frame can be obtained.
Exemplarily, S130 may include: predicting the predicted position information of the foreground object in the current frame according to its position information in historical frames; determining the actual position information of the foreground object in the current frame according to that predicted position information; and constructing the trajectory of the foreground object's movement according to its position information in the historical frames together with its actual position information in the current frame.
Understandably, there may be multiple foreground objects in one frame, so it is necessary to determine which detections in different frames correspond to the same foreground object; only then can the movement trajectory of a particular foreground object be determined.
Optionally, the predicted position information of the foreground object in the current frame can be obtained by fitting the historical trajectory constructed from its position information in the historical frames.
Assume the current frame is frame t+1. For example, for a foreground object A, its predicted position information in frame t+1 can be predicted from its position information in every frame from frame 0 (or frame t-n) to frame t. A prediction model can be used for the prediction. As one example, the prediction model may be a machine-learning model. As another example, it may be a function model obtained by curve-fitting the historical position information.
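A minimal sketch of the curve-fitting variant of the prediction model, assuming a per-axis polynomial fit with NumPy (the default degree and all names here are illustrative, not from the disclosure):

```python
import numpy as np

def predict_position(times, positions, t_next, degree=2):
    """Predict a foreground object's position at `t_next` by fitting one
    polynomial per coordinate axis to its historical positions."""
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)  # shape (n_frames, n_dims)
    deg = min(degree, len(times) - 1)               # avoid overfitting short histories
    return np.array([
        np.polyval(np.polyfit(times, positions[:, d], deg), t_next)
        for d in range(positions.shape[1])
    ])
```

For instance, fitting frames 0 through t of object A's positions and evaluating the fitted polynomial at t+1 yields the predicted position for the current frame.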
Optionally: determine the position information of each moving object in the current frame; then, according to the foreground object's predicted position information and the position information of the moving objects in the current frame, determine which of the moving objects is the foreground object and obtain the foreground object's actual position information in the current frame.
That is to say, the position information of each moving object (i.e., foreground object) in frame t+1 can be determined from the actually detected detection data of frame t+1, and position matching determines which foreground object in frame t+1 is foreground object A.
Specifically, the distance difference between each moving object's position information and the predicted position information can be computed; the moving object corresponding to the smallest absolute distance difference among those smaller than a third preset threshold is determined to be the foreground object.
If, among the positions actually detected in frame t+1, several moving objects lie within the third preset threshold of foreground object A's predicted position, then the moving object at the smallest distance is determined to be foreground object A.
In addition, as described above, the position information is either the center coordinates or the centroid coordinates. The distance difference can therefore be computed using the center coordinates as the position information, or using the centroid coordinates; in general, the two computations give different results. In that case, the foreground-object matching is performed based on the smallest of the distance differences.
For example, suppose m moving objects are detected in frame t+1. Using the center coordinates as position information yields m distance differences between the moving objects' positions and foreground object A's predicted position; using the centroid coordinates yields another m distance differences, giving 2m distance differences in total. If the smallest of these 2m distance differences was obtained with the center coordinates, then the center coordinates are used as the position information to determine which of the m moving objects is foreground object A. Conversely, if the smallest was obtained with the centroid coordinates, then the centroid coordinates are used as the position information to make that determination.
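The 2m-distance matching just described might be sketched as follows (illustrative Python; the names and the return convention are assumptions, not from the disclosure):

```python
import numpy as np

def match_foreground(pred_center, pred_centroid, centers, centroids, threshold):
    """Match predicted object A against m detected moving objects using both
    center and centroid coordinates (2*m candidate distances).

    Returns the index of the matched moving object, or None if every
    distance is >= the third preset threshold (object not found).
    """
    centers = np.asarray(centers, dtype=float)
    centroids = np.asarray(centroids, dtype=float)
    d_center = np.linalg.norm(centers - np.asarray(pred_center, float), axis=1)
    d_centroid = np.linalg.norm(centroids - np.asarray(pred_centroid, float), axis=1)
    d_all = np.concatenate([d_center, d_centroid])   # the 2*m distances
    best = int(np.argmin(d_all))
    if d_all[best] >= threshold:                     # all distances too large
        return None
    return best % len(centers)                       # map back to object index
```

Returning `None` corresponds to the case below in which the foreground object is not found in the current frame and its trajectory is interrupted.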
In addition, if the distance differences between every moving object's position information in the current frame and the predicted position information are all greater than or equal to the third preset threshold, it is determined that the foreground object is not found in the current frame. In other words, the foreground object is not detected in the current frame, and its trajectory is interrupted.
Returning to the example above: if the absolute values of the m distance differences between the m moving objects' positions and foreground object A's predicted position are all greater than or equal to the third preset threshold, then none of the m moving objects is foreground object A.
For this case, in which foreground object A is not found in frame t+1: if A's position in frame t did not lie at the edge of the radar's field of view, the subsequently detected detection data can be searched further for a match with A. Suppose the current frame is now frame t+p. For foreground object A, its predicted position information in frame t+p can be predicted from its position information in every frame from frame 0 (or frame t-n) to frame t. The position information of each moving object (i.e., foreground object) in frame t+p can be determined from the actually detected detection data of frame t+p, and position matching determines which foreground object in frame t+p is foreground object A.
For example, suppose m1 moving objects are detected in frame t+p, of which m1-1 have been matched to foreground objects in frame t+p-1, while the m1-th cannot be matched to any foreground object in frame t+p-1. In that case, based on foreground object A's predicted position information in frame t+p and the detected position information of the m1-th moving object in frame t+p, it can be judged whether the m1-th of the m1 moving objects is foreground object A.
Specifically, this judgment process is performed only if the detected position of the m1-th moving object in frame t+p does not lie at the edge of the field of view detected by the radar.
It can thus be seen that, in the embodiments of the present invention, the position information of each foreground object can be determined based on the detection data detected by the radar, and the foreground object's position information in each frame can be determined by matching across frames, so that the trajectory of the foreground object's movement can be constructed from the per-frame position information. The trajectory characterizes how the position information changes over time.
In the embodiments of the present invention, the information of the trajectory of the foreground object's movement may be stored. Exemplarily, the trajectory of the foreground object may be stored in the form of a list that records the foreground object's position information at each time instant. As an example, the list may take the form of Table 1 shown below.
Table 1
[Table 1 is reproduced in the original publication as image PCTCN2019110674-appb-000001; it records the foreground object's position information at each time instant.]
It should be noted that the embodiments of the present invention do not limit the form or content of the list. For example, the list may take the form of a stack, or it may record both the center position and the centroid position at the same time, and so on. The present invention is not limited in this regard.
Exemplarily, in S130, the point clouds in different frames may be matched to determine which ones belong to the same foreground object. Further, the foreground point clouds of a foreground object across multiple frames may be accumulated. For example, if the foreground object contains n1 points in one frame and n2 points in another frame, the point clouds of the two frames can be accumulated, specifically according to the object's shape, pose, and so on; assuming the accumulated number of points is n3, then n3 ≥ n1, n3 ≥ n2, and n3 ≤ n1 + n2. This yields a denser point cloud for the foreground object, which characterizes it better; in turn, the denser cloud can also be used to verify the object's type, improving the accuracy of monitoring.
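Assuming the two frames' point clouds have already been aligned according to the object's shape and pose, the accumulation step could be sketched with a simple voxel-grid deduplication; for clouds whose own points occupy distinct voxels this satisfies max(n1, n2) ≤ n3 ≤ n1 + n2. The voxel size and the deduplication strategy are assumptions, not from the disclosure:

```python
import numpy as np

def accumulate_point_clouds(cloud_a, cloud_b, voxel=0.05):
    """Merge one object's foreground point clouds from two frames, dropping
    near-duplicate points that fall into the same `voxel`-sized cell."""
    merged = np.vstack([np.asarray(cloud_a, float), np.asarray(cloud_b, float)])
    keys = np.floor(merged / voxel).astype(np.int64)       # voxel index per point
    _, keep = np.unique(keys, axis=0, return_index=True)   # first point per voxel
    return merged[np.sort(keep)]
```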
Further, after S130, the method may also include: determining movement parameters of the foreground object according to the trajectory of its movement, where the movement parameters include at least one of the following: displacement, velocity, and acceleration.
Specifically, the movement parameters of the foreground object over a certain time period can be determined from the trajectory. As one implementation, the list described above may also include these movement parameters.
Assuming the foreground object is a vehicle, it can be judged from the vehicle's movement parameters whether the vehicle is speeding. Specifically, the vehicle's speed can be determined and compared against the speed limit of the current scene to judge whether the vehicle is in a speeding state. For example, the lidar can be installed at the traffic lights of a road intersection to detect whether vehicles passing through the intersection are speeding, assisting in violation monitoring.
As another example, even if the vehicle's current speed does not exceed the limit, its current speed and acceleration can be used to predict whether it is likely to exceed the limit. If so, the traffic controller or the driver can be warned or reminded in advance, ensuring traffic safety and preventing accidents caused by speeding.
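The speed check and the look-ahead warning described above can be sketched from a trajectory stored as (timestamp, position) pairs; the finite-difference estimates, the constant-acceleration extrapolation, and the 2-second horizon are assumptions:

```python
def movement_parameters(trajectory):
    """Estimate current speed and acceleration from the last three
    (timestamp, (x, y)) samples of a trajectory by finite differences."""
    (t0, p0), (t1, p1), (t2, p2) = trajectory[-3:]
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    v1 = dist(p1, p0) / (t1 - t0)
    v2 = dist(p2, p1) / (t2 - t1)
    accel = (v2 - v1) / (t2 - t1)
    return v2, accel

def is_speeding(trajectory, speed_limit, horizon=2.0):
    """Flag a vehicle speeding now, or predicted to exceed the limit within
    `horizon` seconds under its current acceleration."""
    v, a = movement_parameters(trajectory)
    return v > speed_limit or v + a * horizon > speed_limit
```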
Further, assuming the foreground object is a vehicle, the method may also include, after S130: compiling statistics on the traffic flow in a specific area. Specifically, the traffic flow in the detected scene can be counted, and conditions such as congestion can be determined based on the statistical results.
Further, assuming the foreground object is a vehicle and the scene is a parking lot, the method may also include, after S130: determining the parking-space occupancy of the parking lot according to the detected movement trajectories of the foreground objects. Specifically, the trajectories reveal how many vehicles have entered the parking lot, which parking spaces those vehicles occupy, and which spaces remain free. Subsequently, when another vehicle arrives to park, it can be guided to a free space. In this way, intelligent parking and automated management of the parking lot can be realized in conjunction with autonomous (or unmanned) driving.
It can thus be seen that the embodiments of the present invention use the point cloud data detected by the lidar to construct background point cloud data, determine the moving foreground objects on that basis, and determine the trajectories of their movement. The embodiments of the present invention achieve motion detection of foreground objects with lidar; moreover, the method has low algorithmic complexity and is not difficult to implement, and the motion-detection results are accurate and easy to apply. The method is applicable to a variety of scenes, is unaffected by illumination and the like, and adapts well to different scenes.
According to another embodiment, FIG. 5 is a schematic block diagram of an apparatus for monitoring moving objects according to an embodiment of the present invention. The apparatus 20 shown in FIG. 5 includes an acquisition module 210, a determination module 220, and a construction module 230.
The acquisition module 210 is configured to acquire the detection data obtained by the lidar through detection.

The determination module 220 is configured to determine the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene.

The construction module 230 is configured to construct the trajectory of a foreground object's movement according to the foreground point clouds of multiple frames.
Exemplarily, as shown in FIG. 6, the apparatus 20 may further include a background data acquisition module 201 and a background point cloud construction module 202. The background data acquisition module 201 may be configured to acquire the background detection data detected by the lidar over a period of time. The background point cloud construction module 202 may be configured to construct the background point cloud data according to the background detection data.
Exemplarily, the background point cloud construction module 202 may include a projection submodule and a feature determination submodule. The projection submodule is configured to project the scan points of the background detection data onto a projection plane and to grid the projection plane; the feature determination submodule is configured to determine, for each grid on the projection plane, the background features of the scan points projected onto that grid.
Optionally, the projection plane may be a plane perpendicular to the axis of the radar; of course, it may also be another plane, for example one at an angle to the radar's axis. The projection plane can then be gridded (which may also be called discretized), and each grid of the gridded projection plane receives projected scan points.
Optionally, the feature determination submodule may be specifically configured to: determine the background features of a scan point according to the depth information and/or reflectivity of the corresponding scan points in each frame of background detection data.
The depth information includes a distribution density function of the depth values, which represents the probability density of the depth values; specifically, it indicates the probability density with which different depth values are judged to be background.
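One way to realize the projection and feature-determination submodules, sketched under the assumptions that the projection plane is perpendicular to the radar axis (so a point's (x, y) indexes the grid and z is its depth) and that the per-grid depth distribution is summarized by a mean and standard deviation rather than a full density function; all names are illustrative:

```python
import numpy as np
from collections import defaultdict

def build_background_model(frames, cell=0.5):
    """Project background scan points onto a gridded plane and collect
    per-grid depth statistics as the background features.

    `frames` is an iterable of frames, each an iterable of (x, y, z) points.
    Returns {grid_key: (mean_depth, std_depth)}.
    """
    depths = defaultdict(list)
    for frame in frames:
        for x, y, z in frame:
            key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
            depths[key].append(z)          # z is the depth along the radar axis
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in depths.items()}
```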
Exemplarily, the background point cloud construction module 202 may also be configured to: reduce, by a filtering method, the influence of noise on the background features of the scan points in the background point cloud data.
Exemplarily, the apparatus 20 may further include a background point cloud update module configured to: update the background point cloud data according to the detection data, in real time or periodically. Specifically, after the background point cloud data is constructed, it is updated according to subsequently detected detection data. For example, if the depth of a point in some grid of the subsequent detection data is greater than the depth of the corresponding grid point in the background point cloud data, the background depth at that point has changed, and the point's background features can be updated according to the detection data.
Exemplarily, as shown in FIG. 6, the determination module 220 may include a projection unit 2201 and a judging unit 2202.
The projection unit 2201 may be configured to project a scan point of the detection data onto the projection plane to obtain a projection point; the judging unit 2202 may be configured to judge whether the scan point belongs to the foreground point cloud by comparing the scan point's projection point with the background projection point at the corresponding position of the projection plane in the background point cloud data.
Optionally, the judging unit 2202 may be specifically configured to: determine that the scan point belongs to the foreground point cloud if the absolute value of the difference between the projection point and the background projection point is greater than or equal to a first preset threshold; and determine that the scan point does not belong to the foreground point cloud if that absolute value is less than the first preset threshold.
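The first-threshold comparison performed by the judging unit reduces to a one-line test (illustrative sketch; the depth-valued comparison is an assumption consistent with the depth-based background features above):

```python
def is_foreground(scan_depth, background_depth, threshold):
    """A scan point whose projected depth differs from the background depth
    at the same grid by at least the first preset threshold is foreground."""
    return abs(scan_depth - background_depth) >= threshold
```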
Exemplarily, the determination module 220 may also be configured to: determine the foreground object according to the foreground point cloud, wherein the distance between every two adjacent foreground points contained in the foreground object is less than a second preset threshold; and determine the geometric attributes of the foreground object according to all the foreground points it contains.
The geometric attributes include at least one of the following: the size, center coordinates, and centroid coordinates of the foreground object.

The size is the edge length or area of the bounding box of all the foreground points. The center coordinates are the coordinates of the center point of all the foreground points or the coordinates of the center point of the bounding box. The centroid coordinates are the coordinates of the center of gravity of all the foreground points or the coordinates of the center of gravity of the bounding box.
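The second-threshold grouping of foreground points into foreground objects amounts to single-linkage clustering: points closer than the threshold end up in the same object. An O(n²) flood-fill sketch for clarity rather than efficiency (names and strategy are assumptions):

```python
import numpy as np

def cluster_foreground_points(points, threshold):
    """Label each foreground point with an object id; points within
    `threshold` of each other are transitively grouped into one object."""
    points = np.asarray(points, dtype=float)
    labels = [-1] * len(points)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                          # already assigned to an object
        stack = [seed]
        labels[seed] = current
        while stack:                          # flood-fill the connected group
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero(d < threshold):
                if labels[j] == -1:
                    labels[j] = current
                    stack.append(int(j))
        current += 1
    return labels
```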
Exemplarily, as shown in FIG. 6, the construction module 230 may include a prediction unit 2301, a determination unit 2302, and a construction unit 2303.
The prediction unit 2301 may be configured to predict the predicted position information of the foreground object in the current frame according to its position information in historical frames. The determination unit 2302 may be configured to determine the actual position information of the foreground object in the current frame according to that predicted position information. The construction unit 2303 may be configured to construct the trajectory of the foreground object's movement according to its position information in the historical frames and its actual position information in the current frame.
Optionally, the prediction unit 2301 may be specifically configured to: predict the predicted position information of the foreground object in the current frame by fitting the historical trajectory constructed from its position information in the historical frames.
Optionally, the determination unit 2302 may be specifically configured to: determine the position information of each moving object in the current frame; and, according to the foreground object's predicted position information and the moving objects' position information in the current frame, determine which of the moving objects is the foreground object and obtain the foreground object's actual position information in the current frame.
Optionally, the determination unit 2302 may be specifically configured to: compute the distance difference between each moving object's position information and the predicted position information; and determine as the foreground object the moving object corresponding to the smallest absolute distance difference among those smaller than a third preset threshold.

Optionally, the determination unit 2302 may also be specifically configured to: determine that the foreground object is not found in the current frame if the absolute values of the distance differences are all greater than or equal to the third preset threshold.
Exemplarily, as shown in FIG. 6, the apparatus 20 may further include a storage module 240 configured to: store the trajectory of the foreground object in the form of a list, wherein the list records the foreground object's position information at each time instant.

The position information includes: the coordinates of the center point or the coordinates of the centroid.
Exemplarily, the apparatus 20 may further include a parameter determination module configured to: determine the movement parameters of the foreground object according to the trajectory of its movement, where the movement parameters include at least one of the following: displacement, velocity, and acceleration.
Exemplarily, as shown in FIG. 6, the apparatus 20 further includes a post-processing module 250.
As one example, the post-processing module 250 may be configured to: accumulate the foreground point clouds of a foreground object across multiple frames.

As one example, the post-processing module 250 may be configured to: identify the type of the foreground object according to the foreground point cloud it contains, the types including cars, bicycles, and pedestrians.
As one example, when the foreground object is a vehicle, the post-processing module 250 may be configured to: judge whether the vehicle is in a speeding state.

As one example, when the foreground object is a vehicle, the post-processing module 250 may be configured to: compile statistics on the traffic flow in a specific area.
As one example, when the scene is a parking lot and the foreground object is a vehicle, the post-processing module 250 may be configured to: determine the parking-space occupancy of the parking lot according to the detected movement trajectories of the foreground objects. Optionally, the post-processing module 250 may further be configured to: guide a vehicle to be parked to a free parking space.
It can thus be seen that the embodiments of the present invention use the point cloud data detected by the lidar to construct background point cloud data, determine the moving foreground objects on that basis, and determine the trajectories of their movement. The embodiments of the present invention achieve motion detection of foreground objects with lidar; moreover, the method has low algorithmic complexity and is not difficult to implement, and the motion-detection results are accurate and easy to apply. The method is applicable to a variety of scenes, is unaffected by illumination and the like, and adapts well to different scenes.
The apparatus 20 shown in FIG. 5 or FIG. 6 can implement the method for monitoring moving objects shown in FIG. 1 described above; to avoid repetition, details are not repeated here.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In addition, an embodiment of the present invention further provides another apparatus for monitoring moving objects, including a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor, when executing the program, implements the steps of the method for monitoring moving objects shown in FIG. 1 described above.
As shown in FIG. 7, the apparatus 30 may include a memory 310 and a processor 320. The memory 310 stores computer program code for implementing the corresponding steps of the method for monitoring moving objects according to an embodiment of the present invention. The processor 320 is configured to run the computer program code stored in the memory 310 to perform the corresponding steps of the method for monitoring moving objects according to an embodiment of the present invention, and to implement the modules of the apparatus 20 described with reference to FIG. 5 or FIG. 6 according to an embodiment of the present invention.
Exemplarily, when the computer program code is run by the processor 320, the following steps are executed: acquiring the detection data obtained by the lidar through detection; determining the foreground point cloud of each frame according to the background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are for the same scene; and constructing the trajectory of a foreground object's movement according to the foreground point clouds of multiple frames.
It can thus be seen that the embodiment of the present invention uses the point cloud data detected by the lidar to construct background point cloud data, determines the moving foreground object on this basis, and determines the trajectory along which the foreground object moves. The embodiment of the present invention can implement motion detection of foreground objects with a lidar. Moreover, the method of the embodiment of the present invention has low algorithmic complexity and is easy to implement, and the motion detection results obtained are highly accurate and easy to apply. It can also be used in a variety of scenes: for example, it is unaffected by illumination and the like, so it adapts well to different scenes.
In addition, an embodiment of the present invention further provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the method for monitoring a moving object shown in FIG. 1 can be implemented. For example, the computer storage medium is a computer-readable storage medium.
The computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for monitoring a moving object, and another computer-readable storage medium contains computer-readable program code for trajectory-based post-processing.
In addition, an embodiment of the present invention further provides a computer program or computer program product that can be executed by a processor; when the code is executed by the processor, it can implement: acquiring detection data obtained by a lidar through detection; determining a foreground point cloud of each frame according to background point cloud data and the detection data of each frame, where the background point cloud data and the detection data are directed to the same scene; and constructing a trajectory of the movement of a foreground object according to the foreground point clouds of multiple frames.
It can thus be seen that the embodiment of the present invention uses the point cloud data detected by the lidar to construct background point cloud data, determines the moving foreground object on this basis, and determines the trajectory along which the foreground object moves. The embodiment of the present invention can implement motion detection of foreground objects with a lidar. Moreover, the method of the embodiment of the present invention has low algorithmic complexity and is easy to implement, and the motion detection results obtained are highly accurate, easy to apply, and usable in a variety of scenes: for example, the method is unaffected by illumination and the like, so it adapts well to different scenes. Further, the embodiment of the present invention can also determine parameters such as the velocity and acceleration of the moving object based on its trajectory, and can collect statistics such as traffic-flow information of moving objects. Such simple, intelligent statistics can also become an important part of the construction of smart cities.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of this application.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processor, or each unit may exist alone physically, or two or more units may be integrated into one unit.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (56)

1. A method for monitoring a moving object, comprising:
    acquiring detection data obtained by a lidar through detection;
    determining a foreground point cloud of each frame according to background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are directed to the same scene; and
    constructing a trajectory of the movement of a foreground object according to the foreground point clouds of multiple frames.
2. The method according to claim 1, wherein before the acquiring of the detection data, the method further comprises obtaining the background point cloud data in the following manner:
    acquiring background detection data detected by the lidar over a period of time; and
    constructing the background point cloud data according to the background detection data.
3. The method according to claim 2, wherein the constructing of the background point cloud data according to the background detection data comprises:
    projecting the scan points of the background detection data onto a projection plane and dividing the projection plane into grid cells; and
    for each grid cell on the projection plane, determining a background feature of the scan points projected onto the grid cell.
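The grid-based background construction of this claim can be sketched as follows. This is an illustrative sketch under assumptions of my own: the projection plane is taken as the x-y plane, the cell size is arbitrary, and the mean depth is used as the per-cell background feature; all names are hypothetical:

```python
import numpy as np
from collections import defaultdict

def grid_background_features(scan_points, cell_size=0.5):
    """Project 3-D scan points onto the x-y plane, bin them into grid
    cells, and reduce the depth (z) values observed in each cell to a
    single background feature (here, the mean depth)."""
    cells = defaultdict(list)
    for x, y, z in scan_points:
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        cells[key].append(z)
    return {key: float(np.mean(depths)) for key, depths in cells.items()}
```

In practice the feature could equally be a depth distribution (see claim 5) rather than a single mean.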
4. The method according to claim 3, wherein determining the background feature of the scan points projected onto the grid cell comprises:
    determining the background feature of the scan points according to depth information and/or reflectivity of the corresponding scan points of each frame of background detection data.
5. The method according to claim 4, wherein the depth information comprises a distribution density function of depth values, and the distribution density function represents a probability density of the depth values.
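One simple way to realize such a distribution density function is a histogram estimate over the depth values observed in a cell. The claim does not specify an estimator, so the histogram form, the bin width, and all names below are assumptions for illustration only:

```python
import numpy as np

def depth_density(depth_values, bin_width=0.2):
    """Estimate the probability density of depth values with a histogram;
    the returned function gives the estimated density at a queried depth."""
    depths = np.asarray(depth_values, dtype=float)
    edges = np.arange(depths.min(), depths.max() + bin_width, bin_width)
    if len(edges) < 2:  # all depths identical: a single degenerate bin
        edges = np.array([depths.min(), depths.min() + bin_width])
    hist, edges = np.histogram(depths, bins=edges, density=True)
    def density(d):
        i = np.searchsorted(edges, d, side="right") - 1
        return float(hist[i]) if 0 <= i < len(hist) else 0.0
    return density
```

A depth that falls in a low-density region of this function is then unlikely to belong to the background.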
6. The method according to any one of claims 2-5, wherein the process of constructing the background point cloud data further comprises:
    using a filtering method to reduce the influence of noise on the background features of the scan points in the background point cloud data.
7. The method according to any one of claims 1-6, further comprising:
    updating the background point cloud data according to the detection data in real time or periodically.
8. The method according to any one of claims 1-7, wherein the determining of the foreground point cloud of each frame comprises:
    projecting the scan points of the detection data onto a projection plane to obtain projected points; and
    determining whether a scan point belongs to the foreground point cloud by comparing the projected point of the scan point with the background projected point of the background point cloud data at the corresponding position on the projection plane.
9. The method according to claim 8, wherein the determining of whether the scan point belongs to the foreground point cloud by comparing the projected point of the scan point with the background projected point of the background point cloud data at the corresponding position on the projection plane comprises:
    if the absolute value of the difference between the projected point and the background projected point is greater than or equal to a first preset threshold, determining that the scan point belongs to the foreground point cloud; and
    if the absolute value of the difference between the projected point and the background projected point is less than the first preset threshold, determining that the scan point does not belong to the foreground point cloud.
10. The method according to any one of claims 1-9, further comprising, after determining the foreground point cloud of each frame:
    determining the foreground object according to the foreground point cloud, wherein the distance between every two adjacent foreground points included in the foreground object is less than a second preset threshold; and
    determining geometric attributes of the foreground object according to all the foreground points included in the foreground object.
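The adjacency condition of this claim amounts to grouping foreground points into connected clusters: a point joins an object when it lies within the second threshold of a point already in that object. A minimal BFS-style sketch (the threshold value and all names are hypothetical; the patent does not prescribe a particular clustering algorithm):

```python
import numpy as np

def cluster_foreground(points, second_threshold=1.0):
    """Group foreground points into objects: a point closer than the
    threshold to any point already in a cluster joins that cluster."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = [seed], [seed]
        while queue:
            i = queue.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[i] - points[j]) < second_threshold]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                members.append(j)
        clusters.append(points[members])
    return clusters
```

Each returned cluster is one candidate foreground object, whose geometric attributes can then be computed as in claims 11-12.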
11. The method according to claim 10, wherein the geometric attributes comprise at least one of the following: a size, center coordinates, and center-of-gravity coordinates of the foreground object.
12. The method according to claim 11, wherein
    the size is a side length or area of a bounding box of all the foreground points,
    the center coordinates are the coordinates of the center point of all the foreground points or the coordinates of the center point of the bounding box, and
    the center-of-gravity coordinates are the coordinates of the center of gravity of all the foreground points or the coordinates of the center of gravity of the bounding box.
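The bounding-box attributes of this claim can be computed directly from the cluster's points. A sketch using an axis-aligned bounding box, the box center, and the point mean as the center of gravity (one of the variants the claim allows; all names are hypothetical):

```python
import numpy as np

def geometric_attributes(foreground_points):
    """Size, center, and center of gravity of one foreground object,
    derived from the axis-aligned bounding box of its points."""
    pts = np.asarray(foreground_points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return {
        "size": hi - lo,                        # bounding-box edge lengths
        "box_center": (lo + hi) / 2,            # center of the bounding box
        "center_of_gravity": pts.mean(axis=0),  # mean of all points
    }
```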
13. The method according to any one of claims 1-12, wherein the constructing of the trajectory of the movement of the foreground object according to the foreground point clouds of multiple frames comprises:
    predicting predicted position information of the foreground object in a current frame according to position information of the foreground object in historical frames;
    determining actual position information of the foreground object in the current frame according to the predicted position information of the foreground object in the current frame; and
    constructing the trajectory of the movement of the foreground object according to the position information of the foreground object in the historical frames and the actual position information of the foreground object in the current frame.
14. The method according to claim 13, wherein the predicting of the predicted position information of the foreground object in the current frame according to the position information of the foreground object in the historical frames comprises:
    predicting, by fitting, the predicted position information of the foreground object in the current frame based on a historical trajectory constructed from the position information of the foreground object in the historical frames.
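Prediction by fitting can be sketched with a per-coordinate polynomial fit over the historical trajectory, evaluated at the next frame index. The claim does not fix the fitting model; the polynomial form, the degree, and all names below are illustrative assumptions:

```python
import numpy as np

def predict_position(history, degree=1):
    """Fit a polynomial to each coordinate of the historical trajectory
    (one row per frame) and evaluate it at the next frame index."""
    history = np.asarray(history, dtype=float)   # shape: (frames, dims)
    t = np.arange(len(history))
    t_next = len(history)
    return np.array([np.polyval(np.polyfit(t, history[:, d], degree), t_next)
                     for d in range(history.shape[1])])
```

A degree of 1 models constant velocity; a degree of 2 would additionally capture acceleration.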
15. The method according to claim 13 or 14, wherein the determining of the actual position information of the foreground object in the current frame comprises:
    determining position information of each moving object in the current frame; and
    determining, according to the predicted position information of the foreground object in the current frame and the position information of each moving object in the current frame, which of the moving objects is the foreground object, and obtaining the actual position information of the foreground object in the current frame.
16. The method according to claim 15, wherein the determining of which of the moving objects is the foreground object comprises:
    calculating distance differences between the position information of each moving object and the predicted position information; and
    determining, as the foreground object, the moving object whose distance difference has the smallest absolute value among those absolute values that are less than a third preset threshold.
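The matching rule of claims 16-17 reduces to a thresholded nearest-neighbor search between the predicted position and the candidates detected in the current frame. A minimal sketch (the threshold value and all names are hypothetical):

```python
import numpy as np

def match_foreground(predicted, candidates, third_threshold=2.0):
    """Return the index of the candidate closest to the predicted position,
    or None when every candidate is at least the threshold away
    (i.e. the foreground object is not found in the current frame)."""
    candidates = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(candidates - np.asarray(predicted, dtype=float), axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < third_threshold else None
```

The `None` branch corresponds to claim 17: no candidate is close enough, so the track is not extended in this frame.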
17. The method according to claim 16, further comprising:
    if the absolute values of the distance differences are all greater than or equal to the third preset threshold, determining that the foreground object is not found in the current frame.
18. The method according to any one of claims 13-17, further comprising:
    storing the trajectory of the foreground object in the form of a list, wherein the list records the position information of the foreground object at each time instant.
19. The method according to any one of claims 13-18, wherein the position information comprises: coordinates of a center point or coordinates of a center-of-gravity point.
20. The method according to any one of claims 1-19, further comprising:
    determining movement parameters of the foreground object according to the trajectory of the movement of the foreground object, wherein the movement parameters comprise at least one of the following: displacement, velocity, and acceleration.
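Given a trajectory sampled at a fixed frame interval, the movement parameters of this claim follow from finite differences. A sketch under the assumption of a constant frame interval `dt`; all names are hypothetical:

```python
import numpy as np

def movement_parameters(trajectory, dt=0.1):
    """Displacement, velocity, and acceleration from consecutive trajectory
    points by finite differences (dt is the frame interval in seconds)."""
    traj = np.asarray(trajectory, dtype=float)
    displacement = np.diff(traj, axis=0)            # per-frame displacement
    velocity = displacement / dt                    # first derivative
    acceleration = np.diff(velocity, axis=0) / dt   # second derivative
    return displacement, velocity, acceleration
```

The speeding check of claim 21 would then simply compare the velocity magnitude against a legal limit.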
21. The method according to claim 20, wherein the foreground object is a vehicle, and the method further comprises:
    determining whether the vehicle is in a speeding state.
22. The method according to any one of claims 1-21, further comprising:
    accumulating the foreground point clouds of the foreground object over multiple frames.
23. The method according to any one of claims 1-22, further comprising:
    identifying a type of the foreground object according to the foreground points included in the foreground object.
24. The method according to claim 23, wherein the types comprise: cars, bicycles, and pedestrians.
25. The method according to any one of claims 1-24, wherein the foreground object is a vehicle, and the method further comprises:
    collecting statistics on traffic-flow information in a specific area.
26. The method according to any one of claims 1-24, wherein the scene is a parking lot and the foreground object is a vehicle, and the method further comprises:
    determining a parking-space occupancy status of the parking lot according to the detected trajectories of moving foreground objects in the parking lot.
27. The method according to claim 26, wherein the method further comprises:
    guiding a vehicle to be parked to a free parking space.
28. An apparatus for monitoring a moving object, comprising:
    an acquisition module, configured to acquire detection data obtained by a lidar through detection;
    a determining module, configured to determine a foreground point cloud of each frame according to background point cloud data and the detection data of each frame, wherein the background point cloud data and the detection data are directed to the same scene; and
    a construction module, configured to construct a trajectory of the movement of a foreground object according to the foreground point clouds of multiple frames.
29. The apparatus according to claim 28, further comprising:
    a background data acquisition module, configured to acquire background detection data detected by the lidar over a period of time; and
    a background point cloud construction module, configured to construct the background point cloud data according to the background detection data.
30. The apparatus according to claim 29, wherein the background point cloud construction module comprises:
    a projection submodule, configured to project the scan points of the background detection data onto a projection plane and divide the projection plane into grid cells; and
    a feature determination submodule, configured to determine, for each grid cell on the projection plane, a background feature of the scan points projected onto the grid cell.
31. The apparatus according to claim 30, wherein the feature determination submodule is specifically configured to:
    determine the background feature of the scan points according to depth information and/or reflectivity of the corresponding scan points of each frame of background detection data.
32. The apparatus according to claim 31, wherein the depth information comprises a distribution density function of depth values, and the distribution density function represents a probability density of the depth values.
33. The apparatus according to any one of claims 29-32, wherein the background point cloud construction module is further configured to:
    use a filtering method to reduce the influence of noise on the background features of the scan points in the background point cloud data.
34. The apparatus according to any one of claims 28-33, further comprising a background point cloud update module, configured to:
    update the background point cloud data according to the detection data in real time or periodically.
35. The apparatus according to any one of claims 28-34, wherein the determining module comprises:
    a projection unit, configured to project the scan points of the detection data onto a projection plane to obtain projected points; and
    a judging unit, configured to determine whether a scan point belongs to the foreground point cloud by comparing the projected point of the scan point with the background projected point of the background point cloud data at the corresponding position on the projection plane.
36. The apparatus according to claim 35, wherein the judging unit is specifically configured to:
    if the absolute value of the difference between the projected point and the background projected point is greater than or equal to a first preset threshold, determine that the scan point belongs to the foreground point cloud; and
    if the absolute value of the difference between the projected point and the background projected point is less than the first preset threshold, determine that the scan point does not belong to the foreground point cloud.
37. The apparatus according to any one of claims 28-36, wherein the determining module is further configured to:
    determine the foreground object according to the foreground point cloud, wherein the distance between every two adjacent foreground points included in the foreground object is less than a second preset threshold; and
    determine geometric attributes of the foreground object according to all the foreground points included in the foreground object.
38. The apparatus according to claim 37, wherein the geometric attributes comprise at least one of the following: a size, center coordinates, and center-of-gravity coordinates of the foreground object.
39. The apparatus according to claim 38, wherein
    the size is a side length or area of a bounding box of all the foreground points,
    the center coordinates are the coordinates of the center point of all the foreground points or the coordinates of the center point of the bounding box, and
    the center-of-gravity coordinates are the coordinates of the center of gravity of all the foreground points or the coordinates of the center of gravity of the bounding box.
40. The apparatus according to any one of claims 28-39, wherein the construction module comprises:
    a prediction unit, configured to predict predicted position information of the foreground object in a current frame according to position information of the foreground object in historical frames;
    a determining unit, configured to determine actual position information of the foreground object in the current frame according to the predicted position information of the foreground object in the current frame; and
    a construction unit, configured to construct the trajectory of the movement of the foreground object according to the position information of the foreground object in the historical frames and the actual position information of the foreground object in the current frame.
41. The apparatus according to claim 40, wherein the prediction unit is specifically configured to:
    predict, by fitting, the predicted position information of the foreground object in the current frame based on a historical trajectory constructed from the position information of the foreground object in the historical frames.
42. The apparatus according to claim 40 or 41, wherein the determining unit is specifically configured to:
    determine position information of each moving object in the current frame; and
    determine, according to the predicted position information of the foreground object in the current frame and the position information of each moving object in the current frame, which of the moving objects is the foreground object, and obtain the actual position information of the foreground object in the current frame.
43. The apparatus according to claim 42, wherein the determining unit is specifically configured to:
    calculate distance differences between the position information of each moving object and the predicted position information; and
    determine, as the foreground object, the moving object whose distance difference has the smallest absolute value among those absolute values that are less than a third preset threshold.
44. The apparatus according to claim 43, wherein the determining unit is further specifically configured to:
    if the absolute values of the distance differences are all greater than or equal to the third preset threshold, determine that the foreground object is not found in the current frame.
45. The apparatus according to any one of claims 40-44, further comprising a storage module, configured to:
    store the trajectory of the foreground object in the form of a list, wherein the list records the position information of the foreground object at each time instant.
46. The apparatus according to any one of claims 40-45, wherein the position information comprises: coordinates of a center point or coordinates of a center-of-gravity point.
47. The apparatus according to any one of claims 28-46, further comprising a parameter determination module, configured to:
    determine movement parameters of the foreground object according to the trajectory of the movement of the foreground object, wherein the movement parameters comprise at least one of the following: displacement, velocity, and acceleration.
48. The apparatus according to claim 47, wherein the foreground object is a vehicle, and the apparatus further comprises a post-processing module, configured to:
    determine whether the vehicle is in a speeding state.
49. The apparatus according to any one of claims 28-48, further comprising a post-processing module, configured to:
    accumulate the foreground point clouds of the foreground object over multiple frames.
  50. The device according to any one of claims 28-49, further comprising a post-processing module configured to:
    Identify the type of the foreground object according to the foreground point cloud contained in the foreground object.
  51. The device according to claim 50, wherein the types comprise: cars, bicycles, and pedestrians.
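A toy illustration of the type recognition in claims 50-51: classify the object by the footprint extent of its point cloud. The thresholds below are illustrative assumptions, not values disclosed in this application, and a real implementation could use any classifier over the point cloud:

```python
def classify_foreground(points):
    """Classify a foreground object from its 2D point cloud by its
    bounding-box extent in meters (a deliberately simple heuristic)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    length = max(max(xs) - min(xs), max(ys) - min(ys))
    if length > 3.0:       # cars are typically several meters long
        return "car"
    if length > 1.2:       # bicycles fall in between
        return "bicycle"
    return "pedestrian"    # small footprint
```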
  52. The device according to any one of claims 28-51, wherein the foreground object is a vehicle, and the device further comprises a post-processing module configured to:
    Collect statistics on traffic flow information in a specific area.
  53. The device according to any one of claims 28-51, wherein the scene is a parking lot, the foreground object is a vehicle, and the device further comprises a post-processing module configured to:
    Determine the parking-space occupancy status of the parking lot according to the detected trajectories of moving foreground objects in the parking lot.
  54. The device according to claim 53, wherein the post-processing module is further configured to:
    Guide a vehicle to be parked to a vacant parking space.
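As a simplified sketch of claims 53-54, occupancy can be inferred from where vehicle trajectories start and end. The axis-aligned space boxes, the trajectory layout, and the endpoint-based rule are illustrative assumptions:

```python
def occupancy_from_trajectories(spaces, trajectories):
    """Mark a parking space occupied when a vehicle trajectory ends inside
    it, and free when a trajectory starts inside it and leaves.
    spaces: space id -> (xmin, ymin, xmax, ymax); each trajectory is a
    time-ordered list of (x, y) positions."""
    def inside(p, box):
        xmin, ymin, xmax, ymax = box
        return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

    occupied = set()
    for traj in trajectories:
        for sid, box in spaces.items():
            if inside(traj[-1], box):                    # vehicle stopped here
                occupied.add(sid)
            elif sid in occupied and inside(traj[0], box):
                occupied.discard(sid)                    # vehicle departed
    return occupied
```

The complement of the returned set over all space ids gives the vacant spaces a guidance module (claim 54) could direct an arriving vehicle toward.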
  55. A device for monitoring a moving object, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 27.
  56. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 27.
PCT/CN2019/110674 2019-10-11 2019-10-11 Method and apparatus for monitoring moving object, and computer storage medium WO2021068210A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/110674 WO2021068210A1 (en) 2019-10-11 2019-10-11 Method and apparatus for monitoring moving object, and computer storage medium
CN201980031231.XA CN112956187A (en) 2019-10-11 2019-10-11 Method and device for monitoring moving object and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/110674 WO2021068210A1 (en) 2019-10-11 2019-10-11 Method and apparatus for monitoring moving object, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2021068210A1 true WO2021068210A1 (en) 2021-04-15

Family

ID=75437662

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110674 WO2021068210A1 (en) 2019-10-11 2019-10-11 Method and apparatus for monitoring moving object, and computer storage medium

Country Status (2)

Country Link
CN (1) CN112956187A (en)
WO (1) WO2021068210A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140368493A1 (en) * 2013-06-14 2014-12-18 Microsoft Corporation Object removal using lidar-based classification
CN107845095A (en) * 2017-11-20 2018-03-27 维坤智能科技(上海)有限公司 Mobile object real time detection algorithm based on three-dimensional laser point cloud
CN108022205A (en) * 2016-11-04 2018-05-11 杭州海康威视数字技术股份有限公司 Method for tracking target, device and recording and broadcasting system
US20190224847A1 (en) * 2018-01-23 2019-07-25 Toyota Jidosha Kabushiki Kaisha Motion trajectory generation apparatus
CN110163904A (en) * 2018-09-11 2019-08-23 腾讯大地通途(北京)科技有限公司 Object marking method, control method for movement, device, equipment and storage medium
CN110246159A (en) * 2019-06-14 2019-09-17 湖南大学 The 3D target motion analysis method of view-based access control model and radar information fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100354A1 (en) * 2017-11-25 2019-05-31 华为技术有限公司 State sensing method and related apparatus

Also Published As

Publication number Publication date
CN112956187A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
US11941887B2 (en) Scenario recreation through object detection and 3D visualization in a multi-sensor environment
US11487988B2 (en) Augmenting real sensor recordings with simulated sensor data
CN109087510B (en) Traffic monitoring method and device
US20190065637A1 (en) Augmenting Real Sensor Recordings With Simulated Sensor Data
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
CN110929655B (en) Lane line identification method in driving process, terminal device and storage medium
CN113064135B (en) Method and device for detecting obstacle in 3D radar point cloud continuous frame data
US11042159B2 (en) Systems and methods for prioritizing data processing
CN109377694B (en) Monitoring method and system for community vehicles
CN112753038B (en) Method and device for identifying lane change trend of vehicle
CN107705577B (en) Real-time detection method and system for calibrating illegal lane change of vehicle based on lane line
CN112766069A (en) Vehicle illegal parking detection method and device based on deep learning and electronic equipment
CN113971432A (en) Generating fused sensor data by metadata association
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN112861773A (en) Multi-level-based berthing state detection method and system
CN112541475A (en) Sensing data detection method and device
WO2021088504A1 (en) Road junction detection method and apparatus, neural network training method and apparatus, intelligent driving method and apparatus, and device
CN112487884A (en) Traffic violation behavior detection method and device and computer readable storage medium
JP2019154027A (en) Method and device for setting parameter for video monitoring system, and video monitoring system
CN111160132B (en) Method and device for determining lane where obstacle is located, electronic equipment and storage medium
CN114488072A (en) Obstacle detection method, obstacle detection device and storage medium
CN112735163B (en) Method for determining static state of target object, road side equipment and cloud control platform
Matsuda et al. A system for real-time on-street parking detection and visualization on an edge device
WO2021068210A1 (en) Method and apparatus for monitoring moving object, and computer storage medium
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948289

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948289

Country of ref document: EP

Kind code of ref document: A1