CN113970725A - Calibration method, device and equipment for radar detection area and storage medium - Google Patents


Info

Publication number: CN113970725A
Application number: CN202010725723.3A
Authority: CN (China)
Legal status: Pending
Prior art keywords: detection area, point cloud, point, area, radar
Other languages: Chinese (zh)
Inventors: Li Juanjuan (李娟娟), Liu Jianchao (刘建超)
Current and original assignee: Beijing Wanji Technology Co Ltd
Application filed by Beijing Wanji Technology Co Ltd

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 — Details of systems according to group G01S13/00
    • G01S7/40 — Means for monitoring or calibrating
    • G01S7/48 — Details of systems according to group G01S17/00
    • G01S7/497 — Means for monitoring or calibrating

Abstract

Embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for calibrating a radar detection area. The method comprises: acquiring multiple frames of first point clouds collected by a radar; acquiring a motion area grid map corresponding to the multiple frames of first point clouds; generating a corresponding detection area grid map according to the motion area grid map; and calibrating the radar detection area according to the detection area grid map. Representing the detection area as a grid map avoids representing it by a road boundary line, which reduces the difficulty of calibrating the detection area. Moreover, calibration of the radar detection area is completed automatically, without manual calibration. In addition, point cloud data inside the detection area can be identified rapidly, which reduces the computational complexity and, in turn, the demand on hardware resources.

Description

Calibration method, device and equipment for radar detection area and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of radar, and in particular to a method, an apparatus, a device, and a storage medium for calibrating a radar detection area.
Background
With the rapid development of autonomous driving technology, and because single-vehicle sensing schemes suffer from limited sensing range and occluded blind areas, the field is gradually moving from single-vehicle intelligence to roadside intelligence. The core problem of roadside intelligence is the roadside sensing system, and the most mature and complete roadside sensing scheme at present is radar-based roadside sensing. In roadside radar sensing algorithms, only the point cloud data inside a detection area is processed in order to reduce algorithm complexity. The detection area of the roadside radar therefore needs to be calibrated.
In the prior art, the calibration method for a roadside radar detection area is mainly based on a road boundary line. Specifically, the road boundary line of the detection area is determined manually as a straight line or a curve, and the detection area of the roadside radar is calibrated accordingly. When the calibrated detection area is then used to identify whether each frame of point cloud data acquired by the roadside radar lies inside the detection area, the identification is performed by comparing each frame of point cloud data with the road boundary line.
Because the road boundary line of some special roads is difficult to express as a straight line or a curve, the prior-art calibration method makes the detection area harder to calibrate. Moreover, since the data volume of each frame of point cloud data is very large, identifying the point cloud data inside the detection area by comparing each frame against the road boundary line is computationally complex and therefore demanding on hardware resources.
Disclosure of Invention
Embodiments of the present invention provide a method, an apparatus, a device, and a storage medium for calibrating a radar detection area, which solve the technical problems that the prior-art calibration method for a roadside radar detection area makes the detection area difficult to calibrate, and that identifying whether point cloud data lies inside the detection area is computationally complex and therefore demanding on hardware resources.
In a first aspect, an embodiment of the present invention provides a method for calibrating a radar detection area, including:
acquiring multiple frames of first point clouds collected by the radar; acquiring a motion area grid map corresponding to the multiple frames of first point clouds; generating a corresponding detection area grid map according to the motion area grid map; and calibrating the radar detection area according to the detection area grid map.
Further, in the method as described above, acquiring the motion area grid map corresponding to the multiple frames of first point clouds comprises:
determining a target motion point in the first point cloud according to a background frame and the first point cloud, wherein the background frame is a multi-frame point cloud acquired before the first point cloud; and generating the motion area grid map according to the target motion points in the first point cloud.
Further, in the method as described above, determining a target motion point in the first point cloud according to a background frame and the first point cloud comprises:
comparing the distance values of the point cloud points of each frame of point cloud corresponding to a plurality of target positions in the background frame to obtain a maximum distance value; taking the maximum distance value corresponding to each target position as an element value to generate a background frame matrix; and determining a target motion point in the first point cloud using the background frame matrix.
Further, in the method as described above, determining a target motion point in the first point cloud according to a background frame and the multiple frames of first point clouds comprises:
grouping multi-frame point clouds of the background frames according to an acquisition sequence, wherein each group of point clouds comprises continuous N frames of point clouds; comparing the distance values of the point cloud points of each frame of point cloud corresponding to the plurality of target positions according to groups to obtain a maximum distance median value; taking the maximum distance median value corresponding to each target position as an element value of a background frame matrix to generate a background frame matrix; determining a target motion point in the first point cloud using the background frame matrix.
Further, in the method as described above, determining a target motion point in the first point cloud using the background frame matrix comprises:
comparing the element values in the background frame matrix with the distance values of the corresponding first target points in the multiple frames of first point clouds; calculating a difference value between an element value in the background frame matrix and a distance value of a corresponding point cloud point in each first point cloud; and if the difference is larger than a preset distance threshold, determining the corresponding point cloud point as a target motion point.
Further, the method as described above, further comprising:
and if the difference value is larger than a preset updating threshold value, taking the distance value of the corresponding point cloud point as the element value of the background frame matrix.
Further, in the method as described above, generating the motion area grid map according to the target motion points in the first point cloud comprises:
initializing the motion area grid map under a motion area grid map coordinate system to obtain an initialized motion area grid map; converting the distance value of the target motion point into corresponding target motion point position coordinates in the motion area grid map coordinate system; and generating the motion area grid map according to the target motion point coordinates and the initialized motion area grid map.
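The conversion step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the grid size, cell resolution, radar origin cell, and the availability of an azimuth angle for each target motion point are all assumptions.

```python
import math
import numpy as np

GRID_SHAPE = (200, 200)  # motion area grid map size (assumed)
CELL = 0.5               # metres per grid cell (assumed)
ORIGIN = (100, 100)      # grid cell occupied by the radar itself (assumed)

def mark_motion_point(grid, distance, azimuth_deg):
    """Convert a target motion point's distance value (polar, radar-centred)
    into motion area grid map coordinates and mark the cell as occupied."""
    x = distance * math.cos(math.radians(azimuth_deg))
    y = distance * math.sin(math.radians(azimuth_deg))
    row = ORIGIN[0] + int(y / CELL)
    col = ORIGIN[1] + int(x / CELL)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        grid[row, col] = 1  # occupied cell of the motion area grid map
    return grid

grid = np.zeros(GRID_SHAPE, dtype=np.uint8)
mark_motion_point(grid, 10.0, 0.0)
print(grid[100, 120])  # 1: a point 10 m away at 0° lands 20 cells from the radar
```

Accumulating such cells over many frames of target motion points yields the motion area grid map used in the following steps.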
Further, in the method as described above, acquiring the motion area grid map corresponding to the multiple frames of first point clouds comprises:
processing the multiple frames of first point clouds with a deep learning model to obtain the motion area grid map corresponding to the multiple frames of first point clouds.
Further, in the method as described above, generating the corresponding detection area grid map according to the motion area grid map comprises:
determining a detection area and a non-detection area in the motion area grid map with a preset region growing algorithm model; and generating the detection area grid map according to the detection area and the non-detection area.
Further, the method as described above, further comprising:
judging whether a detection area updating condition is met; and if the detection area updating condition is met, updating the detection area grid map.
In a second aspect, an embodiment of the present invention provides a calibration apparatus for a radar detection area, including:
the point cloud acquisition module is used for acquiring multiple frames of first point clouds collected by the radar; the grid map acquisition module is used for acquiring a motion area grid map corresponding to the multiple frames of first point clouds; the grid map generating module is used for generating a corresponding detection area grid map according to the motion area grid map; and the area calibration module is used for calibrating the radar detection area according to the detection area grid map.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of the first aspects.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method according to any one of the first aspect.
Embodiments of the present invention provide a method for calibrating a radar detection area that comprises: acquiring multiple frames of first point clouds collected by a radar; acquiring a motion area grid map corresponding to the multiple frames of first point clouds; generating a corresponding detection area grid map according to the motion area grid map; and calibrating the radar detection area according to the detection area grid map. Representing the detection area as a grid map avoids representing it by a road boundary line, which reduces the difficulty of calibrating the detection area. Moreover, calibration of the radar detection area is completed automatically, without manual calibration. Furthermore, because the detection area is represented as a grid map, identifying the point cloud data inside the detection area only requires determining the position coordinates of each frame of point cloud data in the detection area grid map coordinate system; whether the position corresponding to each target point lies inside the detection area can then be indexed through the detection area grid map. The point cloud data in the detection area is thus identified rapidly, which reduces the computational complexity and, in turn, the demand on hardware resources.
It should be understood that the content described in this summary is not intended to identify key or critical features of the embodiments of the invention, nor to limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for calibrating a radar detection area according to an embodiment of the present invention;
fig. 2 is a flowchart of a calibration method for a radar detection area according to a second embodiment of the present invention;
fig. 3 is a flowchart of step 202 in the method for calibrating a radar detection area according to the second embodiment of the present invention;
fig. 4 is a flowchart of step 2021 of the method for calibrating a radar detection area according to the second embodiment of the present invention;
fig. 5 is a schematic diagram of a vehicle motion trajectory in the calibration method for a radar detection area according to the second embodiment of the present invention;
fig. 6 is another flowchart of step 2021 of the method for calibrating a radar detection area according to the second embodiment of the present invention;
fig. 7 is a schematic view of a target motion point cloud corresponding to a vehicle in the radar detection area calibration method according to the second embodiment of the present invention;
fig. 8 is a schematic diagram of a detection area and a non-detection area in the calibration method for a radar detection area according to the second embodiment of the present invention;
fig. 9 is a flowchart of a target detection method according to a third embodiment of the present invention;
fig. 10 is a flowchart of a calibration method for a detection area of a multi-sensor system according to a third embodiment of the present invention;
fig. 11 is a schematic structural diagram of a calibration apparatus of a radar detection area according to a fourth embodiment of the present invention;
fig. 12 is a schematic structural diagram of a calibration apparatus of a radar detection area according to a fifth embodiment of the present invention;
fig. 13 is a schematic structural diagram of an object detection apparatus according to a sixth embodiment of the present invention;
fig. 14 is a schematic structural diagram of an electronic device according to a seventh embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present invention. It should be understood that the drawings and the embodiments of the present invention are illustrative only and are not intended to limit the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Example one
Fig. 1 is a flowchart of a method for calibrating a radar detection area according to an embodiment of the present invention. As shown in fig. 1, the executing entity of this embodiment is a calibration apparatus for a radar detection area. The calibration apparatus may be integrated into an electronic device, and the electronic device may be a radar, a computer, a tablet computer, or another device with independent computing and processing capability. The calibration method of this embodiment includes the following steps.
Step 101, acquiring a plurality of frames of first point clouds collected by a radar.
In this embodiment, the radar may be a radar disposed on the roadside, and the radar type may be a laser radar, a millimeter wave radar, or another type of radar. The roadside radar can be arranged beside a road and used for detecting the distance between a moving target in a detection range and the radar. After the radar is installed on the road side, the detection area of the radar is calibrated.
In this embodiment, when calibrating a detection area of a radar, a plurality of frames of first point clouds collected by the radar are first acquired. The multi-frame first point cloud can be acquired by an acquisition device of the radar.
As an optional implementation, in this embodiment, if the electronic device is not the radar itself, a communication connection is established between the electronic device and the radar in advance; after the radar's acquisition device collects the multiple frames of first point clouds, the electronic device communicates with the radar to obtain them.
As another optional implementation manner, in this embodiment, if the electronic device is a radar itself, after the acquisition device of the radar acquires multiple frames of first point cloud data, the multiple frames of first point cloud data may be directly acquired from the acquisition device.
Each frame of the first point cloud includes distance values of point cloud points, and the distance values of each frame can be represented as a matrix. For example, if the radar is an m-line radar with a horizontal resolution of h°, the matrix of each frame of first point cloud distance data has m rows and 360/h columns, i.e. a dimension of m × (360/h). Each element value of the matrix is the distance datum of the corresponding first target point in that frame of first point cloud. The distance datum of each first target point may be denoted L_ij, where i = 1, 2, …, m (m being the number of radar lines) and j = 1, 2, …, n (n being the number of first target points on each line).
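The matrix representation described above can be sketched as follows. The specific line count, resolution, and random distance values are illustrative assumptions only.

```python
import numpy as np

m = 16            # number of radar lines (matrix rows), assumed
h = 0.2           # horizontal resolution in degrees, assumed
n = int(360 / h)  # first target points per line (matrix columns)

# One frame of first point cloud distance data: element [i, j] is the
# distance value L_ij of the j-th point on the i-th line, in metres.
rng = np.random.default_rng(0)
frame = rng.uniform(1.0, 100.0, size=(m, n))

print(frame.shape)  # (16, 1800)
```

Every per-frame operation in the later steps (background comparison, thresholding) then becomes a vectorised operation on matrices of this shape.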
And 102, acquiring a moving area raster image corresponding to the first point cloud of the multiple frames.
In this embodiment, a target motion point may be determined from multiple frames of the first point cloud, and then a motion area grid map may be generated according to the target motion point.
The target motion point may be determined from the multiple frames of first point clouds as follows: acquire the background frame corresponding to the radar, and then determine the target motion point according to the background frame and the first point clouds. Alternatively, a moving-target recognition model may be built with a deep learning model, and the multiple frames of first point clouds are input into the model, which recognizes and outputs the target motion points of the multiple frames of first point clouds.
It can be understood that the manner of determining the target motion point from the multi-frame first point cloud may also be other manners, which is not limited in this embodiment.
And 103, generating a corresponding detection area grid map according to the motion area grid map.
Specifically, in this embodiment, the motion area grid map may be input into a preset region growing algorithm model, which determines the detection area and the non-detection area in the motion area grid map according to a preset region growing strategy and outputs the detection area grid map.
The detection area grid map comprises a detection area and a non-detection area, and the grid cells of the two areas carry different values.
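As a rough illustration of the region growing step, the sketch below flood-fills outward from a seed cell of the motion area grid map. The 4-neighbour growth strategy and the single seed are assumptions; the patent does not detail its preset region growing strategy.

```python
from collections import deque

import numpy as np

def grow_detection_area(motion_grid, seed):
    """Flood-fill from `seed` over occupied cells of the motion area grid map,
    returning a detection area grid map: 1 = detection area, 0 = non-detection."""
    rows, cols = motion_grid.shape
    detect = np.zeros_like(motion_grid)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # outside the grid map
        if motion_grid[r, c] == 0 or detect[r, c] == 1:
            continue  # not a motion cell, or already grown
        detect[r, c] = 1
        # grow into the 4-connected neighbours
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return detect

motion = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 1]])
# Marks only the 2x2 block connected to the seed; the isolated cell stays 0.
print(grow_detection_area(motion, (0, 1)))
```

In practice the grown region would typically also be dilated or smoothed so the detection area covers the full road surface rather than only cells where motion was observed.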
And 104, calibrating the radar detection area according to the detection area grid map.
Specifically, in this embodiment, after the detection area grid map is determined, the detection area grid map is used to represent the radar detection area, so as to calibrate the radar detection area.
It can be understood that when the detection area of the radar is represented by the detection area grid map, identifying each frame of point cloud data of the radar only requires determining the position coordinates of that frame's point cloud data in the detection area grid map coordinate system; whether the position corresponding to each target point lies inside the detection area can then be indexed directly through the detection area grid map, so that the point cloud data in the detection area is identified rapidly.
It will be appreciated that the detection area grid map coordinate system may be the same as the motion area grid map coordinate system.
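The constant-time index described above can be illustrated as follows; the grid resolution, origin, and coordinate convention are hypothetical.

```python
import numpy as np

# Detection area grid map: 1 = inside the detection area, 0 = outside.
grid = np.zeros((100, 100), dtype=np.uint8)
grid[20:60, 30:70] = 1

CELL = 0.5               # grid resolution in metres per cell (assumed)
ORIGIN = (-25.0, -25.0)  # world coordinates of grid cell (0, 0) (assumed)

def in_detection_area(x, y):
    """Index a world-coordinate point into the detection area grid map.
    One array lookup per point: O(1), no boundary-line comparison needed."""
    col = int((x - ORIGIN[0]) / CELL)
    row = int((y - ORIGIN[1]) / CELL)
    if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
        return bool(grid[row, col] == 1)
    return False

print(in_detection_area(0.0, 0.0))      # True: maps to cell (50, 50)
print(in_detection_area(-20.0, -20.0))  # False: maps to cell (10, 10)
```

This lookup replaces the per-point geometric comparison against a road boundary line, which is the source of the complexity reduction claimed above.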
According to the calibration method for a radar detection area provided by this embodiment, multiple frames of first point clouds collected by a radar are acquired; a motion area grid map corresponding to the multiple frames of first point clouds is acquired; a corresponding detection area grid map is generated according to the motion area grid map; and the radar detection area is calibrated according to the detection area grid map. Representing the detection area as a grid map avoids representing it by a road boundary line, which reduces the difficulty of calibrating the detection area. Moreover, calibration of the radar detection area is completed automatically, without manual calibration. Furthermore, when identifying point cloud data inside the detection area, only the position coordinates of each frame of point cloud data in the detection area grid map coordinate system need to be determined, and whether the position corresponding to each target point lies inside the detection area can be indexed through the detection area grid map; the targets in the point cloud data inside the detection area are thus identified rapidly, which reduces the computational complexity and, in turn, the demand on hardware resources.
Example two
Fig. 2 is a flowchart of a calibration method for a radar detection area according to a second embodiment of the present invention. As shown in fig. 2, the calibration method of this embodiment further refines steps 102 to 103 of the method provided in the first embodiment and adds a step of updating the detection area grid map. The method includes the following steps.
Step 201, acquiring a plurality of frames of first point clouds collected by a radar.
Step 202, acquiring a motion area grid map corresponding to the multiple frames of first point clouds.
As an alternative implementation, in this embodiment, as shown in fig. 3, step 202 includes the following steps:
step 2021, determining a target motion point in the first point cloud according to the background frame and the first point cloud.
The background frame is a multi-frame point cloud acquired before the first point cloud.
Step 2022, generating a motion region grid map according to the target motion points in the first point cloud.
Optionally, in this embodiment, as shown in fig. 4, step 2021 includes the following steps:
step 2021a, comparing the distance values of the point cloud points of each frame of point cloud corresponding to the plurality of target positions in the background frame to obtain a maximum distance value.
In this embodiment, the background frame includes multiple frames of point clouds acquired before the first point cloud, and the point cloud points of each frame carry a distance value. The target positions of the background frame are determined from these point cloud points: for example, the point cloud points of a frame may be sampled at equal intervals and the sampled points taken as the target positions, or every point cloud point of the frame may be taken directly as a target position. The distance values of the point cloud points of the multiple frames corresponding to each target position of the background frame are then compared to obtain the maximum distance value.
The maximum distance value represents a distance value corresponding to a static background in a radar detection range, such as a road surface, a roadside railing, a tree, a building, and the like.
Step 2021b, using the maximum distance value corresponding to each target position as an element value of the background frame matrix, and generating a background frame matrix.
In this embodiment, a matrix may be created first, a mapping relationship between the target positions and elements of the newly created matrix is created, and then the maximum distance value corresponding to each target position is placed at the element having the mapping relationship to obtain an element value of each element of the matrix, so as to generate a background frame matrix.
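A compact sketch of building the background frame matrix as the element-wise maximum over the background frames; a tiny 2 × 2 example stands in for the full m × (360/h) distance matrix, and NumPy is used purely for illustration.

```python
import numpy as np

def build_background_matrix(background_frames):
    """Element-wise maximum over the background frames: each element B_ij keeps
    the largest distance ever seen at that target position, which corresponds
    to the static background (road surface, railings, buildings) behind any
    temporarily occluding movers."""
    return np.max(np.stack(background_frames), axis=0)

frames = [np.array([[5.0, 9.0], [7.0, 3.0]]),
          np.array([[6.0, 4.0], [7.5, 8.0]])]
B = build_background_matrix(frames)
print(B)  # element-wise maximum of the two frames
```

The mapping between target positions and matrix elements described above is implicit here: position (i, j) simply maps to element [i, j].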
Step 2021c, determine the target motion point in the first point cloud using the background frame matrix.
Optionally, in this embodiment, step 2021c may specifically include:
comparing the element values in the background frame matrix with the distance values of the corresponding first target points in the multiple frames of first point clouds; calculating the difference value between the element value in the background frame matrix and the distance value of the corresponding point cloud point in each first point cloud; and if the difference value is larger than the preset distance threshold value, determining the corresponding point cloud point as the target motion point.
Specifically, in this embodiment, the element value in the background frame matrix may be denoted B_ij, and the distance value of the corresponding point cloud point in the multiple frames of first point clouds may be denoted l_ij. Whether the difference between B_ij and l_ij is greater than the preset distance threshold is judged; if so, the corresponding point cloud point is determined to be a target motion point.
The preset distance threshold may be 10 cm, 20 cm, and so on. Preferably, the preset distance threshold may be 30 cm, in order to avoid interference caused by jitter.
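The thresholding step can be sketched as follows. The sign convention — a point is moving when the background distance exceeds the current distance, i.e. something closer than the static background is occluding it — is an assumption drawn from the surrounding description; the 30 cm value is the one suggested above.

```python
import numpy as np

DIST_THRESHOLD = 0.3  # metres; the 30 cm value suggested above

def motion_mask(background, frame, threshold=DIST_THRESHOLD):
    """A point cloud point is a target motion point when the background
    distance B_ij exceeds the current distance l_ij by more than the
    threshold (a mover sits in front of the static background)."""
    return (background - frame) > threshold

B = np.array([[10.0, 10.0], [10.0, 10.0]])
f = np.array([[10.0, 4.0], [9.9, 10.2]])
print(motion_mask(B, f))  # only the point at (0, 1) is a target motion point
```

The 0.1 m difference at (1, 0) falls under the threshold and is treated as jitter rather than motion, which is exactly what the 30 cm margin is for.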
As shown in fig. 5, the target motion points in the first point clouds are determined, from the background frame matrix and the distance values of the multiple frames of first point clouds, to be the motion points of a first vehicle and a second vehicle; the target motion points of the first vehicle trace the motion trajectory of the first vehicle, and those of the second vehicle trace the motion trajectory of the second vehicle.
Optionally, in this embodiment, in step 2021c, after calculating a difference between an element value in the background frame matrix and a distance value of a corresponding point cloud point in each first point cloud, the method further includes:
and if the difference value is larger than the preset updating threshold value, taking the distance value of the corresponding point cloud point as the element value of the background frame matrix.
Specifically, in this embodiment, the preset update threshold is the threshold for updating the background frame matrix. If the difference between an element value in the background frame matrix and the distance value of the corresponding point cloud point in a first point cloud is greater than the preset update threshold, the background frame matrix no longer represents the current background accurately enough and the condition for updating it is met; when the background frame matrix is updated, the element value is replaced with the distance value of the corresponding point cloud point.
In this embodiment, a difference between an element value in the background frame matrix and a distance value of a corresponding point cloud point in each first point cloud is calculated, and if the difference is greater than a preset update threshold, the distance value of the corresponding point cloud point is used as the element value of the background frame matrix. The background frame matrix can be updated when the updating condition of the background frame matrix is met, and the requirement of the accuracy of the background frame matrix is met.
Alternatively, in this embodiment, as shown in fig. 6, step 2021 includes the following steps:
step 20211, grouping the multiple frames of point clouds of the background frames according to the obtaining sequence, wherein each group of point clouds includes N continuous frames of point clouds.
Wherein N is a positive integer greater than 1.
In this embodiment, the multiple frames of point clouds of the background frames are divided into multiple groups according to the acquisition sequence. Each group includes N continuous frames of point clouds, and each frame of point cloud includes distance values of point cloud points, so that the distance value of a point cloud point in the N continuous frames of point clouds of each group may be represented as Lijk, where k = 1, 2, ..., N.
Step 20212, comparing the distance values of the point cloud points of each frame of point cloud corresponding to the plurality of target positions in groups to obtain a maximum distance median.
Step 20213, using the maximum distance median corresponding to each target position as the element value of the background frame matrix, to generate a background frame matrix.
Specifically, each element value Bij in the background frame matrix may first be initialized to 0. Target positions of the background frames are determined from the point cloud points in the multiple frames of point clouds of each group, the distance values of the point cloud points corresponding to each target position in each group are sorted from large to small, and the distance median of the corresponding point cloud points of each group is determined as Cij. If the distance median Cij of a corresponding point cloud point in the current group is greater than the corresponding element value Bij in the background frame matrix, the element value Bij is updated to Cij. The same comparison is performed for the corresponding point cloud points of each subsequent group: if the distance median Cij of a corresponding point cloud point in that group is greater than the corresponding element value Bij, Bij is updated to Cij. Each element value in the background frame matrix is thus the maximum distance median among the distance values of the corresponding point cloud points over all groups of point clouds.
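Steps 20211 to 20213 can be sketched as follows; numpy, the function name, and the (frames, rows, cols) array layout are assumptions for illustration (np.median gives the same result as sorting each group descending and taking the middle value):

```python
import numpy as np

def build_background_matrix(point_clouds, group_size):
    """point_clouds: (num_frames, rows, cols) array of distance values Lijk.
    Groups consecutive frames, takes the per-group median distance Cij for
    each target position, and keeps the maximum median across groups as the
    background frame matrix element Bij."""
    usable = (point_clouds.shape[0] // group_size) * group_size
    groups = point_clouds[:usable].reshape(-1, group_size, *point_clouds.shape[1:])
    medians = np.median(groups, axis=1)  # per-group distance median Cij
    return medians.max(axis=0)           # maximum distance median -> Bij
```

Taking the maximum median rather than the maximum raw value makes the background estimate robust to a moving object briefly occluding a position within one group.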
In the embodiment, when the target motion point in the first point cloud is determined according to the background frame and the multiple frames of first point cloud distance data, the target motion point in the first point cloud can be determined by adopting two different modes, so that the flexibility of determining the target motion point in the first point cloud is improved.
Step 20214, determining the target motion points in the first point cloud by using the background frame matrix.
In this embodiment, the implementation manner of step 20214 is similar to that of step 2021c of the present invention, and is not described herein again.
Or as another alternative implementation, in this embodiment, step 202 includes the following steps:
and processing the multi-frame first point cloud by using the deep learning model to obtain a moving area grid map corresponding to the multi-frame first point cloud.
Specifically, in this embodiment, the deep learning model is a moving object recognition model.
Firstly, the moving target recognition model is trained with training samples. A training sample may be multi-frame point cloud data annotated with moving points. After the moving target recognition model is trained to convergence with the training samples, the converged model is obtained; the multiple frames of first point cloud data are then input into the converged moving target recognition model, which recognizes the target motion points in the multi-frame first point cloud data and outputs them. Finally, a motion area grid map is generated according to the target motion points in the first point clouds.
In this embodiment, when obtaining the moving area raster image corresponding to the multiple frames of first point clouds, the target moving points in the first point clouds may be determined by comparing distance values between the background frames and corresponding point clouds in the first point clouds, or the target moving points in the first point clouds may be identified by using a deep learning model, and then the moving area raster image may be generated according to the target moving points, so that the target moving points in the first point clouds may be determined in multiple ways, and flexibility of determining the target moving points in the first point clouds may be improved.
As an optional implementation manner, in this embodiment, as shown in fig. 7, in step 2022, generating a motion area grid map according to the target motion point in the first point cloud specifically includes the following steps:
step 2022a, initializing the grid map of the motion region in the coordinate system of the grid map of the motion region to obtain the grid map of the initialized motion region.
Further, in this embodiment, the size of each grid in the motion area grid map may be set in advance, and may be determined according to the horizontal resolution of the laser radar, so that after coordinate transformation each target position can be mapped onto a grid in the motion area grid map; for example, each grid in the motion area grid map may be K meters. The detection range of the laser radar is then determined, and the motion area grid map coordinate system is established according to the detection range of the laser radar and the size of each grid. For example, the motion area grid map coordinate system is established by taking the point at the upper left corner of the motion area grid map as the origin of coordinates. In the established motion area grid map coordinate system, if the detection range of the laser radar is S meters and the size of each grid of the motion area grid map is K meters, the length of the motion area grid map may be 2 × S/K grids and the width may be 2 × S/K grids.
For example, if the detection range of the laser radar is 100 meters and the size of each grid is 0.1 meter, then in the motion area grid map coordinate system, the position of the laser radar is (1000, 1000), and the motion area grid map is 2000 grids long and 2000 grids wide.
After the motion area grid map coordinate system is established, the motion area grid map is initialized, specifically, a value of each grid in the motion area grid map is initialized, and in this embodiment, the value of each grid is initialized to 0.
Step 2022b, converting the distance value of the target motion point into a corresponding target motion point position coordinate in the motion region grid map coordinate system.
Further, in the present embodiment, the distance value lij of the target motion point is first converted from the polar coordinate system into position coordinates (Pijx, Pijy) in a rectangular coordinate system, and the position coordinates (Pijx, Pijy) in the rectangular coordinate system are then converted into the corresponding target motion point position coordinates (Pijx/K + S/K, Pijy/K + S/K) in the motion area grid map coordinate system.
For example, if the distance data lij of the target moving point is converted from the polar coordinate system to the position coordinate of (50.2,40.3) in the rectangular coordinate system, K is 0.1, and S is 100, the position coordinate (50.2,40.3) in the rectangular coordinate system is converted to the corresponding position coordinate of the target moving point in the grid map coordinate system of the moving area as (1502,1403).
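The conversion of step 2022b, with K = 0.1 and S = 100 as in the example above, might look like the following sketch (the function name is an illustrative assumption; `round` is used to absorb floating-point error in the division):

```python
def to_grid_coords(px, py, cell_size=0.1, detection_range=100.0):
    """Map rectangular position coordinates (Pijx, Pijy) to motion area
    grid map coordinates (Pijx/K + S/K, Pijy/K + S/K), with K = cell_size
    and S = detection_range."""
    gx = round(px / cell_size + detection_range / cell_size)
    gy = round(py / cell_size + detection_range / cell_size)
    return gx, gy
```

With these parameters the radar origin (0, 0) maps to grid cell (1000, 1000), matching the example above.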
Step 2022c, generating a grid image of the motion area according to the coordinates of the target motion point and the initialized grid image of the motion area.
Further, in this embodiment, the initial value at each target motion point position coordinate in the initialized motion area grid map is first changed to 1, while the initial value at each non-target motion point position coordinate remains 0. The dimension of the generated motion area grid map is (2 × S/K, 2 × S/K): the value at the position coordinates of a target motion point is 1, and the value at the position of a non-target motion point is 0. Fig. 7 shows the target motion point cloud corresponding to the first vehicle track and the target motion point cloud corresponding to the second vehicle track shown in fig. 5; the values at the positions of these two target motion point clouds are 1.
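Putting steps 2022a to 2022c together, a minimal numpy sketch of generating the motion area grid map (the function name, point format, and default K = 0.1, S = 100 are assumptions for illustration):

```python
import numpy as np

def build_motion_grid(points, cell_size=0.1, detection_range=100.0):
    """Initialize a (2S/K, 2S/K) grid of zeros and set the cells hit by
    target motion points (given as rectangular (Pijx, Pijy) pairs) to 1."""
    n = round(2 * detection_range / cell_size)
    grid = np.zeros((n, n), dtype=np.uint8)  # step 2022a: initialization to 0
    for px, py in points:
        # step 2022b: rectangular coordinates -> grid map coordinates
        gx = round(px / cell_size + detection_range / cell_size)
        gy = round(py / cell_size + detection_range / cell_size)
        if 0 <= gx < n and 0 <= gy < n:
            grid[gx, gy] = 1                 # step 2022c: mark motion cell
    return grid
```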
And 203, determining a detection area and a non-detection area in the grid map of the motion area by adopting a preset area growing algorithm model.
And 204, generating a detection area grid map according to the detection area and the non-detection area.
Further, in this embodiment, the motion area grid map is input into a preset region growing algorithm model, which performs the segmentation of the detection area and the non-detection area. Specifically, if the area of a motion region is smaller than a preset area threshold, it is likely an isolated interference region and is determined to be a non-detection area; if the area of the motion region is larger than the preset area threshold, it is determined to be a detection area. After the preset region growing algorithm model finishes the segmentation, the value corresponding to each grid in the detection area is set to 1, and the value of each grid in the non-detection area is set to 0. As shown in fig. 8, the detection area is the black filled area and the non-detection area is the white filled area. The detection area and the non-detection area are combined to generate the detection area grid map, which is output and stored in the laser radar to realize the calibration of the detection area of the laser radar.
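The area-threshold segmentation described above can be illustrated with a simple 4-connected flood fill standing in for the patent's preset region growing algorithm model (a hedged sketch, not the patented model itself):

```python
from collections import deque
import numpy as np

def segment_detection_area(motion_grid, min_area):
    """Keep connected motion regions whose cell count exceeds min_area as
    detection area (1); smaller regions are treated as isolated
    interference and become non-detection area (0)."""
    rows, cols = motion_grid.shape
    visited = np.zeros_like(motion_grid, dtype=bool)
    detection = np.zeros_like(motion_grid, dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            if motion_grid[r, c] == 1 and not visited[r, c]:
                # flood-fill one 4-connected region of motion cells
                queue, region = deque([(r, c)]), []
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and motion_grid[ny, nx] == 1
                                and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(region) > min_area:      # large region -> detection area
                    for y, x in region:
                        detection[y, x] = 1
    return detection
```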
It is to be understood that, after the detection area grid map is generated, the position coordinates having a value of 1 in the detection area grid map may be stored in association with the corresponding value. So as to facilitate the identification of the point cloud data in the detection area when the subsequent laser radar collects the point cloud data.
Step 205, determine whether the detection area update condition is satisfied.
Further, in this embodiment, after the lidar is installed to calibrate the detection area, the installation position may be shifted due to external factors, so that the detection area of the lidar needs to be updated to ensure that the detection area of the lidar is accurate. After the laser radar is installed to calibrate the first detection area, whether the detection area updating condition is met or not is judged, and if the detection area updating condition is met, the detection area is updated.
Specifically, in this embodiment, the detection area may be periodically updated, so that the determination of whether the detection area update condition is satisfied may be to determine whether the detection area update period is reached, and if the detection area update period is reached, it is determined that the detection area update condition is satisfied.
Step 206, if the detection area update condition is satisfied, updating the detection area grid map.
In this embodiment, the manner of updating the raster map of the detection area is similar to the manner of steps 201 to 205, and is not described in detail here.
In the method for calibrating the detection area of the laser radar provided by this embodiment, a corresponding detection area grid map is generated according to a movement area grid map to calibrate the detection area of the laser radar, and then whether a detection area update condition is met is judged; if the detection area updating condition is met, the detection area grid map is updated, automatic calibration of the laser radar detection area can be achieved, the detection area can be automatically updated after the detection area updating condition is determined to be met, and maintenance cost of the laser radar is reduced.
EXAMPLE III
Fig. 9 is a flowchart of a target detection method provided in the third embodiment of the present invention. As shown in fig. 9, the target detection method provided in this embodiment is based on the radar detection area calibration method provided in the second embodiment of the present invention, and further includes, after step 204 or step 206, the steps of obtaining the radar detection area calibrated from the multiple frames of first point clouds and performing target detection according to the point clouds in the radar detection area to obtain a detection result. The method provided in this embodiment therefore includes the following steps.
Step 301, acquiring a current frame point cloud collected by a radar.
In this embodiment, the current frame point cloud includes distance values of point cloud points. The current frame point cloud is the point cloud acquired when the radar, after its detection area has been calibrated and the radar has been formally put into use, monitors moving targets within its detection range.
It can be understood that, similar to step 101, if the electronic device is not a radar, a communication connection is established between the electronic device and the radar in advance, and after the current frame point cloud is collected by a collection device of the radar, the current frame point cloud collected by the radar collection device is obtained by communicating with the radar. If the electronic equipment is the radar, the current frame point cloud can be directly obtained from the acquisition device after the acquisition device of the radar acquires the current frame point cloud.
Step 302, a radar detection area calibrated in a multi-frame first point cloud is obtained.
In this embodiment, a calibrated radar detection area in a multi-frame first point cloud is obtained, that is, a calibrated radar detection area grid map is obtained. If the detection area grid map is stored at present, the detection area grid map is the detection area grid map after the first calibration. If the current storage is the updated detection area grid map, the current detection area grid map is the updated detection area grid map.
It is understood that the first calibrated detection region grid map obtained in step 302 is the determined detection region grid map after steps 201-204. Or the updated detection area grid map obtained in step 302 is the updated detection area grid map determined after steps 205 to 206.
And 303, performing target detection according to the point cloud in the radar detection area to obtain a detection result.
Further, in this embodiment, the distance values of the current frame point cloud are converted into position coordinates in the detection area grid map coordinate system, and these position coordinates are then AND-operated with the pre-stored position coordinates whose value in the current detection area grid map is 1. That is, whether the position corresponding to each point cloud point lies in the detection area can be indexed through the current detection area grid map, so that target detection can be performed rapidly on the point cloud in the detection area to obtain a detection result.
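The grid map indexing of step 303 might be sketched as follows; the function name, point format, and default parameters are illustrative assumptions:

```python
import numpy as np

def points_in_detection_area(points, detection_grid,
                             cell_size=0.1, detection_range=100.0):
    """Index each current-frame point (rectangular (Px, Py) pair) into the
    detection area grid map and keep only points whose cell value is 1."""
    kept = []
    n = detection_grid.shape[0]
    for px, py in points:
        gx = round(px / cell_size + detection_range / cell_size)
        gy = round(py / cell_size + detection_range / cell_size)
        if 0 <= gx < n and 0 <= gy < n and detection_grid[gx, gy] == 1:
            kept.append((px, py))  # point lies inside the calibrated detection area
    return kept
```

Because each point is resolved by a single array lookup rather than a geometric test against road boundary lines, the per-point cost is constant, which is the complexity reduction the paragraph below describes.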
After generating the corresponding detection area grid map according to the motion area grid map, or after updating the detection area grid map, the method for calibrating a radar detection area provided in this embodiment further includes: obtaining the radar detection area calibrated from the multiple frames of first point clouds, and performing target detection according to the point cloud in the radar detection area to obtain a detection result. Representing the detection area with a grid map avoids representing it with road boundary lines, which reduces the calibration difficulty. Moreover, with the grid map representation, identifying the point cloud data in the detection area only requires determining the position coordinates of each frame of point cloud in the detection area grid map coordinate system; whether the position corresponding to each point cloud point lies in the detection area can then be indexed through the detection area grid map, so the point cloud in the detection area is identified rapidly, which reduces the computational complexity and, in turn, the requirement on hardware resources.
Based on the same inventive concept, in one embodiment, as shown in fig. 10, the present invention further provides a method for calibrating a detection area of a multi-sensor system, where the multi-sensor system includes a plurality of sensors, including the following steps:
step 401, obtaining perception information collected by each sensor at the same time and in the same scene, and performing fusion processing on the perception information collected by each sensor to obtain a fusion image.
In this embodiment, after the perception information collected by each sensor is obtained, the perception information collected by each sensor at the same time and in the same scene may be subjected to space-time synchronization processing according to the system calibration parameters of the multi-sensor system, and then fused to obtain a fusion image. The fusion mode may be result-level fusion (decision-level fusion) or feature-level fusion, which is not described in detail herein. Optionally, the plurality of sensors in this step may include one or more of a camera, a laser radar, and a millimeter wave radar.
And 402, acquiring a moving area raster image of the multi-frame fusion image.
And 403, generating a corresponding detection area grid map according to the motion area grid map.
And 404, calibrating the detection area of the multi-sensor system according to the detection area grid map.
The implementation manner of steps 402 to 404 in this embodiment is substantially the same as that of the first embodiment, the second embodiment, and the above implementation of obtaining the motion area grid map; reference may be made to the related contents above, and details are not repeated here.
In the method for calibrating the detection area of the multi-sensor system provided by the embodiment, a moving area grid map in a fusion image is obtained; generating a corresponding detection area raster image according to the motion area raster image; and calibrating the detection area of the multi-sensor system according to the detection area grid map. Because the sensing information of the multiple sensors is subjected to fusion processing, the embodiment can realize the calibration of the detection areas in the images acquired by the multiple sensors at the same time, thereby not only reducing the difficulty of the calibration of the detection areas, but also improving the accuracy and efficiency of the calibration. Because the detection area calibration can be accurately and quickly performed by the embodiment, the calibration method of the embodiment not only can identify the target more accurately (using fusion data and having more characteristic dimensions), but also can identify the target in the detection area more quickly (the detection area calibration efficiency is high).
Example four
Fig. 11 is a schematic structural diagram of a calibration apparatus of a radar detection area according to a fourth embodiment of the present invention, and as shown in fig. 11, the calibration apparatus of a radar detection area according to the present embodiment includes: a point cloud obtaining module 41, a raster image obtaining module 42, a raster image generating module 43 and an area calibration module 44.
The point cloud obtaining module 41 is configured to obtain multiple frames of first point clouds collected by a radar. And the raster image acquisition module 42 is configured to acquire a moving area raster image corresponding to a first point cloud of multiple frames. And a raster map generation module 43, configured to generate a corresponding detection area raster map according to the motion area raster map. And the area calibration module 44 is configured to calibrate the radar detection area according to the detection area grid map.
The calibration apparatus for a radar detection area provided in this embodiment may implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
EXAMPLE five
Fig. 12 is a schematic structural diagram of a calibration apparatus for a radar detection area according to a fifth embodiment of the present invention, and as shown in fig. 12, the calibration apparatus for a radar detection area according to the present embodiment further includes, on the basis of the calibration apparatus for a radar detection area according to the fourth embodiment: further comprising: a raster map update module 51.
Optionally, the raster map obtaining module 42 is specifically configured to: determining a target motion point in the first point cloud according to the background frame and the first point cloud, wherein the background frame is a multi-frame point cloud obtained before the first point cloud; and generating a motion area grid map according to the target motion points in the first point cloud.
Optionally, the raster map obtaining module 42, when determining the target motion point in the first point cloud according to the background frame and the first point cloud, is specifically configured to:
comparing the distance values of the point cloud points of each frame of point cloud corresponding to a plurality of target positions in the background frame to obtain a maximum distance value; taking the maximum distance value corresponding to each target position as an element value of a background frame matrix to generate a background frame matrix; a target motion point in the first point cloud is determined using the background frame matrix.
Optionally, the raster map obtaining module 42, when determining the target motion point in the first point cloud according to the background frame and the multiple frames of first point cloud distance data, is specifically configured to:
grouping multi-frame point clouds of the background frame according to the acquisition sequence, wherein each group of point clouds comprises N continuous frame point clouds; comparing the distance values of the point cloud points of each frame of point cloud corresponding to the plurality of target positions according to groups to obtain a maximum distance median value; taking the maximum distance median value corresponding to each target position as an element value of a background frame matrix to generate a background frame matrix; a target motion point in the first point cloud is determined using the background frame matrix.
Optionally, the raster map obtaining module 42, when determining the target motion point in the first point cloud by using the background frame matrix, is specifically configured to:
comparing the element values in the background frame matrix with the distance values of the corresponding point cloud points in the multiple frames of first point clouds; calculating the difference between an element value in the background frame matrix and the distance value of the corresponding point cloud point in each first point cloud; and if the difference is greater than the preset distance threshold, determining the corresponding point cloud point as a target motion point.
Optionally, the raster map obtaining module 42 is further configured to:
and if the difference value is larger than the preset updating threshold value, taking the distance value of the corresponding point cloud point as the element value of the background frame matrix.
Optionally, the raster map obtaining module 42, when generating the motion area grid map according to the target motion point in the first point cloud, is specifically configured to:
initializing the moving area raster image under a moving area raster image coordinate system to obtain an initialized moving area raster image; converting the distance value of the target motion point into a corresponding target motion point position coordinate in a motion area grid graph coordinate system; and generating a moving area grid map according to the target moving point coordinates and the initialized moving area grid map.
Optionally, the raster map obtaining module 42 is specifically configured to:
and processing the multi-frame first point cloud by using the deep learning model to obtain a moving area grid map corresponding to the multi-frame first point cloud.
The raster map generation module 43 is specifically configured to:
determining a detection area and a non-detection area in a grid image of the motion area by adopting a preset area growing algorithm model; and generating a detection area grid map according to the detection area and the non-detection area.
Optionally, the raster map updating module 51 is configured to:
judging whether a detection area updating condition is met; and if the detection area updating condition is met, updating the detection area raster image.
The calibration apparatus for a radar detection area provided in this embodiment may implement the technical solutions of the method embodiments shown in fig. 2 to fig. 8, and the implementation principles and technical effects are similar, and are not described herein again.
EXAMPLE six
Fig. 13 is a schematic structural diagram of a target detection apparatus according to a sixth embodiment of the present invention. As shown in fig. 13, the target detection apparatus provided in this embodiment further includes, on the basis of the calibration apparatus of a radar detection area according to the fourth embodiment or the fifth embodiment: an area obtaining module 61 and a target detection module 62.
The area obtaining module 61 is configured to obtain a radar detection area calibrated in a multi-frame first point cloud. And the target detection module 62 is configured to perform target detection according to the point cloud in the radar detection area to obtain a detection result.
The target detection apparatus provided in this embodiment may implement the technical solution of the method embodiment shown in fig. 9; the implementation principle and technical effects are similar and are not described herein again.
EXAMPLE seven
An embodiment of the present invention provides an electronic device, as shown in fig. 14, where the electronic device includes: a memory 71, a processor 72 and a computer program.
The computer program is stored in the memory 71 and configured to be executed by the processor 72 to implement the calibration method for the radar detection area provided in the first embodiment or the second embodiment of the present invention. Or configured to be executed by the processor 72 to implement the object detection method provided by the third embodiment of the present invention.
The relevant description may be understood by referring to the relevant description and effect corresponding to the steps in fig. 1 to fig. 9, and redundant description is not repeated here.
In the present embodiment, the memory 71 and the processor 72 are connected by a bus 73.
Example eight
The eighth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, wherein the computer program is executed by a processor to implement the method for calibrating a radar detection area provided in the first embodiment or the second embodiment of the present invention, or is executed by a processor to implement the target detection method provided in the third embodiment of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
Program code for implementing the methods of the present invention may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (15)

1. A calibration method for a radar detection area, characterized by comprising:
acquiring multiple frames of first point clouds collected by the radar;
acquiring a motion area grid map corresponding to the multiple frames of first point clouds;
generating a corresponding detection area grid map according to the motion area grid map;
and calibrating the radar detection area according to the detection area grid map.
2. The method according to claim 1, wherein acquiring a motion area grid map corresponding to the multiple frames of first point clouds comprises:
determining a target motion point in the first point cloud according to a background frame and the first point cloud, wherein the background frame is a multi-frame point cloud acquired before the first point cloud;
and generating the motion area grid map according to the target motion point in the first point cloud.
3. The method of claim 2, wherein determining a target motion point in the first point cloud according to the background frame and the first point cloud comprises:
comparing, for each of a plurality of target positions, the distance values of the point cloud points of each frame of point cloud in the background frame to obtain a maximum distance value;
taking the maximum distance value corresponding to each target position as an element value to generate a background frame matrix;
and determining the target motion point in the first point cloud using the background frame matrix.
4. The method of claim 2, wherein determining a target motion point in the first point cloud according to the background frame and the first point cloud comprises:
grouping the multi-frame point clouds of the background frame according to the acquisition order, wherein each group of point clouds comprises N consecutive frames of point clouds;
comparing, group by group, the distance values of the point cloud points of each frame of point cloud corresponding to the plurality of target positions to obtain the median of the maximum distance values;
taking the median of the maximum distance values corresponding to each target position as an element value to generate a background frame matrix;
and determining the target motion point in the first point cloud using the background frame matrix.
5. The method of claim 3 or 4, wherein determining a target motion point in the first point cloud using the background frame matrix comprises:
comparing the element values in the background frame matrix with the distance values of the corresponding point cloud points in the multiple frames of first point clouds;
calculating the difference between an element value in the background frame matrix and the distance value of the corresponding point cloud point in each first point cloud;
and if the difference is larger than a preset distance threshold, determining the corresponding point cloud point as a target motion point.
6. The method of claim 5, further comprising:
if the difference is larger than a preset update threshold, taking the distance value of the corresponding point cloud point as the element value of the background frame matrix.
7. The method of claim 5, wherein generating the motion area grid map according to the target motion points in the first point cloud comprises:
initializing the motion area grid map in a motion area grid map coordinate system to obtain an initialized motion area grid map;
converting the distance value of each target motion point into corresponding position coordinates in the motion area grid map coordinate system;
and generating the motion area grid map according to the position coordinates of the target motion points and the initialized motion area grid map.
8. The method according to claim 1, wherein acquiring a motion area grid map corresponding to the multiple frames of first point clouds comprises:
processing the multiple frames of first point clouds using a deep learning model to obtain the motion area grid map corresponding to the multiple frames of first point clouds.
9. The method of claim 1, wherein generating a corresponding detection area grid map according to the motion area grid map comprises:
determining a detection area and a non-detection area in the motion area grid map using a preset region growing algorithm model;
and generating the detection area grid map according to the detection area and the non-detection area.
10. The method of claim 1, further comprising:
judging whether a detection area update condition is satisfied;
and if the detection area update condition is satisfied, updating the detection area grid map.
11. A target detection method, comprising:
acquiring a radar detection area calibrated in multiple frames of first point clouds using the calibration method for a radar detection area according to any one of claims 1 to 12;
and performing target detection according to the point cloud in the radar detection area to obtain a detection result.
12. A calibration method for a detection area of a multi-sensor system, wherein the multi-sensor system comprises a plurality of sensors, characterized by comprising:
acquiring perception information collected by each sensor at the same time and in the same scene, and fusing the perception information collected by the sensors to obtain a fused image;
acquiring a motion area grid map of multiple frames of the fused image;
generating a corresponding detection area grid map according to the motion area grid map;
and calibrating the detection area of the multi-sensor system according to the detection area grid map.
13. A calibration device for a radar detection area, characterized by comprising:
a point cloud acquisition module, configured to acquire multiple frames of first point clouds collected by the radar;
a grid map acquisition module, configured to acquire a motion area grid map corresponding to the multiple frames of first point clouds;
a grid map generation module, configured to generate a corresponding detection area grid map according to the motion area grid map;
and an area calibration module, configured to calibrate the radar detection area according to the detection area grid map.
14. An electronic device, comprising:
a memory, a processor, and a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-12.
15. A computer-readable storage medium, having stored thereon a computer program for execution by a processor to perform the method of any one of claims 1-12.
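The background-frame scheme of claims 3, 5, and 6 can be sketched in a few lines of NumPy. This is an illustrative reconstruction only, not the patented implementation; the array shapes, threshold values, and function names are all hypothetical:

```python
import numpy as np

def build_background_matrix(frames):
    """Claim 3: for each target position, keep the maximum distance
    value observed across the background-frame point clouds."""
    return np.max(np.stack(frames), axis=0)

def detect_motion_points(background, frame, dist_threshold=0.5):
    """Claim 5: a point whose distance falls short of the background
    element by more than the threshold is flagged as a target motion
    point (a closer return implies an object in front of the static
    background)."""
    return (background - frame) > dist_threshold

def update_background(background, frame, update_threshold=2.0):
    """Claim 6, as literally stated: where the difference exceeds the
    update threshold, replace the element with the current distance."""
    mask = (background - frame) > update_threshold
    updated = background.copy()
    updated[mask] = frame[mask]
    return updated
```

The max-over-frames background favors the farthest (static) return at each position, so transient foreground objects do not contaminate it.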
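Claim 7's conversion from ranging values to grid coordinates might look like the following sketch; the cell size, grid dimensions, and the assumption that each target motion point carries a (range, azimuth) pair are illustrative choices, not taken from the specification:

```python
import math

def motion_area_grid(points, cell=0.2, size=200):
    """Build an initialized motion-area grid map and mark the cell of
    each target motion point. `points` is a list of (range_m, azimuth_rad)."""
    grid = [[0] * size for _ in range(size)]  # initialized motion-area grid map
    for r, theta in points:
        x = r * math.cos(theta)               # polar -> Cartesian
        y = r * math.sin(theta)
        i = int(x / cell) + size // 2         # Cartesian -> grid index,
        j = int(y / cell) + size // 2         # radar at the grid center
        if 0 <= i < size and 0 <= j < size:
            grid[j][i] = 1                    # occupied motion cell
    return grid
```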
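The region-growing step of claim 9 is, in its standard form, a flood fill over grid cells. In this sketch the seed choice and 4-connectivity are assumptions: the grown cells form the detection area, and everything unreached is the non-detection area:

```python
from collections import deque

def grow_region(grid, seed):
    """4-connected region growing from `seed` over occupied cells (value 1).
    Returns the set of (row, col) cells in the grown detection area."""
    h, w = len(grid), len(grid[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        j, i = queue.popleft()
        for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nj, ni = j + dj, i + di
            if (0 <= nj < h and 0 <= ni < w
                    and (nj, ni) not in region and grid[nj][ni] == 1):
                region.add((nj, ni))
                queue.append((nj, ni))
    return region
```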
CN202010725723.3A 2020-07-24 2020-07-24 Calibration method, device and equipment for radar detection area and storage medium Pending CN113970725A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010725723.3A CN113970725A (en) 2020-07-24 2020-07-24 Calibration method, device and equipment for radar detection area and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010725723.3A CN113970725A (en) 2020-07-24 2020-07-24 Calibration method, device and equipment for radar detection area and storage medium

Publications (1)

Publication Number Publication Date
CN113970725A true CN113970725A (en) 2022-01-25

Family

ID=79585883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010725723.3A Pending CN113970725A (en) 2020-07-24 2020-07-24 Calibration method, device and equipment for radar detection area and storage medium

Country Status (1)

Country Link
CN (1) CN113970725A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327497A (en) * 2022-08-12 2022-11-11 南京慧尔视软件科技有限公司 Radar detection range determining method and device, electronic equipment and readable medium
CN115327497B (en) * 2022-08-12 2023-10-10 南京慧尔视软件科技有限公司 Radar detection range determining method, radar detection range determining device, electronic equipment and readable medium

Similar Documents

Publication Publication Date Title
CN109034018B (en) Low-altitude small unmanned aerial vehicle obstacle sensing method based on binocular vision
CN110879401B (en) Unmanned platform real-time target 3D detection method based on camera and laser radar
CN110119698B (en) Method, apparatus, device and storage medium for determining object state
CN113340334B (en) Sensor calibration method and device for unmanned vehicle and electronic equipment
CN111860072A (en) Parking control method and device, computer equipment and computer readable storage medium
CN111257892A (en) Obstacle detection method for automatic driving of vehicle
CN112001298B (en) Pedestrian detection method, device, electronic equipment and storage medium
CN113160327A (en) Method and system for realizing point cloud completion
CN112927309B (en) Vehicle-mounted camera calibration method and device, vehicle-mounted camera and storage medium
CN111207762A (en) Map generation method and device, computer equipment and storage medium
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN112949366A (en) Obstacle identification method and device
CN114882316A (en) Target detection model training method, target detection method and device
CN114217665A (en) Camera and laser radar time synchronization method, device and storage medium
CN113970725A (en) Calibration method, device and equipment for radar detection area and storage medium
CN114549542A (en) Visual semantic segmentation method, device and equipment
CN115272493B (en) Abnormal target detection method and device based on continuous time sequence point cloud superposition
CN115731560B (en) Deep learning-based slot line identification method and device, storage medium and terminal
CN114092771A (en) Multi-sensing data fusion method, target detection device and computer equipment
CN114882115B (en) Vehicle pose prediction method and device, electronic equipment and storage medium
CN115755072A (en) Special scene positioning method and system based on binocular structured light camera
CN113433568B (en) Laser radar observation simulation method and device
CN115236643A (en) Sensor calibration method, system, device, electronic equipment and medium
CN111160266B (en) Object tracking method and device
CN114862953A (en) Mobile robot repositioning method and device based on visual features and 3D laser

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination