CN115166773A - Obstacle monitoring method, device, equipment and storage medium

Obstacle monitoring method, device, equipment and storage medium

Info

Publication number
CN115166773A
CN115166773A (application CN202210495269.6A)
Authority
CN
China
Prior art keywords
screening
curve
obstacle
point cloud
boundary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210495269.6A
Other languages
Chinese (zh)
Inventor
严征
吴振锋
张治成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeiShen Intelligent System Co Ltd
Original Assignee
LeiShen Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeiShen Intelligent System Co Ltd filed Critical LeiShen Intelligent System Co Ltd
Priority to CN202210495269.6A priority Critical patent/CN115166773A/en
Publication of CN115166773A publication Critical patent/CN115166773A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/93 - Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00

Abstract

The invention discloses an obstacle monitoring method, device, equipment and storage medium. The method comprises the following steps: acquiring point cloud data collected by a lidar, wherein the point cloud data comprises at least two screening boxes; generating at least one close-range boundary curve based on the at least two screening boxes; generating at least one long-range fitted curve based on the at least two screening boxes through continuous iteration; obtaining a track boundary curve based on the close-range boundary curve and the long-range fitted curve; screening an obstacle point cloud from the point cloud data according to the at least one track boundary curve; and determining obstacle information according to the obstacle point cloud. This technical scheme greatly reduces the computational load of the system and improves the accuracy of obstacle monitoring.

Description

Obstacle monitoring method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of vehicles, in particular to a method, a device, equipment and a storage medium for monitoring obstacles.
Background
Railway transportation is an important carrier of personnel mobility and is widely recognized as a comfortable, fast and safe mode of transport.
The complex environments along railway lines pose great challenges to train safety. Existing methods for monitoring obstacles on a track fall roughly into two categories: pure vision algorithms based on image data, and pure point cloud detection algorithms based on lidar.
Pure vision algorithms based on image data suffer from the camera's sensitivity to light and shadow: in the daytime, occluding objects cast shadows, and at night, poor lighting yields low image quality, both of which cause false detections or missed detections.
Most existing pure point cloud detection algorithms based on lidar detect obstacles through continuous-frame background filtering or offline map matching. Such methods must compute over all collected points, so their computational complexity and difficulty are high, and because interference points may be present in the collected point cloud, their obstacle monitoring accuracy is low.
Disclosure of Invention
The embodiment of the invention provides a method, a device, equipment and a storage medium for monitoring obstacles.
According to an aspect of the present invention, there is provided an obstacle monitoring method including:
acquiring point cloud data collected by a lidar, wherein the point cloud data comprises at least two screening boxes;
generating at least one close-range boundary curve based on the at least two screening boxes;
generating at least one long-range fitted curve based on the at least two screening boxes through continuous iteration;
obtaining a track boundary curve based on the close-range boundary curve and the long-range fitted curve;
screening an obstacle point cloud from the point cloud data according to the at least one track boundary curve;
and determining obstacle information according to the obstacle point cloud.
According to another aspect of the present invention, there is provided an obstacle monitoring device including:
a point cloud data acquisition module, configured to acquire point cloud data collected by the lidar, the point cloud data comprising at least two screening boxes;
a close-range boundary curve generation module, configured to generate at least one close-range boundary curve based on the at least two screening boxes;
a long-range fitted curve generation module, configured to generate at least one long-range fitted curve based on the at least two screening boxes through continuous iteration;
a track boundary curve generation module, configured to obtain a track boundary curve based on the close-range boundary curve and the long-range fitted curve;
an obstacle point cloud screening module, configured to screen an obstacle point cloud from the point cloud data according to the at least one track boundary curve;
and an obstacle information determination module, configured to determine obstacle information according to the obstacle point cloud.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to enable the at least one processor to perform the obstacle monitoring method according to any of the embodiments of the invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions for causing a processor to implement the obstacle monitoring method according to any one of the embodiments of the present invention when the computer instructions are executed.
According to the technical scheme of the embodiments of the present invention, point cloud data collected by a lidar is acquired, the point cloud data comprising at least two screening boxes; at least one close-range boundary curve is generated based on the at least two screening boxes; at least one long-range fitted curve is generated through continuous iteration based on the at least two screening boxes; a track boundary curve is obtained based on the close-range boundary curve and the long-range fitted curve; an obstacle point cloud is screened from the point cloud data according to the at least one track boundary curve; and obstacle information is determined according to the obstacle point cloud. This solves the problem that cameras, being sensitive to light and shadow (shadows from object occlusion in the daytime, poor lighting and low image quality at night), cause false detections or missed detections.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present invention, nor do they necessarily limit the scope of the invention. Other features of the present invention will become apparent from the following description.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart of an obstacle monitoring method in an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an obstacle monitoring device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a flowchart of an obstacle monitoring method according to an embodiment of the present invention. This embodiment is applicable to obstacle monitoring scenarios, and the method may be executed by the obstacle monitoring device of an embodiment of the present invention, which may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
and S110, acquiring point cloud data acquired by the laser radar, wherein the point cloud data comprises at least two screening frames.
In order to improve the accuracy rate and obtain as many ground point clouds as possible, the laser radar can be arranged in the middle of the top of the train.
Wherein, the screening frame comprises a certain amount of point clouds. The screening frame may be a cubic frame, or may be a frame with other shapes, which is not limited in this embodiment of the present invention.
Specifically, the manner of acquiring the point cloud data acquired by the laser radar may be: the method comprises the steps of collecting initial point cloud data through a laser radar, determining at least two screening frames according to the initial point cloud data, and determining the point cloud data according to the at least two screening frames. The method for acquiring the point cloud data acquired by the laser radar can also be as follows: setting the coordinate of the central point of the first screening frame; acquiring the gravity centers of all point clouds in the corresponding screening frame according to the first screening frame; determining the coordinates of the center point of the second screening frame according to the gravity centers of all point clouds in the first screening frame; and circularly determining the coordinates of the center point of the Nth screening frame according to the gravity centers of all the point clouds in the (N-1) th screening frame. The method for acquiring the point cloud data acquired by the laser radar can also be as follows: setting the coordinates of the central point of the first screening frame on the left side; acquiring the gravity centers of all point clouds in the corresponding screening frame according to the first screening frame on the left side; determining the coordinates of the center points of a second screening frame on the left side according to the centers of gravity of all point clouds in the first screening frame on the left side; and circularly determining the coordinates of the center point of the Nth screening box on the left side according to the gravity centers of all point clouds in the Nth-1 th screening box on the left side. Setting the coordinates of the center point of the first screening frame on the right side; acquiring the gravity centers of all point clouds in the corresponding screening frames according to the first screening frame on the right side; determining the center point coordinates of a second screening frame on the right side according to the gravity centers of all point clouds in the first screening frame on the right side; the determination of the coordinates of the center points of the nth right screening frame according to the centers of gravity of all point clouds in the (N-1) th right screening frame is performed in a loop, and the left screening frame and the right screening frame can be determined in parallel in order to increase the speed, which is not limited in the embodiment of the present invention.
S120, generating at least one close-range boundary curve based on the at least two screening boxes.
Specifically, at least one close-range boundary curve may be generated from the center point coordinates of the at least two screening boxes. Alternatively, a close-range boundary curve of the left track may be generated from the center point coordinates of the N screening boxes corresponding to the left track, and a close-range boundary curve of the right track from the center point coordinates of the N screening boxes corresponding to the right track. As a further alternative: generating a first target set from the center point coordinates of the N screening boxes corresponding to the left track; generating a second target set from the center point coordinates of the N screening boxes corresponding to the right track; generating the close-range boundary curve of the left track from the first target set; and generating the close-range boundary curve of the right track from the second target set. A minimal sketch of such a curve fit is shown below.
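As an illustration only, the following sketch fits a close-range boundary curve through a set of box centers, assuming a low-order polynomial model x = f(y) in the ground plane with y pointing forward along the track; the function name, the polynomial degree and the sample coordinates are assumptions, not values from the patent.

```python
import numpy as np

def fit_close_range_curve(centers, degree=2):
    """Fit a close-range boundary curve through screening-box centers.

    centers: (N, 3) array of box center coordinates (x, y, z),
             with y the forward (along-track) direction.
    Returns polynomial coefficients of x as a function of y.
    """
    centers = np.asarray(centers, dtype=float)
    # The patent does not fix a curve model; a low-order polynomial
    # in the ground plane is one plausible choice (assumption).
    return np.polyfit(centers[:, 1], centers[:, 0], deg=degree)

# Example: left-track curve fitted from the first target set of centers.
left_centers = np.array([[1.50, 0.0, 0.0], [1.52, 1.0, 0.0],
                         [1.58, 2.0, 0.0], [1.70, 3.0, 0.0]])
left_curve = fit_close_range_curve(left_centers)
x_at_y = np.polyval(left_curve, 2.5)  # lateral boundary position at y = 2.5 m
```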
S130, generating at least one long-range fitted curve through continuous iteration based on the at least two screening boxes.
Specifically, based on the at least two screening boxes, at least one long-range fitted curve may be generated through continuous iteration as follows. The 1st long-range point is calculated from all preceding close-range points; the 2nd long-range point is calculated from all preceding close-range points plus the 1st long-range point; and from the 3rd long-range point onward the following rule applies: if M is greater than the screening box count threshold, a fitted curve is generated from the center point coordinates of the first M-1 screening boxes, and the center point coordinates of the Mth screening box are obtained along the extension direction of the fitted curve, where M is a positive integer not less than 3. This operation is executed in a loop until the constraint condition is met, and a long-range fitted curve is generated from the center point coordinates of all screening boxes obtained along the extension direction of the fitted curve. Repeating the above operation yields a long-range fitted curve for the left track and a long-range fitted curve for the right track.
Alternatively, the rule applied from the 3rd long-range point onward may be: if the number of points in the Mth screening box is smaller than the point count threshold, generating a fitted curve from the center point coordinates of the first M-1 screening boxes and obtaining the center point coordinates of the Mth screening box along its extension direction, where M is a positive integer not less than 3; looping until the constraint condition is met; then generating the long-range fitted curve from the center point coordinates of all screening boxes obtained along the extension direction, and repeating for the left and right tracks.
As a further alternative, the rule may be: if the Mth screening box contains no point cloud data, generating a fitted curve from the center point coordinates of the first M-1 screening boxes and obtaining the center point coordinates of the Mth screening box along its extension direction, where M is a positive integer not less than 3; looping until the constraint condition is met; then generating the long-range fitted curve from the center point coordinates of all screening boxes obtained along the extension direction, and repeating for the left and right tracks. A sketch of this iterative extension is given below.
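Purely as an illustration of the iterative extension described above, the sketch below repeatedly fits a curve through the known box centers, steps one box forward along the extension direction, and stops once Q consecutive boxes contain no points (one of the constraint conditions named in the text). The step size, box size and polynomial model are assumptions, not values from the patent.

```python
import numpy as np

def extend_far_range(centers, points, box_size=(1.0, 1.0, 1.0),
                     step=1.0, q_stop=3, degree=2):
    """Iteratively extend screening-box centers beyond the close range.

    centers: list of known (x, y, z) box centers (close range first);
             should hold at least degree + 1 entries.
    points:  (P, 3) array holding the full point cloud.
    """
    centers = [np.asarray(c, float) for c in centers]
    half = np.asarray(box_size, float) / 2.0
    empty_run = 0
    while empty_run < q_stop:
        c = np.array(centers)
        coef_x = np.polyfit(c[:, 1], c[:, 0], degree)  # fit the first M-1 centers
        coef_z = np.polyfit(c[:, 1], c[:, 2], degree)
        y_next = c[-1, 1] + step                       # step along the extension
        nxt = np.array([np.polyval(coef_x, y_next), y_next,
                        np.polyval(coef_z, y_next)])
        inside = np.all(np.abs(points - nxt) <= half, axis=1)
        empty_run = 0 if inside.any() else empty_run + 1
        centers.append(nxt)                            # Mth center from extension
    return np.array(centers)
```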
S140, obtaining a track boundary curve based on the close-range boundary curve and the long-range fitted curve.
The close-range boundary curves may include a left close-range boundary curve and a right close-range boundary curve.
The long-range fitted curves may include a left long-range fitted curve and a right long-range fitted curve.
A close-range curve is a curve within the detection range of the lidar; a long-range curve is a curve beyond the detection range of the lidar.
The track boundary curves include a left track boundary curve and a right track boundary curve, and may further include a track upper boundary curve, which is not limited in the embodiments of the present invention.
Specifically, the track boundary curve may be obtained from the close-range boundary curves and the long-range fitted curves as follows: obtaining a left boundary fitting curve based on the left close-range boundary curve and the left long-range fitted curve; obtaining a right boundary fitting curve based on the right close-range boundary curve and the right long-range fitted curve; and obtaining the track boundary curve based on the left boundary fitting curve and/or the right boundary fitting curve. For example, if the user selects the left boundary fitting curve, the left boundary fitting curve is translated, and the track boundary curve is determined from the translated curve together with the left boundary fitting curve. If the user selects the right boundary fitting curve, the right boundary fitting curve is translated, and the track boundary curve is determined from the translated curve together with the right boundary fitting curve. If the user selects both, the track boundary curve is determined from the left and right boundary fitting curves.
Alternatively: obtaining the left and right boundary fitting curves as above; calculating the confidence of the left boundary fitting curve and that of the right boundary fitting curve and comparing them; defining the boundary fitting curve with the higher confidence as the track boundary curve; and obtaining the track boundary curve of the other side by translation.
As a further alternative: obtaining the left and right boundary fitting curves as above; calculating and comparing the two confidences; defining the higher-confidence boundary fitting curve as the track boundary curve and obtaining the other side by translation; then, if the confidence of the left boundary fitting curve is higher than that of the right one, generating the track upper boundary curve from the points in the N screening boxes of the left track whose Z coordinates exceed the coordinate threshold; and if the confidence of the right boundary fitting curve is higher than that of the left one, generating the track upper boundary curve from the points in the N screening boxes of the right track whose Z coordinates exceed the coordinate threshold.
S150, screening the obstacle point cloud from the point cloud data according to the at least one track boundary curve.
The obstacle point cloud consists of the points lying within the track area.
Specifically, the obstacle point cloud may be screened from the point cloud data as follows: screening out the points lying within the track from the point cloud data according to the left track boundary curve and the right track boundary curve, and determining these points as the obstacle point cloud.
Alternatively, the points lying within the track may be screened from the point cloud data according to the left track boundary curve, the right track boundary curve and the track upper boundary curve, and determined as the obstacle point cloud. It should be noted that when the number of screened points is greater than 0, it is determined that an obstacle exists on the track, and the number of obstacles may be determined by a clustering algorithm: the clustering result has as many obstacles as it has classes. A sketch of this screening step follows.
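A minimal sketch of the in-track screening, assuming the boundary curves are represented as polynomials x = f(y) as in the earlier sketches; the optional upper boundary is simplified here to a constant height, which is an assumption.

```python
import numpy as np

def screen_obstacle_points(points, left_coef, right_coef, z_top=None):
    """Keep only the points lying between the left and right track
    boundary curves (and, optionally, below an upper boundary)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x_left = np.polyval(left_coef, y)
    x_right = np.polyval(right_coef, y)
    lo = np.minimum(x_left, x_right)
    hi = np.maximum(x_left, x_right)
    mask = (x > lo) & (x < hi)          # inside the left/right boundaries
    if z_top is not None:               # optional track upper boundary
        mask &= z < z_top
    return points[mask]                 # obstacle point cloud
```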
S160, determining the obstacle information according to the obstacle point cloud.
Specifically, the obstacle information may be determined as follows: clustering the obstacle point cloud to obtain at least one point cloud set, and determining the obstacle information from the position coordinates and the parameter information of the bounding box corresponding to each point cloud set. Alternatively: clustering the obstacle point cloud to obtain at least one point cloud set; acquiring the target position information and feature information of the obstacle in an image to be recognized captured by a camera; projecting the position coordinates of each bounding box into the image to be recognized to obtain first position information of each bounding box relative to the image; and fusing the feature information of the obstacle with the parameter information of the bounding box according to the first position information and the target position information to obtain the obstacle information. A clustering sketch is given below.
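The patent calls for a clustering algorithm without specifying one; the sketch below uses DBSCAN purely as an example, with assumed eps and min_samples values, and reports one axis-aligned bounding box per cluster.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def obstacles_from_points(obstacle_points, eps=0.5, min_samples=5):
    """Cluster the screened obstacle points; the number of clusters is
    the number of obstacles, and each cluster yields one bounding box."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_points)
    info = []
    for k in set(labels) - {-1}:                 # label -1 marks noise points
        pts = obstacle_points[labels == k]
        info.append({
            "center": pts.mean(axis=0),          # bounding-box position
            "size": pts.max(axis=0) - pts.min(axis=0),
            "num_points": len(pts),
        })
    return info
```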
Optionally, acquiring the point cloud data collected by the lidar, the point cloud data comprising at least two screening boxes, includes:
setting the center point coordinates of the first screening box;
obtaining the centroid of all points in the first screening box;
determining the center point coordinates of the second screening box from the centroid of all points in the first screening box;
determining, in a loop, the center point coordinates of the Nth screening box from the centroid of all points in the (N-1)th screening box, until the center point coordinates of all screening boxes are obtained; wherein N is a positive integer not less than 2.
The center point coordinates of the first screening box may be set manually. For example, a section of track point cloud near the train head is selected manually through pass-through filtering to form the first screening box.
The screening box may be a 3D box. A 3D box compensates for the limitations of manually selecting each new center, and its size can adapt to tracks of different curvatures.
It should be noted that within close range the center point coordinates of two consecutive boxes differ little; the center point of the next box differs from that of the current box only in the forward direction. For example, if the center point coordinate of the current box is (x, y, z), the center point coordinate of the next box is (x, y+1, z).
Specifically, the center point coordinates of the second screening box may be determined from the centroid of all points in the first screening box by taking the average of the point coordinates in the first screening box as the center point coordinates of the second screening box. Alternatively, the center point coordinates of the second screening box may be calculated based on the following formulas:

$$x_2 = \mathrm{scale}_x \cdot \frac{1}{n}\sum_{i=1}^{n}(x_1)_i,\qquad y_2 = \mathrm{scale}_y \cdot \frac{1}{n}\sum_{i=1}^{n}(y_1)_i,\qquad z_2 = \mathrm{scale}_z \cdot \frac{1}{n}\sum_{i=1}^{n}(z_1)_i$$

where $(x_2, y_2, z_2)$ are the center point coordinates of the second screening box; $\mathrm{scale}_x$, $\mathrm{scale}_y$ and $\mathrm{scale}_z$ are the scaling factors for the x, y and z coordinates; $n$ is the total number of points in the first screening box; $(x_1)_i$, $(y_1)_i$ and $(z_1)_i$ are the coordinates of the i-th point in the first screening box; and the averages $\frac{1}{n}\sum_i (x_1)_i$, $\frac{1}{n}\sum_i (y_1)_i$ and $\frac{1}{n}\sum_i (z_1)_i$ are the coordinates of the centroid of the points in the first screening box.
Specifically, determining, in a loop, the center point coordinates of the Nth screening box from the centroid of all points in the (N-1)th screening box, until the center point coordinates of all screening boxes are obtained, may proceed by determining the center point coordinates of the Nth screening box based on the following formulas:

$$x_{center} = \mathrm{scale}_x \cdot \frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y_{center} = \mathrm{scale}_y \cdot \frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z_{center} = \mathrm{scale}_z \cdot \frac{1}{n}\sum_{i=1}^{n}z_i$$

where $(x_{center}, y_{center}, z_{center})$ are the center point coordinates of the Nth screening box; $\mathrm{scale}_x$, $\mathrm{scale}_y$ and $\mathrm{scale}_z$ are the scaling factors for the x, y and z coordinates; $n$ is the total number of points in the (N-1)th screening box; and $(x_i, y_i, z_i)$ are the coordinates of the i-th point in the (N-1)th screening box, so that the averages are the coordinates of that box's centroid.
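A direct transcription of the formulas above; the scale values are placeholders, since the patent does not state them.

```python
import numpy as np

def next_box_center(points_in_prev_box, scale=(1.0, 1.0, 1.0)):
    """Center point of the Nth screening box, computed as the scaled
    centroid of the points inside the (N-1)th screening box."""
    centroid = np.asarray(points_in_prev_box, float).mean(axis=0)
    return np.asarray(scale, float) * centroid
```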
Optionally, generating at least one close-range boundary curve based on the at least two screening boxes includes:
generating a first target set from the center point coordinates of the N screening boxes corresponding to the left track;
generating a second target set from the center point coordinates of the N screening boxes corresponding to the right track;
generating a close-range boundary curve of the left track from the first target set;
and generating a close-range boundary curve of the right track from the second target set.
The first target set includes the center point coordinates of the N screening boxes corresponding to the left track; the second target set includes the center point coordinates of the N screening boxes corresponding to the right track.
Specifically, the close-range boundary curve of the left track may be generated by fitting the center point coordinates of the N screening boxes corresponding to the left track in the first target set.
Specifically, the close-range boundary curve of the right track may be generated by fitting the center point coordinates of the N screening boxes corresponding to the right track in the second target set.
Optionally, generating at least one long-range fitted curve through continuous iteration based on the at least two screening boxes includes:
if the Mth screening box contains no point cloud data, generating a fitted curve from the center point coordinates of the first M-1 screening boxes, and obtaining the center point coordinates of the Mth screening box along the extension direction of the fitted curve, where M is a positive integer not less than 3;
executing the above operation in a loop until the constraint condition is met, and generating a long-range fitted curve from the center point coordinates of all screening boxes obtained along the extension direction of the fitted curve;
and repeating the above operation to obtain a long-range fitted curve of the left track and a long-range fitted curve of the right track.
The constraint condition may be that M is greater than a preset number of screening boxes; it may also be that no point cloud data exists in Q consecutive screening boxes; or that the number of points in each of Q consecutive screening boxes is smaller than the point count threshold.
Specifically, the farther the distance, the more dispersed the point cloud distribution. Beyond a certain distance, a screening box may contain no point cloud data, or fewer points than the point count threshold; if the center point coordinates of the next screening box were determined in the close-range manner, they either could not be obtained or would be inaccurate. Therefore, the center point coordinates of the preceding screening boxes are fitted to obtain a fitted curve, and the center point coordinates of the next screening box are obtained along its extension direction; if the next 3 consecutive screening boxes contain no point cloud data, the process ends. Connecting the center point coordinates of the screening boxes obtained by fitting yields the long-range fitted curve.
Specifically, generating a fitted curve from the center point coordinates of the first M-1 screening boxes and obtaining the center point coordinates of the Mth screening box along its extension direction avoids interruptions caused by gaps and sparseness in the track point cloud.
Optionally, the constraint condition is that no point cloud data exists in Q consecutive screening boxes, where Q is a positive integer not less than 2.
Optionally, obtaining the track boundary curve based on the close-range boundary curve and the long-range fitted curve includes:
obtaining a left boundary fitting curve based on the left close-range boundary curve and the left long-range fitted curve;
obtaining a right boundary fitting curve based on the right close-range boundary curve and the right long-range fitted curve;
calculating the confidence of the left boundary fitting curve and the confidence of the right boundary fitting curve, and comparing them;
and defining the boundary fitting curve with the higher confidence as the track boundary curve, and obtaining the track boundary curve of the other side by translation.
Specifically, the confidence of the left boundary fitting curve may be determined from the total number of screening boxes corresponding to the left track, the sum of the numbers of screening boxes corresponding to the left and right tracks, the residual term corresponding to the left track, and the sum of the residual terms corresponding to the left and right tracks. For example, a confidence of the following form may be used:

$$\mathrm{confidence}_{left} = \frac{N_{left}}{N_{total}}\cdot\left(1 - \frac{E_{left}}{E_{total}}\right)$$

where $\mathrm{confidence}_{left}$ is the confidence of the left boundary fitting curve, $N_{left}$ is the total number of screening boxes corresponding to the left track, $N_{total}$ is the sum of the numbers of screening boxes corresponding to the left and right tracks, $E_{left}$ is the residual term corresponding to the left track, and $E_{total}$ is the sum of the residual terms corresponding to the left and right tracks.
Specifically, the confidence of the right boundary fitting curve may be determined analogously from the total number of screening boxes corresponding to the right track, $N_{total}$, the residual term corresponding to the right track, and $E_{total}$; for example, a confidence of the following form may be used:

$$\mathrm{confidence}_{right} = \frac{N_{right}}{N_{total}}\cdot\left(1 - \frac{E_{right}}{E_{total}}\right)$$

where $\mathrm{confidence}_{right}$ is the confidence of the right boundary fitting curve, $N_{right}$ is the total number of screening boxes corresponding to the right track, $N_{total}$ is the sum of the numbers of screening boxes corresponding to the left and right tracks, $E_{right}$ is the residual term corresponding to the right track, and $E_{total}$ is the sum of the residual terms corresponding to the left and right tracks.
Specifically, defining the boundary fitting curve with the higher confidence as the track boundary curve and obtaining the track boundary curve of the other side by translation may proceed as follows: if the confidence of the left boundary fitting curve is greater than that of the right one, the track boundary curve of the other side is obtained by translating the left boundary fitting curve; if it is smaller, the track boundary curve of the other side is obtained by translating the right boundary fitting curve; and if the two confidences are equal, the track boundary curve of the other side may be obtained by translating either curve, which is not limited in the embodiments of the present invention.
It should be noted that on a curve the points on the inner side may be sparse or severely missing, so that no valid track points exist over a long stretch and detection on that side must terminate; the points on the outer side of the curve, however, are abundant, so the track can be detected using the outer-side points alone, and the corresponding inner-side boundary is obtained simply by translating the outer-side curve.
Specifically, the track boundary curve of the other side may be obtained by translating the higher-confidence boundary fitting curve by the track width; alternatively, it may be obtained by translating the higher-confidence boundary fitting curve according to both the track width and the curvature of the track curve. A sketch of this selection-and-translation step follows.
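A minimal sketch, assuming the boundary fitting curves are polynomials x = f(y) and that translating by the track width amounts to shifting the constant term; the sign convention (left boundary at larger x) and the standard-gauge width are assumptions.

```python
import numpy as np

def pick_and_translate(left_coef, right_coef, conf_left, conf_right,
                       track_width=1.435):
    """Keep the higher-confidence boundary fitting curve and obtain the
    other side by translating it laterally by the track width."""
    if conf_left >= conf_right:
        other = np.array(left_coef, float)
        other[-1] -= track_width        # shift from the left boundary rightward
        return left_coef, other         # (left boundary, right boundary)
    other = np.array(right_coef, float)
    other[-1] += track_width            # shift from the right boundary leftward
    return other, right_coef            # (left boundary, right boundary)
```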
Optionally, determining the center point coordinates of the Nth screening box from the centroid of all points in the (N-1)th screening box includes:
determining the coordinates of the center point of the Nth screening box based on the following formula:
$$x_{center} = \mathrm{scale}_x \cdot \frac{1}{n}\sum_{i=1}^{n}x_i,\qquad y_{center} = \mathrm{scale}_y \cdot \frac{1}{n}\sum_{i=1}^{n}y_i,\qquad z_{center} = \mathrm{scale}_z \cdot \frac{1}{n}\sum_{i=1}^{n}z_i$$

where $(x_{center}, y_{center}, z_{center})$ are the center point coordinates of the Nth screening box; $\mathrm{scale}_x$, $\mathrm{scale}_y$ and $\mathrm{scale}_z$ are the scaling factors for the x, y and z coordinates; $n$ is the total number of points in the (N-1)th screening box; $(x_i, y_i, z_i)$ are the coordinates of the i-th point in the (N-1)th screening box; and the averages $\frac{1}{n}\sum_i x_i$, $\frac{1}{n}\sum_i y_i$ and $\frac{1}{n}\sum_i z_i$ are the coordinates of the centroid of the points in the (N-1)th screening box.
Optionally, the confidence of the left boundary-fitted curve is calculated based on the following formula:
$$\mathrm{confidence}_{left} = \frac{N_{left}}{N_{total}}\cdot\left(1 - \frac{E_{left}}{E_{total}}\right)$$

where $\mathrm{confidence}_{left}$ is the confidence of the left boundary fitting curve, $N_{left}$ is the total number of screening boxes corresponding to the left track, $N_{total}$ is the sum of the numbers of screening boxes corresponding to the left and right tracks, $E_{left}$ is the residual term corresponding to the left track, and $E_{total}$ is the sum of the residual terms corresponding to the left and right tracks.
The residual term is computed as

$$E = \sum_{j=1}^{m}\bigl(f(k_j) - h_j\bigr)^2$$

where $f(\cdot)$ is the curve fitting equation function, $k_j$ is the x coordinate of the j-th fitting point, $h_j$ is the y coordinate of the j-th fitting point, and $m$ is the total number of fitting points.
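A short sketch of these two quantities, using the confidence form given above and polynomial boundary curves as in the earlier sketches.

```python
import numpy as np

def residual_term(coef, fit_points):
    """E = sum_j (f(k_j) - h_j)^2 over the fitting points (k_j, h_j)."""
    k, h = fit_points[:, 0], fit_points[:, 1]
    return float(np.sum((np.polyval(coef, k) - h) ** 2))

def side_confidence(n_side, n_total, e_side, e_total):
    """Confidence of one side's boundary fitting curve: a larger share of
    screening boxes and a smaller share of the residual both raise it."""
    return (n_side / n_total) * (1.0 - e_side / e_total)
```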
Optionally, determining the obstacle information according to the obstacle point cloud includes:
clustering the obstacle point clouds to obtain at least one point cloud set;
and determining the obstacle information according to the position coordinates and the parameter information of the bounding box corresponding to each point cloud set.
The obstacle information may include the number of obstacles, and may further include the positions of the obstacles, which is not limited in the embodiments of the present invention.
The parameter information of the bounding box may include the displacement of the bounding box and may also include the velocity of the bounding box, which is not limited in the embodiments of the present invention.
Specifically, the obstacle information may be determined from the position coordinates and the parameter information of the bounding box corresponding to each point cloud set as follows: if the number of points in a bounding box is greater than a first count threshold, it is determined that an obstacle exists, and the number of bounding boxes whose point count exceeds the first count threshold is taken as the number of obstacles. Alternatively: acquiring the target position information and feature information of the obstacle in an image to be recognized captured by a camera; projecting the position coordinates of each bounding box into the image to be recognized to obtain first position information of each bounding box relative to the image; and fusing the feature information of the obstacle with the parameter information of the bounding box according to the first position information and the target position information to obtain the obstacle information. As a further alternative: acquiring the position coordinates of the bounding box corresponding to each point cloud set; determining the displacement of each bounding box from its position coordinates in adjacent frames of the point cloud; and determining the obstacle information from the position coordinates of the bounding boxes and their displacements.
In a specific example, limited by the sensing capability of the lidar, track detection can only be performed within a certain range, and track and obstacle detection at long range can only use a coarse algorithm, which still achieves a certain accuracy for somewhat large obstacles. The train can be located from its actual position to judge whether it is on a long straight section. If it is, then with the close-range track known, the track line is extended to select the obstacles within the lane, which greatly increases the early-warning distance and enables timely and effective early warning for large obstacles. If a point cloud early warning is received, the corresponding image region is found by matching with a telephoto camera, 2D obstacle detection is performed, and a visual interface is provided for train safety personnel to judge. In the second half of the close-range track detection, early-warning processing uses only the point cloud of one side's track, and visual global detection is performed on the image region corresponding to that track point cloud to judge whether an obstacle exists.
Optionally, determining the obstacle information according to the position coordinates and the parameter information of the bounding box corresponding to each point cloud set includes:
acquiring target position information and characteristic information of an obstacle in an image to be identified, which is shot by a camera;
projecting the position coordinates of each bounding box to the image to be identified to obtain first position information of each bounding box relative to the image to be identified;
and fusing the characteristic information of the obstacle and the parameter information of the bounding box according to the first position information and the target position information to obtain obstacle information.
Specifically, all obstacle points are clustered, and obstacles are judged from the number of points in each cluster. The obstacle type need not be determined; judging by point count is reliable to a certain extent, but when the point cloud consists only of sparse scattered points, an auxiliary camera is needed to judge the obstacle accurately.
The camera may be a telephoto camera.
The target position information of the obstacle may be coordinates of a center point of the target detection frame, a length of the target detection frame, and a width of the target detection frame.
The characteristic information may be a type of the obstacle, and may be, for example, a human, an animal, a motor vehicle, and the like.
Specifically, the first position information of each bounding box relative to the image to be recognized may be obtained by projecting the barycenter coordinates of each bounding box into the image to be recognized according to the projection parameters and the camera extrinsic parameters.
Specifically, the target position information and the feature information of the obstacle in the image to be recognized captured by the camera may be acquired by inputting the image to be recognized into a recognition model.
Specifically, the obstacle information may be obtained by fusing the feature information of the obstacle with the parameter information of the bounding box according to the first position information and the target position information: if the distance between the first position information and the target position information is smaller than the distance threshold, the feature information of the obstacle corresponding to the target position information is fused with the parameter information of the bounding box corresponding to the first position information. A sketch of this projection and fusion follows.
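Purely illustrative sketch of the projection-and-fusion step, assuming a standard pinhole model with a 4x4 lidar-to-camera extrinsic matrix, a 3x3 intrinsic matrix, and a pixel-distance threshold for matching; none of these parameter shapes or values are specified by the patent.

```python
import numpy as np

def project_to_image(centers_lidar, extrinsic, intrinsic):
    """Project 3D bounding-box barycenters into the image plane.

    centers_lidar: (N, 3); extrinsic: (4, 4) lidar->camera transform;
    intrinsic: (3, 3) camera matrix. Returns (N, 2) pixel coordinates.
    """
    pts = np.hstack([centers_lidar, np.ones((len(centers_lidar), 1))])
    cam = (extrinsic @ pts.T)[:3]          # lidar frame -> camera frame
    uvw = intrinsic @ cam                  # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T            # first position information (u, v)

def fuse(first_pos, boxes, targets, dist_thresh=30.0):
    """Fuse a detection's feature info with a bounding box's parameter
    info when their pixel distance falls below the threshold."""
    fused = []
    for (u, v), box in zip(first_pos, boxes):
        for t in targets:                  # t: {"center": (u, v), "type": ...}
            du, dv = u - t["center"][0], v - t["center"][1]
            if np.hypot(du, dv) < dist_thresh:
                fused.append({**box, "type": t["type"]})
                break
    return fused
```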
In a specific example, for long-distance detection, the image to be recognized is captured through a telephoto lens and fed into a target detection algorithm, which can detect common target types with obvious features such as people, animals and motor vehicles, and outputs the center point coordinates and the length and width of each target detection frame. Assisted by a deep-neural-network track detection algorithm, the pixel coordinates of the long-distance track boundary in the image can also be output; a curve fitting algorithm then yields an equation for the track in image coordinates, and based on this equation and the target detection frame information, safety early warning can be performed for obstacles at very long distances.
According to the technical scheme of this embodiment, point cloud data collected by a lidar is acquired, the point cloud data comprising at least two screening boxes; at least one close-range boundary curve is generated based on the at least two screening boxes; at least one long-range fitted curve is generated through continuous iteration based on the at least two screening boxes; at least one track boundary curve is obtained based on the close-range boundary curve and the long-range fitted curve; an obstacle point cloud is screened from the point cloud data according to the at least one track boundary curve; and obstacle information is determined according to the obstacle point cloud. This solves the problem that cameras, being sensitive to light and shadow (shadows from object occlusion in the daytime, poor lighting and low image quality at night), cause false or missed detections. The obstacle point cloud is obtained by deleting the points outside the range of the left and right tracks, and the obstacle information is then derived from the obstacle point cloud; since points outside the left and right tracks cannot constitute obstacles to the railway, deleting them greatly reduces the computational load of the system and improves the accuracy of obstacle monitoring.
Example two
Fig. 2 is a schematic structural diagram of an obstacle monitoring device according to an embodiment of the present invention. This embodiment is applicable to obstacle monitoring, and the device may be implemented in software and/or hardware and may be integrated into any equipment providing an obstacle monitoring function. As shown in fig. 2, the obstacle monitoring device specifically includes: a point cloud data acquisition module 210, a close-range boundary curve generation module 220, a long-range fitted curve generation module 230, a track boundary curve generation module 240, an obstacle point cloud screening module 250 and an obstacle information determination module 260.
The point cloud data acquisition module 210 is configured to acquire point cloud data collected by a lidar, the point cloud data comprising at least two screening boxes;
the close-range boundary curve generation module 220 is configured to generate at least one close-range boundary curve based on the at least two screening boxes;
the distant-view fitted curve generation module 230 is configured to generate at least one distant-view fitted curve based on the at least two screening boxes in a continuous iteration manner;
the track boundary curve generation module 240 is configured to obtain a track boundary curve based on the close-range boundary curve and the distant-view fitted curve;
the obstacle point cloud screening module 250 is configured to screen an obstacle point cloud from the point cloud data according to at least one track boundary curve;
and the obstacle information determination module 260 is configured to determine obstacle information according to the obstacle point cloud (a composition sketch follows below).
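A minimal sketch of how these six modules could be composed; the callable-based wiring and all names are illustrative assumptions, and the module numbers refer to fig. 2:

```python
class ObstacleMonitor:
    """Wire the six modules of fig. 2 into one pipeline; each stage is
    a callable supplied by the corresponding module."""

    def __init__(self, acquire, near_curve, far_curve,
                 track_curve, screen, describe):
        self.acquire = acquire          # module 210
        self.near_curve = near_curve    # module 220
        self.far_curve = far_curve      # module 230
        self.track_curve = track_curve  # module 240
        self.screen = screen            # module 250
        self.describe = describe        # module 260

    def run(self):
        cloud, boxes = self.acquire()
        near = self.near_curve(boxes)
        far = self.far_curve(boxes)
        track = self.track_curve(near, far)
        return self.describe(self.screen(cloud, track))
```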
Optionally, the point cloud data acquisition module is specifically configured to:
set the center-point coordinates of the first screening box;
acquire the centroid of all points falling within the first screening box;
determine the center-point coordinates of the second screening box according to that centroid;
and, executing in a loop, determine the center-point coordinates of the Nth screening box according to the centroid of all points in the (N-1)th screening box, until the center-point coordinates of all screening boxes are obtained, where N is a positive integer not less than 2 (a sketch of this loop is given below).
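A sketch of the loop, assuming each screening box is an axis-aligned region with a hypothetical half-extent `half`, and that the next center is the scaled centroid of the previous box's points (the scaling formula is given later in this embodiment):

```python
import numpy as np

def propagate_centers(cloud, first_center, half, n_boxes,
                      scale=(1.0, 1.0, 1.0)):
    """Iteratively place screening boxes: the center of box N is the
    scaled centroid of the points inside box N-1."""
    centers = [np.asarray(first_center, dtype=float)]
    for _ in range(1, n_boxes):
        lo, hi = centers[-1] - half, centers[-1] + half
        mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
        if not mask.any():   # empty box: handled by distant-view fitting
            break
        centers.append(np.asarray(scale) * cloud[mask].mean(axis=0))
    return np.vstack(centers)
```

In practice the box geometry and the scale factors would come from the deployment; they are placeholders here.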
Optionally, the close-range boundary curve generation module is specifically configured to:
generate a first target set from the center-point coordinates of the N screening boxes corresponding to the left track;
generate a second target set from the center-point coordinates of the N screening boxes corresponding to the right track;
generate a close-range boundary curve of the left track from the first target set;
and generate a close-range boundary curve of the right track from the second target set (a fitting sketch follows below).
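For instance, each target set can be fitted with a low-order polynomial; a cubic in the ground plane is an assumption, since the patent does not fix the curve family:

```python
import numpy as np

def near_boundary_curve(target_set, degree=3):
    """Fit y = f(x) through the box centers (x, y) of one rail."""
    return np.polyfit(target_set[:, 0], target_set[:, 1], degree)

first_set = np.array([[1.0, 0.72], [2.0, 0.74], [4.0, 0.80], [8.0, 0.95]])
second_set = first_set + np.array([0.0, 1.435])  # nominal gauge offset
left_near = near_boundary_curve(first_set)       # left-track curve
right_near = near_boundary_curve(second_set)     # right-track curve
```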
Optionally, the distant-view fitted curve generation module is specifically configured to:
if the Mth screening box contains no point cloud data, generate a fitted curve through the center-point coordinates of the first M-1 screening boxes, and obtain the center-point coordinates of the Mth screening box in the extension direction of the fitted curve, where M is a positive integer not less than 3;
execute this operation in a loop until a constraint condition is met, and generate a distant-view fitted curve from the center-point coordinates of all screening boxes obtained in the extension direction of the fitted curve;
and repeat the operation to obtain a distant-view fitted curve of the left track and a distant-view fitted curve of the right track.
Optionally, the constraint condition is that Q consecutive screening boxes contain no point cloud data, where Q is a positive integer not less than 2 (a sketch of this iteration follows below).
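A sketch of the iteration under stated assumptions: centers are 2-D (x, y), the curve is a quadratic, the box advances by a fixed `step`, and `has_points` stands in for the real in-box occupancy test:

```python
import numpy as np

def extend_centers(centers, step=1.0, degree=2, q=3,
                   has_points=lambda x: x < 12.0):
    """Distant-view fitting: when a box is empty, fit a curve through
    the centers found so far and take the next center on its extension;
    stop after q consecutive empty boxes (the constraint condition)."""
    centers = [np.asarray(c, dtype=float) for c in centers]
    empty_run = 0
    while empty_run < q:
        xs = np.array([c[0] for c in centers])
        ys = np.array([c[1] for c in centers])
        coef = np.polyfit(xs, ys, degree)   # curve through known centers
        x_next = xs[-1] + step              # advance along the extension
        centers.append(np.array([x_next, np.polyval(coef, x_next)]))
        empty_run = 0 if has_points(x_next) else empty_run + 1
    return np.vstack(centers)

extended = extend_centers([[1.0, 0.72], [2.0, 0.74], [3.0, 0.78]])
```

At least degree + 1 initial centers should be supplied so the first fit is well determined.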
Optionally, the track boundary curve generation module is specifically configured to:
obtain a left boundary fitted curve based on the left close-range boundary curve and the left distant-view fitted curve;
obtain a right boundary fitted curve based on the right close-range boundary curve and the right distant-view fitted curve;
calculate and compare the confidences of the left and right boundary fitted curves;
and take the boundary fitted curve with the higher confidence as a track boundary curve, obtaining the track boundary curve on the other side by translation (a selection sketch follows below).
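A sketch of the selection-and-translation step; shifting the kept polynomial laterally by the standard gauge is an assumption standing in for the patent's translation:

```python
import numpy as np

def track_boundaries(left_coef, right_coef, conf_left, conf_right,
                     gauge=1.435):
    """Keep the higher-confidence boundary fit as the track boundary
    curve and translate it to obtain the other side."""
    shift = np.zeros_like(np.asarray(left_coef, dtype=float))
    if conf_left >= conf_right:
        shift[-1] = gauge            # shift the constant term sideways
        return left_coef, left_coef + shift
    shift[-1] = -gauge
    return right_coef + shift, right_coef
```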
Optionally, the point cloud data acquisition module is specifically configured to determine the center-point coordinates of the Nth screening box based on the following formulas:

$$x_{center} = scale_x \cdot \frac{1}{n}\sum_{i=1}^{n} x_i$$

$$y_{center} = scale_y \cdot \frac{1}{n}\sum_{i=1}^{n} y_i$$

$$z_{center} = scale_z \cdot \frac{1}{n}\sum_{i=1}^{n} z_i$$

wherein $x_{center}$, $y_{center}$ and $z_{center}$ are the x, y and z coordinates of the center point of the Nth screening box; $scale_x$, $scale_y$ and $scale_z$ are the scaling factors corresponding to the x, y and z coordinates; $n$ is the total number of points in the (N-1)th screening box; and $x_i$, $y_i$ and $z_i$ are the x, y and z coordinates of the ith point in the (N-1)th screening box.
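A worked numeric check of these formulas (the point coordinates and scale factors are illustrative only):

```python
import numpy as np

pts = np.array([[10.2, 0.7, 0.1],     # points inside the (N-1)th box
                [10.6, 0.8, 0.1],
                [10.9, 0.7, 0.2]])
scale = np.array([1.05, 1.0, 1.0])    # scale_x, scale_y, scale_z
center = scale * pts.mean(axis=0)     # scale_* times the mean coordinate
print(center)                         # approx. [11.095  0.733  0.133]
```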
Optionally, the track boundary curve generation module is specifically configured to calculate the confidence of the left boundary fitted curve based on the following formula:

$$confidence_{left} = \frac{N_{left}}{N_{total}} \cdot \left(1 - \frac{E_{left}}{E_{total}}\right)$$

wherein $confidence_{left}$ is the confidence of the left boundary fitted curve; $N_{left}$ is the total number of screening boxes corresponding to the left track; $N_{total}$ is the sum of the numbers of screening boxes corresponding to the left and right tracks; $E_{left}$ is the residual term corresponding to the left track; and $E_{total}$ is the sum of the residual terms corresponding to the left and right tracks.
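In code, with the residual term read as the sum of squared fitting errors (one plausible interpretation; the closed form of the confidence above is likewise a reconstruction from the stated variables, not a quotation):

```python
import numpy as np

def fit_with_residual(target_set, degree=3):
    """Fit one side's boundary curve; E is the sum of squared fitting
    errors reported by the least-squares fit."""
    coef, resid, _rank, _sv, _rcond = np.polyfit(
        target_set[:, 0], target_set[:, 1], degree, full=True)
    return coef, (float(resid[0]) if resid.size else 0.0)

def confidence(n_side, n_total, e_side, e_total):
    """More screening boxes and a smaller residual share raise the
    confidence of that side's boundary fitted curve."""
    return (n_side / n_total) * (1.0 - e_side / e_total)
```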
Optionally, the obstacle information determination module is specifically configured to:
cluster the obstacle point cloud to obtain at least one point cloud set;
and determine the obstacle information according to the position coordinates and parameter information of the bounding box corresponding to each point cloud set (a clustering sketch follows below).
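A sketch of this step; DBSCAN and axis-aligned bounding boxes are common choices but assumptions here, since the patent names neither:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_obstacles(obstacle_pts, eps=0.5, min_samples=8):
    """Cluster the screened obstacle point cloud and return one
    axis-aligned bounding box (position + parameters) per cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(obstacle_pts)
    boxes = []
    for k in sorted(set(labels) - {-1}):   # -1 marks noise points
        pts = obstacle_pts[labels == k]
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        boxes.append({"center": (lo + hi) / 2.0,   # position coordinates
                      "size": hi - lo,             # parameter information
                      "num_points": int(len(pts))})
    return boxes
```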
Optionally, the obstacle information determination module is specifically configured to:
acquire target position information and feature information of an obstacle in an image to be recognized captured by a camera;
project the position coordinates of each bounding box onto the image to be recognized to obtain first position information of each bounding box relative to the image;
and fuse the feature information of the obstacle with the parameter information of the bounding box according to the first position information and the target position information to obtain the obstacle information (a projection sketch follows below).
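A sketch of the projection-and-fusion step; the pinhole model with calibrated intrinsics K and extrinsics T_cam_lidar, and the nearest-neighbour gating, are assumptions:

```python
import numpy as np

def project_center(xyz, K, T_cam_lidar):
    """Project a bounding-box center from the lidar frame to pixels."""
    p_cam = (T_cam_lidar @ np.append(xyz, 1.0))[:3]   # lidar -> camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]                           # perspective divide

def fuse(lidar_boxes, camera_targets, K, T, gate_px=40.0):
    """Attach camera feature information to each lidar bounding box
    whose projection falls near a detected target."""
    fused = []
    for b in lidar_boxes:
        uv = project_center(b["center"], K, T)        # first position info
        for t in camera_targets:   # t = {"uv": (u, v), "feature": ...}
            if np.linalg.norm(uv - np.asarray(t["uv"])) < gate_px:
                fused.append({**b, "feature": t["feature"]})
                break
    return fused
```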
The above device can execute the obstacle monitoring method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
According to the technical scheme of this embodiment, point cloud data collected by a lidar is acquired, the point cloud data comprising at least two screening boxes; at least one close-range boundary curve is generated based on the at least two screening boxes; at least one distant-view fitted curve is generated based on the at least two screening boxes in a continuous iteration manner; a track boundary curve is obtained based on the close-range boundary curve and the distant-view fitted curve; an obstacle point cloud is screened from the point cloud data according to at least one track boundary curve; and obstacle information is determined according to the obstacle point cloud. This solves the problem that a camera, being sensitive to light and shadow, suffers from shadows cast by occluding objects in the daytime and from poor light and low image quality at night, causing false or missed detections, and improves the accuracy of obstacle monitoring while reducing the computational load.
Example Three
Fig. 3 is a schematic diagram of an electronic device 10 that can be used to implement an embodiment of the present invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices (e.g., helmets, glasses, watches) and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit the implementations of the invention described and/or claimed herein.
As shown in fig. 3, the electronic device 10 includes at least one processor 11 and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor; the processor 11 can perform various suitable actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 can also store various programs and data necessary for the operation of the electronic device 10. The processor 11, the ROM 12 and the RAM 13 are connected to one another via a bus 14, to which an input/output (I/O) interface 15 is also connected.
A number of components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, or the like; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The processor 11 performs the various methods and processes described above, such as the obstacle monitoring method:
acquiring point cloud data collected by a lidar, wherein the point cloud data comprises at least two screening boxes;
generating at least one close-range boundary curve based on the at least two screening boxes;
generating at least one distant-view fitted curve based on the at least two screening boxes in a continuous iteration manner;
obtaining a track boundary curve based on the close-range boundary curve and the distant-view fitted curve;
screening an obstacle point cloud from the point cloud data according to at least one track boundary curve;
and determining obstacle information according to the obstacle point cloud (an end-to-end sketch of these steps follows below).
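A condensed, runnable sketch of the last two steps given the two track boundary curves y = f(x) produced by steps 1-4; a real deployment would cluster the remaining points as in the embodiments above:

```python
import numpy as np

def monitor(cloud, left_coef, right_coef):
    """Delete points outside the left/right track boundary curves,
    then summarise what remains as obstacle information."""
    y_lo = np.polyval(left_coef, cloud[:, 0])
    y_hi = np.polyval(right_coef, cloud[:, 0])
    inside = (cloud[:, 1] >= np.minimum(y_lo, y_hi)) & \
             (cloud[:, 1] <= np.maximum(y_lo, y_hi))
    kept = cloud[inside]
    if kept.size == 0:
        return None
    return {"center": kept.mean(axis=0), "num_points": int(len(kept))}

# usage: straight rails at y = 0 and y = 1.435, one stray point outside
cloud = np.array([[5.0, 0.7, 0.2], [5.2, 0.8, 0.3], [6.0, 3.0, 0.1]])
print(monitor(cloud, np.array([0.0, 0.0]), np.array([0.0, 1.435])))
```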
In some embodiments, the obstacle monitoring method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the obstacle monitoring method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the obstacle monitoring method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be performed. A computer program can execute entirely on a machine; partly on a machine; as a stand-alone software package, partly on a machine and partly on a remote machine; or entirely on a remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server (also called a cloud computing server or cloud host), a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present invention may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solution of the present invention can be achieved.
The above-described embodiments should not be construed as limiting the scope of the invention. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. An obstacle monitoring method, comprising:
acquiring point cloud data collected by a lidar, wherein the point cloud data comprises at least two screening boxes;
generating at least one close-range boundary curve based on the at least two screening boxes;
generating at least one distant-view fitted curve based on the at least two screening boxes in a continuous iteration manner;
obtaining a track boundary curve based on the close-range boundary curve and the distant-view fitted curve;
screening an obstacle point cloud from the point cloud data according to at least one track boundary curve;
and determining obstacle information according to the obstacle point cloud.
2. The obstacle monitoring method according to claim 1, wherein acquiring the point cloud data collected by the lidar, the point cloud data comprising at least two screening boxes, comprises:
setting the center-point coordinates of the first screening box;
acquiring the centroid of all points falling within the first screening box;
determining the center-point coordinates of the second screening box according to that centroid;
and, executing in a loop, determining the center-point coordinates of the Nth screening box according to the centroid of all points in the (N-1)th screening box, until the center-point coordinates of all screening boxes are obtained, wherein N is a positive integer not less than 2.
3. The method of claim 2, wherein determining the center-point coordinates of the Nth screening box according to the centroid of all points in the (N-1)th screening box comprises:
determining the center-point coordinates of the Nth screening box based on the following formulas:

$$x_{center} = scale_x \cdot \frac{1}{n}\sum_{i=1}^{n} x_i$$

$$y_{center} = scale_y \cdot \frac{1}{n}\sum_{i=1}^{n} y_i$$

$$z_{center} = scale_z \cdot \frac{1}{n}\sum_{i=1}^{n} z_i$$

wherein $x_{center}$, $y_{center}$ and $z_{center}$ are the x, y and z coordinates of the center point of the Nth screening box; $scale_x$, $scale_y$ and $scale_z$ are the scaling factors corresponding to the x, y and z coordinates; $n$ is the total number of points in the (N-1)th screening box; and $x_i$, $y_i$ and $z_i$ are the x, y and z coordinates of the ith point in the (N-1)th screening box.
4. The obstacle monitoring method according to any one of claims 1 to 3, wherein generating at least one close-range boundary curve based on the at least two screening boxes comprises:
generating a first target set according to the center-point coordinates of the N screening boxes corresponding to the left track;
generating a second target set according to the center-point coordinates of the N screening boxes corresponding to the right track;
generating a close-range boundary curve of the left track according to the first target set;
and generating a close-range boundary curve of the right track according to the second target set.
5. The obstacle monitoring method according to claim 4, wherein generating at least one distant-view fitted curve based on the at least two screening boxes in a continuous iteration manner comprises:
if the Mth screening box contains no point cloud data, generating a fitted curve through the center-point coordinates of the first M-1 screening boxes, and obtaining the center-point coordinates of the Mth screening box in the extension direction of the fitted curve, wherein M is a positive integer not less than 3;
executing this operation in a loop until a constraint condition is met, and generating a distant-view fitted curve according to the center-point coordinates of all screening boxes obtained in the extension direction of the fitted curve;
and repeating the operation to obtain a distant-view fitted curve of the left track and a distant-view fitted curve of the right track.
6. The obstacle monitoring method according to claim 5, wherein the constraint condition is that Q consecutive screening boxes contain no point cloud data, wherein Q is a positive integer not less than 2.
7. The obstacle monitoring method according to claim 1, 2, 3, 5 or 6, wherein obtaining a track boundary curve based on the close-range boundary curve and the distant-view fitted curve comprises:
obtaining a left boundary fitted curve based on the left close-range boundary curve and the left distant-view fitted curve;
obtaining a right boundary fitted curve based on the right close-range boundary curve and the right distant-view fitted curve;
calculating and comparing the confidences of the left and right boundary fitted curves;
and taking the boundary fitted curve with the higher confidence as a track boundary curve, and obtaining the track boundary curve on the other side by translation.
8. The method of claim 7, wherein the confidence of the left boundary fitted curve is calculated based on the following formula:

$$confidence_{left} = \frac{N_{left}}{N_{total}} \cdot \left(1 - \frac{E_{left}}{E_{total}}\right)$$

wherein $confidence_{left}$ is the confidence of the left boundary fitted curve; $N_{left}$ is the total number of screening boxes corresponding to the left track; $N_{total}$ is the sum of the numbers of screening boxes corresponding to the left and right tracks; $E_{left}$ is the residual term corresponding to the left track; and $E_{total}$ is the sum of the residual terms corresponding to the left and right tracks.
9. The method of claim 1, 2, 3, 5, 6 or 8, wherein determining obstacle information from the obstacle point cloud comprises:
clustering the obstacle point cloud to obtain at least one point cloud set;
and determining the obstacle information according to the position coordinates and the parameter information of the bounding box corresponding to each point cloud set.
10. The method of claim 9, wherein determining the obstacle information according to the position coordinates and the parameter information of the bounding box corresponding to each point cloud set comprises:
acquiring target position information and feature information of an obstacle in an image to be recognized captured by a camera;
projecting the position coordinates of each bounding box onto the image to be recognized to obtain first position information of each bounding box relative to the image;
and fusing the feature information of the obstacle with the parameter information of the bounding box according to the first position information and the target position information to obtain the obstacle information.
11. An obstacle monitoring device, comprising:
the point cloud data acquisition module is configured to acquire point cloud data collected by a lidar, the point cloud data comprising at least two screening boxes;
the close-range boundary curve generation module is configured to generate at least one close-range boundary curve based on the at least two screening boxes;
the distant-view fitted curve generation module is configured to generate at least one distant-view fitted curve based on the at least two screening boxes in a continuous iteration manner;
the track boundary curve generation module is configured to obtain a track boundary curve based on the close-range boundary curve and the distant-view fitted curve;
the obstacle point cloud screening module is configured to screen an obstacle point cloud from the point cloud data according to at least one track boundary curve;
and the obstacle information determination module is configured to determine obstacle information according to the obstacle point cloud.
12. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the obstacle monitoring method of any one of claims 1-10.
13. A computer-readable storage medium, having stored thereon computer instructions for causing a processor to, when executed, implement the obstacle monitoring method of any one of claims 1-10.
CN202210495269.6A, filed 2022-05-07: Obstacle monitoring method, device, equipment and storage medium (Pending)

Priority application: CN202210495269.6A, priority/filing date 2022-05-07, Obstacle monitoring method, device, equipment and storage medium
Publication: CN115166773A, published 2022-10-11
Family ID: 83483407
Country: CN


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination