CN114270410A - Point cloud fusion method and system for moving object and computer storage medium - Google Patents


Info

Publication number
CN114270410A
Authority
CN
China
Prior art keywords
point
point cloud
current frame
target object
points
Prior art date
Legal status
Pending
Application number
CN201980030566.XA
Other languages
Chinese (zh)
Inventor
潘志琛
李延召
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN114270410A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition

Abstract

A point cloud fusion method, system, and computer storage medium for a moving object. The method comprises: determining a point cloud belonging to a target object in the point cloud data of the current frame (S110); determining point clouds belonging to the target object in the point cloud data of adjacent frames before the current frame, wherein the point clouds belonging to the target object in the adjacent frames and in the current frame respectively comprise point clouds belonging to different parts of the target object (S120); and fusing at least part of the point clouds belonging to the target object in the adjacent frames into the current frame according to the change of the position and/or posture of the target object between the adjacent frames and the current frame (S130). By fusing the point clouds belonging to the target object in the adjacent frames into the current frame, the point cloud density of the target object is increased, which facilitates observing and tracking the target object.

Description

Point cloud fusion method and system for moving object and computer storage medium

Technical Field
Embodiments of the present invention relate to the field of information technology, and in particular to a point cloud fusion method and system for a moving object, and a computer storage medium.
Background
For mobile platforms that move autonomously, such as unmanned vehicles, a motion strategy can be formulated accurately, and collisions and similar accidents effectively avoided, only if the motion states of surrounding vehicles are tracked accurately. Existing mobile platforms such as unmanned vehicles acquire point cloud data through sensors such as lidar, and identify moving objects such as vehicles in the point cloud through point cloud recognition algorithms, thereby tracking the motion states of the moving objects. However, current methods mainly process a single frame of point cloud and cannot effectively make use of historical data.
Disclosure of Invention
In this summary, concepts in a simplified form are introduced that are further described in the detailed description. This summary of the invention is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In view of the deficiencies of the prior art, a first aspect of the embodiments of the present invention provides a point cloud fusion method for a moving object, including:
determining point clouds belonging to a target object in the point cloud data of the current frame;
determining point clouds belonging to the target object in point cloud data of adjacent frames before the current frame, wherein the point clouds belonging to the target object in the adjacent frames and the point clouds belonging to the target object in the current frame respectively comprise point clouds belonging to different positions on the target object;
and fusing at least part of point clouds belonging to the target object in the adjacent frames into the current frame according to the change of the position and/or the posture of the target object between the adjacent frames and the current frame.
A second aspect of an embodiment of the present invention provides a point cloud fusion system for a moving object, where the system includes:
a memory for storing executable instructions;
a processor configured to execute the instructions stored in the memory, so that the processor executes the point cloud fusion method for a moving object according to the first aspect of the embodiment of the present invention.
A third aspect of the embodiments of the present invention provides a computer storage medium on which a computer program is stored, the program, when executed by a processor, implementing the point cloud fusion method for a moving object of the first aspect of the embodiments of the present invention.
The point cloud fusion method and system for a moving object and the computer storage medium according to the embodiments of the present invention fuse the point clouds belonging to the target object in the adjacent frames into the current frame, thereby increasing the point cloud density of the target object and facilitating observation and tracking of the target object.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
FIG. 1 is a schematic flow chart of a point cloud fusion method for moving objects in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a moving object point cloud layering phenomenon;
FIG. 3 is a schematic flow chart of extracting edge tracing points according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of edge points extracted from a point cloud according to an embodiment of the present invention;
FIG. 5A is a schematic illustration of a portion of the sparse delineation points of an embodiment of the present invention;
FIG. 5B is a schematic diagram of recursively extracting edge points, in accordance with an embodiment of the invention;
FIG. 6 is a schematic diagram of computing a point cloud point normal vector according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of normal vectors of point cloud points according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating the merging of the normal vectors of point cloud points into the normal vectors of edge tracing points according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of searching for matching edge tracing points using a normal vector search method according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a nearest neighbor search method for searching for matching edge points, in accordance with an embodiment of the present invention;
FIG. 11 is a schematic diagram of a sliding window optimization method according to an embodiment of the invention;
FIG. 12 is a comparison of a single-frame vehicle body point cloud and a vehicle body point cloud fused using a method according to an embodiment of the invention;
FIG. 13 is a schematic view of a scanning pattern of a ranging device according to an embodiment of the present invention;
FIG. 14 is a block diagram illustrating a point cloud fusion system for moving objects according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
Fig. 1 shows a schematic flow diagram of a point cloud fusion method 100 of moving objects according to an embodiment of the present application. As shown in fig. 1, the method 100 includes the steps of:
in step S110, a point cloud belonging to a target object is determined in the point cloud data of the current frame;
in step S120, point clouds belonging to the target object are determined in point cloud data of adjacent frames before the current frame, wherein the point clouds belonging to the target object in the adjacent frames and the point clouds belonging to the target object in the current frame respectively include point clouds belonging to different positions on the target object;
in step S130, at least a portion of the point clouds belonging to the target object in the adjacent frames are fused into the current frame according to the change of the position and/or the posture of the target object between the adjacent frames and the current frame.
The point cloud fusion method 100 for moving objects according to the embodiment of the present invention fuses point clouds belonging to a target object in adjacent frames into a current frame, thereby increasing the point cloud density of the target object and facilitating observation and tracking of the target object.
The point cloud data may be point cloud data collected by a distance measuring device, and the distance measuring device may be an electronic device such as a laser radar or a laser rangefinder. In one embodiment, the ranging device is used to sense external environment information, such as distance information, orientation information, reflected intensity information, and velocity information of targets in the environment. A point cloud point in the point cloud data may include at least one kind of the external environment information measured by the distance measuring device.
In one implementation, the ranging device may detect the distance from a detected object to the ranging device by measuring the time of light propagation between the ranging device and the detected object, i.e., the Time-of-Flight (TOF). Alternatively, the ranging device may detect the distance from the detected object to the ranging device by other techniques, such as a ranging method based on phase shift measurement or a ranging method based on frequency shift measurement.
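As an illustration only (not part of the patent text), a minimal sketch of the time-of-flight relationship described above, assuming the round-trip time of a pulse has already been measured; the function and variable names are hypothetical:

```python
# Illustrative only: time-of-flight ranging as described above.
# distance = (speed of light * round-trip time) / 2
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Distance from the ranging device to the detected object, in metres."""
    return C * round_trip_time_s / 2.0

# Example: a pulse returning after 200 ns corresponds to roughly 30 m.
print(tof_distance(200e-9))
```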
As an example, the ranging device may include a transmitting circuit, a receiving circuit, a sampling circuit, and an arithmetic circuit.
Wherein the transmitting circuit may transmit a sequence of optical pulses (e.g. a sequence of laser pulses). The receiving circuit can receive the optical pulse sequence reflected by the detected object, perform photoelectric conversion on the optical pulse sequence to obtain an electric signal, and output the electric signal to the sampling circuit after processing the electric signal. The sampling circuit may sample the electrical signal to obtain a sampling result. The arithmetic circuit may determine the distance between the ranging device and the detected object based on the sampling result of the sampling circuit.
In some implementations, in addition to the above circuit, the distance measuring device may further include a scanning module, configured to change a propagation direction of at least one laser pulse sequence emitted from the emitting circuit.
In one embodiment, the scanning module may include a plurality of optical elements for changing the propagation path of the light beam, wherein the optical elements may change the propagation path of the light beam by reflecting, refracting, diffracting, etc. the light beam. For example, the scanning module may include a lens, mirror, prism, galvanometer, grating, liquid crystal, Optical Phased Array (Optical Phased Array), or any combination thereof. In one example, at least a portion of the optical elements is moved, for example, by a driving module, and the moved optical elements can reflect, refract or diffract the light beam to different directions at different times.
In some embodiments, multiple optical elements of the scanning module can rotate or oscillate about a common axis or different axes, with each rotating or oscillating optical element serving to constantly change the direction of propagation of an incident beam. In one embodiment, the multiple optical elements of the scanning module may rotate at different rotational speeds or oscillate at different speeds. The rotational speed of the optical element directly determines the uniformity of the scanning point cloud of the scanning module.
In one embodiment, each optical element in the scanning module may project light in different directions by rotating, thus scanning the space around the ranging device. The multiple optical elements can be driven by the same or different drivers to rotate and/or oscillate at different speeds, so that the collimated light beams are projected into different directions of the external space and a larger spatial range can be scanned. Alternatively, the plurality of optical elements may include two or three wedge prisms that rotate at different rotational speeds. It will be appreciated that as the speeds of the optical elements within the scanning module change, the scanning pattern will also change. Fig. 13 is a schematic diagram of a scanning pattern of the distance measuring device; the distribution uniformity of the point cloud pattern reflects the scanning uniformity.
In the distance measuring device according to the embodiment of the invention, the scanning density of the scanning module increases as the integration time accumulates on the time axis. Because the scanning track of the distance measuring device changes over time and the scanning density gradually increases with the accumulated integration time, point clouds belonging to different parts of the target object can be obtained from different frames of point cloud data acquired by the distance measuring device.
In one embodiment, the distance measuring device can be applied to a mobile platform, and the distance measuring device can be installed on a platform body of the mobile platform. The mobile platform with the distance measuring device can measure the external environment, for example, the distance between the mobile platform and an obstacle is measured for the purpose of avoiding the obstacle, and the external environment is mapped in two dimensions or three dimensions. In certain embodiments, the mobile platform comprises at least one of an unmanned aerial vehicle, an automobile, a remote control car, a robot, a camera. When the distance measuring device is applied to the unmanned aerial vehicle, the platform body is a fuselage of the unmanned aerial vehicle. When the distance measuring device is applied to an automobile, the platform body is the automobile body of the automobile. The vehicle may be an autonomous vehicle or a semi-autonomous vehicle, without limitation. When the distance measuring device is applied to the remote control car, the platform body is the car body of the remote control car. When the distance measuring device is applied to a robot, the platform body is the robot.
In step S110, the current frame is a point cloud frame to be processed currently in the point cloud data, and the target object in the current frame is a target object whose point cloud needs to be fused to analyze its motion state.
In one embodiment, the point cloud belonging to the target object in the current frame is segmented by a point cloud segmentation method.
For example, the point clouds of a plurality of objects in the current frame and the adjacent frame can be identified by a point cloud segmentation method, and the point clouds of each object are numbered, so that the point clouds of the same target object can be found in multi-frame point cloud data. When the same target object is continuously detected in the multi-frame point cloud data, the point cloud fusion method 100 according to the embodiment of the present invention can track the motion state of the target object.
For example, a template matching method may be adopted to determine the point cloud belonging to the target object in the current frame, i.e., target segmentation and identification are performed by matching the 3D lidar data against a previously constructed feature template of the target object. In another embodiment, a 3D clustering method may be used to determine the point cloud belonging to the target object in the current frame, i.e., segmentation is performed using features such as the similar distances or the spatial density aggregation of point clouds belonging to the same target. Optionally, a grid map clustering method may also be adopted, in which the 3D point cloud data are projected into a 2D grid and the data in the grid are then processed to determine the point cloud belonging to the target object. Of course, any other suitable method may be adopted to determine the point cloud belonging to the target object, and the embodiment of the present invention does not limit the specific method for extracting the point cloud belonging to the target object.
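As a rough illustration of the grid map clustering idea mentioned above (projecting the 3D points into a 2D grid and grouping occupied cells), the following sketch uses NumPy and SciPy; the cell size and the 8-connectivity criterion are assumptions for illustration, not details taken from the patent:

```python
import numpy as np
from scipy import ndimage

def grid_map_clusters(points: np.ndarray, cell: float = 0.2) -> np.ndarray:
    """Cluster an (N, 3) point cloud by projecting it onto a 2D occupancy grid
    and labelling 8-connected occupied cells; returns one cluster id per point."""
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)     # grid cell of each point
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True                              # mark occupied cells
    labels, _ = ndimage.label(grid, structure=np.ones((3, 3)))   # connected components
    return labels[ij[:, 0], ij[:, 1]]
```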
In step S120, point clouds belonging to the same target object are determined in point cloud data of adjacent frames before the current frame. It is understood that, in some embodiments, step S120 may be performed before step S110, for example, after the point cloud data is acquired, the point clouds belonging to the target object in each frame of point cloud data are sequentially segmented according to a time sequence of each frame of point cloud in the point cloud data.
In one embodiment, the number of adjacent frames is no less than two frames. As an example, the number of adjacent frames may be 3-5 frames. For example, when the i frame is the current frame, the point clouds of the target objects in the i-1 frame, the i-2 frame and the i-3 frame can be fused into the i frame; when the i +1 frame is the current frame, the point clouds of the target objects in the i frame, the i-1 frame and the i-2 frame can be fused into the i +1 frame, and so on. The point clouds of the target object in two or more adjacent frames are fused into the current frame, so that the information of the multi-frame historical point clouds can be accumulated, the richness of the point clouds of the target object is further enhanced, and the description capacity of the outline of the target object is improved.
In one embodiment, the number of adjacent frames may be set as desired, with particular reference to the associated description below.
In step S130, at least a part of the point clouds belonging to the target object in one or more adjacent frames are fused into the current frame.
Because the target object in the application is a moving object and the point cloud collection is carried out according to a time sequence, the collection time of each frame of point cloud data has a time difference, and the motion state of the target object may be different at the collection time point of each frame of point cloud. Therefore, the point cloud belonging to the target object in the adjacent frame and the point cloud belonging to the target object in the current frame respectively include point clouds belonging to different portions on the target object. According to the embodiment of the invention, the point clouds belonging to the target object in the adjacent frames are fused into the current frame, so that the description of the point clouds on the contour information of the target object can be more complete.
For example, fusing at least part of the point clouds belonging to the target object in the adjacent frames into the current frame may mean fusing all of the point clouds belonging to the target object in the adjacent frames into the current frame, which increases the density of the point cloud while completing the contour of the target object. Alternatively, only those parts of the point clouds of the target object in the adjacent frames that belong to parts of the target object different from those covered in the current frame may be fused into the current frame. In the present embodiment, the case in which all point clouds belonging to the target object in the adjacent frames are fused into the current frame is taken as an example for description.
For the fusion, the change in position and/or posture of the target object between the adjacent frame and the current frame is first determined, so that a coordinate transformation matrix can be determined based on the change.
In one embodiment, since the scanning of the radar is performed in time series, when the target object is in motion, the point cloud thereof has a large distortion. As shown in fig. 2, taking a vehicle as an example, since the vehicle moves during the radar scanning process, a layering phenomenon (i.e. layer 1 and layer 2 in fig. 2) occurs in the point cloud at the tail of the vehicle, which has a large influence on the estimation of the vehicle motion state. In order to eliminate the influence of motion distortion, the embodiment of the invention adopts a method for extracting edge tracing points to extract the edges of point clouds of a target object in a current frame and an adjacent frame so as to judge the pose change of the target object.
Illustratively, the edge tracing points are extracted based on the distances between the point cloud points of the current frame and the adjacent frame and the origin of the coordinate system, that is, the point cloud point closest to the origin of the coordinate system in each interval is taken as an edge tracing point. The extraction of the edge tracing points can be carried out in three-dimensional space, or after the point cloud has been projected into a two-dimensional space.
Referring to fig. 3, in one embodiment, the step of extracting the edge tracing point includes:
in step S310, projecting the point cloud of the target object in the current frame onto a projection surface;
in step S320, dividing the maximum included angle between the point cloud projected onto the projection surface and the origin of the coordinate system into a plurality of angle sections;
in step S330, the point cloud point closest to the origin of the coordinate system in each angle interval is used as the edge tracing point.
The projection plane depends on the principal plane in which the target object moves. For example, if the target object moves mainly in the horizontal plane and its movement in the vertical direction is small, for example when the target object is a vehicle, the projection plane may be a horizontal plane. When the movement of the target object in the vertical direction is large, the projection plane may also be a vertical plane. In this embodiment, a horizontal plane is taken as an example of the projection plane.
Specifically, as shown in fig. 4, after the point cloud is projected onto the horizontal plane, the included angle θ between each point cloud point in the point cloud and the Y axis of the radar coordinate system is calculated. In fig. 4, the point with the smallest θ angle is referred to as the "left point", the point with the largest θ angle is referred to as the "right point", and the angle between the left point and the right point is the maximum included angle. In one embodiment, the maximum included angle is divided into a plurality of angle intervals, for example at angular intervals of 0.2°. Then, in each angle interval, the point cloud point closest to the origin of the coordinate system, if any, is taken as an edge tracing point. Because the edge tracing points generally exhibit no layering phenomenon, extracting them confirms the contour of the object in the point cloud, which eliminates the influence of motion distortion.
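The following sketch illustrates the angle-interval extraction of edge tracing points just described (projection onto the horizontal plane, equal-angle intervals, nearest point to the origin per interval); the 0.2° default and the atan2 convention for the angle to the Y axis are illustrative assumptions:

```python
import numpy as np

def extract_edge_points(points: np.ndarray, step_deg: float = 0.2) -> np.ndarray:
    """Return indices of edge tracing points: in each angular interval, the
    projected point closest to the coordinate origin. `points` is (N, 3) in the
    sensor frame; only x and y are used (projection onto the horizontal plane)."""
    xy = points[:, :2]
    theta = np.degrees(np.arctan2(xy[:, 0], xy[:, 1]))  # angle to the Y axis
    rng = np.linalg.norm(xy, axis=1)                     # distance to the origin
    bins = np.floor((theta - theta.min()) / step_deg).astype(int)
    edge_idx = []
    for b in np.unique(bins):                            # only non-empty intervals
        members = np.where(bins == b)[0]
        edge_idx.append(members[np.argmin(rng[members])])
    return np.array(edge_idx)
```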
However, the above extraction method of edge tracing points has limitations that cause the edge tracing points in some regions to be too sparse. For example, as shown in fig. 5A, when a side of the point cloud is nearly collinear with the X axis of the radar coordinate system, the edge tracing points extracted from that side by the above method are very sparse. To solve this problem, the invention adopts a strategy of recursively extracting edge tracing points: part of the angle intervals are divided into a plurality of subintervals, and point cloud points in the subintervals are extracted as edge tracing points, thereby extracting additional edge tracing points between edge tracing points that are too sparse. The division of an angle interval may be an equal-angle division.
In one embodiment, when the distance between two adjacent edge tracing points is greater than a predetermined threshold, the angle interval between the two adjacent edge tracing points is divided into a plurality of subintervals, and the point cloud point closest to the origin of the coordinate system in each subinterval is extracted as an edge tracing point. As shown in fig. 5B, significantly more edge tracing points are extracted on the side of the point cloud by the recursive strategy, so that the contour of the target object can be described better.
Judging whether to divide subintervals between two edge tracing points based on the distance between them is simple and requires little computation. However, whether subintervals are needed may also be determined in other ways; for example, when the angle between the side of the moving object and a coordinate axis of the reference coordinate system is smaller than a predetermined threshold, the angle interval between two adjacent edge tracing points is divided into a plurality of subintervals, and the point cloud point closest to the origin of the coordinate system in each subinterval is extracted as an edge tracing point.
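A possible sketch of the recursive refinement described above, which subdivides the angle interval between two adjacent edge tracing points when they are farther apart than a threshold; `theta` and `rng` are the per-point angles and ranges from the previous sketch, and the threshold and number of subintervals are illustrative assumptions:

```python
import numpy as np

def refine_edge_points(points, edge_idx, theta, rng, gap_thresh=0.5, n_sub=4):
    """Where two consecutive edge tracing points are more than `gap_thresh` metres
    apart, split the angular interval between them into `n_sub` equal subintervals
    and keep the point closest to the origin in each subinterval."""
    edge_idx = np.asarray(edge_idx)[np.argsort(theta[np.asarray(edge_idx)])]
    extra = []
    for a, b in zip(edge_idx[:-1], edge_idx[1:]):
        if np.linalg.norm(points[a, :2] - points[b, :2]) <= gap_thresh:
            continue
        lo, hi = theta[a], theta[b]
        for k in range(n_sub):
            t0, t1 = lo + (hi - lo) * k / n_sub, lo + (hi - lo) * (k + 1) / n_sub
            members = np.where((theta > t0) & (theta <= t1))[0]
            if members.size:
                extra.append(members[np.argmin(rng[members])])
    return np.unique(np.concatenate([edge_idx, np.array(extra, dtype=int)]))
```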
Then, based on the extracted edge points in the current frame and the adjacent frame, the embodiment of the invention provides a normal vector search method, which is to calculate a normal vector of the edge points in the current frame, and search for matched edge points in the adjacent frame according to the normal vector so as to determine the change of the position and/or the posture of the target object between the adjacent frame and the current frame.
In one embodiment, the normal vectors of all point cloud points in the point cloud of the current frame are calculated first, and the normal vectors of point cloud points in a predetermined range around each edge-tracing point are fused into the normal vector of the edge-tracing point.
Illustratively, because a distance measuring device such as a lidar scans continuously at high frequency according to a specific scanning pattern, and each point cloud point carries clear timing and position information, the normal vector of each point cloud point can be calculated from the coordinate relation between adjacent points in the point cloud, so as to reflect the normal direction of the contour surface of the target object at the position of that point.
In one embodiment, the point cloud may be projected onto a projection surface, the vector from each projected point cloud point to a neighboring point cloud point may be calculated, and its perpendicular vector taken as the normal vector of the point cloud point. The projection surface may be a plane that describes the contour of the target object well. For example, if the target object is a vehicle, since the projection of the point cloud onto the horizontal plane describes the contour of the vehicle body well, the point cloud of the vehicle body may be projected onto the horizontal plane when calculating the normal vectors; the vector between a point cloud point and its neighboring point cloud point on that plane is then calculated, and the perpendicular vector of this vector is the normal vector of the point cloud point. For an object whose contour is better described by a projection onto a vertical plane, the point cloud may be projected onto the vertical plane to calculate the normal vectors.
As shown in FIG. 6, for a point cloud point i in the point cloud, the vector v_i = p_{i+1} − p_i from point i to the neighboring point cloud point i+1 is first calculated; its perpendicular vector n_i (satisfying n_i · v_i = 0) is taken as the normal vector of point cloud point i. For a target object with smooth surface transitions and small fluctuations, its profile can be described well by such normal vectors. Fig. 7 shows a schematic diagram of the normal vectors of all points in the point cloud.
Because the point cloud of a moving object is affected by severe noise and distortion, many erroneous normal vectors exist among the normal vectors of the point cloud points. Therefore, as shown in fig. 8, in the embodiment of the present invention the normal vectors of a plurality of point cloud points are associated with the same edge tracing point, obviously abnormal normal vectors are eliminated, and the influence of noise and distortion is removed to a certain extent. Specifically, abnormal normal vectors among the normal vectors of the point cloud points within the predetermined range are removed, and the average of the remaining normal vectors within the predetermined range is then calculated and used as the normal vector of the edge tracing point.
As an example, the method of rejecting abnormal normal vectors includes: calculating the average of all normal vectors within the predetermined range, and rejecting normal vectors that deviate too much from the average. Of course, other methods may be used to reject obviously abnormal normal vectors.
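A sketch of the normal-vector computation and fusion described above: the per-point normal is the perpendicular of the vector to the next point in the projection plane, and the edge tracing point's normal is the average of neighboring normals after rejecting outliers; the neighborhood indices `nbr_idx` and the angular deviation test are assumptions for illustration:

```python
import numpy as np

def point_normals_2d(xy: np.ndarray) -> np.ndarray:
    """Per-point normal in the projection plane: the perpendicular of the vector
    from point i to point i+1 (the last point reuses the previous vector)."""
    v = np.diff(xy, axis=0)
    v = np.vstack([v, v[-1]])
    n = np.stack([-v[:, 1], v[:, 0]], axis=1)          # rotate the vector by 90 degrees
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def edge_point_normal(normals: np.ndarray, nbr_idx: np.ndarray,
                      max_dev_deg: float = 30.0) -> np.ndarray:
    """Fuse the normals of the point cloud points around one edge tracing point:
    reject normals deviating too much from the mean, then average the rest."""
    nbrs = normals[nbr_idx]
    mean = nbrs.mean(axis=0)
    mean /= np.linalg.norm(mean) + 1e-12
    keep = np.degrees(np.arccos(np.clip(nbrs @ mean, -1.0, 1.0))) < max_dev_deg
    fused = nbrs[keep].mean(axis=0) if keep.any() else mean
    return fused / (np.linalg.norm(fused) + 1e-12)
```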
Based on the extracted normal vector of the edge tracing point, the embodiment of the invention searches the matching edge tracing point of the edge tracing point in the adjacent frame by adopting a normal vector search method. The method guides the searching direction of the matched edge tracing point by using normal vector information provided by the edge tracing point, and the matched edge tracing point searched by the method is more in line with the actual situation, namely the edge tracing point and the matched edge tracing point thereof are closer to the same position of a real target object.
In one embodiment, the method of searching for matching edge tracing points in the adjacent frame includes: searching, in the point cloud of the adjacent frame, for the two edge tracing points closest to the normal vector of the edge tracing point (i.e., closest to the line through the edge tracing point along its normal vector), and using them as the matching edge tracing points of that edge tracing point. The matching edge tracing points may both be located on one side of the normal vector, or may be located on its two sides.
Referring to FIG. 9, point cloud F2 represents the point cloud of the target object (shown as a vehicle in the figure) in the current frame, and point cloud F1 represents the point cloud of the target object in the adjacent frame. For an edge tracing point M2,i in point cloud F2, its normal vector n2,i is obtained by the normal vector calculation method described above. Point cloud F1 is then superimposed on the current frame according to the method of the embodiment of the present invention, and the two edge tracing points in point cloud F1 closest to the normal vector n2,i are taken as the matching edge tracing points of edge tracing point M2,i, i.e., the matching points M1,k and M1,j in the figure.
In contrast, fig. 10 shows the matching edge tracing points found in the adjacent frame by the more common "nearest neighbor search method". In the nearest neighbor search method shown in fig. 10, the two points in point cloud F1 closest to the edge tracing point M2,i of point cloud F2 (i.e., the matching points M1,k and M1,j) are directly taken as the matching edge tracing points of edge tracing point M2,i. Obviously, compared with this method, the matching edge tracing points found by the normal vector search method of the embodiment of the present invention are closer to the same position on the target object as the corresponding edge tracing point, and therefore better fit the actual situation.
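A minimal sketch of the normal vector search described above, interpreting "closest to the normal vector" as the smallest perpendicular distance to the line through the edge tracing point along its normal; that distance measure and the function names are assumptions:

```python
import numpy as np

def match_by_normal(edge_pt: np.ndarray, normal: np.ndarray,
                    prev_edge_pts: np.ndarray) -> np.ndarray:
    """Return indices of the two edge tracing points of the adjacent frame whose
    perpendicular distance to the line {edge_pt + t * normal} is smallest."""
    n = normal / (np.linalg.norm(normal) + 1e-12)
    d = prev_edge_pts - edge_pt                # vectors from the edge point to candidates
    perp = d - np.outer(d @ n, n)              # component perpendicular to the normal line
    return np.argsort(np.linalg.norm(perp, axis=1))[:2]
```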
After the matching edge tracing points are found, the distances between the edge tracing points and their matching edge tracing points can be calculated, so that the fusion residual between the current frame and the adjacent frame can be calculated; the coordinate transformation matrix from the point cloud of the adjacent frame to the point cloud of the current frame can then be determined according to the fusion residual, so that at least part of the point cloud belonging to the target object in the adjacent frame can be fused into the current frame. The fusion residual represents the distances between the edge tracing points in the current frame and the matching edge tracing points in the adjacent frame after the point cloud of the adjacent frame has been transformed into the coordinate system of the current frame by the coordinate transformation matrix. It can thus be appreciated that, by minimizing the fusion residual, an optimal coordinate transformation matrix can be found.
In one embodiment, the point cloud in the adjacent frame is first converted into the coordinate system of the current frame through the coordinate transformation matrix; then, for each edge tracing point, the distance from the edge tracing point to the line connecting its two matching edge tracing points (after conversion into the coordinate system of the current frame) is calculated. The fusion residual of the current frame and the adjacent frame is the sum of these distances over all edge tracing points in the current frame. By minimizing this residual, an optimal transformation matrix can be found.
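A sketch of the fusion residual just described: transform the adjacent frame's edge tracing points by a candidate rigid transform and sum, over all edge tracing points of the current frame, the distance to the line through their two matching edge tracing points; the 2D (tx, ty, yaw) parameterization is an assumption consistent with the horizontal-plane fusion discussed below:

```python
import numpy as np

def se2_apply(pose, pts):
    """Apply a 2D rigid transform pose = (tx, ty, yaw) to (N, 2) points."""
    tx, ty, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    return pts @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])

def fusion_residual(pose, cur_edge_pts, prev_edge_pts, matches):
    """Sum, over the current frame's edge tracing points, of the distance to the
    line through their two matching edge tracing points of the transformed
    adjacent frame. `matches[i]` holds the two matched indices for point i."""
    prev_t = se2_apply(pose, prev_edge_pts)
    total = 0.0
    for i, (j, k) in enumerate(matches):
        a, b = prev_t[j], prev_t[k]
        ab, ap = b - a, cur_edge_pts[i] - a
        # perpendicular distance from the edge point to the line through a and b
        total += abs(ab[0] * ap[1] - ab[1] * ap[0]) / (np.linalg.norm(ab) + 1e-12)
    return total
```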
In the embodiment of the present invention, the number of adjacent frames may be two or more; in this case, as shown in fig. 11, the matching and fusion of the point clouds is completed using a sliding window optimization method. In fig. 11, when the i+1 frame is the current frame, the 3 frames of historical point cloud data before the current frame, i.e., the i-2 frame, the i-1 frame and the i frame, are kept in the sliding window (dashed frame) in time order as the adjacent frames. When calculating the coordinate transformation matrices, the sum of the fusion residuals between the current frame (the i+1 frame) and the adjacent frames (the i-2, i-1 and i frames) and the fusion residuals among the adjacent frames themselves is taken as the total fusion residual, and the coordinate transformation matrices between the current frame and the adjacent frames are obtained by minimizing this total fusion residual. Let the coordinate transformation matrices from the i-2 frame, the i-1 frame and the i frame to the i+1 frame be denoted T_{i-2}, T_{i-1} and T_i, respectively. The total fusion residual r is the sum of the fusion residuals of the i+1 and i frames, the i+1 and i-1 frames, the i+1 and i-2 frames, the i and i-1 frames, the i and i-2 frames, and the i-1 and i-2 frames. By minimizing the total fusion residual, the optimal matrices can be obtained, namely:

(T*_{i-2}, T*_{i-1}, T*_i) = argmin_{T_{i-2}, T_{i-1}, T_i} r

After the optimal coordinate transformation matrices T*_{i-2}, T*_{i-1} and T*_i are obtained through sliding window optimization, all historical point cloud points P_n of the i-2, i-1 and i frames in the sliding window can be transformed into the coordinate system of the i+1 frame according to the following formula to complete the fusion of the vehicle body point cloud:

P'_n = T*_n · P_n, where n ∈ {i-2, i-1, i}.
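Assuming the optimal per-frame transforms have been obtained by minimizing the total fusion residual (for example with a generic numerical optimizer), the fusion step P'_n = T*_n · P_n can be sketched as follows; the 2D pose representation and function names are illustrative assumptions:

```python
import numpy as np

def fuse_into_current(current_pts, history, poses):
    """Transform each historical frame's target-object points (N_k, 2) into the
    current frame with its optimal pose (tx, ty, yaw) and stack everything."""
    fused = [np.asarray(current_pts)]
    for pts, (tx, ty, yaw) in zip(history, poses):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        fused.append(np.asarray(pts) @ R.T + np.array([tx, ty]))   # P'_n = T*_n · P_n
    return np.vstack(fused)
```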
in one embodiment, when the target object has a small movement in the vertical direction, for example, when the target object is a vehicle traveling on a road, only the coordinates in the horizontal plane, i.e., the x-axis and y-axis coordinates, may be transformed without changing the coordinates in the vertical direction, i.e., the z-axis coordinates, i.e., the coordinate values in the horizontal plane of the point clouds of the target object in the adjacent frames are corrected, and the corrected point clouds are added to the current frame, thereby completing the fusion of the point clouds.
After the fusion of the current frame is completed, the current frame becomes the i+2 frame. The sliding window then moves forward: the i-2 frame is removed and the i+1 frame is added as an adjacent frame, and the point cloud fusion of the i+2 frame is completed. Repeating this process achieves the continuous fusion of the point clouds of the target object.
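A minimal sketch of the sliding-window bookkeeping described above: the window holds the most recent adjacent frames, and after the current frame is fused it joins the window while the oldest frame is dropped; the class and method names are hypothetical:

```python
from collections import deque

class SlidingWindow:
    """Hold the most recent `size` frames as adjacent frames; once the current
    frame has been fused, it joins the window and the oldest frame drops out."""
    def __init__(self, size: int = 3):
        self.frames = deque(maxlen=size)

    def adjacent_frames(self):
        return list(self.frames)           # oldest first

    def advance(self, current_frame):
        self.frames.append(current_frame)  # current frame becomes history
```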
It can be understood that, since the coordinate transformation matrix depends on the position and/or posture change of the moving object in the adjacent frames, the motion trajectory of the target object and the movement parameters thereof can be determined according to the coordinate transformation matrix obtained from the point cloud data of each frame, wherein the movement parameters can include at least one of displacement, velocity and acceleration.
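As an illustration of deriving movement parameters from the per-frame transforms, the following sketch estimates displacement, velocity, and acceleration by finite differences of the translation parts, assuming the poses are expressed relative to the latest frame and the frame interval is known; this derivation is an assumption, not taken from the patent:

```python
import numpy as np

def motion_from_poses(poses, frame_dt: float):
    """Estimate displacement, velocity and acceleration of the target object from
    the per-frame poses (tx, ty, yaw), oldest first, expressed in the latest frame."""
    t = np.asarray(poses)[:, :2]              # translation parts
    disp = np.diff(t, axis=0)                 # displacement between consecutive frames
    vel = disp / frame_dt                     # finite-difference velocity
    acc = np.diff(vel, axis=0) / frame_dt     # finite-difference acceleration
    return disp, vel, acc
```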
In one embodiment, the number of adjacent frames in the sliding window may be set according to actual needs. For example, the number of adjacent frames may be selected according to the movement speed of the mobile platform carrying the ranging device used to acquire the point cloud. When the point cloud is collected by a lidar on a vehicle, the slower the vehicle carrying the lidar moves, the more computation time is available, the larger the window, and the greater the number of adjacent frames; the faster the vehicle moves, the less computation time is available, the smaller the window, and the fewer the adjacent frames.
In another embodiment, the number of adjacent frames is also selected according to a user's instruction. The number of adjacent frames within the sliding window may be reduced when computational resources are insufficient, and the number of adjacent frames within the sliding window may be increased when computational resources are sufficient. It can be understood that when the number of the adjacent frames is large, the point cloud of the target object is dense, the outline is complete, and the operation speed is relatively slow; when the number of adjacent frames is small, the operation speed is high, but the accuracy is reduced. By adjusting the number of adjacent frames within the sliding window, a balance can be achieved between the two.
After the fused point cloud data is obtained by the method described above, the fused point cloud data can be output, that is, the fused data is provided for storage or analysis. Or, the fused point cloud image can be displayed on a display interface based on the fused point cloud data for the user to view.
FIG. 12 shows the comparison of a single frame of vehicle body point cloud and a fused vehicle body point cloud fused by the method of the embodiment of the invention under the same data and scene. As can be seen, the single frame of vehicle body point cloud is very sparse, and only the vehicle tail part can be detected by the laser radar. By adopting the point cloud fusion method provided by the embodiment of the invention, the historical point cloud information of the vehicle can be reserved, so that the point cloud of the vehicle body is richer, and the outline is more complete.
In summary, the point cloud fusion method for moving objects according to the embodiments of the present invention fuses the point clouds belonging to a target object in adjacent frames into the current frame, thereby accumulating historical point cloud information, enriching the point cloud of the moving object, improving the ability to describe the contour of the moving object, and further increasing the perception of dynamic scenes. In addition, the method can use the lidar alone to observe the motion states of surrounding objects without adding extra sensors, thereby avoiding an increase in cost.
FIG. 14 shows a schematic block diagram of a point cloud fusion system 1400 for moving objects in one embodiment of the invention.
As shown in fig. 14, the point cloud fusion system 1400 includes one or more processors 1410 and one or more memories 1420. Optionally, the point cloud fusion system 1400 may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), which are interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the point cloud fusion system 1400 shown in fig. 14 are exemplary only, and not limiting, and the point cloud fusion system 1400 may have other components and structures as desired, such as a transceiver for transceiving signals.
The memory 1420 is used for storing processor-executable instructions, e.g., program instructions for implementing the respective steps of the point cloud fusion method according to an embodiment of the present invention. The memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read Only Memory (ROM), a hard disk, flash memory, etc.
The input device may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device may output various information (e.g., images or sounds) to an external (e.g., user), and may include one or more of a display, a speaker, and the like.
A communication interface (not shown) is used for communication between the point cloud fusion system 1400 and other devices, including wired or wireless communication. The point cloud fusion system 1400 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In one exemplary embodiment, the communication interface receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication interface further comprises a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The processor 1410 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the point cloud fusion system 1400 to perform desired functions. The processor can execute the instructions stored in the memory 1420 to perform the point cloud fusion methods described herein. For example, the processor 1410 can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware Finite State Machines (FSMs), Digital Signal Processors (DSPs), or a combination thereof.
One or more computer program instructions may be stored on the memory 1420, and the processor 1410 may execute the stored instructions to implement the functions of the embodiments of the invention described herein and/or other desired functions, e.g., to perform the respective steps of the point cloud fusion method according to embodiments of the invention. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In addition, an embodiment of the invention also provides a computer storage medium on which a computer program is stored. When the computer program is executed by a processor, the steps of the point cloud fusion method of the embodiment of the invention can be realized. For example, the computer storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the foregoing illustrative embodiments are merely exemplary and are not intended to limit the scope of the invention thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present invention should not be construed to reflect the intent: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present invention. The present invention may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (29)

  1. A point cloud fusion method for moving objects, comprising:
    determining point clouds belonging to a target object in the point cloud data of the current frame;
    determining point clouds belonging to the target object in point cloud data of adjacent frames before the current frame, wherein the point clouds belonging to the target object in the adjacent frames and the point clouds belonging to the target object in the current frame respectively comprise point clouds belonging to different positions on the target object;
    and fusing at least part of point clouds belonging to the target object in the adjacent frames into the current frame according to the change of the position and/or the posture of the target object between the adjacent frames and the current frame.
  2. The method according to claim 1, wherein the fusing at least part of the point clouds belonging to the target object in the adjacent frames into the current frame according to the change of the position and/or the posture of the target object between the adjacent frames and the current frame comprises:
    respectively extracting edge tracing points from the point clouds belonging to the target object in the current frame and the adjacent frame;
    and fusing at least part of the point clouds belonging to the target object in the adjacent frames into the current frame based on the edge tracing points in the current frame and the adjacent frames.
  3. The method of claim 2, wherein the respectively extracting the edge tracing points from the point clouds belonging to the target object in the current frame and the adjacent frame comprises:
    extracting part of the point cloud points as the edge tracing points based on the distances between the point cloud points in the point clouds of the current frame and the adjacent frame and the origin of a coordinate system.
  4. The method of claim 3, wherein extracting the edge tracing points in the point cloud of the current frame comprises:
    projecting the point cloud of the current frame onto a projection surface;
    dividing the maximum included angle subtended at the origin of the coordinate system by the point cloud projected onto the projection surface into a plurality of angle intervals;
    and taking the point cloud point which is closest to the origin of the coordinate system in each angle interval as the edge tracing point.
  5. The method of claim 4, wherein the division of the maximum included angle is an equal angle division.
  6. The method of claim 4, further comprising:
    dividing part of the angle intervals into a plurality of subintervals, and extracting point cloud points in the subintervals as the edge tracing points.
  7. The method of claim 6, wherein the dividing part of the angle intervals into a plurality of subintervals and extracting point cloud points in the subintervals as the edge tracing points comprises:
    when the distance between two adjacent edge tracing points is larger than a preset threshold value, dividing the angle interval between the two adjacent edge tracing points into a plurality of subintervals, and extracting the point cloud point which is closest to the origin of the coordinate system in each subinterval as an edge tracing point.
  8. The method of claim 6, wherein the dividing part of the angle intervals into a plurality of subintervals and extracting point cloud points in the subintervals as the edge tracing points comprises:
    when the included angle between the side face of the moving object and the coordinate axis of the reference coordinate system is smaller than a preset threshold value, dividing the angle interval between two adjacent edge tracing points into a plurality of subintervals, and extracting the point cloud point which is closest to the origin of the coordinate system in each subinterval as an edge tracing point.
  9. The method according to any one of claims 6 to 8, wherein the division of the angle interval between two adjacent edge tracing points is an equal angle division.
  10. The method of claim 2, further comprising:
    calculating a normal vector of the edge tracing point in the current frame;
    and determining the change of the position and/or the posture of the target object between the adjacent frame and the current frame according to the normal vector of the edge tracing point.
  11. The method of claim 10, wherein the calculating the normal vector of the edge tracing point comprises:
    calculating normal vectors of all point cloud points in the point cloud of the current frame;
    and fusing the normal vectors of the point cloud points within a preset range around the edge tracing point into the normal vector of the edge tracing point.
  12. The method of claim 11, wherein the calculating normal vectors for all point cloud points in the point cloud of the current frame comprises:
    projecting the point cloud onto a projection surface;
    and calculating the perpendicular vector of the vector from the point cloud point projected onto the projection surface to its adjacent point cloud point, to serve as the normal vector of the point cloud point.
  13. The method of claim 12, wherein the fusing the normal vectors of the point cloud points within the preset range around the edge tracing point into the normal vector of the edge tracing point comprises:
    and calculating the average value of the normal vectors of the point cloud points in the preset range to serve as the normal vector of the edge tracing point.
  14. The method of claim 13, wherein the calculating the average value of the normal vectors of the point cloud points within the preset range comprises:
    rejecting abnormal normal vectors in the normal vectors of the point cloud points in the preset range;
    and calculating the average value of the remaining normal vectors after the abnormal normal vectors are rejected.
  15. The method according to any one of claims 10-14, wherein the determining the change of the position and/or the posture of the target object between the adjacent frame and the current frame according to the normal vector of the edge tracing point comprises:
    searching for matching edge tracing points of the edge tracing points in the adjacent frame according to the normal vectors of the edge tracing points in the current frame;
    and determining the change of the position and/or the posture of the target object between the adjacent frame and the current frame according to the positional relationship between the edge tracing points and the matching edge tracing points.
  16. The method of claim 15, wherein the searching for the matching edge tracing points comprises:
    searching, in the point cloud of the adjacent frame, for the two edge tracing points which are closest in distance to the normal vector of the edge tracing point, to be used as the matching edge tracing points of the edge tracing point of the current frame.
  17. The method of claim 16, further comprising:
    calculating the distance between the edge tracing point and the matching edge tracing points;
    calculating a fusion residual of the current frame and the adjacent frame according to the distance;
    and determining a coordinate transformation matrix from the point cloud of the adjacent frame to the point cloud of the current frame according to the fusion residual, so as to fuse at least part of the point clouds belonging to the target object in the adjacent frame into the current frame.
  18. The method of claim 17, wherein the calculating the distance between the edge tracing point and the matching edge tracing points comprises:
    converting the two matching edge tracing points into the coordinate system of the current frame through the coordinate transformation matrix;
    and calculating the distance from the edge tracing point to a connecting line between the two matching edge tracing points converted into the coordinate system of the current frame.
  19. The method according to claim 17 or 18, wherein the fusion residual between the current frame and the adjacent frame is the sum of the distances of all the edge tracing points in the current frame.
  20. The method of claim 19, wherein the determining the coordinate transformation matrix between the current frame and the adjacent frame according to the fusion residual comprises:
    taking the sum of the fusion residuals between the current frame and a plurality of adjacent frames and the fusion residuals between the adjacent frames as a total fusion residual;
    and calculating the coordinate transformation matrix between the current frame and the adjacent frame by minimizing the total fusion residual.
  21. The method according to any of claims 17-20, wherein the fusing at least part of the point clouds belonging to the target object in the adjacent frames into the current frame comprises:
    correcting the coordinate values of the point cloud of the adjacent frame on the horizontal plane according to the coordinate transformation matrix, and adding the corrected point cloud into the current frame.
  22. The method of claim 1, further comprising: selecting the number of the adjacent frames according to the movement speed of a mobile platform carrying a distance measuring device for collecting the point cloud.
  23. The method of claim 1, further comprising: selecting the number of the adjacent frames according to an instruction of a user.
  24. The method of claim 1, further comprising: outputting the fused point cloud data.
  25. The method of claim 1, further comprising: displaying the fused point cloud image.
  26. The method of claim 1, wherein the target object comprises a vehicle.
  27. The method of claim 1, wherein the number of adjacent frames is not less than two frames.
  28. A point cloud fusion system for moving objects, the system comprising:
    a memory for storing executable instructions;
    a processor for executing the instructions stored in the memory such that the processor performs the point cloud fusion method for moving objects of any one of claims 1 to 27.
  29. A computer storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the point cloud fusion method for moving objects of any one of claims 1 to 27.
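
The following sketches are illustrative only and are not part of the claims or of the original disclosure; they assume point clouds given as N x 3 numpy arrays, a projection onto the horizontal x-y plane, and hypothetical function and parameter names. A minimal sketch of the angle-interval extraction of edge tracing points described in claims 3 to 5, with the equal angle division of claim 5:

import numpy as np

def extract_edge_tracing_points(points, num_intervals=64):
    # Project the point cloud onto the horizontal plane.
    xy = points[:, :2]
    # Angle and distance of each projected point with respect to the coordinate-system origin.
    angles = np.arctan2(xy[:, 1], xy[:, 0])
    dists = np.linalg.norm(xy, axis=1)
    # Equal angle division of the maximum included angle of the projected cloud.
    bins = np.linspace(angles.min(), angles.max(), num_intervals + 1)
    bin_idx = np.clip(np.digitize(angles, bins) - 1, 0, num_intervals - 1)
    selected = []
    for b in range(num_intervals):
        members = np.flatnonzero(bin_idx == b)
        if members.size:
            # Keep the point cloud point closest to the origin in this angle interval.
            selected.append(members[np.argmin(dists[members])])
    return points[np.array(selected)]

Under the same assumptions, the subdivision of claims 6 and 7 would re-run the per-interval selection inside any angle interval whose two adjacent edge tracing points are farther apart than a preset threshold.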
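
A sketch of the normal vector estimation and fusion of claims 11 to 14, assuming the projected points are ordered by angle (so that consecutive rows are adjacent points) and using a hypothetical criterion for rejecting abnormal normal vectors:

import numpy as np

def estimate_point_normals(xy):
    # Vector from each projected point to its adjacent point, rotated by 90 degrees.
    diffs = np.roll(xy, -1, axis=0) - xy
    normals = np.stack([-diffs[:, 1], diffs[:, 0]], axis=1)
    return normals / np.maximum(np.linalg.norm(normals, axis=1, keepdims=True), 1e-9)

def fuse_edge_point_normal(edge_xy, xy, normals, radius=0.5):
    # Normals of the point cloud points within a preset range around the edge tracing point
    # (edge_xy is assumed to be one of the rows of xy, so the selection is never empty).
    nearby = normals[np.linalg.norm(xy - edge_xy, axis=1) < radius]
    mean = nearby.mean(axis=0)
    # Reject abnormal normals (here: those pointing away from the preliminary mean),
    # then average the remaining normals.
    kept = nearby[nearby @ mean > 0.0]
    fused = kept.mean(axis=0) if len(kept) else mean
    return fused / max(np.linalg.norm(fused), 1e-9)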
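
For the matching step of claims 15 and 16, one possible reading of "closest in distance to the normal vector" is the smallest perpendicular distance to the line through the current edge tracing point along its unit normal; a sketch under that assumption:

import numpy as np

def find_matching_edge_points(edge_xy, edge_normal, neighbour_edge_xy):
    # Offsets from the current-frame edge tracing point to each adjacent-frame edge tracing point.
    d = neighbour_edge_xy - edge_xy
    # Perpendicular distance of each candidate to the line along the unit normal vector.
    along = d @ edge_normal
    perp = np.linalg.norm(d - np.outer(along, edge_normal), axis=1)
    # Indices of the two closest candidates: the matching edge tracing points.
    return np.argsort(perp)[:2]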
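
A sketch of the fusion residual and its minimization described in claims 17 to 20, with a 2D rigid transform (rotation angle plus translation on the horizontal plane) standing in for the coordinate transformation matrix; this parameterization and the use of a generic optimizer are assumptions, not the patent's stated method:

import numpy as np

def point_to_line_distance(p, a, b):
    # Distance from edge tracing point p to the connecting line between its two matching points a and b.
    ab = b - a
    t = np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12)
    return np.linalg.norm(p - (a + t * ab))

def fusion_residual(params, cur_edge_xy, nb_edge_xy, matches):
    # params = (theta, tx, ty): 2D rigid transform from the adjacent frame to the current frame.
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([tx, ty])
    total = 0.0
    for i, (j, k) in matches.items():
        # Convert the two matching edge tracing points into the current frame's coordinate system.
        a = R @ nb_edge_xy[j] + t
        b = R @ nb_edge_xy[k] + t
        # Sum the distances over all edge tracing points of the current frame.
        total += point_to_line_distance(cur_edge_xy[i], a, b)
    return total

# The transform could then be found by minimizing the fusion residual with any standard
# optimizer, e.g. scipy.optimize.minimize(fusion_residual, np.zeros(3),
# args=(cur_edge_xy, nb_edge_xy, matches)).

With several adjacent frames, the individual fusion residuals would be summed into a total fusion residual before minimization, and the resulting transform would be applied to the adjacent-frame point clouds before adding them into the current frame, as in claims 20 and 21.
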
CN201980030566.XA 2019-10-17 2019-10-17 Point cloud fusion method and system for moving object and computer storage medium Pending CN114270410A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/111731 WO2021072710A1 (en) 2019-10-17 2019-10-17 Point cloud fusion method and system for moving object, and computer storage medium

Publications (1)

Publication Number Publication Date
CN114270410A true CN114270410A (en) 2022-04-01

Family

ID=75537645

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980030566.XA Pending CN114270410A (en) 2019-10-17 2019-10-17 Point cloud fusion method and system for moving object and computer storage medium

Country Status (2)

Country Link
CN (1) CN114270410A (en)
WO (1) WO2021072710A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269202B (en) * 2021-04-26 2023-11-03 南方电网数字电网研究院有限公司 Method for extracting point cloud of gate-type electric tower
CN113359149B (en) * 2021-05-12 2023-11-28 武汉中仪物联技术股份有限公司 Method, device, equipment and storage medium for positioning branch pipe and broken hole of pipeline
CN113447923A (en) * 2021-06-29 2021-09-28 上海高德威智能交通系统有限公司 Target detection method, device, system, electronic equipment and storage medium
CN113408454B (en) * 2021-06-29 2024-02-06 上海高德威智能交通系统有限公司 Traffic target detection method, device, electronic equipment and detection system
CN113807239B (en) * 2021-09-15 2023-12-08 京东鲲鹏(江苏)科技有限公司 Point cloud data processing method and device, storage medium and electronic equipment
CN113792699B (en) * 2021-09-24 2024-03-12 北京易航远智科技有限公司 Object-level rapid scene recognition method based on semantic point cloud
CN113983958B (en) * 2021-11-26 2024-01-05 中电科信息产业有限公司 Motion state determining method and device, electronic equipment and storage medium
CN115631215B (en) * 2022-12-19 2023-04-07 中国人民解放军国防科技大学 Moving target monitoring method, system, electronic equipment and storage medium
CN116197910B (en) * 2023-03-16 2024-01-23 江苏集萃清联智控科技有限公司 Environment sensing method and device for wind power blade wheel type mobile polishing robot
CN116359938B (en) * 2023-05-31 2023-08-25 未来机器人(深圳)有限公司 Object detection method, device and carrying device
CN116977226B (en) * 2023-09-22 2024-01-19 天津云圣智能科技有限责任公司 Point cloud data layering processing method and device, electronic equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559711B (en) * 2013-11-05 2016-04-27 余洪山 Based on the method for estimating of three dimensional vision system characteristics of image and three-dimensional information
CN103955948B (en) * 2014-04-03 2016-10-05 西北工业大学 A kind of space movement target detection method under dynamic environment
CN106846461B (en) * 2016-12-30 2019-12-03 西安交通大学 A kind of human body three-dimensional scan method
CN106875482B (en) * 2017-01-13 2020-04-28 浙江大学 Method for simultaneous positioning and dense three-dimensional reconstruction
CN108647646B (en) * 2018-05-11 2019-12-13 北京理工大学 Low-beam radar-based short obstacle optimized detection method and device
CN109724586B (en) * 2018-08-21 2022-08-02 南京理工大学 Spacecraft relative pose measurement method integrating depth map and point cloud
CN110264567B (en) * 2019-06-19 2022-10-14 南京邮电大学 Real-time three-dimensional modeling method based on mark points

Also Published As

Publication number Publication date
WO2021072710A1 (en) 2021-04-22

Similar Documents

Publication Publication Date Title
CN114270410A (en) Point cloud fusion method and system for moving object and computer storage medium
EP3283843B1 (en) Generating 3-dimensional maps of a scene using passive and active measurements
CN107272021B (en) Object detection using radar and visually defined image detection areas
CN109521756B (en) Obstacle motion information generation method and apparatus for unmanned vehicle
KR101829556B1 (en) Lidar-based classification of object movement
KR102032070B1 (en) System and Method for Depth Map Sampling
EP4089361A1 (en) Three-dimensional model generation method, information processing device, and program
WO2020243962A1 (en) Object detection method, electronic device and mobile platform
CN112513679B (en) Target identification method and device
WO2021253430A1 (en) Absolute pose determination method, electronic device and mobile platform
CN110663060B (en) Method, device, system and vehicle/robot for representing environmental elements
CN113378760A (en) Training target detection model and method and device for detecting target
EP3324359B1 (en) Image processing device and image processing method
WO2022198637A1 (en) Point cloud noise filtering method and system, and movable platform
WO2022141116A1 (en) Three-dimensional point cloud segmentation method and apparatus, and movable platform
US20200041622A1 (en) Detecting and tracking lidar cross-talk
CN111913177A (en) Method and device for detecting target object and storage medium
KR101030317B1 (en) Apparatus for tracking obstacle using stereo vision and method thereof
JP6397386B2 (en) Region division processing apparatus, method, and program
JP2013069045A (en) Image recognition device, image recognition method, and image recognition program
WO2022083529A1 (en) Data processing method and apparatus
CN115131756A (en) Target detection method and device
WO2023009180A1 (en) Lidar-based object tracking
CN114080545A (en) Data processing method and device, laser radar and storage medium
CN111542828A (en) Line recognition method, line recognition device, line recognition system, and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination