CN113807239B - Point cloud data processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN113807239B
CN113807239B (application CN202111081401A)
Authority
CN
China
Prior art keywords
sparse
data
point cloud
moment
cloud data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111081401.0A
Other languages
Chinese (zh)
Other versions
CN113807239A (en)
Inventor
许舒恒
许新玉
李�浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Kunpeng Jiangsu Technology Co Ltd
Original Assignee
Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Kunpeng Jiangsu Technology Co Ltd filed Critical Jingdong Kunpeng Jiangsu Technology Co Ltd
Priority to CN202111081401.0A priority Critical patent/CN113807239B/en
Publication of CN113807239A publication Critical patent/CN113807239A/en
Application granted granted Critical
Publication of CN113807239B publication Critical patent/CN113807239B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiment of the invention discloses a method and device for processing point cloud data, a storage medium, and an electronic device. The method comprises the following steps: acquiring state information of a target object at a first moment, and predicting state information of a second moment based on the state information of the first moment; acquiring previous frame point cloud data, and determining a sparse object in the previous frame point cloud data based on that data and the predicted state information of the second moment; and acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data to obtain compensated current frame point cloud data. By accumulating the historical sparse data into the current frame point cloud data, the density of the sparse object is improved. Since the historical sparse data comprise only a subset of data points, the compensation and superposition processing involve little computation, run quickly, and improve the processing efficiency of the point cloud data.

Description

Point cloud data processing method and device, storage medium and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of data processing, in particular to a method and a device for processing point cloud data, a storage medium and electronic equipment.
Background
In an autopilot scenario, the autopilot capability of an autopilot device is an important factor affecting autopilot quality, especially in complex driving scenarios.
Limited by the detection accuracy of its sensors, a current autopilot device recognizes distant target objects poorly. At present, missed detections in single frame data are avoided by a global-target static multi-frame accumulation scheme. However, in the process of implementing the present invention, the inventor found that the prior art has at least the following technical problem: the global-target static multi-frame accumulation scheme accumulates data from multiple time frames at a fixed frequency and for a fixed number of times, and the accumulated data are redundant, which not only increases the computation load and resource consumption of the autopilot device but also increases the time spent processing the data.
Disclosure of Invention
The embodiment of the invention provides a method and device for processing point cloud data, a storage medium, and an electronic device, so as to reduce data redundancy.
In a first aspect, an embodiment of the present invention provides a method for processing point cloud data, including:
acquiring state information of a target object at a first moment, and predicting state information of a second moment based on the state information of the first moment, wherein the first moment and the second moment are two consecutive moments on the time axis;
acquiring point cloud data of a previous frame, and determining a sparse object in the point cloud data of the previous frame based on the point cloud data of the previous frame and the prediction state information of the second moment;
and acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data, so as to obtain compensated current frame point cloud data.
In a second aspect, an embodiment of the present invention further provides a device for processing point cloud data, including:
the state information prediction module is used for acquiring state information of a target object at a first moment and predicting state information of a second moment based on the state information of the first moment, wherein the first moment and the second moment are two consecutive moments on the time axis;
the sparse object determining module is used for acquiring the point cloud data of the previous frame and determining a sparse object in the point cloud data of the previous frame based on the point cloud data of the previous frame and the prediction state information of the second moment;
the point cloud data processing module is used for acquiring current frame point cloud data and compensating the sparse object based on historical sparse data in the current frame point cloud data, so as to obtain compensated current frame point cloud data.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements a method for processing point cloud data according to any one of the embodiments of the present invention when the processor executes the program.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for processing point cloud data as provided in any embodiment of the present invention.
According to the technical scheme, the predicted state information of the target object at the second moment is obtained from the state information of the target object at the first moment, and the sparse object in the previous frame point cloud data is determined based on that predicted state information. Because the sparse object is determined by prediction before the actual state information is obtained, waiting time is shortened and processing efficiency is improved. Current frame point cloud data is then acquired, and the sparse object is compensated based on historical sparse data to obtain compensated current frame point cloud data. Storing only the historical sparse data, instead of storing entire point clouds for every moment as in static accumulation, reduces the amount of data stored and the memory occupied. Adding the compensated historical sparse data to the current frame point cloud data accumulates sparse data in the current frame and improves the density of the sparse object. Correspondingly, since the historical sparse data comprise only a subset of data points, the compensation and superposition processing involve little computation, run quickly, and improve the processing efficiency of the point cloud data.
Drawings
Fig. 1 is a flow chart of a method for processing point cloud data according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a motion model provided by an embodiment of the present invention;
fig. 3 is a flow chart of a method for processing point cloud data according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of a scan distribution state according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a sparse object provided by an embodiment of the present invention;
fig. 6 is a schematic diagram of a process flow of point cloud data according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a processing device for point cloud data according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flow chart of a processing method of point cloud data according to an embodiment of the present invention. The embodiment is applicable to processing point cloud data acquired during the driving of an autopilot device. The method may be performed by a point cloud data processing apparatus according to an embodiment of the present invention, which may be implemented in software and/or hardware and configured on an electronic device, such as an autopilot device, a processing device configured on an autopilot device, or a mobile phone, tablet computer, or similar device in communication with an autopilot device. The method specifically includes the following steps:
S110, acquiring state information of a target object at a first moment, and predicting state information of a second moment based on the state information of the first moment, wherein the first moment and the second moment are two consecutive moments on the time axis.
S120, acquiring point cloud data of a previous frame, and determining sparse objects in the point cloud data of the previous frame based on the point cloud data of the previous frame and the prediction state information of the second moment.
S130, acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data to obtain compensated current frame point cloud data.
In this embodiment, the target object is an autopilot device having an autopilot function, wherein the autopilot device may include, but is not limited to, an autopilot vehicle.
The status information at each moment includes, but is not limited to, the position (x, y, z) of the target object, the velocity v, the acceleration a, the throttle T/m, and brake information. The running state of the target object is determined by acquiring the state information at each moment in real time. The information collected by each sensor can be obtained through a communication or electrical connection with each sensor and each executing component of the target object; the sensors include, but are not limited to, a speed sensor, an acceleration sensor, a position sensor, and the like, and the executing components include, but are not limited to, the throttle, the brake, and the like. The sensors and executing components transmit the collected information to the electronic device of this embodiment at a preset time interval; the information is configured with time stamps, and the state information at each moment can be determined according to the time stamps. The preset time interval may be, for example, 1 second, though it is not limited thereto; the state information of the previous moment is the state information corresponding to the previous time stamp, and the interval between adjacent moments is the preset time interval.
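The grouping of timestamped sensor readings into per-moment state records can be sketched as follows (the `State` fields, `readings` layout, and 1-second interval are illustrative assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float                       # time stamp of the moment
    x: float                       # position components
    y: float
    z: float
    v: float                       # velocity
    a: float                       # acceleration

# Timestamped readings from the sensors and executing components, one entry
# per moment (field names and the 1-second interval are assumptions).
readings = {
    10.0: {"pos": (1.0, 2.0, 0.0), "v": 5.0, "a": 0.2},
    11.0: {"pos": (6.1, 2.0, 0.0), "v": 5.2, "a": 0.2},
}

def state_at(t: float) -> State:
    """Assemble the state information of the moment with time stamp t."""
    r = readings[t]
    return State(t, *r["pos"], r["v"], r["a"])

prev = state_at(10.0)   # state information of the previous moment
```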
The state information of the second moment is predicted from the state information of the previous moment, yielding the predicted state information of the second moment. After the state information of the previous moment is obtained, the predicted state information of the second moment can thus be available before the second moment arrives; processing the point cloud data based on it reduces the time spent waiting for the actual state information of the second moment, shortens the processing time of the point cloud data, and improves processing efficiency.
In some embodiments, predicting the predicted state information for the second time based on the first time state information includes: and inputting the state information at the first moment into a preset motion model, and acquiring the predicted state information at the second moment predicted by the motion model, wherein the motion model is obtained by training based on the state information at each previous moment and the corresponding state information at the next moment in the historical motion process.
The preset motion model can be a machine learning model, is obtained by training a training sample in advance, and has a prediction function of state information. In the present embodiment, the specific form of the motion model is not limited, and may be exemplified by, but not limited to, a logistic regression model, a neural network model, and the like. It should be noted that different types of autopilot devices may be configured with different motion models, and accordingly, each motion model is trained based on sample data of the same type of autopilot device. Correspondingly, in this embodiment, the motion model invoked by the current object is obtained based on the sample data training of the target object in the historical motion process, and/or is obtained based on the sample data training of the same type of automatic driving equipment of the target object. The motion model is used for processing the state information of the previous moment as input information, outputting predicted state information of the next moment, and determining a loss function based on the predicted state information of the next moment and the state information of the next moment so as to adjust model parameters of the motion model.
Optionally, during the running of the target object, the motion model is updated in real time based on the predicted state information at the second moment and the actual state information at the second moment, so as to improve the prediction accuracy of the motion model. Referring to fig. 2, fig. 2 is a schematic diagram of a motion model according to an embodiment of the present invention. In FIG. 2, S_{t-1} is the state information of the target object at the first moment, Ŝ_t is the predicted state information of the target object at the second moment, and S_t is the actual state information at the second moment. The motion model predicts the state information of the target object at the second moment from the input state information of the first moment; when the actual state information of the second moment is obtained, the motion model is updated in real time based on the actual state information and the predicted state information of the second moment.
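The patent leaves the motion model's internals to training; as a hedged stand-in, a constant-acceleration kinematic predictor illustrates how S_{t-1} maps to a predicted second-moment state (the field names, 1 s interval, and constant-acceleration assumption are all illustrative, not the patent's trained model):

```python
from dataclasses import dataclass

@dataclass
class State:
    x: float   # position along the driving direction
    v: float   # velocity
    a: float   # acceleration

DT = 1.0  # preset interval between the first and second moments (assumed 1 s)

def predict_next(s_prev: State) -> State:
    """Predict the second-moment state from the first-moment state
    under a constant-acceleration assumption."""
    return State(
        x=s_prev.x + s_prev.v * DT + 0.5 * s_prev.a * DT * DT,
        v=s_prev.v + s_prev.a * DT,
        a=s_prev.a,
    )

s_prev = State(x=0.0, v=10.0, a=2.0)   # S_{t-1}
s_pred = predict_next(s_prev)          # predicted S_t: x = 11.0, v = 12.0
```

A trained model would additionally compare `s_pred` against the actual second-moment state to drive its online update.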
Point cloud data is a collection of vectors in a three-dimensional coordinate system, represented in the form of X, Y, Z three-dimensional coordinates, and is generally used primarily to represent the shape of the exterior surface of an object. Optionally, in addition to the geometric position information represented by (X, Y, Z), the point cloud data may represent the RGB color, gray value, depth, and the like of a point. Illustratively, P_i = {X_i, Y_i, Z_i, ...} represents a point in space, and PointCloud = {P_1, P_2, P_3, ..., P_n} represents a set of point cloud data.
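A minimal sketch of this representation (the "intensity" attribute beyond X, Y, Z is illustrative):

```python
# A point carries geometric coordinates plus optional attributes
# such as intensity (attribute names are illustrative).
P1 = {"x": 1.0, "y": 2.0, "z": 0.5, "intensity": 0.8}
P2 = {"x": 1.1, "y": 2.0, "z": 0.5, "intensity": 0.7}

point_cloud = [P1, P2]   # PointCloud = {P1, P2, ..., Pn}

def xyz(p):
    """Geometric position (X, Y, Z) of a point."""
    return (p["x"], p["y"], p["z"])
```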
The point cloud data is generated by a 3D scanning device, which may be, for example, but not limited to, a laser radar (2D/3D), a stereo camera, a time-of-flight camera, and the like. The 3D scanning device automatically measures information of a large number of points on the surface of an object and outputs point cloud data in a preset data file.
In this embodiment, the 3D scanning device is disposed on the target object, and the target object is exemplified as an autonomous vehicle, and the autonomous vehicle may be disposed with a laser radar. The number of the lidars may be plural, and may be disposed at positions in front of, behind, and beside the autonomous vehicle, and this is not limited. Point cloud data of the target object in the driving direction is acquired through the 3D scanning device, and the obstacle in the driving direction is identified through the point cloud data. Specifically, point cloud data is collected based on a preset time interval, and the collected point cloud data is transmitted to the electronic device in the embodiment. The collection time interval of the point cloud data may be the same as the collection time interval of the state information, that is, the time interval of the first time and the second time, where the state information of the first time and the point cloud data of the previous frame may have a corresponding relationship, and the state information of the second time and the point cloud data of the current frame have a corresponding relationship. Optionally, the previous frame of point cloud data may be obtained by scanning at a first time, and the current frame of point cloud data may be obtained by scanning at a second time; alternatively, the previous frame of point cloud data may be scanned at a preset time interval before/after the first time, and the current frame of point cloud data may be scanned at a preset time interval before/after the second time.
Because of the limited scanning precision of the 3D scanning device, sparse data exist in the acquired point cloud data. Sparse data are data in which most values in the data set are missing or zero; they are incomplete, and are generally data points formed by objects far from the current position of the target object. Object recognition cannot be performed based on the sparse data, which affects the recognition of obstacles in the driving direction and further affects the obstacle avoidance function of the target object.
In this embodiment, based on the previous frame point cloud data and the prediction state information of the second moment, the sparse object in the previous frame point cloud data is determined. Optionally, the predicted state information of the previous frame of point cloud data and the predicted state information of the second moment are input into a prediction model of the point cloud data, the point cloud data at the second moment is predicted, the predicted point cloud data and the previous frame of point cloud data are compared, and the undisplayed data points are determined to be sparse objects. Optionally, based on the scanning characteristics of the 3D scanning device, taking the laser radar as an example, taking the laser as a signal source, and sending out the pulse laser by the laser, where the pulse laser hits an obstacle to cause scattering, a part of light waves will be reflected to a receiver of the laser radar, that is, each data point in the point cloud data is located on a scanning line of the 3D scanning device, and accordingly, the scanning line distribution of the 3D scanning device may be invoked, or the scanning line distribution of the 3D scanning device may be determined by the point cloud data of the previous frame. The scan line distribution under the predicted state information at the second moment can be predicted and obtained through the state information at the last moment and the scan line distribution of the 3D scanning device. 
Further, the state of the distribution of the scan lines of each data point in the previous frame of point cloud data under the predicted state information of the second time can be determined, for example, if any data point is located on any scan line under the predicted state information of the second time, it is determined that the data point is scanned at the second time and is not a sparse object, and if any data point is located outside each scan line under the predicted state information of the second time, it is determined that the data point is not scanned at the second time and belongs to the sparse object.
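One way such a scan-line membership test could look: compare each point's elevation angle, expressed in the predicted sensor frame of the second moment, against the lidar's scan-line elevation angles; a point falling outside every scan line (beyond an angular tolerance) belongs to the sparse object. The ring angles and tolerance below are assumptions, not values from the patent:

```python
import math

# Elevation angles of the lidar scan lines, in degrees (illustrative values
# for a 7-line sensor), and an angular tolerance of half the resolution.
RING_ANGLES = [-15.0, -10.0, -5.0, 0.0, 5.0, 10.0, 15.0]
TOLERANCE = 0.5

def elevation_deg(x, y, z):
    """Elevation angle of a point expressed in the (predicted) sensor frame."""
    return math.degrees(math.atan2(z, math.hypot(x, y)))

def on_scan_line(point):
    """True if the point lies on (near) some scan line and would be rescanned
    at the second moment; False means it belongs to the sparse object."""
    elev = elevation_deg(*point)
    return any(abs(elev - a) <= TOLERANCE for a in RING_ANGLES)

# A point at ~0 deg elevation lies on the 0-degree line; a point at ~2.5 deg
# falls between lines and is therefore sparse.
visible = on_scan_line((10.0, 0.0, 0.0))      # True
sparse = not on_scan_line((10.0, 0.0, 0.44))  # True (elevation ~2.5 deg)
```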
And by determining the sparse object, performing compensation processing on the sparse object, the density of the sparse object is improved, and the recognition accuracy of the sparse object is further improved. Meanwhile, only the sparse object is compensated, so that a large amount of calculation and loss for integrally processing the point cloud data are avoided.
In this embodiment, current frame point cloud data is obtained, and sparse objects in the current frame point cloud data are compensated by historical sparse data. The historical sparse data is determined based on the sparse objects of the target object at all times in the current moving process. The current moving process is a process that a target object continuously runs from a starting moment to a second moment, in the current moving process, a sparse object in each frame of acquired point cloud data is determined, historical sparse data is formed based on the sparse object, the historical sparse data is stored, and the subsequent point cloud data can be conveniently compensated. The history sparse data is stored instead of the complete history point cloud data, so that the storage of a large number of invalid data points is reduced, and the memory occupation caused by the storage of a large number of point cloud data is avoided.
Optionally, the historical sparse data is determined as follows: for the sparse object at any moment, a first compensation factor is generated based on the state information of the target object at that moment and the state information of the target object at the base moment, and the sparse object at that moment is compensated based on the first compensation factor to obtain sparse data corresponding to the base moment. The sparse data corresponding to the base moment, obtained by compensating the sparse objects at each historical moment, form the historical sparse data.
The sparse data at different moments are respectively located in different coordinate systems, and the sparse objects at each moment are converted into the same coordinate system by performing compensation processing on the sparse objects at each moment, so that management and application of the sparse objects in the same coordinate system are facilitated. The basic time is a target time for performing data conversion, and in an embodiment, the basic time may be any time of the target object in the current moving process; in some embodiments, it may be the initial time of the target object during the current movement; in an embodiment, it may be the initial moment of the target object in the current driving direction.
The first compensation factor is determined based on the state information at each instant and the state information at the base instant, optionally the first compensation factor being a conversion matrix. For example, the state information at any time may be converted by the conversion matrix at the time to obtain the state information at the base time, and correspondingly, the conversion matrix at the time, that is, the first compensation factor at the time, is obtained by analyzing the state information at any time and the state information at the base time. And for the sparse object at any moment, a first compensation factor at the corresponding moment is called, the sparse object is subjected to compensation processing through the first compensation factor, and sparse data of the sparse object relative to the basic moment is obtained through matrix multiplication of the sparse object and the first compensation factor.
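A sketch of the first compensation factor as a rigid-body transform, here reduced to 2D (x, y, yaw) in pure Python; all poses and points are illustrative assumptions:

```python
import math

def pose_matrix(x, y, yaw):
    """3x3 homogeneous matrix mapping sensor-frame points to world coords."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def invert(T):
    """Inverse of a rigid 2D homogeneous transform: R^T and -R^T t."""
    c, s = T[0][0], T[1][0]
    x, y = T[0][2], T[1][2]
    return [[ c,  s, -(c * x + s * y)],
            [-s,  c, -(-s * x + c * y)],
            [ 0,  0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(T, p):
    x, y = p
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])

# First compensation factor: converts points from the frame at moment k
# into the frame of the base moment (poses below are assumed).
T_world_base = pose_matrix(0.0, 0.0, 0.0)   # target-object pose at base moment
T_world_k    = pose_matrix(5.0, 0.0, 0.0)   # target-object pose at moment k
F1 = matmul(invert(T_world_base), T_world_k)

sparse_point_k = (2.0, 1.0)                    # sparse point in moment-k frame
sparse_point_base = apply(F1, sparse_point_k)  # (7.0, 1.0) in the base frame
```

The matrix multiplication of sparse points with `F1` corresponds to the compensation processing described above; a full 3D implementation would use 4x4 matrices.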
It should be noted that, the sparse object at the second moment is stored after being compensated by the corresponding first compensation factor, so as to update the historical sparse data.
And storing the sparse data, which are obtained by compensation, of the sparse objects at each moment relative to the basic moment to obtain historical sparse data. The history sparse data may be stored in a database or may be stored in a data stream, which is not limited.
On the basis of the embodiment, the current frame point cloud data is compensated based on each piece of sparse data in the historical sparse data, namely, each piece of sparse data in the historical sparse data is converted into a data point corresponding to the second moment, each data point obtained by converting the historical sparse data is overlapped into the current frame point cloud data, accumulation of the historical sparse data in the current frame point cloud data is achieved, the density of the sparse data in the current frame point cloud data is improved, and the object recognition accuracy in the current frame point cloud data is further improved.
Optionally, in the current frame point cloud data, the sparse object is compensated based on historical sparse data, so as to obtain compensated current frame point cloud data, including: acquiring current actual state information, and determining a second compensation factor based on the current actual state information and state information of a basic moment; performing compensation processing on the historical sparse data based on the second compensation factor to obtain sparse data corresponding to a second moment; and adding the sparse data relative to the second moment to the current frame point cloud data to obtain compensated current frame point cloud data.
The second compensation factor is used as a conversion matrix for converting data from the basic moment to the second moment, and the conversion matrix is calculated based on the current actual state information of the second moment and the state information of the basic moment. For example, the state information of the base time may be converted by the second compensation factor to obtain the current actual state information.
The compensation processing of each sparse data in the historical sparse data by the second compensation factor may be, for example, performing matrix multiplication on each sparse data in the historical sparse data and the second compensation factor to obtain sparse data corresponding to the second moment. And adding the sparse data obtained through the compensation processing relative to the second moment to the current frame point cloud data to obtain compensated current frame point cloud data.
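The second-compensation step can be sketched as converting the stored base-moment sparse data into the second-moment frame and appending the result to the current frame point cloud; for brevity this sketch uses a translation-only transform, and the poses and points are illustrative assumptions:

```python
# Second compensation factor sketched as a translation-only 2D transform
# from the base moment to the second moment (rotation omitted for brevity;
# the poses and points below are illustrative assumptions).

base_pose = (0.0, 0.0)       # target-object position at the base moment
current_pose = (12.0, 3.0)   # actual position at the second moment

def to_current_frame(p):
    """Convert a base-frame sparse point into the second-moment frame."""
    dx = current_pose[0] - base_pose[0]
    dy = current_pose[1] - base_pose[1]
    return (p[0] - dx, p[1] - dy)

historical_sparse = [(14.0, 3.0), (15.0, 4.0)]   # stored in the base frame
current_frame = [(1.0, 0.0)]                     # current frame point cloud

# Superpose the compensated sparse data onto the current frame point cloud.
current_frame += [to_current_frame(p) for p in historical_sparse]
# current_frame -> [(1.0, 0.0), (2.0, 0.0), (3.0, 1.0)]
```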
The sparse data is compensated in the compensated current frame point cloud data, and the obstacle recognition is carried out through the compensated current frame point cloud data, so that the recognition accuracy of the obstacle is improved, and the automatic driving accuracy of the target object is further improved.
According to the technical scheme, the predicted state information of the target object at the second moment is obtained from the state information of the target object at the first moment, and the sparse object in the previous frame point cloud data is determined based on that predicted state information. Because the sparse object is determined by prediction before the actual state information is obtained, waiting time is shortened and processing efficiency is improved. Current frame point cloud data is then acquired, and the sparse object is compensated based on historical sparse data to obtain compensated current frame point cloud data. Storing only the historical sparse data, instead of storing entire point clouds for every moment as in static accumulation, reduces the amount of data stored and the memory occupied. Adding the compensated historical sparse data to the current frame point cloud data accumulates sparse data in the current frame and improves the density of the sparse object. Correspondingly, since the historical sparse data comprise only a subset of data points, the compensation and superposition processing involve little computation, run quickly, and improve the processing efficiency of the point cloud data.
Example two
Fig. 3 is a flow chart of a processing method of point cloud data according to a second embodiment of the present invention; the embodiments of the present invention may be combined with each of the alternatives in the foregoing embodiments. In an embodiment of the present invention, optionally, determining the sparse object in the previous frame point cloud data based on the previous frame point cloud data and the predicted state information of the second moment includes: determining the data points in the previous frame point cloud data; predicting the display state of each data point at the second moment based on the predicted state information of the second moment and the previous frame point cloud data; and determining the data points whose display state at the second moment is "no" as the sparse object.
As shown in fig. 3, the method in the embodiment of the present invention specifically includes the following steps:
s210, acquiring first time state information of a target object, and predicting to obtain predicted state information of a second time based on the first time state information.
S220, determining data points in the point cloud data of the previous frame.
S230, predicting the display state of each data point at the second moment based on the prediction state information at the second moment and the point cloud data of the previous frame.
And S240, determining the data point with the display state of no at the second moment as a sparse object.
S250, acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data to obtain compensated current frame point cloud data.
In this embodiment, each data point in the point cloud data of the previous frame is in a display state of yes in the previous frame, the display state of each data point at the second moment is predicted according to the prediction state information of the second moment, if the display state is yes, it is determined that the data point is acquired at the second moment and can be displayed in the point cloud data, and if the display state is no, it is determined that the data point is not acquired at the second moment and cannot be displayed in the point cloud data, that is, sparse data.
On the basis of the foregoing embodiment, predicting the display state of each data point at the second time based on the predicted state information at the second time and the previous frame point cloud data includes: determining a scanning distribution state of a previous moment based on the previous frame point cloud data, and determining the scanning distribution state of a second moment based on the predicted state information of the second moment and the scanning distribution state of the previous moment; and determining the display state of the data point which is in the dead zone of resolution in the scanning distribution state at the second moment in each data point in the point cloud data of the previous frame as no.
The scan distribution state may be the scanning-line distribution state of a 3D scanning device. For example, since each data point in the previous frame point cloud data is located on a scanning line of the 3D scanning device, the scan distribution state may correspondingly be determined based on the distribution state of the data points in the previous frame point cloud data. The scan distribution state at the second moment is obtained by prediction based on the scan distribution state at the previous moment and the predicted state information of the second moment; that is, the scan distribution state at the previous moment is adjusted to a scan distribution state that conforms to the predicted state information of the second moment. Specifically, the scan distribution state at the previous moment may be adjusted according to the position change and/or the orientation change of the predicted state information of the second moment relative to the state information at the previous moment.
The area between any adjacent scanning lines in the scanning distribution state is a resolution blind area, if any data point is positioned on the scanning line, the display state is yes, and if any data point is positioned in the resolution blind area, the display state is no. For example, a data point whose display state at the previous time is yes and whose display state at the second time is no may be determined as a sparse object.
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a scan distribution state according to an embodiment of the present invention. In fig. 4, a solid black line is used to indicate a scanning line at the previous time, a dashed black line is used to indicate a predicted scanning line at the second time, and predicted state information at the second time is changed in position with respect to state information at the previous time. The data point P is located on the black solid line, that is, the display state at the previous moment is yes, the data point P is located outside the black dashed line, that is, between the two black dashed lines, that is, the second moment is located in the resolution blind area, and the display state at the second moment is no, where the data point P can be determined as a sparse object.
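To make the blind-zone test above concrete, the following is a minimal sketch, not the patented implementation, of deciding whether a data point from the previous frame would fall between scanning lines at the predicted second moment. The scanning-line elevation angles, the tolerance, and the translation-only pose model are all illustrative assumptions.

```python
import math

def is_sparse(point, predicted_pose, scan_angles_deg, tol_deg=0.2):
    """Return True if `point` would fall into the resolution blind zone
    (between adjacent scanning lines) as seen from the predicted pose."""
    # Express the point in the predicted sensor frame (translation-only sketch).
    x = point[0] - predicted_pose[0]
    y = point[1] - predicted_pose[1]
    z = point[2] - predicted_pose[2]
    # Elevation angle of the point from the predicted sensor position.
    elev = math.degrees(math.atan2(z, math.hypot(x, y)))
    # Display state is "yes" only if the point lies near some scanning line.
    return all(abs(elev - a) > tol_deg for a in scan_angles_deg)

# Hypothetical 16-line scanner: elevation angles -15°, -13°, ..., +15°.
angles = [-15 + 2 * i for i in range(16)]
print(is_sparse((10.0, 0.0, 0.10), (0.0, 0.0, 0.0), angles))    # True: between lines
print(is_sparse((10.0, 0.0, 0.1746), (0.0, 0.0, 0.0), angles))  # False: on the +1° line
```

In a full implementation the transform into the predicted frame would also apply the predicted rotation, matching the position and/or orientation change described above.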
On the basis of the foregoing embodiment, determining, as a sparse object, a data point whose display state at the second moment is no includes: determining, as a sparse object, a data point whose display state is no and whose current distance from the target object meets a threshold condition. In order to avoid the large data volume caused by too many sparse objects, the data points whose display state is no are further screened by distance information. The threshold condition may be preset and may be set as needed, which is not limited here. Specifically, a data point whose display state is no and whose current distance from the target object is greater than a distance threshold is determined as a sparse object.
On the basis of the foregoing embodiment, after determining, as the sparse object, the data point whose display state is no at the second time, may further include: and determining the processing priority of the sparse objects based on the position information of each sparse object. For example, it may be that distance information of the sparse object and the target object is determined based on position information of the sparse object, and a processing priority is determined based on the distance information, the processing priority being positively correlated with the distance information, that is, the larger the distance information is, the higher the processing priority is. By way of example, it is also possible to perform calculation based on position information (i.e., coordinate information) of the sparse object, and determine the processing priority based on the calculated value. For example, referring to fig. 5, fig. 5 is a schematic diagram of a sparse object provided by an embodiment of the present invention, where the sparse object P2 in fig. 5 has a higher processing priority than the sparse object P1.
And performing priority processing on the sparse objects with high processing priority so as to perform priority processing on the sparse objects far away from the target object, thereby improving the recognition accuracy of the sparse objects far away from the target object.
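The priority rule above can be sketched as a sort on the sparse objects; the squared-coordinate-sum weighting used here follows the W_i ∝ Px² + Py² + Pz² relation given later in the preferred example, and the tuple representation of points is an assumption.

```python
def by_priority(sparse_points):
    """Order sparse objects for processing: a larger squared distance from
    the target object (W_i = Px**2 + Py**2 + Pz**2) means a higher priority."""
    return sorted(sparse_points,
                  key=lambda p: p[0]**2 + p[1]**2 + p[2]**2,
                  reverse=True)

p1, p2 = (3.0, 0.0, 0.0), (10.0, 2.0, 1.0)
print(by_priority([p1, p2]))  # p2 first, since it is farther from the target
```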
According to the technical scheme provided by this embodiment, the display state of each data point at the second moment is predicted through the predicted state information of the second moment and the previous frame point cloud data, so that the sparse data is determined; the sparse objects are then compensated based on the historical sparse data to obtain the compensated current frame point cloud data. Storing only the historical sparse data replaces the static-accumulation approach of integrally storing the point cloud data at each moment, which reduces the data storage amount and the memory occupation; the historical sparse data is superimposed into the current frame point cloud data after compensation, so that sparse data is accumulated in the current frame point cloud data and the density of the sparse objects is improved. Correspondingly, since the historical sparse data is only a part of the data points, the compensation and superposition processing applied to it involves a small processing amount and a high processing speed, improving the processing efficiency of the point cloud data.
On the basis of the above embodiments, this embodiment also provides a preferred example; for illustration, referring to fig. 6, fig. 6 is a schematic diagram of a processing flow of point cloud data according to an embodiment of the present invention. In this embodiment, a terminal such as a mobile phone, or a processing device disposed on the target object, acquires state information of the target object at each moment. For example, the state information S_{t-1} of the target object at the previous moment is acquired; this first-moment state information includes position (x, y, z), velocity v, acceleration a, and throttle T/m. The predicted state information Ŝ_t at the second moment is obtained through a pre-established dynamic model M, and the predicted state information Ŝ_t at the second moment is used for determining the sparse objects. The predicted state information Ŝ_t obtained by the motion model and the actual movement state S_t at the second moment are also input into the dynamic model M as negative feedback, so that the dynamic model is continuously and dynamically calibrated for the current scene.
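The dynamic model M is not specified beyond being pre-established and calibrated online; as a hedged stand-in, a constant-acceleration kinematic prediction illustrates how a predicted state at the second moment can be produced from the state at the previous moment. The state layout and time step dt are assumptions.

```python
def predict_state(state, dt):
    """Constant-acceleration stand-in for the dynamic model M:
    given (x, y, z, vx, vy, vz, ax, ay, az) at the previous moment,
    predict position and velocity at the second moment."""
    x, y, z, vx, vy, vz, ax, ay, az = state
    return (x + vx * dt + 0.5 * ax * dt * dt,
            y + vy * dt + 0.5 * ay * dt * dt,
            z + vz * dt + 0.5 * az * dt * dt,
            vx + ax * dt, vy + ay * dt, vz + az * dt,
            ax, ay, az)

s_prev = (0.0, 0.0, 0.0, 10.0, 0.0, 0.0, 2.0, 0.0, 0.0)
print(predict_state(s_prev, 0.1))  # x becomes 1.01, vx becomes 10.2
```

In the patented scheme this prediction would additionally be corrected over time by the negative-feedback calibration against the actual state S_t.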
Because the scanning device has a fixed resolution, the data it collects on distant objects will be more sparse. For the point cloud data D_{t-1} collected under the motion state S_{t-1} at the previous moment, if a data point p is about to enter the resolution blind zone of the sensor under the predicted state information Ŝ_t at the second moment, the data point p is taken as an object for accumulation, namely a sparse object. Referring to fig. 2, under the motion state S_{t-1} at the previous moment, the data point p can be acquired from the view angle of the data acquisition device on the left (it lies on a scanning line of the scanning device); but under the predicted state information Ŝ_t at the second moment, the data point p falls in the blind zone of the data acquisition device (outside the scanning lines), so the data point p is regarded as a sparse object.
Each sparse object is given a priority for multi-frame accumulation based on the data point's own characteristics (e.g., position information). For the sparse objects p1 and p2, the priority is proportional to the distance between the point and the target object: W_i ∝ P_x² + P_y² + P_z², where W_i is the processing priority of sparse object i, and P_x, P_y, P_z are the coordinate information of sparse object i.
A compensation matrix T_{t-1} (first compensation factor) between the motion state S_{t-1} at the previous moment and the base state S_0 (e.g., the initial state) is established; based on the compensation matrix T_{t-1}, the sparse objects corresponding to the motion state S_{t-1} at the previous moment are compensated to obtain the sparse data in the base state S_0, and the historical sparse data is updated. The sparse data of the motion state at the previous moment may be P_{t-1} = {p_1, p_2, …, p_{n-1}}, which is projected into the space of the base state S_0 through the transformation matrix T_{t-1} (including translation, rotation, scaling, etc.): P_0 = P_{t-1} T_{t-1}. The projection-motion-compensated data is stored into the multi-frame accumulated data stream, i.e., the historical sparse data is updated: P_stream = P_stream + P_0, where P_stream is the historical sparse data.
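The first compensation step can be sketched as follows. This is a planar, translation-plus-yaw sketch under stated assumptions; the actual T_{t-1} of the scheme would be a full 3D rigid (and possibly scaled) transform, and the pose representation (x, y, yaw) is hypothetical.

```python
import math

def make_compensation(pose_prev, pose_base):
    """First compensation factor T_{t-1}: a rigid transform taking points
    from the frame of motion state S_{t-1} into the base state S_0 frame.
    Poses are (x, y, yaw); a planar sketch of the 3D case."""
    dx = pose_prev[0] - pose_base[0]
    dy = pose_prev[1] - pose_base[1]
    dyaw = pose_prev[2] - pose_base[2]
    c, s = math.cos(dyaw), math.sin(dyaw)
    def apply(p):
        # Rotate into the base orientation, then translate.
        return (c * p[0] - s * p[1] + dx, s * p[0] + c * p[1] + dy)
    return apply

stream = []  # P_stream: the multi-frame accumulated historical sparse data
T_prev = make_compensation((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
stream += [T_prev(p) for p in [(2.0, 3.0)]]  # P_stream = P_stream + P_0
print(stream)  # [(3.0, 3.0)]
```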
The actual state information S_t at the second moment is acquired, and the historical sparse data is projected into the space where the actual state information S_t is located through the second compensation factor T_t (including translation, rotation, scaling, etc.): P_{t+1} = P_stream T_t⁻¹. The projection-compensated data is superimposed onto the current frame point cloud data D_t collected under the actual state information S_t, obtaining the compensated point cloud data D_t'.
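Correspondingly, projecting the accumulated stream into the current frame and superimposing it can be sketched as below. This is a translation-only 2D sketch; the real inverse compensation T_t⁻¹ would also include rotation and possibly scaling, and the pose representation is an assumption.

```python
def project_to_current(stream, actual_pose, base_pose):
    """Second compensation factor T_t: project the accumulated historical
    sparse data from the base frame S_0 into the frame of the actual state
    S_t, so it can be superimposed onto the current frame D_t.
    Poses are (x, y); translation-only planar sketch."""
    dx = actual_pose[0] - base_pose[0]
    dy = actual_pose[1] - base_pose[1]
    return [(p[0] - dx, p[1] - dy) for p in stream]

history = [(3.0, 3.0)]        # P_stream, expressed in the base frame S_0
current_frame = [(5.0, 1.0)]  # D_t, collected at the second moment
# Superimpose the compensated history onto the current frame: D_t'.
compensated = current_frame + project_to_current(history, (1.0, 0.0), (0.0, 0.0))
print(compensated)  # [(5.0, 1.0), (2.0, 3.0)]
```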
Example III
Fig. 7 is a schematic structural diagram of a processing device for point cloud data according to an embodiment of the present invention, where the device includes:
the state information prediction module 310 is configured to obtain first time state information of a target object, and predict to obtain predicted state information of a second time based on the first time state information;
the sparse object determining module 320 is configured to obtain previous frame point cloud data, and determine a sparse object in the previous frame point cloud data based on the previous frame point cloud data and the prediction state information of the second moment;
the point cloud data processing module 330 is configured to obtain current frame point cloud data, and in the current frame point cloud data, compensate the sparse object based on historical sparse data to obtain compensated current frame point cloud data.
Based on the above embodiments, optionally, the state information prediction module 310 is configured to:
And inputting the state information at the first moment into a preset motion model, and acquiring the predicted state information at the second moment predicted by the motion model, wherein the motion model is obtained by training based on the state information at each previous moment and the corresponding state information at the next moment in the historical motion process.
Based on the above embodiment, optionally, the sparse object determining module 320 includes:
a data point determining unit for determining a data point in the previous frame of point cloud data;
a display state prediction unit, configured to predict a display state of each data point at the second time based on the prediction state information at the second time and the previous frame point cloud data;
and the sparse object determining unit is used for determining the data point with the display state of no at the second moment as a sparse object.
Optionally, the display state prediction unit is configured to:
determining a scanning distribution state of a previous moment based on the previous frame point cloud data, and determining the scanning distribution state of a second moment based on the predicted state information of the second moment and the scanning distribution state of the previous moment;
and determining the display state of the data point which is in the dead zone of resolution in the scanning distribution state at the second moment in each data point in the point cloud data of the previous frame as no.
Optionally, the sparse object determining unit is configured to:
determining a data point with the display state at the second moment being NO and the current distance from the target object meeting a threshold condition as a sparse object; or,
and determining the data point with the display state of no at the second moment as a sparse object, and determining the processing priority of the sparse object based on the position information of each sparse object.
On the basis of the above embodiment, optionally, the historical sparse data is determined based on sparse objects of the target object at each moment in the current moving process;
the apparatus further comprises:
the historical sparse data generation module is used for generating a first compensation factor for the sparse object at any moment based on the state information of the target object at any moment and the state information of the target object at the base moment, and carrying out compensation processing on the sparse object at any moment based on the compensation factor to obtain sparse data corresponding to the base moment, wherein the sparse data corresponding to the base moment, which are respectively obtained by the sparse objects at each historical moment through the compensation processing, form historical sparse data.
Based on the above embodiment, optionally, the point cloud data processing module 330 is configured to:
Acquiring current actual state information, and determining a second compensation factor based on the current actual state information and state information of a basic moment;
performing compensation processing on the historical sparse data based on the second compensation factor to obtain sparse data corresponding to a second moment;
and adding the sparse data relative to the second moment to the current frame point cloud data to obtain compensated current frame point cloud data.
The processing device for the point cloud data provided by the embodiment of the invention can execute the processing method for the point cloud data provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the processing method for the point cloud data.
Example IV
Fig. 8 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. Fig. 8 shows a block diagram of an electronic device 12 suitable for implementing embodiments of the present invention. The electronic device 12 shown in fig. 8 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present invention. The device 12 is typically an electronic device that undertakes the point cloud data processing functions.
As shown in fig. 8, the electronic device 12 is in the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors 16, a memory device 28, and a bus 18 connecting the various system components, including the memory device 28 and the processors 16.
Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include industry standard architecture (Industry Standard Architecture, ISA) bus, micro channel architecture (Micro Channel Architecture, MCA) bus, enhanced ISA bus, video electronics standards association (Video Electronics Standards Association, VESA) local bus, and peripheral component interconnect (Peripheral Component Interconnect, PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The storage 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory, RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 8, commonly referred to as a "hard disk drive"). Although not shown in fig. 8, a disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from and writing to a removable nonvolatile optical disk (e.g., a Compact Disc-Read Only Memory (CD-ROM), digital versatile Disc (Digital Video Disc-Read Only Memory, DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. The storage device 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the embodiments of the invention.
A program 36 having a set (at least one) of program modules 26 may be stored, for example, in the storage device 28; such program modules 26 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, each of which, or some combination of which, may include an implementation of a networking environment. The program modules 26 generally perform the functions and/or methods of the embodiments described herein.
The electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, camera, display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., network card, modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), and/or a public network such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, redundant array of independent disks (Redundant Arrays of Independent Disks, RAID) systems, tape drives, data backup storage systems, and the like.
The processor 16 executes various functional applications and data processing by running a program stored in the storage device 28, for example, implementing the processing method of point cloud data provided by the above-described embodiment of the present invention.
Example five
A fifth embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for processing point cloud data as provided by the embodiments of the present invention.
Of course, the computer readable storage medium provided by the embodiments of the present invention, on which the computer program stored is not limited to the above-described method operations, may also perform the processing method of point cloud data provided by any embodiment of the present invention.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wire, electrical leads, fiber optic cables, RF, and the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. The processing method of the point cloud data is characterized by comprising the following steps of:
acquiring first time state information of a target object, and predicting and obtaining prediction state information of second time based on the first time state information, wherein the first time and the second time are two continuous time on a time axis;
acquiring point cloud data of a previous frame, and determining a sparse object in the point cloud data of the previous frame based on the point cloud data of the previous frame and the prediction state information of the second moment;
acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data to obtain compensated current frame point cloud data;
The historical sparse data is determined based on sparse objects of the target object at all moments in the current moving process; the determining mode of the history sparse data comprises the following steps:
for a sparse object at any moment, generating a first compensation factor based on the state information of the target object at any moment and the state information of the target object at a basic moment, and performing compensation processing on the sparse object at any moment based on the compensation factor to obtain sparse data corresponding to the basic moment, wherein the sparse data corresponding to the basic moment, which are respectively obtained by the sparse objects at each historical moment through the compensation processing, form historical sparse data;
the compensating the sparse object based on the history sparse data in the current frame point cloud data to obtain compensated current frame point cloud data includes:
acquiring current actual state information, and determining a second compensation factor based on the current actual state information and state information of a basic moment; performing compensation processing on the historical sparse data based on the second compensation factor to obtain sparse data corresponding to a second moment; and adding the sparse data relative to the second moment to the current frame point cloud data to obtain compensated current frame point cloud data.
2. The method of claim 1, wherein predicting the predicted state information at the second time based on the first time state information comprises:
and inputting the state information at the first moment into a preset motion model, and acquiring the predicted state information at the second moment predicted by the motion model, wherein the motion model is obtained by training based on the state information at each previous moment and the corresponding state information at the next moment in the historical motion process.
3. The method of claim 1, wherein the determining sparse objects in the previous frame of point cloud data based on the previous frame of point cloud data and the predicted state information for the second time instant comprises:
determining a data point in the point cloud data of the previous frame;
predicting the display state of each data point at the second moment based on the predicted state information at the second moment and the point cloud data of the previous frame;
and determining the data point with the display state of no at the second moment as a sparse object.
4. A method according to claim 3, wherein predicting the display state of the data points at the second time based on the predicted state information at the second time and the previous frame of point cloud data comprises:
Determining a scanning distribution state of a previous moment based on the previous frame point cloud data, and determining the scanning distribution state of a second moment based on the predicted state information of the second moment and the scanning distribution state of the previous moment;
and determining the display state of the data point which is in the dead zone of resolution in the scanning distribution state at the second moment in each data point in the point cloud data of the previous frame as no.
5. A method according to claim 3, wherein said determining as a sparse object a data point for which the display state at the second instant is no comprises:
determining a data point with the display state at the second moment being NO and the current distance from the target object meeting a threshold condition as a sparse object; or,
and determining the data point with the display state of no at the second moment as a sparse object, and determining the processing priority of the sparse object based on the position information of each sparse object.
6. A processing apparatus for point cloud data, comprising:
the state information prediction module is used for obtaining first time state information of a target object and predicting and obtaining predicted state information of second time based on the first time state information, wherein the first time and the second time are two continuous time on a time axis;
The sparse object determining module is used for acquiring the point cloud data of the previous frame and determining a sparse object in the point cloud data of the previous frame based on the point cloud data of the previous frame and the prediction state information of the second moment;
the point cloud data processing module is used for acquiring current frame point cloud data, and compensating the sparse object based on historical sparse data in the current frame point cloud data to obtain compensated current frame point cloud data;
the historical sparse data is determined based on sparse objects of the target object at all moments in the current moving process;
the processing device of the point cloud data further comprises:
the historical sparse data generation module is used for generating a first compensation factor for the sparse object at any moment based on the state information of the target object at any moment and the state information of the target object at the base moment, and carrying out compensation processing on the sparse object at any moment based on the compensation factor to obtain sparse data corresponding to the base moment, wherein the sparse data corresponding to the base moment, which are respectively obtained by the sparse object at each historical moment through the compensation processing, form historical sparse data;
the point cloud data processing module is used for: acquiring current actual state information, and determining a second compensation factor based on the current actual state information and state information of a basic moment; performing compensation processing on the historical sparse data based on the second compensation factor to obtain sparse data corresponding to a second moment; and adding the sparse data relative to the second moment to the current frame point cloud data to obtain compensated current frame point cloud data.
7. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the point cloud data processing method according to any one of claims 1-5.
8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the point cloud data processing method according to any one of claims 1-5.
CN202111081401.0A 2021-09-15 2021-09-15 Point cloud data processing method and device, storage medium and electronic equipment Active CN113807239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111081401.0A CN113807239B (en) 2021-09-15 2021-09-15 Point cloud data processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111081401.0A CN113807239B (en) 2021-09-15 2021-09-15 Point cloud data processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113807239A CN113807239A (en) 2021-12-17
CN113807239B true CN113807239B (en) 2023-12-08

Family

ID=78895393

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111081401.0A Active CN113807239B (en) 2021-09-15 2021-09-15 Point cloud data processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113807239B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104820982A (en) * 2015-04-23 2015-08-05 北京理工大学 Real-time terrain estimation method based on kernel function
CN108152831A (en) * 2017-12-06 2018-06-12 中国农业大学 A kind of laser radar obstacle recognition method and system
CN108647646A (en) * 2018-05-11 2018-10-12 北京理工大学 The optimizing detection method and device of low obstructions based on low harness radar
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN111060099A (en) * 2019-11-29 2020-04-24 畅加风行(苏州)智能科技有限公司 Real-time positioning method for unmanned automobile
CN111337941A (en) * 2020-03-18 2020-06-26 中国科学技术大学 Dynamic obstacle tracking method based on sparse laser radar data
CN112666535A (en) * 2021-01-12 2021-04-16 重庆长安汽车股份有限公司 Environment sensing method and system based on multi-radar data fusion
WO2021072710A1 (en) * 2019-10-17 2021-04-22 深圳市大疆创新科技有限公司 Point cloud fusion method and system for moving object, and computer storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107976688A (en) * 2016-10-25 2018-05-01 菜鸟智能物流控股有限公司 Obstacle detection method and related device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Yanguo; Li Qing. LiDAR point cloud fusion method based on an inertial measurement unit. Journal of System Simulation, 2018, (11), full text. *

Also Published As

Publication number Publication date
CN113807239A (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN109635685B (en) Target object 3D detection method, device, medium and equipment
US20240112051A1 (en) Machine learning models operating at different frequencies for autonomous vehicles
JP6745328B2 (en) Method and apparatus for recovering point cloud data
US10497145B2 (en) System and method for real-time large image homography processing
CN113264066B (en) Obstacle track prediction method and device, automatic driving vehicle and road side equipment
JP2021515939A (en) Monocular depth estimation method and its devices, equipment and storage media
CN111563450B (en) Data processing method, device, equipment and storage medium
US11688177B2 (en) Obstacle detection method and device, apparatus, and storage medium
JP2023530545A (en) Spatial geometric information estimation model generation method and apparatus
CN113780064A (en) Target tracking method and device
JP2021174531A (en) Target tracking method and device, electronic equipment, storage medium, and computer program
CN115375887A (en) Moving target trajectory prediction method, device, equipment and medium
CN115346192A (en) Data fusion method, system, equipment and medium based on multi-source sensor perception
CN112651535A (en) Local path planning method and device, storage medium, electronic equipment and vehicle
CN113807239B (en) Point cloud data processing method and device, storage medium and electronic equipment
CN113793349A (en) Target detection method and device, computer readable storage medium and electronic equipment
US20210174079A1 (en) Method and apparatus for object recognition
CN114820953B (en) Data processing method, device, equipment and storage medium
CN111915587A (en) Video processing method, video processing device, storage medium and electronic equipment
CN115856874A (en) Millimeter wave radar point cloud noise reduction method, device, equipment and storage medium
CN115471731A (en) Image processing method, image processing apparatus, storage medium, and device
CN113111692B (en) Target detection method, target detection device, computer readable storage medium and electronic equipment
CN114440856A (en) Method and device for constructing semantic map
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN114964204A (en) Map construction method, map using method, map constructing device, map using equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant