CN116338604A - Data processing method, device, electronic equipment and storage medium


Info

Publication number
CN116338604A
Authority
CN
China
Prior art keywords
point cloud
cloud data
target object
data
detection box
Prior art date
Legal status
Pending
Application number
CN202310293893.2A
Other languages
Chinese (zh)
Inventor
张意逊
赵广明
方志刚
陈忠明
Current Assignee
Kunyi Electronic Technology Shanghai Co Ltd
Original Assignee
Kunyi Electronic Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Kunyi Electronic Technology Shanghai Co Ltd filed Critical Kunyi Electronic Technology Shanghai Co Ltd
Priority to CN202310293893.2A
Publication of CN116338604A


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application provides a data processing method, a data processing device, an electronic device and a storage medium. The method first acquires first point cloud data collected by a radar on a target object under a first acquisition condition, where the first acquisition condition includes the first distance between the target object and the radar being smaller than a first threshold. It then acquires second point cloud data collected by the radar on the target object under a second acquisition condition, where the second acquisition condition includes the number of data points in the second point cloud data being no greater than a second threshold, and/or the second distance between the target object and the radar being greater than a third threshold, the third threshold being greater than or equal to the first threshold. The method fuses the first point cloud data and the second point cloud data, obtains third point cloud data and a third detection box of the third point cloud data from the fusion result and a perception algorithm, and finally marks the third detection box on the second point cloud data. Because the third detection box is derived from the fused third point cloud data and then marked on the second point cloud data, the method achieves both marking accuracy and marking efficiency.

Description

Data processing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a data processing method, a data processing device, an electronic device, and a storage medium.
Background
In a road test scenario, a radar is mounted on a road test vehicle; the radar scans the external environment to obtain point cloud data of that environment. When point cloud data is to be used for training (including training, testing or verifying) a perception algorithm, the point cloud data of a target object generally needs to be marked. For example, a detection box is marked for the point cloud data of a certain vehicle; the detection box may be a cuboid enclosing all of the vehicle's data points. The detection box and the vehicle's point cloud data are then re-injected into the perception algorithm for training, testing or verification.
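To make the notion of a detection box concrete, the following sketch (an illustration in Python, not the patent's perception algorithm) computes the simplest form of such a cuboid: an axis-aligned box enclosing all data points of an object. A real detection box would additionally carry a heading angle.

```python
import numpy as np

def axis_aligned_detection_box(points: np.ndarray):
    """Return (min_corner, max_corner) of the cuboid enclosing an
    N x 3 array of object points. Illustrative only: the patent's
    detection boxes are produced by a perception algorithm and
    also encode orientation."""
    if points.ndim != 2 or points.shape[1] != 3:
        raise ValueError("expected an N x 3 array of xyz coordinates")
    return points.min(axis=0), points.max(axis=0)

# Hypothetical example: points sampled from a vehicle ~4.5 m long.
pts = np.random.rand(500, 3) * np.array([4.5, 1.8, 1.5])
lo, hi = axis_aligned_detection_box(pts)
print("box size (L, W, H):", hi - lo)
```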
Existing marking is mainly done in two ways: fully manual marking, or manual marking assisted by a perception algorithm. The former is accurate but inefficient. The latter is accurate only when the target object is close to the radar: when the target object is far from the radar, its point cloud is sparse, and the size, shape and other attributes of the detection box produced by the perception algorithm do not match the actual object, so the accuracy is low and subsequent use of the marking result degrades training, verification or test results; yet manually marking distant target objects is far too slow.
Therefore, marking point cloud data currently faces the technical problem that marking accuracy and marking efficiency cannot both be achieved, and improvement is needed.
Disclosure of Invention
The embodiments of the application provide a data processing method, a device, an electronic device and a storage medium, which alleviate the technical problem that marking accuracy and marking efficiency cannot both be achieved when marking point cloud data.
In order to solve the technical problems, the embodiment of the application provides the following technical scheme:
the application provides a data processing method, which comprises the following steps:
acquiring first point cloud data acquired by a radar on a target object under a first acquisition condition, and marking a first detection box of the first point cloud data based on a perception algorithm, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold;
acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, wherein the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and the perception algorithm;
and marking the third detection box on the second point cloud data.
Meanwhile, the embodiment of the application also provides a data processing device, which comprises:
the first acquisition module is used for acquiring first point cloud data acquired by the radar on a target object under a first acquisition condition, and marking a first detection box of the first point cloud data based on a perception algorithm, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold value;
the second acquisition module is used for acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, and the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
the fusion module is used for fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and the perception algorithm;
and the marking module, which is used for marking the third detection box on the second point cloud data.
The application also provides an electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform the steps in the data processing method described in any one of the above.
Embodiments of the present application provide a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the data processing method described above.
The beneficial effects are as follows: the application provides a data processing method, a device, an electronic device and a storage medium. The method first acquires first point cloud data collected by a radar on a target object under a first acquisition condition, the first acquisition condition including the first distance between the target object and the radar being smaller than a first threshold; then acquires second point cloud data collected by the radar on the target object under a second acquisition condition, the second acquisition condition including the number of data points in the second point cloud data being no greater than a second threshold, and/or the second distance between the target object and the radar being greater than a third threshold, the third threshold being greater than or equal to the first threshold; fuses the first point cloud data and the second point cloud data, and obtains third point cloud data and a third detection box of the third point cloud data from the fusion result and the perception algorithm; and finally marks the third detection box on the second point cloud data. The first point cloud data obtained under the first acquisition condition is dense and rich in information, while the second point cloud data obtained under the second acquisition condition is sparse and poor in information; after fusion, the third point cloud data contains both and is therefore also rich in information. Compared with directly marking the sparse second point cloud data to obtain a second detection box, first obtaining the third detection box from the dense third point cloud data and then marking it on the second point cloud data effectively improves marking accuracy; and because the whole process is computed automatically by the perception algorithm, no manual marking is needed, which improves marking efficiency. That is, the present application achieves both marking accuracy and marking efficiency.
Drawings
Technical solutions and other advantageous effects of the present application will be made apparent from the following detailed description of specific embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of a scenario of a data processing method according to an embodiment of the present application.
Fig. 2 is a flow chart of a data processing method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of the second point cloud data and the second detection box in the first fusion and labeling mode.
Fig. 4 is a schematic diagram of third point cloud data and a second detection box in the first fusion and labeling mode.
Fig. 5 is a schematic diagram of third point cloud data and a third detection box in the first fusion and labeling mode.
Fig. 6 is a schematic diagram of the second point cloud data and the third detection box in the first fusion and labeling mode.
Fig. 7 is a schematic diagram of the second point cloud data and the second detection box in the second fusion and labeling mode.
Fig. 8 is a schematic diagram of the third point cloud data and the second detection box in the second fusion and labeling mode.
Fig. 9 is a schematic diagram of the third point cloud data and the third detection box in the second fusion and labeling mode.
Fig. 10 is a schematic diagram of the second point cloud data and the third detection box in the second fusion and labeling mode.
Fig. 11 is a schematic diagram of a data processing apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, fig. 1 is a schematic view of a scenario in which the data processing method provided in an embodiment of the present application is applied. The scenario may include a road test vehicle (or another vehicle not dedicated to road testing), a target object and a data processing device, where the road test vehicle is equipped with a radar, the target object is a vehicle, a pedestrian or another dynamic or static object on the road, and the data processing device may be local or remote, wherein:
when relative motion occurs between the target object and the road test vehicle, the radar collects the target object under the first acquisition condition to obtain first point cloud data, and collects the target object under the second acquisition condition to obtain second point cloud data. The first acquisition condition includes the first distance between the target object and the radar being smaller than a first threshold; in this case the first point cloud data is usually dense and rich in information. The second acquisition condition includes any one of the following three cases: first, the number of data points in the second point cloud data is not greater than a second threshold; second, the second distance between the target object and the radar is greater than a third threshold, the third threshold being greater than or equal to the first threshold; third, both of the preceding cases hold at once. In these cases the second point cloud data is sparse and contains little information; the conditions can be read as simple predicates, as in the sketch below.
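The sketch below is an illustrative reading of the two acquisition conditions, not part of the patent: the threshold names and the example values (30 m, 100 points, 80 m) are invented for demonstration.

```python
def meets_first_condition(distance_m: float,
                          first_threshold_m: float = 30.0) -> bool:
    # First acquisition condition: target-radar distance below the first threshold.
    return distance_m < first_threshold_m

def meets_second_condition(num_points: int, distance_m: float,
                           second_threshold_pts: int = 100,
                           third_threshold_m: float = 80.0) -> bool:
    # Second acquisition condition: sparse point cloud and/or large distance
    # (third_threshold_m is required to be >= first_threshold_m).
    # The "and/or" covers all three cases listed above.
    return num_points <= second_threshold_pts or distance_m > third_threshold_m
```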
The first point cloud data and the second point cloud data are fused to obtain third point cloud data; the third point cloud data is used as the marking source under the second acquisition condition, and the marking source is marked based on a perception algorithm to obtain a third detection box, the third detection box being a three-dimensional frame enclosing all data points in the third point cloud data. The third detection box is then marked on the second point cloud data and used as the marking result of the second point cloud data.
Because the third point cloud data contains both the first point cloud data and the second point cloud data, it is rich in information. Compared with the second detection box obtained with the second point cloud data as the marking source, the third detection box obtained with the third point cloud data as the marking source is closer to the actual size and position of the target object, which effectively improves the marking accuracy of the second point cloud data; and because the whole process is computed automatically by the perception algorithm, no manual marking is needed, which improves marking efficiency. Marking accuracy and marking efficiency are thus balanced.
It should be noted that, the schematic diagram of the system scenario shown in fig. 1 is only an example, and the data processing apparatus and scenario described in the embodiments of the present application are for more clearly describing the technical solution of the embodiments of the present application, and do not constitute a limitation on the technical solution provided in the embodiments of the present application, and those skilled in the art can know that, with the evolution of the system and the appearance of a new service scenario, the technical solution provided in the embodiments of the present application is equally applicable to similar technical problems. The following will describe in detail. The following description of the embodiments is not intended to limit the preferred embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of data processing according to an embodiment of the present application, and the method specifically includes:
s1: and acquiring first point cloud data acquired by the radar on the target object under a first acquisition condition, wherein the first acquisition condition comprises that the first distance between the target object and the radar is smaller than a first threshold value.
In the embodiment of the application, the radar is mounted on a road test vehicle; a road test scenario may contain one or more road test vehicles, and each road test vehicle may carry one or more radars. The radar can sense and collect external environment information within its sensing range and obtain point cloud data; the target object is the object in the external environment that needs to be tracked and marked, such as a certain vehicle. When the target object and the radar are in relative motion, the radar can collect the target object periodically or continuously, obtaining corresponding point cloud data at each acquisition through a perception algorithm. The relative motion may take any form: the target object stationary while the radar moves toward or away from it; the radar stationary while the target object moves toward or away from it; or both in motion, approaching or separating. The embodiment of the application does not limit the type of relative motion between the target object and the radar, and those skilled in the art can set it as required.
The radar collects the target object under the first acquisition condition to obtain first point cloud data, which includes the attribute information (coordinates, reflection intensity, color and the like) of each data point on the target object; these data points form a three-dimensional first point cloud. The first acquisition condition includes the first distance between the target object and the radar being smaller than a first threshold, and the first threshold can be set specifically after comprehensively considering factors such as the radar's acquisition capability (acquisition range, acquisition precision and the like) and the attribute information (shape, size and the like) of the target object. When the first distance is smaller than the first threshold, the target object is close to the radar, and the first point cloud data is generally dense and information-rich.
After the first point cloud data of the target object is obtained, it can be marked based on a perception algorithm to obtain a first detection box. The first detection box is a three-dimensional frame, such as a cuboid, enclosing all data points in the first point cloud data of the target object, and each data point in the first point cloud data serves as the marking source of the first detection box. The perception algorithm may be a BEVFusion-style algorithm, which encodes camera image data and radar point cloud data into features in the same BEV (bird's-eye-view) space and passes them to downstream task heads; its radar branch, which processes the point cloud data, can be used to perceive and mark all data points of the target object to obtain the corresponding detection box.
When the point cloud data of the target object is dense, the size, position and other attributes of the detection box marked by the perception algorithm match the actual target object well, i.e. the marking accuracy is high; conversely, when the point cloud data is sparse, the marked detection box deviates more from the actual target object, i.e. the marking accuracy is low. In this embodiment, since the first point cloud data is dense, the first detection box obtained with the first point cloud data as the marking source can trace the target object more accurately.
In one embodiment, S1 specifically includes: acquiring first point cloud data collected by the radar on the target object from a target angle. The first point cloud data may come from a single acquisition by the radar, the radar's acquisition angle relative to the target object being the target angle. In the following embodiments, first point cloud data acquired in this way is denoted A, and the corresponding marked first detection box is denoted BOX_A.
In one embodiment, S1 specifically includes: acquiring at least two groups of sub-point cloud data collected by the radar on the target object from at least two different angles, and fusing the at least two groups of sub-point cloud data to obtain the first point cloud data. Here the first point cloud data is obtained by collecting the target object from two or more different angles and then fusing: the collection may be done by the same radar from different angles around the target object at different moments, or by different radars from different angles around the target object at different or identical moments. In the following embodiments, the sub-point cloud data collected at the different angles are denoted B1, B2, ..., Bn, the fused first point cloud data is denoted B, and the corresponding marked first detection box is denoted BOX_B.
In one embodiment, the step of fusing at least two sets of sub-point cloud data to obtain first point cloud data specifically includes: determining reference sub-point cloud data from at least two groups of sub-point cloud data, and transforming other sub-point cloud data except the reference sub-point cloud data in the at least two groups of sub-point cloud data into a first reference coordinate system where the reference sub-point cloud data is located; in a first reference coordinate system, performing feature matching on other sub-point cloud data and reference sub-point cloud data, and adjusting coordinate information of the other sub-point cloud data according to a matching result; and merging the adjusted other sub-point cloud data with the reference sub-point cloud data to obtain first point cloud data.
When the target object is collected at different angles, each group of sub-point cloud data has its own coordinate system, so data points cannot be compared by position directly; the coordinate systems of all sub-point cloud data must therefore be unified before fusion. In addition, although all sub-point cloud data are collected from the same target object at different angles, registration and fusion require matching parts between the other sub-point cloud data and the reference sub-point cloud data. These matching parts exist as matching point pairs, each pair consisting of a feature point a in the other sub-point cloud data and a feature point c in the reference sub-point cloud data, where matching means that, after the other sub-point cloud data is registered with the reference sub-point cloud data, feature point a and feature point c express the same attribute of the target object.
In general there are multiple matching point pairs between the other sub-point cloud data and the reference sub-point cloud data. The matching point pairs are found through feature matching, and the coordinate conversion parameters between the other sub-point cloud data and the reference sub-point cloud data are solved such that, after all data points of the other sub-point cloud data are coordinate-adjusted based on these parameters, the accumulated coordinate difference of feature points a and c over all matching point pairs is minimal, i.e. the matching degree between the other sub-point cloud data and the reference sub-point cloud data is maximal. Feature matching can be performed with the ICP algorithm or another point cloud registration algorithm; all matching point pairs are determined from the matching result and the corresponding coordinate conversion parameters are solved, which may take the form of a matrix. The coordinate information of all data points in the other sub-point cloud data is adjusted based on the coordinate conversion parameters, and the adjusted other sub-point cloud data is finally fused with the reference sub-point cloud data; the resulting whole is the first point cloud data B. When there are two or more groups of other sub-point cloud data, the above operation is performed for each group separately.
Specifically, taking sub-point cloud data B1, B2 and B3 as an example, one group, say B1, is chosen as the reference sub-point cloud data, and the coordinate system of B1 is taken as the first reference coordinate system; B2 and B3 are then both transformed into this first reference coordinate system. Next, B1 and B2 are feature-matched in the first reference coordinate system, each matching point pair between them is found, the B2-to-B1 coordinate conversion parameters are solved from the combined coordinate differences of the matching point pairs, and the coordinates of all data points in B2 are adjusted, completing the registration and fusion of B1 and B2. The same is then done for B1 and B3. Combining the two fusion results gives the final first point cloud data B; a sketch of this procedure is given below.
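The following sketch shows one way this multi-angle fusion could look in code. It assumes Open3D's ICP registration as a stand-in for the patent's unspecified registration step; the function names and parameters here are illustrative, not the patent's implementation.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D used as the registration library

def to_cloud(arr: np.ndarray) -> o3d.geometry.PointCloud:
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(arr)
    return pcd

def fuse_sub_clouds(reference: np.ndarray, others: list,
                    max_corr_dist: float = 0.5) -> np.ndarray:
    """Register each other sub-point cloud (B2, B3, ...) onto the
    reference B1 and merge, mirroring the example above."""
    ref = to_cloud(reference)
    merged = [reference]
    for cloud in others:
        src = to_cloud(cloud)
        # Solve the coordinate conversion parameters (a 4x4 rigid
        # transform) by minimizing matched point-pair distances.
        result = o3d.pipelines.registration.registration_icp(
            src, ref, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        src.transform(result.transformation)  # adjust coordinate information
        merged.append(np.asarray(src.points))
    return np.vstack(merged)  # the fused first point cloud data B
```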
S2: acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, wherein the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is greater than a third threshold, and the third threshold is greater than or equal to the first threshold.
The radar collects the target object under the second acquisition condition to obtain second point cloud data. The second acquisition condition includes any one of the following three cases: first, the number of data points in the second point cloud data is not greater than a second threshold; second, the second distance between the target object and the radar is greater than a third threshold, the third threshold being greater than or equal to the first threshold; third, both of the preceding cases hold at once. The first threshold here is the one mentioned in step S1; the third threshold must be greater than or equal to the first threshold; and the second threshold can be set specifically after comprehensively considering factors such as the radar's acquisition capability (acquisition range, acquisition precision and the like) and the attribute information (shape, size and the like) of the target object. In the following embodiments, the acquired second point cloud data is denoted C.
For the first case, when the number of data points is not greater than the second threshold, the second point cloud data can be directly judged sparse; under this condition the second point cloud data, used as a marking source, cannot meet the accuracy requirements of subsequent marking. For the second case, when the second distance is greater than the third threshold, the target object is far from the radar and the second point cloud data is generally sparse; under this condition the second point cloud data likewise cannot meet the accuracy requirements of subsequent marking. For the third case, since both the large distance and the sparse point cloud hold at the same time, the accuracy of subsequent marking is even lower and fails to meet the marking requirement.
In the following embodiments, for ease of comparison, the second detection box obtained by directly marking the second point cloud data C based on the perception algorithm is denoted BOX_C; in all three cases, the marking accuracy of BOX_C is low.
S3: and fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to the fusion result and the perception algorithm.
The first point cloud data A (or B) and the second point cloud data C are fused to obtain third point cloud data C'; the third point cloud data C' is taken as the marking source and marked based on the perception algorithm to obtain the third detection box BOX_C'. Because every data point in the first point cloud data A (or B) and the second point cloud data C was perceived from the same target object, after fusion every data point in the third point cloud data C' is likewise a data point on the target object.
For the same target object, the actual contour, size and so on are identical under the first and the second acquisition conditions, so in theory the first detection box BOX_A (or BOX_B) and the second detection box BOX_C should agree in direction and size; and, for a perception algorithm of fixed accuracy, how far each box's direction and size deviate from the target object's actual detection box reflects its marking accuracy.
However, the second point cloud data C obtained under the second acquisition condition is sparse, so the second detection box BOX_C marked by the perception algorithm differs greatly from the actual detection box, i.e. its marking accuracy is low. The first point cloud data A (or B) obtained under the first acquisition condition is dense, so the first detection box BOX_A (or BOX_B) marked by a perception algorithm of the same accuracy differs little from the actual detection box, i.e. its marking accuracy is high. The third point cloud data C' combines the first point cloud data A (or B) and the second point cloud data C and contains richer data points, so the third detection box BOX_C' marked from it by a perception algorithm of the same accuracy differs even less from the actual detection box, i.e. its marking accuracy is higher still. Therefore, among the first detection box BOX_A (or BOX_B), the second detection box BOX_C and the third detection box BOX_C', the direction and size of BOX_C' are closest to those of the target object's actual detection box.
It should be noted that, in the embodiment of the present application, the first point cloud data A (or B) may be data acquired by the radar in a single acquisition at a first distance s1 from the target object, or data acquired in several acquisitions at first distances s1, s2, ..., sn from the target object. The embodiment of the application does not limit the amount of first point cloud data A (or B) participating in the fusion; within a certain range, the more first point cloud data A (or B) participates in the fusion, the smaller the difference between the fused and marked third detection box BOX_C' and the actual detection box.
There are two ways to obtain the third point cloud data and the third detection box.
In one embodiment, S3 specifically includes: transforming the first point cloud data into a second reference coordinate system where the reference point cloud data is located by taking the second point cloud data as the reference point cloud data; in a second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and adjusting coordinate information of the first point cloud data according to a matching result to obtain fourth point cloud data; and fusing the fourth point cloud data and the reference point cloud data to obtain third point cloud data, and obtaining a third detection box of the third point cloud data based on a perception algorithm.
The second point cloud data C is taken as the reference point cloud data, and the first point cloud data A (or B) is first transformed into the second reference coordinate system where the second point cloud data C is located, unifying the coordinate systems. Then, in the second reference coordinate system, all data points of the second point cloud data C are kept as they are, the first point cloud data A (or B) is feature-matched with the second point cloud data C, and the coordinates of all data points in the first point cloud data A (or B) are adjusted according to the matching result to obtain fourth point cloud data, denoted A' (or B') in the following embodiments. The fourth point cloud data A' (or B') and the second point cloud data C are merged into a whole, which is the third point cloud data C'. All data points in the third point cloud data C' are then marked based on the perception algorithm to obtain the third detection box BOX_C'; a sketch of this mode follows.
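A minimal sketch of this first fusion mode, under the assumption that the feature matching has already been done (e.g. with an ICP step like the one sketched earlier) and using an axis-aligned box as a stand-in for the perception algorithm's output:

```python
import numpy as np

def first_fusion_mode(fourth_cloud: np.ndarray, second_cloud: np.ndarray):
    """fourth_cloud: A' (or B'), already transformed into C's coordinate
    system and coordinate-adjusted; second_cloud: C. Returns the merged
    third point cloud C' and an enclosing box standing in for BOX_C'."""
    third_cloud = np.vstack([fourth_cloud, second_cloud])      # C'
    # Stand-in for marking C' with the perception algorithm:
    box_min, box_max = third_cloud.min(axis=0), third_cloud.max(axis=0)
    return third_cloud, (box_min, box_max)                     # BOX_C'
```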
As shown in fig. 3 to 5, the same target object is circled on the right side of each figure, and the whole formed by the target object's point cloud data and detection box is shown in three views on the left side. Fig. 3 shows the second point cloud data C of the target object and the second detection box BOX_C obtained by directly marking it; fig. 4 shows the third point cloud data C' obtained by fusing the first point cloud data A (or B) with the second point cloud data C in the above manner, together with the second detection box BOX_C; and fig. 5 shows the third point cloud data C' and the third detection box BOX_C' obtained by marking it.
Comparing the three figures: for the same second detection box BOX_C, the second point cloud data C in fig. 3 lies entirely inside BOX_C, whereas only part of the third point cloud data C' in fig. 4 lies inside BOX_C, the rest lying outside it. Since all data points of the third point cloud data C' are points on the target object, the presence of points outside the second detection box BOX_C shows that the marked BOX_C differs from, and is smaller than, the target object's actual size. If the third point cloud data C' is instead marked based on the perception algorithm, the third detection box BOX_C' in fig. 5 encloses all of C', is larger, and is closer to the actual size and position of the target object.
In another embodiment, S3 specifically includes: marking a first detection box of the first point cloud data based on a perception algorithm; transforming the first point cloud data and the first detection box into a second reference coordinate system where the reference point cloud data is located, with the second point cloud data as the reference point cloud data; in the second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and adjusting the coordinate information of the first point cloud data and the pose of the first detection box according to the matching result to obtain fourth point cloud data and a fourth detection box; and fusing the fourth point cloud data and the second point cloud data to obtain third point cloud data, and adjusting the scaling of the fourth detection box based on the third point cloud data to obtain the third detection box of the third point cloud data.
First, the first detection box BOX_A (or BOX_B) of the first point cloud data A (or B) is obtained based on the perception algorithm; then, with the second point cloud data C as the reference point cloud data, both the first point cloud data A (or B) and the first detection box BOX_A (or BOX_B) are transformed into the second reference coordinate system where the second point cloud data C is located, unifying the coordinate systems. Next, all data points of the second point cloud data C are kept as they are, the first point cloud data A (or B) is feature-matched with the second point cloud data C, and the coordinates of all data points in the first point cloud data A (or B) are adjusted according to the matching result to obtain fourth point cloud data A' (or B'). At the same time, the pose of the first detection box BOX_A (or BOX_B) is adjusted synchronously according to the matching result, i.e. it is translated and rotated, to obtain a fourth detection box, denoted BOX_A' (or BOX_B') in the following embodiments; after the pose adjustment, all data points of the fourth point cloud data A' (or B') are contained in the fourth detection box BOX_A' (or BOX_B').
The fourth point cloud data A' (or B') and the second point cloud data C are merged into a whole, which is the third point cloud data C'. At this point, since the fourth detection box BOX_A' (or BOX_B') encloses only the data points of the fourth point cloud data A' (or B'), if some data points of the second point cloud data C lie outside BOX_A' (or BOX_B'), the fourth detection box is too small, and using it directly as the third detection box BOX_C' would make the marking inaccurate. Therefore the scaling of the fourth detection box BOX_A' (or BOX_B') is adjusted based on the third point cloud data C' so that the adjusted box encloses all data points of C', and the adjusted box is finally taken as the third detection box BOX_C'; a sketch of this adjustment follows.
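The scale adjustment can be pictured with the following sketch, which uses the axis-aligned simplification again (the patent's fourth detection box is pose-adjusted and therefore generally oriented; handling the oriented case would require working in the box's local frame):

```python
import numpy as np

def expand_box_to_enclose(box_min: np.ndarray, box_max: np.ndarray,
                          third_cloud: np.ndarray):
    """Grow the fourth detection box BOX_A' (or BOX_B') just enough to
    enclose every data point of the merged third point cloud C'; the
    result stands in for the third detection box BOX_C'."""
    new_min = np.minimum(box_min, third_cloud.min(axis=0))
    new_max = np.maximum(box_max, third_cloud.max(axis=0))
    return new_min, new_max
```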
As shown in fig. 7 to 9, the same target object is circled on the right side of each figure, and the whole formed by the target object's point cloud data and detection box is shown in three views on the left side. Fig. 7 shows the second point cloud data C of the target object and the second detection box BOX_C obtained by directly marking it; fig. 8 shows the third point cloud data C' obtained by fusing the first point cloud data A (or B) with the second point cloud data C in the above manner, together with the second detection box BOX_C; and fig. 9 shows the third point cloud data C' and the third detection box BOX_C'.
Comparing the three figures: for the same second detection box BOX_C, the second point cloud data C in fig. 7 lies entirely inside BOX_C, whereas only part of the third point cloud data C' in fig. 8 lies inside BOX_C, the rest lying outside it. Since all data points of the third point cloud data C' are points on the target object, the presence of points outside the second detection box BOX_C shows that the marked BOX_C differs from, and is smaller than, the target object's actual size. If the third point cloud data C' is instead marked based on the perception algorithm, the third detection box BOX_C' in fig. 9 encloses all of C', is larger, and is closer to the actual size and position of the target object.
In the above two schemes, the step of performing feature matching on the first point cloud data and the reference point cloud data and adjusting coordinate information of the first point cloud data according to a matching result specifically includes: in a second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and obtaining matching point pairs in the first point cloud data and the second point cloud data according to a matching result; and adjusting the coordinate information of all data points in the first point cloud data according to the coordinate difference of the matching point pair.
Registration and fusion require matching parts between the first point cloud data A (or B) and the second point cloud data C. These matching parts exist as matching point pairs, each pair consisting of a feature point d in the first point cloud data A (or B) and a feature point e in the second point cloud data C, where matching means that, after the first point cloud data A (or B) is registered with the second point cloud data C, feature point d and feature point e express the same attribute of the target object.
In general there are multiple matching point pairs between the first point cloud data A (or B) and the second point cloud data C. The matching point pairs are found through feature matching, and the coordinate conversion parameters between the first point cloud data A (or B) and the second point cloud data C are solved such that, after all data points of the first point cloud data A (or B) are coordinate-adjusted based on these parameters, the accumulated coordinate difference of feature points d and e over all matching point pairs is minimal, i.e. the matching degree is maximal. Feature matching can be performed with the ICP algorithm or another point cloud registration algorithm; all matching point pairs are determined from the matching result and the corresponding coordinate conversion parameters are solved, which may take the form of a matrix. The coordinate information of all data points in the first point cloud data A (or B) is adjusted based on the coordinate conversion parameters, and the adjusted first point cloud data A (or B) is finally fused with the second point cloud data C; the resulting whole is the third point cloud data C'. When there are two or more groups of first point cloud data A (or B), the above operation is performed once for each group.
Specifically, taking first point cloud data comprising A1 and A2 as an example, the coordinate system where the second point cloud data C is located is taken as the second reference coordinate system, and A1 and A2 are both transformed into it. A1 and C are then feature-matched in the second reference coordinate system, each matching point pair between them is found, the A1-to-C coordinate conversion parameters are solved from the combined coordinate differences of the matching point pairs, and the coordinates of all data points in A1 are adjusted, completing the registration and fusion of A1 and C. The same is then done for A2 and C. Combining the two fusion results gives the final third point cloud data C'. One concrete way to solve such coordinate conversion parameters from matched pairs is sketched below.
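As an assumed, not patent-specified, instance of this step, the Kabsch/Umeyama closed-form solution finds the rigid transform minimizing the accumulated coordinate difference over the matched point pairs:

```python
import numpy as np

def solve_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Given matched pairs (feature points d_i in A1, e_i in C) as
    N x 3 arrays, solve rotation R and translation t minimizing
    sum_i || R @ d_i + t - e_i ||^2."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t  # apply as: adjusted_points = points @ R.T + t
```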
S4: and marking the third detection box on the second point cloud data.
Since, among the first detection box BOX_A (or BOX_B), the second detection box BOX_C and the third detection box BOX_C', the direction and size of BOX_C' are closest to those of the target object's actual detection box, the third detection box BOX_C' can be marked on the second point cloud data C and used as the marking result of the second point cloud data C. As shown in fig. 6 and fig. 10, the third detection box BOX_C' obtained by each of the two fusion modes above is marked on the second point cloud data C.
It should be noted that the first point cloud data acquired in step S1 covers two schemes, A and B; because both are acquired under the first acquisition condition, both are richer than the second point cloud data C acquired under the second acquisition condition, so whether scheme A or scheme B is used, marking accuracy is improved to some degree.
In one embodiment, after S4 the method further includes: training, testing or verifying the perception algorithm, or another target algorithm, based on the second point cloud data and the third detection box. The second point cloud data C and the third detection box BOX_C' reflect how well the perception algorithm identifies and marks a distant target object; the perception algorithm can be trained, tested or verified on them, and its performance optimized based on the results, improving its identification and marking accuracy for distant target objects. The scenario is not limited to this: the second point cloud data C and the third detection box BOX_C' can also be used to train, test or verify another target algorithm. That target algorithm is a different object from the perception algorithm used in the above steps; the parameters and tasks of the two may or may not be the same, but the target algorithm likewise needs accurate point cloud data and corresponding detection boxes as the basis for its optimization. The application does not limit the target algorithm: any algorithm that needs the point cloud data and detection box of a distant target object can serve as the target algorithm, and training, testing or verifying it on the second point cloud data C and the third detection box BOX_C' likewise facilitates its performance optimization; a sketch of one such verification metric follows.
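One plausible verification step over the marked data, sketched under the same axis-aligned simplification used above (real detection boxes are oriented, so a production metric would compute oriented-box IoU), is to compare a box predicted by the algorithm under test against the marked BOX_C' via 3D intersection-over-union:

```python
import numpy as np

def iou_3d_axis_aligned(a_min, a_max, b_min, b_max) -> float:
    """IoU between two axis-aligned 3D boxes, e.g. a predicted box vs.
    the marked third detection box used as ground truth."""
    a_min, a_max = np.asarray(a_min), np.asarray(a_max)
    b_min, b_max = np.asarray(b_min), np.asarray(b_max)
    edges = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter = edges.prod()
    union = (a_max - a_min).prod() + (b_max - b_min).prod() - inter
    return float(inter / union) if union > 0 else 0.0
```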
In this embodiment, the first point cloud data obtained under the first acquisition condition is dense and rich in information, while the second point cloud data obtained under the second acquisition condition is sparse and poor in information; after the two are fused, the third point cloud data contains both and is therefore also rich in information. Compared with directly marking the sparse second point cloud data to obtain a second detection box, first obtaining the third detection box from the dense third point cloud data and then marking it on the second point cloud data effectively improves marking accuracy; and because the whole process is computed automatically by the perception algorithm, no manual marking is needed, which improves marking efficiency. That is, the present application achieves both marking accuracy and marking efficiency.
On the basis of the method described in the above embodiment, this embodiment will be further described from the viewpoint of a data processing apparatus, as shown in fig. 11, which includes:
a first acquisition module 10, configured to acquire first point cloud data acquired by a radar on a target object under a first acquisition condition, where the first acquisition condition includes that a first distance between the target object and the radar is smaller than a first threshold;
A second acquisition module 20, configured to acquire second point cloud data acquired by the radar on the target object under a second acquisition condition, where the second acquisition condition includes: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
the fusion module 30 is configured to fuse the first point cloud data and the second point cloud data, and obtain third point cloud data and a third detection box of the third point cloud data according to a fusion result and a sensing algorithm;
and a marking module 40, configured to mark the third detection box on the second point cloud data.
In one embodiment, the first acquisition module 10 comprises:
the first acquisition sub-module is used for acquiring first point cloud data acquired from a target angle by the radar on the target object;
or comprises:
the second acquisition sub-module is used for acquiring at least two groups of sub-point cloud data acquired by the radar from at least two different angles on the target object, and fusing the at least two groups of sub-point cloud data to acquire first point cloud data.
In one embodiment, the second acquisition submodule includes:
the transformation unit is used for determining reference sub-point cloud data from the at least two groups of sub-point cloud data and transforming other sub-point cloud data except the reference sub-point cloud data in the at least two groups of sub-point cloud data into a first reference coordinate system where the reference sub-point cloud data is located;
the first matching unit is used for performing feature matching on the other sub-point cloud data and the reference sub-point cloud data in the first reference coordinate system, and adjusting the coordinate information of the other sub-point cloud data according to a matching result;
and the fusion unit is used for fusing the adjusted other sub-point cloud data with the reference sub-point cloud data to obtain the first point cloud data.
In one embodiment, the fusion module 30 includes:
the first transformation submodule is used for transforming the first point cloud data into a second reference coordinate system where the reference point cloud data is located by taking the second point cloud data as the reference point cloud data;
the first matching sub-module is used for performing feature matching on the first point cloud data and the reference point cloud data in the second reference coordinate system, and adjusting coordinate information of the first point cloud data according to a matching result to obtain fourth point cloud data;
The first fusion sub-module is used for fusing the fourth point cloud data and the reference point cloud data to obtain third point cloud data, and obtaining a third detection box of the third point cloud data based on a perception algorithm.
In one embodiment, the fusion module 30 includes:
the marking sub-module is used for marking the first detection box of the first point cloud data based on a perception algorithm;
the second transformation submodule is used for transforming the first point cloud data and the first detection box into a second reference coordinate system where the reference point cloud data are located by taking the second point cloud data as reference point cloud data;
the second matching sub-module is used for carrying out feature matching on the first point cloud data and the reference point cloud data in the second reference coordinate system, and adjusting the coordinate information of the first point cloud data and the pose of the first detection box according to a matching result to obtain fourth point cloud data and a fourth detection box;
and the second fusion sub-module is used for fusing the fourth point cloud data and the second point cloud data to obtain third point cloud data, and adjusting the scaling of the fourth detection box based on the third point cloud data to obtain a third detection box of the third point cloud data.
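For the detection box handling specific to this second variant, the sketch below assumes an axis-aligned box stored as a (center, size) pair; the pose adjustment applies the 4x4 alignment transform T produced by feature matching, and the scale adjustment refits the box extents against the fused cloud. Both the box representation and the 1.2 search margin are illustrative assumptions.

import numpy as np

def transform_box_center(center, T):
    # Pose adjustment: apply the 4x4 alignment transform to the box centre.
    return (T @ np.append(center, 1.0))[:3]

def refit_box_scale(center, size, third_cloud, margin=1.2):
    # Scale adjustment: resize the fourth detection box so that it encloses
    # the fused points found in a slightly enlarged region around the centre.
    half = margin * np.asarray(size) / 2.0
    inside = np.all(np.abs(third_cloud - np.asarray(center)) <= half, axis=1)
    pts = third_cloud[inside]
    if len(pts) == 0:
        return np.asarray(size)  # no points to refit against; keep the old size
    return pts.max(axis=0) - pts.min(axis=0)  # extents of the third detection box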
In one embodiment, the first matching sub-module or the second matching sub-module comprises:
the second matching unit is used for performing feature matching on the first point cloud data and the reference point cloud data in the second reference coordinate system, and obtaining matching point pairs in the first point cloud data and the reference point cloud data according to a matching result;
and the adjusting unit is used for adjusting the coordinate information of all the data points in the first point cloud data according to the coordinate difference of the matching point pair.
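When the adjustment is rigid, computing it from the coordinate differences of the matched point pairs reduces to the classical Kabsch least-squares fit; a numpy sketch under that assumption follows.

import numpy as np

def rigid_fit_from_pairs(src_pts, ref_pts):
    # Least-squares rotation R and translation t mapping src_pts onto ref_pts,
    # where row i of src_pts and row i of ref_pts form one matching point pair.
    src_c, ref_c = src_pts.mean(axis=0), ref_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (ref_pts - ref_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ref_c - R @ src_c
    return R, t

def adjust_all_points(cloud, R, t):
    # Apply the fitted transform to every data point in the first point cloud.
    return cloud @ R.T + t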
In one embodiment, the data processing apparatus further comprises:
and the training module is used for training, testing or verifying the perception algorithm or another target algorithm based on the second point cloud data and the third detection box.
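As a hedged example of the training module's input, the (second point cloud, third detection box) pairs can be packaged directly as supervised samples; the field names below are illustrative, not prescribed by the patent.

def build_training_samples(pairs):
    # pairs: iterable of (sparse_second_cloud, third_detection_box) tuples.
    # Sparse far-range clouds labeled with boxes derived from dense data give
    # a perception model supervision it could not get from sparse clouds alone.
    return [{"points": cloud, "gt_box": box} for cloud, box in pairs]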
Compared with the prior art, in the data processing apparatus provided by the present application, the first point cloud data obtained under the first acquisition condition is denser and carries richer information, while the second point cloud data obtained under the second acquisition condition is sparser and carries less information. After the first point cloud data and the second point cloud data are fused, the resulting third point cloud data contains both, so its information is correspondingly richer. Therefore, compared with the scheme of directly labeling the sparse second point cloud data to obtain a second detection box, first obtaining the third detection box based on the dense third point cloud data and then marking the third detection box on the second point cloud data effectively improves labeling accuracy. Moreover, the whole process is computed automatically by the perception algorithm without manual labeling, which improves labeling efficiency. That is, the present application achieves both labeling accuracy and labeling efficiency.
Accordingly, the embodiment of the present application further provides an electronic device. As shown in fig. 12, the electronic device may include a radio frequency (RF) circuit 101, a memory 102 including one or more computer-readable storage media, an input unit 103, a display unit 104, a sensor 105, an audio circuit 106, a WiFi module 107, a processor 108 including one or more processing cores, and a power supply 109. Those skilled in the art will appreciate that the structure shown in fig. 12 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components. Wherein:
the radio frequency circuit 101 may be used to receive and transmit signals during the sending and receiving of information or during a call; in particular, after downlink information of a base station is received, it is handed over to the one or more processors 108 for processing, and uplink data is sent to the base station. The memory 102 may be used to store software programs and modules, and the processor 108 executes various functional applications and data processing by running the software programs and modules stored in the memory 102. The input unit 103 may be used to receive entered numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to client settings and function control.
The display unit 104 may be used to display information entered by the client or provided to the client, as well as various graphical client interfaces of the server, which may be composed of graphics, text, icons, video, and any combination thereof.
The electronic device may also include at least one sensor 105, such as a light sensor, a motion sensor, or another sensor. The audio circuit 106 includes a speaker and may provide an audio interface between the client and the electronic device.
WiFi is a short-range wireless transmission technology. Through the WiFi module 107, the electronic device can help the client send and receive e-mails, browse web pages, access streaming media, and the like, providing the client with wireless broadband Internet access. Although fig. 12 shows the WiFi module 107, it is understood that the module is not an essential part of the electronic device and may be omitted as required without changing the essence of the application.
The processor 108 is the control center of the electronic device. It uses various interfaces and lines to connect the parts of the entire device, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 102 and invoking the data stored in the memory 102, thereby monitoring the device as a whole.
The electronic device further comprises a power supply 109 (e.g. a battery) for powering the components. Preferably, the power supply is logically connected to the processor 108 through a power management system, so that charging, discharging, and power consumption management are handled by the power management system.
Although not shown, the electronic device may further include a camera, a Bluetooth module, and the like, which are not described herein. Specifically, in this embodiment, the processor 108 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 102 according to the following instructions, and the processor 108 runs the application programs stored in the memory 102 to implement the following functions:
acquiring first point cloud data acquired by a radar on a target object under a first acquisition condition, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold value;
acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, wherein the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm;
and marking the third detection box on the second point cloud data.
In the foregoing embodiments, each embodiment is described with its own emphasis; for any portion of an embodiment that is not described in detail, reference may be made to the detailed description above, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods of the above embodiments may be completed by instructions, or by instructions controlling the associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, embodiments of the present application provide a computer readable storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the following functions:
acquiring first point cloud data acquired by a radar on a target object under a first acquisition condition, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold value;
acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, wherein the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm;
and marking the third detection box on the second point cloud data.
The foregoing has described in detail the data processing method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the present application. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the technical solutions and the core idea of the present application. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of data processing, comprising:
acquiring first point cloud data acquired by a radar on a target object under a first acquisition condition, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold value;
acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, wherein the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm;
and marking the third detection box on the second point cloud data.
2. The data processing method according to claim 1, wherein the step of acquiring first point cloud data acquired by the radar on the target object under the first acquisition condition comprises:
acquiring first point cloud data acquired by a radar from a target angle on the target object;
or comprises:
and acquiring at least two groups of sub-point cloud data acquired by the radar from at least two different angles for the target object, and fusing the at least two groups of sub-point cloud data to obtain first point cloud data.
3. The data processing method according to claim 2, wherein the step of acquiring at least two sets of sub-point cloud data acquired by the radar from at least two different angles for the target object, and fusing the at least two sets of sub-point cloud data to obtain the first point cloud data comprises:
determining reference sub-point cloud data from the at least two groups of sub-point cloud data, and transforming other sub-point cloud data except the reference sub-point cloud data in the at least two groups of sub-point cloud data into a first reference coordinate system in which the reference sub-point cloud data is located;
in the first reference coordinate system, performing feature matching on the other sub-point cloud data and the reference sub-point cloud data, and adjusting coordinate information of the other sub-point cloud data according to a matching result;
and fusing the adjusted other sub-point cloud data with the reference sub-point cloud data to obtain the first point cloud data.
4. The data processing method according to claim 1, wherein the step of fusing the first point cloud data and the second point cloud data to obtain third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm comprises:
transforming the first point cloud data into a second reference coordinate system where the reference point cloud data is located by taking the second point cloud data as reference point cloud data;
in the second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and adjusting coordinate information of the first point cloud data according to a matching result to obtain fourth point cloud data;
and fusing the fourth point cloud data and the reference point cloud data to obtain third point cloud data, and obtaining a third detection box of the third point cloud data based on a perception algorithm.
5. The data processing method according to claim 1, wherein the step of fusing the first point cloud data and the second point cloud data to obtain third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm comprises:
marking a first detection box of the first point cloud data based on a perception algorithm;
transforming the first point cloud data and the first detection box into a second reference coordinate system where the reference point cloud data are located by taking the second point cloud data as reference point cloud data;
in the second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and adjusting coordinate information of the first point cloud data and the pose of the first detection box according to a matching result to obtain fourth point cloud data and a fourth detection box;
and fusing the fourth point cloud data and the second point cloud data to obtain third point cloud data, and adjusting the scaling of the fourth detection box based on the third point cloud data to obtain a third detection box of the third point cloud data.
6. The data processing method according to claim 4 or 5, characterized in that in the second reference coordinate system, the first point cloud data and the reference point cloud data are subjected to feature matching, and the coordinate information of the first point cloud data is adjusted according to the matching result, comprising:
in the second reference coordinate system, performing feature matching on the first point cloud data and the reference point cloud data, and obtaining matching point pairs in the first point cloud data and the reference point cloud data according to a matching result;
and adjusting the coordinate information of all data points in the first point cloud data according to the coordinate difference of the matching point pair.
7. The data processing method according to claim 1, wherein after the step of marking the third detection box on the second point cloud data, the method comprises:
training, testing or verifying the perception algorithm or another target algorithm based on the second point cloud data and the third detection box.
8. A data processing apparatus, comprising:
the first acquisition module is used for acquiring first point cloud data acquired by a radar on a target object under a first acquisition condition, wherein the first acquisition condition comprises that a first distance between the target object and the radar is smaller than a first threshold value;
the second acquisition module is used for acquiring second point cloud data acquired by the radar on the target object under a second acquisition condition, and the second acquisition condition comprises: the number of data points in the second point cloud data is not greater than a second threshold, and/or: the second distance between the target object and the radar is larger than a third threshold value, and the third threshold value is larger than or equal to the first threshold value;
the fusion module is used for fusing the first point cloud data and the second point cloud data, and obtaining third point cloud data and a third detection box of the third point cloud data according to a fusion result and a perception algorithm;
and the marking module is used for marking the third detection box on the second point cloud data.
9. An electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to execute the application program in the memory to perform the steps in the data processing method according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which is executed by a processor to implement the steps in the data processing method of any of claims 1 to 7.
CN202310293893.2A 2023-03-23 2023-03-23 Data processing method, device, electronic equipment and storage medium Pending CN116338604A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310293893.2A CN116338604A (en) 2023-03-23 2023-03-23 Data processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310293893.2A CN116338604A (en) 2023-03-23 2023-03-23 Data processing method, device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116338604A true CN116338604A (en) 2023-06-27

Family

ID=86881792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310293893.2A Pending CN116338604A (en) 2023-03-23 2023-03-23 Data processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116338604A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination