CN114359386A - Point cloud data processing method, processing device, storage medium and processor

Point cloud data processing method, processing device, storage medium and processor

Info

Publication number
CN114359386A
Authority
CN
China
Prior art keywords
point cloud
cloud data
target object
detection model
laser radar
Prior art date
Legal status
Pending
Application number
CN202111676534.2A
Other languages
Chinese (zh)
Inventor
黄佳伟
陈博
王宇
张勇
张林灿
郭昌野
Current Assignee
FAW Group Corp
Original Assignee
FAW Group Corp
Priority date
Filing date
Publication date
Application filed by FAW Group Corp filed Critical FAW Group Corp
Priority to CN202111676534.2A
Publication of CN114359386A

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a point cloud data processing method, a processing device, a storage medium and a processor. The method comprises the following steps: acquiring point cloud data and position data, wherein the point cloud data is raw point cloud data collected by a laser radar and reflected back by a target object, and the position data comprises at least one of the spatial position of the target device on which the laser radar is mounted and the spatial position of the laser radar; obtaining a feature map of the target object according to the point cloud data and the position data; detecting the feature map with a detection head of a convolutional neural network to obtain a contour map of the target object, and determining the tracking ID of the target object from the contour map; and displaying the image information of the target object on a terminal device according to the point cloud data and the tracking ID of the target object. The invention solves the technical problem of the poor perception effect of existing teaching platforms.

Description

Point cloud data processing method, processing device, storage medium and processor
Technical Field
The invention relates to the technical field of point cloud data processing, and in particular to a point cloud data processing method, a processing device, a storage medium and a processor.
Background
The vehicle-mounted laser radar, an important component of the environment perception system of intelligent vehicles, has just entered its first year of mass production, so more and more automobile manufacturers have begun to pre-install it in hardware and then provide and iterate the perception function over the air (OTA). Although some perception algorithm teaching platforms based on personal computers already exist, they differ greatly from actual in-vehicle deployment and do not take into account the low-cost, low-power-consumption, high-stability and small-size requirements of vehicle-mounted electronics, so the development of the perception algorithm is divorced from the real scenario. In the future, as automobile manufacturers increasingly roll out vehicle-mounted laser radar perception functions, the demand for perception algorithm developers will grow, so a perception algorithm teaching platform for the vehicle-mounted laser radar is urgently needed to let developers and students become familiar with and master the perception algorithm development process and the development of each functional module within it.
At present there is no standardized set of teaching equipment and systems for perception algorithm development on the vehicle-mounted laser radar; what exists is either a platform built on a personal computer or a black-box platform, which neither considers the actual vehicle-mounted electronics scenario nor opens up, for developers to study, all the functional modules covering the whole perception algorithm development process.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a point cloud data processing method, a processing device, a storage medium and a processor, so as to at least solve the technical problem of the poor perception effect of existing teaching platforms.
According to an aspect of the embodiments of the present invention, there is provided a method for processing point cloud data, including: acquiring point cloud data and position data, wherein the point cloud data is raw point cloud data collected by a laser radar and reflected back by a target object, and the position data comprises at least one of the spatial position of the target device on which the laser radar is mounted and the spatial position of the laser radar; obtaining a feature map of the target object according to the point cloud data and the position data; detecting the feature map with a detection head of a convolutional neural network to obtain a contour map of the target object, and determining the tracking ID of the target object from the contour map of the target object; and displaying the image information of the target object on a terminal device according to the point cloud data and the tracking ID of the target object.
Optionally, before the point cloud data and the position data are acquired, the method includes: acquiring a first calibration matrix of the laser radar with the ground as the reference frame; acquiring a second calibration matrix of the laser radar with the target device as the reference frame; and determining the spatial position of the laser radar according to the first calibration matrix and the second calibration matrix.
Optionally, the method for processing point cloud data includes: receiving a first update request, and, according to the first update request, re-determining the first calibration matrix with the ground as the reference frame and the second calibration matrix with the target device as the reference frame; and re-determining the spatial position of the laser radar according to the re-determined first calibration matrix and the re-determined second calibration matrix.
Optionally, obtaining a feature map of the target object according to the point cloud data and the position data includes: obtaining a detection model, wherein the detection model is used for detecting the target object and includes at least one of the following: a target detection model and a target tracking model; and performing model training and quantization training on the detection model to obtain an image processing detection model, and parsing the point cloud data and the position data with the image processing detection model to obtain the feature map of the target object.
Optionally, training the detection model to obtain an image processing detection model includes: training the detection model to obtain an image processing detection model, converting the format of the image processing detection model into a preset format, and storing the image processing detection model with the preset format to a target position.
Optionally, the method for processing point cloud data includes: receiving a second update request, and re-acquiring the detection model according to the second update request; and performing model training and quantization training on the re-acquired detection model to obtain a new image processing detection model.
According to another aspect of the embodiments of the present invention, there is also provided a processing apparatus for point cloud data, including: a first acquisition unit configured to acquire point cloud data and position data, wherein the point cloud data is raw point cloud data collected by a laser radar and reflected back by a target object, and the position data includes at least one of the spatial position of the target device on which the laser radar is mounted and the spatial position of the laser radar; a second acquisition unit configured to obtain a feature map of the target object from the point cloud data and the position data; a first determining unit configured to detect the feature map with a detection head of a convolutional neural network to obtain a contour map of the target object and to determine the tracking ID of the target object from the contour map; and a display unit configured to display the image information of the target object on a terminal device according to the point cloud data and the tracking ID of the target object.
Optionally, the processing apparatus includes: a third acquisition unit configured to acquire a first calibration matrix of the laser radar; a fourth acquisition unit configured to acquire a second calibration matrix; and a second determining unit configured to determine the spatial position of the laser radar according to the first calibration matrix and the second calibration matrix.
According to another aspect of the embodiments of the present invention, a nonvolatile storage medium is further provided, where the nonvolatile storage medium includes a stored program, and when the program runs, a device in which the nonvolatile storage medium is located is controlled to execute the processing method of the point cloud data.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes the method for processing point cloud data.
In the embodiment of the invention, the feature map of the target object is obtained by acquiring the point cloud data and the position data, the feature map is detected with a detection head of a convolutional neural network to obtain the contour map of the target object, the tracking ID of the target object is determined from the contour map, and the image information of the target object is displayed on the terminal device according to the point cloud data and the tracking ID of the target object. In connection with the actual scene, the user can clearly and intuitively perceive the process of detecting and tracking the target object. With this method, the user can understand the perception algorithm more quickly and thoroughly, which solves the technical problem of the poor perception effect of the teaching platform.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of a first embodiment of an alternative method of processing point cloud data in accordance with the present invention;
FIG. 2 is a flow chart of a second embodiment of an alternative method of processing point cloud data in accordance with the present invention;
FIG. 3 is a flow chart of a third embodiment of an alternative method of processing point cloud data in accordance with the present invention;
FIG. 4 is a block diagram of an alternative apparatus for processing point cloud data according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating an embodiment of a device for processing point cloud data according to the present invention;
FIG. 6 is a block flow diagram of a first embodiment of a processing apparatus of point cloud data according to the present invention;
FIG. 7 is a block flow diagram of a second embodiment of a processing apparatus of point cloud data according to the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with an embodiment of the present invention, there is provided a method embodiment of a method for processing point cloud data, it is noted that the steps illustrated in the flowchart of the figure may be performed in a computer system such as a set of computer executable instructions, and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than here.
Fig. 1 is a flowchart of a first embodiment of an alternative method for processing point cloud data according to the present application. As shown in fig. 1, the method includes the following steps:
Step S102, acquiring point cloud data and position data, wherein the point cloud data is raw point cloud data collected by a laser radar and reflected back by a target object, and the position data comprises at least one of the spatial position of the target device on which the laser radar is mounted and the spatial position of the laser radar;
Step S104, obtaining a feature map of the target object according to the point cloud data and the position data;
Step S106, detecting the feature map with a detection head of a convolutional neural network to obtain a contour map of the target object, and determining the tracking ID of the target object according to the contour map of the target object;
Step S108, displaying the image information of the target object on the terminal device according to the point cloud data and the tracking ID of the target object.
Through the above steps, the point cloud data and the position data are acquired to obtain the feature map of the target object, the feature map is detected with the detection head of the convolutional neural network to obtain the contour map of the target object, the tracking ID of the target object is determined from the contour map, and the image information of the target object is displayed on the terminal device according to the point cloud data and the tracking ID. In connection with the actual scene, the user can clearly and intuitively perceive the process of detecting and tracking the target object, understand the perception algorithm more quickly and thoroughly, and the technical problem of the poor perception effect of the teaching platform is solved.
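Read as a whole, steps S102 to S108 form a small processing pipeline. The sketch below only illustrates how the stages could be chained; every name in it (the lidar/pose readers, detector, tracker and display objects) is a placeholder and not the patent's implementation.

```python
def process_frame(lidar, pose_source, detector, tracker, display):
    """One pass of the S102-S108 flow; all objects are assumed helpers."""
    # S102: raw points reflected back by target objects, plus the position data
    points = lidar.read_frame()          # e.g. an (N, 4) array of x, y, z, intensity
    position = pose_source.read_pose()   # spatial position of the vehicle and/or lidar

    # S104: turn point cloud + position data into a feature map of the scene
    feature_map = detector.build_feature_map(points, position)

    # S106: detection head -> contour maps (3D boxes), then persistent tracking IDs
    contours = detector.detect(feature_map)
    tracked_objects = tracker.update(contours)

    # S108: show the point cloud and per-object image information on the terminal
    display.render(points, tracked_objects)
    return tracked_objects
```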
Fig. 2 is a flowchart of a second embodiment of an optional method for processing point cloud data according to the present application. As shown in fig. 2, before the point cloud data and the position data are acquired, the method includes: Step S201, acquiring a first calibration matrix of the laser radar with the ground as the reference frame; Step S202, acquiring a second calibration matrix of the laser radar with the target device as the reference frame; and Step S203, determining the spatial position of the laser radar according to the first calibration matrix and the second calibration matrix. Obtaining the first and second calibration matrices in two different reference frames to determine the spatial position of the laser radar improves the accuracy of the result and facilitates the subsequent acquisition of accurate position data.
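Assuming both calibration matrices are expressed as 4x4 homogeneous transforms, one way to combine them into the laser radar's spatial position is sketched below; the matrix names and the composition shown are illustrative, since the patent does not give the exact formulation.

```python
import numpy as np

def lidar_pose_from_calibrations(T_lidar_to_ground, T_lidar_to_vehicle):
    """Combine the two calibration matrices described in steps S201-S203.

    T_lidar_to_ground  : first calibration matrix (ground as reference frame)
    T_lidar_to_vehicle : second calibration matrix (target device as reference frame)
    Both are assumed to be 4x4 homogeneous transforms mapping lidar coordinates
    into the respective reference frame.
    """
    # Vehicle pose w.r.t. the ground: go vehicle -> lidar -> ground
    T_vehicle_to_ground = T_lidar_to_ground @ np.linalg.inv(T_lidar_to_vehicle)
    # The lidar's spatial position is then fixed by its pose in both frames
    return T_lidar_to_ground, T_vehicle_to_ground
```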
Fig. 3 is a flowchart of a third embodiment of an optional method for processing point cloud data according to the present application. As shown in fig. 3, the method includes: Step S301, receiving a first update request and, according to the first update request, re-determining the first calibration matrix with the ground as the reference frame and the second calibration matrix with the target device as the reference frame; and Step S302, re-determining the spatial position of the laser radar according to the re-determined first and second calibration matrices. In this embodiment, the first update request may include an updated calibration matrix, target detection algorithm parameters, target tracking algorithm parameters, operating configuration parameters and the like. This further improves the accuracy of the point cloud data that the laser radar acquires for the target object, and thus the precision with which the target object is monitored.
Optionally, obtaining the feature map of the target object according to the point cloud data and the position data includes: obtaining a detection model for detecting the target object, wherein the detection model includes at least one of a target detection model and a target tracking model; performing model training and quantization training on the detection model to obtain an image processing detection model; and parsing the point cloud data and the position data with the image processing detection model to obtain the feature map of the target object. Specifically, the image processing detection model can access and parse the point cloud and position data with support for multiple model types, provides loading, preprocessing, post-processing and similar functions for the detection model with support for CenterPoint and PointPillars, and realizes target object tracking on the detection results with ab3d support, so that the point cloud data and the position data can be parsed to obtain the feature map of the target object.
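The platform relies on trained detectors (CenterPoint, PointPillars) to turn the parsed data into a feature map. As a much simpler, hand-written stand-in, the sketch below rasterizes a point cloud into a bird's-eye-view grid; the detection range, cell size and channel choice are assumptions, and the points are assumed to have already been transformed into the vehicle frame using the position data.

```python
import numpy as np

def bev_feature_map(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), cell=0.2):
    """Rasterize an (N, 4) point cloud (x, y, z, intensity) into a 3-channel
    bird's-eye-view feature map: point count, max height, mean intensity."""
    x, y, z, intensity = points.T
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z, intensity = x[keep], y[keep], z[keep], intensity[keep]

    w = int(round((x_range[1] - x_range[0]) / cell))
    h = int(round((y_range[1] - y_range[0]) / cell))
    cols = np.clip(((x - x_range[0]) / cell).astype(np.int64), 0, w - 1)
    rows = np.clip(((y - y_range[0]) / cell).astype(np.int64), 0, h - 1)

    feat = np.zeros((3, h, w), dtype=np.float32)
    np.add.at(feat[0], (rows, cols), 1.0)        # channel 0: points per cell
    np.maximum.at(feat[1], (rows, cols), z)      # channel 1: max height per cell (0 where empty)
    np.add.at(feat[2], (rows, cols), intensity)  # channel 2: summed intensity ...
    feat[2] = np.divide(feat[2], feat[0], out=np.zeros_like(feat[2]),
                        where=feat[0] > 0)       # ... turned into mean intensity
    return feat
```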
Optionally, training the detection model to obtain an image processing detection model includes: training the detection model to obtain the image processing detection model, converting the format of the image processing detection model into a preset format, and storing the image processing detection model in the preset format to a target position. The training of the detection model can be carried out on a personal computer, and the preset format may be an AI chip binary file.
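The patent converts the trained model into an AI-chip binary but does not name the toolchain, so the sketch below only shows the common first step of freezing a PyTorch model to ONNX; the vendor compiler that would turn this into the deployable binary is not shown, and the input shape is an assumption matching the BEV feature map sketched above.

```python
import torch

def export_detection_model(model, output_path="detector.onnx"):
    """Freeze a trained PyTorch detection model into a fixed interchange format.

    A vendor-specific tool (not shown) would then convert the exported file
    into the AI chip binary stored on the core controller.
    """
    model.eval()
    dummy_input = torch.zeros(1, 3, 400, 352)   # assumed BEV input: 3 x 400 x 352
    torch.onnx.export(model, dummy_input, output_path,
                      input_names=["feature_map"],
                      output_names=["detections"],
                      opset_version=11)
    return output_path
```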
Optionally, the point cloud data processing method includes receiving a second update request, re-acquiring the detection model according to the second update request, and performing model training and quantization training on the re-acquired detection model to obtain a new image processing detection model. Specifically, the second update request includes updating the calibration matrix, target detection algorithm parameters, target tracking algorithm parameters, operating configuration parameters and the like in the image processing detection model. The re-acquired detection model may be a trained CenterPoint or PointPillars model together with its training code, which supports network modification and retraining.
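To illustrate how a second update request might drive re-acquisition and retraining, the sketch below reuses the export helper from the previous example; the request field names and the trainer interface are assumptions, not the patent's schema.

```python
def handle_second_update_request(request, trainer):
    """Re-acquire, optionally retrain, quantize and re-export the detection model."""
    model_name = request.get("model", "centerpoint")              # or "pointpillars"
    model = trainer.load_pretrained(model_name)                   # re-acquire the model

    if "train_config" in request:                                 # optional network
        model = trainer.retrain(model, request["train_config"])   # modification/retraining

    model = trainer.quantize(model)                               # quantization training
    return export_detection_model(model, request.get("output", "detector.onnx"))
```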
According to another aspect of the embodiments of the present invention, there is also provided a processing apparatus for point cloud data, as shown in fig. 4, which is a block diagram of an optional processing apparatus for point cloud data according to an embodiment of the present invention, including: a first acquisition unit 40, a second acquisition unit 41, a first determining unit 42 and a display unit 43. The first acquisition unit 40 is configured to acquire point cloud data and position data, wherein the point cloud data is raw point cloud data collected by a laser radar and reflected back by a target object, and the position data includes at least one of the spatial position of the target device on which the laser radar is mounted and the spatial position of the laser radar. The second acquisition unit 41 is configured to obtain a feature map of the target object from the point cloud data and the position data. The first determining unit 42 is configured to detect the feature map with a detection head of a convolutional neural network to obtain a contour map of the target object and to determine the tracking ID of the target object from the contour map. The display unit 43 is configured to display the image information of the target object on a terminal device according to the point cloud data and the tracking ID of the target object. The terminal device may be a smart tablet, an electronic display, a mobile phone or another similar terminal.
In the embodiment of the invention, the processing apparatus for point cloud data obtains the feature map of the target object by acquiring the point cloud data and the position data, detects the feature map with the detection head of the convolutional neural network to obtain the contour map of the target object, determines the tracking ID of the target object from the contour map, and displays the image information of the target object on the terminal device according to the point cloud data and the tracking ID of the target object. In connection with the actual scene, the user can clearly and intuitively perceive the process of detecting and tracking the target object. With this apparatus, the user can understand the perception algorithm more quickly and thoroughly, which solves the technical problem of the poor perception effect of the teaching platform.
Optionally, as shown in fig. 4, the processing apparatus further includes: a third acquisition unit 44 configured to acquire a first calibration matrix of the laser radar, a fourth acquisition unit 45 configured to acquire a second calibration matrix, and a second determining unit 46 configured to determine the spatial position of the laser radar according to the first calibration matrix and the second calibration matrix. In this way the two calibration matrices of the laser radar can be conveniently obtained, which improves the reliability of the determined spatial position of the laser radar.
According to one embodiment of the present application, the processing apparatus for point cloud data may be a vehicle-mounted laser radar perception algorithm teaching device (hereinafter referred to as the teaching device). The teaching device includes a core controller, a vehicle-mounted laser radar, a display device and an inertial navigation device; the core controller carries the first acquisition unit 40, the second acquisition unit 41 and the first determining unit 42, and the display device carries the display unit 43. The teaching device can be deployed on a vehicle or on an indoor workbench. In addition, the teaching device runs a perception algorithm teaching system whose operating modes are divided into an online mode and an offline mode.
As shown in fig. 6, the online mode includes the steps of:
Firstly, installing the teaching device:
1. Fix the vehicle-mounted laser radar horizontally on the vehicle roof with a suction-cup bracket.
2. Fix the inertial navigation device horizontally in the trunk.
3. Fix the core controller in a ventilated position in the trunk.
4. Fix the display device on a seat where it is convenient for the user to view.
5. Connect the devices with cables.
Secondly, power-on, calibration and registration:
1. Calculate the calibration matrix of the vehicle-mounted laser radar with the ground as the reference (a minimal plane-fitting sketch follows these steps).
2. Calculate the calibration matrix of the vehicle-mounted laser radar with the inertial navigation device as the reference.
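The patent does not disclose how the ground-referenced calibration matrix is computed. One common approach, sketched below purely as an assumption, is to fit a plane to points on the ground and build the transform that levels that plane; this recovers roll, pitch and mounting height, while yaw is not observable from a plane alone.

```python
import numpy as np

def ground_calibration_matrix(ground_points):
    """Estimate a 4x4 lidar-to-ground calibration from (N, 3) ground points."""
    centroid = ground_points.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular value
    _, _, vh = np.linalg.svd(ground_points - centroid)
    normal = vh[-1]
    if normal[2] < 0:                                   # make the normal point upward
        normal = -normal

    # Rotation mapping the plane normal onto the z axis (Rodrigues form, valid for c != -1)
    z_axis = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, z_axis)
    c = float(np.dot(normal, z_axis))
    skew = np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])
    R = np.eye(3) + skew + skew @ skew / (1.0 + c)

    T = np.eye(4)
    T[:3, :3] = R
    T[2, 3] = -float(R[2] @ centroid)                   # put the ground plane at z = 0
    return T
```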
Thirdly, importing operating files:
1. Import the detection model into the core controller.
2. The detection model is trained and quantization-trained on a personal computer and then converted into an AI chip binary file.
3. The trained CenterPoint and PointPillars models and their training code are provided to support network modification and retraining.
Import the executable file into the core controller:
1. Access and parsing of the point cloud and positioning data, with support for multiple types.
2. Loading, preprocessing, post-processing and similar functions of the detection model, with support for CenterPoint and PointPillars.
3. Target object tracking on the detection results, with ab3d support (a simplified ID-assignment sketch follows this list).
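The executable provides ab3d (AB3DMOT-style) tracking of the detection results. The class below is a heavily reduced stand-in that only illustrates how persistent tracking IDs can be assigned by greedy nearest-centroid matching between frames; the 2 m distance gate is an assumption.

```python
import numpy as np

class SimpleTracker:
    """Assign persistent tracking IDs by greedy nearest-centroid matching."""

    def __init__(self, max_distance=2.0):
        self.max_distance = max_distance   # gating threshold between frames, in metres
        self.tracks = {}                   # tracking ID -> last seen centroid
        self.next_id = 0

    def update(self, centroids):
        """centroids: iterable of (x, y, z) detection centres for the current frame."""
        assigned, used = [], set()
        for c in centroids:
            best_id, best_dist = None, self.max_distance
            for tid, prev in self.tracks.items():
                if tid in used:
                    continue
                d = float(np.linalg.norm(np.asarray(c) - np.asarray(prev)))
                if d < best_dist:
                    best_id, best_dist = tid, d
            if best_id is None:            # no existing track is close enough: new ID
                best_id, self.next_id = self.next_id, self.next_id + 1
            used.add(best_id)
            self.tracks[best_id] = c
            assigned.append((best_id, c))
        return assigned
```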
Fourthly, updating the configuration file:
1. and updating a calibration matrix, target object detection algorithm parameters, target object tracking algorithm parameters and operation configuration parameters in the core controller.
Fifthly, visualization of perception results:
the results of target object detection and target object tracking are displayed (as shown in fig. 5).
As shown in fig. 7, the offline mode includes the following steps:
Importing raw data:
Firstly, power on the core controller and import the recorded point clouds and position data into the core controller (a loading sketch, under an assumed file format, follows these steps).
The second, third and fourth steps are the same as the third, fourth and fifth steps of the online mode.
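A small sketch of importing recorded point cloud frames for the offline mode; the KITTI-style .bin layout (four float32 values per point: x, y, z, intensity) is an assumption, since the patent does not specify the recording format.

```python
import glob
import numpy as np

def load_recorded_frames(data_dir):
    """Yield (file path, (N, 4) point array) for each recorded frame, in order."""
    for path in sorted(glob.glob(f"{data_dir}/*.bin")):
        points = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
        yield path, points
```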
By adopting the vehicle-mounted laser radar perception algorithm teaching device of this embodiment, the low-cost, low-power-consumption, high-stability and small-size requirements of vehicle-mounted electronics can be met, and every functional module across the whole development process of the vehicle-mounted laser radar perception algorithm is opened up, so that developers and students can become familiar with and master the perception algorithm development process and the knowledge points within it.
According to another specific embodiment of the present application, a nonvolatile storage medium is further provided, where the nonvolatile storage medium includes a stored program, and when the program runs, a device in which the nonvolatile storage medium is located is controlled to execute the steps of the method for processing point cloud data in the foregoing embodiments.
According to another specific embodiment of the present application, there is also provided a processor, configured to execute a program, where the program executes the steps of the method for processing point cloud data in the foregoing embodiments.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit may be a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or may not be executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (10)

1. A method for processing point cloud data is characterized by comprising the following steps:
acquiring point cloud data and position data, wherein the point cloud data is original point cloud data which is acquired by a laser radar and is reflected back through a target object, and the position data comprises at least one of the following data: a spatial position of a target device on which the lidar is mounted, a spatial position of the lidar;
obtaining a feature map of the target object according to the point cloud data and the position data;
detecting the feature map by using a detection head of a convolutional neural network to obtain a contour map of the target object, and determining a tracking ID of the target object according to the contour map of the target object;
and displaying the image information of the target object on terminal equipment according to the point cloud data and the tracking ID of the target object.
2. The method for processing point cloud data according to claim 1, wherein before acquiring the point cloud data and the position data, the method comprises:
taking the ground as a reference system, and acquiring a first calibration matrix of the laser radar;
acquiring a second calibration matrix of the laser radar by taking the target equipment as a reference system;
and determining the space position of the laser radar according to the first calibration matrix and the second calibration matrix.
3. The method for processing point cloud data according to claim 2, comprising:
receiving a first updating request, and according to the first updating request, re-determining the first calibration matrix when the laser radar takes the ground as a reference frame, and re-determining the second calibration matrix when the laser radar takes the target equipment as a reference frame;
and re-determining the space position of the laser radar according to the re-determined first calibration matrix and the re-determined second calibration matrix.
4. The method for processing point cloud data according to claim 1, wherein obtaining the feature map of the target object according to the point cloud data and the position data comprises:
obtaining a detection model, wherein the detection model is used for detecting a target object, and the detection model includes at least one of the following: a target detection model and a target tracking model;
and carrying out model training and quantization training on the detection model to obtain an image processing detection model, and analyzing the point cloud data and the position data by adopting the image processing detection model to obtain the feature map of the target object.
5. The method for processing point cloud data according to claim 4, wherein training the detection model to obtain an image processing detection model comprises:
training the detection model to obtain an image processing detection model, converting the format of the image processing detection model into a preset format, and storing the image processing detection model with the preset format to a target position.
6. The method for processing point cloud data according to claim 4, comprising:
receiving a second updating request, and re-acquiring the detection model according to the second updating request;
and performing model training and quantization training according to the re-acquired detection model to obtain a new image processing detection model.
7. An apparatus for processing point cloud data, comprising:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring point cloud data and position data, the point cloud data is original point cloud data collected by a laser radar and reflected by a target object, and the position data comprises at least one of the following data: a spatial position of a target device on which the lidar is mounted, a spatial position of the lidar;
the second acquisition unit is used for acquiring a characteristic diagram of the target object according to the point cloud data and the position data;
the first determining unit detects the characteristic graph by using a detection head of a convolutional neural network to obtain a contour map of the target object, and determines the tracking ID of the target object according to the contour map of the target object;
and the display unit is used for displaying the image information of the target object on the terminal equipment according to the point cloud data and the tracking ID of the target object.
8. The processing apparatus according to claim 7, comprising:
the third acquisition unit is used for acquiring a first calibration matrix of the laser radar;
a fourth obtaining unit, configured to obtain a second calibration matrix;
and the second determining unit determines the spatial position of the laser radar according to the first calibration matrix and the second calibration matrix.
9. A non-volatile storage medium, comprising a stored program, wherein when the program runs, a device where the non-volatile storage medium is located is controlled to execute the processing method of point cloud data according to any one of claims 1 to 6.
10. A processor, characterized in that the processor is configured to execute a program, wherein the program executes a method for processing point cloud data according to any one of claims 1 to 6.
CN202111676534.2A 2021-12-31 2021-12-31 Point cloud data processing method, processing device, storage medium and processor Pending CN114359386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111676534.2A CN114359386A (en) 2021-12-31 2021-12-31 Point cloud data processing method, processing device, storage medium and processor

Publications (1)

Publication Number Publication Date
CN114359386A true CN114359386A (en) 2022-04-15

Family

ID=81105288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111676534.2A Pending CN114359386A (en) 2021-12-31 2021-12-31 Point cloud data processing method, processing device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN114359386A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115032615A (en) * 2022-05-31 2022-09-09 中国第一汽车股份有限公司 Laser radar calibration point determining method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination