CN115830588B - Target detection method, system, storage medium and device based on point cloud

Info

Publication number: CN115830588B (application publication: CN115830588A)
Application number: CN202310123443.9A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: point cloud, cloud data, training, feature vector, module
Inventors: 郑米, 郭振华
Assignee (original and current): Tianyi Transportation Technology Co ltd
Filing date / priority date: 2023-02-16
Legal status: Active (granted)

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention provides a point cloud-based target detection method, system, storage medium and device. The method comprises the following steps: dividing original point cloud data into training point cloud data and test point cloud data, and acquiring images corresponding to the training point cloud data; inputting the training point cloud data and the images into an enhancement module, screening sparse point cloud data from the training point cloud data by using the images in the enhancement module, and training a feature extraction module in the enhancement module based on the sparse point cloud data; in response to the feature extraction module completing training, inputting the test point cloud data into the trained feature extraction module and into a point cloud feature extraction module, respectively, to obtain an enhanced feature vector and an original feature vector; and fusing the enhanced feature vector with the original feature vector to obtain a fused feature vector, and performing three-dimensional target detection based on the fused feature vector. The method addresses the low accuracy of point cloud-based target detection when recognizing distant objects.

Description

Target detection method, system, storage medium and device based on point cloud
Technical Field
The present invention relates to the field of autonomous driving, and in particular, to a point cloud-based target detection method, system, storage medium, and apparatus.
Background
Autonomous driving is currently under active development, and it relies on perception capability; 3D object detection is an important technology in information perception. Two main types of sensors are used for 3D object detection: conventional cameras and lidar.
Common target detection algorithms fall into three categories: image-based algorithms, point cloud-based algorithms, and fusion-based algorithms that combine the two and take both an image and a point cloud as input.
Image-based object detection takes only RGB images (three-channel color images) as input. Since the 2-dimensional plane represented by an image must stand in for a 3-dimensional space, a depth dimension has to be estimated to expand the space the image represents. The detection result of this approach therefore depends largely on the depth prediction, and because current depth prediction methods carry relatively large errors, image-based object detection performs relatively poorly.
Point cloud-based target detection benefits from the fact that the point cloud acquired by a lidar carries enough information to represent three-dimensional space, so its detection results are relatively good. However, owing to the characteristics of lidar, the point cloud is denser near the emission source and sparser far from it; the method therefore detects nearby objects well but performs poorly on distant objects.
Compared with purely point cloud-based methods, common target detection methods that fuse point clouds and images do bring improvement, but their fusion strategies do not fully account for the sparsity of distant point clouds, and fusion requires two sensor inputs, which increases both computation and cost.
Disclosure of Invention
In view of the above, the present invention aims to provide a point cloud-based target detection method, system, storage medium and device, so as to solve the problem in the prior art that point cloud-based target detection performs poorly on sparse point clouds far from the emission source, resulting in a low recognition rate for distant objects.
Based on the above purpose, the invention provides a target detection method based on point cloud, comprising the following steps:
dividing the original point cloud data into training point cloud data and test point cloud data, and acquiring images corresponding to the training point cloud data;
respectively inputting training point cloud data and images into an enhancement module, screening sparse point cloud data from the training point cloud data by utilizing the images in the enhancement module, and training a feature extraction module in the enhancement module based on the sparse point cloud data;
in response to the feature extraction module completing training, respectively inputting the test point cloud data into the trained feature extraction module and the point cloud feature extraction module to obtain an enhanced feature vector and an original feature vector;
and fusing the enhanced feature vector and the original feature vector to obtain a fused feature vector, and detecting the three-dimensional target based on the fused feature vector.
In some embodiments, screening sparse point cloud data from training point cloud data using the image in the enhancement module includes:
inputting the image into a two-dimensional target detection network in the enhancement module to obtain a two-dimensional detection frame;
obtaining coordinates of training point cloud data on an image through a transformation matrix;
and filtering the point cloud data outside the frame through the two-dimensional detection frame based on the coordinates, and screening out sparse point cloud data inside the frame.
In some embodiments, screening out sparse point cloud data within a box includes:
and removing the point cloud data in the preset range of the corresponding emission source of the intra-frame distance, and screening out sparse point cloud data.
In some embodiments, training the feature extraction module in the enhancement module based on the sparse point cloud data includes:
and inputting training point cloud data into a feature extraction module, inputting sparse point cloud data into the feature extraction module as a true value, and training the feature extraction module based on a constructed loss function, wherein the loss function represents feature similarity of the training point cloud data and the sparse point cloud data.
In some embodiments, fusing the enhanced feature vector with the original feature vector comprises:
vector stitching is performed on the enhanced feature vector and the original feature vector.
In some embodiments, three-dimensional object detection based on the fused feature vector includes:
and inputting the fusion feature vector into a bev conversion module to obtain bev feature vector, and performing three-dimensional target detection on the bev feature vector.
In some embodiments, three-dimensional object detection of bev feature vectors includes:
and inputting the bev characteristic vector into the three-dimensional target detection head, and predicting the coordinates of the three-dimensional detection frame and the corresponding object type.
In another aspect of the present invention, there is also provided a target detection system based on a point cloud, including:
the data module is configured to divide the original point cloud data into training point cloud data and test point cloud data and acquire images corresponding to the training point cloud data;
the training module is configured to input training point cloud data and images into the enhancement module respectively, screen sparse point cloud data from the training point cloud data by utilizing the images in the enhancement module, and train the feature extraction module in the enhancement module based on the sparse point cloud data;
the input module is configured to respond to the completion of training of the feature extraction module, and input the test point cloud data to the feature extraction module and the point cloud feature extraction module after the completion of training respectively to obtain an enhanced feature vector and an original feature vector; and
the target detection module is configured to fuse the enhanced feature vector and the original feature vector to obtain a fused feature vector, and perform three-dimensional target detection based on the fused feature vector.
In yet another aspect of the present invention, there is also provided a computer readable storage medium storing computer program instructions which, when executed by a processor, implement the above-described method.
In yet another aspect of the present invention, there is also provided a computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, performs the above method.
The invention has at least the following beneficial technical effects:
according to the invention, training point cloud data and corresponding images are input into the enhancement module, sparse point cloud data are screened out by utilizing the images in the enhancement module, and the feature extraction module in the enhancement module is trained according to the sparse point cloud data, so that features extracted by the feature extraction module are approximate to features of remote sparse point clouds; the method comprises the steps of respectively inputting test point cloud data into a feature extraction module and an original point cloud feature extraction module after training, fusing the obtained enhanced feature vector and the original feature vector, and then carrying out three-dimensional target detection, so that the problem of low accuracy of target detection based on the point cloud on remote objects is solved, and the overall accuracy of a target detection algorithm based on the point cloud is further improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a target detection method based on a point cloud according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of the enhancement module in the training phase according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of point cloud target detection in the inference phase according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a point cloud based object detection system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a computer-readable storage medium implementing a point cloud based target detection method according to an embodiment of the present invention;
fig. 6 is a schematic hardware structure of a computer device for performing a point cloud-based target detection method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiments of the present invention, the expressions "first" and "second" are used to distinguish two entities or parameters with the same name that are not identical; "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, or article that comprises a list of steps or units is not limited to the steps or units listed.
Based on the above object, in a first aspect of the embodiments of the present invention, an embodiment of a target detection method based on a point cloud is provided. Fig. 1 is a schematic diagram of an embodiment of a point cloud-based target detection method provided by the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
s10, dividing original point cloud data into training point cloud data and test point cloud data, and acquiring images corresponding to the training point cloud data;
step S20, training point cloud data and images are respectively input into an enhancement module, sparse point cloud data are screened out from the training point cloud data by the aid of the images in the enhancement module, and a feature extraction module in the enhancement module is trained based on the sparse point cloud data;
step S30, in response to the completion of training of the feature extraction module, the cloud data of the test points are respectively input into the feature extraction module and the point cloud feature extraction module after the completion of training, and an enhanced feature vector and an original feature vector are obtained;
and S40, fusing the enhanced feature vector and the original feature vector to obtain a fused feature vector, and detecting the three-dimensional target based on the fused feature vector.
According to the embodiment of the invention, the training point cloud data and the corresponding images are input into the enhancement module, sparse point cloud data is screened out using the images in the enhancement module, and the feature extraction module in the enhancement module is trained on the sparse point cloud data, so that the features extracted by the feature extraction module approximate the features of distant sparse point clouds. The test point cloud data is then input into the trained feature extraction module and the original point cloud feature extraction module respectively, the resulting enhanced feature vector and original feature vector are fused, and three-dimensional target detection is performed; this solves the low accuracy of point cloud-based target detection on distant objects and thus improves the overall accuracy of the point cloud-based target detection algorithm.
In some embodiments, screening sparse point cloud data from training point cloud data using the image in the enhancement module includes: inputting the image into a two-dimensional target detection network in the enhancement module to obtain a two-dimensional detection frame; obtaining coordinates of training point cloud data on an image through a transformation matrix; and filtering the point cloud data outside the frame through the two-dimensional detection frame based on the coordinates, and screening out sparse point cloud data inside the frame.
In some embodiments, screening out sparse point cloud data within a box includes: removing, from the in-frame point cloud data, the points whose distance to the corresponding emission source falls within a preset range, and screening out the sparse point cloud data.
Fig. 2 is a schematic flow chart of the enhancement module in the training phase according to an embodiment of the present invention. As shown in fig. 2, an RGB image (three-channel color image) is first input into a 2D (two-dimensional) target detection network to obtain 2D detection frames; then the point-cloud-to-image transformation matrix, denoted T, is obtained from the calibration of the camera and the lidar, and traversing the point cloud data and applying T yields the position coordinates of each point on the corresponding image.
The point cloud data outside the frames is filtered out using the 2D detection frames; then the in-frame point cloud data whose distance to the emission source falls within a preset range is removed, screening out the sparse point cloud data. Specifically, given a distance threshold x from the sampling center (i.e., the emission source, such as the lidar), all point cloud data whose distance to the sampling center is less than x is removed, i.e., the dense point cloud data near the emission source is discarded; the screened sparse point cloud data is finally obtained and denoted lidar1.
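This screening step can be summarized in a short sketch. The following is a minimal NumPy illustration, assuming T is the 3x4 lidar-to-image projection matrix from calibration and the 2D frames are given as pixel corner pairs; the function name, array layouts, and the threshold parameter are illustrative, not taken from the patent.

```python
import numpy as np

def screen_sparse_points(points, T, boxes_2d, x_min_dist):
    """Screen sparse point cloud data (lidar1) as described above.

    points:     (N, 3) lidar points in the sensor frame
    T:          (3, 4) lidar-to-image projection matrix from calibration
    boxes_2d:   iterable of (u1, v1, u2, v2) frames from the 2D detector
    x_min_dist: threshold x; points closer than this to the emission
                source (the sampling center) are removed as dense points
    """
    # Project every point onto the image plane via the calibration matrix T.
    homo = np.hstack([points, np.ones((points.shape[0], 1))])    # (N, 4)
    uvw = homo @ T.T                                             # (N, 3)
    valid = uvw[:, 2] > 1e-6                                     # in front of the camera
    uv = uvw[:, :2] / np.where(valid, uvw[:, 2], 1.0)[:, None]   # pixel coordinates

    # Keep only points whose projection falls inside some 2D detection frame.
    in_box = np.zeros(points.shape[0], dtype=bool)
    for u1, v1, u2, v2 in boxes_2d:
        in_box |= ((uv[:, 0] >= u1) & (uv[:, 0] <= u2) &
                   (uv[:, 1] >= v1) & (uv[:, 1] <= v2))

    # Remove the dense in-frame points near the emission source (distance < x).
    dist = np.linalg.norm(points, axis=1)
    keep = valid & in_box & (dist >= x_min_dist)
    return points[keep]   # lidar1: the screened sparse point cloud
```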
In some embodiments, training the feature extraction module in the enhancement module based on the sparse point cloud data includes: and inputting training point cloud data into a feature extraction module, inputting sparse point cloud data into the feature extraction module as a true value, and training the feature extraction module based on a constructed loss function, wherein the loss function represents feature similarity of the training point cloud data and the sparse point cloud data.
As shown in fig. 2, lidar1 is input into the feature extraction module F as the ground truth, and the loss function is set to loss = similarity(remote_feature, lidar1), i.e., the similarity between the enhanced features of the original training point cloud data lidar and the sparse point cloud data lidar1, where remote_feature = F(lidar).
Ground truth refers to the real situation or real value; when the model is evaluated or trained (i.e., when the loss function is computed), the prediction output by the model is compared with the real value, which reveals the quality and performance of the model.
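The patent fixes only that the loss expresses feature similarity between the training point cloud and the screened sparse point cloud; the exact measure is left open. Below is a sketch of one plausible reading in PyTorch using cosine similarity; the target_extractor used to embed lidar1, the cosine measure, and all names are assumptions rather than the patent's prescribed implementation.

```python
import torch
import torch.nn.functional as nnf  # aliased to avoid clashing with the module name F

def enhancement_loss(F_module, target_extractor, lidar, lidar1):
    """Sketch of the training objective: make F(lidar) resemble the
    features of the screened sparse point cloud lidar1 (ground truth).

    Which network embeds lidar1 is an assumption, as is the use of
    cosine similarity; the patent only states that the loss expresses
    feature similarity between the two point clouds.
    """
    remote_feature = F_module(lidar)              # remote_feature = F(lidar)
    with torch.no_grad():                         # the ground-truth branch is not trained
        target_feature = target_extractor(lidar1)
    sim = nnf.cosine_similarity(remote_feature.flatten(1),
                                target_feature.flatten(1), dim=1)
    return (1.0 - sim).mean()                     # 0 when the features coincide
```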
In some embodiments, fusing the enhanced feature vector with the original feature vector comprises: vector stitching is performed on the enhanced feature vector and the original feature vector.
Fig. 3 is a schematic flow chart of point cloud target detection in the inference phase according to an embodiment of the present invention. As shown in fig. 3, in the inference stage, the test point cloud data lidar' is input into a point cloud feature extraction network (such as a ResNet) for feature extraction, and the extracted feature is named the lidar' feature; the test point cloud data lidar' is also input into the trained feature extraction module F in the enhancement module, which outputs the feature enhanced for distant point clouds, remote_feature = F(lidar').
The remote_feature and the lidar' feature are fused together and denoted the fusion feature (the fused feature vector); the fusion feature carries both the original feature information and the feature information enhanced for distant sparse point clouds. Because the original feature information contains little information about distant point cloud data while the enhanced feature information contains much, fusing the two is worthwhile.
The fusion may directly perform vector concatenation (i.e., concat). concat is a series feature fusion that directly splices the two features. Alternatively, an add fusion may be used: add adopts a parallel strategy, combining the two feature vectors by element-wise addition into a single vector.
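For illustration, both fusion strategies can be written in a few lines of PyTorch. Tensor shapes of (batch, channels, ...) are assumed, and the function name is hypothetical; note that add requires the two branches to share a shape, while concat does not.

```python
import torch

def fuse_features(remote_feature, lidar_feature, mode="concat"):
    """Fuse the enhanced feature with the original feature.

    concat: series fusion, the two features are spliced along the
            channel dimension (the branch widths may differ);
    add:    parallel fusion by element-wise addition, which requires
            the two branches to have identical shapes.
    """
    if mode == "concat":
        return torch.cat([remote_feature, lidar_feature], dim=1)
    if mode == "add":
        return remote_feature + lidar_feature
    raise ValueError(f"unknown fusion mode: {mode}")
```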
In some embodiments, performing three-dimensional target detection based on the fused feature vector includes: inputting the fused feature vector into a BEV conversion module to obtain a BEV feature vector, and performing three-dimensional target detection on the BEV feature vector.
In some embodiments, performing three-dimensional target detection on the BEV feature vector includes: inputting the BEV feature vector into a three-dimensional target detection head, and predicting the coordinates of the three-dimensional detection frame and the corresponding object category.
Specifically, the fusion feature is input into the BEV conversion module (i.e., the existing features are projected onto the bird's-eye-view plane) to generate the BEV feature (i.e., the BEV feature vector); finally, the BEV feature is input into an existing 3D (three-dimensional) target detection head (e.g., CenterPoint) to predict the 3D detection frame coordinates and the corresponding object categories.
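Putting the inference stage together, the following PyTorch sketch mirrors the flow of fig. 3. Every submodule is a placeholder for a network named in the text (the ResNet-style point cloud backbone, the trained module F, the BEV conversion module, and a CenterPoint-style head); none of the class, argument, or method names here are prescribed by the patent.

```python
import torch
import torch.nn as nn

class PointCloudDetector(nn.Module):
    """Inference-stage pipeline mirroring fig. 3; a sketch under the
    assumption that each submodule exposes a simple callable interface."""

    def __init__(self, point_backbone, module_f, bev_module, det_head):
        super().__init__()
        self.point_backbone = point_backbone  # e.g. a ResNet-style extractor
        self.module_f = module_f              # trained enhancement module F
        self.bev_module = bev_module          # projects features to the BEV plane
        self.det_head = det_head              # e.g. a CenterPoint-style 3D head

    def forward(self, lidar):
        lidar_feature = self.point_backbone(lidar)     # original feature
        remote_feature = self.module_f(lidar)          # enhanced feature
        fusion_feature = torch.cat([lidar_feature, remote_feature], dim=1)
        bev_feature = self.bev_module(fusion_feature)  # BEV feature vector
        return self.det_head(bev_feature)              # 3D frames + categories
```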
In a second aspect of the embodiment of the present invention, a target detection system based on a point cloud is also provided. Fig. 4 is a schematic diagram of an embodiment of the point cloud-based object detection system provided by the present invention. As shown in fig. 4, the point cloud-based object detection system includes: a data module 10 configured to divide the original point cloud data into training point cloud data and test point cloud data, and acquire images corresponding to the training point cloud data; a training module 20 configured to input the training point cloud data and the images into the enhancement module respectively, screen sparse point cloud data from the training point cloud data using the images in the enhancement module, and train the feature extraction module in the enhancement module based on the sparse point cloud data; an input module 30 configured to, in response to the feature extraction module completing training, input the test point cloud data into the trained feature extraction module and the point cloud feature extraction module respectively to obtain an enhanced feature vector and an original feature vector; and a target detection module 40 configured to fuse the enhanced feature vector with the original feature vector to obtain a fused feature vector, and perform three-dimensional target detection based on the fused feature vector.
With the above point cloud-based target detection system, the training point cloud data and the corresponding images are input into the enhancement module, sparse point cloud data is screened out using the images in the enhancement module, and the feature extraction module in the enhancement module is trained on the sparse point cloud data, so that the features extracted by the feature extraction module approximate the features of distant sparse point clouds. The test point cloud data is then input into the trained feature extraction module and the original point cloud feature extraction module respectively, the resulting enhanced feature vector and original feature vector are fused, and three-dimensional target detection is performed; this solves the low accuracy of point cloud-based target detection on distant objects and thus improves the overall accuracy of the point cloud-based target detection algorithm.
In a third aspect of the embodiment of the present invention, a computer readable storage medium is provided, and fig. 5 shows a schematic diagram of a computer readable storage medium for implementing a point cloud based target detection method according to an embodiment of the present invention. As shown in fig. 5, the computer-readable storage medium 3 stores computer program instructions 31. The computer program instructions 31 when executed by a processor implement the method of any of the embodiments described above.
It should be appreciated that all of the embodiments, features and advantages set forth above for the point cloud based object detection method according to the present invention equally apply to the point cloud based object detection system and storage medium according to the present invention without conflicting therewith.
In a fourth aspect of the embodiment of the present invention, there is also provided a computer device, including a memory 402 and a processor 401 as shown in fig. 6, where the memory 402 stores a computer program, and the computer program is executed by the processor 401 to implement the method of any one of the embodiments above.
Fig. 6 is a schematic hardware structure diagram of an embodiment of a computer device for performing the point cloud-based target detection method according to the present invention. Taking the computer device shown in fig. 6 as an example, the computer device includes a processor 401 and a memory 402, and may further include an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403, and the output device 404 may be connected by a bus or in other ways; connection by a bus is taken as the example in fig. 6. The input device 403 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the point cloud-based object detection system. The output device 404 may include a display device such as a display screen.
The memory 402 is used as a non-volatile computer readable storage medium, and may be used to store a non-volatile software program, a non-volatile computer executable program, and a module, such as program instructions/modules corresponding to the point cloud-based object detection method in the embodiment of the present application. Memory 402 may include a storage program area that may store an operating system, at least one application program required for functionality, and a storage data area; the storage data area may store data created by the use of a point cloud-based target detection method, and the like. In addition, memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 402 may optionally include memory located remotely from processor 401, which may be connected to the local module via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 401 executes various functional applications of the server and data processing, i.e., implements the point cloud based object detection method of the above-described method embodiment, by running nonvolatile software programs, instructions, and modules stored in the memory 402.
Finally, it is noted that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that "and/or" as used herein includes any and all possible combinations of one or more of the associated listed items. The serial numbers of the foregoing embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Those of ordinary skill in the art will appreciate that: the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of embodiments of the invention, including the claims, is limited to such examples; combinations of features of the above embodiments or in different embodiments are also possible within the idea of an embodiment of the invention, and many other variations of the different aspects of the embodiments of the invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the embodiments should be included in the protection scope of the embodiments of the present invention.

Claims (10)

1. A target detection method based on point cloud, characterized by comprising the following steps:
dividing original point cloud data into training point cloud data and test point cloud data, and acquiring images corresponding to the training point cloud data;
inputting the training point cloud data and the image into an enhancement module respectively, determining a two-dimensional target by using the image in the enhancement module, screening out sparse point cloud data corresponding to the two-dimensional target from the training point cloud data, and training a feature extraction module in the enhancement module based on the sparse point cloud data, further comprising: inputting the training point cloud data into the feature extraction module, and inputting the sparse point cloud data into the feature extraction module as a true value;
responding to the completion of training of the feature extraction module, and respectively inputting the test point cloud data into the feature extraction module and the point cloud feature extraction module after the completion of training to obtain an enhanced feature vector and an original feature vector;
and fusing the enhanced feature vector and the original feature vector to obtain a fused feature vector, and detecting a three-dimensional target based on the fused feature vector.
2. The method of claim 1, wherein screening sparse point cloud data from the training point cloud data using the image in the enhancement module comprises:
inputting the image to a two-dimensional target detection network in the enhancement module to obtain a two-dimensional detection frame;
obtaining coordinates of the training point cloud data on the image through a transformation matrix;
and filtering the point cloud data outside the frame through the two-dimensional detection frame based on the coordinates, and screening out sparse point cloud data inside the frame.
3. The method of claim 2, wherein screening out sparse point cloud data within a frame comprises:
and removing point cloud data in a preset range of the corresponding emission source of the in-frame distance, and screening out the sparse point cloud data.
4. The method of claim 1, wherein training a feature extraction module of the enhancement module based on the sparse point cloud data comprises:
and training the feature extraction module based on a constructed loss function, wherein the loss function represents feature similarity of the training point cloud data and the sparse point cloud data.
5. The method of claim 1, wherein fusing the enhanced feature vector and the original feature vector comprises:
and vector splicing is carried out on the enhanced feature vector and the original feature vector.
6. The method of claim 1, wherein performing three-dimensional object detection based on the fused feature vector comprises:
and inputting the fusion feature vector into a bev conversion module to obtain bev feature vector, and carrying out three-dimensional target detection on the bev feature vector.
7. The method of claim 6, wherein performing three-dimensional target detection on the BEV feature vector comprises:
inputting the BEV feature vector into a three-dimensional target detection head, and predicting coordinates of a three-dimensional detection frame and corresponding object categories.
8. A point cloud-based object detection system, comprising:
the data module is configured to divide the original point cloud data into training point cloud data and test point cloud data, and acquire images corresponding to the training point cloud data;
the training module is configured to input the training point cloud data and the image to an enhancement module respectively, determine a two-dimensional target by using the image in the enhancement module, screen sparse point cloud data corresponding to the two-dimensional target from the training point cloud data, and train a feature extraction module in the enhancement module based on the sparse point cloud data, and further comprises: inputting the training point cloud data to the feature extraction module, and inputting the sparse point cloud data as a true value to the feature extraction module;
the input module is configured to respond to the completion of training of the feature extraction module, and input the test point cloud data to the feature extraction module and the point cloud feature extraction module after the completion of training respectively to obtain an enhanced feature vector and an original feature vector; and
and the target detection module is configured to fuse the enhanced feature vector with the original feature vector to obtain a fused feature vector, and perform three-dimensional target detection based on the fused feature vector.
9. A computer readable storage medium, characterized in that computer program instructions are stored, which, when executed by a processor, implement the method of any one of claims 1-7.
10. A computer device comprising a memory and a processor, wherein the memory has stored therein a computer program which, when executed by the processor, performs the method of any of claims 1-7.
Application CN202310123443.9A, filed 2023-02-16 (priority date 2023-02-16): Target detection method, system, storage medium and device based on point cloud. Granted as CN115830588B; status: Active.

Publications (2)

CN115830588A (application publication), published 2023-03-21
CN115830588B (granted patent), published 2023-05-26

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant