CN116051925B - Training sample acquisition method, device, equipment and storage medium - Google Patents

Training sample acquisition method, device, equipment and storage medium

Info

Publication number
CN116051925B
Authority
CN
China
Prior art keywords
point cloud
cloud data
radar
data
direction angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310010153.3A
Other languages
Chinese (zh)
Other versions
CN116051925A (en)
Inventor
陈曲
叶晓青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310010153.3A priority Critical patent/CN116051925B/en
Publication of CN116051925A publication Critical patent/CN116051925A/en
Application granted granted Critical
Publication of CN116051925B publication Critical patent/CN116051925B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/06Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067Reshaping or unfolding 3D tree structures onto 2D planes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The present disclosure provides a training sample acquisition method, device, equipment and storage medium, relating to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing, deep learning and the like, and applicable to scenes such as smart cities and the metaverse. Because the target recognition model is trained on point cloud data of more than one resolution, recognition accuracy is improved.

Description

Training sample acquisition method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of computer vision, image processing, deep learning and the like; it can be applied to scenes such as smart cities and the metaverse, and relates in particular to a training sample acquisition method, device, equipment and storage medium.
Background
To perform object recognition, the point cloud acquired by a radar is generally input into a target recognition model, so that the model recognizes the object corresponding to the point cloud.
However, in practice, a target recognition model trained on point cloud data acquired by one type of radar has been found to recognize targets poorly when applied to radars of other types.
Disclosure of Invention
The present disclosure provides a training sample collection method, apparatus, device, and storage medium.
According to an aspect of the present disclosure, there is provided a training sample collection method, including:
acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams;
downsampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data;
and generating a training sample for training the target recognition model based on the second point cloud data.
According to another aspect of the present disclosure, there is provided a training sample acquisition device comprising:
The acquisition module is used for acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams;
the downsampling module is used for downsampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data;
and the generation module is used for generating a training sample for training the target recognition model based on the second point cloud data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect embodiment of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described in embodiments of the first aspect of the present disclosure.
According to the training sample collection method, the device, the equipment and the storage medium, after the first point cloud data collected by the first radar are obtained, the first point cloud data comprise the position point data collected by the first radar through the plurality of beams, so that the first point cloud data can be downsampled according to the beams to which the position point data belong in the first point cloud data, second point cloud data after downsampling are obtained, and the training sample for training the target recognition model is generated based on the second point cloud data. The target recognition model adopts point cloud data without resolution ratio during training, the application range of the target recognition model is expanded, and the situation that the target recognition effect is poor when the target recognition model is migrated and applied to other types of radars due to the fact that the target recognition model is obtained by training the point cloud data acquired by one type of radars and caused by different types of radars with different resolution ratios in the related technology is avoided.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
fig. 1 is a schematic flow chart of a training sample collection method according to an embodiment of the disclosure;
FIG. 2 is a flow chart of another training sample collection method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another training sample collection method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another training sample collection method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an example process;
fig. 6 is a schematic structural diagram of a training sample collection device 600 according to an embodiment of the disclosure;
fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
To perform object recognition, point cloud data acquired by a radar is generally input into a target recognition model, so that the model recognizes the object corresponding to the point cloud. However, radars of different types have different resolutions, and when a target recognition model trained on one type of radar is migrated to radars of other types, its recognition effect is poor, which leaves the model with a narrow range of application. Therefore, in order to expand the application range of the target recognition model, migration training needs to be performed on it.
In the related art, sampling with several different radars would undoubtedly increase sampling cost. In the embodiment of the disclosure, therefore, only one type of radar is used to sample point cloud data, and the point cloud data that radars of other types would acquire is simulated through downsampling.
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is acquired, because the first point cloud data includes the position point data acquired by the first radar through the plurality of beams, the first point cloud data can be downsampled according to the beams to which each position point data belongs in the first point cloud data, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. The target recognition model adopts point cloud data without resolution ratio during training, the application range of the target recognition model is expanded, and the situation that the target recognition effect is poor when the target recognition model is migrated and applied to other types of radars due to the fact that the target recognition model is obtained by training the point cloud data acquired by one type of radars and caused by different types of radars with different resolution ratios in the related technology is avoided.
The following describes a training sample collection method, apparatus, device, and storage medium with reference to various embodiments of the present disclosure.
Fig. 1 is a flow chart of a training sample acquisition method provided by an embodiment of the present disclosure. The method provided by this embodiment may be executed by a cloud server communicatively connected to at least one vehicle-mounted radar, for example the first radar, from which the cloud server obtains the acquired first point cloud data. Those skilled in the art will appreciate that the method provided in this embodiment may also be performed by the first radar itself, or by the vehicle-mounted terminal on which the first radar is mounted.
As shown in fig. 1, the method includes:
step 101, acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams.
The first radar is the radar used in this embodiment to sample point cloud data, and the point cloud data acquired by the first radar is called the first point cloud data.
The first radar collects position point data through a plurality of beams, and because the collected position point data is in point cloud form, it is called the first point cloud data. Each of the plurality of beams of the first radar corresponds to a certain direction angle, that is, it acquires position point data at that direction angle. The direction angle refers to the central angle of the beam; as those skilled in the art know, a beam has a certain angular width, so it acquires position point data within a certain angular range centered on its direction angle.
Step 102, downsampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data.
In some related technologies, when the first point cloud data needs to be downsampled, the beams to which the position point data belong are generally not distinguished: position point data are selected at random according to a certain downsampling rate, or a grid-division scheme selects a certain proportion of position point data for deletion or merging, so as to realize downsampling.
Since a low-resolution radar generally has fewer beams, in this embodiment the downsampling is performed based on the beam to which each position point data in the first point cloud data belongs, so that the position point data a low-resolution radar would acquire can be simulated.
The first point cloud data comprises a plurality of position point data, and each position point data has a beam to which it belongs, namely the beam that acquired it. Downsampling per beam approximates the position point data a low-resolution radar would actually acquire.
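As a minimal sketch of this idea (a hypothetical helper, not part of the disclosure; the names points, beam_ids and downsample_by_beam are illustrative), beam-aware downsampling reduces to masking points by their beam index, rather than selecting points at random or by grid:

```python
import numpy as np

def downsample_by_beam(points: np.ndarray, beam_ids: np.ndarray,
                       beams_to_keep) -> np.ndarray:
    """Keep only the position point data whose acquiring beam is retained.

    points:        (N, 3) x/y/z position point data (first point cloud data).
    beam_ids:      (N,) index of the beam that acquired each point.
    beams_to_keep: iterable of beam indices retained after downsampling.
    """
    mask = np.isin(beam_ids, np.asarray(list(beams_to_keep)))
    # The surviving points form the second point cloud data, which mimics
    # what a radar with fewer beams would have acquired.
    return points[mask]
```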
Step 103, generating a training sample for training the target recognition model based on the second point cloud data.
As a first possible implementation manner, after training of the target recognition model with the first point cloud data is completed, training continues with the second point cloud data: having first learned target recognition from high-resolution point cloud data, the model goes on to learn it from lower-resolution point cloud data.
As a second possible implementation manner, the first point cloud data and the second point cloud data are added to a set of training samples, and the target recognition model is trained alternately, so that the target recognition model learns target recognition synchronously based on the point cloud data with high resolution and based on the point cloud data with low resolution.
As a third possible implementation manner, after or while the target recognition model is trained with the first point cloud data, the second point cloud data and point cloud data actually collected by a low-resolution radar are added to a set of training samples, and the target recognition model is trained so that it learns target recognition from both synchronously; this reduces, to a certain extent, the sample size the low-resolution radar must actually collect.
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
Fig. 2 is a flow chart of another training sample acquisition method according to an embodiment of the disclosure. As shown in fig. 2, in some possible embodiments, the second radar to which the target recognition model needs to be adapted is known. To simplify training sample acquisition, the embodiment of the disclosure does not require the second radar itself to be used for point cloud collection.
As shown in fig. 2, the training sample collection method includes the following steps:
step 201, acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams.
Reference is made specifically to the foregoing descriptions of the embodiment, which will not be repeated here.
Step 202, obtaining direction angles of a plurality of beams in a second radar, wherein the number of beams of the second radar is smaller than that of the first radar.
Since the number of beams is generally reduced for a low resolution radar, in this embodiment, in the case of a known second radar, the direction angles of the plurality of beams of the second radar may be obtained by referring to a technical document provided by the manufacturer, so that the subsequent downsampling is performed based on the direction angles of the plurality of beams of the second radar.
Step 203, taking, as a first target beam, any beam among the plurality of beams of the first radar that does not match any beam direction angle of the second radar.
As one possible implementation, a beam of the first radar whose direction angle differs from the direction angle of every beam of the second radar by more than a threshold value is determined as a first target beam that does not match any beam direction angle of the second radar.
Downsampling is then carried out based on the beam to which each position point data in the first point cloud data belongs, so that the position point data acquired by the direction-angle-matched beams of the low-resolution radar can be simulated.
Step 204, deleting the position point data belonging to the first target beam from the first point cloud data to obtain the downsampled second point cloud data.
Optionally, since the foregoing steps have already determined the beam to which each position point data belongs and the first target beams of the first radar that match no beam direction angle of the second radar, downsampling can be implemented in this step simply by deleting the position point data belonging to the first target beams. The second point cloud data thus simulates the point cloud data the second radar would actually acquire.
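A minimal sketch of this matching step, assuming the beam direction angles of both radars are available as plain arrays (the helper name first_target_beams and the default threshold are illustrative, not from the disclosure):

```python
import numpy as np

def first_target_beams(first_angles, second_angles, threshold=0.5):
    """Indices of first-radar beams whose direction angle differs from the
    direction angle of every second-radar beam by more than the threshold;
    these are the first target beams whose points are to be deleted."""
    diffs = np.abs(np.asarray(first_angles)[:, None]
                   - np.asarray(second_angles)[None, :])  # (B1, B2) pairwise gaps
    unmatched = diffs.min(axis=1) > threshold             # no close counterpart
    return set(np.flatnonzero(unmatched).tolist())
```

Deleting the position point data of these beams (for example with the downsample_by_beam sketch above, keeping all other beams) yields the second point cloud data.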
Step 205, generating a training sample for training the target recognition model based on the second point cloud data.
Reference is made specifically to the foregoing descriptions of the embodiment, which will not be repeated here.
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
Fig. 3 is a flowchart of another training sample acquisition method provided in an embodiment of the disclosure. As shown in fig. 3, in some possible embodiments, the second radar to which the target recognition model needs to be adapted is not known as it is in the foregoing embodiment. To simplify training sample acquisition, the embodiment of the disclosure provides a way to obtain point cloud data in this case as well.
As shown in fig. 3, the training sample collection method includes the following steps:
step 301, acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams.
Step 302, selecting a second target beam from beams included in the first radar according to a downsampling rate.
In the scenario corresponding to this embodiment, the second radar to which the target recognition model needs to be adapted is not known, so a corresponding proportion of beams may be selected for deletion according to the downsampling rate. The second target beams may be selected at intervals, i.e., no two second target beams are adjacent among the plurality of beams of the first radar; this avoids whole regions of data going missing from the second point cloud data, as would happen if consecutive beams were selected as second target beams. A sketch of such interval selection follows the next step.
Step 303, deleting the position point data belonging to the second target beam from the first point cloud data to obtain the downsampled second point cloud data.
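A minimal sketch of interval beam selection, under the assumption that the downsampling rate is read as the fraction of beams to delete (the helper name second_target_beams is illustrative):

```python
def second_target_beams(num_beams: int, downsampling_rate: float) -> set:
    """Select non-adjacent beams for deletion according to a downsampling
    rate, read here as the fraction of beams to delete.

    With a rate of at most 0.5 the chosen beams are spaced apart, so no
    two deleted beams are adjacent and no contiguous gap appears.
    """
    step = max(2, round(1.0 / downsampling_rate))  # e.g. rate 0.25 -> every 4th beam
    return set(range(0, num_beams, step))
```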
Step 304, generating a training sample for training the target recognition model based on the second point cloud data.
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
Fig. 4 is a flowchart of another training sample acquisition method according to an embodiment of the present disclosure. As shown in fig. 4, each position point data in the point cloud data collected by the radar is mapped to a position in a set spatial domain, which may be established based on, for example, a zenith angle-direction angle (azimuth-zenith) coordinate system.
As shown in fig. 4, the training sample collection method includes the following steps:
step 401, acquiring first point cloud data acquired by a first radar under a radar coordinate system, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams.
Step 402, according to the position of each position point data in the first point cloud data, projecting each position point data in the first point cloud data in a set spatial domain, so as to determine the direction angle of each position point data in the spatial domain.
The set spatial domain may be two-dimensional or three-dimensional, which is not limited in this embodiment. The point of projecting into the set spatial domain is to place the position point data of the first point cloud data and the direction angles of the beams of the first radar in the same domain, so that it can be identified which beam's direction angle each position point data matches, and hence which beam of the first radar each position point belongs to.
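A minimal sketch of such a projection, assuming a conventional spherical-coordinate convention with the z axis vertical (the function name to_azimuth_zenith is illustrative; the disclosure does not fix the axis convention):

```python
import numpy as np

def to_azimuth_zenith(points: np.ndarray):
    """Project (N, 3) x/y/z points from the radar coordinate system into
    an azimuth-zenith spatial domain; both angles are in degrees."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))  # horizontal angle around the sensor
    # Zenith angle measured down from the vertical axis; clip guards r == 0.
    zenith = np.degrees(np.arccos(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0)))
    return azimuth, zenith
```

For a typical multi-beam scanning radar, each beam sweeps a narrow band of zenith angles, so the zenith value can serve as the direction angle used for beam matching.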
Step 403, determining a beam to which each position point data in the first point cloud data belongs according to a direction angle of each beam in the first radar in the spatial domain and a direction angle of each position point data in the spatial domain.
As a first possible implementation manner, the direction angle range of each beam of the first radar is determined according to the beam's direction angle in the spatial domain and the corresponding angular resolution; the beam to which each position point data in the first point cloud data belongs is then determined according to the direction angle range within which that position point data's direction angle falls.
With known angular resolution, the range of direction angles may be determined as in the first possible implementation manner described above, and the beam to which each position point data in the first point cloud data belongs may be determined based on this.
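A minimal sketch of this range-based assignment (the helper name assign_beam_by_range and the -1 sentinel for unassigned points are illustrative assumptions):

```python
import numpy as np

def assign_beam_by_range(point_angles, beam_centers, angular_resolution):
    """Assign each point to the beam whose direction angle range
    [center - res/2, center + res/2] contains the point's direction angle;
    points falling outside every range are marked -1."""
    dist = np.abs(np.asarray(point_angles)[:, None]
                  - np.asarray(beam_centers)[None, :])  # (N, B) angle gaps
    inside = dist <= angular_resolution / 2.0
    return np.where(inside.any(axis=1), dist.argmin(axis=1), -1)
```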
In case the angular resolution is not known or the directional angular range of the beams in the first radar cannot be determined, then the following second possible implementation may be employed.
As a second possible implementation manner, the direction angle of each beam of the first radar in the spatial domain is used as a clustering center, and the position point data in the first point cloud data are clustered based on their direction angles in the spatial domain, so as to obtain a cluster corresponding to each beam of the first radar; the position point data contained in each cluster are determined to belong to the beam corresponding to that cluster.
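Since the cluster centers are fixed at the beam direction angles, this clustering degenerates to a one-dimensional nearest-center assignment; no iterative center update is needed. A minimal sketch (assign_beam_by_clustering is an illustrative name):

```python
import numpy as np

def assign_beam_by_clustering(point_angles, beam_centers):
    """Cluster points with the beam direction angles as fixed cluster
    centers: each point joins the cluster of its nearest center, and the
    points of a cluster are deemed to belong to the corresponding beam."""
    dist = np.abs(np.asarray(point_angles)[:, None]
                  - np.asarray(beam_centers)[None, :])  # (N, B) angle gaps
    return dist.argmin(axis=1)                          # per-point beam index
```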
Step 404, performing downsampling on the first point cloud data according to the beams to which the position point data in the first point cloud data belong, so as to obtain downsampled second point cloud data.
Step 405, generating a training sample for training the target recognition model based on the second point cloud data.
As a possible implementation manner, target labeling is carried out on the corresponding position point data in the second point cloud data according to the target labeling information of each position point data in the first point cloud data. The position point data carrying the target labeling information in the second point cloud data are then taken as the training sample and added to a training sample set for training the target recognition model. In this mode, only the first point cloud data needs to be labeled and the second point cloud data does not, which saves labeling labor cost.
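Because the beam-wise downsampling only deletes points and never creates new ones, the label transfer can be a pure index operation, as in this sketch (transfer_labels and kept_mask are illustrative names):

```python
import numpy as np

def transfer_labels(first_labels: np.ndarray, kept_mask: np.ndarray) -> np.ndarray:
    """Carry target labeling information over from the first point cloud
    to the downsampled second point cloud: each surviving point keeps the
    label it already had, so the second cloud needs no manual labeling."""
    return first_labels[kept_mask]
```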
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
To further illustrate the principles of the method provided by this embodiment, a specific example is also provided. Fig. 5 is a schematic diagram of an example process, which proceeds as follows:
Step 501, acquiring 3D point cloud data in the original radar coordinate system.
Step 502, projecting the 3D point cloud data into a zenith angle-direction angle (azimuth-zenith) coordinate system to obtain a 2D image.
Step 503, determining, based on the 2D image in the zenith angle-direction angle (azimuth-zenith) coordinate system, the beam to which each corresponding position point data in the 3D point cloud data belongs.
Step 504, deleting the position point data under the set beams to obtain new 3D point cloud data.
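Tying the earlier sketches together, an end-to-end version of this process might look as follows (it reuses the hypothetical helpers to_azimuth_zenith and assign_beam_by_clustering defined above, and assumes the beams are separated by zenith angle):

```python
import numpy as np

def simulate_low_resolution_cloud(points_3d, beam_centers, beams_to_delete):
    """End-to-end sketch of the Fig. 5 process."""
    _, zenith = to_azimuth_zenith(points_3d)                      # step 502
    beam_ids = assign_beam_by_clustering(zenith, beam_centers)    # step 503
    keep = ~np.isin(beam_ids, np.asarray(list(beams_to_delete)))  # step 504
    # New 3D point cloud plus the mask needed to transfer labels.
    return points_3d[keep], keep
```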
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
Fig. 6 is a schematic structural diagram of a training sample collection device 600 according to an embodiment of the present disclosure, and as shown in fig. 6, the training sample collection device 600 includes: an acquisition module 601, a downsampling module 602 and a generation module 603.
The acquiring module 601 is configured to acquire first point cloud data acquired by a first radar, where the first point cloud data includes position point data acquired by the first radar through a plurality of beams.
The downsampling module 602 is configured to downsample the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data.
And the generating module 603 is configured to generate a training sample for training the target recognition model based on the second point cloud data.
In one possible implementation, the downsampling module 602 includes:
an acquisition unit configured to acquire direction angles of a plurality of beams in a second radar, wherein the number of beams of the second radar is smaller than that of the first radar;
a determining unit configured to set, as a first target beam, a beam that does not match any beam direction angle of the second radar among a plurality of beams of the first radar;
and the deleting unit is used for deleting the position point data belonging to the first target beam from the first point cloud data so as to obtain the second point cloud data after downsampling.
In a possible implementation, the determining unit is further configured to:
determine, as a first target beam that does not match any beam direction angle of the second radar, a beam of the first radar whose direction angle differs from the direction angle of every beam of the second radar by more than a threshold value.
In one possible implementation, the downsampling module 602 is configured to:
selecting a second target beam from beams contained in the first radar according to a downsampling rate;
and deleting the position point data belonging to the second target beam from the first point cloud data to obtain the second point cloud data after downsampling.
In one possible implementation, the training sample acquisition device 600 further comprises:
the projection module is used for projecting each position point data in the first point cloud data into a set space domain according to the position of each position point data in the first point cloud data so as to determine the direction angle of each position point data in the space domain;
the determining module is used for determining the beam to which each position point data in the first point cloud data belongs according to the direction angle of each beam in the first radar in the spatial domain and the direction angle of each position point data in the spatial domain.
In one possible implementation, the determining module is configured to:
determining the direction angle range of each beam in the first radar according to the direction angle of each beam in the first radar in the space domain and the corresponding angle resolution;
and determining the beam to which each position point data in the first point cloud data belongs according to the direction angle range to which the direction angle of each position point data belongs.
In another possible implementation, the determining module is configured to:
clustering the position point data in the first point cloud data based on the direction angle of each position point data in the spatial domain, taking the direction angle of each beam of the first radar in the spatial domain as a clustering center, so as to obtain a cluster corresponding to each beam of the first radar;
determining that the position point data contained in each cluster belongs to the beam corresponding to the cluster.
In one possible implementation, the generating module is configured to:
performing target labeling on the corresponding position point data in the second point cloud data according to the target labeling information of each position point data in the first point cloud data;
and taking the position point data carrying the target labeling information in the second point cloud data as the training sample, and adding the training sample into a training sample set for training a target recognition model.
In the embodiment of the disclosure, after the first point cloud data acquired by the first radar is obtained, because the first point cloud data includes the position point data acquired by the first radar through a plurality of beams, the first point cloud data can be downsampled according to the beam to which each position point data belongs, so as to obtain downsampled second point cloud data, and a training sample for training the target recognition model is generated based on the second point cloud data. Because the target recognition model is trained on point cloud data of more than one resolution, its application range is expanded, and the situation in the related art is avoided in which a target recognition model trained on point cloud data acquired by one type of radar recognizes targets poorly when migrated to radars of other types with different resolutions.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 702 or a computer program loaded from a storage unit 708 into a RAM (Random Access Memory) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An I/O (Input/Output) interface 705 is also connected to the bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above. For example, in some embodiments, the training sample acquisition method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the training sample acquisition method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application Specific Standard Products), SOCs (Systems On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be special purpose or general purpose, and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode-Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LANs (Local Area Networks), WANs (Wide Area Networks), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host; it is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be noted that artificial intelligence is the discipline of making a computer simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it spans both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description is not intended to limit the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations, and alternatives are possible depending on design requirements and other factors. Any modifications, equivalent substitutions, and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (16)

1. A training sample collection method comprising:
acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams;
downsampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data;
generating a training sample for training a target recognition model based on the second point cloud data;
the step of down-sampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs to obtain down-sampled second point cloud data includes:
acquiring direction angles of a plurality of beams in a second radar, wherein the number of beams of the second radar is smaller than that of the first radar;
taking, as a first target beam, a beam among the plurality of beams of the first radar that does not match any beam direction angle of the second radar;
and deleting the position point data belonging to the first target beam from the first point cloud data to obtain the second point cloud data after downsampling.
2. The method of claim 1, wherein the method further comprises:
determining a beam of the first radar whose direction angle differs from the direction angle of every beam of the second radar by more than a threshold value as a first target beam that does not match any beam direction angle of the second radar.
3. The method of claim 1, wherein the downsampling the first point cloud data according to the beam to which each location point data in the first point cloud data belongs to obtain the downsampled second point cloud data, includes:
selecting a second target beam from beams contained in the first radar according to a downsampling rate;
and deleting the position point data belonging to the second target beam from the first point cloud data to obtain the second point cloud data after downsampling.
4. A method according to any one of claims 1-3, wherein the method further comprises:
According to the position of each position point data in the first point cloud data, projecting each position point data in the first point cloud data into a set space domain to determine the direction angle of each position point data in the space domain;
and determining the beam to which each position point data in the first point cloud data belongs according to the direction angle of each beam in the first radar in the spatial domain and the direction angle of each position point data in the spatial domain.
5. The method of claim 4, wherein the determining the beam to which each location point data in the first point cloud data belongs according to the direction angle of each beam in the first radar in the spatial domain and the direction angle to which each location point data belongs in the spatial domain comprises:
determining the direction angle range of each beam in the first radar according to the direction angle of each beam in the first radar in the space domain and the corresponding angle resolution;
and determining the beam to which each position point data in the first point cloud data belongs according to the direction angle range to which the direction angle of each position point data belongs.
6. The method of claim 4, wherein the determining the beam to which each location point data in the first point cloud data belongs according to the direction angle of each beam in the first radar in the spatial domain and the direction angle to which each location point data belongs in the spatial domain comprises:
clustering the position point data in the first point cloud data based on the direction angle of each position point data in the spatial domain, taking the direction angle of each beam of the first radar in the spatial domain as a clustering center, so as to obtain a cluster corresponding to each beam of the first radar;
determining that the position point data contained in each cluster belongs to the beam corresponding to the cluster.
7. A method according to any of claims 1-3, wherein the generating training samples for training a target recognition model based on the second point cloud data comprises:
performing target labeling on the corresponding position point data in the second point cloud data according to the target labeling information of each position point data in the first point cloud data;
and taking the position point data carrying the target labeling information in the second point cloud data as the training sample, and adding the training sample into a training sample set for training a target recognition model.
8. A training sample acquisition device comprising:
the acquisition module is used for acquiring first point cloud data acquired by a first radar, wherein the first point cloud data comprises position point data acquired by the first radar through a plurality of beams;
the downsampling module is used for downsampling the first point cloud data according to the beam to which each position point data in the first point cloud data belongs, so as to obtain downsampled second point cloud data;
the generation module is used for generating a training sample for training the target recognition model based on the second point cloud data;
wherein, the downsampling module includes:
an acquisition unit configured to acquire direction angles of a plurality of beams in a second radar, wherein the number of beams of the second radar is smaller than that of the first radar;
a determining unit configured to set, as a first target beam, a beam that does not match any beam direction angle of the second radar among a plurality of beams of the first radar;
and the deleting unit is used for deleting the position point data belonging to the first target beam from the first point cloud data so as to obtain the second point cloud data after downsampling.
9. The apparatus of claim 8, wherein the determining unit is further configured to:
determine a beam of the first radar whose direction angle differs from the direction angle of every beam of the second radar by more than a threshold value as a first target beam that does not match any beam direction angle of the second radar.
10. The apparatus of claim 9, wherein the downsampling module is to:
selecting a second target beam from beams contained in the first radar according to a downsampling rate;
and deleting the position point data belonging to the second target beam from the first point cloud data to obtain the second point cloud data after downsampling.
11. The apparatus according to any one of claims 8-10, wherein the apparatus further comprises:
the projection module is used for projecting each position point data in the first point cloud data into a set space domain according to the position of each position point data in the first point cloud data so as to determine the direction angle of each position point data in the space domain;
the determining module is used for determining the beam to which each position point data in the first point cloud data belongs according to the direction angle of each beam in the first radar in the spatial domain and the direction angle of each position point data in the spatial domain.
12. The apparatus of claim 11, wherein the means for determining is configured to:
determining the direction angle range of each beam in the first radar according to the direction angle of each beam in the first radar in the space domain and the corresponding angle resolution;
And determining the beam to which each position point data in the first point cloud data belongs according to the direction angle range to which the direction angle of each position point data belongs.
13. The device of claim 11, wherein the determining module is configured to:
cluster the position point data in the first point cloud data based on their direction angles in the spatial domain, with the direction angle of each beam of the first radar in the spatial domain as a cluster center, so as to obtain a cluster corresponding to each beam of the first radar; and
determine that the position point data contained in each cluster belong to the beam corresponding to that cluster.
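With the beam angles fixed as cluster centers, this clustering degenerates to a single nearest-center assignment in direction-angle space, effectively one k-means step with frozen centroids; a sketch:

```python
import numpy as np

def cluster_by_beam_angle(point_angles, beam_angles):
    """Each point joins the cluster of the beam whose nominal direction
    angle is closest to the point's own direction angle."""
    dists = np.abs(np.asarray(point_angles)[:, None]
                   - np.asarray(beam_angles)[None, :])
    return dists.argmin(axis=1)           # index of nearest beam/cluster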
14. The device of any one of claims 8-10, wherein the generation module is configured to:
perform target labeling on corresponding position point data in the second point cloud data according to the target labeling information of each piece of position point data in the first point cloud data; and
take the position point data carrying the target labeling information in the second point cloud data as the training sample, and add the training sample to a training sample set for training the target recognition model.
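Tying the sketches above together on synthetic data (the radar layouts, threshold, and labels below are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 3)) * [20.0, 20.0, 2.0]  # fake xyz cloud
labels = rng.integers(0, 3, size=1000)                    # fake target labels

elev, _ = direction_angles(points)
beams_64 = np.linspace(-25.0, 15.0, 64)   # assumed first-radar beam angles
beams_32 = np.linspace(-25.0, 15.0, 32)   # assumed second-radar beam angles
beam_ids = cluster_by_beam_angle(elev, beams_64)

drop = unmatched_beams(beams_64, beams_32, threshold_deg=0.3)
matched = set(range(64)) - set(drop.tolist())
_, kept = downsample_by_beams(points, beam_ids, matched)
second_points, second_labels = transfer_labels(points, labels, kept)
```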
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the method of any one of claims 1-7.
CN202310010153.3A 2023-01-04 2023-01-04 Training sample acquisition method, device, equipment and storage medium Active CN116051925B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310010153.3A CN116051925B (en) 2023-01-04 2023-01-04 Training sample acquisition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116051925A (en) 2023-05-02
CN116051925B (en) 2023-11-10

Family

ID=86121361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310010153.3A Active CN116051925B (en) 2023-01-04 2023-01-04 Training sample acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116051925B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104297735A (en) * 2014-10-23 2015-01-21 西安电子科技大学 Clutter suppression method based on priori road information
CN111220956A (en) * 2019-11-08 2020-06-02 北京理工雷科电子信息技术有限公司 Method for removing sea detection land target by airborne radar based on geographic information
CN112085123A (en) * 2020-09-25 2020-12-15 北方民族大学 Point cloud data classification and segmentation method based on salient point sampling
CN112666553A (en) * 2020-12-16 2021-04-16 动联(山东)电子科技有限公司 Road ponding identification method and equipment based on millimeter wave radar
CN113592932A (en) * 2021-06-28 2021-11-02 北京百度网讯科技有限公司 Training method and device for deep completion network, electronic equipment and storage medium
CN113985408A (en) * 2021-09-13 2022-01-28 南京航空航天大学 Inverse synthetic aperture radar imaging method combining gate unit and transfer learning
WO2022035842A1 (en) * 2020-08-10 2022-02-17 Qualcomm Incorporated Imaging radar super-resolution for stationary objects
CN114118286A (en) * 2021-12-01 2022-03-01 苏州思卡信息系统有限公司 Processing method of automobile radar point cloud data
CN114882316A (en) * 2022-05-23 2022-08-09 阿波罗智联(北京)科技有限公司 Target detection model training method, target detection method and device
CN114943870A (en) * 2022-04-07 2022-08-26 阿里巴巴(中国)有限公司 Training method and device of line feature extraction model and point cloud matching method and device
CN115220007A (en) * 2022-07-26 2022-10-21 浙江大学 Radar point cloud data enhancement method aiming at attitude identification
CN115393597A (en) * 2022-10-31 2022-11-25 之江实验室 Semantic segmentation method and device based on pulse neural network and laser radar point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Unsupervised Domain Adaptive 3-D Detection with Data Adaption From LiDAR Point Cloud; Zhang D et al.; IEEE Transactions on Geoscience and Remote Sensing; Vol. 60; pp. 1-14 *
Research on Transmit Multi-Beam Characteristics of Digital Phased Array Radar; Wu Hongchao et al.; Journal of Microwaves; Vol. 30, No. 1; pp. 6-9 *

Also Published As

Publication number Publication date
CN116051925A (en) 2023-05-02

Similar Documents

Publication Publication Date Title
US20190156144A1 (en) Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
JP7273129B2 (en) Lane detection method, device, electronic device, storage medium and vehicle
CN115578433B (en) Image processing method, device, electronic equipment and storage medium
CN112528858A (en) Training method, device, equipment, medium and product of human body posture estimation model
JP2023143742A (en) Method for training point cloud processing model, point cloud instance segmentation method and device
JP2023527615A (en) Target object detection model training method, target object detection method, device, electronic device, storage medium and computer program
JP2023539934A (en) Object detection model training method, image detection method and device
CN115511779B (en) Image detection method, device, electronic equipment and storage medium
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
CN111402413A (en) Three-dimensional visual positioning method and device, computing equipment and storage medium
JP2022185144A (en) Object detection method and training method and device of object detection model
CN113592932A (en) Training method and device for deep completion network, electronic equipment and storage medium
CN114723949A (en) Three-dimensional scene segmentation method and method for training segmentation model
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN113762397B (en) Method, equipment, medium and product for training detection model and updating high-precision map
CN114596431A (en) Information determination method and device and electronic equipment
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN116051925B (en) Training sample acquisition method, device, equipment and storage medium
CN113920273B (en) Image processing method, device, electronic equipment and storage medium
KR20230006628A (en) method and device for processing image, electronic equipment, storage medium and computer program
CN113610856A (en) Method and device for training image segmentation model and image segmentation
KR102721493B1 (en) Lane marking detecting method, apparatus, electronic device, storage medium, and vehicle
CN113177545B (en) Target object detection method, target object detection device, electronic equipment and storage medium
CN115049895B (en) Image attribute identification method, attribute identification model training method and device
CN115131562B (en) Three-dimensional scene segmentation method, model training method, device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant