CN113077544A - Point cloud generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113077544A
CN113077544A (application CN202110346606.0A)
Authority
CN
China
Prior art keywords
target scene
point cloud
cad model
cloud data
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110346606.0A
Other languages
Chinese (zh)
Inventor
林逸群
王哲
石建萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sensetime Group Ltd
Original Assignee
Sensetime Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sensetime Group Ltd filed Critical Sensetime Group Ltd
Priority to CN202110346606.0A priority Critical patent/CN113077544A/en
Publication of CN113077544A publication Critical patent/CN113077544A/en
Pending legal-status Critical Current

Classifications

    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/15 Vehicle, aircraft or watercraft design
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G06T2207/20221 Image fusion; Image merging
    • G06T2210/61 Scene description
    • G06T2219/2016 Rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Architecture (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a point cloud generation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring point cloud data of a target scene and a Computer-Aided Design (CAD) model corresponding to at least one target object; inserting the CAD model into the target scene according to the acquired point cloud data of the target scene; determining, based on installation parameters and configuration parameters of a radar device, simulated radar ray information emitted toward the CAD model when the radar device scans the target scene into which the CAD model is inserted; and determining point cloud data, in the target scene, of the target object corresponding to the inserted CAD model, based on the simulated radar ray information and the spatial position information of the inserted CAD model in the target scene. By automatically generating point cloud data of the target object from the CAD model, the method saves both time and labor.

Description

Point cloud generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a point cloud, an electronic device, and a storage medium.
Background
Lidar is widely used in various technical fields. Taking the field of automatic driving as an example, it is particularly important to accurately detect obstacles around a vehicle, such as pedestrians and other vehicles, by using point cloud data acquired by a laser radar.
In order to detect obstacles accurately, a detection model needs to be trained with collected point cloud data as training samples, and the collected point cloud data must be accurately annotated before training to improve the accuracy of the detection results.
At present, most point cloud data is annotated manually, directly in three-dimensional space, which makes annotation inefficient. In addition, lidars of different models produce different data distributions, so even for the same obstacle, the point cloud data collected by each lidar model must be annotated separately, which is even more time-consuming and labor-intensive.
Disclosure of Invention
Embodiments of the present disclosure provide at least a point cloud generation method and apparatus, an electronic device, and a storage medium, in which point cloud data of a target object (such as an obstacle) is generated automatically from a CAD model, saving time and labor.
In a first aspect, an embodiment of the present disclosure provides a method for point cloud generation, where the method includes:
acquiring point cloud data of a target scene and a Computer Aided Design (CAD) model corresponding to at least one target object;
inserting the CAD model into the target scene according to the acquired point cloud data of the target scene;
determining, based on installation parameters and configuration parameters of the radar device, simulated radar ray information emitted toward the CAD model when the radar device scans the target scene into which the CAD model is inserted;
and determining point cloud data, in the target scene, of the target object corresponding to the inserted CAD model, based on the simulated radar ray information and the spatial position information of the inserted CAD model in the target scene.
With the above point cloud generation method, once the point cloud data of the target scene collected by the radar device and the CAD model corresponding to a target object in that scene are obtained, the CAD model is first inserted into the target scene; that is, the point cloud data and the CAD model are fused. Simulated radar ray information can then be generated for the CAD model inserted into the target scene.
Because the simulated radar ray information describes the spatial positions traversed by the waves the radar device would emit toward the CAD model in the target scene, once the spatial position information of the CAD model in the target scene is known, the point cloud data of the corresponding target object in the target scene can be determined from these two pieces of position information. In other words, even when no such target object exists in the real target scene, this method simulates the point cloud data the radar device would detect if the object were present. The simulated data can be used directly as the point cloud annotation result for the target object, so the whole process requires no manual work and saves time and labor.
In a possible implementation, the determining, based on the simulated radar ray information and the spatial position information of the inserted CAD model in the target scene, point cloud data of the target object corresponding to the CAD model in the target scene includes:
for each simulated radar ray, determining intersection coordinate information between the simulated radar ray and the CAD model, based on the coordinate information of the spatial positions through which the simulated radar ray passes and the spatial position information of the inserted CAD model in the target scene;
and determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection coordinate information between each simulated radar ray and the CAD model.
In a possible implementation, the spatial position information of the CAD model includes the spatial position range information of each model component patch;
the determining, for each simulated radar ray, intersection coordinate information between the simulated radar ray and the CAD model includes:
for each simulated radar ray, determining intersection coordinate information between the simulated radar ray and each model component patch, based on the coordinate information of the spatial positions through which the simulated radar ray passes and the spatial position range information of each model component patch;
and the determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection coordinate information between each simulated radar ray and the CAD model includes:
for each model component patch, determining the intersection coordinate information between the model component patch and each simulated radar ray that intersects it;
and combining the intersection coordinate information corresponding to each model component patch to determine the point cloud data of the target object corresponding to the CAD model in the target scene.
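A minimal sketch of the per-ray intersection test, assuming the model component patches are triangles (the patent does not fix a patch representation); the standard Möller–Trumbore algorithm returns the intersection point of one simulated radar ray with one patch, or `None` on a miss:

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: intersection point of a ray with a triangular
    model component patch (v0, v1, v2), or None if the ray misses it."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = np.asarray(v0, float), np.asarray(v1, float), np.asarray(v2, float)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:              # ray parallel to the patch plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:          # outside the patch (barycentric u)
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:      # outside the patch (barycentric v)
        return None
    t = np.dot(e2, q) * inv_det
    if t < eps:                     # intersection behind the sensor
        return None
    return origin + t * direction
```

Running this test over every (ray, patch) pair and collecting the hits yields the intersection coordinate information described above.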
In a possible implementation, the method further includes:
determining, based on the spatial position range information of each model component patch, whether occlusion exists between the model component patches;
and if occlusion exists, determining, for each simulated radar ray, the intersection coordinate information between the simulated radar ray and the model component patch closest to the radar device, based on the intersection coordinate information between the simulated radar ray and each of the occluding model component patches;
and the determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection coordinate information between each simulated radar ray and the CAD model includes:
determining the point cloud data of the target object corresponding to the CAD model in the target scene based on, for each simulated radar ray, the intersection coordinate information between the simulated radar ray and the model component patch closest to the radar device.
In an actual scene, occlusion may exist between different model component patches of the same CAD model. In that case, the patch the lidar can actually reach is most likely the model component patch closest to the radar device, rather than the others. Therefore, in the embodiments of the present disclosure, when occlusion is determined to exist between the model component patches of a CAD model, the intersection coordinate information between each simulated radar ray and the patch closest to the radar device is used to determine the point cloud data of the corresponding target object, so the resulting point cloud data better matches the actual application scene.
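The nearest-patch rule above can be sketched as follows: for each simulated radar ray, among its intersection points with several mutually occluding patches, only the point closest to the radar origin is kept (the helper name and data layout are illustrative; the patent does not prescribe an implementation):

```python
import numpy as np

def point_cloud_from_hits(origin, hits_per_ray):
    """hits_per_ray: for each simulated radar ray, the list of its
    intersection points with the model's component patches (possibly
    empty). Keeps one point per ray: the hit nearest to the radar
    origin, i.e. the patch actually reached before the others."""
    origin = np.asarray(origin, dtype=float)
    cloud = []
    for hits in hits_per_ray:
        if not hits:                # ray missed the model entirely
            continue
        dists = [np.linalg.norm(np.asarray(h) - origin) for h in hits]
        cloud.append(hits[int(np.argmin(dists))])
    return np.array(cloud)
```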
In a possible implementation, if there are a plurality of CAD models, the method further includes:
determining, based on the spatial position information of each inserted CAD model in the target scene, whether occlusion exists between the target objects corresponding to the plurality of CAD models;
if occlusion exists, determining, for each simulated radar ray, the intersection coordinate information between the simulated radar ray and the CAD model closest to the radar device, based on the intersection coordinate information between the simulated radar ray and each of the plurality of CAD models;
and taking the determined intersection coordinate information as the point cloud data, in the target scene, of the target object corresponding to the CAD model to which the intersection coordinate information points.
Occlusion may also exist between different CAD models, in which case the model the lidar can actually reach is likely the CAD model closest to the radar device. Therefore, in the embodiments of the present disclosure, the CAD model closest to the radar device can be identified among the mutually occluding CAD models when determining the point cloud data of the target object, which better matches the actual application scene.
In a possible embodiment, the inserting the CAD model into the target scene according to the acquired point cloud data of the target scene includes:
according to the acquired point cloud data of the target scene, determining spatial position information of the CAD model to be inserted in the target scene;
searching point cloud data matched with the spatial position information from the point cloud data of the target scene based on the determined spatial position information;
and fusing the searched point cloud data with the CAD model to obtain a target scene inserted with the CAD model.
In the embodiment of the disclosure, the point cloud data and the CAD model can be fused based on the spatial position information of the CAD model to be inserted in the target scene, so that the target scene with the CAD model inserted therein can adapt to the simulated emission operation of the subsequent radar equipment.
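As an illustrative sketch of the fusion step, assuming fusion amounts to translating the model's vertices to the determined spatial position and merging them with the matched scene points (the patent leaves the exact fusion mechanism open):

```python
import numpy as np

def fuse_model_into_scene(scene_points, model_vertices, insert_position):
    """Place the CAD model's vertices at the determined spatial position
    and merge them with the scene point cloud found for that position."""
    placed = (np.asarray(model_vertices, dtype=float)
              + np.asarray(insert_position, dtype=float))
    return np.vstack([np.asarray(scene_points, dtype=float), placed])
```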
In a possible implementation manner, the determining, according to the acquired point cloud data of the target scene, spatial position information of a CAD model to be inserted in the target scene includes:
performing target object recognition on the acquired point cloud data of the target scene based on the trained target object recognition model, and determining the position information of the target object in the target scene;
and determining the spatial position information of the CAD model to be inserted in the target scene based on the position information of the target object in the target scene.
In the embodiments of the present disclosure, the spatial position at which the CAD model is to be inserted may be determined from the position information of a target object already present in the target scene. For example, a CAD model of the same type as the existing target object may be inserted at that position. This allows the target scene to be fused with different CAD models, so that point cloud data can be generated for each of them, improving generation efficiency while preserving the diversity of the generated point cloud data.
In a possible implementation manner, the determining, according to the acquired point cloud data of the target scene, spatial position information of a CAD model to be inserted in the target scene includes:
acquiring a three-dimensional scene map of a related target scene;
determining map position information of an associated object associated with a target object corresponding to the CAD model based on the acquired three-dimensional scene map and the point cloud data of the target scene;
determining map position information of the target object based on the determined map position information of the associated object and a relative position relationship between the associated object and the target object;
and taking the determined map position information of the target object as the spatial position information of the CAD model to be inserted in the target scene.
In the embodiment of the disclosure, the spatial position information of the target object may be determined by referring to the map position information of the associated object associated with the target object in the three-dimensional scene map based on the requirement of the actual application scene, for example, the spatial position of the target object, which is a vehicle, may be determined based on the associated object, which is a road, so that the generated point cloud data about the vehicle better meets the requirement of the actual scene.
In a possible embodiment, the fusing the point cloud data found with the CAD model includes:
determining a scaled CAD model based on the model size information of the CAD model and a preset model scaling;
and fusing the scaled CAD model with the searched point cloud data.
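A minimal sketch of the scaling step, assuming the preset model scaling ratio is applied about the model's centroid (the patent does not specify the reference point):

```python
import numpy as np

def scale_cad_model(vertices, scale_ratio):
    """Scale a CAD model's vertices about their centroid by a preset
    ratio before fusing the model with the scene point cloud."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    return centroid + (v - centroid) * scale_ratio
```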
In a possible embodiment, the inserting the CAD model into the target scene according to the acquired point cloud data of the target scene includes:
identifying a reference object from the target scene; the reference object represents an object which moves in continuous multi-frame point cloud data of the target scene;
screening out reference point cloud data corresponding to the reference object from the acquired point cloud data of the target scene to obtain updated point cloud data of the target scene;
and inserting the CAD model into the target scene according to the updated point cloud data of the target scene.
Considering the adverse effect of the dynamic change of the scene on the insertion of the CAD model, here, before the CAD model is inserted into the target scene, the point cloud data of the target scene can be screened out based on the reference object which moves in the continuous multi-frame point cloud data of the target scene, so that the insertion effect of the CAD model is better.
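One simple way to sketch the screening of moving reference objects, assuming a point is deemed to belong to a moving object when no counterpart lies within a distance threshold in the previous frame (the patent does not specify how motion is detected; `motion_thresh` is an illustrative parameter):

```python
import numpy as np

def remove_moving_points(frames, motion_thresh=0.2):
    """For each point in the latest frame, check whether a static
    counterpart exists (within motion_thresh metres) in the previous
    frame; points without one are treated as belonging to a moving
    reference object and are screened out of the target scene."""
    prev = np.asarray(frames[-2], dtype=float)
    curr = np.asarray(frames[-1], dtype=float)
    keep = []
    for p in curr:
        if np.min(np.linalg.norm(prev - p, axis=1)) <= motion_thresh:
            keep.append(p)
    return np.array(keep)
```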
In a second aspect, an embodiment of the present disclosure further provides an apparatus for point cloud generation, where the apparatus includes:
an acquisition module, configured to acquire point cloud data of a target scene and a Computer-Aided Design (CAD) model corresponding to at least one target object;
an insertion module, configured to insert the CAD model into the target scene according to the acquired point cloud data of the target scene;
a determination module, configured to determine, based on installation parameters and configuration parameters of the radar device, simulated radar ray information emitted toward the CAD model when the radar device scans the target scene into which the CAD model is inserted;
and a generation module, configured to determine point cloud data, in the target scene, of the target object corresponding to the inserted CAD model, based on the simulated radar ray information and the spatial position information of the inserted CAD model in the target scene.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the steps of the method of point cloud generation according to the first aspect and any of its various embodiments.
In a fourth aspect, the embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by an electronic device, the electronic device executes the steps of the method for generating a point cloud according to the first aspect and any one of the various embodiments of the first aspect.
For the description of the effects of the above-mentioned point cloud generating apparatus, electronic device, and computer-readable storage medium, reference is made to the description of the above-mentioned point cloud generating method, which is not repeated herein.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings used in the embodiments are briefly described below. The drawings, which are incorporated in and form part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive further related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a method for point cloud generation according to an embodiment of the present disclosure;
fig. 2 shows a schematic diagram of an apparatus for point cloud generation provided in the second embodiment of the present disclosure;
fig. 3 shows a schematic diagram of an electronic device provided in a third embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that, at present, most point cloud data is annotated by manual operation directly in three-dimensional space, which makes annotation inefficient. In addition, lidars of different models produce different data distributions, so even for the same obstacle, the point cloud data collected by each lidar model must be annotated separately, which is even more time-consuming and labor-intensive.
Based on the research, the disclosure provides a point cloud generating method, a point cloud generating device, an electronic device and a storage medium, wherein point cloud data of a target object (such as an obstacle) is automatically generated by using a CAD model, and time and labor are saved.
The above drawbacks were identified by the inventors through practice and careful study. Accordingly, the discovery of these problems and the solutions the present disclosure proposes for them should both be regarded as the inventors' contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiments, the point cloud generation method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally an electronic device with a certain computing capability, for example a terminal device, a server, or another processing device, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the point cloud generation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The method for generating the point cloud provided by the embodiment of the disclosure is explained below.
Example one
Referring to fig. 1, which is a flowchart of a method for generating a point cloud provided by the embodiment of the present disclosure, the method includes steps S101 to S104, where:
s101, point cloud data of a target scene and a Computer Aided Design (CAD) model corresponding to at least one target object are obtained;
s102, inserting a CAD model into the target scene according to the acquired point cloud data of the target scene;
s103, determining simulated radar ray information transmitted to the CAD model when the radar equipment scans a target scene inserted into the CAD model based on the installation parameters and the configuration parameters set by the radar equipment;
and S104, determining point cloud data of a target object corresponding to the CAD model inserted into the target scene under the target scene based on the simulated radar ray information and the spatial position information of the CAD model inserted into the target scene in the target scene.
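Steps S103 and S104 can be sketched end to end with a toy substitute: an axis-aligned box stands in for the inserted CAD model, and a slab test gives each simulated ray's first intersection with it (an illustrative simplification; the method as described works on patch-based CAD models):

```python
import numpy as np

def simulate_box_scan(origin, directions, box_min, box_max):
    """Cast simulated radar rays from the sensor origin and collect the
    first intersection of each ray with an axis-aligned box standing in
    for the inserted CAD model (slab test; assumes the origin does not
    lie exactly on a box face)."""
    origin = np.asarray(origin, dtype=float)
    cloud = []
    for d in directions:
        d = np.asarray(d, dtype=float)
        with np.errstate(divide="ignore"):
            inv = 1.0 / d                       # +/-inf for axis-parallel rays
        t1 = (np.asarray(box_min, dtype=float) - origin) * inv
        t2 = (np.asarray(box_max, dtype=float) - origin) * inv
        t_near = np.max(np.minimum(t1, t2))     # entry parameter
        t_far = np.min(np.maximum(t1, t2))      # exit parameter
        if t_near <= t_far and t_far > 0.0:     # ray actually hits the box
            cloud.append(origin + max(t_near, 0.0) * d)
    return np.array(cloud)
```

The returned points play the role of the point cloud data of the target object corresponding to the inserted model.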
Here, to facilitate understanding of the point cloud generation method provided by the embodiments of the present disclosure, its application scenarios are described first. The method can be applied to any scenario in which point cloud data corresponding to a target object needs to be determined. For example, it can be applied to point cloud annotation: the generated point cloud data of the target object corresponding to the CAD model can serve as three-dimensional annotation data of that object in the target scene, e.g. for annotating target vehicles in a target scene corresponding to autonomous driving. It can also be applied to model training: the generated point cloud data of the target object can serve as input data for a target object recognition model, e.g. for recognizing target vehicles in a target scene corresponding to autonomous driving. The embodiments of the present disclosure may also be applied to other application scenarios, which are not specifically limited here.
The embodiment of the disclosure provides a point cloud generating method for automatically generating a target object based on a CAD model, which does not need manual operation and is time-saving and labor-saving.
The point cloud data of the target scene in the embodiment of the present disclosure may be acquired by using radar equipment, and the point cloud data acquired in different target scenes is also different, and the point cloud data may be dense or sparse, which is not limited specifically.
The radar device may be a rotary scanning lidar or another type of radar device, which is not specifically limited. Taking a rotary scanning lidar as an example, the lidar acquires three-dimensional point cloud data of the surrounding environment as it rotates and scans in the horizontal direction. The lidar may adopt a multi-line scanning mode, in which a plurality of vertically arranged laser tubes emit in sequence during the rotational scan; that is, multiple vertical scan layers are produced while the device rotates horizontally. There is a fixed angle between adjacent laser tubes, and the vertical emission field of view may be 30 to 40 degrees. At each scanning angle of a rotation, the device obtains a data packet returned by the laser emitted from the laser tubes; splicing the data packets obtained at all scanning angles yields one frame of point cloud data (corresponding to a full 360-degree rotation). Thus, one complete revolution of the lidar completes the acquisition of one frame of point cloud data.
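The multi-line rotational scan described above can be sketched by generating one unit direction vector per laser tube per scanning angle (the parameter values below are illustrative; in the method they come from the radar device's installation and configuration parameters):

```python
import numpy as np

def lidar_ray_directions(n_beams=32, vfov_deg=(-15.0, 15.0),
                         horiz_step_deg=0.2):
    """Unit direction vectors for one full revolution of a rotating
    multi-line lidar: n_beams vertically stacked laser tubes spanning
    the vertical field of view, swept over 360 degrees in azimuth."""
    elev = np.deg2rad(np.linspace(vfov_deg[0], vfov_deg[1], n_beams))
    azim = np.deg2rad(np.arange(0.0, 360.0, horiz_step_deg))
    az, el = np.meshgrid(azim, elev)            # one (azimuth, elevation) grid
    dirs = np.stack([np.cos(el) * np.cos(az),   # x
                     np.cos(el) * np.sin(az),   # y
                     np.sin(el)], axis=-1)      # z
    return dirs.reshape(-1, 3)
```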
In addition, embodiments of the present disclosure may also obtain a Computer Aided Design (CAD) model corresponding to the target object. For example, for a target scene in which automatic driving is performed, the target object may be a pedestrian or a vehicle, and the corresponding CAD model may be a pedestrian CAD model or a vehicle CAD model.
The CAD model may be constructed from model component patches. In specific applications, the CAD model can be built with a CAD editing tool; for example, each component patch of a vehicle can be edited with reference to the dimensions of the actual vehicle to generate a vehicle CAD model.
Since the point cloud generation method provided by the embodiments of the present disclosure depends on a target scene, the CAD model may be inserted into the target scene using the acquired point cloud data. Radar emission can then be simulated against the target scene with the CAD model inserted, thereby generating point cloud data of the target object in the target scene.
To insert the CAD model into the target scene, a position in the target scene at which the CAD model can be inserted may first be determined, and the CAD model is then inserted at that position. This is mainly because, in practical application scenarios, the plausible relationship between a target object and the target scene differs between CAD models. For example, in a target scene containing roads and roadside trees, an inserted target vehicle should be on the ground of the target scene, for example on a road, but is unlikely to be in a tree. The process of inserting a CAD model into the target scene according to the embodiments of the present disclosure therefore depends on a predetermined position.
Once the CAD model is inserted at a suitable position in the target scene, the simulated radar ray information emitted toward the CAD model when the radar device scans the target scene can be determined; that is, the simulated radar ray information is the information of a plurality of radar rays, simulated as being emitted by the radar, directed at the CAD model currently inserted in the target scene.
The information of the plurality of simulated radar rays may be the information of each simulated radar ray emitted when radar scanning of the target scene with the inserted CAD model is performed according to the installation parameters and configuration parameters of the radar device.
The installation parameter may be installation height information of the radar device, and the configuration parameter may include not only configuration parameters such as horizontal resolution, vertical resolution, and angle of view of the radar device, but also parameters such as model of the radar device, and other relevant configuration parameters affecting radar transmission, which is not limited specifically herein.
In the embodiment of the disclosure, the scanning track corresponding to the laser radar in the process of scanning the target scene can be determined, and point cloud data of the target object corresponding to the CAD model in the target scene can be determined based on the intersection point set between the scanning track and the CAD model inserted into the target scene. In the embodiment of the present disclosure, the point cloud data corresponding to the CAD model may be determined according to the following steps:
step one, aiming at each simulated radar ray, determining intersection point coordinate information between the simulated radar ray and a CAD model based on coordinate information of a spatial position where the simulated radar ray passes and spatial position information of the CAD model inserted into a target scene in the target scene;
and secondly, determining point cloud data of a target object corresponding to the CAD model in a target scene based on intersection point coordinate information between each simulated radar ray and the CAD model.
Here, in the process of determining the scanning track of the lidar over the target scene, the coordinate information of the spatial positions through which each simulated radar ray passes may be determined. An intersection test between this coordinate information and the spatial position information of the CAD model inserted in the target scene yields the intersection point coordinate information between each simulated radar ray and the CAD model. Collecting the intersection point coordinates of all simulated radar rays with the CAD model then determines the point cloud data of the target object corresponding to the CAD model in the target scene.
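When the model component patches are triangles, the intersection test between one simulated ray and one patch can be sketched with the standard Möller–Trumbore algorithm. This is an illustrative sketch under that triangulation assumption; the disclosure does not name a specific intersection algorithm.

```python
import numpy as np

def ray_patch_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection test between one simulated radar ray
    and one triangular model component patch (v0, v1, v2). Returns the
    intersection point coordinates, or None if the ray misses the patch."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = np.asarray(v0, float), np.asarray(v1, float), np.asarray(v2, float)
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:              # ray parallel to the patch plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)         # distance along the ray
    return origin + t * direction if t > eps else None
```

Running this test for every (ray, patch) pair and collecting the returned points yields the simulated point cloud of the inserted model.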
It should be noted that radar devices with different configuration attributes have different scanning tracks; after the intersection tests, the intersection coordinate information between the simulated radar rays and the CAD model therefore also differs, so the generated point cloud data differs even for the same target object in the same target scene. This avoids the time-consuming and labor-intensive problem in the related art of separately labeling data for differently configured radar devices, and can be widely applied to various application scenarios.
Considering that the CAD model may be composed of individual model component patches, in the embodiments of the present disclosure the intersection coordinate information between the simulated radar rays and the CAD model may be determined as the collection of intersection coordinates between the individual model component patches and the simulated radar rays. Here, for each simulated radar ray, the intersection point coordinates between the ray and each model component patch may be determined based on the coordinates of the spatial positions through which the ray passes and the spatial position range of each patch.
Once the intersection point coordinates between each simulated radar ray and each model component patch are determined, for each model component patch included in the CAD model, the intersection coordinates of the rays that intersect that patch can be collected; combining the intersection coordinates corresponding to all model component patches then determines the point cloud data of the target object.
In practical application scenarios, there may be some degree of occlusion between different surfaces of the same target object or between different target objects. In the presence of occlusion, even if a simulated radar ray could in principle reach the occluded part, a real ray would usually be blocked by the material of the target object and not penetrate it. Therefore, to generate point cloud data that better matches real application scenarios, the embodiments of the present disclosure may use a closest-distance criterion to determine the part of the CAD model that each simulated radar ray can actually reach, and thereby determine the point cloud data of the target object in the target scene.
The occlusion between the model component patches included in a CAD model, and the occlusion between different CAD models, are described in detail below.
In a first aspect: for a CAD model comprising a plurality of model component patches, embodiments of the present disclosure may determine point cloud data of a target object corresponding to the CAD model in a target scene according to the following steps:
step one, determining whether shielding exists between each model composition surface patch based on the spatial position range information of each model composition surface patch;
step two, if occlusion exists, determining intersection point coordinate information between a model composition surface patch closest to the radar equipment and each simulation radar ray based on intersection point coordinate information between the simulation radar ray and each model composition surface patch in a plurality of model composition surface patches with occlusion for each simulation radar ray;
and thirdly, determining point cloud data of a target object corresponding to the CAD model in a target scene based on each simulated radar ray and intersection point coordinate information between a model composition patch determined by each simulated radar ray and closest to the radar equipment and the simulated radar ray.
Here, when an intersection is found between the spatial position ranges of different model component patches, it can be determined that occlusion exists between those patches. The degree of occlusion may be represented by information such as the number of intersection points; for example, the larger the number of intersection points, the more severe the occlusion between the two patches.
For a plurality of model component patches with occlusion, the embodiments of the present disclosure may determine the patch closest to the radar device and take the intersection coordinates between that closest patch and the simulated radar ray. That is, within the depth range, the patch that each simulated radar ray hits first is treated as the patch actually detected by that ray, from which the final point cloud data of the target object is determined.
In a second aspect: for a plurality of CAD models, the embodiments of the present disclosure may determine point cloud data of a target object corresponding to the CAD model in a target scene according to the following steps:
the method comprises the steps of firstly, determining whether occlusion exists between target objects corresponding to a plurality of CAD models based on spatial position information of the CAD models inserted into a target scene in the target scene;
step two, if shielding exists, determining intersection point coordinate information between a CAD model closest to the radar equipment and each simulated radar ray based on intersection point coordinate information between the simulated radar ray and a plurality of CAD models for each simulated radar ray;
and step three, taking the determined intersection point coordinate information as point cloud data of a target object corresponding to the CAD model pointed by the intersection point coordinate information in a target scene.
Similar to the occlusion existing in the model composition patch, in the embodiment of the present disclosure, under the condition that the occlusion exists among the multiple CAD models, the coordinate information of the intersection point between the CAD model closest to the radar device and each simulated radar ray may be determined, and then the point cloud data of the target object is determined, and a specific determination process is described in the related description in the first aspect, which is not described herein again.
In the embodiment of the present disclosure, in consideration of the key role that the insertion operation of the CAD model for the target scene plays in the simulation process of subsequently implementing the radar ray, the following may describe the insertion operation in detail. In the embodiment of the present disclosure, the CAD model may be inserted into the target scene according to the following steps:
step one, according to the acquired point cloud data of a target scene, determining spatial position information of a CAD model to be inserted in the target scene;
secondly, searching point cloud data matched with the spatial position information from the point cloud data of the target scene based on the determined spatial position information;
and step three, fusing the searched point cloud data with the CAD model to obtain a target scene inserted with the CAD model.
Here, the spatial position information of the CAD model to be inserted may first be determined based on the acquired point cloud data of the target scene. To better fuse the target scene with the CAD model, an in-scene position range matching the spatial position information of the CAD model can be searched for within the larger spatial range of the target scene; this range serves as the insertion position of the CAD model. In this way, the target scene obtained by fusing the point cloud data of that position range with the CAD model is smoother, which in turn improves the simulated ray effect of the subsequent radar scanning.
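The insertion steps above can be sketched as follows. This is a minimal illustration under several assumptions: the model is represented by sampled surface points rather than a mesh, and the local ground height is estimated as the lowest nearby scene point. The function name and the 3-meter search radius are hypothetical.

```python
import numpy as np

def insert_model_into_scene(scene_points, model_points, insert_xy):
    """Place a CAD model (here: its sampled points) into a scene point
    cloud at a chosen planar position, resting it on the locally
    estimated ground so the fused scene stays smooth."""
    scene_points = np.asarray(scene_points, dtype=float)
    model_points = np.asarray(model_points, dtype=float)
    insert_xy = np.asarray(insert_xy, dtype=float)
    # find scene points matching the insertion position (a planar radius search)
    d = np.linalg.norm(scene_points[:, :2] - insert_xy, axis=1)
    nearby = scene_points[d < 3.0]
    ground_z = nearby[:, 2].min() if len(nearby) else 0.0
    # translate the model onto the ground at the insertion position
    placed = model_points.copy()
    placed[:, 0] += insert_xy[0]
    placed[:, 1] += insert_xy[1]
    placed[:, 2] += ground_z - model_points[:, 2].min()
    return np.vstack([scene_points, placed])
```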
Since determining the spatial position information of the CAD model to be inserted in the target scene is the key to fusing the point cloud data with the CAD model, the embodiments of the present disclosure provide several position determination methods, described below.
In a first aspect: the disclosed embodiments may determine spatial location information of a CAD model based on target object recognition, including the steps of:
firstly, performing target object identification on the acquired point cloud data of a target scene based on a trained target object identification model, and determining the position information of the target object in the target scene;
and secondly, determining the spatial position information of the CAD model to be inserted in the target scene based on the position information of the target object in the target scene.
Here, when it is determined that a target object already exists in the target scene, the position information of that target object in the target scene may be determined, and the spatial position information of the CAD model to be inserted can be determined based on it. That is, the position of an existing target object in the target scene serves as a reference for where the CAD model is to be inserted.
In specific applications, when a target object exists in the target scene, the point cloud data corresponding to the target object can first be cut out, and the CAD model is inserted at the vacated position. For target objects of the same class, this cut-out-then-insert approach not only ensures the quality of the subsequently generated point cloud, but also allows the point cloud data of many other objects of that class to be synthesized after collecting just one, saving time and labor.
Alternatively, when a target object exists in the target scene, its point cloud data may be retained, and a preset position range outside the contour of the target object and close to it is used as the spatial position at which the CAD model is to be inserted.
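The cut-out step can be sketched as removing all scene points inside the detected object's bounding box. An axis-aligned box is an assumption for brevity; detections from a recognition model would normally yield oriented boxes.

```python
import numpy as np

def cut_out_object(scene_points, box_min, box_max):
    """Remove the point cloud of a detected target object (given its
    axis-aligned bounding box) so a CAD model can be inserted at the
    vacated position."""
    p = np.asarray(scene_points, dtype=float)
    inside = np.all((p >= np.asarray(box_min)) & (p <= np.asarray(box_max)),
                    axis=1)
    return p[~inside]
```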
In a second aspect: the embodiment of the disclosure may determine spatial position information of a CAD model based on a three-dimensional scene map, including the following steps:
step one, acquiring a three-dimensional scene map of a related target scene;
secondly, determining map position information of a related object related to a target object corresponding to the CAD model based on the acquired three-dimensional scene map and the point cloud data of the target scene;
step three, determining the map position information of the target object based on the determined map position information of the associated object and the relative position relationship between the associated object and the target object;
and step four, taking the determined map position information of the target object as the spatial position information of the CAD model to be inserted in the target scene.
Here, considering the strong correlation between different objects, the map position information of the target object, and thus the spatial position information of the CAD model in the target scene, may be determined from the map position information of an associated object of the target object in the three-dimensional scene map.
Still taking the target scene including the road and the roadside trees as an example, considering the strong correlation between the vehicle and the road, after the map position information of the road is determined in the target scene, the position information of the vehicle to be inserted may be determined based on the relative position relationship between the road and the vehicle.
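Deriving a vehicle position from the map position of its associated road can be sketched as follows, assuming for simplicity a straight road segment; the function name and parameters are hypothetical.

```python
import numpy as np

def vehicle_pose_from_road(p0, p1, t, lateral_offset=0.0):
    """Interpolate along a straight road segment (fraction t in [0, 1])
    and shift by a lateral offset, giving the map position and heading
    (yaw, radians) at which a vehicle CAD model could be inserted."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    yaw = np.arctan2(d[1], d[0])                     # road direction = vehicle heading
    normal = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit left-normal
    pos = p0 + t * d + lateral_offset * normal
    return pos, yaw
```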
In the embodiment of the present disclosure, in addition to the two aspects described above to determine the spatial position information, the spatial position information may also be determined by using other methods, for example, the spatial position information may be determined based on the distribution of the target object in the scene, which is not limited in this respect.
Whichever way the spatial position information of the CAD model in the target scene is determined, the CAD model can be scaled before the found point cloud data and the CAD model are actually fused; the scaled CAD model is then fused with the found point cloud data, so as to meet the requirements of different application scenarios.
In the embodiment of the disclosure, the CAD model may be scaled according to a preset model scaling ratio. The preset model scaling can be obtained by statistics based on the relevant size information of each object in one type, so that the scaled CAD model can better conform to the actual size of the target object, and the subsequently generated point cloud data of the relevant target object can better conform to the requirements of actual application scenes.
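The scaling step can be sketched as a uniform scale of the model vertices about their centroid by the preset ratio; scaling about the centroid is an assumption for illustration.

```python
import numpy as np

def scale_cad_model(vertices, preset_scale):
    """Scale CAD model vertices about the model centroid by a preset
    ratio (e.g. derived from the statistical sizes of real objects of
    the same class)."""
    v = np.asarray(vertices, dtype=float)
    centroid = v.mean(axis=0)
    return centroid + preset_scale * (v - centroid)
```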
In order to better realize the fusion between the target scene and the CAD model, the point cloud generation method provided by the embodiment of the present disclosure may further screen the point cloud data of the target scene based on the moving object, and specifically may be realized by the following steps:
step one, identifying a reference object from a target scene; the reference object represents an object which moves in continuous multi-frame point cloud data of a target scene;
step two, screening out reference point cloud data corresponding to the reference object from the acquired point cloud data of the target scene to obtain updated point cloud data of the target scene;
and thirdly, inserting the CAD model into the target scene according to the updated point cloud data of the target scene.
For a reference object that moves across consecutive frames of point cloud data of the target scene, the corresponding reference point cloud data can first be determined and then screened out of the point cloud data of the target scene, so that only the static scene point cloud remains in the updated data. This makes the target scene with the inserted CAD model smoother, avoids artifacts caused by the point clouds of dynamic objects, and ensures the fusion effect.
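One simplistic way to screen out moving points is to keep only points of the current frame that have a close counterpart in the previous frame. This brute-force nearest-neighbour criterion is an assumption for illustration; real pipelines would use tracking or scene-flow estimation, and a spatial index instead of a dense distance matrix.

```python
import numpy as np

def remove_moving_points(frame, previous_frame, tol=0.1):
    """Keep only points of the current frame with a static counterpart
    (within tol meters) in the previous frame; unmatched points are
    treated as belonging to moving reference objects and screened out."""
    frame = np.asarray(frame, dtype=float)
    prev = np.asarray(previous_frame, dtype=float)
    # per-point distance to the nearest previous-frame point (O(N*M) sketch)
    dists = np.linalg.norm(frame[:, None, :] - prev[None, :, :], axis=-1).min(axis=1)
    return frame[dists < tol]
```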
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a device for point cloud generation corresponding to the method for point cloud generation, and since the principle of solving the problem of the device in the embodiment of the present disclosure is similar to that of the method for point cloud generation in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Example two
Referring to fig. 2, a schematic diagram of an apparatus for generating a point cloud according to an embodiment of the present disclosure is shown, the apparatus including: the system comprises an acquisition module 201, an insertion module 202, a determination module 203 and a generation module 204; wherein,
an obtaining module 201, configured to obtain point cloud data of a target scene and a CAD model corresponding to at least one target object;
the inserting module 202 is used for inserting the CAD model into the target scene according to the acquired point cloud data of the target scene;
the determining module 203 is used for determining simulated radar ray information transmitted to the CAD model when the radar equipment scans and inserts a target scene of the CAD model based on the installation parameters and the configuration parameters set by the radar equipment;
and the generating module 204 is configured to determine point cloud data of a target object in the target scene, which corresponds to the CAD model inserted into the target scene, based on the simulated radar ray information and the spatial position information of the CAD model inserted into the target scene in the target scene.
In this way, even when the target object corresponding to the CAD model does not exist in the real target scene, the point cloud generation device can simulate the point cloud data that the radar device would detect if the target object did exist in the real target scene. This point cloud data can be used directly as the point cloud labeling result for the target object; the whole process requires no manual participation, saving time and labor.
In one possible implementation, the generating module 204 is configured to determine point cloud data of a target object in the target scene corresponding to the CAD model inserted into the target scene based on the simulated radar ray information and the spatial location information of the CAD model inserted into the target scene in the target scene according to the following steps:
for each simulated radar ray, determining intersection point coordinate information between the simulated radar ray and a CAD model based on coordinate information of a spatial position through which the simulated radar ray passes and spatial position information of the CAD model inserted into a target scene in the target scene;
and determining point cloud data of a target object corresponding to the CAD model under the target scene based on intersection point coordinate information between each simulated radar ray and the CAD model.
In one possible implementation, the spatial position information of the CAD model includes spatial position range information where each model component patch is located;
the generating module 204 is configured to determine point cloud data of a target object corresponding to the CAD model in a target scene based on intersection coordinate information between each simulated radar ray and the CAD model according to the following steps:
for each simulated radar ray, determining intersection point coordinate information between the simulated radar ray and each model composition surface patch based on coordinate information of each space position through which the simulated radar ray passes and space position range information of each model composition surface patch;
aiming at each model composition patch, determining intersection point coordinate information between a simulation radar ray which has an intersection point with the model composition patch and the model composition patch;
and combining the intersection point coordinate information corresponding to the model composition patches to determine point cloud data of the target object corresponding to the CAD model in the target scene.
In a possible implementation manner, the generating module 204 is configured to determine point cloud data of a target object corresponding to a CAD model in a target scene based on intersection coordinate information between each simulated radar ray and the CAD model according to the following steps:
determining whether shielding exists between the model composition surface patches or not based on the spatial position range information of the model composition surface patches;
if shielding exists, determining intersection point coordinate information between a model composition surface patch closest to the radar equipment and each simulation radar ray based on intersection point coordinate information between the simulation radar ray and each model composition surface patch in a plurality of model composition surface patches with shielding;
and determining point cloud data of a target object corresponding to the CAD model in a target scene based on each simulated radar ray and intersection point coordinate information between the model composition patch determined by each simulated radar ray and closest to the radar equipment and the simulated radar ray.
In one possible embodiment, if there are a plurality of CAD models; the generating module 204 is configured to determine point cloud data of a target object corresponding to the CAD model in a target scene according to the following steps:
determining whether occlusion exists between target objects corresponding to a plurality of CAD models based on spatial position information of the CAD models inserted into the target scene in the target scene;
if the shielding exists, determining intersection point coordinate information between a CAD model closest to the radar equipment and each simulated radar ray based on intersection point coordinate information between the simulated radar ray and the CAD models for each simulated radar ray;
and taking the determined intersection point coordinate information as point cloud data of a target object corresponding to the CAD model pointed by the intersection point coordinate information in a target scene.
In one possible implementation, the inserting module 202 is configured to insert a CAD model into the target scene according to the acquired point cloud data of the target scene according to the following steps:
determining spatial position information of a CAD model to be inserted in the target scene according to the acquired point cloud data of the target scene;
searching point cloud data matched with the spatial position information from the point cloud data of the target scene based on the determined spatial position information;
and fusing the searched point cloud data with the CAD model to obtain a target scene inserted with the CAD model.
In one possible implementation, the insertion module 202 is configured to determine spatial position information of the CAD model to be inserted in the target scene according to the acquired point cloud data of the target scene according to the following steps:
performing target object recognition on the acquired point cloud data of the target scene based on the trained target object recognition model, and determining the position information of the target object in the target scene;
and determining the spatial position information of the CAD model to be inserted in the target scene based on the position information of the target object in the target scene.
In one possible implementation, the insertion module 202 is configured to determine spatial position information of the CAD model to be inserted in the target scene according to the acquired point cloud data of the target scene according to the following steps:
acquiring a three-dimensional scene map of a related target scene;
determining map position information of an associated object associated with a target object corresponding to the CAD model based on the acquired three-dimensional scene map and point cloud data of the target scene;
determining map position information of the target object based on the determined map position information of the associated object and the relative position relationship between the associated object and the target object;
and taking the determined map position information of the target object as the spatial position information of the CAD model to be inserted in the target scene.
In one possible embodiment, the insertion module 202 is configured to fuse the found point cloud data with the CAD model according to the following steps:
determining a scaled CAD model based on model size information of the CAD model and a preset model scaling;
and fusing the scaled CAD model with the searched point cloud data.
In one possible implementation, the inserting module 202 is configured to insert a CAD model into the target scene according to the acquired point cloud data of the target scene according to the following steps:
identifying a reference object from a target scene; the reference object represents an object which moves in continuous multi-frame point cloud data of a target scene;
screening out reference point cloud data corresponding to the reference object from the acquired point cloud data of the target scene to obtain updated point cloud data of the target scene;
and inserting the CAD model into the target scene according to the updated point cloud data of the target scene.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Example three
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 3, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes: a processor 301, a memory 302, and a bus 303. The memory 302 stores machine-readable instructions executable by the processor 301 (for example, corresponding execution instructions of the acquisition module 201, the insertion module 202, the determination module 203, and the generation module 204 in the apparatus for point cloud generation in fig. 2, and the like), when the electronic device is operated, the processor 301 communicates with the memory 302 through the bus 303, and when the processor 301 executes the following processing:
acquiring point cloud data of a target scene and a Computer Aided Design (CAD) model corresponding to at least one target object;
inserting the CAD model into the target scene according to the acquired point cloud data of the target scene;
determining simulated radar ray information transmitted to the CAD model when the radar equipment scans a target scene inserted into the CAD model based on the installation parameters and the configuration parameters set by the radar equipment;
and determining point cloud data of a target object corresponding to the CAD model inserted into the target scene under the target scene based on the simulated radar ray information and the spatial position information of the CAD model inserted into the target scene in the target scene.
The specific execution process of the instruction may refer to the steps of the point cloud generation method described in the embodiments of the present disclosure, and details are not repeated here.
The embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the point cloud generation method described in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the point cloud generation method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the steps of the point cloud generation method described in the above method embodiments; reference may be made to the above method embodiments for details, which are not repeated here.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the system and the apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only one logical division, and there may be other divisions in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection between devices or units through some communication interfaces, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that, within the technical scope of the present disclosure, anyone familiar with the art could still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or substitute equivalents for some of their technical features; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

1. A method of point cloud generation, the method comprising:
acquiring point cloud data of a target scene and a Computer Aided Design (CAD) model corresponding to at least one target object;
inserting the CAD model into the target scene according to the acquired point cloud data of the target scene;
determining, based on installation parameters and configuration parameters set for a radar device, simulated radar ray information emitted toward the CAD model when the radar device scans the target scene into which the CAD model is inserted; and
determining point cloud data, in the target scene, of a target object corresponding to the CAD model inserted into the target scene, based on the simulated radar ray information and spatial position information of the inserted CAD model in the target scene.
2. The method of claim 1, wherein the determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the simulated radar ray information and the spatial position information of the CAD model inserted into the target scene comprises:
for each simulated radar ray, determining intersection point coordinate information between the simulated radar ray and the CAD model based on coordinate information of the spatial positions through which the simulated radar ray passes and the spatial position information of the CAD model inserted into the target scene; and
determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection point coordinate information between each simulated radar ray and the CAD model.
3. The method according to claim 2, wherein the spatial position information of the CAD model includes spatial position range information of each model composition patch;
the determining, for each simulated radar ray, the intersection point coordinate information between the simulated radar ray and the CAD model based on the coordinate information of the spatial positions through which the simulated radar ray passes and the spatial position information of the CAD model inserted into the target scene comprises:
for each simulated radar ray, determining intersection point coordinate information between the simulated radar ray and each model composition patch based on the coordinate information of the spatial positions through which the simulated radar ray passes and the spatial position range information of each model composition patch;
and the determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection point coordinate information between each simulated radar ray and the CAD model comprises:
for each model composition patch, determining intersection point coordinate information between the model composition patch and each simulated radar ray having an intersection point with it; and
combining the intersection point coordinate information corresponding to the model composition patches to determine the point cloud data of the target object corresponding to the CAD model in the target scene.
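A hedged sketch of the per-patch intersection in claim 3 follows, assuming each "model composition patch" is a triangle in space; the standard Möller–Trumbore ray/triangle test is used here, which is an assumption on my part, since the claim does not mandate a particular intersection algorithm.

```python
# Sketch (assumed triangular patches): intersect one simulated radar ray with
# one model composition patch using the Moller-Trumbore test.
import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the intersection point of a ray and a triangular patch, or None."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:              # ray is parallel to the patch plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:          # outside the patch along the first edge
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:      # outside the patch along the second edge
        return None
    t = np.dot(e2, q) * inv_det
    if t < eps:                     # intersection lies behind the sensor
        return None
    return origin + t * direction

# A vertical patch in the plane x = 1, probed by two rays from the origin:
v0, v1, v2 = (np.array([1., -1., -1.]), np.array([1., 1., -1.]),
              np.array([1., 0., 1.]))
hit = ray_triangle_intersection([0., 0., 0.], [1., 0., 0.], v0, v1, v2)
miss = ray_triangle_intersection([0., 0., 0.], [0., 0., 1.], v0, v1, v2)
```

Running the test over every (ray, patch) pair and collecting the returned points yields the per-patch intersection coordinate information that the claim combines into the target object's point cloud data.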
4. The method of claim 3, further comprising:
determining whether occlusion exists between the model composition patches based on the spatial position range information of the model composition patches; and
if occlusion exists, determining, for each simulated radar ray, intersection point coordinate information between the simulated radar ray and the model composition patch closest to the radar device, based on the intersection point coordinate information between the simulated radar ray and each of the mutually occluding model composition patches;
wherein the determining the point cloud data of the target object corresponding to the CAD model in the target scene based on the intersection point coordinate information between each simulated radar ray and the CAD model comprises:
determining the point cloud data of the target object corresponding to the CAD model in the target scene based on, for each simulated radar ray, the intersection point coordinate information between the simulated radar ray and the model composition patch closest to the radar device.
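The occlusion rule of claim 4 can be sketched as follows: when a simulated radar ray intersects several model composition patches, only the intersection point closest to the radar device is kept. The per-ray hit lists are assumed to have been computed beforehand (for instance by a ray/patch intersection test); the data values are hypothetical.

```python
# Sketch: keep only the nearest intersection point per simulated radar ray.
import numpy as np

def closest_hits(sensor_origin, hits_per_ray):
    """hits_per_ray: one list of candidate intersection points per ray.
    Returns the nearest point for each ray; rays with no hit are skipped."""
    origin = np.asarray(sensor_origin, float)
    points = []
    for hits in hits_per_ray:
        if len(hits) == 0:
            continue
        hits = np.asarray(hits, float)
        distances = np.linalg.norm(hits - origin, axis=1)
        points.append(hits[np.argmin(distances)])   # nearest patch wins
    return np.asarray(points)

# The first ray pierces two patches; the farther hit at (5, 0, 0) is occluded.
cloud = closest_hits([0., 0., 0.],
                     [[[2., 0., 0.], [5., 0., 0.]], [], [[0., 3., 0.]]])
```

The same nearest-hit selection also covers claim 5's case, where the candidate hits per ray come from several inserted CAD models rather than from patches of one model.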
5. The method according to any of claims 2-4, wherein, if there are a plurality of CAD models, the method further comprises:
determining whether occlusion exists between the target objects corresponding to the plurality of CAD models based on the spatial position information, in the target scene, of the CAD models inserted into the target scene;
if occlusion exists, determining, for each simulated radar ray, intersection point coordinate information between the simulated radar ray and the CAD model closest to the radar device, based on the intersection point coordinate information between the simulated radar ray and the plurality of CAD models; and
taking the determined intersection point coordinate information as point cloud data, in the target scene, of the target object corresponding to the CAD model to which the intersection point coordinate information points.
6. The method of any one of claims 1-5, wherein the inserting the CAD model into the target scene according to the acquired point cloud data of the target scene comprises:
according to the acquired point cloud data of the target scene, determining spatial position information of the CAD model to be inserted in the target scene;
searching the point cloud data of the target scene, based on the determined spatial position information, for point cloud data matching the spatial position information; and
fusing the found point cloud data with the CAD model to obtain the target scene into which the CAD model is inserted.
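The fusion step of claim 6 can be sketched under two simplifying assumptions of mine: the spatial position information is an axis-aligned ground footprint (x/y bounds), and "fusing" replaces the scene points matched inside that footprint with the CAD model's point samples.

```python
# Sketch (assumed axis-aligned footprint): swap matched scene points for
# model points to obtain the target scene with the CAD model inserted.
import numpy as np

def fuse_model_into_scene(scene_points, model_points, footprint_min, footprint_max):
    scene_points = np.asarray(scene_points, float)
    lo = np.asarray(footprint_min, float)
    hi = np.asarray(footprint_max, float)
    # Points whose (x, y) falls inside the insertion footprint are matched.
    inside = np.all((scene_points[:, :2] >= lo) & (scene_points[:, :2] <= hi),
                    axis=1)
    # Drop the matched scene points and append the model's points instead.
    return np.vstack([scene_points[~inside], np.asarray(model_points, float)])

scene = np.array([[0., 0., 0.], [5., 5., 0.], [9., 9., 0.]])
model = np.array([[5., 5., 1.]])
fused = fuse_model_into_scene(scene, model, [4., 4.], [6., 6.])
```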
7. The method of claim 6, wherein the determining spatial position information of the CAD model to be inserted in the target scene according to the acquired point cloud data of the target scene comprises:
performing target object recognition on the acquired point cloud data of the target scene based on a trained target object recognition model, and determining position information of the target object in the target scene;
and determining the spatial position information of the CAD model to be inserted in the target scene based on the position information of the target object in the target scene.
8. The method of claim 6, wherein the determining spatial position information of the CAD model to be inserted in the target scene according to the acquired point cloud data of the target scene comprises:
acquiring a three-dimensional scene map associated with the target scene;
determining map position information of an associated object related to the target object corresponding to the CAD model, based on the acquired three-dimensional scene map and the point cloud data of the target scene;
determining map position information of the target object based on the determined map position information of the associated object and a relative position relationship between the associated object and the target object;
and taking the determined map position information of the target object as the spatial position information of the CAD model to be inserted in the target scene.
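A toy sketch of claim 8's position derivation: the target object's map position is obtained by offsetting the associated object's map position by their known relative position. Both numeric values below are hypothetical illustrations, not values from the disclosure.

```python
# Sketch: derive the insertion position from an associated map object.
import numpy as np

def target_map_position(associated_position, relative_offset):
    """Map position of the target object = associated object's map position
    plus the known relative position between the two objects."""
    return (np.asarray(associated_position, float)
            + np.asarray(relative_offset, float))

# E.g. a lane marking anchored at (100, 50) with the inserted vehicle
# 3.5 m east of it (hypothetical values):
pos = target_map_position([100.0, 50.0, 0.0], [3.5, 0.0, 0.0])
```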
9. The method according to any of claims 6-8, wherein said fusing the found point cloud data with the CAD model comprises:
determining a scaled CAD model based on model size information of the CAD model and a preset model scaling ratio; and
and fusing the scaled CAD model with the searched point cloud data.
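Claim 9's scaling can be sketched as rescaling the CAD model's vertices by the preset ratio. Scaling about the model centroid is my assumption; the claim only specifies model size information and a scaling ratio.

```python
# Sketch (assumed centroid pivot): scale CAD model vertices before fusion.
import numpy as np

def scale_model(vertices, ratio):
    vertices = np.asarray(vertices, float)
    centroid = vertices.mean(axis=0)
    # Move each vertex toward/away from the centroid by the preset ratio.
    return centroid + ratio * (vertices - centroid)

# A 2 m extent along x shrinks to 1 m, still centred at x = 1:
scaled = scale_model([[0., 0., 0.], [2., 0., 0.]], 0.5)
```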
10. The method of any one of claims 1-9, wherein the inserting the CAD model into the target scene according to the acquired point cloud data of the target scene comprises:
identifying a reference object from the target scene, wherein the reference object is an object that moves across consecutive frames of point cloud data of the target scene;
screening out reference point cloud data corresponding to the reference object from the acquired point cloud data of the target scene to obtain updated point cloud data of the target scene;
and inserting the CAD model into the target scene according to the updated point cloud data of the target scene.
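Claim 10's filtering step can be sketched as dropping the reference (moving) object's points from the scene cloud before insertion. The per-point labels below are hypothetical; the disclosure obtains the reference object by identifying motion across consecutive frames.

```python
# Sketch (hypothetical per-point labels): remove the moving reference
# object's points to obtain the updated point cloud of the target scene.
import numpy as np

def remove_reference_points(points, labels, reference_label):
    points = np.asarray(points, float)
    keep = np.asarray(labels) != reference_label   # mask out reference points
    return points[keep]

updated = remove_reference_points(
    [[0., 0., 0.], [1., 2., 0.], [3., 1., 0.]],
    ["ground", "moving_car", "ground"],
    "moving_car",
)
```

The CAD model is then inserted into this updated cloud, so the simulated object does not collide with points left behind by a moving real object.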
11. An apparatus for point cloud generation, the apparatus comprising:
an acquisition module, configured to acquire point cloud data of a target scene and a Computer Aided Design (CAD) model corresponding to at least one target object;
an insertion module, configured to insert the CAD model into the target scene according to the acquired point cloud data of the target scene;
a determination module, configured to determine, based on installation parameters and configuration parameters set for a radar device, simulated radar ray information emitted toward the CAD model when the radar device scans the target scene into which the CAD model is inserted; and
a generation module, configured to determine, based on the simulated radar ray information and spatial position information of the CAD model inserted into the target scene, point cloud data of a target object corresponding to the CAD model in the target scene.
12. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of point cloud generation of any of claims 1 to 10.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by an electronic device, performs the steps of the method of point cloud generation according to any one of claims 1 to 10.
CN202110346606.0A 2021-03-31 2021-03-31 Point cloud generation method and device, electronic equipment and storage medium Pending CN113077544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110346606.0A CN113077544A (en) 2021-03-31 2021-03-31 Point cloud generation method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113077544A (en)

Family

ID=76614098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110346606.0A Pending CN113077544A (en) 2021-03-31 2021-03-31 Point cloud generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113077544A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019105190A1 (en) * 2017-11-28 2019-06-06 腾讯科技(深圳)有限公司 Augmented reality scene implementation method, apparatus, device, and storage medium
US20190371044A1 (en) * 2018-06-04 2019-12-05 Baidu Online Network Technology (Beijing) Co., Ltd Method, apparatus, device and computer readable storage medium for reconstructing three-dimensional scene
CN109271893A (en) * 2018-08-30 2019-01-25 百度在线网络技术(北京)有限公司 A kind of generation method, device, equipment and storage medium emulating point cloud data
EP3618008A1 (en) * 2018-08-30 2020-03-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method for generating simulated point cloud data, device, and storage medium
CN112330815A (en) * 2020-11-26 2021-02-05 北京百度网讯科技有限公司 Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion

Similar Documents

Publication Publication Date Title
US11455565B2 (en) Augmenting real sensor recordings with simulated sensor data
CN109271893B (en) Method, device, equipment and storage medium for generating simulation point cloud data
US20190065933A1 (en) Augmenting Real Sensor Recordings With Simulated Sensor Data
CN112199991B (en) Simulation point cloud filtering method and system applied to vehicle-road cooperation road side perception
CN112258610B (en) Image labeling method and device, storage medium and electronic equipment
CN111177887A (en) Method and device for constructing simulation track data based on real driving scene
CN111932451B (en) Method and device for evaluating repositioning effect, electronic equipment and storage medium
CN109636868B (en) High-precision image map online construction method and device based on WebGIS and deep learning
CN112529022A (en) Training sample generation method and device
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN111744183B (en) Illumination sampling method and device in game and computer equipment
CN109871829A (en) A kind of detection model training method and device based on deep learning
CN112598993A (en) CIM-based city map platform visualization method and device and related products
CN112348737A (en) Method for generating simulation image, electronic device and storage medium
CN112330815A (en) Three-dimensional point cloud data processing method, device and equipment based on obstacle fusion
CN108597034B (en) Method and apparatus for generating information
CN115393322A (en) Method and device for generating and evaluating change detection data based on digital twins
CN113822892B (en) Evaluation method, device and equipment of simulated radar and computer storage medium
CN114219958B (en) Multi-view remote sensing image classification method, device, equipment and storage medium
CN112507887B (en) Intersection sign extracting and associating method and device
CN112381873B (en) Data labeling method and device
CN113077544A (en) Point cloud generation method and device, electronic equipment and storage medium
CN112948605A (en) Point cloud data labeling method, device, equipment and readable storage medium
CN113409473B (en) Method, device, electronic equipment and storage medium for realizing virtual-real fusion
WO2023010540A1 (en) Method and apparatus for verifying scanning result of laser radar, and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination