CN109191553B - Point cloud rendering method, device, terminal and storage medium - Google Patents

Point cloud rendering method, device, terminal and storage medium

Info

Publication number
CN109191553B
CN109191553B (application CN201810998130.7A, CN201810998130A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
reflection intensity
point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810998130.7A
Other languages
Chinese (zh)
Other versions
CN109191553A (en)
Inventor
党跃东
陈卓
姚卫锋
夏黎明
辛建康
邓呈亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810998130.7A priority Critical patent/CN109191553B/en
Publication of CN109191553A publication Critical patent/CN109191553A/en
Application granted granted Critical
Publication of CN109191553B publication Critical patent/CN109191553B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10032: Satellite or aerial image; Remote sensing
    • G06T2207/10044: Radar image
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The embodiment of the invention discloses a point cloud rendering method, apparatus, device and storage medium. The method comprises the following steps: acquiring point cloud data collected by a laser radar; and performing region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range. By predetermining a plurality of reflection intensity ranges and segmenting the point cloud data into regions according to those ranges and the points' reflection intensities, more of the useful information in a point cloud scene can be displayed, realizing visualization of the point cloud data.

Description

Point cloud rendering method, device, terminal and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a point cloud rendering method, a device, a terminal and a storage medium.
Background
LiDAR (Light Detection and Ranging) is short for a laser detection and ranging system, and the data obtained by scanning with a lidar is referred to as point cloud data. During data acquisition, a laser radar offers good concealment, strong resistance to active interference and good low-altitude detection performance, among other advantages, and therefore has broad application prospects in the field of driverless vehicles.
A driverless vehicle, also called an autonomous vehicle, is an intelligent unmanned vehicle realized through the cooperation of multiple algorithm modules such as perception, localization and control; driving is handled mainly by an in-vehicle driving control system, which is essentially a computer system. Driverless vehicles serve as an important benchmark of a nation's scientific research strength and industrial level, and have broad application prospects in the fields of national defense and the national economy.
The point cloud data collected by a driverless vehicle's laser radar plays a vital role in the vehicle's monitoring, decision-making and control. Visualization algorithms for point cloud data allow driverless-vehicle developers and debugging personnel to observe data and discover patterns more intuitively, and thus to build better decision and control algorithms for the vehicle. Current point cloud visualization algorithms, however, are still at an early stage of development: they can only display data in a simple way and cannot display more of the useful information in a point cloud scene.
Disclosure of Invention
The embodiment of the invention provides a point cloud rendering method, apparatus, device and storage medium, which realize visualization of point cloud data and display more of the useful information in a point cloud scene.
In a first aspect, an embodiment of the present invention provides a point cloud rendering method, including:
acquiring point cloud data acquired by a laser radar;
and carrying out region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range.
In a second aspect, an embodiment of the present invention further provides a point cloud rendering apparatus, where the apparatus includes:
the data acquisition module is used for acquiring point cloud data acquired by the laser radar;
the area segmentation module is used for carrying out area segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and the predetermined reflection intensity ranges.
In a third aspect, an embodiment of the present invention further provides an apparatus, including:
one or more processors;
and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize any one of the point cloud rendering methods according to the embodiment of the invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the point cloud rendering methods of the embodiments of the present invention.
According to the embodiment of the invention, point cloud data collected by a laser radar is acquired, and the point cloud data is segmented into regions according to the reflection intensity of each point in the point cloud data and the predetermined reflection intensity ranges. By predetermining a plurality of reflection intensity ranges and segmenting the point cloud data according to those ranges and the points' reflection intensities, more of the useful information in the point cloud scene can be displayed, realizing visualization of the point cloud data.
Drawings
FIG. 1 is a flow chart of a point cloud rendering method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a point cloud rendering method according to a second embodiment of the present invention;
FIG. 3 is a flow chart of a point cloud rendering method in a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of a point cloud rendering device according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal in a fifth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of the point cloud rendering method provided in the first embodiment. The method is suitable for acquiring point cloud data collected by a laser radar and performing region segmentation on it. It may be performed by a point cloud rendering device, which may be implemented in software and/or hardware and integrated into a terminal with data processing capability; the terminal may be the control framework of an autonomous mobile carrier (e.g., a driverless vehicle). Referring to Fig. 1, the method of the present embodiment specifically includes:
s110, acquiring point cloud data acquired by a laser radar.
A laser radar is an optical remote-sensing device: it emits laser light toward a target object, determines the actual distance to the object from the time interval between emitting the laser and receiving the laser reflected by the object, and derives the object's position from that distance and the emission angle of the laser. A lidar generally consists of three parts: a laser transmitter, a receiver and an information processing device.
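The range-and-angle geometry described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the simplified spherical-coordinate model are assumptions.

```python
import math

C = 299_792_458.0  # speed of light in m/s


def point_from_echo(t_round_trip_s, azimuth_rad, elevation_rad):
    """Convert one lidar echo into a 3D point.

    The range is half the round-trip time multiplied by the speed of
    light; the emission angles then place the point in Cartesian space.
    """
    r = C * t_round_trip_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)
```

For example, an echo returning after the round trip to an object 100 m straight ahead (both angles zero) yields the point (100, 0, 0).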
The point cloud data includes the geometric coordinates, reflection intensity and other information of each reflection point in the scene acquired by the laser radar. While the driverless vehicle is moving, the lidar usually rotates at a constant angular speed, continuously emitting laser pulses and collecting information about each reflection point in the scene; this yields omnidirectional scene information, and information processing produces the geometric coordinates and reflection intensity of each reflection point in the current scene.
Specifically, after the point cloud data is collected by the lidar carried on the autopilot mobile carrier, the terminal can acquire the point cloud data collected by the lidar so as to further analyze and process the point cloud data.
S120, performing region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and the predetermined reflection intensity ranges.
The reflection intensity of a point is the intensity of the laser light reflected from that point in the scene and received by the lidar. Reflection intensity is related to factors such as the target object's color, volume, material and surface roughness. Research shows that points on different target objects in a scene usually have different reflection intensities, so classifying the reflection points by reflection intensity range makes it possible to distinguish different target objects.
A reflection intensity range is a range preset in the terminal, which can be set according to research results. By matching each point's reflection intensity against the preset reflection intensity ranges, the range to which each point belongs can be determined, and points within the same intensity range can be regarded as belonging to the same target object.
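The matching step can be sketched as below. The specific thresholds and object labels are hypothetical placeholders — the patent leaves the actual ranges to be set from research results.

```python
# Hypothetical preset ranges (lower bound inclusive, upper exclusive),
# assuming intensities normalized to [0, 1]; real thresholds would come
# from calibration studies.
INTENSITY_RANGES = [
    (0.0, 0.2),   # e.g. asphalt
    (0.2, 0.6),   # e.g. vegetation
    (0.6, 1.01),  # e.g. lane paint, retro-reflective signs
]


def match_range(intensity):
    """Return the index of the preset range containing `intensity`, or None."""
    for i, (lo, hi) in enumerate(INTENSITY_RANGES):
        if lo <= intensity < hi:
            return i
    return None
```

Points whose intensities land in the same range index can then be treated as candidates for the same target object.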
In one implementation of this embodiment, points within the same intensity range may be displayed in the same color, so that each region in the current scene can be intuitively identified from the colors of the points in the scene.
It should be noted that in the present embodiment, "different target objects" are not limited to different individual objects: different parts of the same object may also differ from one another in color, material or surface roughness. For example, a lane line on an asphalt road is part of the road, but because the lane line differs in color from the rest of the asphalt surface, points on the lane line and points on the rest of the road also have different reflection intensities.
According to the embodiment of the invention, point cloud data collected by a laser radar is acquired, and the point cloud data is segmented into regions according to the reflection intensity of each point and the predetermined reflection intensity ranges. By predetermining a plurality of reflection intensity ranges and segmenting the point cloud data according to those ranges and the points' reflection intensities, more of the useful information in the point cloud scene can be displayed, realizing visualization of the point cloud data.
Example two
Fig. 2 is a flowchart of a point cloud rendering method according to a second embodiment, where the method is optimized based on the foregoing embodiments, and explanation of the same or corresponding terms as those of the foregoing embodiments is not repeated herein. Referring to fig. 2, the point cloud rendering method provided in the present embodiment includes:
s210, acquiring point cloud data acquired by a laser radar.
S220, matching the reflection intensity of each point in the point cloud data against each reflection intensity range in turn.
S230, if the reflection intensity of a point is detected to fall within any reflection intensity range, assigning the pixel parameter associated with that reflection intensity range to the point.
The pixel parameters associated with different reflection intensity ranges differ, so that regions of different reflection intensity can be divided in S240.
Optionally, the pixel color and/or pixel gray level associated with the reflection intensity range is used as the pixel characteristic of the point.
For example, a pixel color may be represented by three-channel color values, with different pixel colors corresponding to different channel values. If a point falls within a certain reflection intensity range, the three-channel color value corresponding to that range is assigned to the point, so that the point is displayed in the color those channel values describe. Because different target objects correspond to different reflection intensity ranges, different target objects are displayed in different colors.
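A minimal sketch of the color-assignment step, assuming the range index for each point has already been determined; the RGB triples are illustrative only — the patent requires merely that different ranges map to different pixel parameters.

```python
# Hypothetical three-channel (RGB) values per reflection-intensity-range
# index; any distinct triples would serve the same purpose.
RANGE_COLORS = {
    0: (64, 64, 64),    # dark grey
    1: (0, 160, 0),     # green
    2: (255, 255, 0),   # yellow
}


def colorize(points):
    """Attach an RGB triple to each (x, y, z, range_index) point."""
    return [(x, y, z, RANGE_COLORS[idx]) for x, y, z, idx in points]
```

A point tagged with range index 2, for instance, would be rendered yellow regardless of which object it belongs to.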
It should be noted that if several target objects are similar in material, volume and color and are at the same distance from the lidar, the points on those objects may fall within the same reflection intensity range.
S240, carrying out region segmentation on the point cloud data according to pixel parameters of each point.
Optionally, each point in the point cloud data is traversed, and if the pixel parameters of several positionally adjacent points are identical, those points are determined to belong to the same region.
For example, if positionally adjacent points have the same pixel parameters, they may belong to the same target object, so they are divided into the same region. If several points have the same pixel parameters but are not positionally adjacent, they may belong to several target objects that are similar in material, volume and color and equidistant from the lidar, and they are not divided into the same region.
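The adjacency-based grouping just described is essentially a connected-components pass. The sketch below uses a breadth-first flood fill over a 2D grid of pixel parameters — a simplification of the 3D case, with 4-connectivity chosen as an assumption.

```python
from collections import deque


def segment_regions(grid):
    """Label 4-connected cells sharing the same pixel parameter.

    `grid` maps (row, col) -> pixel parameter (e.g. an RGB triple);
    returns a dict mapping each cell to a region id, so that adjacent
    cells with equal parameters receive the same id.
    """
    labels, next_id = {}, 0
    for start in grid:
        if start in labels:
            continue
        labels[start] = next_id
        queue = deque([start])
        while queue:
            r, c = queue.popleft()
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                # Grow the region only through unlabeled neighbours with
                # an identical pixel parameter.
                if nb in grid and nb not in labels and grid[nb] == grid[(r, c)]:
                    labels[nb] = next_id
                    queue.append(nb)
        next_id += 1
    return labels
```

Two same-colored cells that touch end up in one region; a same-colored cell elsewhere starts a region of its own, matching the rule in the text.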
From the different colors and regions of the points, a tester or maintainer can intuitively see the distribution of target objects in the scene and preliminarily determine what those objects are.
In one implementation of this embodiment, this step may be performed on a graphics processing unit (GPU); its powerful graphics-rendering computation capability can improve the efficiency of region segmentation of the point cloud data.
According to the embodiment of the invention, point cloud data collected by a laser radar is acquired, the reflection intensity of each point is matched against each reflection intensity range in turn, and the point cloud data is segmented into regions. By predetermining a plurality of reflection intensity ranges and assigning each range's pixel parameter to the corresponding points, more of the useful information in the point cloud scene can be displayed clearly, realizing visualization of the point cloud data.
Example III
Fig. 3 is a flowchart of a point cloud rendering method according to a third embodiment, which is optimized based on the foregoing embodiments, where explanations of terms that are the same as or corresponding to those of the foregoing embodiments are not repeated herein. Referring to fig. 3, the point cloud rendering method provided in the present embodiment includes:
s310, acquiring point cloud data acquired by a laser radar.
S320, grouping the point cloud data according to the distance between the points and the radar.
Illustratively, the point cloud data includes the geometric coordinates and reflection intensity of each point; from the geometric coordinates, the relative distance between each point and the radar can be determined. With a reasonable arrangement of the lidar's position and angle, points on the same target object can lie at the same relative distance from the radar, so the point cloud data is grouped according to the points' relative distances from the radar.
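One straightforward way to realize the grouping step is to bucket points by their Euclidean range from the sensor, taken here to sit at the origin. The bin width is an assumption for illustration, not a value from the patent.

```python
import math


def group_by_distance(points, bin_width_m=5.0):
    """Bucket (x, y, z) points by their range from the sensor at the origin.

    Points whose distances fall in the same bin are processed together;
    `bin_width_m` is an assumed parameter, not specified by the patent.
    """
    groups = {}
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        groups.setdefault(int(r // bin_width_m), []).append((x, y, z))
    return groups
```

Each resulting group can then be handed to the per-group pixel-parameter determination of S330.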
In one implementation of this embodiment, this step may be performed in a central processing unit (Central Processing Unit, CPU).
S330, determining pixel parameters of each group of point cloud data according to the reflection intensity of the points in each group of point cloud data and the predetermined reflection intensity ranges.
Optionally, the pixel color and/or the pixel gray level to which the reflection intensity range belongs is used as the pixel characteristic of the point in the set of point cloud data.
The pixel color and/or the pixel gray scale to which the different reflection intensity ranges belong are different, so that the regions of different reflection intensities are divided in S340.
And S340, carrying out region segmentation on the point cloud data according to the pixel parameters of each group of point cloud data.
Optionally, traversing each point in each group of point cloud data, and if the pixel parameters of a plurality of points adjacent to each other in the position are identical, determining that the plurality of points belong to the same area.
According to the embodiment of the invention, point cloud data collected by the laser radar is acquired and grouped according to the points' distances from the radar; grouping the point cloud data before region segmentation improves the efficiency of rendering the point cloud data. The pixel parameters of each group of point cloud data are determined according to the reflection intensity of the points in each group and the predetermined reflection intensity ranges, and the point cloud data is segmented into regions according to those pixel parameters, so that more of the useful information in the point cloud scene can be displayed, realizing visualization of the point cloud data.
Example IV
Fig. 4 is a schematic structural diagram of the point cloud rendering device provided in the fourth embodiment. This embodiment is suitable for acquiring point cloud data collected by a lidar and performing region segmentation on it. The device may be implemented in software and/or hardware and may be integrated into any terminal with data processing capability. Referring to Fig. 4, the device specifically includes:
the data acquisition module 401 is configured to acquire point cloud data acquired by the laser radar;
the area segmentation module 402 is configured to segment the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range.
Optionally, the region segmentation module 402 includes:
the matching unit is used for matching the reflection intensity of each point in the point cloud data with each reflection intensity range in advance;
the adjusting unit is used for giving the pixel parameter to which the reflection intensity range belongs to the point if the reflection intensity of the point is detected to belong to any reflection intensity range;
and the segmentation unit is used for carrying out region segmentation on the point cloud data according to the pixel parameters of each point.
Optionally, the adjusting unit is specifically configured to:
the pixel color and/or the pixel gray level to which the reflection intensity range belongs is taken as the pixel characteristic of the point.
Optionally, the adjusting unit is specifically configured to:
and traversing each point in the point cloud data, and if the pixel parameters of a plurality of points adjacent in position are identical, determining that the plurality of points belong to the same area.
Optionally, the region segmentation module 402 includes:
the data grouping unit is used for grouping the point cloud data according to the distance attribute of the point cloud data and the radar;
the parameter determining unit is used for determining pixel parameters of each group of point cloud data according to the reflection intensity of the points in each group of point cloud data and the predetermined reflection intensity ranges;
and the intra-group division unit is used for carrying out region division on the point cloud data according to the pixel parameters of each group of point cloud data.
According to the embodiment of the invention, the point cloud rendering device acquires point cloud data collected by a laser radar and segments it into regions according to the reflection intensity of each point and the predetermined reflection intensity ranges. By predetermining a plurality of reflection intensity ranges and segmenting the point cloud data according to those ranges and the points' reflection intensities, more of the useful information in the point cloud scene can be displayed, realizing visualization of the point cloud data.
Example five
Fig. 5 is a block diagram of a point cloud rendering terminal according to a fifth embodiment of the present invention. Fig. 5 illustrates a block diagram of an exemplary terminal 512 suitable for use in implementing embodiments of the invention. The terminal 512 shown in fig. 5 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present invention.
As shown in fig. 5, the terminal 512 is in the form of a general purpose computing device. The components of terminal 512 may include, but are not limited to: one or more processors or processing units 516, a system memory 528, a bus 518 that connects the various system components (including the system memory 528 and processing units 516).
Bus 518 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Terminal 512 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by terminal 512 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 528 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 530 and/or cache memory 532. The terminal 512 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 534 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard disk drive"). Although not shown in fig. 5, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 518 through one or more data media interfaces. Memory 528 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 540 having a set (at least one) of program modules 542 may be stored in, for example, memory 528, such program modules 542 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 542 generally perform the functions and/or methods in the described embodiments of the invention.
The terminal 512 may also communicate with one or more external devices 514 (e.g., keyboard, pointing device, display 524, etc.), one or more devices that enable a user to interact with the terminal 512, and/or any devices (e.g., network card, modem, etc.) that enable the terminal 512 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 522. Also, terminal 512 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet, through network adapter 520. As shown, network adapter 520 communicates with other modules of terminal 512 via bus 518. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with terminal 512, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 516 executes various functional applications and data processing by running programs stored in the system memory 528, for example implementing the point cloud rendering method according to an embodiment of the present invention.
Example six
A sixth embodiment of the present invention also provides a storage medium containing computer-executable instructions for performing a point cloud rendering method when executed by a computer processor.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above describes only preferred embodiments of the present invention and the technical principles applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, it is not limited to those embodiments, and may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

1. A point cloud rendering method, comprising:
acquiring point cloud data acquired by a laser radar;
performing region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range;
wherein the performing region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range comprises:
grouping the point cloud data according to the distance between the point cloud data and the radar;
determining pixel parameters of each group of point cloud data according to the reflection intensity of the points in each group of point cloud data and the predetermined reflection intensity ranges;
and performing region segmentation on the point cloud data according to the pixel parameters of each group of point cloud data.
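The pipeline of claim 1 — group points by distance from the radar, then map each point's reflection intensity onto a predetermined intensity range to obtain a pixel parameter — can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the intensity ranges, bin width, gray values, and all function names are hypothetical assumptions.

```python
import math

# Hypothetical predetermined reflection-intensity ranges mapped to a pixel
# parameter (a gray level here); the thresholds are illustrative only.
INTENSITY_RANGES = [
    ((0, 85), 64),      # low reflectivity  -> dark gray
    ((85, 170), 128),   # medium            -> mid gray
    ((170, 256), 255),  # high              -> white
]

def group_by_distance(points, bin_width=10.0):
    """Group (x, y, z, intensity) points into distance bins from the radar origin."""
    groups = {}
    for x, y, z, intensity in points:
        dist = math.sqrt(x * x + y * y + z * z)
        groups.setdefault(int(dist // bin_width), []).append((x, y, z, intensity))
    return groups

def pixel_parameter(intensity):
    """Return the pixel parameter of the intensity range the value falls into."""
    for (lo, hi), gray in INTENSITY_RANGES:
        if lo <= intensity < hi:
            return gray
    return 0  # fallback for out-of-range intensities

# Two sample points: one near the sensor with low intensity, one farther
# away with high intensity.
points = [(1.0, 2.0, 0.5, 40), (30.0, 4.0, 1.0, 200)]
groups = group_by_distance(points)
params = {gid: [pixel_parameter(i) for _, _, _, i in pts]
          for gid, pts in groups.items()}
```

A real system would replace the per-point Euclidean distance with the radar's native range measurement and tune the ranges per distance group, as claim 1's grouping step suggests.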
2. The method of claim 1, wherein the performing region segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and the predetermined reflection intensity ranges comprises:
matching the reflection intensity of each point in the point cloud data against each predetermined reflection intensity range;
if the reflection intensity of a point is detected to fall within any reflection intensity range, assigning the pixel parameter associated with that reflection intensity range to the point;
and performing region segmentation on the point cloud data according to the pixel parameters of each point.
3. The method of claim 2, wherein the assigning the pixel parameter associated with the reflection intensity range to the point comprises:
taking the pixel color and/or pixel gray level associated with the reflection intensity range as the pixel characteristic of the point.
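Claim 3 allows the pixel characteristic to be a color, a gray level, or both. A minimal sketch of such a mapping is shown below; the range names, colors, and gray values are illustrative assumptions, not values from the patent.

```python
# Hypothetical mapping from matched reflection-intensity ranges to pixel
# colors (RGB) and gray levels; values are illustrative only.
RANGE_STYLE = {
    "low": {"color": (0, 0, 255), "gray": 64},        # e.g. asphalt
    "mid": {"color": (0, 255, 0), "gray": 128},       # e.g. vegetation
    "high": {"color": (255, 255, 255), "gray": 255},  # e.g. lane markings
}

def pixel_characteristic(range_name, use_color=True, use_gray=False):
    """Return the pixel characteristic(s) for a matched intensity range:
    color, gray level, or both, per the caller's choice."""
    style = RANGE_STYLE[range_name]
    out = {}
    if use_color:
        out["color"] = style["color"]
    if use_gray:
        out["gray"] = style["gray"]
    return out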
4. The method of claim 2, wherein the performing region segmentation on the point cloud data according to the pixel parameters of each point comprises:
traversing each point in the point cloud data, and if the pixel parameters of a plurality of positionally adjacent points are identical, determining that the plurality of points belong to the same region.
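The merging rule of claim 4 — positionally adjacent points with identical pixel parameters belong to the same region — resembles a connected-components flood fill. The sketch below is a naive O(n²) illustration under assumed structures (a distance threshold stands in for "adjacent"; the function name and threshold are hypothetical), not the patented implementation.

```python
from collections import deque

def segment_regions(points, params, adjacency_radius=1.5):
    """points: list of (x, y, z); params: parallel list of pixel parameters.
    Returns a region label for each point; adjacent points (within
    adjacency_radius) that share an identical pixel parameter get the
    same label, via breadth-first flood fill."""
    n = len(points)
    labels = [-1] * n  # -1 means not yet assigned to a region
    region = 0

    def neighbors(i):
        xi, yi, zi = points[i]
        for j in range(n):
            if j != i:
                xj, yj, zj = points[j]
                d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
                if d2 <= adjacency_radius ** 2:
                    yield j

    for start in range(n):
        if labels[start] != -1:
            continue
        labels[start] = region
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in neighbors(i):
                # merge only adjacent points with an identical pixel parameter
                if labels[j] == -1 and params[j] == params[i]:
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```

A production version would use a spatial index (k-d tree or voxel grid) instead of the all-pairs neighbor scan.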
5. A point cloud rendering apparatus, comprising:
the data acquisition module is used for acquiring point cloud data acquired by the laser radar;
the area segmentation module is used for carrying out area segmentation on the point cloud data according to the reflection intensity of each point in the point cloud data and each predetermined reflection intensity range;
wherein the region segmentation module comprises:
the data grouping unit is used for grouping the point cloud data according to the distance between the point cloud data and the radar;
the parameter determining unit is used for determining pixel parameters of each group of point cloud data according to the reflection intensity of the points in each group of point cloud data and the predetermined reflection intensity ranges;
and the intra-group division unit is used for carrying out region division on the point cloud data according to the pixel parameters of each group of point cloud data.
6. The apparatus of claim 5, wherein the region segmentation module comprises:
the matching unit is used for matching the reflection intensity of each point in the point cloud data against each predetermined reflection intensity range;
the adjusting unit is used for assigning the pixel parameter associated with a reflection intensity range to a point if the reflection intensity of the point is detected to fall within that reflection intensity range;
and the segmentation unit is used for carrying out region segmentation on the point cloud data according to the pixel parameters of each point.
7. The apparatus according to claim 6, wherein the adjusting unit is specifically configured to:
take the pixel color and/or pixel gray level associated with the reflection intensity range as the pixel characteristic of the point.
8. The apparatus according to claim 6, wherein the segmentation unit is specifically configured to:
traverse each point in the point cloud data and, if the pixel parameters of a plurality of positionally adjacent points are identical, determine that the plurality of points belong to the same region.
9. A terminal, the terminal comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the point cloud rendering method of any of claims 1-4.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the point cloud rendering method according to any of claims 1-4.
CN201810998130.7A 2018-08-29 2018-08-29 Point cloud rendering method, device, terminal and storage medium Active CN109191553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810998130.7A CN109191553B (en) 2018-08-29 2018-08-29 Point cloud rendering method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN109191553A CN109191553A (en) 2019-01-11
CN109191553B true CN109191553B (en) 2023-07-25

Family

ID=64916575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810998130.7A Active CN109191553B (en) 2018-08-29 2018-08-29 Point cloud rendering method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN109191553B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112327308B (en) * 2019-07-19 2024-07-16 浙江菜鸟供应链管理有限公司 Object detection method, device, system and equipment
CN113196336A (en) * 2019-11-29 2021-07-30 深圳市大疆创新科技有限公司 Point cloud density quantification method and device and storage medium
CN114026410A (en) * 2020-05-19 2022-02-08 深圳市大疆创新科技有限公司 Point cloud coloring method, point cloud coloring system, and computer storage medium
DE102020208099A1 (en) 2020-06-30 2021-12-30 Robert Bosch Gesellschaft mit beschränkter Haftung Method for determining a point cloud representing an environment of a LiDAR sensor
CN111929694B (en) * 2020-10-12 2021-01-26 炬星科技(深圳)有限公司 Point cloud matching method, point cloud matching equipment and storage medium
CN114585946A (en) * 2020-12-29 2022-06-03 深圳市大疆创新科技有限公司 Laser ranging device, laser ranging method and movable platform
CN114915664A (en) * 2021-01-29 2022-08-16 华为技术有限公司 Point cloud data transmission method and device
CN113379884B (en) * 2021-07-05 2023-11-17 北京百度网讯科技有限公司 Map rendering method, map rendering device, electronic device, storage medium and vehicle
CN113607185B (en) * 2021-10-08 2022-01-04 禾多科技(北京)有限公司 Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN114137505B (en) * 2021-11-17 2024-10-08 珠海格力电器股份有限公司 Target detection method and device based on wireless radar
CN114445547A (en) * 2021-12-15 2022-05-06 北京云测信息技术有限公司 Point cloud coloring method and device, electronic equipment and storage medium
CN114972385B (en) * 2022-06-29 2024-05-31 山东信通电子股份有限公司 Shape clipping method, device and medium for point cloud data

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106023210A (en) * 2016-05-24 2016-10-12 百度在线网络技术(北京)有限公司 Unmanned vehicle, and unmanned vehicle positioning method, device and system

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102608620B (en) * 2012-03-12 2013-09-18 北京北科安地科技发展有限公司 Laser scanning point cloud vegetation filtering method on basis of reflection strength and terrain
US9383753B1 (en) * 2012-09-26 2016-07-05 Google Inc. Wide-view LIDAR with areas of special attention
CN106127771B (en) * 2016-06-28 2018-11-02 南京数联空间测绘科技有限公司 Tunnel orthography system and method is obtained based on laser radar LIDAR point cloud datas
CN107945198B (en) * 2016-10-13 2021-02-23 北京百度网讯科技有限公司 Method and device for marking point cloud data
CN106951847B (en) * 2017-03-13 2020-09-29 百度在线网络技术(北京)有限公司 Obstacle detection method, apparatus, device and storage medium
CN108198145B (en) * 2017-12-29 2020-08-28 百度在线网络技术(北京)有限公司 Method and device for point cloud data restoration




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant