CN114596420A - Laser point cloud modeling method and system applied to urban brain - Google Patents
Laser point cloud modeling method and system applied to urban brain
- Publication number
- CN114596420A CN114596420A CN202210258150.7A CN202210258150A CN114596420A CN 114596420 A CN114596420 A CN 114596420A CN 202210258150 A CN202210258150 A CN 202210258150A CN 114596420 A CN114596420 A CN 114596420A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- module
- modeling
- preprocessing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20088—Trinocular vision calculations; trifocal tensor
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to a laser point cloud modeling method and system applied to the city brain. The method comprises the following steps: step 1, constructing a deep neural network, extracting features from the acquired point cloud data with a Softplus activation function, reducing the dimensionality of the resulting features with LargeVis, clustering the reduced features with an adaptive clustering algorithm, taking the clustering result as pseudo-labels, and updating the weight parameters of the network by back-propagation; step 2, predicting the pseudo-labels again with the network whose weight parameters have been updated; and step 3, executing step 1 and step 2 alternately until the laser point cloud modeling is complete. The technical scheme of the invention can quickly, conveniently and accurately build a 3D model (mesh faces and edges) of a physical object in digital space, ensures that the twin model has high fidelity, high reliability and high precision, and is particularly suitable for scenarios with high-precision application requirements.
Description
Technical Field
The invention relates to the field of point cloud data processing, in particular to a laser point cloud modeling method and system applied to a city brain.
Background
In a digital twin three-dimensional city scene, modeling is the technical foundation and pillar of the digital twin. Traditional modeling techniques rely on manual point cloud modeling with the help of auxiliary software to reconstruct physical objects, which entails a heavy workload, a long modeling cycle and high technical demands on modeling personnel, and severely limits their application to digital city construction at large data volumes.
Disclosure of Invention
To solve the problems of heavy workload, long modeling cycles and high demands on technical personnel in traditional modeling methods, the invention provides a laser point cloud modeling method and system applied to the city brain. They can quickly, conveniently and accurately build a 3D model (mesh faces and edges) of a physical object in digital space, ensure that the twin model has high fidelity, high reliability and high precision, and are particularly suitable for scenarios with high-precision application requirements.
According to a first aspect of the embodiments of the present invention, there is provided a laser point cloud modeling method applied to a city brain, including:
step 1, constructing a deep neural network, extracting features from the acquired point cloud data with a Softplus activation function, reducing the dimensionality of the resulting features with LargeVis, clustering the reduced features with an adaptive clustering algorithm, taking the clustering result as pseudo-labels, and updating the weight parameters of the network by back-propagation;
step 2, predicting the pseudo-labels again with the network whose weight parameters have been updated;
and step 3, executing step 1 and step 2 alternately until the laser point cloud modeling is complete.
Further, before step 1, the method further comprises:
preprocessing the acquired point cloud data and panoramic image data.
Further, preprocessing the acquired point cloud data specifically comprises:
denoising, removing redundancy from, thinning and simplifying the acquired point cloud data.
Further, preprocessing the acquired panoramic image data specifically comprises:
registering, associating and mapping the point cloud data and the panoramic image according to their spatial consistency.
Further, after step 3, the method further comprises:
step 4, establishing a three-dimensional mesh model for each part, one by one, according to steps 1, 2 and 3;
and step 5, stitching all the three-dimensional mesh models into an overall three-dimensional model through their overlapping regions or common points.
According to a second aspect of the embodiments of the present invention, there is provided a laser point cloud modeling system applied to a city brain, including:
a constructing and updating module, configured to construct a deep neural network, extract features from the acquired point cloud data with a Softplus activation function, reduce the dimensionality of the resulting features with LargeVis, cluster the reduced features with an adaptive clustering algorithm, take the clustering result as pseudo-labels, and update the weight parameters of the network by back-propagation;
a prediction module, configured to have the network with updated weight parameters predict the pseudo-labels again;
and a modeling module, configured to call the constructing and updating module and the prediction module alternately until the laser point cloud modeling is complete.
Further, the system further comprises:
a preprocessing module, configured to preprocess the acquired point cloud data and panoramic image data.
Further, the preprocessing module specifically comprises:
a first preprocessing unit, configured to denoise, remove redundancy from, thin and simplify the acquired point cloud data.
Further, the preprocessing module specifically comprises:
a second preprocessing unit, configured to register, associate and map the point cloud data and the panoramic image according to their spatial consistency.
Further, the system further comprises:
an establishing module, configured to call the constructing and updating module, the prediction module and the modeling module to establish a three-dimensional mesh model for each part, one by one;
and a splicing module, configured to stitch all the three-dimensional mesh models of the parts into an overall three-dimensional model through their overlapping regions or common points.
The technical scheme provided by the embodiments of the invention can have the following beneficial effects:
the method can quickly, conveniently and accurately build a 3D model (mesh faces and edges) of a physical object in digital space, ensures that the twin model has high fidelity, high reliability and high precision, and is particularly suitable for scenarios with high-precision application requirements.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings, wherein like reference numerals generally represent like parts in the exemplary embodiments of the present invention.
FIG. 1 is a schematic flow diagram illustrating a laser point cloud modeling method applied to a city brain according to an exemplary embodiment of the present invention;
FIG. 2 is a block diagram illustrating the structure of a laser point cloud modeling system applied to a city brain according to an exemplary embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a computing device according to an exemplary embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that, although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
To address these problems, the invention scans a physical entity in three dimensions with a laser scanner to form a point cloud that faithfully reproduces the entity, and provides a DNNPCGM (Deep Neural Network Point Cloud Grid Modeling) technique for modeling it.
The technical solutions of the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart illustrating a laser point cloud modeling method applied to a city brain according to an exemplary embodiment of the present invention.
Referring to fig. 1, the method includes:
110. Construct a deep neural network, extract features from the acquired point cloud data with a Softplus activation function, reduce the dimensionality of the resulting features with LargeVis, cluster the reduced features with an adaptive clustering algorithm, take the clustering result as pseudo-labels, and update the weight parameters of the network by back-propagation;
120. predict the pseudo-labels again with the network whose weight parameters have been updated;
130. execute step 110 and step 120 alternately until the laser point cloud modeling is complete.
Specifically, in this embodiment, the Softplus activation function, Softplus(x) = ln(1 + e^x), is smooth and differentiable everywhere, which improves the nonlinear expressive power of the deep neural network. Feature expression on the point cloud data yields 256-, 64- and 8-dimensional features, which describe the details of the physical entity in full. Dimensionality reduction based on LargeVis speeds up training. When the reduced features are clustered, the adaptive clustering algorithm requires no manually specified number of cluster centers; the number is adjusted automatically from the statistical information of the data, which reduces the loss of accuracy caused by insufficient manual experience. The clustering result is then used as pseudo-labels and the weight parameters of the network are updated by back-propagation, after which the network predicts the pseudo-labels again. These two processes are executed alternately, yielding flexible and efficient laser point cloud modeling.
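A minimal Python sketch of one such alternation follows. It assumes PyTorch for the network; scikit-learn's PCA stands in for LargeVis (no particular LargeVis binding is assumed here), and DBSCAN stands in for the unnamed adaptive clustering algorithm because it likewise needs no preset number of clusters. The layer sizes, `eps` and `min_samples` values are illustrative rather than taken from the patent.

```python
import torch
import torch.nn as nn
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA


class PointFeatureNet(nn.Module):
    """Point-wise feature extractor with Softplus activations (256 -> 64 -> 8 dims)."""

    def __init__(self, in_dim: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.Softplus(),
            nn.Linear(256, 64), nn.Softplus(),
            nn.Linear(64, 8), nn.Softplus(),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.backbone(xyz)  # (N, 8) per-point features


def train_round(net: PointFeatureNet, xyz: torch.Tensor, optimizer: torch.optim.Optimizer):
    """One alternation: features -> dimensionality reduction -> clustering -> pseudo-labels -> back-propagation."""
    net.eval()
    with torch.no_grad():
        feats = net(xyz).cpu().numpy()
    low = PCA(n_components=2).fit_transform(feats)             # stand-in for LargeVis reduction
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(low)  # no preset number of clusters
    n_clusters = max(int(labels.max()) + 1, 1)

    head = nn.Linear(8, n_clusters)                   # fresh head: the cluster count may change each round
    criterion = nn.CrossEntropyLoss(ignore_index=-1)  # -1 marks DBSCAN noise points
    target = torch.as_tensor(labels, dtype=torch.long)

    net.train()
    optimizer.zero_grad()
    loss = criterion(head(net(xyz)), target)  # pseudo-labels supervise the network
    loss.backward()                           # only the backbone weights in `optimizer` are updated
    optimizer.step()
    return labels
```

Steps 110 and 120 then correspond to calling `train_round` repeatedly, with the labels predicted by the updated network feeding the next round.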
By organically combining the deep neural network with adaptive clustering, the method provided by this embodiment of the invention can flexibly and efficiently build a high-fidelity, highly reliable and high-precision three-dimensional model, greatly improving efficiency. The technical scheme can be applied effectively to vehicle-mounted and backpack laser radar scenarios. If the scene is large and the amount of point cloud data is correspondingly large, the components can be modeled one by one.
Optionally, in this embodiment, before step 110, as shown in fig. 1, the method further includes:
100. Preprocess the acquired point cloud data and panoramic image data.
Specifically, preprocessing the acquired point cloud data includes denoising, removing redundancy from, thinning and simplifying the acquired point cloud data, as sketched below.
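A minimal preprocessing sketch, assuming the Open3D library (the patent does not prescribe a particular toolkit) and illustrative parameter values:

```python
import open3d as o3d


def preprocess_point_cloud(path: str) -> o3d.geometry.PointCloud:
    pcd = o3d.io.read_point_cloud(path)
    # Denoise: drop points that lie far from their neighbours (statistical outlier removal).
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Remove redundancy, thin and simplify: voxel down-sampling keeps one representative point per voxel.
    return pcd.voxel_down_sample(voxel_size=0.05)
```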
Preprocessing the acquired panoramic image data specifically includes registering, associating and mapping the point cloud data and the panoramic image according to their spatial consistency, for example as sketched below.
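One common way to realize this mapping, assuming the point cloud is already expressed in the panorama's scanner-centred coordinate frame (an assumption of this sketch, not a requirement stated in the patent), is to project each point into the equirectangular image:

```python
import numpy as np


def project_to_panorama(points: np.ndarray, width: int, height: int) -> np.ndarray:
    """Map 3-D points (N, 3) to pixel coordinates (N, 2) of an equirectangular panorama."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-12
    lon = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(z / r, -1.0, 1.0))  # elevation in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return np.stack([u, v], axis=1)
```

Each point can then be associated with the panorama pixel at (u, v), for example to colour the cloud or transfer textures.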
Optionally, in this embodiment, after step 130, the method further includes:
140. Establish a three-dimensional mesh model for each part, one by one, according to steps 110, 120 and 130;
150. stitch all the three-dimensional mesh models into an overall three-dimensional model through their overlapping regions or common points, as sketched below.
In this embodiment, a very large scene is modeled in blocks by component, which reduces the loading pressure on the application platform when big data and large scenes are involved.
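A sketch of stitching two part models through their overlapping region, again assuming Open3D; ICP refinement on the overlap is one common choice here rather than a step prescribed by the patent, and the correspondence threshold is illustrative:

```python
import open3d as o3d


def stitch(part_a: o3d.geometry.PointCloud,
           part_b: o3d.geometry.PointCloud,
           threshold: float = 0.05) -> o3d.geometry.PointCloud:
    # Refine the relative pose using common points in the overlapping region.
    result = o3d.pipelines.registration.registration_icp(part_b, part_a, threshold)
    part_b.transform(result.transformation)
    # Merge the aligned parts; the combined cloud can then be meshed into one overall model.
    return part_a + part_b
```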
The following description takes the use of the method to build a three-dimensional model of a digital twin scene in a city brain as an example. The workflow specifically comprises:
(1) Survey the site to determine the project environment and purpose, understand the conditions of the work site, and decide the scanning route, the station positions and their number, the measurement range and the scanning resolution.
(2) Preprocess the acquired point cloud data and image data: apply a noise-reduction algorithm to remove erroneous points and points with gross errors from the original point cloud, and geometrically correct the scanned panoramic images.
(3) Register, associate and map the three-dimensional point cloud and the panoramic image.
(4) Build the three-dimensional model using the DNNPCGM technique provided by this patent.
(5) Apply the corresponding textures to the built three-dimensional model.
(6) Export the three-dimensional model.
(7) Load the three-dimensional model into the application platform.
Fig. 2 is a block diagram illustrating a structure of a laser point cloud modeling system applied to a city brain according to an exemplary embodiment of the present invention.
Referring to fig. 2, the system includes:
a constructing and updating module, configured to construct a deep neural network, extract features from the acquired point cloud data with a Softplus activation function, reduce the dimensionality of the resulting features with LargeVis, cluster the reduced features with an adaptive clustering algorithm, take the clustering result as pseudo-labels, and update the weight parameters of the network by back-propagation;
a prediction module, configured to have the network with updated weight parameters predict the pseudo-labels again;
and a modeling module, configured to call the constructing and updating module and the prediction module alternately until the laser point cloud modeling is complete.
Optionally, in this embodiment, as shown in fig. 2, the system further includes:
and the preprocessing module is used for preprocessing the acquired point cloud data and the panoramic image data.
Optionally, in this embodiment, the preprocessing module specifically includes:
and the first preprocessing unit is used for denoising, redundancy removing, thinning and simplifying the acquired point cloud data.
Optionally, in this embodiment, the preprocessing module specifically includes:
and the second preprocessing unit is used for registering, associating and mapping the point cloud data and the panoramic image according to the space consistency.
Optionally, in this embodiment, the system further includes:
the building module is used for calling the building and updating module, the predicting module and the modeling module to build a three-dimensional grid model of each part one by one;
and the splicing module is used for splicing all the three-dimensional grid models of each part into an integral three-dimensional model through an overlapping area or a common point.
The laser point cloud modeling system applied to the urban brain provided by the embodiment of the invention can quickly, conveniently and accurately establish a 3D model (grid surface edge) for a physical object in a digital space, ensures that a twin model has the characteristics of high fidelity, high reliability and high precision, and is particularly suitable for scenes with high-precision application requirements.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 3 is a schematic diagram illustrating a computing device according to an exemplary embodiment of the present invention.
Referring to fig. 3, computing device 300 includes memory 310 and processor 320.
The processor 320 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 310 may include various types of storage units, such as system memory, read-only memory (ROM) and a persistent storage device. The ROM may store static data or instructions for the processor 320 or other modules of the computer. The persistent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is used as the persistent storage device; in other embodiments, the persistent storage device may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Furthermore, the memory 310 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. In some embodiments, the memory 310 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM or dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card), a magnetic floppy disk, or the like. The computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted wirelessly or by wire.
The memory 310 has stored thereon executable code that, when processed by the processor 320, may cause the processor 320 to perform some or all of the methods described above.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out some or all of the steps of the above-described method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the invention.
The aspects of the invention have been described in detail hereinabove with reference to the drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required by the invention. In addition, it can be understood that the steps in the method according to the embodiment of the present invention may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device according to the embodiment of the present invention may be combined, divided, and deleted according to actual needs.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
While embodiments of the present invention have been described above, the above description is illustrative, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. A laser point cloud modeling method applied to a city brain, characterized by comprising the following steps:
step 1, constructing a deep neural network, extracting features from the acquired point cloud data with a Softplus activation function, reducing the dimensionality of the resulting features with LargeVis, clustering the reduced features with an adaptive clustering algorithm, taking the clustering result as pseudo-labels, and updating the weight parameters of the network by back-propagation;
step 2, predicting the pseudo-labels again with the network whose weight parameters have been updated;
and step 3, executing step 1 and step 2 alternately until the laser point cloud modeling is complete.
2. The method according to claim 1, further comprising, before step 1:
preprocessing the acquired point cloud data and panoramic image data.
3. The method according to claim 2, wherein preprocessing the acquired point cloud data specifically comprises:
denoising, removing redundancy from, thinning and simplifying the acquired point cloud data.
4. The method according to claim 2, wherein preprocessing the acquired panoramic image data specifically comprises:
registering, associating and mapping the point cloud data and the panoramic image according to their spatial consistency.
5. The method according to any one of claims 1 to 4, further comprising, after step 3:
step 4, establishing a three-dimensional mesh model for each part, one by one, according to steps 1, 2 and 3;
and step 5, stitching all the three-dimensional mesh models into an overall three-dimensional model through their overlapping regions or common points.
6. A laser point cloud modeling system applied to a city brain, characterized by comprising:
a constructing and updating module, configured to construct a deep neural network, extract features from the acquired point cloud data with a Softplus activation function, reduce the dimensionality of the resulting features with LargeVis, cluster the reduced features with an adaptive clustering algorithm, take the clustering result as pseudo-labels, and update the weight parameters of the network by back-propagation;
a prediction module, configured to have the network with updated weight parameters predict the pseudo-labels again;
and a modeling module, configured to call the constructing and updating module and the prediction module alternately until the laser point cloud modeling is complete.
7. The system according to claim 6, further comprising:
a preprocessing module, configured to preprocess the acquired point cloud data and panoramic image data.
8. The system according to claim 7, wherein the preprocessing module specifically comprises:
a first preprocessing unit, configured to denoise, remove redundancy from, thin and simplify the acquired point cloud data.
9. The system according to claim 7, wherein the preprocessing module specifically comprises:
a second preprocessing unit, configured to register, associate and map the point cloud data and the panoramic image according to their spatial consistency.
10. The system according to any one of claims 6 to 9, further comprising:
an establishing module, configured to call the constructing and updating module, the prediction module and the modeling module to establish a three-dimensional mesh model for each part, one by one;
and a splicing module, configured to stitch all the three-dimensional mesh models of the parts into an overall three-dimensional model through their overlapping regions or common points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210258150.7A CN114596420A (en) | 2022-03-16 | 2022-03-16 | Laser point cloud modeling method and system applied to urban brain |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210258150.7A CN114596420A (en) | 2022-03-16 | 2022-03-16 | Laser point cloud modeling method and system applied to urban brain |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114596420A true CN114596420A (en) | 2022-06-07 |
Family
ID=81818590
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210258150.7A Pending CN114596420A (en) | 2022-03-16 | 2022-03-16 | Laser point cloud modeling method and system applied to urban brain |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114596420A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115242853A (en) * | 2022-09-20 | 2022-10-25 | 中关村科学城城市大脑股份有限公司 | Digital twin traffic order maintenance system based on three-dimensional point cloud |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110415342A (en) * | 2019-08-02 | 2019-11-05 | 深圳市唯特视科技有限公司 | A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors |
CN111612885A (en) * | 2020-04-10 | 2020-09-01 | 安徽继远软件有限公司 | Digital pole tower management and control method and system in intelligent power transmission internet of things environment |
CN111652964A (en) * | 2020-04-10 | 2020-09-11 | 合肥工业大学 | Auxiliary positioning method and system for power inspection unmanned aerial vehicle based on digital twinning |
US20210327119A1 (en) * | 2020-04-17 | 2021-10-21 | Occipital, Inc. | System for Generating a Three-Dimensional Scene Reconstructions |
CN112270345A (en) * | 2020-10-19 | 2021-01-26 | 西安工程大学 | Clustering algorithm based on self-supervision dictionary learning |
CN113869629A (en) * | 2021-08-13 | 2021-12-31 | 广东电网有限责任公司广州供电局 | Laser point cloud-based power transmission line safety risk analysis, judgment and evaluation method |
CN114120110A (en) * | 2021-11-22 | 2022-03-01 | 中国科学院紫金山天文台 | Multi-granularity calculation method for airborne laser point cloud classification of hybrid scene |
CN114004938A (en) * | 2021-12-27 | 2022-02-01 | 中国电子科技集团公司第二十八研究所 | Urban scene reconstruction method and device based on mass data |
Non-Patent Citations (1)
Title |
---|
ZHAO Liqiang, "Automatic classification method for tower point cloud data based on a convolutional auto-encoder network", Yunnan Electric Power Technology *
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20220607