CN115222884A - Space object analysis and modeling optimization method based on artificial intelligence - Google Patents
- Publication number
- CN115222884A CN115222884A CN202210830532.2A CN202210830532A CN115222884A CN 115222884 A CN115222884 A CN 115222884A CN 202210830532 A CN202210830532 A CN 202210830532A CN 115222884 A CN115222884 A CN 115222884A
- Authority
- CN
- China
- Prior art keywords
- data
- model
- point cloud
- modeling
- space object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
Sections: G—Physics; G06T—Image data processing or generation, in general; G06V—Image or video recognition or understanding; Y02T—Climate change mitigation technologies related to transportation.
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70—Denoising; Smoothing
- G06T5/80—Geometric correction
- G06T7/13—Edge detection
- G06T7/50—Depth or shape recovery
- G06T7/60—Analysis of geometric attributes
- G06T7/70—Determining position or orientation of objects or cameras
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/762—Pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/82—Pattern recognition or machine learning using neural networks
- G06T2200/08—Indexing scheme involving all processing steps from image acquisition to 3D model generation
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/20221—Image fusion; Image merging
- G06V2201/07—Target detection
- Y02T10/40—Engine management systems
Abstract
The invention provides an artificial-intelligence-based space object analysis and modeling optimization method, in which space object analysis comprises edge detection, contour detection and target detection. Image enhancement preprocessing is first applied to the target image: operations such as color inversion, smoothing and morphological denoising are performed, and the space object contour is extracted. A YOLOv3 network detects objects peripheral to the space object, completing the peripheral information of the model and yielding the corresponding model parameters. The modeling optimization method combines a three-dimensional reconstruction method based on image recognition with one based on laser scanning; the models obtained by the two methods are combined with database data, and data fusion by digital twin technology optimizes the model parameters, thereby optimizing the space object model. The method can reconstruct a three-dimensional model from the information in multiple two-dimensional images and complete model optimization, improving both the accuracy and the efficiency of the space object modeling process.
Description
Technical Field
The invention relates to the field of deep learning three-dimensional modeling, in particular to a space object analysis and modeling optimization method based on artificial intelligence.
Background
With the progress of science and technology and the rapid development of modern artificial intelligence, modeling methods have entered a new stage of automation, intelligence and customization, and increasingly aim at constructing complex objects with higher fidelity.
Modern modeling methods based on laser point cloud data mainly take two forms. In the first, the overall contour line of the space object is extracted manually from the laser point cloud data, and the accuracy of the spatial structure is improved by manually optimizing the contour. In the second, a machine learning algorithm layers the point cloud data, clusters similar layers into distinct components, and models each component separately to realize modeling of the overall contour. Both methods, however, must traverse and analyze all the input point cloud data, which consumes considerable time and reduces modeling efficiency.
Through virtual simulation, the structural characteristics and performance of a real-world entity are digitally described and modeled, and visualization of the virtual data completes the mapping between the real model and the virtual model. The digital twin is an emerging technology that integrates the Internet of Things, virtual reality, simulation and other digital technologies to realize mapping between the real world and the virtual world. Although digital twins are widely applied in manufacturing, medical care, city management and other fields, the concept is so broad that China still lacks a corresponding standard system, applicable criteria, implementation requirements, and reference standards for supporting tools and technical platforms. The application of digital twin technology thus remains incomplete, and its use for virtual model optimization needs to be expanded.
Aiming at these defects in the application of the prior art, the invention uses model parameters obtained from space object analysis and from different modeling methods, combines them with database data, and adopts digital twin technology to realize model optimization, thereby extending the application of digital twins to virtual model optimization.
Disclosure of Invention
To address the defects in the prior art, the invention provides a space object analysis and modeling optimization method based on artificial intelligence. The method is suited to application scenarios involving three-dimensional reconstruction and analysis of space objects, and specifically comprises an artificial-intelligence-based space object analysis method, an artificial-intelligence-based space object modeling method, and an artificial-intelligence-based model optimization method.
in a first aspect, the method for spatial object analysis based on artificial intelligence comprises the following steps:
step a1, preprocessing image information of a space object target acquired in advance, optimizing a space object model and acquiring corresponding parameter information through space object analysis, wherein the method comprises the following steps:
carrying out color-inversion image enhancement preprocessing on the image information to improve the efficiency and accuracy of the subsequent contour detection algorithm on targets such as space object windows; smoothing the image information and refining the image by pixel interpolation, increased pixel resolution and similar methods; highlighting frames in the image after edge detection; denoising the image information with the morphological opening operation (erosion followed by dilation), removing edge noise and improving the efficiency and accuracy of the subsequent edge detection of the space object edges;
performing edge detection on the preprocessed image information with the Laplacian operator, which highlights fine detail in the image and improves model accuracy; then processing the preprocessed image with a contour detection method based on edge detection, ignoring the influence of the background, of texture inside the target and of noise, convolving the image with a differential operator, and adjusting the BGR parameter range for the optimal detection result, completing contour extraction of the space object.
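The preprocessing steps above (color inversion and the morphological opening, i.e. erosion followed by dilation) can be sketched in plain NumPy; the function names and the 3×3 structuring element are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def invert(img):
    """Reverse-color (negative) enhancement of an 8-bit grayscale image."""
    return 255 - img

def erode(b):
    """Binary erosion with a 3x3 square structuring element."""
    h, w = b.shape
    p = np.pad(b, 1, constant_values=True)   # pad True so borders are not eaten
    out = np.ones_like(b)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(b):
    """Binary dilation with a 3x3 square structuring element."""
    h, w = b.shape
    p = np.pad(b, 1, constant_values=False)
    out = np.zeros_like(b)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def opening(b):
    """Morphological opening: erosion then dilation (removes small specks)."""
    return dilate(erode(b))

noisy = np.zeros((7, 7), dtype=bool)
noisy[2:5, 2:5] = True      # a solid 3x3 object: survives opening
noisy[0, 6] = True          # an isolated noise pixel: removed by opening
cleaned = opening(noisy)
print(cleaned[0, 6], cleaned[3, 3])   # False True
```

Opening removes edge noise smaller than the structuring element while restoring the shape of larger regions, which is why it precedes edge detection here.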
Step a2, detecting objects peripheral to the space object (such as windows) with a YOLOv3 target detection network model, obtaining analysis data for the space object and improving the modeling accuracy of the peripheral objects of the space object model, comprises the following:
Target detection is performed on the preprocessed image information with the pre-trained target detection network model; window position parameters obtained from this network improve the precision of parameters such as window position and size in the model. A second pre-trained target detection network detects objects peripheral to the space object in the preprocessed image, and the necessary target information is selected to complete the peripheral information of the space object model.
Step a3, importing the space object analysis data into the data model base in real time for reference in optimizing model details;
the step a2 comprises the following steps:
YOLOv3 target detection, when implemented, includes: obtaining a prediction from the features, decoding the prediction, and sorting the predicted bounding-box scores with non-maximum suppression screening. The finally displayed bounding box is obtained by decoding the prediction with the following formulas:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
Pr(object) · IOU(b, object) = σ(t_o)
wherein c_x, c_y respectively represent the number of grid cells by which the cell containing the preset frame is offset from the top-left corner of the grid in the x-axis and y-axis directions; p_w, p_h respectively represent the width and height of the prior box; t_x, t_y respectively represent the offsets of the target center point from the top-left corner of the cell containing the preset frame in the x-axis and y-axis directions; t_w, t_h are the raw outputs determining the width and height of the predicted frame; σ represents the activation (sigmoid) function; Pr(object) represents the probability that the prior box contains an object; IOU represents the intersection-over-union.
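A minimal sketch of this decoding step, assuming the standard YOLOv3 convention in which the prior box size is scaled by an exponential of the raw width/height outputs (function names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h):
    """Decode one YOLOv3 prediction into center coordinates and size (grid units)."""
    b_x = sigmoid(t_x) + c_x   # center x: cell index plus sigmoid-bounded offset
    b_y = sigmoid(t_y) + c_y   # center y
    b_w = p_w * np.exp(t_w)    # width: prior scaled by exp of the raw output
    b_h = p_h * np.exp(t_h)    # height
    return b_x, b_y, b_w, b_h

# Raw outputs of zero land in the cell center with exactly the prior's size
bx, by, bw, bh = decode_box(0.0, 0.0, 0.0, 0.0, c_x=3, c_y=4, p_w=2.0, p_h=1.5)
print(bx, by, bw, bh)   # 3.5 4.5 2.0 1.5
```

The sigmoid keeps the predicted center inside its grid cell, which stabilizes training compared with an unbounded offset.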
In a second aspect, the artificial intelligence-based spatial object modeling method includes a three-dimensional reconstruction method based on deep learning and a three-dimensional reconstruction method based on laser scanning, and the three-dimensional reconstruction method based on deep learning includes the following steps:
step b1, determining the three-dimensional image information by position calculation from the two-dimensional information of pre-acquired continuous images of scenes adjacent to the space object target;
Camera parameter information is obtained through COLMAP by means of feature extraction; camera models of different complexity are distinguished; and the camera model, image data, pose of the input view, intrinsic parameters, sparse points, co-visibility relations and related information are determined;
photographs are taken in time sequence and features of adjacent images are matched; through feature matching, the physical environment information and the space object's own features are acquired;
Step b2, iteratively processing the acquired two-dimensional pictures with an incremental SfM (Structure from Motion) algorithm to obtain image information and perform the corresponding feature matching; from the resulting data, a preliminary sparse three-dimensional reconstruction of the real object is carried out, yielding the positions, intrinsic parameters and co-visibility relations of the different cameras, and finally the sparse point cloud of the space object and the camera pose corresponding to each view;
Step b3, acquiring basic physical information and model projections with the incremental SfM algorithm, then reconstructing the sparse point cloud information in three dimensions with an MVS (Multi-View Stereo) algorithm to obtain the depth map and dense point cloud information of the space object, comprises the following:
performing mesh reconstruction by the Poisson method to obtain a colored mesh model and complete the three-dimensional reconstruction of the space object; constructing the matching cost through AA-RMVSNet to realize depth estimation, and extracting points with consistent depth to realize dense reconstruction;
the image is undistorted to reduce the parallax estimation error caused by large edges when COLMAP constructs matching costs under the joint constraint of photometric and geometric consistency; the mapping from cost to depth value is learned directly by a neural network, and the expected depth along the depth direction of the probability volume is taken as the pixel's depth estimate, smoothing the interior of the different parts of the depth image and reducing depth discontinuities and other artifacts;
COLMAP estimates the depth and normal values of each view simultaneously using photometric consistency, then optimizes the depth map using geometric consistency, yielding depth maps and normal vector maps under both photometric and geometric constraints; the depth maps are fused by registration and the point cloud recovered by projection, completing dense reconstruction and yielding the dense point cloud information of the space object;
step b1 comprises:
In the position calculation, the relationship between the camera coordinate system and the world coordinate system is expressed by a rotation matrix R and a translation vector t, so that the coordinates of each point are unified into the same coordinate system, with the formula:

[X_c, Y_c, Z_c]^T = R · [X_w, Y_w, Z_w]^T + t

Its homogeneous coordinates can be expressed as:

[X_c, Y_c, Z_c, 1]^T = [[R, t], [0^T, 1]] · [X_w, Y_w, Z_w, 1]^T

wherein X_c, Y_c, Z_c are the coordinates, in the camera coordinate system, of any point of the studied object in the image; X_w, Y_w, Z_w represent the transformed coordinates of the selected studied object in the world coordinate system; and t is the three-dimensional translation vector:

t = [t_x, t_y, t_z]^T, 0 = [0, 0, 0]^T

wherein t_x, t_y, t_z are the distances to be translated along the x-axis, y-axis and z-axis respectively.
The rotation matrix R is a 3 × 3 orthogonal matrix whose elements satisfy:

R · R^T = I, det(R) = 1
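A small NumPy sketch of the rigid transform and its homogeneous 4×4 form (function names are illustrative):

```python
import numpy as np

def world_to_camera(X_w, R, t):
    """Map a world-frame point into the camera frame: X_c = R @ X_w + t."""
    return R @ X_w + t

def homogeneous(R, t):
    """Build the 4x4 homogeneous transform [[R, t], [0^T, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Example: 90-degree rotation about the z-axis plus a translation along x
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
p = np.array([1.0, 0.0, 0.0])
print(world_to_camera(p, R, t))          # [1. 1. 0.]
print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthogonal
```

The homogeneous form lets a chain of rotations and translations be composed as a single matrix product.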
In step b2, the corresponding feature matching uses the following formula:

f_nn = arg min ||f_d − f′_d||_2, f′ ∈ F(J)

wherein F(J) represents the feature points of image J; f_nn represents the nearest neighboring feature vector; f_d represents a point on the actual picture; and f′_d represents a certain feature point of the selected image;
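The matching criterion above is a brute-force nearest-neighbor search in descriptor space under the L2 norm; a minimal sketch with illustrative names:

```python
import numpy as np

def nearest_neighbor(f_d, F_J):
    """Return the candidate in F_J minimizing ||f_d - f'_d||_2, and that distance."""
    dists = np.linalg.norm(F_J - f_d, axis=1)   # one L2 distance per candidate
    i = int(np.argmin(dists))
    return F_J[i], float(dists[i])

query = np.array([1.0, 1.0])
candidates = np.array([[0.0, 0.0],
                       [1.0, 2.0],
                       [5.0, 5.0]])
match, dist = nearest_neighbor(query, candidates)
print(match, dist)   # [1. 2.] 1.0
```

In practice approximate search structures replace the linear scan for large descriptor sets, but the matching criterion is the same.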
step b3 comprises the following steps:
In constructing the matching cost through AA-RMVSNet to realize depth estimation, a normalized cross-correlation measure of the following standard form is used:

NCC(I, I′) = Σ_p (I(p) − Ī)(I′(p) − Ī′) / √( Σ_p (I(p) − Ī)² · Σ_p (I′(p) − Ī′)² )

wherein NCC represents the measure of photometric consistency between images; l represents a feature of the image; d* represents the depth of the best-fit plane and n* its normal vector; and NCC(I, I′) represents the correlation between the reference patch I and the source patch I′.
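A minimal sketch of the NCC measure under its standard definition (mean-centered correlation normalized by the patch norms); the function name is illustrative:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally-sized patches."""
    a = patch_a.ravel() - patch_a.mean()
    b = patch_b.ravel() - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(ncc(a, a))            # ~1.0: identical patches correlate perfectly
print(ncc(a, 2.0 * a + 5))  # ~1.0: NCC is invariant to affine intensity change
```

This invariance to gain and offset is why NCC, rather than raw intensity difference, is used as the photometric consistency measure across views with different exposure.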
2) The three-dimensional reconstruction method based on laser scanning comprises the following steps:
Step c1, carrying out data acquisition on the space object with a ScanStation C10 three-dimensional laser scanner or another terrestrial three-dimensional laser scanner to obtain laser scanning point cloud data or DSM (Digital Surface Model) data of the space building object, adopting the following laser radar equation:

P_R = P_T · (dA / (θ_T² · R²)) · (ρ / Ω) · (π · D² / (4R²)) · η_Atm · η_Sys + P_b

wherein P_R is the received echo power, P_T is the emitted laser power, and P_b is the background radiation and noise power; R is the distance between the target and the radar, and θ_T is the transmit antenna field angle (beam divergence angle); ρ is the reflectivity of the target surface to the laser, dA is the target surface bin, and Ω is the solid angle into which the target scatters the light; D is the aperture of the receiving antenna, η_Atm is the two-way transmittance of the transmission medium, and η_Sys is the optical system transmittance;
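A small sketch evaluating a lidar range equation consistent with the variables listed above; the exact algebraic form and the parameter values are assumptions for illustration, with units taken as consistent:

```python
import math

def received_power(P_T, rho, dA, Omega, D, R, theta_T, eta_atm, eta_sys, P_b=0.0):
    """Received echo power: transmit power attenuated by footprint share,
    target scattering, receiver aperture, and medium/system transmittance."""
    footprint = dA / (theta_T ** 2 * R ** 2)       # share of the beam footprint on the bin
    receiver = math.pi * D ** 2 / (4.0 * R ** 2)   # solid angle subtended by the aperture
    return P_T * footprint * (rho / Omega) * receiver * eta_atm * eta_sys + P_b

# Doubling the range divides the echo power by 2^4 = 16 (two 1/R^2 factors)
p_near = received_power(1.0, 0.5, 1.0, math.pi, 0.1, 100.0, 1e-3, 0.9, 0.8)
p_far = received_power(1.0, 0.5, 1.0, math.pi, 0.1, 200.0, 1e-3, 0.9, 0.8)
print(p_near / p_far)   # 16.0
```

The 1/R^4 fall-off explains why long-range scans need either higher transmit power or a larger receiving aperture.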
step c2, preprocessing the laser point cloud data, and denoising the point cloud by using Gaussian filtering and Laplace algorithm;
When the Gaussian filtering method is used on partially ordered point cloud data, a weight is calculated for each laser point through Gaussian blurring, with the formula:

G(x, y) = (1 / (2πσ²)) · exp(−((x − μ)² + (y − μ)²) / (2σ²))

wherein G(x, y) is the weight of a point in the selected point cloud; x is the abscissa and y the ordinate of the point in the spatial coordinate system; μ is the mean and σ² the variance;
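The per-point Gaussian weight can be computed directly; a minimal sketch with illustrative values:

```python
import numpy as np

def gaussian_weight(x, y, mu_x, mu_y, sigma):
    """2-D Gaussian weight of a point (x, y) centered at (mu_x, mu_y)."""
    norm = 1.0 / (2.0 * np.pi * sigma ** 2)
    return norm * np.exp(-((x - mu_x) ** 2 + (y - mu_y) ** 2) / (2.0 * sigma ** 2))

# The weight peaks at the center and decays with distance, so distant
# (likely noisy) neighbours contribute little to the filtered value.
w0 = gaussian_weight(0.0, 0.0, 0.0, 0.0, sigma=1.0)
w1 = gaussian_weight(3.0, 0.0, 0.0, 0.0, sigma=1.0)
print(w0 > w1)   # True
```

In a filter, each point's denoised value is the weight-normalized average of its neighborhood under these weights.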
When the Laplace algorithm is used to denoise scattered point cloud data, the point cloud data is Laplace-filtered as follows.
The first differential I_x(x, y) in the x direction and the first differential I_y(x, y) in the y direction are calculated according to the following equations:

I_x(x, y) = [I(x+1, y) − I(x, y)] / [(x+1) − x] = I(x+1, y) − I(x, y)
I_y(x, y) = [I(x, y+1) − I(x, y)] / [(y+1) − y] = I(x, y+1) − I(x, y)

The second-order differential I_xx(x, y) is calculated according to the following equation:

I_xx(x, y) = I(x+1, y) + I(x−1, y) − 2·I(x, y)

The denoising expression of the Laplace algorithm is:

∇²I(x, y) = I_xx(x, y) + I_yy(x, y) = I(x+1, y) + I(x−1, y) + I(x, y+1) + I(x, y−1) − 4·I(x, y)
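The finite differences above can be checked directly on a small grid; a minimal sketch:

```python
def I_xx(I, x, y):
    """Second difference in x: I(x+1,y) + I(x-1,y) - 2*I(x,y)."""
    return I[x + 1][y] + I[x - 1][y] - 2 * I[x][y]

def I_yy(I, x, y):
    """Second difference in y: I(x,y+1) + I(x,y-1) - 2*I(x,y)."""
    return I[x][y + 1] + I[x][y - 1] - 2 * I[x][y]

def laplace(I, x, y):
    """Discrete Laplacian: sum of the 4 neighbours minus 4 times the centre."""
    return I_xx(I, x, y) + I_yy(I, x, y)

# A linear ramp has zero Laplacian; an isolated spike gives a strong response,
# which is how outlier points are flagged for denoising.
ramp = [[x + y for y in range(5)] for x in range(5)]
spike = [[0] * 5 for _ in range(5)]
spike[2][2] = 10
print(laplace(ramp, 2, 2), laplace(spike, 2, 2))   # 0 -40
```

Smoothly varying data yields small Laplacian magnitudes, so large responses mark the noise points to suppress.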
Step c3, detecting boundary points, detecting sharp points by a method based on the mean curvature of the point cloud, and completing space object modeling through data segmentation by point cloud segmentation, comprises the following:
generating a preliminary outline of a space object model by adopting a deep learning algorithm based on the obtained laser scanning point cloud data or DSM data;
extracting characteristic angular points from the laser scanning point cloud data or DSM data according to the preliminary outline of the model, layering the laser scanning point cloud data, clustering similar layers, and generating a corresponding model according to the clustered point cloud data;
The laser scanning point cloud data or DSM data is segmented with a plane segmentation method based on normal vector and distance constraints to separate the different geometric surfaces, and the overall model of the space object is refined through feature-record self-learning.
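The normal-and-distance plane constraint can be sketched with a least-squares plane fit via SVD; the function names and the 0.05 distance threshold are illustrative assumptions:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set: unit normal + centroid.
    The singular vector of the smallest singular value is the plane normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid

def on_plane(point, normal, centroid, max_dist=0.05):
    """Distance constraint: keep points within max_dist of the fitted plane."""
    return abs((point - centroid) @ normal) <= max_dist

# Four points on the z = 0 plane
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0]])
n, c = fit_plane(pts)
print(np.allclose(np.abs(n), [0.0, 0.0, 1.0]))         # True: normal is +/- z
print(bool(on_plane(np.array([0.5, 0.5, 0.01]), n, c)))  # True: near the plane
print(bool(on_plane(np.array([0.5, 0.5, 1.0]), n, c)))   # False: off the plane
```

Grouping points that share a similar normal and satisfy the distance constraint separates the geometric surfaces before per-surface modeling.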
3) The model optimization method based on artificial intelligence comprises the following steps:
Step d1, performing omnidirectional data acquisition, combining the multi-source data with the model database, realizing multi-source data fusion and matching through coordinate conversion and data registration, and importing the data into the database. Specifically: multi-source acquisition equipment, including laser radars, collects all-around data on the structure, materials and other aspects of physical models in different terrains and environments; the acquired multi-source data is combined with the cloud model database so that BIM, oblique photography models, laser point cloud models and other GIS data are fused and matched; coordinate conversion and data registration unify the data into the same coordinate system, realizing coordinate projection and conversion of the various three-dimensional data; the specific data and cloud data are thereby standardized and imported into the database in real time.
Step d2, mining the spatio-temporal correlation among the data to the maximum extent with Pearson correlation analysis, K-means, Apriori and other algorithms, and weight-fusing the model three-dimensional reconstruction result with the laser point cloud modeling result to complete the model optimization;
wherein in performing weighted fusion, the following formula is used:
pre=0.6pre1+0.4pre2
wherein pre is the optimized coordinate result obtained by weighted fusion of the coordinate results of three-dimensional reconstruction and laser point cloud modeling; pre1 and pre2 are the coordinate result of model three-dimensional reconstruction and the coordinate result of laser point cloud modeling respectively; the coefficients assign a higher weight to the model predicted to be more accurate, with weight values ranging between 0 and 1;
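A minimal sketch of the weighted fusion with the 0.6/0.4 coefficients from the formula above (the variable names are illustrative):

```python
def fuse(pre1, pre2, w1=0.6, w2=0.4):
    """Weighted fusion of two coordinate estimates; weights must sum to 1."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return [w1 * a + w2 * b for a, b in zip(pre1, pre2)]

# pre1: image-based three-dimensional reconstruction coordinates (weight 0.6);
# pre2: laser point cloud modeling coordinates (weight 0.4)
fused = fuse([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])
print(fused)   # ~[10.8, 19.2, 30.0]
```

Weighting the historically more accurate source more heavily pulls the fused coordinate toward it while still letting the other source correct systematic bias.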
And step d3, store the optimized model into the model database, re-identify the overall parameters of the model, extract the optimized data and import them into the database, providing data accumulation for subsequent identification of similar models.
Compared with the existing modeling method, the method has the following advantages:
1. The method deploys a plurality of sensors based on the Internet of Things to acquire physical entity metadata in real time, optimizes synchronous data transmission based on the digital twin, and realizes the mapping of the physical object into the virtual space.
2. The method deeply mines the model data based on an artificial intelligence optimization algorithm; the data are detected, analyzed, calculated and updated in real time, and the optimization of the model parameters is completed in combination with the big data management decision module, making the modeling process more efficient.
3. On the basis of the three-dimensional reconstruction model, the invention combines sensor information such as the laser point cloud with database information to realize multi-source data fusion registration.
Beneficial effects: the invention provides a space object analysis and modeling optimization method based on artificial intelligence, which has advantages such as high modeling speed and short target identification time. The method mainly combines SFM and MVS to realize rapid three-dimensional reconstruction, and adopts an artificial intelligence algorithm to carry out target detection on the three-dimensional reconstruction result, effectively improving the efficiency and accuracy of modeling. Multi-source data fusion is carried out on the model information based on digital twin technology, so that comprehensive modeling and optimization of the space building object are realized and the information is greatly expanded.
Drawings
The advantages of the spatial object analysis and three-dimensional reconstruction modeling optimization techniques of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of a spatial object analysis method based on artificial intelligence according to the present invention.
FIG. 2 is a schematic structural flow diagram of a three-dimensional reconstruction modeling optimization technique based on artificial intelligence provided by the invention.
Detailed Description
The invention provides a method for carrying out target detection on a three-dimensionally reconstructed object using an artificial intelligence algorithm, together with a method for modeling and optimizing the three-dimensional reconstruction; these comprise a space object analysis method based on target detection and a three-dimensional reconstruction model optimization method based on deep learning. The specific flow is as follows:
1. the space object analysis method based on target detection comprises the following steps:
Step a1, continuously shoot the target from multiple angles with no dead angles to obtain a series of images of the target object, then perform image-enhancement preprocessing: subtract each primary color value from white (255) to complete the inverse color processing; apply pixel interpolation to increase the pixel resolution and refine the image, completing the smoothing processing; finally, apply a morphological opening operation to the noisy image to remove noise from the background, completing the morphological processing;
Performing object analysis on the image based on the above work: process the image with the Laplacian operator and the Sobel operator and calculate the change in image gray values to complete edge detection; then suppress the influence of background and noise interference in the digital image containing the target and the background, realizing contour detection for target contour extraction; finally, calculate the model parameters of the target to complete the object analysis.
When the image is subjected to the reverse color processing, the formula used is as follows:
F(i,j)=255-F(i,j)
wherein F(i, j) is the gray value at (i, j), the coordinates of an arbitrary point in the image.
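A minimal sketch of the inverse color step on a grayscale image stored as a nested list (the function name and the sample image are illustrative assumptions):

```python
def invert(image):
    # Apply F(i, j) = 255 - F(i, j) to every pixel of a grayscale image.
    return [[255 - p for p in row] for row in image]

img = [[0, 128], [255, 64]]
inv = invert(img)  # [[255, 127], [0, 191]]
```

Applying the operation twice returns the original image, since 255 − (255 − p) = p.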
In the process of processing the image by using the Laplacian operator and the Sobel operator to complete edge detection, the following formula is used:
1) Discrete Laplacian operator for two variables:
∇²f(x, y) = f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4f(x, y)
2) Sobel operator:
G = (Gx² + Gy²)^(1/2), θ = arctan(Gy/Gx)
wherein G represents the gradient magnitude at the point, θ represents the gradient direction, and Gx and Gy represent the horizontal and vertical edge responses of the image, respectively.
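To illustrate the Sobel computation of G and θ, a sketch evaluating one pixel with the standard 3×3 Sobel kernels (the indexing convention img[y][x] and the sample image are assumptions, not from the patent):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, x, y):
    # Apply the 3x3 Sobel kernels centered at (x, y); returns the
    # gradient magnitude G and gradient direction theta.
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy), math.atan2(gy, gx)

# A vertical step edge: the gradient points horizontally, so theta = 0.
img = [[0, 0, 255], [0, 0, 255], [0, 0, 255]]
g, theta = sobel_at(img, 1, 1)
```

Here g is large (a strong edge response) and theta is 0, confirming that the edge's gray-value change runs along the x-axis.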
And step a2, detect targets such as objects and windows on the periphery of the space object through the YoloV3 target detection network model, further improving the modeling accuracy for the peripheral objects, windows and the like of the space object model.
The simulation is performed by taking target detection for a window as an example:
YoloV3 target detection is implemented in three steps: obtaining the prediction result from the features, decoding the prediction result, and sorting the predicted bounding-box scores and carrying out non-maximum suppression screening; the finally displayed bounding boxes are derived after decoding the prediction result, using the formulas:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
Pr(object) · IOU(b, object) = σ(t_o)
wherein c_x and c_y represent the number of grid cells between the top-left corner of the grid cell containing the preset box and the top-left corner of the image, in the x-axis and y-axis directions respectively; p_w and p_h represent the width and height of the prior box; t_x and t_y represent the offsets of the target center point relative to the top-left corner of the grid cell containing the preset box, in the x-axis and y-axis directions; t_w and t_h determine the width and height of the predicted box; σ represents the activation (sigmoid) function; Pr(object) represents the probability that the prior box contains an object; IOU represents the intersection-over-union ratio.
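A minimal sketch of this decoding step for a single prediction (a real YoloV3 head decodes whole tensors at once; the function name and sample values are hypothetical):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    # YoloV3-style decoding: the sigmoid keeps the center offset inside
    # its grid cell; the exponentials scale the prior box dimensions.
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# Zero raw outputs: sigmoid(0) = 0.5 and exp(0) = 1, so the box sits at
# the center of grid cell (3, 4) with exactly the prior's size.
bx, by, bw, bh = decode_box(0.0, 0.0, 0.0, 0.0, cx=3, cy=4, pw=1.5, ph=2.0)
```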
After target detection, the data result for one class of detected object is obtained: each window frame is about 1.2 meters by about 2 meters, i.e. the area of one window is about 2.4 square meters. The total window area is 1713.6 square meters (windows of special sizes are calculated individually).
And step a3, import the data from the completed object analysis into the database, integrate it with related data such as terrain, structure and materials so as to reflect the real state of the entity, and finally import the information into the real-time database as a data resource for model optimization.
2. The three-dimensional reconstruction optimization method based on deep learning comprises the following steps:
step b1, a three-dimensional reconstruction method based on image recognition:
Firstly, based on a deep learning pipeline, different sparse point cloud repair and modeling results such as MVS, Cas-MVSNet and MeshLab are compared; after comprehensive comparison of running time, required number of pictures, repair difficulty and the like, the MVS algorithm is found to be more suitable for reconstruction of outdoor entities. Then, the angle and position of the camera are continuously updated to obtain a series of multi-view continuous images of adjacent scenes;
When the position is calculated, the relation between the camera coordinate system and the world coordinate system can be represented by a rotation matrix R and a translation vector t; through this process the coordinates of each point can be unified into the same coordinate system, as follows.
The homogeneous coordinates can be expressed as:
[X_c Y_c Z_c 1]^T = [R t; 0^T 1] · [X_w Y_w Z_w 1]^T
wherein X_c, Y_c, Z_c are the coordinates, in the camera coordinate system, of any point of the studied object in the image; X_w, Y_w, Z_w represent the coordinates of the selected study object in the world coordinate system; t is the three-dimensional translation vector:
t = [t_x t_y t_z]^T, 0 = [0 0 0]^T
wherein t_x, t_y, t_z are the distances to be translated along the x-axis, y-axis and z-axis directions;
the rotation matrix R is a 3 × 3 orthogonal matrix whose elements r_ij satisfy R · R^T = I (i.e. Σ_k r_ik · r_jk = δ_ij) and det(R) = 1, where r denotes an element of the matrix R.
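A sketch of applying the rigid transform defined by R and t to one point (pure Python; the example values are hypothetical, and the transform direction shown, camera to world, is one common convention):

```python
import math

def apply_rigid_transform(R, t, p):
    # p_out = R @ p + t, with R a 3x3 rotation matrix (orthogonal,
    # det = 1) and t a 3-D translation vector.
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

# Example: 90-degree rotation about the z-axis plus a unit x-translation.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
t = [1.0, 0.0, 0.0]
p = apply_rigid_transform(R, t, [1.0, 0.0, 0.0])  # approximately [1, 1, 0]
```

The point on the x-axis rotates onto the y-axis and is then shifted by t, illustrating how all points can be brought into one shared coordinate system.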
Step b2, carry out iterative processing of the large number of acquired two-dimensional pictures using incremental SFM to obtain image information and perform the corresponding feature matching; carry out a preliminary sparse three-dimensional reconstruction of the real object from the obtained data, and obtain the poses, intrinsic parameters and co-visibility relations of the different cameras;
when the SFM is used for carrying out feature matching of sparse three-dimensional reconstruction, the formula is as follows:
f_nn = arg min_{f' ∈ F(J)} ||f_d − f'_d||₂
wherein F(J) represents the feature points of image J; f_nn represents the nearest neighboring feature vector; f_d represents a point on the actual picture; f'_d represents a feature point of the selected image;
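The nearest-neighbor criterion f_nn = arg min ||f_d − f'_d||₂ can be sketched as follows (the toy 2-D descriptors are illustrative; real SFM descriptors are high-dimensional, e.g. 128-D SIFT vectors):

```python
def nearest_neighbor(f_d, candidates):
    # Return the candidate feature minimizing the Euclidean distance to
    # f_d; the squared norm has the same minimizer, so no sqrt is needed.
    return min(candidates,
               key=lambda f: sum((a - b) ** 2 for a, b in zip(f_d, f)))

query = (1.0, 2.0)
feats = [(0.0, 0.0), (1.1, 2.1), (5.0, 5.0)]
match = nearest_neighbor(query, feats)  # (1.1, 2.1)
```

In practice a ratio test between the nearest and second-nearest candidates is commonly added to reject ambiguous matches before triangulation.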
Step b3, acquire the dense point cloud using MVS (Multi-View Stereo), and perform distortion removal on the images to prevent large differences in view-angle estimation, thereby reducing errors. Then select and input, by view angle, several feature-matched image pictures as the reference image and the candidate set, completing the extraction of depth features. Construct the matching cost of the reference image with AA-RMVSNet using the plane-sweep algorithm, and express the matching cost as a feature volume. Complete the cost aggregation of AA-RMVSNet by constructing a cost volume, a three-dimensional structure formed by stacking, in the depth direction, cost maps with the same length and width as the reference image. Then complete the mapping from cost to depth value through direct learning by the neural network, realizing depth estimation. Finally, extract the points with consistency in the three-dimensional image to realize dense reconstruction;
wherein the construction of the matching cost uses a normalized cross-correlation (NCC) measure of photometric consistency, of the general form:
NCC(I_i, I_j) = Σ(I_i − Ī_i)(I_j − Ī_j) / [Σ(I_i − Ī_i)² · Σ(I_j − Ī_j)²]^(1/2)
wherein NCC represents the measure of photometric consistency between images; l represents a feature of the image; d* represents the depth of the best-fit plane and n* its normal vector; NCC(I_i, I_j) is evaluated between the reference image and image I_j for feature l.
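A sketch of an NCC photometric-consistency measure on flattened patches (the patch values are hypothetical; this shows only the correlation measure itself, not the full AA-RMVSNet cost volume):

```python
def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches:
    # 1.0 means perfectly photo-consistent, -1.0 perfectly inverted.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

patch = [10.0, 20.0, 30.0, 40.0]
brighter = [15.0, 25.0, 35.0, 45.0]  # same structure, offset exposure
score = ncc(patch, brighter)          # close to 1.0
```

Because NCC subtracts the mean and normalizes the variance, the uniformly brighter patch still scores as fully consistent, which is why NCC is favored over raw intensity differences for multi-view matching.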
2) The three-dimensional reconstruction method based on laser scanning comprises the following steps:
Step c1, use a Scan Station C10 three-dimensional laser scanner or another terrestrial three-dimensional laser scanning instrument to acquire data of the space object, thereby acquiring laser point cloud data or DSM data;
the general lidar equation is as follows:
P_R = (P_T · D² · ρ · dA · η_Atm · η_Sys) / (R⁴ · θ_T² · Ω) + P_b
wherein P_R is the received echo power, P_T is the emitted laser power, and P_b is the background radiation and noise power; R is the distance between the target and the radar, and θ_T is the transmit antenna field angle (beam divergence angle); ρ is the reflectivity of the target surface to the laser, dA is the target surface bin, and Ω is the solid angle of light scattering from the target; D is the aperture (diameter) of the receiving antenna, η_Atm is the two-way transmittance of the transmission medium, and η_Sys is the transmittance of the optical system.
Step c2, preprocess the acquired point cloud data: denoise ordered or partially ordered point clouds using methods such as median filtering, mean filtering and Gaussian filtering, and denoise scattered point cloud data using the Laplace algorithm; repair point cloud holes in areas that could not be measured owing to occlusion, limitations of the measuring equipment and other factors; compress and register the point cloud data to complete the preprocessing;
When the Gaussian filtering method is used on partially ordered point cloud data, the weight of each point is calculated through Gaussian blurring, with the calculation formula:
G(x, y) = 1/(2πσ²) · exp(−[(x − μ)² + (y − μ)²] / (2σ²))
wherein G(x, y) refers to the weight of a point in the selected point cloud; x is the abscissa and y the ordinate of the point in the spatial coordinate system, μ is the mean, and σ is the standard deviation;
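A sketch of the per-point Gaussian weight used in Gaussian blurring (the function name and default parameters are illustrative assumptions):

```python
import math

def gaussian_weight(x, y, mu_x=0.0, mu_y=0.0, sigma=1.0):
    # 2-D Gaussian weight: largest at the mean, decaying with the
    # squared distance from it; sigma is the standard deviation.
    norm = 1.0 / (2.0 * math.pi * sigma ** 2)
    return norm * math.exp(-((x - mu_x) ** 2 + (y - mu_y) ** 2)
                           / (2.0 * sigma ** 2))

w_center = gaussian_weight(0.0, 0.0)  # peak weight, 1 / (2*pi)
w_off = gaussian_weight(1.0, 1.0)     # smaller weight away from the mean
```

In a blur, each point's denoised value is the weighted average of its neighborhood with these weights, so nearby points dominate and distant noise is suppressed.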
When the Laplace algorithm is used for denoising scattered point cloud data, Laplace filtering is carried out on the point cloud data as follows, where x and y refer to the abscissa and ordinate of a point:
1) The first differences in the x direction and the y direction are calculated according to the following equations, respectively:
Ix(x, y) = [I(x+1, y) − I(x, y)] / [(x+1) − x] = I(x+1, y) − I(x, y)
Iy(x, y) = [I(x, y+1) − I(x, y)] / [(y+1) − y] = I(x, y+1) − I(x, y)
2) The second differences are calculated according to the following equations:
Ixx(x, y) = I(x+1, y) + I(x−1, y) − 2I(x, y)
Iyy(x, y) = I(x, y+1) + I(x, y−1) − 2I(x, y)
3) Finally, the denoising expression of the Laplace algorithm is:
∇²I(x, y) = Ixx(x, y) + Iyy(x, y) = I(x+1, y) + I(x−1, y) + I(x, y+1) + I(x, y−1) − 4I(x, y)
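A sketch of the discrete Laplacian and one smoothing step on a toy grid (the step size λ and the indexing convention img[y][x] are assumptions; Laplacian smoothing pulls each value toward the mean of its four neighbors):

```python
def laplacian(img, x, y):
    # Discrete Laplacian built from the first differences:
    # I(x+1,y) + I(x-1,y) + I(x,y+1) + I(x,y-1) - 4*I(x,y)
    return (img[y][x + 1] + img[y][x - 1]
            + img[y + 1][x] + img[y - 1][x] - 4 * img[y][x])

def smooth_once(img, x, y, lam=0.25):
    # One Laplacian smoothing step; lam = 0.25 moves the value exactly
    # to the mean of the four neighbors.
    return img[y][x] + lam * laplacian(img, x, y)

grid = [[0, 0, 0], [0, 8, 0], [0, 0, 0]]  # isolated noise spike
v = smooth_once(grid, 1, 1)               # spike flattened to 0
```

The isolated spike has Laplacian 0 + 0 + 0 + 0 − 4·8 = −32, so one step with λ = 0.25 removes it entirely, which is the denoising effect exploited here.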
Step c3, judge boundary points from the distribution uniformity of each target point and its surrounding points to realize boundary-point detection, and calculate the local feature weight of each point with a method based on point cloud mean curvature to realize the extraction of sharp points and complete sharp-point detection; then perform data segmentation on the acquired point cloud data or DSM data using a plane segmentation method constrained by normal vectors and distances, completing the modeling of the space object.
The criterion for point cloud segmentation using normal vectors is as follows:
(1/n) · Σ_{p_i ∈ N_r(p)} || n_p − n_{p_i} ||₂ < ε
wherein n_p represents the normal vector of point p; n_{p_i} represents the normal vector of point p_i in the neighborhood; r is the set radius of the neighborhood; n represents the number of all point cloud points in the neighborhood; each summand is the two-norm of the difference between the normal vector of p and that of p_i, and points whose mean deviation falls below the threshold ε are assigned to the same plane.
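A sketch of the normal-vector deviation measure behind this segmentation criterion (the toy unit normals are hypothetical; a real implementation would gather the neighbors within radius r from a k-d tree):

```python
import math

def normal_deviation(n_p, neighbors):
    # Mean two-norm of the difference between the normal at p and the
    # normals of its neighbors; small values indicate a planar region.
    diffs = [math.sqrt(sum((a - b) ** 2 for a, b in zip(n_p, n_i)))
             for n_i in neighbors]
    return sum(diffs) / len(diffs)

n_p = (0.0, 0.0, 1.0)
flat = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]   # neighbors on the same plane
edge = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # neighbors across a corner
d_flat = normal_deviation(n_p, flat)  # 0.0  -> same plane
d_edge = normal_deviation(n_p, edge)  # ~1.414 -> plane boundary
```

Thresholding this deviation separates smooth planar patches (walls, roofs) from edges and corners, which is the basis of the plane segmentation step.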
3) Combine the models obtained by the two methods with the database data and optimize the models:
Step d1, carry out omnidirectional data acquisition of the space object using multi-source data acquisition equipment, transmit and import the latest acquired data into the model database, and complete the virtual mapping of the space object in the virtual space through a simulation process, as follows:
Omnidirectional data acquisition of the space object is carried out with multi-source acquisition equipment covering all layers of the space object, and the three-dimensional reconstruction model of the space object is optimized by combining the acquired multi-source data with the model database. The collected multi-source data are fused and matched in the three-dimensional scene, including coordinate conversion and data registration, unifying BIM data, the oblique photography model, the point cloud and other GIS data into one coordinate system. At the same time, coordinate projection and conversion can be realized for various kinds of three-dimensional data, including models, grids, images, point clouds and oblique photography models, so that the fused data are fully utilized.
And step d2, mine the association relations among diversified data such as physical entity data, virtual model data, service data and domain knowledge to support the extraction of deeper knowledge. The data are preprocessed, including data filtering, elimination of abnormal and irrelevant data, and data feature extraction. Spatio-temporal registration is performed on the processed data, for example using a least-squares registration method, so that the data are synchronized in the time dimension and lie in the same coordinate system in space. The spatio-temporal correlations among the registered data are mined to the greatest extent based on the Pearson correlation analysis method and algorithms such as K-means and Apriori. On this basis, knowledge reasoning is further realized through statistical, clustering and classification methods, the model prediction results are fused by weighting, and finally the data obtained by the digital twin model are stored in the database.
And step d3, combine the collected multi-source data with the model database and integrate system simulation, real-time calculation, big data management and analysis, data visualization and other processes into one, so that the space object model obtained through three-dimensional reconstruction can be rapidly and accurately combined and registered, in the virtual space, with the latest collected multi-source data, and the model is optimized and corrected in all respects. Based on the system structure and design data, a model of the studied object is established and the key equipment and system parameters are retained. The modeling is unified and the related data are stored, so that the data structure, format, type, interface and the like are standardized. Common modeling languages include the Unified Modeling Language (UML) and the Systems Modeling Language (SysML). In addition, mathematical methods based on domain theory can support data modeling, interoperation and integration. On this basis, the data model is stored, and functions such as data archiving and indexed access are realized. Through intelligent self-evolution and self-adaptive learning capabilities, the accuracy of the model can be continuously improved as data accumulate.
The performance parameters of the whole system and the equipment characteristic parameters are identified using the sorted data to obtain optimized data suitable for each model; the clustered results are also stored in the historical database for identifying similar models in the subsequent instruction feedback process.
In order to verify the optimization effect of the invention in the aspect of models, the invention performs model optimization on image dense point cloud data obtained by object analysis and model parameter data obtained by three-dimensional reconstruction and laser point cloud by combining database data, and the examples are as follows:
The method is applied to a modeling example of a building and its ramp. The data obtained by image recognition and laser scanning are subjected to coordinate conversion and data registration and then imported into the database; the associations among the data are mined in combination with database data analysis, realizing rapid and accurate combined registration of the data, and the model is corrected in all respects. In the model analysis process, 220 positions of the construction project were detected by the equipment, converted into point cloud data and imported into the database for combination and analysis. It was found that the point cloud structures of 176 positions conform to the fitted and optimized models while those of 44 positions do not, a point cloud data retention rate of only 80%; for the arc-shaped structures the retention rate was only 60%, accounting for 91% of the non-retained point cloud data. Compared with other modeling modes, the following effects are achieved after model optimization:
1) The parameter values of the length, the width, the position and the like of the window in the model are more accurate and do not deviate from the actual situation greatly;
2) The modeling of the positions of irregular structures such as curved surfaces, irregular triangular wall corners and the like in the building is more accurate.
In a specific implementation, the present application provides a computer storage medium and a corresponding data processing unit, where the computer storage medium is capable of storing a computer program, and the computer program, when executed by the data processing unit, may execute the inventive content of the artificial intelligence-based spatial object analysis and modeling method provided by the present invention and some or all of the steps in each embodiment. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
It is clear to those skilled in the art that the technical solutions in the embodiments of the present invention can be implemented by means of a computer program and its corresponding general-purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a computer program, that is, a software product, which may be stored in a storage medium and includes several instructions enabling a device containing a data processing unit (which may be a personal computer, a server, a single-chip microcomputer (MCU) or a network device) to execute the method of each embodiment or of some parts of the embodiments of the present invention.
The present invention provides a method for analyzing and modeling spatial objects based on artificial intelligence, and there are many methods and approaches for implementing this technical solution. The above description is only a preferred embodiment of the present invention; it should be noted that those skilled in the art may make a number of improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention. All components not specified in this embodiment can be implemented with the prior art.
Claims (10)
1. A space object analysis and modeling optimization method based on artificial intelligence is characterized by comprising a space object analysis method based on artificial intelligence, a space object modeling method based on artificial intelligence and a model optimization method based on artificial intelligence.
2. The method for spatial object analysis and modeling optimization based on artificial intelligence according to claim 1, wherein the method for spatial object analysis based on artificial intelligence comprises the following steps:
step a1, preprocessing the acquired image information of a space object target, and acquiring corresponding parameter information through space object analysis;
step a2, detecting a peripheral object of the space object by using a YoloV3 target detection network model to obtain analysis data information of the space object;
and a3, importing the space object analysis data information into a data model base in real time.
3. The method for spatial object analysis and modeling optimization based on artificial intelligence according to claim 2, wherein the step a2 comprises:
the YoloV3 target detection is implemented by: obtaining the prediction result from the features, decoding the prediction result, and sorting the predicted bounding-box scores and performing non-maximum suppression screening; the finally displayed bounding boxes are obtained after decoding the prediction result, using the formulas:
b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w · e^(t_w)
b_h = p_h · e^(t_h)
Pr(object) · IOU(b, object) = σ(t_o)
wherein: c_x and c_y respectively represent the number of grid cells, in the x-axis and y-axis directions, between the top-left corner of the grid cell containing the preset box and the top-left anchor point; p_w and p_h respectively represent the width and height of the prior box; t_x and t_y respectively represent the offsets of the target center point relative to the top-left corner of the grid cell containing the preset box, in the x-axis and y-axis directions; t_w and t_h respectively determine the width and height of the predicted box; σ represents the activation function; Pr(object) represents the probability that the prior box contains an object; IOU represents the intersection-over-union ratio.
4. The method for spatial object analysis and modeling optimization based on artificial intelligence of claim 3, wherein the method for spatial object modeling based on artificial intelligence comprises a three-dimensional reconstruction method based on deep learning and a three-dimensional reconstruction method based on laser scanning, and the method for three-dimensional reconstruction based on deep learning comprises the following steps:
step b1, according to the continuous image information of the adjacent scene of the space object target, which is acquired in advance, the three-dimensional image information is determined by utilizing the two-dimensional image information through position calculation;
b2, carrying out iterative processing on the acquired two-dimensional picture information by using an incremental SFM algorithm to obtain image information and carry out corresponding feature matching, carrying out preliminary sparse three-dimensional reconstruction on an object in reality through the obtained data, and obtaining the pose, internal reference and common view relations of different cameras;
step b3, performing three-dimensional reconstruction on the sparse point cloud information through an MVS algorithm to obtain a depth map of the space object and dense point cloud information, wherein the three-dimensional reconstruction comprises the following steps:
using a Poisson method to carry out MESH reconstruction to obtain a MESH model with colors, and finishing the three-dimensional reconstruction of the space object; and constructing matching cost through AA-RMVSNet to realize depth estimation, and extracting points with consistent depth to realize dense reconstruction.
5. The method for spatial object analysis and modeling optimization based on artificial intelligence of claim 4, wherein step b1 comprises:
in the position calculation, the relationship between the camera coordinate system and the world coordinate system is expressed by the rotation matrix R and the translation vector t, to unify the coordinates of the points into the same coordinate system, as follows.
The homogeneous coordinates can be expressed as:
[X_c Y_c Z_c 1]^T = [R t; 0^T 1] · [X_w Y_w Z_w 1]^T
wherein X_c, Y_c, Z_c are the coordinates, in the camera coordinate system, of any point of the studied object in the image; X_w, Y_w, Z_w represent the coordinates of the selected study object in the world coordinate system; t is the three-dimensional translation vector:
t = [t_x t_y t_z]^T, 0 = [0 0 0]^T
wherein t_x, t_y, t_z are respectively the distances to be translated along the x-axis, y-axis and z-axis directions;
the rotation matrix R is a 3 × 3 orthogonal matrix whose elements r_ij satisfy R · R^T = I (i.e. Σ_k r_ik · r_jk = δ_ij) and det(R) = 1, where r denotes an element of the matrix R.
6. The method according to claim 5, wherein in step b2, the following formula is used for performing the corresponding feature matching:
f_nn = arg min_{f' ∈ F(J)} ||f_d − f'_d||₂
wherein F(J) represents the feature points of image J; f_nn represents the nearest neighbor feature vector; f_d represents a point on the actual picture; f'_d represents a feature point of the selected image.
7. The method according to claim 6, wherein the step b3 comprises:
in constructing the matching cost through AA-RMVSNet to realize depth estimation, a normalized cross-correlation measure of the following form is used:
NCC(I_i, I_j) = Σ(I_i − Ī_i)(I_j − Ī_j) / [Σ(I_i − Ī_i)² · Σ(I_j − Ī_j)²]^(1/2)
8. The method for spatial object analysis and modeling optimization based on artificial intelligence according to claim 7, wherein the method for three-dimensional reconstruction based on laser scanning comprises the following steps:
step c1, carry out data acquisition of the space object using a three-dimensional laser scanning instrument to obtain laser point cloud data or DSM data, wherein the following lidar equation is adopted:
P_R = (P_T · D² · ρ · dA · η_Atm · η_Sys) / (R⁴ · θ_T² · Ω) + P_b
wherein P_R is the received echo power, P_T is the emitted laser power, and P_b is the background radiation and noise power; R is the distance between the target and the radar, and θ_T is the transmit antenna field angle (beam divergence angle); ρ is the reflectivity of the target surface to the laser, dA is the target surface bin, and Ω is the solid angle of light scattering from the target; D is the aperture of the receiving antenna, η_Atm is the two-way transmittance of the transmission medium, and η_Sys is the transmittance of the optical system;
step c2, preprocessing the laser point cloud data, and denoising the point cloud by using Gaussian filtering and Laplace algorithm;
step c3, detecting boundary points, and detecting sharp points based on a point cloud average curvature method; and performing data segmentation by using point cloud segmentation to complete space object modeling.
9. The method for spatial object analysis and modeling optimization based on artificial intelligence according to claim 8, wherein in step c2, when the Gaussian filtering method is used, the weight of each point of the laser point cloud data is calculated through Gaussian blurring, with the calculation formula:
G(x, y) = 1/(2πσ²) · exp(−[(x − μ)² + (y − μ)²] / (2σ²))
wherein G(x, y) refers to the weight of a point in the selected point cloud; x is the abscissa and y the ordinate of the point in the spatial coordinate system, μ is the mean, and σ is the standard deviation;
when the Laplace algorithm is used for denoising scattered point cloud data, the point cloud data is subjected to Laplace filtering by the following method:
the first difference Ix(x, y) in the x direction and the first difference Iy(x, y) in the y direction are calculated according to the following equations, respectively:
Ix(x, y) = [I(x+1, y) − I(x, y)] / [(x+1) − x] = I(x+1, y) − I(x, y)
Iy(x, y) = [I(x, y+1) − I(x, y)] / [(y+1) − y] = I(x, y+1) − I(x, y)
the second differences Ixx(x, y) and Iyy(x, y) are calculated according to the following equations:
Ixx(x, y) = I(x+1, y) + I(x−1, y) − 2I(x, y)
Iyy(x, y) = I(x, y+1) + I(x, y−1) − 2I(x, y)
the denoising expression of the Laplace algorithm is as follows:
∇²I(x, y) = Ixx(x, y) + Iyy(x, y) = I(x+1, y) + I(x−1, y) + I(x, y+1) + I(x, y−1) − 4I(x, y)
10. The method of claim 9, wherein the artificial intelligence-based model optimization method comprises:
step d1, collecting data, combining multi-source data and a model database, realizing multi-source data fusion matching through a coordinate conversion and data registration method, and importing the data into the database;
d2, performing weighted fusion on the model three-dimensional reconstruction result and the laser point cloud modeling result to complete model optimization;
wherein in performing weighted fusion, the following formula is used:
pre=0.6pre1+0.4pre2
pre refers to an optimized coordinate result obtained by weighted fusion of coordinate results obtained by each three-dimensional reconstruction or laser point cloud modeling; pre1 and pre2 respectively refer to a coordinate result of three-dimensional reconstruction by using a model and a coordinate result of laser point cloud modeling;
and d3, storing the optimized model into a model database, re-identifying the overall parameters of the model, extracting optimized data and importing the data into the database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210830532.2A CN115222884A (en) | 2022-07-15 | 2022-07-15 | Space object analysis and modeling optimization method based on artificial intelligence |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115222884A true CN115222884A (en) | 2022-10-21 |
Family
ID=83612881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210830532.2A Withdrawn CN115222884A (en) | 2022-07-15 | 2022-07-15 | Space object analysis and modeling optimization method based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115222884A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116071566A (*) | 2023-03-23 | 2023-05-05 | 广东石油化工学院 | Steel drum track detection method based on grid flow denoising and multi-scale target network
CN116330667A (*) | 2023-03-28 | 2023-06-27 | 云阳县优多科技有限公司 | Toy 3D printing model design method and system
CN116330667B (*) | 2023-03-28 | 2023-10-24 | 云阳县优多科技有限公司 | Toy 3D printing model design method and system
CN117455895A (*) | 2023-11-27 | 2024-01-26 | 朋友电力科技有限公司 | Preparation device and preparation method for realizing wire clamp
CN118365805A (*) | 2024-06-19 | 2024-07-19 | 淘宝(中国)软件有限公司 | Three-dimensional scene reconstruction method and electronic equipment
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111563442B (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN111063021B (en) | Method and device for establishing three-dimensional reconstruction model of space moving target | |
CN109544456B (en) | Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion | |
CN109615611B (en) | Inspection image-based insulator self-explosion defect detection method | |
CN111563415B (en) | Binocular vision-based three-dimensional target detection system and method | |
Xu et al. | Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
CN108648194B (en) | Three-dimensional target identification segmentation and pose measurement method and device based on CAD model | |
CN113139453B (en) | Orthoimage high-rise building base vector extraction method based on deep learning | |
CN115222884A (en) | Space object analysis and modeling optimization method based on artificial intelligence | |
CN110599489A (en) | Target space positioning method | |
CN112946679B (en) | Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence | |
CN116449384A (en) | Radar inertial tight coupling positioning mapping method based on solid-state laser radar | |
CN114782628A (en) | Indoor real-time three-dimensional reconstruction method based on depth camera | |
CN116805356A (en) | Building model construction method, building model construction equipment and computer readable storage medium | |
CN114463521A (en) | Building target point cloud rapid generation method for air-ground image data fusion | |
CN113920254B (en) | Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof | |
CN110851978B (en) | Camera position optimization method based on visibility | |
CN117292076A (en) | Dynamic three-dimensional reconstruction method and system for local operation scene of engineering machinery | |
CN117953059B (en) | Square lifting object posture estimation method based on RGB-D image | |
CN117541537B (en) | Space-time difference detection method and system based on all-scenic-spot cloud fusion technology | |
CN113536959A (en) | Dynamic obstacle detection method based on stereoscopic vision | |
CN112767459A (en) | Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion | |
CN113129348B (en) | Monocular vision-based three-dimensional reconstruction method for vehicle target in road scene |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | | Application publication date: 20221021