CN111080627A - 2D +3D large airplane appearance defect detection and analysis method based on deep learning - Google Patents

2D +3D large airplane appearance defect detection and analysis method based on deep learning

Info

Publication number
CN111080627A
CN111080627A (Application No. CN201911321821.4A)
Authority
CN
China
Prior art keywords
point
point cloud
airplane
points
deep learning
Prior art date
Legal status
Granted
Application number
CN201911321821.4A
Other languages
Chinese (zh)
Other versions
CN111080627B (en)
Inventor
汪俊
郭向林
刘元朋
李红卫
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201911321821.4A priority Critical patent/CN111080627B/en
Publication of CN111080627A publication Critical patent/CN111080627A/en
Application granted granted Critical
Publication of CN111080627B publication Critical patent/CN111080627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/0002 Image analysis: inspection of images, e.g. flaw detection
    • G06T7/10 Image analysis: segmentation; edge detection
    • G06T7/33 Image analysis: determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T2207/10004 Image acquisition modality: still image; photographic image
    • G06T2207/10028 Image acquisition modality: range image; depth image; 3D point clouds
    • G06T2207/20081 Special algorithmic details: training; learning
    • G06T2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T2207/30252 Subject of image: vehicle exterior; vicinity of vehicle

Abstract

The invention discloses a 2D +3D large airplane appearance defect detection and analysis method based on deep learning, comprising the following steps: collecting multi-view 2D images and 3D point cloud data; acquiring a complete airplane point cloud model through registration; extracting image and point cloud feature points and establishing 2D-3D correspondences by feature matching; estimating the camera pose from the 2D-3D correspondences; assigning the texture colors of the 2D images to the 3D point cloud according to the camera pose; performing semantic segmentation of the point cloud from its color and coordinate information; and detecting and analyzing the airplane appearance defects on the basis of the point cloud semantic segmentation. The method processes and analyzes the collected 2D +3D data using a vision sensor device and the measurement technology of an optical three-dimensional detection system, can accurately and effectively detect and extract the appearance defects on a large airplane, is reasonably conceived, and can be applied automatically in practice to scenarios such as airplane safety inspection.

Description

2D +3D large airplane appearance defect detection and analysis method based on deep learning
Technical Field
The invention relates to the fields of deep learning, computer vision, graphics and the like, in particular to a method for detecting and analyzing appearance defects of a large airplane.
Background
Traditional non-destructive testing (NDT) technology exploits the properties of sound, light, magnetism and electricity to detect whether defects or non-uniformities exist in the inspected object without damaging or affecting its serviceability, gives information such as the size, position, nature and quantity of the defects, and on that basis judges the current technical state of the object (qualified or not, remaining service life, and so on); it therefore occupies an important position in defect detection for large airplanes. However, conventional non-destructive inspection is costly and slow, and although its detection rate for volumetric defects (pores, slag inclusions, tungsten inclusions, burn-through, undercut, flash, pits, etc.) is high, planar defects (lack of penetration, lack of fusion, cracks, etc.) are easily missed if the imaging angle is not appropriate.
At present, detecting and analyzing the appearance defects of large airplanes with computer vision carried by a crawling robot is one of the mainstream non-destructive detection technologies. The technique uses a mobile crawler robot with suction-cup feet that crawls over the aircraft skin while locating and analyzing defects. Specifically, rivet detection is performed by a computer vision algorithm using four cameras mounted on the mobile robot, and the spine axis of the robot is aligned with the rivet line so that the robot stays on the correct path. However, the technique is unstable under changing illumination, places high demands on the structural design, requires tedious calibration and has a high miss rate. Automatic visual non-destructive inspection has therefore not yet been fully successful.
With the development of digital measurement technology, engineering methods have been realized that inspect the appearance of a large airplane in the parked state with a laser tracker and a laser scanner. Large airplane appearance inspection based on a laser tracker is a non-contact surveying and inspection method proposed for large airplanes or special aircraft in the complete-machine parked state. The object to be surveyed is very large (the span exceeds 40 m and the three-view envelope is 46 m × 42 m × 14 m), the accuracy requirements are high, and the content to be measured is varied: it includes not only the overall appearance of key components such as the nose, fuselage, wings, engine nacelles, horizontal tail and vertical tail, but also the appearance of every movable control surface at different positions under different configurations such as cruise, takeoff and landing. However, the 3D point cloud data acquired by the laser tracker has its own problems: the points are sparse, and although the accuracy is high, the effective sensing distance for the algorithm does not exceed 10 m. In addition, working directly on the 3D point cloud is difficult because there are no visual features, so tasks such as tracking and positioning are not as straightforward as with vision. The advantage of vision is that the amount of information it contains is enormous and it can provide a large number of visual features.
From the above analysis it can be seen that vision and laser trackers are not two conflicting techniques; each has its own advantages and disadvantages. With the progress of computer vision and machine learning, computer vision can replace the human eye in identifying, locating and measuring targets, and it has been applied to many industrial inspection problems.
Disclosure of Invention
The invention aims to provide a 2D +3D large airplane appearance defect detection and analysis method based on deep learning, which combines visual sensor equipment with an optical three-dimensional detection system and completes the detection and analysis of large airplane appearance defects using the acquired data, so as to fill a gap in the prior art.
The technical scheme provided by the invention is as follows:
A 2D +3D large airplane appearance defect detection and analysis method is characterized by comprising the following steps:
s1, respectively acquiring images and point clouds of a large-size airplane from a plurality of stations by utilizing a PTZ camera and a laser tracker which are installed on a mobile robot to form multi-view 2D images and 3D point cloud data;
s2, acquiring a complete airplane point cloud model through 3D point cloud registration;
s3, respectively extracting 2D image and 3D point cloud characteristic points, and performing 2D-3D correspondence according to characteristic matching;
s4, estimating the pose of the camera according to the corresponding relation of 2D-3D;
s5, according to the pose of the camera, the assignment of the texture color of the 2D image to the 3D point cloud is achieved, and the 3D point cloud with texture information is obtained;
s6, performing semantic segmentation on the 3D point cloud with the texture information;
and S7, performing defect analysis on the large airplane according to the semantic segmentation result.
On the basis of the above scheme, further improvements or preferred schemes include the following:
further, in the step S5, the 3D point cloud is mapped into the image space according to the estimated camera pose; then, for correctly matched 2D-3D feature point pairs, the color information of the 2D feature point is assigned to the corresponding 3D feature point; for 3D feature points without a correct match, the nearest 2D feature point is selected and its color information is assigned to them; for the remaining non-feature points, color information is obtained by interpolation.
Further, in the step S6, performing self-supervised semantic segmentation according to the textured 3D point cloud constructed in the step S5, wherein the process includes the following steps:
s6.1. sequence generation: establishing a spherical neighborhood of a certain radius centered at any point x in the 3D point cloud with texture information, sorting all points in the spherical neighborhood by their z coordinate value, and then randomly extracting (k-1) points from the spherical neighborhood whose z values are smaller than that of point x; the (k-1) points together with the final point x form a z-order sequence of length k;
s6.2, repeating the step S6.1, so that a plurality of z-order sequences are generated for each point in the point cloud;
s6.3, self-supervised feature learning: let (x_1, x_2, …, x_k) denote any z-order sequence of length k; the first (k-1) points (x_1, x_2, …, x_{k-1}) of the z-order sequence are used to predict the next point x_k, i.e. the subsequence (x_1, x_2, …, x_{k-1}) of length (k-1) is used to predict the displacement x_k - x_{k-1}.
The input of the self-supervised feature learning network structure is the ordered three-dimensional point sequence (x_1, x_2, …, x_{k-1}) of length (k-1), and the output is the displacement x_k - x_{k-1} to the next point. Several spatial encoding layers encode each point x_i into a high-dimensional vector v_i, 1 ≤ i ≤ k-1; each spatial encoding layer consists of a 1D convolution, batch normalization and a ReLU activation function. The sequence of high-dimensional vectors (v_1, v_2, …, v_{k-1}) is then fed into a multi-layer recurrent neural network (RNN). Finally, a fully connected layer transforms the RNN hidden state into the 3D output y, the estimated spatial displacement required to reach the next point in the sequence.
Further, the step S7 includes a S7.1 defect detecting process and a S7.2 defect characterizing process, where the S7.1 defect detecting process includes:
a. obtaining the 3D point cloud data of the component through semantic segmentation, smoothing and resampling it with a moving least squares algorithm, and reconstructing the curved surface by high-order polynomial interpolation;
b. further estimating the normal and curvature of the curved surface based on a moving least squares method;
c. the component is divided into two parts, a damaged region and a non-damaged region, by using a region growing algorithm:
selecting random points from different regions as seed points and growing them gradually until the entire point cloud of the component is covered; for each seed point, testing the angle between the normal of a neighborhood point and the normal of the current seed point, and adding the neighborhood point to the seed point set if the angle is smaller than a certain threshold; outputting a group of clusters, one corresponding to each seed point set, where a cluster is regarded as a set of points belonging to the same smooth surface; merging the clusters; and finally marking the defect region on the component by a visualization method.
Further, the step S2 includes:
s2.1, performing initial registration based on the global measurement field;
and S2.2, carrying out fine registration based on graph optimization on the basis of the initial registration.
In the step S2.1, the global coordinates of target points arranged around the airplane are calculated with a laser tracker by a self-calibrating distance-measurement method, and a global measurement field of the whole airplane is constructed.
In the step S2.2, an undirected graph model for optimization is established by converting the overlap between the point clouds of the stations into the weights of nodes and edges in the graph, and the fine registration of the whole point cloud is completed by iteratively searching for and closing newly generated rings.
Further, the step S3 includes:
s3.1, extracting a group of feature points on the image by using a 2D SIFT detector;
s3.2, extracting a group of feature points of the point cloud of the large airplane after the optimized registration by using the 3D ISS;
and S3.3, obtaining the 2D-3D corresponding relation by using a Triplet deep neural network to jointly learn the image and point cloud feature point descriptors according to the two groups of feature points extracted in the steps S3.1 and S3.2.
Advantageous effects:
the invention relates to a 2D +3D large airplane appearance defect detection and analysis method based on deep learning, which utilizes a vision sensor device and an optical three-dimensional detection system measurement technology to process and analyze collected 2D +3D data, can accurately and effectively detect and extract appearance defects on a large airplane, has reasonable conception, and can realize automatic application in scenes such as airplane safety inspection and the like in practice.
Drawings
FIG. 1 is a flow chart of aircraft appearance defect detection and analysis according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a global measurement field construction according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a triple depth network structure according to an embodiment of the present invention;
FIG. 4 is a schematic view of a space-filling curve according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an auto-supervised feature learning according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of an aircraft profile defect analysis implementing an embodiment of the invention;
FIG. 7 is a schematic diagram of an aircraft profile defect detection process embodying embodiments of the present invention;
FIG. 8 is a schematic diagram of an aircraft profile defect depth estimation implementing an embodiment of the invention.
FIG. 9 is a diagram illustrating the size and direction of a detected defect in accordance with an embodiment of the present invention.
Detailed Description
The embodiment relates to a 2D +3D large airplane appearance defect detection and analysis method based on deep learning, which mainly comprises: jointly learning 2D image and 3D point cloud feature point descriptors through a Triplet (three-branch) neural network; obtaining 2D-3D feature matching pairs by computing the Euclidean-distance similarity matrix between the 2D and 3D feature descriptors; then estimating the camera pose with a PnP method using the geometric relation between the 2D-3D matching pairs; projecting the 3D point cloud into the image space with the estimated camera pose to obtain a textured 3D point cloud; performing self-supervised semantic segmentation on the textured 3D point cloud; and finally performing defect analysis on each part obtained by the semantic segmentation.
To further clarify the technical solution and design principle of the present invention, the following detailed description is made with reference to the accompanying drawings.
As shown in fig. 1, a method for detecting and analyzing a shape defect of a 2D +3D large aircraft based on deep learning includes the following steps:
s1, acquiring 2D images and 3D point cloud data at a single station with a PTZ camera and a laser tracker mounted on a mobile robot, the mobile robot then capturing images and acquiring 3D point cloud data from different viewing angles;
s2, acquiring complete point cloud data of the airplane through 3D registration
In the field of industrial measurement, limited by the size of an airplane and mutual shielding among all parts, point cloud data measured by a laser tracker of each station only comprises partial airplane appearance and is positioned under a self-measurement coordinate system. In order to obtain the final complete data of the airplane, point cloud data in different coordinate systems need to be unified to a global coordinate system through a point cloud registration method. Based on the above factors, in the present embodiment, for the multi-station 3D point cloud data obtained in step S1, a global measurement field is first established to perform initial registration; and then, based on the initial registration, carrying out fine registration based on graph optimization to provide an aircraft complete point cloud model for subsequent processing steps.
Step S2 specifically includes:
step S2.1:
a. Constructing a global measurement field: if the point clouds of the individual stations are registered directly with traditional methods alone, the resulting accuracy can hardly meet the requirements of large airplane appearance inspection. Therefore, to improve the overall accuracy, the present embodiment constructs a global measurement field of the entire region to be scanned using the laser tracker.
As shown in FIG. 2, the figure contains one target point P(x, y, z) and three tracker stations T1(0, 0, 0), T2(X2, 0, 0) and T3(X3, Y3, 0). The measurement coordinate system of the tracker at the first station T1 is taken as the reference, the X axis is determined by the second tracker station T2, and the XOY plane is established together with the third tracker station T3. The coordinates of every target point and of the tracker stations are regarded as unknown parameters, while the distances between the tracker stations and the target points are the known measured parameters. From the distance measurements of the same target point P taken by the tracker from the different station viewpoints, a system of equations with 6 unknown parameters is constructed:

d1² = x² + y² + z²

d2² = (x - X2)² + y² + z²

d3² = (x - X3)² + (y - Y3)² + z²

In the above equations, d1 is the distance from the first tracker station T1 to the target point P, d2 is the distance from the second tracker station T2 to P, and d3 is the distance from the third tracker station T3 to P. The coordinates of the target point can be solved from these equations, and by increasing the number of tracker stations and the number of target points, the construction of the tracker global measurement field can be converted into the problem of solving a linear equation set.
In practical situations, the position of the coordinate system is arbitrarily selected, A tracker sites and B target points are arranged, and the transfer parameters can be obtained as long as the following relations are met:
AB≥3(A+B)
With this self-calibration method, the construction of the on-site global measurement field can be completed quickly and accurately; at the same time, redundant tracker stations or targets provide additional observations, and the overall measurement accuracy can be improved by global adjustment.
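The self-calibration idea above can be illustrated with a minimal Python sketch, added here only for clarity and not part of the original disclosure: the station and target coordinates are treated as unknowns, the measured station-to-target distances as observations, and the network is solved by nonlinear least squares. The station/target counts, the simulated layout and the use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

A, B = 4, 12                                   # stations, targets: A*B >= 3*(A+B)
rng = np.random.default_rng(0)
true_st = rng.uniform(0.0, 30.0, (A, 3))       # simulated ground truth for the demo
true_st[0] = 0.0                               # T1 fixed at the origin
true_st[1, 1:] = 0.0                           # T2 on the X axis
true_st[2, 2] = 0.0                            # T3 in the XOY plane
true_tg = rng.uniform(0.0, 30.0, (B, 3))
dist = np.linalg.norm(true_st[:, None] - true_tg[None, :], axis=2)  # measurements

def unpack(p):
    st = np.zeros((A, 3))
    st[1, 0] = p[0]                            # X2
    st[2, :2] = p[1:3]                         # X3, Y3
    st[3:] = p[3:3 + 3 * (A - 3)].reshape(-1, 3)
    tg = p[3 + 3 * (A - 3):].reshape(B, 3)
    return st, tg

def residuals(p):
    st, tg = unpack(p)
    est = np.linalg.norm(st[:, None] - tg[None, :], axis=2)
    return (est - dist).ravel()                # one residual per distance reading

# Initialise near the nominal site layout (here: truth plus noise), then refine.
p0 = np.concatenate([[true_st[1, 0]], true_st[2, :2],
                     true_st[3:].ravel(), true_tg.ravel()])
sol = least_squares(residuals, p0 + rng.normal(0.0, 0.5, p0.size))
print("RMS distance residual:", np.sqrt(np.mean(sol.fun ** 2)))
```

Redundant stations or targets simply add rows of residuals, which is how the overall adjustment mentioned above improves the accuracy of the measurement field.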
b. Initial registration: after the global measurement field has been established, the data of any tracker station can be transformed into the global coordinate system through common target points. Similarly, the point cloud data measured by the laser scanner from a certain station viewpoint can also be transformed into the global coordinate system by registering the target points in the point cloud with the target points of the same name measured by the tracker at that station.
Let p_i denote the coordinates of a target point in the global coordinate system and q_i the coordinates of the corresponding target point in the local coordinate system of the laser tracker, giving the sets P = {p1, p2, …, pn} and Q = {q1, q2, …, qn}, where P is the target point cloud, Q is the source point cloud, p_i is a point in P, q_i is a point in Q, and n is the number of points. The rigid-body relation between the two point clouds is determined by the least squares method:

f(R, t) = (1/n) Σ_{i=1..n} || p_i - (R·q_i + t) ||²   (1.1)

As long as there are more than 3 target points, the point cloud data can be quickly and coarsely registered to the constructed global coordinate system, completing the initial registration of the whole data set.

In formula (1.1), R and t are the rotation matrix and the translation vector relating the two groups of point clouds. Setting the partial derivative of (1.1) with respect to t to zero gives the translation

t = p̄ - R·q̄

where p̄ = (1/n) Σ p_i and q̄ = (1/n) Σ q_i are the centers of gravity of the two point sets P and Q, respectively. After this translation, the new coordinates of the points in the two clouds can be written as

p_i' = p_i - p̄,  q_i' = q_i - q̄,

that is, the difference of the centers of gravity is used as the initial translation vector of the two point clouds for registration.

Equation (1.1) can then be simplified to

f(R) = (1/n) Σ_{i=1..n} || p_i' - R·q_i' ||²   (1.2)

To minimize this objective function, the matrix H = Σ_{i=1..n} q_i'·p_i'^T is constructed and its singular value decomposition H = U Λ V^T is performed, where U and V are orthogonal matrices formed by the singular vectors and Λ is the diagonal matrix of singular values. When R = V·U^T, equation (1.2) attains its minimum value, which gives the optimal rotation matrix. The coarse registration of the point clouds P and Q is then completed with this optimal rotation matrix.

In the above formulas, the superscript T denotes the matrix transpose.
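The closed-form coarse registration described above can be summarized in a short numpy sketch (an illustration, not the patent's implementation; the helper name coarse_register and the synthetic usage data are assumptions):

```python
import numpy as np

def coarse_register(P, Q):
    """Return R, t minimizing sum ||p_i - (R q_i + t)||^2 for paired target points."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)      # centers of gravity
    Pc, Qc = P - p_bar, Q - q_bar                      # centered point sets
    H = Qc.T @ Pc                                      # H = sum q_i' p_i'^T
    U, S, Vt = np.linalg.svd(H)                        # H = U Lambda V^T
    R = Vt.T @ U.T                                     # optimal rotation R = V U^T
    if np.linalg.det(R) < 0:                           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = p_bar - R @ q_bar                              # optimal translation
    return R, t

# Usage with >= 3 matched target points (same order in both sets):
rng = np.random.default_rng(1)
Q = rng.uniform(0.0, 10.0, (5, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P = Q @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = coarse_register(P, Q)
print(np.allclose(Q @ R.T + t, P))                     # True
```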
S2.2, fine registration based on graph optimization;
Because every pairwise point cloud registration has a slight deviation from the ideal result, in the multi-view case a large registration error is likely to accumulate between the first and the last station if the point clouds are registered linearly station by station, making the overall registration result inconsistent; this is the closed-loop problem that the multi-view registration of the airplane point clouds has to solve.
The embodiment adopts a graph optimization method to select a suitable registration order so as to eliminate error accumulation. The point cloud of each viewing angle is taken as a node of the graph, and adjacent nodes with an overlap relation are connected by edges, forming an undirected graph of the multi-view point clouds. Through a graph-theoretic optimization method, several nodes connected end to end are iteratively selected as a ring and closed so as to form a new node, until no separate nodes remain in the graph; this completes the globally optimized registration of the large airplane point cloud.
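As an illustration of the graph construction only, the sketch below builds the undirected overlap graph with networkx and derives a registration order from a maximum spanning tree of the overlap weights; this spanning-tree ordering is a simplified stand-in for the iterative ring-closing optimization described above, and the overlap ratios in the usage line are invented for the example.

```python
import networkx as nx

def registration_order(overlap):
    """overlap: dict {(i, j): overlap ratio in [0, 1]} for station pairs."""
    g = nx.Graph()
    for (i, j), ratio in overlap.items():
        if ratio > 0.0:
            g.add_edge(i, j, weight=ratio)             # nodes = station point clouds
    tree = nx.maximum_spanning_tree(g, weight="weight")
    root = max(tree.degree, key=lambda d: d[1])[0]     # best-connected station
    return list(nx.bfs_edges(tree, root))              # (registered, next) pairs

# Usage with made-up overlap ratios between 4 stations:
print(registration_order({(0, 1): 0.6, (1, 2): 0.5, (2, 3): 0.4, (0, 3): 0.2}))
```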
S3, extracting the feature points of the 2D image and the 3D point cloud, and performing 2D-3D correspondence according to the feature points of the 2D image and the 3D point cloud, wherein the method specifically comprises the following steps:
s3.1, extracting a group of feature points Φ = {φ_n}, 1 ≤ n ≤ N, on the image by using a 2D SIFT detector;
s3.2, extracting a group of feature points Ψ = {ψ_m}, 1 ≤ m ≤ M, from the optimally registered large airplane point cloud by using the 3D ISS detector;
Here, N and M represent the numbers of 2D and 3D key points extracted from the 2D image and the 3D point cloud, respectively. SIFT and ISS are mature feature point extraction algorithms.
S3.3, jointly learning feature point descriptors of the image and the point cloud with a Triplet deep neural network according to the two groups of feature points extracted in the steps S3.1 and S3.2;
First, a set of local patches centered on each 2D and 3D key point is created (a descriptor covers not only the key point itself but also the surrounding points that contribute to it). The deep Triplet network then maps every 2D and 3D key point of Φ and Ψ into the same high-dimensional feature space so as to jointly learn the corresponding sets of descriptors, each of dimension d. (The symbol n used here and in steps 2.1, 3.1, etc. merely indicates an unspecified quantity in general and does not mean that the quantities it denotes are necessarily equal.)
Then, triplets are taken as input: one reference sample (anchor), one sample of the same class (positive example) and one sample of a different class (negative example). A triplet forms two kinds of descriptor pairs: a matching descriptor pair and a non-matching descriptor pair. By training with a pairwise similarity loss function, the Triplet network maximizes the similarity of matching pairs and minimizes the similarity of non-matching pairs. Expressed in terms of similarity distance, the learning objective is that the similarity distance between a matching feature descriptor pair is much smaller than the similarity distance of a non-matching descriptor pair, which establishes the 2D-3D correspondence between Φ and Ψ.
As shown in fig. 3, an image local patch is fed into the network as the anchor, and point cloud local patches are fed in as its positive and negative examples. The Triplet network consists of three branches. One branch learns the 2D image feature point descriptor G(x_I; θ_I): x_I → p, mapping an image patch x_I to its descriptor p; the other two branches share weights and learn the 3D point cloud feature point descriptor F(x_M; θ_M): x_M → q, mapping a local point cloud patch x_M to its descriptor q, where θ_I and θ_M are the network weights. The similarity between image and point cloud feature points is learned jointly through the Triplet loss function, and finally the Triplet network parameters are optimized with stochastic gradient descent. The image descriptor function G(x_I; θ_I) is designed as a VGG convolutional neural network followed by fully connected layers to extract the key point descriptor of a 2D image patch: a global average pooling layer is applied to the feature map of the fourth convolution layer, and two fully connected layers at the end of the network output the desired descriptor dimension. The 3D feature point descriptor function F(x_M; θ_M) is designed as a PointNet network that extracts the descriptor of a local point cloud patch. The network is trained with the Triplet loss, so that the similarity distance d_pos between the matching pair formed by the anchor and the positive example is far smaller than the similarity distance d_neg between the non-matching pair formed by the anchor and the negative example, i.e. d_pos << d_neg.
The Triplet loss uses a weighted soft-margin function

L = ln(1 + exp(α·d)),  where d = d_pos - d_neg

and α is a weighting factor; such a loss function enables the deep network to converge faster.
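As a concrete illustration of this loss (not the patent's code), the PyTorch sketch below compares an anchor image descriptor with a matching and a non-matching point cloud descriptor by Euclidean distance and applies the weighted soft-margin function; the weighting factor alpha, the batch size and the 128-dimensional descriptors are assumptions.

```python
import torch

def weighted_soft_margin_triplet_loss(p, q_pos, q_neg, alpha=5.0):
    """p, q_pos, q_neg: (batch, d) descriptors from the anchor/positive/negative branches."""
    d_pos = torch.norm(p - q_pos, dim=1)          # distance of the matching pair
    d_neg = torch.norm(p - q_neg, dim=1)          # distance of the non-matching pair
    return torch.log1p(torch.exp(alpha * (d_pos - d_neg))).mean()

# Usage with dummy 128-dimensional descriptors:
p, q_pos, q_neg = (torch.randn(32, 128, requires_grad=True) for _ in range(3))
loss = weighted_soft_margin_triplet_loss(p, q_pos, q_neg)
loss.backward()                                   # gradients reach all three branches
print(float(loss))
```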
S4, estimating the pose of the camera by using the feature point matching pairs according to the 2D-3D corresponding relation;
the specific process of step S4 is as follows:
s4.1, matching the feature points according to the 2D and 3D feature points and the descriptors extracted in the step S3.3 to finally obtain 2D-3D feature point matching pairs;
specifically, a similarity measurement matrix of each pair of 2D/3D feature descriptors is calculated according to Euclidean distances between feature vectors, then 3D feature points of each 2D image key point are sequenced according to the similarity measurement, and the top 8 nearest 3D feature points can be selected as matching pairs.
S4.2, taking more than three groups of feature matching pairs obtained in the step S4.1, estimating the camera pose with a PnP algorithm, and eliminating wrongly matched pairs with the Random Sample Consensus (RANSAC) algorithm;
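A hedged OpenCV sketch of this step follows; the helper name estimate_pose, the intrinsic matrix K and the RANSAC thresholds are assumptions, while cv2.solvePnPRansac and cv2.Rodrigues are the standard OpenCV routines for PnP with outlier rejection.

```python
import numpy as np
import cv2

def estimate_pose(pts3d, pts2d, K):
    """pts3d: (n, 3) matched 3D feature points, pts2d: (n, 2) pixels, K: 3x3 intrinsics."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        raise RuntimeError("PnP pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)                   # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3), inliers.ravel()   # inliers index the correct matches
```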
s5, realizing assignment from 2D texture colors to 3D point clouds according to the pose information of the camera;
Firstly, the 3D point cloud is mapped into the image space according to the camera pose estimated in the step S4.2; then, for correctly matched 2D-3D feature point pairs, the color information of the 2D feature point is assigned to the corresponding 3D feature point; for 3D feature points without a correct match, the nearest 2D feature point is selected and its color information is assigned to them; for the remaining non-feature points, color information is obtained by interpolation.
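The projection and color assignment can be sketched as follows (an illustration under simplifying assumptions: a pinhole model with intrinsics K, the estimated pose (R, t), and nearest-pixel sampling; points behind the camera or outside the image are left for the interpolation step).

```python
import numpy as np

def colorize_point_cloud(points, image, K, R, t):
    """points: (n, 3) 3D points; image: (H, W, 3) uint8; returns per-point colors and a mask."""
    cam = points @ R.T + t                       # world -> camera coordinates
    uv = cam @ K.T                               # perspective projection (homogeneous)
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = (cam[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=np.uint8)
    colors[valid] = image[v[valid], u[valid]]    # nearest-pixel color lookup
    return colors, valid                         # invalid points: interpolate afterwards
```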
S6, performing self-supervision semantic segmentation according to the 3D point cloud with texture constructed in the step S5, wherein the specific process is as follows:
s6.1, sequence generation: specifically, as shown in fig. 4, for any point x in the textured 3D point cloud, let S_r(x) denote the spherical neighborhood of radius r centered at x. All points in S_r(x) are sorted by their z coordinate value, and then (k-1) points whose z values are smaller than that of x are randomly drawn from S_r(x); these (k-1) points plus the final point x form a z-order sequence of length k.
S6.2, in order to capture diverse local structures, step S6.1 is repeated so that several z-order sequences are generated for each point x in the point cloud.
S6.3, self-supervised feature learning: let (x_1, x_2, …, x_k) denote any z-order sequence of length k. In this embodiment the first (k-1) points (x_1, x_2, …, x_{k-1}) of the z-order sequence are used to predict the next point x_k. To stabilize the feature learning process, an equivalent task is learned instead: the subsequence (x_1, x_2, …, x_{k-1}) of length (k-1) is used to predict the displacement x_k - x_{k-1}. The z-order sequence provides a stable structure for learning on the unstructured point cloud.
This embodiment includes a spatial encoding layer; the self-supervised feature learning network structure is shown in fig. 5. The input is the ordered three-dimensional point sequence (x_1, x_2, …, x_{k-1}) of length (k-1), and the output is the displacement x_k - x_{k-1} to the next point. Several spatial encoding layers encode each point x_i into a high-dimensional vector v_i; each spatial encoding layer consists of a 1D convolution, batch normalization and a ReLU activation function. The sequence of high-dimensional vectors (v_1, v_2, …, v_{k-1}) is then fed into a multi-layer recurrent neural network (RNN). Finally, a fully connected layer transforms the RNN hidden state into the 3D output y, the estimated spatial displacement required to reach the next point in the sequence.
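A hedged PyTorch sketch of such a network follows (the layer widths, the choice of a GRU as the recurrent unit and the sequence length are assumptions made only for illustration):

```python
import torch
import torch.nn as nn

class DisplacementPredictor(nn.Module):
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(            # spatial encoding: 1D conv + BN + ReLU
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 1), nn.BatchNorm1d(feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 3)         # 3D displacement to the next point

    def forward(self, seq):                      # seq: (batch, k-1, 3) z-order subsequence
        v = self.encoder(seq.transpose(1, 2))    # (batch, feat_dim, k-1)
        out, _ = self.rnn(v.transpose(1, 2))     # (batch, k-1, hidden)
        return self.head(out[:, -1])             # displacement estimate y

# Self-supervised target for a sequence (x_1, ..., x_k): predict x_k - x_{k-1}.
model = DisplacementPredictor()
seq = torch.randn(8, 15, 3)                      # a batch of length-15 subsequences
print(model(seq).shape)                          # torch.Size([8, 3])
```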
S7, performing defect analysis on each segmented part according to the semantic segmentation constructed in the step S6, and specifically comprising the following steps:
s7.1, defect detection: as shown in fig. 7, this comprises four steps: first, the point cloud is smoothed with a moving least squares (MLS) algorithm; next, the normal and curvature of every point in the point cloud are estimated; then, using the normal and curvature information, the point cloud is divided into a defect region and a non-defect region with a region growing algorithm; finally, the defect region is marked with a visualization method.
The process of step S7.1 specifically is:
a. The 3D point cloud is smoothed and resampled by a moving least squares (MLS) algorithm. The surface is reconstructed by interpolation of a high-order polynomial, and the mathematical model of the surface is described as follows:

Given a high-order polynomial function f: R^d → R and a set of points S = {(c_i, f_i) | f(c_i) = f_i}, where c_i ∈ R^d, the moving least squares approximation in the spherical neighborhood of a point is defined through the error functional

E(x) = Σ_i θ(||x - c_i||) · (p_x(c_i) - f_i)²,

where p_x is the polynomial that minimizes E(x), i.e. the weighted least-squares solution; θ is called the weighting function, for which the present embodiment uses the Gaussian

θ(s) = e^(-s²/h²),

where h represents the average distance between the sample points.
b. The normal and curvature of the curved surface are further estimated on the basis of the moving least squares result. Given a query point p_q and its neighborhood P_K = {g_1, …, g_k}, a least squares plane fitting algorithm determines the tangent plane S represented by a point x and a normal vector n_x. The distance from a neighbor g_i ∈ P_K to the plane S is defined as d_i = (g_i - x) · n_x; S is the least squares plane when d_i = 0, where

x = (1/k) Σ_{i=1..k} g_i

is the centroid of the neighborhood. The covariance matrix of the neighborhood is

C = (1/k) Σ_{i=1..k} (g_i - x)(g_i - x)^T,

and the eigenvector v_0 corresponding to its smallest eigenvalue λ0 is taken as an approximate estimate of the normal n_x. The curvature is estimated from the eigenvalues of the covariance matrix as

σ = λ0 / (λ0 + λ1 + λ2),

where λ0 = min(λ_j, j = 0, 1, 2). These steps are repeated so that a normal and a curvature are estimated for every point.
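A minimal numpy/scipy sketch of this per-point estimation is given below (the neighborhood size k = 20 is an assumption): the covariance matrix of each neighborhood is eigen-decomposed, the eigenvector of the smallest eigenvalue is taken as the normal, and λ0/(λ0 + λ1 + λ2) as the curvature.

```python
import numpy as np
from scipy.spatial import cKDTree

def normals_and_curvature(points, k=20):
    """points: (n, 3) array; returns per-point unit normals and curvature estimates."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)             # indices of the k nearest neighbors
    normals = np.zeros_like(points)
    curvature = np.zeros(len(points))
    for i, nb in enumerate(idx):
        q = points[nb] - points[nb].mean(axis=0) # centered neighborhood
        C = q.T @ q / k                          # 3x3 covariance matrix
        w, v = np.linalg.eigh(C)                 # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # eigenvector of the smallest eigenvalue
        curvature[i] = w[0] / w.sum()            # surface-variation curvature estimate
    return normals, curvature
```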
c. The region growing algorithm is used to divide the part into two regions, a damaged region and an undamaged region. First, random points from different areas are selected as seed points; then the regions grow gradually until the entire point cloud is covered. Region growing needs a rule that checks the homogeneity of a region after each growing step, picks the points that satisfy the surface normal and curvature smoothness constraints, and adds them to the current seed point set. For each seed point, the angle between the normal of a neighborhood point and the normal of the current seed point is tested, and the neighborhood point is added to the seed point set if the angle is smaller than a certain threshold. In this way the algorithm outputs a set of clusters, where each cluster is a set of points considered to belong to the same smooth surface. Finally, the defect region is marked with a visualization method.
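The region growing itself can be sketched as follows (a simplified illustration: the angle threshold, curvature threshold and neighborhood size are assumed values, not taken from the patent); points belonging to the same smooth surface end up with the same label, so a defect shows up as a separate cluster.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, curvature, k=20,
                   angle_thresh_deg=8.0, curv_thresh=0.02):
    tree = cKDTree(points)
    _, neighbors = tree.query(points, k=k)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = np.full(len(points), -1)
    current = 0
    order = np.argsort(curvature)                # start growing from the flattest points
    for start in order:
        if labels[start] != -1:
            continue
        seeds = [start]
        labels[start] = current
        while seeds:
            s = seeds.pop()
            for nb in neighbors[s]:
                if labels[nb] != -1:
                    continue
                if abs(np.dot(normals[s], normals[nb])) >= cos_thresh:
                    labels[nb] = current         # same smooth surface as the seed
                    if curvature[nb] < curv_thresh:
                        seeds.append(nb)         # flat enough to keep growing
        current += 1
    return labels                                # cluster label per point
```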
S7.2, defect characterization: using the S7.1 defect detection results, the size and depth of the defect are estimated. The purpose of this process is to extract and display the three most important pieces of information for defects: size (bounding box), maximum depth, and direction of defect. Specifically, comprise
a. Extracting the lowest point: for each point a in the defect areaiAs shown in FIG. 8, by Δ z (a)i)=zP(ideal)-z(ai) Estimate its distance from the ideal plane pidealThe height difference of (2). If | Δ z (a)i) If | is lower than the predefined threshold, then consider aiNot the defect point. The lowest point of a defect is determined by the maximum value of all points in the defect area, i.e. max Δ z (a)i) L, and Δ z (a)i) Determines whether the defect is an indentation or a protrusion. When Δ z (a)i) Detect a dent for positive, when Δ z (a)i) A protrusion is detected when negative.
b. Size and orientation of the defect: for the defect area, in order to display the size and direction of the defect, a directional bounding box is constructed using Principal Component Analysis (PCA), i.e., a minimum rectangular area containing the defect area is found. In the present embodiment, first, the centroid of the defective region is calculated; then, PCA algorithm is applied to determine twoCoordinate system e of spindle assemblyξFinally, continue along eξSearching for an endpoint. These points together constitute the directional bounding box of the defect, the result of which is shown in fig. 9.
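A short numpy sketch of this PCA bounding box follows (the helper name and the assumption that the defect points lie roughly on a plane are illustrative):

```python
import numpy as np

def oriented_bounding_box(defect_points):
    """defect_points: (n, 3). Returns the centroid, the two box axes and (length, width)."""
    centroid = defect_points.mean(axis=0)
    q = defect_points - centroid
    _, _, vt = np.linalg.svd(q, full_matrices=False)   # principal axes via PCA/SVD
    axes = vt[:2]                                      # the two dominant directions e_xi
    coords = q @ axes.T                                # project points onto the box axes
    extents = coords.max(axis=0) - coords.min(axis=0)  # side lengths of the bounding box
    return centroid, axes, extents
```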
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the foregoing description only for the purpose of illustrating the principles of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims, specification, and equivalents thereof.

Claims (9)

1. A 2D +3D large airplane appearance defect detection and analysis method based on deep learning is characterized by comprising the following steps:
s1, respectively acquiring images and point clouds of a large-size airplane from a plurality of stations by utilizing a PTZ camera and a laser tracker which are installed on a mobile robot to form multi-view 2D images and 3D point cloud data;
s2, acquiring a complete airplane point cloud model through 3D point cloud registration;
s3, respectively extracting 2D image and 3D point cloud characteristic points, and performing 2D-3D correspondence according to characteristic matching;
s4, estimating the pose of the camera according to the corresponding relation of 2D-3D;
s5, according to the pose of the camera, the assignment of the texture color of the 2D image to the 3D point cloud is achieved, and the 3D point cloud with texture information is obtained;
s6, performing semantic segmentation on the 3D point cloud with the texture information;
and S7, performing defect analysis on the large airplane according to the semantic segmentation result.
2. The method for detecting and analyzing the appearance defects of the 2D +3D large airplane based on the deep learning as claimed in claim 1, wherein in the step S5, the 3D point cloud is mapped into the image space according to the estimated camera pose; then, for correctly matched 2D-3D feature point pairs, the color information of the 2D feature point is assigned to the corresponding 3D feature point; for 3D feature points without a correct match, the nearest 2D feature point is selected and its color information is assigned to them; and for the remaining non-feature points, color information is obtained by interpolation.
3. The method for detecting and analyzing the appearance defects of the 2D +3D large airplane based on the deep learning as claimed in claim 1 or 2, wherein in the step S6, the self-supervised semantic segmentation is performed according to the textured 3D point cloud constructed in the step S5, and the process comprises the following steps:
s6.1. generating a z-order sequence: establishing a spherical neighborhood of a certain radius centered at any point x in the 3D point cloud with texture information, sorting all points in the spherical neighborhood by their z coordinate value, and then randomly extracting (k-1) points from the spherical neighborhood whose z values are smaller than that of point x; the (k-1) points together with the final point x form a z-order sequence of length k;
s6.2, repeating the step S6.1, so that a plurality of z-order sequences are generated for each point in the point cloud;
s6.3, self-supervised feature learning: denoting any z-order sequence of length k by (x_1, x_2, …, x_k), and using the first (k-1) points (x_1, x_2, …, x_{k-1}) of the z-order sequence to predict the next point x_k, i.e. using the subsequence (x_1, x_2, …, x_{k-1}) of length (k-1) to predict the displacement x_k - x_{k-1}.
4. The method for detecting and analyzing the appearance defects of 2D +3D large airplanes based on deep learning as claimed in claim 3, wherein the input of the self-supervised feature learning network structure is the ordered three-dimensional point sequence (x_1, x_2, …, x_{k-1}) of length (k-1) and the output is the displacement x_k - x_{k-1} to the next point; a plurality of spatial encoding layers encode each point x_i into a high-dimensional vector v_i, 1 ≤ i ≤ k-1, each spatial encoding layer consisting of a 1D convolution, batch normalization and a ReLU activation function; the sequence of high-dimensional vectors (v_1, v_2, …, v_{k-1}) is then fed into a multi-layer recurrent neural network (RNN); finally, a fully connected layer transforms the RNN hidden state into the 3D output y, the estimated spatial displacement required to reach the next point in the sequence.
5. The method for detecting and analyzing the appearance defects of the large 2D +3D airplane based on the deep learning of claim 1, wherein the step S7 includes a S7.1 defect detection process and a S7.2 defect characterization process, wherein,
s7.1, the defect detection process comprises the following steps:
a. obtaining the 3D point cloud data of the component through semantic segmentation, smoothing and resampling it with a moving least squares algorithm, and reconstructing the curved surface by high-order polynomial interpolation;
b. further estimating the normal and curvature of the curved surface based on a moving least squares method;
c. the component is divided into two parts, a damaged region and a non-damaged region, by using a region growing algorithm:
selecting random points from different regions as seed points and growing them gradually until the entire point cloud of the component is covered; for each seed point, testing the angle between the normal of a neighborhood point and the normal of the current seed point, and adding the neighborhood point to the seed point set if the angle is smaller than a certain threshold; outputting a group of clusters, one corresponding to each seed point set, where a cluster is regarded as a set of points belonging to the same smooth surface; merging the clusters; and finally marking the defect region on the component by a visualization method.
6. The method for detecting and analyzing the appearance defect of the 2D +3D large airplane based on the deep learning as claimed in claim 1, wherein the step S2 includes:
s2.1, performing initial registration based on the global measurement field;
and S2.2, carrying out fine registration based on graph optimization on the basis of the initial registration.
7. The method for detecting and analyzing the appearance defects of the 2D +3D large airplane based on the deep learning as claimed in claim 6, wherein the step S2.1 is to use a laser tracker to calculate the global coordinates of target points arranged around the airplane by a self-calibration distance measurement method, and to construct the global measurement field of the whole airplane.
8. The method for detecting and analyzing the appearance defects of the 2D +3D large aircraft based on the deep learning as claimed in claim 6, wherein in the step S2.2, an undirected graph model for optimization is established by converting the overlap between the point clouds of the stations into the weights of nodes and edges in the graph, and the fine registration of the whole point cloud is completed by iteratively searching for and closing newly generated rings.
9. The method for detecting and analyzing the appearance defect of the 2D +3D large airplane based on the deep learning as claimed in claim 1, wherein the step S3 includes:
s3.1, extracting a group of feature points on the image by using a 2D SIFT detector;
s3.2, extracting a group of feature points of the point cloud of the large airplane after the optimized registration by using the 3D ISS;
and S3.3, obtaining the 2D-3D corresponding relation by using a Triplet deep neural network to jointly learn the image and point cloud feature point descriptors according to the two groups of feature points extracted in the steps S3.1 and S3.2.
CN201911321821.4A 2019-12-20 2019-12-20 2D +3D large airplane appearance defect detection and analysis method based on deep learning Active CN111080627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911321821.4A CN111080627B (en) 2019-12-20 2019-12-20 2D +3D large airplane appearance defect detection and analysis method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911321821.4A CN111080627B (en) 2019-12-20 2019-12-20 2D +3D large airplane appearance defect detection and analysis method based on deep learning

Publications (2)

Publication Number Publication Date
CN111080627A true CN111080627A (en) 2020-04-28
CN111080627B CN111080627B (en) 2021-01-05

Family

ID=70316001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911321821.4A Active CN111080627B (en) 2019-12-20 2019-12-20 2D +3D large airplane appearance defect detection and analysis method based on deep learning

Country Status (1)

Country Link
CN (1) CN111080627B (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553938A (en) * 2020-04-29 2020-08-18 南京航空航天大学 Multi-station scanning point cloud global registration method based on graph optimization
CN111583318A (en) * 2020-05-09 2020-08-25 南京航空航天大学 Rectifying skin repairing method based on virtual butt joint of measured data of wing body
CN111692997A (en) * 2020-06-09 2020-09-22 西安交通大学 Data-driven vector tail nozzle area in-situ measurement method
CN111860520A (en) * 2020-07-21 2020-10-30 南京航空航天大学 Large airplane point cloud model self-supervision semantic segmentation method based on deep learning
CN112053426A (en) * 2020-10-15 2020-12-08 南京航空航天大学 Deep learning-based large-scale three-dimensional rivet point cloud extraction method
CN112268548A (en) * 2020-12-14 2021-01-26 成都飞机工业(集团)有限责任公司 Airplane local appearance measuring method based on binocular vision
CN112287951A (en) * 2020-12-08 2021-01-29 萱闱(北京)生物科技有限公司 Data output method, device, medium and computing equipment based on image analysis
CN112419401A (en) * 2020-11-23 2021-02-26 上海交通大学 Aircraft surface defect detection system based on cloud edge cooperation and deep learning
CN112419429A (en) * 2021-01-25 2021-02-26 中国人民解放军国防科技大学 Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN112489025A (en) * 2020-12-07 2021-03-12 南京钢铁股份有限公司 Method for identifying pit defects on surface of continuous casting billet
CN112505065A (en) * 2020-12-28 2021-03-16 上海工程技术大学 Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN112765560A (en) * 2021-01-13 2021-05-07 新智数字科技有限公司 Equipment health state evaluation method and device, terminal equipment and storage medium
CN112907531A (en) * 2021-02-09 2021-06-04 南京航空航天大学 Multi-mode fusion type composite material surface defect detection system of filament spreading machine
CN113192112A (en) * 2021-04-29 2021-07-30 浙江大学计算机创新技术研究院 Partial corresponding point cloud registration method based on learning sampling
CN113343355A (en) * 2021-06-08 2021-09-03 四川大学 Aircraft skin profile detection path planning method based on deep learning
CN113362313A (en) * 2021-06-18 2021-09-07 四川启睿克科技有限公司 Defect detection method and system based on self-supervision learning
CN114638956A (en) * 2022-05-23 2022-06-17 南京航空航天大学 Whole airplane point cloud semantic segmentation method based on voxelization and three-view
CN114782342A (en) * 2022-04-12 2022-07-22 北京瓦特曼智能科技有限公司 Method and device for detecting defects of urban hardware facilities
CN114881955A (en) * 2022-04-28 2022-08-09 厦门微亚智能科技有限公司 Slice-based annular point cloud defect extraction method and device and equipment storage medium
CN115049842A (en) * 2022-06-16 2022-09-13 南京航空航天大学深圳研究院 Aircraft skin image damage detection and 2D-3D positioning method
CN115326835A (en) * 2022-10-13 2022-11-11 汇鼎智联装备科技(江苏)有限公司 Cylinder inner surface detection method, visualization method and detection system
CN115908519A (en) * 2023-02-24 2023-04-04 南京航空航天大学 Three-dimensional measurement registration error control method for large composite material component
CN115953589A (en) * 2023-03-13 2023-04-11 南京航空航天大学 Engine cylinder block aperture size measuring method based on depth camera
CN116958146A (en) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 Acquisition method and device of 3D point cloud and electronic device
CN117095002A (en) * 2023-10-19 2023-11-21 深圳市信润富联数字科技有限公司 Hub defect detection method and device and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108406731A (en) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 A kind of positioning device, method and robot based on deep vision
EP3379491A1 (en) * 2017-03-20 2018-09-26 Rolls-Royce plc Surface defect detection
CN109141847A (en) * 2018-07-20 2019-01-04 上海工程技术大学 A kind of aircraft system faults diagnostic method based on MSCNN deep learning
CN109463003A (en) * 2018-03-05 2019-03-12 香港应用科技研究院有限公司 Object identifying
CN109887030A (en) * 2019-01-23 2019-06-14 浙江大学 Texture-free metal parts image position and posture detection method based on the sparse template of CAD
CN110044964A (en) * 2019-04-25 2019-07-23 湖南科技大学 Architectural coating layer debonding defect recognition methods based on unmanned aerial vehicle thermal imaging video
US10408606B1 (en) * 2018-09-24 2019-09-10 Faro Technologies, Inc. Quality inspection system and method of operation
CN110370286A (en) * 2019-08-13 2019-10-25 西北工业大学 Dead axle motion rigid body spatial position recognition methods based on industrial robot and monocular camera
CN110375765A (en) * 2019-06-28 2019-10-25 上海交通大学 Visual odometry method, system and storage medium based on direct method
US20190339206A1 (en) * 2018-05-04 2019-11-07 United Technologies Corporation System and method for damage detection by cast shadows
CN110537203A (en) * 2017-03-27 2019-12-03 三菱重工业株式会社 The defect detecting system of aircraft component and the defect inspection method of aircraft component
US10504003B1 (en) * 2017-05-16 2019-12-10 State Farm Mutual Automobile Insurance Company Systems and methods for 3D image distification

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3379491A1 (en) * 2017-03-20 2018-09-26 Rolls-Royce plc Surface defect detection
CN110537203A (en) * 2017-03-27 2019-12-03 三菱重工业株式会社 The defect detecting system of aircraft component and the defect inspection method of aircraft component
US10504003B1 (en) * 2017-05-16 2019-12-10 State Farm Mutual Automobile Insurance Company Systems and methods for 3D image distification
CN109463003A (en) * 2018-03-05 2019-03-12 香港应用科技研究院有限公司 Object identifying
US20190339206A1 (en) * 2018-05-04 2019-11-07 United Technologies Corporation System and method for damage detection by cast shadows
CN108406731A (en) * 2018-06-06 2018-08-17 珠海市微半导体有限公司 A kind of positioning device, method and robot based on deep vision
CN109141847A (en) * 2018-07-20 2019-01-04 上海工程技术大学 A kind of aircraft system faults diagnostic method based on MSCNN deep learning
US10408606B1 (en) * 2018-09-24 2019-09-10 Faro Technologies, Inc. Quality inspection system and method of operation
CN109887030A (en) * 2019-01-23 2019-06-14 浙江大学 Texture-free metal parts image position and posture detection method based on the sparse template of CAD
CN110044964A (en) * 2019-04-25 2019-07-23 湖南科技大学 Architectural coating layer debonding defect recognition methods based on unmanned aerial vehicle thermal imaging video
CN110375765A (en) * 2019-06-28 2019-10-25 上海交通大学 Visual odometry method, system and storage medium based on direct method
CN110370286A (en) * 2019-08-13 2019-10-25 西北工业大学 Dead axle motion rigid body spatial position recognition methods based on industrial robot and monocular camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
IGOR JOVANCEVIC等: "3D Point Cloud Analysis for Detection and Characterization of Defects on Airplane Exterior Surface", 《HAL ARCHIVES-OUVERTES》 *
单辰星: "基于双目视觉的视觉里程计", 《中国优秀硕士学位论文全文数据库(信息科技辑)》 *
谭昌柏等: "飞机外形和结构件反求建模技术研究", 《航空学报》 *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553938A (en) * 2020-04-29 2020-08-18 南京航空航天大学 Multi-station scanning point cloud global registration method based on graph optimization
CN111583318A (en) * 2020-05-09 2020-08-25 南京航空航天大学 Rectifying skin repairing method based on virtual butt joint of measured data of wing body
US11535400B2 (en) 2020-05-09 2022-12-27 Nanjing University Of Aeronautics And Astronautics Fairing skin repair method based on measured wing data
CN111692997A (en) * 2020-06-09 2020-09-22 西安交通大学 Data-driven vector tail nozzle area in-situ measurement method
CN111692997B (en) * 2020-06-09 2021-08-13 西安交通大学 Data-driven vector tail nozzle area in-situ measurement method
CN111860520A (en) * 2020-07-21 2020-10-30 南京航空航天大学 Large airplane point cloud model self-supervision semantic segmentation method based on deep learning
CN112053426A (en) * 2020-10-15 2020-12-08 南京航空航天大学 Deep learning-based large-scale three-dimensional rivet point cloud extraction method
US11556732B2 (en) 2020-10-15 2023-01-17 Nanjing University Of Aeronautics And Astronautics Method for extracting rivet points in large scale three-dimensional point cloud base on deep learning
CN112053426B (en) * 2020-10-15 2022-02-11 南京航空航天大学 Deep learning-based large-scale three-dimensional rivet point cloud extraction method
CN112419401A (en) * 2020-11-23 2021-02-26 上海交通大学 Aircraft surface defect detection system based on cloud edge cooperation and deep learning
CN112489025A (en) * 2020-12-07 2021-03-12 南京钢铁股份有限公司 Method for identifying pit defects on surface of continuous casting billet
CN112287951B (en) * 2020-12-08 2021-04-06 萱闱(北京)生物科技有限公司 Data output method, device, medium and computing equipment based on image analysis
CN112287951A (en) * 2020-12-08 2021-01-29 萱闱(北京)生物科技有限公司 Data output method, device, medium and computing equipment based on image analysis
CN112268548A (en) * 2020-12-14 2021-01-26 成都飞机工业(集团)有限责任公司 Airplane local appearance measuring method based on binocular vision
CN112268548B (en) * 2020-12-14 2021-03-09 成都飞机工业(集团)有限责任公司 Airplane local appearance measuring method based on binocular vision
CN112505065A (en) * 2020-12-28 2021-03-16 上海工程技术大学 Method for detecting surface defects of large part by indoor unmanned aerial vehicle
CN112765560A (en) * 2021-01-13 2021-05-07 新智数字科技有限公司 Equipment health state evaluation method and device, terminal equipment and storage medium
CN112765560B (en) * 2021-01-13 2024-04-19 新奥新智科技有限公司 Equipment health state evaluation method, device, terminal equipment and storage medium
CN112419429B (en) * 2021-01-25 2021-08-10 中国人民解放军国防科技大学 Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN112419429A (en) * 2021-01-25 2021-02-26 中国人民解放军国防科技大学 Large-scale workpiece surface defect detection calibration method based on multiple viewing angles
CN112907531A (en) * 2021-02-09 2021-06-04 南京航空航天大学 Multi-mode fusion type composite material surface defect detection system of filament spreading machine
CN113192112A (en) * 2021-04-29 2021-07-30 浙江大学计算机创新技术研究院 Partial corresponding point cloud registration method based on learning sampling
CN113343355A (en) * 2021-06-08 2021-09-03 四川大学 Aircraft skin profile detection path planning method based on deep learning
CN113362313A (en) * 2021-06-18 2021-09-07 四川启睿克科技有限公司 Defect detection method and system based on self-supervision learning
CN113362313B (en) * 2021-06-18 2024-03-15 四川启睿克科技有限公司 Defect detection method and system based on self-supervised learning
CN114782342A (en) * 2022-04-12 2022-07-22 北京瓦特曼智能科技有限公司 Method and device for detecting defects of urban hardware facilities
CN114782342B (en) * 2022-04-12 2024-02-09 北京瓦特曼智能科技有限公司 Urban hardware facility defect detection method and device
CN114881955A (en) * 2022-04-28 2022-08-09 厦门微亚智能科技有限公司 Slice-based annular point cloud defect extraction method and device and equipment storage medium
CN114638956A (en) * 2022-05-23 2022-06-17 南京航空航天大学 Whole airplane point cloud semantic segmentation method based on voxelization and three-view
CN114638956B (en) * 2022-05-23 2022-08-05 南京航空航天大学 Whole airplane point cloud semantic segmentation method based on voxelization and three-view
US11836896B2 (en) 2022-05-23 2023-12-05 Nanjing University Of Aeronautics And Astronautics Semantic segmentation method for aircraft point cloud based on voxelization and three views
CN115049842A (en) * 2022-06-16 2022-09-13 南京航空航天大学深圳研究院 Aircraft skin image damage detection and 2D-3D positioning method
CN115049842B (en) * 2022-06-16 2023-11-17 南京航空航天大学深圳研究院 Method for detecting damage of aircraft skin image and positioning 2D-3D
CN115326835A (en) * 2022-10-13 2022-11-11 汇鼎智联装备科技(江苏)有限公司 Cylinder inner surface detection method, visualization method and detection system
CN115908519A (en) * 2023-02-24 2023-04-04 南京航空航天大学 Three-dimensional measurement registration error control method for large composite material component
CN115953589B (en) * 2023-03-13 2023-05-16 南京航空航天大学 Engine cylinder block aperture size measurement method based on depth camera
CN115953589A (en) * 2023-03-13 2023-04-11 南京航空航天大学 Engine cylinder block aperture size measuring method based on depth camera
CN116958146A (en) * 2023-09-20 2023-10-27 深圳市信润富联数字科技有限公司 Acquisition method and device of 3D point cloud and electronic device
CN116958146B (en) * 2023-09-20 2024-01-12 深圳市信润富联数字科技有限公司 Acquisition method and device of 3D point cloud and electronic device
CN117095002A (en) * 2023-10-19 2023-11-21 深圳市信润富联数字科技有限公司 Hub defect detection method and device and storage medium
CN117095002B (en) * 2023-10-19 2024-02-06 深圳市信润富联数字科技有限公司 Hub defect detection method and device and storage medium

Also Published As

Publication number Publication date
CN111080627B (en) 2021-01-05

Similar Documents

Publication Publication Date Title
CN111080627B (en) 2D +3D large airplane appearance defect detection and analysis method based on deep learning
Fan et al. Pothole detection based on disparity transformation and road surface modeling
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
CN112927360A (en) Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
WO2015096508A1 (en) Attitude estimation method and system for on-orbit three-dimensional space object under model constraint
CN112902953A (en) Autonomous pose measurement method based on SLAM technology
Yue et al. Hierarchical probabilistic fusion framework for matching and merging of 3-d occupancy maps
CN111640158B (en) End-to-end camera and laser radar external parameter calibration method based on corresponding mask
CN113436260A (en) Mobile robot pose estimation method and system based on multi-sensor tight coupling
CN110866969A (en) Engine blade reconstruction method based on neural network and point cloud registration
CN109035329A (en) Camera Attitude estimation optimization method based on depth characteristic
CN114626470B (en) Aircraft skin key feature detection method based on multi-type geometric feature operator
JP2018128897A (en) Detection method and detection program for detecting attitude and the like of object
Wang et al. Density-invariant registration of multiple scans for aircraft measurement
Jin et al. An indoor location-based positioning system using stereo vision with the drone camera
CN111915517A (en) Global positioning method for RGB-D camera in indoor illumination adverse environment
CN114998395A (en) Effective embankment three-dimensional data change detection method and system
CN116518864A (en) Engineering structure full-field deformation detection method based on three-dimensional point cloud comparison analysis
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Zhao et al. Vision-based adaptive stereo measurement of pins on multi-type electrical connectors
CN112365592B (en) Local environment feature description method based on bidirectional elevation model
CN109671109A (en) Point off density cloud generation method and system
Yong-guo et al. The navigation of mobile robot based on stereo vision
Loaiza et al. Matching segments in stereoscopic vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant