CN109872397B - Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision - Google Patents


Info

Publication number
CN109872397B
CN109872397B · CN201910120856.5A · CN201910120856A
Authority
CN
China
Prior art keywords
dimensional
image
segmentation
tray
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910120856.5A
Other languages
Chinese (zh)
Other versions
CN109872397A (en
Inventor
沈琦
李瑾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910120856.5A priority Critical patent/CN109872397B/en
Publication of CN109872397A publication Critical patent/CN109872397A/en
Application granted granted Critical
Publication of CN109872397B publication Critical patent/CN109872397B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a three-dimensional reconstruction method for aircraft parts based on multi-view stereo vision, comprising the following steps: acquiring images of a tray and multiple aircraft parts by multi-view stereo vision, and modelling the tray and parts as a whole to obtain a three-dimensional model; in parallel, performing complex-background image segmentation and data acquisition on the captured two-dimensional images; and segmenting the three-dimensional model based on the two-dimensional segmented images and the three-dimensional model, thereby obtaining each three-dimensional part and locating its spatial coordinates. The invention effectively solves the part-identification work of fully automatic aircraft-part spraying and supports subsequent matching against a part model library, spray-trajectory extraction, and robot-arm positioning.

Description

Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction method for aircraft parts based on multi-view stereo vision.
Background
From observation of actual production lines, the current aircraft-part spraying industry still relies on manual spraying assembly lines: parts are identified, placed, sprayed, and turned over by hand. In order to realize fully automatic spraying of aircraft parts, part identification is implemented by machine-vision methods, and the operation of a mechanical arm completing a specific spray trajectory for each part is designed and developed.
1. Three-dimensional reconstruction technique
With the continuous development of computer vision, three-dimensional reconstruction technology has continuously advanced. Three-dimensional reconstruction mainly acquires real-world information through visual sensors and then obtains the three-dimensional information of an object through information-processing techniques or a projection model. According to how data are acquired, Varay et al. in 1997 divided three-dimensional reconstruction into contact and non-contact methods. Contact methods measure three-dimensional data directly with an instrument and obtain accurate data, but are strongly scene-limited and inefficient; common examples are CMMs and robotic arms. Isgro et al. in 2005 divided the non-contact methods into active vision and passive vision. Active vision applies optical principles directly, scanning the scene or object and obtaining a data point cloud by analysing the scan to realize the reconstruction; it can capture a large amount of surface detail with higher precision, at the cost of higher expense and inconvenient operation, its main methods including laser scanning, Kinect, structured light, and shadow methods. Passive-vision three-dimensional reconstruction instead reverse-engineers the modelling of an object by analysing the information in an image sequence, thereby obtaining a three-dimensional model of the scene or of objects within it; since it does not control the light source directly, it has low illumination requirements and low cost, and suits three-dimensional reconstruction of various complex scenes. Passive-vision methods are further classified into monocular, binocular, and multi-view stereo vision according to the number of cameras.
Monocular stereo vision performs three-dimensional reconstruction with a single camera: on the basis of camera calibration, feature extraction and feature matching are performed by comparing successive images, and the reconstruction is then completed by a three-dimensional reconstruction method; the main techniques are shape-from-X, structure from motion, and statistical learning. The working principle of binocular stereo vision derives from the human binocular visual system: two identical cameras capture left and right images of the same scene from different viewing angles, the depth of an object is obtained by the triangulation principle, and a three-dimensional model is reconstructed from that depth; the whole process comprises image acquisition, camera calibration, image rectification, feature extraction and stereo matching, three-dimensional reconstruction, and optimization. Because binocular stereo vision observes a spatial object from only two viewpoints, some useful three-dimensional information is lost to projection during imaging, and the reconstruction is incomplete. Subsequent research on multi-view stereo vision therefore adds one or more auxiliary cameras to the binocular setup to obtain multiple image pairs of the same object at different angles; as with human binocular parallax, the most critical problem is image matching.
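The binocular triangulation principle mentioned above reduces, for rectified cameras, to the classic relation Z = f·B/d (depth from focal length, baseline, and disparity). A stdlib-only sketch with illustrative numbers (the focal length, baseline, and disparity below are assumptions, not values from the patent):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d,
# where f is the focal length in pixels, B the baseline in mm,
# and d the disparity in pixels. Larger disparity means a closer point.
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Return depth in mm for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_mm / disparity_px

# A point matched with 8 px disparity by two cameras 120 mm apart (f = 800 px)
z = depth_from_disparity(800.0, 120.0, 8.0)   # 12000.0 mm
```

Halving the disparity doubles the recovered depth, which is why distant points (small disparity) are the hardest to reconstruct accurately.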
At present many scholars have studied camera calibration and matching algorithms for stereo image pairs in multi-view vision. In 2000, Dr. Lei Cheng of the Chinese Academy of Sciences developed the CVSuit software, realizing camera self-calibration, feature-point extraction, three-dimensional model visualization, and related functions. In 2006, Snavely and Seitz used large numbers of Internet photos of Rome taken from different positions and viewpoints, proposed the theory of incrementally adding pictures to motion-and-structure reconstruction, and realized a three-dimensional reconstruction of the city of Rome on a computer. In 2007, Zhang Hong of the University of Science and Technology implemented, in a VC++ 6.0 environment, a three-dimensional reconstruction system covering relative orientation of stereo pairs, stereo matching, and disparity-map processing. In 2009, Angst and Pollefeys proposed a new multi-camera structure-from-motion method that avoids the feature-point correspondence step between different cameras when constructing the measurement matrix; by decomposing a low-rank matrix, it calibrates all cameras in one pass while reconstructing the three-dimensional structure of the object. In 2013, Wuhan Engineering University presented a three-dimensional reconstruction method using eight linearly arranged industrial cameras. In 2015, Wuhan University proposed fully automatic three-dimensional modelling based on multi-view stereo vision, together with a point-cloud simplification algorithm combining a bounding-box algorithm with point-cloud curvature features.
Three-dimensional reconstruction based on multi-view stereo vision can reduce measurement blind areas without contact, obtain a larger field of view, resolve the mismatching problem of binocular vision, and adapt to various scenes. It mainly comprises the following five steps: image acquisition, camera calibration, feature-point extraction and stereo matching, motion and structure reconstruction, and model optimization by bundle adjustment. Through these steps a sparse point cloud of the scene or of a target within it is obtained; the cloud is then densified, meshed, and texture-mapped and edited, completing a three-dimensional reconstruction with good effect.
(1) Image acquisition: in machine vision, an industrial CCD camera and an image-acquisition card are usually used; the influence of light-source conditions, precision, viewpoint differences, and similar factors on acquisition quality must be considered;
(2) Camera calibration: this can be regarded as estimating the parameters of a pinhole imaging model. The camera is modelled as a pinhole, and for this model a camera matrix represents the projective transformation from the world coordinate system to the camera coordinate system;
(3) Feature-point extraction and stereo matching: to recover motion and structure, corresponding points must be found between the images, which requires tracking features such as corners, edges, and gradients in each picture. The most common detector is SIFT; feature points of different pictures are matched by their SIFT descriptors, but some matches are wrong, so RANSAC is used to eliminate the erroneous matches;
(4) Motion and structure reconstruction: SFM (structure from motion) first estimates the relations between all camera shooting positions from the input matched points, and then estimates the real-world positions of the image feature points from those camera relations and the triangulation principle;
(5) Bundle adjustment: because noise from illumination and errors in actual shooting prevent the true position of a three-dimensional point from being obtained by triangulation alone, the sum of squared errors between the measured and true values is estimated and progressively reduced so that the solution approaches the actual position.
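The RANSAC outlier rejection used in step (3) can be sketched with a toy model. For simplicity the sketch fits a 2D line rather than the multi-view geometry the patent implies; the hypothesize-and-verify loop (sample a minimal set, fit, count inliers, keep the best) is the same, and all names and thresholds are illustrative:

```python
import random

# Minimal RANSAC: repeatedly fit a line y = a*x + b from two random points,
# score it by how many points fall within `tol` of it, keep the best model.
def ransac_line(points, iters=200, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # degenerate vertical sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers (bad "matches")
pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
model, inliers = ransac_line(pts)   # recovers (2.0, 1.0); outliers rejected
```

In real SFM pipelines the fitted model is a fundamental/essential matrix or homography rather than a line, but the consensus logic is identical.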
2. Image segmentation technique
Image segmentation divides an image into several non-overlapping regions according to features such as grey level, colour, texture, and shape, with the features similar within one region and clearly different between regions. The main image-segmentation methods are:
(1) Threshold-based segmentation: the grey level of each pixel is compared with a threshold, the threshold being computed from the grey-level distribution of the original image;
(2) Edge-based segmentation: local features are discontinuous at boundaries, so whether a pixel belongs to an image edge is judged from the first and second derivatives of the image intensity;
(3) Region-based segmentation: methods such as region splitting-and-merging and watershed divide the image according to pre-specified rules. There are two main implementations: region growing, and region splitting and merging. Region growing starts from a selected seed point, evaluates the pixels in its neighbourhood, merges them, updates the seed set, and repeats until the segmentation is complete. Region splitting-and-merging starts from the whole image, splits it according to a rule, then merges the pieces by a merging criterion, finally forming the segmentation.
(4) Graph-theory-based segmentation: methods such as Graph Cut and Grab Cut relate image segmentation to the minimum-cut problem of a graph. The image is mapped to an undirected graph with one node per pixel; edge weights represent the non-negative similarity of adjacent pixels in grey level, colour, or texture. Segmenting the image is then cutting the graph: removing specific edges divides the graph into several subgraphs and so realizes the segmentation;
the K-means segmentation method belongs to segmentation based on regions, is a clustering algorithm based on distance similarity, and is divided into the same category by comparing samples. The Grab Cut is a segmentation method based on Graph theory, is a subject of Microsoft research institute, and has the main functions of segmentation and matting, compared with the Graph Cut, the model of the target and the background is a mixed Gaussian model GMM with RGB three channels, which is not the minimum segmentation, but an interactive iterative process of segmentation estimation and model parameter learning is continuously carried out instead.
3. Three-dimensional model segmentation technique
Three-dimensional model segmentation divides a model into a set of sub-shapes so that the partition carries some meaning. Early three-dimensional segmentation was mostly manual; recent research tends toward automatic segmentation, generally divided into meaningful and meaningless segmentation. Segmenting a three-dimensional model means segmenting its triangular mesh: according to the geometric or topological attributes of the mesh, the model is cut into a set of disjoint mesh surfaces. The more mature three-dimensional segmentation methods are:
(1) Curvature-based segmentation: Zhang et al. compute the Gaussian curvature at each vertex and form a segmentation line fitted through points of minimum negative curvature, thereby achieving meaningful segmentation;
(2) Watershed algorithm: a segmentation algorithm extended from two dimensions to three. Page et al. introduced the FMW (Fast Marching Watershed) algorithm to segment triangular meshes: first compute the principal directions and principal curvatures at each vertex and create a marker set, then grow the marker set with the watershed method.
(3) Region-growing algorithm: a method that extends depth-map segmentation to three-dimensional mesh models; Vieira et al. realized mesh segmentation with region growing. The area of each vertex's incident triangles is computed and their average taken as the vertex weight; region growing is then repeatedly started from the vertex of maximum weight.
(4) Clustering methods: derived from image segmentation, commonly K-means; triangular meshes with similar features are grouped into one class, and all three-dimensional meshes are traversed to realize the segmentation.
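The region-growing idea in method (3) can be sketched on a tiny adjacency graph: grow from a seed, absorbing neighbours whose scalar attribute (standing in for the triangle-area weight described above) stays within a tolerance of the seed's. The mesh, weights, and tolerance below are all illustrative:

```python
from collections import deque

# Region growing over a vertex adjacency graph: breadth-first expansion from
# a seed vertex, accepting neighbours whose weight is close to the seed's.
def grow_region(adjacency, weights, seed, tol):
    region, frontier = {seed}, deque([seed])
    while frontier:
        v = frontier.popleft()
        for n in adjacency[v]:
            if n not in region and abs(weights[n] - weights[seed]) <= tol:
                region.add(n)
                frontier.append(n)
    return region

# A tiny 5-vertex "mesh": vertices 0-2 are flat (similar weights), 3-4 differ.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
w = {0: 1.0, 1: 1.1, 2: 0.9, 3: 5.0, 4: 5.2}
region = grow_region(adj, w, seed=0, tol=0.5)   # {0, 1, 2}
```

Repeating this from the heaviest unassigned vertex until every vertex is labelled yields a full partition, mirroring the Vieira-style procedure described above.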
Large aircraft parts are numerous and of many types, including parts of the same shape but different sizes; one tray of parts is sprayed at a time, and the parts on one tray are many in number and varied in kind. The part structures are complex and lie tightly against the grid tray; the grid tray underneath is even more complex and identical in colour to the parts. At the same time, more parameter information about each part must be acquired: richer parameter support and finer modelling are required to support the whole reconstruction system and the subsequent identification work.
The following problems are proposed for the above scenarios:
(1) First, the choice of three-dimensional reconstruction scheme: a finer model must be obtained while saving cost, so a reconstruction method based on multi-view stereo vision is used. Direct reconstruction alone, however, cannot solve the actual scene problem of obtaining a model of each individual part for subsequent identification and matching, because one modelling pass yields the model of a whole tray of parts (including the surrounding background); the image-acquisition scheme therefore also needs detailed design.
(2) The size of each part, whether it has shifted, its positioning coordinates, and its contour information must all be obtained; this problem is addressed directly through edge detection and image segmentation of the complex background.
(3) Segmentation based on the model faces great interference in three dimensions, for reasons analogous to the image-segmentation problem; at the same time, accurate positioning and contour information cannot be acquired from model segmentation alone.
Disclosure of Invention
Aiming at the above defects, the invention provides a three-dimensional reconstruction method for aircraft parts based on multi-view stereo vision, applied to an actual production line to improve production intelligence.
The invention discloses a three-dimensional reconstruction method for aircraft parts based on multi-view stereo vision, comprising:
acquiring images of the tray and the multiple aircraft parts by multi-view stereo vision, and modelling the tray and parts as a whole to obtain a three-dimensional model; in parallel, performing complex-background image segmentation and data acquisition on the captured two-dimensional images;
and segmenting the three-dimensional model based on the two-dimensional segmented image and the three-dimensional model, thereby obtaining each three-dimensional part and locating its spatial coordinates.
As a further improvement of the invention, acquiring images of the tray and the multiple aircraft parts by multi-view stereo vision and modelling them as a whole to obtain a three-dimensional model comprises the following steps:
acquiring an image;
extracting and matching the characteristic points;
SFM two-view structure-from-motion reconstruction;
performing BA nonlinear optimization;
incremental structure-from-motion reconstruction;
dense reconstruction and point cloud mesh reconstruction.
As a further improvement of the present invention, in the image acquisition, the layout of the cameras is:
the tray is a 1800 mm × 900 mm rectangle; an elliptical guide rail is designed at 500 mm horizontal distance from the platform, 500 mm height, and a 45-degree depression viewing angle. One camera is placed at each of the four corners and one at the centre of each long edge, six cameras in total; taking the tray centre as the origin, the cameras lie on rays every 60 degrees, so the six angles cover the whole area;
meanwhile, for each camera a next target point is set on the guide rail on the ray 20 degrees (to the camera's adjacent right) from the origin, and from that position a further target point another 20 degrees on;
meanwhile, one camera covering the field of view of the part tray is arranged above the centre of the tray to acquire a top-view two-dimensional image for the subsequent two-dimensional segmentation operation.
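The six-camera ring layout above can be sketched numerically: one camera per 60-degree ray from the tray centre, at 500 mm height, where each ray meets the elliptical rail. The rail semi-axes below are assumptions for illustration (the patent specifies only the 1800 mm × 900 mm tray and the 500 mm offsets):

```python
import math

# Six camera positions on an elliptical rail, one per 60-degree ray from the
# tray centre, all at the same height. The ray at angle t meets the ellipse
# x^2/a^2 + y^2/b^2 = 1 at radius r = 1 / sqrt((cos t / a)^2 + (sin t / b)^2).
def camera_positions(a_mm=1400.0, b_mm=950.0, height_mm=500.0, n=6):
    cams = []
    for k in range(n):
        t = math.radians(360.0 / n * k)   # one ray every 60 degrees when n = 6
        r = 1.0 / math.sqrt((math.cos(t) / a_mm) ** 2 + (math.sin(t) / b_mm) ** 2)
        cams.append((r * math.cos(t), r * math.sin(t), height_mm))
    return cams

cams = camera_positions()   # 6 poses covering the full 360 degrees
```

Each returned (x, y, z) would then be paired with a 45-degree downward look-at direction toward the tray centre, as described in the layout.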
As a further improvement of the invention, the complex-background image segmentation and data acquisition on the captured two-dimensional image comprises: image segmentation, and part edge and parameter extraction;
the image segmentation method comprises the following steps:
on the acquired image, a combination of the k-means algorithm and Grab Cut is coded: for the case where part colours resemble the background colour, a k-means clustering algorithm over the RGB channels multi-classifies the image, and multiple images are obtained so as to separate the target parts from the complex background; after automatic clustering, Grab Cut segmentation is performed according to the displayed image differences, the editing/segmentation area being determined before Grab Cut runs, with a fixed-size boundary used for the segmentation;
the method for extracting the edge and the parameters of the part comprises the following steps:
reading the image and performing morphological processing;
performing median filtering and blur filtering on the result of the previous step;
performing Canny edge detection on the optimized image;
finding and drawing the outermost contours via findContours() and drawContours();
calling minAreaRect() to draw the minimum-area bounding rectangle and obtain the length of each side;
meanwhile, counting the number of contours;
computing the perimeter and area of each part via contourArea() and arcLength();
and calling the RotatedRect API to acquire the centroid and relative rotation angle of each part.
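The contour measurements named in the steps above (cv2.contourArea / cv2.arcLength) reduce to the shoelace formula and summed edge lengths. A stdlib-only sketch on an explicit polygon, for illustration (a real pipeline would use OpenCV on the detected contours):

```python
import math

# contourArea equivalent: shoelace formula over the closed polygon.
def contour_area(poly):
    s = sum(x1 * y2 - x2 * y1
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))
    return abs(s) / 2.0

# arcLength(closed=True) equivalent: sum of edge lengths around the polygon.
def contour_perimeter(poly):
    return sum(math.dist(p, q) for p, q in zip(poly, poly[1:] + poly[:1]))

# A 40 x 30 rectangular part outline
rect = [(0, 0), (40, 0), (40, 30), (0, 30)]
area, perim = contour_area(rect), contour_perimeter(rect)   # 1200.0, 140.0
```

The same vertex list also yields the centroid (mean of vertices for simple shapes, or the area-weighted moment formula), which is what the RotatedRect call provides in the patent's pipeline.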
As a further improvement of the invention, segmenting the three-dimensional model based on the two-dimensional segmented image and the three-dimensional model, thereby obtaining each three-dimensional part and locating its spatial coordinates, comprises the following steps:
performing point-cloud denoising on the three-dimensional model, since noise points from the tray and the surrounding environment mainly affect the integrity of the meshed model;
normalizing the three-dimensional model with a bounding box to fix the spatial coordinates; after this step the model is invariant to rotation, translation, and scaling, and the origin of the space is determined;
performing depth projection of the model inside the bounding box;
mapping the depth-projection image to the two-dimensional segmented image, rotating and scaling the two-dimensional segmented image, and simultaneously adjusting the centroid coordinates of each part within it;
determining the cutting surfaces of the three-dimensional model from the adjusted segmented image and performing the cut;
and determining the spatial coordinates.
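The bounding-box normalization step above can be sketched for translation and scale: shift the cloud so the box corner is the origin and scale the longest side to 1. This is illustrative and stdlib-only; rotation normalization (e.g. aligning to principal axes) is omitted here and not specified in detail by the patent:

```python
# Normalise a 3D point cloud by its axis-aligned bounding box: translate the
# box's minimum corner to the origin, then divide by the longest box side so
# every coordinate lands in [0, 1].
def normalize_cloud(points):
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    scale = max(maxs[i] - mins[i] for i in range(3)) or 1.0  # avoid div by 0
    return [tuple((p[i] - mins[i]) / scale for i in range(3)) for p in points]

cloud = [(100.0, 200.0, 50.0), (300.0, 250.0, 90.0), (200.0, 210.0, 70.0)]
norm = normalize_cloud(cloud)   # all coordinates now in [0, 1]
```

With the cloud in this canonical frame, a top-down depth projection along one axis can be aligned with the two-dimensional segmented image by rotation and scaling, as the mapping step above describes.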
Compared with the prior art, the invention has the beneficial effects that:
the invention designs the whole scheme for recognizing airplane parts by using the thought of three-dimensional reconstruction, and the scheme comprises a surrounding + overhead camera layout in an actual scene based on a large number of levels of various airplane parts and a three-dimensional modeling flow design method aiming at the scene, innovatively uses the three-dimensional reconstruction + two-dimensional image to acquire a model and a segmentation image in parallel, innovatively uses the mapping between the three-dimensional reconstruction and the two-dimensional image to acquire each single part model in the whole model and complete positioning, and completes the acquisition of three-dimensional models of a plurality of parts and related parameters through the three-dimensional modeling + two-dimensional segmentation + two-dimensional three-dimensional mapping; the part identification work of full-automatic aircraft part spraying is effectively solved, and all supports are provided for matching in a subsequent part model library, spraying track extraction and mechanical arm positioning.
Drawings
FIG. 1 is a flowchart of a three-dimensional reconstruction method of an aircraft part based on multi-view stereo vision according to an embodiment of the present invention;
FIG. 2 is a block diagram of an overall system for implementing a three-dimensional reconstruction method according to an embodiment of the present invention;
FIG. 3 is a block diagram of an overall design and architecture of a system for implementing a three-dimensional reconstruction method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a camera layout according to an embodiment of the present invention;
fig. 5 is a basic flow chart of SIFT feature detection disclosed in one embodiment of the present invention;
FIG. 6 is a flow chart of a dual view motion restoration architecture according to an embodiment of the present disclosure;
FIG. 7 is a flow chart of an incremental movement restoration structure disclosed in one embodiment of the present invention;
FIG. 8 is a flowchart of a method for space-based patch diffusion according to an embodiment of the present invention;
fig. 9 is a graph of the part experimental results under the condition K=8 according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Three-dimensional reconstruction technology matures day by day, and multi-view stereo vision, having more viewpoints than binocular stereo vision, can capture more details of a spatial object and achieve a better reconstruction. At present the spraying of large aircraft parts is finished manually; applying three-dimensional reconstruction to an aircraft-part spraying line, with machine vision completing part identification after reconstruction in place of manual work, can greatly improve industrial production efficiency while reducing the harm of spray gases to the human body. The spraying line handles one tray of parts per production pass; the parts are of many kinds, different shapes, and different sizes, so richer parameter support and finer modelling are required for the whole reconstruction system and the subsequent identification work. For these scenes, the invention provides a three-dimensional reconstruction method for aircraft parts based on multi-view stereo vision. The method runs the complete structure-from-motion reconstruction flow on the whole tray of parts, starting from the design of a part vision system that acquires two-dimensional and three-dimensional data simultaneously; while the model is generated, complex-background image segmentation, graphics processing, and parameter acquisition are performed on the two-dimensional image; finally the three-dimensional model is segmented using the two-dimensional segmented image and the model itself, completing the positioning and extraction of each single-part model.
The invention is described in further detail below with reference to the attached drawing figures:
in order to realize the design and realization of a full-automatic production line, the invention provides a three-dimensional reconstruction method based on multi-view stereoscopic vision in a spraying scene of airplane parts, which realizes a set of solution schemes and mainly provides and solves the following steps:
1. designing and realizing a reconstruction method of a three-dimensional model based on multi-view stereo vision;
2. the method comprises the steps of (1) extracting various parts based on a complex scene under a two-dimensional image;
3. segmenting and acquiring a three-dimensional part based on a two-dimensional image and a three-dimensional model;
therefore, the acquisition of the three-dimensional model under the multi-view stereo vision based on the airplane parts is completed.
Therefore, as shown in fig. 1, the invention provides a three-dimensional reconstruction method of an airplane part based on multi-view stereo vision, which comprises the following steps:
acquiring images of the tray and the multiple airplane parts based on multi-view stereoscopic vision, and performing integral modeling on the tray and the multiple airplane parts to obtain a three-dimensional model; meanwhile, performing image segmentation and data acquisition of a complex background on the acquired two-dimensional image; and (4) segmenting the three-dimensional model based on the two-dimensional segmentation image and the three-dimensional model, and finishing the acquisition of each three-dimensional part and the positioning of the space coordinate.
Specifically, the method comprises the following steps:
1. Overall design of the three-dimensional reconstruction scheme based on multi-view stereo vision
The invention aims to locate and identify the multiple parts on the tray in a single spraying scene. Given that many kinds of parts sit on the tray and the tray background is complex, the whole assembly is modelled first, and the model and related parameters of each individual part are then obtained by the designed scheme. Based on this, as shown in fig. 2 and 3, the system realizing the three-dimensional reconstruction method comprises a three-dimensional reconstruction system, a two-dimensional image-processing system, and a three-dimensional model-segmentation system: the pipeline from image acquisition to whole-tray multi-part modelling is completed in sequence while the two-dimensional images are acquired and processed in parallel; the model and the two-dimensional segmentation maps produced by those two systems are then combined in the final three-dimensional model-segmentation system to segment the model and obtain the position of each single part.
2. Design and implementation of the three-dimensional reconstruction method based on multi-view stereo vision
The invention adopts multi-view stereo vision to capture more scene detail, which also benefits the subsequent segmentation and mapping. The three-dimensional reconstruction adopts the Structure from Motion (SfM) method to reconstruct the model; each step of the pipeline is designed for the actual scene as follows:
(1) Image acquisition
As shown in fig. 4, in machine vision the number and angles of the cameras determine the density and quality of the final point-cloud fusion and are directly related to the quality of the generated model, so choosing the number of cameras and their placement is very important. In this design, the tray is a 1800 mm x 900 mm rectangle. Its span is large and requires sufficient viewing distance; considering the assembly-line setting, site restrictions and cost, an elliptical guide rail is specially designed 500 mm horizontally from the platform and 500 mm above it, with the viewing angle depressed 45 degrees. One camera is placed at each of the four corners (at asymmetric corner points) and one at the center of each long edge, six cameras in total; taking the tray center as the origin, the cameras lie on rays spaced every 60 degrees, so the six angles cover the entire area.
Meanwhile, for each camera a next target point is set on the guide rail, on the ray 20 degrees from the origin toward its adjacent right side, and the point a further 20 degrees along is set as the following target point. That is, after each start the six cameras first acquire images at the initial position, then all six move 20 degrees (about the center of the rectangle) and acquire images again, and then rotate a further 20 degrees clockwise in the same direction from the current position and acquire a third set. A density of one image every 20 degrees over the 360-degree circle is thus obtained, 18 images in total, for subsequent feature-point extraction, stereo matching and three-dimensional reconstruction; the tighter parallax yields more matched pixels and viewing angles and a better imaging result. This camera scheme is possible because the subsequent reconstruction is a non-contact, multi-view-geometry structure-from-motion pipeline, so the images may be unordered.
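The capture schedule above (six cameras on 60-degree rays, three rounds offset by 20 degrees, 18 views in total) can be sketched as follows. The elliptical semi-axis values are illustrative assumptions; the patent specifies only the tray size and the 500 mm rail offsets.

```python
import numpy as np

def camera_positions(n_cameras=6, rounds=3, step_deg=20.0,
                     a=1400.0, b=950.0, height=500.0):
    """Return (rounds * n_cameras, 3) camera centres on an elliptical rail.

    Cameras start on 60-degree rays from the tray centre; between capture
    rounds the whole ring advances step_deg degrees, giving one image
    every 20 degrees over the full circle.
    """
    poses = []
    for r in range(rounds):
        for k in range(n_cameras):
            theta = np.deg2rad(k * 360.0 / n_cameras + r * step_deg)
            # Elliptical rail in the tray plane, raised by `height` mm.
            poses.append([a * np.cos(theta), b * np.sin(theta), height])
    return np.asarray(poses)

poses = camera_positions()
```

With the default parameters this yields 18 distinct viewing directions spaced 20 degrees apart, matching the schedule described in the text.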
Meanwhile, one camera covering the field of view of the part tray is arranged above the center of the tray to acquire a top-view two-dimensional image for the subsequent two-dimensional segmentation.
(2) Feature point extraction and matching
As shown in fig. 5, the invention adopts the SIFT feature detection algorithm, based on the DoG feature detector, to detect the feature points of two images. After the feature points of the two images are obtained, the Euclidean distance is used to measure the distance between feature descriptors of different images, and mismatches are reduced with the nearest-neighbor distance-ratio strategy: a KNN search finds the 2 best-matching features, and if the ratio of the first match distance to the second is smaller than a certain threshold the match is accepted, otherwise it is regarded as a mismatch. The matched feature points are then verified: the camera model is estimated, the homography matrix is computed from the matched feature points, and the feature points are screened so that the inliers serve as reliable matching points.
(3) SFM double-visual angle motion recovery structure
As shown in fig. 6, after feature points are extracted and matched, initialization provides initial values for the nonlinear optimization. This includes obtaining the internal parameters (the focal length is contained in the image EXIF information) and recovering the camera parameters; in dual-view motion-structure recovery this mainly means finding the fundamental matrix and the essential matrix, performing SVD on the essential matrix, selecting the correct camera pose, and obtaining the coordinates of the spatial three-dimensional points by triangulation. Through this initialization, initial values of all parameters are obtained, and then BA nonlinear optimization is performed over the camera parameters, the spatial three-dimensional points and the observations. This is the whole process of dual-view motion-structure recovery.
(4) BA non-linear optimization
The three-dimensional point information in space is acquired through feature-point extraction, solving of the camera fundamental and essential matrices, feature matching and triangulation; as the number of images and viewing angles grows, the accumulated error becomes larger and larger and affects the final reconstruction, hence bundle adjustment. In this optimization, the system uses the same camera throughout, so the internal parameters reduce mainly to the focal length and the distortion coefficients; moreover the cameras rotate through multiple angles to obtain feature images of different angles, so part of the camera parameters are consistent across angles. Because the camera focal length is extracted with extract_float_len() each time an image is obtained, the following BA scheme shares the internal parameters, unifies the initial camera focal length and keeps it fixed and shared, which reduces code logic and improves efficiency. Each camera C_j is then responsible only for its rotation and translation parameters R_j and t_j:
min over f, {R_j, t_j}, {X_i} of Σ_{i,j} ‖ x_{ij} − π( K(f) (R_j X_i + t_j) ) ‖², where K(f) is the intrinsic matrix with the shared focal length f, x_{ij} is the observation of point X_i in view j, and π denotes perspective projection.
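The shared-intrinsics parameterisation can be sketched as the reprojection-residual function that a generic nonlinear least-squares solver would minimise in the BA step. The simple pinhole model with a single shared focal length f is an assumption consistent with the text (distortion omitted), and rotations are kept as matrices for brevity; a real solver would use a minimal parameterisation such as angle-axis.

```python
import numpy as np

def reprojection_residuals(f, cx, cy, Rs, ts, X, observations):
    """Residuals of the shared-intrinsics BA cost.

    f, cx, cy  -- shared pinhole intrinsics for all views
    Rs, ts     -- per-view rotation matrices and translation vectors
    X          -- (N, 3) array of three-dimensional points
    observations -- iterable of (i, j, u, v): point i seen in view j at (u, v)
    """
    res = []
    for i, j, u, v in observations:
        Xc = Rs[j] @ X[i] + ts[j]          # world -> camera j
        u_hat = f * Xc[0] / Xc[2] + cx     # pinhole projection, shared f
        v_hat = f * Xc[1] / Xc[2] + cy
        res.extend([u - u_hat, v - v_hat])
    return np.asarray(res)
```

Stacking these residuals over all observations and minimising their squared norm over f, {R_j, t_j} and {X_i} (with f fixed and shared, as the text prescribes) is exactly the optimisation described above.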
(5) Incremental motion recovery structure
As shown in fig. 7, the incremental motion-recovery structure targets reconstruction from an unordered image set. After extracting and matching image features with the dual-view geometry (the fundamental-matrix/essential-matrix model used in the present invention), an image connection graph is constructed in which each vertex is an image, and every two images sharing a common visible region (with enough matching inliers) are connected by an edge. Meanwhile, the incremental structure uses multiple views to model each three-dimensional point: after feature points are detected on each image and matched pairwise, the same feature point may be observed from several viewing angles; the corresponding matches across these views are linked into a Track, and each Track corresponds to one three-dimensional point. First the dual-view motion structure is recovered, yielding the camera parameters and the scene structure (three-dimensional point coordinates) from the camera motion; on this basis more views are added, with reconstruction and global BA bundle-adjustment optimization performed continually, realizing incremental SfM and outputting a sparse point cloud.
(6) Dense reconstruction and point cloud mesh reconstruction
As shown in fig. 8, after the sparse three-dimensional point cloud is obtained through incremental SfM, dense reconstruction is performed with a method based on spatial-patch diffusion, where a spatial patch is anchored at a real spatial three-dimensional point and diffusion takes place directly on the three-dimensional points. A spatial patch is a patch of very small size in space whose center is the coordinate of a three-dimensional point; by a regular expansion procedure the patches are made to cover the whole surface. First a spatial patch is defined as a 5 x 5 grid whose center corresponds to the three-dimensional point; it is projected into the different images and the photometric consistency is computed, measured with NCC (Normalized Cross-Correlation); dense reconstruction is then completed through patch expansion and patch filtering.
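The NCC score used to measure photometric consistency of a patch projected into two images can be sketched in a few lines; the 5 x 5 patch size follows the grid described above.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-9):
    """Normalized cross-correlation of two equally sized patches, in [-1, 1]."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = a - a.mean()  # remove the mean so the score is exposure-invariant
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

A patch whose projections agree photometrically scores near 1; during patch filtering, projections scoring below a threshold would be rejected.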
The method uses implicit-function three-dimensional mesh reconstruction, which yields a stable, high-precision mesh with a certain robustness. First the space is divided, uniformly or non-uniformly, in order to sample the implicit function. Then the implicit-function value at each vertex is computed. A global method must fit the implicit function, which requires knowing its specific expression; generally it is expressed as a weighted sum of basis functions placed on the vertices. The global approach is therefore a fitting problem: the signed-distance value of each vertex is recomputed from the fitted implicit function, and the signed-distance field is then converted into a mesh with Marching Cubes. A local method only needs to compute each signed distance and run Marching Cubes. Because the global method adds a fitting step it is robust to noise but loses much detail; local methods preserve detail well but are susceptible to noise. The present invention uses a local method, in the form of floating-scale surface reconstruction.
Thus, the above three-dimensional reconstruction method completes the overall modeling of the entire multi-part containing pallet.
3. Design and implementation method of two-dimensional image segmentation based on complex background
The camera placed above the center of the tray obtains the two-dimensional image. Its purpose in this scheme is to obtain the outline and position of each part in the tray through two-dimensional segmentation, forming a cutting plane for segmentation once unified with the three-dimensional model coordinate system. Through two-dimensional part segmentation, graphical processing, edge detection and edge redrawing, parameters including the perimeter, area, offset angle and two-dimensional coordinates of each part are obtained from the two-dimensional image.
(1) Image segmentation scheme design and implementation
The following problems need to be solved:
(1) the number of parts on one tray is large and their types differ;
(2) the color of the parts is almost the same as that of the tray background, making segmentation difficult;
(3) the tray is a dense grid structure that overlaps the parts heavily; its visual structure is more complex than the parts themselves, so segmentation is difficult, many factors affect the final part result, and noise points are hard to handle.
Based on these problems, the invention designs an algorithm flow that completes the acquisition of the parts completely and successfully. The algorithm is implemented as follows:
Image segmentation combines the k-means algorithm with GrabCut. Because the colors of the image parts are similar to the background color, the image is clustered into multiple classes on the RGB channels with the k-means clustering algorithm, producing several images so that the target parts can be separated from the complex background. Note that in color classification, the more classes are used, the stronger the influence of illumination, so the images must be acquired under uniform illumination. After automatic clustering, GrabCut segmentation is performed according to the displayed image differences; before GrabCut segmentation, the segmentation region must be determined and edited.
The k-means step uses the function kmeans(InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers = noArray()) in OpenCV 3.4.1 to realize automatic clustering, where data is the input data set and K, the number of classes, is the most important parameter. Note that the whole iteration must take less time than the three-dimensional reconstruction and stay within the time limit of the production line; fig. 9 shows the experimental result on parts with K = 8.
(2) Part edge and parameter extraction
For the acquired image, the following operations are further performed:
(1) contour extraction: used to generate the final segmentation plane;
(2) parameter extraction: used to meet certain production indexes and to later match the parts against the model library; since the part types are numerous and some parts are identical in shape but different in size, the related parameters must be collected;
(3) coordinate positioning: the pixel-based two-dimensional coordinates in the image are extracted.
The specific algorithm flow is as follows:
Step1: reading the image and performing morphological processing;
Step2: performing median filtering and blur filtering on the result of the previous step (removing the influence of small noise);
Step3: performing Canny edge detection on the image optimized in Step2;
Step4: finding and drawing the outermost contours through findContours() and drawContours();
Step5: drawing the minimum bounding rectangle with minAreaRect() and acquiring the length of each edge;
Step6: meanwhile, counting the number of contours;
Step7: calculating the perimeter and area of each part with contourArea() and arcLength();
Step8: calling the RotatedRect API to obtain the centroid and relative rotation angle of each part.
based on the steps, all two-dimensional image segmentation and data acquisition are completed.
4. Design and implementation method for multi-part segmentation based on two-dimensional image mapping three-dimensional model
Considering the computational cost and complexity of traditional three-dimensional model segmentation, and the intuitiveness and computational efficiency of two-dimensional images, the design lets the two-dimensional image guide the three-dimensional model segmentation to separate and locate each part model within the scene model. A mapping relation between the three-dimensional model and the two-dimensional image is established. The contour lines extracted from the two-dimensional image form the segmentation planes. The three-dimensional model is projected onto a plane based on its bounding box, which also fixes the spatial position; in the common coordinate system of the projected image and the two-dimensional segmentation plane, the two-dimensional plane is scaled/expanded and rotated, with its two-dimensional coordinate parameters adjusted synchronously. The adjusted two-dimensional segmentation plane is normalized into the common coordinate system of the two-dimensional image and the three-dimensional model and translated, locating the segmentation position in the three-dimensional model and completing the segmentation of each part; the z-axis position is determined from the two-dimensional coordinates (x, y) of each part and its movement toward the three-dimensional model bounding box, thereby determining the spatial coordinates of each part.
Step1: carrying out point cloud denoising on the three-dimensional model to ensure that the gridded model mainly influences the integrity of the tray and reduce the surrounding environment noise points;
Step2: normalizing the three-dimensional model based on the bounding box to determine the spatial coordinates; after this step the three-dimensional model is invariant to rotation, translation and scaling/expansion, and the origin of the space is determined;
Step3: performing depth projection on the model in the bounding box;
Step4: mapping the depth-projection image to the two-dimensional segmentation image, rotating and scaling the two-dimensional segmentation image while adjusting the centroid coordinates of each part in it;
Step5: determining the segmentation planes of the three-dimensional model from the segmentation image adjusted in Step4 and performing the segmentation;
Step6: determining the spatial coordinates.
Each three-dimensional part is thereby acquired and its spatial coordinates located.
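A minimal numpy sketch of Steps 2-3 above: normalise the point cloud by its axis-aligned bounding box (fixing the spatial origin and making the later two-dimensional alignment invariant to scale), then project it down the z axis into a top-view depth image that can be registered with the two-dimensional segmentation image. The grid resolution and the keep-the-highest-z convention are assumptions.

```python
import numpy as np

def bbox_normalise(points):
    """Shift points to the bounding-box origin and scale by the largest extent."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (points - lo) / (hi - lo).max(), lo, hi

def depth_project(points, res=64):
    """Top-view depth map: for every (x, y) cell keep the highest z value."""
    norm, _, _ = bbox_normalise(points)
    depth = np.zeros((res, res))
    ij = np.minimum((norm[:, :2] * (res - 1)).astype(int), res - 1)
    for (i, j), z in zip(ij, norm[:, 2]):
        depth[j, i] = max(depth[j, i], z)
    return depth
```

Because the normalisation fixes origin and scale, the rotation/scaling/translation adjustments of Step4 reduce to a two-dimensional registration between this depth image and the segmentation image.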
The design idea is as follows: to extract each part model from the reconstructed point-cloud mesh model, the acquired two-dimensional image is first segmented to determine the position and outline of each part, and the outline plane serves as the plane along which the three-dimensional model is finally cut. The three-dimensional model is denoised to stabilize its edges, its main body being the whole tray area; the spatial origin is then fixed through the bounding box; the three-dimensional model is then mapped to generate a depth-projection image, which is two-dimensional and differs from the three-dimensional model only along the z axis, and is used to adjust the position of the two-dimensional segmentation plane. The two-dimensional segmentation plane and the three-dimensional model are unified into one coordinate system by rotating, translating and scaling or expanding the two-dimensional segmentation image against the depth image, after which a translation completes the segmentation. At the same time, the positioning coordinates obtained from the two-dimensional image are adjusted through the same rotation and translation operations, so the coordinates of each part in the three-dimensional model are determined; because the two-dimensional plane lies flat, the z value of these coordinates is 0.
According to the method, the camera layout for the actual scene is constructed and designed; fine three-dimensional modeling is achieved with the structure-from-motion pipeline while a top-view two-dimensional image is obtained; an image segmentation method for the complex background extracts the targets; graphical processing, edge detection and size measurement then yield the number of parts, the parameters of each part and the edge contours; finally, the coordinate systems of the two-dimensional image and the three-dimensional model are unified, the segmentation planes are determined from the two-dimensional image, and the three-dimensional segmentation result of each part is obtained. Parameters such as the offset angle and size of each model are collected at the same time for subsequent model-library matching.
The present invention has been described with reference to a preferred embodiment, but is not limited to that embodiment. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (2)

1. A three-dimensional reconstruction method of airplane parts based on multi-view stereo vision is characterized by comprising the following steps:
acquiring images of the tray and the multiple airplane parts based on multi-view stereoscopic vision, and performing integral modeling of the tray and the multiple airplane parts to obtain a three-dimensional model; meanwhile, performing complex-background image segmentation and data acquisition on the acquired two-dimensional image; the method for establishing the three-dimensional model comprises the following steps: acquiring images; extracting and matching feature points; SFM dual-view motion-structure recovery; BA nonlinear optimization; incremental motion-structure recovery; dense reconstruction and point-cloud mesh reconstruction; the image segmentation method comprises: combining the k-means algorithm with GrabCut on the acquired image; since the colors of the image parts are similar to the background color, clustering the image into multiple classes on the RGB channels with the k-means clustering algorithm and acquiring multiple images so as to separate the target parts from the complex background; after automatic clustering, performing GrabCut segmentation on the image according to the displayed image differences, determining and editing the segmentation region before GrabCut segmentation, and segmenting with a fixed boundary size; the method for extracting the part edges and parameters comprises: reading the image and performing morphological processing; performing median filtering and blur filtering on the result of the previous step; performing Canny edge detection on the optimized image; finding and drawing the outermost contours through findContours() and drawContours(); drawing the minimum bounding rectangle with minAreaRect() and acquiring the length of each edge; meanwhile, counting the number of contours; calculating the perimeter and area of each part through contourArea() and arcLength(); calling the RotatedRect API to obtain the centroid and relative rotation angle of each part;
segmenting the three-dimensional model based on the two-dimensional segmentation image and the three-dimensional model to complete the acquisition of each three-dimensional part and the positioning of its spatial coordinates; specifically comprising: denoising the point cloud of the three-dimensional model; normalizing the three-dimensional model based on the bounding box to determine the spatial coordinates, after which the three-dimensional model is invariant to rotation, translation and scaling/expansion and the origin of the space is determined; performing depth projection on the model in the bounding box; mapping the depth-projection image to the two-dimensional segmentation image, rotating and scaling the two-dimensional segmentation image while adjusting the centroid coordinates of each part in it; determining the segmentation planes of the three-dimensional model from the adjusted segmentation image and performing the segmentation; and determining the spatial coordinates.
2. A method for the three-dimensional reconstruction of an aircraft part based on multi-view stereo vision according to claim 1, characterized in that in said image acquisition the layout of the cameras is:
the tray is a 1800 mm x 900 mm rectangle; an elliptical guide rail is designed 500 mm horizontally from the platform and 500 mm above it, with the viewing angle depressed 45 degrees; one camera is placed at each of the four corners and one at the center of each long edge, six cameras in total; taking the tray center as the origin, the cameras lie on rays spaced every 60 degrees, and the six angles cover the whole area;
meanwhile, for each camera a next target point is set on the guide rail, on the ray 20 degrees from the origin toward its adjacent right side, and the point a further 20 degrees along is set as the following target motion point;
meanwhile, one camera covering the field of view of the part tray is arranged above the center of the tray to acquire a top-view two-dimensional image for the subsequent two-dimensional segmentation.
CN201910120856.5A 2019-02-18 2019-02-18 Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision Active CN109872397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910120856.5A CN109872397B (en) 2019-02-18 2019-02-18 Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision


Publications (2)

Publication Number Publication Date
CN109872397A CN109872397A (en) 2019-06-11
CN109872397B true CN109872397B (en) 2023-04-11

Family

ID=66918832


Country Status (1)

Country Link
CN (1) CN109872397B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101038671A (en) * 2007-04-25 2007-09-19 上海大学 Tracking method of three-dimensional finger motion locus based on stereo vision
CN106841206A (en) * 2016-12-19 2017-06-13 大连理工大学 Untouched online inspection method is cut in heavy parts chemical milling
CN108335350A (en) * 2018-02-06 2018-07-27 聊城大学 The three-dimensional rebuilding method of binocular stereo vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10109055B2 (en) * 2016-11-21 2018-10-23 Seiko Epson Corporation Multiple hypotheses segmentation-guided 3D object detection and pose estimation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Three-dimensional reconstruction method for damaged aero-engine blades based on binocular vision; Li Jiachen et al.; Science and Technology & Innovation; 2018-09-05 (No. 17); pp. 34-37, 39 *
Research on a three-dimensional modeling method for coal yards based on multi-view stereo vision; Dong Jianwei et al.; Journal of Yanshan University; 2016-12-31; pp. 136-141 *


Similar Documents

Publication Publication Date Title
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN109697688B (en) Method and device for image processing
CN106709947B (en) Three-dimensional human body rapid modeling system based on RGBD camera
WO2022088982A1 (en) Three-dimensional scene constructing method, apparatus and system, and storage medium
CN106910242B (en) Method and system for carrying out indoor complete scene three-dimensional reconstruction based on depth camera
CN111243093B (en) Three-dimensional face grid generation method, device, equipment and storage medium
CN110728671B (en) Vision-based dense reconstruction method for texture-free scenes
CN113192179B (en) Three-dimensional reconstruction method based on binocular stereo vision
US9147279B1 (en) Systems and methods for merging textures
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN107170037A (en) Real-time three-dimensional point cloud reconstruction method and system based on multiple cameras
CN112307553B (en) Method for extracting and simplifying three-dimensional road model
Serna et al. Data fusion of objects using techniques such as laser scanning, structured light and photogrammetry for cultural heritage applications
Xu et al. Survey of 3D modeling using depth cameras
JP4058293B2 (en) Generation method of high-precision city model using laser scanner data and aerial photograph image, generation system of high-precision city model, and program for generation of high-precision city model
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN103839286B (en) True orthophoto optimized sampling method with object semantic constraints
CN112862736B (en) Real-time three-dimensional reconstruction and optimization method based on points
CN109064533B (en) 3D roaming method and system
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
Özdemir et al. A multi-purpose benchmark for photogrammetric urban 3D reconstruction in a controlled environment
Furukawa High-fidelity image-based modeling
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant