CN111063021A - Method and device for establishing three-dimensional reconstruction model of space moving target - Google Patents

Method and device for establishing three-dimensional reconstruction model of space moving target

Info

Publication number: CN111063021A (application CN201911146834.2A; granted and published as CN111063021B)
Authority: CN (China)
Prior art keywords: image, space, image sequence, model, dimensional
Legal status: Granted; Active
Inventors: Pan Quan (潘泉), Zhou Kangbo (周康博), Hou Xiaolei (侯晓磊), Liu Yong (刘勇)
Applicant and current assignee: Northwestern Polytechnical University
Other languages: Chinese (zh)

Classifications

    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F 18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 5/70
    • G06T 7/40: Image analysis; analysis of texture
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/757: Matching configurations of points or features

Abstract

The invention discloses a method and a device for establishing a three-dimensional reconstruction model of a space moving target. A space monocular image sequence of space debris or a failed satellite is acquired; a trained generative adversarial network processes the sequence to eliminate the flare and shadow of each space monocular image and generate a first image sequence; the feature matching relation between each image and the other images in the first image sequence is extracted by the scale-invariant feature transform method; a sparse point cloud of the three-dimensional model of the space debris or failed satellite is established according to the feature matching relation; a mesh model is generated from the sparse point cloud and is mesh-filtered and textured in turn to obtain the three-dimensional reconstruction model of the space debris or failed satellite. The method and the device solve the problem of distortion of the three-dimensional reconstruction model when a space moving target is reconstructed from a space monocular image sequence.

Description

Method and device for establishing three-dimensional reconstruction model of space moving target
[ technical field ]
The invention belongs to the technical field of three-dimensional reconstruction of moving targets in outer space, and particularly relates to a method for establishing a three-dimensional reconstruction model of a space moving target.
[ background of the invention ]
In space missions, on-orbit services are often required for satellites or space station equipment, such as docking with a space station, recovering failed satellites, and removing space debris. With the development of aerospace technology around the world, humanity has launched more than 6000 spacecraft to date; only a small portion of them have re-entered the atmosphere and burned up, while most satellites and other artificial bodies remain in orbit as space junk.
The existing approach to removing space junk generally uses a mother-daughter satellite pair to remove the target. When a large satellite is used to remove space debris or a failed satellite directly, however, problems such as fuel supply and orbit transfer arise, wasting a great deal of energy and time. Therefore, the daughter satellite first observes the space debris or failed satellite to obtain images, and a model is reconstructed from the observations to provide prior information for the mother satellite's subsequent removal of the debris or recovery of the failed satellite.
However, because the images obtained by this pre-observation are affected by the target's material and the space environment, the model may be distorted during reconstruction, reducing its accuracy and affecting the debris-removal and satellite-recovery plans.
[ summary of the invention ]
The invention aims to provide a method and a device for establishing a three-dimensional reconstruction model of a space moving target, so as to solve the problem of distortion of the three-dimensional reconstruction model when it is reconstructed from a space monocular image sequence.
The invention adopts the following technical scheme: a method for establishing a three-dimensional reconstruction model of a space moving target comprises the following steps:
acquiring a space monocular image sequence of space debris or a failed satellite;
processing the space monocular image sequence with a trained generative adversarial network to eliminate the flare and shadow of each space monocular image in the sequence and generate a first image sequence;
extracting the feature matching relation between each image and the other images in the first image sequence by the scale-invariant feature transform method;
establishing the sparse point cloud of the three-dimensional model of the space debris or failed satellite according to the feature matching relation;
and generating a mesh model of the three-dimensional model from the sparse point cloud, then mesh-filtering and texturing the mesh model in turn to obtain the three-dimensional reconstruction model of the space debris or failed satellite.
Further, after the three-dimensional reconstruction model of the space debris or failed satellite is obtained, the method further includes:
generating a second image sequence according to the three-dimensional reconstruction model, wherein the images of the second sequence correspond one-to-one to the images of the space monocular image sequence and are equal in number, and each corresponding pair shares the same shooting angle with respect to the space debris or failed satellite;
calculating the defect degree of each image in the second image sequence;
when the defect degree is larger than a first preset threshold, acquiring the space monocular image in the space monocular image sequence that corresponds to the defective image in the second image sequence;
repairing the corresponding space monocular image with the trained generative adversarial network to obtain a repaired space monocular image sequence;
and replacing the space monocular image sequence with the repaired sequence and continuing the procedure.
Further, when the second image sequence is generated according to the three-dimensional reconstruction model, it is generated twice, against a pure black background and against a pure white background respectively.
Further, each image in the second image sequence is generated as follows:
determining a sampling frame for the two-dimensional image;
sampling and scanning the three-dimensional reconstruction model with the sampling frame in sequence to generate a plurality of fragment images;
and recombining the plurality of fragment images according to the sampling-scan order to obtain the sampled image.
Further, calculating the defect degree of each image in the second image sequence comprises:
acquiring the pure black background image and the pure white background image;
converting the pure black background image and the pure white background image into gray images;
generating a vacancy discrimination matrix from the gray value of each pixel of the white-background gray image and the gray value of each pixel of the black-background gray image;
and generating the defect degree of each image according to the vacancy discrimination matrix.
Further, repairing the corresponding space monocular image with the trained generative adversarial network includes:
segmenting the vacancy discrimination matrix according to the plurality of fragment images to obtain a plurality of fragment matrices;
calculating the sum of the elements of each fragment matrix;
marking the image region corresponding to any fragment matrix whose sum is larger than a second preset threshold as a region to be repaired;
and repairing the region to be repaired with the trained generative adversarial network.
The other technical scheme of the invention is as follows: an apparatus for establishing a three-dimensional reconstruction model of a space moving target, comprising:
the acquisition module is used for acquiring a space monocular image sequence of a space debris or a failure satellite;
the training module, used for processing the space monocular image sequence with a trained generative adversarial network to eliminate the flare and shadow of each space monocular image in the sequence and generate a first image sequence;
the matching module, used for extracting the feature matching relation between each image and the other images in the first image sequence by the scale-invariant feature transform method;
the first construction module, used for constructing the sparse point cloud of the three-dimensional model of the space debris or failed satellite according to the feature matching relation;
the first generation module, used for generating a mesh model of the three-dimensional model from the sparse point cloud and mesh-filtering and texturing it in turn to obtain the three-dimensional reconstruction model of the space debris or failed satellite.
Further, still include:
the second generation module, used for generating a second image sequence according to the three-dimensional reconstruction model after the three-dimensional reconstruction model of the space debris or failed satellite is obtained; the images of the second sequence correspond one-to-one to those of the space monocular image sequence and are equal in number, and each corresponding pair shares the same shooting angle with respect to the space debris or failed satellite;
the calculation module is used for calculating the defect degree of each image in the second image sequence;
the judging module is used for acquiring a space monocular image corresponding to a defective image in the second image sequence in the space monocular image sequence when the defect degree is larger than a first preset threshold value;
the repairing module, used for repairing the corresponding space monocular image with the trained generative adversarial network to obtain a repaired space monocular image sequence;
and the replacing module is used for replacing the space monocular image sequence with the repaired space monocular image sequence and continuing to execute the operation.
Further, the second generation module further comprises a pure black background generation submodule and a pure white background generation submodule;
and the pure black background generation submodule and the pure white background generation submodule are used for generating a second image sequence by respectively adopting a pure black background and a pure white background when generating the second image sequence according to the three-dimensional reconstruction model.
The other technical scheme of the invention is as follows: a three-dimensional reconstruction modeling device of a space moving object comprises a memory, a processor and a computer program which is stored in the memory and can run on the processor, wherein the processor executes the computer program to realize the three-dimensional reconstruction modeling method of the space moving object.
The invention has the beneficial effects that: by repairing the space monocular image sequence with the generative adversarial network, the flare and shadow caused by the special space materials and illumination environment can be eliminated; three-dimensional reconstruction of the repaired image sequence then satisfies the task requirements of three-dimensional reconstruction in the special space environment, reduces the distortion rate of the three-dimensional model, and achieves better reconstruction-model accuracy.
[ description of the drawings ]
FIG. 1 is a flow chart of one embodiment of the present application;
FIG. 2 is a flow chart of another embodiment of the present application;
FIG. 3 is a flow chart of a process of generating a first sequence of images in an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating the difference between the extreme points in discrete space and continuous space in the embodiment of the present application;
FIG. 5 is a graph of an image gradient transformation descriptor in an embodiment of the present application;
FIG. 6 is a flow chart of the structure-from-motion method in an embodiment of the present application;
FIG. 7 is a diagram illustrating a reprojection error according to an embodiment of the present application;
fig. 8 is a schematic diagram of the incremental structure-from-motion method in an embodiment of the present application.
[ detailed description of the embodiments ]
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The method for establishing the three-dimensional reconstruction model of the spatial moving object provided by the embodiment of the present application may be applied to various intelligent devices, for example, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, and other terminal devices.
The embodiment of the application provides a method for establishing a three-dimensional reconstruction model of a space moving target. As shown in Fig. 1, a space monocular image sequence of space debris or a failed satellite is acquired; the sequence is processed with a trained generative adversarial network to eliminate the flare and shadow of each space monocular image and generate a first image sequence; the feature matching relation between each image and the other images in the first image sequence is extracted by the scale-invariant feature transform method; the sparse point cloud of the three-dimensional model of the space debris or failed satellite is established according to the feature matching relation; and a mesh model is generated from the sparse point cloud and mesh-filtered and textured in turn to obtain the three-dimensional reconstruction model of the space debris or failed satellite.
In the embodiment of the application, a generative adversarial network (GAN) is used to repair the temporally unordered input images (i.e., the images in the space monocular image sequence), fully eliminating the influence of flare and shadow on the visual image; the GAN output is taken as the image input for three-dimensional reconstruction; the coordinates of the image feature points are extracted with the scale-invariant feature transform (SIFT) method; the three-dimensional information (i.e., the sparse point cloud) of the space debris or failed satellite is computed with the structure-from-motion (SfM) method; a dense mesh model is reconstructed with the multi-view stereo (MVS) method; and the dense mesh model is mesh-filtered and textured in turn to obtain the three-dimensional reconstruction model. In the embodiment of the application, the generative adversarial network is a model obtained by the GAN training method.
By repairing the space monocular image sequence with the generative adversarial network, the flare and shadow caused by the special space materials and illumination environment are eliminated; three-dimensional reconstruction of the repaired image sequence then satisfies the task requirements of three-dimensional reconstruction in the special space environment, reduces the distortion rate of the three-dimensional model, and achieves better reconstruction-model accuracy.
In addition, the images in the space monocular image sequence used in this application can be acquired by a monocular camera, and the sequence may be ordered or unordered. Compared with a lidar or a binocular camera, a monocular camera is small, highly reliable and low-cost, so it is better suited to being carried by a microsatellite, i.e., to the daughter satellite of a mother-daughter pair observing and imaging space debris or a failed satellite.
When this step is carried out, a darkroom is used to simulate the space environment, and sunlight is simulated with a parallel light source for image collection.
The generative adversarial network is trained with an existing occlusion data set: images are first randomly occluded using the occlusion data set, the network is trained to generate the occluded part, the generated part is compared with the original image data of the occluded region, and the error between them is continuously reduced through the loss function to complete the training. The network is only allowed to be used once it achieves good accuracy on the test set, ensuring that flare and shadows in the space monocular images can be removed well.
The training process of the generative adversarial network satisfies

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_r}[\log D(x)] + \mathbb{E}_{z}[\log(1 - D(G(z)))],$$

wherein G(z) represents the generative model, D(x) represents the discriminative model, $p_r$ represents the distribution of real images, and $p_g$ represents the distribution of the generated images G(z). The generative model G(z) is used to generate an image, and the discriminative model D(x) gives the probability that the input data x comes from a real image. Training of the discriminative model is supervised binary classification; training of the generative model is carried out with the generator and discriminator in series, and the results converge through alternating training of the two. Note that the parameters of the discriminative model must not be changed while the generator is trained in series. G is iteratively updated to maximize the probability that the discriminator classifies incorrectly, and D is iteratively updated to maximize the probability that it classifies correctly.
In the initial stage of training, this objective cannot provide enough gradient to train G: because the images produced by the generator G are still far from the real ones, the discriminator D can distinguish real and fake samples accurately, so log(1 - D(G(z))) saturates and there is not enough gradient to minimize it. The generator should therefore be trained to maximize log D(G(z)) instead; this objective gives stronger gradient information in the early training phase while leading G and D to the same fixed point.
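As a minimal illustration of this alternating scheme (an assumed PyTorch sketch with toy networks, not the patent's actual architecture), the loop below does not step the discriminator's optimizer during the generator update and uses the non-saturating generator loss, maximizing log D(G(z)), described above:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator (assumed shapes).
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784) flattened images
    batch = real.size(0)
    fake = G(torch.randn(batch, 64))

    # Discriminator step: maximize log D(x) + log(1 - D(G(z))).
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: non-saturating loss, maximize log D(G(z)).
    # The discriminator's optimizer is not stepped here, which matches
    # "the discriminator's parameters are not changed during series training".
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```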
The generation network G designed by the invention consists of two parts. The first part is a line generation network $G_l$, mainly responsible for generating image lines from the original image and filling in the lines of the occluded part; the other part is an image generation network $G_p$, mainly responsible for filling, repairing and coloring the lines produced by the line generation network. Let X denote the unoccluded image data and N denote the occlusion mask drawn from the occlusion data set, so that the occluded image data can be written

$$\tilde{X} = X \odot (1 - N).$$

With $Y_t$ the line representation of the image, the lines generated by the line generation network may be expressed as

$$\hat{Y} = G_l(\tilde{X}, \tilde{Y}_t, N),$$

wherein

$$\tilde{Y}_t = Y_t \odot (1 - N),$$

and ⊙ is the Hadamard product. In the image generation network, again taking the unoccluded line data as input, the image generated by the image generation network can be represented as

$$\hat{X} = G_p(\tilde{X}, \hat{Y}, N).$$
the arbiter uses a markov arbiter as a 70 × 70 PatchGAN network. In a traditional discriminator, a general generation countermeasure network can only evaluate whether the whole picture is true or false, obviously, the evaluation will affect the discrimination of some high-resolution, high-definition and large-size pictures, the PatchGAN will give a matrix discrimination of 70 × 70 for each picture, each element of the matrix represents true or false of a receptive field, the precision of the evaluation of the whole picture is improved, the picture repairing error is reduced, and meanwhile, a more accurate result can be provided for the input of the three-dimensional reconstructed picture.
The loss function of the discriminator is

$$\mathcal{L}(G, D) = \mathbb{E}_{x, y}[\log D(x, y)] + \mathbb{E}_{x, z}[\log(1 - D(x, G(x, z)))],$$

where x is the observed data, y is the output data, and z is a random noise vector. Fig. 3 is a schematic diagram of the image restoration process in an embodiment of the present application.
Conventional GAN-based image restoration uses only one GAN; the method of this embodiment adopts a contour-generation GAN plus a pix2pix network (a GAN that generates images from the contours produced by the first GAN), and by combining the two GAN networks the generated images carry better shape-contour information of the observed target (failed satellites or debris).
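For concreteness, a PatchGAN-style discriminator of the kind referenced above can be sketched as follows (an assumed minimal PyTorch version; the layer counts and channel widths are illustrative, not the patent's exact network). Its output is a grid of logits, one per receptive field, rather than a single scalar:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: the output is a grid of real/fake
    logits, each element covering one receptive field of the input image."""
    def __init__(self, in_ch=3):
        super().__init__()
        def block(cin, cout, stride):
            return [nn.Conv2d(cin, cout, 4, stride, 1),
                    nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            *block(64, 128, 2), *block(128, 256, 2), *block(256, 512, 1),
            nn.Conv2d(512, 1, 4, 1, 1))   # one logit per patch

    def forward(self, x):
        return self.net(x)

d = PatchDiscriminator()
logits = d(torch.randn(1, 3, 256, 256))
print(logits.shape)   # torch.Size([1, 1, 30, 30]): a 30 x 30 grid of patch logits
```

With this stack of strided 4 x 4 convolutions, each output element sees a 70 x 70 receptive field of the input, which is the configuration the text refers to.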
In this embodiment, the pre-trained generative adversarial network is one trained on a large number of images of artificial celestial bodies such as satellites and debris, in order to accelerate the training process and make the network converge quickly.
The first image sequence produced by the generative adversarial network is taken as the input image sequence of the three-dimensional reconstruction part; the image feature points of each image in the sequence are extracted with the scale-invariant feature transform method, and each image of the input sequence is matched against the feature points of the other images.
The image scale space is obtained with a Gaussian blur filter. From the N-dimensional Gaussian function

$$G(r) = \frac{1}{(2\pi\sigma^2)^{N/2}} \, e^{-r^2 / (2\sigma^2)}$$

it is known that the larger the standard deviation σ of the normal distribution, the more blurred the image, where r is the blur radius. When the size of the two-dimensional template is m × n, the Gaussian value of element (x, y) on the template is

$$G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\left[(x - m/2)^2 + (y - n/2)^2\right] / (2\sigma^2)},$$

from which the two-dimensional Gaussian blur of the image can be obtained.
To prevent the black-edge effect caused by excessive loss of edge pixels, the image should be blurred with a separated Gaussian; this also reduces the Gaussian kernel size and the amount of convolution computation. Owing to the separability of the Gaussian blur, the effect of the two-dimensional matrix transformation can be obtained equivalently by applying one-dimensional Gaussian transformations in the horizontal and then the vertical direction, so the black edges produced by a single two-dimensional Gaussian matrix operation can be avoided by two one-dimensional Gaussian convolutions.
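The separability argument can be checked directly; the sketch below (assuming OpenCV is available, not code from the patent) compares a 2-D Gaussian convolution with the equivalent pair of 1-D passes:

```python
import cv2
import numpy as np

img = np.random.rand(128, 128).astype(np.float32)   # stand-in image
sigma, ksize = 2.0, 13

# Direct 2-D Gaussian blur.
blur_2d = cv2.GaussianBlur(img, (ksize, ksize), sigma)

# Separated version: one 1-D kernel applied horizontally, then vertically.
k = cv2.getGaussianKernel(ksize, sigma)              # (ksize, 1) column vector
blur_sep = cv2.sepFilter2D(img, -1, k, k)

# Difference is at floating-point noise level: the two are equivalent.
print(np.max(np.abs(blur_2d - blur_sep)))
```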
The de-flared, de-shadowed image is repeatedly downsampled, and Gaussian filtering is applied: each layer image is blurred with Gaussians of different parameters to obtain the Gaussian pyramid. The number of layers of the pyramid is determined by the dimensions of the de-flared image and of the top image, the layer count being $c = \log_2[\min(m, n)] - t$, where $t \in [0, \log_2[\min(m, n)]]$, m and n are the dimensions of the original image, and t is the logarithm of the minimum dimension of the top image. Each layer of the Gaussian pyramid consists of several images obtained by applying Gaussian blurs of different parameters to the de-flared, de-shadowed image; these images are called a group (octave).
Comparing the maxima and minima of the scale-normalized Laplacian-of-Gaussian or difference-of-Gaussians function with other feature-extraction functions shows that they generate the most stable image features. Differences are taken between adjacent layers of the Gaussian pyramid to obtain the difference-of-Gaussians pyramid; each sample point of the difference-of-Gaussians space is compared with its neighboring points and the points of the adjacent layers to obtain, preliminarily, the discrete-space extreme points of the difference-of-Gaussians function. A schematic of the discrete-space and continuous-space extreme points is shown in Fig. 4. Sub-pixel interpolation is performed at the discrete-space points to obtain the continuous-space extreme points. To improve the stability of the key points, curve fitting of the difference-of-Gaussians function in scale space is needed; the fitting function is

$$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X,$$

where $X = (x, y, \sigma)^T$, σ is the scale-space coordinate, and (x, y) is the image pixel position. Differentiating the fitting function and setting the derivative to zero, the offset of the extreme point is

$$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X},$$

and the corresponding value of the extreme-point equation is

$$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}.$$

When the offset $\hat{X}$ from the interpolation center is greater than 0.5 in any of x, y, σ, the key-point location should be changed and interpolation repeated at the new location until convergence; when the iteration limit or the image boundary is exceeded, interpolation terminates and the point is culled.
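A minimal numerical version of this refinement step (a sketch under the assumption of a DoG stack `D[s, y, x]` with finite-difference derivatives; not code from the patent) solves the 3 × 3 system above and applies the 0.5-offset rule:

```python
import numpy as np

def refine_extremum(D, s, y, x):
    """Sub-pixel refinement of a DoG extremum at integer (s, y, x).
    D: 3-D array of DoG values indexed [scale, row, col]."""
    # First derivatives by central differences: dD/dX.
    g = np.array([D[s+1, y, x] - D[s-1, y, x],
                  D[s, y+1, x] - D[s, y-1, x],
                  D[s, y, x+1] - D[s, y, x-1]]) / 2.0
    # Hessian d2D/dX2 by finite differences.
    H = np.empty((3, 3))
    H[0, 0] = D[s+1, y, x] - 2*D[s, y, x] + D[s-1, y, x]
    H[1, 1] = D[s, y+1, x] - 2*D[s, y, x] + D[s, y-1, x]
    H[2, 2] = D[s, y, x+1] - 2*D[s, y, x] + D[s, y, x-1]
    H[0, 1] = H[1, 0] = (D[s+1, y+1, x] - D[s+1, y-1, x]
                         - D[s-1, y+1, x] + D[s-1, y-1, x]) / 4.0
    H[0, 2] = H[2, 0] = (D[s+1, y, x+1] - D[s+1, y, x-1]
                         - D[s-1, y, x+1] + D[s-1, y, x-1]) / 4.0
    H[1, 2] = H[2, 1] = (D[s, y+1, x+1] - D[s, y+1, x-1]
                         - D[s, y-1, x+1] + D[s, y-1, x-1]) / 4.0

    offset = -np.linalg.solve(H, g)            # X_hat = -(d2D/dX2)^-1 dD/dX
    value = D[s, y, x] + 0.5 * g @ offset      # D(X_hat)
    converged = np.all(np.abs(offset) <= 0.5)  # else re-interpolate nearby
    return offset, value, converged
```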
In addition, the difference-of-Gaussians operator produces a strong edge response at image boundaries: a pixel lying across an edge has a large principal curvature, while the principal curvature at the adjacent pixels perpendicular to the edge is small, so pixels with an edge response should be eliminated. The principal curvatures can be examined through the 2 × 2 Hessian matrix

$$H = \begin{pmatrix} T_{xx} & T_{xy} \\ T_{xy} & T_{yy} \end{pmatrix},$$

whose eigenvalues α and β are proportional to the principal curvatures in the x and y directions. The trace of the matrix is $\mathrm{tr}(H) = T_{xx} + T_{yy} = \alpha + \beta$, and its determinant is $\det(H) = T_{xx} T_{yy} - (T_{xy})^2 = \alpha\beta$. Setting α = rβ with α the maximum eigenvalue, β the minimum eigenvalue, and r greater than 1,

$$\frac{\mathrm{tr}(H)^2}{\det(H)} = \frac{(\alpha + \beta)^2}{\alpha\beta} = \frac{(r\beta + \beta)^2}{r\beta^2} = \frac{(r + 1)^2}{r}.$$

The values of α and β represent curvature magnitudes along two directions: the larger their ratio, the stronger the response in one direction relative to the other, which matches an edge feature. Requiring the ratio to stay below the threshold r therefore checks whether the principal curvatures are acceptable, namely:

$$\frac{\mathrm{tr}(H)^2}{\det(H)} < \frac{(r + 1)^2}{r}.$$
The direction information of the key points is then obtained. For a key point obtained above, the modulus of the pixel gradient distribution within its 3σ neighborhood is

$$m(x, y) = \sqrt{(L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2},$$

and the gradient direction is

$$\theta(x, y) = \tan^{-1}\left(\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}\right).$$
A descriptor is then established to describe the feature points, with the steps shown in Fig. 5. The descriptor is chosen so as to be insensitive to external changes such as illumination and viewing angle, and it has strong robustness. It contains the key-point information together with the information of the other pixels that contribute to the key point; moreover, each descriptor is highly distinctive, which effectively raises the probability of correct feature matching. To guarantee rotation invariance, the coordinate axes are first aligned with the main direction of the key point. Then, centered on the key point, all pixels in a 16 × 16 window are selected, and the whole window is divided into 16 sub-blocks of 4 × 4 pixels. Finally, the modulus and direction of every pixel are computed, the gradient values within each sub-region are assigned to 8 direction bins, and the weight of each sub-block is computed; each feature thus comprises 16 × 8 = 128 dimensions, and this 128-dimensional feature vector serves as the descriptor. Through these steps the scale-invariant feature points of the image and their description information are obtained.
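In practice the SIFT detection/description pipeline described above is available off the shelf; the following sketch uses OpenCV (an assumption of this write-up, the patent does not name a library) to extract 128-dimensional descriptors from two views and match them with a ratio test:

```python
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # des1: (n1, 128) float32
kp2, des2 = sift.detectAndCompute(img2, None)

# k-nearest-neighbour matching with a ratio test to reject ambiguous pairs.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(len(good), "putative matches")
```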
The matching relations obtained from the scale-invariant feature transform method are taken as the three-dimensional reconstruction input, and sparse reconstruction with the structure-from-motion method yields the sparse point cloud of the three-dimensional model.
The structure-from-motion method can be regarded as the problem of estimating camera parameters and three-dimensional point positions, and its steps can be divided into:
(1) matching the detected feature points of every pair of pictures, keeping only the feature matches that satisfy the geometric constraints;
(2) recovering the internal and external parameters of the camera from the matched features;
(3) recovering the depth information of the three-dimensional points by triangulation;
(4) solving with a nonlinear least-squares optimization algorithm, such as bundle adjustment.
As shown in Fig. 6, the description of a feature point is a 128-dimensional vector containing the gradient information of the pixels in the key point's neighborhood, so feature matching can be regarded as finding, between two pictures, the key point closest to the one currently examined and deciding whether the two match. Considering the large amount of computation caused by the high-dimensional feature points extracted by the scale-invariant feature transform, the nearest-neighbor search algorithm adopted in the embodiment of the present application is the BBF (best-bin-first) algorithm based on a k-d tree (k-dimensional tree).
A k-d tree can be regarded as a binary search tree in k-dimensional space. In a one-dimensional binary search tree, the value of any node in the left subtree of the root is smaller than the root's value, the value of any node in the right subtree is larger, and every left and right subtree is itself a binary search tree. For a k-dimensional space, a reference dimension $d_0$ is selected and the k-dimensional data are compared by their values in dimension $d_0$; that is, a hyperplane perpendicular to $d_0$ divides the k-dimensional space into two parts, where the data on one side of the hyperplane have values in dimension $d_0$ larger than the split value and the data on the other side have smaller values. The space is recursively split along the direction of maximum variance until each leaf node holds one datum of the k-dimensional space; the k-d tree is thus built, with its data stored only at the leaf nodes, while the root and intermediate nodes store the partition information.
The nearest-neighbor search with the BBF algorithm proceeds as follows:
(1) For the data set S and the query datum U, starting at the root node, compare U's value in the splitting dimension $d_0$ with the split value $s_0$ of S: when $U(d_0) < s_0$, enter the left subtree, and when $U(d_0) > s_0$, enter the right subtree. When a leaf node is reached, compute the distance between the leaf's data and U and store the minimum; mark that data point as the current nearest point $P_n$ and record the distance as the current minimum distance $D_n$.
(2) Backtrack upwards and check whether any unvisited leaf node could lie at a distance shorter than the current minimum $D_n$: when a leaf node under the same parent as the current path holds data whose distance to U is less than $D_n$, that branch is considered to contain data closer to U, and $P_n$ and $D_n$ are updated. If no closer data point is found, backtracking continues until a closer point is found or the set maximum number of backtracks is reached, as in the sketch below.
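A compact stand-in for this search (assuming SciPy; the patent does not prescribe a library) builds a k-d tree over one image's descriptors and queries the two nearest neighbours of the other image's descriptors, which is exactly what the ratio test needs:

```python
import numpy as np
from scipy.spatial import cKDTree

des1 = np.random.rand(500, 128).astype(np.float32)   # stand-in SIFT descriptors
des2 = np.random.rand(480, 128).astype(np.float32)

tree = cKDTree(des1)
dist, idx = tree.query(des2, k=2)       # two nearest neighbours per descriptor

ratio = dist[:, 0] / dist[:, 1]         # ambiguity measure for each match
matches = [(i, int(idx[i, 0])) for i in range(len(des2)) if ratio[i] < 0.75]
print(len(matches), "accepted matches")
```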
When the number of feature matches obtained between two pictures with the k-d-tree-based BBF algorithm is at least 16, the pair can serve as initially selected images for the structure-from-motion method. However, because some matched feature points may not satisfy the physical conditions of the actual scene, epipolar geometry should be computed on the feature matches. The F matrix relates the pixel coordinates of the two pictures, and each matched pair of pixel coordinates x, x' should satisfy

$$x'^T F x = 0,$$

wherein the fundamental matrix F contains the camera parameter information. The solved F matrix contains a large amount of noise, so it must be filtered with the random sample consensus (RANSAC) algorithm: in each iteration the fundamental matrix is solved with the 8-point method, and matches that do not satisfy it are removed to obtain the final image matches. Let the matched feature of the i-th picture be $f_i$; the corresponding feature points $\{f_1, f_2, f_3, \dots, f_n\}$ then form a track, and the other qualifying tracks can be found in the same way. Each track contains the same feature on the corresponding images, typically key nodes or edge contours of the image.
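This filtering step maps directly onto a standard routine; the sketch below (again assuming OpenCV, and continuing from the matching sketch above, so `good`, `kp1`, `kp2` are the ratio-test matches and key points) estimates F with RANSAC over the 8-point solver and keeps only inlier matches:

```python
import numpy as np
import cv2

# Matched pixel coordinates from the SIFT matching step.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=1.0, confidence=0.99)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(len(inliers), "matches satisfy x'^T F x = 0 within the threshold")
```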
The internal camera parameters are described by the focal length f and two radial distortion parameters $k_1$, $k_2$; the external camera parameters are described by a 3 × 3 rotation matrix R and a 1 × 3 translation vector. Take an arbitrary point $X_j$ in the three-dimensional space of the actual scene; through the projection equation it can be projected onto the two-dimensional image, but the point so computed deviates from the true point on the image, the error being the distance between the projected point and the true point. For the m tracks generated from the pictures of n viewing angles, the objective function over the projection errors is

$$g(C, X) = \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij} \, \| q_{ij} - P(C_i, X_j) \|^2,$$

wherein the observation weight $w_{ij}$ can be expressed as

$$w_{ij} = \begin{cases} 1, & \text{track } j \text{ is visible in camera } i, \\ 0, & \text{otherwise}, \end{cases}$$

and $\| q_{ij} - P(C_i, X_j) \|^2$ is the reprojection error of track j observed in camera i. The structure-from-motion method seeks appropriate camera and scene parameters and optimizes the function g(C, X) with the bundle adjustment method. The reprojection error is illustrated in Fig. 7.
When the initial image pair is selected, it must be ensured that the images have enough matching points and camera centers far enough apart; otherwise the bundle adjustment can fall into a local optimum and fail to reach the global optimum. The external parameters of the initial matched pair are then estimated with the 5-point method, and triangulating the tracks provides the initial three-dimensional points. After the two pictures are initialized, a rough reconstruction is carried out with sparse bundle adjustment; by continually adding cameras and three-dimensional points and re-running bundle adjustment until no more three-dimensional points can be added, the sparse three-dimensional point-cloud reconstruction is obtained.
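The triangulation that seeds these three-dimensional points can be sketched as follows (assumed OpenCV usage with illustrative intrinsics and relative pose, not the patent's exact pipeline):

```python
import numpy as np
import cv2

K = np.array([[800., 0., 320.],      # assumed intrinsics: focal length, center
              [0., 800., 240.],
              [0., 0., 1.]])

# Projection matrices of the initial pair: camera 0 at the origin, camera 1
# with the relative pose (R, t), e.g. as recovered by the 5-point method.
R, t = np.eye(3), np.array([[1.0], [0.0], [0.0]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([R, t])

# Matched pixel coordinates as 2 x N arrays (row 0: x values, row 1: y values).
pts1 = np.array([[300., 250.],
                 [340., 260.]])
pts2 = np.array([[280., 245.],
                 [320., 255.]])

X_h = cv2.triangulatePoints(P0, P1, pts1, pts2)   # 4 x N homogeneous points
X = (X_h[:3] / X_h[3]).T                          # N x 3 Euclidean points
print(X)
```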
The sparse point cloud of the three-dimensional model obtained by the structure-from-motion method is then used as input, and dense reconstruction with the multi-view stereo method yields the mesh model of the three-dimensional model.
Because the pipeline above is a three-dimensional reconstruction algorithm obtained by matching feature points, and for each picture the proportion of feature points among all pixels is globally very low, dense three-dimensional reconstruction cannot be achieved from feature points alone. Therefore, generating a dense multi-view reconstruction from the extracted sparse three-dimensional point cloud requires finding, for any point, its projection on all valid images in which that point is visible.
The present application adopts a multi-view stereo matching algorithm based on depth-map fusion, characterized by redundant recovered point clouds, high reconstruction accuracy and good robustness, which makes it very suitable for three-dimensional reconstruction in the special space environment. The specific steps are as follows:
(1) Reconstruct a depth map for each input view. The depth of the image is computed with a very simple, efficient yet robust window matching between the reference view and a small number of neighboring views, and a confidence value is estimated for each pixel; only the points with higher confidence are used in the model-stitching process.
Assume the input is a set of views $V = \{V_0, \dots, V_{n-1}\}$ of the object, together with camera parameters and an approximate bounding box or volume containing the object. For each reference view R ∈ V, a set C of k neighboring views is selected and correlated with R by robust window matching. For each pixel p, the algorithm steps along the viewing ray within the bounding volume of the object. For each depth value d, the resulting three-dimensional position is re-projected into all views in C, and the normalized cross-correlation $\mathrm{NCC}(R, C_j, d)$ is computed between an m × m window centered on p and the corresponding window centered on the projection into each view $C_j$, with sub-pixel accuracy.

If the NCC value exceeds the threshold in at least two views of the set C, the depth d is considered valid, and the set of all views whose NCC exceeds the threshold is written $C_v(d)$. For a valid depth d, the correlation value corr(d) is computed as the mean NCC over all views in $C_v(d)$:

$$\mathrm{corr}(d) = \frac{1}{\|C_v(d)\|} \sum_{C_j \in C_v(d)} \mathrm{NCC}(R, C_j, d),$$
where $\|C_v(d)\|$ is the cardinality of $C_v(d)$. For each pixel p in R, the chosen depth is the value of d that maximizes corr(d); otherwise there is no valid value of d. The confidence of each recovered depth value is expressed as

$$\mathrm{conf}(p) = \frac{1}{\|C\|} \sum_{C_j \in C_v(d)} \frac{\mathrm{NCC}(R, C_j, d) - \phi}{1 - \phi},$$

where φ is a threshold that can be set according to the actual use conditions, generally about 0.6.
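The window-matching core of step (1) reduces to computing NCC between two image patches; a small sketch (illustrative only, with stand-in windows rather than real reprojections) shows the quantity being thresholded and why it tolerates lighting changes:

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation between two equal-size windows."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + eps))

# Window around pixel p in the reference view, and the window around its
# reprojection at candidate depth d in a neighbouring view (stand-in values).
ref_win = np.random.rand(7, 7)
nbr_win = ref_win * 1.3 + 0.05          # same pattern, different gain/offset

score = ncc(ref_win, nbr_win)           # ~1.0: NCC is invariant to gain/offset
print(score > 0.6)                      # compare against the threshold phi
```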
(2) Stitch the results into the reconstructed model. In this step the set of confidence-weighted depth maps is merged with a volumetric method that outputs a triangular mesh with a confidence for each vertex. After the incomplete set of depth maps is obtained, they can be merged into a single surface-mesh representation with a volumetric method that converts each depth map into a weighted signed-distance volume, sums the volumes, and extracts the zero level set as the surface. After the multi-view dense reconstruction, the reconstruction model becomes complete only once the result is filtered and textured. It should be noted that, given the task requirements of this three-dimensional reconstruction, pose estimation and capture of the failed satellite or debris place high demands on the shape of the observed object and none on its visual appearance, so in principle only mesh filtering, not texture mapping, would be needed; however, since pictures are input again during the resampling process, the texturing step cannot be omitted.
The mesh model is then mesh-filtered and textured in turn to obtain the three-dimensional reconstruction model.
After the complete dense mesh model is obtained by the multi-view reconstruction method, a large amount of noise still exists in it, so the mesh must be smoothed with a mesh denoising algorithm. The embodiment of the application adopts a bilateral filtering algorithm for denoising and smoothing.
For an image I(u), the bilateral filter is defined as

$$\hat{I}(u) = \frac{\sum_{p \in N(u)} W_c(\|p - u\|)\, W_s(|I(u) - I(p)|)\, I(p)}{\sum_{p \in N(u)} W_c(\|p - u\|)\, W_s(|I(u) - I(p)|)},$$

where N(u) is the neighborhood of the vertex u = (x, y), the spatial-domain kernel is a standard Gaussian filter with standard deviation $\sigma_c$,

$$W_c(x) = e^{-x^2 / (2\sigma_c^2)},$$

and the range-domain kernel is a standard Gaussian filter with standard deviation $\sigma_s$,

$$W_s(x) = e^{-x^2 / (2\sigma_s^2)}.$$
One filtering iteration is completed after updating the values of all the vertices in the mesh.
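A direct transcription of these formulas for a scalar field over mesh vertices (a sketch under the assumption that each vertex stores a value, a position, and a neighbor list taken from the mesh connectivity):

```python
import numpy as np

def bilateral_iteration(values, positions, neighbors, sigma_c=1.0, sigma_s=0.1):
    """One bilateral-filter pass: values[i] is the signal at vertex i,
    positions[i] its coordinates, neighbors[i] the adjacent vertex indices."""
    out = values.copy()
    for i, nbrs in enumerate(neighbors):
        w_sum, acc = 0.0, 0.0
        for j in nbrs:
            wc = np.exp(-np.sum((positions[j] - positions[i])**2)
                        / (2 * sigma_c**2))                 # spatial kernel W_c
            ws = np.exp(-(values[j] - values[i])**2
                        / (2 * sigma_s**2))                 # range kernel W_s
            acc += wc * ws * values[j]
            w_sum += wc * ws
        if w_sum > 0:
            out[i] = acc / w_sum
    return out

# Tiny usage example: a 3-vertex chain with one noisy middle value.
vals = np.array([0.0, 1.0, 0.0])
pos = np.array([[0.0], [1.0], [2.0]])
print(bilateral_iteration(vals, pos, [[1], [0, 2], [1]]))
```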
The texturing process first determines the visibility of the faces in the input images; a pairwise Markov random field energy is used to compute a labeling l that assigns a view $l_i$ to each mesh face $F_i$ for texturing:

$$E(l) = \sum_{F_i} E_{\mathrm{data}}(F_i, l_i) + \sum_{(F_i, F_j)} E_{\mathrm{smooth}}(F_i, F_j, l_i, l_j),$$

wherein the data term $E_{\mathrm{data}}$ prefers texturing each face from a good input image, and the smoothness term $E_{\mathrm{smooth}}$ minimizes the visibility of the seams where textures from different pictures are stitched. The minimization of E(l) is carried out with graph cuts and the alpha-expansion algorithm.
After the labeling l is obtained by minimizing the Markov random field energy, the colors of the texture patches must also be adjusted. First, each mesh vertex should belong to only one texture patch, so each vertex on a seam is duplicated into two vertices, namely the vertex $v_{\text{left}}$ of the texture patch on the left of the seam and the vertex $v_{\text{right}}$ of the patch on the right; before adjustment each vertex has a single color $f_v$. Next, an additive correction value $g_v$ is computed for each vertex by minimizing

$$\sum_{v \text{ on seam}} \left( f_{v_{\text{left}}} + g_{v_{\text{left}}} - f_{v_{\text{right}}} - g_{v_{\text{right}}} \right)^2 + \lambda \sum_{v_i, v_j \text{ adjacent, same patch}} \left( g_{v_i} - g_{v_j} \right)^2,$$

wherein v ranges over $v_{\text{left}}$ and $v_{\text{right}}$, $v_i$ and $v_j$ are adjacent vertices in the same texture patch, and λ is a constant parameter. The first term ensures that the adjusted colors $f_{v_{\text{left}}} + g_{v_{\text{left}}}$ on the left of a seam and $f_{v_{\text{right}}} + g_{v_{\text{right}}}$ on the right are as similar as possible, and the second term minimizes the adjustment differences between adjacent vertices within the same texture patch, which favors gradual adjustment inside a patch. Once the optimal $g_v$ is found for all vertices, the correction value of each texel is interpolated through barycentric coordinates from the $g_v$ of the vertices surrounding the texel. Finally the correction values are added to the input image, the texture patches are packed into a texture atlas, and the texture coordinates are attached to the vertices.
In another embodiment of the present application, after the three-dimensional reconstruction model of the space debris or failed satellite is obtained, and considering the complexity of the space environment and the uncertainty of the observed object's motion, such as incomplete model reconstruction caused by long distance, partial occlusion, local flare or excessive shadow interference, this embodiment adds a prediction-feedback repair step according to the actual situation.
For three-dimensional reconstruction by general methods, increasing the accuracy of the observation requires increasing the number of input pictures; since this embodiment is based on image restoration technology, however, the restoration algorithm can increase the number of usable input pictures without taking more photographs. In addition, considering the complexity of the actual situation, for a satellite or space debris body spinning about a single axis in space, more useful picture data cannot be acquired within a given time, so using this image-processing method to predict and patch invisible regions or unfinished parts of the reconstruction is all the more practical. The specific scheme is as follows:
A second image sequence is generated according to the three-dimensional reconstruction model; the images of the second sequence correspond one-to-one to the images of the space monocular image sequence and are equal in number, and each corresponding pair shares the same shooting angle with respect to the space debris or failed satellite.
When the second image sequence is generated according to the three-dimensional reconstruction model, it is generated twice, against a pure black background and against a pure white background respectively. Each image in the second sequence is generated as follows: a sampling frame for the two-dimensional image is determined; the three-dimensional reconstruction model is sampled and scanned with the sampling frame in sequence to generate a plurality of fragment images; and the fragment images are recombined in the sampling-scan order to obtain the sampled image.
The defect degree of each image in the second image sequence is then calculated, specifically: acquiring the pure black background image and the pure white background image; converting both into gray images; generating a vacancy discrimination matrix from the gray value of each pixel of the white-background gray image and the gray value of each pixel of the black-background gray image; and generating the defect degree of each image according to the vacancy discrimination matrix.
When the defect degree is larger than the first preset threshold, the space monocular image corresponding to the defective image of the second sequence is retrieved from the space monocular image sequence and repaired with the trained generative adversarial network, yielding a repaired space monocular image sequence that replaces the original one, after which execution continues. The replacement process selects the defective monocular image in the original space monocular image sequence, repairs the designated part of that image with the generative adversarial network, and inserts the repaired image into the sequence; the defective space monocular image is deleted from the sequence at that point.
The repairing process specifically comprises: segmenting the vacancy discrimination matrix according to the plurality of fragment images to obtain a plurality of fragment matrices; calculating the sum of the elements of each fragment matrix; marking the image region corresponding to any fragment matrix whose sum is larger than the second preset threshold as a region to be repaired; and repairing the region to be repaired with the trained generative adversarial network.
According to the above method, as shown in Fig. 2, the secondary reconstruction process of this embodiment is specifically described as follows:
(1) Segment the reconstructed model and re-acquire two-dimensional pictures from the three-dimensional model, collecting the images twice, once against a pure black background and once against a pure white background.
(2) Select a sampling kernel of suitable size and rescan and sample the picture from the previous step in a convolution-like manner.
(3) Compute the vacancy or fracture area of the image region covered by the sampling kernel, and when the fracture and defect area in the region is larger than a certain threshold, repair the region with the generative adversarial network.
(4) Rotate by a specific angle and repeat the collecting and repairing steps.
(5) Feed the repaired images back into the sampled picture set and reconstruct the image set with the incremental structure-from-motion method; Fig. 8 is a schematic diagram of the incremental structure-from-motion method.
The purpose of segmenting the model in step (1) is to prevent points on other surfaces from interfering with the vacancies on the sampled surface when the two-dimensional picture is generated: when an incomplete three-dimensional surface is re-imaged, a vacancy in front would otherwise be filled in by the complete model behind it, interfering with the acquisition in step (3). The size of the sampling kernel in step (2) is adapted to the picture, the threshold and the task requirements.
The choice of threshold is tied to the integrity evaluation of the image reconstruction, so the threshold is the key of the whole feedback-restoration link; once the threshold γ is determined according to the task requirements, whether an image needs repair can be judged. Let the size of the resampled picture be a × b. First the sampled RGB picture is converted into a gray image by

$$I(x, y) = 0.299\, I_R(x, y) + 0.587\, I_G(x, y) + 0.114\, I_B(x, y),$$

where I(x, y) is the gray value of a pixel in the generated gray image and $I_R(x, y)$, $I_G(x, y)$ and $I_B(x, y)$ are the R, G, B values of the original RGB image. Because the two acquisitions use a pure white and a pure black background respectively, the gray values of the two images jump at the vacant parts: with white the gray value there is 255, and with black it is 0. Subtracting the gray values of the black-background resampled image from those of the white-background resampled image gives the vacancy discrimination matrix

$$\tilde{P}(x, y) = I_{\text{white}}(x, y) - I_{\text{black}}(x, y),$$
which has the same size as the resampled picture, with elements 0 and 255; the parts with value 255 are the vacant parts of the three-dimensional model. For convenience of calculation the matrix $\tilde{P}$ is normalized:

$$P = \tilde{P} / 255,$$

giving the normalized vacancy discrimination matrix P.
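The construction of P can be sketched in a few lines (assumed NumPy/OpenCV usage; the file names and the grayscale conversion call are illustrative, not from the patent):

```python
import cv2
import numpy as np

# Two renders of the same view: pure white background, then pure black.
white = cv2.imread("white_bg.png")          # hypothetical file names
black = cv2.imread("black_bg.png")

gray_w = cv2.cvtColor(white, cv2.COLOR_BGR2GRAY).astype(np.int32)
gray_b = cv2.cvtColor(black, cv2.COLOR_BGR2GRAY).astype(np.int32)

# Vacancy discrimination matrix: ~255 where the background shows through
# (a hole in the model), ~0 where the model surface covers the pixel.
P_tilde = np.clip(gray_w - gray_b, 0, 255)
P = P_tilde / 255.0                          # normalized vacancy matrix
print(P.sum(), "total normalized vacancy")
```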
Take the sampling kernel size as i × i and denote the kernel $C_k$. Denote by $P_k$ the i × i block of the discrimination matrix covered by the sampling kernel in image I. The image-repair discriminant can then be written as

$$\sum_{k} \operatorname{sum}(P_k \odot C_k) > \gamma,$$

wherein ⊙ is the Hadamard product, $\operatorname{sum}(P_k \odot C_k)$ denotes the sum of the elements of the result matrix $P_k \odot C_k$, and the left-hand side is the sum over the whole collected picture of the normalized values needing restoration; when it is larger than the threshold γ, the picture needs to be repaired. In addition, if the kernel size i is set to the pixel size of the whole picture, the picture can be judged directly by

$$\operatorname{sum}(P) > \gamma.$$
If the picture needs repair, the specific positions to repair must be determined, which can be done through the sampling-kernel scan. The flow is roughly the same as the whole-image judgment: the sampling kernel scans the image and computes, for the region under each kernel,

$$s_k = \operatorname{sum}(P_k \odot C_k),$$

and a region determination threshold $\gamma_k$ is set for the judgment region of each kernel. When

$$s_k > \gamma_k,$$

the position of the judgment region is recorded and stored as a region $t_j$, where j = 1, 2, ..., n is the storage-region index. After the scan of the whole picture finishes, the regions of the recorded set $T = \{t_1, \dots, t_n\}$, $t_j \in T$, are repaired one by one.
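Putting the two discriminants together, the scan can be written as a short sketch (the kernel size, threshold and stride are assumptions of this illustration, and $C_k$ is taken here as an all-ones kernel):

```python
import numpy as np

def regions_to_repair(P, i=32, gamma_k=50.0):
    """Scan the normalized vacancy matrix P with an i x i kernel and return
    the top-left corners of regions whose vacancy sum exceeds gamma_k."""
    h, w = P.shape
    regions = []
    for y in range(0, h - h % i, i):        # stride equal to the kernel size
        for x in range(0, w - w % i, i):
            s_k = P[y:y+i, x:x+i].sum()     # sum(P_k ⊙ C_k), C_k all ones
            if s_k > gamma_k:
                regions.append((y, x))
            # The out-of-bounds remainder is skipped here; zero padding
            # (described next) would treat it as hole-free anyway.
    return regions

P = (np.random.rand(256, 256) > 0.9).astype(float)   # stand-in vacancy matrix
print(len(regions_to_repair(P)), "regions flagged for GAN repair")
```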
When the sampling kernel size and the effective picture size are not integer multiples of each other, zero padding can be used: the parts beyond the boundary are filled with 0. Since a 0 in the matrix indicates no vacancy in the image, zero padding makes the algorithm treat the out-of-boundary parts as containing no image holes, thereby reducing the weight at the boundary.
The other technical scheme of the invention is as follows: an apparatus for establishing a three-dimensional reconstruction model of a space moving target, comprising:
an acquisition module, configured to acquire a space monocular image sequence of space debris or a failed satellite;
a training module, configured to process the space monocular image sequence with a trained generative adversarial network so as to eliminate the flare and shadow in each space monocular image of the sequence and generate a first image sequence;
a matching module, configured to extract the feature matching relations between each image and the other images in the first image sequence by the scale-invariant feature transform (SIFT) method (one possible realization of this step is sketched after this list);
a first construction module, configured to construct a sparse point cloud of the three-dimensional model of the space debris or failed satellite according to the feature matching relations;
and a first generation module, configured to generate a mesh model of the three-dimensional model from the sparse point cloud and to perform mesh filtering and texturing on it in turn, obtaining the three-dimensional reconstruction model of the space debris or failed satellite.
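For the matching module, a minimal sketch using OpenCV's SIFT implementation with Lowe's ratio test is given below; this is one plausible realization rather than the invention's own matching procedure, and the function name and ratio value are assumptions:

    import cv2

    def match_pair(path_a, path_b, ratio=0.75):
        """SIFT feature matching between two images of the first sequence."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)

        # Brute-force matching with Lowe's ratio test to keep reliable pairs.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                if m.distance < ratio * n.distance]
        return kp_a, kp_b, good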
In this embodiment, the apparatus further includes:
a second generation module, configured to generate a second image sequence from the three-dimensional reconstruction model once the three-dimensional reconstruction model of the space debris or failed satellite has been obtained; the images of the second sequence correspond one-to-one, and are equal in number, to those of the space monocular image sequence, and each corresponding pair shares the same shooting angle with respect to the space debris or failed satellite;
the second generation module further comprises a pure-black-background generation submodule and a pure-white-background generation submodule, which, when the second image sequence is generated from the three-dimensional reconstruction model, generate it using a pure black background and a pure white background, respectively;
a calculation module, configured to calculate the defect degree of each image in the second image sequence;
a judgment module, configured to, when the defect degree is larger than a first preset threshold, obtain the space monocular image in the space monocular image sequence that corresponds to the defective image in the second image sequence;
a repair module, configured to repair the corresponding space monocular image with the trained generative adversarial network to obtain a repaired space monocular image sequence;
and a replacement module, configured to replace the space monocular image sequence with the repaired space monocular image sequence and continue executing the above operations (the overall loop is sketched below).
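Taken together, these modules form a reconstruct-evaluate-repair loop. The skeleton below sketches that control flow only; every helper it takes (gan_clean, reconstruct, render_views, defect_degree, gan_repair) is a hypothetical placeholder for the corresponding module above and is injected as a parameter:

    from typing import Callable, List

    def build_reconstruction(images: List,
                             gamma: float,
                             gan_clean: Callable,      # flare/shadow removal (training module)
                             reconstruct: Callable,    # SIFT matching -> sparse cloud -> mesh -> texture
                             render_views: Callable,   # second sequence: same angles, black/white backgrounds
                             defect_degree: Callable,  # vacancy-based defect measure (calculation module)
                             gan_repair: Callable,     # GAN inpainting of defective source images
                             max_iters: int = 5):
        """Reconstruct-evaluate-repair loop; stops when no view is defective."""
        model = None
        for _ in range(max_iters):
            first_seq = [gan_clean(img) for img in images]
            model = reconstruct(first_seq)
            second_seq = render_views(model, images)
            defective = [k for k, view in enumerate(second_seq)
                         if defect_degree(view) > gamma]
            if not defective:
                break
            for k in defective:
                # Replace each defective source image with its repaired
                # version, then run the pipeline again on the new sequence.
                images[k] = gan_repair(images[k])
        return model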
Another embodiment of the invention: a device for establishing a three-dimensional reconstruction model of a space moving target, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the above method for establishing a three-dimensional reconstruction model of a space moving target.
The device for establishing a three-dimensional reconstruction model of a space moving target may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. It may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the foregoing is merely an example of such a device and does not constitute a limitation; the device may include more or fewer components than those listed, combine certain components, or use different components, such as an input/output device or a network access device.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments the memory may be an internal storage unit of the device for establishing a three-dimensional reconstruction model of a space moving target, such as its hard disk or main memory. In other embodiments the memory may be an external storage device of that device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card. Further, the memory may include both an internal storage unit and an external storage device. The memory is used to store the operating system, application programs, a boot loader, data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a device for establishing a three-dimensional reconstruction model of a space moving target, causes the device to implement the steps of the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, or a magnetic or optical disk. In certain jurisdictions, in accordance with legislation and patent practice, a computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
Each of the above embodiments is described with its own emphasis; for parts not detailed or illustrated in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
Modules described as separate components may or may not be physically separate, and modules may or may not be physical units, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

Claims (10)

1. A method for establishing a three-dimensional reconstruction model of a space moving target, characterized by comprising the following steps:
acquiring a space monocular image sequence of space debris or a failed satellite;
processing the space monocular image sequence with a trained generative adversarial network so as to eliminate the flare and shadow of each space monocular image in the space monocular image sequence and generate a first image sequence;
extracting the feature matching relations between each image and the other images in the first image sequence by the scale-invariant feature transform method;
establishing a sparse point cloud of the three-dimensional model of the space debris or failed satellite according to the feature matching relations;
and generating a mesh model of the three-dimensional model from the sparse point cloud, and performing mesh filtering and texturing on the mesh model in turn to obtain the three-dimensional reconstruction model of the space debris or failed satellite.
2. The method for establishing a three-dimensional reconstruction model of a space moving target according to claim 1, characterized in that, after the three-dimensional reconstruction model of the space debris or failed satellite is obtained, the method further comprises:
generating a second image sequence from the three-dimensional reconstruction model, the images of the second sequence corresponding one-to-one, and being equal in number, to those of the space monocular image sequence, each corresponding pair sharing the same shooting angle with respect to the space debris or failed satellite;
calculating the defect degree of each image in the second image sequence;
when the defect degree is larger than a first preset threshold, obtaining the space monocular image in the space monocular image sequence that corresponds to the defective image in the second image sequence;
repairing the corresponding space monocular image with the trained generative adversarial network to obtain a repaired space monocular image sequence;
and replacing the space monocular image sequence with the repaired space monocular image sequence and continuing to execute the above steps.
3. The method for establishing a three-dimensional reconstruction model of a space moving target according to claim 2, characterized in that the second image sequence is generated from the three-dimensional reconstruction model using a pure black background and a pure white background, respectively.
4. The method for establishing a three-dimensional reconstruction model of a space moving target according to claim 2 or 3, characterized in that each image of the second image sequence is generated by:
determining a sampling frame of a two-dimensional image;
sequentially sampling and scanning the three-dimensional reconstruction model with the sampling frame to generate a plurality of fragment images;
and recombining the plurality of fragment images in the sampling-scan order to obtain a sampled image.
5. The method for establishing a three-dimensional reconstruction model of a space moving target according to claim 4, characterized in that calculating the defect degree of each image in the second image sequence comprises:
acquiring a pure-black-background image and a pure-white-background image;
converting the pure-black-background image and the pure-white-background image into grayscale images;
generating a vacancy discrimination matrix from the gray value of each pixel in the grayscale image of the pure-white-background image and the gray value of each pixel in the grayscale image of the pure-black-background image;
and generating the defect degree of each image from the vacancy discrimination matrix.
6. The method for establishing a three-dimensional reconstruction model of a space moving target according to claim 5, characterized in that repairing the corresponding space monocular image with the trained generative adversarial network comprises:
segmenting the vacancy discrimination matrix according to the plurality of fragment images to obtain a plurality of fragment matrices;
calculating the sum of the elements of each fragment matrix;
marking the image region corresponding to any fragment matrix whose sum is larger than a second preset threshold as a region to be repaired;
and repairing the region to be repaired with the trained generative adversarial network.
7. An apparatus for establishing a three-dimensional reconstruction model of a space moving target, characterized by comprising:
an acquisition module, configured to acquire a space monocular image sequence of space debris or a failed satellite;
a training module, configured to process the space monocular image sequence with a trained generative adversarial network so as to eliminate the flare and shadow of each space monocular image in the space monocular image sequence and generate a first image sequence;
a matching module, configured to extract the feature matching relations between each image and the other images in the first image sequence by the scale-invariant feature transform method;
a first construction module, configured to construct a sparse point cloud of the three-dimensional model of the space debris or failed satellite according to the feature matching relations;
and a first generation module, configured to generate a mesh model of the three-dimensional model from the sparse point cloud and to perform mesh filtering and texturing on the mesh model in turn, obtaining the three-dimensional reconstruction model of the space debris or failed satellite.
8. The apparatus for establishing a three-dimensional reconstruction model of a space moving target according to claim 7, characterized by further comprising:
a second generation module, configured to generate a second image sequence from the three-dimensional reconstruction model after the three-dimensional reconstruction model of the space debris or failed satellite is obtained, the images of the second sequence corresponding one-to-one, and being equal in number, to those of the space monocular image sequence, each corresponding pair sharing the same shooting angle with respect to the space debris or failed satellite;
a calculation module, configured to calculate the defect degree of each image in the second image sequence;
a judgment module, configured to, when the defect degree is larger than a first preset threshold, obtain the space monocular image in the space monocular image sequence that corresponds to the defective image in the second image sequence;
a repair module, configured to repair the corresponding space monocular image with the trained generative adversarial network to obtain a repaired space monocular image sequence;
and a replacement module, configured to replace the space monocular image sequence with the repaired space monocular image sequence and continue executing the above operations.
9. The apparatus for establishing a three-dimensional reconstruction model of a space moving target according to claim 8, characterized in that the second generation module further comprises a pure-black-background generation submodule and a pure-white-background generation submodule;
the pure-black-background generation submodule and the pure-white-background generation submodule are configured, when the second image sequence is generated from the three-dimensional reconstruction model, to generate the second image sequence using a pure black background and a pure white background, respectively.
10. An apparatus for establishing a three-dimensional reconstruction model of a space moving target, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the method for establishing a three-dimensional reconstruction model of a space moving target according to any one of claims 1 to 6.
CN201911146834.2A 2019-11-21 2019-11-21 Method and device for establishing three-dimensional reconstruction model of space moving target Active CN111063021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911146834.2A CN111063021B (en) 2019-11-21 2019-11-21 Method and device for establishing three-dimensional reconstruction model of space moving target

Publications (2)

Publication Number Publication Date
CN111063021A (en) 2020-04-24
CN111063021B CN111063021B (en) 2021-08-27

Family

ID=70298327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911146834.2A Active CN111063021B (en) 2019-11-21 2019-11-21 Method and device for establishing three-dimensional reconstruction model of space moving target

Country Status (1)

Country Link
CN (1) CN111063021B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021017A (en) * 2012-12-04 2013-04-03 上海交通大学 Three-dimensional scene rebuilding method based on GPU acceleration
CN105184863A (en) * 2015-07-23 2015-12-23 同济大学 Unmanned aerial vehicle aerial photography sequence image-based slope three-dimension reconstruction method
US20180341836A1 (en) * 2017-05-24 2018-11-29 General Electric Company Neural network point cloud generation system
CN108734728A (en) * 2018-04-25 2018-11-02 西北工业大学 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image
CN108898630A (en) * 2018-06-27 2018-11-27 清华-伯克利深圳学院筹备办公室 A kind of three-dimensional rebuilding method, device, equipment and storage medium
CN109459043A (en) * 2018-12-12 2019-03-12 上海航天控制技术研究所 A kind of spacecraft Relative Navigation based on production reconstructed image
CN109631911A (en) * 2018-12-17 2019-04-16 浙江大学 A kind of attitude of satellite rotation information based on deep learning Target Recognition Algorithms determines method
CN109978807A (en) * 2019-04-01 2019-07-05 西北工业大学 A kind of shadow removal method based on production confrontation network
CN110119780A (en) * 2019-05-10 2019-08-13 西北工业大学 Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network
CN110210418A (en) * 2019-06-05 2019-09-06 西安电子科技大学 A kind of SAR image Aircraft Targets detection method based on information exchange and transfer learning
CN110378985A (en) * 2019-07-19 2019-10-25 中国传媒大学 A kind of animation drawing auxiliary creative method based on GAN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hu Pengcheng et al., "3D reconstruction and accuracy assessment of plants based on multi-view stereo vision," Transactions of the Chinese Society of Agricultural Engineering *
Cai Yuting et al., "Heterogeneous translation from sketch to realistic image based on two-level cascaded GAN," Pattern Recognition and Artificial Intelligence *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111640153A (en) * 2020-05-29 2020-09-08 河北工业大学 Space rigid body centroid position detection method based on fusion of vision and inertial unit
CN111640153B (en) * 2020-05-29 2021-05-28 河北工业大学 Space rigid body centroid position detection method based on fusion of vision and inertial unit
CN111640109A (en) * 2020-06-05 2020-09-08 贝壳技术有限公司 Model detection method and system
CN111796310A (en) * 2020-07-02 2020-10-20 武汉北斗星度科技有限公司 High-precision positioning method, device and system based on Beidou GNSS
CN111796310B (en) * 2020-07-02 2024-02-02 武汉北斗星度科技有限公司 Beidou GNSS-based high-precision positioning method, device and system
CN112102475B (en) * 2020-09-04 2023-03-07 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN112102475A (en) * 2020-09-04 2020-12-18 西北工业大学 Space target three-dimensional sparse reconstruction method based on image sequence trajectory tracking
CN112288817A (en) * 2020-11-18 2021-01-29 Oppo广东移动通信有限公司 Three-dimensional reconstruction processing method and device based on image
CN112435341A (en) * 2020-11-23 2021-03-02 推想医疗科技股份有限公司 Training method and device for three-dimensional reconstruction network, and three-dimensional reconstruction method and device
CN112530004A (en) * 2020-12-11 2021-03-19 北京奇艺世纪科技有限公司 Three-dimensional point cloud reconstruction method and device and electronic equipment
CN112530004B (en) * 2020-12-11 2023-06-06 北京奇艺世纪科技有限公司 Three-dimensional point cloud reconstruction method and device and electronic equipment
CN113052880A (en) * 2021-03-19 2021-06-29 南京天巡遥感技术研究院有限公司 SFM sparse reconstruction method, system and application
CN113052880B (en) * 2021-03-19 2024-03-08 南京天巡遥感技术研究院有限公司 SFM sparse reconstruction method, system and application
CN112950767A (en) * 2021-03-24 2021-06-11 东莞中国科学院云计算产业技术创新与育成中心 Target occlusion judgment method and device, computer equipment and storage medium
CN113066168A (en) * 2021-04-08 2021-07-02 云南大学 Multi-view stereo network three-dimensional reconstruction method and system
CN113112589A (en) * 2021-04-13 2021-07-13 哈尔滨工程大学 Three-dimensional reconstruction method of incremental remote sensing image based on space occupation probability fusion
CN113112589B (en) * 2021-04-13 2022-09-02 哈尔滨工程大学 Three-dimensional reconstruction method of incremental remote sensing image based on space occupation probability fusion
CN113223159A (en) * 2021-05-27 2021-08-06 哈尔滨工程大学 Single remote sensing image three-dimensional modeling method based on target texture virtualization processing
CN114119686A (en) * 2021-11-24 2022-03-01 刘文平 Multi-source remote sensing image registration method for spatial layout similarity calculation
CN114137587A (en) * 2021-12-01 2022-03-04 西南交通大学 Method, device, equipment and medium for estimating and predicting position of moving object
CN114985300A (en) * 2022-04-27 2022-09-02 佛山科学技术学院 Method and system for classifying outlet paperboards of corrugated paperboard production line
CN114985300B (en) * 2022-04-27 2024-03-01 佛山科学技术学院 Method and system for classifying outlet paperboards of corrugated board production line
CN115660985A (en) * 2022-10-25 2023-01-31 中山大学中山眼科中心 Cataract fundus image repairing method and repairing model training method and device
CN116309998A (en) * 2023-03-15 2023-06-23 杭州若夕企业管理有限公司 Image processing system, method and medium

Also Published As

Publication number Publication date
CN111063021B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN111063021B (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN110363858B (en) Three-dimensional face reconstruction method and system
Hiep et al. Towards high-resolution large-scale multi-view stereo
Hirschmuller Stereo processing by semiglobal matching and mutual information
US8331615B2 (en) Match, expand, and filter technique for multi-view stereopsis
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
CN109658515A (en) Point cloud gridding method, device, equipment and computer storage medium
Leotta et al. Urban semantic 3D reconstruction from multiview satellite imagery
WO2005081178A1 (en) Method and apparatus for matching portions of input images
CN103822616A (en) Remote-sensing image matching method with combination of characteristic segmentation with topographic inequality constraint
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN113012293A (en) Stone carving model construction method, device, equipment and storage medium
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN113298934A (en) Monocular visual image three-dimensional reconstruction method and system based on bidirectional matching
CN114494589A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer-readable storage medium
Condorelli et al. A comparison between 3D reconstruction using nerf neural networks and mvs algorithms on cultural heritage images
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Nouduri et al. Deep realistic novel view generation for city-scale aerial images
Bethmann et al. Object-based semi-global multi-image matching
Nicosevici et al. Efficient 3D scene modeling and mosaicing
Dong et al. Learning stratified 3D reconstruction
Arevalo et al. Improving piecewise linear registration of high-resolution satellite images through mesh optimization
Gao et al. Multi-target 3d reconstruction from rgb-d data
Murayama et al. Depth Image Noise Reduction and Super-Resolution by Pixel-Wise Multi-Frame Fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant