CN110782521A - Mobile terminal three-dimensional reconstruction and model restoration method and system - Google Patents
- Publication number: CN110782521A
- Application number: CN201910845738.0A
- Authority
- CN
- China
- Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T15/005 — General purpose rendering architectures
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention relates to a method and system for three-dimensional reconstruction and model repair on a mobile terminal, belonging to the technical field of image processing and three-dimensional reconstruction, and comprising the following steps: S1: acquire a series of images of the modeling object at specified angles, keeping the camera focused on the object so that the background is blurred; S2: extract image feature points with the scale-invariant feature transform (SIFT) and fuse the feature points of each pair of images, keeping the images at maximum resolution during feature extraction; S3: complete sparse reconstruction with structure from motion (SFM); S4: complete dense reconstruction with multi-view stereo (MVS), lowering the minimum-pixel threshold during dense reconstruction to increase the effective point cloud; S5: complete point-cloud modeling with Poisson surface reconstruction (PSR), lowering the surface-trimming threshold during Poisson reconstruction to preserve more of the effective surface; S6: topology reconstruction; S7: geometric correction; S8: display the model and perform 3D printing or 3D engraving.
Description
Technical Field
The invention belongs to the technical field of image processing and three-dimensional reconstruction, and relates to a method and a system for mobile-end three-dimensional reconstruction and model restoration.
Background
In recent years, 3D printing and 3D engraving on mobile terminals have developed rapidly, and three-dimensional modeling and 3D printing apps have appeared in great numbers. Mobile three-dimensional reconstruction techniques fall into two main categories. The first estimates camera and scene geometry by feature matching, e.g. structure from motion (SFM), or by color matching, e.g. dense tracking and mapping (DTAM); these methods are sensitive to matching quality and fail in homogeneous regions, so the reconstructed model has many defects and cannot be used directly for 3D printing or 3D engraving. The second uses depth and texture information of the image, e.g. shape from shading (SFS) or shape from texture (SFT); these methods depend on a depth camera or scanner and are therefore expensive and limited. On a mobile terminal a model can also be built and modified manually and then sent to a 3D printer or 3D engraving machine, but no mature open-source system that integrates three-dimensional reconstruction, 3D printing and 3D engraving is available for use.
On the other hand, a model for 3D printing or 3D engraving must be defect-free: completely closed, with no holes and no redundant patches. The existing three-dimensional reconstruction pipeline combining SFM and MVS has the following shortcomings. First, SFM estimates camera and scene geometry by feature matching; the algorithm is sensitive to matching quality and fails in homogeneous regions, so the modeling success rate and defect rate depend entirely on the quality of the input image data. In practice that quality cannot be guaranteed, because users photograph objects casually in real life and with a wide variety of mobile cameras. Second, because of inherent limitations of the algorithms, even when the image quality meets requirements the modeling result still needs a later repair stage before a completely closed, defect-free model without holes or stray patches can be output.
Disclosure of Invention
In view of this, the present invention aims to provide a method and system for three-dimensional reconstruction and model repair on a mobile terminal: a mature system integrating three-dimensional reconstruction with 3D printing or 3D engraving, established on the mobile terminal without restricting its hardware, with which a 3D printing or 3D engraving model of a real object can be obtained simply by photographing the object.
In order to achieve the purpose, the invention provides the following technical scheme:
in one aspect, the invention provides a mobile terminal three-dimensional reconstruction and model restoration method, which comprises the following steps:
s1: acquiring a series of image data of the modeling object according to a specified angle, and enabling a camera to focus the modeling object to fuzzify a background in the process;
s2: extracting image feature points by using Scale-invariant feature transform (SIFT), and fusing the feature points of every two images; maximizing the resolution of the image when extracting the feature points to extract more feature points;
s3: sparse reconstruction using structure from motion (SFM);
s4: dense reconstruction using multi-view stereo (MVS); lowering the minimum-pixel threshold during dense reconstruction to increase the effective point cloud;
s5: point cloud modeling using Poisson surface reconstruction (PSR); lowering the surface-trimming threshold during Poisson reconstruction to preserve more of the effective surface;
s6: reconstructing topology;
s7: geometric correction;
s8: and displaying the model and performing 3D printing or 3D carving.
Further, in step S2, the SIFT algorithm is used because it is invariant to rotation, translation and scale; common feature points are detected on two temporally adjacent images, and a one-to-one correspondence between their feature points is formed by choosing the minimum Euclidean distance between feature descriptors. From a system of equations built on several points of the three-dimensional object, the camera motion, i.e. the extrinsic matrix, is computed, and from it the coordinates of the three-dimensional object, i.e. the sparse point cloud.
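The minimum-Euclidean-distance matching of feature descriptors can be sketched in a few lines of pure Python. This is an illustrative sketch, not the patent's implementation: the 4-D vectors stand in for 128-D SIFT descriptors, and the ratio test is an added safeguard (the text only mentions the minimum-distance criterion).

```python
import math

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b
    by Euclidean distance, keeping only unambiguous matches (ratio test)."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:   # clearly closer than the runner-up
            matches.append((i, best[1]))
    return matches

# Toy 4-D "descriptors" standing in for 128-D SIFT vectors.
a = [(0.0, 0.0, 1.0, 1.0), (5.0, 5.0, 5.0, 5.0)]
b = [(5.1, 5.0, 5.0, 4.9), (0.1, 0.0, 1.0, 1.0), (9.0, 9.0, 9.0, 9.0)]
print(match_descriptors(a, b))   # [(0, 1), (1, 0)]
```

The returned index pairs are exactly the one-to-one correspondences from which the extrinsic-matrix equations are then built.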
Further, in step S3, the SFM algorithm is an off-line algorithm for three-dimensional reconstruction from a collection of unordered images. The world coordinates (X, Y, Z) of the three-dimensional object and the pixel coordinates (u, v, 1) on the photograph are related by the pinhole projection

    s · [u, v, 1]^T = K [R | T] · [X, Y, Z, 1]^T

where s is a scale factor. The internal reference (intrinsic) matrix is thus represented as:

    K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

with focal lengths fx, fy and principal point (cx, cy). The external reference (extrinsic) matrix is composed of a rotation matrix R and a translation vector T, and is represented as the 3×4 matrix [R | T].
the internal reference matrix is an internal parameter of the shooting device and can be obtained through calibration, and the internal reference matrix is kept unchanged in the moving process of the camera. During the moving photographing, the photographing device has the transformation of rotation R and translation T, and the R and the T form an external parameter matrix. The external reference matrix is obtained by calculation according to the corresponding relation of the two images, the SFM algorithm has the function of calculating R and T, and an equation set is constructed to solve the R and T according to the corresponding relation of the characteristic point coordinates of the two images; in the process, in order to avoid the influence of noise, R and T need to be adjusted through a bundle adjust algorithm.
Further, in step S4, a voxel-based MVS is used: MVS is equivalent to a labeling problem over voxels in 3D space, and labeling over a discrete space is a typical Markov random field (MRF) optimization problem. Writing k for a voxel, the energy consists of two terms, a consistency term and a balloon term:

    E = Σ_k E_consistency(k) + λ Σ_k E_balloon(k)

The consistency term expresses that the labeled points must be photo-consistent across the input views; the balloon term expresses a forced tendency to classify points as interior points. Without the balloon term, the consistency term would push the points to be labeled as exterior, so the balloon term applies an opposing force.
Further, in step S5, Poisson reconstruction takes as input a point cloud with its normal vectors and outputs a three-dimensional mesh; the point cloud marks positions on the object surface, and the normal vectors mark the inside and outside directions. An estimate of the smoothed object surface is given by implicitly fitting an indicator function derived from the object: the reconstruction of the surface M is converted into the reconstruction of its indicator function χ_M, and the point cloud with its normal vectors is linked to the gradient of χ_M.
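The χ_M formulation sketched above is, in the standard Poisson surface reconstruction of Kazhdan et al. (an assumption here, since the patent's own equations are not reproduced):

```latex
% The oriented point samples define a vector field \vec{V} that approximates
% the gradient of the indicator function \chi_M of the solid bounded by M:
\nabla \chi_M \approx \vec{V}
% Minimizing \int \left\| \nabla \chi - \vec{V} \right\|^2 over \chi
% yields the Poisson equation that PSR solves:
\Delta \chi = \nabla \cdot \vec{V}
```

Extracting an isosurface of the solved χ then gives the output triangle mesh.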
Further, in step S6, topology reconstruction is used to form a simple, manifold mesh model of consistently oriented triangles, and specifically includes the following steps:
s61: firstly, converting each polygon into a group of triangles through triangulation;
s62: establishing ordered explicit connections between adjacent triangles;
s63: compute all adjacencies of the triangles: a) remove topological singularities, select the largest connected triangle component, and remove isolated points by cutting and extrusion; b) assign an orientation to a seed triangle and propagate it to neighboring triangles; the mesh is traversed, triangles are flipped where needed, and once all triangles have been visited the mesh may be cut along edges whose incident triangles have inconsistent orientations;
s64: repairing each hole: the hole is triangulated first, and then a new vertex is inserted in the patch triangulation to approximate the sampling density of the surrounding area.
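Steps S61 and S62 can be sketched in pure Python. The fan triangulation below is a simplification valid for convex polygons (a production implementation would use ear clipping), and the edge-to-triangle map is one common way to make the "explicit connections" of S62 concrete.

```python
def fan_triangulate(polygon):
    """Split a convex polygon, given as an ordered vertex-index list, into a
    fan of triangles sharing the first vertex (step S61, simplified)."""
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1]) for i in range(1, len(polygon) - 1)]

def triangle_adjacency(triangles):
    """Map each undirected edge to the triangles using it (step S62); in a
    closed manifold mesh every edge borders exactly two triangles."""
    edge_to_tris = {}
    for t_idx, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(frozenset(e), []).append(t_idx)
    return edge_to_tris

# A quadrilateral 0-1-2-3 becomes two triangles sharing the diagonal edge {0, 2}.
tris = fan_triangulate([0, 1, 2, 3])
print(tris)                              # [(0, 1, 2), (0, 2, 3)]
print(triangle_adjacency(tris)[frozenset((0, 2))])   # [0, 1]
```

Edges that map to a single triangle are boundary edges, which is how the holes repaired in S64 are detected.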
Further, in step S7, geometric correction is used to remove typical geometric defects from the triangular mesh, namely degeneracies (i.e. zero-area triangles) and self-intersections. The specific steps are as follows:
s71: mark all triangles incident to the vertices of a triangle and take them as that triangle's neighborhood;
s72: remove degeneracies: this is done by swapping edges (for flat triangles with a near-180° angle) and contracting edges (for triangles with a near-zero edge); if that fails, delete the first-order neighborhood of the degenerate triangle and repair by triangulation; if it still fails, delete the second-order neighborhood and repair; at most the third-order neighborhood is deleted;
s73: remove self-intersections: based on iterative deletion and hole filling, intersections are first found with a spatial-partition method (uniform space subdivision) using a fixed grid of 100³ voxels; once all intersecting triangle pairs have been determined, every intersecting triangle is deleted. Disconnected components that may result from the deletion are also deleted, the remaining holes are filled, and the self-intersection check is run again on the updated voxels. For each triangle that still intersects another part of the mesh, its first-order neighborhood is deleted, the hole is filled and the voxels are updated; if it still intersects, the second-order neighborhood is deleted, and so on;
s74: remove defects jointly: since the repair algorithm must eliminate both degeneracies and self-intersections, the iterations are integrated into a single loop that alternates the operations of S72 and S73 until all defects are eliminated.
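Detecting the zero-area degeneracies targeted by S72 reduces to a cross-product area test. A minimal sketch (the tolerance eps is an assumed parameter, not a value from the patent):

```python
import math

def triangle_area(p, q, r):
    """Area of triangle pqr in 3D, half the norm of the edge cross product."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def find_degenerate(vertices, triangles, eps=1e-12):
    """Indices of (near-)zero-area triangles, the degeneracies step S72 removes."""
    return [i for i, (a, b, c) in enumerate(triangles)
            if triangle_area(vertices[a], vertices[b], vertices[c]) < eps]

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0)]
tris = [(0, 1, 2), (0, 1, 3)]          # second triangle is collinear -> zero area
print(find_degenerate(verts, tris))    # [1]
```

Each flagged triangle is then handled by the edge-swap/contraction and neighborhood-deletion cascade described in S72.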
Further, in step S8, after the model is displayed it can be customized: personalized cropping and noise reduction modify the 3D model, the model is repaired again by the method of steps S6-S7, and it is then printed or engraved.
In this method, because the model repair module runs only after the three-dimensional reconstruction is complete, the number of points in the cloud must be maximized when the three-dimensional model is reconstructed, the texture of the model must be enhanced, and background and noise must be suppressed. The specific measures are as follows:
1. besides the texture of the image, the image resolution directly affects the number of feature points. Therefore, when acquiring image data the user is prompted to use the highest possible resolution, and in the feature-extraction stage the image is kept at full resolution, with no down-sampling, so that more feature points are extracted;
2. in the dense point-cloud reconstruction stage, the voxel count is increased and the minimum-pixel threshold is lowered to increase the effective point cloud;
3. Poisson reconstruction of a non-closed point cloud inevitably generates redundant triangular patches at the boundary. To guarantee the correctness of the model these patches must be trimmed away, but trimming may also remove originally correct patches and open holes in the model. In the Poisson reconstruction stage this method therefore sets the trimming threshold that identifies redundant patches to its minimum, preserving as much of the reconstructed surface as possible.
On the other hand, the invention provides a mobile terminal three-dimensional reconstruction and model repair system, which comprises a mobile terminal, a background server, and a 3D printer or a 3D engraving machine, wherein:
the mobile terminal comprises an image acquisition module and a model display module, wherein the image acquisition module is used for guiding a user to shoot a sufficient number of images for a modeling object according to a specified angle, and the model display module is used for displaying a reconstructed and repaired model before 3D printing or 3D carving;
the background server comprises
A feature extraction module: the method is used for extracting image feature points by using a Scale-invariant feature transform (SIFT) algorithm and fusing the feature points of every two images;
a sparse reconstruction module: the image fusion method is used for performing sparse reconstruction on the image after the feature points are fused by utilizing a Structure From Motion (SFM) algorithm;
a dense reconstruction module: used for dense reconstruction of the sparsely reconstructed model by multi-view stereo (MVS);
a point cloud modeling module: the method is used for carrying out point cloud modeling on the densely reconstructed model by utilizing a Poisson Surface Reconstruction (PSR);
a model repair module: used for topology reconstruction and geometric correction of the model, following steps S61-S64 and S71-S74 of the mobile-terminal three-dimensional reconstruction and model repair method;
a communication module: used for receiving the modeling-object images collected and uploaded by the mobile terminal and, after the model is reconstructed and repaired, sending the model to the model display module of the mobile terminal and then, as required, to the 3D printer or 3D engraving machine;
and the 3D printer or the 3D engraving machine is used for receiving and printing the three-dimensional model sent by the background server.
Further, the mobile terminal also comprises a 3D interactive editing module for customizing the displayed 3D model, providing personalized cropping and noise reduction;
and the model repair module of the background server is further used for repairing the 3D model sent back by the mobile terminal after it has been modified by the 3D interactive editing module, so that it meets the requirements of printing or engraving.
The invention has the following beneficial effects. Models produced by existing mobile three-dimensional reconstruction software are defective and cannot be used directly for 3D printing or 3D engraving, and no mature open-source system integrating three-dimensional reconstruction with 3D printing or 3D engraving is available for use. The main aim of the invention is to establish such a mature integrated system without restricting the mobile terminal's hardware.
The invention overcomes the inability of the prior art to print directly after three-dimensional reconstruction with an ordinary camera: when reconstructing the three-dimensional model, the method maximizes the number of points in the cloud, enhances the texture of the model, and suppresses background and noise.
In the model repair stage, a lightweight fast-processing algorithm restricts neighborhood operations to at most third-order neighborhoods and searches for self-intersecting triangles with a fixed-size window, so that the processing time is minimized; a user can obtain a 3D printing or 3D engraving model of a real object simply by photographing it.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a mobile terminal three-dimensional reconstruction and model restoration method according to the present invention;
fig. 2 is a schematic structural diagram of the mobile terminal three-dimensional reconstruction and model repair system according to the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided for the purpose of illustrating the invention only and are not intended to limit it; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if there is an orientation or positional relationship indicated by terms such as "upper", "lower", "left", "right", "front", "rear", etc., based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not an indication or suggestion that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes, and are not to be construed as limiting the present invention, and the specific meaning of the terms may be understood by those skilled in the art according to specific situations.
On one hand, as shown in fig. 1, the invention provides a mobile terminal three-dimensional reconstruction and model repair method, which includes the following steps:
s1: shooting a sufficient number of images according to a specified angle, focusing a modeling object and blurring a background;
s2: extracting image feature points by using Scale-invariant feature transform (SIFT) and fusing the feature points of every two images; maximizing the resolution of the image when extracting the feature points to extract more feature points;
s3: sparse reconstruction is completed by using structure from motion (SFM);
s4: dense reconstruction is completed by using multi-view stereo (MVS); the minimum-pixel threshold is lowered during dense reconstruction, increasing the effective point cloud;
s5: point cloud modeling is completed by using Poisson surface reconstruction (PSR); the surface-trimming threshold is lowered during Poisson reconstruction, preserving more of the effective surface;
because model repair is executed immediately after the three-dimensional reconstruction is complete, this method maximizes the number of points in the cloud, enhances the texture of the model, and suppresses background and noise when reconstructing the three-dimensional model. Specifically: 1. the image resolution is maximized in the feature-extraction stage to extract more feature points; 2. in the dense point-cloud reconstruction stage, the minimum-pixel threshold is lowered to increase the effective point cloud; 3. in the Poisson reconstruction stage, the trimming threshold is lowered to preserve more of the effective surface;
after the three-dimensional reconstruction is complete, model repair is executed. Because the model built in the reconstruction stage has abundant feature points and texture information and relatively few defects, and because end users are sensitive to processing time, the repair stage uses a lightweight fast-processing algorithm whose main advantages are that neighborhood operations are restricted to at most third-order neighborhoods and self-intersecting triangles are searched with a fixed-size window, minimizing processing time. The model repair process consists of two parts, topology reconstruction and geometric correction:
s6: reconstructing topology;
s7: geometric correction;
s8: and displaying the model and performing 3D printing or 3D carving.
Alternatively, in step S2, the SIFT algorithm is used because it is invariant to rotation, translation and scale; common feature points are detected on two temporally adjacent images and placed in one-to-one correspondence by choosing the minimum Euclidean distance between feature descriptors, so that the camera motion, i.e. the extrinsic matrix, is computed from a system of equations built on several points of the three-dimensional object, and from it the coordinates of the object, i.e. the sparse point cloud.
Optionally, in step S3, the SFM algorithm is an off-line algorithm for three-dimensional reconstruction from a collection of unordered images. The world coordinates (X, Y, Z) of the three-dimensional object and the pixel coordinates (u, v, 1) on the photograph are related by

    s · [u, v, 1]^T = K [R | T] · [X, Y, Z, 1]^T

where s is a scale factor. The internal reference (intrinsic) matrix is thus represented as:

    K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]

The external reference (extrinsic) matrix is composed of a rotation matrix R and a translation vector T, represented as the 3×4 matrix [R | T].
the internal reference matrix is an internal parameter of the shooting device and can be obtained through calibration, the internal reference matrix is kept unchanged in the camera moving process, in the moving shooting process, the shooting device has the transformation of rotating R and translating T, the R and the T form the external reference matrix, the external reference matrix needs to be obtained through calculation according to the corresponding relation of two images, the SFM algorithm is how to calculate the R and the T, an equation set is constructed through the corresponding relation of the characteristic point coordinates of the two images to obtain the R and the T, and meanwhile, in the process, the R and the T need to be adjusted through a bundleadjust algorithm to avoid the influence of noise.
Optionally, in step S4, a voxel-based MVS is used: MVS is equivalent to a labeling problem over voxels in 3D space, and labeling over a discrete space is a typical Markov random field (MRF) optimization problem. Writing k for a voxel, the energy consists of two terms, a consistency term and a balloon term:

    E = Σ_k E_consistency(k) + λ Σ_k E_balloon(k)

The consistency term expresses that the labeled points must be photo-consistent across the input views; the balloon term expresses a forced tendency to classify points as interior points. Without the balloon term, the consistency term would push the points to be labeled as exterior, so the balloon term applies an opposing force.
Optionally, in step S5, Poisson reconstruction takes as input a point cloud with its normal vectors and outputs a three-dimensional mesh; the point cloud marks positions on the object surface, and the normal vectors mark the inside and outside directions. An estimate of the smoothed object surface is given by implicitly fitting an indicator function derived from the object: the reconstruction of the surface M is converted into the reconstruction of its indicator function χ_M, and the point cloud with its normal vectors is linked to the gradient of χ_M.
Optionally, in step S6, topology reconstruction is used to form a simple, manifold mesh model of consistently oriented triangles, and specifically includes the following steps:
s61: firstly, converting each polygon into a group of triangles through triangulation;
s62: establishing ordered explicit connections between adjacent triangles;
s63: compute all adjacencies of the triangles: a) remove topological singularities, select the largest connected triangle component, and remove isolated points by cutting and extrusion; b) assign an orientation to a seed triangle and propagate it to neighboring triangles; the mesh is traversed, triangles are flipped where needed, and once all triangles have been visited the mesh may be cut along edges whose incident triangles have inconsistent orientations;
s64: repairing each hole: the hole is triangulated first, and then a new vertex is inserted in the patch triangulation to approximate the sampling density of the surrounding area.
Optionally, in step S7, the geometric correction is used to remove typical geometric defects in the triangular mesh, including degeneracies (i.e., zero-area triangles) and self-intersections; the specific steps are as follows:
s71: marking all adjacent triangles where the vertex of the triangle is located, and setting the adjacent triangles as the neighborhoods of the triangle;
s72: removing the degradation: the method is realized by exchanging (straight triangle) and shrinking (zero triangle) edges, if the method is invalid, the first-order single neighborhood of the degraded triangle is deleted, then triangulation repair is carried out, the second-order single neighborhood is deleted if the method is not used, then repair is carried out, and at most three orders are deleted;
s73: removing self-intersection: based on the method of iterative deletion/blank filling, firstly, the intersection is found by using a space segmentation method (uniform space subdivision), and fixed 1003 voxels are used; after all intersecting triangle pairs have been determined, a delete operation is performed, with each pair of intersecting and non-intersecting triangles deleted. Disconnected non-connected components that may result after deletion are also deleted, the remaining blank is filled in, and the selfing check is run again in the updated voxels. For each triangle still intersected with other parts of the grid, deleting a first-order single neighborhood of the triangle, filling a blank and updating a voxel, if the triangle still intersects with other parts of the grid, deleting a second-order single neighborhood, and so on;
s74: removing the integrated defect: since the goal of the repair algorithm is to eliminate both the degradation and the self-intersection, the iterations must be integrated into a single loop, alternating the operations of S72 and S73 in a single loop until all defects are eliminated.
Optionally, in step S8, after the model is displayed, a personalized customization operation can be performed on it to achieve personalized cutting and noise reduction; the modified 3D model is then repaired again by the method of steps S6-S7 before being printed or engraved.
In this method, because the model repair module runs after the three-dimensional reconstruction is completed, the number of points in the cloud should be maximized during reconstruction, the texture of the model should be enhanced, and background and noise should be ignored. The specific measures are as follows:
1. besides the image texture, the image resolution directly affects the number of feature points. Therefore, when acquiring image data the user is prompted to use the highest possible resolution, and in the feature extraction stage the full image resolution is used without down-sampling so as to extract more feature points;
2. in the dense point-cloud reconstruction stage, the number of voxels is increased and the minimum-pixel threshold is lowered to increase the number of valid points;
3. Poisson reconstruction of a non-closed point cloud inevitably generates redundant triangular patches at the boundary. To guarantee a correct model these patches must be trimmed, but trimming may also remove originally correct patches and leave holes in the model. In the Poisson reconstruction stage, the method therefore sets the trimming threshold that identifies redundant patches to its minimum, preserving the reconstructed surface to the greatest extent.
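The trimming trade-off described above can be sketched as follows; the density-based criterion and all data are illustrative assumptions rather than the patent's implementation (Poisson reconstruction tools commonly expose a per-vertex sample-density estimate that can drive such trimming):

```python
# Keep a face only if every vertex has sample support above a threshold;
# boundary faces produced by Poisson reconstruction of an open scan tend
# to have low support. Lowering the threshold keeps more of the surface,
# which is the choice the method makes to preserve the reconstruction.
def trim_faces(faces, density, threshold):
    return [f for f in faces if min(density[v] for v in f) >= threshold]

faces = [(0, 1, 2), (2, 3, 4)]
density = {0: 9.0, 1: 8.5, 2: 7.0, 3: 0.5, 4: 0.4}   # boundary verts sparse
print(len(trim_faces(faces, density, threshold=1.0)))  # aggressive trim -> 1
print(len(trim_faces(faces, density, threshold=0.1)))  # minimal trim   -> 2
```

The second call illustrates the patent's setting: a near-minimal threshold that retains almost every reconstructed face.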
On the other hand, as shown in Fig. 2, the invention provides a mobile-terminal three-dimensional reconstruction and model repair system, which includes a mobile terminal, a background server, and a 3D printer or 3D engraving machine, wherein:
the mobile terminal comprises an image acquisition module and a model display module; the image acquisition module guides the user to shoot a sufficient number of images of the modeling object at specified angles, and the model display module displays the reconstructed and repaired model before 3D printing or 3D engraving;
the background server comprises
A feature extraction module: the method is used for extracting image feature points by using a Scale-invariant feature transform (SIFT) algorithm and fusing the feature points of every two images;
a sparse reconstruction module: the image fusion method is used for performing sparse reconstruction on the image after the feature points are fused by utilizing a Structure From Motion (SFM) algorithm;
dense reconstruction module: the method is used for carrying out dense reconstruction on the model after sparse reconstruction by utilizing a Stereo vision Method (MVS);
a point cloud modeling module: the method is used for carrying out point cloud modeling on the densely reconstructed model by utilizing a Poisson Surface Reconstruction (PSR);
a model restoration module: in the three-dimensional reconstruction and model restoration method for the mobile terminal, the steps S61-S64 and S71-S74 are used for carrying out topology reconstruction and geometric correction on the model;
a communication module: the modeling object image acquisition module is used for receiving a modeling object image acquired and uploaded by the mobile terminal; after the model is reconstructed and repaired, the model is sent to a model display module of the mobile terminal and then sent to a 3D printer or a 3D engraving machine according to the requirement;
and the 3D printer or the 3D engraving machine is used for receiving the three-dimensional model sent by the background server and printing or engraving the three-dimensional model.
Optionally, the mobile terminal further comprises a 3D interactive editing module, configured to perform personalized customization operation on the displayed 3D model, so as to implement personalized cutting and noise reduction processing;
and the model repair module of the background server is also used to repair the 3D model modified by the 3D interactive editing module after receiving it from the mobile terminal, so that it meets the requirements of printing or engraving.
According to the mobile-terminal three-dimensional reconstruction and model restoration method and system, the user first shoots a sufficient number of pictures at specified angles with the camera of the mobile terminal, keeping as little background in frame as possible and focusing on the modeling object so that the background is blurred; the user is warned during shooting that reflective or transparent surfaces may cause modeling to fail. The mobile terminal then uploads the 2D images to the background server, the server performs 3D reconstruction and model repair, and the 3D model is sent back to the mobile terminal for display. The user can also edit the 3D model to cut away noise or background and resubmit it; the background server then performs automatic hole filling and format conversion again, generating a model that can be used directly for 3D printing or 3D engraving, and sends it to the 3D printer or 3D engraver to be printed or carved.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.
Claims (10)
1. A mobile-terminal three-dimensional reconstruction and model restoration method, characterized by comprising the following steps:
s1: acquiring a series of images of the modeling object at specified angles, focusing the camera on the modeling object during this process so that the background is blurred;
s2: extracting image feature points by using Scale Invariant Feature Transform (SIFT), and fusing the feature points of every two images; maximizing the resolution of the image when extracting the feature points;
s3: sparse reconstruction is carried out by using a motion recovery structure SFM;
s4: carrying out dense reconstruction by using a stereoscopic vision method MVS; reducing the threshold value of the minimum pixel point during dense reconstruction, and increasing effective point cloud;
s5: performing point cloud modeling by using a Poisson reconstruction method PSR; decreasing the shear element at poisson reconstruction increases the effective surface;
s6: reconstructing topology;
s7: geometric correction;
s8: and displaying the model and performing 3D printing or 3D carving.
2. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 1, wherein: in step S2, the SIFT algorithm maintains invariance to rotation, translation and scale transformation; common feature points are detected on two temporally adjacent images, and a one-to-one correspondence between their feature points is established by the minimum Euclidean distance of the feature descriptors, so that the camera-motion equations can be solved from a system of equations constructed from multiple points on the three-dimensional object, i.e., the external reference matrix is calculated, and the coordinates of the three-dimensional object, i.e., the sparse point cloud, are further calculated.
3. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 1, wherein: in step S3, the SFM algorithm is an off-line algorithm for three-dimensional reconstruction from a collection of unordered images; the relationship between the world coordinates (X, Y, Z) of the three-dimensional object and the homogeneous pixel coordinates (u, v, 1) on the photograph is as follows:
the internal reference matrix is thus represented as:
the external reference matrix is composed of a rotation matrix R and a translation vector T, and is represented as:
the internal reference (intrinsic) matrix contains the internal parameters of the shooting device and can be obtained through calibration; it remains unchanged while the camera moves. During moving shooting, the shooting device undergoes a rotation R and a translation T, which together form the external reference (extrinsic) matrix. The extrinsic matrix is computed from the correspondence between the two images: the SFM algorithm obtains R and T by constructing a system of equations from the corresponding feature-point coordinates of the two images, and during this process, to avoid the influence of noise, R and T are further refined with a bundle adjustment algorithm.
4. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 1, wherein: in step S4, the voxel-based MVS is equivalent to a labeling problem over voxels in 3D space; labeling a discrete space is a Markov random field (MRF) optimization problem, where k is a voxel point and the two energy terms are a consistency term and a balloon term:
the consistency term expresses the photo-consistency of the two points:
the balloon term expresses a forced tendency to label points as interior points:
5. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 1, wherein: in step S5, Poisson reconstruction takes an input point cloud and its normal vectors and outputs a three-dimensional mesh, where each point represents a position on the object surface and its normal vector indicates the inside/outside direction; an estimate of the smooth object surface is given by implicitly fitting an indicator function derived from the object;
the reconstruction of the surface ∂M is thereby converted into the reconstruction of the indicator function χ_M, and the point cloud with its normal vectors is linked to χ_M through its gradient field.
6. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 1, wherein: in step S6, the topology reconstruction is used to form a mesh model that is a simple manifold of consistently oriented triangles, and specifically includes the following steps:
s61: firstly, converting each polygon into a group of triangles through triangulation;
s62: establishing ordered explicit connections between adjacent triangles;
s63: calculate all adjacencies of the triangles: a) remove topological singularities, select the largest connected triangle component, and remove isolated points by cutting and extrusion; b) assign an orientation to a seed triangle and propagate it to neighboring triangles: the mesh is traversed, triangles are flipped where necessary, and once all triangles have been visited the mesh may be cut along edges whose incident triangles have inconsistent orientations;
s64: repairing each hole: the hole is triangulated first, and then a new vertex is inserted in the patch triangulation to approximate the sampling density of the surrounding area.
7. The mobile-terminal three-dimensional reconstruction and model restoration method according to claim 6, wherein: in step S7, the geometric correction is used to remove typical geometric defects in the triangular mesh, including degeneracies and self-intersections, and includes the following specific steps:
s71: marking all adjacent triangles where the vertex of the triangle is located, and setting the adjacent triangles as the neighborhoods of the triangle;
s72: remove degeneracies: this is done by swapping and collapsing edges; if this fails, the first-order neighborhood of the degenerate triangle is deleted and the hole re-triangulated; if that still fails, the second-order neighborhood is deleted and repaired, up to at most the third order;
s73: remove self-intersections: based on an iterative delete/fill method, intersections are first found with a spatial partitioning method using a fixed grid of 100³ voxels; after all pairs of intersecting triangles have been determined, a delete operation is performed in which both triangles of each intersecting pair are removed; disconnected components that may result from the deletion are also removed, the remaining holes are filled, and the self-intersection check is run again on the updated voxels; for each triangle that still intersects other parts of the mesh, its first-order neighborhood is deleted, the hole is filled and the voxels updated; if it still intersects, the second-order neighborhood is deleted, and so on;
s74: remove the combined defects: since the goal of the repair algorithm is to eliminate both degeneracies and self-intersections, the iterations must be integrated into a single loop, alternating the operations of S72 and S73 until all defects are eliminated.
8. The mobile-terminal three-dimensional reconstruction and model restoration method according to any one of claims 1 to 7, wherein: in step S8, after the model is displayed, a personalized customization operation can be performed on it to achieve personalized cutting and noise reduction; the modified 3D model is then repaired again through steps S6-S7 before being printed or engraved.
9. A mobile-terminal three-dimensional reconstruction and model restoration system, characterized by comprising a mobile terminal, a background server, and a 3D printer or a 3D engraving machine, wherein:
the mobile terminal comprises an image acquisition module and a model display module, wherein the image acquisition module is used for guiding a user to shoot a sufficient number of images for a modeling object according to a specified angle, and the model display module is used for displaying a reconstructed and repaired model before 3D printing or 3D carving;
the background server comprises:
a feature extraction module, used to extract image feature points with the scale-invariant feature transform (SIFT) algorithm and to fuse the feature points of every two images;
a sparse reconstruction module, used to perform sparse reconstruction on the feature-fused images with the structure-from-motion (SFM) algorithm;
a dense reconstruction module, used to perform dense reconstruction on the sparsely reconstructed model with the multi-view stereo (MVS) method;
a point cloud modeling module, used to perform surface modeling on the densely reconstructed model with Poisson surface reconstruction (PSR);
a model repair module, used to perform topology reconstruction and geometric correction on the model according to steps S61-S64 and S71-S74 of the mobile-terminal three-dimensional reconstruction and model restoration method of claim 7;
a communication module, used to receive the modeling-object images collected and uploaded by the mobile terminal and, after the model has been reconstructed and repaired, to send it to the model display module of the mobile terminal and then, as required, to the 3D printer or 3D engraving machine;
and the 3D printer or the 3D engraving machine is used for receiving the three-dimensional model sent by the background server and printing or engraving the three-dimensional model.
10. The mobile-end three-dimensional reconstruction and model restoration system according to claim 9, wherein: the mobile terminal also comprises a 3D interactive editing module which is used for carrying out personalized customization operation on the displayed 3D model so as to realize personalized cutting and noise reduction processing;
and the model repair module of the background server is also used to repair the 3D model modified by the 3D interactive editing module after receiving it from the mobile terminal, so that it meets the requirements of printing or engraving.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910845738.0A CN110782521A (en) | 2019-09-06 | 2019-09-06 | Mobile terminal three-dimensional reconstruction and model restoration method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110782521A true CN110782521A (en) | 2020-02-11 |
Family
ID=69384105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910845738.0A Pending CN110782521A (en) | 2019-09-06 | 2019-09-06 | Mobile terminal three-dimensional reconstruction and model restoration method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110782521A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111508063A (en) * | 2020-04-13 | 2020-08-07 | 南通理工学院 | Three-dimensional reconstruction method and system based on image |
CN111968223A (en) * | 2020-08-10 | 2020-11-20 | 哈尔滨理工大学 | Three-dimensional reconstruction system for 3D printing process based on machine vision |
CN112070883A (en) * | 2020-08-28 | 2020-12-11 | 哈尔滨理工大学 | Three-dimensional reconstruction method for 3D printing process based on machine vision |
CN112381945A (en) * | 2020-11-27 | 2021-02-19 | 中国科学院自动化研究所 | Reconstruction method and system of three-dimensional model transition surface |
CN112882666A (en) * | 2021-03-15 | 2021-06-01 | 上海电力大学 | Three-dimensional modeling and model filling-based 3D printing system and method |
CN113009827A (en) * | 2021-02-19 | 2021-06-22 | 济南中科数控设备有限公司 | Dynamic performance optimization method of numerical control engraving machine based on virtual debugging |
CN113538697A (en) * | 2021-07-22 | 2021-10-22 | 内蒙古科技大学 | Method and device for repairing depression caused by creep effect of hot bed, storage medium and terminal |
CN113601833A (en) * | 2021-08-04 | 2021-11-05 | 温州科技职业学院 | FDM three-dimensional printing control system |
CN114697516A (en) * | 2020-12-25 | 2022-07-01 | 花瓣云科技有限公司 | Three-dimensional model reconstruction method, device and storage medium |
CN115661495A (en) * | 2022-09-28 | 2023-01-31 | 中国测绘科学研究院 | Large-scale SfM method for compact division and multi-level combination strategy |
CN118003031A (en) * | 2024-04-09 | 2024-05-10 | 中国长江电力股份有限公司 | Self-adaptive machining method for material-adding repairing bearing bush |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107341851A (en) * | 2017-06-26 | 2017-11-10 | 深圳珠科创新技术有限公司 | Real-time three-dimensional modeling method and system based on unmanned plane image data |
CN108038905A (en) * | 2017-12-25 | 2018-05-15 | 北京航空航天大学 | A kind of Object reconstruction method based on super-pixel |
CN108090960A (en) * | 2017-12-25 | 2018-05-29 | 北京航空航天大学 | A kind of Object reconstruction method based on geometrical constraint |
CN108447116A (en) * | 2018-02-13 | 2018-08-24 | 中国传媒大学 | The method for reconstructing three-dimensional scene and device of view-based access control model SLAM |
CN108734728A (en) * | 2018-04-25 | 2018-11-02 | 西北工业大学 | A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image |
CN109685886A (en) * | 2018-11-19 | 2019-04-26 | 国网浙江杭州市富阳区供电有限公司 | A kind of distribution three-dimensional scenic modeling method based on mixed reality technology |
CN110047139A (en) * | 2019-04-28 | 2019-07-23 | 南昌航空大学 | A kind of specified target three-dimensional rebuilding method and system |
Non-Patent Citations (3)
Title |
---|
刘桂敏: "基于图像的建模与绘制技术研究", 《中国优秀硕士学位论文全文数据库》 * |
汪思颖: "基于图像的大规模场景三维重建", 《百度》 * |
田丽: "快速真实感人体三维几何建模与重建技术研究", 《中国优秀硕士学位论文全文数据库》 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200211 |