CN115272587B - Model file generation method and medium for 3D printing and electronic equipment - Google Patents

Model file generation method and medium for 3D printing and electronic equipment

Info

Publication number
CN115272587B
CN115272587B (application CN202211175701.XA)
Authority
CN
China
Prior art keywords
pictures
feature points
key feature
model file
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211175701.XA
Other languages
Chinese (zh)
Other versions
CN115272587A (en)
Inventor
李观汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Anycubic Technology Co Ltd
Original Assignee
Shenzhen Anycubic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Anycubic Technology Co Ltd filed Critical Shenzhen Anycubic Technology Co Ltd
Priority to CN202211175701.XA priority Critical patent/CN115272587B/en
Publication of CN115272587A publication Critical patent/CN115272587A/en
Application granted granted Critical
Publication of CN115272587B publication Critical patent/CN115272587B/en
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B29 WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29C SHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C 64/00 Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C 64/30 Auxiliary operations or equipment
    • B29C 64/386 Data acquisition or data processing for additive manufacturing
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B33 ADDITIVE MANUFACTURING TECHNOLOGY
    • B33Y ADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y 50/00 Data acquisition or data processing for additive manufacturing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30244 Camera pose
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 10/00 Technologies related to metal processing
    • Y02P 10/25 Process efficiency

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Manufacturing & Machinery (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The application provides a model file generation method, a medium and an electronic device for 3D printing, wherein the method comprises the following steps: receiving pictures or a video sent by a user side to obtain a plurality of pictures of the same article; extracting the target key feature points and target shooting angles of the plurality of pictures; arranging the plurality of pictures in a sequential ring based on their target key feature points and target shooting angles to obtain a plurality of sequentially ring-arranged pictures; and performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file. The method provides users with a simple, convenient and zero-cost three-dimensional modeling scheme, lowers the threshold of three-dimensional modeling, and promotes the development of 3D printing.

Description

Model file generation method and medium for 3D printing and electronic equipment
Technical Field
The application relates to the technical field of 3D printing, in particular to a model file generation method, a medium and electronic equipment for 3D printing.
Background
A 3D printer, also called a three-dimensional printer, is a machine based on additive manufacturing, i.e. rapid prototyping, technology: starting from a digital model file, it constructs a three-dimensional object by depositing bondable materials such as special waxes, powdered metals or plastics layer by layer. The three-dimensional model is the first step of 3D printing; without a model file, 3D printing cannot proceed. For the user, a fine, lifelike model file is a mapping of a real object in the user's mind. For platforms and companies, a large and reliable model library is the digital asset and electronic material of a 3D printing company. Three-dimensional modeling therefore plays an irreplaceable, primary role in 3D printing.
Three-dimensional modeling refers, in a broad sense, to presenting an object to be depicted in three dimensions on an electronic device, where it can be stored in a three-dimensional file format. A three-dimensional model differs from a two-dimensional picture: a two-dimensional picture can only display part of an object in planar form from a single angle and cannot present it from multiple angles, comprehensively and omnidirectionally. So if a user wants to observe an object from multiple angles, or to print it as a real stereoscopic 3D object, a two-dimensional picture cannot take on this task.
At present, three-dimensional modeling is still mostly performed by professionals using professional modeling software such as 3ds Max or Rhinoceros, or by scanning objects with handheld, high-precision, high-priced scanning equipment to obtain a model. Professional software is complex to operate: even trained game and film practitioners may need several days of high-intensity work to create a reasonably good model, and ordinary users would have to invest far more time and effort. At the same time, the difficulty, the overly long learning period and the insufficient learning feedback mean that most users cannot persist; even after learning, a user lacking art knowledge or aesthetic sense often obtains unsatisfactory results far from what was wanted. Modeling by ordinary users therefore currently suffers from high difficulty, poor molding effect and low popularity.
Besides three-dimensional modeling with professional software, a scanning instrument such as a 3D scanner can be used to scan an object in all directions, perform three-dimensional reconstruction, and generate a three-dimensional model of the object. A 3D scanner, i.e. a laser scanner, essentially emits laser light that contacts the object; the offset distance between the laser and the object is calculated by triangulation, so that a part of the object is computed, and the scan proceeds step by step until the three-dimensional reconstruction of the object is complete. The key factors determining the molding quality of the three-dimensional model are the accuracy, intensity and frequency of the emitted laser, i.e. the quality and strength of the laser emitter, which directly determine the point-line density and penetration rate of the laser scan and thereby affect the molding quality of the reconstructed model. A laser scanner of good quality and intensity is often quite expensive; only professional cultural-relic scanning companies or professionals with special needs will configure such a scanner, and the purchase and use costs dissuade most ordinary users.
Therefore, whether by professional software modeling or by scanning an object with a 3D scanner for three-dimensional reconstruction, there are various limitations and thresholds, which dampen the enthusiasm of ordinary users for modeling and creation and in turn suppress the development of 3D printing.
Therefore, how to implement a 3D modeling solution that is simple to operate and low in cost is a technical problem to be solved.
Disclosure of Invention
In view of this, the application provides a model file generation method, a medium and an electronic device for 3D printing, mainly to solve the problems that 3D modeling is complex and costly.
According to one aspect of the present application, there is provided a model file generation method for 3D printing, the method comprising: receiving pictures or a video sent by a user side to obtain a plurality of pictures of the same article; extracting the target key feature points and target shooting angles of the plurality of pictures; arranging the plurality of pictures in a sequential ring based on their target key feature points and target shooting angles to obtain a plurality of sequentially ring-arranged pictures; and performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file.
According to an aspect of the present application, there is provided a storage medium having stored therein a computer program, wherein the computer program is arranged to execute the above model file generation method for 3D printing at run-time.
According to an aspect of the present application, there is provided an electronic device comprising a memory and a processor, wherein the memory has a computer program stored therein and the processor is arranged to run the computer program to perform the above model file generation method for 3D printing.
By means of the above technical solutions, the model file generation method, medium and electronic device for 3D printing provided by the application allow a user to send pictures or a video through an APP, applet or web link; feature point extraction, shooting angle extraction and three-dimensional reconstruction are then performed on the pictures to obtain a three-dimensional model file. The operation is simple and convenient and costs nothing. The user needs neither to purchase expensive hardware nor to study intensively: it suffices to take a number of multi-angle pictures around an object with a shooting device such as a mobile phone or camera and upload them to an APP, applet or web page equipped with the model generation algorithm; after a short wait a three-dimensional model with good effect is generated, and if the effect is unsatisfactory, additional multi-angle pictures can be taken to regenerate a better three-dimensional model. This is of great significance for getting started with 3D printing and lets more people participate in it.
The foregoing is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the present application more apparent, a detailed description of the present application is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
Fig. 1 illustrates an implementation scenario of a model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 2 shows a flowchart of a model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 3 shows a flowchart of another model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of multi-angle shooting of an object in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 5 shows a schematic diagram of preliminary feature points in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 6 shows a schematic diagram of feature points after noise reduction in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of a histogram in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 8 shows a schematic diagram of target key feature points in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 9 shows a schematic diagram of a three-dimensional model in an example of the model file generation method for 3D printing provided by an embodiment of the present application;
Fig. 10 shows a schematic structural diagram of a model file generation device for 3D printing provided by an embodiment of the present application.
Detailed Description
In order to make the solution of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of protection of the present application. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with each other.
As analyzed above, both professional software modeling and scanning an object with a 3D scanner for three-dimensional reconstruction have limitations. The model file generation method for 3D printing provided by the embodiments of the application is simple and convenient, models quickly, and is particularly suitable for ordinary users to achieve three-dimensional modeling quickly, conveniently and at zero cost. In short, when a user faces an object to be 3D printed, or simply to be 3D modeled, the user shoots a video or several pictures at certain angles with a device such as a mobile phone or camera and uploads them through a 3D printer APP or applet; the APP or applet backend automatically completes the modeling process according to a preset algorithm and then presents the completed three-dimensional model file to the user, making it convenient to view, save, forward, or start 3D printing.
Referring to fig. 1, an implementation scenario of the model file generation method for 3D printing provided by an embodiment of the present application is shown. The scenario contains a user side and a server side. The user side comprises an APP, an applet or a web link, and the server side comprises a model file generation device for 3D printing. The user side can be understood as the terminal device used by the user, such as a smartphone, tablet or computer, and the APP, applet or web link refers to the client corresponding to the server-side model file generation device for 3D printing, such as the operating software (APP or applet) installed on the terminal device by the user to control a 3D printer.
Referring to fig. 2, a flowchart of the model file generation method for 3D printing provided by an embodiment of the present application is shown. The method quickly and simply implements model file generation at the server side corresponding to a 3D printer APP or applet, and comprises the following steps.
S201: receiving pictures or a video sent by the user side to obtain a plurality of pictures of the same article.
As described above, the user side may be a terminal device used by the user, with an APP or applet installed as the client for 3D printing. The article here broadly refers to a target object with three-dimensional characteristics, which may be an article of daily use, a building, or the like. When a user wants to model an article in three dimensions, the user can shoot the target article from multiple angles and in all directions with a device having a shooting function, such as a mobile phone, camera or watch, and submit the shot video or pictures to the APP or applet, which submits them to the server and starts the automatic modeling process. As can be seen, on the user side only simple shooting and uploading are needed, without excessive expertise or additional cost.
When the user shoots a video, after receiving the video the server side needs to sample it to obtain a plurality of pictures representing the article from various angles. Since the user's video should present the stereoscopic state of the article, the server side samples the received video at a preset frame interval to obtain a plurality of multi-angle pictures.
When the user shoots pictures, the pictures should present the article from its various angles at a preset angle interval, so the server side can use the received pictures directly. It will be appreciated that the smaller the angle interval of shooting, the more completely the article is displayed, but this increases the complexity of shooting for the user and the running time of the server-side algorithm. In general, therefore, the preset angle can be set within a reasonable range that keeps the number of pictures around ten or twenty; for example, with a preset angle range of 18° to 36°, setting 18° means 20 pictures are taken around the article, while setting 36° means 10 pictures. It is to be understood that the above angles are merely examples and not limiting.
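By way of illustration only, the server-side video sampling of S201 might look like the following minimal Python sketch, assuming OpenCV is available; the frame step and its mapping to an 18° interval are assumptions for the example, not values prescribed by the patent.

```python
import cv2  # OpenCV, an assumed dependency for this sketch

def sample_frames(video_path: str, frame_step: int = 30) -> list:
    """Sample one picture every `frame_step` frames from an orbit video.

    frame_step is a hypothetical preset: a roughly 20 s orbit at 30 fps
    with frame_step=30 yields about 20 pictures, i.e. an 18-degree interval.
    """
    capture = cv2.VideoCapture(video_path)
    pictures = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video
            break
        if index % frame_step == 0:
            pictures.append(frame)
        index += 1
    capture.release()
    return pictures
```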
S202: extracting the target key feature points and target shooting angles of the plurality of pictures.
Feature recognition and extraction from pictures is a key technique of image processing. Image features broadly include color features, texture features, shape features and the like. Embodiments of the application may use current or future image feature processing algorithms, for example SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF) and SURF (Speeded Up Robust Features).
SIFT is a scale-space-based local image feature description algorithm that remains invariant to image scaling and rotation, and even to affine transformation; the essence of the SIFT algorithm can be summarized as finding feature points (key points) in different scale spaces. The principle is to map (transform) an image into a collection of local feature vectors; these feature vectors are invariant to translation, scaling and rotation, as well as to illumination changes, affine and projective transformations. Briefly, the SIFT algorithm comprises: 1. extracting key points; 2. attaching detailed information (local features) to the key points; 3. comparing the feature points of the two images (key points with attached feature vectors) pairwise to find multiple pairs of mutually matching feature points, thereby establishing correspondence between scenes.
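The key-point extraction just described can be exercised with OpenCV's off-the-shelf SIFT implementation; the following minimal Python sketch is an illustration under that assumption (the file name is hypothetical), not the patent's own code:

```python
import cv2

image = cv2.imread("object_view.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input picture
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each keypoint carries a subpixel position, a scale and an orientation
# angle; position and orientation are the kind of per-point information
# the method relies on when comparing adjacent pictures.
for kp in keypoints[:5]:
    print(kp.pt, kp.size, kp.angle)
```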
ORB is used to quickly create feature vectors for key points in an image, which can then be used to identify objects in the image. FAST and BRIEF are its feature detection algorithm and vector creation algorithm, respectively. ORB first finds special regions in the image called key points: small prominent areas, such as corner points, where pixel values change sharply from light to dark. ORB then computes a corresponding feature vector for each key point. The feature vectors created by the ORB algorithm contain only 1s and 0s and are called binary feature vectors; the order of the 1s and 0s varies with the particular key point and its surrounding pixel region. The vector represents the intensity pattern around the key point, so multiple feature vectors can be used to identify a larger area and even a specific object in the image. ORB is notable for its very high speed and a degree of robustness to noise and image transformations such as rotation and scaling.
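For comparison, a minimal ORB sketch under the same OpenCV assumption; because ORB descriptors are binary feature vectors, Hamming distance is the natural proximity measure (the file names are hypothetical):

```python
import cv2

img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Binary descriptors are compared bit by bit, hence NORM_HAMMING.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "tentative matches between the two views")
```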
SURF is an improvement over SIFT and is primarily characterized by being fast.
Whichever algorithm is adopted, the aim is to extract the target key feature points and target shooting angles of the pictures.
In one embodiment, extracting the target key feature points and target shooting angle of a picture may comprise the following steps:
(1) Performing preliminary feature extraction and extremum detection on the picture to obtain a plurality of preliminary key feature points.
For example, based on a picture feature detection algorithm, at least one of the plurality of pictures is convolved with a Gaussian filter at a predetermined scale, the Gaussian convolution results at adjacent scales are subtracted, and the local maximum and minimum points are detected to obtain the preliminary key feature points containing the extreme-point result.
(2) Precisely locating and denoising the preliminary key feature points to obtain located and noise-reduced key feature points.
For example, the preliminary key feature points are precisely located using a Taylor expansion of the DoG function and interpolation, and the feature points are filtered against a preset Taylor value, removing those whose absolute value is smaller than the preset Taylor value.
(3) Assigning an initial main direction to the processed key feature points and performing correction to obtain the target key feature points and target shooting angles.
For example, gradient values and initial directions are calculated for the located and noise-reduced key feature points, a histogram of the pixels near each key feature point is determined, and correction is performed according to the direction with the largest share in the histogram, giving the target key feature points and target shooting angles.
S203: based on the target key feature points and target shooting angles of the pictures, arranging the pictures in a sequential ring to obtain a plurality of sequentially ring-arranged pictures.
S204: performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file.
Based on the key feature points and shooting angle of each picture, a three-dimensional model file can be constructed with a three-dimensional reconstruction method, which may be implemented with the SFM (Structure from Motion) algorithm, the SLAM (Simultaneous Localization and Mapping) algorithm, or the KinectFusion algorithm.
The SFM algorithm is an offline algorithm that performs three-dimensional reconstruction from a collection of unordered pictures. For example, a feature extraction algorithm such as SIFT extracts image features; the Euclidean distances between the feature points of two pictures are then calculated to match feature points, finding image pairs whose number of feature point matches meets the requirement. For each matching image pair, the epipolar geometry is computed, the F matrix is estimated, and the matching pairs are improved by RANSAC optimization. If feature points are propagated in a chain through the matching pairs, a track is formed.
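A sketch of the pairwise matching step just described, assuming OpenCV and SIFT descriptors; the ratio-test threshold 0.75 and the RANSAC parameters are conventional values, not taken from the patent:

```python
import cv2
import numpy as np

def match_pair(kp1, des1, kp2, des2):
    # Euclidean (L2) distance suits SIFT's float descriptors; Lowe's ratio
    # test keeps only matches that are clearly better than the runner-up.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = [p for p in matcher.knnMatch(des1, des2, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
    if len(good) < 8:
        return None  # too few matches to estimate epipolar geometry
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Estimate the fundamental (F) matrix with RANSAC; the inlier mask
    # improves the matching pair, as in the SFM description above.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    if F is None:
        return None
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return F, inliers
```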
The SLAM algorithm is an algorithm for realizing object positioning, mapping and path planning.
The KinectFusion algorithm is a real-time tracking and reconstruction algorithm based on an RGBD camera, suitable for any ambient illumination.
Whichever algorithm is adopted, the purpose of three-dimensional reconstruction is achieved.
In one embodiment, the three-dimensional reconstruction may comprise the following steps:
(1) Based on the target key feature points and target shooting angles of the plurality of pictures, performing feature matching on every two adjacent pictures and determining a sequence of pictures in which the shooting angle gradually increases between every two adjacent pictures;
(2) Arranging the pictures in a ring according to the sequence, projecting rays onto the pictures to form a plurality of viewing cones, intersecting the viewing cones, and keeping the spatial points with high repeatability to form the three-dimensional model file.
In one embodiment, since the plurality of pictures are obtained by photographing the article at a certain angle interval, for example 20 pictures taken at 18° intervals around the article, feature matching is performed on the pictures, and a sequence of pictures is determined in which the shooting angle gradually increases, for example by 18°, between every two adjacent pictures.
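One plausible way to realize this angle-increasing ordering is a greedy chain over pairwise match counts. The helper below is an illustrative assumption, not the patent's specified procedure; match_counts could be filled in with a matcher like the sketch above:

```python
def order_ring(match_counts, n):
    """Greedily chain pictures: start anywhere and repeatedly append the
    unused picture sharing the most feature matches with the current one.
    match_counts[i][j] = number of good matches between pictures i and j.
    """
    order, used = [0], {0}
    while len(order) < n:
        last = order[-1]
        nxt = max((j for j in range(n) if j not in used),
                  key=lambda j: match_counts[last][j])
        order.append(nxt)
        used.add(nxt)
    return order  # ring order of gradually increasing shooting angle
```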
In a specific embodiment, the process of projecting rays onto the pictures to form viewing cones is as follows: the system algorithm generates a plurality of points, one per picture, and every point has the same position relative to its picture. For example, if one point lies at the middle of its picture, then all points lie at the middle of their pictures, so the generated viewing cones stay at the same horizontal level. Each picture and its corresponding point form a viewing cone; the pictures and points together form a plurality of viewing cones, which are intersected with one another point to point, and the intersected points and pictures finally form the three-dimensional model file.
After the server side completes the three-dimensional modeling, the three-dimensional model file can be sent over the network to the APP or applet on the user side. The user can receive and view the three-dimensional model file through the APP or applet, perform operations such as viewing, saving and forwarding, and also start 3D printing according to the three-dimensional model file.
Referring to fig. 3, another flowchart of a model file generating method for 3D printing according to an embodiment of the present application is shown, where the flowchart shows a manner of further adjusting an initially generated three-dimensional model file.
S301: receiving pictures or a video sent by the user side to obtain a plurality of pictures of the same article;
S302: extracting the target key feature points and target shooting angles of the plurality of pictures;
S303: based on the target key feature points and target shooting angles of the pictures, arranging the pictures in a sequential ring to obtain a plurality of sequentially ring-arranged pictures;
S304: performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file;
S305: judging whether the three-dimensional model file meets the model requirement; if not, executing S306, and if so, executing S307.
Whether the three-dimensional model file meets the model requirement can be judged by the server side according to preset model evaluation parameters, or according to satisfaction feedback from the user side.
S306: prompting the user through the APP or applet to provide more supplementary pictures, and returning to S302-S304 to regenerate the three-dimensional model file based on the supplementary pictures and the original pictures;
S307: issuing the final three-dimensional model file to the user side (e.g. the APP or applet).
The model file generation method for 3D printing provided in the embodiment of the present application is schematically described below with a specific example.
First step: obtaining a multi-angle picture set of the article.
The first way is to record a video around the article that completely reveals its stereoscopic state. Pictures are then extracted from the video frames at a certain frame interval by video sampling; the extracted pictures are multi-angle pictures of the article. This way of collection is intuitive, simple to use and easy to pick up.
The second way to obtain a multi-angle picture set of the article is to shoot a number of pictures of the article at a certain angle interval, obtaining multi-angle pictures of the article. For the subsequent algorithm, the more pictures and the smaller the acquisition angle, the finer the generated three-dimensional model; but this increases the user's shooting cost and difficulty to a certain extent and increases the workload of the algorithm. It is therefore proposed to grade the fineness of the generated model, varying the shooting angle between 18° and 36°: the smaller the angle, the more pictures are generated and the finer the three-dimensional model. In actual operation, the shooting need not strictly follow an 18° angle; it suffices to follow a uniform angle interval between 18° and 36°. Here, pictures are collected at a theoretical angle of 18°, giving 20 multi-angle photographs of the object in total, as shown in fig. 4.
Second step: extracting the target key feature points and target shooting angles of the pictures.
The SIFT algorithm is used as an example in this example.
SIFT, also called a local feature description algorithm, mainly extracts local features from a shot picture, such as its scale, rotation direction and shooting angle, and has strong anti-interference properties; a good extraction result is obtained even if the brightness of the pictures is uneven or the angles are not smooth. The embodiment of the application mainly exploits SIFT's direction invariance, strong anti-interference ability and complete feature description, which allow the feature points of two adjacent pictures to be compared and matched accurately, an important basis for subsequently calculating the shooting angle of each picture.
The specific implementation is described below.
(1) Detecting extreme points to obtain scale invariance.
For example, based on a picture feature detection algorithm, at least one of the plurality of pictures is convolved with a Gaussian filter at a predetermined scale, the Gaussian convolution results at adjacent scales are subtracted, and the local maximum and minimum points are detected to obtain preliminary key feature points containing the extreme-point result.
This step mainly outlines the features of the picture, producing a picture with rough contour points attached, as shown in fig. 5.
Specifically, the picture I(x, y) is convolved with the Gaussian filter G(x, y, kσ) at a certain scale k_iσ (formula (1)), and the Gaussian convolution results L(x, y, kσ) at adjacent scales are subtracted, with the local maximum and minimum points detected (formula (2)), to obtain the result D(x, y, σ) containing the extreme points:

L(x, y, kσ) = G(x, y, kσ) * I(x, y) …… formula (1)

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ) …… formula (2)
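A numerical sketch of formulas (1) and (2) and of the extremum test, assuming NumPy and SciPy are available; sigma = 1.6 and k = √2 are the customary SIFT choices, assumed here rather than taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma=1.6, k=2 ** 0.5):
    """Formula (1): Gaussian convolution at two nearby scales;
    formula (2): their difference D(x, y, sigma)."""
    img = image.astype(np.float32)
    L_lo = gaussian_filter(img, sigma)       # L(x, y, sigma)
    L_hi = gaussian_filter(img, k * sigma)   # L(x, y, k*sigma)
    return L_hi - L_lo                       # D(x, y, sigma)

def is_extremum(D_prev, D_cur, D_next, x, y):
    """A preliminary key feature point is a local maximum or minimum among
    its 26 neighbours across three adjacent DoG scales (interior points)."""
    cube = np.stack([D[x - 1:x + 2, y - 1:y + 2]
                     for D in (D_prev, D_cur, D_next)])
    v = D_cur[x, y]
    return v == cube.max() or v == cube.min()
```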
(2) Further refining the feature points and denoising the picture.
For example, the preliminary key feature points are precisely located using a Taylor expansion of the DoG function and interpolation, giving located key feature points; the located key feature points are then filtered against a preset Taylor value, removing feature points whose absolute value is smaller than the preset Taylor value, to obtain the located and noise-reduced key feature points.
Since the preliminary key feature points above are extracted and detected at whole-pixel positions, to make the result more accurate they need to be located more precisely at sub-pixel level. The precise location is obtained by a Taylor expansion of the DoG function (formula (3)) followed by interpolation (formula (4)). For a preliminary key feature point A, the Taylor expansion of D(x, y, σ) can be expressed as:

D(x) = D + (∂D^T/∂x) x + (1/2) x^T (∂²D/∂x²) x …… formula (3)

where x is the offset from point A. Setting the derivative of the Taylor expansion to zero gives the optimal point, namely:

x̂ = −(∂²D/∂x²)⁻¹ (∂D/∂x) …… formula (4)

For example, if x̂ is greater than 0.5, the extreme point is closer to a neighbouring pixel; conversely, the interpolated offset stays close to the original extreme point. The interpolation can then be combined with the extreme points to obtain more accurate key feature points.
Noise reduction is then performed: for example, feature points with |D(x̂)| smaller than 0.03 are filtered out, achieving noise reduction.
The feature contours of the pictures are compared by a proximity algorithm for further refinement, giving more accurate features, and noise reduction is performed, as shown in fig. 6.
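The refinement and contrast filtering of formulas (3) and (4) can be sketched with finite differences on a DoG stack D[scale, row, col]; this follows the standard SIFT recipe and is an illustration, not the patent's exact implementation:

```python
import numpy as np

def refine_keypoint(D, s, x, y):
    """Formula (4): x_hat = -(d2D/dx2)^-1 * dD/dx, via finite differences.
    Returns the sub-pixel offset and whether the point survives the
    low-contrast filter |D(x_hat)| >= 0.03. Assumes an interior point."""
    # First derivatives (central differences) over row, column and scale.
    g = np.array([(D[s, x + 1, y] - D[s, x - 1, y]) / 2.0,
                  (D[s, x, y + 1] - D[s, x, y - 1]) / 2.0,
                  (D[s + 1, x, y] - D[s - 1, x, y]) / 2.0])
    # Second derivatives (Hessian), also by finite differences.
    dxx = D[s, x + 1, y] - 2 * D[s, x, y] + D[s, x - 1, y]
    dyy = D[s, x, y + 1] - 2 * D[s, x, y] + D[s, x, y - 1]
    dss = D[s + 1, x, y] - 2 * D[s, x, y] + D[s - 1, x, y]
    dxy = (D[s, x + 1, y + 1] - D[s, x + 1, y - 1]
           - D[s, x - 1, y + 1] + D[s, x - 1, y - 1]) / 4.0
    dxs = (D[s + 1, x + 1, y] - D[s + 1, x - 1, y]
           - D[s - 1, x + 1, y] + D[s - 1, x - 1, y]) / 4.0
    dys = (D[s + 1, x, y + 1] - D[s + 1, x, y - 1]
           - D[s - 1, x, y + 1] + D[s - 1, x, y - 1]) / 4.0
    H = np.array([[dxx, dxy, dxs], [dxy, dyy, dys], [dxs, dys, dss]])
    x_hat = -np.linalg.solve(H, g)  # may raise if H is singular; a sketch
    # Formula (3) evaluated at the optimum, truncated at first order.
    D_hat = D[s, x, y] + 0.5 * g.dot(x_hat)
    return x_hat, abs(D_hat) >= 0.03  # noise reduction threshold from the text
```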
(3) Assigning a direction to the feature points using their rotation invariance, and further locating and correcting them.
For example, gradient values and initial directions are calculated for the processed key feature points, a histogram over preset pixels near each processed key feature point is determined, and the direction with the largest share in the histogram is taken as the main direction of that key feature point, giving the target key feature points and target shooting angles.
First, the gradient values and directions of all pixels within a circle of radius 1 pixel, on the Gaussian image scale where each key feature point is located, are calculated.
The gradient magnitude m for a key feature point z(x, y) is calculated as follows:

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) …… formula (5)

The direction θ for a key feature point z(x, y) is calculated as follows:

θ(x, y) = atan2(L(x, y+1) − L(x, y−1), L(x+1, y) − L(x−1, y)) …… formula (6)

Note that atan2 is the C-language API function that obtains the angle determined by two coordinate components.
Once this is done, a histogram over the pixels within 1 pixel of the key feature point is obtained; the histogram contains many directions, and the direction with the largest share is the main direction of the feature point, as shown in fig. 7.
After the key feature points are corrected and located again and the direction is repaired, the feature points of the picture are clearer and more accurate, with stronger anti-interference. A schematic diagram of the target key feature points is shown in fig. 8. The object contour in fig. 8 serves as the silhouette picture Si for the subsequent three-dimensional reconstruction, together with the picture shooting angle Pi; Si and Pi can be read through the picture's EXIF fields.
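A small sketch of the main-direction assignment, applying formulas (5) and (6) over the 3×3 neighbourhood (radius 1 pixel) of a key feature point; the 36-bin histogram (10° per bin) is the usual SIFT resolution, assumed here rather than stated in the patent:

```python
import numpy as np

def main_orientation(L, x, y, bins=36):
    """Build the gradient-direction histogram around (x, y) on the Gaussian
    image L and return the direction with the largest share, in degrees.
    Assumes (x, y) is an interior point of L."""
    hist = np.zeros(bins)
    for i in range(x - 1, x + 2):
        for j in range(y - 1, y + 2):
            m = np.hypot(L[i + 1, j] - L[i - 1, j],
                         L[i, j + 1] - L[i, j - 1])          # formula (5)
            theta = np.arctan2(L[i, j + 1] - L[i, j - 1],
                               L[i + 1, j] - L[i - 1, j])    # formula (6)
            b = int(((theta % (2 * np.pi)) / (2 * np.pi)) * bins) % bins
            hist[b] += m  # weight each direction by its gradient magnitude
    return hist.argmax() * (360.0 / bins)
```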
Third step: three-dimensional reconstruction.
With the above preparation complete, the accurate silhouette Si of each picture and the picture shooting angle Pi have been obtained, and generation of the model can begin.
Those skilled in the art will appreciate that the human brain can obtain three-dimensional information from a moving object because it finds matches in the moving 2D images and derives relative depth information from the parallax between the matching points. The model is generated on this principle; SFM is used here as an example for the three-dimensional reconstruction.
The three-dimensional reconstruction comprises three steps:
1. Feature detection: the feature description of each picture is obtained following the idea of the SIFT algorithm.
2. Feature matching: 2 pictures are drawn from the picture set and matched in a loop; the two pictures with the most similar feature descriptions are adjacent pictures, and the loop repeats until a sequence of pictures is obtained in which adjacent pictures have gradually increasing angles.
3. The pictures are arranged in a sequential ring, back-projected to form a plurality of viewing cones, the viewing cones are intersected, and the spatial points with high repeatability are kept to form the initial three-dimensional model.
The specific calculation process is as follows:
Input: the pictures Ri shot by the user, the corresponding picture silhouettes Si and the corresponding picture shooting angles Pi.
Start:
initialize a polyhedron V the size of picture R:
V = box(R)
loop over each input picture Ri:
generate a viewing cone for the picture:
Ci = computeCone(Si, Pi)
intersect the polyhedron with the corresponding viewing cone:
V = V ∩ Ci …… formula (7)
end loop.
Output: the initial three-dimensional model V.
Notes:
box is a cube; a common cube is constructed using the geometry of the image processing framework Cesium;
the computeCone method is the viewing-cone construction function of the image processing framework Cesium.
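The box/computeCone pseudocode above amounts to a visual-hull intersection. Below is a discrete sketch using voxel carving, where `project` is an assumed camera function derived from the shooting angles Pi; the patent itself intersects cone geometry built with Cesium instead, so this is a stand-in illustration:

```python
import numpy as np

def visual_hull(silhouettes, project, grid):
    """Voxel-carving sketch of the viewing-cone intersection (formula (7)).

    silhouettes: list of boolean masks S_i (True inside the object).
    project(i, pts): hypothetical camera model built from shooting angle P_i,
                     mapping an Nx3 array of world points to Nx2 pixels.
    grid: Nx3 array of candidate voxel centres (the initial box V).
    """
    keep = np.ones(len(grid), dtype=bool)
    for i, S in enumerate(silhouettes):
        uv = np.round(project(i, grid)).astype(int)
        h, w = S.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid), dtype=bool)
        hit[inside] = S[uv[inside, 1], uv[inside, 0]]
        keep &= hit  # V = V ∩ C_i, one viewing cone at a time
    return grid[keep]  # surviving space points form the initial model
```

A voxel survives only if every silhouette sees it inside the object, which is exactly the repeated intersection V = V ∩ Ci of formula (7), discretized onto a grid.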
A schematic of the generated three-dimensional model is shown in fig. 9.
Compared with traditional model file generation methods, the above model file generation method for 3D printing has the advantages of simplicity, convenience and low cost.
Common 3D printing modeling either uses 3D modeling software or scans and reconstructs the object with a professional scanning tool. On the one hand, the learning or purchase cost of these methods is too high and the modeling threshold hard to cross; on the other hand, the 3D printing workflow is fragmented: 3D modeling and 3D printing are separated, the model must first be built in one place and then transferred to a 3D printer for printing, and the whole flow is not smooth. The present solution requires no additional equipment and no intensive study of modeling; its technical implementation is simple and its idea clear. Pictures shot on the user's device can be read directly on mobile devices and PC web pages to generate a model, which is sent directly to the printer for printing, making the whole process smoother and unified rather than fragmented.
Traditional 3D printing model creation is mainly done by professionals with large three-dimensional tools, or by laser scanning with a laser scanner for three-dimensional reconstruction; either way it is hard for ordinary users to accept, suffering from high learning cost, limited molding effect, high acquisition cost and inconvenience. The present solution requires neither the purchase of expensive hardware nor intensive study: it suffices to take a number of multi-angle pictures around the object with a shooting device such as a mobile phone or camera and upload them to the model generation algorithm, and after a short wait a three-dimensional model with good effect is generated; if the effect is unsatisfactory, additional multi-angle pictures can be taken to regenerate a better three-dimensional model. This is of great significance for getting started with 3D printing and lets more people participate in it.
Corresponding to the above model file generation method for 3D printing, an embodiment of the present application further provides a model file generation device for 3D printing. Referring to fig. 10, a schematic structural diagram of the device is shown. The device comprises:
The picture determination module 1001, configured to receive pictures or a video sent by a user side and obtain a plurality of pictures of the same article;
the feature and angle determination module 1002, configured to extract, from the plurality of pictures, the target key feature points and target shooting angles of the plurality of pictures;
the three-dimensional reconstruction module 1003, configured to arrange the plurality of pictures in a sequential ring based on their target key feature points and target shooting angles, and to perform three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file.
In one embodiment, the picture determining module 1001 is specifically configured to:
receiving a video sent by the user side, where the video presents the stereoscopic state of the article, and sampling the video at a preset frame interval to obtain the plurality of pictures; or,
receiving a plurality of pictures sent by the user side, where the pictures present the article from its various angles at a preset angle interval.
In one embodiment, the feature and angle determination module 1002 includes:
the preliminary feature point extraction submodule 10021, configured to perform preliminary feature extraction and extremum detection on the plurality of pictures to obtain a plurality of preliminary key feature points;
the locating and noise reduction submodule 10022, configured to precisely locate and denoise the preliminary key feature points to obtain located and noise-reduced key feature points;
and the correction submodule 10023, configured to assign an initial main direction to the processed key feature points and perform correction to obtain the target key feature points and target shooting angles.
In one embodiment,
the preliminary feature point extraction submodule 10021 is specifically configured to: based on a picture feature detection algorithm, convolve at least one of the plurality of pictures with a Gaussian filter at a predetermined scale, subtract the Gaussian convolution results at adjacent scales, and detect the local maximum and minimum points to obtain the preliminary key feature points containing the extreme-point result;
the locating and noise reduction submodule 10022 is specifically configured to: precisely locate the preliminary key feature points using a Taylor expansion of the DoG function and interpolation, to obtain located key feature points; and filter the located key feature points against a preset Taylor value, removing feature points whose absolute value is smaller than the preset Taylor value, to obtain the located and noise-reduced key feature points;
the correction submodule 10023 is specifically configured to: calculate gradient values and initial directions for the processed key feature points, determine a histogram over preset pixels near each processed key feature point, and take the direction with the largest share in the histogram as the main direction of that key feature point, obtaining the target key feature points and target shooting angles.
In one embodiment, for a preliminary key feature point A, the Taylor expansion of D(x, y, σ) is:

D(x) = D + (∂D^T/∂x) x + (1/2) x^T (∂²D/∂x²) x

where x is the offset from point A.
In one embodiment, the three-dimensional reconstruction module 1003 further comprises:
the feature matching submodule 10031, configured to perform feature matching on every two adjacent pictures based on the target key feature points and target shooting angles of the plurality of pictures, and determine a sequence of pictures in which the shooting angle gradually increases between every two adjacent pictures;
the picture ordering submodule 10032, configured to arrange the pictures in a ring according to the sequence, project rays onto the pictures to form a plurality of viewing cones, intersect the viewing cones, and keep the spatial points with high repeatability to form the three-dimensional model file.
In one embodiment, the device further comprises:
the model determination unit 1004, configured to judge whether the three-dimensional model file meets the model requirement and, if not, obtain more supplementary pictures through the picture determination module 1001 and have the feature and angle determination module 1002 and the three-dimensional reconstruction module 1003 regenerate the three-dimensional model file based on the supplementary pictures and the original pictures.
In one embodiment, the three-dimensional model file is issued to the user side through the APP or applet, so that the user side can view, forward and save the three-dimensional model file and/or start the 3D printing process for it.
Embodiments of the present application also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Optionally, in this embodiment, the storage medium may be configured to store a computer program for performing the following steps:
(1) Receiving pictures or a video sent by a user side to obtain a plurality of pictures of the same article;
(2) Extracting the target key feature points and target shooting angles of the plurality of pictures;
(3) Arranging the plurality of pictures in a sequential ring based on their target key feature points and target shooting angles to obtain a plurality of sequentially ring-arranged pictures;
(4) Performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
Embodiments of the present application also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to perform the following steps through the computer program:
(1) Receiving pictures or a video sent by a user side to obtain a plurality of pictures of the same article;
(2) Extracting the target key feature points and target shooting angles of the plurality of pictures;
(3) Arranging the plurality of pictures in a sequential ring based on their target key feature points and target shooting angles to obtain a plurality of sequentially ring-arranged pictures;
(4) Performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain a three-dimensional model file.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
The above embodiment numbers of the present application are for description only and do not represent the merits of the embodiments.
In the above embodiments of the present application, each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented as software functional units and sold or used as standalone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the scope of protection of the present application.

Claims (9)

1. A model file generation method for 3D printing, the method comprising:
receiving pictures or a video sent by a user side to obtain a plurality of pictures of a same article, wherein the plurality of pictures are obtained by shooting around the article in a circle;
extracting target key feature points and target shooting angles of the plurality of pictures;
arranging the plurality of pictures in a sequential ring based on the target key feature points and the target shooting angles of the plurality of pictures, to obtain a plurality of sequentially ring-arranged pictures;
performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures, to obtain a three-dimensional model file;
wherein the arranging the plurality of pictures in a sequential ring based on the target key feature points and the target shooting angles of the plurality of pictures to obtain the plurality of sequentially ring-arranged pictures, and the performing three-dimensional reconstruction based on the plurality of sequentially ring-arranged pictures to obtain the three-dimensional model file, comprise:
performing feature matching on every two adjacent pictures based on the target key feature points and the target shooting angles of the plurality of pictures obtained by shooting around the article, and determining a sequence in which the shooting angles of every two adjacent pictures increase gradually; and
arranging the plurality of pictures shot around the article in a ring according to the sequence, projecting rays onto the plurality of pictures to form a plurality of viewing cones, intersecting the plurality of viewing cones, and retaining spatial points with high repeatability to form the three-dimensional model file, wherein, in the process of forming the plurality of viewing cones, the rays are projected onto the same positions of all of the plurality of pictures, so that the plurality of viewing cones are kept at the same horizontal level.
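By way of non-limiting illustration, the following is a minimal sketch of the adjacent-picture feature matching recited in claim 1, using OpenCV's SIFT detector and a brute-force matcher as stand-ins for the patent's unspecified picture feature detection algorithm; the file paths and the 0.75 ratio are illustrative assumptions.

```python
# A minimal sketch of feature matching between two neighbouring shots of the
# article. SIFT is used here because claims 3-4 describe a DoG-style pipeline;
# the patent does not name a specific library.
import cv2

def match_adjacent(path_a, path_b, ratio=0.75):
    """Return the number of good SIFT matches between two neighbouring pictures."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, desc_a = sift.detectAndCompute(img_a, None)
    _, desc_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test filters out ambiguous matches.
    good = [m for m, n in matcher.knnMatch(desc_a, desc_b, k=2)
            if m.distance < ratio * n.distance]
    return len(good)
```

The match count between candidate neighbours can then be combined with the extracted shooting angles to fix the sequence in which the angles of every two adjacent pictures increase gradually.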
2. The method of claim 1, wherein the receiving pictures or a video sent by a user side to obtain a plurality of pictures of a same article comprises:
receiving a video sent by the user side, wherein the video presents the article in three dimensions, and sampling the video according to a preset number of frames to obtain the plurality of pictures; or
receiving a plurality of pictures sent by the user side, wherein the pictures respectively show the appearance of the article from each angle at a preset angular interval.
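By way of non-limiting illustration, the following is a minimal sketch of the video-sampling branch of claim 2, assuming OpenCV for decoding; the frame interval of 30 is an illustrative value for the preset number of frames.

```python
# A minimal sketch of sampling a user-supplied video at a preset frame
# interval to obtain the plurality of pictures.
import cv2

def sample_video(video_path, every_n_frames=30):
    """Keep one frame out of every `every_n_frames` frames of the video."""
    cap = cv2.VideoCapture(video_path)
    pictures, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or decode failure
            break
        if index % every_n_frames == 0:
            pictures.append(frame)
        index += 1
    cap.release()
    return pictures
```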
3. The method of claim 1, wherein the extracting target key feature points and target shooting angles of the plurality of pictures comprises:
performing preliminary feature extraction and extremum detection on the plurality of pictures to obtain a plurality of preliminary key feature points;
performing accurate positioning and noise reduction on the preliminary key feature points to obtain positioned and noise-reduced key feature points; and
assigning an initial main direction to the processed key feature points and performing correction, to obtain the target key feature points and the target shooting angles.
4. The method of claim 3, wherein:
the performing preliminary feature extraction and extremum detection on the plurality of pictures to obtain a plurality of preliminary key feature points comprises: based on a picture feature detection algorithm, convolving at least one of the plurality of pictures with a Gaussian filter at preset scales, and taking the difference of the Gaussian convolution results to detect local maximum points and local minimum points, to obtain the preliminary key feature points comprising extreme-point results;
the performing accurate positioning and noise reduction on the preliminary key feature points to obtain the positioned and noise-reduced key feature points comprises: accurately positioning the preliminary key feature points by using a Taylor expansion and interpolation of the DoG function, to obtain key feature points after positioning; and filtering the key feature points after positioning according to a preset Taylor value, and removing feature points whose absolute values are smaller than the preset Taylor value, to obtain the positioned and noise-reduced key feature points; and
the assigning an initial main direction to the processed key feature points and performing correction to obtain the target key feature points and the target shooting angles comprises: calculating a gradient value and an initial direction for the processed key feature points, determining a histogram over preset pixels near the processed key feature points, and taking the direction accounting for the largest proportion in the histogram as the main direction of the processed key feature points, to obtain the target key feature points and the target shooting angles.
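By way of non-limiting illustration, the following sketch follows the classic difference-of-Gaussian recipe underlying claims 3 and 4: extremum detection in a DoG image, removal of feature points whose absolute response falls below a preset threshold, and a gradient-orientation histogram whose peak gives the main direction. The sigma values, the 0.03 threshold, and the neighbourhood radius are illustrative assumptions, not values taken from the patent.

```python
# A minimal sketch of DoG extremum detection, the contrast ("Taylor value")
# filter, and the orientation histogram that yields each keypoint's main
# direction. Scale-space pyramids and subpixel Taylor refinement are omitted.
import cv2
import numpy as np

def dog_extrema(gray, sigma1=1.6, sigma2=3.2, contrast_threshold=0.03):
    """Find DoG extrema and drop low-contrast points (the noise-reduction step)."""
    g1 = cv2.GaussianBlur(gray, (0, 0), sigma1)
    g2 = cv2.GaussianBlur(gray, (0, 0), sigma2)
    dog = (g2.astype(np.float32) - g1.astype(np.float32)) / 255.0
    keypoints = []
    for y in range(1, dog.shape[0] - 1):
        for x in range(1, dog.shape[1] - 1):
            patch = dog[y - 1:y + 2, x - 1:x + 2]
            v = dog[y, x]
            is_extremum = v == patch.max() or v == patch.min()
            # Remove feature points whose absolute value is below the preset
            # threshold, as in claim 4's noise-reduction step.
            if is_extremum and abs(v) >= contrast_threshold:
                keypoints.append((x, y))
    return keypoints

def main_direction(gray, x, y, radius=8, bins=36):
    """Histogram of gradient directions near (x, y); the peak bin is the main direction."""
    region = gray[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
    gx = cv2.Sobel(region, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(region, cv2.CV_32F, 0, 1)
    angles = (np.degrees(np.arctan2(gy, gx)) + 360.0) % 360.0
    magnitudes = np.sqrt(gx ** 2 + gy ** 2)
    hist, edges = np.histogram(angles, bins=bins, range=(0, 360), weights=magnitudes)
    return edges[int(np.argmax(hist))]  # direction with the largest proportion
```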
5. The method of any one of claims 1, 2, 3, and 4, wherein the model file generation method is implemented at a server side corresponding to a 3D printer APP or applet, and the method further comprises:
judging whether the three-dimensional model file meets a model requirement, and if not, prompting the user to provide supplementary pictures and regenerating the three-dimensional model file based on the supplementary pictures and the original plurality of pictures.
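By way of non-limiting illustration, a minimal sketch of the check-and-regenerate loop of claim 5; `reconstruct`, `meets_model_requirement`, and `request_supplementary_pictures` are hypothetical placeholders for the patent's unspecified reconstruction, quality check, and user-prompt steps.

```python
# A minimal sketch of the server-side loop: build a model, check it against
# the model requirement, and rebuild from old + supplementary pictures if it
# fails. `max_rounds` is an illustrative safety bound, not from the patent.
def generate_model(pictures, reconstruct, meets_model_requirement,
                   request_supplementary_pictures, max_rounds=3):
    model = reconstruct(pictures)
    for _ in range(max_rounds):
        if meets_model_requirement(model):
            break
        # Prompt the user for more shots and rebuild from all pictures so far.
        pictures = pictures + request_supplementary_pictures()
        model = reconstruct(pictures)
    return model
```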
6. The method of any one of claims 1, 2, 3, and 4, wherein the model file generation method is implemented at a server side corresponding to a 3D printer APP or applet, and the method further comprises:
issuing the three-dimensional model file to the user side through the APP or applet, so that the user side views, forwards, saves, and/or starts 3D printing processing with respect to the three-dimensional model file.
7. The method of any one of claims 1, 2, 3, and 4, wherein:
the expression of the viewing cone formed by projecting rays onto a picture is set as:
cone(Ri) = ComputeCone(Si, Pi), where Ri denotes each picture, Si denotes the picture silhouette corresponding to each picture, Pi denotes the target shooting angle of each picture, and ComputeCone is a viewing-cone constructor; and
the expression of the viewing-cone intersection processing is set as:
VH(R) = VH(R) ∩ cone(Ri), where VH(R) denotes the polyhedron corresponding to the pictures.
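By way of non-limiting illustration, the intersection VH(R) = VH(R) ∩ cone(Ri) can be realised by silhouette-based voxel carving, in which a space point is retained only if it projects inside every picture silhouette. In the sketch below, `project` is a hypothetical camera model standing in for the target shooting angle Pi; a real system would use calibrated projection matrices.

```python
# A minimal sketch of viewing-cone intersection as voxel carving. `project`
# is assumed to map 3D points to integer pixel coordinates (u, v) for a
# given camera pose; silhouettes are binary masks (nonzero = article).
import numpy as np

def carve_visual_hull(silhouettes, project, grid_size=64, extent=1.0):
    """Keep only voxels whose projections fall inside every picture silhouette."""
    axis = np.linspace(-extent, extent, grid_size)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    points = np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1)
    inside = np.ones(len(points), dtype=bool)  # start from the full volume VH(R)
    for silhouette, pose in silhouettes:
        u, v = project(points, pose)           # rays through silhouette Si form cone(Ri)
        h, w = silhouette.shape
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(points), dtype=bool)
        hit[valid] = silhouette[v[valid], u[valid]] > 0
        inside &= hit                          # VH(R) = VH(R) ∩ cone(Ri)
    return points[inside]
```

Starting from the full voxel volume and intersecting one cone per picture mirrors the iterative form of the expression: each picture can only remove space, never add it.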
8. A storage medium having a computer program stored therein, wherein the computer program is arranged to perform the method of any of claims 1 to 7 when run.
9. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the method of any of claims 1 to 7.
CN202211175701.XA 2022-09-26 2022-09-26 Model file generation method and medium for 3D printing and electronic equipment Active CN115272587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211175701.XA CN115272587B (en) 2022-09-26 2022-09-26 Model file generation method and medium for 3D printing and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211175701.XA CN115272587B (en) 2022-09-26 2022-09-26 Model file generation method and medium for 3D printing and electronic equipment

Publications (2)

Publication Number Publication Date
CN115272587A CN115272587A (en) 2022-11-01
CN115272587B (en) 2023-05-30

Family

ID=83757418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211175701.XA Active CN115272587B (en) 2022-09-26 2022-09-26 Model file generation method and medium for 3D printing and electronic equipment

Country Status (1)

Country Link
CN (1) CN115272587B (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101195942B1 (en) * 2006-03-20 2012-10-29 삼성전자주식회사 Camera calibration method and 3D object reconstruction method using the same
EP2534835A4 (en) * 2010-02-12 2017-04-05 The University of North Carolina At Chapel Hill Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information
JP5158223B2 (en) * 2011-04-06 2013-03-06 カシオ計算機株式会社 3D modeling apparatus, 3D modeling method, and program
CN105631859B (en) * 2015-12-21 2016-11-09 中国兵器工业计算机应用技术研究所 Three-degree-of-freedom bionic stereo visual system
CN109859305B (en) * 2018-12-13 2020-06-30 中科天网(广东)科技有限公司 Three-dimensional face modeling and recognizing method and device based on multi-angle two-dimensional face
CN110097051B (en) * 2019-04-04 2024-07-19 平安科技(深圳)有限公司 Image classification method, apparatus and computer readable storage medium
CN111369681B (en) * 2020-03-02 2022-04-15 腾讯科技(深圳)有限公司 Three-dimensional model reconstruction method, device, equipment and storage medium
CN112435326A (en) * 2020-11-20 2021-03-02 深圳市慧鲤科技有限公司 Printable model file generation method and related product
CN114676763A (en) * 2022-03-14 2022-06-28 中建西南咨询顾问有限公司 Construction progress information processing method

Also Published As

Publication number Publication date
CN115272587A (en) 2022-11-01

Similar Documents

Publication Publication Date Title
Liu et al. 3D imaging, analysis and applications
US11721067B2 (en) System and method for virtual modeling of indoor scenes from imagery
Fuhrmann et al. Mve-a multi-view reconstruction environment.
Furukawa et al. Multi-view stereo: A tutorial
US9619933B2 (en) Model and sizing information from smartphone acquired image sequences
Scharstein View synthesis using stereo vision
US10410089B2 (en) Training assistance using synthetic images
Stoykova et al. 3-D time-varying scene capture technologies—A survey
Kordelas et al. State-of-the-art algorithms for complete 3d model reconstruction
WO2022021782A1 (en) Method and system for automatically generating six-dimensional posture data set, and terminal and storage medium
Taubin et al. 3d scanning for personal 3d printing: build your own desktop 3d scanner
Park et al. Surface light field fusion
Germann et al. Novel‐View Synthesis of Outdoor Sport Events Using an Adaptive View‐Dependent Geometry
CN110059537B (en) Three-dimensional face data acquisition method and device based on Kinect sensor
Casas et al. Rapid photorealistic blendshape modeling from RGB-D sensors
CN114399610A (en) Texture mapping system and method based on guide prior
Kumara et al. Real-time 3D human objects rendering based on multiple camera details
Hartl et al. Rapid reconstruction of small objects on mobile phones
Khan et al. Towards monocular neural facial depth estimation: Past, present, and future
CN115272587B (en) Model file generation method and medium for 3D printing and electronic equipment
Hasinoff et al. Search-and-replace editing for personal photo collections
CN110177216A (en) Image processing method, device, mobile terminal and storage medium
Nguyen et al. High resolution 3d content creation using unconstrained and uncalibrated cameras
Gledhill 3D panoramic imaging for virtual environment construction
Lee et al. Mobile phone-based 3d modeling framework for instant interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant