CN111179433A - Three-dimensional modeling method and device for target object, electronic device and storage medium - Google Patents


Publication number
CN111179433A
Authority
CN
China
Prior art keywords
local
target object
subspace
point
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911423695.3A
Other languages
Chinese (zh)
Inventor
王扬斌
张鹿鸣
王泽鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Fubo Technology Co Ltd
Original Assignee
Hangzhou Fubo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Fubo Technology Co Ltd filed Critical Hangzhou Fubo Technology Co Ltd
Priority to CN201911423695.3A
Publication of CN111179433A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application provides a three-dimensional modeling method and apparatus for a target object, an electronic device and a storage medium, belonging to the technical field of image processing. The method comprises the following steps: acquiring scanned images obtained by scanning a target object from different angles; for each scanned image, randomly selecting a plurality of key points in the scanned image and constructing a local space corresponding to each key point; dividing the local space corresponding to each key point into a plurality of subspaces along the radial direction of projection, counting the feature vector corresponding to each subspace, and connecting the feature vectors corresponding to the subspaces to obtain the local features corresponding to the key point; and carrying out registration using the local features corresponding to the key points in the scanned images at different angles, and reconstructing a three-dimensional model of the target object. The scheme provided by the application improves modeling precision.

Description

Three-dimensional modeling method and device for target object, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for three-dimensional modeling of a target object, an electronic device, and a computer-readable storage medium.
Background
Three-dimensional modeling has wide application in production and life, such as industrial modeling, medical imaging, cultural relic protection, smart cities and the like. The purpose of three-dimensional modeling is to scan an object, acquire spatial position data and a surface texture image of the object in the real world, and reconstruct a three-dimensional digital model with high reality of physical dimensions and surface characteristics, namely construct three-dimensional information of the object.
However, due to the viewing angle of the observed object and occlusion by the object itself, modeling accuracy is often not high.
Disclosure of Invention
The application provides a three-dimensional modeling method of a target object, which is used for eliminating the influence of overlapping of scanned images and improving the modeling precision.
The application provides a three-dimensional modeling method of a target object, which comprises the following steps:
acquiring scanning images obtained by scanning a target object from different angles;
aiming at each scanned image, randomly selecting a plurality of key points in the scanned image, and constructing a local space corresponding to each key point;
dividing the local space corresponding to each key point into a plurality of subspaces along the radial direction of projection, counting the characteristic vector corresponding to each subspace, and connecting the characteristic vectors corresponding to each subspace to obtain the local characteristics corresponding to the key points;
and carrying out registration by using local features corresponding to the key points in the scanning images at different angles, and reconstructing the three-dimensional model of the target object.
In an embodiment, the randomly selecting a plurality of key points in the scanned image for each scanned image, and constructing a local space corresponding to each key point includes:
aiming at each scanned image, selecting key points in the scanned image by a random sampling method;
and constructing a local space corresponding to the key point according to a preset radius by taking the key point as a circle center.
In an embodiment, the splitting the local space corresponding to each key point into a plurality of subspaces along a radial direction of projection includes:
calculating a local reference axis of the local space;
and performing rotation translation transformation on the local space to enable the key point of the local space to be positioned at the origin of coordinates, aligning the local reference axis with the Z axis of the global coordinate system, and segmenting the local space along the radial direction of projection.
In an embodiment, the counting the feature vector corresponding to each subspace includes:
in each subspace, calculating a local height value and an angle characteristic value corresponding to each local point in the subspace;
calculating a characteristic index value of each local point in the subspace according to the local height value and the angle characteristic value corresponding to each local point;
according to each feature index value, calculating the number of local points corresponding to the feature index value in the subspace, and generating a feature histogram based on the number distribution of the local points;
and constructing a feature vector of the subspace according to the feature histogram of the subspace.
In an embodiment, the connecting the feature vectors corresponding to each subspace to obtain the local features corresponding to the key points includes:
adding the characteristic vectors corresponding to each subspace to obtain a first vector;
and performing dimension compression on the first vector to generate local features of a local space corresponding to the key points.
In one embodiment, the registering with the local features corresponding to the keypoints in the different angle scan images, and reconstructing the three-dimensional model of the target object includes:
performing pairwise registration and multi-view registration by using local features corresponding to the key points in the scanned images at different angles, and determining transformation matrixes of all the scanned images;
and reconstructing the three-dimensional model of the target object according to the transformation matrix of all the scanned images.
In an embodiment, the performing pair-wise registration and multi-view registration by using local features corresponding to the key points in different angle scan images and determining a transformation matrix of all scan images includes:
searching the characteristic point pair with the highest local characteristic similarity from any two scanned images at different angles;
estimating the transformation relation of the scanned images at any two different angles according to the feature point pairs, and optimizing the transformation relation by using an iterative closest point algorithm based on a preset threshold value to realize pairwise registration;
and on the basis of pair-wise registration, performing multi-view registration, and combining the transformation relation to construct a transformation matrix of all the scanned images.
In another aspect, the present application further provides a three-dimensional modeling apparatus for a target object, including:
the image acquisition module is used for acquiring scanning images obtained by scanning the target object from different angles;
the space construction module is used for randomly selecting a plurality of key points in each scanned image and constructing a local space corresponding to each key point;
the characteristic counting module is used for dividing the local space corresponding to each key point into a plurality of subspaces along the radial direction of projection, counting the characteristic vector corresponding to each subspace, and connecting the characteristic vector corresponding to each subspace to obtain the local characteristic corresponding to the key point;
and the model reconstruction module is used for registering local features corresponding to the key points in the scanning images at different angles and reconstructing the three-dimensional model of the target object.
In another aspect, the present application further provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the above-described method of three-dimensional modeling of the target object.
In another aspect, the present application further provides a computer-readable storage medium storing a computer program executable by a processor to perform the above-mentioned method for three-dimensional modeling of a target object.
According to the technical scheme provided by the embodiments of the application, a local space corresponding to each key point is constructed and segmented into subspaces, the feature vector of each subspace is counted, and the feature vectors are connected to obtain the local features of the key point; registration is then carried out using these local features. This eliminates the influence of point-cloud overlap of the scanned object caused by the viewing angle, occlusion by the object itself and the like, and improves the modeling precision.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic diagram of an application scenario of a three-dimensional modeling method for a target object according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for three-dimensional modeling of a target object according to an embodiment of the present disclosure;
FIG. 3 is a schematic partial spatial view of an embodiment of the present application;
FIG. 4 is a schematic flow chart showing details of step 240 in the corresponding embodiment of FIG. 2;
FIG. 5 is a schematic diagram of a fine multi-view registration;
fig. 6 is a block diagram of a three-dimensional modeling apparatus for a target object according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 is a schematic view of an application scenario of the three-dimensional modeling method for a target object provided in the present application. As shown in fig. 1, the application scenario includes: an image acquisition device 110; the image acquisition device 110 may be a TOF (Time of Flight) camera. The image capturing device 110 may scan the target object from different angles, and capture a depth image of the surface of the target object, and then the image capturing device 110 may reconstruct a three-dimensional model of the target object by using the three-dimensional modeling method of the target object provided in the present application.
In an embodiment, the application scenario may further include a smart device 120, and the image capturing apparatus 110 is in wired or wireless network communication with the smart device 120. The smart device 120 may be a Personal Computer (PC), a tablet PC, a smart phone, a Personal Digital Assistant (PDA), or the like, and may also be a server, a server cluster, or a cloud computing center.
The image acquisition device 110 scans the target object from different angles to obtain scanned images, which can be transmitted to the intelligent device 120, and the intelligent device 120 reconstructs the three-dimensional model of the target object by using the three-dimensional modeling method of the target object provided by the present application.
The present application also provides an electronic device, which may be the image acquisition apparatus 110 or the smart device 120. As shown in fig. 1, the smart device 120 may include a processor 121; a memory 122 for storing instructions executable by the processor 121; wherein the processor 121 is configured to execute the three-dimensional modeling method of the target object provided herein.
The Memory 122 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
A computer-readable storage medium is also provided, which stores a computer program executable by the processor 121 to perform the method for three-dimensional modeling of a target object provided herein.
Fig. 2 is a schematic flowchart of a three-dimensional modeling method for a target object provided in the present application, which may be performed by the image capturing apparatus 110 or the smart device 120. As shown in fig. 2, the method may include the following steps.
In step 210, scanned images obtained by scanning the target object from different angles are acquired.
The target object is the real-world entity for which a three-dimensional digital model is to be reconstructed; it may be an artwork, a human body or another real object. The different angles may be taken around the target object, ensuring that all sides of the target object are scanned. For example, the scanned images may be obtained by scanning the target object from different angles with the TOF camera; a scanned image may be regarded as a depth image of one face of the target object acquired by the TOF camera. By rotating the TOF camera or rotating the target object, the scanning angle can be changed, so that depth images of the other faces of the target object can be acquired.
In step 220, for each scanned image, a plurality of key points are randomly selected from the scanned image, and a local space corresponding to each key point is constructed.
The key points may be corner points in the scanned image, and the corner points are extreme points, that is, points with particularly prominent attributes in some aspect, such as extreme points of depth variation. The local space is a solid space with a fixed size and taking a key point as a center. Each key point can be constructed into a local space in a one-to-one correspondence mode, one scanned image can have a plurality of key points, and one key point corresponds to one local space, so that one scanned image can have a plurality of local spaces.
In one embodiment, the step 220 may include the following steps: aiming at each scanned image, selecting key points in the scanned image by a random sampling method; and constructing a local space corresponding to the key point according to a preset radius by taking the key point as a circle center.
The key points may also be referred to as interest points; a stable and distinctive point set obtained under a defined detection criterion serves as the key points. Specifically, selecting the key points by the random sampling method may include: (1) traversing each depth image point and performing edge detection by finding positions where the depth changes in a neighboring region; (2) traversing each depth image point and determining, from the surface variation of the neighboring region, a coefficient measuring the surface change and the principal direction of that change; (3) computing an interest value from the principal direction found in step (2), representing both the difference between this direction and the other directions and the surface variation at the location, that is, how stable the point is; (4) smoothing the interest values with a filter; (5) performing non-maximum suppression to find the final key points, namely the key points selected by the random sampling method.
After the key points are determined, as shown in fig. 3, given a key point p and a preset radius R, a local space Q = {q_1, q_2, …, q_n} can be obtained.
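The keypoint selection and local-space construction described above can be sketched as follows. This is a minimal illustration assuming numpy: plain uniform random sampling stands in for the interest-point detector described in the text, and all names and parameter values are illustrative.

```python
import numpy as np

def build_local_spaces(points, num_keypoints, radius, rng=None):
    """Randomly sample keypoints from a point cloud and collect, for each
    keypoint p, its local space Q = {q_1, ..., q_n}: all points within the
    preset radius R of p (the keypoint itself excluded)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(points), size=num_keypoints, replace=False)
    spaces = []
    for i in idx:
        p = points[i]
        d = np.linalg.norm(points - p, axis=1)
        mask = (d <= radius) & (d > 0)   # neighbours inside the ball, p excluded
        spaces.append((p, points[mask]))
    return spaces

# Toy cloud: 200 random points in the unit cube.
cloud = np.random.default_rng(0).random((200, 3))
spaces = build_local_spaces(cloud, num_keypoints=5, radius=0.3, rng=0)
```

Each entry of `spaces` pairs a keypoint p with its local space Q, ready for the subspace division of step 230.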
In step 230, the local space corresponding to each key point is divided into a plurality of subspaces along the radial direction of projection, the feature vector corresponding to each subspace is counted, and the feature vector corresponding to each subspace is connected to obtain the local feature corresponding to the key point.
The radial direction of projection refers to the image acquisition direction, that is, the emission direction of the light pulses. Splitting the local space into a plurality of subspaces may specifically be performed by calculating a local reference axis of the local space, then applying a rotation-translation transformation to the local space so that its key point lies at the origin of coordinates and the local reference axis is aligned with the Z axis of the global coordinate system, and segmenting the local space along the radial direction of projection. The Z axis is the normal vector of the key point p and the small surface patch around it, as shown in fig. 3(a).
Here, the local reference axis LRA, shown in (a) and (b) of fig. 3, can provide spatial information in the radial and elevation directions and may be calculated by the following formula (1):

LRA(p) = v(p), if Σ_{i=1..n} pq_i · v(p) < 0; otherwise −v(p)   (1)

where pq_i denotes the vector from p to q_i, v(p) denotes the eigenvector corresponding to the smallest eigenvalue of the covariance matrix Cov(Q_Z), n denotes the size of the local space Q, and · denotes the dot product. Given a key point p and a support radius R, Q_Z = {q_i^Z} denotes its radius neighborhood.
After the local reference axis is calculated, the local space Q may be subjected to rotational translation transformation to make the key point p consistent with the origin of coordinates, and the LRA is also aligned with the Z axis of the global coordinate system, so as to divide the local space Q into a plurality of subspaces along the radial direction of projection.
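The rotation-translation transformation just described, which moves the keypoint to the origin and brings the LRA onto the global Z axis, can be sketched with a Rodrigues rotation. This is a standard construction not spelled out in the source; names and toy data are illustrative, and numpy is assumed.

```python
import numpy as np

def align_to_z(Q, p, lra):
    """Translate the local space Q so the keypoint p sits at the origin, then
    rotate so the local reference axis `lra` coincides with the global Z axis.
    Uses the Rodrigues rotation taking the unit LRA onto z = (0, 0, 1)."""
    z = np.array([0.0, 0.0, 1.0])
    a = lra / np.linalg.norm(lra)
    v = np.cross(a, z)
    c = float(np.dot(a, z))
    if np.isclose(c, -1.0):                 # lra anti-parallel to z: rotate pi about x
        R = np.diag([1.0, -1.0, -1.0])
    else:
        K = np.array([[0.0, -v[2], v[1]],
                      [v[2], 0.0, -v[0]],
                      [-v[1], v[0], 0.0]])
        R = np.eye(3) + K + K @ K / (1.0 + c)   # Rodrigues: R @ a == z
    return (Q - p) @ R.T

rng = np.random.default_rng(1)
p = rng.random(3)
Q = p + 0.1 * rng.standard_normal((50, 3))      # toy local space around p
lra = np.array([1.0, 1.0, 0.0])                 # illustrative LRA
Q_aligned = align_to_z(Q, p, lra)
```

After this call the keypoint maps to the origin and the LRA direction maps onto the positive Z axis, so the subspaces can be cut along the radial direction in a canonical frame.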
Wherein, counting the feature vectors corresponding to each subspace can be performed by: (1) in each subspace, calculating a local height value and an angle characteristic value corresponding to each local point in the subspace; (2) calculating a characteristic index value of each local point in the subspace according to the local height value and the angle characteristic value corresponding to each local point; (3) according to each feature index value, calculating the number of local points corresponding to the feature index value in the subspace, and generating a feature histogram based on the number distribution of the local points; (4) and constructing a feature vector of the subspace according to the feature histogram of the subspace.
For example, assume that the number of partitioned subspaces is N_pr. In each subspace, a local height value lh is calculated by the following formula (2):

lh_k = R + q_k(z)   (2)

where q_k denotes any local point in the subspace, lh_k denotes the local height value of q_k, and q_k(z) denotes the z coordinate of q_k. A local point refers to any point in the subspace.
For example, the angle feature values of a local point q_k in the subspace can be calculated by formula (3), which survives only as an image in the original. In that formula, pq_k denotes the vector from p to q_k, LMA_k denotes the local minimum axis LMA of q_k, and LMA'_k is given by formula (4), likewise shown only as an image in the original. To determine the direction of the LMA over the frame generated by the LRA and the vector from the key point to the current local point, the present application uses three angle feature values (referred to as α, β and γ, respectively) to fully locate the direction of the LMA.
Then, the feature index value of each local point in the subspace is calculated. The formula mapping the four feature values of a local point q_k (one height feature value and three angle feature values) to feature index values of the subspace is formula (5), shown only as an image in the original; in it, N_lh, N_α, N_β and N_γ respectively denote the numbers of bins for the four feature values, and the four resulting feature index values index the corresponding histograms.
After the feature index values of all points in the subspace have been calculated by the above method, the number of local points having each feature index value can be counted, and histograms can be generated from the distribution of these counts, yielding a histogram H_lh for the height feature value and histograms H_α, H_β and H_γ for the angle feature values. The four histograms are connected with scaling factors λ1, λ2, λ3 and λ4 to form the feature vector f of the subspace, namely:

f = {λ1·H_lh, λ2·H_α, λ3·H_β, λ4·H_γ}  s.t.  H_lh + H_α + H_β + H_γ = 1   (6)
after the feature vector of each subspace is obtained, the local features corresponding to the key points can be obtained by connecting the feature vectors corresponding to each subspace. For example, assuming that a local space is divided into n subspaces, the feature vector f connecting the n subspaces1,f2,..,fnCan order { f1,f2,..,fnThe local features of the local space corresponding to the keypoint.
Because the local features have many dimensions, in order to reduce the amount of computation during feature registration, in an embodiment the feature vectors corresponding to each subspace can be added to obtain a first vector, and dimension compression can be performed on the first vector to generate the local feature of the local space corresponding to the key point.
For example, the first vector may be denoted F, where F represents the sum of the feature vectors of the n subspaces; then F may be written as

F = {F_lh, F_α, F_β, F_γ}   (7)

F is compressed using formula (8), shown only as an image in the original, to generate the compressed local feature. There, F_lh+α denotes the sum of F_lh and F_α, and F_β+γ denotes the sum of F_β and F_γ; F_lh+α is split into two parts, one of which has the same dimension as F_β+γ so that the matrix operation can be carried out, the dimensions of the split parts of F_lh+α being kept consistent with F_β+γ.
In step 240, local features corresponding to the key points in the different angle scan images are used for registration, and a three-dimensional model of the target object is reconstructed.
Reconstructing the three-dimensional model of the target object means converting the scanned images at different angles into the same coordinate system for display. The registration by using the local features corresponding to the key points in the scanned images at different angles refers to performing feature matching on the local features in the different scanned images to find out image points of the same object point in the different scanned images, namely, the image points with the same name.
In one embodiment, as shown in fig. 4, the step 240 may include the following steps.
In step 241, the local features corresponding to the key points in the different angle scan images are subjected to pair-wise registration and multi-view registration, and a transformation matrix of all the scan images is determined.
The paired registration refers to registering two scanned images, finding out the image points with the same name between the two scanned images, and calculating the transformation relation of the two scanned images. The multi-view registration refers to the registration of a plurality of scanning images, the same-name image points among the plurality of scanning images are found, and the transformation relation of the plurality of scanning images is calculated, so that a transformation matrix is formed.
In one embodiment, the pairwise registration may be performed first with a coarse pairwise registration and then with a fine pairwise registration, and the multi-view registration may perform a coarse multi-view registration and then a fine multi-view registration on the basis of the fine pairwise registration. The method specifically comprises the following steps: (1) searching any two scanned images at different angles for the feature point pairs with the highest local feature similarity; (2) estimating the transformation relation of the two scanned images according to the feature point pairs; (3) optimizing the transformation relation by using an iterative closest point algorithm based on a preset threshold value to realize pairwise registration; (4) on the basis of pairwise registration, performing multi-view registration and combining the transformation relations to construct a transformation matrix of all the scanned images.
The pair of local features with the smallest Euclidean distance between them is considered to have the highest similarity. A feature point pair refers to two points in the two scanned images that are preliminarily considered to be same-name image points. The transformation relation between two scanned images refers to the transformation required to convert the two scanned images into the same coordinate system.
For example, coarse pairwise registration may proceed as follows. Let F^s and F^t denote the compressed local features of the scanned images M_s and M_t, respectively. For each feature in F^s, a k-d tree algorithm is used to find the two most similar features in F^t, giving N_s feature pairs between F^s and F^t. From these N_s feature pairs, the k_1 most similar pairs are selected first; then, from the k_1 pairs, the top k_2 pairs with the largest difference between the distance of the best match and that of the second-best match are extracted. The correct transformation relations are then estimated using a transformation estimation technique based on game theory.
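The coarse matching stage above might be sketched as follows. The k-d tree two-nearest-neighbour query and the ranking by the gap between best and second-best match follow the text, while the game-theoretic transformation estimation is omitted; k_1 and the toy data are illustrative, and scipy/numpy are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(Fs, Ft, k1=50):
    """Match compressed local features of two scans. For every feature in Fs,
    a k-d tree over Ft returns its two most similar features; pairs are ranked
    by the ratio of best-match to second-best-match distance, and the k1 most
    distinctive pairs are kept."""
    tree = cKDTree(Ft)
    d, idx = tree.query(Fs, k=2)                 # two most similar features in Ft
    ratio = d[:, 0] / np.maximum(d[:, 1], 1e-12)
    order = np.argsort(ratio)                    # small ratio = unambiguous match
    keep = order[:k1]
    return keep, idx[keep, 0]                    # indices into Fs and Ft

# Toy data: the first 80 features of Ft, slightly perturbed, play the role of Fs,
# so the ground-truth match of Fs[i] is Ft[i].
rng = np.random.default_rng(3)
Ft = rng.random((200, 32))
Fs = Ft[:80] + 0.01 * rng.standard_normal((80, 32))
src, dst = match_features(Fs, Ft, k1=20)
```

The returned index pairs would then feed the transformation estimation step.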
Fine pairwise registration may utilize a fixed-threshold ICP (Iterative Closest Point) algorithm to register the scanned images M_s and M_t, and when the registration converges or the maximum number of iterations is reached, fine registration is performed with an ICP algorithm based on bilateral correspondence. Coarse pairwise registration brings the same-name image points approximately into coincidence, but the residual error is far from the accuracy required in practical applications, so the same-name image points must also be registered accurately in order to minimize the error between them. The ICP algorithm is the most common accurate data registration method: in each iteration, for every point of the data point cloud, the point of the model point cloud closest in Euclidean distance is searched as its corresponding point; the objective function over these corresponding points is minimized to obtain a transformation matrix, which is applied to the point cloud data to obtain a new data point cloud that is carried into the next iteration. Thereby, optimization of the transformation relation can be achieved.
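The per-iteration closest-point search and transform estimation just described can be sketched as a minimal fixed-threshold ICP. This is a simplified stand-in for the method in the text (SVD-based rigid transform estimation assumed, bilateral-correspondence refinement omitted); scipy/numpy are assumed and the toy data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30, thresh=np.inf):
    """Minimal fixed-threshold ICP: each iteration pairs every source point
    with its Euclidean-nearest model point, drops pairs beyond `thresh`, and
    solves the best rigid transform in closed form (SVD / Kabsch)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        d, j = tree.query(cur)
        m = d < thresh
        a, b = cur[m], dst[j[m]]
        ca, cb = a.mean(0), b.mean(0)
        H = (a - ca).T @ (b - cb)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        Ri = Vt.T @ S @ U.T                 # best rotation for current pairs
        ti = cb - Ri @ ca
        cur = cur @ Ri.T + ti               # move data cloud, iterate again
        R, t = Ri @ R, Ri @ t + ti          # accumulate the global transform
    return R, t

# Recover a known small rotation/translation on a toy cloud.
rng = np.random.default_rng(4)
P = rng.random((100, 3))
theta = 0.02
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([0.01, -0.01, 0.005])
Q = P @ Rz.T + t0
R_est, t_est = icp(P, Q)
```

The returned (R_est, t_est) is the optimized transformation relation between the two clouds.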
In an embodiment, the coarse multi-view registration may be performed using a shape-growing algorithm. Given a set of input scans {M_1, M_2, …, M_n}, the compressed local feature statistics of their key points {F_1, F_2, …, F_n} are obtained. Initially, all scans are contained in a search space Q; one scan and its compressed local feature statistics are then selected as the seed scan R_1 and seed features E_1. R_1 and E_1 are gradually updated by iteratively aligning the remaining scans with the seed scan. Specifically, in each iteration one scan M_i in the search space Q is selected and registered to R_1 by the pairwise registration method described above. After registration, if the number of points on M_i whose distance to the nearest point on R_1 is less than one predetermined threshold is greater than another predetermined threshold, the registration of M_i and R_1 is considered successful: R_1 is updated by adding the points of M_i that do not overlap R_1, E_1 is updated simultaneously, and M_i is removed from the search space Q. If the registration is not successful, M_i is moved to the very bottom of the search space Q. The next scan M_{i+1} is then registered, and so on until the iteration finishes, yielding a coarse transformation matrix for all scans.
Fine multi-view registration uses a correspondence-based transformation technique. First, as shown in fig. 5, each point in R_1 is assigned a vector whose length equals the number n of input scans, each element of the vector corresponding to one input scan. The vectors of all points in R_1 form a matrix T whose rows correspond to the points in R_1 and whose columns correspond to the n input scans; all values in T are initialized to 0. If a point in R_1 overlaps the i-th scan M_i, the i-th element of its vector is set to 1. After iterating over the points in R_1, T is complete. Whether R_1 overlaps the i-th scan M_i is judged by searching, with the k-d tree technique, the closest point in the i-th scan for each point of R_1; the closest points whose distances are smaller than a fixed threshold are regarded as points overlapping the i-th scan.
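The construction of the point-by-scan overlap matrix T just described could be sketched as follows; threshold value, names and toy data are illustrative, and scipy/numpy are assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_matrix(R1, scans, dist_thresh=0.05):
    """Build the point-by-scan matrix T: T[j, i] = 1 when the j-th point of
    the seed model R1 lies within `dist_thresh` of its nearest point in the
    i-th input scan (nearest point found with a k-d tree)."""
    T = np.zeros((len(R1), len(scans)), dtype=int)
    for i, Mi in enumerate(scans):
        d, _ = cKDTree(Mi).query(R1)
        T[d < dist_thresh, i] = 1
    return T

# Toy seed model and two scans: one overlapping the first 60 points of R1,
# one shifted far away so it overlaps nothing.
rng = np.random.default_rng(5)
R1 = rng.random((100, 3))
scans = [R1[:60] + 0.001 * rng.standard_normal((60, 3)),
         R1 + 10.0]
T = overlap_matrix(R1, scans)
```

The overlap ratio of each scan pair can then be read off from the distribution of 1s in T.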
Based on the matrix T, the overlap ratio of all scans is obtained by calculating the distribution of 1's in T. Based on the overlap ratio, the Williams algorithm is used. The Williams algorithm is a registration technique whose cost function is shown in equation (9):

Φ(R, T) = Σ_{s=1}^{S} Σ_{μ=1}^{N_s} ‖(R_{a_s} p_{s,μ} + T_{a_s}) − (R_{b_s} q_{s,μ} + T_{b_s})‖²    (9)

wherein S represents the number of valid scan pairs, the s-th valid scan pair consists of scans a_s and b_s, N_s represents the number of corresponding point pairs of the s-th valid scan pair, and (p_{s,μ}, q_{s,μ}) is the μ-th corresponding point pair of that scan pair. The objective of this cost function is to find the transforms (R_m, T_m), m = 1, 2, …, M, where M is the number of scans, under which the sum of squared distances Φ between all valid scan pairs is minimized. All M rotation matrices may be concatenated into one (3 × 3M) matrix R and all M translation vectors into one (3M × 1) vector T. Thus, Φ is a function of R and T.
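As a minimal illustration of the cost of equation (9), Φ can be evaluated for a candidate set of transforms in a few lines of Python; the transform container, the `(a, b, p, q)` correspondence tuples, and the helper names below are assumptions for the sketch, and only evaluation (not minimization) is shown.

```python
def apply_rigid(Rm, Tm, p):
    """Apply a 3x3 rotation (row-major nested lists) and a translation to p."""
    return tuple(sum(Rm[r][c] * p[c] for c in range(3)) + Tm[r] for r in range(3))

def phi(transforms, pairs):
    """Sum of squared distances of equation (9).

    transforms[m] is the (R_m, T_m) pair for scan m; pairs is a list of
    (a, b, p, q) tuples meaning point p on scan a corresponds to point q on
    scan b.  Multi-view registration minimises this value over all
    transforms; here we only evaluate it."""
    total = 0.0
    for a, b, p, q in pairs:
        Ra, Ta = transforms[a]
        Rb, Tb = transforms[b]
        pa = apply_rigid(Ra, Ta, p)
        pb = apply_rigid(Rb, Tb, q)
        total += sum((x - y) ** 2 for x, y in zip(pa, pb))
    return total
```

When every correspondence is brought to the same location by the transforms, Φ is exactly 0, which is the minimum the registration seeks.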
In step 242, a three-dimensional model of the target object is reconstructed from the transformation matrices for all the scanned images.
Each scanned image can be transformed into the same coordinate system according to its transformation matrix, thereby forming a three-dimensional digital model of the target object. Surface reconstruction can be expressed as a Poisson problem that seeks an indicator function best fitting a set of noisy, non-uniform observations, and can robustly recover detail from noisy real-world scans. In an embodiment, after multi-view registration, three-dimensional reconstruction of the scan images at different angles is achieved by using a Poisson reconstruction technique.
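The first step above, mapping every scan into one common coordinate system via its transformation matrix, can be sketched as follows; the 4×4 homogeneous-matrix convention and helper names are assumptions. The merged cloud would then feed a Poisson surface reconstruction (for example Open3D's `TriangleMesh.create_from_point_cloud_poisson`), which is not reproduced here.

```python
def transform_points(T, points):
    """Map 3D points into the common coordinate system with a 4x4 homogeneous
    transformation matrix T (nested lists, row-major)."""
    out = []
    for (x, y, z) in points:
        h = (x, y, z, 1.0)
        out.append(tuple(sum(T[r][c] * h[c] for c in range(4)) for r in range(3)))
    return out

def merge_scans(transforms, scans):
    """Concatenate all scans after mapping each into the common frame; the
    merged cloud is the input to the surface-reconstruction step."""
    cloud = []
    for T, pts in zip(transforms, scans):
        cloud.extend(transform_points(T, pts))
    return cloud
```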
According to the method and the device of the present application, local features are established in the local space of the scanned image, which eliminates the influence of point cloud overlap of the scanned object and improves the accuracy of three-dimensional modeling. Furthermore, by combining coarse registration with fine registration, the time required for modeling is effectively reduced while the modeling accuracy is improved.
The following are embodiments of the apparatus of the present application, which may be used to implement embodiments of the method for three-dimensional modeling of a target object implemented by the image capturing apparatus 110 or the smart device 120 of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the three-dimensional modeling method of the target object of the present application.
Fig. 6 is a block diagram of a three-dimensional modeling apparatus for a target object according to an embodiment of the present application. As shown in fig. 6, the three-dimensional modeling apparatus of the target object may include: an image acquisition module 610, a space construction module 620, a feature statistics module 630, and a model reconstruction module 640.
An image acquisition module 610, configured to acquire scanning images obtained by scanning a target object from different angles;
a space construction module 620, configured to randomly select a plurality of key points in each scanned image, and construct a local space corresponding to each key point;
a feature statistics module 630, configured to segment the local space corresponding to each key point into multiple subspaces along a projection radial direction, count a feature vector corresponding to each subspace, and connect the feature vector corresponding to each subspace to obtain a local feature corresponding to the key point;
and a model reconstruction module 640, configured to perform registration on local features corresponding to the key points in the different angle scan images, and reconstruct a three-dimensional model of the target object.
The implementation processes of the functions and actions of the modules in the device are specifically described in the implementation processes of the corresponding steps in the three-dimensional modeling method of the target object, and are not described herein again.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (10)

1. A method of three-dimensional modeling of a target object, comprising:
acquiring scanning images obtained by scanning a target object from different angles;
aiming at each scanned image, randomly selecting a plurality of key points in the scanned image, and constructing a local space corresponding to each key point;
dividing the local space corresponding to each key point into a plurality of subspaces along the radial direction of projection, counting the characteristic vector corresponding to each subspace, and connecting the characteristic vectors corresponding to each subspace to obtain the local characteristics corresponding to the key points;
and carrying out registration by using local features corresponding to the key points in the scanning images at different angles, and reconstructing the three-dimensional model of the target object.
2. The method according to claim 1, wherein for each scanned image, randomly selecting a plurality of key points in the scanned image, and constructing a local space corresponding to each key point comprises:
aiming at each scanned image, selecting key points in the scanned image by a random sampling method;
and constructing a local space corresponding to the key point according to a preset radius by taking the key point as a circle center.
3. The method according to claim 2, wherein the dividing the local space corresponding to each key point into a plurality of subspaces along a radial direction of projection comprises:
calculating a local reference axis of the local space;
and performing rotation translation transformation on the local space to enable the key point of the local space to be positioned at the origin of coordinates, aligning the local reference axis with the Z axis of the global coordinate system, and segmenting the local space along the radial direction of projection.
4. The method according to claim 1, wherein said counting the feature vectors corresponding to each of the subspaces comprises:
in each subspace, calculating a local height value and an angle characteristic value corresponding to each local point in the subspace;
calculating a characteristic index value of each local point in the subspace according to the local height value and the angle characteristic value corresponding to each local point;
according to each feature index value, calculating the number of local points corresponding to the feature index value in the subspace, and generating a feature histogram based on the number distribution of the local points;
and constructing a feature vector of the subspace according to the feature histogram of the subspace.
5. The method according to claim 1, wherein the connecting the feature vectors corresponding to each subspace to obtain the local features corresponding to the key points comprises:
adding the characteristic vectors corresponding to each subspace to obtain a first vector;
and performing dimension compression on the first vector to generate local features of a local space corresponding to the key points.
6. The method of claim 1, wherein the registering with the local features corresponding to the keypoints in the different angle scan images, and reconstructing the three-dimensional model of the target object comprises:
performing pairwise registration and multi-view registration by using local features corresponding to the key points in the scanned images at different angles, and determining transformation matrixes of all the scanned images;
and reconstructing the three-dimensional model of the target object according to the transformation matrix of all the scanned images.
7. The method according to claim 6, wherein said performing pair-wise registration and multi-view registration using local features corresponding to said keypoints in different angle scan images, and determining transformation matrices for all scan images comprises:
searching the characteristic point pair with the highest local characteristic similarity from any two scanned images at different angles;
estimating the transformation relation of the scanning images of any two different angles according to the characteristic point pairs,
optimizing the transformation relation by using an iterative closest point algorithm based on a preset threshold value to realize pairwise registration;
and on the basis of pair-wise registration, performing multi-view registration, and combining the transformation relation to construct a transformation matrix of all the scanned images.
8. An apparatus for three-dimensional modeling of a target object, comprising:
the image acquisition module is used for acquiring scanning images obtained by scanning the target object from different angles;
the space construction module is used for randomly selecting a plurality of key points in each scanned image and constructing a local space corresponding to each key point;
the characteristic counting module is used for dividing the local space corresponding to each key point into a plurality of subspaces along the radial direction of projection, counting the characteristic vector corresponding to each subspace, and connecting the characteristic vector corresponding to each subspace to obtain the local characteristic corresponding to the key point;
and the model reconstruction module is used for registering local features corresponding to the key points in the scanning images at different angles and reconstructing the three-dimensional model of the target object.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of three-dimensional modeling of a target object of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the method of three-dimensional modeling of a target object according to any one of claims 1 to 7.
CN201911423695.3A 2019-12-31 2019-12-31 Three-dimensional modeling method and device for target object, electronic device and storage medium Withdrawn CN111179433A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423695.3A CN111179433A (en) 2019-12-31 2019-12-31 Three-dimensional modeling method and device for target object, electronic device and storage medium


Publications (1)

Publication Number Publication Date
CN111179433A true CN111179433A (en) 2020-05-19



Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744378A (en) * 2020-05-27 2021-12-03 成都数字天空科技有限公司 Exhibition article scanning method and device, electronic equipment and storage medium
CN113744378B (en) * 2020-05-27 2024-02-20 成都数字天空科技有限公司 Exhibition article scanning method and device, electronic equipment and storage medium
CN111862278A (en) * 2020-07-22 2020-10-30 成都数字天空科技有限公司 Animation obtaining method and device, electronic equipment and storage medium
CN111862278B (en) * 2020-07-22 2024-02-27 成都数字天空科技有限公司 Animation obtaining method and device, electronic equipment and storage medium
CN112164143A (en) * 2020-10-23 2021-01-01 广州小马慧行科技有限公司 Three-dimensional model construction method and device, processor and electronic equipment
CN114596344A (en) * 2020-12-04 2022-06-07 杭州三坛医疗科技有限公司 Method, device and equipment for determining registration parameters of medical images and storage medium
CN114596344B (en) * 2020-12-04 2024-03-19 杭州三坛医疗科技有限公司 Medical image registration parameter determination method, device, equipment and storage medium
IT202100014645A1 (en) * 2021-06-04 2022-12-04 Rome C M S R L 3D modeling method and system
US11978161B2 (en) 2021-06-04 2024-05-07 New Changer Tech S.r.l. 3D modelling method and system
CN113487727A (en) * 2021-07-14 2021-10-08 广西民族大学 Three-dimensional modeling system, device and method
CN113487727B (en) * 2021-07-14 2022-09-02 广西民族大学 Three-dimensional modeling system, device and method
WO2024020858A1 (en) * 2022-07-27 2024-02-01 维沃移动通信有限公司 Surface construction method and apparatus, electronic device and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200519
