CN109242898B - Three-dimensional modeling method and system based on image sequence - Google Patents

Three-dimensional modeling method and system based on image sequence

Info

Publication number
CN109242898B
Authority
CN
China
Prior art keywords
image
dimensional
modeling
images
photographing
Prior art date
Legal status
Active
Application number
CN201811004634.9A
Other languages
Chinese (zh)
Other versions
CN109242898A (en
Inventor
苏庆
Current Assignee
Huaqiang Fante Shenzhen Film Co ltd
Original Assignee
Huaqiang Fante Shenzhen Film Co ltd
Priority date
Filing date
Publication date
Application filed by Huaqiang Fante Shenzhen Film Co ltd filed Critical Huaqiang Fante Shenzhen Film Co ltd
Priority to CN201811004634.9A priority Critical patent/CN109242898B/en
Publication of CN109242898A publication Critical patent/CN109242898A/en
Application granted granted Critical
Publication of CN109242898B publication Critical patent/CN109242898B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis; G06T7/50 Depth or shape recovery; G06T7/55 Depth or shape recovery from multiple images
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2200/00 Indexing scheme for image data processing or generation, in general; G06T2200/08 involving all processing steps from image acquisition to 3D model generation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality; G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional modeling method based on an image sequence, comprising the steps of image acquisition, photographing-device calibration, image preprocessing, feature point extraction, stereo matching and three-dimensional reconstruction. The image acquisition, photographing-device calibration, image preprocessing, feature point extraction and stereo matching steps establish a depth image for each image in the image sequence; the three-dimensional reconstruction step uses the depth images to determine the weighted average distance of the same three-dimensional feature points under different viewing angles, obtains the three-dimensional space coordinates of each three-dimensional feature point of the modeling object, and thereby completes the three-dimensional modeling of the modeling object. The method is simple to operate, has high three-dimensional modeling precision, can achieve image-level three-dimensional modeling without a professional scanner, reduces the modeling cost and can meet mass-market requirements.

Description

Three-dimensional modeling method and system based on image sequence
Technical Field
The invention relates to the field of three-dimensional modeling, in particular to a high-precision movie-level three-dimensional modeling method and system.
Background
At present, most three-dimensional models in games and stereoscopic movies must be built by professionals who model the objects with professional modeling tools, which consumes a large amount of labor and time. Alternatively, a three-dimensional scanner can be used to extract the three-dimensional information of an object, but the equipment is so expensive that the cost of three-dimensional modeling becomes too high to meet mass-market requirements.
Among schemes for three-dimensional reconstruction of an object, existing image-based methods can be classified into the following categories: (1) single-view three-dimensional modeling, (2) dual-view three-dimensional modeling, and (3) multi-view three-dimensional modeling. Because recovering depth information from a single image or from two images places high demands on the algorithm, single-view and dual-view three-dimensional modeling methods cannot achieve accurate three-dimensional modeling. In existing multi-view schemes, a camera is generally moved around the object for dynamic shooting to obtain 360-degree images of the object to be modeled, which are then transmitted to a computer for three-dimensional modeling.
Disclosure of Invention
Based on the problems in the prior art, the invention aims to provide a three-dimensional modeling method and a three-dimensional modeling system based on an image sequence, which can acquire the image sequence of an object in a simple manner as a reference to perform high-precision three-dimensional modeling and meet the requirements of the film and video industry and the game industry on high-precision three-dimensional modeling.
The purpose of the invention is realized by the following technical scheme:
the embodiment of the invention provides a three-dimensional modeling method based on an image sequence, which comprises the following steps:
image acquisition: under the same illumination condition, a plurality of photographing devices which are arranged around a modeling object needing three-dimensional modeling in a distributed mode are arranged in a three-dimensional surrounding mode, and a plurality of images of the modeling object in all directions are simultaneously collected from multiple angles to serve as an image sequence;
calibrating a photographing device: calibrating each photographing device according to the image acquired by the adjacent photographing device to acquire the parameter of each photographing device;
image preprocessing: carrying out noise reduction processing on a plurality of images acquired in the image acquisition step;
extracting characteristic points: respectively extracting the characteristic points of each image from the plurality of images processed in the image preprocessing step;
stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image;
three-dimensional reconstruction: and determining the weighted average distance of the same three-dimensional characteristic points under different viewing angles by utilizing the depth images of the images obtained in the stereo matching step to obtain the three-dimensional space coordinates of the three-dimensional characteristic points of the modeling object, namely completing the three-dimensional modeling of the modeling object.
The embodiment of the invention also provides a three-dimensional modeling system based on the image sequence, which comprises:
the system comprises a three-dimensional all-round illumination light source, a plurality of photographing devices, a power supply device, a control device and a modeling device; wherein,
the three-dimensional all-round illumination light source is provided with a plurality of illumination points which are uniformly distributed in a spherical shape, and the central positions of the illumination points are the placement positions of a modeling object needing three-dimensional modeling;
the plurality of photographing devices are uniformly and spherically distributed, and the central positions of the plurality of photographing devices are the placing positions of the modeling object;
the power supply device is respectively electrically connected with the stereo all-around light source and the plurality of photographing devices and can respectively supply power to the stereo all-around light source and each photographing device;
the control device is respectively in communication connection with the plurality of photographing devices and can simultaneously control the plurality of photographing devices to acquire a plurality of images of the modeling object in all directions to form an image sequence;
the modeling device is in communication connection with the plurality of photographing devices, can receive a plurality of images of the modeling object in all directions acquired by the plurality of photographing devices, and completes three-dimensional modeling of the modeling object after sequentially performing image preprocessing, feature point extraction, stereo matching and three-dimensional reconstruction on the plurality of images.
According to the technical scheme provided by the invention, the three-dimensional modeling method and the three-dimensional modeling system based on the image sequence have the beneficial effects that:
by using a plurality of photographing devices under the same lighting condition, a professional scanner is not needed, and high-precision three-dimensional modeling of a modeling object can be completed only by acquiring an image sequence formed by a plurality of images of the modeling object at different angles. The method is simple to operate, the three-dimensional modeling precision is high, the effect of image-level three-dimensional modeling can be achieved, a professional scanner is not needed, the modeling cost is reduced, and the popular requirements can be met.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on the drawings without creative efforts.
FIG. 1 is a flowchart of a three-dimensional modeling method based on image sequences according to an embodiment of the present invention;
fig. 2 is a schematic diagram of dual-view distance measurement related to a three-dimensional modeling method based on an image sequence according to an embodiment of the present invention;
fig. 3 is a schematic configuration diagram of a three-dimensional modeling system based on an image sequence according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the specific contents of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention. Details which are not described in detail in the embodiments of the invention belong to the prior art which is known to the person skilled in the art.
As shown in fig. 1, an embodiment of the present invention provides a three-dimensional modeling method based on an image sequence, including:
image acquisition: under the same illumination condition, a plurality of photographing devices which are arranged around a modeling object needing three-dimensional modeling in a distributed mode are arranged in a three-dimensional surrounding mode, and a plurality of images of the modeling object in all directions are simultaneously collected from multiple angles to serve as an image sequence;
calibrating a photographing device: calibrating each photographing device according to the image acquired by the adjacent photographing device to acquire the parameter of each photographing device;
image preprocessing: carrying out noise reduction processing on a plurality of images acquired in the image acquisition step;
extracting characteristic points: respectively extracting the characteristic points of each image from the plurality of images processed in the image preprocessing step;
stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image;
three-dimensional reconstruction: and determining the weighted average distance of the same three-dimensional characteristic points under different viewing angles by utilizing the depth images of the images obtained in the stereo matching step to obtain the three-dimensional space coordinates of the three-dimensional characteristic points of the modeling object, namely completing the three-dimensional modeling of the modeling object.
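For orientation only, the following Python sketch shows one way the six steps above could be organised in software; the use of OpenCV/NumPy, the function names and the stubbed-out bodies are all illustrative assumptions and are not taken from the patent.

```python
# Minimal pipeline skeleton for the six steps (illustrative assumptions only).
import cv2
import numpy as np

def acquire_images(paths):
    """Image acquisition: load the synchronised multi-view shots."""
    return [cv2.imread(p) for p in paths]

def calibrate_cameras(images):
    """Photographing-device calibration: intrinsics/extrinsics per camera.
    (A Zhang-style calibration sketch appears later in this document.)"""
    raise NotImplementedError

def preprocess(images):
    """Image preprocessing: simple smoothing-based denoising."""
    return [cv2.GaussianBlur(img, (5, 5), 0) for img in images]

def extract_features(images):
    """Feature point extraction for each image."""
    orb = cv2.ORB_create(nfeatures=2000)
    return [orb.detectAndCompute(cv2.cvtColor(i, cv2.COLOR_BGR2GRAY), None)
            for i in images]

def stereo_match(features, calib):
    """Stereo matching: dual-view ranging yields one depth image per view."""
    raise NotImplementedError

def reconstruct(depth_images, calib):
    """Three-dimensional reconstruction: fuse per-view depths by weighted averaging."""
    raise NotImplementedError
```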
In the above method, the image acquisition step includes:
step 11) a stereo ring light source is arranged around the modeling object in a stereo surrounding manner, the stereo ring light source is provided with a plurality of illumination points which are uniformly distributed in a spherical shape, and illumination conditions with the same angles can be provided for the modeling object;
step 12) arranging a plurality of photographing devices around the modeling object in a stereo surrounding manner, wherein the plurality of photographing devices adopt a distribution mode which is the same as the distribution mode of a plurality of illumination points of the stereo surrounding illumination light source and is in uniform spherical distribution;
and step 13) controlling a plurality of photographing devices simultaneously through wireless flash triggering to acquire a plurality of omnidirectional images of the modeling object simultaneously as an image sequence.
In the image acquisition step of the method, each photographing device adopts a professional single lens reflex camera with the same model.
In the calibration step of the photographing devices of the method, the acquired parameters of each photographing device comprise the internal parameters and the external parameters of each photographing device. The internal parameters include, for example, the horizontal and vertical focal lengths, the skew (tilt) factor and the principal point coordinates; the external parameters describe the relative pose of the camera coordinate system with respect to the world coordinate system, or the relative pose between two photographing devices (i.e. cameras).
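As an illustration of how such parameters are conventionally represented, the short NumPy sketch below assembles an intrinsic matrix K (focal lengths, skew, principal point) and an extrinsic pose [R | t] into a projection matrix; all numeric values are invented placeholders, not calibration results from the patent.

```python
import numpy as np

# Illustrative intrinsic parameters (placeholder values).
fx, fy = 2400.0, 2400.0      # horizontal / vertical focal lengths in pixels
skew = 0.0                   # skew (tilt) factor
cx, cy = 960.0, 640.0        # principal point coordinates

K = np.array([[fx, skew, cx],
              [0.0,  fy, cy],
              [0.0, 0.0, 1.0]])

# Illustrative extrinsic parameters: rotation R and translation t of the
# camera relative to the world coordinate system (identity pose here).
R = np.eye(3)
t = np.zeros((3, 1))

# Projection matrix P = K [R | t]; maps homogeneous world points to pixels.
P = K @ np.hstack([R, t])

X_world = np.array([0.1, 0.2, 2.0, 1.0])   # a 3D point in homogeneous form
x = P @ X_world
print(x[:2] / x[2])                        # pixel coordinates (u, v)
```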
In the image preprocessing step of the method, the noise reduction applied to the plurality of images acquired in the image acquisition step comprises smoothing-filter image denoising and mean-filter image denoising.
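A minimal sketch of the two named denoising operations, assuming OpenCV as the implementation (the patent does not specify a library) and an arbitrary 5 x 5 kernel:

```python
import cv2

img = cv2.imread("view_00.jpg")                 # placeholder path

# Smoothing-filter denoising (Gaussian smoothing as one common choice).
smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=0)

# Mean-filter denoising (box filter averaging over a 5x5 neighbourhood).
mean_filtered = cv2.blur(img, (5, 5))
```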
In the feature point extraction step of the method, at least one of the following feature point detection algorithms is adopted for respectively extracting the feature points of each image from the plurality of images processed in the image preprocessing step:
a feature detection algorithm based on template matching, a feature detection algorithm based on gray level variation, and a feature detection algorithm based on image edge detection.
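The sketch below pairs each of the three families above with one representative OpenCV operator (template matching, Shi-Tomasi corners for gray-level variation, Canny edges); these specific operators are examples chosen here, not detectors prescribed by the patent.

```python
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("view_00.jpg"), cv2.COLOR_BGR2GRAY)  # placeholder path

# (1) Template matching: locate a small template patch in the image.
template = gray[100:140, 100:140]                     # illustrative patch
response = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
_, _, _, best_loc = cv2.minMaxLoc(response)

# (2) Gray-level variation: corner detection (Shi-Tomasi / Harris style).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)

# (3) Edge-based features: Canny edge map, from which edge points can be sampled.
edges = cv2.Canny(gray, 50, 150)
edge_points = np.argwhere(edges > 0)
```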
The stereo matching of the method comprises the following steps:
step S1), determining the positions of the extracted feature points of each image in a world coordinate system according to the double-view distance measurement to obtain the three-dimensional feature points of each image;
and step S2), calculating to obtain a depth image of a single image according to the three-dimensional characteristic points of the obtained images and the parameters of the photographing devices obtained in the step of calibrating the photographing devices, and repeating the steps until obtaining the depth image of each image.
By the method, the matching of the characteristic points in the image sequence formed by the plurality of images is realized, and the corresponding relation of the same characteristic point in different images is obtained.
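One plausible way to obtain such correspondences is descriptor matching; the sketch below uses ORB descriptors with a ratio test, which is an implementation assumption rather than a detail taken from the patent.

```python
import cv2

img_l = cv2.cvtColor(cv2.imread("view_left.jpg"), cv2.COLOR_BGR2GRAY)   # placeholder paths
img_r = cv2.cvtColor(cv2.imread("view_right.jpg"), cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(nfeatures=2000)
kp_l, des_l = orb.detectAndCompute(img_l, None)
kp_r, des_r = orb.detectAndCompute(img_r, None)

# Brute-force Hamming matching with Lowe's ratio test to keep reliable pairs.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des_l, des_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Correspondences: pixel positions of the same feature point in both views.
pts_l = [kp_l[m.queryIdx].pt for m in good]
pts_r = [kp_r[m.trainIdx].pt for m in good]
```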
The three-dimensional reconstruction of the method comprises the following steps:
step S3), the surface geometry of the modeling object is represented by the base surface of a cylinder plus a displacement map. The displacement map is obtained from the positions, in the world coordinate system, of the image feature points obtained in the stereo matching step: direction vectors diverge outward from the coordinate origin toward the feature points, and these displacement vectors point toward the spherical frame formed by the plurality of photographing devices arranged in a uniform spherical distribution;
step S4), defining a minimum cost function of the cylindrical displacement graph by using the obtained multiple depth images and three-dimensional characteristic points;
step S5), optimizing the minimum cost function of the displacement map to obtain weighted average distances of the same three-dimensional feature points at different viewing angles, and obtaining three-dimensional space coordinates of each three-dimensional feature point of the modeled object (i.e., each pixel point on the surface of the modeled object), i.e., completing three-dimensional modeling of the modeled object.
Specifically, when the method is used for three-dimensional modeling of a modeling object, the modeling object is placed in the center positions of a stereo surround light source and a plurality of photographing devices arranged in a stereo surrounding manner, the object is uniformly illuminated by the stereo surround light source, a plurality of images of the modeling object at different angles are obtained by the plurality of photographing devices (namely cameras) arranged in a 360-degree stereo surrounding manner to form an image sequence, and the plurality of images are transmitted to the modeling device for resolving processing to perform three-dimensional modeling, so that a high-precision three-dimensional model of the modeling object is obtained.
In the method, the arrangement mode of the plurality of photographing devices and the three-dimensional all-around illumination light source is polarization distribution, so that depth images of the modeling object under the same illumination and different angles can be obtained, and the details of the modeling object can be effectively recorded.
The embodiment of the invention also provides a three-dimensional modeling system based on the image sequence, which is used for realizing the method and comprises the following steps:
a stereo all-round light source 21, a plurality of photographing devices 22, a power supply device 23, a control device 24 and a modeling device 25; wherein,
the stereo all-round illumination light source 21 is provided with a plurality of illumination points which are uniformly distributed in a spherical shape, and the central positions of the illumination points are the placement positions of a modeling object to be modeled three-dimensionally;
the plurality of photographing devices 22 are uniformly distributed in a spherical shape, and the central positions of the plurality of photographing devices are the placement positions of the modeling object;
the power supply device 23 is electrically connected with the stereo all-around light source and the plurality of photographing devices respectively and can supply power to the stereo all-around light source and each photographing device respectively;
the control device 24 is in communication connection with the plurality of photographing devices respectively, and can control the plurality of photographing devices to acquire a plurality of images of the modeling object in all directions simultaneously;
the modeling device 25 is in communication connection with the plurality of photographing devices, and can receive a plurality of images of the modeling object in all directions acquired by the plurality of photographing devices, and complete three-dimensional modeling of the modeling object after sequentially performing image preprocessing, feature point extraction, stereo matching and three-dimensional reconstruction on the plurality of images.
In the above system, the modeling device 25 sequentially performs image preprocessing, feature point extraction, stereo matching, and three-dimensional reconstruction on a plurality of images as follows:
image preprocessing: carrying out noise reduction processing on a plurality of images acquired by the plurality of photographing devices;
extracting characteristic points: respectively extracting the characteristic points of each image from the plurality of images processed in the image preprocessing step;
stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image;
three-dimensional reconstruction: and determining the weighted average distance of the same three-dimensional characteristic points under different viewing angles by utilizing the depth images of the images obtained in the stereo matching step to obtain the three-dimensional space coordinates of the three-dimensional characteristic points of the modeling object, namely completing the three-dimensional modeling of the modeling object.
In the system, the control device is electrically connected with the stereo all-around light source and can control the stereo all-around light source to provide illumination for the modeling object.
In the system, the power supply device supplies power to the three-dimensional all-round illumination light source and to the plurality of photographing devices separately, which avoids an insufficient power load when the plurality of photographing devices trigger their flashes simultaneously together with the three-dimensional all-round illumination light source.
According to the invention, by using the plurality of photographing devices together with a light source providing stereo surround illumination, high-precision three-dimensional modeling of the modeling object can be completed simply by acquiring an image sequence formed by images of the modeling object at different angles, without using a professional scanner.
The embodiments of the present invention are described in further detail below.
Referring to fig. 3, an embodiment of the present invention provides a three-dimensional modeling system based on an image sequence, including:
the stereo surround-light source 21: the three-dimensional all-dimensional three-dimensional modeling method comprises the steps that a plurality of illumination points are arranged and distributed in a spherical shape, and a modeling object needing three-dimensional modeling is located in the center of a three-dimensional all-around illumination light source, so that the same illumination condition can be obtained at each angle of the modeling object; preferably, each illumination point can be composed of a polaroid and a COB lamp bead, the polaroids and the COB lamp beads of the same model are preferably used, and the illumination conditions of the modeling objects can be ensured to be consistent.
The plurality of photographing devices 22: a plurality of photographing devices (each of which may be a professional single-lens reflex camera, preferably of the same model) surround the modeling object and photograph it in all directions, obtaining a plurality of images of the modeling object at different angles;
the power supply device 23: respectively supplying power to the photographing device and the stereo all-around light source;
the control device 24: the system is in communication connection with the plurality of photographing devices, controls image acquisition and controls the transmission of the acquired images to the modeling device;
the modeling means 25: the modeling object is three-dimensionally modeled using an image sequence formed of a plurality of images taken by the photographing apparatus.
The modeling system is applied to realize the three-dimensional modeling method based on the image sequence, and the method comprises the following steps (see figure 1):
step S1) image acquisition: shooting a modeling object in all directions by using shooting devices (namely cameras) of the same model under the same illumination, acquiring a plurality of images of the modeling object in all directions, and forming an image sequence by the plurality of images;
step S2), calibrating a photographing device: calibrating the photographing device according to the image acquired by the adjacent photographing device, and acquiring parameters (internal parameters and external parameters) of the photographing device for acquiring depth information of the image;
step S3) image preprocessing: carrying out basic noise reduction processing on a plurality of acquired images, reducing random noise, highlighting useful information of the images and improving the quality of the images so as to facilitate the extraction of subsequent feature points;
in step S3, the noise reduction processing includes: and performing smooth filtering image noise reduction processing and mean filtering image noise reduction processing.
Step S4) feature point extraction: acquiring the information of each feature point in the image sequence and performing feature point matching.
In step S4, the feature point detection algorithm mainly includes, but is not limited to: an extraction algorithm based on template matching, an extraction algorithm based on gray level variation, and an extraction algorithm based on image edge detection;
step S5) stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image; matching the feature points in the image sequence formed by the plurality of images to obtain the correspondence of the same feature points in different images;
step S6) three-dimensional reconstruction: according to the dual-view ranging principle and the parameters (internal and external) of the photographing devices acquired in step S2, the weighted average distance of the same three-dimensional feature points under different viewing angles is determined from the correspondence of the feature points obtained in step S5 (i.e. from the depth image of each image obtained in the stereo matching step), so as to acquire the three-dimensional space coordinates of each three-dimensional feature point of the modeling object, thereby completing the three-dimensional modeling of the modeling object.
In step S1 of the method, the image capturing step includes capturing a plurality of images of the modeling object from multiple angles by using a plurality of photographing devices and a stereo surround-lighting source.
In step S2 of the above method, the photographing devices may be calibrated with the widely used Zhang Zhengyou (Zhang's) calibration method to obtain the camera parameters, including the internal parameters (horizontal and vertical focal lengths, skew factor, principal point coordinates, etc.) and the external parameters (the relative pose of the camera coordinate system with respect to the world coordinate system, or the relative pose between two cameras).
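A condensed sketch of Zhang-style calibration with OpenCV, assuming a printed chessboard is photographed by each camera; the board size, file pattern and the use of cv2.calibrateCamera / cv2.stereoCalibrate are implementation assumptions, not details from the patent.

```python
import cv2
import numpy as np
import glob

board = (9, 6)                                    # inner chessboard corners (assumption)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)   # square size = 1 unit

obj_pts, img_pts = [], []
for path in glob.glob("calib_cam0_*.jpg"):        # placeholder file pattern
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Intrinsics: camera matrix K (focal lengths, skew, principal point) + distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# Extrinsics between two adjacent cameras (img_pts2, K2, dist2 obtained the same way):
# ret, K, dist, K2, dist2, R, T, E, F = cv2.stereoCalibrate(
#     obj_pts, img_pts, img_pts2, K, dist, K2, dist2, gray.shape[::-1],
#     flags=cv2.CALIB_FIX_INTRINSIC)
```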
In the above method, steps S1 to S5 obtain the depth information of a single image (i.e. the depth image of a single image) by dual-view ranging. As shown in fig. 2, the intrinsic geometric relationship between two images (left and right views) is called epipolar geometry. Epipolar geometry mainly describes the geometric relationship between the two image planes and the epipolar planes; the epipolar planes form a pencil of planes rotating about the baseline, each plane being determined by a space point. Epipolar geometry is independent of the scene structure and depends only on the internal parameters and the external parameters (relative poses) of the two cameras. C_L-X_L-Y_L-Z_L is the left camera coordinate system and C_R-X_R-Y_R-Z_R is the right camera coordinate system; the image points of a space point P on the left and right image planes are p_L and p_R respectively, and p_L and p_R are homologous (corresponding) points. C_L and C_R are the optical centers of the left and right cameras.
According to the basic principle of pinhole imaging, the space point P, its projection points p_L and p_R on the two image planes, and the optical centers C_L and C_R of the left and right cameras are coplanar; this plane is called the epipolar plane. The set of epipolar planes is a pencil of planes rotating about the baseline (the line connecting the optical centers of the left and right cameras), each epipolar plane being determined by a point in space.
The epipolar plane intersects the left and right image planes in two lines l and l', called epipolar lines. Clearly, the homologous point p_R of p_L lies on the epipolar line l' corresponding to p_L; likewise, the homologous point p_L of p_R lies on the epipolar line l corresponding to p_R. In the stereo matching stage, when searching for the matching point p_R of the image point p_L (the projection of the space point P on the left image), the search only needs to be carried out on the epipolar line l' corresponding to p_L and in its vicinity (to allow for noise in practice). This is called the epipolar constraint. It is very important and is exploited by some classical stereo matching algorithms: it reduces the search range for matching points from two dimensions (the whole image plane) to one dimension (the epipolar line), greatly reducing the amount of computation. This kind of matching is generally performed after the fundamental matrix has been solved and is called guided matching.
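The guided-matching idea can be sketched as follows: estimate the fundamental matrix from initial correspondences and then keep only candidate matches lying near the corresponding epipolar line. The input correspondences and the tolerance value below are placeholders.

```python
import cv2
import numpy as np

# pts_l, pts_r: Nx2 arrays of corresponding pixel coordinates (placeholder data).
rng = np.random.default_rng(0)
pts_l = (rng.random((50, 2)) * 1000).astype(np.float32)
pts_r = pts_l + rng.normal([30.0, 0.0], 1.0, (50, 2)).astype(np.float32)

F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)

# Epipolar line l' in the right image for a point p_L of the left image.
p_l = pts_l[0].reshape(1, 1, 2)
line = cv2.computeCorrespondEpilines(p_l, 1, F).reshape(3)   # a*x + b*y + c = 0

def near_epipolar_line(pt, line, tol=2.0):
    """Epipolar constraint: keep candidates within `tol` pixels of the line l'."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b) < tol

candidates = [p for p in pts_r if near_epipolar_line(p, line)]
```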
The epipole e is the projection of the right camera optical center C_R in the left image; similarly, the epipole e' is the projection of the left camera optical center C_L in the right image. If the two image planes are made coplanar, that is, the baseline is parallel to both image planes and corresponding sides of the two image planes are collinear, the two epipoles are projected to infinity and all epipolar lines in the images become parallel to each other and to one side of the image plane. This is called image rectification (also epipolar rectification). Rectification converts the setup into a dual-view model in a parallel-aligned configuration, as shown in fig. 3: the optical axes of the two cameras are parallel and the cameras are placed in bilateral symmetry, so homologous points lie on the same row of the left and right images. When searching for a matching point, the matching operation only needs to be performed on the row in which the point lies, which further simplifies matching.
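Assuming the pairwise intrinsics and relative pose are available from calibration, epipolar rectification can be performed with OpenCV as sketched below; the matrices, image size and image paths are placeholder values.

```python
import cv2
import numpy as np

# Calibration results for one camera pair (placeholder values).
K1 = K2 = np.array([[2400.0, 0.0, 960.0],
                    [0.0, 2400.0, 640.0],
                    [0.0, 0.0, 1.0]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                       # relative rotation between the two cameras
T = np.array([0.12, 0.0, 0.0])      # relative translation (baseline along x)
size = (1920, 1280)                 # image width, height

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

img_l = cv2.imread("view_left.jpg")      # placeholder paths
img_r = cv2.imread("view_right.jpg")
rect_l = cv2.remap(img_l, map1x, map1y, cv2.INTER_LINEAR)
rect_r = cv2.remap(img_r, map2x, map2y, cv2.INTER_LINEAR)
# After remapping, homologous points lie on the same image row in rect_l and rect_r.
```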
Assume the two cameras have the same focal length f, the baseline distance between them is B, and the optical axes are parallel to the z axis. The image points of the space point P projected on the left and right images lie on the same row and are p_L(x_L, y_L) and p_R(x_R, y_R) respectively, satisfying the following relationship:
y_L = y_R = y,    d(x_L, y) = x_L - x_R    (1-1)
In the above formula (1-1), d(x_L, y) is the parallax (disparity); the depth value Z can then be calculated from the similar-triangle principle:
Z = f · B / d(x_L, y)    (1-2)
As can be seen from formula (1-2), the depth Z is related to the baseline length B, the camera focal length f and the parallax d(x_L, y) of the corresponding points. The parallax of the corresponding points can be obtained by stereo matching, the focal length f of the camera is known from camera calibration, and therefore the depth value of each pixel position can be calculated.
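On a rectified pair, formula (1-2) can be applied pixel-wise once a disparity map is available. The sketch below obtains the disparity with OpenCV's semi-global matcher and converts it to depth; the matcher settings, focal length and baseline are illustrative assumptions.

```python
import cv2
import numpy as np

gray_l = cv2.cvtColor(cv2.imread("rect_left.jpg"), cv2.COLOR_BGR2GRAY)   # placeholder paths
gray_r = cv2.cvtColor(cv2.imread("rect_right.jpg"), cv2.COLOR_BGR2GRAY)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                             P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disparity = sgbm.compute(gray_l, gray_r).astype(np.float32) / 16.0   # SGBM output is fixed-point x16

f = 2400.0      # focal length in pixels (from calibration; placeholder value)
B = 0.12        # baseline in metres (placeholder value)

depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f * B / disparity[valid]        # formula (1-2): Z = f * B / d
```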
In step S6, three-dimensional modeling is performed using the depth information obtained in steps S1 to S5, and the specific steps are as follows:
in a multi-view acquisition setting, multiple depth images are typically merged into a single mesh, and then further refinement may be performed using the merged mesh as a basis. The present invention step S6 takes the form of a method that does not require merging, nor individual refinement. And the surface geometry of the modeled object is represented using a cylinder as a base plus a displacement map, where the displacement vectors point to a spherical frame formed by a plurality of cameras arranged in a spherical distribution for taking a picture. Also, the present invention computes a single mesh directly in the cylindrical parameter domain, eliminating the need to merge multiple depth maps. The cylindrical displacement map X is calculated to calculate the minimum cost function:
E(X) = Σ_{s∈V} φ_s(x_s) + Σ_{(s,t)∈E} ψ_st(x_s, x_t)    (1-3)
In the above formula (1-3), V is the set of all pixels (nodes) of the displacement map, E is the set of edges connecting adjacent nodes, x_s is the offset toward the spherical frame at position s, and φ_s and ψ_st denote the data term and the smoothing term, respectively. The data term is a weighted average of normalized cross-correlation (NCC) costs between adjacent cameras i, j (referred to as camera pairs), using (1 - NCC)/2 as the cost of the 3 x 3 sampling window centered on the point p with cylindrical coordinates (s, x_s). A photometric surface normal is estimated as a weighted blend of the normals seen by each camera: n_ij = (w_i·n_i + w_j·n_j)/|w_i·n_i + w_j·n_j|, where n_i is the surface normal at the point p observed by camera i and n_j is the surface normal at p observed by camera j; w_i = (n_i · v_i) if point p can be observed by camera i, and 0 otherwise. The 3D sampling window is constrained to be as perpendicular as possible to n_ij, producing samples approximately tangential to the surface. To avoid aliasing due to foreshortening, the sample spacing is adjusted per camera pair so that the projected samples are separated by equal numbers of pixels in both cameras of the pair. Adding the NCC costs of all data channels (diffuse albedo, specular albedo and specular normal) provides more positional information than modeling schemes that use only surface color. The overall weight of the camera pair i, j is
w_ij = w_i · w_j
The final data term is:
φ_s(x_s) = [ Σ_{i,j} w_ij · (1 - NCC_ij(p)) / 2 ] / [ Σ_{i,j} w_ij ]    (1-4)
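A small NumPy sketch of the data term of formula (1-4): the NCC of a 3 x 3 window between the two cameras of a pair is turned into the cost (1 - NCC)/2 and averaged with the pair weights w_ij. The window sampling and the weights are simplified placeholders, and the additional albedo/normal data channels mentioned above are omitted.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross-correlation of two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def data_cost(patches, weights):
    """Weighted average of (1 - NCC)/2 over all camera pairs, as in formula (1-4).

    patches: dict camera_id -> 3x3 window sampled around point p
    weights: dict (i, j)    -> pair weight w_ij
    """
    num, den = 0.0, 0.0
    for (i, j), w_ij in weights.items():
        c_ij = (1.0 - ncc(patches[i], patches[j])) / 2.0
        num += w_ij * c_ij
        den += w_ij
    return num / den if den > 0 else 0.0

# Toy usage with synthetic 3x3 samples for three cameras.
rng = np.random.default_rng(0)
patches = {i: rng.random((3, 3)) for i in range(3)}
weights = {(0, 1): 0.9, (1, 2): 0.7, (0, 2): 0.4}
print(data_cost(patches, weights))
```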
the first order smoothing term in the three-dimensional reconstruction is advantageous for segmenting the nominal depth map, since only nominal depth surfaces can be allowed for reconstruction without loss. The second order smoothing term provides a smoother geometric estimate because a smooth fit can be made to any rough plane, but the disadvantage is that it is more difficult to optimize. An existing first order smoothing term based on the photometric surface normal eliminates the piecewise constant artifacts, but geometric flaws are still encountered where the photometric normal deviates from the true geometric normal. And based on a second-order smoothing term of the iterative framework, calculating anisotropic smoothing weights to avoid excessively smoothing sharp features. These two techniques are combined in the three-dimensional reconstruction step of the present invention: the selected smoothing term favors neighboring points in the plane defined by the photometric surface normal and is weighted by anisotropic smoothing weights, updating the message passing between each iteration as follows:
ψ_st(x_s, x_t) = w_st · (…)    (1-5)
In the above formula (1-5), r represents the angular resolution of the displacement map, p_s is the offset corresponding to the point (s, x_s), and n_{i;p_s} denotes the photometric surface normal observed by camera i at the point p_s. If the points s and t are horizontally adjacent, the weight is w_st = w_{h;s} + w_{h;t}; if they are vertically adjacent, it is w_st = w_{v;s} + w_{v;t}. Here w_{h;s} and w_{v;s} represent the smoothing parameters of point s in the horizontal and vertical directions, respectively, defined as follows:
w_{h;s} = W · exp(-β_α · (α_{s+h} - α_{s-h})² - β_n · (n_{s+h} - n_{s-h})²)    (1-6)
In the above formula (1-6), s+h denotes the next horizontal neighbor of point s and s-h denotes the previous horizontal neighbor; α_s and n_s denote the diffuse albedo and the photometric surface normal of point s, respectively; and W, β_α and β_n are three user-adjustable parameters. The vertical smoothing parameter w_{v;s} is defined analogously over the vertical neighbors of s.
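A direct NumPy transcription of formula (1-6) for the horizontal weight w_{h;s} (the vertical weight is analogous), interpreting the squared normal difference as a squared vector norm; the parameter values and input maps are placeholders.

```python
import numpy as np

def horizontal_smoothing_weight(albedo, normals, s_row, s_col,
                                W=1.0, beta_alpha=10.0, beta_n=10.0):
    """w_{h;s} = W * exp(-beta_alpha*(a_{s+h}-a_{s-h})^2 - beta_n*||n_{s+h}-n_{s-h}||^2).

    albedo:  HxW array of diffuse albedo values
    normals: HxWx3 array of photometric surface normals
    (s_row, s_col): position of point s; s+h / s-h are its horizontal neighbours.
    """
    a_next = albedo[s_row, s_col + 1]
    a_prev = albedo[s_row, s_col - 1]
    n_next = normals[s_row, s_col + 1]
    n_prev = normals[s_row, s_col - 1]
    return W * np.exp(-beta_alpha * (a_next - a_prev) ** 2
                      - beta_n * np.sum((n_next - n_prev) ** 2))

# Toy usage on random maps (placeholder data).
rng = np.random.default_rng(1)
alb = rng.random((8, 8))
nrm = rng.normal(size=(8, 8, 3))
nrm /= np.linalg.norm(nrm, axis=2, keepdims=True)
print(horizontal_smoothing_weight(alb, nrm, 4, 4))
```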
Through the steps, the process of optimizing the formula (1-3) is completed. The optimization process of the formula (1-3) is to calculate the distance from the surface of the modeling object to the center of the modeling object at different viewing angles, obtain the final accurate distance by weighted average of the distances of the same points at different viewing angles, and complete the process of the center distances of all the pixel points, namely complete the process of three-dimensional modeling of the modeling object.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A three-dimensional modeling method based on an image sequence is characterized by comprising the following steps:
image acquisition: under the same illumination condition, a plurality of photographing devices which are arranged around a modeling object needing three-dimensional modeling in a distributed mode are arranged in a three-dimensional surrounding mode, and a plurality of images of the modeling object in all directions are simultaneously collected from multiple angles to serve as an image sequence;
calibrating a photographing device: calibrating each photographing device according to the image acquired by the adjacent photographing device to acquire the parameter of each photographing device;
image preprocessing: carrying out noise reduction processing on a plurality of images acquired in the image acquisition step;
extracting characteristic points: respectively extracting the characteristic points of each image from the plurality of images processed in the image preprocessing step;
stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image;
three-dimensional reconstruction: determining the weighted average distance of the same three-dimensional characteristic points under different viewing angles by utilizing the depth image of each image obtained in the stereo matching step to obtain the three-dimensional space coordinates of each three-dimensional characteristic point of the modeling object, namely completing the three-dimensional modeling of the modeling object; the three-dimensional reconstruction of the method comprises:
step S3), representing the surface geometry of the modeling object by a cylindrical displacement diagram formed by adding a displacement diagram to the base surface of a cylinder, wherein the displacement vectors of the displacement diagram point to a spherical frame formed by the plurality of photographing devices which are arranged in a uniform spherical distribution;
step S4), defining a minimum cost function of the cylindrical displacement graph by using the obtained multiple depth images and three-dimensional characteristic points;
step S5), the minimum cost function of the displacement diagram is optimized to obtain the weighted average distance of the same three-dimensional characteristic points under different viewing angles, and the three-dimensional space coordinates of each three-dimensional characteristic point of the modeling object are obtained, namely, the three-dimensional modeling of the modeling object is completed.
2. The method for three-dimensional modeling based on image sequences according to claim 1, wherein the image acquisition step comprises:
step 11) a stereo ring light source is arranged around the modeling object in a stereo surrounding manner, the stereo ring light source is provided with a plurality of illumination points which are uniformly distributed in a spherical shape, and illumination conditions with the same angles can be provided for the modeling object;
step 12) arranging a plurality of photographing devices around the modeling object in a stereo surrounding manner, wherein the plurality of photographing devices adopt a distribution mode which is the same as the distribution mode of a plurality of illumination points of the stereo surrounding illumination light source and is in uniform spherical distribution;
and step 13) controlling a plurality of photographing devices simultaneously through wireless flash triggering to acquire a plurality of omnidirectional images of the modeling object simultaneously as an image sequence.
3. The method for three-dimensional modeling based on image sequences as claimed in claim 1 or 2, wherein in the image acquisition step of the method, each photographing device uses a professional single lens reflex camera with the same model.
4. The method for three-dimensional modeling based on image sequences as claimed in claim 1 or 2, wherein in the step of calibrating the photographing devices of the method, the parameters of each photographing device obtained comprise: internal parameters and external parameters of each photographing device.
5. The method for three-dimensional modeling based on image sequences according to claim 1 or 2, wherein in the image preprocessing step of the method, the noise reduction processing of the plurality of images acquired in the image acquisition step adopts: any one of smooth filtering image denoising processing and mean filtering image denoising processing;
in the feature point extraction step of the method, at least one of the following feature point detection algorithms is adopted for respectively extracting the feature points of each image from the plurality of images processed in the image preprocessing step:
a feature detection algorithm based on template matching, a feature detection algorithm based on gray level variation, and a feature detection algorithm based on image edge detection.
6. The method for three-dimensional modeling based on an image sequence according to claim 1 or 2, characterized in that the stereo matching of the method comprises:
step S1), determining the positions of the extracted feature points of each image in a world coordinate system according to the double-view distance measurement to obtain the three-dimensional feature points of each image;
and step S2), calculating to obtain a depth image of a single image according to the three-dimensional characteristic points of the obtained images and the parameters of the photographing devices obtained in the step of calibrating the photographing devices, and repeating the steps until obtaining the depth image of each image.
7. A three-dimensional modeling system based on an image sequence, characterized in that it is used for implementing the three-dimensional modeling method according to any one of claims 1 to 6 and comprises:
the system comprises a three-dimensional all-round illumination light source, a plurality of photographing devices, a power supply device, a control device and a modeling device; wherein,
the three-dimensional all-round illumination light source is provided with a plurality of illumination points which are uniformly distributed in a spherical shape, and the central positions of the illumination points are the placement positions of a modeling object needing three-dimensional modeling;
the plurality of photographing devices are uniformly and spherically distributed, and the central positions of the plurality of photographing devices are the placing positions of the modeling object;
the power supply device is respectively electrically connected with the stereo all-around light source and the plurality of photographing devices and can respectively supply power to the stereo all-around light source and each photographing device;
the control device is respectively in communication connection with the plurality of photographing devices and can simultaneously control the plurality of photographing devices to acquire a plurality of images of the modeling object in all directions to form an image sequence;
the modeling device is in communication connection with the plurality of photographing devices, can receive a plurality of images of the modeling object in all directions acquired by the plurality of photographing devices, and completes three-dimensional modeling of the modeling object after sequentially performing image preprocessing, feature point extraction, stereo matching and three-dimensional reconstruction on the plurality of images.
8. The image sequence-based three-dimensional modeling system according to claim 7, wherein the modeling means sequentially performs image preprocessing, feature point extraction, stereo matching, and three-dimensional reconstruction on a plurality of images as:
image preprocessing: carrying out noise reduction processing on a plurality of images acquired by the plurality of photographing devices;
extracting characteristic points: respectively extracting the characteristic points of each image from the plurality of images processed in the image preprocessing step;
stereo matching: converting each characteristic point of each image into a three-dimensional characteristic point according to double-view ranging and the parameters of each photographing device obtained in the photographing device calibration step, and calculating to obtain a depth image of each image;
three-dimensional reconstruction: and determining the weighted average distance of the same three-dimensional characteristic points under different viewing angles by utilizing the depth images of the images obtained in the stereo matching step to obtain the three-dimensional space coordinates of the three-dimensional characteristic points of the modeling object, namely completing the three-dimensional modeling of the modeling object.
9. The image sequence based three-dimensional modeling system of claim 7 or 8, wherein the control device, electrically connected to the stereo surround-lighting source, is capable of controlling the stereo surround-lighting source to provide illumination for the modeled object.
CN201811004634.9A 2018-08-30 2018-08-30 Three-dimensional modeling method and system based on image sequence Active CN109242898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811004634.9A CN109242898B (en) 2018-08-30 2018-08-30 Three-dimensional modeling method and system based on image sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811004634.9A CN109242898B (en) 2018-08-30 2018-08-30 Three-dimensional modeling method and system based on image sequence

Publications (2)

Publication Number Publication Date
CN109242898A CN109242898A (en) 2019-01-18
CN109242898B true CN109242898B (en) 2022-03-22

Family

ID=65067977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811004634.9A Active CN109242898B (en) 2018-08-30 2018-08-30 Three-dimensional modeling method and system based on image sequence

Country Status (1)

Country Link
CN (1) CN109242898B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110322544A (en) * 2019-05-14 2019-10-11 广东康云科技有限公司 A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium
CN112016570B (en) * 2019-12-12 2023-12-26 天目爱视(北京)科技有限公司 Three-dimensional model generation method for background plate synchronous rotation acquisition
CN113327291B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of remote target object based on continuous shooting
CN113379822B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111739081A (en) * 2020-08-06 2020-10-02 成都极米科技股份有限公司 Feature point matching method, splicing method and device, electronic equipment and storage medium
CN113178005A (en) * 2021-05-26 2021-07-27 国网河南省电力公司南阳供电公司 Efficient photographing modeling method and device for power equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107257494A (en) * 2017-01-06 2017-10-17 深圳市纬氪智能科技有限公司 A kind of competitive sports image pickup method and its camera system
CN107633532A (en) * 2017-09-22 2018-01-26 武汉中观自动化科技有限公司 A kind of point cloud fusion method and system based on white light scanning instrument
CN107784687A (en) * 2017-09-22 2018-03-09 武汉中观自动化科技有限公司 A kind of three-dimensional rebuilding method and system based on white light scanning instrument
CN108317953A (en) * 2018-01-19 2018-07-24 东北电力大学 A kind of binocular vision target surface 3D detection methods and system based on unmanned plane

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Real-Time Camera Tracking and 3D Reconstruction Using Signed Distance Functions; Erik Bylow et al.; Robotics: Science and Systems (RSS), Online Proceedings; 2013-12-31; full text *
影像信息驱动的三角网格模型优化方法 (Image information driven optimization method for triangular mesh models); 张春森 et al.; 测绘学报 (Acta Geodaetica et Cartographica Sinica); 2018-07-31; vol. 47, no. 7; full text *

Also Published As

Publication number Publication date
CN109242898A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109242898B (en) Three-dimensional modeling method and system based on image sequence
WO2021077720A1 (en) Method, apparatus, and system for acquiring three-dimensional model of object, and electronic device
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN110288642B (en) Three-dimensional object rapid reconstruction method based on camera array
CN107767442B (en) Foot type three-dimensional reconstruction and measurement method based on Kinect and binocular vision
CN106327532B (en) A kind of three-dimensional registration method of single image
Furukawa et al. Accurate camera calibration from multi-view stereo and bundle adjustment
CN106228507B (en) A kind of depth image processing method based on light field
TWI555378B (en) An image calibration, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
JP5442111B2 (en) A method for high-speed 3D construction from images
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN110827392B (en) Monocular image three-dimensional reconstruction method, system and device
CN107274483A (en) A kind of object dimensional model building method
CN109919911A (en) Moving three dimension method for reconstructing based on multi-angle of view photometric stereo
CN110782498B (en) Rapid universal calibration method for visual sensing network
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
CN113345084B (en) Three-dimensional modeling system and three-dimensional modeling method
CN110852979A (en) Point cloud registration and fusion method based on phase information matching
CA3233222A1 (en) Method, apparatus and device for photogrammetry, and storage medium
WO2018056802A1 (en) A method for estimating three-dimensional depth value from two-dimensional images
CN117456114B (en) Multi-view-based three-dimensional image reconstruction method and system
CN116222425A (en) Three-dimensional reconstruction method and system based on multi-view three-dimensional scanning device
CN115205491A (en) Method and device for handheld multi-view three-dimensional reconstruction
CN111998834B (en) Crack monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant