CN113538552B - 3D information synthetic image matching method based on image sorting - Google Patents

3D information synthetic image matching method based on image sorting

Info

Publication number: CN113538552B
Application number: CN202110828646.9A
Authority: CN (China)
Prior art keywords: image, acquisition device, image acquisition, images, matching
Legal status: Active (assumed, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113538552A
Inventors: 左忠斌, 左达宇
Current and original assignee: Tianmu Aishi Beijing Technology Co Ltd
Priority: CN202110828646.9A, filed by Tianmu Aishi Beijing Technology Co Ltd
Publication of CN113538552A; application granted and published as CN113538552B

Classifications

    • G06T7/593: Depth or shape recovery from multiple images; from stereo images
    • G06T15/04: 3D image rendering; texture mapping
    • G06T17/005: 3D modelling; tree description, e.g. octree, quadtree
    • G06T7/337: Image registration using feature-based methods involving reference images or patches
    • G06T2200/08: Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T2207/10004: Still image; photographic image
    • G06T2207/10012: Stereo images
    • G06T2207/10016: Video; image sequence
    • G06T2207/30196: Human being; person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The embodiment of the invention provides a 3D information synthesis image matching method based on image ordering, comprising the following steps: step 1: determining a source image to be matched; step 2: screening images adjacent to the source image; step 3: performing matching calculation on the screened images; step 4: repeating steps 2-3 for the remaining source images to be matched until the matching of all images is completed. For the first time, for the scheme of surround acquisition of a target object of limited volume, a method for screening adjacent pictures is proposed, which reduces the computational difficulty and time of the matching algorithm, so that both the synthesis speed and precision are taken into account.

Description

3D information synthetic image matching method based on image sorting
Technical Field
The invention relates to the technical field of morphology measurement, in particular to the technical field of 3D morphology measurement.
Background
When performing 3D measurement, using 3D measurement data for processing and manufacturing, or using 3D data for display and recognition, a relatively accurate 3D model of the target object must first be built. The method commonly used at present acquires pictures of the object from different angles by machine vision, then matches and splices these pictures to form a 3D model. Such 3D models can be regarded as a digitization of real things; with these data, a matching object can be manufactured. For example, 3D data of the human foot can be acquired to make a better-fitting shoe. The data can also be used for identity verification. For example, after a 3D model of the human iris is synthesized, it can serve as standard identity data; when verification is needed, iris 3D data is acquired again and compared with the standard data to identify the person. However, both factory manufacturing and identity verification place high requirements on the synthesis speed and accuracy of the 3D model; otherwise the customer experience degrades significantly.
In the prior art, improving the synthesis speed is considered to depend on optimizing the 3D model reconstruction algorithm. Various algorithms have thus been proposed to improve 3D model reconstruction, but with limited effect, because they are general-purpose algorithms designed for a wide range of scenarios in which acquisition is relatively random. For example, when modeling a building, an unmanned aerial vehicle is used to photograph it, and the flight path is usually not fixed; that is, the acquisition process is not standardized. Current algorithms are therefore designed for this random process, and there is no prior-art algorithm optimization specifically for loop acquisition that follows a fixed program.
The improvement in accuracy is likewise believed in the art to depend mainly on the accuracy of image acquisition. Using a high-resolution camera naturally increases the accuracy of image acquisition and, to some extent, of 3D modeling, but ultra-high-resolution images also drastically reduce the synthesis speed.
Moreover, synthesis speed and synthesis accuracy are, to some extent, contradictory: increasing the synthesis speed reduces the final 3D synthesis accuracy, while improving the 3D synthesis accuracy requires reducing the speed and using more pictures for synthesis. First, no prior-art algorithm improves both the synthesis speed and the synthesis effect well. Second, acquisition and synthesis are generally treated as two independent processes and are not considered jointly, which hurts the efficiency of 3D synthesis modeling and prevents improving speed and accuracy together. Finally, the prior art has also proposed defining the camera position with empirical formulas involving rotation angle, target size, and object distance, thereby balancing the synthesis speed and effect. In practice, however, it was found that unless an accurate angle-measuring device is provided, users are insensitive to angles and the angle is difficult to determine accurately; the size of the target is also hard to determine accurately, especially in applications where the target must be replaced frequently; each measurement requires much extra work, and special equipment is needed to measure irregular targets accurately. Measurement errors cause camera-position errors, which in turn affect the acquisition and synthesis speed and effect; accuracy and speed still need further improvement.
Therefore, the following technical problems urgently need to be solved: (1) breaking the bias toward optimizing general-purpose algorithms, and finding an optimization method for the rotary-acquisition scene; (2) matching the algorithm with the image acquisition method, so that the synthesis speed and the synthesis precision are improved at the same time; (3) specially optimizing the algorithm for the scene of generating a 3D model from surround acquisition of a target object.
Disclosure of Invention
In view of the above, the present invention has been made to provide a 3D information synthesis image matching method based on image ordering that overcomes, or at least partially solves, the above-mentioned problems.
The invention provides a 3D information synthesis image matching method based on image ordering, comprising:
step 1: determining a source image to be matched;
step 2: screening images adjacent to the source image;
step 3: matching calculation is carried out on the plurality of screened images;
step 4: repeating the step 2-3 for the rest source images to be matched, and finally completing the matching of all the images;
in step 2, according to the rotation speed s of the image acquisition device and the exposure time interval T, the position Pi(Xi, Yi) at any photographing moment is calculated; according to the positions Pi(Xi, Yi) of the image acquisition device at all photographing moments, the distances Di between the current photographing position Pt(Xt, Yt) and the photographing positions at all other moments are calculated; the Di are sorted, the M photographing positions closest to Pt are selected, and the images photographed at the corresponding positions are taken as the images to be matched.
Optionally, in step 2, the position Pi(Xi, Yi) at any photographing moment is calculated as follows:
the arc length formula is L = N × π × r / 180, where N is the central angle in degrees, r is the radius, and L is the arc length; i.e. N = L × 180 / (π × r);
during one photographing circle, the arc length slid by the camera up to the i-th exposure moment is Li = s × T × i;
the angle between the camera's photographing position and the X axis is thus obtained as Ni = Li × 180 / (π × r);
and the position at any exposure moment Pi of one photographing circle is obtained as Xi = r × cos(Ni), Yi = r × sin(Ni);
optionally, the distance between the adjacent acquisition positions of the two images is:
wherein L is the linear distance of the optical center of the image acquisition device when two acquisition positions are adjacent; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; t is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; delta is the adjustment coefficient.
Optionally, δ < 0.603.
Optionally, δ < 0.498.
Optionally, δ < 0.356.
Optionally, δ < 0.311.
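The δ thresholds above can be checked numerically once a definition of δ is fixed. The sketch below assumes δ = L·f / (d·T), i.e. the spacing between adjacent optical centers as a fraction of the sensor footprint at the object distance; this definition is an assumption made for illustration only, since the defining formula itself is not reproduced in the text above.

```python
def delta(L, f, d, T):
    """ASSUMED definition of the adjustment coefficient:
    delta = L * f / (d * T), the spacing L between adjacent optical
    centers divided by the sensor footprint d * T / f at distance T.
    This is an illustrative guess, not the patent's own formula."""
    return L * f / (d * T)

# Example: sensor edge d = 36 mm, focal length f = 50 mm, target 500 mm
# away along the optical axis, adjacent optical centers 80 mm apart.
print(delta(L=80.0, f=50.0, d=36.0, T=500.0) < 0.603)  # True
```

Under this reading, the progressively smaller thresholds simply demand progressively more overlap between adjacent shots.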
Optionally, the method further comprises:
performing image enhancement processing on the screened image;
extracting feature points of the screened images, and matching the feature points to obtain sparse feature points;
inputting the matched feature point coordinates, and solving for the sparse face three-dimensional point cloud and the position and posture data of the image acquisition device, to obtain the model coordinate values of the sparse object-model three-dimensional point cloud and the positions;
and taking the sparse feature points as initial values, performing dense matching of the multi-view images to obtain dense point cloud data.
The second aspect of the present invention also provides a method for generating a physical object by using three-dimensional model data, including any one of the matching methods.
The third aspect of the invention also provides a three-dimensional model construction method, which comprises any one of the matching methods.
The fourth aspect of the present invention also provides a three-dimensional data comparison method, including any one of the matching methods.
Inventive aspects and technical effects
1. For the first time, for the scheme of surround acquisition of a target object of limited volume, a method for screening adjacent pictures is proposed, which reduces the computational difficulty and time of the matching algorithm, so that both the synthesis speed and precision are taken into account.
2. By optimizing the position of the camera that collects the pictures to match the optimized algorithm, the synthesis speed and the synthesis precision are improved at the same time. When optimizing the position, neither the angle nor the size of the target needs to be measured, so the applicability is stronger.
3. By sorting the photo acquisition distances, the photos most suitable for matching can be found under any condition, and the most suitable photos are never missed, improving the algorithm speed and accuracy.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a flow chart of a 3D synthesis method provided by an embodiment of the invention;
FIG. 2 is a flowchart of an image screening method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an implementation of a rotating structure of an acquisition device according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another implementation of the rotating-structure acquisition device according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an implementation of a translational-structure acquisition device according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an implementation of irregular motion of an acquisition device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an implementation of a multi-camera-structure acquisition device according to an embodiment of the present invention;
The correspondence between the reference numerals and the components is as follows:
1 objective table, 2 rotating device, 3 rotating arm, 4 image acquisition device.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
3D synthetic method flow
The image acquisition device acquires a group of images of the target object through relative movement with the target object; the acquisition device is described in detail in the acquisition device embodiments below.
The processing unit obtains 3D information of the target object according to a plurality of images in the group of images. The specific algorithm is as follows. Of course, the processing unit may be directly disposed in the housing where the image capturing device is located, or may be connected to the image capturing device through a data line or through a wireless manner. For example, an independent computer, a server, a cluster server, or the like may be used as the processing unit, and the image data acquired by the image acquisition device may be transmitted to the processing unit for 3D synthesis. Meanwhile, the data of the image acquisition device can be transmitted to the cloud platform, and the 3D synthesis is performed by utilizing the powerful computing capacity of the cloud platform.
When the acquired pictures are used for 3D synthesis, the 3D synthesis can be realized by adopting the existing algorithm, and the optimization algorithm provided by the invention can also be adopted, as shown in fig. 1, and mainly comprises the following steps:
step 10: and performing image enhancement processing on all the input photos. The following filters are used to enhance the contrast of the original photograph and to suppress noise at the same time.
Wherein: g (x, y) is the gray value of the original image at x, y), f (x, y) is the gray value of the original image at the point after being enhanced by a Wallis filter, m g Is the local gray level mean value s of the original image g Is the standard deviation of local gray scale of the original image, m f S is the local gray target value of the transformed image f The target value of the local gray standard deviation of the transformed image is obtained. c epsilon (0, 1) is the expansion constant of the image variance, and b epsilon (0, 1) is the image brightness coefficient constant.
The filter can greatly enhance image texture modes with different scales in the image, so that the number and the precision of feature points can be improved when the point features of the image are extracted, and the reliability and the precision of a matching result are improved when the photo features are matched.
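A minimal NumPy sketch of this kind of Wallis-style enhancement is shown below. It applies the transform globally rather than per local window, and the target values m_f, s_f and the constants c, b are illustrative choices, not values from the patent.

```python
import numpy as np

def wallis(g, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Shift an image's (here: global) mean and standard deviation
    toward the targets m_f and s_f; c and b in (0, 1) damp the
    variance expansion and the brightness change respectively."""
    g = g.astype(np.float64)
    m_g, s_g = g.mean(), g.std()
    r1 = c * s_f / (c * s_g + (1.0 - c) * s_f)  # gain on the deviation term
    r0 = b * m_f + (1.0 - b) * m_g              # new brightness level
    return (g - m_g) * r1 + r0

img = np.array([[10, 20], [30, 200]])
out = wallis(img)
print(out.mean())  # the deviation term averages to ~0, so this is close to r0
```

A windowed version would compute m_g and s_g over a sliding neighborhood of each pixel, which is what gives the filter its multi-scale texture enhancement.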
Step 20: feature points are extracted from all input photos and matched to obtain sparse feature points. A SURF operator is adopted to extract and match the feature points of the images. SURF feature matching mainly comprises three processes: feature point detection, feature point description, and feature point matching. The method uses a Hessian matrix to detect feature points, uses box filters instead of second-order Gaussian filtering, accelerates convolution with an integral image to improve the calculation speed, and reduces the dimension of the local image feature descriptor to accelerate matching. The steps are: (1) constructing a Hessian matrix and generating all interest points for feature extraction; the Hessian matrix is constructed to generate stable edge points (mutation points) of the image; (2) scale-space feature point localization: each pixel processed by the Hessian matrix is compared with the 26 points in its neighborhood in the two-dimensional image space and the scale space, key points are preliminarily located, weak-energy and wrongly located key points are filtered out, and the final stable feature points are kept; (3) the main direction of each feature point is determined using the Haar wavelet features in its circular neighborhood.
Specifically, in the circular neighborhood of the feature point, the sums of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector are counted; the sector is then rotated in steps of 0.2 radian and the Haar wavelet responses in the region are counted again; finally the direction of the sector with the largest value is taken as the main direction of the feature point; (4) a 64-dimensional feature point description vector is generated: a 4×4 block of rectangular sub-regions is taken around the feature point, oriented along the main direction of the feature point. Each sub-region counts the Haar wavelet responses of 25 pixels in the horizontal and vertical directions, both relative to the main direction. The Haar wavelet responses comprise 4 values: the sum of the horizontal values, the sum of the vertical values, the sum of the horizontal absolute values, and the sum of the vertical absolute values; these 4 values form the feature vector of each sub-block, so 4 × 4 × 4 = 64-dimensional vectors serve as the SURF feature descriptors; (5) the feature points are matched: the matching degree is determined by calculating the Euclidean distance between two feature points, and the shorter the Euclidean distance, the better the match between the two feature points.
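Step (5), Euclidean-distance matching of 64-dimensional descriptors, can be sketched in NumPy as follows. The ratio test against the second-best neighbor is a common robustness filter added here for illustration; it is not something the text above specifies.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find its nearest neighbor in
    desc_b by Euclidean distance; keep the pair only if the best
    distance is clearly smaller than the second-best (ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 64))            # five fake 64-d SURF descriptors
b = a + 0.01 * rng.normal(size=(5, 64)) # slightly perturbed copies
print(match_descriptors(a, b))          # each descriptor matches its own copy
```

In a real pipeline the descriptors would come from a SURF implementation rather than random vectors, but the distance computation and sorting are the same.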
Step 30: the matched feature point coordinates are input, and the sparse three-dimensional point cloud and the position and posture data of the photographing camera are solved by bundle adjustment, obtaining the model coordinate values of the sparse model three-dimensional point cloud and the camera positions; the sparse feature points are then taken as initial values, dense matching of the multi-view photos is performed, and dense point cloud data are obtained. The process mainly comprises four steps: stereo pair selection, depth map calculation, depth map optimization, and depth map fusion. For each image in the input dataset, a reference image is selected to form a stereo pair used to compute the depth map. A rough depth map of every image is thus obtained; since these may contain noise and errors, the neighborhood depth maps are used for a consistency check to optimize the depth map of each image. Finally, depth map fusion yields the three-dimensional point cloud of the whole scene.
Step 40: the curved surface of the human face is reconstructed using the dense point cloud. This comprises defining an octree, setting the function space, creating the vector field, solving the Poisson equation, and extracting the isosurface. An integral relation between the sampling points and the indicator function is obtained from the gradient relation; the vector field of the point cloud is obtained according to the integral relation, and an approximation of the gradient field of the indicator function is computed, forming the Poisson equation. According to the Poisson equation, an approximate solution is obtained by matrix iteration, the isosurface is extracted using the marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
Step 50: fully automatic texture mapping of the face model. After the surface model is constructed, texture mapping is performed. The main process comprises: (1) texture data are obtained through the surface triangular mesh of the reconstructed target image; (2) visibility analysis of the triangular faces of the reconstructed model: the visible image set and the optimal reference image of each triangular face are calculated using the calibration information of the images; (3) triangular-face clustering generates texture patches: according to the visible image set of each triangular face, its optimal reference image, and the neighborhood topological relations of the triangular faces, the triangular faces are clustered into several reference-image texture patches; (4) the texture patches are automatically sorted to generate the texture image: the generated texture patches are sorted by size, a texture image with the minimum enclosing area is generated, and the texture mapping coordinates of each triangular face are obtained.
It should be noted that the above algorithm is the optimized algorithm of the present invention, matched with the image acquisition conditions; using it takes both the synthesis time and quality into account, which is one of the inventive points of the present invention. Of course, a conventional prior-art 3D synthesis algorithm may also be used, with some impact on the synthesis effect and speed.
Image screening method
In synthesis using the above method, the most important step is the matching of feature points. In general, one image must be match-calculated against all the remaining acquired images to determine which images can be matched with it. Such an algorithm has wide applicability because it need not consider the source of the images, but it is clearly very computationally intensive. Through many experiments, the present invention found that in practice the best matches for each image are the images around it, i.e. the pictures that overlap it. Therefore, the method proposed here first screens out the images closest to each image for the matching calculation, so that matching can, with high probability, be completed within the calculation of the first few images; the matching calculation need not be run over all images, and the matching efficiency can be greatly improved. As shown in fig. 2, the specific screening method is as follows:
1. From the rotation speed s of the camera and the exposure time interval T, the position Pi(Xi, Yi) at any photographing moment can be calculated, specifically:
1-1. According to the arc length formula L = N × π × r / 180, where N is the central angle in degrees, r is the radius, and L is the arc length, N = L × 180 / (π × r);
1-2. During one photographing circle, the arc length slid by the camera up to the i-th exposure moment is Li = s × T × i;
1-3. The angle between the camera's photographing position and the X axis is obtained as Ni = Li × 180 / (π × r);
1-4. The position at any exposure moment Pi of one photographing circle is obtained as Xi = r × cos(Ni), Yi = r × sin(Ni);
2. According to the camera positions Pi(Xi, Yi) at all photographing moments, the distances Di between the current photographing position Pt(Xt, Yt) and the photographing positions at all other moments are calculated; the Di are sorted, and the M photographing positions closest to Pt are selected. Typically M may be 4, 5, … 10.
3. The calculation of step 2 is performed for all positions, obtaining the neighbors of each photographing position, i.e. all neighboring photos of each photo.
4. Each photo and all its neighboring photos are match-calculated using the steps of the "3D synthesis method flow", and the model is finally built. In this case the input photos of the "3D synthesis method flow" are not all photos, but the photos screened out as described above.
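Steps 1-4 above can be sketched as follows. The rotation speed s, exposure interval T, radius r, and neighbor count M used in the example are illustrative values, not values fixed by the patent.

```python
import math

def shooting_positions(s, T, r, num_shots):
    """Positions Pi(Xi, Yi) on a circle of radius r, given the linear
    rotation speed s (arc length per unit time) and exposure interval T."""
    positions = []
    for i in range(num_shots):
        arc = s * T * i   # arc length slid by the camera up to shot i
        angle = arc / r   # central angle in radians
        positions.append((r * math.cos(angle), r * math.sin(angle)))
    return positions

def nearest_neighbors(positions, t, M):
    """Indices of the M shooting positions closest to position index t."""
    xt, yt = positions[t]
    dists = [(math.hypot(x - xt, y - yt), i)
             for i, (x, y) in enumerate(positions) if i != t]
    dists.sort()  # sort the Di and keep the M smallest
    return [i for _, i in dists[:M]]

# Example: 36 shots around a circle of radius 0.5 m.
pos = shooting_positions(s=0.1, T=0.5, r=0.5, num_shots=36)
print(nearest_neighbors(pos, t=0, M=4))  # [1, 2, 3, 4]
```

Only the photos taken at these nearest positions are then fed into the matching calculation, instead of the whole set.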
In addition to the above method, other methods may be used to screen images; all aim to obtain the images adjacent, before and after, to a given image. For example:
1. The camera starts to rotate from the starting position; after the rotation speed of the stepping motor has stabilized, the first image is shot and numbered 1;
2. The second photographing position is determined according to the adjacent-position condition (as follows) in the "image acquisition device position optimization", and the second image is shot and numbered 2:
δ < 0.603
where L is the linear distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target object along the optical axis; and δ is the adjustment coefficient;
3. The photographing position at distance L from the second photographing position is the third photographing position; the image shot there is numbered 3; continuing under the same condition, the n-th image shot is numbered n.
4. When modeling with the above "3D synthesis method flow", the images numbered n-2, n-1, n, n+1, n+2 are selected for the matching calculation. Of course, more pictures may be selected, for example n-m, …, n-1, n, n+1, …, n+m, i.e. the 2m images adjacent to number n, as the matching targets.
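Selecting the 2m numbered neighbors of image n can be sketched as below. The wrap-around (modulo) handling assumes a full circle of numbered images was captured, which the text does not state explicitly.

```python
def neighbor_numbers(n, m, total):
    """Image numbers n-m .. n-1 and n+1 .. n+m on a circle of `total`
    images numbered 1..total (the wrap-around is an assumption)."""
    return [((n - 1 + k) % total) + 1 for k in range(-m, m + 1) if k != 0]

print(neighbor_numbers(n=1, m=2, total=100))  # [99, 100, 2, 3]
```

For a partial arc rather than a full circle, the modulo step would be replaced by clamping the numbers to the range 1..total.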
By this method, when 100 photos are input, the synthesis speed is improved by 69.7%, and the synthesis precision and integrity are improved by 16.1%; when 1000 photos are input, the synthesis speed is improved by 90.1%, and the synthesis precision and integrity are improved by 17.3%.
Acquisition device
In order to realize the acquisition of 3D information, the invention provides image acquisition equipment for 3D information acquisition, which comprises an image acquisition device and an acquisition-area moving device. The image acquisition device acquires a group of images of the target object through the relative movement between its acquisition area and the target object; the acquisition-area moving device drives the acquisition area of the image acquisition device to move relative to the target. The acquisition area is the effective field-of-view range of the image acquisition device. The specific acquisition equipment can have different structures, as follows:
(1) acquisition equipment with acquisition area moving device of rotating structure
Referring to fig. 3, a target object is fixed on the objective table 1, the rotation device 2 includes a rotation driving device and a rotation arm 3, wherein the rotation driving device may be located above the target object to drive the rotation arm 3 to rotate, the rotation arm 3 is connected with a column extending downward, and an image acquisition device 4 is installed on the column. The image pickup device 4 is rotated around the object by the driving of the rotating device 2.
In another case, referring to fig. 4, the apparatus comprises a circular objective table 1 for carrying the target; the rotating device 2 comprises a rotation driving device and a rotating arm 3, where the rotating arm 3 is bent: its horizontal lower section is rotatably fixed on the base so that its vertical upper section rotates around the objective table 1; the image acquisition device 4, which acquires images of the target object, is mounted on the upper section of the rotating arm; in particular, the image acquisition device 4 can also pitch up and down along the rotating arm to adjust the acquisition angle.
In fact, the manner in which the image acquisition device rotates around the object is not limited to the above: various structures can be realized, such as arranging the image acquisition device on an annular track around the object, on a turntable, or on a rotating cantilever. In general, the image acquisition device only needs to rotate around the target object. The rotation need not be a complete circular motion; it can cover only a certain angle, according to the acquisition requirement. Nor need the track be circular: the motion track of the image acquisition device can follow other curves, provided the camera shoots the object from different angles.
In addition to the above manners, in some cases the camera can be fixed while the stage carrying the object rotates, so that the direction of the object facing the image acquisition device changes continuously and the image acquisition device can capture images of the object from different angles. The calculation can still be performed as if converted into motion of the image acquisition device, so that the motion satisfies the corresponding empirical formula (described in detail below). For example, in a scenario where the stage rotates, it may be assumed that the stage is stationary and the image acquisition device rotates; the distance between shooting positions of the rotating image acquisition device is set using the empirical formula, the rotation speed of the image acquisition device is deduced from it, and the rotation speed of the stage is derived in turn, which facilitates speed control and realizes 3D acquisition.
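A rough sketch of this conversion (all names and numeric values are illustrative, not from the patent): given a spacing L between virtual camera positions obtained from the empirical formula, the exposure interval, and the radius r of the virtual camera circle, the equivalent stage angular speed follows from v = L/interval and ω = v/r, approximating the chord by the arc for small L:

```python
def stage_angular_speed(L, r, interval):
    """Angular speed (rad/s) of the stage that is equivalent to a virtual
    camera moving a straight-line distance L between exposures taken
    `interval` seconds apart on a circle of radius r around the object.
    Chord is approximated as arc, which holds for small L."""
    v = L / interval   # equivalent linear speed of the virtual camera
    return v / r       # omega = v / r

# Example: 0.05 m spacing, 0.5 m radius, 1 s exposure interval
omega = stage_angular_speed(0.05, 0.5, 1.0)  # -> 0.1 rad/s
```

In practice the spacing L would itself come from the empirical condition described below.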
In addition, in order for the image acquisition device to acquire images of the target object in different directions, both the image acquisition device and the target object can remain stationary while the optical axis of the image acquisition device is rotated. For example, the acquisition-area moving device may be an optical scanning device, so that the acquisition area of the image acquisition device moves relative to the target without the image acquisition device itself moving or rotating. The acquisition-area moving device then includes a light deflection unit, which may be driven mechanically to rotate, driven electrically to deflect the light path, or arranged as multiple groups in space, so that images of the target object are obtained from different angles. The light deflection unit may typically be a mirror that is rotated so that images of the object in different directions are acquired; alternatively, mirrors may be arranged in space around the object so that their light enters the image acquisition device in turn. As before, the rotation of the optical axis in this case can be regarded as rotation of a virtual position of the image acquisition device; with this conversion it is assumed that the image acquisition device rotates, and the calculation uses the empirical formula below.
The image acquisition device is used for acquiring images of the target object and can be a fixed-focus camera or a zoom camera; in particular, it may be a visible-light camera or an infrared camera. Of course, any device with an image acquisition function may be used and does not limit the invention; examples include a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, and any other device with an image acquisition function.
A background plate may also be added to the device in the rotating configurations. The background plate is located opposite the image acquisition device: it rotates synchronously when the image acquisition device rotates and remains stationary when it is stationary, so that the images of the target collected by the image acquisition device all have the background plate as their background. Alternatively, a completely fixed background plate may be set behind the object, so that it serves as the acquisition background however the image acquisition device moves. The background plate is entirely, or at least in its main body, of a solid color, typically a white or black panel, the specific color being selected according to the color of the target body. The background plate is usually flat, but preferably can also be curved, such as a concave, convex, or spherical plate; in certain application scenarios it can even have a wavy surface. It can also be spliced from plates of various shapes, for example three planes spliced into an overall concave shape, or a plane spliced with a curved surface.
The device also comprises a processor, also called a processing unit, for synthesizing a 3D model of the target object from the plurality of images acquired by the image acquisition device according to a 3D synthesis algorithm, thereby obtaining the 3D information of the target object.
In addition to the rotation modes described above, in some situations there is no large space to accommodate the rotating device, and its rotation space is limited. For example, the rotating device may include a rotation driving device and a rotating arm whose rotation locus lies a small distance from the rotation center, or whose center line coincides (or approximately coincides) with the rotation center line. The rotation driving device may include a motor connected directly, through a gear, to a straight rotating arm; the physical center line of the rotating arm then coincides with its rotation center line. In another case, the rotating arm is L-shaped, comprising a cross arm and a vertical arm: the cross arm is connected with the rotation driving device, and the image acquisition device is mounted on the vertical arm. A motor in the rotation driving device drives the cross arm to rotate, and the vertical arm fixedly connected with the cross arm rotates accordingly, so that the rotation center line does not coincide with the physical center line of the vertical arm. To save rotation space, this misalignment distance, or the cross-arm size, may be suitably reduced. Of course, in use the vertical arm of the L-shaped rotating arm can be placed inside the object with the cross arm outside, which reduces the requirement on rotation space but requires a longer cross arm.
(2) Acquisition equipment with translational motion structure of acquisition area moving device
In addition to the rotation structures described above, the image acquisition device 4 may move along a linear trajectory relative to the object. As shown in fig. 5, for example, the image acquisition device is located on a linear track and photographs the object sequentially while moving along the track, without rotating during the process; the linear track may also be replaced by a linear cantilever. More preferably, as the whole image acquisition device moves along the linear track, it rotates slightly so that its optical axis faces the target object.
(3) Acquisition equipment with acquisition area moving device of random motion structure
Sometimes the movement of the acquisition area is irregular. As shown in fig. 6, for example, the image acquisition device 4 may be hand-held and moved around the object to shoot; in this case it is difficult to move along a strict orbit, and the motion track of the image acquisition device is difficult to predict accurately. How to ensure that the photographed images can accurately and stably synthesize a 3D model is therefore a major problem in this case, and one that has not been addressed. A common approach is to take many photographs and rely on redundancy in their number, but the synthesis results are then not stable. Although there are ways to improve the result by limiting the rotation angle of the camera, in practice the user is not sensitive to angle: even if a preferred angle is given, it is difficult for the user to maintain it during hand-held shooting. The invention therefore proposes improving the synthesis effect and shortening the synthesis time by limiting the moving distance of the camera between two photographs.
For example, in face recognition, the user can hold the mobile terminal and move it around his or her face while shooting. As long as the empirical condition on the photographing positions (described in detail below) is met, a 3D model of the face can be accurately synthesized, and face recognition can be achieved by comparison with a pre-stored standard model, for example to unlock the handset or to verify a payment.
In the case of irregular motion, a sensor may be provided in the mobile terminal or in the image acquisition device to measure the straight-line distance moved by the image acquisition device between two shots; when the movement distance does not satisfy the empirical condition on L (described below), an alarm is given to the user, for example by sounding or lighting an alarm. Alternatively, while the user moves the image acquisition device, the distance already moved and the maximum movable distance L can be displayed on the phone screen or prompted by voice in real time. Sensors for realizing this function include rangefinders, gyroscopes, accelerometers, positioning sensors, and combinations thereof.
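A minimal sketch of such a check, assuming sensor fusion already supplies the straight-line distance between the two shots; the function name, the alarm text, and the numeric values are illustrative, not from the patent:

```python
def check_movement(distance_moved, L_max):
    """Compare the sensor-measured straight-line distance between two shots
    against the maximum allowed spacing L_max from the empirical condition,
    and return a user-facing status string."""
    if distance_moved > L_max:
        return "alarm: camera moved too far between shots"
    return "ok"

print(check_movement(0.08, 0.05))  # exceeds L_max -> alarm
print(check_movement(0.03, 0.05))  # within range  -> ok
```

On a real device the alarm string would instead trigger a sound, a light, or an on-screen prompt, as the paragraph above describes.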
(4) Acquisition equipment in multi-camera mode
It can be appreciated that, besides enabling the camera to capture images of different angles of the target object by means of relative movement between the camera and the target object, a plurality of cameras can be arranged at different positions around the target object, as shown in fig. 7, so that simultaneous capture of images of different angles of the target object can be realized.
Image acquisition device position optimization
For example, when information on the outer surface of a vase is acquired, the image acquisition device can rotate a full circle around the vase and capture images over its 360° circumference. At this time, the position of the image acquisition device needs to be optimized; otherwise it is difficult to balance the time and the effect of 3D model construction. Of course, besides rotating around the target, a plurality of image acquisition devices can also be arranged for simultaneous acquisition (see the multi-camera acquisition equipment for a specific application). In that case the positions of the image acquisition devices still need to be optimized, the optimized empirical conditions are the same as above, and the optimized positions are now the positions between two adjacent image acquisition devices.
When the acquisition-area moving device is of a rotating structure, the image acquisition device rotates around the target object, and during 3D acquisition the direction of its optical axis relative to the target changes between acquisition positions. In this case, the positions of two adjacent image acquisition devices, or two adjacent acquisition positions of one image acquisition device, satisfy the following condition:
δ = (L × f)/(d × T) < 0.603
wherein L is the straight-line distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element (CCD) of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is an adjustment coefficient.
d takes the rectangular length when the two positions are along the length direction of the photosensitive element of the image acquisition device, and the rectangular width when the two positions are along the width direction of the photosensitive element.
When the image acquisition device is at either of the two positions, T is taken as the distance from the photosensitive element to the surface of the target object along the optical axis. In another case, L is the straight-line distance between the optical centers of the two image acquisition devices An and An+1; the devices An-1 and An+2 adjacent to An and An+1, together with An and An+1 themselves, have distances from their respective photosensitive elements to the surface of the target along the optical axis of Tn-1, Tn, Tn+1 and Tn+2, and T = (Tn-1 + Tn + Tn+1 + Tn+2)/4. Of course, the average may be calculated not only over the 4 adjacent positions but also over more positions.
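The four-position averaging rule above can be sketched directly; the function name and sample depth values are illustrative:

```python
def averaged_T(adjacent_depths):
    """Average the optical-axis distances T measured at the positions
    around the pair An, An+1 (four in the patent's example, but the
    rule generalizes to more positions)."""
    return sum(adjacent_depths) / len(adjacent_depths)

# T_{n-1}, T_n, T_{n+1}, T_{n+2} in metres (sample values)
T = averaged_T([0.98, 1.00, 1.02, 1.00])  # -> 1.0
```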
As described above, L should be the straight-line distance between the optical centers of the image acquisition device at the two positions. Since the optical center position is not always easy to determine, the center of the photosensitive element, the geometric center of the image acquisition device, the center of the axis connecting the image acquisition device with the pan/tilt head (or platform or bracket), or the center of the proximal or distal lens surface may be used instead in some cases; experiments show that the resulting error is within an acceptable range, so these substitutions are also within the scope of the invention.
In general, the prior art uses parameters such as object size and field angle to estimate camera positions, and expresses the positional relationship between two cameras as an angle. Angles are inconvenient in practice because they are not easily measured. Moreover, the object size changes with the measured object: for example, after acquiring 3D information of an adult's head, the head size must be measured again before acquiring a child's head. Such inconvenient and repeated measurement introduces errors, which in turn cause errors in camera position estimation. Based on a large amount of experimental data, the present scheme instead gives an empirical condition that the camera positions need to satisfy, which avoids measuring hard-to-measure angles and removes the need to measure object size directly. In the empirical condition, d and f are fixed camera parameters: when the camera and lens are purchased, the manufacturer supplies the corresponding values, and no measurement is needed. T is only a straight-line distance that can be measured conveniently by traditional methods such as a ruler or a laser rangefinder. The empirical formula of the invention therefore makes the preparation process convenient and quick, and improves the accuracy with which camera positions are arranged, so that the cameras can be placed at optimized positions that take both 3D synthesis precision and speed into account; specific experimental data are described below.
Experiments were carried out using the device provided by the invention, and the following experimental results were obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
From the above experimental results and extensive experimental experience, it follows that δ should satisfy δ < 0.603, at which point a partial 3D model can already be synthesized; although some parts cannot be synthesized automatically, this is acceptable where requirements are not high, and the unsynthesized parts can be compensated manually or by a replacement algorithm. In particular, when δ < 0.498, the balance between synthesis effect and synthesis time is optimal; δ < 0.356 can be chosen for a better synthesis effect, in which case the synthesis time increases but the synthesis quality is better; and δ < 0.311 can be chosen to enhance the effect further. When δ is 0.681, synthesis fails. It should be noted that these ranges are merely preferred embodiments and do not limit the scope of protection. The above data were obtained with the camera positions optimized as described.
As can be seen from the above experiments, to determine the photographing positions of the camera with the above formula, only the camera parameters (focal length f and CCD size d) and the distance T from the camera CCD to the object surface are needed, which makes the device easy to design and debug. Since the camera parameters are determined when the camera is purchased and are stated in the product description, they are readily available, and the camera position can be calculated easily from the formula without cumbersome field-of-view or object-size measurements. In particular, when the camera lens must be replaced, the new position can be obtained by directly substituting the replacement lens's parameter f and recalculating; similarly, when different objects are acquired, the repeated object-size measurement made complicated by their varying sizes is no longer needed. The camera positions determined by the invention thus take both synthesis time and synthesis effect into account, and the above empirical condition is one of the inventive points of the invention.
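A minimal sketch of that calculation, assuming the empirical condition takes the algebraic form δ = L·f/(d·T), so that L < δ·d·T/f (this algebraic form is an assumption consistent with the listed parameters), with δ = 0.498 as the quoted effect/time trade-off threshold for outer-surface capture:

```python
def max_spacing(f, d, T, delta=0.498):
    """Largest allowed optical-centre spacing L between adjacent shots,
    assuming the empirical condition is delta = L * f / (d * T),
    i.e. L < delta * d * T / f. All units are metres."""
    return delta * d * T / f

# Full-frame sensor width d = 36 mm, 50 mm lens, object 1.5 m away:
L = max_spacing(f=0.050, d=0.036, T=1.5)  # -> about 0.538 m
```

Swapping the lens only changes f in the call, which mirrors the lens-replacement scenario described above.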
The above data were obtained when 3D synthesis is performed on images of the outer surface of the object. With a similar method, experiments on the inner surface of the object and on the connecting portion can be performed, yielding the corresponding data as follows:
When acquiring the inner surface, δ should satisfy δ < 0.587, at which point a partial 3D model can already be synthesized; although some parts cannot be synthesized automatically, this is acceptable where requirements are not high, and the unsynthesized parts can be compensated manually or by a replacement algorithm. In particular, when δ < 0.443, the balance between synthesis effect and synthesis time is optimal; δ < 0.319 can be chosen for a better synthesis effect, in which case the synthesis time increases but the synthesis quality is better; and δ < 0.282 can be chosen to enhance the effect further. When δ is 0.675, synthesis fails. It should be noted that these ranges are merely preferred embodiments and do not limit the scope of protection.
When acquiring the connecting portion, δ should satisfy δ < 0.513; partial 3D models can then be synthesized, and by matching the images of the inner and outer surfaces a complete 3D model comprising both surfaces is formed. Although some parts cannot be synthesized automatically, this is acceptable where requirements are not high, and the unsynthesized parts can be compensated manually or by a replacement algorithm. In particular, when δ < 0.415, the balance between synthesis effect and synthesis time is optimal; δ < 0.301 can be chosen for a better synthesis effect, in which case the synthesis time increases but the synthesis quality is better; and δ < 0.269 can be chosen to enhance the effect further. When δ is 0.660, synthesis fails. It should be noted that these ranges are merely preferred embodiments and do not limit the scope of protection.
The above data were obtained from experiments performed to verify the conditions of the formula and do not limit the invention. Even without these data, the objectivity of the formula is unaffected. Those skilled in the art can adjust the equipment parameters and step details as required to perform experiments and obtain other data conforming to the formula.
Utilization of three-dimensional models
By using the method, a three-dimensional model of the target object can be synthesized, so that the real physical world object is completely digitized. The digitized information can be used for identifying and comparing objects, for designing products, for 3D display, for assisting medical treatment and other various purposes.
For example, after three-dimensional information of a face is acquired, the three-dimensional information can be used as a basis for recognition comparison to perform 3D recognition of the face.
For example, a more fit garment may be designed for a user using a three-dimensional model of the human body.
For example, after generating a three-dimensional model of the workpiece, 3D printing processing may be directly performed.
For example, after a three-dimensional model of the interior of the human body is generated, the human body information may be digitized to simulate a surgical procedure for medical teaching.
The target object and the object each denote an object whose three-dimensional information is to be acquired; this may be a single solid object or a combination of a plurality of objects, for example a head, a hand, and so on. The three-dimensional information of the target object includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and all other parameters carrying three-dimensional features of the target object. In the present invention, three-dimensional means having information in the three directions XYZ, in particular depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic, or stereoscopic but actually include only two-dimensional information and, in particular, no depth information.
The acquisition region in the present invention refers to a range that can be photographed by an image acquisition device (e.g., a camera). The image acquisition device in the invention can be CCD, CMOS, camera, video camera, industrial camera, monitor, camera, mobile phone, tablet, notebook, mobile terminal, wearable equipment, intelligent glasses, intelligent watch, intelligent bracelet and all equipment with image acquisition function.
Rotational motion in the present invention means that, during acquisition, the acquisition plane at the previous position and that at the subsequent position intersect rather than being parallel, or that the optical axis of the image acquisition device at the previous position intersects, rather than parallels, that at the subsequent position. That is, motion of the acquisition area of the image acquisition device around, or partially around, the target object can be regarded as relative rotation of the two. Although the embodiments of the invention exemplify mostly orbital rotational motion, it is understood that the limitation of the invention applies as long as the non-parallel motion between the acquisition area of the image acquisition device and the target object is rotational; the scope of the invention is not limited to the orbital rotation of the embodiments.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. do not denote any order. These words may be interpreted as names.
By now it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described herein in detail, many other variations or modifications of the invention consistent with the principles of the invention may be directly ascertained or inferred from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (11)

1. A 3D information synthetic image matching method based on image sorting, characterized by comprising the following steps:
step 1: determining a source image to be matched;
step 2: screening images adjacent to the source image;
step 3: matching calculation is carried out on the plurality of screened images;
step 4: repeating steps 2-3 for the remaining source images to be matched, finally completing the matching of all images;
in step 2, the position Pi(Xi, Yi) of the image acquisition device at any photographing moment is calculated according to its rotation speed s and the exposure time interval T; from Pi(Xi, Yi), the distance Di between the current photographing position Pt(Xt, Yt) and the photographing positions at all other moments is calculated; the distances Di are sorted, the M photographing positions closest to Pt are selected, and the images photographed at the corresponding positions are taken as the images to be matched.
2. The method of claim 1, wherein: in step 2, the position Pi(Xi, Yi) at any photographing moment is calculated as follows:
the arc length formula is L = N × π × r/180, wherein N is the central angle in degrees, r is the radius, and L is the arc length; that is, N = L × 180/(π × r);
at the i-th exposure moment as the image acquisition device travels one photographing circle, the arc length through which it has moved is Li = s × T × i;
the angle between the photographing position of the image acquisition device and the X axis is then N = Li × 180/(π × r);
whereby the position of the image acquisition device at exposure moment Pi of any photographing circle is Xi = r × cos(N), Yi = r × sin(N).
3. The method of claim 1, wherein: the distance between two adjacent acquisition positions of the images satisfies:
δ = (L × f)/(d × T)
wherein L is the straight-line distance between the optical centers of the image acquisition device at the two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the length or width of the rectangular photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; and δ is the adjustment coefficient.
4. A method as claimed in claim 3, wherein: δ < 0.603.
5. A method as claimed in claim 3, wherein: δ < 0.498.
6. A method as claimed in claim 3, wherein: δ < 0.356.
7. A method as claimed in claim 3, wherein: δ < 0.311.
8. The method of any one of claims 1-7, wherein: the method further comprises the steps of:
performing image enhancement processing on the screened images;
extracting feature points from the screened images and matching the feature points to obtain sparse feature points;
inputting the matched feature point coordinates and solving for the sparse three-dimensional point cloud and the position and attitude data of the image acquisition device, to obtain the model coordinate values of the sparse point cloud of the target object model;
and taking the sparse feature points as initial values, performing multi-view image dense matching to obtain dense point cloud data.
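The sparse feature-point matching step above can be illustrated with a descriptor nearest-neighbour search plus a ratio test, a common technique in multi-view reconstruction; the patent does not specify a detector, descriptor, or matcher, and all names below are assumptions:

```python
def match_features(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two screened images using
    nearest-neighbour search with a ratio test.

    desc_a, desc_b: lists of descriptor vectors (tuples of floats)
    Returns a list of (index_in_a, index_in_b) sparse matches.
    """
    def dist2(u, v):
        # squared Euclidean distance between two descriptors
        return sum((a - b) ** 2 for a, b in zip(u, v))

    matches = []
    for i, da in enumerate(desc_a):
        # rank all descriptors of the other image by distance to da
        scored = sorted((dist2(da, db), j) for j, db in enumerate(desc_b))
        if len(scored) >= 2:
            best, second = scored[0], scored[1]
            # accept only if the best match is clearly better than the
            # runner-up (ratio squared, since dist2 is a squared distance)
            if best[0] < (ratio ** 2) * second[0]:
                matches.append((i, best[1]))
    return matches
```

The resulting sparse matches would then feed the pose/point-cloud solving and serve as initial values for the dense multi-view matching described in the claim.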
9. A method for generating a physical object using three-dimensional model data, comprising the matching method according to any one of claims 1 to 8.
10. A three-dimensional model construction method, characterized by comprising the matching method according to any one of claims 1 to 8.
11. A three-dimensional data comparison method comprising the matching method of any one of claims 1 to 8.
CN202110828646.9A 2020-02-17 2020-02-17 3D information synthetic image matching method based on image sorting Active CN113538552B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110828646.9A CN113538552B (en) 2020-02-17 2020-02-17 3D information synthetic image matching method based on image sorting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110828646.9A CN113538552B (en) 2020-02-17 2020-02-17 3D information synthetic image matching method based on image sorting
CN202010095696.6A CN111325780B (en) 2020-02-17 2020-02-17 A Rapid Construction Method of 3D Model Based on Image Screening

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010095696.6A Division CN111325780B (en) 2020-02-17 2020-02-17 A Rapid Construction Method of 3D Model Based on Image Screening

Publications (2)

Publication Number Publication Date
CN113538552A CN113538552A (en) 2021-10-22
CN113538552B true CN113538552B (en) 2024-03-22

Family

ID=71172730

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110828646.9A Active CN113538552B (en) 2020-02-17 2020-02-17 3D information synthetic image matching method based on image sorting
CN202010095696.6A Active CN111325780B (en) 2020-02-17 2020-02-17 A Rapid Construction Method of 3D Model Based on Image Screening

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010095696.6A Active CN111325780B (en) 2020-02-17 2020-02-17 A Rapid Construction Method of 3D Model Based on Image Screening

Country Status (1)

Country Link
CN (2) CN113538552B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112254669B (en) * 2020-10-15 2022-09-16 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition equipment of many bias angles

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101266690A (en) * 2007-03-15 2008-09-17 华南农业大学 System and method for three-dimensional image reconstruction of plant root morphology
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A three-dimensional reconstruction method for large-scene objects
CN106683173A (en) * 2016-12-22 2017-05-17 西安电子科技大学 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching
CN108470373A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 An infrared-based 3D four-dimensional data acquisition method and device
CN109269405A (en) * 2018-09-05 2019-01-25 天目爱视(北京)科技有限公司 A fast 3D measurement and comparison method
CN109394168A (en) * 2018-10-18 2019-03-01 天目爱视(北京)科技有限公司 An iris information measuring system based on light control
CN109785278A (en) * 2018-12-21 2019-05-21 北京大学深圳研究生院 A three-dimensional foot image processing method and device, electronic equipment and storage medium
CN110533774A (en) * 2019-09-09 2019-12-03 江苏海洋大学 A smartphone-based three-dimensional model reconstruction method

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
FR2929416B1 (en) * 2008-03-27 2010-11-05 Univ Paris 13 METHOD FOR DETERMINING A THREE-DIMENSIONAL REPRESENTATION OF AN OBJECT FROM A CUTTING IMAGE SEQUENCE, COMPUTER PROGRAM PRODUCT, CORRESPONDING OBJECT ANALYSIS METHOD, AND IMAGING SYSTEM
US10460518B2 (en) * 2015-12-31 2019-10-29 Dassault Systemes Solidworks Corporation Modifying a sub-division model based on the topology of a selection
CN106228507B (en) * 2016-07-11 2019-06-25 天津中科智能识别产业技术研究院有限公司 A light-field-based depth image processing method
CN108629799B (en) * 2017-03-24 2021-06-01 成都理想境界科技有限公司 A method and device for realizing augmented reality
CN108732584B (en) * 2017-04-17 2020-06-30 百度在线网络技术(北京)有限公司 Method and device for updating map
CN108198145B (en) * 2017-12-29 2020-08-28 百度在线网络技术(北京)有限公司 Method and device for point cloud data restoration
CN108537865A (en) * 2018-03-21 2018-09-14 哈尔滨工业大学深圳研究生院 A pseudo-classical architecture model generation method and device based on visual three-dimensional reconstruction
CN108776492B (en) * 2018-06-27 2021-01-26 电子科技大学 Binocular camera-based autonomous obstacle avoidance and navigation method for quadcopter
CN109961505A (en) * 2019-03-13 2019-07-02 武汉零点视觉数字科技有限公司 A digital reconstruction system for ancient tomb chamber architecture

Patent Citations (9)

Publication number Priority date Publication date Assignee Title
CN101266690A (en) * 2007-03-15 2008-09-17 华南农业大学 System and method for three-dimensional image reconstruction of plant root morphology
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A three-dimensional reconstruction method for large-scene objects
CN106683173A (en) * 2016-12-22 2017-05-17 西安电子科技大学 A Method of Improving the Density of 3D Reconstruction Point Cloud Based on Neighborhood Block Matching
CN108470373A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 An infrared-based 3D four-dimensional data acquisition method and device
CN109269405A (en) * 2018-09-05 2019-01-25 天目爱视(北京)科技有限公司 A fast 3D measurement and comparison method
CN110543871A (en) * 2018-09-05 2019-12-06 天目爱视(北京)科技有限公司 A point-cloud-based 3D comparison measurement method
CN109394168A (en) * 2018-10-18 2019-03-01 天目爱视(北京)科技有限公司 An iris information measuring system based on light control
CN109785278A (en) * 2018-12-21 2019-05-21 北京大学深圳研究生院 A three-dimensional foot image processing method and device, electronic equipment and storage medium
CN110533774A (en) * 2019-09-09 2019-12-03 江苏海洋大学 A smartphone-based three-dimensional model reconstruction method

Also Published As

Publication number Publication date
CN113538552A (en) 2021-10-22
CN111325780B (en) 2021-07-27
CN111325780A (en) 2020-06-23

Similar Documents

Publication Publication Date Title
CN111292364B (en) Method for rapidly matching images in three-dimensional model construction process
CN111060023B (en) High-precision 3D information acquisition equipment and method
CN113532329B (en) Calibration method with projected light spot as calibration point
CN113379822B (en) Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN111238374B (en) Three-dimensional model construction and measurement method based on coordinate measurement
CN111292239B (en) Three-dimensional model splicing equipment and method
CN113327291B (en) Calibration method for 3D modeling of remote target object based on continuous shooting
CN111445528B (en) A Multi-Camera Common Calibration Method in 3D Modeling
CN111445529B (en) Calibration equipment and method based on multi-laser ranging
CN111076674B 3D acquisition device for close-range target objects
CN112016570B (en) Three-dimensional model generation method for background plate synchronous rotation acquisition
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN112304222B 3D information acquisition device with synchronously rotating background board
CN111208138B (en) Intelligent wood recognition device
CN112629412A Rotary 3D intelligent vision device
WO2021115297A1 (en) 3d information collection apparatus and method
CN113538552B (en) 3D information synthetic image matching method based on image sorting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant