CN112767536A - Three-dimensional reconstruction method, device and equipment of object and storage medium

Info

Publication number
CN112767536A
CN112767536A (application CN202110008816.9A)
Authority
CN
China
Prior art keywords: dimensional, determining, camera, coordinate, target
Prior art date
Legal status
Granted
Application number
CN202110008816.9A
Other languages: Chinese (zh)
Other versions: CN112767536B (en)
Inventors: 李嘉茂, 王贤舜, 朱冬晨, 张晓林
Current Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Original Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Priority: CN202110008816.9A
Application filed by Shanghai Institute of Microsystem and Information Technology of CAS
Publication of CN112767536A; application granted and published as CN112767536B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects


Abstract

The invention relates to the technical field of computer vision, and discloses a three-dimensional reconstruction method, apparatus, device and storage medium for an object. The three-dimensional reconstruction method first obtains the three-dimensional coordinate set of the target object in world coordinates, then determines the three-dimensional coordinate set of the target object in the camera coordinate system, uses a clustering algorithm to determine the target point set of the sharp region, and finds the three-dimensional coordinates in world coordinates corresponding to that point set, thereby realizing three-dimensional reconstruction of the target region. The three-dimensional reconstruction method provided by the present application is characterized by high definition.

Description

Three-dimensional reconstruction method, device and equipment of object and storage medium
Technical Field
The present invention relates to the field of computer vision technologies, and in particular, to a method, an apparatus, a device, and a storage medium for three-dimensional reconstruction of an object.
Background
With the development of microfabrication technology, the production of precision engineering micro-parts puts increasing demands on metrology. Precision systems, such as micro-electromechanical systems (MEMS) and micro-optical devices, contain assemblies of micro-features with dimensions on the order of microns to millimeters. Many non-contact optical methods have been used to determine the shape of objects, such as confocal microscopy, white light interferometry, and triangulation-based projection of microscopic fringes. While confocal microscopy and white light interferometry provide resolution in the nanometer range, the microscopic fringe projection method provides a fast and reliable method for measuring field sizes on the order of 1 mm to several cm.
Phase Shift Projection Fringe Profilometry (PSPFP) has the advantages of non-contact, full-field measurement capability, high profile sampling density, low environmental vulnerability, and so on. With a phase-shift detection scheme, a detection precision of one ten-thousandth of the field of view can be achieved even under conditions of excessive image noise. Conventional microscopic fringe projection methods use a zoom stereomicroscope as the base optical system, with one camera port adapted for fringe projection by LCD, LCOS or DMD. However, the depth of field of a microscope is limited to sub-millimeter levels, which is often insufficient to measure the full depth of a three-dimensional object at once. A large depth of field can be obtained by adding a vertical translation stage for the object, but such a system is complex and inefficient. Furthermore, the calibration for phase-depth conversion using a precision translation stage is very complicated and requires a reference surface, which directly introduces measurement errors. Compared with a traditional lens, a telecentric lens has the characteristic of orthographic projection and offers small distortion, constant magnification, increased depth of field, and so on. Telecentric lenses have therefore become a key component in the development of high-precision measurement applications.
Although the use of a telecentric lens increases the depth of field of the microscope to some extent, the depth of field currently remains limited, which restricts the method when observing thicker object scenes and ultimately leaves the accuracy of the resulting 3D reconstruction of the object low.
Disclosure of Invention
The invention aims to solve the technical problem that the accuracy of a reconstruction result is not high in the prior art.
To solve the above technical problem, the present application discloses in one aspect a method for three-dimensional reconstruction of an object, the method comprising:
acquiring an image containing a target object by a camera; the target object comprises a stripe pattern projected by a projector;
determining a three-dimensional coordinate set under the world coordinate of the target object according to the image;
converting the three-dimensional coordinate set under the world coordinate into a three-dimensional coordinate set under a camera coordinate system by utilizing the external parameters of the camera;
determining a clustering center point of a three-dimensional coordinate set under the camera coordinate system by using a clustering algorithm;
determining a target point set according to the clustering center point and preset parameters;
determining a three-dimensional coordinate set of a target area under the world coordinate from the three-dimensional coordinate set under the world coordinate, wherein the three-dimensional coordinate set of the target area under the world coordinate is a set of three-dimensional coordinates under the world coordinate corresponding to the target point set;
and performing three-dimensional reconstruction on the target area according to the three-dimensional coordinate set of the target area in the world coordinate system.
Optionally, the determining a clustering center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm includes:
determining a Z-direction data set in the camera coordinate system from a three-dimensional coordinate set in the camera coordinate system, wherein the Z direction is a direction perpendicular to an imaging plane of the camera;
performing Meanshift clustering on the Z-direction data set under the camera coordinate system to obtain a maximum clustering set;
the cluster center point is determined from the largest cluster set.
Optionally, after determining a clustering center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm, the method further includes:
and performing data extraction on the image by using a blur detection method to obtain a three-dimensional coordinate set of the first region in the camera coordinate system.
Optionally, the determining a target point set according to the cluster center point and preset parameters includes:
acquiring the preset parameters, wherein the preset parameters are the depth of field and the correction coefficient of the camera;
determining a Z-direction data set of a second region according to the clustering center point and the preset parameters;
determining a Z-direction data set of the first area according to a three-dimensional coordinate set of the first area in a camera coordinate system;
determining a Z-direction data set of a target area from the Z-direction data set of the second area, wherein the Z-direction data set of the target area belongs to the Z-direction data set of the first area;
determining a target point corresponding to each piece of Z-direction data by using the Z-direction data set of the target area;
and determining a target point set based on the target point corresponding to each piece of Z-direction data.
Optionally, the blur detection method includes a defocus-area estimation network method based on a convolutional neural network architecture, and an image gradient method.
Optionally, the determining a three-dimensional coordinate set under the world coordinate according to the image includes:
determining a pixel coordinate set and a camera pixel coordinate set of an imaging plane of the projector according to the image;
determining a first calculation formula according to the internal parameters of the camera and the external parameters of the camera;
determining a second calculation formula according to the internal parameters of the projector and the external parameters of the projector;
and determining a three-dimensional coordinate set under the world coordinate system by using the first calculation formula, the second calculation formula, the pixel coordinate set of the projector imaging plane and the camera pixel coordinate set.
Optionally, after performing three-dimensional reconstruction on the target region according to the three-dimensional coordinate set of the target region in the world coordinate system, the method further includes:
repeating the step of three-dimensional reconstruction of the target area to three-dimensionally reconstruct the preset area until the three-dimensional reconstruction of all areas of the target object is completed;
and splicing the three-dimensional reconstruction result of the target object according to the three-dimensional reconstruction results of all the regions of the target object.
The present application also discloses in another aspect a three-dimensional reconstruction apparatus, comprising:
the acquisition module is used for acquiring an image of the stripes projected to the target object by the projector through the camera;
the first determining module is used for determining a three-dimensional coordinate set under world coordinates according to the image;
the conversion module is used for converting the three-dimensional coordinate set under the world coordinate into a three-dimensional coordinate set under a camera coordinate system by utilizing the external parameters of the camera;
the second determining module is used for determining a clustering center point of the three-dimensional coordinate set under the camera coordinate system by using a clustering algorithm;
the third determining module is used for determining a target point set according to the clustering central point and preset parameters;
the fourth determining module is used for determining a three-dimensional coordinate set of the target area under the world coordinate from the three-dimensional coordinate set under the world coordinate, wherein the three-dimensional coordinate set of the target area under the world coordinate is a set of three-dimensional coordinates under the world coordinate corresponding to the target point set;
and the reconstruction module is used for performing three-dimensional reconstruction on the target area according to the three-dimensional coordinate set of the target area in the world coordinate system.
The present application also discloses in another aspect an electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the three-dimensional reconstruction method described above.
The present application also discloses in another aspect a computer storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the three-dimensional reconstruction method described above.
By adopting the technical scheme, the three-dimensional reconstruction method of the object has the following beneficial effects:
the three-dimensional reconstruction method comprises the steps of obtaining a three-dimensional coordinate set of a target object under a world coordinate, determining the three-dimensional coordinate set of the target object under a camera coordinate system, determining a target point set of a clear area by using a clustering algorithm, finding out the three-dimensional coordinate under the world coordinate corresponding to the target point set, and achieving three-dimensional reconstruction of the target area, so that the reconstruction result has the advantage of high definition.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a system framework diagram of a method for applying three-dimensional reconstruction according to the present application;
FIG. 2 is a flow chart of a three-dimensional reconstruction method in an alternative embodiment of the present application;
FIG. 3 is a schematic illustration of an alternative image including a stripe pattern according to the present application;
FIG. 4 is a flow chart of a three-dimensional reconstruction method in another alternative embodiment of the present application;
FIG. 5 is a flow chart of a three-dimensional reconstruction method in another alternative embodiment of the present application;
FIG. 6 is a schematic diagram of the relationship between various coordinates in an alternative embodiment of the present application;
FIG. 7 is a flow chart of a three-dimensional reconstruction method in another alternative embodiment of the present application;
FIG. 8 is a comparison of a result of a three-dimensional reconstruction of a target object according to the present application with a prior art reconstruction result;
fig. 9 is a schematic structural diagram of a three-dimensional reconstruction apparatus according to an alternative embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, fig. 1 is a system framework diagram of a method for applying three-dimensional reconstruction provided by the present application. The system comprises a terminal 101, a camera system 102 and a projector system 103, wherein the terminal 101 is in communication connection with the camera system 102 and the projector system 103 respectively. The terminal 101 can control the projector system 103 to project stripes with preset phases onto a target object; the camera system 102 obtains an image containing the target object by photographing the target object after the stripe pattern has been projected, and sends the image to the terminal 101. The terminal 101 can then determine a three-dimensional coordinate set of the target object in the world coordinate system by analyzing and processing the image, convert that set into a three-dimensional coordinate set in the camera coordinate system using the external parameters of the camera, determine a target point set of the sharp region from the three-dimensional coordinate set in the camera coordinate system using a clustering algorithm, and perform three-dimensional reconstruction based on the three-dimensional coordinates in the world coordinate system corresponding to the target point set, thereby obtaining the three-dimensional reconstruction result of the target region. The reconstruction result obtained based on this three-dimensional reconstruction method can therefore be sharper.
Optionally, the terminal 101 comprises a control unit, by which the terminal 101 enables control of the camera system 102 and the projector system 103.
Alternatively, the image-processing steps above, including the step of three-dimensionally reconstructing the target region, may be implemented in a server connected to the terminal 101, and the server may feed the processed information back to the terminal.
Alternatively, the terminal 101 may be a physical device such as a smartphone, desktop computer, tablet computer, notebook computer, digital assistant or smart wearable device; the smart wearable device may include a smart bracelet, a smart watch, smart glasses, a smart helmet, and the like. Of course, the terminal 101 is not limited to an electronic device with a physical form, but may also be software running in an electronic device; for example, the terminal 101 may be a web page or an application provided by a service provider to a user.
Alternatively, the terminal 101 may comprise a display screen, a storage device and a control unit connected by a data bus. The display screen is used for displaying data such as images or videos of the target object, and may be the touch screen of a mobile phone or tablet computer. The storage device is used for storing program code and data, including data from the imaging device; it may be the memory of the terminal 101, or a storage device such as a smart media card, a secure digital card or a flash card. Alternatively, the control unit may also be called a processor, which may be single-core or multi-core.
A specific example of the method for three-dimensional reconstruction of an object according to the present application is described below. Fig. 2 is a flow chart of a three-dimensional reconstruction method in an alternative embodiment of the present application. The present specification provides the method operation steps as in the example or the flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible orders of execution and does not represent the only order. In practice, the system or server product may execute the steps sequentially or in parallel (e.g., in a parallel-processor or multi-threaded environment) according to the embodiments or the methods shown in the figures. Specifically, as shown in fig. 2, the method may include the following steps:
S201: acquiring an image containing a target object through a camera; the target object carries a fringe pattern projected by the projector.
In this embodiment, the camera system includes a camera telecentric lens and the projector system includes a projection telecentric lens; the optical axis of the camera telecentric lens and the optical axis of the projection telecentric lens form a preset included angle, forming a standard optical system for telecentric-lens-based microscopic fringe projection profilometry.
In this embodiment, as shown in fig. 3, fig. 3 is a schematic diagram of an image including a stripe pattern, which is optional in the present application. The image captured by the camera contains a plurality of distorted stripes arranged at intervals (as shown in fig. 3b): when the projector projects a series of vertical stripes (as shown in fig. 3a) onto the target object, the thickness of the target object distorts the vertical stripes on its surface, so the thickness information of the target object is contained in the distorted stripe pattern.
Optionally, the target object is an object to be observed.
In order to improve the efficiency of measurement, in an alternative embodiment the projector projects onto the target object as follows. Four groups of transverse stripes are projected, each group being projected four times with a phase step of pi/2 between projections, and a picture is taken at each projection. The phase of the transverse stripes projected in the first group has only one cycle, so transverse phase recovery can be performed directly from the four images of the first group to obtain a first transverse phase value; the phase recovered from the second group yields a determined second transverse phase value based on the phase relationship between the first group and the second group; and so on, a third and a fourth transverse phase value are obtained in turn. The fourth transverse phase value is the final required transverse phase, and based on the same steps the final required longitudinal phase can be obtained. The transverse and longitudinal phases of each object point required subsequently are thereby obtained, where the object points are points on the surface of the target object, corresponding one-to-one both to pixel points on the image and to the spatial points describing the target object in the world coordinate system; hence "object point" is used uniformly in the following description to refer to the corresponding point of the target object in every coordinate system.
It should be noted that in this embodiment each of the aforementioned groups is projected four times with a phase step of pi/2, which is the condition required for subsequent calculation based on a four-step phase-shifting algorithm. It follows that if the transverse phase value is instead calculated based on an N-step phase-shifting algorithm, each group is projected N times with a phase step of 2*pi/N.
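As a minimal illustration of the N-step phase-shifting recovery described above (a sketch under our own naming, not code from the patent), the wrapped phase of one stripe group can be computed as:

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images.

    images: list of N grayscale images (2-D arrays); image k is captured
    with a projected phase shift of 2*pi*k/N.
    Returns the wrapped phase in (-pi, pi].
    """
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return -np.arctan2(num, den)  # standard N-step phase-shifting formula
```

Group-by-group unwrapping as described above then removes the 2*pi ambiguity of each finer group using the phase recovered from the coarser group.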
And S202, determining a three-dimensional coordinate set under the world coordinate of the target object according to the image.
In an alternative embodiment, step S202 is detailed in fig. 4, a flowchart of a three-dimensional reconstruction method in another alternative embodiment of the present application, and comprises the following steps:
S2021: determining a pixel coordinate set of the projector imaging plane and a camera pixel coordinate set according to the image.
Optionally, the step of determining the set of pixel coordinates of the projector imaging plane from the image is as follows:
determining the transverse phase φ_v and the longitudinal phase φ_h of each object point of the target object according to the image, and obtaining the pixel coordinates of each object point on the projector imaging plane according to the following phase recovery formula (1):

u_p = φ_v · W / (2π · N_v),    v_p = φ_h · H / (2π · N_h)    (1)

where N_v indicates the number of cycles of the transverse stripes and N_h the number of cycles of the longitudinal stripes, i.e., N_v is the number of transverse stripes within the image width and N_h the number of longitudinal stripes within the image length, and W and H denote the width and height of the image in pixels. Optionally, the image size is the size of the image as shown in fig. 3(b). u_p represents the pixel coordinate of each object point in the transverse direction of the projector imaging plane, and v_p represents the pixel coordinate of each object point in the longitudinal direction of the projector imaging plane. Optionally, the transverse phase φ_v and the longitudinal phase φ_h of each object point of the target object are determined from the image in the manner of projecting four groups of stripes described above. The pixel coordinate set of the projector imaging plane is obtained from the pixel coordinates of each object point on the projector imaging plane.
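Continuing the sketch above (illustrative code; W, H, N_v and N_h as defined for formula (1)), the unwrapped phases map to projector pixel coordinates as:

```python
import numpy as np

def projector_pixels(phi_v, phi_h, W, H, N_v, N_h):
    """Map unwrapped transverse/longitudinal phases to projector pixel
    coordinates according to formula (1)."""
    u_p = phi_v * W / (2 * np.pi * N_v)
    v_p = phi_h * H / (2 * np.pi * N_h)
    return u_p, v_p
```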
Optionally, after the camera shoots the target object, an image containing the target object is obtained, the camera pixel coordinates corresponding to all pixel points of the image can be directly read subsequently, and a camera pixel coordinate set is obtained based on the camera pixel coordinates corresponding to all pixel points.
S2022: and determining a first calculation formula according to the internal parameters of the camera and the external parameters of the camera.
In this embodiment, let the intrinsic parameter matrix of the camera be K_c. In the standard telecentric (orthographic) imaging model, K_c can be expressed as follows:

K_c = [ m_c  0    0  u_0c ]
      [ 0    m_c  0  v_0c ]
      [ 0    0    0  1    ]

where m_c is the magnification corresponding to the telecentric camera and (u_0c, v_0c) is the principal point of the camera image.
Let the extrinsic parameter of the camera be L_c, the pose relation between the camera telecentric lens and the world coordinate system. L_c can be expressed as follows:

L_c = [ R_c  t_c ]
      [ 0^T  1   ]

where R_c is a 3x3 rotation matrix and t_c a 3x1 translation vector.
Both K_c and L_c are known quantities. Optionally, the camera may be calibrated before the target object is projected, so as to obtain the latest intrinsic and extrinsic parameters of the camera; the current intrinsic and extrinsic parameters of the camera may also be determined based on historical calibration results.
Optionally, the camera may be calibrated using a checkerboard calibration board, for example with Zhang's calibration method, or using a dot calibration board.
Alternatively, if the camera pixel coordinate of an object point at any point of the target object is (u_c, v_c) and the three-dimensional coordinate of the object point in the world coordinate system is (X_w, Y_w, Z_w), the correspondence between the two is as follows:

[u_c, v_c, 1]^T = K_c · L_c · [X_w, Y_w, Z_w, 1]^T    (2)

Formula (2) is the first calculation formula.
S2023: and determining a second calculation formula according to the internal parameters of the projector and the external parameters of the projector.
In this embodiment, let the intrinsic parameter matrix of the projector be K_p. In the same telecentric form, K_p can be expressed as follows:

K_p = [ m_p  0    0  u_0p ]
      [ 0    m_p  0  v_0p ]
      [ 0    0    0  1    ]

where m_p is the magnification corresponding to the projector equipped with the telecentric lens and (u_0p, v_0p) is the principal point of the projector image.
Let the extrinsic parameter of the projector be L_p, the pose relation between the projector telecentric lens and the world coordinate system. L_p can be expressed as follows:

L_p = [ R_p  t_p ]
      [ 0^T  1   ]

where R_p is a 3x3 rotation matrix and t_p a 3x1 translation vector.
Both K_p and L_p are known quantities. Optionally, the projector may be calibrated before the target object is projected, so as to obtain the latest intrinsic and extrinsic parameters of the projector; the current intrinsic and extrinsic parameters of the projector may also be determined based on historical calibration results.
Optionally, the projector may be calibrated by using a checkerboard calibration board, or by using a dot calibration board;
in an optional embodiment, the step of calibrating the projector is: the projector projects a horizontal and vertical stripe pattern with a preset phase to a calibration plate, the calibration plate is provided with a plurality of reference points, the horizontal phase and the vertical phase of each reference point are calculated and obtained based on a phase-shifting algorithm, so that the pixel coordinates of each reference point on the imaging plane of the projector are restored through the formula (1), the subsequent steps are the same as the camera calibration method, and the internal parameters and the external parameters of the projector are calculated and obtained.
In another optional embodiment, the step of calibrating the projector is:
(1) the projector projects horizontal and vertical stripe patterns with preset phases onto a calibration board carrying a plurality of reference points, and the horizontal phase and vertical phase corresponding to each reference point are calculated based on a phase-shifting algorithm; in practice the reference points may not all lie in one plane, i.e., individual reference points protrude from the plane of the calibration board, and if the subsequent steps were carried out using such points the final calibration result would be inaccurate; the camera pixel coordinate of each reference point is therefore also obtained;
(2) forming a transverse phase coordinate set according to the camera pixel coordinate of each reference point and the transverse phase of each reference point, and forming a longitudinal phase coordinate set according to the camera pixel coordinate of each reference point and the longitudinal phase of each reference point;
(3) determining a first plane equation according to the transverse phase coordinate set, substituting the camera pixel coordinate of each reference point into the first plane equation, determining an estimated transverse phase value corresponding to each reference point, performing difference between the estimated transverse phase value corresponding to each reference point and the transverse phase, and taking an absolute value to obtain a first transverse error value;
(4) eliminating the reference point of which the first transverse error value is greater than a first preset threshold value, thereby obtaining a new current transverse phase coordinate set;
(5) the process of obtaining the current first transverse error value is then repeated, and a second transverse error value is obtained by comparing the previous first transverse error value with the current one; optionally, the second transverse error value equals the difference between the previous and current first transverse error values divided by the previous value;
(6) when the second transverse error value is smaller than a second preset threshold value, confirming that the points in the current transverse phase coordinate set are all in one plane, otherwise, continuously repeating the process of obtaining the current second transverse error value until the second transverse error value is smaller than the second preset threshold value;
(7) similarly, the method for obtaining the current longitudinal phase coordinate set is the same as the step for obtaining the current transverse phase coordinate set, i.e. the steps (3) - (6) are repeated, and are not described again;
(8) then, taking a reference point of the intersection of the current transverse phase coordinate set and the current longitudinal phase coordinate set as a target reference point, and further correspondingly finding a transverse phase and a longitudinal phase corresponding to the target reference point;
(9) the pixel coordinates of the target reference points on the projector imaging plane are recovered through formula (1), and the intrinsic and extrinsic parameters of the projector are calculated in subsequent steps identical to those of the camera calibration method.
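A condensed sketch of the iterative plane-fit rejection of steps (3)-(6) (illustrative code with hypothetical names and thresholds; the patent does not prescribe an implementation):

```python
import numpy as np

def refine_phase_plane(uv, phase, err_tol, rel_tol):
    """Iterative plane fitting with outlier rejection, per steps (3)-(6).

    uv:      (N, 2) camera pixel coordinates of the reference points
    phase:   (N,) transverse (or longitudinal) phase of each point
    err_tol: first preset threshold on the absolute phase residual
    rel_tol: second preset threshold on the relative error change
    Returns the indices of the points accepted as lying in one plane.
    """
    idx = np.arange(len(phase))
    prev = None
    while True:
        A = np.column_stack([uv[idx], np.ones(len(idx))])
        coef, *_ = np.linalg.lstsq(A, phase[idx], rcond=None)  # plane: phi = a*u + b*v + c
        resid = np.abs(A @ coef - phase[idx])                  # first error values
        idx = idx[resid < err_tol]                             # reject outlier points
        cur = resid[resid < err_tol].mean()                    # current mean error
        if prev is not None and abs(prev - cur) / prev < rel_tol:
            return idx                                         # points lie in one plane
        prev = cur
```

The same routine is run on the transverse and the longitudinal phase sets, and the intersection of the two surviving index sets gives the target reference points of step (8).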
Alternatively, if the pixel coordinate of an object point at any point of the target object on the projector imaging plane is (u_p, v_p), the relationship between it and the three-dimensional coordinate of the object point in the world coordinate system is as follows:

[u_p, v_p, 1]^T = K_p · L_p · [X_w, Y_w, Z_w, 1]^T    (3)

Formula (3) is the second calculation formula described above.
S2024: and determining a three-dimensional coordinate set under the world coordinate system by using the first calculation formula, the second calculation formula, the pixel coordinate set of the projector imaging plane and the camera pixel coordinate set.
Optionally, substituting the camera pixel coordinate (u_c, v_c) of any object point into formula (2) and the pixel coordinate (u_p, v_p) of the same object point on the projector imaging plane into formula (3) yields the following overdetermined system, from which the three-dimensional coordinate of the object point in the world coordinate system can be solved:

[u_c, v_c, 1, u_p, v_p, 1]^T = [K_c · L_c ; K_p · L_p] · [X_w, Y_w, Z_w, 1]^T    (4)

The three-dimensional coordinates of all object points of the target object in the world coordinate system can be solved in turn through formula (4), so that the three-dimensional coordinate set in the world coordinate system is obtained from the three-dimensional coordinates of all object points.
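A sketch of solving the overdetermined system (4) for one object point by linear least squares (illustrative code; KcLc and KpLp denote the 3x4 products K_c·L_c and K_p·L_p from formulas (2) and (3)):

```python
import numpy as np

def triangulate_point(uc, vc, up, vp, KcLc, KpLp):
    """Solve formula (4) for one object point; returns (X_w, Y_w, Z_w)."""
    A = np.vstack([KcLc, KpLp])                   # 6x4 stacked projections
    b = np.array([uc, vc, 1.0, up, vp, 1.0])
    # Move the column multiplying the homogeneous 1 to the right-hand side,
    # then solve the remaining 6x3 system in the least-squares sense.
    X, *_ = np.linalg.lstsq(A[:, :3], b - A[:, 3], rcond=None)
    return X
```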
And S203, converting the three-dimensional coordinate set under the world coordinate into a three-dimensional coordinate set under a camera coordinate system by utilizing the external parameters of the camera.
Alternatively, as can be derived from the above description, the extrinsic parameter of the camera is L_c, the pose relation between the camera telecentric lens and the world coordinate system; the rigid-body conversion from the world coordinate system to the camera coordinate system is achieved by rotation and translation. Based on the extrinsic parameter L_c of the camera, the three-dimensional coordinate [X_c, Y_c, Z_c]^T in the camera coordinate system is related to the three-dimensional coordinate [X_w, Y_w, Z_w]^T in the world coordinate system as follows:

[X_c, Y_c, Z_c]^T = R_c · [X_w, Y_w, Z_w]^T + t_c    (5)

The three-dimensional coordinates of all object points of the target object in the world coordinate system can be converted into three-dimensional coordinates in the camera coordinate system through formula (5), and the three-dimensional coordinate set in the camera coordinate system is obtained from the three-dimensional coordinates of all object points in the camera coordinate system.
And S204, determining a clustering center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm.
In an alternative embodiment, step S204 is detailed in fig. 5, a flowchart of a three-dimensional reconstruction method in another alternative embodiment of the present application, and comprises the following steps:
S2041: determining a Z-direction data set in the camera coordinate system from the three-dimensional coordinate set in the camera coordinate system, wherein the Z direction is the direction perpendicular to the imaging plane of the camera.
Optionally, the three-dimensional coordinate set in the camera coordinate system is written {(X_ci, Y_ci, Z_ci)}, where i is a natural number greater than zero. As shown in fig. 6, fig. 6 is a schematic diagram of the relationship between the various coordinate systems in an alternative embodiment of the present application: the world coordinate system (Xw, Yw, Zw), the camera coordinate system (Xc, Yc, Zc), the image coordinate system (x, y), and the pixel coordinate system (u, v). The distance between the camera coordinate system and the image coordinate system is the focal length f, i.e., the distance from point O to point Oc in the figure, corresponding to the magnification m_c. Optionally, the relationships between the world coordinate system, the camera coordinate system and the pixel coordinate system are as given in formulas (2) and (5). As can be seen from fig. 6, a sharp image appears on the camera imaging plane only when the target object lies within the depth-of-field range d, so whether a point is out of focus is related to the value of Zc; determining the Z-direction data set in the camera coordinate system from the three-dimensional coordinate set in the camera coordinate system therefore gives the distribution of all object points on the Zc axis.
S2042: and (4) performing Meanshift clustering on the Z-direction data set under the camera coordinate system to obtain a maximum clustering set.
That is, when Meanshift clustering is performed on the Z-direction data set in the camera coordinate system, optionally, the center of a circle with a preset radius is moved continuously, following a preset drift vector, toward places of higher data-point density until it settles where the point density is highest; the data set then enclosed by the circle of preset radius is the target cluster set, i.e., the maximum cluster set. Optionally, because the captured image is the sharpest image of the current scene, most of the object points in the full coordinate set are sharp points and only a few are not, so the maximum cluster set found in this way is the cluster set containing the most Z-direction data.
S2043: the cluster center point is determined from the largest cluster set.
That is, the cluster center point S is determined from the Z-direction data in the maximum cluster set of step S2042: optionally, the average value of all Z-direction data in the maximum cluster set is taken as the cluster center point S, i.e., the circle center; the cluster center point S can also be obtained by a weighted calculation over all Z-direction data in the maximum cluster set.
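A minimal sketch of steps S2041-S2043 using scikit-learn's MeanShift (an illustrative implementation choice; the patent does not mandate a particular library):

```python
import numpy as np
from sklearn.cluster import MeanShift

def cluster_center_z(points_cam, bandwidth=None):
    """Return the cluster center point S of the Z-direction data.

    points_cam: (N, 3) three-dimensional coordinates in the camera
    coordinate system; Z is the axis perpendicular to the imaging plane.
    """
    z = points_cam[:, 2].reshape(-1, 1)          # S2041: Z-direction data set
    ms = MeanShift(bandwidth=bandwidth).fit(z)   # S2042: Meanshift clustering
    labels, counts = np.unique(ms.labels_, return_counts=True)
    largest = labels[np.argmax(counts)]          # maximum cluster set
    return float(z[ms.labels_ == largest].mean())  # S2043: mean as center S
```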
And S205, determining a target point set according to the cluster central point and preset parameters.
Optionally, after the cluster center point S is determined by Meanshift clustering, with d the depth of field and τ a correction coefficient, the target Zc values satisfying the following set are determined, and the target point set of step S205 is then determined from them:
{S-τd<Zc<S+τd}
optionally, τ is in a range of 0.3 to 0.5.
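Continuing the sketch above, the depth-of-field window then selects the candidate sharp points (τ = 0.4 is an assumed mid-range value, chosen from the 0.3 to 0.5 range stated above):

```python
def in_focus_mask(points_cam, S, d, tau=0.4):
    """Boolean mask of points whose Zc lies in (S - tau*d, S + tau*d)."""
    z = points_cam[:, 2]
    return (z > S - tau * d) & (z < S + tau * d)
```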
It should be noted that, besides Meanshift clustering, the method for determining the cluster center point may be hierarchical clustering or another density clustering method.
In an optional implementation manner, after step S205, the method further includes:
and performing data extraction on the image by using a blur detection method to obtain a three-dimensional coordinate set of the first region in the camera coordinate system.
In an alternative embodiment, the blur detection method includes a defocus-area estimation network method based on a convolutional neural network architecture, and an image gradient method.
In this embodiment, the defocus-area estimation network method based on the convolutional neural network architecture is specifically as follows: after the network structure for defocus-area estimation has been preset, it is trained with an existing data set, which may consist of camera pixel coordinate sets of object points of target objects obtained directly from images, or of converted three-dimensional coordinate sets in the camera coordinate system or the world coordinate system. Finally, the picture is input directly into the trained defocus-area estimation network, which directly outputs a set of predicted sharp points {n_i}. Optionally, n_i is the camera pixel coordinate of an object point of the first region; as required, these can be converted into three-dimensional coordinates in the camera coordinate system based on the intrinsic parameters of the camera, thereby obtaining the three-dimensional coordinate set of the first region in the camera coordinate system. Optionally, the image gradient method determines the three-dimensional coordinate set of the first region in the camera coordinate system by solving the gradient distribution of the whole image.
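One concrete (illustrative) reading of the image gradient method builds a sharpness mask from the local gradient magnitude; the window size and threshold below are assumptions, not values from the patent:

```python
import numpy as np
from scipy import ndimage

def sharp_region_mask(image, win=15, thresh=0.02):
    """Estimate the sharp (first) region from the image gradient distribution.

    image: 2-D grayscale array scaled to [0, 1].
    Returns a boolean mask that is True where the locally averaged gradient
    magnitude exceeds `thresh`, i.e., where the fringes are well focused.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                         # gradient magnitude
    local = ndimage.uniform_filter(mag, size=win)  # local average sharpness
    return local > thresh
```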
In an alternative embodiment, step S205 is detailed in fig. 7, a flowchart of a three-dimensional reconstruction method in another alternative embodiment of the present application, and comprises the following steps:
S2051: acquiring the preset parameters, wherein the preset parameters are the depth of field of the camera and the correction coefficient.
S2052: and determining a Z-direction data set of the second region according to the cluster center point and the preset parameters.
Optionally, the Z-direction data set of the second region is determined according to the set condition {S − τd < Zc < S + τd}; each element of this set is the Z-direction datum of an object point in the second region.
S2053: and determining a Z-direction data set of the first area according to the three-dimensional coordinate set of the first area in the camera coordinate system.
That is, the Z-direction data of all object points of the first region are determined from the three-dimensional coordinate set of the first region in the camera coordinate system, and the Z-direction data set of the first region is then obtained from the Z-direction data of all object points of the first region.
S2054: and determining a Z-direction data set of the target area from the Z-direction data sets of the second area, wherein the Z-direction data set of the target area belongs to the Z-direction data set of the first area.
That is, the Z-direction data set of the target region is finally determined as the set of values ρ belonging to both the Z-direction data set of the second region and the Z-direction data set of the first region, where ρ is the Z-direction value of an object point of the target region in the camera coordinate system. Because the finally obtained point set of the target region has undergone two rounds of screening, the accuracy of the obtained sharp points is further improved.
S2055: and determining a target point corresponding to each piece of Z-direction data by using the Z-direction data set of the target area.
S2056: and determining a target point set based on the target point corresponding to each piece of Z-direction data.
Optionally, each pixel point on the image corresponds one-to-one to an object point of the target object, and the three-dimensional coordinate of an object point in the camera coordinate system is likewise uniquely related to its corresponding object point; the corresponding target point can therefore be determined from each piece of Z-direction data in the Z-direction data set of the target region, and the target point set is obtained from these target points.
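Putting steps S2051-S2056 together (an illustrative sketch built on the helper functions above, assuming the blur-detection mask has been flattened to align with the per-point arrays):

```python
import numpy as np

def target_point_indices(points_cam, first_region_mask, S, d, tau=0.4):
    """Indices of the target point set: points inside the depth-of-field
    window (second region) that are also detected as sharp (first region)."""
    second = in_focus_mask(points_cam, S, d, tau)      # S2052: second region
    return np.flatnonzero(first_region_mask & second)  # S2053-S2056
```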
S206, determining a three-dimensional coordinate set of the target area under the world coordinate from the three-dimensional coordinate set under the world coordinate, wherein the three-dimensional coordinate set of the target area under the world coordinate is a set of three-dimensional coordinates under the world coordinate corresponding to the target point set.
And S207, performing three-dimensional reconstruction on the target area according to the three-dimensional coordinate set of the target area in the world coordinate system.
In an optional embodiment, after the three-dimensional reconstruction of the target region according to the three-dimensional coordinate set of the target region in the world coordinate system, the method further includes:
and repeating the step of three-dimensional reconstruction of the target area to carry out three-dimensional reconstruction of the preset area until the three-dimensional reconstruction of all areas of the target object is completed. And splicing the three-dimensional reconstruction result of the target object according to the three-dimensional reconstruction results of all the regions of the target object.
In this embodiment, the preset region may be a region of the target object that has already been observed once: because the target object is thick, only part of the previous region yields a sharp point set, and the stage can subsequently be adjusted (up, down, left, right, forward, backward, etc.) so that stripes are projected onto the target object and it is photographed again, repeating steps S202 to S207. Of course, the preset region may also be another, not-yet-sharp region of the target object.
Optionally, the three-dimensional reconstruction of the preset region may be a directly obtained 3D rendering, or an arrangement of data points presented as 3D point cloud data; subsequently, the 3D point cloud data of all regions can be stitched to obtain the 3D reconstruction result of the whole target object.
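A minimal sketch of the stitching step, assuming each region's reconstruction is an (N_k, 3) point cloud already expressed in the common world coordinate system (registration between stage positions is outside this sketch):

```python
import numpy as np

def stitch_point_clouds(region_clouds):
    """Concatenate per-region 3D point clouds into one reconstruction.

    region_clouds: iterable of (N_k, 3) arrays in world coordinates.
    """
    return np.vstack(list(region_clouds))
```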
As shown in fig. 8, fig. 8 compares a three-dimensional reconstruction result of a target object obtained with the present application against a prior-art reconstruction result. Fig. 8a is an RGB image of the target object captured with a camera; fig. 8b is the result of three-dimensionally reconstructing the target object with the three-dimensional reconstruction method of the present application; fig. 8c is an enlarged view of the rectangular frame region in fig. 8b; and fig. 8d is a partial enlargement, at the same position as the rectangular frame region of fig. 8b, of the three-dimensional reconstruction obtained with the prior art. It can be seen that the reconstruction result obtained after the target object is three-dimensionally reconstructed according to the method of the present application is sharper and truer, while the reconstruction obtained with the prior art still has blank gaps and partially unsharp regions.
The present application further discloses a three-dimensional reconstruction apparatus, as shown in fig. 9, fig. 9 is a schematic structural diagram of a three-dimensional reconstruction apparatus in an alternative embodiment of the present application. It includes:
an acquisition module 901, configured to acquire, by a camera, an image of a stripe projected to a target object by a projector;
a first determining module 902, configured to determine a three-dimensional coordinate set in world coordinates according to the image;
a conversion module 903, configured to convert the three-dimensional coordinate set in the world coordinate into a three-dimensional coordinate set in a camera coordinate system by using the external parameter of the camera;
a second determining module 904, configured to determine a clustering center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm;
a third determining module 905, configured to determine a target point set according to the cluster center point and preset parameters;
a fourth determining module 906, configured to determine, from the three-dimensional coordinate set in the world coordinate, a three-dimensional coordinate set of the target area in the world coordinate, where the three-dimensional coordinate set of the target area in the world coordinate is a set of three-dimensional coordinates in the world coordinate corresponding to the target point set;
and a reconstruction module 907 configured to perform three-dimensional reconstruction on the target region according to a three-dimensional coordinate set of the target region in the world coordinate system.
In an alternative embodiment, the apparatus comprises:
the second determining module is used for determining a Z-direction data set in the camera coordinate system from the three-dimensional coordinate set in the camera coordinate system, wherein the Z direction is a direction perpendicular to an imaging plane of the camera;
the second determining module is used for carrying out Meanshift clustering on the Z-direction data set under the camera coordinate system to obtain a maximum clustering set;
and the second determining module is used for determining the cluster center point from the maximum cluster set.
In an alternative embodiment, the apparatus further comprises:
and the processing module is used for extracting data from the image by using a blur detection method to obtain a three-dimensional coordinate set of the first region in the camera coordinate system.
In an alternative embodiment, the apparatus comprises:
the second determining module is used for acquiring the preset parameters, wherein the preset parameters are the depth of field and the correction coefficient of the camera;
the second determining module is used for determining a Z-direction data set of the second area according to the clustering center point and the preset parameters;
the second determining module is used for determining a Z-direction data set of the first area according to the three-dimensional coordinate set of the first area in the camera coordinate system;
the second determining module is used for determining a Z-direction data set of a target area from the Z-direction data set of the second area, wherein the Z-direction data set of the target area belongs to the Z-direction data set of the first area;
the second determining module is used for determining a target point corresponding to each piece of Z-direction data by using the Z-direction data set of the target area;
and the second determining module is used for determining a target point set based on the target point corresponding to each piece of Z-direction data.
In an alternative embodiment, the blur detection method includes a defocus-area estimation network method based on a convolutional neural network architecture, and an image gradient method.
In an alternative embodiment, the apparatus comprises:
the first determining module is used for determining a pixel coordinate set of an imaging plane of the projector and a pixel coordinate set of the camera according to the image;
the first determining module is used for determining a first calculation formula according to the internal parameters of the camera and the external parameters of the camera;
the first determining module is used for determining a second calculation formula according to the internal parameters of the projector and the external parameters of the projector;
and the first determining module is used for determining a three-dimensional coordinate set under the world coordinate system by utilizing the first calculation formula, the second calculation formula, the pixel coordinate set of the projector imaging plane and the camera pixel coordinate set.
In an alternative embodiment, the apparatus comprises:
the reconstruction module is used for repeating the step of three-dimensional reconstruction of the target area to carry out three-dimensional reconstruction of the preset area until the three-dimensional reconstruction of all areas of the target object is completed;
and the reconstruction module is used for splicing the three-dimensional reconstruction result of the target object according to the three-dimensional reconstruction results of all the regions of the target object.
The present application also discloses in another aspect an electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the three-dimensional reconstruction method described above.
The present application also discloses in another aspect a computer storage medium, in which at least one instruction or at least one program is stored, and the at least one instruction or the at least one program is loaded and executed by a processor to implement the three-dimensional reconstruction method described above.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method of three-dimensional reconstruction of an object, the method comprising:
acquiring an image containing a target object by a camera; the target object comprises a stripe pattern projected by a projector;
determining a three-dimensional coordinate set under the world coordinate of the target object according to the image;
converting the three-dimensional coordinate set under the world coordinate into a three-dimensional coordinate set under a camera coordinate system by utilizing the external parameters of the camera;
determining a clustering center point of a three-dimensional coordinate set under the camera coordinate system by using a clustering algorithm;
determining a target point set according to the clustering center point and preset parameters;
determining a three-dimensional coordinate set of a target area under the world coordinate from the three-dimensional coordinate set under the world coordinate, wherein the three-dimensional coordinate set of the target area under the world coordinate is a set of three-dimensional coordinates under the world coordinate corresponding to the target point set;
and performing three-dimensional reconstruction on the target area according to the three-dimensional coordinate set of the target area in the world coordinate system.
2. The three-dimensional reconstruction method according to claim 1, wherein the determining a cluster center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm comprises:
determining a Z-direction data set in the camera coordinate system from the three-dimensional coordinate set in the camera coordinate system, wherein the Z direction is the direction perpendicular to the imaging plane of the camera;
performing Meanshift clustering on the Z-direction data set in the camera coordinate system to obtain a largest cluster set;
and determining the cluster center point from the largest cluster set.
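A minimal one-dimensional Meanshift over the Z values might look as follows; the flat-kernel formulation, the bandwidth, and the mode-grouping rule are assumptions, since the claim does not fix them:

    import numpy as np

    def meanshift_1d(z, bandwidth, iters=100, tol=1e-6):
        # Shift each sample toward the mean of its flat-kernel
        # neighbourhood until the shifts become negligible; the
        # converged positions are the density modes.
        z = np.asarray(z, dtype=float)
        modes = z.copy()
        for _ in range(iters):
            shifted = np.array([z[np.abs(z - m) <= bandwidth].mean() for m in modes])
            done = np.max(np.abs(shifted - modes)) < tol
            modes = shifted
            if done:
                break
        return modes

    def largest_cluster_center(z, bandwidth=0.05):
        # Samples whose modes land in the same half-bandwidth bucket
        # are one cluster; return the centre of the biggest one.
        modes = meanshift_1d(z, bandwidth)
        labels = np.round(modes / (bandwidth / 2)).astype(int)
        values, counts = np.unique(labels, return_counts=True)
        return modes[labels == values[np.argmax(counts)]].mean()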
3. The three-dimensional reconstruction method according to claim 2, wherein after the determining a cluster center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm, the method further comprises:
performing data extraction on the image by using a blur detection method to obtain a three-dimensional coordinate set of a first region in the camera coordinate system.
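One plausible gradient-based blur detector (claim 5 names an image gradient method, but the patch size and threshold below are illustrative choices) scores each patch by its mean gradient magnitude and keeps only the sharp patches:

    import numpy as np

    def sharp_mask(gray, patch=16, thresh=5.0):
        # Mean gradient magnitude per patch; low-gradient patches are
        # treated as defocused and excluded from the first region.
        gy, gx = np.gradient(np.asarray(gray, dtype=float))
        mag = np.hypot(gx, gy)
        h, w = mag.shape
        mask = np.zeros((h, w), dtype=bool)
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                if mag[y:y + patch, x:x + patch].mean() > thresh:
                    mask[y:y + patch, x:x + patch] = True
        return mask

The pixels where this mask is true, together with their reconstructed depths, would then form the first-region coordinate set of claim 3.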
4. The three-dimensional reconstruction method according to claim 3, wherein the determining a target point set according to the cluster center point and preset parameters comprises:
acquiring the preset parameters, wherein the preset parameters are the depth of field of the camera and a correction coefficient;
determining a Z-direction data set of a second region according to the cluster center point and the preset parameters;
determining a Z-direction data set of the first region according to the three-dimensional coordinate set of the first region in the camera coordinate system;
determining a Z-direction data set of a target region from the Z-direction data set of the second region, wherein the Z-direction data set of the target region belongs to the Z-direction data set of the first region;
determining a target point corresponding to each piece of Z-direction data in the Z-direction data set of the target region;
and determining the target point set based on the target points corresponding to the pieces of Z-direction data.
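The claim does not spell out how the depth of field and the correction coefficient combine, so the symmetric window below is an assumption; the intersection with the sharp-region Z set mirrors the "belongs to" condition:

    import numpy as np

    def target_indices(z_first, z_center, depth_of_field, k=1.0):
        # Second region: a window of width k * depth_of_field centred
        # on the cluster centre; target points are the first-region
        # samples whose Z values fall inside that window.
        half = k * depth_of_field / 2.0
        z_first = np.asarray(z_first, dtype=float)
        return np.nonzero(np.abs(z_first - z_center) <= half)[0]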
5. The three-dimensional reconstruction method according to claim 3, wherein the blur detection method includes a defocus region estimation network method based on a convolutional neural network architecture and an image gradient method.
6. The three-dimensional reconstruction method according to claim 1, wherein the determining a three-dimensional coordinate set of the target object in the world coordinate system according to the image comprises:
determining a pixel coordinate set of the projector imaging plane and a camera pixel coordinate set according to the image;
determining a first calculation formula according to the intrinsic parameters of the camera and the extrinsic parameters of the camera;
determining a second calculation formula according to the intrinsic parameters of the projector and the extrinsic parameters of the projector;
and determining the three-dimensional coordinate set in the world coordinate system using the first calculation formula, the second calculation formula, the pixel coordinate set of the projector imaging plane, and the camera pixel coordinate set.
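If the first and second calculation formulas are taken to be the pinhole projection equations P = K[R | t] of the camera and the projector (an assumption; the patent may use a different parameterisation), the world coordinate follows from standard linear triangulation:

    import numpy as np

    def triangulate(p_cam, p_proj, P_cam, P_proj):
        # DLT: each pixel (u, v) under a 3x4 projection matrix P
        # yields two homogeneous equations in the world point X;
        # stack camera and projector rows, take the SVD null vector.
        (u1, v1), (u2, v2) = p_cam, p_proj
        A = np.vstack([
            u1 * P_cam[2] - P_cam[0],
            v1 * P_cam[2] - P_cam[1],
            u2 * P_proj[2] - P_proj[0],
            v2 * P_proj[2] - P_proj[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]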
7. The three-dimensional reconstruction method according to claim 1, wherein after the performing three-dimensional reconstruction on the target region according to the three-dimensional coordinate set of the target region in the world coordinate system, the method further comprises:
repeating the three-dimensional reconstruction steps for a preset region until the three-dimensional reconstruction of all regions of the target object is completed;
and stitching the three-dimensional reconstruction results of all the regions of the target object into the three-dimensional reconstruction result of the target object.
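Purely to fix ideas, the region loop amounts to mapping a region-reconstruction routine over the preset regions and concatenating the resulting point sets; reconstruct_region below is a hypothetical callable, and registration or de-duplication of overlapping regions is omitted:

    import numpy as np

    def reconstruct_all(regions, reconstruct_region):
        # reconstruct_region returns an (N_i, 3) point set for one
        # region, e.g. via the steps of claim 1 after refocusing.
        clouds = [reconstruct_region(r) for r in regions]
        return np.vstack(clouds)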
8. A three-dimensional reconstruction apparatus, comprising:
the acquisition module is used for acquiring, through the camera, an image of the fringe pattern projected onto the target object by the projector;
the first determining module is used for determining a three-dimensional coordinate set in the world coordinate system according to the image;
the conversion module is used for converting the three-dimensional coordinate set in the world coordinate system into a three-dimensional coordinate set in the camera coordinate system using the extrinsic parameters of the camera;
the second determining module is used for determining a cluster center point of the three-dimensional coordinate set in the camera coordinate system by using a clustering algorithm;
the third determining module is used for determining a target point set according to the cluster center point and preset parameters;
the fourth determining module is used for determining a three-dimensional coordinate set of the target region in the world coordinate system from the three-dimensional coordinate set in the world coordinate system, wherein the three-dimensional coordinate set of the target region is the set of three-dimensional coordinates in the world coordinate system corresponding to the target point set;
and the reconstruction module is used for performing three-dimensional reconstruction on the target region according to the three-dimensional coordinate set of the target region in the world coordinate system.
9. An electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, the at least one instruction, the at least one program, the set of codes, or the set of instructions being loaded and executed by the processor to implement a three-dimensional reconstruction method according to any one of claims 1 to 7.
10. A computer storage medium having at least one instruction or at least one program stored therein, the at least one instruction or the at least one program being loaded and executed by a processor to implement the three-dimensional reconstruction method according to any one of claims 1 to 7.
CN202110008816.9A 2021-01-05 2021-01-05 Three-dimensional reconstruction method, device and equipment for object and storage medium Active CN112767536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110008816.9A CN112767536B (en) 2021-01-05 2021-01-05 Three-dimensional reconstruction method, device and equipment for object and storage medium

Publications (2)

Publication Number Publication Date
CN112767536A (en) 2021-05-07
CN112767536B (en) 2024-07-26

Family

ID=75699493

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN102609904A (en) * 2012-01-11 2012-07-25 云南电力试验研究院(集团)有限公司电力研究院 Bivariate nonlocal average filtering de-noising method for X-ray image
CN105141885A (en) * 2014-05-26 2015-12-09 杭州海康威视数字技术股份有限公司 Method for video monitoring and device
US20160261851A1 (en) * 2015-03-05 2016-09-08 Shenzhen University Calbration method for telecentric imaging 3d shape measurement system
WO2017080451A1 (en) * 2015-11-11 2017-05-18 Zhejiang Dahua Technology Co., Ltd. Methods and systems for binocular stereo vision
CN108242064A (en) * 2016-12-27 2018-07-03 合肥美亚光电技术股份有限公司 Three-dimensional rebuilding method and system based on face battle array structured-light system
WO2019015154A1 (en) * 2017-07-17 2019-01-24 先临三维科技股份有限公司 Monocular three-dimensional scanning system based three-dimensional reconstruction method and apparatus
CN108256504A (en) * 2018-02-11 2018-07-06 苏州笛卡测试技术有限公司 A kind of Three-Dimensional Dynamic gesture identification method based on deep learning
CN108629834A (en) * 2018-05-09 2018-10-09 华南理工大学 A kind of three-dimensional hair method for reconstructing based on single picture
US20190347767A1 (en) * 2018-05-11 2019-11-14 Boe Technology Group Co., Ltd. Image processing method and device
CN109118545A (en) * 2018-07-26 2019-01-01 深圳市易尚展示股份有限公司 3-D imaging system scaling method and system based on rotary shaft and binocular camera
CN110378965A (en) * 2019-05-21 2019-10-25 北京百度网讯科技有限公司 Determine the method, apparatus, equipment and storage medium of coordinate system conversion parameter
CN111127642A (en) * 2019-12-31 2020-05-08 杭州电子科技大学 Human face three-dimensional reconstruction method
CN111508058A (en) * 2020-02-24 2020-08-07 当家移动绿色互联网技术集团有限公司 Method and device for three-dimensional reconstruction of image, storage medium and electronic equipment
CN111695237A (en) * 2020-05-12 2020-09-22 上海卫星工程研究所 Region decomposition method and system for satellite-to-region coverage detection simulation
CN111750806A (en) * 2020-07-20 2020-10-09 西安交通大学 Multi-view three-dimensional measurement system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Hruszkewycz, S. O. et al.: "Imaging Local Polarization in Ferroelectric Thin Films by Coherent X-Ray Bragg Projection Ptychography", Physical Review Letters *
Yang Jing; Wang Maosen; Dai Jinsong: "3D model reconstruction based on stereo vision", Ordnance Industry Automation, no. 03 *
Zhao Wensheng; Yin Sheng'ai: "Research on 3D surface reconstruction based on monocular video image sequences", Computer and Modernization, no. 02 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113421334A (en) * 2021-07-06 2021-09-21 山西大学 Multi-focus image three-dimensional reconstruction method based on deep learning
CN113421334B (en) * 2021-07-06 2022-05-20 山西大学 Multi-focus image three-dimensional reconstruction method based on deep learning
CN114322843A (en) * 2021-12-10 2022-04-12 江苏集萃碳纤维及复合材料应用技术研究院有限公司 Digital fringe projection-based three-dimensional measurement fringe principal value phase extraction method

Similar Documents

Publication Publication Date Title
CN107705333B (en) Space positioning method and device based on binocular camera
CN108776971B (en) Method and system for determining variable-split optical flow based on hierarchical nearest neighbor
US11512946B2 (en) Method and system for automatic focusing for high-resolution structured light 3D imaging
CN109544643A (en) A kind of camera review bearing calibration and device
CN114494388B (en) Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment
US20230237683A1 (en) Model generation method and apparatus based on multi-view panoramic image
Nousias et al. Large-scale, metric structure from motion for unordered light fields
CN112767536B (en) Three-dimensional reconstruction method, device and equipment for object and storage medium
CN112945141A (en) Structured light rapid imaging method and system based on micro-lens array
CN113643414A (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN110971791A (en) A method for adjusting the optical axis consistency of a zoom optical system of a camera and a display device
CN114792345B (en) Calibration method based on monocular structured light system
CN113160393B (en) High-precision three-dimensional reconstruction method and device based on large depth of field and related components thereof
CN112729160B (en) Projection calibration method, device and system based on telecentric imaging and storage medium
CN110470216B (en) Three-lens high-precision vision measurement method and device
WO2017187935A1 (en) Information processing apparatus, information processing method, and program
CN117218203A (en) Calibration method, device, equipment and storage medium of camera
JP4102386B2 (en) 3D information restoration device
CN113436247B (en) Image processing method and device, electronic equipment and storage medium
JP6216143B2 (en) Image processing apparatus, control method thereof, and program
CN114399599A (en) Three-dimensional imaging method, apparatus, electronic device, and computer-readable storage medium
Wang et al. A calibration method on 3D measurement based on structured-light with single camera
Labussière et al. Blur aware metric depth estimation with multi-focus plenoptic cameras
Zheng et al. Digital twin-trained deep convolutional neural networks for fringe analysis
JP2015137897A (en) Distance measuring device and distance measuring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant