CN110838164B - Monocular image three-dimensional reconstruction method, system and device based on object point depth - Google Patents
- Publication number: CN110838164B (application number CN201910963573.7A)
- Authority: CN (China)
- Prior art keywords: three-dimensional, reference plane, two-dimensional image, image, three-dimensional reconstruction
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the field of measurement, in particular to a monocular image three-dimensional reconstruction method, system, and device based on object point depth. The method comprises the following steps: shooting to obtain a two-dimensional image; segmenting the two-dimensional image to obtain a plurality of reference planes in the corresponding three-dimensional space; completing the three-dimensional reconstruction corresponding to the two-dimensional image according to the reference planes; and extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes to complete the three-dimensional reconstruction of all objects. The method, system, and device first establish reference planes by exploiting the ability of monocular vision to reconstruct planes in three dimensions, then map each object point in the image onto a simulation plane derived from a reference plane, obtaining the depth of each object point and completing the three-dimensional reconstruction of the whole image. The system is simple in structure, easy to implement, and usable in a wide range of scenes.
Description
This application is a divisional application of the parent application entitled "Monocular image three-dimensional reconstruction method, system and device based on a reference plane", application number 201811009447.X, filed on August 31, 2018.
Technical Field
The invention relates to the field of measurement, and in particular to a monocular image three-dimensional reconstruction method, system, and device based on object point depth.
Background
Image three-dimensional reconstruction is applied in many fields. Monocular vision is simple in structure and convenient to apply, but without relying on a known reference object it can only perform three-dimensional reconstruction of objects lying on a single designated plane in the image. Binocular stereo vision imitates the function of the human eyes and completes three-dimensional reconstruction through parallax; compared with monocular vision it can reconstruct all objects in an image, but it has the disadvantages of a complex structure, a calibration process that is difficult to carry out accurately, and large matching errors between corresponding points, and in scenes where object-surface feature points are sparse it is difficult to recover accurate shapes and complete the reconstruction. A structured-light camera must be paired with core components such as a laser projector, a diffractive optical element, and an infrared camera; the infrared camera captures the diffused infrared speckle pattern, from which the depth of each point is calculated. Three-dimensional laser scanners are costly, and binocular stereo cameras impose high requirements, so a better alternative solution is needed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a monocular image three-dimensional reconstruction method, system, and device based on object point depth.
In order to solve the above technical problem, the first technical solution adopted by the present invention is:
a monocular image three-dimensional reconstruction method based on object point depth comprises the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes, and completing the three-dimensional reconstruction of all the objects.
The second technical scheme adopted by the invention is as follows:
a system for three-dimensional reconstruction of a monocular image based on object point depth comprising one or more processors and a memory, said memory storing a program which when executed by the processors performs the steps of:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes, and completing the three-dimensional reconstruction of all the objects.
The third technical scheme adopted by the invention is as follows:
a monocular image three-dimensional reconstruction device based on object point depth comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image to obtain a plurality of reference planes in the corresponding three-dimensional space; complete the three-dimensional reconstruction corresponding to the two-dimensional image according to the reference planes; and extract the projection points of all objects in the two-dimensional image on their corresponding reference planes, completing the three-dimensional reconstruction of all the objects.
The beneficial effects of the invention are as follows: the monocular image three-dimensional reconstruction method, system, and device based on object point depth first establish reference planes, then map each object point in the image onto a simulation plane derived from a reference plane, obtaining the depth of each object point and completing the three-dimensional reconstruction of the whole image. The system is simple in structure, easy to implement, and usable in a wide range of scenes.
Drawings
FIG. 1 is a flow chart illustrating the steps of a monocular image three-dimensional reconstruction method based on a reference plane according to the present invention;
FIG. 2 is a schematic structural diagram of a monocular image three-dimensional reconstruction system based on a reference plane according to the present invention;
FIG. 3 is a schematic diagram of a projected point plane reconstruction of a monocular image three-dimensional reconstruction system based on a reference plane according to the present invention;
description of reference numerals:
1. a processor; 2. a memory.
Detailed Description
In order to explain the technical content, achieved objects, and effects of the present invention in detail, the following description is given with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, a monocular image three-dimensional reconstruction method based on a reference plane includes the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing three-dimensional reconstruction of all the objects.
As can be seen from the above description, the beneficial effect of the invention is that the monocular image three-dimensional reconstruction method based on a reference plane exploits the ability of monocular vision to reconstruct planes in three dimensions: it first creates reference planes, then maps each object point in the image onto a simulation plane derived from a reference plane, obtaining the depth of each object point and completing the three-dimensional reconstruction of the whole image.
Further, step S1 specifically includes:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
the step S2 specifically comprises the following steps:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
the step S3 specifically comprises the following steps:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
the step S4 specifically comprises the following steps:
and extracting the projection point of each object in the two-dimensional image on its corresponding reference plane to obtain the plane in which each object point lies, and substituting the translation vector and rotation vector from the camera optical axis to the corresponding reference plane into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the object-point plane, thereby completing the three-dimensional reconstruction of all objects in the two-dimensional image.
Further, step S2 further includes:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
Referring to fig. 2, the present invention further provides a monocular image three-dimensional reconstruction system based on a reference plane, including one or more processors 1 and a memory 2, where the memory 2 stores a program, and when executed by the processor 1, the program implements the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes, and completing the three-dimensional reconstruction of all the objects.
From the above description, the beneficial effect of the invention is that the monocular image three-dimensional reconstruction system based on a reference plane exploits the ability of monocular vision to reconstruct planes in three dimensions: it first creates reference planes, then maps each object point in the image onto a simulation plane derived from a reference plane, obtaining the depth of each object point and completing the three-dimensional reconstruction of the whole image.
Further, when the program is executed by the processor, the following steps are further implemented:
the step S1 specifically comprises the following steps:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
the step S2 specifically comprises the following steps:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
the step S3 specifically comprises the following steps:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
the step S4 specifically comprises the following steps:
and extracting the projection point of each object in the two-dimensional image on its corresponding reference plane to obtain the plane in which each object point lies, and substituting the translation vector and rotation vector from the camera optical axis to the corresponding reference plane into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the object-point plane, thereby completing the three-dimensional reconstruction of all objects in the two-dimensional image.
Further, when the program is executed by the processor, the following steps are further implemented:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
The invention also provides a monocular image three-dimensional reconstruction device based on the reference plane, which comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot to obtain a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image to obtain a plurality of reference planes in the corresponding three-dimensional space; complete the three-dimensional reconstruction corresponding to the two-dimensional image according to the reference planes; and extract the projection points of all objects in the two-dimensional image on their corresponding reference planes, completing the three-dimensional reconstruction of all the objects.
From the above description, the beneficial effect of the invention is that the monocular image three-dimensional reconstruction device based on a reference plane exploits the ability of monocular vision to reconstruct planes in three dimensions: it first creates reference planes, then maps each object point in the image onto a simulation plane derived from a reference plane, obtaining the depth of each object point and completing the three-dimensional reconstruction of the whole image. The device is simple in structure, easy to implement, and usable in a wide range of scenes.
Further, the camera is specifically configured so that, after being rotated to the scene area requiring three-dimensional reconstruction, it captures an image of the scene area to obtain a two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model, and segment a plurality of reference planes in a corresponding three-dimensional space;
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
and extracting the projection point of each object in the two-dimensional image on its corresponding reference plane to obtain the plane in which each object point lies, and substituting the translation vector and rotation vector from the camera optical axis to the corresponding reference plane into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the object-point plane, thereby completing the three-dimensional reconstruction of all objects in the two-dimensional image.
Further, the three-dimensional reconstruction unit is specifically configured to mark pixels in the two-dimensional image that belong to a reference plane region as a reference plane type, and mark pixels in the two-dimensional image that do not belong to the reference plane region as a non-reference plane type.
Referring to fig. 1, a first embodiment of the present invention is:
the invention provides a monocular image three-dimensional reconstruction method based on a reference plane, which comprises the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes, and completing the three-dimensional reconstruction of all the objects.
It should be noted that the three-dimensional reconstruction of an image in the present invention means obtaining, from the image, the coordinates of objects in a unified three-dimensional reference coordinate system. A camera is a mapping from three-dimensional world space to the two-dimensional image, and its mapping model can be expressed as:

$$ s\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K\,[R\;\;t]\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix} $$

The mapping model represents the relation between the homogeneous coordinate $(X_w, Y_w, Z_w, 1)$ of a point in the three-dimensional reference coordinate system and the homogeneous coordinate $(u, v, 1)$ of the point mapped onto the two-dimensional image coordinate system; it is determined by the intrinsic parameter $K$ of the camera and the extrinsic parameters of the camera (rotation $R$ and translation $t$). The intrinsic matrix of the camera is

$$ K = \begin{bmatrix} f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1 \end{bmatrix} $$

where $(u_0, v_0)$ is the projection of the camera optical center on the CCD imaging plane, $f$ is the focal length of the camera, and $d_x$ and $d_y$ are the physical dimensions of each CCD pixel in the horizontal and vertical directions, respectively.
The step S1 is specifically:
in this embodiment, the camera is rotated to the scene area requiring three-dimensional reconstruction, the camera captures an image of the area, and a two-dimensional image coordinate system is established; the two-dimensional image coordinate system takes the upper-left corner of the two-dimensional image as the origin, with the u-axis pointing right along the top edge and the v-axis pointing down along the left edge; the rotation angles of the camera optical axis are obtained from the pan-tilt head, namely the vertical rotation angle $\alpha_c$ and the horizontal rotation angle $\beta_c$ of the head;
The step S2 is specifically:
in this embodiment, a large number of pictures of the same type of application scene are collected in advance, and the SLIC algorithm is used to perform superpixel processing on the images to obtain the distribution of image colors and textures; superpixels with the same characteristics are grouped, where "same characteristics" means region pixels with the same kind of geometric meaning in the image. For example, for a construction-site scene the image is generally divided into two geometric types: the reference plane (the construction surface) and the non-reference plane (objects extending from the reference plane, such as steel bars, scaffolding, and cement columns). Superpixel grouping is performed on the collected scene picture set, each group is labeled (reference plane or non-reference plane), and a geometric classification model of the scene is then built through deep learning;
after the image is captured in step S1, the classification model is used to analyze the image and segment the geometric region of each reference plane; pixels in the image belonging to a reference-plane region are marked as the reference-plane type, and all remaining pixels are marked as the non-reference-plane type;
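The pixel-marking part of step S2 can be sketched as follows; the class map and the label value 7 are hypothetical stand-ins for the output of the (unspecified) deep-learning classifier:

```python
import numpy as np

REFERENCE, NON_REFERENCE = 1, 0

def mark_reference_plane(class_map, plane_label):
    """Binary mask: 1 where the classifier assigned the reference-plane label."""
    mask = np.where(class_map == plane_label, REFERENCE, NON_REFERENCE)
    return mask.astype(np.uint8)

# toy 3x3 class map: label 7 = reference plane, other labels = objects
class_map = np.array([[7, 7, 2],
                      [7, 7, 2],
                      [3, 7, 7]])
mask = mark_reference_plane(class_map, plane_label=7)
# mask marks six of the nine pixels as reference plane
```

The later steps (S3's pixel reconstruction and S4's neighborhood search) only ever consult this binary mask, so the classifier's internals do not matter to them.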
the step S3 is specifically:
in this embodiment, for convenience of description, the optical axis with the pan-tilt head at its initial zero azimuth (horizontal angle and vertical angle both 0 degrees) is taken as the $Z_c$ axis to establish the camera coordinate system $X_cY_cZ_c$; on the reference plane, taking the intersection of the optical axis with the plane as the origin and the coordinate axis directions of the camera coordinate system $X_cY_cZ_c$ as the reference directions, the three-dimensional reference coordinate system $X_wY_wZ_w$ is established, where $Y_w$ is perpendicular to the reference plane;

the pan-tilt head is controlled to point the camera optical axis at any three position points of the reference plane; each position point is confirmed to lie on the reference plane by comparing the $n \times n$ pixel area at the picture center with the reference-plane pixel set obtained in step S2 using an image matching algorithm, and the coordinates of the three position points in the coordinate system $X_cY_cZ_c$ are then obtained from the rotation angles of the head and the distances given by laser ranging;

in this embodiment, the laser beam is positioned by the head to the first position point $P_1$ of the reference plane, and the coordinate value of $P_1$ in the coordinate system $X_cY_cZ_c$ is calculated from the distance from $P_1$ to the laser measuring device, the vertical rotation angle $\alpha_1$ of the head, and the horizontal rotation angle $\beta_1$;

similarly, the coordinate values of the second point $P_2$ and the third point $P_3$ of the reference plane can be obtained; they are not described in detail here;

from the three points, the normal vector of the reference plane is obtained (for example, as the cross product of $\vec{P_1P_2}$ and $\vec{P_1P_3}$), and the projection of the vector from the camera origin to $P_1$ onto this normal can then be obtained;

further, the vertical deviation angle and the horizontal deviation angle of the optical axis relative to the zero azimuth at the moment the optical axis is closest to the reference plane can be found;

from the vertical rotation angle $\alpha_c$ and the horizontal rotation angle $\beta_c$ of the head when the image was captured in step S1, the unit vector of the optical axis at that moment is obtained;

further, the translation vector $t_c$ and the rotation vector $R_c$ from the optical axis to the reference plane for the image captured in step S1 can be obtained; substituting (rotation $R_c$ and translation $t_c$) into the mapping model between the three-dimensional reference coordinate system of the reference plane and the two-dimensional image coordinate system realizes the three-dimensional reconstruction of the reference plane, giving the coordinates of each pixel point of the reference plane in the image under the three-dimensional reference coordinate system;
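The geometric core of step S3, recovering the reference plane from three laser-ranged points, can be sketched as follows; the point coordinates are illustrative, already expressed in camera coordinates, and the head-angle-to-coordinate conversion is omitted:

```python
import numpy as np

def plane_from_points(p1, p2, p3):
    """Unit normal of the plane through p1, p2, p3 and the distance
    from the camera origin to that plane."""
    n = np.cross(p2 - p1, p3 - p1)   # normal via cross product of two edges
    n = n / np.linalg.norm(n)        # unit normal
    d = abs(np.dot(n, p1))           # projection of origin->p1 onto the normal
    return n, d

p1 = np.array([ 0.0, -2.0, 4.0])
p2 = np.array([ 1.0, -2.0, 5.0])
p3 = np.array([-1.0, -2.0, 6.0])
n, d = plane_from_points(p1, p2, p3)
# all three points share y = -2, so the plane is y = -2:
# the unit normal is (0, +-1, 0) and the origin-to-plane distance is 2
```

The distance d here is exactly the "projection onto the normal" quantity the text uses to find the closest approach of the optical axis to the plane.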
the step S4 is specifically:
in this embodiment, the method for extracting the projection point of the object is as follows:
a SharpMask image segmentation algorithm is applied to the image to obtain the edge-segmentation texture of each object in the image; for the texture pixel points that fall within the non-reference-plane pixel set obtained in step S2, their projection points on the reference plane are calculated;
a projection point is the point at which a point on the object projects onto the reference plane; the line connecting an object point and its projection point is perpendicular to the reference plane, that is, parallel to the normal vector of the reference plane;
the junction of an object and the reference plane appears in the image as the region where the object's edge-segmentation texture pixels are adjacent to reference-plane pixels; the eight-connected neighborhood of each edge-segmentation texture pixel of the object is searched for pixels belonging to the reference-plane pixel set obtained in step S2, and any reference-plane pixels found are added to the projection-point candidate set of that texture pixel; the texture pixel is then paired with each reference-plane pixel in its candidate set to obtain a set of straight lines in the two-dimensional image coordinate system;
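The neighborhood search just described can be sketched as follows; the mask layout is illustrative and stands in for the pixel marking produced in step S2:

```python
import numpy as np

# offsets of the eight-connected neighborhood around a pixel
NEIGHBOURS_8 = [(-1, -1), (-1, 0), (-1, 1),
                ( 0, -1),          ( 0, 1),
                ( 1, -1), ( 1, 0), ( 1, 1)]

def plane_neighbours(mask, v, u):
    """Reference-plane pixels (mask == 1) eight-connected to pixel (v, u)."""
    h, w = mask.shape
    hits = []
    for dv, du in NEIGHBOURS_8:
        nv, nu = v + dv, u + du
        if 0 <= nv < h and 0 <= nu < w and mask[nv, nu] == 1:
            hits.append((nv, nu))
    return hits

mask = np.array([[0, 0, 0],
                 [1, 0, 0],
                 [1, 1, 1]])
# candidate projection points for the edge-texture pixel at (1, 1)
hits = plane_neighbours(mask, 1, 1)
```

Each pixel returned in `hits` yields one candidate line for the parallelism test that follows.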
in the three-dimensional reference coordinate system $X_wY_wZ_w$, a point $m(0, 0, 0)$ on the reference plane and a point $n(0, 1, 0)$ off the reference plane are taken, so that the vector $\vec{mn}$ is a normal vector of the reference plane; substituting into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the plane that passes through the points $m$ and $n$ and is perpendicular to the reference plane realizes the three-dimensional reconstruction of that vertical plane and yields the straight line of the normal vector $\vec{mn}$ in the two-dimensional image coordinate system;
according to the constraint that the line connecting an object point and its projection point is parallel to the normal vector of the reference plane, the straight line in the plane-straight-line set with the highest degree of parallelism to the normal-vector line $\vec{mn}$ is found (the included angle of the two straight lines can be used as the criterion), and the reference-plane pixel corresponding to that line is taken as the projection point of the object point; from the three-dimensional reconstruction result of the reference plane in step S3, the coordinate $P_p = (X_p, Y_p, Z_p)$ of the projection point under the three-dimensional reference coordinate system can be obtained;
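The "highest parallel correlation" selection can be sketched as follows; the pixel coordinates and the image-space direction of the normal line are illustrative values:

```python
import numpy as np

def most_parallel(obj_px, candidates, normal_dir):
    """Return the candidate plane pixel whose line to obj_px is most nearly
    parallel to normal_dir (all quantities in 2-D image coordinates)."""
    n = np.asarray(normal_dir, float)
    n = n / np.linalg.norm(n)
    best, best_cos = None, -1.0
    for c in candidates:
        d = np.asarray(c, float) - np.asarray(obj_px, float)
        cos = abs(np.dot(d / np.linalg.norm(d), n))  # |cos| of included angle
        if cos > best_cos:
            best, best_cos = c, cos
    return best

# the normal-vector line maps to the vertical image direction (row axis),
# so the candidate straight below the object pixel wins
best = most_parallel((2, 3), [(5, 3), (4, 6), (3, 1)], normal_dir=(1, 0))
```

Using |cos| of the included angle makes the test symmetric, so it does not matter on which side of the object pixel the plane pixel lies.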
from the unit vector of the optical axis obtained in step S3 and the translation vector $t_c$, the optical-axis vector of the picture captured in step S1 can be obtained;

projecting this vector along the normal of the reference plane yields its projection vector on the reference plane;

taking the projection vector as a normal vector and combining it with the projection point $P_p$, the plane that passes through the projection point and is perpendicular to the reference plane is obtained; this vertical plane is the plane in which the object point lies;

substituting into the mapping model between the three-dimensional reference coordinate system of the vertical plane and the two-dimensional image coordinate system gives the coordinates of the object point under the three-dimensional reference coordinate system;
repeating the steps to obtain the coordinates of all object points in the image under the three-dimensional reference coordinate system, and completing the three-dimensional reconstruction of all objects in the image;
as shown in fig. 3, in the two-dimensional image a steel bar is obliquely inserted into the reference plane ρ (the building-site floor); the projection points a and b of the steel-bar pixel points A and B on the reference plane ρ are obtained, and the simulation planes π1 and π2 that pass through the projection points a and b and are perpendicular to the reference plane ρ are constructed; from the mapping models between the planes π1, π2 and the two-dimensional image, the three-dimensional coordinates of the image pixel points A and B are obtained; by analogy, the three-dimensional coordinates of all pixel points of the steel bar can be obtained, completing the three-dimensional reconstruction of the whole steel bar.
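Placing an object pixel on its vertical simulation plane amounts to a ray-plane intersection, which can be sketched as follows; K and the plane parameters are illustrative, and everything is expressed in camera coordinates with the camera at the origin for simplicity:

```python
import numpy as np

def backproject(K, uv, plane_n, plane_p):
    """Intersect the camera ray through pixel uv with the plane
    {x : plane_n . (x - plane_p) = 0}; camera at the origin."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # ray direction
    s = np.dot(plane_n, plane_p) / np.dot(plane_n, ray)     # ray scale factor
    return s * ray                                          # 3-D intersection

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# illustrative vertical plane z = 4 in camera coordinates
P = backproject(K, (320.0, 240.0),
                plane_n=np.array([0.0, 0.0, 1.0]),
                plane_p=np.array([0.0, 0.0, 4.0]))
# the principal-point ray is the optical axis, so it meets z = 4 at (0, 0, 4)
```

Every pixel of the steel bar gets its own intersection with the plane through its projection point, which is how the whole bar is reconstructed point by point.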
Referring to fig. 2, the second embodiment of the present invention is:
the invention provides a monocular image three-dimensional reconstruction system based on a reference plane, which comprises one or more processors and a memory, wherein the memory stores a program, and the program realizes the following steps when being executed by the processor:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
and S4, extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes, and completing the three-dimensional reconstruction of all the objects.
It should be noted that the three-dimensional reconstruction of an image in the present invention means obtaining, from the image, the coordinates of objects in a unified three-dimensional reference coordinate system. A camera is a mapping from three-dimensional world space to the two-dimensional image, and its mapping model can be expressed as:

$$ s\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K\,[R\;\;t]\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix} $$

The mapping model represents the relation between the homogeneous coordinate $(X_w, Y_w, Z_w, 1)$ of a point in the three-dimensional reference coordinate system and the homogeneous coordinate $(u, v, 1)$ of the point mapped onto the two-dimensional image coordinate system; it is determined by the intrinsic parameter $K$ of the camera and the extrinsic parameters of the camera (rotation $R$ and translation $t$). The intrinsic matrix of the camera is

$$ K = \begin{bmatrix} f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1 \end{bmatrix} $$

where $(u_0, v_0)$ is the projection of the camera optical center on the CCD imaging plane, $f$ is the focal length of the camera, and $d_x$ and $d_y$ are the physical dimensions of each CCD pixel in the horizontal and vertical directions, respectively.
The step S1 is specifically:
in this embodiment, the camera is rotated toward the scene area requiring three-dimensional reconstruction, the camera captures an image of the area, and a two-dimensional image coordinate system is established. The two-dimensional image coordinate system takes the upper-left corner of the two-dimensional image as the origin, with the u-axis pointing rightward along the upper edge and the v-axis pointing downward along the left edge. The rotation angle values of the camera's optical axis are obtained from the pan/tilt head, comprising the vertical rotation angle α_c and the horizontal rotation angle β_c of the head;
The step S2 is specifically:
in this embodiment, a large number of pictures of the same type of application scene are collected in advance, and the SLIC algorithm is used to perform superpixel processing on the images, obtaining the distribution of image colors and textures. Superpixels with the same characteristics are grouped, where "same characteristics" refers to pixel regions having the same kind of geometric significance in an image; for example, a construction-site scene is generally divided into two geometric types, the reference plane (the construction surface) and non-reference planes (objects extending from the reference plane, such as steel bars, scaffolds, and cement columns). The collected scene picture set is superpixel-grouped, the groupings are labeled (reference plane or non-reference plane), and a geometric classification model of the scene is then established through deep learning;
after the image is captured in step S1, the classification model is used to analyze the image and segment the geometric area of the reference plane: pixels in the image belonging to the reference plane area are labeled as the reference plane type, and all remaining pixels are labeled as the non-reference plane type;
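The labeling step can be sketched as follows; this is an editor's illustration assuming a per-pixel class map already produced by the separately trained classifier, with illustrative class names:

```python
def label_pixels(class_map):
    """Split pixel coordinates (u, v) into reference-plane and
    non-reference-plane sets, given a 2D grid of per-pixel class labels
    emitted by the geometric classification model."""
    ref, non_ref = set(), set()
    for v, row in enumerate(class_map):
        for u, cls in enumerate(row):
            (ref if cls == "plane" else non_ref).add((u, v))
    return ref, non_ref

# Toy 2x3 class map: the bottom row is the construction surface,
# two pixels of rebar extend above it
cm = [["rebar", "rebar", "plane"],
      ["plane", "plane", "plane"]]
ref, non_ref = label_pixels(cm)
print(len(ref), len(non_ref))  # → 4 2
```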
the step S3 is specifically:
in this embodiment, for convenience of description, the optical axis when the pan/tilt head is at its initial zero azimuth (both the horizontal angle and the vertical angle are 0 degrees) is taken as the Z_c axis to establish the camera coordinate system X_cY_cZ_c. On the reference plane, taking the foot of the optical axis as the origin and the coordinate-axis directions of the camera coordinate system X_cY_cZ_c as reference directions, a three-dimensional reference coordinate system X_wY_wZ_w is established, where Y_w is perpendicular to the reference plane;
the pan/tilt head is controlled to position the optical axis of the camera at any three position points of the reference plane in turn; whether each position point lies on the reference plane is verified by comparing, with an image-matching algorithm, the n×n pixel area at the picture center against the reference-plane-type pixel set obtained in step S2; the coordinates of the three position points in the camera coordinate system X_cY_cZ_c are then obtained from the rotation angle values of the pan/tilt head and the distances measured by laser ranging;
in this embodiment, the laser beam is positioned by the pan/tilt head at a first position point P_1 of the reference plane; from the distance measured by the laser range finder to point P_1, the vertical rotation angle α_1 and the horizontal rotation angle β_1 of the head, the coordinate value P_1 = (X_1, Y_1, Z_1) of point P_1 in the camera coordinate system X_cY_cZ_c is calculated. Similarly, the coordinate values of a second point P_2 and a third point P_3 of the reference plane are obtained, and are not described in detail here.
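One way to turn a measured distance and the pan/tilt angles into camera-frame coordinates is a spherical-to-Cartesian conversion; the sketch below is an editor's illustration, and the axis convention (Z_c along the zero-azimuth optical axis, Y_c up, X_c right) is an assumption, as the patent does not fix the signs:

```python
import math

def point_from_pan_tilt(dist, alpha, beta):
    """Camera-frame coordinates of a laser-ranged point, given the measured
    distance and the head's vertical (alpha) / horizontal (beta) rotation
    angles in radians. Assumed convention: Z_c forward, X_c right, Y_c up."""
    x = dist * math.cos(alpha) * math.sin(beta)
    y = dist * math.sin(alpha)
    z = dist * math.cos(alpha) * math.cos(beta)
    return x, y, z

# At zero azimuth the ranged point lies straight ahead on the optical axis
print(point_from_pan_tilt(5.0, 0.0, 0.0))  # → (0.0, 0.0, 5.0)
```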
From the three position points, the normal vector n of the reference plane is obtained (for example, as the cross product of the in-plane vectors from P_1 to P_2 and from P_1 to P_3), and the projection vector of the position vector of P_1 on the normal of the reference plane follows.
Further, the vertical deviation angle and the horizontal deviation angle of the optical axis from the zero azimuth at the position where the optical axis is closest to the reference plane can be obtained.
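The plane normal and the projection onto it can be sketched as below; an editor's illustration assuming P_1, P_2, P_3 are camera-frame points as computed above:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def plane_normal(p1, p2, p3):
    """Normal vector of the plane through three non-collinear points."""
    return cross(sub(p2, p1), sub(p3, p1))

def project_onto(v, n):
    """Projection vector of v on the direction of n."""
    k = dot(v, n) / dot(n, n)
    return tuple(k * c for c in n)

# Three points on the plane y = 2: the normal is along the y-axis, and
# projecting P1 onto it recovers the camera-to-plane offset (length 2)
p1, p2, p3 = (0.0, 2.0, 4.0), (1.0, 2.0, 4.0), (0.0, 2.0, 5.0)
n = plane_normal(p1, p2, p3)
proj = project_onto(p1, n)
print(proj[1])  # → 2.0
```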
From the vertical rotation angle α_c and the horizontal rotation angle β_c of the pan/tilt head when the image was captured in step S1, the unit vector of the optical axis at that moment is obtained.
Further, the included angle between the projection vector and this unit vector is obtained.
Further, the translation vector t_c and the rotation R_c from the optical axis to the reference plane for the image captured in step S1 can be obtained. Substituting the rotation R_c and the translation t_c into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the reference plane realizes the three-dimensional reconstruction of the reference plane, that is, the coordinates in the three-dimensional reference coordinate system of each pixel point of the reference plane in the image are obtained;
the step S4 is specifically:
in this embodiment, the method for extracting the projection point of the object is as follows:
the SharpMask image segmentation algorithm is applied to the image to obtain the edge-segmentation textures of the objects in the image, and for the texture pixel points falling within the non-reference-plane-type pixel set obtained in step S2, their projection points on the reference plane are calculated;
the projection point is the point at which a point on the object is projected onto the reference plane; the line connecting the object point and its projection point is perpendicular to the reference plane, that is, parallel to the normal vector of the reference plane;
the junction of the object and the reference plane appears in the image as the adjacent region between the object's edge-segmentation texture pixel points and reference-plane pixel points. The eight-connected neighborhood of each edge-segmentation texture pixel point of the object is searched for pixels of the reference-plane-type pixel set obtained in step S2, and any reference-plane pixels found are added to the projection-point candidate set of that texture pixel point; each texture pixel point is then combined with each reference-plane pixel point in its candidate set to obtain a set of plane straight lines for that texture pixel point in the two-dimensional image coordinate system;
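The eight-connected neighborhood search for projection-point candidates can be sketched as follows; this is an editor's illustration, with pixel sets as produced in step S2 and coordinates given as (u, v) tuples:

```python
NEIGHBORS_8 = [(-1, -1), (0, -1), (1, -1),
               (-1, 0),           (1, 0),
               (-1, 1),  (0, 1),  (1, 1)]

def projection_candidates(edge_pixels, plane_pixels):
    """For each object edge-texture pixel, collect the adjacent
    reference-plane pixels (its projection-point candidate set)."""
    out = {}
    for (u, v) in edge_pixels:
        cands = {(u + du, v + dv) for du, dv in NEIGHBORS_8
                 if (u + du, v + dv) in plane_pixels}
        if cands:
            out[(u, v)] = cands
    return out

# A vertical bar of edge pixels standing on a one-row "floor" of plane pixels
edges = {(2, 1), (2, 2), (2, 3)}
floor = {(1, 4), (2, 4), (3, 4)}
print(projection_candidates(edges, floor))  # only (2, 3) touches the floor
```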
in the three-dimensional reference coordinate system X_wY_wZ_w, a point m(0, 0, 0) on the reference plane and a point n(0, 1, 0) outside the reference plane are taken, so that the vector from m to n is the normal vector of the reference plane; substituting into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the plane passing through the points m and n and perpendicular to the reference plane realizes the three-dimensional reconstruction of that vertical plane, and the straight line of the normal vector in the two-dimensional image coordinate system is obtained;
according to the constraint that the line connecting an object point and its projection point is parallel to the normal vector of the reference plane, the straight line in the plane-straight-line set with the highest parallel correlation to the normal-vector line is found, taking the included angle between the two straight lines as the measure of parallel correlation; the reference-plane pixel point corresponding to the straight line with the highest parallel correlation is taken as the projection point of the object point. From the three-dimensional reconstruction result of the reference plane in step S3, the coordinate P_p = (X_p, Y_p, Z_p) of the projection point in the three-dimensional reference coordinate system can be obtained;
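Choosing the candidate whose connecting line is most nearly parallel to the image of the normal can be sketched by comparing line angles (a smaller included angle means higher parallel correlation); this is an editor's illustration with hypothetical pixel coordinates:

```python
import math

def included_angle(p, q, r, s):
    """Acute angle (radians) between line pq and line rs in the image plane."""
    a = (q[0] - p[0], q[1] - p[1])
    b = (s[0] - r[0], s[1] - r[1])
    cosang = abs(a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    return math.acos(min(1.0, cosang))

def best_projection(obj_px, candidates, normal_p, normal_q):
    """Candidate reference-plane pixel whose line to obj_px is most parallel
    to the image line of the plane normal (normal_p -> normal_q)."""
    return min(candidates,
               key=lambda c: included_angle(obj_px, c, normal_p, normal_q))

# The normal's image line is vertical, so the candidate straight below wins
print(best_projection((2, 3), [(1, 4), (2, 4), (3, 4)], (0, 0), (0, 1)))  # → (2, 4)
```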
From the unit vector of the optical axis in step S3 and the translation vector t_c, the optical-axis vector of the picture captured in step S1 can be obtained.
From the projection vector on the normal of the reference plane, the projection vector of the optical-axis vector on the reference plane can be obtained.
Taking this projection vector as a normal vector and combining it with the projection point P_p, the plane passing through the projection point and perpendicular to the reference plane is obtained; this vertical plane is the plane in which the object point lies.
Substituting into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of this vertical plane yields the coordinates of the object point in the three-dimensional reference coordinate system;
repeating the steps to obtain the coordinates of all object points in the image under the three-dimensional reference coordinate system, and finishing the three-dimensional reconstruction of all objects in the image.
The third embodiment of the invention is as follows:
the invention also provides a monocular image three-dimensional reconstruction device based on the reference plane, which comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot to obtain a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image resulting in a plurality of reference planes in a corresponding three-dimensional space; finishing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes; and extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and finishing the three-dimensional reconstruction of all the objects.
The camera is specifically configured to rotate the camera to a scene area needing three-dimensional reconstruction, and then image shooting is carried out on the scene area through the camera to obtain a two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model, and segment a plurality of reference planes in a corresponding three-dimensional space;
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
and extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, and substituting translation vectors and rotation vectors from optical axes of cameras corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to complete three-dimensional reconstruction of all objects in the two-dimensional image.
The three-dimensional reconstruction unit is specifically configured to label pixels in the two-dimensional image that belong to a reference plane area as a reference plane type, and pixels in the two-dimensional image that do not belong to the reference plane area as a non-reference plane type.
In a specific embodiment, the monocular image three-dimensional reconstruction device based on the reference plane comprises a measuring end. The measuring end comprises a laser, a camera, an angle adjuster, and a processor; the laser is mounted on the camera; the laser, the camera, and the angle adjuster are each connected with the processor, and the laser and the camera are each connected with the angle adjuster. The device further comprises a server and at least one terminal; the measuring end is connected with each terminal through the server, and the server is connected with the measuring end and the terminals through a network. The server provides the communication interface between the measuring end and the terminals, receiving electrical signals from and transmitting electrical signals to the measuring end or the terminals. The terminal displays visual output to the user, including the two-dimensional image, textual information of the three-dimensional reconstruction results, graphical information, and any combination thereof. The terminal also receives control input from the user, sends control signals to the server, triggers two-dimensional image capture, and obtains the three-dimensional reconstruction result of the objects in the image.
In summary, according to the monocular image three-dimensional reconstruction method, system, and apparatus based on a reference plane provided by the present invention, a reference plane is first established by using the three-dimensional reconstruction capability of a monocular camera with respect to a plane; each object point in the image is then mapped, based on the reference plane, to its simulated plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the entire image. The system has a simple structure, is easy to implement, and has good scene usability.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.
Claims (6)
1. A monocular image three-dimensional reconstruction method based on object point depth is characterized by comprising the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
s4, extracting projection points of all objects in the two-dimensional image on a corresponding reference plane, and finishing three-dimensional reconstruction of all the objects;
the step S1 specifically comprises the following steps:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
the step S2 specifically comprises the following steps:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
the step S3 specifically comprises the following steps:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
the step S4 specifically comprises the following steps:
extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the projection points to obtain a plane where the object points are located, substituting translation vectors and rotation vectors from an optical axis of a camera corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to obtain the depth of the object points, and finishing three-dimensional reconstruction of all objects in the two-dimensional image;
the extracting the projection points of all the objects in the two-dimensional image on the corresponding reference plane comprises:
and applying a SharpMask image segmentation method to the two-dimensional image to obtain edge segmentation textures of the object in the two-dimensional image, and calculating projection points of texture pixel points which do not belong to the reference plane area corresponding to the object on the corresponding reference plane.
2. The method for three-dimensional reconstruction of monocular image based on object point depth according to claim 1, wherein step S2 further comprises:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
3. A system for three-dimensional reconstruction of a monocular image based on object point depth, comprising one or more processors and a memory, said memory storing a program which when executed by the processors performs the steps of:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes;
s4, extracting projection points of all objects in the two-dimensional image on a corresponding reference plane, and completing three-dimensional reconstruction of all the objects;
the step S1 specifically comprises the following steps:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
the step S2 specifically comprises the following steps:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
the step S3 specifically comprises the following steps:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
the step S4 specifically comprises the following steps:
extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, substituting translation vectors and rotation vectors from an optical axis of a camera corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to obtain the depth of the object points, and finishing three-dimensional reconstruction of all objects in the two-dimensional image;
the extracting the projection points of all the objects in the two-dimensional image on the corresponding reference plane comprises:
and applying a SharpMask image segmentation method to the two-dimensional image to obtain edge segmentation textures of the object in the two-dimensional image, and calculating projection points of texture pixel points which do not belong to the reference plane area corresponding to the object on the corresponding reference plane.
4. The system of claim 3, wherein the program when executed by the processor further implements the steps of:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
5. A monocular image three-dimensional reconstruction device based on object point depth is characterized by comprising a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image resulting in a plurality of reference planes in a corresponding three-dimensional space; finishing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes; extracting projection points of all objects in the two-dimensional image on a corresponding reference plane to complete three-dimensional reconstruction of all the objects;
the camera is specifically configured to rotate the camera to a scene area needing three-dimensional reconstruction, and then image shooting is carried out on the scene area through the camera to obtain a two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model, and segment a plurality of reference planes in a corresponding three-dimensional space;
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the projection points to obtain a plane where the object points are located, substituting translation vectors and rotation vectors from an optical axis of a camera corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to obtain the depth of the object points, and finishing three-dimensional reconstruction of all objects in the two-dimensional image;
the extracting the projection points of all the objects in the two-dimensional image on the corresponding reference plane comprises:
and applying a SharpMask image segmentation method to the two-dimensional image to obtain edge segmentation textures of the object in the two-dimensional image, and calculating projection points of texture pixel points which do not belong to the reference plane area corresponding to the object on the corresponding reference plane.
6. The apparatus according to claim 5, wherein the three-dimensional reconstruction unit is specifically configured to label pixels in the two-dimensional image that belong to a reference plane region as a reference plane type, and pixels in the two-dimensional image that do not belong to a reference plane region as a non-reference plane type.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910963573.7A CN110838164B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional reconstruction method, system and device based on object point depth |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811009447.XA CN109147027B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional rebuilding method, system and device based on reference planes |
CN201910963573.7A CN110838164B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional reconstruction method, system and device based on object point depth |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811009447.XA Division CN109147027B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional rebuilding method, system and device based on reference planes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110838164A CN110838164A (en) | 2020-02-25 |
CN110838164B true CN110838164B (en) | 2023-03-24 |
Family
ID=64825870
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910963573.7A Active CN110838164B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional reconstruction method, system and device based on object point depth |
CN201910964298.0A Active CN110827392B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional reconstruction method, system and device |
CN201811009447.XA Active CN109147027B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional rebuilding method, system and device based on reference planes |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910964298.0A Active CN110827392B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional reconstruction method, system and device |
CN201811009447.XA Active CN109147027B (en) | 2018-08-31 | 2018-08-31 | Monocular image three-dimensional rebuilding method, system and device based on reference planes |
Country Status (1)
Country | Link |
---|---|
CN (3) | CN110838164B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109741404B (en) * | 2019-01-10 | 2020-11-17 | 奥本未来(北京)科技有限责任公司 | Light field acquisition method based on mobile equipment |
CN111220128B (en) * | 2019-01-31 | 2022-10-25 | 金钱猫科技股份有限公司 | Monocular focusing measuring method and terminal |
CN112837404B (en) * | 2019-11-25 | 2024-01-19 | 北京初速度科技有限公司 | Method and device for constructing three-dimensional information of planar object |
CN111415420B (en) * | 2020-03-25 | 2024-01-23 | 北京迈格威科技有限公司 | Spatial information determining method and device and electronic equipment |
CN112198527B (en) * | 2020-09-30 | 2022-12-27 | 上海炬佑智能科技有限公司 | Reference plane adjustment and obstacle detection method, depth camera and navigation equipment |
CN112198529B (en) * | 2020-09-30 | 2022-12-27 | 上海炬佑智能科技有限公司 | Reference plane adjustment and obstacle detection method, depth camera and navigation equipment |
CN112884898B (en) * | 2021-03-17 | 2022-06-07 | 杭州思看科技有限公司 | Reference device for measuring texture mapping precision |
CN114596406A (en) * | 2022-01-25 | 2022-06-07 | 海拓信息技术(佛山)有限公司 | Three-dimensional construction method and device based on monocular camera |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015024361A1 (en) * | 2013-08-20 | 2015-02-26 | 华为技术有限公司 | Three-dimensional reconstruction method and device, and mobile terminal |
CN104809755A (en) * | 2015-04-09 | 2015-07-29 | 福州大学 | Single-image-based cultural relic three-dimensional reconstruction method |
CN107063129A (en) * | 2017-05-25 | 2017-08-18 | 西安知象光电科技有限公司 | A kind of array parallel laser projection three-dimensional scan method |
CN108062788A (en) * | 2017-12-18 | 2018-05-22 | 北京锐安科技有限公司 | A kind of three-dimensional rebuilding method, device, equipment and medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6240198B1 (en) * | 1998-04-13 | 2001-05-29 | Compaq Computer Corporation | Method for figure tracking using 2-D registration |
CN102697508B (en) * | 2012-04-23 | 2013-10-16 | 中国人民解放军国防科学技术大学 | Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision |
CN102708566B (en) * | 2012-05-08 | 2014-10-29 | 天津工业大学 | Novel single-camera and single-projection light source synchronous calibrating method |
CN103578133B (en) * | 2012-08-03 | 2016-05-04 | 浙江大华技术股份有限公司 | A kind of method and apparatus that two-dimensional image information is carried out to three-dimensional reconstruction |
CN103077524A (en) * | 2013-01-25 | 2013-05-01 | 福州大学 | Calibrating method of hybrid vision system |
CN106204717B (en) * | 2015-05-28 | 2019-07-16 | 长沙维纳斯克信息技术有限公司 | A kind of stereo-picture quick three-dimensional reconstructing method and device |
CN105303554B (en) * | 2015-09-16 | 2017-11-28 | 东软集团股份有限公司 | The 3D method for reconstructing and device of a kind of image characteristic point |
CN106960442A (en) * | 2017-03-01 | 2017-07-18 | 东华大学 | Based on the infrared night robot vision wide view-field three-D construction method of monocular |
CN107945268B (en) * | 2017-12-15 | 2019-11-29 | 深圳大学 | A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light |
2018
- 2018-08-31 CN CN201910963573.7A — patent CN110838164B/en, status Active
- 2018-08-31 CN CN201910964298.0A — patent CN110827392B/en, status Active
- 2018-08-31 CN CN201811009447.XA — patent CN109147027B/en, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015024361A1 (en) * | 2013-08-20 | 2015-02-26 | 华为技术有限公司 | Three-dimensional reconstruction method and device, and mobile terminal |
CN104809755A (en) * | 2015-04-09 | 2015-07-29 | 福州大学 | Single-image-based cultural relic three-dimensional reconstruction method |
CN107063129A (en) * | 2017-05-25 | 2017-08-18 | 西安知象光电科技有限公司 | A kind of array parallel laser projection three-dimensional scan method |
CN108062788A (en) * | 2017-12-18 | 2018-05-22 | 北京锐安科技有限公司 | A kind of three-dimensional rebuilding method, device, equipment and medium |
Non-Patent Citations (3)
Title |
---|
Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes; Shen S; IEEE Transactions on Image Processing; 2013-12-31; full text *
基于视觉的三维重建技术综述 (A survey of vision-based three-dimensional reconstruction techniques); 佟帅; 《计算机应用研究》 (Application Research of Computers); 2011-07-31; full text *
基于计算机视觉的三维重建技术综述 (A survey of three-dimensional reconstruction techniques based on computer vision); 徐超; 《数字技术与应用》 (Digital Technology and Application); 2017-01-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110827392B (en) | 2023-03-24 |
CN110838164A (en) | 2020-02-25 |
CN110827392A (en) | 2020-02-21 |
CN109147027A (en) | 2019-01-04 |
CN109147027B (en) | 2019-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110838164B (en) | Monocular image three-dimensional reconstruction method, system and device based on object point depth | |
CN111062873B (en) | Parallax image splicing and visualization method based on multiple pairs of binocular cameras | |
CN108827147B (en) | Image measuring method and system based on rapid calibration | |
US20190392609A1 (en) | 3 dimensional coordinates calculating apparatus, 3 dimensional coordinates calculating method, 3 dimensional distance measuring apparatus and 3 dimensional distance measuring method using images | |
US11521311B1 (en) | Collaborative disparity decomposition | |
CN111028155B (en) | Parallax image splicing method based on multiple pairs of binocular cameras | |
US7098435B2 (en) | Method and apparatus for scanning three-dimensional objects | |
CN109544628B (en) | Accurate reading identification system and method for pointer instrument | |
CN105654547B (en) | Three-dimensional rebuilding method | |
JP2003130621A (en) | Method and system for measuring three-dimensional shape | |
CN109242898B (en) | Three-dimensional modeling method and system based on image sequence | |
Mousavi et al. | The performance evaluation of multi-image 3D reconstruction software with different sensors | |
CN108629829A (en) | The three-dimensional modeling method and system that one bulb curtain camera is combined with depth camera | |
WO2018032841A1 (en) | Method, device and system for drawing three-dimensional image | |
CN112802208B (en) | Three-dimensional visualization method and device in terminal building | |
Mahdy et al. | Projector calibration using passive stereo and triangulation | |
Wenzel et al. | High-resolution surface reconstruction from imagery for close range cultural Heritage applications | |
WO2018056802A1 (en) | A method for estimating three-dimensional depth value from two-dimensional images | |
Harvent et al. | Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system | |
CN113793392A (en) | Camera parameter calibration method and device | |
CN116952191A (en) | Visual ranging method based on coaxial photography | |
CN114935316B (en) | Standard depth image generation method based on optical tracking and monocular vision | |
Al-Zahrani et al. | Applications of a direct algorithm for the rectification of uncalibrated images | |
Lin | Resolution adjustable 3D scanner based on using stereo cameras | |
CN108257181A (en) | A kind of space-location method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |