CN110838164A - Monocular image three-dimensional reconstruction method, system and device based on object point depth - Google Patents

Monocular image three-dimensional reconstruction method, system and device based on object point depth

Info

Publication number
CN110838164A
CN110838164A (application CN201910963573.7A)
Authority
CN
China
Prior art keywords
three-dimensional
reference plane
two-dimensional image
image
three-dimensional reconstruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910963573.7A
Other languages
Chinese (zh)
Other versions
CN110838164B (en)
Inventor
林大甲
江世松
黄宗荣
郑敏忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinqianmao Technology Co Ltd
Original Assignee
Jinqianmao Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinqianmao Technology Co Ltd filed Critical Jinqianmao Technology Co Ltd
Priority to CN201910963573.7A priority Critical patent/CN110838164B/en
Publication of CN110838164A publication Critical patent/CN110838164A/en
Application granted granted Critical
Publication of CN110838164B publication Critical patent/CN110838164B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of measurement, and in particular to a monocular image three-dimensional reconstruction method, system and device based on object point depth. The monocular image three-dimensional reconstruction method comprises the following steps: shooting to obtain a two-dimensional image; segmenting the two-dimensional image to obtain a plurality of reference planes in the corresponding three-dimensional space; completing the three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes; and extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes to complete the three-dimensional reconstruction of all objects. The method, system and device first create a reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then map each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image. The system is simple in structure, easy to implement, and usable in a wide range of scenes.

Description

Monocular image three-dimensional reconstruction method, system and device based on object point depth
This application is a divisional application of the parent application entitled "Monocular image three-dimensional reconstruction method, system and device based on a reference plane", application number 201811009447.X, filed on August 31, 2018.
Technical Field
The invention relates to the field of measurement, in particular to a monocular image three-dimensional reconstruction method, a monocular image three-dimensional reconstruction system and a monocular image three-dimensional reconstruction device based on object point depth.
Background
Image three-dimensional reconstruction is applied in many fields. Monocular vision is simple in structure and convenient to use, but without relying on a known reference object it can only perform three-dimensional reconstruction of objects lying on a single designated plane in the image. Binocular stereo vision imitates the human eyes and completes three-dimensional reconstruction through parallax; compared with monocular vision it can reconstruct all objects in an image, but its structure is complex, its calibration is difficult to perform accurately, and the matching error of corresponding points is large, so in scenes where surface feature points are sparse it is difficult to obtain accurate shapes and complete the reconstruction. A structured-light camera requires core components such as a laser projector, a diffractive optical element and an infrared camera; the infrared camera captures the diffused infrared speckle pattern, from which the depth of each point is calculated. Three-dimensional laser scanners are costly, and binocular stereo cameras have high requirements, so a better alternative solution is needed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a monocular image three-dimensional reconstruction method, system and device based on object point depth.
In order to solve the above technical problems, a first technical solution adopted by the present invention is:
a monocular image three-dimensional reconstruction method based on object point depth comprises the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
The second technical scheme adopted by the invention is as follows:
a system for three-dimensional reconstruction of a monocular image based on object point depth comprising one or more processors and a memory, said memory storing a program which when executed by the processors performs the steps of:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
The third technical scheme adopted by the invention is as follows:
a monocular image three-dimensional reconstruction device based on object point depth comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image resulting in a plurality of reference planes in a corresponding three-dimensional space; finishing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes; and extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and finishing the three-dimensional reconstruction of all the objects.
The invention has the beneficial effects that: the monocular image three-dimensional reconstruction method, system and device based on object point depth first create a reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then map each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image. The system is simple in structure, easy to implement, and usable in a wide range of scenes.
Drawings
FIG. 1 is a flow chart illustrating the steps of a monocular image three-dimensional reconstruction method based on a reference plane according to the present invention;
FIG. 2 is a schematic structural diagram of a monocular image three-dimensional reconstruction system based on a reference plane according to the present invention;
FIG. 3 is a schematic diagram of projection point plane reconstruction of the monocular image three-dimensional reconstruction system based on a reference plane according to the present invention;
description of reference numerals:
1. a processor; 2. a memory.
Detailed Description
In order to explain technical contents, achieved objects, and effects of the present invention in detail, the following description is made with reference to the accompanying drawings in combination with the embodiments.
Referring to fig. 1, a monocular image three-dimensional reconstruction method based on a reference plane includes the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
From the above description, the beneficial effects of the present invention are: the monocular image three-dimensional reconstruction method based on a reference plane first creates the reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then maps each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image.
Further, step S1 is specifically:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
step S2 specifically includes:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
step S3 specifically includes:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
step S4 specifically includes:
and extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, and substituting translation vectors and rotation vectors from optical axes of cameras corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to complete three-dimensional reconstruction of all objects in the two-dimensional image.
Further, step S2 further includes:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
Referring to fig. 2, the present invention further provides a monocular image three-dimensional reconstruction system based on a reference plane, including one or more processors 1 and a memory 2, where the memory 2 stores a program, and the program, when executed by the processor 1, implements the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
From the above description, the beneficial effects of the present invention are: the monocular image three-dimensional reconstruction system based on a reference plane first creates the reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then maps each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image.
Further, the program when executed by the processor further implements the steps comprising:
step S1 specifically includes:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
step S2 specifically includes:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
step S3 specifically includes:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
step S4 specifically includes:
and extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, and substituting translation vectors and rotation vectors from optical axes of cameras corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to complete three-dimensional reconstruction of all objects in the two-dimensional image.
Further, the program when executed by the processor further implements the steps comprising:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
The invention also provides a monocular image three-dimensional reconstruction device based on the reference plane, which comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot to obtain a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image resulting in a plurality of reference planes in a corresponding three-dimensional space; finishing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes; and extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and finishing the three-dimensional reconstruction of all the objects.
From the above description, the beneficial effects of the present invention are: the monocular image three-dimensional reconstruction device based on a reference plane first creates the reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then maps each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image. The device is simple in structure, easy to implement, and usable in a wide range of scenes.
Further, the camera is specifically configured to rotate the camera to a scene area that needs three-dimensional reconstruction, and then perform image shooting on the scene area through the camera to obtain a two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model, and segment a plurality of reference planes in a corresponding three-dimensional space;
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
and extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, and substituting translation vectors and rotation vectors from optical axes of cameras corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to complete three-dimensional reconstruction of all objects in the two-dimensional image.
Further, the three-dimensional reconstruction unit is specifically configured to mark pixels in the two-dimensional image that belong to a reference plane region as a reference plane type, and mark pixels in the two-dimensional image that do not belong to the reference plane region as a non-reference plane type.
Referring to fig. 1, a first embodiment of the present invention is:
the invention provides a monocular image three-dimensional reconstruction method based on a reference plane, which comprises the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
It should be noted that the three-dimensional reconstruction of an image according to the present invention refers to obtaining the coordinates of objects in a unified three-dimensional reference coordinate system from the image. A camera is a mapping between three-dimensional world space and a two-dimensional image, and its mapping model can be expressed (up to a scale factor s) as:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[\,R \;\; t\,]\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

The mapping model represents the relation between the homogeneous coordinates (X_w, Y_w, Z_w, 1) of a point in the three-dimensional reference coordinate system and the homogeneous coordinates (u, v, 1) of the point it maps to in the two-dimensional image coordinate system, which is determined by the camera intrinsic parameters K and the camera extrinsic parameters (rotation R and translation t). The camera intrinsic parameters

$$ K = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

form the intrinsic matrix of the camera, where (u_0, v_0) is the projection position of the camera's optical center on the CCD imaging plane, f is the focal length of the camera, and d_x and d_y are the physical dimensions of each pixel of the CCD in the horizontal and vertical directions, respectively.
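As an illustration of this pinhole mapping model, the following is a minimal numerical sketch in Python; the focal length, pixel pitch, principal point and pose values are example figures chosen here, not values from the patent.

```python
import numpy as np

# Hypothetical intrinsic parameters (example values, not taken from the patent)
f, dx, dy = 0.008, 1.0e-5, 1.0e-5   # focal length and pixel pitch in meters
u0, v0 = 320.0, 240.0               # principal point in pixels
K = np.array([[f / dx, 0.0, u0],
              [0.0, f / dy, v0],
              [0.0, 0.0, 1.0]])

# Hypothetical extrinsic parameters (rotation R and translation t)
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project(Pw):
    """Map a point (Xw, Yw, Zw) of the reference frame to pixel coordinates (u, v)."""
    p = K @ (R @ np.asarray(Pw, dtype=float) + t)  # homogeneous image point, scaled by depth
    return p[0] / p[2], p[1] / p[2]

print(project([0.1, 0.2, 2.0]))  # e.g. a point 2 m in front of the camera
```

With the identity rotation and zero translation used above, the reference coordinate system coincides with the camera coordinate system, so the printed pixel coordinates depend only on K.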
The step S1 is specifically:
In this embodiment, the camera is rotated to the scene area needing three-dimensional reconstruction and captures an image of that area, and a two-dimensional image coordinate system is established, taking the upper-left corner of the two-dimensional image as the origin, with the u axis pointing to the right and the v axis pointing downward; the rotation angle of the camera's optical axis is obtained from the pan-tilt head and comprises the vertical rotation angle αc and the horizontal rotation angle βc of the pan-tilt head.
The step S2 is specifically:
In this embodiment, a large number of pictures of the same type of application scene are collected in advance, and the SLIC algorithm is used to perform superpixel processing on the images to obtain the distribution of their colors and textures; superpixels with the same characteristics are grouped, where "the same characteristics" means region pixels with the same kind of geometric meaning in an image. For example, for a construction-site scene the image generally falls into two geometric types, a reference plane (the construction surface) and a non-reference plane (objects extending from the reference plane, such as steel bars, scaffolding and cement columns). The acquired set of scene pictures is grouped by superpixel, the groups are labeled (reference plane or non-reference plane), and a geometric classification model of the scene is then built through deep learning;
after the image is captured in step S1, the classification model is used to analyze it and the geometric region of the reference plane is segmented out; pixels belonging to the reference plane region in the image are marked as the reference plane type, and the remaining pixels are marked as the non-reference plane type (a sketch of this labeling step is given below);
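A minimal sketch of this segmentation and labeling step follows, assuming scikit-image's SLIC implementation for the superpixel processing; the callable classify_superpixel is a hypothetical stand-in for the trained geometric classification model and is not part of the patent.

```python
import numpy as np
from skimage.segmentation import slic

def label_reference_plane(image, classify_superpixel, n_segments=400):
    """Mark each pixel of an RGB image as reference plane (1) or non-reference plane (0).

    `classify_superpixel` stands in for the geometric classification model that
    the patent builds through deep learning; here it is a hypothetical callable
    that receives the pixels of one superpixel and returns True for the reference plane.
    """
    segments = slic(image, n_segments=n_segments, compactness=10)
    mask = np.zeros(segments.shape, dtype=np.uint8)
    for sp_id in np.unique(segments):
        if classify_superpixel(image[segments == sp_id]):
            mask[segments == sp_id] = 1
    return mask
```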
the step S3 is specifically:
In this embodiment, for convenience of description, the optical axis when the pan-tilt head is at its initial zero azimuth (both the horizontal angle and the vertical angle are 0 degrees) is taken as the Zc axis, and the camera coordinate system XcYcZc is established; on the reference plane, with the intersection point of the optical axis and the reference plane as the origin and the coordinate axis directions of the camera coordinate system XcYcZc as reference directions, the three-dimensional reference coordinate system XwYwZw is established, where Yw is perpendicular to the reference plane;
the pan-tilt head is controlled to point the optical axis of the camera at any three position points of the reference plane in turn; whether each position point lies on the reference plane is determined by comparing, with an image matching algorithm, the n×n pixel area at the center of the picture with the reference-plane-type pixel set obtained in step S2; the coordinate values of the three reference-plane position points in the coordinate system XcYcZc are then obtained from the rotation angles of the pan-tilt head and the distances measured by laser ranging;
In this embodiment, the laser beam is positioned by the pan-tilt head at a first position point P1 of the reference plane; from the measured distance from point P1 to the laser measuring device and the vertical rotation angle α1 and horizontal rotation angle β1 of the pan-tilt head, the coordinate value of point P1 in the coordinate system XcYcZc is calculated. In the same way the coordinate values of a second point P2 and a third point P3 of the reference plane are obtained; these are not described in detail herein (a sketch of this conversion, under an assumed angle convention, follows this paragraph);
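The formulas converting the measured distance and pan-tilt angles into camera-frame coordinates appear in the published text only as figures; the sketch below shows one plausible conversion under an assumed axis convention (Zc along the zero-azimuth optical axis, Xc to the right, Yc downward), so the signs and axis assignments may differ from the patent's own formula.

```python
import numpy as np

def point_from_pan_tilt(d, alpha, beta):
    """Camera-frame coordinates of a laser-ranged point.

    d     : measured distance from the point to the laser measuring device
    alpha : vertical rotation angle of the pan-tilt head (radians)
    beta  : horizontal rotation angle of the pan-tilt head (radians)

    The axis convention (Zc along the zero-azimuth optical axis, Xc to the right,
    Yc downward) is an assumption of this sketch, not the patent's own definition.
    """
    x = d * np.cos(alpha) * np.sin(beta)
    y = -d * np.sin(alpha)
    z = d * np.cos(alpha) * np.cos(beta)
    return np.array([x, y, z])

# Three example reference-plane points P1, P2, P3 (illustrative values only)
P1 = point_from_pan_tilt(5.2, np.radians(-30), np.radians(10))
P2 = point_from_pan_tilt(4.8, np.radians(-35), np.radians(-15))
P3 = point_from_pan_tilt(6.1, np.radians(-28), np.radians(25))
```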
From the three position points, the normal vector of the reference plane is obtained, and from it the projection, onto that normal, of the vector from the camera to the reference plane. From these, the vertical deviation angle and the horizontal deviation angle of the optical axis relative to the zero azimuth when the optical axis is closest to the reference plane can be found. From the pan-tilt vertical rotation angle αc and horizontal rotation angle βc at which the image was captured in step S1, the unit vector of the optical axis at that moment is obtained, and then the angle between the projection vector and this unit vector. From these quantities, the translation vector tc and the rotation vector Rc from the optical axis to the reference plane in the image captured in step S1 are obtained. Substituting (rotation Rc and translation tc) into the mapping model between the three-dimensional reference coordinate system of the reference plane and the two-dimensional image coordinate system realizes the three-dimensional reconstruction of the reference plane and yields the coordinates, in the three-dimensional reference coordinate system, of each reference-plane pixel point in the image (one way to realize this per-pixel reconstruction is sketched below);
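The intermediate formulas for the normal vector, deviation angles and extrinsic vectors are published only as figures. As a hedged illustration, the sketch below obtains the same end result, the camera-frame coordinates of every reference-plane pixel, by intersecting each pixel's viewing ray with the plane fitted through P1, P2 and P3; it assumes the plane parameters are expressed in the camera frame of the captured picture, and it is not the patent's own derivation.

```python
import numpy as np

def plane_from_points(P1, P2, P3):
    """Unit normal n and offset c of the reference plane, so that n . X = c on the plane."""
    n = np.cross(P2 - P1, P3 - P1)
    n = n / np.linalg.norm(n)
    return n, float(np.dot(n, P1))

def backproject_to_plane(u, v, K, n, c):
    """Coordinates, in the capture-time camera frame, of the reference-plane point
    imaged at pixel (u, v): the viewing ray through the pixel is intersected with
    the plane n . X = c."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction of the viewing ray
    s = c / np.dot(n, ray)                          # scale at which the ray meets the plane
    return s * ray
```

Transforming these camera-frame coordinates into the reference coordinate system XwYwZw then amounts to applying the rotation and translation described above.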
the step S4 is specifically:
In this embodiment, the method for extracting the projection points of an object is as follows:
the SharpMask image segmentation algorithm is applied to the image to obtain the edge segmentation texture of the objects in the image, and for each texture pixel point falling within the non-reference-plane-type pixel set obtained in S2, its projection point on the reference plane is calculated;
the projection point is the point at which a point on the object projects onto the reference plane; the line connecting an object point and its projection point is perpendicular to the reference plane, i.e. parallel to the normal vector of the reference plane;
the place where the object meets the reference plane appears in the image as edge-segmentation texture pixel points adjacent to reference-plane pixel points; the eight-connected neighborhood of each edge-segmentation texture pixel point of the object is therefore searched for pixels belonging to the reference-plane-type pixel set obtained in step S2, and any such reference-plane pixels are added to the projection point set of that texture pixel point; connecting the texture pixel point with each reference-plane pixel point in its projection point set gives the set of plane straight lines of that texture pixel point in the two-dimensional image coordinate system;
under the three-dimensional reference coordinate system XwYwZw, a point m (0, 0, 0) on the reference plane and a point n (0, 0, 1) outside the reference plane are taken; the known vector mn, i.e. the normal vector of the reference plane, is substituted into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the plane that passes through the points m and n and is perpendicular to the reference plane; in this way the three-dimensional reconstruction of that vertical plane is realized and the straight line of the normal vector mn under the two-dimensional image coordinate system is obtained;
according to the constraint that the line connecting an object point and its projection point is parallel to the normal vector of the reference plane, the straight line in the plane straight line set that is most nearly parallel to the normal-vector line is found, with the included angle between the two straight lines used as the judgment criterion, and the reference-plane pixel point corresponding to that straight line is taken as the projection point of the object point; from the three-dimensional reconstruction result of the reference plane in step S3, the coordinate Pp = (Xp, Yp, Zp) of the projection point in the three-dimensional reference coordinate system is obtained (a sketch of this parallel-line selection is given below);
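A small sketch of the parallel-line selection rule follows; obj_px, candidates and normal_dir_2d are hypothetical variable names for the quantities described above (the object edge pixel, its eight-connected reference-plane neighbors, and the image direction of the reference-plane normal at that location).

```python
import numpy as np

def pick_projection_point(obj_px, candidates, normal_dir_2d):
    """Choose, among candidate reference-plane pixels adjacent to an object edge
    pixel, the one whose connecting line is most nearly parallel to the image of
    the reference-plane normal (the constraint described in the patent)."""
    n = np.asarray(normal_dir_2d, dtype=float)
    n = n / np.linalg.norm(n)
    best, best_score = None, -1.0
    for cand in candidates:
        d = np.asarray(cand, dtype=float) - np.asarray(obj_px, dtype=float)
        length = np.linalg.norm(d)
        if length == 0:
            continue
        score = abs(np.dot(d / length, n))  # |cos| of the angle between the two lines
        if score > best_score:
            best, best_score = cand, score
    return best
```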
From the unit vector of the optical axis obtained in step S3 and the translation vector tc, the optical-axis vector at the moment the picture was captured in step S1 is obtained; projecting this vector along the normal of the reference plane gives its projection vector on the reference plane. Taking this projection vector as a normal vector and combining it with the projection point Pp, the plane passing through the projection point and perpendicular to the reference plane is obtained; this vertical plane is the plane in which the object point lies;
further, the intersection (Xi, Yi, Zi) of the optical-axis vector with the vertical plane is obtained, and from it the value of the translation from the optical axis to the vertical plane; substituting this into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the vertical plane yields the coordinates of the object point under the three-dimensional reference coordinate system;
the above steps are repeated to obtain the coordinates of all object points in the image under the three-dimensional reference coordinate system, completing the three-dimensional reconstruction of all objects in the image (a sketch of this ray-plane intersection follows);
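The depth computation for an object point can likewise be sketched as a ray-plane intersection. In the sketch below, taking the projected optical-axis direction as the normal of the "simulation plane" follows the description above, while the unit-normal requirement and the camera-frame convention are assumptions of this sketch rather than statements of the patent.

```python
import numpy as np

def object_point_3d(u, v, K, P_p, n_ref, axis_dir):
    """Camera-frame coordinates of an object point from its pixel (u, v) and the
    3-D projection point P_p on the reference plane.

    n_ref    : unit normal of the reference plane (assumed normalized)
    axis_dir : optical-axis direction at capture time
    The simulation plane passes through P_p, is perpendicular to the reference
    plane, and uses the projection of axis_dir onto the reference plane as its
    normal m; the object point is where the pixel's viewing ray meets it.
    """
    m = axis_dir - np.dot(axis_dir, n_ref) * n_ref   # project the axis onto the plane
    m = m / np.linalg.norm(m)
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray through the pixel
    s = np.dot(m, P_p) / np.dot(m, ray)              # scale at which the ray meets the plane
    return s * ray
```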
As shown in FIG. 3, the two-dimensional image contains a steel bar inserted obliquely into a reference plane ρ (the floor of a building site). The projection points a and b of the steel-bar pixel points A and B on the reference plane ρ are obtained, and the simulation planes π1 and π2 passing through the projection points a and b and perpendicular to the reference plane ρ are obtained; from the mapping models between the planes π1, π2 and the two-dimensional image, the three-dimensional coordinates of the image pixel points A and B are obtained. By analogy, the three-dimensional coordinates of all pixel points of the steel bar are obtained, completing the three-dimensional reconstruction of the whole steel bar.
Referring to fig. 2, the second embodiment of the present invention is:
the invention provides a monocular image three-dimensional reconstruction system based on a reference plane, which comprises one or more processors and a memory, wherein the memory stores a program, and the program realizes the following steps when being executed by the processor:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
and S4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing the three-dimensional reconstruction of all the objects.
It should be noted that the three-dimensional reconstruction of an image according to the present invention refers to obtaining the coordinates of objects in a unified three-dimensional reference coordinate system from the image. A camera is a mapping between three-dimensional world space and a two-dimensional image, and its mapping model can be expressed (up to a scale factor s) as:

$$ s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,[\,R \;\; t\,]\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} $$

The mapping model represents the relation between the homogeneous coordinates (X_w, Y_w, Z_w, 1) of a point in the three-dimensional reference coordinate system and the homogeneous coordinates (u, v, 1) of the point it maps to in the two-dimensional image coordinate system, which is determined by the camera intrinsic parameters K and the camera extrinsic parameters (rotation R and translation t). The camera intrinsic parameters

$$ K = \begin{bmatrix} f/d_x & 0 & u_0 \\ 0 & f/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} $$

form the intrinsic matrix of the camera, where (u_0, v_0) is the projection position of the camera's optical center on the CCD imaging plane, f is the focal length of the camera, and d_x and d_y are the physical dimensions of each pixel of the CCD in the horizontal and vertical directions, respectively.
The step S1 is specifically:
In this embodiment, the camera is rotated to the scene area needing three-dimensional reconstruction and captures an image of that area, and a two-dimensional image coordinate system is established, taking the upper-left corner of the two-dimensional image as the origin, with the u axis pointing to the right and the v axis pointing downward; the rotation angle of the camera's optical axis is obtained from the pan-tilt head and comprises the vertical rotation angle αc and the horizontal rotation angle βc of the pan-tilt head.
The step S2 is specifically:
In this embodiment, a large number of pictures of the same type of application scene are collected in advance, and the SLIC algorithm is used to perform superpixel processing on the images to obtain the distribution of their colors and textures; superpixels with the same characteristics are grouped, where "the same characteristics" means region pixels with the same kind of geometric meaning in an image. For example, for a construction-site scene the image generally falls into two geometric types, a reference plane (the construction surface) and a non-reference plane (objects extending from the reference plane, such as steel bars, scaffolding and cement columns). The acquired set of scene pictures is grouped by superpixel, the groups are labeled (reference plane or non-reference plane), and a geometric classification model of the scene is then built through deep learning;
after the image is captured in step S1, the classification model is used to analyze it and the geometric region of the reference plane is segmented out; pixels belonging to the reference plane region in the image are marked as the reference plane type, and the remaining pixels are marked as the non-reference plane type;
the step S3 is specifically:
In this embodiment, for convenience of description, the optical axis when the pan-tilt head is at its initial zero azimuth (both the horizontal angle and the vertical angle are 0 degrees) is taken as the Zc axis, and the camera coordinate system XcYcZc is established; on the reference plane, with the intersection point of the optical axis and the reference plane as the origin and the coordinate axis directions of the camera coordinate system XcYcZc as reference directions, the three-dimensional reference coordinate system XwYwZw is established, where Yw is perpendicular to the reference plane;
the pan-tilt head is controlled to point the optical axis of the camera at any three position points of the reference plane in turn; whether each position point lies on the reference plane is determined by comparing, with an image matching algorithm, the n×n pixel area at the center of the picture with the reference-plane-type pixel set obtained in step S2; the coordinate values of the three reference-plane position points in the coordinate system XcYcZc are then obtained from the rotation angles of the pan-tilt head and the distances measured by laser ranging;
In this embodiment, the laser beam is positioned by the pan-tilt head at a first position point P1 of the reference plane; from the measured distance from point P1 to the laser measuring device and the vertical rotation angle α1 and horizontal rotation angle β1 of the pan-tilt head, the coordinate value of point P1 in the coordinate system XcYcZc is calculated. In the same way the coordinate values of a second point P2 and a third point P3 of the reference plane are obtained; these are not described in detail herein;
From the three position points, the normal vector of the reference plane is obtained, and from it the projection, onto that normal, of the vector from the camera to the reference plane. From these, the vertical deviation angle and the horizontal deviation angle of the optical axis relative to the zero azimuth when the optical axis is closest to the reference plane can be found. From the pan-tilt vertical rotation angle αc and horizontal rotation angle βc at which the image was captured in step S1, the unit vector of the optical axis at that moment is obtained, and then the angle between the projection vector and this unit vector. From these quantities, the translation vector tc and the rotation vector Rc from the optical axis to the reference plane in the image captured in step S1 are obtained. Substituting (rotation Rc and translation tc) into the mapping model between the three-dimensional reference coordinate system of the reference plane and the two-dimensional image coordinate system realizes the three-dimensional reconstruction of the reference plane and yields the coordinates, in the three-dimensional reference coordinate system, of each reference-plane pixel point in the image;
the step S4 is specifically:
In this embodiment, the method for extracting the projection points of an object is as follows:
the SharpMask image segmentation algorithm is applied to the image to obtain the edge segmentation texture of the objects in the image, and for each texture pixel point falling within the non-reference-plane-type pixel set obtained in S2, its projection point on the reference plane is calculated;
the projection point is the point at which a point on the object projects onto the reference plane; the line connecting an object point and its projection point is perpendicular to the reference plane, i.e. parallel to the normal vector of the reference plane;
the place where the object meets the reference plane appears in the image as edge-segmentation texture pixel points adjacent to reference-plane pixel points; the eight-connected neighborhood of each edge-segmentation texture pixel point of the object is therefore searched for pixels belonging to the reference-plane-type pixel set obtained in step S2, and any such reference-plane pixels are added to the projection point set of that texture pixel point; connecting the texture pixel point with each reference-plane pixel point in its projection point set gives the set of plane straight lines of that texture pixel point in the two-dimensional image coordinate system;
under the three-dimensional reference coordinate system XwYwZw, a point m (0, 0, 0) on the reference plane and a point n (0, 0, 1) outside the reference plane are taken; the known vector mn, i.e. the normal vector of the reference plane, is substituted into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the plane that passes through the points m and n and is perpendicular to the reference plane; in this way the three-dimensional reconstruction of that vertical plane is realized and the straight line of the normal vector mn under the two-dimensional image coordinate system is obtained;
according to the constraint that the line connecting an object point and its projection point is parallel to the normal vector of the reference plane, the straight line in the plane straight line set that is most nearly parallel to the normal-vector line is found, with the included angle between the two straight lines used as the judgment criterion, and the reference-plane pixel point corresponding to that straight line is taken as the projection point of the object point; from the three-dimensional reconstruction result of the reference plane in step S3, the coordinate Pp = (Xp, Yp, Zp) of the projection point in the three-dimensional reference coordinate system is obtained;
From the unit vector of the optical axis obtained in step S3 and the translation vector tc, the optical-axis vector at the moment the picture was captured in step S1 is obtained; projecting this vector along the normal of the reference plane gives its projection vector on the reference plane. Taking this projection vector as a normal vector and combining it with the projection point Pp, the plane passing through the projection point and perpendicular to the reference plane is obtained; this vertical plane is the plane in which the object point lies;
further, the intersection (Xi, Yi, Zi) of the optical-axis vector with the vertical plane is obtained, and from it the value of the translation from the optical axis to the vertical plane; substituting this into the mapping model between the three-dimensional reference coordinate system and the two-dimensional image coordinate system of the vertical plane yields the coordinates of the object point under the three-dimensional reference coordinate system;
the above steps are repeated to obtain the coordinates of all object points in the image under the three-dimensional reference coordinate system, completing the three-dimensional reconstruction of all objects in the image.
The third embodiment of the invention is as follows:
the invention also provides a monocular image three-dimensional reconstruction device based on the reference plane, which comprises a camera and a three-dimensional reconstruction unit which are connected with each other, wherein the camera is configured to shoot to obtain a two-dimensional image, and the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image resulting in a plurality of reference planes in a corresponding three-dimensional space; finishing three-dimensional reconstruction corresponding to the two-dimensional image according to a plurality of reference planes; and extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and finishing the three-dimensional reconstruction of all the objects.
The camera is specifically configured to rotate the camera to a scene area needing three-dimensional reconstruction, and then image shooting is carried out on the scene area through the camera to obtain a two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model, and segment a plurality of reference planes in a corresponding three-dimensional space;
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
and extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the object points to obtain a plane where the object points are located, and substituting translation vectors and rotation vectors from optical axes of cameras corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to complete three-dimensional reconstruction of all objects in the two-dimensional image.
The three-dimensional reconstruction unit is specifically configured to mark pixels in the two-dimensional image that belong to a reference plane region as a reference plane type, and pixels in the two-dimensional image that do not belong to the reference plane region as a non-reference plane type.
In a specific embodiment, the monocular image three-dimensional reconstruction device based on a reference plane includes a measuring end; the measuring end comprises a laser, a camera, an angle adjuster and a processor; the laser is mounted on the camera; the laser, the camera and the angle adjuster are each connected to the processor, and the laser and the camera are each connected to the angle adjuster. The device further comprises a server and at least one terminal, and the measuring end is connected to each terminal through the server. The server is connected to the measuring end and the terminals over a network and provides the communication interface between them, receiving and transmitting electrical signals to and from the measuring end and the terminals. The terminal displays visual output to the user, including the two-dimensional image, textual information of the three-dimensional reconstruction results, graphical information, and any combination thereof. The terminal receives the user's control input, sends control signals to the server, triggers the capture of a two-dimensional image, and obtains the three-dimensional reconstruction result of the objects in the image.
In summary, the monocular image three-dimensional reconstruction method, system and device based on a reference plane provided by the present invention first create a reference plane by exploiting the ability of monocular vision to reconstruct a plane in three dimensions, and then map each object point in the image onto a simulation plane based on that reference plane to obtain the depth of each object point, thereby completing the three-dimensional reconstruction of the whole image. The system is simple in structure, easy to implement, and usable in a wide range of scenes.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all equivalent changes made by using the contents of the present specification and the drawings, or applied directly or indirectly to the related technical fields, are included in the scope of the present invention.

Claims (6)

1. A monocular image three-dimensional reconstruction method based on object point depth is characterized by comprising the following steps:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
s4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing three-dimensional reconstruction of all the objects;
step S1 specifically includes:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
step S2 specifically includes:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
step S3 specifically includes:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
step S4 specifically includes:
extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the projection points to obtain a plane where the object points are located, substituting translation vectors and rotation vectors from an optical axis of a camera corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to obtain the depth of the object points, and finishing three-dimensional reconstruction of all objects in the two-dimensional image;
the extracting the projection points of all the objects in the two-dimensional image on the corresponding reference plane comprises:
and applying a SharpMask image segmentation method to the two-dimensional image to obtain edge segmentation textures of the object in the two-dimensional image, and calculating projection points of texture pixel points which do not belong to the reference plane area corresponding to the object on the corresponding reference plane.
2. The method for three-dimensional reconstruction of monocular image based on object point depth according to claim 1, wherein step S2 further includes:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
3. A system for three-dimensional reconstruction of a monocular image based on object point depth, comprising one or more processors and a memory, said memory storing a program which when executed by the processors performs the steps of:
s1, shooting to obtain a two-dimensional image, wherein the content of the two-dimensional image is a scene needing three-dimensional reconstruction;
s2, segmenting the two-dimensional image to obtain a plurality of reference planes in a corresponding three-dimensional space;
s3, completing three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes;
s4, extracting projection points of all objects in the two-dimensional image on the corresponding reference plane, and completing three-dimensional reconstruction of all the objects;
step S1 specifically includes:
after the camera is rotated to a scene area needing three-dimensional reconstruction, the scene area is shot by the camera to obtain a two-dimensional image;
step S2 specifically includes:
analyzing the two-dimensional image by using a classification model, and segmenting a plurality of reference planes in a corresponding three-dimensional space;
step S3 specifically includes:
substituting the translation vector and the rotation vector from the optical axis of the camera to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the reference plane to complete three-dimensional reconstruction of the reference plane in the two-dimensional image;
step S4 specifically includes:
extracting projection points of all objects in the two-dimensional image on a reference plane corresponding to the projection points to obtain a plane where the object points are located, substituting translation vectors and rotation vectors from an optical axis of a camera corresponding to the object points to the reference plane into a mapping model between a three-dimensional reference coordinate system and a two-dimensional image coordinate system of the object point plane to obtain the depth of the object points, and finishing three-dimensional reconstruction of all objects in the two-dimensional image;
the extracting the projection points of all the objects in the two-dimensional image on the corresponding reference plane comprises:
and applying a SharpMask image segmentation method to the two-dimensional image to obtain edge segmentation textures of the object in the two-dimensional image, and calculating projection points of texture pixel points which do not belong to the reference plane area corresponding to the object on the corresponding reference plane.
4. The system of claim 3, wherein the program when executed by the processor further implements the steps of:
and marking the pixels belonging to the reference plane area in the two-dimensional image as a reference plane type, and marking the pixels not belonging to the reference plane area in the two-dimensional image as a non-reference plane type.
5. A monocular image three-dimensional reconstruction device based on object point depth, characterized by comprising a camera and a three-dimensional reconstruction unit connected to each other, wherein the camera is configured to capture a two-dimensional image whose content is the scene requiring three-dimensional reconstruction;
the three-dimensional reconstruction unit is configured to segment the two-dimensional image to obtain a plurality of reference planes in the corresponding three-dimensional space, complete the three-dimensional reconstruction corresponding to the two-dimensional image according to the plurality of reference planes, and extract the projection points of all objects in the two-dimensional image on their corresponding reference planes to complete the three-dimensional reconstruction of all objects;
the camera is specifically configured to be rotated towards the scene area requiring three-dimensional reconstruction and to then capture an image of that scene area to obtain the two-dimensional image;
the three-dimensional reconstruction unit is specifically configured to analyze the two-dimensional image using a classification model and segment out a plurality of reference planes in the corresponding three-dimensional space;
substitute the translation vector and rotation vector from the camera optical axis to each reference plane into a mapping model between the three-dimensional reference coordinate system of that reference plane and the two-dimensional image coordinate system, so as to complete the three-dimensional reconstruction of the reference planes in the two-dimensional image;
extract the projection points of all objects in the two-dimensional image on their corresponding reference planes to obtain the plane on which each object point lies, substitute the translation vector and rotation vector from the camera optical axis corresponding to the object point to that reference plane into a mapping model between the three-dimensional reference coordinate system of the object-point plane and the two-dimensional image coordinate system to obtain the depth of the object point, and complete the three-dimensional reconstruction of all objects in the two-dimensional image;
wherein extracting the projection points of all objects in the two-dimensional image on their corresponding reference planes comprises:
applying the SharpMask image segmentation method to the two-dimensional image to obtain the edge segmentation texture of each object in the two-dimensional image, and calculating the projection points, on the corresponding reference plane, of the texture pixel points of the object that do not belong to the reference plane region.
6. The device according to claim 5, wherein the three-dimensional reconstruction unit is specifically configured to label the pixels in the two-dimensional image that belong to a reference plane region as the reference plane type, and the pixels in the two-dimensional image that do not belong to a reference plane region as the non-reference plane type.
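For readers who want a concrete picture of the plane-based mapping model and object-point depth recovery recited in the claims, the following is a minimal numeric sketch, not the patent's implementation. It assumes a pin-hole camera with intrinsic matrix K, a rotation matrix R and translation vector t taking the reference plane's three-dimensional reference coordinate system into the camera frame, and that an object point lies on the plane normal passing through its projection (foot) point on the reference plane; all function names, variable names and toy numbers are illustrative and do not appear in the patent.

import numpy as np

def plane_homography(K, R, t):
    # Mapping model between the reference plane's 3-D reference coordinate system
    # (the plane is z = 0 in that system) and the 2-D image coordinate system:
    # a plane point (X, Y, 0) projects to K (R [X, Y, 0]^T + t), which reduces to
    # the homography H = K [r1 r2 t].
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def backproject_to_plane(H, pixel):
    # Lift an image pixel that lies on the reference plane back to plane
    # coordinates (X, Y, 0) by inverting the homography.
    xyw = np.linalg.solve(H, np.array([pixel[0], pixel[1], 1.0]))
    return np.array([xyw[0] / xyw[2], xyw[1] / xyw[2], 0.0])

def object_point_depth(K, R, t, object_pixel, foot_pixel):
    # foot_pixel  : projection (foot) point of the object point on the reference
    #               plane, e.g. taken from the object's edge segmentation.
    # object_pixel: pixel of the object point itself.
    # Assumes the object point sits on the plane normal through the foot point.
    H = plane_homography(K, R, t)
    foot_plane = backproject_to_plane(H, foot_pixel)   # foot in plane coordinates
    foot_cam = R @ foot_plane + t                      # foot in the camera frame
    normal_cam = R[:, 2]                               # plane normal in the camera frame

    # Camera ray through the object pixel: p(s) = s * d with d = K^-1 [u, v, 1]^T.
    d = np.linalg.solve(K, np.array([object_pixel[0], object_pixel[1], 1.0]))

    # Intersect the ray with the line foot_cam + h * normal_cam:
    # solve s * d - h * normal_cam = foot_cam for (s, h) in the least-squares sense.
    A = np.column_stack((d, -normal_cam))
    s, _h = np.linalg.lstsq(A, foot_cam, rcond=None)[0]
    point_cam = s * d
    return point_cam, point_cam[2]                     # 3-D point and its depth

if __name__ == "__main__":
    # Toy ground-plane setup: the camera looks along +z, the plane lies 2 units
    # below the optical centre, plane X along camera x, plane Y along camera z.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R = np.column_stack(([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, -1.0, 0.0]))
    t = np.array([0.0, 2.0, 0.0])
    point, depth = object_point_depth(K, R, t,
                                      object_pixel=(370.0, 340.0),
                                      foot_pixel=(370.0, 440.0))
    print(point, depth)   # expected: roughly [0.5, 1.0, 8.0] and depth 8.0

In this sketch the homography H = K [r1 r2 t] plays the role of the mapping model between the plane's three-dimensional reference coordinate system and the two-dimensional image coordinate system; once the foot point has been lifted onto the plane, the object point's depth follows from intersecting the camera ray through its pixel with the plane normal through the foot.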
CN201910963573.7A 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device based on object point depth Active CN110838164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910963573.7A CN110838164B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device based on object point depth

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811009447.XA CN109147027B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional rebuilding method, system and device based on reference planes
CN201910963573.7A CN110838164B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device based on object point depth

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811009447.XA Division CN109147027B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional rebuilding method, system and device based on reference planes

Publications (2)

Publication Number Publication Date
CN110838164A (en) 2020-02-25
CN110838164B (en) 2023-03-24

Family

ID=64825870

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201910963573.7A Active CN110838164B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device based on object point depth
CN201811009447.XA Active CN109147027B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional rebuilding method, system and device based on reference planes
CN201910964298.0A Active CN110827392B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201811009447.XA Active CN109147027B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional rebuilding method, system and device based on reference planes
CN201910964298.0A Active CN110827392B (en) 2018-08-31 2018-08-31 Monocular image three-dimensional reconstruction method, system and device

Country Status (1)

Country Link
CN (3) CN110838164B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741404B (en) * 2019-01-10 2020-11-17 奥本未来(北京)科技有限责任公司 Light field acquisition method based on mobile equipment
CN111220130B (en) * 2019-01-31 2022-09-13 金钱猫科技股份有限公司 Focusing measurement method and terminal capable of measuring object at any position in space
CN112837404B (en) * 2019-11-25 2024-01-19 北京初速度科技有限公司 Method and device for constructing three-dimensional information of planar object
CN111415420B (en) * 2020-03-25 2024-01-23 北京迈格威科技有限公司 Spatial information determining method and device and electronic equipment
CN112198527B (en) * 2020-09-30 2022-12-27 上海炬佑智能科技有限公司 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
CN112198529B (en) * 2020-09-30 2022-12-27 上海炬佑智能科技有限公司 Reference plane adjustment and obstacle detection method, depth camera and navigation equipment
CN112884898B (en) * 2021-03-17 2022-06-07 杭州思看科技有限公司 Reference device for measuring texture mapping precision


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6240198B1 (en) * 1998-04-13 2001-05-29 Compaq Computer Corporation Method for figure tracking using 2-D registration
CN102697508B (en) * 2012-04-23 2013-10-16 中国人民解放军国防科学技术大学 Method for performing gait recognition by adopting three-dimensional reconstruction of monocular vision
CN102708566B (en) * 2012-05-08 2014-10-29 天津工业大学 Novel single-camera and single-projection light source synchronous calibrating method
CN103578133B (en) * 2012-08-03 2016-05-04 浙江大华技术股份有限公司 A kind of method and apparatus that two-dimensional image information is carried out to three-dimensional reconstruction
CN103077524A (en) * 2013-01-25 2013-05-01 福州大学 Calibrating method of hybrid vision system
CN106204717B (en) * 2015-05-28 2019-07-16 长沙维纳斯克信息技术有限公司 A kind of stereo-picture quick three-dimensional reconstructing method and device
CN105303554B (en) * 2015-09-16 2017-11-28 东软集团股份有限公司 The 3D method for reconstructing and device of a kind of image characteristic point
CN106960442A (en) * 2017-03-01 2017-07-18 东华大学 Based on the infrared night robot vision wide view-field three-D construction method of monocular
CN107945268B (en) * 2017-12-15 2019-11-29 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024361A1 (en) * 2013-08-20 2015-02-26 华为技术有限公司 Three-dimensional reconstruction method and device, and mobile terminal
CN104809755A (en) * 2015-04-09 2015-07-29 福州大学 Single-image-based cultural relic three-dimensional reconstruction method
CN107063129A (en) * 2017-05-25 2017-08-18 西安知象光电科技有限公司 A kind of array parallel laser projection three-dimensional scan method
CN108062788A (en) * 2017-12-18 2018-05-22 北京锐安科技有限公司 A kind of three-dimensional rebuilding method, device, equipment and medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHEN S: "Accurate multiple view 3D reconstruction using patch-based stereo for large-scale scenes", IEEE TRANSACTIONS ON IMAGE PROCESSING *
TONG SHUAI: "Survey of vision-based three-dimensional reconstruction techniques", APPLICATION RESEARCH OF COMPUTERS *
XU CHAO: "Survey of three-dimensional reconstruction techniques based on computer vision", DIGITAL TECHNOLOGY AND APPLICATION *

Also Published As

Publication number Publication date
CN110827392A (en) 2020-02-21
CN109147027B (en) 2019-11-08
CN110838164B (en) 2023-03-24
CN110827392B (en) 2023-03-24
CN109147027A (en) 2019-01-04

Similar Documents

Publication Publication Date Title
CN110838164B (en) Monocular image three-dimensional reconstruction method, system and device based on object point depth
CN111062873B (en) Parallax image splicing and visualization method based on multiple pairs of binocular cameras
US10825198B2 (en) 3 dimensional coordinates calculating apparatus, 3 dimensional coordinates calculating method, 3 dimensional distance measuring apparatus and 3 dimensional distance measuring method using images
US11816829B1 (en) Collaborative disparity decomposition
US6781618B2 (en) Hand-held 3D vision system
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
Liu et al. Multiview geometry for texture mapping 2d images onto 3d range data
CN109544628B (en) Accurate reading identification system and method for pointer instrument
Willi et al. Robust geometric self-calibration of generic multi-projector camera systems
US8179448B2 (en) Auto depth field capturing system and method thereof
CN105654547B (en) Three-dimensional rebuilding method
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
Mahdy et al. Projector calibration using passive stereo and triangulation
Pathak et al. Dense 3D reconstruction from two spherical images via optical flow-based equirectangular epipolar rectification
JP2023546739A (en) Methods, apparatus, and systems for generating three-dimensional models of scenes
Wenzel et al. High-resolution surface reconstruction from imagery for close range cultural Heritage applications
Yamaguchi et al. Superimposing thermal-infrared data on 3D structure reconstructed by RGB visual odometry
Harvent et al. Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
CN111914790B (en) Real-time human body rotation angle identification method based on double cameras under different scenes
CN115326835B (en) Cylinder inner surface detection method, visualization method and detection system
CN107274449B (en) Space positioning system and method for object by optical photo
Stamos Automated registration of 3D-range with 2D-color images: an overview
WO2018056802A1 (en) A method for estimating three-dimensional depth value from two-dimensional images
Cho et al. Content authoring using single image in urban environments for augmented reality
Colombo et al. Shape reconstruction and texture sampling by active rectification and virtual view synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant