CN109087325A - A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method - Google Patents
- Publication number
- CN109087325A (application CN201810800534.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- point cloud
- dimensional
- scale
- foreground
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/136 — Image analysis; Segmentation; Edge detection involving thresholding
- G06T7/194 — Image analysis; Segmentation; Edge detection involving foreground-background segmentation
- G06T7/60 — Image analysis; Analysis of geometric attributes
- G06T2207/10028 — Indexing scheme for image analysis; Image acquisition modality; Range image; Depth image; 3D point clouds
Abstract
The invention discloses a direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision, comprising the following steps: S1, evenly divide the original image frame into multiple 9×9 sub-images and solve the foreground proportion of each sub-image using OTSU; S2, judge from a threshold whether a sub-image needs its image quality improved with Retinex; S3, reconstruct the target three-dimensional space using a visual odometry based on the direct method and the sparse method; S4, solve the normal vectors of the point cloud data; S5, calculate the minimum inner product sum to obtain the vector of the target three-dimensional space parallel to the z-axis of the world coordinate system; S6, correct the point cloud pose using a three-dimensional transformation matrix; S7, use the ratio of the point cloud relative scale to the actual object scale to obtain the actual size of the three-dimensional reconstructed image. The calculation process of the invention is simple, the calculated error is small, and an accurate object size can be obtained.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision.
Background
DSO is a visual odometry (VO) in SLAM based on the direct method and the sparse method. DSO is not a complete SLAM system: it has no loop closure detection, map reuse or similar functions, which saves a large amount of computation. To achieve real-time three-dimensional reconstruction, DSO preprocesses images with photometric calibration, but photometric calibration requires a large number of calibration samples, so DSO lacks universal applicability.
Monocular vision uses a single camera as the sensor to obtain a two-dimensional projection of the three-dimensional world, i.e. image information of the environment, and then reconstructs the three-dimensional scene through a corresponding algorithm. The reconstructed scene is used for navigation and positioning of an inspection robot. When the three-dimensional world is projected onto a two-dimensional image, one dimension is lost; correspondingly, a scale factor is lost in the reconstructed image. All object sizes in the reconstruction are therefore relative, and the size relationships hold no matter how many times the image is enlarged or reduced. The specific size of an object, however, is unknown, which affects the obstacle avoidance and path finding of the inspection robot to different degrees. Existing methods for recovering the real scale are based on feature-point three-dimensional reconstruction, not on scale determination for direct method point cloud three-dimensional reconstruction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision. The method has the advantages of simple calculation process, small error of calculation result and capability of obtaining accurate object size.
The purpose of the invention is realized by the following technical scheme: a direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision comprises the following steps:
S1, dividing the original image frame evenly into a plurality of 9×9 sub-images, and solving the foreground proportion of each sub-image by using OTSU;
S2, judging according to a threshold whether a sub-image needs its image quality improved by Retinex;
S3, reconstructing the target three-dimensional space by using a visual odometry (DSO) based on the direct method and the sparse method;
S4, solving the normal vectors of the point cloud data;
S5, calculating the minimum inner product sum to obtain the vector of the target three-dimensional space parallel to the z-axis of the world coordinate system;
S6, correcting the point cloud pose by using the three-dimensional transformation matrix according to the vector obtained in step S5;
and S7, obtaining the actual size of the three-dimensional reconstructed image by using the ratio of the point cloud relative scale to the actual object scale.
Further, OTSU is a global binarization algorithm, implemented as follows: calculate the between-class variance function of the image:
g = ω₀·ω₁·(μ₀ − μ₁)²  (1)
wherein ω₀ is the proportion of foreground pixels in the whole image and μ₀ is their average gray level; ω₁ is the proportion of the background in the whole image and μ₁ is its average gray level;
solve the g value for each gray value, find the gray value corresponding to the maximum among all g values, and take it as the threshold dividing the image into a foreground part and a background part: gray values larger than the threshold belong to the foreground image, and gray values smaller than or equal to the threshold belong to the background image;
the specific implementation of step S1 is as follows: divide the image into n×n sub-images, then apply the OTSU algorithm to each sub-image, classifying each pixel as foreground or background according to the threshold, the numbers of foreground and background pixels being m and n respectively; then calculate the proportion BR of the foreground part to the total pixels of the sub-image, namely:
BR = m/(m + n)  (2).
Further, the specific implementation method of step S2 is as follows: set a threshold T; when T > BR, the area is considered well illuminated; otherwise, the illumination of the area is considered uneven and further optimization is needed. The optimization is as follows:
let I(x, y) be the image information captured by the camera, L(x, y) the illumination component of the light source, and R(x, y) the reflection component of the true color of the object; then
I(x, y) = L(x, y)·R(x, y)  (3)
Taking logarithms of formula (3) and expanding by single-scale Retinex gives:
log R(x, y) = log I(x, y) − log[F(x, y) * I(x, y)]  (4)
wherein * denotes the convolution operation and F(x, y) is the surround function, calculated (as a Gaussian surround) as:
F(x, y) = K·e^(−(x² + y²)/c²)  (5)
wherein c is the surround scale and K is a normalization constant; the surround function satisfies:
∫∫F(x, y)dxdy = 1  (6)
Taking the inverse logarithm of log R(x, y) gives the improved image.
Further, the specific implementation method of step S4 is as follows: convert the problem of solving the normal vectors of the point cloud data into a least-squares plane-fitting estimation problem;
set the number of point cloud data as n, with P_i(x_i, y_i, z_i) representing the i-th point cloud datum, i = 1, 2, …, n;
let the plane equation be:
a·x + b·y + c·z + d = 0  (7)
wherein a, b, c and d are undetermined parameters, and a, b and c cannot all be 0 at the same time; the distance from point cloud datum P_i to the plane is set as d_i, then:
d_i = |a·x_i + b·y_i + c·z_i + d|/√(a² + b² + c²)  (8)
Since the parameters are determined only up to scale, the normal vector may be normalized so that a² + b² + c² = 1; take L = Σ_{i=1}^{n}(a·x_i + b·y_i + c·z_i + d)² as the objective function and solve for the minimum value of L;
the requirement for L to take its minimum value is that all partial derivatives vanish; for a:
∂L/∂a = 2·Σ_{i=1}^{n} x_i·(a·x_i + b·y_i + c·z_i + d) = 0  (9)
wherein, writing A1 = Σx_i², B1 = Σx_i·y_i, C1 = Σx_i·z_i, D1 = Σx_i, this condition becomes equation (19) below;
the conditions for b and c are obtained by the same method, wherein A2 = Σx_i·y_i, B2 = Σy_i², C2 = Σy_i·z_i, D2 = Σy_i, and A3 = Σx_i·z_i, B3 = Σy_i·z_i, C3 = Σz_i², D3 = Σz_i;
finally, from ∂L/∂d = 0, wherein D4 = n;
there is then a system of equations:
A1*a+B1*b+C1*c+D1*d=0 (19)
A2*a+B2*b+C2*c+D2*d=0 (20)
A3*a+B3*b+C3*c+D3*d=0 (21)
D1*a+D2*b+D3*c+D4*d=0 (22)
and solving the equation set to obtain a plane equation, and further obtaining a normal vector of the plane equation.
Further, the specific implementation method of step S5 is as follows: after the normal vectors of all point clouds are obtained, the objective function is set as the sum of inner products:
J(v) = Σ_{i=1}^{n} |n_i · v|  (23)
wherein n_i is the normal vector of the i-th point cloud and v represents the vector parallel to the z-axis of the world coordinate system in the three-dimensional reconstructed image; traverse candidate unit vectors and solve for the v that makes the objective function take its minimum value, i.e.:
v* = argmin_v Σ_{i=1}^{n} |n_i · v|  (24)
the invention has the beneficial effects that: the method improves the image based on Retinex, improves the general applicability of the whole method and aims at the problem of uncertainty of monocular vision scale; and then calculating a normal vector of the point cloud, establishing a target function, obtaining a corrected point cloud three-dimensional reconstruction image through three-dimensional transformation, and finally obtaining the actual size of the three-dimensional reconstruction image by utilizing the ratio of the relative scale of the point cloud to the actual scale of the object. The method has the advantages of simple calculation process, small error of calculation result and capability of obtaining accurate object size.
Drawings
FIG. 1 is a flow chart of the direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision of the present invention;
FIG. 2 is an original image captured by a camera according to an embodiment of the present invention;
FIG. 3 is the image of this embodiment after its quality is improved by Retinex;
FIG. 4 is a point cloud three-dimensional reconstruction image obtained based on DSO according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of correction of the pose of the point cloud of the invention;
fig. 6 is a reconstructed image obtained by the algorithm of the present invention according to this embodiment.
Detailed Description
The invention provides a direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision, with two core aims: first, since photometric calibration in DSO lacks universal applicability, an image enhancement method based on Retinex is provided; second, for the scale-uncertainty problem of monocular vision, the ratio of the real-world actual scale to the relative scale in three-dimensional coordinates is used to obtain the actual scale value of the point cloud data. The technical scheme of the invention is further explained with reference to the drawings. As shown in fig. 1, the direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision specifically comprises the following steps:
S1, dividing the original image frame evenly into a plurality of 9×9 sub-images, and solving the foreground proportion of each sub-image by using OTSU;
OTSU is a global binarization algorithm, implemented as follows: calculate the between-class variance function of the image:
g = ω₀·ω₁·(μ₀ − μ₁)²  (25)
wherein ω₀ is the proportion of foreground pixels in the whole image and μ₀ is their average gray level; ω₁ is the proportion of the background in the whole image and μ₁ is its average gray level;
solve the g value for each gray value, find the gray value corresponding to the maximum among all g values, and take it as the threshold dividing the image into a foreground part and a background part: gray values larger than the threshold belong to the foreground image (equal to 1 in the binary image), and gray values smaller than or equal to the threshold belong to the background image (equal to 0 in the binary image);
the specific implementation of step S1 is as follows: divide the image into n×n sub-images, then apply the OTSU algorithm to each sub-image, classifying each pixel as foreground or background according to the threshold, the numbers of foreground and background pixels being m and n respectively; then calculate the proportion BR of the foreground part to the total pixels of the sub-image, namely:
BR = m/(m + n)  (26).
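As an illustration (not part of the patent text), the sub-image thresholding and foreground proportion of formulas (25)-(26) can be sketched in Python with NumPy; the function names and the 0-255 grayscale assumption are illustrative:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing the between-class variance g = w0*w1*(mu0 - mu1)^2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_g = 0, -1.0
    for t in range(1, 256):                    # candidate thresholds
        w0, w1 = p[:t].sum(), p[t:].sum()      # class proportions
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0          # foreground-class mean
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1     # background-class mean
        g = w0 * w1 * (mu0 - mu1) ** 2
        if g > best_g:
            best_t, best_g = t, g
    return best_t

def foreground_ratio(sub):
    """BR = m / (m + n): proportion of pixels above the OTSU threshold in a sub-image."""
    t = otsu_threshold(sub)
    m = int((sub >= t).sum())     # foreground pixel count
    return m / sub.size
```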
S2, judging according to the threshold whether a sub-image needs its image quality improved by Retinex. The specific implementation is as follows: set a threshold T; when T > BR, the area is considered well illuminated; otherwise, the illumination of the area is considered uneven and further optimization is needed. The optimization is as follows:
let I(x, y) be the image information captured by the camera, L(x, y) the illumination component of the light source, and R(x, y) the reflection component of the true color of the object; then
I(x, y) = L(x, y)·R(x, y)  (27)
Taking logarithms of formula (27) and expanding by single-scale Retinex (SSR) gives:
log R(x, y) = log I(x, y) − log[F(x, y) * I(x, y)]  (28)
wherein * denotes the convolution operation and F(x, y) is the surround function, calculated (as a Gaussian surround) as:
F(x, y) = K·e^(−(x² + y²)/c²)  (29)
wherein c is the surround scale and K is a normalization constant; the surround function satisfies:
∫∫F(x, y)dxdy = 1  (30)
Taking the inverse logarithm of log R(x, y) gives the improved image. The original image used in this embodiment is shown in fig. 2, and the improved image in fig. 3.
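A minimal sketch of the single-scale Retinex step, assuming a single-channel float image; the FFT-based circular convolution and the Gaussian surround follow the standard SSR formulation, and all names are illustrative:

```python
import numpy as np

def single_scale_retinex(img, c=80.0):
    """SSR: log R = log I - log(F * I), with F a Gaussian surround of scale c."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    f = np.exp(-(((x - w // 2) ** 2 + (y - h // 2) ** 2) / (c ** 2)))
    f /= f.sum()  # normalization constant K chosen so the surround sums to 1
    # circular convolution F * I via the FFT (adequate for a sketch)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(f))))
    log_r = np.log(img + 1.0) - np.log(np.abs(blurred) + 1.0)
    return np.exp(log_r)  # inverse logarithm yields the improved image
```

On a perfectly uniform image the surround average equals the image itself, so the output is constant, as expected for a pure-illumination input.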
S3, reconstructing the target three-dimensional space by using the visual odometry (DSO) based on the direct method and the sparse method, as shown in FIG. 4;
S4, solving the normal vectors of the point cloud data. The specific implementation is as follows: convert the problem of solving the normal vectors of the point cloud data into a least-squares plane-fitting estimation problem;
set the number of point cloud data as n, with P_i(x_i, y_i, z_i) representing the i-th point cloud datum, i = 1, 2, …, n;
let the plane equation be:
a·x + b·y + c·z + d = 0  (31)
wherein a, b, c and d are undetermined parameters, and a, b and c cannot all be 0 at the same time; the distance from point cloud datum P_i to the plane is set as d_i, then:
d_i = |a·x_i + b·y_i + c·z_i + d|/√(a² + b² + c²)  (32)
Since the parameters are determined only up to scale, the normal vector may be normalized so that a² + b² + c² = 1; take L = Σ_{i=1}^{n}(a·x_i + b·y_i + c·z_i + d)² as the objective function and solve for the minimum value of L;
the requirement for L to take its minimum value is that all partial derivatives vanish; for a:
∂L/∂a = 2·Σ_{i=1}^{n} x_i·(a·x_i + b·y_i + c·z_i + d) = 0  (33)
wherein, writing A1 = Σx_i², B1 = Σx_i·y_i, C1 = Σx_i·z_i, D1 = Σx_i, this condition becomes equation (43) below;
the conditions for b and c are obtained by the same method, wherein A2 = Σx_i·y_i, B2 = Σy_i², C2 = Σy_i·z_i, D2 = Σy_i, and A3 = Σx_i·z_i, B3 = Σy_i·z_i, C3 = Σz_i², D3 = Σz_i;
finally, from ∂L/∂d = 0, wherein D4 = n;
there is then a system of equations:
A1*a+B1*b+C1*c+D1*d=0 (43)
A2*a+B2*b+C2*c+D2*d=0 (44)
A3*a+B3*b+C3*c+D3*d=0 (45)
D1*a+D2*b+D3*c+D4*d=0 (46)
and solving the equation set to obtain a plane equation, and further obtaining a normal vector of the plane equation.
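In practice the homogeneous system (43)-(46) is solved only up to scale. A compact equivalent (a sketch under that assumption, not the patent's exact procedure) fits the plane by SVD of the centered points, which minimizes the same sum of squared residuals under the constraint a² + b² + c² = 1:

```python
import numpy as np

def plane_normal(points):
    """Least-squares fit of a*x + b*y + c*z + d = 0 to an (n, 3) point array.
    The unit normal (a, b, c) is the right singular vector associated with the
    smallest singular value of the centered points; d follows from the centroid."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]             # unit normal of the fitted plane
    d = -n @ centroid      # offset so the plane passes through the centroid
    return n, d
```

For points lying exactly on a plane the smallest singular value is zero and the fit is exact.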
S5, calculating the minimum inner product sum to obtain the vector of the target three-dimensional space parallel to the z-axis of the world coordinate system. The specific implementation is as follows: after the normal vectors of all point clouds are obtained, the objective function is set as the sum of inner products:
J(v) = Σ_{i=1}^{n} |n_i · v|  (47)
wherein n_i is the normal vector of the i-th point cloud and v represents the vector parallel to the z-axis of the world coordinate system in the three-dimensional reconstructed image; traverse candidate unit vectors and solve for the v that makes the objective function take its minimum value, i.e.:
v* = argmin_v Σ_{i=1}^{n} |n_i · v|  (48)
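The traversal in step S5 can be sketched by sampling candidate unit vectors and keeping the one that minimizes the inner-product sum Σ|nᵢ·v|; the sampling scheme, density, and names are illustrative assumptions:

```python
import numpy as np

def find_vertical(normals, n_samples=2000, seed=0):
    """Traverse sampled unit vectors v and return the one minimizing sum_i |n_i . v|.
    Surface normals of vertical structures are horizontal, so the minimizer is the
    direction closest to the scene's vertical axis."""
    rng = np.random.default_rng(seed)
    cands = rng.normal(size=(n_samples, 3))
    cands /= np.linalg.norm(cands, axis=1, keepdims=True)   # candidates on the unit sphere
    costs = np.abs(cands @ np.asarray(normals, dtype=float).T).sum(axis=1)
    return cands[np.argmin(costs)]
```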
S6, correcting the point cloud pose with the three-dimensional transformation matrix according to the vector obtained in step S5. As shown in FIG. 5, the reconstructed target is the rectangular area in the image; step S5 yields the vector of this area parallel to the z-axis of the world coordinate system, and the three-dimensional transformation matrix is then used to translate and rotate the target object so that its pose is corrected. Pose correction with a three-dimensional transformation matrix is a common technical means in the field and is not described again here;
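One common way to build the rotational part of the transformation matrix of S6 is to align the vector from S5 with the world z-axis via Rodrigues' rotation formula; this specific construction is an assumption, since the patent leaves it to common practice:

```python
import numpy as np

def rotation_to_z(v):
    """Rotation matrix taking unit vector v onto the world z-axis (Rodrigues' formula)."""
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, z)                  # rotation axis (unnormalized)
    s, c = np.linalg.norm(axis), v @ z     # sin and cos of the rotation angle
    if s < 1e-12:                          # v already parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])       # cross-product matrix of the unit axis
    return np.eye(3) + s * K + (1 - c) * (K @ K)
```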
and S7, obtaining the actual size of the three-dimensional reconstruction image by using the ratio of the relative scale of the point cloud to the actual scale of the object.
Fig. 6 is the reconstructed image obtained by the algorithm of the invention for this embodiment. From fig. 6, a top relative-scale value of 0.978 and a bottom relative-scale value of 0.147 can be read, and the measured height is 9.6 m, i.e. one unit of relative scale corresponds to 11.552 m. The relative-scale difference of the first floor is 0.271, giving an actual first-floor height of about 3.131 m; the true floor height is about 3.2 m, so the error is about 0.069 m, which is very small.
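The scale conversion of step S7 in this embodiment is simple arithmetic; reproducing the numbers read from fig. 6:

```python
top, bottom = 0.978, 0.147                 # relative-scale values read from the reconstruction
measured_height = 9.6                      # metres, measured on the real building
unit = measured_height / (top - bottom)    # metres per unit of relative scale
first_floor = 0.271 * unit                 # first-floor relative difference, in metres
print(round(unit, 3), round(first_floor, 3))  # 11.552 3.131
```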
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from its spirit, and these changes and combinations remain within the scope of the invention.
Claims (5)
1. A direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision is characterized by comprising the following steps:
S1, dividing the original image frame evenly into a plurality of 9×9 sub-images, and solving the foreground proportion of each sub-image by using OTSU;
S2, judging according to a threshold whether a sub-image needs its image quality improved by Retinex;
S3, reconstructing the target three-dimensional space by using a visual odometry based on the direct method and the sparse method;
S4, solving the normal vectors of the point cloud data;
S5, calculating the minimum inner product sum to obtain the vector of the target three-dimensional space parallel to the z-axis of the world coordinate system;
S6, correcting the point cloud pose by using the three-dimensional transformation matrix according to the vector obtained in step S5;
and S7, obtaining the actual size of the three-dimensional reconstructed image by using the ratio of the point cloud relative scale to the actual object scale.
2. The direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision according to claim 1, wherein the OTSU is a global binarization algorithm, implemented as follows: calculate the between-class variance function of the image:
g = ω₀·ω₁·(μ₀ − μ₁)²  (1)
wherein ω₀ is the proportion of foreground pixels in the whole image and μ₀ is their average gray level; ω₁ is the proportion of the background in the whole image and μ₁ is its average gray level;
solve the g value for each gray value, find the gray value corresponding to the maximum among all g values, and take it as the threshold dividing the image into a foreground part and a background part: gray values larger than the threshold belong to the foreground image, and gray values smaller than or equal to the threshold belong to the background image;
the specific implementation of step S1 is as follows: divide the image into n×n sub-images, then apply the OTSU algorithm to each sub-image, classifying each pixel as foreground or background according to the threshold, the numbers of foreground and background pixels being m and n respectively; then calculate the proportion BR of the foreground part to the total pixels of the sub-image, namely:
BR = m/(m + n)  (2).
3. The direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision according to claim 1, wherein step S2 is implemented as follows: set a threshold T; when T > BR, the area is considered well illuminated; otherwise, the illumination of the area is considered uneven and further optimization is needed. The optimization is as follows:
let I(x, y) be the image information captured by the camera, L(x, y) the illumination component of the light source, and R(x, y) the reflection component of the true color of the object; then
I(x, y) = L(x, y)·R(x, y)  (3)
Taking logarithms of formula (3) and expanding by single-scale Retinex gives:
log R(x, y) = log I(x, y) − log[F(x, y) * I(x, y)]  (4)
wherein * denotes the convolution operation and F(x, y) is the surround function, calculated (as a Gaussian surround) as:
F(x, y) = K·e^(−(x² + y²)/c²)  (5)
wherein c is the surround scale and K is a normalization constant; the surround function satisfies:
∫∫F(x, y)dxdy = 1  (6)
Taking the inverse logarithm of log R(x, y) gives the improved image.
4. The direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision according to claim 1, wherein step S4 is implemented as follows: convert the problem of solving the normal vectors of the point cloud data into a least-squares plane-fitting estimation problem;
set the number of point cloud data as n, with P_i(x_i, y_i, z_i) representing the i-th point cloud datum, i = 1, 2, …, n;
let the plane equation be:
a·x + b·y + c·z + d = 0  (7)
wherein a, b, c and d are undetermined parameters, and a, b and c cannot all be 0 at the same time; the distance from point cloud datum P_i to the plane is set as d_i, then:
d_i = |a·x_i + b·y_i + c·z_i + d|/√(a² + b² + c²)  (8)
Since the parameters are determined only up to scale, the normal vector may be normalized so that a² + b² + c² = 1; take L = Σ_{i=1}^{n}(a·x_i + b·y_i + c·z_i + d)² as the objective function and solve for the minimum value of L;
the requirement for L to take its minimum value is that all partial derivatives vanish; for a:
∂L/∂a = 2·Σ_{i=1}^{n} x_i·(a·x_i + b·y_i + c·z_i + d) = 0  (9)
wherein, writing A1 = Σx_i², B1 = Σx_i·y_i, C1 = Σx_i·z_i, D1 = Σx_i, this condition becomes equation (19) below;
the conditions for b and c are obtained by the same method, wherein A2 = Σx_i·y_i, B2 = Σy_i², C2 = Σy_i·z_i, D2 = Σy_i, and A3 = Σx_i·z_i, B3 = Σy_i·z_i, C3 = Σz_i², D3 = Σz_i;
finally, from ∂L/∂d = 0, wherein D4 = n;
there is then a system of equations:
A1*a+B1*b+C1*c+D1*d=0 (19)
A2*a+B2*b+C2*c+D2*d=0 (20)
A3*a+B3*b+C3*c+D3*d=0 (21)
D1*a+D2*b+D3*c+D4*d=0 (22)
and solving the equation set to obtain a plane equation, and further obtaining a normal vector of the plane equation.
5. The direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision according to claim 1, wherein step S5 is implemented as follows: after the normal vectors of all point clouds are obtained, the objective function is set as the sum of inner products:
J(v) = Σ_{i=1}^{n} |n_i · v|  (23)
wherein n_i is the normal vector of the i-th point cloud and v represents the vector parallel to the z-axis of the world coordinate system in the three-dimensional reconstructed image; traverse candidate unit vectors and solve for the v that makes the objective function take its minimum value, i.e.:
v* = argmin_v Σ_{i=1}^{n} |n_i · v|  (24)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810800534.0A CN109087325B (en) | 2018-07-20 | 2018-07-20 | Direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109087325A true CN109087325A (en) | 2018-12-25 |
CN109087325B CN109087325B (en) | 2022-03-04 |
Family
ID=64838218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810800534.0A Active CN109087325B (en) | 2018-07-20 | 2018-07-20 | Direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109087325B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919911A (en) * | 2019-01-26 | 2019-06-21 | 中国海洋大学 | Moving three dimension method for reconstructing based on multi-angle of view photometric stereo |
CN110197615A (en) * | 2018-02-26 | 2019-09-03 | 北京京东尚科信息技术有限公司 | For generating the method and device of map |
CN110223336A (en) * | 2019-05-27 | 2019-09-10 | 上海交通大学 | A kind of planar fit method based on TOF camera data |
CN110246212A (en) * | 2019-05-05 | 2019-09-17 | 上海工程技术大学 | A kind of target three-dimensional rebuilding method based on self-supervisory study |
CN111382613A (en) * | 2018-12-28 | 2020-07-07 | 中国移动通信集团辽宁有限公司 | Image processing method, apparatus, device and medium |
CN113340313A (en) * | 2020-02-18 | 2021-09-03 | 北京四维图新科技股份有限公司 | Navigation map parameter determination method and device |
CN114731373A (en) * | 2019-06-21 | 2022-07-08 | 兹威达公司 | Method for determining one or more sets of exposure settings for a three-dimensional image acquisition process |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104596502A (en) * | 2015-01-23 | 2015-05-06 | 浙江大学 | Object posture measuring method based on CAD model and monocular vision |
CN105205858A (en) * | 2015-09-18 | 2015-12-30 | 天津理工大学 | Indoor scene three-dimensional reconstruction method based on single depth vision sensor |
CN106846417A (en) * | 2017-02-06 | 2017-06-13 | 东华大学 | The monocular infrared video three-dimensional rebuilding method of view-based access control model odometer |
CN107121967A (en) * | 2017-05-25 | 2017-09-01 | 西安知象光电科技有限公司 | A kind of laser is in machine centering and inter process measurement apparatus |
CN107564061A (en) * | 2017-08-11 | 2018-01-09 | 浙江大学 | A kind of binocular vision speedometer based on image gradient combined optimization calculates method |
CN108062776A (en) * | 2018-01-03 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | Camera Attitude Tracking method and apparatus |
EP3333538A1 (en) * | 2016-12-07 | 2018-06-13 | Hexagon Technology Center GmbH | Scanner vis |
- 2018-07-20: application CN201810800534.0A filed; patent CN109087325B granted (status: active)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104596502A (en) * | 2015-01-23 | 2015-05-06 | 浙江大学 | Object posture measuring method based on CAD model and monocular vision |
CN105205858A (en) * | 2015-09-18 | 2015-12-30 | 天津理工大学 | Indoor scene three-dimensional reconstruction method based on single depth vision sensor |
EP3333538A1 (en) * | 2016-12-07 | 2018-06-13 | Hexagon Technology Center GmbH | Scanner vis |
CN106846417A (en) * | 2017-02-06 | 2017-06-13 | 东华大学 | The monocular infrared video three-dimensional rebuilding method of view-based access control model odometer |
CN107121967A (en) * | 2017-05-25 | 2017-09-01 | 西安知象光电科技有限公司 | A kind of laser is in machine centering and inter process measurement apparatus |
CN107564061A (en) * | 2017-08-11 | 2018-01-09 | 浙江大学 | A kind of binocular vision speedometer based on image gradient combined optimization calculates method |
CN108062776A (en) * | 2018-01-03 | 2018-05-22 | 百度在线网络技术(北京)有限公司 | Camera Attitude Tracking method and apparatus |
Non-Patent Citations (1)
Title |
---|
曾凡锋 (Zeng Fanfeng) et al.: "Retinex在光照不均文本图像中的研究" [Research on Retinex for unevenly illuminated text images], 《计算机工程与设计》 (Computer Engineering and Design) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110197615A (en) * | 2018-02-26 | 2019-09-03 | 北京京东尚科信息技术有限公司 | For generating the method and device of map |
CN110197615B (en) * | 2018-02-26 | 2022-03-04 | 北京京东尚科信息技术有限公司 | Method and device for generating map |
CN111382613A (en) * | 2018-12-28 | 2020-07-07 | 中国移动通信集团辽宁有限公司 | Image processing method, apparatus, device and medium |
CN111382613B (en) * | 2018-12-28 | 2024-05-07 | 中国移动通信集团辽宁有限公司 | Image processing method, device, equipment and medium |
CN109919911B (en) * | 2019-01-26 | 2023-04-07 | 中国海洋大学 | Mobile three-dimensional reconstruction method based on multi-view photometric stereo |
CN109919911A (en) * | 2019-01-26 | 2019-06-21 | 中国海洋大学 | Moving three dimension method for reconstructing based on multi-angle of view photometric stereo |
CN110246212A (en) * | 2019-05-05 | 2019-09-17 | 上海工程技术大学 | A kind of target three-dimensional rebuilding method based on self-supervisory study |
CN110246212B (en) * | 2019-05-05 | 2023-02-07 | 上海工程技术大学 | Target three-dimensional reconstruction method based on self-supervision learning |
CN110223336B (en) * | 2019-05-27 | 2023-10-17 | 上海交通大学 | Plane fitting method based on TOF camera data |
CN110223336A (en) * | 2019-05-27 | 2019-09-10 | 上海交通大学 | A kind of planar fit method based on TOF camera data |
CN114731373A (en) * | 2019-06-21 | 2022-07-08 | 兹威达公司 | Method for determining one or more sets of exposure settings for a three-dimensional image acquisition process |
CN113340313A (en) * | 2020-02-18 | 2021-09-03 | 北京四维图新科技股份有限公司 | Navigation map parameter determination method and device |
CN113340313B (en) * | 2020-02-18 | 2024-04-16 | 北京四维图新科技股份有限公司 | Navigation map parameter determining method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109087325B (en) | 2022-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109087325B (en) | Direct method point cloud three-dimensional reconstruction and scale determination method based on monocular vision | |
CN109410256B (en) | Automatic high-precision point cloud and image registration method based on mutual information | |
US11941831B2 (en) | Depth estimation | |
CN107063228B (en) | Target attitude calculation method based on binocular vision | |
US11210801B1 (en) | Adaptive multi-sensor data fusion method and system based on mutual information | |
US9786062B2 (en) | Scene reconstruction from high spatio-angular resolution light fields | |
CN112132958A (en) | Underwater environment three-dimensional reconstruction method based on binocular vision | |
KR20140027468A (en) | Depth measurement quality enhancement | |
CN110599489A (en) | Target space positioning method | |
CN109300151B (en) | Image processing method and device and electronic equipment | |
CN111602177B (en) | Method and apparatus for generating a 3D reconstruction of an object | |
CN112200848B (en) | Depth camera vision enhancement method and system under low-illumination weak-contrast complex environment | |
CN109766896B (en) | Similarity measurement method, device, equipment and storage medium | |
CN118154687B (en) | Target positioning and obstacle avoidance method and system for meal delivery robot based on monocular vision | |
CN117237789A (en) | Method for generating texture information point cloud map based on panoramic camera and laser radar fusion | |
CN113888420A (en) | Underwater image restoration method and device based on correction model and storage medium | |
CN117649589A (en) | LNG unloading arm target identification method based on improved YOLO-V5s model | |
Hamzah et al. | Stereo matching algorithm based on illumination control to improve the accuracy | |
CN110514140B (en) | Three-dimensional imaging method, device, equipment and storage medium | |
CN114882095B (en) | Object height online measurement method based on contour matching | |
CN112766338B (en) | Method, system and computer readable storage medium for calculating distance image | |
CN108416815A (en) | Assay method, equipment and the computer readable storage medium of air light value | |
CN109214398B (en) | Method and system for measuring rod position from continuous images | |
CN111462321A (en) | Point cloud map processing method, processing device, electronic device and vehicle | |
Usami et al. | 3d shape recovery of polyp using two light sources endoscope |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP03 | Change of name, title or address | Address after: No. 109, 1st Floor, Building 2, No. 11 Tianying Road, Chengdu High tech Zone, Chengdu, Sichuan Province, 611700; Patentee after: Chengdu Shidao Information Technology Co.,Ltd. — Address before: 611731, Floor 2, No. 4, Xinhang Road, West Park, High tech Zone (West Zone), Chengdu, Sichuan; Patentee before: CHENGDU ZHIMA TECHNOLOGY CO.,LTD. |