CN112164105A - Method for combining binocular vision with uncalibrated luminosity vision - Google Patents

Method for combining binocular vision with uncalibrated luminosity vision

Info

Publication number
CN112164105A
CN112164105A (Application CN202010877425.6A)
Authority
CN
China
Prior art keywords
vision
depth
normal vector
representing
photometric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010877425.6A
Other languages
Chinese (zh)
Other versions
CN112164105B (en)
Inventor
周波
杨博雄
李社蕾
刘小飞
李明杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202010877425.6A priority Critical patent/CN112164105B/en
Publication of CN112164105A publication Critical patent/CN112164105A/en
Application granted granted Critical
Publication of CN112164105B publication Critical patent/CN112164105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/593: Depth or shape recovery from multiple images from stereo images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/586: Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for combining binocular vision with uncalibrated photometric vision. The invention aims to solve the problem that, when binocular vision and photometric stereo vision are combined, it is difficult to obtain a surface depth with richer detail without calibrating the photometric vision. The invention uses two cameras and a plurality of non-collinear light sources: the light sources project illumination onto an object in a time-sharing manner while the cameras collect image sequences; the light sources together with the two cameras form two uncalibrated photometric vision systems, and the two cameras form a binocular vision system from which the object surface depth is obtained. Starting from the depth obtained by binocular vision, the light source directions are estimated, the surface normal directions are then obtained from the light source directions, and the optimal surface depth is obtained by jointly optimizing the depth and the normals. The invention belongs to the field of industrial inspection.

Description

Method for combining binocular vision with uncalibrated luminosity vision
Technical Field
The invention relates to a method for combining binocular vision with uncalibrated photometric vision, and belongs to the field of industrial inspection.
Background
Binocular vision can obtain object depth information but lacks surface detail; photometric stereo vision can obtain surface detail by integrating normal vectors, but depth drift occurs during the integration. Combining binocular vision with photometric stereo vision can therefore yield a surface depth with richer detail, yet photometric vision normally requires the light sources to be calibrated. How to combine binocular vision with uncalibrated photometric vision is thus an urgent problem to be solved.
Disclosure of Invention
The invention provides a method for combining binocular vision with uncalibrated photometric vision, aiming to solve the problem that a surface depth with richer detail is difficult to obtain, without calibrating the photometric vision, when binocular vision is combined with photometric stereo vision.
The technical solution adopted by the invention to solve this problem comprises the following specific steps:
step one, two cameras form a binocular stereo vision system, and the depth information of the object surface is acquired from the image sequences collected by the two cameras;
step two, the gradient of the object surface is calculated from the depth information, and from it the normal vector of the object surface;
step three, the normal vector of the object surface is substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions;
step four, the illumination directions are substituted as known quantities into the two photometric vision systems, which are jointly optimized to recalculate the normal vector of the object surface;
step five, the normal vector calculated in step four and the surface depth calculated in step one are combined to construct an optimization function, and the surface depth is optimized.
Further, in the first step, the object surface depth information is acquired as follows:
step one, the position of the object is kept unchanged, the six light sources project illumination onto the object in a time-sharing manner, the left and right cameras collect image sequences, and each pixel position in a camera image thus has a gray sequence value of length six, i_uv = (i1, i2, i3, i4, i5, i6);
step two, the images are stereo-rectified, stereo matching is performed along the epipolar lines by searching for the maximum similarity between the left and right cameras according to the pixel gray sequence values, and the object surface depth S(u, v) is acquired,
S(u, v) = μ_uv · Z_uv    (1)
in formula (1), f_x denotes the focal length in the x direction, f_y the focal length in the y direction, [u_0 v_0] the camera principal point, u the pixel column coordinate, u_0 the principal-point column coordinate, v the pixel row coordinate, v_0 the principal-point row coordinate, Z_uv the vertical distance from the camera origin to the object surface, and μ_uv a per-pixel coefficient determined by the camera intrinsics;
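As a concrete illustration of steps one and two, the sketch below matches rectified left and right images by comparing the six-sample gray sequences along each image row with normalized cross-correlation and keeps the best-scoring disparity per pixel. It is only a minimal sketch of the idea described above: the function names, the NCC similarity measure, and the winner-take-all search are illustrative assumptions, not part of the patent text.

```python
import numpy as np

def match_gray_sequences(left_seq, right_seq, max_disp=64):
    """Winner-take-all stereo matching on rectified images, using each
    pixel's 6-length gray sequence as the matching vector.

    left_seq, right_seq: (H, W, 6) stacks of the six time-shared exposures.
    Returns an integer disparity map of shape (H, W).
    """
    H, W, _ = left_seq.shape

    def normalize(s):
        # Zero-mean, unit-norm sequences so their dot product is an NCC score.
        s = s.astype(np.float64)
        s = s - s.mean(axis=2, keepdims=True)
        return s / (np.linalg.norm(s, axis=2, keepdims=True) + 1e-8)

    L, R = normalize(left_seq), normalize(right_seq)
    disparity = np.zeros((H, W), dtype=np.int32)
    best = np.full((H, W), -np.inf)
    # After rectification, corresponding points lie on the same row
    # (the epipolar line), so the search reduces to a horizontal shift.
    for d in range(max_disp):
        shifted = np.roll(R, d, axis=1)      # right-image pixel at column u - d
        score = (L * shifted).sum(axis=2)    # NCC similarity per pixel
        score[:, :d] = -np.inf               # wrapped columns are invalid
        better = score > best
        disparity[better] = d
        best[better] = score[better]
    return disparity
```

With a calibrated baseline b and focal length f_x, a depth map would then follow from the usual pinhole relation, for example Z = f_x * b / np.maximum(disparity, 1).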
step three, calculating partial derivatives of the object surface depth S (u, v):
p = ∂S(u, v)/∂u    (2)
q = ∂S(u, v)/∂v    (3)
in the formulas (2) and (3), p represents the partial derivative of the depth in the u direction, and q represents the partial derivative of the depth in the v direction;
step four, calculating a normal vector of the surface of the object;
n = (n_x, n_y, n_z) = (-p, -q, 1) / √(p² + q² + 1)    (4)
in formula (4), n denotes the normal vector, n_x its x component, n_y its y component, and n_z its z component;
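A small sketch of steps three and four above (formulas (2) to (4)): depth gradients by finite differences, followed by unit normals. It assumes unit pixel spacing and the conventional n = (-p, -q, 1)/√(p² + q² + 1) form; names are illustrative.

```python
import numpy as np

def normals_from_depth(S):
    """Per-pixel surface normals from a depth map S(u, v).

    p = dS/du (column direction), q = dS/dv (row direction),
    n = (-p, -q, 1) / sqrt(p^2 + q^2 + 1).
    """
    # np.gradient returns derivatives along axis 0 (rows, v) then axis 1 (columns, u).
    q, p = np.gradient(S.astype(np.float64))
    denom = np.sqrt(p ** 2 + q ** 2 + 1.0)
    # Stack into an (H, W, 3) array of unit normals.
    return np.stack([-p / denom, -q / denom, 1.0 / denom], axis=-1)
```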
The normal vector of the object surface is then substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions. Let P be a point on the object surface and I1, I2 its corresponding image points, I1 being the image coordinate of P projected onto the left camera image and I2 the image coordinate of P projected onto the right camera image; let the normal vector of point P, obtained above, be n, let i_1 denote the gray sequence at I1 and i_2 the gray sequence at I2, and let l_k denote the direction of the k-th light source. The photometric equations of the left and right cameras are then formulas (6) and (7), respectively. Each pair of corresponding points yields 12 equations; with N corresponding points between the left and right images there are 12N equations, while the number of unknowns (three components for each of the six light source directions) is 18, so the light source directions can be solved by least squares:
i_1,k = l_k · n,  k = 1, …, 6    (6)
i_2,k = l_k · n,  k = 1, …, 6    (7)
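Because formulas (6) and (7) are linear in the light directions once the binocular normals are known, they can be solved in a single least-squares step. The sketch below stacks the left and right equations at N corresponding points and solves for a (6, 3) matrix of light directions; it assumes the plain Lambertian model i = l · n with no separate albedo term, and all names are illustrative.

```python
import numpy as np

def estimate_light_directions(I_left, I_right, n_left, n_right):
    """Least-squares light directions from the left/right photometric equations.

    I_left, I_right: (N, 6) gray sequences at N corresponding points.
    n_left, n_right: (N, 3) surface normals of those points (from binocular depth).
    Returns a (6, 3) array whose k-th row is the k-th light direction l_k.
    """
    # For each light k:  [n_left; n_right] @ l_k = [I_left[:, k]; I_right[:, k]]
    N_mat = np.vstack([n_left, n_right])                        # (2N, 3)
    I_all = np.vstack([I_left, I_right]).astype(np.float64)     # (2N, 6)
    L, *_ = np.linalg.lstsq(N_mat, I_all, rcond=None)           # (3, 6)
    return L.T                                                  # (6, 3)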
The illumination directions are then substituted as known quantities into the two photometric vision systems, the normal vector at each depth point is treated as the unknown, and formulas (6) and (7) are combined to optimally calculate the normal vector of the object surface;
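With the light directions fixed, the same equations become linear in the per-pixel normal, so the normals can likewise be recovered by least squares. A minimal sketch, again under the plain Lambertian assumption; any albedo is absorbed into the vector length and removed by the final normalization.

```python
import numpy as np

def estimate_normals(I_seq, L):
    """Per-pixel normals given the (6, 3) light-direction matrix L.

    I_seq: (H, W, 6) gray sequence per pixel. Solves L @ n = i at every pixel.
    """
    H, W, K = I_seq.shape
    I_flat = I_seq.reshape(-1, K).astype(np.float64).T      # (6, H*W)
    n_flat, *_ = np.linalg.lstsq(L, I_flat, rcond=None)     # (3, H*W)
    n = n_flat.T.reshape(H, W, 3)
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-8)
```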
The distance between the desired object surface depth and the depth acquired by the binocular vision system is defined and used as a cost function:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²    (8)
where Z̃_uv denotes the surface depth obtained by binocular stereo vision and Z_uv the desired object surface depth;
taking the inner product of the normal vector of the object surface and the gradient of the object surface as a cost function;
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]    (9)
where n̄_uv denotes the surface normal vector obtained by photometric stereo.
Combining the two cost functions to form a final cost function, and optimizing the depth of the object surface by taking the minimum cost function as a target:
E(Z) = λ·E_d + (1 - λ)·E_n    (10)
in formula (10), Z represents the set of object surface depths, λ is a coefficient greater than 0 and smaller than 1 used to adjust the weight between the depth error and the normal vector error, E_d denotes the depth error, and E_n denotes the normal vector error.
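One possible way to minimize the combined cost of formula (10) is plain gradient descent over the depth map, with E_d = Σ(Z - Z0)² and E_n penalizing the inner product of the photometric normal with the surface tangents (1, 0, ∂Z/∂u) and (0, 1, ∂Z/∂v). The sketch below is only an illustration of that idea under these assumptions; the adjoint of the finite-difference operator is approximated with np.gradient, and the step size and iteration count are arbitrary.

```python
import numpy as np

def refine_depth(Z0, n_bar, lam=0.5, iters=200, step=0.1):
    """Gradient descent on E = lam * E_d + (1 - lam) * E_n.

    Z0:    (H, W) depth from binocular stereo.
    n_bar: (H, W, 3) photometric normals.
    """
    Z = Z0.astype(np.float64).copy()
    nx, ny, nz = n_bar[..., 0], n_bar[..., 1], n_bar[..., 2]
    for _ in range(iters):
        dZv, dZu = np.gradient(Z)          # q = dZ/dv (rows), p = dZ/du (columns)
        ru = nx + nz * dZu                 # residual of n_bar . (1, 0, p)
        rv = ny + nz * dZv                 # residual of n_bar . (0, 1, q)
        # Approximate gradient of E_n w.r.t. Z: negative divergence of the
        # residuals weighted by nz (adjoint of the finite differences).
        div = np.gradient(nz * ru, axis=1) + np.gradient(nz * rv, axis=0)
        grad = 2.0 * lam * (Z - Z0) - 2.0 * (1.0 - lam) * div
        Z -= step * grad
    return Z
```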
Further, the formula for calculating the normal vector of the surface of the object in the step (four) is as follows:
n = (-p, -q, 1) / √(p² + q² + 1)
further, the cost function in the step (five) is:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²
further, the cost function in the step (six) is:
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]
Further, the left and right photometric equations established by the two cameras are used to jointly optimize the light source directions and the normal vectors.
Further, stereo matching is performed by using a gray sequence.
The invention has the beneficial effect that it solves the problem of obtaining a surface depth with richer detail, without calibrating the photometric vision, when binocular vision and photometric stereo vision are combined.
Detailed Description
The first embodiment is as follows: the method for combining binocular vision and uncalibrated photometric vision adopts at least two cameras and at least three non-collinear light sources, wherein the light sources project illumination to an object in a time-sharing manner, and the cameras collect images; the method is realized by the following steps:
step one, two cameras form a binocular stereo vision system, and the depth information of the object surface is acquired from the image sequences collected by the two cameras;
step two, the gradient of the object surface is calculated from the depth information, and from it the normal vector of the object surface;
step three, the normal vector of the object surface is substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions;
step four, the illumination directions are substituted as known quantities into the two photometric vision systems, which are jointly optimized to recalculate the normal vector of the object surface;
step five, the normal vector calculated in step four and the surface depth calculated in step one are combined to construct an optimization function, and the surface depth is optimized.
The second embodiment is as follows: in the first step of the method for combining binocular vision and uncalibrated photometric vision according to the present embodiment, the step of obtaining the depth information of the object surface is as follows:
the method comprises the following steps of:
step (one), the position of the object is notAlternatively, six light sources are used for projecting illumination to the object in a time-sharing manner, the left camera and the right camera collect image sequences, and each pixel position on the camera image has a gray sequence value i with the length of sixuv=(i1,i2,i3,i4,i5,i6);
step two, the images are stereo-rectified, stereo matching is performed along the epipolar lines by searching for the maximum similarity between the left and right cameras according to the pixel gray sequence values, and the object surface depth S(u, v) is acquired,
S(u, v) = μ_uv · Z_uv    (1)
in formula (1), f_x denotes the focal length in the x direction, f_y the focal length in the y direction, [u_0 v_0] the camera principal point, u the pixel column coordinate, u_0 the principal-point column coordinate, v the pixel row coordinate, v_0 the principal-point row coordinate, Z_uv the vertical distance from the camera origin to the object surface, and μ_uv a per-pixel coefficient determined by the camera intrinsics;
step three, calculating partial derivatives of the object surface depth S (u, v):
p = ∂S(u, v)/∂u    (2)
q = ∂S(u, v)/∂v    (3)
in the formulas (2) and (3), p represents the partial derivative of the depth in the u direction, and q represents the partial derivative of the depth in the v direction;
step four, calculating a normal vector of the surface of the object;
n = (n_x, n_y, n_z) = (-p, -q, 1) / √(p² + q² + 1)    (4)
in formula (4), n denotes the normal vector, n_x its x component, n_y its y component, and n_z its z component;
The normal vector of the object surface is then substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions. Let P be a point on the object surface and I1, I2 its corresponding image points, I1 being the image coordinate of P projected onto the left camera image and I2 the image coordinate of P projected onto the right camera image; let the normal vector of point P, obtained above, be n, let i_1 denote the gray sequence at I1 and i_2 the gray sequence at I2, and let l_k denote the direction of the k-th light source. The photometric equations of the left and right cameras are then formulas (6) and (7), respectively. Each pair of corresponding points yields 12 equations; with N corresponding points between the left and right images there are 12N equations, while the number of unknowns (three components for each of the six light source directions) is 18, so the light source directions can be solved by least squares:
i_1,k = l_k · n,  k = 1, …, 6    (6)
i_2,k = l_k · n,  k = 1, …, 6    (7)
The illumination directions are then substituted as known quantities into the two photometric vision systems, the normal vector at each depth point is treated as the unknown, and formulas (6) and (7) are combined to optimally calculate the normal vector of the object surface;
The distance between the desired object surface depth and the depth acquired by the binocular vision system is defined and used as a cost function:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²    (8)
where Z̃_uv denotes the surface depth obtained by binocular stereo vision and Z_uv the desired object surface depth;
taking the inner product of the normal vector of the object surface and the gradient of the object surface as a cost function;
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]    (9)
where n̄_uv denotes the surface normal vector obtained by photometric stereo.
Combining the two cost functions to form a final cost function, and optimizing the depth of the object surface by taking the minimum cost function as a target:
E(Z) = λ·E_d + (1 - λ)·E_n    (10)
in formula (10), Z represents the set of object surface depths, λ is a coefficient greater than 0 and smaller than 1 used to adjust the weight between the depth error and the normal vector error, E_d denotes the depth error, and E_n denotes the normal vector error.
The third concrete implementation mode: in this embodiment, the formula for calculating the normal vector of the object surface in step (iv) of the method for combining binocular vision with uncalibrated photometric vision is as follows:
n = (-p, -q, 1) / √(p² + q² + 1)
the fourth concrete implementation mode: in this embodiment, the cost function in step (v) of the method for combining binocular vision and uncalibrated photometric vision is as follows:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²
the fifth concrete implementation mode: in this embodiment, the cost function in step (six) of the method for combining binocular vision and uncalibrated photometric vision is as follows:
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]
the sixth specific implementation mode: in this embodiment, the left and right photometric equations established by the camera of the method for combining binocular vision with uncalibrated photometric vision jointly optimize the light source direction and normal vector.
The seventh embodiment: in the method for combining binocular vision and uncalibrated photometric vision according to the embodiment, the stereo matching is performed by using the gray sequence.
Principle of operation
The invention uses two cameras and a plurality of non-collinear light sources: the light sources project illumination onto the object in a time-sharing manner while the cameras collect image sequences; the light sources together with the two cameras form two uncalibrated photometric vision systems, and the two cameras form a binocular vision system from which the object surface depth is obtained. Starting from the depth obtained by binocular vision, the light source directions are estimated, the surface normal directions are then obtained from the light source directions, and the optimal surface depth is obtained by jointly optimizing the depth and the normals.
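Putting the pieces together, the overall flow described above could look roughly like the following outline, reusing the sketch functions introduced in the earlier sections. All names, the baseline and focal-length values, the image stacks left_stack/right_stack, and the sampled corresponding-point arrays are assumptions prepared beforehand; this is an illustration, not the patented implementation.

```python
import numpy as np

# 1. Binocular depth from the six-exposure gray sequences.
disp = match_gray_sequences(left_stack, right_stack)
Z0 = focal_x * baseline / np.maximum(disp, 1)          # assumed pinhole conversion

# 2. Initial normals from the binocular depth.
n0 = normals_from_depth(Z0)

# 3. Uncalibrated light directions from corresponding points.
L = estimate_light_directions(I_left_pts, I_right_pts, n_left_pts, n_right_pts)

# 4. Photometric normals with the recovered lights.
n_bar = estimate_normals(left_stack, L)

# 5. Joint depth refinement combining both cues.
Z = refine_depth(Z0, n_bar, lam=0.5)
```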
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A method for combining binocular vision with uncalibrated photometric vision, the method employing at least two cameras and at least three non-collinear light sources, the light sources projecting illumination onto an object in a time-sharing manner while the cameras capture images; characterized in that: the method for combining binocular vision with uncalibrated photometric vision is realized by the following steps:
step one, two cameras form a binocular stereo vision system, and the depth information of the object surface is acquired from the image sequences collected by the two cameras;
step two, the gradient of the object surface is calculated from the depth information, and from it the normal vector of the object surface;
step three, the normal vector of the object surface is substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions;
step four, the illumination directions are substituted as known quantities into the two photometric vision systems, which are jointly optimized to recalculate the normal vector of the object surface;
step five, the normal vector calculated in step four and the surface depth calculated in step one are combined to construct an optimization function, and the surface depth is optimized.
2. The method for combining binocular vision with uncalibrated photometric vision according to claim 1, characterized in that the object surface depth information in step one is acquired through the following steps:
step one, the position of the object is kept unchanged, the six light sources project illumination onto the object in a time-sharing manner, the left and right cameras collect image sequences, and each pixel position in a camera image thus has a gray sequence value of length six, i_uv = (i1, i2, i3, i4, i5, i6);
step two, the images are stereo-rectified, stereo matching is performed along the epipolar lines by searching for the maximum similarity between the left and right cameras according to the pixel gray sequence values, and the object surface depth S(u, v) is acquired,
S(u, v) = μ_uv · Z_uv    (1)
in formula (1), f_x denotes the focal length in the x direction, f_y the focal length in the y direction, [u_0 v_0] the camera principal point, u the pixel column coordinate, u_0 the principal-point column coordinate, v the pixel row coordinate, v_0 the principal-point row coordinate, Z_uv the vertical distance from the camera origin to the object surface, and μ_uv a per-pixel coefficient determined by the camera intrinsics;
step three, calculating partial derivatives of the object surface depth S (u, v):
p = ∂S(u, v)/∂u    (2)
q = ∂S(u, v)/∂v    (3)
in the formulas (2) and (3), p represents the partial derivative of the depth in the u direction, and q represents the partial derivative of the depth in the v direction;
step four, calculating a normal vector of the surface of the object;
n = (n_x, n_y, n_z) = (-p, -q, 1) / √(p² + q² + 1)    (4)
in formula (4), n denotes the normal vector, n_x its x component, n_y its y component, and n_z its z component;
the normal vector of the object surface is then substituted as a known quantity into the two photometric vision systems, which are jointly optimized to calculate the illumination directions: let P be a point on the object surface and I1, I2 its corresponding image points, I1 being the image coordinate of P projected onto the left camera image and I2 the image coordinate of P projected onto the right camera image; let the normal vector of point P, obtained above, be n, let i_1 denote the gray sequence at I1 and i_2 the gray sequence at I2, and let l_k denote the direction of the k-th light source; the photometric equations of the left and right cameras are then formulas (6) and (7), respectively; each pair of corresponding points yields 12 equations, and with N corresponding points between the left and right images there are 12N equations, while the number of unknowns (three components for each of the six light source directions) is 18, so the light source directions can be solved by least squares;
i_1,k = l_k · n,  k = 1, …, 6    (6)
i_2,k = l_k · n,  k = 1, …, 6    (7)
the illumination directions are then substituted as known quantities into the two photometric vision systems, the normal vector at each depth point is treated as the unknown, and formulas (6) and (7) are combined to optimally calculate the normal vector of the object surface;
the distance between the desired object surface depth and the depth acquired by the binocular vision system is defined and used as a cost function:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²    (8)
where Z̃_uv denotes the surface depth obtained by binocular stereo vision and Z_uv the desired object surface depth;
taking the inner product of the normal vector of the object surface and the gradient of the object surface as a cost function;
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]    (9)
where n̄_uv denotes the surface normal vector obtained by photometric stereo.
Combining the two cost functions to form a final cost function, and optimizing the depth of the object surface by taking the minimum cost function as a target:
E(Z) = λ·E_d + (1 - λ)·E_n    (10)
in formula (10), Z represents the set of object surface depths, λ is a coefficient greater than 0 and smaller than 1 used to adjust the weight between the depth error and the normal vector error, E_d denotes the depth error, and E_n denotes the normal vector error.
3. The method for combining binocular vision with uncalibrated photometric vision according to claim 2, characterized in that the formula for calculating the normal vector of the object surface in step (four) is:
n = (-p, -q, 1) / √(p² + q² + 1)
4. The method for combining binocular vision with uncalibrated photometric vision according to claim 2, characterized in that the cost function in step (five) is:
E_d = Σ_{u,v} ( Z_uv - Z̃_uv )²
5. The method for combining binocular vision with uncalibrated photometric vision according to claim 2, characterized in that the cost function in step (six) is:
E_n = Σ_{u,v} [ ( n̄_uv · (1, 0, ∂Z_uv/∂u) )² + ( n̄_uv · (0, 1, ∂Z_uv/∂v) )² ]
6. The method for combining binocular vision with uncalibrated photometric vision according to claim 2, characterized in that the left and right photometric equations established by the two cameras are used to jointly optimize the light source directions and the normal vectors.
7. The method for combining binocular vision with uncalibrated photometric vision according to claim 2, characterized in that stereo matching is performed using the gray sequences.
CN202010877425.6A 2020-08-27 2020-08-27 Method for combining binocular vision with uncalibrated luminosity vision Active CN112164105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010877425.6A CN112164105B (en) 2020-08-27 2020-08-27 Method for combining binocular vision with uncalibrated luminosity vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010877425.6A CN112164105B (en) 2020-08-27 2020-08-27 Method for combining binocular vision with uncalibrated luminosity vision

Publications (2)

Publication Number Publication Date
CN112164105A true CN112164105A (en) 2021-01-01
CN112164105B CN112164105B (en) 2022-04-08

Family

ID=73860297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010877425.6A Active CN112164105B (en) 2020-08-27 2020-08-27 Method for combining binocular vision with uncalibrated luminosity vision

Country Status (1)

Country Link
CN (1) CN112164105B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111650212A (en) * 2020-07-03 2020-09-11 东北大学 Metal surface normal direction three-dimensional information acquisition method based on linear array camera three-dimensional vision
CN113318913A (en) * 2021-02-01 2021-08-31 北京理工大学 Glue dot three-dimensional reconstruction method based on uncalibrated photometric stereo vision
CN116447978A (en) * 2023-06-16 2023-07-18 先临三维科技股份有限公司 Hole site information detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106524909A (en) * 2016-10-20 2017-03-22 北京旷视科技有限公司 Three-dimensional image acquisition method and apparatus
CN106780726A (en) * 2016-12-23 2017-05-31 陕西科技大学 The dynamic non-rigid three-dimensional digital method of fusion RGB D cameras and colored stereo photometry
CN107403449A (en) * 2017-08-09 2017-11-28 深度创新科技(深圳)有限公司 A kind of vision system and its three-dimensional rebuilding method based on photometric stereo vision
CN109146934A (en) * 2018-06-04 2019-01-04 成都通甲优博科技有限责任公司 A kind of face three-dimensional rebuilding method and system based on binocular solid and photometric stereo
US20200252598A1 (en) * 2019-02-06 2020-08-06 Canon Kabushiki Kaisha Control apparatus, imaging apparatus, illumination apparatus, image processing apparatus, image processing method, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106524909A (en) * 2016-10-20 2017-03-22 北京旷视科技有限公司 Three-dimensional image acquisition method and apparatus
CN106780726A (en) * 2016-12-23 2017-05-31 陕西科技大学 The dynamic non-rigid three-dimensional digital method of fusion RGB D cameras and colored stereo photometry
CN107403449A (en) * 2017-08-09 2017-11-28 深度创新科技(深圳)有限公司 A kind of vision system and its three-dimensional rebuilding method based on photometric stereo vision
CN109146934A (en) * 2018-06-04 2019-01-04 成都通甲优博科技有限责任公司 A kind of face three-dimensional rebuilding method and system based on binocular solid and photometric stereo
US20200252598A1 (en) * 2019-02-06 2020-08-06 Canon Kabushiki Kaisha Control apparatus, imaging apparatus, illumination apparatus, image processing apparatus, image processing method, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAOYANG WANG ET AL.: "Binocular Photometric Stereo Acquisition and Reconstruction for 3D Talking Head Applications", Interspeech 2013 *
杜希瑞 (DU XIRUI): "Research on 3D Digitization of Dynamic Non-rigid Bodies Based on Kinect" (基于Kinect的动态非刚性体三维数字化研究), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111650212A (en) * 2020-07-03 2020-09-11 东北大学 Metal surface normal direction three-dimensional information acquisition method based on linear array camera three-dimensional vision
CN113318913A (en) * 2021-02-01 2021-08-31 北京理工大学 Glue dot three-dimensional reconstruction method based on uncalibrated photometric stereo vision
CN116447978A (en) * 2023-06-16 2023-07-18 先临三维科技股份有限公司 Hole site information detection method, device, equipment and storage medium
CN116447978B (en) * 2023-06-16 2023-10-31 先临三维科技股份有限公司 Hole site information detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112164105B (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN112164105B (en) Method for combining binocular vision with uncalibrated luminosity vision
CN106875339B (en) Fisheye image splicing method based on strip-shaped calibration plate
US8593524B2 (en) Calibrating a camera system
CN111243033B (en) Method for optimizing external parameters of binocular camera
CN108510551B (en) Method and system for calibrating camera parameters under long-distance large-field-of-view condition
CN109919911B (en) Mobile three-dimensional reconstruction method based on multi-view photometric stereo
CN110487216A (en) A kind of fringe projection 3-D scanning method based on convolutional neural networks
Chatterjee et al. Algorithms for coplanar camera calibration
CN104918035A (en) Method and system for obtaining three-dimensional image of target
JP2003254748A (en) Stereo image characteristic inspection system
CN106780297B (en) Image high registration accuracy method under scene and Varying Illumination
WO2011083669A1 (en) Stereo camera device
CN106340045B (en) Calibration optimization method in three-dimensional facial reconstruction based on binocular stereo vision
CN103776419A (en) Binocular-vision distance measurement method capable of widening measurement range
CN111028281A (en) Depth information calculation method and device based on light field binocular system
CN109084959B (en) Optical axis parallelism correction method based on binocular distance measurement algorithm
CN106683133B (en) Method for obtaining target depth image
CN115880369A (en) Device, system and method for jointly calibrating line structured light 3D camera and line array camera
JPH11355813A (en) Device for deciding internal parameters of camera
CN111829435A (en) Multi-binocular camera and line laser cooperative detection method
CN110555880B (en) Focal length unknown P6P camera pose estimation method
CN108921936A (en) A kind of underwater laser grating matching and stereo reconstruction method based on ligh field model
JP2004028811A (en) Device and method for correcting distance for monitoring system
CN113034590B (en) AUV dynamic docking positioning method based on visual fusion
CN112700504B (en) Parallax measurement method of multi-view telecentric camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant