CN110310371A - Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image - Google Patents

Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Info

Publication number
CN110310371A
CN110310371A
Authority
CN
China
Prior art keywords
image
pixel
sequence
coordinate system
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910447327.6A
Other languages
Chinese (zh)
Other versions
CN110310371B (en)
Inventor
Dong Zhiguo
Wu Xiaobo
Liu Jiancheng
Zhang Yuchao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology
Priority to CN201910447327.6A
Publication of CN110310371A
Application granted
Publication of CN110310371B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G06T7/45 Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/571 Depth or shape recovery from multiple images from focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the fields of automatic driving and object recognition in vehicle engineering, and solves the technical problem that existing three-dimensional reconstruction of objects in the intelligent driving field is time-consuming, labor-intensive, inefficient, and difficult to make highly accurate. An image sequence of the object is captured by a shooting unit and processed into a preprocessed image sequence. The sharp pixels of each frame of the preprocessed sequence are extracted and the focus factor of every pixel is computed; for each pixel of the all-in-focus image, the sequence number of the image in which its focus factor is maximal is found. The depth spacing Δz between adjacent images of the sequence is computed as the product of the real-time speed v and Δt, and Δz gives the depth value of the corresponding pixels. Using the result of Zhang's calibration method, the coordinates of corresponding points in the pixel coordinate system and the world coordinate system are computed, so that three-dimensional reconstruction is carried out from the two-dimensional image sequence and the three-dimensional contour of the object is reconstructed.

Description

Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image
Technical field
The invention belongs to the fields of automatic driving and object recognition in vehicle engineering; it is specifically a vehicle-mounted system that automatically constructs a three-dimensional model of an object's contour based on image processing.
Background technique
At present, the main schemes for three-dimensional reconstruction of objects in the intelligent driving field fall into two broad classes: vision and lidar. Vision methods acquire surface information of the target object with cameras and, according to the number of cameras, can be divided into monocular vision and binocular vision. Monocular vision projects a known structured-light pattern; after the camera receives the pattern reflected by the object surface, the difference from the standard pattern is computed by image processing to realize three-dimensional reconstruction. Binocular vision recovers the three-dimensional geometric information of the object from the parallax principle and rebuilds the object's three-dimensional contour and position.
The shortcomings of current vision-based three-dimensional reconstruction are obvious: the robustness of both monocular and binocular systems is poor, and their accuracy degrades as the ambient environment changes. When the ambient light weakens, the accuracy of binocular vision drops sharply. Monocular structured-light vision, on the contrary, is suitable only for dark environments; if the surrounding light is strong, the camera has difficulty identifying the projected bright spots accurately.
Lidar-based three-dimensional reconstruction can likewise be divided roughly into two classes: one is based on triangulation, the other is known as ToF ranging. In the triangulation scheme, a linear CCD senses in real time the reflection formed by the laser on the object surface; from the known emission angle α, the receiving angle β, and the distance between the laser head and the CCD, the distance from the sensor to the object can be computed by the law of sines. The principle of ToF (Time of Flight) is to compute the distance to the object by measuring the propagation delay of a light pulse.
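For illustration (the patent states this relation only in words), the law-of-sines distance used in laser triangulation, writing s for the baseline between the laser head and the CCD, takes the standard form:

```latex
d = \frac{s\,\sin\beta}{\sin(\alpha+\beta)}
```

where d is the distance from the emitter to the surface point; this is the standard triangulation relation, not a formula reproduced from the patent.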
The difficulty with lidar lies in performing high-speed data acquisition in hardware and real-time processing in algorithms so as to obtain high-precision raw point-cloud data; moreover, its cost is relatively high compared with camera vision.
Summary of the invention
The technical problem to be solved by the present invention is how to overcome the drawbacks that three-dimensional reconstruction of objects in the existing intelligent driving field is time-consuming, labor-intensive, inefficient, and difficult to make highly accurate, and thereby to provide a system capable of fast, high-precision reconstruction of an object's three-dimensional contour.
The present invention is achieved by the following technical scheme: a method for constructing the three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images, comprising the following steps:
(a) A mobile carrier drives the shooting unit to move along the extension direction of the object, capturing an image of the object every Δt so as to obtain an N-frame image sequence of the object over a period of time;
(b) The full image sequence captured by the shooting unit is processed: each image is cropped to remove the irrelevant background outside the measured region of the object, yielding the preprocessed image sequence;
(c) The sharp pixels of each frame of the preprocessed sequence are extracted to construct an all-in-focus image. The focus evaluation window of each pixel of the preprocessed sequence is determined and the focus factor of every pixel is computed: for any pixel (i, j) in a preprocessed image, taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left, right); the maximum correlation feature value of each of the four matrices is computed, which determines the maximally correlated pixel in each of the four directions. These four pixels define the edges of the focus evaluation window U(i, j) of the pixel, whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3, D4 are, for each of the four directions, the number of pixels between the maximally correlated pixel and the pixel (i, j) under consideration plus 1; the window size is W1 × W2. The focus factor of every pixel in each frame is then computed with this window: the focus factor is the average over the evaluation window of the evaluation values of the pixels it contains. For any pixel (i, j) of the k-th preprocessed frame it is given by: Fk(i, j) = Σ (gx(x, y)² + gy(x, y)²)² / (W1 × W2)
where gx(x, y) and gy(x, y) denote the convolution of the k-th preprocessed image Ik with the Sobel operator in the X and Y directions, respectively. A pixel whose focus factor reaches its maximum in a given frame is a sharp pixel of that frame; the sharp pixels collected from all images constitute the all-in-focus image;
(d) For each pixel of the all-in-focus image, the sequence number of the image in which it is sharp is found; the order of these sequence numbers expresses the relative position of the object surface points. The depth spacing Δz between adjacent images of the sequence is computed as the product of the real-time speed v of the mobile carrier and Δt, from which the depth relation of all surface points follows: the Z-coordinate Zk of the shooting unit in the world coordinate system when the k-th image was acquired is the sum of all Δz from 1 to k, i.e. Zk = Δz1 + Δz2 + … + Δzk.
The pixel coordinates (ik, jk) of each sharp pixel of the k-th image in its image coordinate system are recorded; using the transformation matrix from the pixel coordinate system to the world coordinate system, the pixel coordinates (ik, jk) of all sharp pixels are converted into the corresponding world coordinates (Xk, Yk, Zk). Proceeding in this way for all images of the preprocessed sequence yields the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N} of the sharp pixels in the world coordinate system, N being the total number of images in the sequence;
(e) According to the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N} of the object surface points in the world coordinate system, each point is connected with its neighboring points into a triangular mesh; the faces of the mesh join into the contour graph of the object surface, realizing the reconstruction of the three-dimensional contour of the object surface in the world coordinate system.
The working principle of the invention is as follows. By the lens imaging law, when the focal length and the image distance are fixed, the object distance is uniquely determined; only points on the object that satisfy this object distance form a sharp image on the image plane, called the in-focus image. Points that do not satisfy it cannot form a sharp point image; what is obtained instead is a circle of confusion, and the image is called a defocused image. Therefore, following the focusing principle, a sequence of images of the object is first acquired along the depth-of-field direction so that the whole sequence covers all information of the object in that direction; then the in-focus points of the sequence are extracted by a fusion rule, reconstructing an image that is sharp at every depth position, called the all-in-focus image; finally, depth information is recovered by focus analysis, so that three-dimensional reconstruction is carried out from the two-dimensional image sequence. Compared with other methods, this method requires no complicated light-source calibration, places no harsh demands on illumination during image acquisition, has a larger ranging range, and builds the three-dimensional model at a very considerable speed.
Detailed description of the invention
Fig. 1 is a schematic diagram of a pixel and its focus evaluation window.
Fig. 2 is a schematic diagram of the experimental device used in the present invention.
Fig. 3 shows the images captured by the camera at positions ①, ② and ③ and the object contour restored by three-dimensional reconstruction.
Specific embodiment
The present invention devises a method for measuring the surface contour of an object based on image processing, comprising the following steps: an automobile carries a shooting unit that moves along the extension direction of the object and captures an image of the object every Δt, obtaining an image sequence of the object over a period of time; the full sequence is processed and cropped to remove the irrelevant background outside the measured region, yielding the preprocessed image sequence; the sharp pixels of each frame are extracted and the focus factor of every pixel is computed; for each pixel of the all-in-focus image, the sequence number of the image in which the focus factor is maximal is found; the depth spacing Δz between adjacent images is computed as the product of the real-time speed v and Δt and taken as the depth value of the corresponding pixels; finally, using the result of Zhang's calibration method, the coordinates of corresponding points in the pixel and world coordinate systems are computed, so that three-dimensional reconstruction is carried out from the two-dimensional image sequence and the three-dimensional contour of the object is reconstructed.
The technical solution of the present invention is described below in further detail with reference to a specific embodiment. The solution adopted is a vehicle-mounted automatic measurement system based on image processing, which proceeds in the following steps:
Step 1: the CCD image acquisition and transmission device is fixed on the vehicle body.
Step 2: a checkerboard is printed and pasted on a plane as the calibration object. By adjusting the orientation of the calibration object or of the camera, photos of the calibration object are taken from several different directions. From this group of photos, Zhang's calibration method yields the 4 intrinsic parameters of the linear camera imaging model, u0, v0, fx, fy, which are the pixel coordinates of the principal point and the effective focal lengths of the camera, and the 2 extrinsic parameters R and t, which are the rotation and translation between the camera coordinate system and the world coordinate system. From these six parameters the transformation from the pixel coordinate system to the world coordinate system can be computed.
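As a minimal sketch of how the six calibrated parameters define the mapping (the numeric values below are illustrative assumptions, not values from the patent), the forward projection zc·[u, v, 1]ᵀ = K(R·P + t) can be written as:

```python
import numpy as np

# Illustrative parameter values; the patent does not give numbers.
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0     # 4 intrinsic parameters
K = np.array([[fx, 0, u0], [0, fy, v0], [0, 0, 1.0]])
R = np.eye(3)                                   # rotation, world -> camera
t = np.array([0.0, 0.0, 0.5])                   # translation (metres)

def world_to_pixel(P_w):
    """Map a world point to pixel coordinates: z_c * [u, v, 1]^T = K (R P_w + t)."""
    p_c = R @ np.asarray(P_w, float) + t        # point in camera coordinates
    uvw = K @ p_c
    return uvw[0] / uvw[2], uvw[1] / uvw[2]     # (u, v)
```

Inverting this mapping with the depth recovered from focus analysis gives the pixel-to-world conversion used in step 6.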
Step 3: the automobile drives the CCD image acquisition and transmission device along the extension direction of the object, capturing an image of the object every Δt so as to obtain an N-frame image sequence of the object over a period of time; the acquired images are transferred to the computer through a data line.
Step 4: the full image sequence captured by the shooting unit is processed; each image is cropped to remove the irrelevant background outside the measured region of the object, yielding the preprocessed image sequence.
Step 5: the sharp pixels of each frame of the preprocessed sequence are extracted to construct the all-in-focus image. The focus evaluation window of each pixel is determined and the focus factor of every pixel of the preprocessed sequence is computed: for any pixel (i, j) (the light square in the middle of Fig. 1), taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left, right), and the maximum correlation feature value of each of the four matrices is computed (one maximum per direction), which determines the maximally correlated pixel in each of the four directions (the four dark squares in Fig. 1). These four pixels define the edges of the focus evaluation window U(i, j), whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3, D4 are pixel counts and the window size is W1 × W2. As shown in Fig. 1, D1 is the number of pixels between a pixel (light square) and its leftmost maximally correlated pixel plus 1 (interval of 1, so D1 = 2), and D2 is the number of pixels between the pixel and its rightmost maximally correlated pixel plus 1 (interval of 0, so D2 = 1). The focus factor of every pixel in each frame is computed with this window: the focus factor of a pixel is the average over its evaluation window of the evaluation values of the pixels it contains. For any pixel (i, j) of the k-th preprocessed frame: Fk(i, j) = Σ (gx(x, y)² + gy(x, y)²)² / (W1 × W2)
where gx(x, y) and gy(x, y) denote the convolution of the k-th preprocessed image Ik with the Sobel operator in the X and Y directions, respectively.
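The focus factor above can be sketched as follows (a hedged illustration: the window here is simply centred on the pixel, whereas the patent derives the per-pixel window bounds D1-D4 from gray-level co-occurrence matrices):

```python
import numpy as np

def sobel(img):
    """Convolve img with the Sobel operators in X and Y (zero padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for di in range(3):
        for dj in range(3):
            win = p[di:di + img.shape[0], dj:dj + img.shape[1]]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    return gx, gy

def focus_factor(img, i, j, w1, w2):
    """F_k(i, j): mean over the W1 x W2 evaluation window of (gx^2 + gy^2)^2.
    The window is assumed centred on (i, j) for simplicity."""
    gx, gy = sobel(img)
    e = (gx ** 2 + gy ** 2) ** 2                      # per-pixel evaluation value
    half_w, half_h = w1 // 2, w2 // 2
    win = e[max(i - half_h, 0):i + half_h + 1,
            max(j - half_w, 0):j + half_w + 1]
    return win.sum() / (w1 * w2)
```

A textured (in-focus) region yields a large focus factor, while a uniform (defocused) region yields a small one, which is what lets the sharpest frame be selected per pixel.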
Step 6: for every pixel of the all-in-focus image, the sequence number of the image in which its focus factor is maximal is found. The order of these sequence numbers expresses the relative position of the object surface points; the depth spacing Δz between adjacent images of the sequence is computed as the product of the real-time speed v and Δt, from which the depth relation of all surface points follows. The Z-coordinate Zk of the CCD image acquisition and transmission device in the world coordinate system when the k-th image (k being the sequence number) was acquired is the sum of all Δz from 1 to k, i.e. Zk = Δz1 + Δz2 + … + Δzk.
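A small numeric sketch of this step (the speed, interval, and focus-factor values are invented for illustration):

```python
import numpy as np

v, dt = 2.0, 0.1           # assumed carrier speed (m/s) and capture interval (s)
N = 5                      # number of frames in the sequence
dz = v * dt                # depth spacing between adjacent images
Z = np.cumsum([dz] * N)    # Z_k = sum of dz from 1 to k

# focus factors of one pixel across the N frames (assumed values)
F = np.array([0.1, 0.4, 0.9, 0.5, 0.2])
k = int(np.argmax(F)) + 1  # 1-based sequence number of the sharpest frame
depth = Z[k - 1]           # depth value assigned to this pixel
```

Here the pixel is sharpest in frame 3, so it is assigned the carrier depth at the moment that frame was captured.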
The pixel coordinates (ik, jk) of each in-focus pixel of the k-th image in its image coordinate system are recorded; the transformation from the pixel coordinate system to the world coordinate system obtained by the camera calibration of step 2 is then applied: zc·[u, v, 1]ᵀ = K·(R·[Xw, Yw, Zw]ᵀ + t), where K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]].
Here u0, v0, fx, fy are the pixel coordinates of the principal point and the effective focal lengths of the camera; R and t are the rotation and translation between the camera coordinate system and the world coordinate system; zc is the z-coordinate of the object point in the camera coordinate system. With this transformation, the pixel coordinates (ik, jk) of all sharp pixels are converted into the corresponding world coordinates (Xk, Yk, Zk). Proceeding in this way yields the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N}, in the world coordinate system, of the object points corresponding to the in-focus pixels of all images of the sequence, N being the total number of images;
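A minimal back-projection sketch under the simplifying assumption R = I, t = 0 (camera axes aligned with the world frame, so zc equals the focus-derived depth Zk); the intrinsic values are illustrative, not from the patent, and ik, jk are treated as the pixel coordinates along the image x and y axes:

```python
import numpy as np

# Assumed intrinsics from Zhang's calibration (illustrative values).
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0, u0], [0, fy, v0], [0, 0, 1.0]])

def pixel_to_world(ik, jk, Zk):
    """Back-project pixel (ik, jk) with focus-derived depth Zk to world
    coordinates, assuming R = I and t = 0 so that z_c == Zk."""
    Xk = (ik - u0) * Zk / fx
    Yk = (jk - v0) * Zk / fy
    return Xk, Yk, Zk
```

With a non-trivial R and t, the same recovery would first invert K and then apply Rᵀ·(p − t), but the aligned-frame case shows the essential geometry.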
Step 7: according to the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N} of the object surface points in the world coordinate system, each point is connected with its neighboring points into a triangular mesh; the faces of the mesh join into the contour graph of the object surface, realizing the reconstruction of the three-dimensional contour of the object surface in the world coordinate system. Coordinate values between adjacent points are obtained by linear interpolation, and the reconstructed graph of the object surface is displayed on the screen of the computer control device.
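One way to sketch the meshing step, assuming the surface points are organized on a regular grid so that 'connecting each point with its neighbours' splits every grid cell into two triangles:

```python
import numpy as np

def grid_mesh(points):
    """Build a triangular mesh over an H x W grid of surface points
    (an H x W x 3 array of world coordinates) by splitting each grid
    cell into two triangles, i.e. connecting each point with its
    neighbours. Returns (vertices, triangle index triples)."""
    H, W, _ = points.shape
    idx = np.arange(H * W).reshape(H, W)
    tris = []
    for r in range(H - 1):
        for c in range(W - 1):
            a, b = idx[r, c], idx[r, c + 1]
            d, e = idx[r + 1, c], idx[r + 1, c + 1]
            tris.append((a, b, d))   # upper-left triangle of the cell
            tris.append((b, e, d))   # lower-right triangle of the cell
    return points.reshape(-1, 3), np.array(tris)
```

For scattered (non-grid) points, a Delaunay triangulation would play the same role; the patent itself only specifies that neighboring points are joined into triangles.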
As shown in Fig. 2, the vehicle-mounted camera photographs the object to be identified at positions ①, ②, ③ and ④. Because the distance from the vehicle body to the object changes, the in-focus position changes as well: shooting positions ①, ② and ③ correspond to focusing positions (1), (2) and (3), respectively. The images captured by the camera at positions ①, ② and ③ and the object contour restored by three-dimensional reconstruction are shown in Fig. 3.

Claims (3)

1. A method for constructing a three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images, characterized by comprising the following steps:
(a) a mobile carrier drives a shooting unit to move along the extension direction of the object, capturing an image of the object every Δt so as to obtain an N-frame image sequence of the object over a period of time;
(b) the full image sequence captured by the shooting unit is processed: each image is cropped to remove the irrelevant background outside the measured region of the object, yielding a preprocessed image sequence;
(c) the sharp pixels of each frame of the preprocessed sequence are extracted to construct an all-in-focus image; the focus evaluation window of each pixel of the preprocessed sequence is determined and the focus factor of every pixel is computed: for any pixel (i, j) in a preprocessed image, taking that pixel as the starting point, four gray-level co-occurrence matrices are generated in its four neighborhoods (up, down, left, right); the maximum correlation feature value of each of the four matrices is computed, determining the maximally correlated pixel in each of the four directions; these four pixels define the edges of the focus evaluation window U(i, j) of the pixel, whose width is W1 = D1 + D2 + 1 and height W2 = D3 + D4 + 1, where D1, D2, D3, D4 are, for each of the four directions, the number of pixels between the maximally correlated pixel and the pixel (i, j) under consideration plus 1, so the window size is W1 × W2; the focus factor of every pixel in each frame is computed with this window, the focus factor being the average over the evaluation window of the evaluation values of the pixels it contains; for any pixel (i, j) of the k-th preprocessed frame it is given by: Fk(i, j) = Σ (gx(x, y)² + gy(x, y)²)² / (W1 × W2)
where gx(x, y) and gy(x, y) denote the convolution of the k-th preprocessed image Ik with the Sobel operator in the X and Y directions, respectively; a pixel whose focus factor reaches its maximum in a given frame is a sharp pixel of that frame; the sharp pixels from all images constitute the all-in-focus image;
(d) for each pixel of the all-in-focus image, the sequence number of the image in which it is sharp is found; the order of these sequence numbers expresses the relative position of the object surface points, and the depth spacing Δz between adjacent images of the sequence is computed as the product of the real-time speed v of the mobile carrier and Δt, from which the depth relation of all surface points follows; the Z-coordinate Zk of the shooting unit in the world coordinate system when the k-th image was acquired is the sum of all Δz from 1 to k, i.e. Zk = Δz1 + Δz2 + … + Δzk;
the pixel coordinates (ik, jk) of each sharp pixel of the k-th image in its image coordinate system are recorded; using the transformation matrix from the pixel coordinate system to the world coordinate system, the pixel coordinates (ik, jk) of all sharp pixels are converted into the corresponding world coordinates (Xk, Yk, Zk); proceeding in this way for all images of the preprocessed sequence yields the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N} of the sharp pixels in the world coordinate system, N being the total number of images in the sequence;
(e) according to the coordinate set {(Xk, Yk, Zk) | 1 ≤ k ≤ N} of the object surface points in the world coordinate system, each point is connected with its neighboring points into a triangular mesh; the faces of the mesh join into the contour graph of the object surface, realizing the reconstruction of the three-dimensional contour of the object surface in the world coordinate system.
2. The method for constructing a three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images according to claim 1, characterized in that the transformation from the pixel coordinate system to the world coordinate system is obtained by the following method:
Step 1: the shooting unit is fixed on the mobile carrier;
Step 2: the calibration object is fixed on a plane; by adjusting the orientation of the calibration object or of the shooting unit, photos of the calibration object are taken from several different directions; from this group of photos, Zhang's calibration method yields the 4 intrinsic parameters of the camera imaging linear model, u0, v0, fx, fy, which are the pixel coordinates of the principal point and the effective focal lengths of the camera, and the 2 extrinsic parameters R and t, which are the rotation and translation between the camera coordinate system and the world coordinate system; with zc, the z-coordinate of the object point in the camera coordinate system, these seven parameters determine the transformation from the pixel coordinate system to the world coordinate system: zc·[u, v, 1]ᵀ = K·(R·[Xw, Yw, Zw]ᵀ + t), where K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]].
3. The method for constructing a three-dimensional contour of an object based on vehicle-mounted monocular focusing sequence images according to claim 1 or 2, characterized in that the shooting unit is a CCD image acquisition and transmission device and the mobile carrier is an automobile.
CN201910447327.6A 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image Active CN110310371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910447327.6A CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910447327.6A CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Publications (2)

Publication Number Publication Date
CN110310371A true CN110310371A (en) 2019-10-08
CN110310371B CN110310371B (en) 2023-04-04

Family

ID=68075133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910447327.6A Active CN110310371B (en) 2019-05-27 2019-05-27 Method for constructing three-dimensional contour of object based on vehicle-mounted monocular focusing sequence image

Country Status (1)

Country Link
CN (1) CN110310371B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115641368A (en) * 2022-10-31 2023-01-24 安徽农业大学 Method for extracting characteristics of defocused checkerboard image for calibration
WO2023160301A1 (en) * 2022-02-23 2023-08-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device

Citations (8)

Publication number Priority date Publication date Assignee Title
CN103801989A (en) * 2014-03-10 2014-05-21 太原理工大学 Airborne automatic measurement system for determining origin of coordinates of workpiece according to image processing
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
US20160335809A1 (en) * 2015-05-14 2016-11-17 Qualcomm Incorporated Three-dimensional model generation
CN107680152A (en) * 2017-08-31 2018-02-09 太原理工大学 Target surface topography measurement method and apparatus based on image procossing
US20180295347A1 (en) * 2017-04-05 2018-10-11 Denso Corporation Apparatus for measuring three-dimensional position of object
US20180307922A1 (en) * 2017-04-20 2018-10-25 Hyundai Motor Company Method of detecting obstacle around vehicle
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN109671115A (en) * 2017-10-16 2019-04-23 三星电子株式会社 The image processing method and device estimated using depth value

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN103801989A (en) * 2014-03-10 2014-05-21 太原理工大学 Airborne automatic measurement system for determining origin of coordinates of workpiece according to image processing
US20160335809A1 (en) * 2015-05-14 2016-11-17 Qualcomm Incorporated Three-dimensional model generation
US10205929B1 (en) * 2015-07-08 2019-02-12 Vuu Technologies LLC Methods and systems for creating real-time three-dimensional (3D) objects from two-dimensional (2D) images
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
US20180295347A1 (en) * 2017-04-05 2018-10-11 Denso Corporation Apparatus for measuring three-dimensional position of object
US20180307922A1 (en) * 2017-04-20 2018-10-25 Hyundai Motor Company Method of detecting obstacle around vehicle
CN107680152A (en) * 2017-08-31 2018-02-09 太原理工大学 Target surface topography measurement method and apparatus based on image procossing
CN109671115A (en) * 2017-10-16 2019-04-23 三星电子株式会社 The image processing method and device estimated using depth value

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TOMPKINS R. CORTLAND et al.: "3D reconstruction from a monocular vision system for unmanned ground vehicles", Electro-Optical Remote Sensing, Photonic Technologies, and Applications V *
LIU Shuangyin: "Stereo matching and reconstruction methods for three-dimensional free curves", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160301A1 (en) * 2022-02-23 2023-08-31 杭州萤石软件有限公司 Object information determination method, mobile robot system, and electronic device
CN115641368A (en) * 2022-10-31 2023-01-24 安徽农业大学 Method for extracting characteristics of defocused checkerboard image for calibration
CN115641368B (en) * 2022-10-31 2024-06-04 安徽农业大学 Out-of-focus checkerboard image feature extraction method for calibration

Also Published As

Publication number Publication date
CN110310371B (en) 2023-04-04

Similar Documents

Publication Publication Date Title
CN106289106B (en) Stereo vision sensor combining a line-scan camera and an area-array camera, and calibration method therefor
US9443308B2 (en) Position and orientation determination in 6-DOF
CN109544679A (en) Three-dimensional reconstruction method for pipeline inner walls
CN107154014B (en) Real-time color and depth panoramic image splicing method
EP1580523A1 (en) Three-dimensional shape measuring method and its device
CN110766669B (en) Pipeline measuring method based on multi-view vision
CN103900494A (en) Rapid homologous point matching method for binocular vision three-dimensional measurement
CN104036488A (en) Binocular vision-based method for studying human body posture and action
CN104408762A (en) Method for obtaining object image information and a three-dimensional model using a monocular unit and a two-dimensional platform
CN102073863A (en) Method for acquiring the characteristic size of a remote video-monitored target based on depth fingerprint
CN104236479A (en) Line-structured-light three-dimensional measurement system and three-dimensional texture image construction algorithm
CN113446957B (en) Three-dimensional contour measuring method and device based on neural network calibration and speckle tracking
CN113643345A (en) Multi-view road intelligent identification method based on double-light fusion
CN111009030A (en) Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
CN108362228A (en) Hybrid three-dimensional measuring apparatus and measurement method combining a finishing-tool grating with dual optical engines
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN110310371A (en) Method for constructing a three-dimensional object contour from vehicle-mounted monocular focal-sequence images
CN109085603A (en) Optical three-dimensional imaging system and color three-dimensional imaging method
JP2001147110A (en) Random pattern generating device and its method, distance image generating device and its method, and program providing medium
CN104732586A (en) Fast reconstruction method for dynamic three-dimensional human body shape and fast construction method for three-dimensional motion optical flow
Harvent et al. Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
CN117710588A (en) Three-dimensional target detection method based on visual ranging priori information
CN112254638B (en) Intelligent visual 3D information acquisition device with pitch adjustment
CN109443319A (en) Obstacle ranging system and ranging method based on monocular vision
Jamwal et al. A survey on depth map estimation strategies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant