CN102316355A - Generation method of 3D machine vision signal and 3D machine vision sensor - Google Patents
- Publication number
- CN102316355A, CN201110272416A
- Authority
- CN
- China
- Prior art keywords
- image
- machine vision
- data
- color
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
A generation method of a 3D machine vision signal is characterized by: A. projecting a black-and-white image with sinusoidally varying brightness onto the surface of an object to be measured, which comprises the following steps: generating a stripe image whose gray-level value varies as a sine function; setting the gray-level variation range of the stripe image to [-π, π], the gray-level value of each stripe in the image being unique; projecting the generated stripe image onto the surface of the object to be measured in a single exposure; B. photographing the object surface and transferring the image data to post-processing equipment through a data transmission interface, which comprises the following steps: photographing the object surface to acquire a two-dimensional image containing the black-and-white stripe phase information modulated by the object surface; acquiring and processing the image with an image acquisition and processing unit to obtain contour data of the object surface; sending the contour data of the object surface to the post-processing equipment through the data transmission interface; C. converting the three-dimensional data into a 3D pseudo-color image with the post-processing equipment and completing 3D measurement and inspection by processing the pseudo-color signal.
Description
Technical field
The present invention relates to a method for generating a 3D machine vision signal and to a 3D machine vision sensor.
Background technology
Industrial color cameras (2D) and 3D machine vision sensors currently on the market or reported in the literature take the following forms and have the following problems:
Analog camera: analog signal output; provides only 2D data and position information;
Digital camera: various digital output interfaces, but outputs only 2D data and position information;
Smart camera: various digital output interfaces; outputs only 2D data, position information and image-analysis object information;
3D machine vision sensor: digital interface; can output height information, but only as a string (or a single line) of discrete data rather than a full-color BITMAP image;
3D smart camera: digital interface; can output image analysis results.
The first three types above can output only two-dimensional information; although the last two can provide 3D information, they have the following limitations:
3D machine vision sensor: a line laser is used in scanning mode, so the object to be inspected must move relative to the laser generating device, and only the data of a single line reflecting height information can be output at a time; the lines must subsequently be collected and stitched before they form an image that can be analyzed.
3D smart camera: likewise uses a line laser in scanning mode, so the object to be inspected must move relative to the laser generating device; here the three-dimensional data are stitched inside the camera, and the analyzed measurement result is output directly. This approach lacks flexibility and openness, and its cost is often high.
Summary of the invention
The objective of the present invention is to overcome the deficiencies of the prior art and to provide a method for generating a 3D machine vision signal and a 3D machine vision sensor, which achieve the following: 1. detection within the field of view requires no scanning, which increases detection speed; 2. the output is a continuous sequence of images, each frame representing one acquisition; 3. a color image is output, providing raw data for subsequent detection and measurement; 4. only a single frame needs to be acquired to obtain the depth-direction (Z-axis) data of the object under test.
The method of the present invention is realized as follows: an encoder, a projection device, a camera, a data transmission interface and post-processing equipment are integrated into one product; the computed 3D point cloud is converted into BMP color pixels, with different color levels representing different object heights, so that a color image signal containing three-dimensional information is output, wherein:
A. Projecting a black-and-white image with sinusoidally varying brightness onto the surface of the object under test, comprising the following steps:
Generating, by computer, a stripe image whose gray value varies sinusoidally;
Setting the gray-level variation range of the stripe image to [-π, π], the gray value of each stripe in the image being unique;
Projecting the generated stripe image onto the surface of the object under test in a single exposure (an illustrative sketch of this stripe generation is given after this method description);
B. Photographing the object surface and transferring the image data to the post-processing equipment through the data transmission interface, comprising the following steps:
Photographing the object surface, the obtained two-dimensional image containing the black-and-white stripe phase information modulated by the object surface;
Acquiring the image through the image acquisition and processing unit and processing it to obtain the contour data of the object surface;
Sending the contour data of the object surface to the post-processing equipment through the data transmission interface;
C. The post-processing equipment converts the three-dimensional data into a 3D pseudo-color image, and 3D measurement or inspection is completed by processing this pseudo-color signal.
The method further comprises:
When generating the pseudo-color image, the height (Z) coordinate information is converted into a color level through the correspondence defined by a palette and is expressed by RGB components, which determine the color level of each individual pixel in the point cloud; the set of pixels combining row/column information (XY) and color information (RGB) constitutes a Bitmap image, and adding a header and an end marker to the Bitmap yields the various color image formats, i.e. the pseudo-color image.
The processing referred to in step B comprises solving for the phase of the acquired black-and-white stripe image; using the uniqueness of the stripe gray values mentioned in the image generation principle, the position of each black-and-white stripe in the sequence can be determined, and from this the stripe phase difference after modulation by the object contour is determined; once the stripe phase difference is obtained, the projection angle of each stripe is further derived, and through the correspondence between projection angle and height information the three-dimensional information of each pixel, namely its column (X), row (Y) and height (Z) coordinates, is obtained; all pixels within the field of view that carry three-dimensional information constitute the point cloud of the object under test.
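By way of illustration only (not part of the original disclosure), the following Python/NumPy sketch shows one way the stripe image of step A could be generated: the gray value follows a sine whose phase spans a single period [-π, π] across the image width, so the phase of each column is unique. The image size, the 8-bit quantization and the vertical stripe orientation are assumptions made for the example; the disclosure's further requirement that every stripe carry a unique gray value would in practice need an additional coding scheme on top of this simple profile.

```python
import numpy as np

def make_sinusoidal_stripe_image(width=1024, height=768):
    """Sketch of step A: a gray-scale image whose intensity follows a sine
    over the phase range [-pi, pi] (one period across the image width)."""
    phase = np.linspace(-np.pi, np.pi, width)        # unique phase per column
    profile = 0.5 * (1.0 + np.sin(phase))            # sine mapped from [-1, 1] to [0, 1]
    stripes = np.tile(profile, (height, 1))          # repeat rows -> vertical stripe pattern
    return np.round(255 * stripes).astype(np.uint8)  # 8-bit image for the projector

if __name__ == "__main__":
    img = make_sinusoidal_stripe_image()
    print(img.shape, int(img.min()), int(img.max())) # (768, 1024) 0 255
```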
The 3D machine vision sensor of the present invention is realized as follows: an encoder, a projection device, a camera, a data transmission interface and post-processing equipment are integrated into one product, and a color image signal containing three-dimensional information is finally output; wherein: the working peripheral system comprises the detected object, the 3D machine vision sensor, the data transmission interface, and a PC industrial computer or other data processing equipment; the vision sensor comprises a stripe image projection unit and a CCD camera unit; the stripe image projection unit projects sinusoidal luminance distribution stripes onto the surface of the detected object; the CCD camera unit continuously outputs processed images of the detected object surface and sends them to the PC industrial computer or other data processing equipment through the digital transmission interface.
The 3D machine vision sensor further comprises:
In the described 3D machine vision sensor, the stripe image projection unit comprises a sinusoidal luminance distribution image generator and an image projection device, the image generator being connected to the image projection device; the camera unit comprises an image acquisition and processing unit, a CCD front-end processor, a CCD photosensitive element and a C-mount lens interface; the image acquisition and processing unit is connected to the C-mount lens interface through the CCD front-end processor and the CCD photosensitive element, and is also connected to the sinusoidal luminance distribution image generator and to the data transmission interface.
The described stripe image projection device (projector) comprises a three-color LED light source, a DLP processor, an input interface and a zoom projection lens.
Beneficial effects of the present invention:
In a single detection task, an entire surface can be fully inspected without scanning.
The output reflects the three-dimensional information within the detected field of view, including the coordinate information of the XY plane and of the height direction Z.
The height-direction information is expressed by color; a built-in palette defines the correspondence between color levels and heights.
The output pseudo-color image is available for subsequent processing.
Because no relative motion is needed for detection, the processing speed is increased.
Only a single frame needs to be acquired to obtain the depth-direction (Z-axis) data of the object under test.
Description of drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a system block diagram of the present invention.
Fig. 3 is a block diagram of the vision sensor of Fig. 2.
Fig. 4 is a schematic diagram of the 3D industrial camera.
Fig. 5 is a schematic diagram of the internal structure of Fig. 4.
In the figures: 1 PC industrial computer or other data processing equipment; 2 3D industrial camera; 21 image acquisition and processing unit; 22 CCD front-end processor; 23 CCD photosensitive element; 24 C-mount lens interface; 25 sinusoidal luminance distribution image generator; 26 image projection device; 3 detected object; 4 data transmission interface; 5 camera lens; 6 camera.
Embodiment
The present invention is described further below with reference to the accompanying drawings.
The method and its operating principle are as follows (as shown in Fig. 1):
The black-and-white projection image with sinusoidal luminance distribution is projected onto the surface of the object under test through the image generator and the projection device; the projection device mainly employs a miniature projection module (comprising a three-color LED light source, DLP processing, an input interface, a zoom projection output, etc.). The CCD image acquisition system (comprising a CCD photosensitive element, a front-end processor, an output interface, etc.) photographs the object surface; the resulting two-dimensional image contains the gray-level image and the contour of the surface of the object under test; it is captured by the image acquisition and processing unit and processed to obtain the contour data of the object surface. The processing includes inversely solving for the phase value from the phase information of the acquired black-and-white image; this phase value implicitly carries the shape information of the object. After the phase difference is obtained, the projection angle of the projected light is further obtained; through the correspondence between projection angle and height information, the three-dimensional information of each pixel, namely its column (X) and row (Y) coordinates together with its height (Z) coordinate, is obtained. All pixels within the field of view that carry three-dimensional information constitute the point cloud of the object under test. The height information can be converted into a color level through the correspondence defined by a palette and expressed by RGB components, which determine the color level of each pixel in the point cloud. The set of pixels combining row/column information (XY) and color information (RGB) constitutes a Bitmap image, and adding a header and an end marker yields the various color image formats. The color image is transferred to the subsequent processing unit through the digital transmission interface, thereby completing 3D measurement or inspection.
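As a purely illustrative sketch of the processing described above, the code below recovers a phase map from the captured stripe image, forms the phase difference against a flat reference plane, converts it to a height map, and attaches column/row coordinates to build the point cloud. The single-image arcsine demodulation and the single calibration constant height_per_radian are simplifying stand-ins for the phase solving and the projection-angle/height correspondence, which in a real system come from calibration.

```python
import numpy as np

def phase_to_height(captured, reference, height_per_radian):
    """Toy conversion from stripe phase difference to a height map Z(x, y).

    captured          -- image of the object with projected stripes, floats in [0, 1]
    reference         -- the same stripes photographed on a flat reference plane
    height_per_radian -- calibration constant standing in for the
                         projection-angle / height correspondence
    """
    # Recover a wrapped phase per pixel from the sinusoidal intensity.
    # A real system would use phase shifting or Fourier demodulation plus
    # phase unwrapping; this single-image arcsine is only a stand-in.
    phi_obj = np.arcsin(np.clip(2.0 * captured - 1.0, -1.0, 1.0))
    phi_ref = np.arcsin(np.clip(2.0 * reference - 1.0, -1.0, 1.0))
    delta_phi = phi_obj - phi_ref          # stripe phase difference caused by the object
    return delta_phi * height_per_radian   # height map

def to_point_cloud(height_map, pixel_pitch=1.0):
    """Attach column (X) and row (Y) coordinates to every height (Z) value."""
    rows, cols = np.indices(height_map.shape)
    return np.stack([cols * pixel_pitch,   # X
                     rows * pixel_pitch,   # Y
                     height_map], axis=-1) # Z -> (H, W, 3) point cloud
```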
The concrete steps are as follows (as shown in Fig. 1):
Start;
Calibrate the camera and projection device;
Obtain the intrinsic and extrinsic parameters of the camera and projection device;
Project the sinusoidal luminance distribution image onto the object surface;
Obtain the photographed image of the object surface;
Filter and denoise the image;
Binarize the image;
Obtain the projection angle of the image;
Perform three-dimensional reconstruction to obtain the three-dimensional point cloud;
Display the point cloud data with color levels (see the sketch after this list);
Transfer the resulting data to the post-processing equipment;
End.
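The following sketch (illustrative only) shows the "display the point cloud data with color levels" step referenced above: the height map is normalized, mapped through a small palette to RGB color levels, and written out as a Bitmap. The three anchor colors and the use of Pillow to write the BMP container are assumptions made for the example; an actual sensor would use its built-in palette.

```python
import numpy as np
from PIL import Image   # Pillow, used only to write the BMP container

# Illustrative blue -> green -> red palette; a real sensor would use its
# built-in palette to define the color-level / height correspondence.
PALETTE = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]], dtype=float)

def height_to_pseudocolor(height_map):
    """Map a height map Z(x, y) to RGB color levels and return a Bitmap image."""
    z = height_map.astype(float)
    span = max(float(z.max() - z.min()), 1e-12)
    t = (z - z.min()) / span                              # normalize heights to [0, 1]
    pos = t * (len(PALETTE) - 1)                          # fractional palette position
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(PALETTE) - 1)
    frac = (pos - lo)[..., None]
    rgb = (1.0 - frac) * PALETTE[lo] + frac * PALETTE[hi] # linear interpolation
    return Image.fromarray(rgb.astype(np.uint8), mode="RGB")

if __name__ == "__main__":
    demo = np.fromfunction(lambda y, x: np.hypot(x - 64, y - 64), (128, 128))
    height_to_pseudocolor(demo).save("pseudo_color.bmp")  # BMP = header + RGB pixel data
```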
The system comprises (as shown in Fig. 2): detected object 3, 3D industrial camera (vision sensor) 2, data transmission interface 4, and PC industrial computer or other data processing equipment 1.
Working process: the sinusoidal gray-scale stripes produced by the encoder of the vision sensor are projected onto the object surface through the projection device (e.g. a projector); the camera photographs the object and outputs continuous images, which are sent to the PC industrial computer or other data processing equipment through the digital transmission interface.
The machine vision sensor comprises (as shown in Fig. 3): image acquisition and processing unit 21, CCD front-end processor 22, CCD photosensitive element 23, C-mount lens interface 24, sinusoidal luminance distribution image generator 25, and image projection device 26.
Working process:
Working process of the 3D machine vision sensor: the peripheral system outputs a control signal to the 3D vision sensor, triggering it to operate; once triggered, the vision sensor projects the sinusoidal luminance distribution stripes onto the surface of the object under test and photographs the object; the built-in processing system of the 3D vision sensor processes the captured image and generates a pseudo-color image containing the height information; the 3D vision sensor outputs the pseudo-color image through its digital interface; by polling according to the communication protocol, the peripheral system learns that the 3D machine vision sensor has the pseudo-color image ready and sends a transfer instruction notifying the 3D vision sensor to output the pseudo-color image; after receiving the transfer instruction, the 3D vision sensor sends the pseudo-color image data over the digital communication interface; after receiving a complete frame, the peripheral system starts the measurement or inspection software module to perform the necessary processing on it. This completes the entire working cycle of the 3D vision sensor.
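For illustration, the sketch below shows what the host (peripheral-system) side of this trigger/poll/transfer cycle could look like. All transport details are hypothetical: the link object and the TRIGGER/STATUS?/READY/TRANSFER messages are invented placeholders, since the disclosure specifies the handshake only as "query, then send a transfer instruction according to the communication protocol".

```python
import time

def acquire_pseudocolor_frame(link, timeout_s=5.0):
    """Host-side sketch of one acquisition cycle of the 3D vision sensor.

    `link` is assumed to expose blocking send()/recv() over the digital
    interface; all message names below are hypothetical placeholders.
    """
    link.send(b"TRIGGER")                       # 1. trigger the sensor to project and capture
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:          # 2. poll until the pseudo-color image is ready
        link.send(b"STATUS?")
        if link.recv() == b"READY":
            break
        time.sleep(0.01)
    else:
        raise TimeoutError("sensor never reported a ready frame")
    link.send(b"TRANSFER")                      # 3. transfer instruction per the protocol
    frame = link.recv()                         # 4. one complete pseudo-color frame
    return frame                                # 5. handed to the measurement/inspection module
```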
Explanation of terms:
Calibration: determining the relation between image space and physical space and establishing the quantitative transformation equations.
Intrinsic and extrinsic parameters: the parameters determined during the calibration process.
Binarization: dividing an image into two classes of pixels, black and white, according to a selected gray-level threshold.
How the projection angle of the projected light is obtained: it is computed from the phase difference.
Three-dimensional reconstruction: establishing a spatial (three-dimensional) model of the object based on the acquired two-dimensional image information and the known spatial geometric relations.
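A one-function sketch of the binarization step defined above, assuming an 8-bit gray image held in a NumPy array and a caller-chosen threshold:

```python
import numpy as np

def binarize(gray_image, threshold):
    """Split an 8-bit gray image into two pixel classes (black/white) at `threshold`."""
    return np.where(gray_image >= threshold, 255, 0).astype(np.uint8)
```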
Claims (7)
1. A method for generating a 3D machine vision signal, characterized in that an encoder, a projection device, a camera, a data transmission interface and post-processing equipment are integrated into one product; the computed 3D point cloud is converted into BMP color pixels, with different color levels representing different object heights, so that a color image signal containing three-dimensional information is output; wherein:
A. projecting a black-and-white image with sinusoidally varying brightness onto the surface of the object under test, comprising the following steps:
generating, by computer, a stripe image whose gray value varies sinusoidally;
setting the gray-level variation range of the stripe image to [-π, π], the gray value of each stripe in the image being unique;
projecting the generated stripe image onto the surface of the object under test in a single exposure;
B. photographing the object surface and transferring the image data to the post-processing equipment through the data transmission interface, comprising the following steps:
photographing the object surface, the obtained two-dimensional image containing the black-and-white stripe phase information modulated by the object surface;
acquiring the image through an image acquisition and processing unit and processing it to obtain contour data of the object surface;
sending the contour data of the object surface to the post-processing equipment through the data transmission interface;
C. converting the three-dimensional data into a 3D pseudo-color image in the post-processing equipment, 3D measurement or inspection being completed by processing this pseudo-color signal.
2. The method for generating a 3D machine vision signal as claimed in claim 1, characterized in that the processing referred to in step B comprises solving for the phase of the acquired black-and-white stripe image; using the uniqueness of the stripe gray values mentioned in the image generation principle, the position of each black-and-white stripe in the sequence can be determined, and from this the stripe phase difference after modulation by the object contour is determined; once the stripe phase difference is obtained, the projection angle of each stripe is further derived, and through the correspondence between projection angle and height information the three-dimensional information of each pixel, namely its column (X), row (Y) and height (Z) coordinates, is obtained; all pixels within the field of view that carry three-dimensional information constitute the point cloud of the object under test.
3. The method for generating a 3D machine vision signal as claimed in claim 2, characterized in that the height (Z) coordinate information is converted into a color level through the correspondence defined by a palette and is expressed by RGB components, which determine the color level of each individual pixel in the point cloud; the set of pixels combining row/column information (XY) and color information (RGB) constitutes a Bitmap image, and adding a header and an end marker to the Bitmap yields the various color image formats, also called the pseudo-color image.
4. The method for generating a 3D machine vision signal as claimed in claim 1, characterized in that it further comprises the following steps:
Start;
Calibrate the camera and projection device;
Obtain the intrinsic and extrinsic parameters of the camera and projection device;
Project the sinusoidal luminance distribution image onto the object surface;
Obtain the photographed image of the object surface;
Filter and denoise the image;
Binarize the image;
Obtain the projection angle of the image;
Perform three-dimensional reconstruction to obtain the three-dimensional point cloud;
Display the point cloud data with color levels;
Transfer the resulting data to the post-processing equipment;
End.
5. A 3D machine vision sensor implementing the method for generating a 3D machine vision signal of claim 1, characterized in that an encoder, a projection device, a camera, a data transmission interface and post-processing equipment are integrated into one product, and a color image signal containing three-dimensional information is finally output; wherein: the working peripheral system comprises the peripheral detected object, the 3D machine vision sensor, the data transmission interface, and a PC industrial computer or other data processing equipment; the 3D machine vision sensor comprises a stripe image projection unit and a CCD camera unit; the stripe image projection unit projects sinusoidal luminance distribution stripes onto the surface of the detected object; the CCD camera unit continuously outputs processed images of the detected object surface and sends them to the PC industrial computer or other data processing equipment through the digital transmission interface.
6. The 3D machine vision sensor as claimed in claim 5, characterized in that the stripe image projection unit comprises a sinusoidal luminance distribution image generator and an image projection device, the sinusoidal luminance distribution image generator being connected to the image projection device; the camera unit comprises an image acquisition and processing unit, a CCD front-end processor, a CCD photosensitive element and a C-mount lens interface; the image acquisition and processing unit is connected to the C-mount lens interface through the CCD front-end processor and the CCD photosensitive element, and is also connected to the sinusoidal luminance distribution image generator and to the data transmission interface.
7. The 3D machine vision sensor as claimed in claim 6, characterized in that the sinusoidal luminance distribution image projection device comprises a three-color LED light source, a DLP processor, an input interface and a zoom projection lens.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110272416A CN102316355A (en) | 2011-09-15 | 2011-09-15 | Generation method of 3D machine vision signal and 3D machine vision sensor |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102316355A true CN102316355A (en) | 2012-01-11 |
Family
ID=45429115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110272416A Pending CN102316355A (en) | 2011-09-15 | 2011-09-15 | Generation method of 3D machine vision signal and 3D machine vision sensor |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102316355A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3639869B2 (en) * | 2002-06-05 | 2005-04-20 | 学校法人慶應義塾 | Stereoscopic observation apparatus and method for creating a stereoscopic image that reproduces the color of the surface of an object |
US20060132915A1 (en) * | 2004-12-16 | 2006-06-22 | Yang Ung Y | Visual interfacing apparatus for providing mixed multiple stereo images |
JP2006268345A (en) * | 2005-03-23 | 2006-10-05 | Toshiba Corp | Image processing device and image processing method |
TW200838287A (en) * | 2007-03-07 | 2008-09-16 | Ling-Yuan Tseng | System and method for 3D solid image capture and display |
CN101825443A (en) * | 2010-03-09 | 2010-09-08 | 深圳大学 | Three-dimensional imaging method and system |
CN101788274A (en) * | 2010-03-15 | 2010-07-28 | 电子科技大学 | Method for 3D shape measurement of colourful composite grating |
CN101825445A (en) * | 2010-05-10 | 2010-09-08 | 华中科技大学 | Three-dimension measuring system for dynamic object |
CN101907954A (en) * | 2010-07-02 | 2010-12-08 | 中国科学院深圳先进技术研究院 | Interactive projection system and interactive projection method |
CN102074045A (en) * | 2011-01-27 | 2011-05-25 | 深圳泰山在线科技有限公司 | System and method for projection reconstruction |
Non-Patent Citations (1)
Title |
---|
陈金法, 等 (CHEN Jinfa, et al.): "Precision stereo-vision measurement and inspection platform for IC chips" (面向IC芯片的精密立体视觉测量检测平台), Computer CD Software and Applications (《计算机光盘软件与应用》) *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106464857A (en) * | 2014-03-26 | 2017-02-22 | 驼鹿科技公司 | Compact 3D depth capture systems |
CN106464857B (en) * | 2014-03-26 | 2019-03-08 | 驼鹿科技公司 | Compact 3D depth capture systems |
CN105631848B (en) * | 2014-10-31 | 2018-11-09 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN105631848A (en) * | 2014-10-31 | 2016-06-01 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105651173A (en) * | 2016-02-29 | 2016-06-08 | 题晶 | 3D measuring method and system |
CN107241592A (en) * | 2016-06-29 | 2017-10-10 | 鲁班嫡系机器人 | A kind of projecting unit and filming apparatus, processor, imaging device including the unit |
WO2018001252A1 (en) * | 2016-06-29 | 2018-01-04 | 鲁班嫡系机器人 | Projection unit and photographing apparatus comprising same projection unit, processor, and imaging device |
EP3481062A4 (en) * | 2016-06-29 | 2020-01-08 | Robotics Robotics Limited | Projection unit and photographing apparatus comprising same projection unit, processor, and imaging device |
CN107241592B (en) * | 2016-06-29 | 2020-08-11 | 鲁班嫡系机器人 | Imaging device and imaging method |
WO2019119372A1 (en) * | 2017-12-21 | 2019-06-27 | 深圳前海达闼云端智能科技有限公司 | Display method and device, electronic device and computer program product |
CN112287859A (en) * | 2020-11-03 | 2021-01-29 | 北京京东乾石科技有限公司 | Object recognition method, device and system, computer readable storage medium |
CN113935891A (en) * | 2021-09-09 | 2022-01-14 | 完美世界(北京)软件科技发展有限公司 | Pixel-style scene rendering method, device and storage medium |
CN114332407A (en) * | 2021-12-09 | 2022-04-12 | 合肥源信智能仪器有限公司 | Method for assisting handwriting authentication by using three-dimensional equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102316355A (en) | Generation method of 3D machine vision signal and 3D machine vision sensor | |
EP3531066B1 (en) | Three-dimensional scanning method including a plurality of lasers with different wavelengths, and scanner | |
US20240118218A1 (en) | Stroboscopic stepped illumination defect detection system | |
CN1238691C (en) | Combined stereovision, color 3D digitizing and motion capture system | |
US7729530B2 (en) | Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system | |
US9432593B2 (en) | Target object information acquisition method and electronic device | |
CA2731680C (en) | System for adaptive three-dimensional scanning of surface characteristics | |
CN104197861B (en) | Three-dimension digital imaging method based on structure light gray scale vector | |
CN110838164B (en) | Monocular image three-dimensional reconstruction method, system and device based on object point depth | |
CN103824318A (en) | Multi-camera-array depth perception method | |
CN108665535A (en) | A kind of three-dimensional structure method for reconstructing and system based on coding grating structured light | |
CN112361989B (en) | Method for calibrating parameters of measurement system through point cloud uniformity consideration | |
Beltran et al. | A comparison between active and passive 3d vision sensors: Bumblebeexb3 and Microsoft Kinect | |
CN111811431A (en) | Three-dimensional scanner, three-dimensional scanning system and method | |
Sergiyenko et al. | Multi-view 3D data fusion and patching to reduce Shannon entropy in Robotic Vision | |
CN107044830B (en) | Distributed multi-view stereoscopic vision system and target extraction method | |
Rodrigues et al. | Structured light techniques for 3D surface reconstruction in robotic tasks | |
CN111866490A (en) | Depth image imaging system and method | |
CN107392955B (en) | Depth of field estimation device and method based on brightness | |
CN109102548B (en) | Method and system for identifying tracking range | |
CN202340288U (en) | 3D (three-dimensional) machine vision sensor | |
CN212843399U (en) | Portable three-dimensional measuring equipment | |
Agrawal et al. | RWU3D: Real World ToF and Stereo Dataset with High Quality Ground Truth | |
CN113902791B (en) | Three-dimensional reconstruction method and device based on liquid lens depth focusing | |
CN104637042A (en) | Camera calibration method based on circular reference object and system thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20120111 |