CN109727282A - Scale-invariant depth map mapping method for a three-dimensional image - Google Patents
Scale-invariant depth map mapping method for a three-dimensional image
- Publication number
- CN109727282A
- Authority
- CN
- China
- Prior art keywords
- image
- scale
- coordinate
- depth map
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
Abstract
The invention discloses a scale-invariant depth map mapping method for three-dimensional images: obtain a three-dimensional point cloud image of the object, set the parameters required for the mapping, calculate the number of rows R and columns C of the image and generate an image with all gray values 0, then traverse the points in the point cloud and calculate the pixel gray value at each corresponding position, obtaining a depth map of the specified, invariant scale. The method of the present invention maps a set of three-dimensional image points to a scale-invariant depth map: objects at different depths yield depth maps of the same scale, which can be processed with existing image algorithms, and the three-dimensional position of the object in space can be calculated from the depth map.
Description
Technical field
The present invention relates to depth map mapping methods for three-dimensional images, and specifically to a scale-invariant depth map mapping method for three-dimensional images.
Background art
In industrial automation, machine vision improves the flexibility and degree of automation of production. In hazardous working environments, or in situations where human vision cannot meet the requirements, machine vision replaces human vision; likewise, in high-volume industrial production, manual visual inspection of product quality is inefficient and imprecise, and machine vision inspection greatly improves efficiency and the degree of automation. Software, algorithms and applications for two-dimensional vision are currently highly mature. For three-dimensional vision, the common approach is to acquire a point cloud of the object, the image of a three-dimensional object being a set of points (see Fig. 1). However, for objects at different heights, the depth image of the three-dimensional image varies in scale (see Fig. 2), and processing algorithms for this case are largely lacking.
Summary of the invention
To solve the above problems, the invention proposes a scale-invariant depth map mapping method for three-dimensional images, that is, a method that maps a set of three-dimensional image points to a scale-invariant depth map. The method obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and it allows the three-dimensional position of the object in space to be calculated from the depth map.
The scale-invariant depth map mapping method for three-dimensional images according to the present invention comprises the following steps:
Step 1. Acquire a three-dimensional point cloud image of the object with a three-dimensional camera, obtaining three-dimensional image point clouds of the same object at different heights.
Step 2. Set the parameters required for the mapping: the image upper-left corner coordinate (x1, y1), the image lower-right corner coordinate (x2, y2), the pixel width W, Dmax (the depth value corresponding to gray level 255) and Dmin (the depth value corresponding to gray level 0). Here x1 is the X coordinate of the image's upper-left corner in the point cloud coordinate system and y1 is its Y coordinate; x2 is the X coordinate of the image's lower-right corner in the point cloud coordinate system and y2 is its Y coordinate.
Step 3. Represent the image depth value as a gray value G:
G = (Dz - Dmin) / (Dmax - Dmin) × 255
where Dz is the Z coordinate of the current point.
Step 4. Calculate the number of rows R and columns C of the image from the parameters of step 2, and generate an image M with all gray values 0:
R = (x2 - x1) / W
C = (y2 - y1) / W
Step 5. Traverse the points (Dx, Dy, Dz) in the point cloud and calculate the pixel position (r1, c1) of each point:
r1 = (Dx - x1) / W
c1 = (Dy - y1) / W
where Dx, Dy and Dz are the X, Y and Z coordinates of the point, r1 is the row of the corresponding pixel and c1 is its column.
Step 6. Assign the G value of step 3 to position (r1, c1) of image M, and repeat until all points have been traversed; M is then the resulting scale-invariant depth map.
The method of the present invention maps a set of three-dimensional image points to a scale-invariant two-dimensional depth map. It obtains object depth maps of the same scale at different depths, which can be processed with existing image algorithms, and the three-dimensional position of the object in space can be calculated from the depth map.
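Steps 1 to 6 can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation: the function name and the NumPy representation of the point cloud (an N×3 array of X, Y, Z values in the same units as the parameters) are assumptions of this example.

```python
import numpy as np

def scale_invariant_depth_map(points, x1, y1, x2, y2, W, Dmax, Dmin):
    """Map a point cloud (N x 3 array of X, Y, Z) to a scale-invariant depth map.

    (x1, y1) and (x2, y2) are the upper-left and lower-right image corners in
    point cloud coordinates, W is the pixel width, and Dmax / Dmin are the
    depth values mapped to gray levels 255 and 0 respectively.
    """
    # Step 4: image size from the mapping parameters, initial gray value 0.
    R = int(round((x2 - x1) / W))
    C = int(round((y2 - y1) / W))
    M = np.zeros((R, C), dtype=np.uint8)

    # Steps 3, 5 and 6: traverse the points and write each gray value.
    # Points falling on the same pixel simply overwrite earlier ones.
    for Dx, Dy, Dz in points:
        r1 = int(round(float(Dx - x1) / W))
        c1 = int(round(float(Dy - y1) / W))
        if 0 <= r1 < R and 0 <= c1 < C:
            G = (Dz - Dmin) / (Dmax - Dmin) * 255.0
            M[r1, c1] = int(np.clip(G, 0, 255))
    return M
```

Because the pixel width W is fixed in point cloud units rather than derived from the camera distance, the same object produces a depth map of the same scale regardless of the height at which it was captured.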
Description of the drawings
Fig. 1 shows the point cloud of a three-dimensional image, where panel (a) is the overall point cloud and panel (b) is a local point cloud.
Fig. 2 shows the scale variation of the same object at different heights in a conventional depth map, where panel (a) is the depth image mapped from the point cloud of a workpiece shot from a distance and panel (b) is the depth image of a point cloud shot at close range.
Fig. 3 is a schematic diagram of the parameters required for the mapping, where x1 is the X coordinate of the image's upper-left corner in the point cloud coordinate system and y1 is its Y coordinate; x2 is the X coordinate of the image's lower-right corner in the point cloud coordinate system and y2 is its Y coordinate; and W is the pixel width.
Fig. 4 shows the scale-invariant depth maps obtained by the scale-invariant depth map mapping method for three-dimensional images according to the present invention, where panel (a) is the scale-invariant depth image mapped from the point cloud of a workpiece shot from a distance and panel (b) is the scale-invariant depth image of a point cloud shot at close range.
Specific embodiment
The invention is described in further detail below with reference to an embodiment and the accompanying drawings.
Embodiment:
Acquire three-dimensional image point clouds of the same object at different heights. Comparing with conventional depth images (see Fig. 2), the scale differs between the images.
Set the parameters required for the mapping:
Parameter | Value |
---|---|
x1 | -300.0 |
y1 | -300.0 |
x2 | 400.0 |
y2 | 300.0 |
W | 1.0 |
Dmax | -720.0 |
Dmin | -760.0 |
The units are the same as those of the point cloud, here millimetres.
Calculate the number of rows and columns of the image:
R = (x2 - x1) / W = 700
C = (y2 - y1) / W = 600
An image with all gray values 0 is generated according to these row and column counts.
Traverse the point cloud; for example, one point of the cloud has the coordinates:
Dx = -70.1
Dy = 2.8
Dz = -741.3
The row and column coordinates of the corresponding pixel are then:
r1 = (Dx - x1) / W = 229.9, rounded to 230;
c1 = (Dy - y1) / W = 302.8, rounded to 303;
The gray value of the corresponding pixel is then:
G = (Dz - Dmin) / (Dmax - Dmin) × 255 = 119.21, rounded to gray level 119
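The arithmetic of this worked example can be checked with a few lines of Python, using the parameter values from the table above:

```python
# Parameters from the embodiment table (units: millimetres).
x1, y1, W = -300.0, -300.0, 1.0
Dmax, Dmin = -720.0, -760.0

# The example point from the traversed cloud.
Dx, Dy, Dz = -70.1, 2.8, -741.3

r1 = (Dx - x1) / W                      # 229.9 -> pixel row 230
c1 = (Dy - y1) / W                      # 302.8 -> pixel column 303
G = (Dz - Dmin) / (Dmax - Dmin) * 255   # 119.2125

print(round(r1), round(c1), round(G, 2))
```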
The gray value of the position corresponding to each point is calculated in the above manner. Traversing the two point cloud images in this way yields the images of Fig. 4. Because the result has scale invariance, existing vision algorithms such as template matching or minimum enclosing circle can be applied to locate the object, and the three-dimensional coordinates of the object in space can be calculated by the inverse of the above method, for grasping or processing by a robot or other equipment.
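The inverse operation mentioned above can be sketched as follows. This is an illustrative interpretation, not text from the patent: it assumes "inverse operation" means recovering point cloud coordinates from a pixel position and its gray value, and it ignores the sub-pixel and sub-gray-level information lost to rounding (up to W/2 in X and Y, and one gray step in Z).

```python
def pixel_to_point(r1, c1, G, x1, y1, W, Dmax, Dmin):
    """Invert the mapping: recover (X, Y, Z) from pixel (r1, c1) with gray value G."""
    Dx = r1 * W + x1                        # inverse of r1 = (Dx - x1) / W
    Dy = c1 * W + y1                        # inverse of c1 = (Dy - y1) / W
    Dz = G / 255.0 * (Dmax - Dmin) + Dmin   # inverse of the gray value formula
    return Dx, Dy, Dz
```

With the embodiment's parameters, pixel (230, 303) with gray value 119 maps back to approximately (-70.0, 3.0, -741.33), close to the original point (-70.1, 2.8, -741.3) up to the quantization error.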
Claims (1)
1. A scale-invariant depth map mapping method for a three-dimensional image, comprising the following steps:
Step 1. Acquire a three-dimensional point cloud image of the object with a three-dimensional camera;
Step 2. Set the parameters required for the mapping:
the image upper-left corner coordinate (x1, y1), the image lower-right corner coordinate (x2, y2), the pixel width W, Dmax (the depth value corresponding to gray level 255) and Dmin (the depth value corresponding to gray level 0); where x1 is the X coordinate of the image's upper-left corner in the point cloud coordinate system, y1 is its Y coordinate, x2 is the X coordinate of the image's lower-right corner in the point cloud coordinate system, and y2 is its Y coordinate;
Step 3. Represent the image depth value as a gray value G:
G = (Dz - Dmin) / (Dmax - Dmin) × 255
where Dz is the Z coordinate of the current point;
Step 4. Calculate the number of rows R and columns C of the image from the parameters of step 2:
R = (x2 - x1) / W
C = (y2 - y1) / W
and generate an image M with all gray values 0;
Step 5. Traverse the points (Dx, Dy, Dz) in the point cloud and calculate the pixel position (r1, c1) of each point:
r1 = (Dx - x1) / W
c1 = (Dy - y1) / W
where Dx, Dy and Dz are the X, Y and Z coordinates of the point, r1 is the row of the corresponding pixel and c1 is its column;
Step 6. Assign the G value of step 3 to position (r1, c1) of image M, and repeat until all points have been traversed; M is then the resulting scale-invariant depth map.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608105.XA CN109727282A (en) | 2018-12-27 | 2018-12-27 | A kind of Scale invariant depth map mapping method of 3-D image |
PCT/CN2019/087244 WO2020133888A1 (en) | 2018-12-27 | 2019-05-16 | Scale-invariant depth map mapping method for three-dimensional image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109727282A (en) | 2019-05-07 |
Family
ID=66297310
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811608105.XA Pending CN109727282A (en) | 2018-12-27 | 2018-12-27 | A kind of Scale invariant depth map mapping method of 3-D image |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109727282A (en) |
WO (1) | WO2020133888A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020133888A1 (en) * | 2018-12-27 | 2020-07-02 | 南京埃克里得视觉技术有限公司 | Scale-invariant depth map mapping method for three-dimensional image |
CN112767399A (en) * | 2021-04-07 | 2021-05-07 | 惠州高视科技有限公司 | Semiconductor bonding wire defect detection method, electronic device and storage medium |
CN112767399B (en) * | 2021-04-07 | 2021-08-06 | 高视科技(苏州)有限公司 | Semiconductor bonding wire defect detection method, electronic device and storage medium |
CN113538547A (en) * | 2021-06-03 | 2021-10-22 | 苏州小蜂视觉科技有限公司 | Depth processing method of 3D line laser sensor and dispensing equipment |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114882496B (en) * | 2022-04-15 | 2023-04-25 | 武汉益模科技股份有限公司 | Three-dimensional part similarity calculation method based on depth image |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103455984A (en) * | 2013-09-02 | 2013-12-18 | 清华大学深圳研究生院 | Method and device for acquiring Kinect depth image |
US20160093234A1 (en) * | 2014-09-26 | 2016-03-31 | Xerox Corporation | Method and apparatus for dimensional proximity sensing for the visually impaired |
CN106780592A (en) * | 2016-06-30 | 2017-05-31 | 华南理工大学 | Kinect depth reconstruction algorithms based on camera motion and image light and shade |
WO2018039871A1 (en) * | 2016-08-29 | 2018-03-08 | 北京清影机器视觉技术有限公司 | Method and apparatus for processing three-dimensional vision measurement data |
CN108541322A (en) * | 2016-08-29 | 2018-09-14 | 北京清影机器视觉技术有限公司 | The treating method and apparatus of dimensional visual measurement data |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727282A (en) * | 2018-12-27 | 2019-05-07 | 南京埃克里得视觉技术有限公司 | A kind of Scale invariant depth map mapping method of 3-D image |
- 2018-12-27: CN application CN201811608105.XA (CN109727282A), active, Pending
- 2019-05-16: WO application PCT/CN2019/087244 (WO2020133888A1), active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2020133888A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109727282A (en) | A kind of Scale invariant depth map mapping method of 3-D image | |
CN111242080B (en) | Power transmission line identification and positioning method based on binocular camera and depth camera | |
CN107588721A (en) | The measuring method and system of a kind of more sizes of part based on binocular vision | |
CN110136211A (en) | A kind of workpiece localization method and system based on active binocular vision technology | |
CN110823252B (en) | Automatic calibration method for multi-line laser radar and monocular vision | |
CN107450885A (en) | A kind of coordinate transform method for solving of industrial robot and three-dimension sensor | |
CN112132907B (en) | Camera calibration method and device, electronic equipment and storage medium | |
CN1293752A (en) | Three-D object recognition method and pin picking system using the method | |
CN104976950B (en) | Object space information measuring device and method and image capturing path calculating method | |
JP2692603B2 (en) | 3D measurement method | |
TW201525633A (en) | CNC machining route amending system and method | |
CN112288815B (en) | Target die position measurement method, system, storage medium and device | |
JPH055041B2 (en) | ||
KR102023087B1 (en) | Method for camera calibration | |
CN107895166A (en) | The method that the geometric hashing of feature based description realizes target robust control policy | |
Lee et al. | Implementation of a robotic arm with 3D vision for shoes glue spraying system | |
CN112070844A (en) | Calibration method and device of structured light system, calibration tool diagram, equipment and medium | |
CN108050934B (en) | Visual vertical positioning method for workpiece with chamfer | |
CN114102593B (en) | Method for grabbing regular materials by robot based on two-dimensional low-definition image | |
Lilienblum et al. | A coded 3D calibration method for line-scan cameras | |
CN103200417B (en) | 2D (Two Dimensional) to 3D (Three Dimensional) conversion method | |
CN111815693B (en) | Depth image generation method and device | |
CN114549659A (en) | Camera calibration method based on quasi-three-dimensional target | |
CN110035279B (en) | Method and device for searching SFR test area in checkerboard test pattern | |
CN114049304A (en) | 3D grating detection method and device, computer equipment and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-05-07