CN106570903B - Visual recognition and localization method based on RGB-D camera - Google Patents

Visual recognition and localization method based on RGB-D camera

Info

Publication number
CN106570903B
CN106570903B (application CN201610894251.8A)
Authority
CN
China
Prior art keywords
point
plane
potential
cloud
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610894251.8A
Other languages
Chinese (zh)
Other versions
CN106570903A (en)
Inventor
张智军
张文康
黄永前
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201610894251.8A priority Critical patent/CN106570903B/en
Publication of CN106570903A publication Critical patent/CN106570903A/en
Application granted granted Critical
Publication of CN106570903B publication Critical patent/CN106570903B/en
Legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention provides a visual recognition and localization method based on an RGB-D camera, comprising the following steps: 1) acquire a color image and a depth image with a Kinect camera sensor, convert them into a three-dimensional point cloud, and extract the planes in the scene; 2) after the planes have been extracted in step 1), extract and segment the objects from the remaining point cloud; 3) recognize and match each of the object point cloud sets obtained in step 2); 4) compute on the object point clouds obtained in step 2) to realize the localization of the objects. The method recognizes and locates objects from the three-dimensional point cloud image acquired by the RGB-D sensor Kinect II. Because localization involves no complex computation such as matching between multiple images, computational efficiency is greatly increased; the method also offers strong real-time performance and suits the complex environments of daily life.

Description

Visual recognition and localization method based on RGB-D camera
Technical field
The present invention relates to the field of machine vision recognition and localization, and in particular to a visual recognition and localization method based on an RGB-D camera.
Background technique
At present, most existing object recognition and localization systems based on multi-camera color imaging recover three-dimensional information by matching images acquired by different sensors to obtain the position of each pixel in space. Such systems suffer from high cost, slow running speed, and system complexity.
Object edge segmentation is mostly realized by convex hull extraction on images from color cameras. This approach must take the object's surface color into account and easily misjudges when the background color is similar to the object; convex hull extraction also suffers from erroneous object contours that include parts of the background.
Compared with the existing methods that recognize and locate objects using multi-camera color imaging, recognizing and locating objects with an RGB-D sensor has many advantages:
First, the computation is small, the running speed is fast, real-time performance is strong, and object localization is inexpensive. The RGB-D sensor Kinect II released by Microsoft reduces the cost of 3-D scanning and directly provides the user with a relatively high-resolution color image, a depth image, and a point cloud image. The position of each pixel in the camera coordinate system is obtained directly with a single RGB-D sensor, without the stereo matching of images acquired by different sensors that multi-camera systems require to obtain the position of each pixel in space;
Second, accuracy and robustness are improved. Based on the depth image and point cloud image provided by the RGB-D camera, plane extraction and object segmentation and localization can be carried out directly, which effectively avoids the influence of the object's own appearance and of the background color, reduces misjudgments, and improves the stability and accuracy of the system.
Summary of the invention
The purpose of the present invention is, in view of the above shortcomings of the prior art, to provide a visual recognition and localization method based on an RGB-D camera; the method requires little computation, offers strong real-time performance, and can adapt to daily-life scenes.
The purpose of the present invention can be achieved through the following technical solutions:
A visual recognition and localization method based on an RGB-D camera, the method comprising the following steps:
1) acquire a color image and a depth image of the objects with a Kinect camera sensor and convert them into a three-dimensional point cloud image;
2) compute the corresponding normal vector for each point of the three-dimensional point cloud image obtained in step 1);
3) apply a region growing algorithm to the set of normal vectors obtained in step 2) to extract the background plane on which the objects are placed;
4) remove the points of the background plane extracted in step 3), and perform object point cloud set extraction and convex hull extraction on the remaining point cloud;
5) combine each object point cloud set extracted in step 4) with the corresponding convex hull and perform a second region growing, realizing the segmentation of each object's complete contour and the extraction of its complete point set;
6) according to the complete contour of each object obtained in step 5), extract the corresponding color image region and perform feature extraction and matching recognition for each object;
7) average the points in the complete contour of each object obtained in step 5) to obtain each object's position in the camera coordinate system;
8) transform the position of each object in the camera coordinate system obtained in step 7) into the world coordinate system, realizing the localization of each object.
Preferably, in step 2), the normal vector is calculated as follows:
Let P_k be the point whose surface normal is sought. First find the four points P_1, P_2, P_3 and P_4 above, below and to either side of P_k in the image; P_1 and P_3 form vector ν_1, and P_2 and P_4 form vector ν_2. The surface normal ν_p at P_k is then obtained as the cross product of ν_2 and ν_1, specifically:
ν_p = ν_2 × ν_1
The normal vector of each point of the three-dimensional point cloud image is calculated by the formula above.
Preferably, in step 3), first scan the normal vectors of the points of the three-dimensional point cloud image in order. When a vertical normal vector is encountered, keep searching nearby for points whose normal vectors are also vertical and add them to a potential plane point set. If the number of points in the potential plane point set is greater than a set threshold, the potential plane point set is considered a plane point set and is added to the plane set; otherwise, continue scanning the remaining normal vectors. When the scan ends, the plane set is obtained, realizing the extraction of the planes.
Preferably, in step 5), the segmentation of the object's complete contour and the extraction of the complete point set comprise the following steps:
a) input the three-dimensional point cloud, the plane point set, and the points within the plane's convex hull;
b) scan the points within the plane's convex hull in order, looking for points that lie within the hull but do not belong to the plane;
c) once a non-plane point within the hull has been found in step b), keep searching near it for all non-plane points within the hull and add them to a potential object point set;
d) if the number of points in the potential object point set of step c) is less than the set threshold, return to step b) and continue scanning the remaining points within the plane's convex hull;
e) if the number of points in the potential object point set of step c) is greater than or equal to the set threshold, the potential object point set is considered an object point set;
f) continue searching near the potential object point set of step e) for points on the hull boundary and add them to the potential object point set; that is, object points outside the hull are also found and added to the potential object point set;
g) add the potential object point set obtained in step f) to the object collection;
h) if there remain unscanned points in the point cloud, return to step b) to look for a new object point set;
i) if all points in the point cloud have been scanned, the algorithm terminates and the object collection is obtained.
Preferably, step 6) is specifically:
Firstly, obtain the object's corresponding region in the color image from the object's three-dimensional point cloud set, and after cropping the image region where the object lies, obtain SURF feature points using OpenCV's FeatureDetector::detect() function;
Secondly, further obtain the SURF feature vectors using OpenCV's Feature2D::compute() function. The object's SURF feature vectors are added to the recognition library as the object's recognition features, or used directly as the feature vectors to be matched for recognizing the object; when matching an object's SURF feature vectors against the corresponding feature vectors in the library, the open-source nearest-neighbor library FLANN is used;
Finally, the matching of the feature vectors with the open-source nearest-neighbor library FLANN is specifically realized using OpenCV's DescriptorMatcher::match() function.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The present invention uses an RGB-D sensor. Compared with existing object recognition and localization systems based on multi-camera color imaging, localization involves no complex computation such as matching between multiple images, so the method has the advantages of small computation, high computational efficiency, fast running speed, strong real-time performance, low localization cost, high accuracy, and strong robustness, achieving accurate, fast, and stable object recognition and localization;
2. The present invention uses a region growing algorithm to extract the background plane in the three-dimensional point cloud; it requires little computation, segments accurately, realizes fast separation and extraction of the background plane, and improves the likelihood of accurate object extraction, localization, and recognition;
3. The present invention combines each object's point cloud set with the corresponding convex hull and, through a second region growing algorithm, rejects the points within the object's hull that belong to the background and adds the points outside the hull that belong to the object, realizing the complete extraction of the object contour and improving the accuracy of object localization.
Detailed description of the invention
Fig. 1 is the flow chart of the method for the present invention.
Fig. 2 is the schematic diagram of the cross-product computation of a surface normal in the present invention.
Fig. 3 is the flow chart of the plane region growing algorithm of the present invention.
Fig. 4 is the flow chart of the segmentation of an object's complete contour and the extraction of its complete point set in the present invention.
Fig. 5(a) is the schematic diagram of the Kinect camera coordinate system, and Fig. 5(b) is the schematic diagram of the actual world coordinate system.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment:
This embodiment provides a visual recognition and localization method based on an RGB-D camera. As shown in Fig. 1, it mainly consists of three-dimensional point cloud acquisition, plane extraction, object segmentation, object feature extraction and matching, and object localization. The method specifically comprises the following steps:
Step 1: acquire a color image and a depth image of the objects with the Kinect camera sensor and convert them into a three-dimensional point cloud image;
In this step, the Kinect sensor acquires RGB-D images, and the three-dimensional point cloud image is obtained through its bundled API functions or through third-party libraries such as OpenNI (Open Natural Interaction) and the Point Cloud Library (PCL).
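As an illustration, the following is a minimal C++ sketch of this acquisition step, assuming PCL was built with OpenNI2 support; the callback name cloudCallback is a placeholder, and a Kinect II would in practice be driven through a libfreenect2-backed grabber rather than the OpenNI2 one shown here.

```cpp
#include <boost/function.hpp>
#include <pcl/io/openni2_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Receives organized RGB-D point clouds streamed from the sensor. An
// organized cloud keeps the image grid (width x height), which the
// neighborhood lookups of step 2 rely on.
void cloudCallback(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
{
    // ... hand the cloud to the plane-extraction pipeline ...
}

int main()
{
    pcl::io::OpenNI2Grabber grabber;
    boost::function<void(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)>
        callback = &cloudCallback;
    grabber.registerCallback(callback);
    grabber.start();
    // ... run until enough frames have been processed ...
    grabber.stop();
    return 0;
}
```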
Step 2: compute the corresponding normal vector for each point of the three-dimensional point cloud image obtained in step 1;
As shown in Fig. 2, the schematic diagram of the cross-product computation of a surface normal: P_k is the point whose surface normal is sought. First find the four points P_1, P_2, P_3 and P_4 above, below and to either side of P_k in the image; P_1 and P_3 form vector ν_1, and P_2 and P_4 form vector ν_2. The surface normal ν_p at P_k is then obtained as the cross product of ν_2 and ν_1, specifically:
ν_p = ν_2 × ν_1
The normal vector of each point of the three-dimensional point cloud image can be calculated by the formula above.
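A minimal sketch of this computation on an organized PCL cloud follows; the function name surfaceNormal and the neighborhood radius step are illustrative assumptions, and bounds/NaN checks are omitted for brevity.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

// Cross-product normal at pixel (u, v) of an organized cloud: the left/right
// neighbors form v1, the up/down neighbors form v2, and the normal is
// n = v2 x v1, exactly as in the formula above.
Eigen::Vector3f surfaceNormal(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                              int u, int v, int step = 1)
{
    const Eigen::Vector3f p1 = cloud(u - step, v).getVector3fMap(); // left
    const Eigen::Vector3f p3 = cloud(u + step, v).getVector3fMap(); // right
    const Eigen::Vector3f p2 = cloud(u, v - step).getVector3fMap(); // above
    const Eigen::Vector3f p4 = cloud(u, v + step).getVector3fMap(); // below

    const Eigen::Vector3f v1 = p3 - p1; // P1 -> P3
    const Eigen::Vector3f v2 = p4 - p2; // P2 -> P4
    return v2.cross(v1).normalized();   // v_p = v2 x v1
}
```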
Step 3: apply the region growing algorithm to the set of normal vectors obtained in step 2 to extract the background plane on which the objects are placed;
In this step, as shown in Fig. 3, first scan the normal vectors of the points of the three-dimensional point cloud image in order. When a vertical normal vector is encountered, keep searching nearby for points whose normal vectors are also vertical and add them to a potential plane point set S. If the number of points N_s in the potential plane point set S is greater than the set threshold N, the potential plane point set S is considered a plane point set and is added to the plane set C; otherwise, continue scanning the remaining normal vectors. When the scan ends, the plane set C is obtained, realizing the extraction of the planes.
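One possible rendering of this region growing in C++ is sketched below, assuming the normals are stored row-major alongside the image grid; the verticality test (a 0.95 dot-product threshold against the up axis) and the 4-connected neighborhood are illustrative choices, not values fixed by the method.

```cpp
#include <cmath>
#include <cstddef>
#include <queue>
#include <vector>
#include <Eigen/Core>

// Region growing over per-pixel normals: seed at any point with a vertical
// normal, flood-fill neighbors that are also vertical (potential plane point
// set S), and accept S as a plane only if it holds more than N points.
std::vector<std::vector<int>> growPlanes(const std::vector<Eigen::Vector3f>& normals,
                                         int width, int height, std::size_t N)
{
    auto isVertical = [](const Eigen::Vector3f& n) {
        return std::fabs(n.dot(Eigen::Vector3f::UnitY())) > 0.95f; // ~18 degree cone
    };
    std::vector<bool> visited(normals.size(), false);
    std::vector<std::vector<int>> planeSet; // plane set C

    for (int seed = 0; seed < static_cast<int>(normals.size()); ++seed) {
        if (visited[seed] || !isVertical(normals[seed])) continue;
        std::vector<int> region; // potential plane point set S
        std::queue<int> frontier;
        frontier.push(seed);
        visited[seed] = true;
        while (!frontier.empty()) {
            const int idx = frontier.front();
            frontier.pop();
            region.push_back(idx);
            const int u = idx % width, v = idx / width;
            const int nbr[4][2] = {{u - 1, v}, {u + 1, v}, {u, v - 1}, {u, v + 1}};
            for (const auto& q : nbr) {
                if (q[0] < 0 || q[0] >= width || q[1] < 0 || q[1] >= height) continue;
                const int j = q[1] * width + q[0];
                if (!visited[j] && isVertical(normals[j])) {
                    visited[j] = true;
                    frontier.push(j);
                }
            }
        }
        if (region.size() > N) planeSet.push_back(region); // threshold test
    }
    return planeSet;
}
```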
Step 4: remove the points of the background plane extracted in step 3, and perform object point cloud set extraction and convex hull extraction on the remaining point cloud;
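A short sketch of the hull operation with PCL's ConvexHull class is given below; treating the plane's hull as a 2-D polygon via setDimension(2) is an assumption consistent with hulling a planar point set.

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/surface/convex_hull.h>

// Compute the convex hull of the extracted plane's point set; the hull
// polygon is what the second region growing (step 5) tests membership against.
pcl::PointCloud<pcl::PointXYZ>::Ptr planeHull(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& planePoints)
{
    pcl::ConvexHull<pcl::PointXYZ> hull;
    hull.setInputCloud(planePoints);
    hull.setDimension(2); // the plane's hull is a 2-D polygon embedded in 3-D
    pcl::PointCloud<pcl::PointXYZ>::Ptr boundary(new pcl::PointCloud<pcl::PointXYZ>);
    hull.reconstruct(*boundary);
    return boundary;
}
```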
Step 5: combine each object point cloud set extracted in step 4 with the corresponding convex hull and perform a second region growing, realizing the segmentation of each object's complete contour and the extraction of its complete point set;
Fig. 4 shows the flow chart of the segmentation of an object's complete contour and the extraction of its complete point set. After the convex hull operation is applied to the plane set obtained in step 3, the resulting plane convex hull contains both the plane point set and the object point sets. To realize the complete extraction of the object contour and improve the accuracy of object localization, the present invention combines each object's point cloud set with the corresponding convex hull and, through a second region growing algorithm, rejects the points within the object's hull that belong to the background and adds the points outside the hull that belong to the object (a code sketch follows the list below). The segmentation of the object's complete contour and the extraction of its complete point set comprise the following steps:
a) input the three-dimensional point cloud, the plane point set, and the points within the plane's convex hull;
b) scan the points within the plane's convex hull in order, looking for points that lie within the hull but do not belong to the plane;
c) once a non-plane point within the hull has been found in step b), keep searching near it for all non-plane points within the hull and add them to the potential object point set S';
d) if the number of points N_s in the potential object point set S' of step c) is less than the set threshold N, return to step b) and continue scanning the remaining points within the plane's convex hull;
e) if the number of points N_s in the potential object point set S' of step c) is greater than or equal to the set threshold N, the potential object point set S' is considered an object point set;
f) continue searching near the potential object point set S' of step e) for points on the hull boundary and add them to S'; that is, object points outside the hull are also found and added to the potential object point set S';
g) add the potential object point set S' obtained in step f) to the object collection C';
h) if there remain unscanned points in the point cloud, return to step b) to look for a new object point set;
i) if all points in the point cloud have been scanned, the algorithm terminates and the object collection C' is obtained.
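As referenced above, a condensed sketch of this second region growing is given below. The membership flags inHull and onPlane and the neighbors callback are assumed to be supplied by the caller, and step f) is approximated by letting the flood fill step from a point inside the hull to an adjacent non-plane point just outside it.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

// Second region growing over flattened pixel indices: seeds are points inside
// the plane's convex hull that are not plane points (step b); each region is
// grown over neighboring non-plane points (step c), allowed to spill one hop
// past the hull boundary to pick up object points outside the hull (step f);
// regions smaller than the threshold N are discarded (steps d/e).
std::vector<std::vector<int>> segmentObjects(
    const std::vector<bool>& inHull,
    const std::vector<bool>& onPlane,
    const std::function<std::vector<int>(int)>& neighbors,
    std::size_t N)
{
    std::vector<bool> visited(inHull.size(), false);
    std::vector<std::vector<int>> objects; // object collection C'

    for (int seed = 0; seed < static_cast<int>(inHull.size()); ++seed) {
        if (visited[seed] || !inHull[seed] || onPlane[seed]) continue; // step b
        std::vector<int> object; // potential object point set S'
        std::queue<int> frontier;
        frontier.push(seed);
        visited[seed] = true;
        while (!frontier.empty()) {
            const int idx = frontier.front();
            frontier.pop();
            object.push_back(idx);
            for (int j : neighbors(idx)) {
                // Grow over non-plane points; crossing the hull boundary is
                // permitted when stepping out from a point inside the hull.
                if (!visited[j] && !onPlane[j] && (inHull[j] || inHull[idx])) {
                    visited[j] = true;
                    frontier.push(j);
                }
            }
        }
        if (object.size() >= N) objects.push_back(object); // steps e/g
    }
    return objects; // step i
}
```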
Step 6: according to the complete contour of each object obtained in step 5, extract the corresponding color image region and perform feature extraction and matching recognition for each object;
In this step, firstly, the object's corresponding region in the color image is obtained from the object's three-dimensional point cloud set; after the image region where the object lies is cropped, SURF feature points are obtained using OpenCV's FeatureDetector::detect() function, and the SURF feature vectors are then obtained using OpenCV's Feature2D::compute() function. The object's SURF feature vectors can be added to the recognition library as the object's recognition features, or used directly as the feature vectors to be matched for recognizing the object. When matching an object's SURF feature vectors against the corresponding feature vectors in the library, the open-source nearest-neighbor library FLANN is used; specifically, the matching of the feature vectors is realized using OpenCV's DescriptorMatcher::match() function.
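A sketch of this pipeline against the current OpenCV API follows; SURF now lives in the opencv_contrib module xfeatures2d (the FeatureDetector/DescriptorMatcher calls named above are its OpenCV 2.x ancestors), and the Hessian threshold of 400 is an illustrative value, not one taken from the patent.

```cpp
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>

// Extract SURF descriptors from the cropped object region 'roi' and match
// them with FLANN against descriptors stored in the recognition library.
std::vector<cv::DMatch> matchObject(const cv::Mat& roi,
                                    const cv::Mat& libraryDescriptors)
{
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    surf->detectAndCompute(roi, cv::noArray(), keypoints, descriptors);

    cv::FlannBasedMatcher matcher; // FLANN-backed nearest-neighbor matching
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors, libraryDescriptors, matches);
    return matches;
}
```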
Step 7: average the points in the complete contour of each object obtained in step 5 to obtain each object's position in the camera coordinate system;
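The averaging here is simply a centroid computation over the object's point set; a one-function sketch using PCL's helper follows (the function name objectPosition is a placeholder).

```cpp
#include <pcl/common/centroid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

// Mean of the object's complete point set = object position in the camera frame.
Eigen::Vector3f objectPosition(const pcl::PointCloud<pcl::PointXYZ>& objectPoints)
{
    Eigen::Vector4f centroid; // homogeneous (x, y, z, 1)
    pcl::compute3DCentroid(objectPoints, centroid);
    return centroid.head<3>();
}
```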
Step 8: transform the position of each object in the camera coordinate system obtained in step 7 into the world coordinate system, realizing the localization of each object.
Fig. 5(a) and Fig. 5(b) are respectively the schematic diagrams of the Kinect camera coordinate system and the actual world coordinate system; both are right-handed coordinate systems. To realize the localization of an object in the world coordinate system, the camera must be calibrated. The camera coordinate system and the world coordinate system are related as follows:
[X_C, Y_C, Z_C]^T = R · [X_W, Y_W, Z_W]^T + T
where X_C, Y_C, Z_C are the position components of the object in the camera coordinate system, X_W, Y_W, Z_W are the position components of the object in the world coordinate system, and R and T are respectively the rotation matrix and translation vector of the camera, i.e., the camera's extrinsic parameters.
The coordinate system conversion requires the rotation matrix R and the translation vector T to compute the object's world coordinates, and these two matrices are obtained by calibrating the camera. The Camera Calibration Toolbox for Matlab was used for the calibration, and the Kinect was calibrated by the checkerboard method. Because the Kinect's default camera coordinate system is located at the infrared camera, as shown in Fig. 5, and the pictures acquired by the Kinect are mirror images, the Kinect's infrared camera is used to acquire infrared images during calibration, and each acquired picture is flipped left to right before being input to the toolbox for calibration. During calibration, the camera first acquires 20 or more pictures of the checkerboard at different angles and distances to compute the camera's intrinsic parameters; finally, the checkerboard is fixed at the target position and a single image is acquired to compute the camera's extrinsic parameters.
The extrinsic parameter matrices obtained after calibration and the object's coordinates in the camera coordinate system computed above are combined in the following operation:
[X_W, Y_W, Z_W]^T = R^(-1) · ([X_C, Y_C, Z_C]^T − T)
which yields the object's spatial coordinate information in the world coordinate system.
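A minimal Eigen sketch of this conversion follows, inverting the extrinsic relationship given above; it assumes R is orthonormal, so its inverse equals its transpose.

```cpp
#include <Eigen/Core>

// Map a camera-frame position into the world frame by inverting
// Pc = R * Pw + T, giving Pw = R^T * (Pc - T). R and T are the rotation
// matrix and translation vector obtained from the checkerboard calibration.
Eigen::Vector3f cameraToWorld(const Eigen::Vector3f& pc,
                              const Eigen::Matrix3f& R,
                              const Eigen::Vector3f& T)
{
    return R.transpose() * (pc - T);
}
```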
The above is only a preferred embodiment of the present invention patent, but the protection scope of the present invention patent is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the scope disclosed by the present invention patent, according to the technical solution and the inventive concept of the present invention patent, falls within the protection scope of the present invention patent.

Claims (4)

1. A visual recognition and localization method based on an RGB-D camera, characterized in that the method comprises the following steps:
1) acquire a color image and a depth image of the objects with a Kinect camera sensor and convert them into a three-dimensional point cloud image;
2) compute the corresponding normal vector for each point of the three-dimensional point cloud image obtained in step 1);
3) apply a region growing algorithm to the set of normal vectors obtained in step 2) to extract the background plane on which the objects are placed;
4) remove the points of the background plane extracted in step 3), and perform object point cloud set extraction and convex hull extraction on the remaining point cloud;
5) combine each object point cloud set extracted in step 4) with the corresponding convex hull and perform a second region growing, realizing the segmentation of each object's complete contour and the extraction of its complete point set;
6) according to the complete contour of each object obtained in step 5), extract the corresponding color image region and perform feature extraction and matching recognition for each object;
7) average the points in the complete contour of each object obtained in step 5) to obtain each object's position in the camera coordinate system;
8) transform the position of each object in the camera coordinate system obtained in step 7) into the world coordinate system, realizing the localization of each object;
in step 5), the segmentation of the object's complete contour and the extraction of the complete point set comprise the following steps:
a) input the three-dimensional point cloud, the plane point set, and the points within the plane's convex hull;
b) scan the points within the plane's convex hull in order, looking for points that lie within the hull but do not belong to the plane;
c) once a non-plane point within the hull has been found in step b), keep searching near it for all non-plane points within the hull and add them to a potential object point set;
d) if the number of points in the potential object point set of step c) is less than the set threshold, return to step b) and continue scanning the remaining points within the plane's convex hull;
e) if the number of points in the potential object point set of step c) is greater than or equal to the set threshold, the potential object point set is considered an object point set;
f) continue searching near the potential object point set of step e) for points on the hull boundary and add them to the potential object point set; that is, object points outside the hull are also found and added to the potential object point set;
g) add the potential object point set obtained in step f) to the object collection;
h) if there remain unscanned points in the point cloud, return to step b) to look for a new object point set;
i) if all points in the point cloud have been scanned, the algorithm terminates and the object collection is obtained.
2. The visual recognition and localization method based on an RGB-D camera according to claim 1, characterized in that: in step 2), the normal vector is calculated as follows:
Let P_k be the point whose surface normal is sought. First find the four points P_1, P_2, P_3 and P_4 above, below and to either side of P_k in the image; P_1 and P_3 form vector ν_1, and P_2 and P_4 form vector ν_2. The surface normal ν_p at P_k is then obtained as the cross product of ν_2 and ν_1, specifically:
ν_p = ν_2 × ν_1
The normal vector of each point of the three-dimensional point cloud image is calculated by the formula above.
3. The visual recognition and localization method based on an RGB-D camera according to claim 1, characterized in that: in step 3), first scan the normal vectors of the points of the three-dimensional point cloud image in order; when a vertical normal vector is encountered, keep searching nearby for points whose normal vectors are also vertical and add them to a potential plane point set; if the number of points in the potential plane point set is greater than the set threshold, the potential plane point set is considered a plane point set and is added to the plane set; otherwise, continue scanning the remaining normal vectors; when the scan ends, the plane set is obtained, realizing the extraction of the planes.
4. The visual recognition and localization method based on an RGB-D camera according to claim 1, characterized in that step 6) is specifically:
Firstly, obtain the object's corresponding region in the color image from the object's three-dimensional point cloud set, and after cropping the image region where the object lies, obtain SURF feature points using OpenCV's FeatureDetector::detect() function;
Secondly, further obtain the SURF feature vectors using OpenCV's Feature2D::compute() function. The object's SURF feature vectors are added to the recognition library as the object's recognition features, or used directly as the feature vectors to be matched for recognizing the object; when matching an object's SURF feature vectors against the corresponding feature vectors in the library, the open-source nearest-neighbor library FLANN is used;
Finally, the matching of the feature vectors with the open-source nearest-neighbor library FLANN is specifically realized using OpenCV's DescriptorMatcher::match() function.
CN201610894251.8A 2016-10-13 2016-10-13 Visual recognition and localization method based on RGB-D camera Active CN106570903B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610894251.8A CN106570903B (en) 2016-10-13 2016-10-13 Visual recognition and localization method based on RGB-D camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610894251.8A CN106570903B (en) 2016-10-13 2016-10-13 Visual recognition and localization method based on RGB-D camera

Publications (2)

Publication Number Publication Date
CN106570903A CN106570903A (en) 2017-04-19
CN106570903B true CN106570903B (en) 2019-06-18

Family

ID=58532076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610894251.8A Active CN106570903B (en) 2016-10-13 2016-10-13 Visual recognition and localization method based on RGB-D camera

Country Status (1)

Country Link
CN (1) CN106570903B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564536A (en) * 2017-12-22 2018-09-21 洛阳中科众创空间科技有限公司 Global optimization method of depth map

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107564059A (en) * 2017-07-11 2018-01-09 北京联合大学 Object positioning method, device and machine vision system based on RGB-D information
CN107480603B (en) * 2017-07-27 2020-09-18 和创懒人(大连)科技有限公司 Synchronous mapping and object segmentation method based on SLAM and depth camera
CN107609520B (en) * 2017-09-15 2020-07-03 四川大学 Obstacle identification method and device and electronic equipment
CN107610176B (en) * 2017-09-15 2020-06-26 斯坦德机器人(深圳)有限公司 Pallet dynamic identification and positioning method, system and medium based on Kinect
CN109870983B (en) * 2017-12-04 2022-01-04 北京京东尚科信息技术有限公司 Method and device for processing tray stack image and system for warehousing goods picking
CN108247635B (en) * 2018-01-15 2021-03-26 北京化工大学 Method for grabbing object by depth vision robot
CN108716324B (en) * 2018-03-26 2020-02-21 江苏大学 Door opening anti-collision system and method suitable for automatic driving automobile
CN108830150B (en) * 2018-05-07 2019-05-28 山东师范大学 Three-dimensional human body attitude estimation method and device
US10452947B1 (en) 2018-06-08 2019-10-22 Microsoft Technology Licensing, Llc Object recognition using depth and multi-spectral camera
CN109101967A (en) * 2018-08-02 2018-12-28 苏州中德睿博智能科技有限公司 Vision-based object recognition and localization method, terminal and storage medium
CN111062987A (en) * 2018-09-05 2020-04-24 天目爱视(北京)科技有限公司 Virtual matrix type three-dimensional measurement and information acquisition device based on multiple acquisition regions
CN109211210B (en) * 2018-09-25 2021-07-13 深圳市超准视觉科技有限公司 Target object identification positioning measurement method and device
CN109801309B (en) * 2019-01-07 2023-06-20 华南理工大学 Obstacle sensing method based on RGB-D camera
US11245875B2 (en) 2019-01-15 2022-02-08 Microsoft Technology Licensing, Llc Monitoring activity with depth and multi-spectral camera
CN109974707B (en) * 2019-03-19 2022-09-23 重庆邮电大学 Indoor mobile robot visual navigation method based on improved point cloud matching algorithm
CN110223297A (en) * 2019-04-16 2019-09-10 广东康云科技有限公司 Segmentation and recognition methods, system and storage medium based on scanning point cloud data
CN110136211A (en) * 2019-04-18 2019-08-16 中国地质大学(武汉) A kind of workpiece localization method and system based on active binocular vision technology
CN110342252A (en) * 2019-07-01 2019-10-18 芜湖启迪睿视信息技术有限公司 Automatic article grasping method and automatic grasping device
CN110349225B (en) * 2019-07-12 2023-02-28 四川易利数字城市科技有限公司 BIM model external contour rapid extraction method
CN110553628A (en) * 2019-08-28 2019-12-10 华南理工大学 Depth camera-based flying object capturing method
CN111476841B (en) * 2020-03-04 2020-12-29 哈尔滨工业大学 Point cloud and image-based identification and positioning method and system
WO2021223124A1 (en) * 2020-05-06 2021-11-11 深圳市大疆创新科技有限公司 Position information obtaining method and device, and storage medium
CN114274139B (en) * 2020-09-27 2024-04-19 西门子股份公司 Automatic spraying method, device, system and storage medium
CN116843631B (en) * 2023-06-20 2024-04-02 安徽工布智造工业科技有限公司 3D visual material separating method for non-standard part stacking in light steel industry

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101486543B1 (en) * 2013-05-31 2015-01-26 한국과학기술원 Method and apparatus for recognition and segmentation object for 3d object recognition
CN104240297A (en) * 2014-09-02 2014-12-24 东南大学 Rescue robot three-dimensional environment map real-time construction method
CN105046710A (en) * 2015-07-23 2015-11-11 北京林业大学 Depth image partitioning and agent geometry based virtual and real collision interaction method and apparatus
CN105913489B (en) * 2016-04-19 2019-04-23 东北大学 Indoor three-dimensional scene reconstruction method using plane features

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108564536A (en) * 2017-12-22 2018-09-21 洛阳中科众创空间科技有限公司 Global optimization method of depth map
CN108564536B (en) * 2017-12-22 2020-11-24 洛阳中科众创空间科技有限公司 Global optimization method of depth map

Also Published As

Publication number Publication date
CN106570903A (en) 2017-04-19

Similar Documents

Publication Publication Date Title
CN106570903B (en) Visual recognition and localization method based on RGB-D camera
CN108406731B (en) Positioning device, method and robot based on depth vision
Lu et al. Robust RGB-D odometry using point and line features
CN108932475A (en) Three-dimensional target recognition system and method based on laser radar and monocular vision
CN108369741B (en) Method and system for registration data
CN104463108B (en) Monocular real-time target recognition and pose measurement method
CN103093191B (en) Object recognition method combining three-dimensional point cloud data with digital image data
CN102141398B (en) Monocular vision-based method for measuring positions and postures of multiple robots
CN102509348B (en) Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN106826815A (en) Method for recognizing and locating target objects based on color image and depth image
CN107907048A (en) Binocular stereo vision three-dimensional measurement method based on line-structured-light scanning
CN109308718B (en) Space personnel positioning device and method based on multiple depth cameras
Gao et al. Study on navigating path recognition for the greenhouse mobile robot based on K-means algorithm
CN106225774B (en) Unmanned agricultural tractor road measuring device and method based on computer vision
CN111856436A (en) Combined calibration device and calibration method for multi-line laser radar and infrared camera
Peng et al. Binocular-vision-based structure from motion for 3-D reconstruction of plants
Chenchen et al. A camera calibration method for obstacle distance measurement based on monocular vision
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
Chakravorty et al. Automatic image registration in infrared-visible videos using polygon vertices
Xiao-Lian et al. Identification and location of picking tomatoes based on machine vision
Singh et al. Towards generation of effective 3D surface models from UAV imagery using open source tools
Song et al. Segmentation and localization method of greenhouse cucumber based on image fusion technology
Bakhshipour et al. Recognition of pomegranate on tree and stereoscopic locating of the fruit
CN105069781A (en) Salient object spatial three-dimensional positioning method
SrirangamSridharan et al. Object localization and size estimation from RGB-D images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant