CN109579852A - Robot autonomous localization method and device based on depth camera - Google Patents
- Publication number
- CN109579852A (application number CN201910056644.5A)
- Authority
- CN
- China
- Prior art keywords
- depth
- image
- point
- robot
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
Abstract
A robot autonomous localization method and device based on a depth camera. Unlike common visual localization schemes, which extract features from color or grayscale images, the depth image carrying depth information is first converted into a grayscale image that encodes that depth information. The conversion either maps a fixed depth increment to one gray level (a linear transform), or divides each depth value by the maximum depth in the scene and multiplies by the maximum pixel value. Feature points are then extracted on the converted grayscale image and mapped into the three-dimensional world through the depth values of the corresponding pixels; the obtained feature points are matched against the already reconstructed map points to compute the current accurate pose of the mobile robot. The method overcomes the drawback of conventional visual localization methods of being too sensitive to changes in ambient light, and has very strong generality.
Description
Technical field
The present invention relates to the field of robotics, in particular to autonomous localization methods and devices for robots, and more particularly to an autonomous localization method and device for a mobile robot based on a depth camera.
Background technique
With the rapid development of mobile robotics, computer vision, SLAM, artificial intelligence, and three-dimensional reconstruction, the research level of multi-functional mobile robots that integrate environment perception, dynamic decision-making and planning, and behavior control and execution has also risen quickly. The performance of mobile robots is constantly improving, and their range of application has expanded greatly: they are widely used not only in industry, agriculture, medical care, services, and similar sectors, but also in urban security, national defense, space exploration, and other harmful and dangerous situations. Their application prospects are very broad.
Autonomous localization is a core technology of mobile robots. How to improve the autonomous localization accuracy of a mobile robot without additional cost has become an urgent problem to be solved.
At present, the autonomous localization of mobile robots mostly relies on two classes of sensors: localization methods based on laser sensors, and visual localization methods based on vision sensors. Among the laser-based methods, the two-dimensional laser sensor is the mainstream localization sensor in current mobile robots, but the information it acquires is too simple and its environmental adaptability is poor, so it often cannot satisfy practical needs. Meanwhile, the price of high-precision laser sensors remains persistently high, which raises the cost of mobile robots and significantly limits their commercialization.
In the other class, visual localization based on vision sensors, the vision sensor is relatively inexpensive compared with laser sensors, especially high-precision ones, and the information it acquires is image data, which is far richer than the two-dimensional planar information acquired by a laser sensor. Vision has therefore become an important development direction for future mobile-robot autonomous localization. However, although a mobile robot using vision sensors for autonomous localization can overcome the laser sensor's problems of scarce information and sensitivity to environmental change, it still falls short in localization accuracy and stability. Current vision-sensor localization methods are divided into monocular, binocular, and RGB-D according to the camera used. These methods mainly acquire grayscale or color images with the vision sensor, and their greatest problem is poor stability: the acquired images are strongly affected by changes in ambient light, so everyday factors such as sunrise, sunset, and lights being switched on or off all cause large changes in image gray level and color, making the autonomous localization algorithm unstable.
Summary of the invention
In view of the above problems, the present invention proposes a robot autonomous localization method and device based on a depth camera, to improve the autonomous localization accuracy and stability of mobile robots and to overcome the influence of ambient light on localization performance.
To solve the above technical problems, according to one aspect of the invention there is provided a robot autonomous localization device based on a depth camera, comprising: an image acquisition module, an image processing module, a tracking and localization module, and a pose adjustment module.

The image acquisition module acquires depth images in real time from the depth camera; a depth image is an image carrying depth information.

The image processing module converts the depth image carrying depth information into a grayscale image that encodes the depth information, and extracts feature points on the converted grayscale image.

The tracking and localization module makes a rough estimate of the robot pose at the current time from the robot's motion state and pose at the previous time, then matches the feature points extracted from the current frame against the reconstructed map points to compute the accurate pose of the current frame, i.e. fine localization.

The pose adjustment module adjusts the robot pose in real time according to the fine-localization result, realizing autonomous localization.
Preferably, the image acquisition module acquires the depth images from the depth camera at a relatively high frequency.
Preferably, before the image processing module converts the depth image carrying depth information into the grayscale image encoding depth information, the depth image is filtered to remove noise.
Preferably, the extracted feature points are corners plus additional description information.

The corners have the corner characteristics of the physical scene, describe the environment accurately, and are stable.

The additional description information is the descriptor of each corner.

After extraction, each corner is projected using its corresponding depth information and converted into a 3D map point. At this stage the 3D coordinates of a map point are relative to the current camera; the formula

(x_w, y_w, z_w)^T = P (x_c, y_c, z_c)^T

converts them into world coordinates, where P is the pose matrix of the current camera, (x_w, y_w, z_w) are the world coordinates of the 3D map point, (x_c, y_c, z_c) are the current-camera coordinates of the 3D map point, and T denotes transposition.

For each acquired feature point, the value at its pixel coordinate in the depth image is exactly the depth value of that feature point; using feature points with depth values in this way aligns the grayscale image and the depth image.
Preferably, the robot is a mobile robot.
According to another aspect of the invention, a robot autonomous localization method based on a depth camera is provided, comprising the following steps:

Step S1: collect sensor information; the sensor is a depth camera, from which depth images are acquired in real time; a depth image is an image carrying depth information.

Step S2: obtain the robot pose at the previous time and the coordinates of the previously reconstructed map points, and make a rough estimate of the robot pose at the current time from the previous robot velocity and pose according to the robot motion model.

Step S3: process the acquired depth image carrying depth information, converting it into a grayscale image encoding the depth information.

Step S4: extract feature points on the converted grayscale image encoding depth information.

Step S5: match the feature points extracted from the current frame against the reconstructed map points. The matching projects each extracted feature point into a 3D (three-dimensional) point cloud using the depth value at the same pixel coordinate in the depth image carrying depth information, and matches the 3D point cloud of this image against the reconstructed sparse 3D (three-dimensional) point-cloud map; the accurate pose of the robot is obtained after matching.

Step S6: save the current accurate pose of the robot and, from the current accurate pose and the robot pose at the previous time, compute an estimate of the robot velocity for use in localization at the next time.
Preferably, in step S1 the depth images are acquired at a relatively high frequency.
Preferably, in step S3, before the depth image carrying depth information is converted into the grayscale image encoding depth information, the depth image is filtered to remove noise.
Preferably, in step S4 the extracted feature points are corners plus additional description information.

The corners have the corner characteristics of the physical scene, describe the environment accurately, and are stable.

The additional description information is the descriptor of each corner.

After extraction, each corner is projected using its corresponding depth information and converted into a 3D map point. At this stage the 3D coordinates of a map point are relative to the current camera; the formula

(x_w, y_w, z_w)^T = P (x_c, y_c, z_c)^T

converts them into world coordinates, where P is the pose matrix of the current camera, (x_w, y_w, z_w) are the world coordinates of the 3D map point, (x_c, y_c, z_c) are the current-camera coordinates of the 3D map point, and T denotes transposition.

For each acquired feature point, the value at its pixel coordinate in the depth image is exactly the depth value of that feature point; using feature points with depth values in this way aligns the grayscale image and the depth image.
According to another aspect of the invention, a mobile robot is provided that includes the above robot autonomous localization device based on a depth camera, or that performs autonomous localization using the above robot autonomous localization method based on a depth camera.
Compared with the prior art, the invention has the following advantages:

1. By converting the depth image into a grayscale image encoding depth information, the invention effectively solves the problem that traditional localization methods are overly sensitive to changes in ambient light.

2. The corners in the grayscale image encoding depth information have strong physical significance and accurately reflect the actual environment, so the extracted feature points are more stable and the localization algorithm is more robust than traditional schemes.

3. When projecting feature points to three-dimensional coordinates, the invention uses the depth value at the corresponding image coordinate in the depth image, achieving perfect alignment between the grayscale image and the depth image and avoiding the error produced when two kinds of images must be aligned.

4. Accurate autonomous localization is achieved without fusing inertial-sensor information.
Description of the drawings

The above and other objects, features, and advantages of the present invention will become apparent from the detailed description of its embodiments in conjunction with the accompanying drawings.

Fig. 1 is a structural schematic diagram of the robot autonomous localization device based on a depth camera.

Fig. 2 is a flowchart of the robot autonomous localization method based on a depth camera.

Fig. 3 shows the rich feature points extracted after a depth image is converted into a grayscale image.
Specific embodiments

Embodiments of the present invention are described with reference to the accompanying drawings. It should be understood that the specific embodiments described here are only used to explain the related content and are not a limitation of the invention. For ease of description, only the parts related to the invention are shown in the drawings. It should also be noted that, in the absence of conflict, the features of the embodiments of the invention can be combined with each other.
To overcome the problem that conventional visual localization methods are too sensitive to changes in ambient light, the invention proposes a robot autonomous localization method and device based on a depth camera. Unlike common visual localization schemes, which extract features from color or grayscale images, this scheme converts the depth image carrying depth information into a grayscale image encoding that depth information. The conversion either maps a fixed depth increment to one gray level (a linear transform), or divides each depth value by the maximum depth in the scene and multiplies by the maximum pixel value. Feature points are extracted on the converted grayscale image and then mapped into the three-dimensional world through the depth values of the corresponding pixels; the obtained feature points are matched against the already reconstructed map points to compute the current accurate pose of the mobile robot. This method of converting a depth image into a grayscale image largely avoids the drawbacks of conventional visual schemes and has very strong generality.
The technical scheme of the invention is a robot autonomous localization method and device based on a depth camera. As shown in Fig. 1, the robot autonomous localization device based on a depth camera comprises:

an image acquisition module, an image processing module, a tracking and localization module, and a pose adjustment module.

The image acquisition module acquires depth images in real time; preferably, it acquires them from the depth camera at a relatively high frequency. The depth image is an image carrying depth information. Preferably, the frequency at which the depth camera publishes depth images is not lower than 30 Hz. If the acquisition frequency is too low, it cannot be guaranteed that every acquired frame can be localized successfully; a publishing frequency of at least 30 Hz solves this problem well.
The image processing module converts the depth image carrying depth information into a grayscale image encoding that depth information, and extracts features on the processed grayscale image.

When processing the acquired depth image, note that a depth image and a grayscale image use different units of description. Depending on the camera, the unit of a depth image is a unit of length, such as meters or millimeters, whereas the value of a single grayscale pixel generally lies between 0 and 255. A suitable method is therefore needed to convert the depth values of the depth image into gray values between 0 and 255. The invention proposes the following method:
First, design a maximum depth value and a minimum depth value according to the actual conditions of the depth images collected on site. Then, for the acquired depth image: where the depth value of a pixel is less than the minimum depth value, set its gray value to 0; where the depth value is greater than the maximum depth value, set its gray value to 255; where the depth value lies between the minimum and maximum depth values, compute the gray value according to the following formula:

gray value = (depth value - minimum depth value) / (maximum depth value - minimum depth value) * 255
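As a minimal sketch (not part of the patent text), the clamped linear mapping above can be written with NumPy. The depth range of 0.3–5 m is an assumed illustrative value, not one specified by the invention:

```python
import numpy as np

def depth_to_gray(depth, d_min=0.3, d_max=5.0):
    """Convert a metric depth image to an 8-bit grayscale image.

    Pixels below d_min map to 0, pixels above d_max map to 255, and
    values in between are scaled linearly, as in the formula above.
    """
    depth = np.asarray(depth, dtype=np.float64)
    gray = (depth - d_min) / (d_max - d_min) * 255.0
    gray = np.clip(gray, 0.0, 255.0)          # clamp out-of-range depths
    return gray.astype(np.uint8)

# A tiny 1x3 depth image in meters: too near, mid-range, too far.
img = depth_to_gray(np.array([[0.1, 2.65, 9.0]]))
print(img.tolist())  # [[0, 127, 255]]
```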
Preferably, the extracted feature points are corners plus additional description information. Because this grayscale image is converted from the depth image, its corners carry the corner characteristics of the physical scene: they describe the environment accurately and are also more stable. Since the directly acquired depth image contains noise, a filtering step needs to be added.

The additional description information is the descriptor of each corner, for which ORB, SIFT, and the like can be used; preferably, ORB feature points are used. Feature points are used, rather than projecting every pixel or simply matching a 3D point cloud, because feature points describe the current environment with far less data: the computational complexity is much lower than that of a dense point cloud, which contains a large amount of unnecessary information and brings an excessive computational burden. After extraction, each corner is projected using its corresponding depth information and can be converted into a 3D (three-dimensional) map point. Note that at this stage the 3D (three-dimensional) coordinates of a map point are relative to the current camera, denoted (x_c, y_c, z_c); they must still be converted into world coordinates (x_w, y_w, z_w) through the pose matrix P of the current camera, that is

(x_w, y_w, z_w)^T = P (x_c, y_c, z_c)^T

where T denotes transposition.
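The camera-to-world conversion above can be sketched with a homogeneous 4x4 pose matrix; the particular rotation and translation below are illustrative values, not from the patent:

```python
import numpy as np

# Hypothetical 4x4 pose matrix P of the current camera in the world frame:
# a 90-degree rotation about z plus a translation (illustrative values only).
P = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 2.0],
              [0.0,  0.0, 1.0, 0.5],
              [0.0,  0.0, 0.0, 1.0]])

def camera_to_world(p_cam, pose):
    """Apply (x_w, y_w, z_w)^T = P (x_c, y_c, z_c)^T via homogeneous coordinates."""
    ph = np.append(np.asarray(p_cam, dtype=float), 1.0)  # -> (x_c, y_c, z_c, 1)
    return (pose @ ph)[:3]

p_world = camera_to_world([2.0, 0.0, 1.0], P)
print(p_world)  # [1.  4.  1.5]
```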
The scheme performs feature-point extraction on the grayscale image converted from the depth image. Once a feature point is obtained, the value at its pixel coordinate in the depth image is exactly the depth value of that feature point; using feature points with depth values in this way achieves perfect alignment between the grayscale image and the depth image, avoiding the error produced when the two kinds of images in an RGB-D camera must be aligned.
After extraction, each corner is projected using its corresponding depth information and can be converted into a 3D map point. This approach solves the error problem of traditional RGB-D visual localization, in which the depth map and the grayscale map cannot be perfectly aligned.
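The per-keypoint depth lookup and projection described above can be sketched as a standard pinhole back-projection. The intrinsics below are assumed illustrative values (the patent specifies no camera model), and the keypoint pixels stand in for corners found by a detector such as ORB:

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); illustrative values only.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def backproject(keypoints, depth):
    """Look up each keypoint's depth at its own pixel in the depth image
    (so no cross-image alignment step is needed) and back-project it
    into camera coordinates.

    keypoints: iterable of (u, v) pixel coordinates, e.g. from an ORB detector.
    depth:     HxW depth image in meters.
    """
    points_cam = []
    for u, v in keypoints:
        z = float(depth[v, u])
        if z <= 0:                      # skip invalid depth readings
            continue
        x = (u - CX) * z / FX           # standard pinhole back-projection
        y = (v - CY) * z / FY
        points_cam.append((x, y, z))
    return points_cam

depth = np.full((480, 640), 2.0)        # flat synthetic depth map, 2 m everywhere
pts = backproject([(320, 240), (100, 120)], depth)
```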
The tracking and localization module makes a rough estimate of the robot pose at the current time from the robot's motion state and pose at the previous time. Usually, after the pose at the previous time has been computed, a rough estimate of the velocity at the previous time is made from the poses; the previous pose and velocity are then substituted into the motion model to estimate the pose at the current time. The initially estimated pose is then further refined with the feature points (carrying depth values) extracted from the grayscale image, to obtain the accurate camera pose. Specifically: once the initial estimate has been obtained, together with the map points projected from the current frame, the reconstructed map points near the initially estimated robot pose are selected, and the current frame's map points are matched against these selected map points. Any matching method for three-dimensional points can be used in this step. Since the map points carry their own descriptors, they can first be matched with a bag-of-words method and then refined with the iterative closest point (ICP) algorithm on the matched pairs. After the accurate pose is obtained, the velocity at this time is roughly estimated from the current and previous poses and saved as the velocity value for the next update, for use in the next estimation. Because the camera's publishing frequency is high, the localization frequency of the whole pipeline is also high; this simple velocity-estimation scheme both saves computation time and yields an accurate pose estimate, after which the matching refinement computes the correct robot pose.
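As an illustrative sketch of the rough estimate described above, a constant-velocity motion model (the patent does not prescribe a specific formulation) replays the relative transform between the last two poses once to predict the current pose:

```python
import numpy as np

def predict_pose(pose_prev, pose_prev2):
    """Constant-velocity prediction on 4x4 homogeneous poses.

    The 'velocity' is the relative transform between the last two poses;
    replaying it once from the latest pose gives the rough estimate that
    the feature matching then refines.
    """
    delta = np.linalg.inv(pose_prev2) @ pose_prev   # motion over the last interval
    return pose_prev @ delta                        # apply it once more

# Pure-translation example: the robot moved 0.1 m along x between the last
# two frames, so it is predicted to be a further 0.1 m along x now.
T_prev2 = np.eye(4)
T_prev = np.eye(4)
T_prev[0, 3] = 0.1
pred = predict_pose(T_prev, T_prev2)
print(pred[0, 3])  # 0.2
```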
The pose adjustment module adjusts the robot pose in real time according to the fine-localization result, realizing autonomous localization.
Preferably, the robot is a mobile robot.
Based on the above modules and without loss of generality, any localization cycle of the robot proceeds as shown in Fig. 2. The robot autonomous localization method based on a depth camera comprises the following steps:

Step S1: collect sensor information. Preferably, the sensor is a depth camera and depth images are acquired from it at a relatively high frequency; a depth image is an image carrying depth information.

Step S2: obtain the robot pose at the previous time and the coordinates of the previously reconstructed map points, and make a rough estimate of the pose at the current time from the previous robot velocity and pose according to the motion model.

Step S3: process the acquired depth image carrying depth information: first filter the depth image to remove noise, then convert it into a grayscale image encoding the depth information. Preferably, the depth values are scaled proportionally to integer gray values in the range 0-255.

Step S4: extract feature points on the processed grayscale image encoding depth information. Preferably, ORB or SIFT feature points are used, with ORB preferred. Because the grayscale image is converted from the depth image carrying depth information, illumination cannot influence these feature points, which makes them robust at the root. A grayscale image converted from a depth image is shown in Fig. 3: a large number of stable ORB feature points can be extracted on it, and these feature points are unaffected by illumination changes, which is especially beneficial for subsequent tracking and matching.

Step S5: project each extracted feature point into a 3D (three-dimensional) point cloud using the depth value at the same pixel coordinate in the depth image carrying depth information, and match the 3D point cloud of this image against the reconstructed sparse 3D (three-dimensional) point-cloud map to obtain the accurate pose of the robot.

Step S6: save the current accurate pose of the robot and, from the current accurate pose and the robot pose at the previous time, compute an estimate of the robot velocity for use in localization at the next time.
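Step S5's point-cloud matching can be illustrated with the single rigid-alignment step that ICP iterates (a Kabsch/SVD solve). This sketch assumes the point correspondences are already known, e.g. from the bag-of-words pairing mentioned earlier; it is an illustration, not the patent's prescribed implementation:

```python
import numpy as np

def kabsch(src, dst):
    """One alignment step of the kind ICP iterates: find the rigid transform
    (R, t) minimizing ||R @ src_i + t - dst_i|| over paired points via SVD."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)        # centroids
    H = (src - cs).T @ (dst - cd)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate and translate a small cloud, then recover the transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = kabsch(src, dst)
```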
The robot autonomous localization method and device based on a depth camera proposed by the invention overcome the problem that conventional visual localization methods are too sensitive to changes in ambient light.

First, conventional visual localization schemes extract features from grayscale or RGB images. The greatest drawback of such methods is that they are easily disturbed by the light in the environment: once the ambient light changes significantly, many feature points change, leading to localization errors. The grayscale image encoding depth information that the invention converts from the depth image carrying depth information has no such worry.

Second, the depth information of an object in the environment is determined by the object's own geometric dimensions and structure and does not change with the ambient light, so corners extracted from the depth map conform better to corner characteristics and are highly representative. Meanwhile, compared with using 3D point-cloud data directly, the scheme uses fewer, stable, representative feature points. A 3D point cloud contains a large amount of unnecessary information, which greatly reduces operating efficiency and prevents real-time execution; using sparse feature points as map points is representative, low in data volume, and stable, and satisfies real-time requirements.
Finally, when a traditional RGB-D camera acquires images, it actually acquires two at once, one depth map and one grayscale map, and aligns the two in software and hardware; but errors caused by camera calibration or system noise always remain and introduce deviations into the localization value. The invention instead extracts feature points on the grayscale image (encoding depth information) converted from the depth image carrying depth information, and when projecting a feature point to three-dimensional coordinates uses the depth value at the same image coordinate in the depth image, thereby achieving perfect alignment between the grayscale image and the depth image and avoiding the error produced when the two kinds of images in an RGB-D camera must be aligned.
The method solves the problem that visual features are particularly sensitive to changes in ambient light, and overcomes the great influence that variations in brightness exert on feature points; it achieves accurate localization in cases where conventional methods cannot extract valid feature points because the light is too dim or the exposure too strong. The extracted feature points are more stable, making the localization algorithm more robust than traditional schemes. Using the depth value at the feature point's own image coordinate in the depth image when projecting to three-dimensional coordinates achieves perfect alignment between the grayscale image and the depth image and improves system performance, and the invention achieves accurate autonomous localization without fusing inertial-sensor information.
The technical solution of the invention has thus been described in conjunction with the preferred embodiments shown in the drawings. However, those skilled in the art will readily appreciate that the above embodiments are intended only to demonstrate the invention clearly and do not limit its scope; the protection scope of the invention is expressly not limited to these specific embodiments. Without departing from the principles of the invention, those skilled in the art can make equivalent changes or replacements to the relevant technical features, and the technical solutions after such changes or replacements will fall within the protection scope of the invention.
Claims (10)
1. A robot autonomous localization device based on a depth camera, characterized by comprising: an image acquisition module, an image processing module, a tracking and localization module, and a pose adjustment module;

the image acquisition module acquires the depth image in real time from the depth camera, the depth image being an image carrying depth information;

the image processing module converts the depth image carrying depth information into a grayscale image encoding the depth information, and extracts feature points on the converted grayscale image;

the tracking and localization module makes a rough estimate of the robot pose at the current time from the robot's motion state and pose at the previous time, then matches the feature points extracted from the current frame against the reconstructed map points to compute the accurate pose of the current frame, i.e. fine localization;

the pose adjustment module adjusts the robot pose in real time according to the fine-localization result, realizing autonomous localization.
2. The robot autonomous localization device based on a depth camera according to claim 1, characterized in that the image acquisition module acquires the depth images from the depth camera at a relatively high frequency.
3. The robot autonomous localization device based on a depth camera according to claim 1, characterized in that, before the image processing module converts the depth image carrying depth information into the grayscale image encoding depth information, the depth image is filtered to remove noise.
4. The robot autonomous localization device based on a depth camera according to claim 3, characterized in that the extracted feature points are corners plus additional description information;

the corners have the corner characteristics of the physical scene, describe the environment accurately, and are stable;

the additional description information is the descriptor of each corner;

after extraction, each corner is projected using its corresponding depth information and converted into a 3D map point; at this stage the 3D coordinates of the map point are relative to the current camera, and the formula

(x_w, y_w, z_w)^T = P (x_c, y_c, z_c)^T

converts them into world coordinates, where P is the pose matrix of the current camera, (x_w, y_w, z_w) are the world coordinates of the 3D map point, (x_c, y_c, z_c) are the current-camera coordinates of the 3D map point, and T denotes transposition;

for each acquired feature point, the value at its pixel coordinate in the depth image is exactly the depth value of that feature point, and using feature points with depth values aligns the grayscale image and the depth image.
5. The robot autonomous localization device based on a depth camera according to claim 1, characterized in that the robot is a mobile robot.
6. A depth-camera-based robot autonomous localization method, characterized by comprising the steps of:
Step S1: collecting sensor information, the sensor being a depth camera; acquiring a depth image in real time from the depth camera, the depth image being an image carrying depth information;
Step S2: obtaining the robot pose at the previous moment and the coordinates of previously reconstructed map points, and using the robot velocity and pose at the previous moment to make a rough estimate of the robot pose at the current moment according to the robot motion model;
Step S3: processing the acquired depth image carrying depth information, and converting the depth image carrying depth information into a grayscale image carrying depth information;
Step S4: extracting feature points on the converted grayscale image carrying depth information;
Step S5: pairing and comparing the reconstructed map points with the feature points extracted in the current frame; in the pairing comparison, the extracted feature points are projected into a three-dimensional point cloud according to the depth values at the same pixel coordinates in the depth image carrying depth information, and the three-dimensional point cloud of this image is matched against the reconstructed sparse three-dimensional point-cloud map; the accurate pose of the robot is obtained after matching;
Step S6: saving the current accurate pose of the robot, and computing an estimate of the robot velocity from the current accurate pose and the robot pose at the previous moment, for use in localization at the next moment.
7. The depth-camera-based robot autonomous localization method according to claim 6, characterized in that
in step S1, the depth image is acquired at a relatively high frequency.
8. The depth-camera-based robot autonomous localization method according to claim 6, characterized in that
in step S3, before the depth image carrying depth information is converted into a grayscale image carrying depth information,
the depth image is filtered to remove noise.
9. The depth-camera-based robot autonomous localization method according to claim 6, characterized in that
in step S4, the extracted feature point consists of a corner point plus additional descriptor information;
the corner point has physical-level corner characteristics, can accurately describe environmental information, and is stable;
the additional descriptor information is the descriptor of the corner point;
after extraction, the corner point is projected using its corresponding depth information and converted into a 3D map point; at this time the 3D coordinates of the map point are relative to the current camera, and are converted into world coordinates using the formula
(xw, yw, zw)^T = P (xc, yc, zc)^T;
where P is the pose matrix of the current camera, (xw, yw, zw) are the world coordinates of the 3D map point, (xc, yc, zc) are the current-camera coordinates of the 3D map point, and T denotes transposition;
for an acquired feature point, the value at its pixel coordinate in the depth image is the depth value of that feature point; feature points carrying depth values are used to align the grayscale image with the depth image.
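The projection of a feature pixel and its depth value into a 3D point in the current-camera frame (the step preceding the world-coordinate conversion above) can be sketched under a pinhole camera model. The intrinsics fx, fy, cx, cy are illustrative assumptions not recited in the claim:

```python
import numpy as np

def pixel_to_camera_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a feature pixel (u, v) with its depth value into a 3D
    point in the current-camera frame under a pinhole model. The claim
    only states that the pixel's value in the depth image is the feature
    point's depth; the intrinsics here are assumed for illustration."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a VGA-class depth camera
p = pixel_to_camera_point(420, 260, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# p = [0.4, 0.08, 2.0]
```

A point produced this way is in current-camera coordinates, so it would then be mapped to world coordinates with the pose matrix P as recited in the claim.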
10. A mobile robot, characterized in that
it comprises the depth-camera-based robot autonomous localization device according to claim 1, or performs
autonomous localization using the depth-camera-based robot autonomous localization method according to claim 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910056644.5A CN109579852A (en) | 2019-01-22 | 2019-01-22 | Robot autonomous localization method and device based on depth camera |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910056644.5A CN109579852A (en) | 2019-01-22 | 2019-01-22 | Robot autonomous localization method and device based on depth camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109579852A true CN109579852A (en) | 2019-04-05 |
Family
ID=65916948
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910056644.5A Pending CN109579852A (en) | 2019-01-22 | 2019-01-22 | Robot autonomous localization method and device based on depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109579852A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012052615A1 (en) * | 2010-10-21 | 2012-04-26 | Zenrobotics Oy | Method for the filtering of target object images in a robot system |
CN104375509A (en) * | 2014-12-11 | 2015-02-25 | 山东大学 | Information fusion positioning system and method based on RFID (radio frequency identification) and vision |
CN105865462A (en) * | 2015-01-19 | 2016-08-17 | 北京雷动云合智能技术有限公司 | Three dimensional SLAM method based on events with depth enhanced vision sensor |
CN106772431A (en) * | 2017-01-23 | 2017-05-31 | 杭州蓝芯科技有限公司 | A kind of Depth Information Acquistion devices and methods therefor of combination TOF technologies and binocular vision |
CN108406731A (en) * | 2018-06-06 | 2018-08-17 | 珠海市微半导体有限公司 | A kind of positioning device, method and robot based on deep vision |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111829489B (en) * | 2019-04-16 | 2022-05-13 | 杭州海康机器人技术有限公司 | Visual positioning method and device |
CN111829489A (en) * | 2019-04-16 | 2020-10-27 | 杭州海康机器人技术有限公司 | Visual positioning method and device |
CN110068824A (en) * | 2019-04-17 | 2019-07-30 | 北京地平线机器人技术研发有限公司 | Sensor pose determination method and apparatus |
CN110068824B (en) * | 2019-04-17 | 2021-07-23 | 北京地平线机器人技术研发有限公司 | Sensor pose determining method and device |
CN110223351A (en) * | 2019-05-30 | 2019-09-10 | 杭州蓝芯科技有限公司 | A kind of depth camera localization method based on convolutional neural networks |
CN110223351B (en) * | 2019-05-30 | 2021-02-19 | 杭州蓝芯科技有限公司 | Depth camera positioning method based on convolutional neural network |
CN110260866A (en) * | 2019-07-19 | 2019-09-20 | 闪电(昆山)智能科技有限公司 | Robot localization and obstacle-avoidance method based on a vision sensor |
WO2021056283A1 (en) * | 2019-09-25 | 2021-04-01 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for adjusting a vehicle pose |
WO2021057739A1 (en) * | 2019-09-27 | 2021-04-01 | Oppo广东移动通信有限公司 | Positioning method and device, apparatus, and storage medium |
CN110823171B (en) * | 2019-11-15 | 2022-03-25 | 北京云迹科技股份有限公司 | Robot positioning method and device and storage medium |
CN110823171A (en) * | 2019-11-15 | 2020-02-21 | 北京云迹科技有限公司 | Robot positioning method and device and storage medium |
CN112053383A (en) * | 2020-08-18 | 2020-12-08 | 东北大学 | Method and device for real-time positioning of robot |
CN112053383B (en) * | 2020-08-18 | 2024-04-26 | 东北大学 | Method and device for positioning robot in real time |
WO2022078467A1 (en) * | 2020-10-14 | 2022-04-21 | 深圳市杉川机器人有限公司 | Automatic robot recharging method and apparatus, and robot and storage medium |
CN112907742A (en) * | 2021-02-18 | 2021-06-04 | 湖南国科微电子股份有限公司 | Visual synchronous positioning and mapping method, device, equipment and medium |
CN117351213A (en) * | 2023-12-06 | 2024-01-05 | 杭州蓝芯科技有限公司 | Box body segmentation positioning method and system based on 3D vision |
CN117351213B (en) * | 2023-12-06 | 2024-03-05 | 杭州蓝芯科技有限公司 | Box body segmentation positioning method and system based on 3D vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109579852A (en) | Robot autonomous localization method and device based on depth camera | |
CN103106688B (en) | Indoor three-dimensional scene reconstruction method based on a two-layer registration method | |
CN104574347B (en) | On-orbit satellite image geometric positioning accuracy evaluation method based on multi-source remote sensing data | |
CN107833181B (en) | Three-dimensional panoramic image generation method based on zoom stereo vision | |
CN108648215B (en) | SLAM motion blur pose tracking algorithm based on IMU | |
CN108615244B (en) | A kind of image depth estimation method and system based on CNN and depth filter | |
CN111612760A (en) | Method and apparatus for detecting obstacles | |
CN109579825B (en) | Robot positioning system and method based on binocular vision and convolutional neural network | |
WO2016118499A1 (en) | Visual localization within lidar maps | |
CN107016697B (en) | A kind of height measurement method and device | |
CN103839258A (en) | Depth perception method of binarized laser speckle images | |
CN107084680B (en) | A kind of target depth measurement method based on machine monocular vision | |
CN104864849B (en) | Vision navigation method and device and robot | |
CN110751123B (en) | Monocular vision inertial odometer system and method | |
CN111127540B (en) | Automatic distance measurement method and system for three-dimensional virtual space | |
CN111307146B (en) | Virtual reality wears display device positioning system based on binocular camera and IMU | |
CN103020988A (en) | Method for generating motion vector of laser speckle image | |
CN107038758B (en) | Augmented reality three-dimensional registration method based on ORB operator | |
CN111998862A (en) | Dense binocular SLAM method based on BNN | |
CN111127556B (en) | Target object identification and pose estimation method and device based on 3D vision | |
CN114677758A (en) | Gait recognition method based on millimeter wave radar point cloud | |
CN110070577A (en) | Vision SLAM key frame and feature point selection method based on characteristic point distribution | |
CN113888639A (en) | Visual odometer positioning method and system based on event camera and depth camera | |
CN105844614B (en) | Vision-based north-finding method based on correcting the robot angle | |
CN107345814A (en) | A kind of mobile robot visual alignment system and localization method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190405 |