CN107689063A - A kind of robot indoor orientation method based on ceiling image - Google Patents
A kind of robot indoor orientation method based on ceiling image
- Publication number
- CN107689063A CN107689063A CN201710625812.9A CN201710625812A CN107689063A CN 107689063 A CN107689063 A CN 107689063A CN 201710625812 A CN201710625812 A CN 201710625812A CN 107689063 A CN107689063 A CN 107689063A
- Authority
- CN
- China
- Prior art keywords
- image
- robot
- pose
- coordinate
- ceiling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Abstract
The invention provides a robot indoor localization method based on ceiling images, comprising the following steps: patterns that differ significantly from the images of neighbouring fields of view are arranged on the ceiling in advance; the robot is programmed to traverse the indoor positions and capture photos with an upward-facing camera, and the collected images are stitched into a global image; the actual indoor dimensions are compared with the pixel dimensions of the global image to obtain the transformation between image coordinates and actual coordinates; the robot's pose in the global image is computed by optical flow and image matching, and its current actual pose is determined from the coordinate transformation. The method is highly adaptable and stable.
Description
Technical field
The invention belongs to the field of visual navigation, and in particular relates to a robot indoor localization method based on ceiling images.
Background technology
With the widespread use of robots, intelligent vehicles and similar equipment in daily life and production, the demand for their intelligence keeps rising, and autonomous localization and navigation are a basic requirement and core technology for mobile robots. In indoor conditions in particular, where precision requirements are high and the movement environment is complex, effective localization and navigation is an important and significant problem.
The relatively mature indoor localization and navigation technologies at present mainly include magnetic track navigation, inertial navigation and visual navigation. Magnetic track navigation requires magnetic tracks to be laid, and the robot can only move along the track, which is inflexible and unsightly. Inertial navigation has limited precision and accumulates error, so it usually needs to be aided by other methods to eliminate the accumulated error and improve localization accuracy. Visual navigation usually performs feature recognition on images in front of the robot and navigates using the image features; however, because objects indoors move in complex ways, such as walking people and moving items, the image features of the scene are very likely to change, causing the localization and navigation algorithm to fail.
The problem with the prior art is therefore that robot indoor localization and navigation have weak adaptability and poor stability.
The content of the invention
In view of this, the present invention aims to propose a robot indoor localization method based on ceiling images that is simple and convenient, highly adaptable and stable.
To achieve the above purpose, the technical solution of the present invention is realised as follows:
A robot indoor localization method based on ceiling images comprises the following steps:
(1) the robot traverses the indoor environment, captures a global image and builds a map;
(2) the robot moves along a planned path; its pose in the global image is computed by optical flow and image matching, and its current actual pose is determined according to the coordinate transformation.
Further, step (1) specifically includes:
(101) patterns that differ significantly from the images of neighbouring fields of view are arranged on the ceiling in advance;
(102) the robot is programmed to traverse the indoor positions and capture photos with the upward-facing camera, and the collected images are stitched into a global image;
(103) the actual indoor dimensions are compared with the pixel dimensions of the global image to obtain the transformation between image coordinates and actual coordinates.
Further, step (2) specifically includes:
(21) relative displacement acquisition: the current image is captured and compared with the previous image frame; the robot's relative displacement, including distance and direction, is obtained by optical flow, approximating the robot's horizontal pixel displacement by the mean horizontal optical flow of the image and its vertical pixel displacement by the mean vertical optical flow;
(22) current image pose prediction: the image pose at the current moment is predicted from the relative displacement and the image pose at the previous moment;
(23) current image pose determination: centred on the predicted image attitude angle and within the error range, a group of attitude angles is obtained by dividing at the resolution step; for each attitude angle, the current image frame is rotated by that angle, an image of the same size is cropped from the centre of the rotated image and matched against the images near the predicted position, and the image pose with the highest matching score is taken as the current image pose;
(24) current actual pose determination: the current image pose is converted into the current actual pose according to the coordinate transformation.
Further, the global image obtained in step (1) is rectangular, and the empty portions are filled with black pixels.
Further, the image coordinate system takes the upper-left corner of the global image as its origin, with the x-axis positive to the right and the y-axis positive downwards.
Further, the actual coordinate system takes the actual point corresponding to the upper-left corner of the global image as its origin, with its x-axis and y-axis positive directions consistent with those of the image coordinate system.
Compared with the prior art, the robot indoor localization method based on ceiling images of the present invention has the following advantages:
(1) Strong adaptability: localization is based on the ceiling pattern, which is little affected by environmental change, so the method can be used even in scenes with frequent personnel flow; compared with methods such as laser or ultrasonic localization, its adaptability is stronger.
(2) Simplicity and convenience: compared with other artificial features or beacons, arranging a pattern on the ceiling changes the original environment little, is simple to carry out, and can even beautify the room; once arranged it can be used indefinitely, without the charging and maintenance that beacons or ultrasonic devices require; the algorithm extracts features directly from the image, so ceilings that already have obvious features of their own may need no pattern at all.
(3) High real-time accuracy: on the basis of relative displacement dead reckoning, a local image is selected from the global image for matching, which improves computational efficiency and real-time performance; compared with localization methods calibrated by a small number of key frames or characteristic images, the invention achieves higher real-time accuracy during motion.
(4) Good stability: because the relative displacement is estimated by optical flow, errors caused by wheel slip on the ground are avoided, reducing the requirements on the wheels and the ground; matching against the global image overcomes the accumulated error of conventional inertial navigation; even if one localization deviates or errs within a certain range, it can still be corrected at the next localization without the localization collapsing, giving higher stability.
Brief description of the drawings
The accompanying drawings forming part of the present invention provide a further understanding of the invention; the schematic embodiments of the invention and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is the main flow chart of the robot indoor localization method based on ceiling images according to the embodiment of the present invention;
Fig. 2 is the flow chart of the indoor localization step of the localization method according to the embodiment of the present invention.
Embodiment
It should be noted that, provided there is no conflict, the embodiments of the present invention and the features within them may be combined with one another.
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
As shown in Fig. 1, the robot indoor localization method based on ceiling images of the present invention comprises the following steps:
(1) Ceiling pattern arrangement: patterns are arranged on the ceiling in advance to ensure that, at any position, the image in the camera's field of view differs significantly from the images in the fields of view of neighbouring positions.
In the example, printed multicolour patterns are pasted on the ceiling.
(2) Global image capture: the robot is programmed to traverse the indoor positions in a fixed direction and capture photos with the upward-facing camera. The captured photos must cover all image information that could be photographed while the robot runs in that direction, and the image sampling frequency and the robot's speed should ensure that every two consecutive frames share at least a 1/3 overlap. All images captured by the robot should cover the image information obtainable at any possible position during actual operation. The collected images are stitched to obtain the global image.
Preferably, the global image obtained in this global image capture step (2) is rectangular, with the empty portions filled with black pixels.
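The row-by-row capture with a fixed overlap can be sketched as a naive mosaic. The patent itself uses OpenCV's feature-based Stitcher class; this NumPy stand-in, which assumes pure translation and a known overlap fraction, is only illustrative:

```python
import numpy as np

def stitch_row(frames, overlap=1/3):
    # Naive mosaic of frames captured along one straight sweep, assuming pure
    # translation and a fixed, known overlap fraction between consecutive
    # frames (the patent uses OpenCV's feature-based Stitcher instead).
    keep = int(round(frames[0].shape[1] * (1 - overlap)))
    parts = [frames[0]] + [f[:, f.shape[1] - keep:] for f in frames[1:]]
    return np.concatenate(parts, axis=1)
```

For two 3 x 9 frames with 1/3 overlap, the mosaic keeps all of the first frame and the right-hand 6 columns of the second, giving a 3 x 15 strip.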
The robot uses a circular chassis with two-wheel differential drive, and an industrial computer controls the robot's motion through the motor drivers. The camera is mounted horizontally directly above the centre of the chassis, facing the ceiling. It is fixed to the robot body, i.e. it moves with the robot. The camera is connected to the industrial computer by USB, and a program on the industrial computer stores, stitches and feeds back the collected images.
The robot is placed in one corner of the room and moved along a wall, collecting one image for roughly every half frame-height of movement. After one row has been collected, the robot shifts sideways by about half a frame from the starting position and collects the next row in the same direction, until the whole room has been covered. All collected images are then stitched to obtain the global image. Here the stitching is implemented with OpenCV's Stitcher class; the resulting global image is rectangular, with the empty portions filled with black pixels. To speed up computation, the global image is converted to grayscale as follows:
G = r × 0.299 + g × 0.587 + b × 0.114
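The grayscale conversion above can be written directly in NumPy; the B, G, R channel order follows OpenCV's storage convention, which is an assumption about how the frames arrive:

```python
import numpy as np

def to_gray(img_bgr):
    # Weighted sum from the formula above: G = 0.299 r + 0.587 g + 0.114 b.
    # OpenCV stores channels in B, G, R order (assumed here).
    b = img_bgr[..., 0].astype(np.float64)
    g = img_bgr[..., 1].astype(np.float64)
    r = img_bgr[..., 2].astype(np.float64)
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Since the three weights sum to 1.000, a pure white pixel maps to 255 exactly.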
(3) Coordinate transformation acquisition: the actual indoor dimensions are obtained by on-site or drawing measurement and compared with the pixel dimensions of the global image to obtain the transformation between image coordinates and actual coordinates.
Preferably, in this coordinate transformation acquisition step (3), the image coordinate system takes the upper-left corner of the global image as its origin, with the x-axis positive to the right and the y-axis positive downwards.
The actual coordinate system in step (3) takes the actual point corresponding to the upper-left corner of the global image as its origin, with its x-axis and y-axis positive directions consistent with those of the image coordinate system.
As for the correspondence between image coordinates and actual coordinates: the actual distance given by the indoor length minus the robot body length corresponds to the pixel distance given by the global image length minus the length of a single frame, and the actual distance given by the indoor width minus the body width corresponds to the pixel distance given by the global image width minus the width of a single frame. The actual-coordinate-to-image-coordinate scales obtained from length and from width may differ slightly; the smaller scale is selected.
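The length-wise correspondence above can be sketched as follows; the numeric values in the test below are hypothetical room and image dimensions, not figures from the patent:

```python
def metres_per_pixel(room_m, body_m, global_px, frame_px):
    # Per the correspondence above: (room size - robot body size) in metres
    # maps onto (global image size - single frame size) in pixels.
    return (room_m - body_m) / (global_px - frame_px)

def image_to_actual(px, py, scale):
    # Image and actual coordinates share origin and axis directions,
    # so the conversion is a pure scaling by the chosen (smaller) scale.
    return px * scale, py * scale
```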
(4) Indoor localization: the robot's pose in the global image is computed by optical flow and image matching, and its current actual pose is determined according to the coordinate transformation.
Completing steps (1) to (3) completes the map construction; localization and navigation can then be carried out on the map. A start point, an end point and a planned path are specified; the robot moves along the planned path and corrects its pose through localization. Before the motion starts, one image frame is first collected for comparison with the next frame. Each collected image is first denoised by Gaussian filtering, convolving each pixel of the input array with a Gaussian kernel and taking the convolution sum as the output pixel value. The two-dimensional Gaussian function is
G(x, y) = 1/(2πσ²) × exp(−(x² + y²)/(2σ²))
To speed up computation, the collected images are also converted to grayscale, with the same formula as before.
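The Gaussian denoising step can be sketched by sampling the two-dimensional Gaussian into a discrete kernel; the kernel size and σ below are assumptions, not values from the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Sample G(x, y) proportional to exp(-(x^2 + y^2) / (2 sigma^2)) on an odd
    # grid centred at the origin and normalise so the weights sum to 1;
    # convolving an image with this kernel performs the Gaussian denoising
    # described above.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```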
As shown in Fig. 2, indoor localization step (4) includes:
(41) Relative displacement acquisition: the current image is captured and compared with the previous image frame; the robot's relative displacement, including distance and direction, is obtained by optical flow, approximating the robot's horizontal pixel displacement by the mean horizontal optical flow of the image and its vertical pixel displacement by the mean vertical optical flow.
The optical flow computation is implemented with OpenCV's cvCalcOpticalFlowLK function, which uses the Lucas-Kanade algorithm. After the horizontal and vertical pixel displacements are obtained by averaging the horizontal and vertical optical flow, the displacement distance and heading angle can be computed.
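Step (41) reduces a dense flow field to a single displacement. The sketch below assumes the flow arrives as an H × W × 2 array of per-pixel (dx, dy) vectors, as a dense optical-flow routine would produce:

```python
import numpy as np

def displacement_from_flow(flow):
    # Approximate the robot's pixel displacement by the mean optical flow,
    # then derive the displacement distance and heading angle from it.
    dx = float(np.mean(flow[..., 0]))      # mean horizontal flow
    dy = float(np.mean(flow[..., 1]))      # mean vertical flow
    dist = (dx * dx + dy * dy) ** 0.5
    heading_deg = float(np.degrees(np.arctan2(dy, dx)))
    return dx, dy, dist, heading_deg
```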
(42) Current image pose prediction: the image pose at the current moment is predicted from the relative displacement and the image pose at the previous moment.
The current position is predicted by adding the estimated displacement vector to the position at the previous moment, and the current heading angle by adding the estimated heading change to the heading at the previous moment.
(43) Current image pose determination: centred on the predicted image attitude angle and within the error range, a group of attitude angles is obtained by dividing at the resolution step. For each attitude angle, the current image frame is rotated by that angle, an image of the same size is cropped from the centre of the rotated image and matched against the images near the predicted position, and the image pose with the highest matching score is taken as the current image pose.
To keep the computation efficient, the heading error range should not be too large and the resolution should not be too fine; an error range of ±3° with a resolution of 1° may be taken, rotating about the image centre. Because rotation usually needs a larger container image, a sufficiently large container, such as a square whose side equals the diagonal of the original image, can be used for the rotation. A smaller block is then cropped from the centre for matching, ensuring that the cropped image is meaningful, i.e. contains no blank area.
With the above parameters, the collected image must be rotated and cropped 7 times, yielding 7 pictures, each of which is matched against suitably sized images near the predicted position. Weighing speed against effect, a correlation matching algorithm is used, in which a larger value represents a higher matching degree; the matching formula (the normalised cross-correlation used by OpenCV's correlation matching) is
R(x, y) = Σ(T(x′, y′) × I(x + x′, y + y′)) / √(Σ T(x′, y′)² × Σ I(x + x′, y + y′)²)
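The matching in step (43) can be sketched as a brute-force normalised-correlation search around the predicted position; the rotation sweep is omitted here for brevity, and the search radius is an assumption:

```python
import numpy as np

def ncc(a, b):
    # Normalised cross-correlation score; a larger value means a better match.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_match(global_img, patch, cx, cy, search=3):
    # Exhaustively score every centre in a (2*search+1)^2 neighbourhood of
    # the predicted centre (cx, cy) and return the best (score, x, y).
    h, w = patch.shape
    best = (-2.0, cx, cy)
    for yy in range(cy - search, cy + search + 1):
        for xx in range(cx - search, cx + search + 1):
            win = global_img[yy - h // 2: yy - h // 2 + h,
                             xx - w // 2: xx - w // 2 + w]
            if win.shape != patch.shape:
                continue
            score = ncc(win, patch)
            if score > best[0]:
                best = (score, xx, yy)
    return best
```

In the full method, each of the 7 rotated centre crops would be scored this way and the best (angle, position) pair taken as the current image pose.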
(44) Current actual pose determination: the current image pose is converted into the current actual pose according to the coordinate transformation.
Preferably, in both global image capture step (2) and indoor localization step (4), the camera is fixed horizontally above the centre of the robot, and its viewpoint moves with the robot.
Whether the actual coordinate of the current robot lies at the specified expected location is then judged; if not, the motion is corrected according to the planned path, keeping the localization error within a small expected range.
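The correction decision can be sketched as a simple tolerance check; both the tolerance value and the (x, y) pose representation are assumptions for illustration:

```python
def needs_correction(actual_xy, expected_xy, tol_m=0.05):
    # Compare the located actual coordinate with the planned waypoint and
    # flag a path correction when the Euclidean deviation exceeds the
    # tolerance (tolerance value is an assumption, not from the patent).
    dx = actual_xy[0] - expected_xy[0]
    dy = actual_xy[1] - expected_xy[1]
    return (dx * dx + dy * dy) ** 0.5 > tol_m
```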
The above is merely a description of preferred embodiments of the present invention and is not intended to limit it; any modification, equivalent substitution, improvement and the like made within the spirit and principle of the present invention shall be included within its protection scope.
Claims (6)
- 1. A robot indoor localization method based on ceiling images, characterised by comprising the following steps: (1) the robot traverses the indoor environment, captures a global image and builds a map; (2) the robot moves along a planned path; its pose in the global image is computed by optical flow and image matching, and its current actual pose is determined according to the coordinate transformation.
- 2. The robot indoor localization method based on ceiling images according to claim 1, characterised in that step (1) specifically includes: (101) patterns that differ significantly from the images of neighbouring fields of view are arranged on the ceiling in advance; (102) the robot is programmed to traverse the indoor positions and capture photos with the upward-facing camera, and the collected images are stitched into a global image; (103) the actual indoor dimensions are compared with the pixel dimensions of the global image to obtain the transformation between image coordinates and actual coordinates.
- 3. The robot indoor localization method based on ceiling images according to claim 1, characterised in that step (2) specifically includes: (21) relative displacement acquisition: the current image is captured and compared with the previous image frame; the robot's relative displacement, including distance and direction, is obtained by optical flow, approximating the robot's horizontal pixel displacement by the mean horizontal optical flow of the image and its vertical pixel displacement by the mean vertical optical flow; (22) current image pose prediction: the image pose at the current moment is predicted from the relative displacement and the image pose at the previous moment; (23) current image pose determination: centred on the predicted image attitude angle and within the error range, a group of attitude angles is obtained by dividing at the resolution step; for each attitude angle, the current image frame is rotated by that angle, an image of the same size is cropped from the centre of the rotated image and matched against the images near the predicted position, and the image pose with the highest matching score is taken as the current image pose; (24) current actual pose determination: the current image pose is converted into the current actual pose according to the coordinate transformation.
- 4. The robot indoor localization method based on ceiling images according to claim 1, characterised in that the global image obtained in step (1) is rectangular, and the empty portions are filled with black pixels.
- 5. The robot indoor localization method based on ceiling images according to claim 2, characterised in that the image coordinate system takes the upper-left corner of the global image as its origin, with the x-axis positive to the right and the y-axis positive downwards.
- 6. The robot indoor localization method based on ceiling images according to claim 2, characterised in that the actual coordinate system takes the actual point corresponding to the upper-left corner of the global image as its origin, with its x-axis and y-axis positive directions consistent with those of the image coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710625812.9A CN107689063A (en) | 2017-07-27 | 2017-07-27 | A kind of robot indoor orientation method based on ceiling image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710625812.9A CN107689063A (en) | 2017-07-27 | 2017-07-27 | A kind of robot indoor orientation method based on ceiling image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107689063A true CN107689063A (en) | 2018-02-13 |
Family
ID=61153127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710625812.9A Pending CN107689063A (en) | 2017-07-27 | 2017-07-27 | A kind of robot indoor orientation method based on ceiling image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107689063A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109029464A (en) * | 2018-08-21 | 2018-12-18 | 北京理工大学 | A kind of vision two dimensional code indoor orientation method setting pattern image certainly |
CN109186606A (en) * | 2018-09-07 | 2019-01-11 | 南京理工大学 | A kind of robot composition and air navigation aid based on SLAM and image information |
CN109520509A (en) * | 2018-12-10 | 2019-03-26 | 福州臻美网络科技有限公司 | A kind of charging robot localization method |
CN109634297A (en) * | 2018-12-18 | 2019-04-16 | 辽宁壮龙无人机科技有限公司 | A kind of multi-rotor unmanned aerial vehicle and control method based on light stream sensor location navigation |
CN109901594A (en) * | 2019-04-11 | 2019-06-18 | 清华大学深圳研究生院 | A kind of localization method and system of weed-eradicating robot |
CN110028017A (en) * | 2019-04-08 | 2019-07-19 | 杭州国辰牵星科技有限公司 | A kind of passive vision navigation unmanned fork lift system and air navigation aid for explosion-proof warehouse |
CN110509273A (en) * | 2019-08-16 | 2019-11-29 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | The robot mechanical arm of view-based access control model deep learning feature detects and grasping means |
CN110587621A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Robot, robot-based patient care method and readable storage medium |
CN110595480A (en) * | 2019-10-08 | 2019-12-20 | 瓴道(上海)机器人科技有限公司 | Navigation method, device, equipment and storage medium |
CN111862214A (en) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
CN111912337A (en) * | 2020-07-24 | 2020-11-10 | 上海擎朗智能科技有限公司 | Method, device, equipment and medium for determining robot posture information |
WO2021093288A1 (en) * | 2019-11-15 | 2021-05-20 | 浙江大学华南工业技术研究院 | Magnetic stripe-simulation positioning method and device based on ceiling-type qr codes |
CN114216454A (en) * | 2021-10-27 | 2022-03-22 | 湖北航天飞行器研究所 | Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS rejection environment |
WO2023000528A1 (en) * | 2021-07-23 | 2023-01-26 | 深圳市优必选科技股份有限公司 | Map positioning method and apparatus, computer-readable storage medium and terminal device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105225240A (en) * | 2015-09-25 | 2016-01-06 | 哈尔滨工业大学 | The indoor orientation method that a kind of view-based access control model characteristic matching and shooting angle are estimated |
CN106338991A (en) * | 2016-08-26 | 2017-01-18 | 南京理工大学 | Robot based on inertial navigation and two-dimensional code and positioning and navigation method thereof |
CN106681353A (en) * | 2016-11-29 | 2017-05-17 | 南京航空航天大学 | Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion |
CN106708048A (en) * | 2016-12-22 | 2017-05-24 | 清华大学 | Ceiling image positioning method of robot and ceiling image positioning system thereof |
-
2017
- 2017-07-27 CN CN201710625812.9A patent/CN107689063A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105225240A (en) * | 2015-09-25 | 2016-01-06 | 哈尔滨工业大学 | The indoor orientation method that a kind of view-based access control model characteristic matching and shooting angle are estimated |
CN106338991A (en) * | 2016-08-26 | 2017-01-18 | 南京理工大学 | Robot based on inertial navigation and two-dimensional code and positioning and navigation method thereof |
CN106681353A (en) * | 2016-11-29 | 2017-05-17 | 南京航空航天大学 | Unmanned aerial vehicle (UAV) obstacle avoidance method and system based on binocular vision and optical flow fusion |
CN106708048A (en) * | 2016-12-22 | 2017-05-24 | 清华大学 | Ceiling image positioning method of robot and ceiling image positioning system thereof |
Non-Patent Citations (1)
Title |
---|
Xu De et al.: "Robot Vision Measurement and Control", 31 January 2016, National Defense Industry Press *
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109029464A (en) * | 2018-08-21 | 2018-12-18 | 北京理工大学 | A kind of vision two dimensional code indoor orientation method setting pattern image certainly |
CN109186606B (en) * | 2018-09-07 | 2022-03-08 | 南京理工大学 | Robot composition and navigation method based on SLAM and image information |
CN109186606A (en) * | 2018-09-07 | 2019-01-11 | 南京理工大学 | A kind of robot composition and air navigation aid based on SLAM and image information |
CN109520509A (en) * | 2018-12-10 | 2019-03-26 | 福州臻美网络科技有限公司 | A kind of charging robot localization method |
CN109634297A (en) * | 2018-12-18 | 2019-04-16 | 辽宁壮龙无人机科技有限公司 | A kind of multi-rotor unmanned aerial vehicle and control method based on light stream sensor location navigation |
CN110028017A (en) * | 2019-04-08 | 2019-07-19 | 杭州国辰牵星科技有限公司 | A kind of passive vision navigation unmanned fork lift system and air navigation aid for explosion-proof warehouse |
CN109901594A (en) * | 2019-04-11 | 2019-06-18 | 清华大学深圳研究生院 | A kind of localization method and system of weed-eradicating robot |
CN110509273A (en) * | 2019-08-16 | 2019-11-29 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | The robot mechanical arm of view-based access control model deep learning feature detects and grasping means |
CN110509273B (en) * | 2019-08-16 | 2022-05-06 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | Robot manipulator detection and grabbing method based on visual deep learning features |
CN110587621A (en) * | 2019-08-30 | 2019-12-20 | 深圳智慧林网络科技有限公司 | Robot, robot-based patient care method and readable storage medium |
CN110595480A (en) * | 2019-10-08 | 2019-12-20 | 瓴道(上海)机器人科技有限公司 | Navigation method, device, equipment and storage medium |
WO2021093288A1 (en) * | 2019-11-15 | 2021-05-20 | 浙江大学华南工业技术研究院 | Magnetic stripe-simulation positioning method and device based on ceiling-type qr codes |
CN111912337A (en) * | 2020-07-24 | 2020-11-10 | 上海擎朗智能科技有限公司 | Method, device, equipment and medium for determining robot posture information |
US11644302B2 (en) | 2020-07-24 | 2023-05-09 | Keenon Robotics Co., Ltd. | Method and apparatus for determining pose information of a robot, device and medium |
CN111862214A (en) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method and device, computer equipment and storage medium |
CN111862214B (en) * | 2020-07-29 | 2023-08-25 | 上海高仙自动化科技发展有限公司 | Computer equipment positioning method, device, computer equipment and storage medium |
WO2023000528A1 (en) * | 2021-07-23 | 2023-01-26 | 深圳市优必选科技股份有限公司 | Map positioning method and apparatus, computer-readable storage medium and terminal device |
CN114216454A (en) * | 2021-10-27 | 2022-03-22 | 湖北航天飞行器研究所 | Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS rejection environment |
CN114216454B (en) * | 2021-10-27 | 2023-09-08 | 湖北航天飞行器研究所 | Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS refusing environment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107689063A (en) | A kind of robot indoor orientation method based on ceiling image | |
CN111862672B (en) | Parking lot vehicle self-positioning and map construction method based on top view | |
Heng et al. | Project autovision: Localization and 3d scene perception for an autonomous vehicle with a multi-camera system | |
CN109191504A (en) | A kind of unmanned plane target tracking | |
Scaramuzza et al. | Extrinsic self calibration of a camera and a 3d laser range finder from natural scenes | |
CN108406731A (en) | A kind of positioning device, method and robot based on deep vision | |
CN111862673B (en) | Parking lot vehicle self-positioning and map construction method based on top view | |
CN109725645B (en) | Nested unmanned aerial vehicle landing cooperation sign design and relative pose acquisition method | |
CN108226938A (en) | A kind of alignment system and method for AGV trolleies | |
CN110163963B (en) | Mapping device and mapping method based on SLAM | |
CN108733039A (en) | The method and apparatus of navigator fix in a kind of robot chamber | |
CN205426175U (en) | Fuse on -vehicle multisensor's SLAM device | |
CN110308729A (en) | The AGV combined navigation locating method of view-based access control model and IMU or odometer | |
Shen et al. | Localization through fusion of discrete and continuous epipolar geometry with wheel and IMU odometry | |
CN112802196B (en) | Binocular inertia simultaneous positioning and map construction method based on dotted line feature fusion | |
CN208323361U (en) | A kind of positioning device and robot based on deep vision | |
CN110108269A (en) | AGV localization method based on Fusion | |
CN110207722A (en) | A kind of automation calibration for cameras mileage system and method | |
CN117367427A (en) | Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment | |
CN115147344A (en) | Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance | |
Park et al. | Global map generation using LiDAR and stereo camera for initial positioning of mobile robot | |
Fang et al. | Ground-texture-based localization for intelligent vehicles | |
CN116804553A (en) | Odometer system and method based on event camera/IMU/natural road sign | |
CN114485648B (en) | Navigation positioning method based on bionic compound eye inertial system | |
Hoang et al. | Planar motion estimation using omnidirectional camera and laser rangefinder |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180213 |
|
RJ01 | Rejection of invention patent application after publication |