CN111968177B - Mobile robot positioning method based on fixed camera vision - Google Patents

Mobile robot positioning method based on fixed camera vision

Info

Publication number
CN111968177B
Authority
CN
China
Prior art keywords
coordinate system
image
mobile robot
camera
coordinate
Prior art date
Legal status
Active
Application number
CN202010709719.8A
Other languages
Chinese (zh)
Other versions
CN111968177A (en)
Inventor
王翔宇
刘晓贝
梁升一
梁静思
刘维明
李世华
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010709719.8A
Publication of CN111968177A
Application granted
Publication of CN111968177B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G01C 21/20: Instruments for performing navigational calculations
    • G06K 7/10861: Sensing of data fields affixed to objects or articles, e.g. coded labels, by optical scanning
    • G06K 7/1417: Optical code recognition adapted to 2D bar codes
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a mobile robot positioning method based on fixed camera vision. Firstly, a two-dimensional code marker is pasted on the mobile robot, and a feature point group of the marker in the scene image is obtained through preliminary matching with the ORB feature detection algorithm; secondly, outliers are removed with a radius filtering algorithm to obtain the feature points concentrated on the two-dimensional code marker in the scene image, and the average of their coordinates is taken as the point-group center, giving the image pixel coordinates of the marker center, i.e. completing the positioning of the mobile robot in the image pixel coordinate system; finally, a global coordinate conversion model is established for the scene, the relevant parameters are obtained through camera calibration, and the image pixel coordinates of the mobile robot in the image pixel coordinate system are converted into its actual world coordinates in the world coordinate system, thereby realizing global visual positioning of the mobile robot. The proposed positioning method has good real-time performance and high positioning precision.

Description

Mobile robot positioning method based on fixed camera vision
Technical Field
The invention belongs to the technical field of mobile robot positioning, and particularly relates to a mobile robot positioning method based on fixed camera vision.
Background
With the continuous growth of indoor positioning demand, indoor positioning methods with high effectiveness and adaptability have become a research hotspot. In the field of mobile robots, obtaining an accurate real-time position of the robot is likewise a key problem.
Existing positioning technologies include inertial navigation positioning, wireless signal positioning, laser radar positioning, visual positioning and the like. Inertial navigation accumulates large errors due to drift, wireless signals are susceptible to interference that makes positioning inaccurate, and laser radar equipment is costly. In contrast, visual positioning costs less while still achieving relatively high accuracy.
Visual localization methods can be divided into relative localization and global localization. Visual SLAM (Simultaneous Localization And Mapping) is widely used for relative positioning; current SLAM mainly adopts landmark-based methods, in which a camera captures key features of the scene around the mobile robot to serve as environmental landmarks for computing the relative position. Although this approach can achieve a good positioning effect and can be applied over a large range of scenes, when the mobile robot is in a changing environment, changes in the map information affect the positioning. Meanwhile, the camera must be carried on the mobile robot, which complicates the robot's structure and appearance, and it requires an additional processor or occupies the processor resources of the robot itself. In global positioning, the camera is fixed in the scene and can observe the whole field of view; ignoring occlusion, it is unaffected by changes of objects in the scene, is independent of the robot, and does not occupy the mobile robot's processor resources. A common approach is positioning by target feature matching, and the classical feature detection methods include the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm and the Oriented FAST and Rotated BRIEF (ORB) algorithm. SIFT obtains more feature points and has better rotation invariance and scale invariance, but its efficiency is low. SURF improves on SIFT in efficiency, with other performance roughly equal to SIFT, but it still hardly meets real-time requirements. The ORB algorithm is more efficient than the former two, but suffers from poor scale invariance.
Disclosure of Invention
The purpose of the invention is as follows: aiming at the low precision of existing mobile robot positioning methods and the poor scale invariance of the ORB algorithm, a mobile robot positioning method based on fixed camera vision is provided.
The technical scheme is as follows: in order to realize the purpose of the invention, the technical scheme adopted by the invention is as follows: a mobile robot positioning method based on fixed camera vision comprises the following steps:
firstly, selecting a two-dimensional code mark as a template image, then pasting the two-dimensional code mark on a mobile robot, and taking the mobile robot image shot by a fixed camera as a scene image. Extracting feature points of two-dimensional code marks in the template image and two-dimensional code marks in the scene image by using an ORB algorithm, calculating the similarity between the feature points according to the Hamming distance between the feature point coordinate vector of the two-dimensional code marks in the template image and the feature point coordinate vector of the two-dimensional code marks in the scene image, and finding the most similar feature point in the scene image for each feature point in the template image by comparing the similarity between every two feature points;
secondly, removing outliers in the point groups of the scene image subjected to the initial matching of the feature points in the first step by adopting a radius filtering algorithm to obtain feature point groups which are distributed on the two-dimensional code marks in the scene image in a centralized manner, and then taking the coordinate average value of the feature point groups distributed in the centralized manner as the center of the point group to obtain image pixel coordinates of the center of the two-dimensional code marks in the scene image, namely the image pixel coordinates of the mobile robot in a scene image pixel coordinate system, so as to realize the positioning of the mobile robot in the scene image pixel coordinate system;
and thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are positioned, establishing a global coordinate conversion relation according to a camera imaging model, and converting the obtained image pixel coordinates of the mobile robot into actual world coordinates of the mobile robot based on camera calibration, thereby realizing global visual positioning of the mobile robot.
Further, in the first step, the ORB algorithm is used to extract feature points in the template image and the scene image, and feature point matching is then performed according to the Hamming distance to find the matching pair set with the minimum distance, that is, for each feature point in the template image, the most similar feature point is found in the scene image. The Hamming distance is defined as the number of differing characters at corresponding positions of two equal-length character strings; the smaller the distance between two feature vectors, the higher the similarity. Let the image pixel coordinates corresponding to the two feature points in the template image and the scene image be $p_1 = (u_1, v_1)$ and $p_2 = (u_2, v_2)$, with binary ORB descriptors $k_1$ and $k_2$ of length 256; then the Hamming distance corresponding to these two feature points is:

$$D(k_1, k_2) = \sum_{i=1}^{256} \left( k_1^{(i)} \oplus k_2^{(i)} \right)$$

where $\oplus$ denotes the bitwise exclusive OR.
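As an illustration of this step, a minimal sketch using OpenCV's ORB detector and brute-force Hamming matching is given below; the file names and the feature count are placeholders, not values from the patent:

```python
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # two-dimensional code template
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # frame from the fixed camera

orb = cv2.ORB_create(nfeatures=500)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

# For each template descriptor, find the scene descriptor with the smallest
# Hamming distance, as in the preliminary matching described above.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.match(des_t, des_s)

# Pixel coordinates in the scene image of the preliminarily matched points.
pts = [kp_s[m.trainIdx].pt for m in matches]
```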
further, in the second step, in the scene image, then a proper filtering radius is selected, for a certain feature point in the scene image, if the two-dimensional coordinate distance between other feature points and the feature point is less than or equal to the filtering radius, the other feature points are called as neighbors of the feature point, all the feature points in the scene image are traversed, the number of the neighbors of each feature point is counted, a proper threshold number is set, the feature points of which the number reaches the threshold number are retained, and the feature points which do not reach the threshold number are removed. And then, the average coordinate of the point group is obtained and used as the center of the point group, so that the image pixel coordinate of the center of the two-dimensional code mark in the scene image pixel coordinate system, namely the image pixel coordinate of the mobile robot in the scene image pixel coordinate system is obtained, and the positioning of the mobile robot in the scene image pixel coordinate system is further realized.
Further: in the third step, after the image pixel coordinates of the mobile robot in the scene image are located, a coordinate conversion relation is established according to a camera imaging model, a classical global coordinate conversion model is established, key parameters in the global coordinate conversion model are camera parameters, the camera parameters are usually calculated in an experimental mode, and the process is camera calibration; the whole global coordinate conversion model relates to four coordinate systems, namely an image pixel coordinate system, an image physical coordinate system, a camera coordinate system and a world coordinate system. And converting the obtained image pixel coordinates of the mobile robot in the image pixel coordinate system into actual world coordinates of the mobile robot in a world coordinate system, thereby realizing the global visual positioning of the mobile robot.
(1) Conversion of image physical coordinate system to image pixel coordinate system
As shown in fig. 2, the image pixel coordinate system is a two-dimensional rectangular coordinate system that reflects the arrangement of pixels in the camera chip. Its origin $O'$ is at the upper left corner of the image, and the $u$ and $v$ coordinate axes coincide with two edges of the image; pixel coordinates take discrete values in units of pixels. The image physical coordinate system is an idealized system with the image center $O$ as origin, whose $x$ and $y$ coordinate axes are parallel to the $u$ and $v$ axes respectively. The two coordinate systems are related by a translation of $(u_0, v_0)$.
If the physical size of a single pixel of the camera's photosensitive element is $dx \times dy$ and $(x, y)$ are the image physical coordinates of the mobile robot in the image physical coordinate system, then the image pixel coordinates $(u, v)$ of the mobile robot obtained in the second step satisfy:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$

Writing the above formula in homogeneous form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the image physical coordinate system to the image pixel coordinate system;
(2) Conversion of camera coordinate system to image physical coordinate system
The camera coordinate system is a three-dimensional rectangular coordinate system whose origin $O_c$ is at the optical center of the lens; its $x_c$ and $y_c$ axes are parallel to the two sides of the image plane, and its $z_c$ axis is the optical axis of the lens, perpendicular to the image plane.
As shown in FIG. 3, let the coordinates of the mobile robot in the camera coordinate system be $P(x_c, y_c, z_c)$. The point $P$ is projected onto the image plane along the ray through the projection center; the projection point $P'(x, y)$ on the image physical coordinate plane gives the coordinates of the mobile robot in the image physical coordinate system, and the coordinates of $P'$ in the camera coordinate system are $(x, y, f)$, where $f$ is the vertical distance from the camera optical center $O_c$ to the origin $O$ of the image physical coordinate system. By the principle of similar triangles:

$$\frac{x}{x_c} = \frac{y}{y_c} = \frac{f}{z_c}, \qquad \text{i.e.} \quad x = \frac{f x_c}{z_c}, \quad y = \frac{f y_c}{z_c}$$

Likewise, the above equation is written in homogeneous form:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the camera coordinate system to the image physical coordinate system;
(3) Conversion of world coordinate system to camera coordinate system
The world coordinate system describes the position of an object in real space. The camera coordinate system can be obtained from the world coordinate system by a rotation $R_{3\times3}$ and a translation $t_{3\times1}$. Let the world coordinates of the mobile robot in the world coordinate system be $(x_w, y_w, z_w)$; then the conversion relationship can be expressed as:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the world coordinate system to the camera coordinate system;
(4) Conversion of world coordinate system to image pixel coordinate system
By combining the formulas in (1), (2) and (3), the conversion relationship between the world coordinate system and the image pixel coordinate system is obtained:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

According to the above formula, let:

$$K_1 = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

In the expression for $K_1$, $f_x = f/dx$ and $f_y = f/dy$ are the normalized focal lengths of the camera along the $x$ and $y$ axis directions, in units of pixels;
similarly, let:

$$K_2 = \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix}$$

wherein $K_1$ is called the internal parameter matrix of the camera and $K_2$ the external parameter matrix; letting $K = K_1 K_2$, $K$ is called the projection matrix.
The conversion relation formula between the world coordinate system and the pixel coordinate system can be written as follows:
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
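As a numerical illustration of this conversion chain, the sketch below builds $K_1$ and $K_2$ from assumed parameter values (all numbers are illustrative, not calibration results from the patent) and projects a world point to pixel coordinates:

```python
import numpy as np

# Illustrative intrinsics: fx, fy in pixels, (u0, v0) the principal point.
K1 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])

# Illustrative extrinsics: camera axes aligned with the world, 2 m above the plane.
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])
K2 = np.hstack([R, t])          # external parameter matrix [R | t], shape 3x4

K = K1 @ K2                     # projection matrix

Pw = np.array([0.5, 0.3, 0.0, 1.0])   # homogeneous world point on the ground plane
uvw = K @ Pw                    # equals z_c * (u, v, 1)
u, v = uvw[:2] / uvw[2]         # perspective division recovers pixel coordinates
print(u, v)                     # -> 520.0, 360.0 for these numbers
```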
therefore, the key problem is to obtain an internal parameter matrix and an external parameter matrix of the camera. The method adopts a Zhangzhen calibration method, and acquires 20-30 images for calibration. Therefore, the conversion relation between the image pixel coordinate of the mobile robot in the image pixel coordinate system and the world coordinate of the mobile robot in the world coordinate system is obtained. Therefore, after the image pixel coordinates of the mobile robot in the image pixel coordinate system are obtained, the world coordinates of the mobile robot in the world coordinate system can be obtained through the conversion, and the global visual positioning of the mobile robot is further completed.
Beneficial effects: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) A two-dimensional code is used as the marker, so it serves as a positioning marker and can subsequently also be used to store other information.
(2) The ORB algorithm is combined with radius filtering, and positioning is performed directly from the concentrated distribution of matched features; this avoids the influence of ORB's poor scale invariance and effectively improves the detection and positioning speed.
Drawings
FIG. 1 is a schematic diagram of a radius filtering algorithm;
FIG. 2 is a schematic diagram of an image pixel coordinate system and an image physical coordinate system;
FIG. 3 is a diagram of a global visual model.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
Firstly, selecting a two-dimensional code mark as a template image, then pasting the two-dimensional code mark on a mobile robot, and taking the mobile robot image shot by a fixed camera as a scene image. Extracting feature points of two-dimensional code marks in the template image and two-dimensional code marks in the scene image by using an ORB algorithm, calculating the similarity between the feature points according to the Hamming distance between the feature point coordinate vector of the two-dimensional code marks in the template image and the feature point coordinate vector of the two-dimensional code marks in the scene image, and finding the most similar feature points in the scene image for each feature point in the template image by comparing the similarity between every two feature points;
secondly, removing outliers in the point groups of the scene image subjected to the initial matching of the feature points in the first step by adopting a radius filtering algorithm to obtain feature point groups which are distributed on the two-dimensional code marks in the scene image in a centralized manner, and then taking the coordinate average value of the feature point groups distributed in the centralized manner as the center of the point group to obtain image pixel coordinates of the center of the two-dimensional code marks in the scene image, namely the image pixel coordinates of the mobile robot in a scene image pixel coordinate system, so as to realize the positioning of the mobile robot in the scene image pixel coordinate system;
and thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are positioned, establishing a global coordinate conversion relation according to a camera imaging model, and converting the obtained image pixel coordinates of the mobile robot into actual world coordinates of the mobile robot based on camera calibration, thereby realizing global visual positioning of the mobile robot.
In order to verify the effectiveness of the proposed mobile robot positioning method based on fixed camera vision in practical engineering applications, experiments on the positioning performance were carried out on a self-built experimental platform, covering two aspects: trajectory effect and static positioning accuracy.
In the trajectory experiment, the mobile robot was controlled to move along a roughly rectangular route and positioned by the proposed method, and the moving trajectory was drawn in the image; the resulting trajectory is continuous, which indicates that the method has good real-time performance.
In the positioning accuracy experiment, 9 points were taken at equal intervals on the two diagonals of the scene, the mobile robot was moved to each of these points for positioning, and the coordinates output by the algorithm together with the actual global coordinates were recorded and plotted on a graph. The coordinate of the mobile robot is defined as the center coordinate of the two-dimensional code marker carried on it, and the error is defined as the Euclidean distance between the actually measured coordinate of the mobile robot and the coordinate located by the algorithm. The average static positioning error obtained in the experiment is 1.28 cm, showing high precision.
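For concreteness, the error metric defined above reduces to a few lines; the coordinate values here are illustrative placeholders, not the experimental data:

```python
import numpy as np

# Illustrative placeholder coordinates (cm); the experiment uses 9 measured points.
actual = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]])
located = np.array([[11.1, 10.4], [49.2, 50.9], [90.8, 88.9]])

# Error per point: Euclidean distance between measured and algorithm coordinates.
errors = np.linalg.norm(actual - located, axis=1)
mean_static_error = errors.mean()
```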
The above embodiments are merely illustrative of the technical ideas of the present invention, and do not limit the scope of the present invention. It should be noted that any improvement made to the technical solution on the technical idea of the present invention belongs to the protection scope of the present invention.

Claims (5)

1. A mobile robot positioning method based on fixed camera vision is characterized by comprising the following steps:
firstly, selecting a two-dimensional code mark as a template image, then pasting the two-dimensional code mark on a mobile robot, taking the mobile robot image shot by a fixed camera as a scene image, extracting feature points of the two-dimensional code mark in the template image and the two-dimensional code mark in the scene image by using an ORB algorithm, calculating the similarity between the feature points according to the Hamming distance between the feature point coordinate vector of the two-dimensional code mark in the template image and the feature point coordinate vector of the two-dimensional code mark in the scene image, and finding the most similar feature point in the scene image for each feature point in the template image by comparing the similarity between every two feature points;
secondly, removing outliers in the point groups of the scene image subjected to the characteristic point preliminary matching in the first step by adopting a radius filtering algorithm to obtain characteristic point groups which are distributed on the two-dimensional code marks in the scene image in a concentrated manner, and then taking the average value of the coordinates of the characteristic point groups distributed in the concentrated manner as the center of the point group to obtain the image pixel coordinates of the center of the two-dimensional code marks in the scene image, namely the image pixel coordinates of the mobile robot in a scene image pixel coordinate system, so as to realize the positioning of the mobile robot in the scene image pixel coordinate system;
and thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are positioned, establishing a global coordinate conversion relation based on camera calibration according to a camera imaging model, and converting the obtained image pixel coordinates of the mobile robot into actual world coordinates of the mobile robot, thereby realizing the global visual positioning of the mobile robot.
2. The method for positioning a mobile robot based on fixed camera vision as claimed in claim 1, wherein in the first step, the ORB algorithm is used to extract feature points in the template image and the scene image, and feature point matching is then performed according to the Hamming distance to find the matching pair set with the minimum distance, that is, for each feature point in the template image the most similar feature point is found in the scene image, wherein the Hamming distance is defined as the number of differing characters at corresponding positions of two equal-length character strings, and the smaller the distance between two feature vectors, the higher the similarity; the image pixel coordinates corresponding to the two feature points in the template image and the scene image are respectively set as $p_1 = (u_1, v_1)$ and $p_2 = (u_2, v_2)$, with binary ORB descriptors $k_1$ and $k_2$ of length 256, and then the Hamming distance corresponding to these two feature points is:

$$D(k_1, k_2) = \sum_{i=1}^{256} \left( k_1^{(i)} \oplus k_2^{(i)} \right)$$
3. The method according to claim 2, wherein in the second step, a proper filtering radius is selected in the scene image; for a given feature point in the scene image, if the two-dimensional coordinate distance between another feature point and it is less than or equal to the filtering radius, that other feature point is called a neighbor of the feature point; all feature points in the scene image are traversed, the number of neighbors of each feature point is counted, a proper threshold number is set, feature points whose neighbor count reaches the threshold number are retained, and those that do not reach it are removed; the average coordinate of the remaining point group is then obtained as the center of the point group, so that the image pixel coordinates of the center of the two-dimensional code mark in the scene image pixel coordinate system, that is, the image pixel coordinates of the mobile robot in the scene image pixel coordinate system, are obtained, thereby realizing the positioning of the mobile robot in the scene image pixel coordinate system.
4. The method for positioning a mobile robot based on fixed camera vision as claimed in claim 3, wherein in the third step, after the image pixel coordinates of the mobile robot in the scene image are located, a coordinate conversion relation is established according to the camera imaging model, giving a classical global coordinate conversion model; the key parameters of the global coordinate conversion model are the camera parameters, which are usually obtained experimentally, this process being camera calibration; the whole camera geometric model involves four coordinate systems, namely the image pixel coordinate system, the image physical coordinate system, the camera coordinate system and the world coordinate system, and the obtained image pixel coordinates of the mobile robot in the image pixel coordinate system are converted into the actual world coordinates of the mobile robot in the world coordinate system, thereby realizing the global visual positioning of the mobile robot.
5. The method for positioning a mobile robot based on fixed camera vision according to claim 4, wherein the positioning method in step three is as follows:
(1) Conversion of image physical coordinate system to image pixel coordinate system
The image pixel coordinate system is a two-dimensional rectangular coordinate system that reflects the arrangement of pixels in the camera chip; its origin $O'$ is at the upper left corner of the image, the $u$ and $v$ coordinate axes coincide with two edges of the image, and pixel coordinates take discrete values in units of pixels; the image physical coordinate system is an idealized system with the image center $O$ as origin, whose $x$ and $y$ coordinate axes are parallel to the $u$ and $v$ axes respectively; the two coordinate systems are related by a translation of $(u_0, v_0)$;
if the physical size of a single pixel of the camera's photosensitive element is $dx \times dy$ and $(x, y)$ are the image physical coordinates of the mobile robot in the image physical coordinate system, then the image pixel coordinates $(u, v)$ of the mobile robot obtained in the second step satisfy:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$

Writing the above formula in homogeneous form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the image physical coordinate system to the image pixel coordinate system;
(2) Conversion of camera coordinate system to image physical coordinate system
The camera coordinate system is a three-dimensional rectangular coordinate system whose origin $O_c$ is at the optical center of the lens; its $x_c$ and $y_c$ axes are parallel to the two sides of the image plane, and its $z_c$ axis is the optical axis of the lens, perpendicular to the image plane;
let the coordinates of the mobile robot in the camera coordinate system be $P(x_c, y_c, z_c)$; the point $P$ is projected onto the image plane along the ray through the projection center, and the projection point $P'(x, y)$ on the image physical coordinate plane gives the coordinates of the mobile robot in the image physical coordinate system, so the coordinates of the projection point $P'$ in the camera coordinate system are $(x, y, f)$, where $f$ is the vertical distance from the camera optical center $O_c$ to the origin $O$ of the image physical coordinate system; by the principle of similar triangles:

$$\frac{x}{x_c} = \frac{y}{y_c} = \frac{f}{z_c}, \qquad \text{i.e.} \quad x = \frac{f x_c}{z_c}, \quad y = \frac{f y_c}{z_c}$$

Likewise, the above equation is written in homogeneous form:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the camera coordinate system to the image physical coordinate system;
(3) Conversion of world coordinate system to camera coordinate system
The world coordinate system describes the position of an object in real space; the camera coordinate system can be obtained from the world coordinate system by a rotation $R_{3\times3}$ and a translation $t_{3\times1}$; let the world coordinates of the mobile robot in the world coordinate system be $(x_w, y_w, z_w)$, and then the conversion relationship can be expressed as:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the world coordinate system to the camera coordinate system;
(4) Conversion of world coordinate system to image pixel coordinate system
By combining the formulas in (1), (2) and (3), the conversion relationship between the world coordinate system and the image pixel coordinate system is obtained:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

According to the above formula, let:

$$K_1 = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

In the expression for $K_1$, $f_x = f/dx$ and $f_y = f/dy$ are the normalized focal lengths of the camera along the $x$ and $y$ axis directions, in units of pixels;
likewise, let:
$$K_2 = \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix}$$

wherein $K_1$ is called the internal parameter matrix of the camera and $K_2$ the external parameter matrix; letting $K = K_1 K_2$, $K$ is called the projection matrix;
the conversion relation formula between the world coordinate system and the pixel coordinate system can be written as follows:
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
CN202010709719.8A 2020-07-22 2020-07-22 Mobile robot positioning method based on fixed camera vision Active CN111968177B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010709719.8A | 2020-07-22 | 2020-07-22 | Mobile robot positioning method based on fixed camera vision

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010709719.8A | 2020-07-22 | 2020-07-22 | Mobile robot positioning method based on fixed camera vision

Publications (2)

Publication Number | Publication Date
CN111968177A (en) | 2020-11-20
CN111968177B (en) | 2022-11-18

Family

ID=73364413

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010709719.8A | Mobile robot positioning method based on fixed camera vision | 2020-07-22 | 2020-07-22

Country Status (1)

Country Link
CN (1) CN111968177B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112729779B (en) * 2020-12-25 2023-02-24 中冶南方工程技术有限公司 Robot handheld laser sensor optical axis adjusting method and robot
CN112833883B (en) * 2020-12-31 2023-03-10 杭州普锐视科技有限公司 Indoor mobile robot positioning method based on multiple cameras
CN113370816B (en) * 2021-02-25 2022-11-18 德鲁动力科技(成都)有限公司 Quadruped robot charging pile and fine positioning method thereof
CN113436276B (en) * 2021-07-13 2023-04-07 天津大学 Visual relative positioning-based multi-unmanned aerial vehicle formation method
CN113628273B (en) * 2021-07-23 2023-12-15 深圳市优必选科技股份有限公司 Map positioning method, map positioning device, computer readable storage medium and terminal equipment
CN113792564B (en) * 2021-09-29 2023-11-10 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN113843798B (en) * 2021-10-11 2023-04-28 深圳先进技术研究院 Correction method and system for mobile robot grabbing and positioning errors and robot

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154536A (en) * 2017-12-13 2018-06-12 南京航空航天大学 The camera calibration method of two dimensional surface iteration
CN109855602A (en) * 2019-01-14 2019-06-07 南通大学 Move the monocular visual positioning method under visual field
CN110288656A (en) * 2019-07-01 2019-09-27 太原科技大学 A kind of object localization method based on monocular cam

Also Published As

Publication number Publication date
CN111968177A (en) 2020-11-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant