CN111968177A - Mobile robot positioning method based on fixed camera vision


Info

Publication number
CN111968177A
CN111968177A
Authority
CN
China
Prior art keywords
coordinate system
image
mobile robot
camera
coordinate
Prior art date
Legal status
Granted
Application number
CN202010709719.8A
Other languages
Chinese (zh)
Other versions
CN111968177B (en)
Inventor
王翔宇
刘晓贝
梁升一
梁静思
刘维明
李世华
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010709719.8A
Publication of CN111968177A
Application granted
Publication of CN111968177B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 - Instruments for performing navigational calculations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 - Methods or arrangements for sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821 - Further details of bar or optical code scanning devices
    • G06K7/10861 - Sensing of data fields affixed to objects or articles, e.g. coded labels
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 - Methods or arrangements for sensing record carriers using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 - Methods for optical code recognition
    • G06K7/1408 - Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K7/1417 - 2D bar codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a mobile robot positioning method based on fixed camera vision. First, a two-dimensional code marker is pasted on the mobile robot, and a preliminary set of feature points matching the marker is obtained in the scene image using the ORB feature detection algorithm. Second, outliers are removed with a radius filtering algorithm, leaving a group of feature points densely clustered on the two-dimensional code marker in the scene image; the average of their coordinates is taken as the center of the point group, giving the image pixel coordinates of the marker center and thus completing the positioning of the mobile robot in the image pixel coordinate system. Finally, a global coordinate conversion model is established for the scene, its parameters are obtained through camera calibration, and the image pixel coordinates of the mobile robot in the image pixel coordinate system are converted into its actual world coordinates in the world coordinate system, realizing global visual positioning of the mobile robot. The proposed positioning method offers good real-time performance and high positioning accuracy.

Description

Mobile robot positioning method based on fixed camera vision
Technical Field
The invention belongs to the technical field of mobile robot positioning, and particularly relates to a mobile robot positioning method based on fixed camera vision.
Background
With the continuous expansion of the indoor positioning demand, the indoor positioning method with high effectiveness and adaptability has become a research hotspot. In the field of mobile robots, obtaining accurate real-time positions of mobile robots is also a key problem.
Existing positioning technologies include inertial navigation positioning, wireless signal positioning, laser radar positioning, visual positioning, and the like. Inertial navigation accumulates large errors due to drift, wireless signals are easily interfered with, making positioning inaccurate, and laser radar equipment is costly. In contrast, visual positioning costs less while still achieving relatively high accuracy.
Visual localization methods can be divided into relative localization and global localization. Relative positioning is mostly performed by visual SLAM (Simultaneous Localization and Mapping), which currently relies mainly on landmarks: key features of the scene around the mobile robot are captured by a camera and used as environmental landmarks for computing the relative position. Although this approach achieves good positioning performance and can cover a large scene, changes in the map information affect positioning when the mobile robot operates in a changing environment. Moreover, the camera must be carried on the mobile robot, complicating its structure and appearance, and it requires an additional processor or occupies the robot's own processing resources. In global positioning, the camera is fixed in the scene and observes the entire field of view; occlusion aside, it is unaffected by changes of objects in the scene, is independent of the robot, and does not occupy the mobile robot's processor resources. The common approach is positioning by target feature matching, and classical feature detection methods include the SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) algorithms. SIFT yields many feature points with good rotation and scale invariance, but its efficiency is low. SURF improves somewhat on SIFT's efficiency with otherwise comparable performance, yet still struggles to meet real-time requirements. ORB is more efficient than both, but suffers from poor scale invariance.
Disclosure of Invention
The purpose of the invention is as follows: to address the low accuracy of existing mobile robot positioning methods and the poor scale invariance of the ORB algorithm, a mobile robot positioning method based on fixed camera vision is provided.
The technical scheme is as follows: to achieve the above purpose, the invention adopts the following technical scheme: a mobile robot positioning method based on fixed camera vision, comprising the following steps:
Firstly, a two-dimensional code marker is selected as the template image and pasted on the mobile robot, and the image of the mobile robot captured by the fixed camera is taken as the scene image. Feature points of the two-dimensional code marker in the template image and in the scene image are extracted with the ORB algorithm; the similarity between feature points is computed from the Hamming distance between their feature descriptor vectors, and by comparing pairwise similarities, the most similar feature point in the scene image is found for each feature point in the template image;

Secondly, a radius filtering algorithm removes outliers from the preliminarily matched point group in the scene image, yielding a group of feature points densely clustered on the two-dimensional code marker; the average of their coordinates is then taken as the center of the point group, giving the image pixel coordinates of the marker center in the scene image, namely the image pixel coordinates of the mobile robot in the scene image pixel coordinate system, thereby positioning the mobile robot in that coordinate system;

Thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are obtained, a global coordinate conversion relation is established according to the camera imaging model, and, based on camera calibration, the image pixel coordinates of the mobile robot are converted into its actual world coordinates, realizing global visual positioning of the mobile robot.
Further, in the first step, the ORB algorithm is used to extract feature points from the template image and the scene image, and feature point matching is then performed according to Hamming distance to find the set of matching pairs with minimum distance, that is, for each feature point in the template image the most similar feature point in the scene image is found. The Hamming distance is defined as the number of positions at which the corresponding characters of two equal-length character strings differ; the smaller the distance between two feature vectors, the higher the similarity. Let the binary feature descriptors of two feature points, located at pixel coordinates $p_1 = (u_1, v_1)$ in the template image and $p_2 = (u_2, v_2)$ in the scene image, be $f_1 = (a_1, a_2, \ldots, a_n)$ and $f_2 = (b_1, b_2, \ldots, b_n)$. Then the Hamming distance between these two feature points is:

$$D(f_1, f_2) = \sum_{i=1}^{n} a_i \oplus b_i$$

where $\oplus$ denotes the bitwise XOR operation.
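To make the first step concrete, the following is a minimal sketch of ORB extraction and Hamming-distance matching using OpenCV; the file names and the feature-count parameter are illustrative placeholders, not values fixed by the invention.

```python
import cv2

def orb_match(template, scene, nfeatures=500):
    """Return scene-image pixel coordinates of the preliminary matches."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)
    # Brute-force matching under the Hamming norm; crossCheck keeps only
    # mutually nearest descriptor pairs, i.e. for each template feature
    # the most similar feature in the scene and vice versa.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des_t, des_s)
    return [kp_s[m.trainIdx].pt for m in matches]

# Hypothetical inputs: the QR-code template and one fixed-camera frame
template = cv2.imread("qr_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("scene_frame.png", cv2.IMREAD_GRAYSCALE)
scene_pts = orb_match(template, scene)
```

Internally, `cv2.NORM_HAMMING` computes exactly the XOR-and-count distance $D(f_1, f_2)$ defined above over the 256-bit ORB descriptors.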
Further, in the second step, a suitable filtering radius is selected in the scene image. For a given feature point in the scene image, any other feature point whose two-dimensional coordinate distance to it is less than or equal to the filtering radius is called a neighbor of that feature point. All feature points in the scene image are traversed and the number of neighbors of each is counted; a suitable threshold number is set, feature points whose neighbor count reaches the threshold are retained, and the rest are removed. The average coordinate of the remaining point group is then computed as the center of the point group, giving the image pixel coordinates of the two-dimensional code marker center in the scene image pixel coordinate system, namely the image pixel coordinates of the mobile robot in that coordinate system, thereby realizing the positioning of the mobile robot in the scene image pixel coordinate system.
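The following is a minimal sketch of the second step, assuming NumPy; the filtering radius and neighbor threshold shown are illustrative values, since the method only requires that they be chosen appropriately.

```python
import numpy as np

def radius_filter(points, radius=20.0, min_neighbors=5):
    """Keep points having at least min_neighbors other points within radius."""
    pts = np.asarray(points, dtype=float)         # (N, 2) matched pixel coordinates
    diff = pts[:, None, :] - pts[None, :, :]      # pairwise coordinate differences
    dist = np.linalg.norm(diff, axis=2)           # (N, N) Euclidean distance matrix
    neighbors = (dist <= radius).sum(axis=1) - 1  # exclude the point itself
    return pts[neighbors >= min_neighbors]

def point_group_center(points):
    """Average coordinate of the retained point group, i.e. the marker center."""
    return np.asarray(points, dtype=float).mean(axis=0)

# Demo with synthetic data: a tight cluster on the marker plus two outliers
pts = np.vstack([np.random.normal([400, 300], 3.0, (30, 2)),
                 [[50.0, 50.0], [600.0, 100.0]]])
u, v = point_group_center(radius_filter(pts))
```

The quadratic-cost distance matrix is acceptable here because only a few hundred matched points are involved.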
Further, in the third step, after the image pixel coordinates of the mobile robot in the scene image are located, a coordinate conversion relation is established according to the camera imaging model, giving the classical global coordinate conversion model. The key parameters of this model are the camera parameters, which are usually obtained experimentally; this process is camera calibration. The whole global coordinate conversion model involves four coordinate systems: the image pixel coordinate system, the image physical coordinate system, the camera coordinate system, and the world coordinate system. The image pixel coordinates of the mobile robot obtained in the image pixel coordinate system are converted into its actual world coordinates in the world coordinate system, thereby realizing global visual positioning of the mobile robot.
(1) Conversion of image physical coordinate system to image pixel coordinate system
As shown in fig. 2, the image pixel coordinate system is a two-dimensional rectangular coordinate system reflecting the arrangement of pixels in the camera chip. Its origin O' is at the upper left corner of the image, and the u and v coordinate axes coincide with two edges of the image; pixel coordinates are discrete values in units of pixels. The image physical coordinate system is an idealized system with the image center O as its origin and the x and y coordinate axes parallel to the u and v coordinate axes, respectively. The two coordinate systems are related by a translation of $(u_0, v_0)$.
If the physical size of a single pixel on the camera's photosensitive element is $dx \times dy$ and $(x, y)$ are the image physical coordinates of the mobile robot in the image physical coordinate system, the image pixel coordinates $(u, v)$ of the mobile robot obtained in the second step satisfy:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$

Writing the above formula in homogeneous form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the image physical coordinate system to the image pixel coordinate system;
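As a worked example with assumed values $dx = dy = 5\,\mu\mathrm{m}$ and $(u_0, v_0) = (320, 240)$, the physical image point $(x, y) = (1\,\mathrm{mm}, -0.5\,\mathrm{mm})$ maps to

$$u = \frac{1\,\mathrm{mm}}{5\,\mu\mathrm{m}} + 320 = 520, \qquad v = \frac{-0.5\,\mathrm{mm}}{5\,\mu\mathrm{m}} + 240 = 140.$$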
(2) Conversion of camera coordinate system to image physical coordinate system
The camera coordinate system is a three-dimensional rectangular coordinate system with its origin $O_c$ at the optical center of the lens; the $x_c$ and $y_c$ axes are parallel to the corresponding sides of the image plane, and the $z_c$ axis is the optical axis of the lens, perpendicular to the image plane.
As shown in fig. 3, let the coordinates of the mobile robot in the camera coordinate system be $P(x_c, y_c, z_c)$. The point $P$ is projected onto the image plane by a ray through the projection center; the projection point $p'(x, y)$ on the image physical coordinate plane gives the coordinates of the mobile robot in the image physical coordinate system, and the coordinates of $p'$ in the camera coordinate system are $(x, y, f)$, where $f$ is the vertical distance from the camera optical center $O_c$ to the origin $O$ of the image physical coordinate system. According to the principle of similar triangles:

$$x = \frac{f x_c}{z_c}, \qquad y = \frac{f y_c}{z_c}$$

Likewise, the above equation is written in homogeneous form:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the camera coordinate system to the image physical coordinate system;
(3) Conversion of world coordinate system to camera coordinate system
The world coordinate system describes the position of an object in real space; the camera coordinate system can be obtained from the world coordinate system by a rotation operation $R_{3\times3}$ and a translation operation $t_{3\times1}$. Let the world coordinates of the mobile robot in the world coordinate system be $(x_w, y_w, z_w)$; then the conversion relationship can be expressed as:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the world coordinate system to the camera coordinate system;
(4) Conversion of world coordinate system to image pixel coordinate system
By combining the formulas in (1), (2) and (3), the conversion relationship between the world coordinate system and the pixel coordinate system is obtained:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

According to the above formula, let:

$$K_1 = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

In the expression for $K_1$, $f_x = f/dx$ and $f_y = f/dy$ are the normalized focal lengths of the camera along the x-axis and y-axis directions, expressed in pixels;
Likewise, let:

$$K_2 = \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix}$$

where $K_1$ is called the intrinsic parameter matrix of the camera and $K_2$ the extrinsic parameter matrix. Letting $K = K_1 K_2$, $K$ is called the projection matrix.
The conversion relationship between the world coordinate system and the pixel coordinate system can then be written as:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
therefore, the key problem is to obtain an internal reference matrix and an external reference matrix of the camera. The method adopts a Zhangzhen calibration method, and acquires 20-30 images for calibration. Therefore, the conversion relation between the image pixel coordinate of the mobile robot in the image pixel coordinate system and the world coordinate of the mobile robot in the world coordinate system is obtained. Therefore, after the image pixel coordinates of the mobile robot in the image pixel coordinate system are obtained, the world coordinates of the mobile robot in the world coordinate system can be obtained through the conversion, and the global visual positioning of the mobile robot is further completed.
Has the advantages that: compared with the prior art, the technical scheme of the invention has the following beneficial technical effects:
(1) The two-dimensional code serves as the marker: it can be used both for positioning and, subsequently, for storing other information.
(2) The ORB algorithm is combined with radius filtering, and positioning is performed directly from the densely clustered feature matches; this avoids the influence of ORB's poor scale invariance and effectively improves the detection and positioning speed.
Drawings
FIG. 1 is a schematic diagram of a radius filtering algorithm;
FIG. 2 is a schematic diagram of an image pixel coordinate system and an image physical coordinate system;
FIG. 3 is a diagram of a global visual model.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
Firstly, selecting a two-dimensional code mark as a template image, then pasting the two-dimensional code mark on a mobile robot, and taking the mobile robot image shot by a fixed camera as a scene image. Extracting feature points of two-dimensional code marks in the template image and two-dimensional code marks in the scene image by using an ORB algorithm, calculating the similarity between the feature points according to the Hamming distance between the feature point coordinate vector of the two-dimensional code marks in the template image and the feature point coordinate vector of the two-dimensional code marks in the scene image, and finding the most similar feature points in the scene image for each feature point in the template image by comparing the similarity between every two feature points;
secondly, removing outliers in the point groups of the scene image subjected to the initial matching of the feature points in the first step by adopting a radius filtering algorithm to obtain feature point groups which are distributed on the two-dimensional code marks in the scene image in a centralized manner, and then taking the coordinate average value of the feature point groups distributed in the centralized manner as the center of the point group to obtain image pixel coordinates of the center of the two-dimensional code marks in the scene image, namely the image pixel coordinates of the mobile robot in a scene image pixel coordinate system, so as to realize the positioning of the mobile robot in the scene image pixel coordinate system;
and thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are positioned, establishing a global coordinate conversion relation according to a camera imaging model, and converting the obtained image pixel coordinates of the mobile robot into actual world coordinates of the mobile robot based on camera calibration, thereby realizing global visual positioning of the mobile robot.
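Chaining the hypothetical helpers from the earlier sketches, the whole pipeline reduces to a few lines; the filter parameters remain illustrative.

```python
def locate_robot(scene_gray, template_gray):
    pts = orb_match(template_gray, scene_gray)                   # step one
    inliers = radius_filter(pts, radius=20.0, min_neighbors=5)   # step two
    u, v = point_group_center(inliers)                           # marker center in pixels
    return pixel_to_world(u, v)                                  # step three
```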
To verify the effectiveness of the proposed fixed camera vision positioning method in practical engineering applications, experiments were carried out on a self-built platform, evaluating both trajectory quality and static positioning accuracy.

In the trajectory experiment, the mobile robot was driven along a roughly rectangular route and positioned with the proposed method, and the moving trajectory was drawn on the image; the trajectory is continuous, indicating good real-time performance.

In the positioning accuracy experiment, nine points were taken at equal intervals along the two diagonals of the scene, the mobile robot was moved to each of them for positioning, and the coordinates output by the algorithm were plotted against the measured global coordinates. The coordinates of the mobile robot are defined as the center coordinates of the two-dimensional code marker it carries, and the error as the Euclidean distance between the measured coordinates and the coordinates located by the algorithm. The average static positioning error was 1.28 cm, showing high accuracy.
The above embodiments are merely illustrative of the technical ideas of the present invention, and do not limit the scope of the present invention. It should be noted that any improvement made to the technical solution on the technical idea of the present invention belongs to the protection scope of the present invention.

Claims (5)

1. A mobile robot positioning method based on fixed camera vision is characterized by comprising the following steps:
firstly, a two-dimensional code marker is selected as the template image and pasted on the mobile robot, the image of the mobile robot captured by the fixed camera is taken as the scene image, feature points of the two-dimensional code marker in the template image and in the scene image are extracted with the ORB algorithm, the similarity between feature points is computed from the Hamming distance between their feature descriptor vectors, and by comparing pairwise similarities the most similar feature point in the scene image is found for each feature point in the template image;

secondly, a radius filtering algorithm removes outliers from the preliminarily matched point group in the scene image, yielding a group of feature points densely clustered on the two-dimensional code marker; the average of their coordinates is then taken as the center of the point group, giving the image pixel coordinates of the marker center in the scene image, namely the image pixel coordinates of the mobile robot in the scene image pixel coordinate system, thereby positioning the mobile robot in that coordinate system;

thirdly, after the image pixel coordinates of the mobile robot in the scene image pixel coordinate system are obtained, a global coordinate conversion relation is established based on camera calibration according to the camera imaging model, and the image pixel coordinates of the mobile robot are converted into its actual world coordinates, thereby realizing global visual positioning of the mobile robot.
2. The method as claimed in claim 1, wherein in the first step the ORB algorithm is used to extract feature points from the template image and the scene image, and feature point matching is then performed according to Hamming distance to find the set of matching pairs with minimum distance, that is, for each feature point in the template image the most similar feature point in the scene image; the Hamming distance is defined as the number of positions at which the corresponding characters of two equal-length character strings differ, and the smaller the distance between two feature vectors, the higher the similarity; letting the binary feature descriptors of two feature points, at pixel coordinates $p_1 = (u_1, v_1)$ in the template image and $p_2 = (u_2, v_2)$ in the scene image, be $f_1 = (a_1, a_2, \ldots, a_n)$ and $f_2 = (b_1, b_2, \ldots, b_n)$, the Hamming distance between these two feature points is:

$$D(f_1, f_2) = \sum_{i=1}^{n} a_i \oplus b_i$$

where $\oplus$ denotes the bitwise XOR operation.
3. The method according to claim 2, wherein in the second step a suitable filtering radius is selected in the scene image; for a given feature point in the scene image, any other feature point whose two-dimensional coordinate distance to it is less than or equal to the filtering radius is called a neighbor of that feature point; all feature points in the scene image are traversed and the number of neighbors of each is counted; a suitable threshold number is set, feature points whose neighbor count reaches the threshold are retained and the rest are removed; the average coordinate of the retained point group is then taken as the center of the point group, giving the image pixel coordinates of the center of the two-dimensional code marker in the scene image pixel coordinate system, namely the image pixel coordinates of the mobile robot in the scene image pixel coordinate system, thereby realizing the positioning of the mobile robot in the scene image pixel coordinate system.
4. The method for positioning a mobile robot based on fixed camera vision as claimed in claim 3, wherein in the third step, after the image pixel coordinates of the mobile robot in the scene image are located, a coordinate conversion relation is established according to the camera imaging model, forming the classical global coordinate conversion model; the key parameters of this model are the camera parameters, which are usually obtained experimentally, this process being camera calibration; the whole camera geometric model involves four coordinate systems, namely the image pixel coordinate system, the image physical coordinate system, the camera coordinate system and the world coordinate system, and the obtained image pixel coordinates of the mobile robot in the image pixel coordinate system are converted into the actual world coordinates of the mobile robot in the world coordinate system, thereby realizing the global visual positioning of the mobile robot.
5. The method for positioning a mobile robot based on fixed camera vision according to claim 4, wherein the positioning method in step three is as follows:
(1) Conversion of image physical coordinate system to image pixel coordinate system
The image pixel coordinate system is a two-dimensional rectangular coordinate system reflecting the arrangement of pixels in the camera chip; its origin O' is at the upper left corner of the image and the u and v coordinate axes coincide with two edges of the image, pixel coordinates being discrete values in units of pixels; the image physical coordinate system is an idealized system with the image center O as the origin and the x and y coordinate axes parallel to the u and v coordinate axes respectively; the two coordinate systems are related by a translation of $(u_0, v_0)$;
if the physical size of a single pixel on the camera's photosensitive element is $dx \times dy$ and $(x, y)$ are the image physical coordinates of the mobile robot in the image physical coordinate system, the image pixel coordinates $(u, v)$ of the mobile robot obtained in the second step satisfy:

$$u = \frac{x}{dx} + u_0, \qquad v = \frac{y}{dy} + v_0$$

writing the above formula in homogeneous form:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the image physical coordinate system to the image pixel coordinate system;
(2) Conversion of camera coordinate system to image physical coordinate system
The camera coordinate system is a three-dimensional rectangular coordinate system with its origin $O_c$ at the optical center of the lens; the $x_c$ and $y_c$ axes are parallel to the corresponding sides of the image plane, and the $z_c$ axis is the optical axis of the lens, perpendicular to the image plane;

let the coordinates of the mobile robot in the camera coordinate system be $P(x_c, y_c, z_c)$; the point $P$ is projected onto the image plane by a ray through the projection center, the projection point $p'(x, y)$ on the image physical coordinate plane being the coordinates of the mobile robot in the image physical coordinate system, so the coordinates of the projection point $p'$ in the camera coordinate system are $(x, y, f)$, where $f$ is the vertical distance from the camera optical center $O_c$ to the origin $O$ of the image physical coordinate system; according to the principle of similar triangles:

$$x = \frac{f x_c}{z_c}, \qquad y = \frac{f y_c}{z_c}$$

likewise, the above equation is written in homogeneous form:

$$z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the camera coordinate system to the image physical coordinate system;
(3) Conversion of world coordinate system to camera coordinate system
The world coordinate system describes the position of an object in real space; the camera coordinate system can be obtained from the world coordinate system by a rotation operation $R_{3\times3}$ and a translation operation $t_{3\times1}$; letting the world coordinates of the mobile robot in the world coordinate system be $(x_w, y_w, z_w)$, the conversion relationship can be expressed as:

$$\begin{bmatrix} x_c \\ y_c \\ z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
the above equation completes the conversion from the world coordinate system to the camera coordinate system;
(4) Conversion of world coordinate system to image pixel coordinate system
by combining the formulas in (1), (2) and (3), the conversion relationship between the world coordinate system and the pixel coordinate system is obtained:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

according to the above formula, let:

$$K_1 = \begin{bmatrix} \frac{f}{dx} & 0 & u_0 \\ 0 & \frac{f}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

in the expression for $K_1$, $f_x = f/dx$ and $f_y = f/dy$ are the normalized focal lengths of the camera along the x-axis and y-axis directions, expressed in pixels;
likewise, let:

$$K_2 = \begin{bmatrix} R_{3\times3} & t_{3\times1} \end{bmatrix}$$

where $K_1$ is called the intrinsic parameter matrix of the camera and $K_2$ the extrinsic parameter matrix; letting $K = K_1 K_2$, $K$ is called the projection matrix;
the conversion relationship between the world coordinate system and the pixel coordinate system can then be written as:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
CN202010709719.8A 2020-07-22 2020-07-22 Mobile robot positioning method based on fixed camera vision Active CN111968177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010709719.8A CN111968177B (en) 2020-07-22 2020-07-22 Mobile robot positioning method based on fixed camera vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010709719.8A CN111968177B (en) 2020-07-22 2020-07-22 Mobile robot positioning method based on fixed camera vision

Publications (2)

Publication Number Publication Date
CN111968177A (en) 2020-11-20
CN111968177B CN111968177B (en) 2022-11-18

Family

ID=73364413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010709719.8A Active CN111968177B (en) 2020-07-22 2020-07-22 Mobile robot positioning method based on fixed camera vision

Country Status (1)

Country Link
CN (1) CN111968177B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108154536A (en) * 2017-12-13 2018-06-12 南京航空航天大学 The camera calibration method of two dimensional surface iteration
CN109855602A (en) * 2019-01-14 2019-06-07 南通大学 Move the monocular visual positioning method under visual field
CN110288656A (en) * 2019-07-01 2019-09-27 太原科技大学 A kind of object localization method based on monocular cam

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUO Gao et al., "Research on mobile robot positioning with monocular vision ranging based on two-dimensional codes", Modular Machine Tool & Automatic Manufacturing Technique (组合机床与自动化加工技术) *
GU Fengwei et al., "Research on a simple monocular vision pose measurement method", Electro-Optic Technology Application (光电技术应用) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112729779B (en) * 2020-12-25 2023-02-24 中冶南方工程技术有限公司 Robot handheld laser sensor optical axis adjusting method and robot
CN112729779A (en) * 2020-12-25 2021-04-30 中冶南方工程技术有限公司 Robot handheld laser sensor optical axis adjusting method and robot
CN112833883A (en) * 2020-12-31 2021-05-25 杭州普锐视科技有限公司 Indoor mobile robot positioning method based on multiple cameras
CN112833883B (en) * 2020-12-31 2023-03-10 杭州普锐视科技有限公司 Indoor mobile robot positioning method based on multiple cameras
CN113370816A (en) * 2021-02-25 2021-09-10 德鲁动力科技(成都)有限公司 Quadruped robot charging pile and fine positioning method thereof
CN113436276A (en) * 2021-07-13 2021-09-24 天津大学 Visual relative positioning-based multi-unmanned aerial vehicle formation method
CN113436276B (en) * 2021-07-13 2023-04-07 天津大学 Visual relative positioning-based multi-unmanned aerial vehicle formation method
WO2023000528A1 (en) * 2021-07-23 2023-01-26 深圳市优必选科技股份有限公司 Map positioning method and apparatus, computer-readable storage medium and terminal device
CN113628273A (en) * 2021-07-23 2021-11-09 深圳市优必选科技股份有限公司 Map positioning method and device, computer readable storage medium and terminal equipment
CN113628273B (en) * 2021-07-23 2023-12-15 深圳市优必选科技股份有限公司 Map positioning method, map positioning device, computer readable storage medium and terminal equipment
CN113792564A (en) * 2021-09-29 2021-12-14 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN113792564B (en) * 2021-09-29 2023-11-10 北京航空航天大学 Indoor positioning method based on invisible projection two-dimensional code
CN113843798A (en) * 2021-10-11 2021-12-28 深圳先进技术研究院 Method and system for correcting grabbing and positioning errors of mobile robot and robot
CN114240998A (en) * 2021-11-17 2022-03-25 乐山师范学院 Robot vision identification positioning method and system
CN115741680A (en) * 2022-11-03 2023-03-07 三峡大学 Multi-degree-of-freedom mechanical arm system based on laser guidance and visual assistance and hole accurate positioning method

Also Published As

Publication number Publication date
CN111968177B (en) 2022-11-18

Similar Documents

Publication Publication Date Title
CN111968177B (en) Mobile robot positioning method based on fixed camera vision
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
Ji et al. Panoramic SLAM from a multiple fisheye camera rig
CN109993793B (en) Visual positioning method and device
CN111882612A (en) Vehicle multi-scale positioning method based on three-dimensional laser detection lane line
CN111383205B (en) Image fusion positioning method based on feature points and three-dimensional model
CN113409459B (en) Method, device and equipment for producing high-precision map and computer storage medium
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN110017852B (en) Navigation positioning error measuring method
CN111932627B (en) Marker drawing method and system
CN113095184B (en) Positioning method, driving control method, device, computer equipment and storage medium
CN112396656A (en) Outdoor mobile robot pose estimation method based on fusion of vision and laser radar
CN113298947A (en) Multi-source data fusion-based three-dimensional modeling method medium and system for transformer substation
WO2022228391A1 (en) Terminal device positioning method and related device therefor
CN111915517A (en) Global positioning method for RGB-D camera in indoor illumination adverse environment
CN105335977A (en) Image pickup system and positioning method of target object
Sadeghi et al. 2DTriPnP: A robust two-dimensional method for fine visual localization using Google streetview database
CN115239822A (en) Real-time visual identification and positioning method and system for multi-module space of split type flying vehicle
Liao et al. SE-Calib: Semantic Edge-Based LiDAR–Camera Boresight Online Calibration in Urban Scenes
CN114140527A (en) Dynamic environment binocular vision SLAM method based on semantic segmentation
Wen et al. Roadside hd map object reconstruction using monocular camera
CN116817887B (en) Semantic visual SLAM map construction method, electronic equipment and storage medium
CN115631317B (en) Tunnel lining ortho-image generation method and device, storage medium and terminal
Wu et al. Multi‐camera traffic scene mosaic based on camera calibration
Zhang et al. Accurate real-time SLAM based on two-step registration and multimodal loop detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant