CN111721318A - Template matching visual odometer based on self-adaptive search area - Google Patents


Info

Publication number: CN111721318A
Application number: CN202010453502.5A
Granted publication: CN111721318B
Authority: CN (China)
Legal status: Granted; active
Original language: Chinese (zh)
Inventors: 吕查德, 曾庆喜, 阚宇超, 高唱, 胡义轩
Original and current assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics


Classifications

    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers (G PHYSICS; G01 MEASURING, TESTING; G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY)
    • G01C21/20: Instruments for performing navigational calculations (under G01C21/00 Navigation; navigational instruments not provided for in groups G01C1/00-G01C19/00)


Abstract

The invention discloses a template matching visual odometer based on a self-adaptive search area, which specifically comprises the following steps: acquiring texture information of the ground where the robot is located in real time, together with the velocity components v_x and v_y of the robot in the x and y directions of the world physical coordinate system at the current moment; selecting a template in the central area of the image at the previous moment; determining a search area in the image at the current moment based on the template information and the current velocity components v_x and v_y; and performing template matching in the search area of the image at the current moment, so as to solve the position of the robot at the current moment. The invention greatly reduces the calculated amount and the mismatching rate of the traditional template matching visual odometer and improves the working efficiency of the odometer.

Description

Template matching visual odometer based on self-adaptive search area
Technical Field
The invention belongs to the technical field of robot navigation and positioning.
Background
With the continuous development of computer vision technology in recent years, the visual odometer has become a research hotspot in the field of mobile robot navigation and positioning. The template matching visual odometer is a novel kind of visual odometer: it acquires low-texture ground images in real time, performs template matching between consecutive image frames to obtain the displacement of the vehicle, and calculates the rotation angle of the vehicle with an Ackermann-like steering model. This approach greatly simplifies the calculation model of the visual odometer and effectively solves the monocular visual odometer's lack of scale information. However, during template matching, the traditional template matching visual odometer must match the template against every area of the image frame, which increases the calculation amount and harms the real-time performance of the odometer. A method that adjusts the search area in real time according to the motion state of the robot is therefore especially important for improving the real-time performance and positioning accuracy of the template matching visual odometer.
Disclosure of Invention
The purpose of the invention is as follows: in order to solve the problems of large calculation amount, low working efficiency and the like in the prior art, the invention provides a template matching visual odometer based on a self-adaptive search area.
The technical scheme is as follows: the invention provides a template matching visual odometer based on a self-adaptive search area, which specifically comprises the following steps:
step 1: installing an image acquisition device on the robot, wherein the image acquisition device acquires an image of the ground where the robot is located in real time, and the image comprises ground texture information;
step 2: obtaining IMU data same with the image timestamp in real time, and resolving speed components v of the robot in the x and y directions of a world physical coordinate system at the current moment according to the IMU dataxAnd vy
And step 3: selecting a block with the width of T from the central area of the image at the previous momentwHeight is ThIs used as a template, and the coordinate of the upper left corner of the template is (u)0,v0);
And 4, step 4: according to vx、vy、Tw、ThAnd (u)0,v0) Calculating the size and the upper left corner coordinate of the search area in the image at the current moment;
and 5: template matching is performed in the search area of the image at the current time, so that the position of the robot at the current time is calculated.
Further, step 4 calculates the size and the upper-left-corner coordinate of the search area from v_x, v_y, T_w, T_h and (u_0, v_0), specifically:
determining the size:

S_w = T_w + f_x|v_x|/(h·fps) + W
S_h = T_h + f_y|v_y|/(h·fps) + H

wherein S_w is the length of the search area, S_h is the width of the search area, h is the height of the image acquisition device above the ground, f_x is the normalized focal length of the image acquisition device in the x direction, f_y is the normalized focal length of the image acquisition device in the y direction, and fps is the frame frequency of the image acquisition device; W is a preset length error value and H is a preset width error value;
determining the coordinates of the upper left corner of the search area:
when v_x ≤ 0 and v_y > 0, the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 - H/2);
when v_x > 0 and v_y > 0, the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 - H/2);
when v_x ≤ 0 and v_y ≤ 0, the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 + T_h - S_h + H/2);
when v_x > 0 and v_y ≤ 0, the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 + T_h - S_h + H/2).
Furthermore, W is less than or equal to 50 pixels, and H is less than or equal to 50 pixels.
Beneficial effects: the invention can adaptively adjust the position and size of the search area in real time according to the motion state of the robot, greatly reducing the calculation amount and the mismatching rate of the traditional template matching visual odometer and improving the working efficiency of the odometer, so that it offers better real-time performance and positioning accuracy.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a schematic diagram of the position of the search area in the image to be matched, where (a) shows the position of the search area when v_x ≤ 0 and v_y > 0, (b) shows it when v_x > 0 and v_y > 0, (c) shows it when v_x ≤ 0 and v_y ≤ 0, and (d) shows it when v_x > 0 and v_y ≤ 0.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention.
As shown in fig. 1, the present embodiment provides a template matching visual odometer based on an adaptive search area, which specifically includes the following steps:
step 1: the image is acquired in real time. The image contains texture information of the geographic surface where the robot is located
Step 2: acquiring the speed components v of the robot in the x and y directions in the world physical coordinate system at the current moment in real timexAnd vy
And step 3: the template is initialized. Sequentially taking two frames of images, namely an ith frame of image and an (i + 1) th frame of image from the image sequence, namely an image at the ith moment and an image at the (i + 1) th moment; taking a rectangular area in the central area of the ith frame image as a template, wherein the width of the template is TwHeight is ThThe coordinate of the upper left corner of the template is (u)0,v0) And the (i + 1) th frame image is used as an image to be matched.
Step 4: the search area size is calculated. Before template matching, a search area is determined in the (i + 1) th frame image, and the search range is reducedThe matching efficiency is improved, the search area can be adaptively adjusted according to the motion condition of the robot, and the speed component v of the robot is usedx、vyAnd determining the coordinates of the upper left corner of the search area, and adjusting the search area in real time according to the speed component, so that the technical calculation precision during template matching calculation is ensured, the calculation amount is reduced, and the calculation speed is increased. The determination of the search area can greatly reduce meaningless matching operation and improve the template matching efficiency.
And 5: and performing template matching operation in the search area determined in the (i + 1) th frame image, and calculating according to the template matching visual odometer position calculating step to obtain the position of the robot at the (i + 1) th moment.
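The patent does not fix a particular similarity measure for the matching operation in step 5. The sketch below restricts an exhaustive sum-of-squared-differences search to the search area; the function name and the NumPy implementation are illustrative assumptions, not the patent's own code.

```python
import numpy as np

def match_in_search_area(image, template, top_left, Sw, Sh):
    """Exhaustive SSD template matching restricted to the search area
    whose upper-left corner is top_left = (u, v) and whose size is
    Sw x Sh pixels.  Returns the best-match upper-left corner of the
    template in full-image (u, v) coordinates."""
    u0, v0 = top_left
    roi = image[v0:v0 + Sh, u0:u0 + Sw].astype(np.float64)
    th, tw = template.shape
    best_score, best_uv = np.inf, (u0, v0)
    for dv in range(roi.shape[0] - th + 1):
        for du in range(roi.shape[1] - tw + 1):
            # sum of squared differences between template and candidate patch
            ssd = np.sum((roi[dv:dv + th, du:du + tw] - template) ** 2)
            if ssd < best_score:
                best_score, best_uv = ssd, (u0 + du, v0 + dv)
    return best_uv

# Embed an exact copy of a patch and recover its position:
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(60, 80)).astype(np.float64)
tpl = img[20:30, 30:40].copy()
found = match_in_search_area(img, tpl, (25, 15), 30, 25)
```

In practice a library routine such as OpenCV's matchTemplate applied to the same region of interest performs this search far faster; the explicit loops above only make the per-candidate score visible.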
Preferably, a monocular camera is used for acquiring images, the monocular camera is arranged at the front end of the robot and is perpendicular to the ground, so that pictures shot by a camera lens contain ground texture information, and the shooting frequency of the camera is set to be 10-30 Hz.
Preferably, the specific method for acquiring the velocity components is as follows: an IMU is mounted on the robot, IMU data with the same timestamp as the picture shot by the camera is acquired in real time, and the velocity components v_x and v_y of the robot in the x and y directions of the world physical coordinate system at the current moment are solved from this data.
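The patent leaves the IMU solving step itself unspecified. As an illustration of what resolving v_x and v_y between two image timestamps can reduce to, here is a single dead-reckoning update, under the stated assumptions that the accelerations have already been rotated into the world x-y plane and bias-corrected; this is a sketch, not the patent's method.

```python
def integrate_velocity(vx, vy, ax, ay, dt):
    """One dead-reckoning step: propagate the world-frame velocity
    components by Euler integration of IMU accelerations over the
    interval dt between two image timestamps.  Assumes ax, ay are
    already expressed in the world x-y plane and bias-corrected."""
    return vx + ax * dt, vy + ay * dt

# Starting from rest, 0.5 m/s^2 along x over one 1/30 s frame interval:
vx, vy = integrate_velocity(0.0, 0.0, 0.5, 0.0, 1.0 / 30.0)
```

A real implementation would also track orientation (for the body-to-world rotation) and correct drift, e.g. with a filter; none of that is specified here.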
Preferably, the size of the search area is obtained using the following formula:

S_w = T_w + f_x|v_x|/(h·fps) + W
S_h = T_h + f_y|v_y|/(h·fps) + H

wherein S_w and S_h are respectively the length and width of the search area, h is the height of the camera above the ground, f_x and f_y are respectively the normalized focal lengths of the camera in the x and y directions, v_x and v_y are the travel-speed components (in m/s) of the robot in the x and y directions solved from the IMU, fps is the camera frame rate, and W and H are lengths added to the length and width respectively to allow for the error between the estimated and actual speed of the robot; the maximum values of W and H do not exceed 50 pixel units.
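Reading the formula above literally (the equation survives only as an image placeholder in this text, so its exact form is a reconstruction from the listed definitions): the search area is the template enlarged by the pixel distance the ground texture can travel in one frame interval, plus the error margins. A sketch:

```python
def search_area_size(Tw, Th, vx, vy, h, fx, fy, fps, W, H):
    """Adaptive search-area size: template size, plus the pixel motion
    f * |v| / (h * fps) expected over one frame interval, plus the
    preset error margins W and H.  Rounded to whole pixels."""
    Sw = Tw + fx * abs(vx) / (h * fps) + W
    Sh = Th + fy * abs(vy) / (h * fps) + H
    return int(round(Sw)), int(round(Sh))

# 50x50 template, v = (0.5, 0.2) m/s, camera 0.3 m up, f = 600 px,
# 30 fps, 20-pixel margins:
Sw, Sh = search_area_size(50, 50, 0.5, 0.2, 0.3, 600.0, 600.0, 30.0, 20, 20)
```

Note how the area grows with speed and shrinks toward the template size plus margins when the robot is nearly stationary, which is exactly the adaptivity the patent claims.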
As shown in fig. 2, the coordinates of the upper left corner of the search area are determined using the following formula:
When v_x ≤ 0 and v_y > 0, as shown in FIG. 2(a), the robot can be judged to be travelling towards the front-left; the template moves towards the lower right relative to its initial position in the image to be matched, so the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 - H/2).
When v_x > 0 and v_y > 0, as shown in FIG. 2(b), the robot can be judged to be travelling towards the front-right; the template moves towards the lower left relative to its initial position in the image to be matched, so the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 - H/2).
When v_x ≤ 0 and v_y ≤ 0, as shown in FIG. 2(c), the robot can be judged to be travelling towards the rear-left; the template moves towards the upper right relative to its initial position in the image to be matched, so the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 + T_h - S_h + H/2).
When v_x > 0 and v_y ≤ 0, as shown in FIG. 2(d), the robot can be judged to be travelling towards the rear-right; the template moves towards the upper left relative to its initial position in the image to be matched, so the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 + T_h - S_h + H/2).
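The four cases of FIG. 2 reduce to two independent sign tests, one per axis: the u coordinate of the corner depends only on the sign of v_x, and the v coordinate only on the sign of v_y. A sketch (integer floor division stands in for the patent's W/2, H/2; the function name is illustrative):

```python
def search_area_top_left(u0, v0, vx, vy, Tw, Th, Sw, Sh, W, H):
    """Choose the search-area upper-left corner from the signs of the
    velocity components (the four cases of FIG. 2).  (u0, v0) is the
    template's upper-left corner; Tw, Th the template size; Sw, Sh the
    search-area size; W, H the error margins."""
    u = u0 - W // 2 if vx <= 0 else u0 + Tw - Sw + W // 2
    v = v0 - H // 2 if vy > 0 else v0 + Th - Sh + H // 2
    return u, v

# Front-left motion (vx <= 0, vy > 0) and rear-right motion (vx > 0, vy <= 0):
front_left = search_area_top_left(100, 80, -0.1, 0.3, 50, 50, 103, 83, 20, 20)
rear_right = search_area_top_left(100, 80, 0.1, -0.2, 50, 50, 103, 83, 20, 20)
```

Writing the selection this way makes it obvious that the search area always contains the template's initial footprint plus the expected motion in the direction of travel.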
In addition to the above embodiments, the present invention may have other embodiments. The invention is not to be considered as being limited to the specific embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.

Claims (4)

1. A template matching visual odometer based on an adaptive search area is characterized by comprising the following steps:
step 1: installing an image acquisition device on the robot, wherein the image acquisition device acquires an image of the ground where the robot is located in real time, and the image comprises ground texture information;
step 2: obtaining IMU data same with the image timestamp in real time, and resolving speed components v of the robot in the x and y directions of a world physical coordinate system at the current moment according to the IMU dataxAnd vy
And step 3: selecting a block with the width of T from the central area of the image at the previous momentwHeight is ThIs used as a template, and the coordinate of the upper left corner of the template is (u)0,v0);
And 4, step 4: according to vx、vy、Tw、ThAnd (u)0,v0) Calculating the size and the upper left corner coordinate of the search area in the image at the current moment;
and 5: template matching is performed in the search area of the image at the current time, so that the position of the robot at the current time is calculated.
2. The adaptive search area-based template matching visual odometer according to claim 1, wherein the image acquisition device employs a monocular camera mounted at the front end of the robot and perpendicular to the ground.
3. The adaptive search area-based template matching visual odometer according to claim 1, wherein the size and the upper left-hand coordinates of the search area are calculated in step 4, specifically:
determining the size:
S_w = T_w + f_x|v_x|/(h·fps) + W
S_h = T_h + f_y|v_y|/(h·fps) + H

wherein S_w is the length of the search area, S_h is the width of the search area, h is the height of the image acquisition device above the ground, f_x is the normalized focal length of the image acquisition device in the x direction, f_y is the normalized focal length of the image acquisition device in the y direction, fps is the frame frequency of the image acquisition device, W is the preset length error value, and H is the preset width error value;
determining the coordinates of the upper left corner of the search area:
when v_x ≤ 0 and v_y > 0, the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 - H/2);
when v_x > 0 and v_y > 0, the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 - H/2);
when v_x ≤ 0 and v_y ≤ 0, the coordinate of the upper left corner of the search area is (u_0 - W/2, v_0 + T_h - S_h + H/2);
when v_x > 0 and v_y ≤ 0, the coordinate of the upper left corner of the search area is (u_0 + T_w - S_w + W/2, v_0 + T_h - S_h + H/2).
4. The adaptive search area-based template matching visual odometer according to claim 3, wherein W ≤ 50 pixels and H ≤ 50 pixels.
CN202010453502.5A, filed 2020-05-26 (priority date 2020-05-26): Template matching visual odometer based on self-adaptive search area. Granted as CN111721318B (active).

Priority Applications (1)

CN202010453502.5A (priority date 2020-05-26, filing date 2020-05-26): Template matching visual odometer based on self-adaptive search area, granted as CN111721318B.


Publications (2)

CN111721318A (application publication): 2020-09-29
CN111721718B is not a publication of this family; granted publication CN111721318B: 2022-03-25

Family

ID=72565055

Family application: CN202010453502.5A (granted as CN111721318B, active; country: CN).

Citations (7)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title

US6243494B1 * · 1998-12-18 · 2001-06-05 · University of Washington · Template matching in 3 dimensions using correlative auto-predictive search
CN101673403A * · 2009-10-10 · 2010-03-17 · 安防制造(中国)有限公司 · Target following method in complex interference scene
CN104915969A * · 2015-05-21 · 2015-09-16 · 云南大学 · Template matching tracking method based on particle swarm optimization
CN107590502A * · 2017-09-18 · 2018-01-16 · 西安交通大学 · Whole-field dense-point fast matching method
CN108986037A * · 2018-05-25 · 2018-12-11 · 重庆大学 · Monocular vision odometer localization method and positioning system based on the semi-direct method
CN110411475A * · 2019-07-24 · 2019-11-05 · 南京航空航天大学 · Robot visual odometer assisted by a template matching algorithm and an IMU
CN110520694A * · 2017-10-31 · 2019-11-29 · 深圳市大疆创新科技有限公司 · Visual odometer and implementation method therefor


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title

M. O. A. Aqel et al., "Adaptive-search template matching technique based on vehicle acceleration for monocular visual odometry system", IEEJ Transactions on Electrical and Electronic Engineering *
曾庆喜 et al., "Fast pose estimation for the monocular visual odometry of an unmanned vehicle" (无人驾驶车辆单目视觉里程计快速位姿估计), Journal of Hebei University of Science and Technology (《河北科技大学学报》) *


Similar Documents

Publication Publication Date Title
CN108363946B (en) Face tracking system and method based on unmanned aerial vehicle
US10659768B2 (en) System and method for virtually-augmented visual simultaneous localization and mapping
CN109785291B (en) Lane line self-adaptive detection method
WO2019084804A1 (en) Visual odometry and implementation method therefor
CN110570453B (en) Binocular vision-based visual odometer method based on closed-loop tracking characteristics
CN113108771B (en) Movement pose estimation method based on closed-loop direct sparse visual odometer
CN105678809A (en) Handheld automatic follow shot device and target tracking method thereof
WO2021208933A1 (en) Image rectification method and apparatus for camera
CN112083403B (en) Positioning tracking error correction method and system for virtual scene
CN110099268B (en) Blind area perspective display method with natural color matching and natural display area fusion
WO2019156072A1 (en) Attitude estimating device
WO2023236508A1 (en) Image stitching method and system based on billion-pixel array camera
CN112204946A (en) Data processing method, device, movable platform and computer readable storage medium
JP2011030015A5 (en)
CN111862169B (en) Target follow-up method and device, cradle head camera and storage medium
CN110060295B (en) Target positioning method and device, control device, following equipment and storage medium
CN113610865B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN111105467A (en) Image calibration method and device and electronic equipment
CN111721318B (en) Template matching visual odometer based on self-adaptive search area
JP2009301181A (en) Image processing apparatus, image processing program, image processing method and electronic device
CN116804553A (en) Odometer system and method based on event camera/IMU/natural road sign
CN115375767A (en) Binocular vision odometer method based on event contrast maximization
CN112613372B (en) Outdoor environment visual inertia SLAM method and device
CN114841989A (en) Relative pose estimation method based on monocular event camera
CN112767442A (en) Pedestrian three-dimensional detection tracking method and system based on top view angle

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant