CN110738706A - Quick robot vision positioning method based on track conjecture - Google Patents

Quick robot vision positioning method based on track conjecture

Info

Publication number
CN110738706A
CN110738706A
Authority
CN
China
Prior art keywords
image
target
mobile robot
positioning
camera
Prior art date
Legal status
Granted
Application number
CN201910877211.6A
Other languages
Chinese (zh)
Other versions
CN110738706B (en)
Inventor
柏建军 (Bai Jianjun)
耿新 (Geng Xin)
尚文武 (Shang Wenwu)
邹洪波 (Zou Hongbo)
陈云 (Chen Yun)
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201910877211.6A
Publication of CN110738706A
Application granted
Publication of CN110738706B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/20 - Analysis of motion
    • G06T 7/207 - Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/04 - Photogrammetry or videogrammetry: interpretation of pictures
    • G01C 21/005 - Navigation; navigational instruments not provided for in groups G01C 1/00 - G01C 19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10004 - Still image; photographic image
    • G06T 2207/20021 - Dividing image into blocks, subimages or windows
    • G06T 2207/30244 - Camera pose

Abstract

The invention discloses a fast robot vision positioning method based on track conjecture (trajectory prediction). The method adopts a gridding approach when processing image features, acquires the position coordinates of the target robot in the image, converts the image coordinates into actual coordinates by coordinate transformation, and finally transmits data to the mobile robot in motion through a wireless communication module; dynamic positioning of the mobile robot is completed using its angular velocity and heading angle.

Description

Quick robot vision positioning method based on track conjecture
Technical Field
The invention belongs to the field of mobile robot positioning, and particularly relates to a visual positioning method for mobile robots.
Background
With the development of science and technology and the advancement of society, robots, as a strategic emerging industry, are widely applied in military, civilian, and other settings. User requirements on robots have risen greatly, and robots enjoy great market space and prospects.
Indoor positioning is commonly realized with wireless technologies such as infrared, ultrasonic, and radio-frequency identification. Ultrasound propagates slowly, which yields high measurement precision, and is insensitive to external light and magnetic fields, so it is widely applied; however, it is easily affected by the multipath effect, non-line-of-sight propagation, and temperature changes. Infrared positioning is low-cost and structurally simple, but it is easily disturbed by external interference, which degrades positioning accuracy. All of these approaches require a large amount of auxiliary hardware to be deployed, which is not conducive to maintenance.
Disclosure of Invention
The invention provides a fast robot vision positioning method based on track conjecture.
The invention uses a fixed monocular camera to position the mobile robot within a positioning area, proposes a gridding processing method for identifying the target in the image to obtain the position coordinates of the mobile robot in the positioning area, and further provides a positioning method for a robot in motion.
A fast robot vision positioning method based on track conjecture, characterized in that the method includes the following steps:
Step one: suspend and fix the camera at a set height so that the camera lens is parallel to the positioning area; connect the power supply and network cable; capture an original color image of the positioning area.
Step two: place the calibration plate in the positioning area, and acquire and store images of the calibration plate in different poses.
Step three: process the calibration plate images with MATLAB to calibrate the camera and obtain the camera parameters.
Step four: set a rectangular color area above the mobile robot as the positioning target. Acquire an image of the robot carrying the rectangular color marker with the camera, and correct the image according to the calibrated camera parameters.
Step five: preprocess the corrected image, then process it with the gridding method to obtain the position coordinates of the target robot in the image.
The preprocessing of the corrected image consists of:
1. Acquiring the image to be processed and selecting the region of interest for further processing.
2. Converting the image from RGB to HSV color space.
3. Applying morphological opening and closing to the image to eliminate interference and fill the object, making target recognition more accurate.
Processing with the gridding method to obtain the position coordinates of the target robot in the image specifically comprises:
Dividing the positioning image into a regular grid, i.e. into pixel squares of equal size.
Selecting the center point of each square as a sampling point, obtaining the pixel information at each center point, and comparing it with the pixel information of the known rectangular color marker. When the pixel information at the center of a square matches that of the known rectangular color marker, the small square at that position is taken as a grid cell occupied by the marker and recorded as a target area; otherwise it is recorded as a non-target area. After comparing the pixel information at the centers of all squares of the positioning image, the pixel coordinates of all target areas are recorded, and the centroid coordinates of the target areas, i.e. the pixel coordinates of the rectangular color marker, are obtained from them.
The centroid coordinates (C_x, C_y) are obtained as the average of the center-point coordinates of all target areas:
C_x = (1/n) Σ_{i=1..n} x_i,  C_y = (1/n) Σ_{i=1..n} y_i
where n is the number of target areas, and x_i and y_i are the abscissa and ordinate of the center point of the i-th target area.
Step six: convert the image coordinates into actual coordinates by coordinate transformation.
Step seven: for the mobile robot in motion, transmit data through a wireless communication module and complete dynamic positioning of the mobile robot using its angular velocity and heading angle.
If the preprocessed image has a size of 530 pixels, then to improve the real-time performance of dynamic processing, the acquisition of the color features is further accelerated and the amount of data to process is reduced by discarding the portions of the preprocessed image that cannot contain valid target features. The preprocessed image is therefore processed with a dynamic-window method: the color marker is a 15 cm × 18 cm rectangle, so a window of 100 pixels is set in the image plane, corresponding to a window of 33 cm on the motion plane. The center position of the dynamic window is determined by the coordinates, heading angle, and speed of the mobile robot acquired at the previous instant.
The required angular velocity and heading angle are obtained from the sensor devices on the mobile robot, and the measured data are transmitted to the host computer through the wireless communication module. The host computer predicts the position of the mobile robot at the next instant from the current angular velocity and heading angle, performs target recognition at that position, and obtains the position coordinates of the mobile robot, realizing its positioning.
The position of the mobile robot at the next instant is predicted as follows:
If the position of the mobile robot at the current instant is P_1(x_1, y_1), the measured wheel angular velocities are w_r and w_l, the wheel radius is d, and the heading angle with respect to the horizontal direction is θ, then the position of the mobile robot at the next instant is P_2(x_2, y_2), with
x_2 = x_1 - (r - r*cos(w*T + θ - π/2))
y_2 = y_1 + r*sin(w*T + θ - π/2)
where v_r = w_r*d, v_l = w_l*d, v = (v_r + v_l)/2, w = (v_r - v_l)/l, r = v/w, T is the image acquisition period, and l is the wheel separation.
This predicted position is taken as the center of the dynamic window, and the image is gridded to obtain the position coordinates of the mobile robot. If the target is not found in the new window, the target is searched for in a window of side length 2*v*T centered on the current target coordinate P_1(x_1, y_1) and repositioned. Repeating this cycle continuously realizes positioning of the dynamic target.
Preferably, in step three, processing the calibration plate images with MATLAB to calibrate the camera and obtain the camera parameters means: a black-and-white calibration plate of known dimensions is placed in the positioning area and photographed; several calibration plate images in different poses are acquired and stored; and the camera is calibrated with the Camera Calibrator tool in MATLAB to obtain the internal and external camera parameters.
Compared with the prior art, the invention has the advantage of providing a vision-based mobile robot positioning method that adopts a gridding approach when processing image features, which reduces the data volume of image processing while preserving positioning accuracy and shortens the image processing time required for target recognition; the dynamic-window processing in the motion state plays an important role in further shortening the image processing time.
Drawings
FIG. 1 is a diagram of a positioning system architecture;
FIG. 2 is a grid segmentation diagram;
FIG. 3 is a target decision diagram;
FIG. 4 is a first dynamic positioning diagram;
FIG. 5 is a second dynamic positioning diagram;
fig. 6 is a diagram of a motion model of the mobile robot.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples; it should be understood that the specific examples described here are illustrative only and are not intended to limit the invention.
The overall process of the invention is illustrated by the following specific embodiment:
Step one: mount the camera on the R-shaped bracket and fix it at a set height, ensuring that the camera lens is parallel to the positioning area; connect the power supply and network cable; capture and acquire the original color image of the positioning area with the host software on a desktop computer, as shown in fig. 1.
Step two: adjust the lens aperture, focus, and keep the settings unchanged; place the calibration plate in the positioning area and, continuously changing its position in front of the lens, collect several images, acquiring and storing calibration plate images in different poses.
Step three: after acquisition is finished, process the calibration plate images with MATLAB to calibrate the camera and obtain the camera parameters.
Step four: the mobile robot itself is unsuitable as the target for image positioning because its appearance and size are irregular; the images obtained at different positions differ and lack invariance, so positioning would be possible only after correction, which both increases processing time and introduces correction errors. A rectangular color area is therefore set above the mobile robot as the positioning target.
Step five: preprocess the corrected image, then process it with the gridding method to obtain the position coordinates of the target robot in the image.
Step six: convert the image coordinates into actual coordinates by coordinate transformation.
Step seven: for the mobile robot in motion, transmit data through the wireless communication module and complete dynamic positioning of the mobile robot using attitude information such as its speed and heading angle, as shown in fig. 4.
In step three, processing the calibration plate images with MATLAB to calibrate the camera and obtain the camera parameters means:
a black-and-white calibration plate of known dimensions is placed in the positioning area and photographed; several calibration plate images in different poses are acquired and stored; and the camera is calibrated with the Camera Calibrator tool in MATLAB to obtain the internal and external camera parameters.
In step five, the preprocessing of the corrected image consists of the following operations (a code sketch follows the list):
1. Acquiring the image to be processed and selecting the region of interest for further processing.
2. Converting the image from RGB to HSV color space.
3. Applying morphological opening and closing to the image to eliminate interference and fill the object, making target recognition more accurate.
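A minimal sketch of these three operations in Python with OpenCV; the ROI bounds and the HSV range of the rectangular color marker are placeholders, since the patent does not specify them.

    import cv2
    import numpy as np

    img = cv2.imread("corrected.png")          # corrected image from step four

    # 1. Select the region of interest (bounds are illustrative).
    roi = img[0:530, 0:530]

    # 2. Convert to HSV (OpenCV loads images as BGR rather than RGB).
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

    # Threshold to the marker color; this HSV range is a placeholder.
    mask = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([12, 255, 255]))

    # 3. Opening removes small interference; closing fills the object.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)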
In step five, obtaining the position coordinates of the target robot in the image with the gridding method means:
the image is divided into a regular grid, i.e. the image containing the target robot is divided into regular pixel squares,
where i = 1, ..., M and j = 1, ..., N index the dividing lines, M being the number of horizontal dividing lines and N the number of vertical dividing lines. The dividing lines are equally spaced, with spacing d determined by the pixel size of the rectangular color region. This divides the positioning area into small squares of equal size, as shown in fig. 2.
1. After gridding the positioning image, the center point of each grid cell is selected as a sampling point; the pixel information at each center point is acquired and compared with the pixel information of the known rectangular color marker.
2. When the pixel information at the center of a square matches the pixel information of the known rectangular color marker, the small square at that position is taken as a grid cell occupied by the marker and recorded as a target area; otherwise it is recorded as a non-target area, as shown in fig. 3.
3. After comparing the pixel information at the centers of all grid cells of the positioning area, the pixel coordinates of all target areas are recorded, and the centroid coordinates of the target areas, i.e. the pixel coordinates of the rectangular color marker, are obtained from them.
The centroid coordinates (C_x, C_y) are obtained as the average of the center-point coordinates of all target areas:
C_x = (1/n) Σ_{i=1..n} x_i,  C_y = (1/n) Σ_{i=1..n} y_i
where n is the number of target areas, and x_i and y_i are the abscissa and ordinate of the center point of the i-th target area.
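The following Python sketch implements the gridding and centroid computation defined above; the color match is represented by a precomputed binary mask and the spacing d is a parameter, both stand-ins for the pixel comparison the patent describes.

    import numpy as np

    def grid_centroid(mask, d):
        # mask: H x W binary array, nonzero where the pixel matches the
        # rectangular color marker; d: grid spacing in pixels.
        h, w = mask.shape
        ys = np.arange(d // 2, h, d)      # cell-center rows
        xs = np.arange(d // 2, w, d)      # cell-center columns
        targets = [(x, y) for y in ys for x in xs if mask[y, x]]
        if not targets:
            return None                   # no target cell found
        # Centroid (C_x, C_y) = mean of the target-cell center points.
        c_x = sum(x for x, _ in targets) / len(targets)
        c_y = sum(y for _, y in targets) / len(targets)
        return c_x, c_y

    # Example: a synthetic 530 x 530 mask with a filled marker region.
    mask = np.zeros((530, 530), np.uint8)
    mask[200:260, 150:222] = 1
    print(grid_centroid(mask, d=10))      # approximately (185.0, 230.0)

Because only one pixel per d × d cell is sampled instead of every pixel, the number of comparisons drops by roughly a factor of d², which is the source of the reduction in image-processing time claimed for the method.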
in the seventh step, for the mobile robot in the motion state, data transmission is performed through the wireless communication module, and the dynamic positioning of the mobile robot is completed by using attitude information such as the speed, the angle, and the like of the mobile robot:
if the size of the image obtained by preprocessing is 530pixel, in order to improve the real-time performance of dynamic processing, the color feature acquisition process is accelerated at , the data processing amount is reduced, the data amount which does not contain effective target features in the preprocessed image is eliminated, so the preprocessed image is processed by using a dynamic window method, the color rectangle is a rectangle of 15cm 18cm, a window of 100pixels needs to be arranged in the image plane, the size of the window on the corresponding motion plane is 33cm, and the center position of the dynamic window is determined by the coordinates of the mobile robot, the heading angle of the mobile robot and the speed of the mobile robot which are acquired at the previous time .
The method comprises the steps of utilizing various sensor devices on the mobile robot to obtain required data information such as speed, azimuth angle and the like, transmitting data measured by the sensors to an upper computer through a wireless communication module, predicting the position of the mobile robot at the next moment through position and attitude information such as the speed, the angle and the like at the current moment by the upper computer, carrying out target identification on the position, obtaining the position coordinate of the mobile robot, and achieving the positioning of the mobile robot.
Predicting the position of the mobile robot at the next instant serves to shorten the color-feature processing time and reduce the data volume: data that do not contain the target color features are discarded by setting a dynamic window of appropriate size around the predicted position, and target recognition is performed only inside it to acquire the position coordinates of the mobile robot (a sizing sketch follows).
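As an illustration of the window sizing just described, a hedged Python sketch; the helper name, the clamping at the image border, and the example values are assumptions, while the 100-pixel window and the 530-pixel image size come from the text.

    import numpy as np

    def dynamic_window(center, half_px, shape):
        # Clamp a square window of half-width half_px around the predicted
        # center (pixel coordinates) to the image bounds.
        cx, cy = center
        h, w = shape
        x0, x1 = max(0, int(cx - half_px)), min(w, int(cx + half_px))
        y0, y1 = max(0, int(cy - half_px)), min(h, int(cy + half_px))
        return x0, y0, x1, y1

    mask = np.zeros((530, 530), np.uint8)    # preprocessed binary image
    predicted = (265.0, 265.0)               # predicted target position
    # A 100-pixel window in the image plane corresponds to 33 cm on the
    # motion plane per the text; only this sub-image is gridded.
    x0, y0, x1, y1 = dynamic_window(predicted, 50, mask.shape)
    sub = mask[y0:y1, x0:x1]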
1. Taking the images at times t_1, t_2, t_3 as an example, the gridding method first yields the target robot coordinates P_1(x_1, y_1) and P_2(x_2, y_2) from the images at t_1 and t_2 (the '/' region is the target area at time t_1, the '\' region the target area at time t_2), as shown in fig. 4.
From the angular speed w, heading angle θ, and wheel radius d of the target robot at time t_2 and the image acquisition period T, the position coordinate P_3(x_3, y_3) at time t_3 is calculated as:
x_3 = x_2 + w*d*T*cosθ
y_3 = y_2 + w*d*T*sinθ
This position coordinate is taken as the center of the dynamic window, and the image is gridded to obtain the new coordinates of the target robot.
2. Taking the images at times T_1, T_2, T_3 as an example, the gridding method first yields the target robot coordinates P_1(x_1, y_1) and P_2(x_2, y_2) in the images (the '/' region is the target area at time T_1, the '\' region the target area at time T_2), as shown in fig. 5.
The heading angle θ of the target robot at time T_2 and the angular speeds w_l and w_r of the left and right wheels are then obtained through the wireless module, and the position at time T_3 is deduced from the motion model below; that position coordinate is taken as the center of the dynamic window, and the image is gridded to obtain the new coordinates of the target robot.
A model of the motion of a wheeled mobile robot is shown in FIG. 6.
The relations between the left and right wheel speeds and the linear and angular velocities are:
v_r = w_r*d,  v_l = w_l*d
v = (v_r + v_l)/2
w = (v_r - v_l)/l
r = v/w
where d is the wheel radius and l the wheel separation.
The position P_3(x_3, y_3) at time T_3 then satisfies:
x_3 = x_2 - (r - r*cos(w*T + θ - π/2))
y_3 = y_2 + r*sin(w*T + θ - π/2)
if no target is found in the new window, T is used2And (3) searching the target in a window with the time coordinate as the center and the length of 2 v T, and re-tracking the target.
And continuously and circularly operating to realize the positioning of the dynamic target.
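The following Python sketch ties the motion model and the fallback search together; it transcribes the formulas above directly, with the wheel separation l and the straight-line branch for near-zero w as stated assumptions rather than values from the patent.

    import math

    def predict_next(x, y, theta, w_r, w_l, d, l, T):
        # Differential-drive prediction per fig. 6: wheel angular velocities
        # w_r, w_l, wheel radius d, wheel separation l, acquisition period T.
        v_r, v_l = w_r * d, w_l * d
        v = (v_r + v_l) / 2.0
        w = (v_r - v_l) / l
        if abs(w) < 1e-9:
            # Straight-line case, matching x_3 = x_2 + w*d*T*cosθ above
            # (there v = w*d because both wheels turn at the same rate w).
            return x + v * T * math.cos(theta), y + v * T * math.sin(theta)
        r = v / w  # turn radius
        x_next = x - (r - r * math.cos(w * T + theta - math.pi / 2))
        y_next = y + r * math.sin(w * T + theta - math.pi / 2)
        return x_next, y_next

    # Fallback per the text: if gridding the window around the predicted
    # position finds no target, search a window of side 2*v*T centered on
    # the last known position and reacquire tracking.

Each cycle predicts the next position, grids only the dynamic window around it, and widens the search to the 2*v*T window when the prediction misses; repeating this loop is what realizes the dynamic positioning.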

Claims (2)

1. A fast robot vision positioning method based on track conjecture, characterized in that the method includes the following steps:
step one: suspending and fixing the camera at a set height so that the camera lens is parallel to the positioning area, connecting the power supply and network cable, and capturing an original color image of the positioning area;
step two: placing the calibration plate in the positioning area, and acquiring and storing images of the calibration plate in different poses;
step three: processing the calibration plate images with MATLAB, calibrating the camera, and obtaining the camera parameters;
step four: setting a rectangular color area above the mobile robot as the positioning target; acquiring an image of the robot carrying the rectangular color marker with the camera, and correcting the image according to the calibrated camera parameters;
step five: preprocessing the corrected image, then processing it with the gridding method to obtain the position coordinates of the target robot in the image;
the preprocessing of the corrected image consists of:
① acquiring the image to be processed, and selecting the region of interest for further processing;
② converting the image from RGB to HSV color space;
③ applying morphological opening and closing to the image, eliminating interference and filling the object, making target recognition more accurate;
processing with the gridding method to obtain the position coordinates of the target robot in the image specifically comprises:
dividing the positioning image into a regular grid, i.e. into pixel squares of equal size;
when the pixel information at the center of a grid cell matches the pixel information of the known rectangular color marker, taking the small square at that position as a grid cell occupied by the marker and recording it as a target area, and otherwise recording it as a non-target area;
the centroid coordinates (C_x, C_y) are obtained as the average of the center-point coordinates of all target areas, i.e. C_x = (1/n) Σ_{i=1..n} x_i and C_y = (1/n) Σ_{i=1..n} y_i, where n is the number of target areas, and x_i and y_i are the abscissa and ordinate of the center point of the i-th target area;
step six: converting the image coordinates into actual coordinates by coordinate transformation;
step seven: for the mobile robot in motion, transmitting data through a wireless communication module, and completing dynamic positioning of the mobile robot using its angular velocity and heading angle;
the required angular velocity and heading angle are obtained from the sensor devices on the mobile robot, the measured data are transmitted to the host computer through the wireless communication module, and the host computer predicts the position of the mobile robot at the next instant from the current angular velocity and heading angle, performs target recognition at that position, and obtains the position coordinates of the mobile robot, realizing its positioning;
the position of the mobile robot at the next instant is predicted as follows:
if the position of the mobile robot at the current instant is P_1(x_1, y_1), the measured wheel angular velocities are w_r and w_l, the wheel radius is d, and the heading angle with respect to the horizontal direction is θ, then the position of the mobile robot at the next instant is P_2(x_2, y_2), with x_2 = x_1 - (r - r*cos(w*T + θ - π/2)) and y_2 = y_1 + r*sin(w*T + θ - π/2), where v_r = w_r*d, v_l = w_l*d, v = (v_r + v_l)/2, w = (v_r - v_l)/l, r = v/w, T is the image acquisition period, and l is the wheel separation;
this predicted position is taken as the center of the dynamic window, and the image is gridded to obtain the position coordinates of the mobile robot; if the target is not found in the new window, the target is searched for in a window of side length 2*v*T centered on the current target coordinate P_1(x_1, y_1) and repositioned; repeating this cycle continuously realizes positioning of the dynamic target.
2. The fast robot vision positioning method based on track conjecture according to claim 1, characterized in that in step three, processing the calibration plate images with MATLAB to calibrate the camera and obtain the camera parameters means: a black-and-white calibration plate of known dimensions is placed in the positioning area and photographed; several calibration plate images in different poses are acquired and stored; and the camera is calibrated with the Camera Calibrator tool in MATLAB to obtain the internal and external camera parameters.
CN201910877211.6A 2019-09-17 2019-09-17 Rapid robot visual positioning method based on track conjecture Active CN110738706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910877211.6A CN110738706B (en) 2019-09-17 2019-09-17 Rapid robot visual positioning method based on track conjecture


Publications (2)

Publication Number Publication Date
CN110738706A 2020-01-31
CN110738706B 2022-03-29

Family

ID=69267973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910877211.6A Active CN110738706B (en) 2019-09-17 2019-09-17 Rapid robot visual positioning method based on track conjecture

Country Status (1)

Country Link
CN (1) CN110738706B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409387A (en) * 2021-05-11 2021-09-17 深圳拓邦股份有限公司 Robot vision positioning method and robot


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2880626A1 (en) * 2012-07-30 2015-06-10 Sony Computer Entertainment Europe Limited Localisation and mapping
CN108972565A (en) * 2018-09-27 2018-12-11 安徽昱康智能科技有限公司 Robot instruction's method of controlling operation and its system
CN109931940A (en) * 2019-01-22 2019-06-25 广东工业大学 A kind of robot localization method for evaluating confidence based on monocular vision
CN109902374A (en) * 2019-02-22 2019-06-18 同济大学 A kind of burst pollution source tracing method based on flight sensor patrol track optimizing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Andre M. Santana et al.: "An Approach for 2D Visual Occupancy Grid Map Using Monocular Vision", Electronic Notes in Theoretical Computer Science *
Li Li et al.: "Survey of trajectory planning for articulated industrial robots" (关节型工业机器人轨迹规划研究综述), Computer Engineering and Applications (计算机工程与应用) *


Also Published As

Publication number Publication date
CN110738706B (en) 2022-03-29

Similar Documents

Publication Publication Date Title
WO2022142759A1 (en) Lidar and camera joint calibration method
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN104796612B (en) High definition radar linkage tracing control camera system and linkage tracking
CN105716542B (en) A kind of three-dimensional data joining method based on flexible characteristic point
CN104200086A (en) Wide-baseline visible light camera pose estimation method
CN103578109A (en) Method and device for monitoring camera distance measurement
CN107917700B (en) Small-amplitude target three-dimensional attitude angle measurement method based on deep learning
CN111288967A (en) Remote high-precision displacement detection method based on machine vision
CN110889829A (en) Monocular distance measurement method based on fisheye lens
CN108398123B (en) Total station and dial calibration method thereof
CN106370160A (en) Robot indoor positioning system and method
CN106352871A (en) Indoor visual positioning system and method based on artificial ceiling beacon
CN112966571B (en) Standing long jump flight height measurement method based on machine vision
CN112308930B (en) Camera external parameter calibration method, system and device
CN109483507B (en) Indoor visual positioning method for walking of multiple wheeled robots
CN112197766A (en) Vision attitude measuring device for mooring rotor platform
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN109781068A (en) The vision measurement system ground simulation assessment system and method for space-oriented application
CN115717867A (en) Bridge deformation measurement method based on airborne double cameras and target tracking
CN111161305A (en) Intelligent unmanned aerial vehicle identification tracking method and system
CN110738706A (en) quick robot vision positioning method based on track conjecture
CN113936031A (en) Cloud shadow track prediction method based on machine vision
CN116309851B (en) Position and orientation calibration method for intelligent park monitoring camera
CN105303580A (en) Identification system and method of panoramic looking-around multi-camera calibration rod
CN115585810A (en) Unmanned vehicle positioning method and device based on indoor global vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant