CN108051007A - AGV navigation and positioning method based on ultrasonic networking and stereoscopic vision - Google Patents

AGV navigation and positioning method based on ultrasonic networking and stereoscopic vision

Info

Publication number
CN108051007A
Authority
CN
China
Prior art keywords
image
point
information source
coordinate system
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711033453.4A
Other languages
Chinese (zh)
Inventor
王丹
陆满君
龚明
胡昆霖
徐卉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Tim Industrial Co Ltd
Original Assignee
Shanghai Tim Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Tim Industrial Co Ltd filed Critical Shanghai Tim Industrial Co Ltd
Priority to CN201711033453.4A priority Critical patent/CN108051007A/en
Publication of CN108051007A publication Critical patent/CN108051007A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00: Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/02: Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems using reflection of acoustic waves
    • G01S15/06: Systems determining the position data of a target
    • G01S15/08: Systems for measuring distance only

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

The invention discloses an AGV navigation and positioning method based on ultrasonic networking and stereoscopic vision. The steps of the method are: step 1, carry out ultrasonic networking and, using the ranging principle, calculate the distance between the information source label and the information source base stations; step 2, position the information source label and upload the positioning information to the host computer server; step 3, the controller controls the driving device of the AGV so that the AGV moves according to the planned path; step 4, start the stereo vision module and, through the flow of image filtering, edge detection, feature information extraction, stereo matching and target attitude and distance measurement, measure the relative distance and relative attitude of the cargo to be carried or unloaded, and adjust the steering angle of the AGV and its distance to the shelf; step 5, align the pallet fork with the insertion hole of the shelf and then remove the cargo. The present invention can perform rapid, high-accuracy positioning and navigation, facilitates subsequent carrying or unloading operations, and is suitable for regions where humans and machines work together.

Description

AGV navigation positioning method based on ultrasonic networking and stereoscopic vision
Technical Field
The invention relates to the technical field of navigation and positioning, and in particular to an AGV positioning and navigation method based on the combination of ultrasonic networking and stereoscopic vision.
Background
In the logistics industry, loading, unloading and carrying of goods are extremely frequent operations that consume a large amount of human resources and cost. In developed countries this cost accounts for 15%-33% of total logistics cost, and in China it also accounts for about 15%. AGVs (Automated Guided Vehicles) are therefore gradually being applied in warehousing, manufacturing, ports and similar settings, providing convenient conditions for the realization of logistics automation.
Indoor positioning and navigation mainly refer to determining the position of the AGV in real time and guiding the AGV to move along the correct route. The positioning and navigation function is realized through three steps: 1) establishing an environment map; 2) self-positioning; 3) path planning.
Existing positioning and navigation technologies include vision, laser, magnetic guide line, two-dimensional code, WiFi, inertial navigation and the like. Each of these positioning technologies has its own advantages and disadvantages and can be reasonably designed and selected according to the actual conditions.
Ultrasonic waves are cyclic sound pressure waves with a vibration frequency greater than 20 kHz, outside the frequency range audible to the human ear. Ultrasonic waves generally propagate in an elastic medium as longitudinal waves; they are a mode of energy propagation and have the characteristics of high frequency, short wavelength and good directivity. At the same time, ultrasonic sensing is not easily influenced by external environmental conditions such as weather, ambient illumination, obstacle shadows and surface roughness. Navigation and positioning by ultrasonic waves are therefore widely applied in the sensing systems of various mobile robots.
The traditional positioning scheme using ultrasonic technology mainly adopts a reflection-type distance measurement method, positioning according to the time difference between the echo and the transmitted wave. This method relies on reflectors and requires a highly reflective surface. When the reflecting surface is not smooth or is inclined, echo signals are easily lost; when the reflecting surface is complex, the echo signal becomes difficult to process. Therefore, base stations with ultrasonic transmitters are usually used to form a network, matched with positioning tags equipped with receivers, and a control system applies certain algorithmic processing to the acquired signals to obtain the position and environment information of the positioning tag.
In warehouse logistics centers where humans and machines work together, when an AGV with an autonomous positioning and navigation function is used to carry out transport tasks, ultrasonic positioning and navigation can reach centimetre-level positioning accuracy, but because it cannot determine the attitude and relative distance of the goods, it is difficult to carry out more precise subsequent operations. This situation is particularly prominent when manpower still participates in logistics transportation: rapid and high-precision positioning and navigation cannot be realized, which restricts the subsequent operations of the AGV on the goods.
Disclosure of Invention
The invention aims to provide an AGV positioning and navigation method based on the combination of ultrasonic networking and stereoscopic vision, which positions and navigates the AGV through ultrasonic networking and measures the position and attitude of the cargo using stereoscopic vision, so as to adjust the distance and orientation angle of the AGV relative to the goods shelf, facilitate subsequent carrying or unloading operations, and be suitable for areas where humans and machines work together.
The invention provides an AGV navigation positioning method based on ultrasonic networking and stereoscopic vision, which comprises the following steps:
step 1, starting ultrasonic networking, and calculating the distance between an information source label and an information source base station by using a distance measuring principle;
step 2, positioning the information source label by utilizing a trilateral positioning principle, and uploading positioning information of the information source label to an upper computer server by an AGV (automatic guided vehicle) through a wireless communication network;
step 3, the upper computer obtains positioning information of the AGV through ultrasonic networking, and a driving device of the AGV is controlled through a path planning controller, so that the AGV moves from a starting point to a destination according to a planned path;
step 4, starting a stereoscopic vision module, measuring the relative distance and the relative posture of the goods to be carried or unloaded through the processes of image filtering, edge detection, characteristic information extraction, stereoscopic matching and target posture and distance measurement, and adjusting the steering angle of the AGV and the distance between the AGV and a goods shelf;
step 5, aligning the fork with the insertion hole of the goods shelf, and then carrying the goods.
Preferably, the ultrasonic networking comprises an information source label, a plurality of information source base stations and a radio frequency transmitting station;
the radio frequency transmitting station comprises a radio frequency transmitting module which transmits radio frequency synchronous signals to the information source labels and the information source base stations;
each information source base station comprises a radio frequency receiving module and an ultrasonic wave transmitting module, and the information source base station firstly receives radio frequency signals from the radio frequency transmitting station and then transmits ultrasonic pulses to the information source label;
the information source label comprises a radio frequency receiving module and an ultrasonic receiving module, the information source label receives the radio frequency signals sent by the radio frequency transmitting station and records the receiving time, and then receives the ultrasonic pulse signals transmitted by each information source base station and records the receiving time.
Preferably, the number of the information source base stations is four, and the four information source base stations are respectively a second information source base station, a third information source base station and two first information source base stations;
the information source base station and the radio frequency transmitting station surround the outer side of the information source label.
Preferably, the step 1 comprises:
step 1.1, a radio frequency transmitting station of the ultrasonic networking transmits a radio frequency synchronous signal;
step 1.2, an information source base station of the ultrasonic networking receives a radio frequency signal from a radio frequency transmitting station and sends an ultrasonic pulse to an information source label;
step 1.3, the information source label first receives the radio frequency signal and records the reception time as T_0; it then receives the ultrasonic pulse signal transmitted by the i-th information source base station and records the reception time as T_i. The distance between the positioning tag and the i-th information source base station is calculated as:
l_i = (T_i - T_0) × C_2    (1)
where C_2 = 340 m/s is the propagation velocity of the ultrasonic pulse at normal temperature, and l_i is the calculated distance between the i-th information source base station and the information source label.
Preferably, the step 2 comprises: selecting the coordinates of 3 information source base stations as P_i(x_i, y_i, z_i) (i = 1, 2, 3), with o(x, y, z) the coordinates of the information source label; the geometric relationship obtained by the trilateration method is formula (2), in which x, y and z are respectively the x-axis, y-axis and z-axis coordinates of the information source label, and x_i, y_i and z_i are respectively the coordinates of the corresponding axes of the i-th information source base station.
Preferably, the step 4 comprises:
step 4.1, image filtering: filtering the left camera image and the right camera image by a median filtering method to eliminate noise interference and obtain smooth left and right camera images; the specific method of step 4.1 comprises: first determining a two-dimensional window W of odd size k × l; after the pixels in the window are sorted by gray level, the gray value at the middle position of the window replaces the original gray value, which gives:
f(x,y)=median{h(x-k,y-l),(k,l)∈W} (3)
in the formula, h (x, y) is an original gray value, f (x, y) is a gray value of an enhanced image, and W is the size of a selected window;
step 4.2, edge detection: respectively carrying out edge detection on the two images by a canny edge detection method to obtain edge characteristic information;
step 4.3, extracting characteristic information: extracting the angular point, straight line, circle or ellipse characteristics of the image subjected to edge detection, and identifying selected characteristic point information from the image;
step 4.4, stereo matching: according to the two-dimensional information of the identified feature points in the left camera and the right camera, completing the corresponding matching of the feature points on the left image and the right image, and removing bad matching points;
step 4.5, measuring the target position and the attitude: and according to the stereo matching result, constructing a coordinate system of the test object, and calculating the position and the posture of the test object relative to the world coordinate.
Preferably, said step 4.2 comprises:
step 4.2.1, smoothing filtering is first carried out on the image f(x, y) with a Gaussian filter function G(x, y); convolving G(x, y) with f(x, y) gives the smoothed image g(x, y), as in formula (4), where G(x, y) is a two-dimensional Gaussian distribution function and σ is the standard deviation of the normal distribution;
step 4.2.2, the gradient strength and gradient direction of each pixel point (x, y) in the smoothed image g(x, y) are calculated; the gradient strength and the gradient direction θ(x, y) of each pixel point (x, y) in g(x, y) are given by formulas (5) and (6), where E_x and E_y are respectively the convolutions of the first directional derivatives of G(x, y) with the image f(x, y);
step 4.2.3, calculating edge points: the central edge point is the maximum point, along the edge gradient direction, of the region of the convolution of G(x, y) and f(x, y); whether a point is an edge point is determined by judging whether its intensity is the maximum of its region in each gradient direction.
Preferably, said step 4.3 comprises:
step 4.3.1, corner point extraction: a point feature extraction method based on the Harris operator is adopted, and the corner response function is:
R = det M - k(trace M)^2    (7)
where det M = λ_1 × λ_2, trace M = λ_1 + λ_2, k is a weight coefficient with a value of 0.04-0.06, g_x is the gradient in the x direction, g_y is the gradient in the y direction, and a Gaussian filtering template is used as the weighting window; let the eigenvalues be λ_1 and λ_2, and the judgment conditions are:
b1, λ_1 >> λ_2 or λ_2 >> λ_1: the corresponding point is an edge point, and the corresponding R is negative;
b2, λ_1 and λ_2 are both very small: the corresponding point is an ordinary point on the image, and the corresponding |R| value is small;
b3, λ_1 and λ_2 are both large: the corresponding points are corner points, and the corresponding R values are large;
step 4.3.2, straight line extraction: a Hough transformation method is adopted to describe the shape of the region boundary; let the content of an image be a line segment, and let the coordinates of a point A on it be x_i and y_i; the corresponding parameter space values are ρ and θ, which are respectively the polar distance and the polar angle of the point; the mapping relation between the polar distance and the polar angle is:
ρ = x_i cos θ + y_i sin θ    (8)
in the parameter space, the maximum value of the polar distance ρ is √(x_max^2 + y_max^2), where x_max and y_max are the maximum coordinate values of the image; the polar angle θ varies from 0° to 180°.
Preferably, the step 4.4 adopts the Euclidean distance method as the similarity measurement standard; given a feature point p_i on the reference image and a feature point q_j on the image to be matched, their Euclidean distance value D_ij is defined by formula (9), where L_i(k) and L_j(k) are respectively the feature description vectors of the feature points p_i and q_j; according to the characteristics of the Euclidean distance, the smaller the value of D_ij, the more similar the feature points p_i and q_j are and the more likely they are a matching point pair; a threshold value T is set, and if D_ij is less than T, the feature points p_i and q_j are considered to match;
the step of step 4.4 comprises:
4.4a, taking a feature point P_l in the standard image and finding the two feature points P_r1 and P_r2 with the nearest Euclidean distances in the image to be matched; if the nearest distance D_min divided by the next nearest distance U_cmin is less than the matching threshold, the feature point P_l in the standard image matches the feature point P_r1 with the nearest Euclidean distance in the image to be matched;
4.4b, taking P_r1 in the image to be matched as the feature point and repeating the process of step 4.4a to obtain its candidate matching point P_l' in the standard image; if P_l and P_l' are the same point, then P_l and P_r1 are successfully matched, otherwise the match is abandoned;
4.4c, cycling in this way until the matching is completed.
Preferably, said step 4.5 comprises:
step 4.5.1, position measurement: let the distance between the projection centers of the two cameras, i.e. the baseline distance, be b; the origin of the camera coordinates is at the optical center of the camera lens, and the origins of the left and right image coordinate systems are at the intersections O_1 and O_2 of the camera optical axes with the image planes; the corresponding coordinates of any point P in space in the left and right images are respectively P_1(x_1, y_1) and P_2(x_2, y_2), and the coordinates of point P in the left camera coordinate system are (x, y, z); the distance is calculated by formula (10), in which Δz represents the precision of the distance between the measured point and the stereoscopic vision system, z the absolute distance between the measured point and the stereoscopic vision system, f the focal length of the camera, b the baseline distance of the binocular stereoscopic vision system, and Δd the parallax precision of the measured point;
step 4.5.2, attitude measurement: according to the central projection imaging process of the camera, the established coordinate systems comprise:
1-1, the object coordinate system O_O-x_O y_O z_O; the coordinates of the i-th feature point on the target in this coordinate system are q_i = [x_oi, y_oi, z_oi]^T;
2-2, the measurement coordinate system O_S-x_S y_S z_S; the camera coordinate system of the first camera is taken as the measurement coordinate system, and the coordinates of the i-th feature point on the target in this coordinate system are p_i = [x_Si, y_Si, z_Si]^T;
3-3, the pixel coordinate systems o-u_L v_L and o-u_R v_R of the images acquired by the two cameras, with the upper left corner of the imaging plane as the origin; the u_L, v_L axes are respectively parallel to the x-axis and y-axis of the image plane coordinate system; the projection coordinates of the i-th feature point on the target on the two images, in the pixel coordinate systems, are respectively [u_L, v_L]^T and [u_R, v_R]^T;
taking the first camera coordinate system as the measurement coordinate system, the projection matrices of the two cameras are:
M_1 = K[I 0]    (11)
M_2 = K[R_C T_C]    (12)
where K is the intrinsic parameter matrix of the camera, and R_C and T_C are the rotation matrix and translation vector of the second camera relative to the first camera; for the projection matrix M_1, let M_11, M_12, M_13 be the rows of M_1, let the homogeneous coordinates of the i-th feature point on the image collected by the first camera be [u_Li v_Li 1]^T, and let P_i be the homogeneous coordinates of the feature point in the measurement coordinate system, giving formula (13);
for the projection matrix M_2, there is a corresponding relationship, formula (14), with the homogeneous coordinates [u_Ri v_Ri 1]^T of the projection of the feature point on the image acquired by the second camera;
combining equations (13) and (14) gives equation (15), and equation (15) can be solved by the least squares principle;
suppose n matched feature point image projections are obtained on the images collected by the two cameras, the three-dimensional coordinates P = {p_i} (i = 1, …, n) of the feature points in the measurement coordinate system are solved for, and the three-dimensional coordinates Q = {q_i} (i = 1, …, n) in the object coordinate system are known; the relationship between the coordinate systems is given by:
P = sRQ + T    (16)
where s is a proportionality coefficient; T is a displacement vector, a three-dimensional vector, i.e. T = [T_x, T_y, T_z]^T; R is a rotation matrix, a 3×3 unit orthogonal matrix;
if the data were ideal measurement data, formula (16) could be directly solved from the information of all the points to obtain s, R and T; since there will be errors, the error is:
e_i = p_i - sRq_i - T    (17)
according to least squares theory, the most appropriate solution is obtained when the sum of the squares of all measurement errors is minimal;
the coordinates of the centroids of the n feature points in the measurement coordinate system and the object coordinate system are computed, and new coordinates are obtained in coordinate systems taking the respective centroids as origins; the sum of the squared errors can then be rewritten in terms of these centred coordinates;
having calculated the proportionality coefficient and the rotation vector, the displacement vector can be calculated from the coordinates of the centroids of the feature points in the measurement coordinate system and the object coordinate system;
the relative pose is expressed by the Euler angle method, and the corresponding rotation matrix for the rotation order z → y → x is given by formula (22);
R is a rotation matrix that is a combination of trigonometric functions of the three angles (α, β, θ); the rotation angle α around the X axis is the pitch angle, the rotation angle β around the Y axis is the yaw angle, and the rotation angle θ around the Z axis is the roll angle; the rotation matrix can be obtained from the pose solution (with r_ij denoting the element in the i-th row and j-th column of the matrix), and the Euler angles are obtained according to formula (22), for example:
β = arcsin(r_13)    (24)
compared with the prior art, the invention has the beneficial effects that: the method is more precise in subsequent implementation, can realize rapid high-precision positioning navigation, can adjust the distance and the orientation angle of the AGV relative to the goods shelf, is convenient for subsequent carrying or unloading operation, and is suitable for the area of man-machine mixed work.
Drawings
FIG. 1 is a schematic diagram illustrating the components and principles of an ultrasound networking system according to an embodiment of the present invention;
FIG. 2 is a flow chart of an AGV positioning navigation method according to one embodiment of the present invention;
FIG. 3 is a flow chart of the stereo vision algorithm of the present invention;
FIG. 4 is a diagram showing the correspondence between two-dimensional images and three-dimensional information of an object according to the present invention.
Detailed Description
The invention provides an AGV navigation positioning method based on ultrasonic networking and stereoscopic vision, which is further explained by combining a specific embodiment mode and an attached drawing in order to make the invention more obvious and understandable.
Fig. 1 is a schematic diagram of the composition of an ultrasonic networking system of the present invention, where the ultrasonic networking system includes an information source tag, multiple information source base stations, and a radio frequency transmitting station.
The radio frequency transmitting station comprises a radio frequency transmitting module and can transmit radio frequency synchronous signals to each information source base station and each information source label. Each information source base station comprises a radio frequency receiving module and an ultrasonic wave transmitting module, receives radio frequency signals from the radio frequency transmitting station, and then sends ultrasonic wave pulses to the information source labels. The information source label comprises a radio frequency receiving module and an ultrasonic receiving module; it first receives the radio frequency signal sent by the radio frequency transmitting station and records the moment as T_0, and then receives the ultrasonic pulse signal transmitted by the i-th information source base station and records the moment as T_i.
The number of information source base stations in this embodiment is four: a second information source base station 2, a third information source base station 3, and two first information source base stations 1. The information source base stations and the radio frequency transmitting station surround the information source label.
As shown in FIG. 2, the AGV navigation positioning method of the present invention comprises the following steps:
step 1, networking the information source base station with the ultrasonic wave transmitting module, and calculating the distance between the information source label and the information source base station by using a ranging principle. The method comprises the following specific steps:
step 1.1, the radio frequency transmitting station transmits a radio frequency synchronous signal.
And step 1.2, the information source base station receives the radio frequency signal from the radio frequency transmitting station and then sends the ultrasonic pulse to the information source label.
Step 1.3, the information source label first receives the radio frequency signal and records the reception time as T_0; it then receives the ultrasonic pulse signal transmitted by the i-th information source base station and records the reception time as T_i. The distance between the positioning tag and the i-th information source base station is calculated as:
l_i = (T_i - T_0) × C_2    (1)
where C_2 = 340 m/s is the propagation velocity of the ultrasonic pulse at normal temperature, and l_i is the calculated distance between the i-th information source base station and the information source label.
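As a minimal sketch of how the ranging of formula (1) could be computed in software, the snippet below converts the recorded arrival times into a distance; the function name and the timestamp unit (seconds) are illustrative assumptions, not part of the method itself.

```python
# Sketch of the ranging in formula (1): the tag records the arrival time T_0 of the
# RF synchronisation signal and the arrival time T_i of a base station's ultrasonic
# pulse, and converts the time difference into a distance.
SPEED_OF_SOUND = 340.0  # C_2, propagation velocity in m/s at normal temperature

def range_to_base_station(t0: float, ti: float) -> float:
    """Return l_i = (T_i - T_0) * C_2 in metres; t0 and ti are given in seconds."""
    return (ti - t0) * SPEED_OF_SOUND

# Example: an ultrasonic pulse arriving 8.8 ms after the RF signal is roughly 3 m away.
print(range_to_base_station(0.0, 0.0088))  # about 2.99
```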
Step 2, positioning the information source label by using the trilateral positioning principle, and the AGV uploads the positioning information to the upper computer server through a wireless communication network.
Wherein, the specific steps of the step 2 are as follows:
The coordinates of 3 information source base stations are selected as P_i(x_i, y_i, z_i) (i = 1, 2, 3), and o(x, y, z) denotes the coordinates of the information source label; the geometric relationship obtained by trilateration is formula (2), in which x, y and z are respectively the x-axis, y-axis and z-axis coordinates of the information source label, and x_i, y_i and z_i are respectively the coordinates of the corresponding axes of the i-th information source base station. The coordinates o(x, y, z) of the source tag can be solved from formula (2), thereby completing the positioning function.
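One way to carry out the trilateration of step 2 numerically is sketched below: the tag position is found by least squares on the sphere equations implied by formula (2). The use of scipy's least_squares solver and the station layout in the example are illustrative assumptions, not prescribed by the method.

```python
import numpy as np
from scipy.optimize import least_squares

def trilaterate(stations: np.ndarray, ranges: np.ndarray, x0: np.ndarray) -> np.ndarray:
    """Estimate the tag position o = (x, y, z) from base-station positions P_i and
    measured distances l_i, minimising the residuals ||P_i - o|| - l_i."""
    residual = lambda o: np.linalg.norm(stations - o, axis=1) - ranges
    return least_squares(residual, x0).x

stations = np.array([[0.0, 0.0, 3.0], [5.0, 0.0, 3.0], [0.0, 5.0, 3.0]])  # P_1..P_3
true_tag = np.array([2.0, 1.5, 0.3])
ranges = np.linalg.norm(stations - true_tag, axis=1)                      # l_1..l_3
# With three coplanar stations there is a mirror solution above the station plane;
# a starting guess near the floor selects the physically meaningful one.
print(trilaterate(stations, ranges, x0=np.array([2.5, 2.5, 0.0])))        # ~[2.0 1.5 0.3]
```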
Step 3, the upper computer obtains positioning information of the AGV through ultrasonic networking, and the controller controls a driving device of the AGV according to the path planning, so that the AGV moves from the starting point to the destination along the planned route.
Step 4, starting the stereoscopic vision module; using a triangulation-based method, the shelf is imaged by two cameras and distance information is recovered from the parallax. This step mainly completes the measurement of the relative distance and relative attitude of the goods to be carried or unloaded through the processes of image filtering, edge detection, feature information extraction, stereo matching and target attitude and distance measurement, and adjusts the steering angle of the AGV and its distance to the goods shelf so that the subsequent carrying or unloading actions can proceed smoothly.
The specific method of the step 4 comprises the following steps:
Step 4.1, image filtering: the left camera image and the right camera image are filtered separately to eliminate noise interference and obtain smooth left and right camera images. Median filtering is used; its basic idea is to replace the gray value of a pixel with the median of the gray values in its neighbourhood. Specifically, a two-dimensional window W of odd size k × l is first determined; after all pixels in the window are sorted by gray level, the gray value at the middle position of the window replaces the original gray value, which gives:
f(x,y)=median{h(x-k,y-l),(k,l)∈W} (3)
where h (x, y) is the original gray scale value, f (x, y) is the gray scale value of the enhanced image, and W is the selected window size.
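A small sketch of the median filtering of step 4.1, which realises formula (3) by replacing each pixel with the median of its k × l neighbourhood; scipy's median_filter is used purely for illustration.

```python
import numpy as np
from scipy.ndimage import median_filter

def denoise(image: np.ndarray, k: int = 3, l: int = 3) -> np.ndarray:
    """Median-filter a grey image with an odd k x l window W, as in
    f(x, y) = median{ h(x - k, y - l), (k, l) in W }."""
    assert k % 2 == 1 and l % 2 == 1, "window dimensions should be odd"
    return median_filter(image, size=(k, l))

# An isolated salt-noise pixel on a flat patch is removed completely.
patch = np.full((9, 9), 120, dtype=np.uint8)
patch[4, 4] = 255
print(denoise(patch)[4, 4])  # 120
```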
Step 4.2, edge detection: and respectively carrying out edge detection on the two images to obtain edge characteristic information. The canny edge detection method is used here.
Step 4.2.1, smoothing filtering is first carried out on the image f(x, y) with a Gaussian filter function G(x, y); convolving f(x, y) with G(x, y) gives the smoothed image g(x, y), as in formula (4), where G(x, y) is a two-dimensional Gaussian distribution function and σ is the standard deviation of the normal distribution.
Step 4.2.2, the gradient strength and gradient direction of each pixel point (x, y) in the smoothed image g(x, y) are calculated. The gradient strength and the gradient direction θ(x, y) of each pixel point (x, y) in g(x, y) are given by formulas (5) and (6), where E_x and E_y are respectively the convolutions of the first directional derivatives of G(x, y) with the image f(x, y).
Step 4.2.3, calculating edge points: the central edge point is the maximum point, along the edge gradient direction, of the region of the convolution of G(x, y) and f(x, y). Thus, whether a point is an edge point can be determined by judging whether its intensity is the maximum of its region in each gradient direction.
When a pixel satisfies the following three conditions, it is considered as an edge point of the image:
(a1) The edge strength of the point is greater than the edge strength of two adjacent pixel points along the gradient direction of the point;
(a2) The direction difference between the point and the two adjacent points in the gradient direction of the point is less than 45 degrees;
(a3) The maximum value of the edge intensity in the 3 × 3 region centered on the point is smaller than a certain threshold value.
In addition, if the condition a1 and the condition a2 are satisfied at the same time, two adjacent pixels in the gradient direction are eliminated from the candidate edge points, and the condition a3 is equivalent to matching the threshold image composed of the region gradient maximum value with the edge points, which eliminates many false edge points.
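The Gaussian smoothing, gradient computation, non-maximum suppression and thresholding of step 4.2 are available as a standard pipeline; the sketch below uses OpenCV as one possible implementation, with an illustrative kernel size, σ and threshold values that are assumptions rather than values taken from the text.

```python
import cv2
import numpy as np

def detect_edges(gray: np.ndarray, sigma: float = 1.4,
                 low: int = 50, high: int = 150) -> np.ndarray:
    """Smooth with a Gaussian filter G(x, y) of standard deviation sigma, then run
    Canny edge detection; returns a binary edge map."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), sigma)  # g(x, y) = G(x, y) * f(x, y)
    return cv2.Canny(smoothed, low, high)             # gradients, NMS and hysteresis

# Usage (left/right camera images read as greyscale):
# left_edges = detect_edges(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE))
# right_edges = detect_edges(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE))
```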
Step 4.3, extracting characteristic information: extracting the characteristics of the corners, lines, circles or ellipses of the image subjected to edge detection, and identifying selected characteristic point information from the characteristics.
Step 4.3.1, corner point extraction: a point feature extraction method based on the Harris operator is adopted, and the corner response function is:
R = det M - k(trace M)^2    (7)
where det M = λ_1 × λ_2, trace M = λ_1 + λ_2, k is a weight coefficient with a value of 0.04-0.06, g_x is the gradient in the x direction, g_y is the gradient in the y direction, and a Gaussian filtering template is used as the weighting window. Let the eigenvalues be λ_1 and λ_2; the judgment conditions are:
(b1) λ_1 >> λ_2 or λ_2 >> λ_1: the corresponding point is an edge point, and the corresponding R is negative;
(b2) λ_1 and λ_2 are both small: the corresponding point is an ordinary point on the image (i.e. a flat area of the image), and the corresponding |R| value is small;
(b3) λ_1 and λ_2 are both very large: the corresponding points are corner points, and the corresponding R values are large.
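A short sketch of the Harris response of formula (7); the Sobel gradients, window size and σ of the Gaussian weighting template are illustrative choices that are not fixed by the text.

```python
import cv2
import numpy as np

def harris_response(gray: np.ndarray, k: float = 0.04, sigma: float = 1.0) -> np.ndarray:
    """Per-pixel response R = det(M) - k * trace(M)^2, with M the structure tensor
    built from the gradients g_x, g_y and smoothed by a Gaussian template."""
    gray = gray.astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # g_x
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # g_y
    a = cv2.GaussianBlur(gx * gx, (5, 5), sigma)      # entries of M
    b = cv2.GaussianBlur(gx * gy, (5, 5), sigma)
    c = cv2.GaussianBlur(gy * gy, (5, 5), sigma)
    det_m = a * c - b * b                             # lambda_1 * lambda_2
    trace_m = a + c                                   # lambda_1 + lambda_2
    return det_m - k * trace_m ** 2

# A bright square on a dark background: the corner response exceeds the edge response,
# matching conditions (b1)-(b3) (edges negative, flat areas near zero, corners large).
img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 8:24] = 255
resp = harris_response(img)
print(resp[8, 8] > resp[8, 16])  # True
```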
Step 4.3.2, straight line extraction: a Hough transformation method is adopted to describe the boundary shape of the region. Let the content of an image be a line segment, and let the coordinates of a point A on it be x_i and y_i; the corresponding parameter space values are ρ and θ, which are respectively the polar distance and the polar angle of the point.
The mapping relationship between them is:
ρ = x_i cos θ + y_i sin θ    (8)
In the parameter space, the maximum value of the polar distance ρ is √(x_max^2 + y_max^2), where x_max and y_max are the maximum coordinate values of the image (data space). The polar angle θ varies from 0° to 180°.
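For the straight-line extraction of step 4.3.2, the sketch below calls OpenCV's standard Hough transform on a binary edge map; each returned pair (ρ, θ) satisfies formula (8). The accumulator resolutions and vote threshold are assumed values.

```python
import cv2
import numpy as np

def extract_lines(edge_map: np.ndarray, votes: int = 80):
    """Return (rho, theta) pairs with rho = x*cos(theta) + y*sin(theta),
    theta in [0, pi), as in formula (8)."""
    lines = cv2.HoughLines(edge_map, 1, np.pi / 180, votes)
    return [] if lines is None else [tuple(line[0]) for line in lines]

# Usage on an edge map produced earlier, e.g. by detect_edges():
# for rho, theta in extract_lines(left_edges):
#     print(f"line: rho = {rho:.1f} px, theta = {np.degrees(theta):.1f} deg")
```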
Step 4.4, stereo matching: and according to the two-dimensional information of the identified feature points in the left camera and the right camera, completing the corresponding matching of the feature points on the left image and the right image, and removing bad matching points.
The Euclidean distance is a very common similarity measure. Given a feature point p_i on the reference image and a feature point q_j on the image to be matched, the Euclidean distance method is adopted as the similarity measurement standard, and the Euclidean distance value D_ij is defined by formula (9), where L_i(k) and L_j(k) are respectively the feature description vectors of the feature points p_i and q_j. According to the characteristics of the Euclidean distance, the smaller the value of D_ij, the more similar the two points are and the more likely they are a matching point pair; a threshold value T is set, and if D_ij is less than T, the two points are considered to match. The specific steps are as follows:
a) Take a feature point P_l in the standard image and find the two feature points P_r1 and P_r2 with the nearest Euclidean distances in the image to be matched; if the nearest distance D_min divided by the next nearest distance U_cmin is less than the matching threshold, the feature point P_l in the standard image matches the feature point P_r1 with the nearest Euclidean distance in the image to be matched.
b) Take P_r1 in the image to be matched as the feature point and repeat the process of step a) to obtain its candidate matching point P_l' in the standard image.
c) If P_l and P_l' are the same point, then P_l and P_r1 are successfully matched; otherwise the match is abandoned.
d) This process is repeated until the matching is complete.
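A self-contained sketch of the matching rule of step 4.4: Euclidean distances D_ij between descriptors (formula (9)), the nearest/next-nearest ratio test of step a), and the cross check of steps b)-c). The descriptor arrays and the ratio value are assumed inputs; descriptor extraction itself is outside this sketch.

```python
import numpy as np

def match_features(desc_l: np.ndarray, desc_r: np.ndarray, ratio: float = 0.8):
    """Match left->right descriptor rows by Euclidean distance, keep pairs that pass
    the nearest/next-nearest ratio test and also win the symmetric right->left check."""
    dists = np.linalg.norm(desc_l[:, None, :] - desc_r[None, :, :], axis=2)  # D_ij
    matches = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]            # nearest and second-nearest candidates
        if row[j1] / row[j2] < ratio:           # D_min / U_cmin below the threshold
            if np.argmin(dists[:, j1]) == i:    # cross check back to the left image
                matches.append((i, j1))
    return matches

# Synthetic check: nearly identical descriptor sets should match almost one-to-one.
rng = np.random.default_rng(1)
left = rng.random((50, 64)).astype(np.float32)
right = left + 0.01 * rng.random((50, 64)).astype(np.float32)
print(len(match_features(left, right)))  # close to 50
```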
Step 4.5, measuring the target position and the attitude: and further constructing a coordinate system of the test object according to the stereo matching result, and calculating the position and the posture of the test object relative to the world coordinate. The method comprises the following specific steps:
Step 4.5.1, position measurement: as shown in FIG. 4, let the distance between the projection centers of the two cameras, i.e. the baseline distance, be b; the origin of the camera coordinates is at the optical center of the camera lens, and the origins of the left and right image coordinate systems are at the intersections O_1 and O_2 of the camera optical axes with the image planes; the corresponding coordinates of a point P in space in the left and right images are P_1(x_1, y_1) and P_2(x_2, y_2) respectively, and the coordinates of point P in the left camera coordinate system are (x, y, z). The distance can then be calculated by formula (10), in which Δz represents the precision of the distance between the measured point and the stereoscopic vision system, z the absolute distance between the measured point and the stereoscopic vision system, f the focal length of the camera, b the baseline distance of the binocular stereoscopic vision system, and Δd the parallax precision of the measured point.
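Under the usual rectified binocular model, depth follows from the disparity d = x_1 - x_2 as z = f·b/d, and differentiating this relation gives Δz ≈ z²·Δd/(f·b), which is consistent with the quantities listed for formula (10). The sketch below illustrates both relations under these assumptions; the numeric values are examples only.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """z = f * b / d for a rectified stereo pair (focal length expressed in pixels)."""
    return f_px * baseline_m / disparity_px

def depth_precision(z_m: float, f_px: float, baseline_m: float, dd_px: float) -> float:
    """Delta-z ~ z^2 * Delta-d / (f * b): range error grows with the square of range."""
    return z_m ** 2 * dd_px / (f_px * baseline_m)

z = depth_from_disparity(f_px=800.0, baseline_m=0.12, disparity_px=40.0)  # 2.4 m
print(z, depth_precision(z, 800.0, 0.12, dd_px=0.5))                      # 2.4, 0.03
```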
Step 4.5.2, attitude measurement: according to the central projection imaging process of the camera, the following coordinate systems are established:
1-1) the object coordinate system O_O-x_O y_O z_O; let the coordinates of the i-th feature point on the target in this coordinate system be q_i = [x_oi, y_oi, z_oi]^T.
2-2) the measurement coordinate system O_S-x_S y_S z_S; binocular vision only needs to consider the relationship between the different positions at which the cameras collect images, so the camera coordinate system of the first camera is taken as the measurement coordinate system. Let the coordinates of the i-th feature point on the target in this coordinate system be p_i = [x_Si, y_Si, z_Si]^T.
3-3) the pixel coordinate systems o-u_L v_L and o-u_R v_R of the images acquired by the two cameras, with the upper left corner of the imaging plane as the origin; the u_L, v_L axes are parallel to the x-axis and y-axis of the image plane coordinate system respectively. The projection coordinates of the i-th feature point of the target on the two images, in the pixel coordinate systems, are [u_L, v_L]^T and [u_R, v_R]^T respectively. The relationship between the coordinate systems is shown in FIG. 4.
Because the first camera coordinate system is used as the measurement coordinate system, the projection matrices of the two cameras are:
M_1 = K[I 0]    (11)
M_2 = K[R_C T_C]    (12)
where K is the intrinsic parameter matrix of the camera, and R_C and T_C are the rotation matrix and translation vector of the second camera relative to the first camera. For the projection matrix M_1, let M_11, M_12, M_13 be the rows of M_1 and let the homogeneous coordinates of the i-th feature point on the image collected by the first camera be [u_Li v_Li 1]^T; with P_i the homogeneous coordinates of the feature point in the measurement coordinate system, the relationship of formula (13) holds.
For the same reason, the projection matrix M_2 has a similar relationship, formula (14), with the homogeneous coordinates [u_Ri v_Ri 1]^T of the projection of the feature point on the image acquired by the second camera.
Combining equations (13) and (14) gives equation (15), and equation (15) can be solved by the least squares principle.
Suppose n matched feature point image projections are obtained on the images collected by the two cameras; the three-dimensional coordinates P = {p_i} (i = 1, …, n) of the feature points in the measurement coordinate system are solved for, and the three-dimensional coordinates Q = {q_i} (i = 1, …, n) in the object coordinate system are known. The relationship between these two coordinate systems can be expressed as:
P = sRQ + T    (16)
where s is a proportionality coefficient; T is a displacement vector, a three-dimensional vector, i.e. T = [T_x, T_y, T_z]^T; and R is a rotation matrix, a 3×3 unit orthogonal matrix.
If the data were ideal measurement data, the equation could be directly solved from the information of all the points to obtain s, R and T. But there will always be errors, and the error is:
e_i = p_i - sRq_i - T    (17)
According to least squares theory, the most suitable solution is obtained when the sum of the squares of all measurement errors is minimal.
The coordinates of the centroids of the n feature points in the measurement coordinate system and the object coordinate system are computed, and new coordinates are obtained in coordinate systems taking the respective centroids as origins; the sum of the squared errors can then be rewritten in terms of these centred coordinates.
Having calculated the proportionality coefficient and the rotation vector, the displacement vector can be calculated from the coordinates of the centroids of the feature points in the measurement coordinate system and the object coordinate system.
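The least-squares problem of formulas (16) and (17), finding s, R and T that map the object-frame points Q onto the measured points P, is commonly solved by removing the centroids and taking an SVD of the cross-covariance; the SVD step and the function below are an assumed numerical recipe offered as a sketch, not a quotation of the text.

```python
import numpy as np

def fit_similarity(q: np.ndarray, p: np.ndarray):
    """Estimate s, R, T such that p_i ~ s * R @ q_i + T for Nx3 arrays q and p,
    minimising the squared errors e_i = p_i - s*R*q_i - T of formula (17)."""
    qc, pc = q.mean(axis=0), p.mean(axis=0)      # centroids, as in the text
    q0, p0 = q - qc, p - pc                      # centred coordinates
    u, sing, vt = np.linalg.svd(p0.T @ q0)       # SVD of the cross-covariance
    d = np.sign(np.linalg.det(u @ vt))           # keep R a proper rotation
    s_mat = np.diag([1.0, 1.0, d])
    r = u @ s_mat @ vt
    s = np.trace(s_mat @ np.diag(sing)) / (q0 ** 2).sum()
    t = pc - s * r @ qc                          # displacement from the two centroids
    return s, r, t

# Synthetic check: recover a known scale, rotation and translation.
rng = np.random.default_rng(0)
q = rng.normal(size=(20, 3))
r_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(r_true) < 0:
    r_true[:, 0] *= -1.0
p = 1.3 * q @ r_true.T + np.array([0.5, -0.2, 2.0])
s, r, t = fit_similarity(q, p)
print(round(s, 3), np.allclose(r, r_true, atol=1e-6), np.round(t, 3))
```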
The relative pose is expressed by the Euler angle method, and the corresponding rotation matrix for the rotation order z → y → x is given by formula (22).
R is a rotation matrix that is a combination of trigonometric functions of the three angles (α, β, θ). The rotation angle α around the X axis is the pitch angle, the rotation angle β around the Y axis is the yaw angle, and the rotation angle θ around the Z axis is the roll angle. The rotation matrix can be obtained from the pose solution (with r_ij denoting the element in the i-th row and j-th column of the matrix), and the Euler angles can then be obtained from formula (22), for example:
β = arcsin(r_13)    (24)
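Assuming the rotation matrix is composed as R = R_x(α)·R_y(β)·R_z(θ), one reading of "rotation order z → y → x" that is consistent with formula (24) (β = arcsin(r_13)), the three Euler angles can be read back from R as below; the decomposition convention is an assumption.

```python
import numpy as np

def euler_from_rotation(r: np.ndarray):
    """Recover (alpha, beta, theta) = (pitch about X, yaw about Y, roll about Z)
    from R = Rx(alpha) @ Ry(beta) @ Rz(theta); beta = arcsin(r_13) as in formula (24)."""
    beta = np.arcsin(np.clip(r[0, 2], -1.0, 1.0))   # r_13 = sin(beta)
    alpha = np.arctan2(-r[1, 2], r[2, 2])           # -r_23 / r_33 = tan(alpha)
    theta = np.arctan2(-r[0, 1], r[0, 0])           # -r_12 / r_11 = tan(theta)
    return alpha, beta, theta

def rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(b): c, s = np.cos(b), np.sin(b); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rz(t): c, s = np.cos(t), np.sin(t); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta, theta = 0.1, -0.2, 0.3                 # radians
r = rx(alpha) @ ry(beta) @ rz(theta)
print(np.round(euler_from_rotation(r), 3))          # [ 0.1 -0.2  0.3]
```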
Therefore, in summary, in step 4 the driving device of the AGV is controlled to move and steer according to the distance information and the attitude information of the goods relative to the AGV, as measured by stereoscopic vision, so that the AGV directly faces the rack and keeps a proper distance, and the subsequent carrying operation can be completed.
Step 5, aligning the fork with an insertion hole of the goods shelf, and finally carrying away the goods.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (10)

1. An AGV navigation positioning method based on ultrasonic networking and stereoscopic vision is characterized by comprising the following steps:
step 1, starting ultrasonic networking, and calculating the distance between an information source label and an information source base station by using a distance measuring principle;
step 2, positioning the information source label by utilizing a trilateral positioning principle, and uploading positioning information of the information source label to an upper computer server by an AGV (automatic guided vehicle) through a wireless communication network;
step 3, the upper computer obtains positioning information of the AGV through ultrasonic networking, and a driving device of the AGV is controlled through a path planning controller, so that the AGV moves from a starting point to a destination according to a planned path;
step 4, starting a stereoscopic vision module, measuring the relative distance and the relative posture of the goods to be carried or unloaded through the processes of image filtering, edge detection, characteristic information extraction, stereoscopic matching and target posture and distance measurement, and adjusting the steering angle of the AGV and the distance between the AGV and a goods shelf;
step 5, aligning the fork with the insertion hole of the goods shelf, and then carrying the goods.
2. An AGV navigation positioning method according to claim 1,
the ultrasonic networking comprises an information source label, a plurality of information source base stations and a radio frequency transmitting station;
the radio frequency transmitting station comprises a radio frequency transmitting module which transmits radio frequency synchronous signals to the information source labels and the information source base stations;
each information source base station comprises a radio frequency receiving module and an ultrasonic wave transmitting module, and the information source base station receives radio frequency signals from the radio frequency transmitting station and then transmits ultrasonic wave pulses to the information source label;
the information source label comprises a radio frequency receiving module and an ultrasonic receiving module, wherein the radio frequency receiving module of the information source label receives radio frequency signals sent by the radio frequency transmitting station and records the receiving time, and then receives ultrasonic pulse signals transmitted by each information source base station and records the receiving time.
3. An AGV navigation positioning method according to claim 2,
the number of the information source base stations is four, and the information source base stations are respectively a second information source base station, a third information source base station and two first information source base stations;
and the information source base station and the radio frequency transmitting station surround the outer side of the information source label.
4. An AGV navigation positioning method according to claim 2,
the step 1 comprises the following steps:
step 1.1, a radio frequency transmitting station of the ultrasonic networking transmits a radio frequency synchronous signal;
step 1.2, an information source base station of the ultrasonic networking receives a radio frequency signal from a radio frequency transmitting station and sends an ultrasonic pulse to an information source label;
step 1.3, the information source label first receives the radio frequency signal and records the reception time as T_0; then the ultrasonic pulse signal transmitted by the i-th information source base station is received, and the reception time is recorded as T_i; the distance between the positioning tag and the i-th information source base station is calculated as:
l_i = (T_i - T_0) × C_2    (1)
where C_2 = 340 m/s is the propagation velocity of the ultrasonic pulse at normal temperature, and l_i is the calculated distance between the i-th information source base station and the information source label.
5. An AGV navigation positioning method according to claim 4,
the step 2 comprises the following steps: selecting the coordinates of 3 information source base stations as P_i(x_i, y_i, z_i) (i = 1, 2, 3), with o(x, y, z) the coordinates of the information source label; the geometric relationship obtained by trilateration is formula (2), in which x, y and z are respectively the x-axis, y-axis and z-axis coordinates of the information source label, and x_i, y_i and z_i are respectively the coordinates of the corresponding axes of the i-th information source base station.
6. An AGV navigation positioning method according to claim 5,
the step 4 comprises the following steps:
step 4.1, image filtering: filtering the left camera image and the right camera image by adopting a median filtering method, eliminating noise interference and obtaining smooth left camera image and smooth right camera image; the specific method of the step 4.1 comprises the following steps: firstly, determining a two-dimensional window W with odd pixels, wherein the size of the two-dimensional window W is k x l, and after all pixels in the window are queued according to the gray scale, replacing the original gray scale value with the gray scale value at the middle position of the window to obtain:
f(x,y)=median{h(x-k,y-l),(k,l)∈W} (3)
in the formula, h (x, y) is an original gray value, f (x, y) is a gray value of an enhanced image, and W is the size of a selected window;
step 4.2, edge detection: respectively carrying out edge detection on the two images by a canny edge detection method to obtain edge characteristic information;
step 4.3, feature information extraction: extracting the characteristics of corners, straight lines, circles or ellipses of the image subjected to edge detection, and identifying selected characteristic point information from the characteristics;
step 4.4, stereo matching: according to the two-dimensional information of the identified feature points in the left camera and the right camera, completing the corresponding matching of the feature points on the left image and the right image, and removing bad matching points;
step 4.5, measuring the target position and the attitude: and according to the stereo matching result, constructing a coordinate system of the test object, and calculating the position and the posture of the test object relative to the world coordinate.
7. An AGV navigation positioning method according to claim 6,
said step 4.2 comprises:
step 4.2.1, smoothing filtering is first carried out on the image f(x, y) with a Gaussian filter function G(x, y); convolving G(x, y) with f(x, y) gives the smoothed image g(x, y), as in formula (4), where G(x, y) is a two-dimensional Gaussian distribution function and σ is the standard deviation of the normal distribution;
step 4.2.2, the gradient strength and gradient direction of each pixel point (x, y) in the smoothed image g(x, y) are calculated; the gradient strength and the gradient direction θ(x, y) of each pixel point (x, y) in g(x, y) are given by formulas (5) and (6), where E_x and E_y are respectively the convolutions of the first directional derivatives of G(x, y) with the image f(x, y);
step 4.2.3, calculating edge points: the central edge point is the maximum point, along the edge gradient direction, of the region of the convolution of G(x, y) and f(x, y); whether a point is an edge point is determined by judging whether its intensity is the maximum of its region in each gradient direction.
8. An AGV navigation positioning method according to claim 7,
said step 4.3 comprises:
step 4.3.1, corner point extraction: a point feature extraction method based on the Harris operator is adopted, and the corner response function is:
R = det M - k(trace M)^2    (7)
where det M = λ_1 × λ_2, trace M = λ_1 + λ_2, k is a weight coefficient with a value of 0.04-0.06, g_x is the gradient in the x direction, g_y is the gradient in the y direction, and a Gaussian filtering template is used as the weighting window; let the eigenvalues be λ_1 and λ_2, and the judgment conditions are:
b1, λ_1 >> λ_2 or λ_2 >> λ_1: the corresponding point is an edge point, and the corresponding R is negative;
b2, λ_1 and λ_2 are both very small: the corresponding point is an ordinary point on the image, and the corresponding |R| value is small;
b3, λ_1 and λ_2 are both large: the corresponding points are corner points, and the corresponding R values are large;
step 4.3.2, straight line extraction: a Hough transformation method is adopted to describe the shape of the region boundary; let the content of an image be a line segment, and let the coordinates of a point A on it be x_i and y_i; the corresponding parameter space values are ρ and θ, which are respectively the polar distance and the polar angle of the point; the mapping relation between the polar distance and the polar angle is:
ρ = x_i cos θ + y_i sin θ    (8)
in the parameter space, the maximum value of the polar distance ρ is √(x_max^2 + y_max^2), where x_max and y_max are the maximum coordinate values of the image; the polar angle θ varies from 0° to 180°.
9. An AGV navigation positioning method according to claim 8,
step 4.4 adopts the Euclidean distance method as the similarity measurement standard; given a feature point p_i on the reference image and a feature point q_j on the image to be matched, their Euclidean distance value D_ij is defined by formula (9), where L_i(k) and L_j(k) are respectively the feature description vectors of the feature points p_i and q_j; according to the characteristics of the Euclidean distance, the smaller the value of D_ij, the more similar the feature points p_i and q_j are and the more likely they are a matching point pair; a threshold value T is set, and if D_ij is less than T, the feature points p_i and q_j are considered to match;
the step of step 4.4 comprises:
4.4a, taking a feature point P_l in the standard image and finding the two feature points P_r1 and P_r2 with the nearest Euclidean distances in the image to be matched; if the nearest distance D_min divided by the next nearest distance U_cmin is less than the matching threshold, the feature point P_l in the standard image matches the feature point P_r1 with the nearest Euclidean distance in the image to be matched;
4.4b, taking P_r1 in the image to be matched as the feature point and repeating the process of step 4.4a to obtain its candidate matching point P_l' in the standard image; if P_l and P_l' are the same point, then P_l and P_r1 are successfully matched, otherwise the match is abandoned;
4.4c, cycling in this way until the matching is completed.
10. An AGV navigation positioning method according to claim 9,
said step 4.5 comprises:
step 4.5.1, position measurement; setting the distance between the connecting lines of the projection centers of the two cameras as the base line distance b, setting the origin of the coordinates of the cameras at the optical center of the lens of the cameras, and setting the origin of the coordinate system of the left and right images at the intersection O of the optical axis of the cameras and the plane 1 And O 2 The corresponding coordinates of any point P in the space in the left image and the right image are respectively P 1 (x 1 ,y 1 ) And P 2 (x 2 ,y 2 ) The coordinates of point P in the left camera coordinate system are (x, y, z), and the distance is calculated by the following formula:
in the formula, delta z represents the precision of the distance between the measured point and the stereoscopic vision system, z represents the absolute distance between the measured point and the stereoscopic vision system, f represents the focal length of the camera, b represents the baseline distance of the binocular stereoscopic vision system, and delta d represents the parallax precision of the measured point;
step 4.5.2, measuring the attitude; according to the process of central projection imaging of the camera, the established coordinate system comprises the following components:
1-1, object coordinate system O_O-x_O y_O z_O: let the coordinates of the ith feature point on the target in this coordinate system be q_i = [x_Oi, y_Oi, z_Oi]^T;
2-2, measurement coordinate system O_S-x_S y_S z_S: the camera coordinate system of the first camera is taken as the measurement coordinate system, and the coordinates of the ith feature point on the target in this coordinate system are p_i = [x_Si, y_Si, z_Si]^T;
3-3, pixel coordinate systems o-u_L v_L and o-u_R v_R of the images acquired by the two cameras: the origin is at the upper-left corner of the imaging plane, and the u and v axes are respectively parallel to the x-axis and y-axis of the image plane coordinate system; the projections of the ith feature point on the target onto the two images have pixel coordinates [u_L, v_L]^T and [u_R, v_R]^T respectively.
Taking the first camera coordinate system as the measurement coordinate system, the projection matrices of the two cameras are:
M_1 = K[I 0] (11)
M_2 = K[R_C T_C] (12)
where K is the intrinsic parameter matrix of the cameras, and R_C and T_C are the rotation matrix and translation vector of the second camera relative to the first camera; for the projection matrix M_1, let M_11, M_12 and M_13 denote its three row vectors; the homogeneous coordinates of the ith feature point on the image collected by the first camera are [u_Li, v_Li, 1]^T, and P_i is the homogeneous coordinate of the feature point in the measurement coordinate system, giving formula (13):
(u_Li·M_13 − M_11)·P_i = 0
(v_Li·M_13 − M_12)·P_i = 0 (13)
similarly, for the projection matrix M_2 with row vectors M_21, M_22 and M_23, and the projection of the feature point on the image acquired by the second camera with homogeneous coordinates [u_Ri, v_Ri, 1]^T, formula (14) is obtained:
(u_Ri·M_23 − M_21)·P_i = 0
(v_Ri·M_23 − M_22)·P_i = 0 (14)
Combining equations (13) and (14) yields the overdetermined linear system in P_i:
A·P_i = 0, with A = [u_Li·M_13 − M_11; v_Li·M_13 − M_12; u_Ri·M_23 − M_21; v_Ri·M_23 − M_22] (15)
the equation (15) can be solved by the least square principle
Given n matched feature point projections on the images collected by the two cameras, the three-dimensional coordinates P = {p_i} (i = 1, …, n) of the feature points in the measurement coordinate system are solved as above, while their three-dimensional coordinates Q = {q_i} (i = 1, …, n) in the object coordinate system are known; the relationship between the two coordinate systems is:
P=sRQ+T (16)
where s is a scale factor; T is a displacement vector, a three-dimensional vector T = [T_x, T_y, T_z]^T; and R is a rotation matrix, a 3×3 orthonormal matrix;
for ideal measurement data, equation (16) could be solved directly from all the points to obtain s, R and T; since measurement errors are present, the error for each point is:
e_i = p_i − s·R·q_i − T (17)
according to least-squares theory, the most appropriate solution is obtained when the sum of squared measurement errors Σ_{i=1..n} ‖e_i‖² is minimized;
let the centroids of the n feature points in the measurement coordinate system and in the object coordinate system be
p̄ = (1/n)·Σ_{i=1..n} p_i, q̄ = (1/n)·Σ_{i=1..n} q_i;
the new coordinates in coordinate systems taking the respective centroids as origins are then
p_i' = p_i − p̄, q_i' = q_i − q̄;
the sum of squared errors can then be written as
Σ ‖e_i‖² = Σ ‖p_i' − s·R·q_i'‖² + n·‖p̄ − s·R·q̄ − T‖²,
or, expanding the first term,
Σ ‖p_i'‖² − 2s·Σ p_i'^T·R·q_i' + s²·Σ ‖q_i'‖² = S_p − 2s·D + s²·S_q,
where S_p = Σ ‖p_i'‖², S_q = Σ ‖q_i'‖² and D = Σ p_i'^T·R·q_i';
once the scale factor and the rotation matrix have been calculated, the displacement vector can be computed from the centroids of the feature points in the measurement coordinate system and the object coordinate system, i.e. T = p̄ − s·R·q̄;
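The patent's intermediate expressions for the scale factor and rotation were not preserved in this text; as a sketch under that caveat, the widely used SVD-based closed-form solution (Kabsch/Umeyama style) is shown below. It follows the same centroid decomposition and ends with T = p̄ − s·R·q̄; the function name and array layout are assumptions.

```python
import numpy as np

def absolute_orientation(P, Q):
    """Estimate s, R, T with p_i ≈ s·R·q_i + T (eq. 16) in the least-squares sense.

    P: (n, 3) feature point coordinates in the measurement coordinate system.
    Q: (n, 3) coordinates of the same points in the object coordinate system.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)            # centroids
    Pc, Qc = P - p_bar, Q - q_bar                             # centroid-relative coordinates

    H = Qc.T @ Pc                                             # 3x3 correlation of the centred sets
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                        # proper rotation, det(R) = +1

    s = np.trace(np.diag(S) @ D) / np.sum(Qc ** 2)            # scale factor
    T = p_bar - s * R @ q_bar                                 # displacement from the centroids
    return s, R, T
```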
the relative pose is expressed by the Euler-angle method; with the rotation order z → y → x, the corresponding rotation matrix is
R = Rx(α)·Ry(β)·Rz(θ) =
[ cosβ·cosθ, −cosβ·sinθ, sinβ ]
[ cosα·sinθ + sinα·sinβ·cosθ, cosα·cosθ − sinα·sinβ·sinθ, −sinα·cosβ ] (22)
[ sinα·sinθ − cosα·sinβ·cosθ, sinα·cosθ + cosα·sinβ·sinθ, cosα·cosβ ]
R is a rotation matrix formed from trigonometric functions of the three angles (α, β, θ); the rotation angle α about the X axis is the pitch angle, the rotation angle β about the Y axis is the yaw angle, and the rotation angle θ about the Z axis is the roll angle; the rotation matrix obtained from the pose solution has elements r_ij (i, j = 1, …, 3), where r_ij is the element in the ith row and jth column; comparing it with equation (22), the Euler angles are found:
β = arcsin(r_13) (24)
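A short sketch of recovering the three Euler angles from a numerically solved rotation matrix, assuming the z → y → x order stated above so that β = arcsin(r_13) as in (24); the α and θ expressions follow from the same matrix form and are stated here as assumptions, and the gimbal-lock case cosβ = 0 is not handled.

```python
import numpy as np

def euler_from_rotation(R):
    """Return (alpha, beta, theta) in radians for R = Rx(alpha) @ Ry(beta) @ Rz(theta)."""
    beta = np.arcsin(R[0, 2])                  # r13: yaw about the Y axis, as in eq. (24)
    alpha = np.arctan2(-R[1, 2], R[2, 2])      # -r23 / r33: pitch about the X axis
    theta = np.arctan2(-R[0, 1], R[0, 0])      # -r12 / r11: roll about the Z axis
    return alpha, beta, theta
```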
CN201711033453.4A 2017-10-30 2017-10-30 AGV navigation locating methods based on ultrasonic wave networking and stereoscopic vision Pending CN108051007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711033453.4A CN108051007A (en) 2017-10-30 2017-10-30 AGV navigation locating methods based on ultrasonic wave networking and stereoscopic vision

Publications (1)

Publication Number Publication Date
CN108051007A true CN108051007A (en) 2018-05-18

Family

ID=62118585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711033453.4A Pending CN108051007A (en) 2017-10-30 2017-10-30 AGV navigation locating methods based on ultrasonic wave networking and stereoscopic vision

Country Status (1)

Country Link
CN (1) CN108051007A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100042319A1 (en) * 2008-08-15 2010-02-18 Wu Chih-Jen Automatic ultrasonic and computer-vision navigation device and method using the same
CN101726296A (en) * 2009-12-22 2010-06-09 哈尔滨工业大学 Vision measurement, path planning and GNC integrated simulation system for space robot
CN105841687A (en) * 2015-01-14 2016-08-10 上海智乘网络科技有限公司 Indoor location method and indoor location system
CN104777835A (en) * 2015-03-11 2015-07-15 武汉汉迪机器人科技有限公司 Omni-directional automatic forklift and 3D stereoscopic vision navigating and positioning method
CN106507478A (en) * 2016-11-28 2017-03-15 成都理想境界科技有限公司 A kind of localization method, positioning network and alignment system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
胡春田: "Research and Implementation of 3D Reconstruction Technology Based on CCD Images", China Master's Theses Full-text Database, Information Science and Technology Series *
陈兴祁: "Research on Navigation Technology of Intelligent Vehicles Based on Ultrasonic and Visual Positioning", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110133663A (en) * 2019-05-08 2019-08-16 西安联丰迅声信息科技有限责任公司 A kind of distributed acoustic is as joint calibration localization method
CN110133663B (en) * 2019-05-08 2023-03-10 西安联丰迅声信息科技有限责任公司 Distributed acoustic image joint calibration positioning method
WO2021004483A1 (en) * 2019-07-11 2021-01-14 深圳市海柔创新科技有限公司 Navigation method, mobile carrier, and navigation system
CN113110433A (en) * 2021-04-02 2021-07-13 深圳优地科技有限公司 Robot posture adjusting method, device, equipment and storage medium
CN113110433B (en) * 2021-04-02 2024-05-31 深圳优地科技有限公司 Robot posture adjustment method, device, equipment and storage medium
CN113581720A (en) * 2021-07-06 2021-11-02 浙江世仓智能仓储设备有限公司 Running control method of four-way shuttle car for three-dimensional shuttle library
CN113490171A (en) * 2021-08-11 2021-10-08 重庆大学 Indoor positioning method based on visual label
CN113490171B (en) * 2021-08-11 2022-05-13 重庆大学 Indoor positioning method based on visual label
CN113885510A (en) * 2021-10-21 2022-01-04 齐鲁工业大学 Obstacle avoidance and pilot following method and system for four-legged robot
CN113885510B (en) * 2021-10-21 2023-11-10 齐鲁工业大学 Four-foot robot obstacle avoidance and pilot following method and system
CN114524209A (en) * 2021-12-21 2022-05-24 杭叉集团股份有限公司 AGV high-position stacking method and detection device based on double TOF cameras

Similar Documents

Publication Publication Date Title
CN108012325B (en) Navigation positioning method based on UWB and binocular vision
CN108051007A (en) AGV navigation locating methods based on ultrasonic wave networking and stereoscopic vision
US10809071B2 (en) Method for constructing a map while performing work
CN108571971B (en) AGV visual positioning system and method
CN109598765B (en) Monocular camera and millimeter wave radar external parameter combined calibration method based on spherical calibration object
WO2021026850A1 (en) Qr code-based navigation attitude determining and positioning method and system
Zhou A new minimal solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences
EP3324209A1 (en) Methods and systems for vehicle environment map generation and updating
KR102420476B1 (en) Apparatus and method for estimating location of vehicle and computer recordable medium storing computer program thereof
CN107980138B (en) False alarm obstacle detection method and device
CN102763132B (en) Three-dimensional measurement apparatus and processing method
JP5992184B2 (en) Image data processing apparatus, image data processing method, and image data processing program
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
US20190120934A1 (en) Three-dimensional alignment of radar and camera sensors
EP1394761A2 (en) Obstacle detection device and method therefor
WO2019022912A1 (en) Systems and methods for determining a vehicle position
KR102261791B1 (en) Reflector and 2d lidar based autonomous docking system
EP3155369B1 (en) System and method for measuring a displacement of a mobile platform
CN114474056B (en) Monocular vision high-precision target positioning method for grabbing operation
CA3044322A1 (en) Self-calibrating sensor system for a wheeled vehicle
CN112669354A (en) Multi-camera motion state estimation method based on vehicle incomplete constraint
CN113843798A (en) Method and system for correcting grabbing and positioning errors of mobile robot and robot
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN113643380A (en) Mechanical arm guiding method based on monocular camera vision target positioning
CN116091603A (en) Box workpiece pose measurement method based on point characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20180518