CN111337011A - Indoor positioning method based on laser and two-dimensional code fusion - Google Patents

Indoor positioning method based on laser and two-dimensional code fusion Download PDF

Info

Publication number
CN111337011A
CN111337011A (application CN201911257092.0A)
Authority
CN
China
Prior art keywords
robot
dimensional code
pose
binary image
laser
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911257092.0A
Other languages
Chinese (zh)
Inventor
林欢
李栗
张利平
程敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yijiahe Technology Co Ltd
Original Assignee
Yijiahe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yijiahe Technology Co Ltd filed Critical Yijiahe Technology Co Ltd
Priority to CN201911257092.0A priority Critical patent/CN111337011A/en
Publication of CN111337011A publication Critical patent/CN111337011A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/005 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 15/00 - Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G01C 15/002 - Active optical surveying means
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10821 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Toxicology (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an indoor positioning method based on laser and two-dimensional code fusion, which comprises the following steps: acquiring laser positioning data with a laser sensor and calculating a first pose h_t of the robot; calculating a second pose k_t of the robot from two-dimensional code indoor positioning data; fusing the first pose h_t and the second pose k_t by particle filtering to obtain the final pose of the robot, which includes generating particles, updating the generated particle states with a pose prediction equation, updating the particle weights with the observation model, and calculating the state variable estimate, the final state variable estimate being the final pose of the robot. The invention overcomes the influence of illumination change, greatly improves the positioning precision of the robot and effectively avoids the problem of robot positioning loss.

Description

Indoor positioning method based on laser and two-dimensional code fusion
Technical Field
The invention relates to the field of inspection robots, in particular to an indoor positioning method based on laser and two-dimensional code fusion.
Background
Positioning is the process by which a robot obtains its own pose by sensing itself and the surrounding environment and performing a certain amount of data processing. Indoor positioning of autonomous mobile robots, as the most fundamental problem in the field of robot research, has been widely studied. Current positioning methods for indoor mobile robots mainly comprise navigation beacon positioning and laser-based probabilistic map positioning. Beacon positioning locates the robot using artificial or natural landmarks and the triangulation principle; it relies on beacons with known characteristics within a certain range of the environment and requires a visual sensor mounted on the mobile robot to observe the beacons. Beacon-based positioning systems can provide fast, stable and accurate absolute position information, but they are susceptible to variations in indoor lighting intensity. Laser-based probabilistic map positioning expresses uncertainty with probability theory: the robot pose is represented as a probability distribution over all possible poses, and the position with the highest probability is taken as the position of the robot. Its positioning accuracy is high, but in actual operation the robot easily gets lost due to factors such as changes in the real indoor environment, so positioning is lost with a certain probability.
The idea of particle filtering (PF: Particle Filter) is based on the Monte Carlo method, which uses a set of particles to represent a probability distribution and can be applied to any form of state-space model. Its core idea is to express the distribution of the state by drawing random state particles from the posterior probability; it is a sequential importance sampling method (Sequential Importance Sampling). Briefly, the particle filtering method approximates the probability density function by a group of random samples propagated in the state space, and replaces the integral operation with the sample mean to obtain the minimum-variance estimate of the state. The samples here are the particles; any form of probability density distribution can be approximated as the number of samples N → ∞. The superiority of the particle filtering technique in nonlinear, non-Gaussian systems gives it a very wide range of applications, and its multi-modal processing capability is another reason for its popularity. Internationally, particle filtering has been applied in many fields: in economics for economic data prediction; in the military field for radar tracking of airborne vehicles and for air-to-air and air-to-ground passive tracking; in traffic control for video monitoring of vehicles or pedestrians; and also for robot mapping.
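As a concrete illustration of this predict, weight and resample cycle, the following is a minimal generic particle-filter sketch in Python; it is not the patented method itself, and the state-transition and observation-likelihood functions are placeholder assumptions supplied by the caller.

    import numpy as np

    def particle_filter(transition, observe_likelihood, z_seq, x0, n_particles=500):
        """Generic sequential-importance-sampling particle filter (sketch).

        transition(particles, rng)       -> propagated particle states (prediction step)
        observe_likelihood(particles, z) -> likelihood of each particle given observation z
        z_seq                            -> iterable of observations
        """
        rng = np.random.default_rng(0)
        particles = np.tile(x0, (n_particles, 1)) + rng.normal(0.0, 0.1, (n_particles, len(x0)))
        weights = np.full(n_particles, 1.0 / n_particles)
        estimates = []
        for z in z_seq:
            particles = transition(particles, rng)           # predict with the motion model
            weights *= observe_likelihood(particles, z)      # weight by the observation
            weights /= weights.sum()
            estimates.append(weights @ particles)            # weighted-mean state estimate
            n_eff = 1.0 / np.sum(weights ** 2)               # effective particle number
            if n_eff < n_particles / 2:                      # resample when degeneracy appears
                idx = rng.choice(n_particles, n_particles, p=weights)
                particles = particles[idx]
                weights = np.full(n_particles, 1.0 / n_particles)
        return np.array(estimates)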
Disclosure of Invention
In order to solve the above problems, the present invention provides an indoor positioning method based on laser and two-dimensional code fusion, which comprises the following steps:
S1, calculating the first pose h_t of the robot by acquiring laser positioning data through the laser sensor, which comprises the following steps:
calculating a predicted pose initial value of the robot;
determining each scanning angle based on the positioning scanning parameters by taking the predicted pose initial value as a center, acquiring discrete scanning data of different scanning angles, and calculating poses of the robot under different scanning angles to form all candidate poses of the predicted pose initial value;
calculating the confidence and the confidence score of each candidate pose, and selecting the candidate pose with the highest confidence score as the first pose h_t, the highest confidence score being score_max;
S2, calculating the second pose k_t of the robot by utilizing two-dimensional code indoor positioning data, which comprises the following steps:
shooting an image containing a two-dimensional code located indoors;
carrying out binarization processing on the image to obtain a binary image;
obtaining a target area of the two-dimensional code in the binary image;
determining the pose of the robot according to the target area of the two-dimensional code in the binary image, namely the second pose k_t of the robot;
S3, fusing the first pose h_t and the second pose k_t by particle filtering to obtain the final pose of the robot, which comprises:
establishing a pose prediction equation of the robot, and integrating the latest laser positioning data and two-dimensional code positioning data to establish an observation model, wherein the observation model is established as follows:
h_t = (x_ht, y_ht, γ_ht)
k_t = (x_kt, y_kt, γ_kt)
z_t = λ_1·h_t + λ_2·k_t = (x_t, y_t, γ_t)
wherein h_t is the observed value of the robot pose from the laser sensor at time t, namely the first pose; x_ht is the abscissa of the robot observed by the laser sensor at time t, y_ht is the ordinate of the robot observed by the laser sensor at time t, and γ_ht is the attitude of the robot observed by the laser sensor at time t; k_t is the observed value of the robot pose based on the two-dimensional code at time t, namely the second pose; x_kt is the abscissa of the robot observed on the basis of the two-dimensional code at time t, y_kt is the ordinate of the robot observed on the basis of the two-dimensional code at time t, and γ_kt is the attitude of the robot observed on the basis of the two-dimensional code at time t; x_t is the abscissa of the robot after integrating the laser sensor observation and the two-dimensional-code-based observation at time t, y_t is the corresponding ordinate, and γ_t is the corresponding attitude; λ_1 is the weight of the first pose, λ_2 is the weight of the second pose, λ_1 + λ_2 = 1, and λ_1 = score_max;
Generating particles;
updating the generated particle state by using a pose prediction equation;
updating the weight of the particles by using the observation model;
and calculating a state variable estimation value, wherein the final state variable estimation value is the final pose of the robot.
Further, the shooting of the image containing the two-dimensional code located at the indoor top specifically includes:
the camera that shoots the two-dimensional codes is arranged at the top of the robot, the two-dimensional codes are pasted on the ceiling at intervals of I, and the camera shoots vertically upwards; the calculation formula of I is as follows:
I=wL/f
where w is the imaging width of the lens, L is the distance from the ceiling to the camera, and f is the lens focal length.
Further, the binarizing processing on the image specifically includes:
carrying out graying processing on the image by adopting a weighted average method to obtain a grayscale image, and carrying out binarization processing on the grayscale image by adopting a maximum inter-class variance method to obtain a binary image.
Further, the obtaining of the target region of the two-dimensional code in the binary image specifically includes:
calculating the pixel size of the two-dimensional code in the binary image by using the actual size of the two-dimensional code; then scanning the binary image according to rows and columns, filtering the scanned target area according to the pixel size of the two-dimensional code, and finally obtaining the target area of the two-dimensional code in the binary image, wherein the specific steps are as follows:
the pixel size of the two-dimensional code in the image is calculated as follows:
L_0 = m·L/d, W_0 = m·W/d
wherein d is the actual physical length of the area corresponding to the binary image, m is the pixel length of the binary image, L_0 is the pixel length of the two-dimensional code target area, W_0 is the pixel width of the two-dimensional code target area, L is the actual physical length of the two-dimensional code, and W is the actual physical width of the two-dimensional code;
the binary image is scanned row by row and the positions of the left and right boundary points in the binary image are recorded, thereby determining the pixel length l of the target region formed between the left and right boundary points; when l ∈ (L_0 - 5, L_0 + 5) and the left and right boundary points are each located in the same column of the binary image, the binary image is scanned column by column, the positions of the upper and lower boundary points in the binary image are recorded, and the pixel width w of the target region (the target region formed between the boundary points found by the row-by-row and column-by-column scans) is determined; when w ∈ (W_0 - 5, W_0 + 5) and the upper and lower boundary points are each located in the same row of the image, the region is marked as the target area of the two-dimensional code.
Further, the specific calculation formula of the second pose k_t is as follows:
[equation image: k_t = (x_kt, y_kt, γ_kt) computed from the coordinates (x_q, y_q) and the displacement differences Δx and Δy]
wherein x_q and y_q respectively represent the abscissa and the ordinate of any point in the target area of the two-dimensional code; assuming that the center point of the binary image is O, the position of the camera in the two-dimensional space, that is, the position point of the robot, is O', and the center point of the two-dimensional code in the binary image is Q; since the camera shoots vertically upwards, the x value of the point O' in the map coordinate system is the x value of the image center point O, and the y value of the point O' in the map coordinate system is the y value of the binary image center point O; Δx is the displacement difference in the X-axis direction between the position point O' of the robot and the center point of the two-dimensional code in the actual space, and Δy is the displacement difference in the Y-axis direction between the position point O' of the robot and the center point of the two-dimensional code in the actual space, specifically:
Δx = [u + (v - u)/2 - m/2]·d/m
Δy = {n/2 - [r + (s - r)/2]}·d/m
wherein m is the pixel length of the binary image, n is the pixel width of the binary image, m/2 is the distance from the center point O of the binary image to its left and right sides, u is the pixel column number of the left boundary of the two-dimensional code in the binary image, v is the pixel column number of the right boundary of the two-dimensional code in the binary image, and u + (v - u)/2 is the pixel column number of the center point of the two-dimensional code in the binary image; n/2 is the distance from the center point O of the binary image to its upper and lower boundaries, r is the pixel row number of the upper boundary of the two-dimensional code, s is the pixel row number of the lower boundary of the two-dimensional code, and r + (s - r)/2 is the pixel row number of the center point of the two-dimensional code in the binary image.
Further, the establishing of the pose prediction equation of the robot is specifically as follows:
determining the initial pose P_0 = (x_0, y_0, θ_0) and the initial velocity V_0 = (v_x0, v_y0, v_γ0) of the robot from the poses of the first two frames of laser data from the laser sensor of the robot, wherein x_0 and y_0 are respectively the initial abscissa and ordinate of the robot observed by the laser sensor, θ_0 is the initial attitude of the robot observed by the laser sensor, v_x0 and v_y0 are respectively the abscissa component and the ordinate component of the initial velocity of the robot observed by the laser sensor, and v_γ0 is the attitude component of the initial velocity of the robot observed by the laser sensor; the acceleration information A_t = (a_xt, a_yt, a_γt) of the robot obtained from the newly received inertial navigation data is used to update the movement velocity of the robot, wherein a_xt is the acceleration of the robot in the x-axis direction at time t, a_yt is the acceleration of the robot in the y-axis direction at time t, and a_γt is the angular acceleration of the robot at time t; assuming that the robot velocity at time t is V_t = (v_xt, v_yt, v_γt), the pose prediction equation of the robot can be expressed as:
P_t^T = P_{t-1}^T + [(V_{t-1} + A_{t-1})·Δt]^T + u_{t-1}
wherein P_t and P_{t-1} are respectively the poses corresponding to the two consecutive frames of laser data received by the robot at time t and time t-1, V_{t-1} = (v_x(t-1), v_y(t-1), v_γ(t-1)) is the velocity of the robot at time t-1, v_x(t-1) is the velocity of the robot in the x-axis direction at time t-1, v_y(t-1) is the velocity of the robot in the y-axis direction at time t-1, v_γ(t-1) is the angular velocity of the robot at time t-1, and u_{t-1} is the system noise at time t-1.
Further, the updating the particle weight value by using the observation model specifically includes:
updating the weights of the particles by using the observation values obtained from the laser-sensor-based and two-dimensional-code-based indoor positioning; as the observation values arrive in sequence, the corresponding weight is calculated for each particle; the weight represents the probability of obtaining the observation if the predicted pose took the value of that particle; this evaluation is performed for all particles, and the closer a particle is to the observed value, the higher the weight it obtains; the weights are calculated as follows:
ω_i = 1/d_i
wherein ω_i is the weight of the i-th particle, and d_i is the Euclidean distance between the value of the i-th particle and the observation model.
Further, the calculating of the state variable estimation value specifically comprises:
calculating the estimate of the state variable as the weighted average of all particles,
x̂_k = Σ_{i=1}^{N} ω_i·x_k^(i)
where x_k^(i) is the state of the i-th particle; this is the estimated value of the pose of the robot when the k-th frame of laser data is received.
Further, the method also comprises calculating the effective particle number N_eff; when the effective particle number is less than a specified value, resampling is carried out, otherwise the method returns to the step of updating the generated particle states with the pose prediction equation for iteration; the calculation formula of the effective particle number N_eff is as follows:
N_eff = 1 / Σ_{i=1}^{N} ω_i²
wherein ω_i is the weight of the i-th particle.
Compared with the prior art, the invention has the following beneficial effects:
the invention overcomes the influence of the illumination change of the indoor environment on the positioning precision of the robot and the problem that the robot is easy to lose the positioning, greatly improves the positioning precision of the robot and effectively avoids the problem of the positioning loss of the robot.
Drawings
Fig. 1 is a flow chart of a robot indoor positioning method based on laser and two-dimensional code fusion.
Fig. 2 is a flow chart of a laser-based indoor positioning algorithm.
Fig. 3 is a schematic view of a scanning window.
FIG. 4 is a schematic diagram of a laser spot selection.
Fig. 5 is a flowchart of a robot indoor positioning method based on two-dimensional codes.
Fig. 6 is a schematic diagram of two-dimensional code positioning.
Detailed Description
The indoor positioning algorithm based on the fusion of the laser and the two-dimensional code, which is provided by the invention, is described in detail below with reference to the accompanying drawings.
The coordinate system used in the present embodiment includes a map coordinate system, a robot coordinate system, and a laser sensor coordinate system.
The robot coordinate system is a coordinate system with a robot as an origin, and in two-dimensional navigation, the origin is generally located at the center point of the robot.
The laser sensor coordinate system is a coordinate system with the central position of the laser sensor as the origin, and the laser sensor coordinate system is used for the pose of the laser data.
The comparison and calculation of the pose needs to convert data under different coordinate systems into the same coordinate system, and the conversion of the different coordinate systems can be realized through a TF module (coordinate conversion module) in the ROS system.
The indoor positioning algorithm based on the fusion of the laser and the two-dimensional code utilizes the indoor positioning result of the robot obtained by the laser and the two-dimensional code respectively, and utilizes particle filtering to fuse the laser and the two-dimensional code result, and finally obtains the pose information of the robot, wherein the specific process comprises the following steps:
laser-based robot indoor positioning
1. Calculating predicted pose of robot
As shown in FIG. 2, the pose of the robot at the current moment in the map coordinate system is obtained by using the odometer and the inertial navigation unit. Assume that the pose of the robot at time t-1 is (x_{t-1}, y_{t-1}, γ_{t-1}), that the previous time point at which the pose was calculated is t-2 with corresponding pose (x_{t-2}, y_{t-2}, γ_{t-2}), and that the next time point at which the pose is calculated is t with corresponding pose (x_t, y_t, γ_t). The time interval between time points t-2 and t-1 is Δt, and the moving speed V = (v_x, v_y, v_γ) of the robot is estimated, where v_x is the linear velocity along the x-axis, v_y is the linear velocity along the y-axis and v_γ is the angular velocity; the estimation formula is as follows:
v_x = (x_{t-1} - x_{t-2})/Δt, v_y = (y_{t-1} - y_{t-2})/Δt, v_γ = (γ_{t-1} - γ_{t-2})/Δt
v_x, v_y and v_γ are then used to calculate the predicted pose initial value (x_t, y_t, γ_t) of the robot at time point t; since the laser sensor samples uniformly, the time interval between time points t-1 and t is also Δt:
x_t = x_{t-1} + v_x·Δt, y_t = y_{t-1} + v_y·Δt, γ_t = γ_{t-1} + v_γ·Δt
Due to errors in the actual hardware, the accurate pose of the robot at time point t deviates from this initial predicted pose. Next, the predicted pose initial value (x_t, y_t, γ_t) is optimized according to the degree of coincidence between the laser measurement data and the map data corresponding to the predicted pose initial value, and finally the optimal pose closest to the accurate pose is obtained.
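A minimal sketch of this dead-reckoning prediction step (the 2D pose tuple layout and the function name are assumptions made here for illustration):

    def predict_initial_pose(pose_t2, pose_t1, dt):
        """Predict the pose at time t from the poses at t-2 and t-1 (constant-velocity assumption)."""
        x2, y2, g2 = pose_t2                                          # pose at time t-2
        x1, y1, g1 = pose_t1                                          # pose at time t-1
        vx, vy, vg = (x1 - x2) / dt, (y1 - y2) / dt, (g1 - g2) / dt   # estimated velocities
        return (x1 + vx * dt, y1 + vy * dt, g1 + vg * dt)             # predicted (x_t, y_t, gamma_t)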
2. Obtaining discrete scan data at different scan angles
The robot takes the predicted pose initial value as a center and determines different scanning angles based on the scanning parameters, and the pose at each scanning angle is taken as a candidate pose of the predicted pose initial value. The scanning parameters include a displacement scanning parameter and an angle scanning parameter. As shown in fig. 3, the displacement scanning parameter limits the displacement range of the robot during positioning scanning: in the map coordinate system, with the predicted pose initial value as the center, a deviation of L cm forward, backward, left and right forms a square with side length 2L cm whose sides are parallel or perpendicular to a coordinate axis of the map coordinate system; the side length of this square is the displacement scanning parameter. The angle scanning parameter limits the angular range of the robot during positioning scanning, namely a deviation of W degrees to each side of the predicted angle initial value γ_t of the predicted pose initial value in the map coordinate system. The poses of the robot at the different scanning angles form all candidate poses of the predicted pose initial value (x_t, y_t, γ_t). The positioning scans that constitute the candidate poses are virtual positioning scans, which simulate actual positioning scans without requiring actual movement of the robot.
According to the laser data obtained by the positioning scan of the robot, the map grid position corresponding to each laser reflection point at each scanning angle is calculated (that is, the coordinates of each map grid in the map coordinate system are calculated) and taken as the discrete scanning data of that scanning angle. For the discrete scanning data of a certain scanning angle, if multiple laser reflection points repeatedly fall on the same map grid position, only the coordinates of the map grid corresponding to one of these laser reflection points in the map coordinate system are taken; as shown in fig. 4, a gray grid indicates the situation where multiple laser reflection points fall on the same map grid, and only the coordinates of one laser reflection point in the gray grid are used for calculating the confidence in the subsequent step.
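A sketch of how the candidate poses and their discrete scanning data could be generated (the step sizes, the grid resolution and the assumption that the laser points arrive as range/bearing pairs in the sensor frame are illustrative, not taken from the patent):

    import numpy as np

    def candidate_poses(x, y, gamma, disp_range, disp_step, ang_range, ang_step):
        """Enumerate candidate poses in the square displacement window and the angle window."""
        cands = []
        for dx in np.arange(-disp_range, disp_range + 1e-9, disp_step):
            for dy in np.arange(-disp_range, disp_range + 1e-9, disp_step):
                for dg in np.arange(-ang_range, ang_range + 1e-9, ang_step):
                    cands.append((x + dx, y + dy, gamma + dg))
        return cands

    def discrete_scan_data(pose, scan_points, resolution):
        """Project laser reflection points into map grid cells, keeping one point per cell."""
        x, y, g = pose
        cells = {}
        for r, a in scan_points:                      # (range, bearing) in the sensor frame
            px = x + r * np.cos(g + a)
            py = y + r * np.sin(g + a)
            cell = (int(px / resolution), int(py / resolution))
            cells.setdefault(cell, (px, py))          # keep only one point per map grid
        return cells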
3. Calculating confidence
According to the confidence degree of each map grid corresponding to each candidate pose (the confidence degree value of the map grid is related to the map building process and is a determined value in the positioning process), calculating the confidence degree sigma of each candidate pose, wherein the formula is as follows:
σ = (1/M)·Σ_{n=1}^{M} P(x_n, y_n)
wherein M is the total number of map grids in the discrete scanning data corresponding to a certain candidate pose, the coordinates of the n-th map grid in the map coordinate system are (x_n, y_n), and P(x_n, y_n) is the confidence of that map grid, with a value range of [0, 1].
Calculating the confidence coefficient weight omega corresponding to each candidate pose according to the pose difference between each candidate pose and the predicted pose, wherein the formula is as follows:
[equation image: the confidence weight ω computed from Δx, Δy, Δr and the weights ω_xy and ω_r]
where Δx is the displacement along the x-axis between each candidate pose and the predicted pose, Δy is the displacement along the y-axis between each candidate pose and the predicted pose, ω_xy is the displacement weight, Δr is the rotation angle between the candidate pose and the predicted pose, and ω_r is the rotation angle weight; typically ω_xy and ω_r are both taken as 1, indicating that the displacement and the rotation angle are weighted equally.
Taking the product of the confidence coefficient sigma of each candidate pose and the confidence coefficient weight omega as the current confidence coefficient score of the candidate pose, wherein the calculation formula is as follows:
score=σ·ω
The pose with the highest confidence score is selected to update the predicted pose as the optimal predicted pose, that is, the first pose at time t is (x_ht, y_ht, γ_ht). The first pose is the optimal predicted pose of the robot at time t obtained on the basis of the three-dimensional laser sensor. The highest confidence score is score_max.
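A sketch of this confidence scoring and selection step; the grid-confidence lookup and the exponential penalty used for the weight ω are assumptions, since the patent gives the weight formula only as an equation image:

    import math

    def best_candidate(candidates, pred_pose, grid_conf, scan_cells, w_xy=1.0, w_r=1.0):
        """Score every candidate pose and return the best pose together with score_max.

        grid_conf[(ix, iy)] -> confidence of a map grid in [0, 1] (fixed during positioning)
        scan_cells(pose)    -> grid cells hit by the virtual positioning scan at that pose
        """
        x0, y0, g0 = pred_pose
        best, score_max = None, -1.0
        for (x, y, g) in candidates:
            cells = scan_cells((x, y, g))
            sigma = sum(grid_conf.get(c, 0.0) for c in cells) / max(len(cells), 1)  # confidence
            dx, dy, dr = x - x0, y - y0, g - g0
            w = math.exp(-(w_xy * (dx * dx + dy * dy) + w_r * dr * dr))  # assumed penalty form
            score = sigma * w                                            # score = sigma * omega
            if score > score_max:
                best, score_max = (x, y, g), score
        return best, score_max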
Robot indoor positioning algorithm based on two-dimensional code
As shown in fig. 5, the indoor positioning algorithm based on the two-dimensional code first performs binarization processing on the shot image, then scans the binary image by rows and columns to obtain the position and size of the target in the image, then marks the two-dimensional code target according to the target size, and finally determines the position of the robot according to the position of the two-dimensional code in the image. In the invention, the camera that observes the two-dimensional code is located at the top of the robot, the two-dimensional codes are pasted on the ceiling at intervals of I, and the camera shoots vertically upwards. The calculation formula of I is as follows:
I=wL/f
where w is the imaging width of the lens, L is the distance from the ceiling to the camera, and f is the lens focal length.
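For example (the numbers are illustrative assumptions, not values from the patent): with a lens imaging width w = 4.8 mm, a ceiling-to-camera distance L = 2500 mm and a focal length f = 4 mm, the spacing is I = wL/f = 4.8 × 2500 / 4 = 3000 mm, i.e. the two-dimensional codes are pasted about 3 m apart, which corresponds to the width of the ceiling area imaged by the camera.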
1. Carrying out binarization processing on the image
First, the image is grayed by a weighted average method. Graying is the process of balancing the R, G and B component values of the color image: R, G and B are given different weights, and their weighted average is taken as the gray value, i.e.
Gray = (ω_r·R + ω_g·G + ω_b·B)/3
wherein ω_r, ω_g and ω_b are the weights of R, G and B respectively; in this embodiment ω_r = 0.9, ω_g = 1.77 and ω_b = 0.33, with which the most suitable grayscale image is obtained.
Then the obtained grayscale image is binarized by the maximum between-class variance method: the grayscale image is divided into a foreground region and a background region by a certain gray value, and when the variance between the two regions is largest, the difference between the two regions is largest; that gray value is taken as the binarization threshold. The expression is as follows:
G = ω_1·ω_2·(μ_1 - μ_2)²
wherein ω_1 is the proportion of pixels in the foreground region and μ_1 is its average gray value, ω_2 is the proportion of pixels in the background region and μ_2 is its average gray value, and G is the between-class variance. Traversing the range 0-255, the gray value that maximizes G is found and used as the binarization threshold T.
The binarized image is a binary image, and the expression of the binary image is as follows:
g(a, b) = 1 if f(a, b) ≥ T, and g(a, b) = 0 if f(a, b) < T
wherein f (a, b) is an input image, g (a, b) is an output image, (a, b) represents a pixel point in the image, a represents a row of the pixel point, and b represents a column of the pixel point.
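A compact sketch of this graying and maximum between-class variance binarization step, assuming the input is an H × W × 3 RGB numpy array (the function name and the foreground-equals-1 convention are assumptions):

    import numpy as np

    def binarize(rgb):
        """Weighted-average graying followed by maximum between-class variance thresholding."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        gray = ((0.9 * r + 1.77 * g + 0.33 * b) / 3.0).astype(np.uint8)  # weights from the embodiment

        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        prob = hist / hist.sum()
        best_t, best_var = 0, -1.0
        for t in range(256):                        # traverse 0..255 for the threshold maximizing G
            w1, w2 = prob[:t + 1].sum(), prob[t + 1:].sum()
            if w1 == 0 or w2 == 0:
                continue
            mu1 = (np.arange(0, t + 1) * prob[:t + 1]).sum() / w1
            mu2 = (np.arange(t + 1, 256) * prob[t + 1:]).sum() / w2
            var = w1 * w2 * (mu1 - mu2) ** 2        # between-class variance G
            if var > best_var:
                best_var, best_t = var, t
        return (gray >= best_t).astype(np.uint8)    # binary image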
2. Extraction of two-dimensional code target
Firstly, calculating the pixel size of the two-dimensional code in a binary image by using the actual size of the two-dimensional code, then scanning the binary image according to rows and columns, and filtering the scanned target area according to the pixel size of the two-dimensional code to finally obtain the target area of the two-dimensional code. The pixel size of the two-dimensional code in the binary image is calculated as follows:
L_0 = m·L/d, W_0 = m·W/d
wherein d is the actual physical length of the area corresponding to the binary image, m is the pixel length of the binary image, L_0 is the pixel length of the two-dimensional code target area, W_0 is the pixel width of the two-dimensional code target area, L is the actual physical length of the two-dimensional code, and W is the actual physical width of the two-dimensional code;
the binary image is scanned row by row and the positions of the left and right boundary points in the binary image are recorded, thereby determining the pixel length l of the target region formed between the left and right boundary points; when l ∈ (L_0 - 5, L_0 + 5) and the left and right boundary points are each located in the same column of the binary image, the binary image is scanned column by column, the positions of the upper and lower boundary points in the binary image are recorded, and the pixel width w of the target region (the target region formed between the boundary points found by the row-by-row and column-by-column scans) is determined; when w ∈ (W_0 - 5, W_0 + 5) and the upper and lower boundary points are each located in the same row of the image, the region is marked as the target area of the two-dimensional code.
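A simplified sketch of this row-and-column scan for the two-dimensional code target area (it accepts the first region whose measured span matches the expected pixel size within the ±5 tolerance; the return convention is an assumption):

    import numpy as np

    def find_code_region(binary, L0, W0, tol=5):
        """Locate the code target area; returns (u, v, r, s): left/right columns, upper/lower rows."""
        row_spans, col_spans = [], []
        for i, row in enumerate(binary):                      # row-by-row scan
            cols = np.flatnonzero(row)
            if cols.size and (L0 - tol) < (cols[-1] - cols[0] + 1) < (L0 + tol):
                row_spans.append((cols[0], cols[-1]))         # left/right boundary columns
        for j, col in enumerate(binary.T):                    # column-by-column scan
            rows = np.flatnonzero(col)
            if rows.size and (W0 - tol) < (rows[-1] - rows[0] + 1) < (W0 + tol):
                col_spans.append((rows[0], rows[-1]))         # upper/lower boundary rows
        if not row_spans or not col_spans:
            return None                                       # no region matches the expected size
        u, v = row_spans[0]
        r, s = col_spans[0]
        return u, v, r, s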
3. Positioning a robot
As shown in fig. 6, the pixel size of the binary image corresponding to the image captured by the camera is m × n, where m is the pixel length of the binary image and n is its pixel width; the distance from the center point O of the binary image to its left and right sides is m/2, and the distance from O to its upper and lower boundaries is n/2. Scanning the binary image row by row, the pixel row number of the upper boundary of the two-dimensional code is r and that of the lower boundary is s, so the pixel row number of the point P is r + (s - r)/2; scanning the binary image column by column in the same manner, the pixel column number of the left boundary is u and that of the right boundary is v, so the pixel column number of the point P is u + (v - u)/2. Then ON = u + (v - u)/2 - m/2 and PN = n/2 - [r + (s - r)/2]; the length corresponding to ON in the actual space is ΔX, and the length corresponding to PN in the actual space is ΔY, calculated as follows:
ΔX = [u + (v - u)/2 - m/2]·d/m
ΔY = {n/2 - [r + (s - r)/2]}·d/m
The point O' is the position of the camera in the two-dimensional space, that is, the position of the robot. Since the position of the two-dimensional code in the map coordinate system is known in advance and the camera shoots vertically upwards, the x value of the point O' in the map coordinate system is the x value of the image center point O, and the y value of the point O' in the map coordinate system is the y value of the image center point O. Assuming that at time t the coordinates of the point Q are (x_q, y_q) and the pose of the point O' is O'(x_kt, y_kt, γ_kt), the calculation method is as follows:
[equation image: the second pose k_t = (x_kt, y_kt, γ_kt) computed from (x_q, y_q) and the displacement differences ΔX and ΔY]
wherein x_q and y_q respectively represent the abscissa and the ordinate of any point in the target area of the two-dimensional code; the pose (x_kt, y_kt, γ_kt) is the second pose, namely the pose of the robot at time t obtained from indoor positioning based on the two-dimensional code.
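A sketch of this positioning step, converting the pixel offsets ON and PN into physical offsets and then into a robot position; the sign convention used to combine the code's map coordinates with ΔX and ΔY is an assumption, since the patent's exact formula is only given as an equation image:

    def robot_position_from_code(u, v, r, s, m, n, d, code_xy):
        """Estimate the robot's (x, y) from the two-dimensional code's location in the image.

        u, v: left/right boundary columns; r, s: upper/lower boundary rows of the code.
        m, n: pixel length/width of the binary image; d: physical length imaged over m pixels.
        code_xy: known map coordinates (x_q, y_q) of the code.
        """
        on_px = u + (v - u) / 2.0 - m / 2.0          # pixel offset ON along the image x-axis
        pn_px = n / 2.0 - (r + (s - r) / 2.0)        # pixel offset PN along the image y-axis
        dX = on_px * d / m                           # physical offset corresponding to ON
        dY = pn_px * d / m                           # physical offset corresponding to PN
        x_q, y_q = code_xy
        return x_q - dX, y_q - dY                    # assumed sign convention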
Robot indoor positioning algorithm based on laser and two-dimensional code fusion
As shown in fig. 1, the positioning result of the laser sensor and the positioning result of the two-dimensional code are fused by using particle filtering, so as to obtain the final positioning result of the robot.
1. Establishing a pose prediction equation of the robot:
1-1. The initial pose P_0 = (x_0, y_0, θ_0) and the initial velocity V_0 = (v_x0, v_y0, v_γ0) of the robot are determined from the poses of the first two frames of laser data from the laser sensor of the robot, wherein x_0 and y_0 are respectively the initial abscissa and ordinate of the robot observed by the laser sensor, θ_0 is the initial attitude of the robot observed by the laser sensor, v_x0 and v_y0 are respectively the abscissa component and the ordinate component of the initial velocity of the robot observed by the laser sensor, and v_γ0 is the attitude component of the initial velocity of the robot observed by the laser sensor.
1-2. The latest robot acceleration information A_t = (a_xt, a_yt, a_γt) obtained from the IMU (Inertial Measurement Unit) is used to update the movement velocity of the robot, wherein a_xt is the acceleration of the robot in the x-axis direction at time t, a_yt is the acceleration of the robot in the y-axis direction at time t, and a_γt is the angular acceleration of the robot at time t. Assuming that the robot velocity at time t is V_t = (v_xt, v_yt, v_γt), the pose prediction equation of the robot can be expressed as:
P_t^T = P_{t-1}^T + [(V_{t-1} + A_{t-1})·Δt]^T + u_{t-1}
wherein P_t and P_{t-1} are respectively the poses corresponding to the two consecutive frames of laser data received by the robot at time t and time t-1, V_{t-1} = (v_x(t-1), v_y(t-1), v_γ(t-1)) is the velocity of the robot at time t-1, v_x(t-1) is the velocity of the robot in the x-axis direction at time t-1, v_y(t-1) is the velocity of the robot in the y-axis direction at time t-1, v_γ(t-1) is the angular velocity of the robot at time t-1, and u_{t-1} is the system noise at time t-1.
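As a minimal illustration of propagating particles with this prediction equation (the Gaussian system noise, its standard deviations and the (x, y, γ) state layout are assumptions):

    import numpy as np

    def predict_particles(particles, v_prev, a_prev, dt, noise_std=(0.02, 0.02, 0.01), rng=None):
        """Propagate particles with P_t = P_{t-1} + (V_{t-1} + A_{t-1}) * dt + u_{t-1}."""
        rng = rng or np.random.default_rng()
        motion = (np.asarray(v_prev) + np.asarray(a_prev)) * dt        # deterministic motion term
        noise = rng.normal(0.0, noise_std, size=particles.shape)       # system noise u_{t-1}
        return particles + motion + noise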
Reasonably integrating the latest laser positioning data and the latest two-dimensional code positioning data, and establishing an observation model:
h_t = (x_ht, y_ht, γ_ht)
k_t = (x_kt, y_kt, γ_kt)
z_t = λ_1·h_t + λ_2·k_t = (x_t, y_t, γ_t)
wherein h_t is the observed value of the robot pose from the laser sensor at time t, namely the first pose; x_ht is the abscissa of the robot observed by the laser sensor at time t, y_ht is the ordinate of the robot observed by the laser sensor at time t, and γ_ht is the attitude of the robot observed by the laser sensor at time t; k_t is the observed value of the robot pose based on the two-dimensional code at time t, namely the second pose; x_kt is the abscissa of the robot observed on the basis of the two-dimensional code at time t, y_kt is the ordinate of the robot observed on the basis of the two-dimensional code at time t, and γ_kt is the attitude of the robot observed on the basis of the two-dimensional code at time t; x_t is the abscissa of the robot after integrating the laser sensor observation and the two-dimensional-code-based observation at time t, y_t is the corresponding ordinate, and γ_t is the corresponding attitude; λ_1 is the weight of the first pose, λ_2 is the weight of the second pose, λ_1 + λ_2 = 1, and λ_1 = score_max.
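A small sketch of this weighted fusion of the two observations; it assumes score_max has already been scaled to [0, 1] so that it can serve directly as λ_1, with λ_2 = 1 - score_max following from λ_1 + λ_2 = 1:

    def fuse_observations(h_t, k_t, score_max):
        """Fuse the laser pose h_t and the two-dimensional-code pose k_t into z_t."""
        lam1 = score_max                 # lambda_1 = score_max
        lam2 = 1.0 - score_max           # lambda_2 = 1 - lambda_1
        return tuple(lam1 * h + lam2 * k for h, k in zip(h_t, k_t))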
2. Generating particles: according to the predetermined initial pose P_0 = (x_0, y_0, θ_0) of the robot, N uniformly distributed particles are randomly generated within the motion range of the robot at the initial moment, each particle having the three features (x, y, θ).
3. Updating the particle states using the robot motion model: the state of each particle generated in step 2 is updated according to the pose prediction equation of the robot obtained in step 1.
4. Updating the weights of the particles: the weights of the particles are updated using the observation values obtained from the laser-sensor-based and two-dimensional-code-based indoor positioning; as the observation values arrive in sequence, the corresponding weight is calculated for each particle. The weight represents the probability of obtaining the observation if the predicted pose took the value of that particle; this evaluation is performed for all particles, and the closer a particle is to the observed value, the higher the weight it obtains. The weights are calculated as follows:
ω_i = 1/d_i
wherein ω_i is the weight of the i-th particle, and d_i is the Euclidean distance between the value of the i-th particle and the observation model.
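A sketch of this weight update, using the Euclidean distance between each particle and the fused observation z_t; the normalization at the end is added here so the weights can be used directly as probabilities:

    import numpy as np

    def update_weights(particles, z_t):
        """Weight each particle by the inverse Euclidean distance to the observation z_t."""
        d = np.linalg.norm(particles - np.asarray(z_t), axis=1)   # distance of each particle to z_t
        w = 1.0 / np.maximum(d, 1e-9)                             # omega_i = 1 / d_i (guard d = 0)
        return w / w.sum()                                        # normalize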
5. Calculating the state variable estimate: the estimate of the state variable is calculated as the weighted average of all particles,
x̂_k = Σ_{i=1}^{N} ω_i·x_k^(i)
where x_k^(i) is the state of the i-th particle; this is the estimated value of the pose of the robot when the k-th frame of laser data is received.
6. Resampling: in order to solve the problem of weight degradation in the calculation process, the effective particle number N_eff is used to measure the degree of particle degradation:
N_eff = 1 / Σ_{i=1}^{N} ω_i²
wherein ω_i is the weight of the i-th particle; the smaller the effective particle number N_eff, the more severe the weight degradation. When N_eff is smaller than a threshold N_th, resampling is carried out; otherwise, the method returns to step 3. The specific resampling method is to screen the particles according to their weights: a large number of particles with large weights are retained, a small number of particles with small weights are discarded, and the discarded particles are replaced by copies of the particles with large weights. Steps 3-6 are repeated until the robot stops moving.
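A sketch of this degeneracy check and resampling step (multinomial resampling proportional to the weights; the threshold N_th is left as a parameter):

    import numpy as np

    def resample_if_degenerate(particles, weights, n_th, rng=None):
        """Resample (copy high-weight particles, drop low-weight ones) when N_eff < N_th."""
        rng = rng or np.random.default_rng()
        n_eff = 1.0 / np.sum(weights ** 2)            # effective particle number
        if n_eff >= n_th:
            return particles, weights                 # no severe degeneracy: return to step 3
        n = len(weights)
        idx = rng.choice(n, size=n, p=weights)        # draw indices proportionally to the weights
        return particles[idx], np.full(n, 1.0 / n)    # uniform weights after resampling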
The invention overcomes the influence of the illumination change of the indoor environment on the positioning precision of the robot and the problem that the robot is easy to lose the positioning, greatly improves the positioning precision of the robot and effectively avoids the problem of the positioning loss of the robot.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. An indoor positioning method based on laser and two-dimensional code fusion is characterized by comprising the following steps:
S1, calculating the first pose h_t of the robot by acquiring laser positioning data through the laser sensor, which comprises the following steps:
calculating a predicted pose initial value of the robot;
determining each scanning angle based on the positioning scanning parameters by taking the predicted pose initial value as a center, acquiring discrete scanning data of different scanning angles, and calculating poses of the robot under different scanning angles to form all candidate poses of the predicted pose initial value;
calculating the confidence and the confidence score of each candidate pose, and selecting the candidate pose with the highest confidence score as the first pose h_t, the highest confidence score being score_max;
S2, calculating the second pose k_t of the robot by utilizing two-dimensional code indoor positioning data, which comprises the following steps:
shooting an image containing a two-dimensional code located indoors;
carrying out binarization processing on the image to obtain a binary image;
obtaining a target area of the two-dimensional code in the binary image;
determining the pose of the robot according to the target area of the two-dimensional code in the binary image, namely the second pose k_t of the robot;
S3, fusing the first pose h_t and the second pose k_t by particle filtering to obtain the final pose z_t of the robot, which comprises: establishing a pose prediction equation of the robot, and integrating the latest laser positioning data and two-dimensional code positioning data to establish an observation model, wherein the observation model is established as follows:
h_t = (x_ht, y_ht, γ_ht)
k_t = (x_kt, y_kt, γ_kt)
z_t = λ_1·h_t + λ_2·k_t = (x_t, y_t, γ_t)
wherein h_t is the observed value of the robot pose from the laser sensor at time t, namely the first pose; x_ht is the abscissa of the robot observed by the laser sensor at time t, y_ht is the ordinate of the robot observed by the laser sensor at time t, and γ_ht is the attitude of the robot observed by the laser sensor at time t; k_t is the observed value of the robot pose based on the two-dimensional code at time t, namely the second pose; x_kt is the abscissa of the robot observed on the basis of the two-dimensional code at time t, y_kt is the ordinate of the robot observed on the basis of the two-dimensional code at time t, and γ_kt is the attitude of the robot observed on the basis of the two-dimensional code at time t; x_t is the abscissa of the robot after integrating the laser sensor observation and the two-dimensional-code-based observation at time t, y_t is the corresponding ordinate, and γ_t is the corresponding attitude; λ_1 is the weight of the first pose, λ_2 is the weight of the second pose, λ_1 + λ_2 = 1, and λ_1 = score_max;
Generating particles;
updating the generated particle state by using a pose prediction equation;
updating the weight of the particles by using the observation model;
and calculating a state variable estimation value, wherein the final state variable estimation value is the final pose of the robot.
2. The indoor positioning method based on the fusion of the laser and the two-dimensional code as claimed in claim 1, wherein the capturing the image containing the two-dimensional code located at the indoor top specifically includes:
the camera that shoots the two-dimensional codes is arranged at the top of the robot, the two-dimensional codes are pasted on the ceiling at intervals of I, and the camera shoots vertically upwards; the calculation formula of I is as follows:
I=wL/f
where w is the imaging width of the lens, L is the distance from the ceiling to the camera, and f is the lens focal length.
3. The indoor positioning method based on the fusion of the laser and the two-dimensional code according to claim 2, wherein the binarization processing of the image specifically comprises:
carrying out graying processing on the image by adopting a weighted average method to obtain a grayscale image, and carrying out binarization processing on the grayscale image by adopting a maximum inter-class variance method to obtain a binary image.
4. The indoor positioning method based on laser and two-dimensional code fusion as claimed in claim 3, wherein the obtaining of the target area of the two-dimensional code in the binary image is specifically:
calculating the pixel size of the two-dimensional code in the binary image by using the actual size of the two-dimensional code; then scanning the binary image according to rows and columns, filtering the scanned target area according to the pixel size of the two-dimensional code, and finally obtaining the target area of the two-dimensional code in the binary image, wherein the specific steps are as follows:
the pixel size of the two-dimensional code in the image is calculated as follows:
L_0 = m·L/d, W_0 = m·W/d
wherein d is the actual physical length of the area corresponding to the binary image, m is the pixel length of the binary image, L_0 is the pixel length of the two-dimensional code target area, W_0 is the pixel width of the two-dimensional code target area, L is the actual physical length of the two-dimensional code, and W is the actual physical width of the two-dimensional code;
the binary image is scanned row by row and the positions of the left and right boundary points in the binary image are recorded, thereby determining the pixel length l of the target region formed between the left and right boundary points; when l ∈ (L_0 - 5, L_0 + 5) and the left and right boundary points are each located in the same column of the binary image, the binary image is scanned column by column, the positions of the upper and lower boundary points in the binary image are recorded, and the pixel width w of the target region (the target region formed between the boundary points found by the row-by-row and column-by-column scans) is determined; when w ∈ (W_0 - 5, W_0 + 5) and the upper and lower boundary points are each located in the same row of the image, the region is marked as the target area of the two-dimensional code.
5. The indoor positioning method based on the fusion of the laser and the two-dimensional code as claimed in claim 4, wherein the specific calculation formula of the second pose k_t is as follows:
[equation image: k_t = (x_kt, y_kt, γ_kt) computed from the coordinates (x_q, y_q) and the displacement differences Δx and Δy]
wherein x_q and y_q respectively represent the abscissa and the ordinate of any point in the target area of the two-dimensional code; assuming that the center point of the binary image is O, the position of the camera in the two-dimensional space, that is, the position point of the robot, is O', and the center point of the two-dimensional code in the binary image is Q; since the camera shoots vertically upwards, the x value of the point O' in the map coordinate system is the x value of the image center point O, and the y value of the point O' in the map coordinate system is the y value of the binary image center point O; Δx is the displacement difference in the X-axis direction between the position point O' of the robot and the center point of the two-dimensional code in the actual space, and Δy is the displacement difference in the Y-axis direction between the position point O' of the robot and the center point of the two-dimensional code in the actual space, specifically:
Δx = [u + (v - u)/2 - m/2]·d/m
Δy = {n/2 - [r + (s - r)/2]}·d/m
wherein m is the pixel length of the binary image, n is the pixel width of the binary image, m/2 is the distance from the center point O of the binary image to its left and right sides, u is the pixel column number of the left boundary of the two-dimensional code in the binary image, v is the pixel column number of the right boundary of the two-dimensional code in the binary image, and u + (v - u)/2 is the pixel column number of the center point of the two-dimensional code in the binary image; n/2 is the distance from the center point O of the binary image to its upper and lower boundaries, r is the pixel row number of the upper boundary of the two-dimensional code, s is the pixel row number of the lower boundary of the two-dimensional code, and r + (s - r)/2 is the pixel row number of the center point of the two-dimensional code in the binary image.
6. The indoor positioning method based on the fusion of the laser and the two-dimensional code according to claim 5, wherein the establishing of the pose prediction equation of the robot is specifically as follows:
determining the initial pose P_0 = (x_0, y_0, θ_0) and the initial velocity V_0 = (v_x0, v_y0, v_γ0) of the robot from the poses of the first two frames of laser data from the laser sensor of the robot, wherein x_0 and y_0 are respectively the initial abscissa and ordinate of the robot observed by the laser sensor, θ_0 is the initial attitude of the robot observed by the laser sensor, v_x0 and v_y0 are respectively the abscissa component and the ordinate component of the initial velocity of the robot observed by the laser sensor, and v_γ0 is the attitude component of the initial velocity of the robot observed by the laser sensor; the acceleration information A_t = (a_xt, a_yt, a_γt) of the robot obtained from the newly received inertial navigation data is used to update the movement velocity of the robot, wherein a_xt is the acceleration of the robot in the x-axis direction at time t, a_yt is the acceleration of the robot in the y-axis direction at time t, and a_γt is the angular acceleration of the robot at time t; assuming that the robot velocity at time t is V_t = (v_xt, v_yt, v_γt), the pose prediction equation of the robot can be expressed as:
P_t^T = P_{t-1}^T + [(V_{t-1} + A_{t-1})·Δt]^T + u_{t-1}
wherein P_t and P_{t-1} are respectively the poses corresponding to the two consecutive frames of laser data received by the robot at time t and time t-1, V_{t-1} = (v_x(t-1), v_y(t-1), v_γ(t-1)) is the velocity of the robot at time t-1, v_x(t-1) is the velocity of the robot in the x-axis direction at time t-1, v_y(t-1) is the velocity of the robot in the y-axis direction at time t-1, v_γ(t-1) is the angular velocity of the robot at time t-1, and u_{t-1} is the system noise at time t-1.
7. The indoor positioning method based on the fusion of the laser and the two-dimensional code according to claim 6, wherein the updating of the particle weight value by using the observation model specifically comprises:
updating the weights of the particles by using the observation values obtained from the laser-sensor-based and two-dimensional-code-based indoor positioning; as the observation values arrive in sequence, the corresponding weight is calculated for each particle; the weight represents the probability of obtaining the observation if the predicted pose took the value of that particle; this evaluation is performed for all particles, and the closer a particle is to the observed value, the higher the weight it obtains; the weights are calculated as follows:
ω_i = 1/d_i
wherein ω_i is the weight of the i-th particle, and d_i is the Euclidean distance between the value of the i-th particle and the observation model.
8. The indoor positioning method based on the fusion of the laser and the two-dimensional code according to claim 7, wherein the calculating of the state variable estimation value specifically comprises:
calculating the estimate of the state variable as the weighted average of all particles,
x̂_k = Σ_{i=1}^{N} ω_i·x_k^(i)
where x_k^(i) is the state of the i-th particle; this is the estimated value of the pose of the robot when the k-th frame of laser data is received.
9. The indoor positioning method based on the fusion of the laser and the two-dimensional code as claimed in claim 8, further comprising calculating the effective particle number N_eff; when the effective particle number is less than a specified value, resampling is carried out, otherwise the method returns to the step of updating the generated particle states with the pose prediction equation for iteration; the calculation formula of the effective particle number N_eff is as follows:
N_eff = 1 / Σ_{i=1}^{N} ω_i²
wherein ω_i is the weight of the i-th particle.
CN201911257092.0A 2019-12-10 2019-12-10 Indoor positioning method based on laser and two-dimensional code fusion Pending CN111337011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911257092.0A CN111337011A (en) 2019-12-10 2019-12-10 Indoor positioning method based on laser and two-dimensional code fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911257092.0A CN111337011A (en) 2019-12-10 2019-12-10 Indoor positioning method based on laser and two-dimensional code fusion

Publications (1)

Publication Number Publication Date
CN111337011A true CN111337011A (en) 2020-06-26

Family

ID=71185741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911257092.0A Pending CN111337011A (en) 2019-12-10 2019-12-10 Indoor positioning method based on laser and two-dimensional code fusion

Country Status (1)

Country Link
CN (1) CN111337011A (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424457A (en) * 2013-08-20 2015-03-18 复旦大学 Method for identifying two-dimensional code under the condition of nonlinear distortion
CN104197899A (en) * 2014-09-24 2014-12-10 中国科学院宁波材料技术与工程研究所 Mobile robot location method and system
CN104933387A (en) * 2015-06-24 2015-09-23 上海快仓智能科技有限公司 Rapid positioning and identifying method based on two-dimensional code decoding
CN105184208A (en) * 2015-09-02 2015-12-23 福建联迪商用设备有限公司 Two-dimension code preliminary positioning method and system
CN105701434A (en) * 2015-12-30 2016-06-22 广州卓德信息科技有限公司 Image correction method for two-dimensional code distorted image
CN105651286A (en) * 2016-02-26 2016-06-08 中国科学院宁波材料技术与工程研究所 Visual navigation method and system of mobile robot as well as warehouse system
CN106485183A (en) * 2016-07-14 2017-03-08 深圳市华汉伟业科技有限公司 A kind of Quick Response Code localization method and system
CN106123908A (en) * 2016-09-08 2016-11-16 北京京东尚科信息技术有限公司 Automobile navigation method and system
CN106323294A (en) * 2016-11-04 2017-01-11 新疆大学 Positioning method and device for patrol robot of transformer substation
EP3438698A1 (en) * 2017-08-01 2019-02-06 Ford Global Technologies, LLC Method for operating a motor vehicle with a lidar sensor
CN107451508A (en) * 2017-09-20 2017-12-08 天津通信广播集团有限公司 A kind of self-defined Quick Response Code position and azimuth determining system and implementation method
CN108225303A (en) * 2018-01-18 2018-06-29 水岩智能科技(宁波)有限公司 Two-dimensional code positioning label, and positioning navigation system and method based on two-dimensional code
CN108458715A (en) * 2018-01-18 2018-08-28 亿嘉和科技股份有限公司 A kind of robot localization initial method based on laser map
CN108253958A (en) * 2018-01-18 2018-07-06 亿嘉和科技股份有限公司 A kind of robot real-time location method under sparse environment
CN110319834A (en) * 2018-03-30 2019-10-11 深圳市神州云海智能科技有限公司 A kind of method and robot of Indoor Robot positioning
CN109029418A (en) * 2018-05-29 2018-12-18 威马智慧出行科技(上海)有限公司 A method of vehicle is positioned in closed area
CN109002046A (en) * 2018-09-21 2018-12-14 中国石油大学(北京) A kind of Navigation System for Mobile Robot and air navigation aid
CN109211251A (en) * 2018-09-21 2019-01-15 北京理工大学 A kind of instant positioning and map constructing method based on laser and two dimensional code fusion
CN109579824A (en) * 2018-10-31 2019-04-05 重庆邮电大学 A kind of adaptive Kano Meng Te localization method incorporating two-dimensional barcode information
CN109753838A (en) * 2018-12-12 2019-05-14 深圳市三宝创新智能有限公司 Two-dimensional code identification method, device, computer equipment and storage medium
CN109443351A (en) * 2019-01-02 2019-03-08 亿嘉和科技股份有限公司 A kind of robot three-dimensional laser positioning method under sparse environment
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method
CN110309687A (en) * 2019-07-05 2019-10-08 华中科技大学 A kind of bearing calibration of image in 2 D code and means for correcting

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111947647A (en) * 2020-08-26 2020-11-17 四川阿泰因机器人智能装备有限公司 Robot accurate positioning method integrating vision and laser radar
CN111947647B (en) * 2020-08-26 2024-05-17 四川阿泰因机器人智能装备有限公司 Precise positioning method for vision and laser radar integrated robot
CN113405544A (en) * 2021-05-08 2021-09-17 中电海康集团有限公司 Mapping and positioning method and system for mobile robot
CN113405544B (en) * 2021-05-08 2024-02-09 中电海康集团有限公司 Mobile robot map building and positioning method and system
CN113432533A (en) * 2021-06-18 2021-09-24 北京盈迪曼德科技有限公司 Robot positioning method and device, robot and storage medium
CN113432533B (en) * 2021-06-18 2023-08-15 北京盈迪曼德科技有限公司 Robot positioning method and device, robot and storage medium
WO2024001339A1 (en) * 2022-07-01 2024-01-04 华为云计算技术有限公司 Pose determination method and apparatus, and computing device
CN117824667A (en) * 2024-03-06 2024-04-05 成都睿芯行科技有限公司 Fusion positioning method and medium based on two-dimensional code and laser
CN117824666A (en) * 2024-03-06 2024-04-05 成都睿芯行科技有限公司 Two-dimensional code pair for fusion positioning, two-dimensional code calibration method and fusion positioning method
CN117824667B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Fusion positioning method and medium based on two-dimensional code and laser
CN117824666B (en) * 2024-03-06 2024-05-10 成都睿芯行科技有限公司 Two-dimensional code pair for fusion positioning, two-dimensional code calibration method and fusion positioning method

Similar Documents

Publication Publication Date Title
CN110221603B (en) Remote obstacle detection method based on laser radar multi-frame point cloud fusion
CN111337011A (en) Indoor positioning method based on laser and two-dimensional code fusion
CN109345588B (en) Tag-based six-degree-of-freedom attitude estimation method
Veľas et al. Calibration of rgb camera with velodyne lidar
JP3880702B2 (en) Optical flow detection apparatus for image and self-position recognition system for moving object
CN105700525B (en) Method is built based on Kinect sensor depth map robot working environment uncertainty map
CN110825101A (en) Unmanned aerial vehicle autonomous landing method based on deep convolutional neural network
JPH1151650A (en) Three-dimensional self-position recognizing device for mobile body
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN108074251A (en) Mobile Robotics Navigation control method based on monocular vision
CN106022266A (en) Target tracking method and target tracking apparatus
Nagy et al. SFM and semantic information based online targetless camera-LIDAR self-calibration
Wang et al. Autonomous landing of multi-rotors UAV with monocular gimbaled camera on moving vehicle
CN116879870A (en) Dynamic obstacle removing method suitable for low-wire-harness 3D laser radar
CN116619358A (en) Self-adaptive positioning optimization and mapping method for autonomous mining robot
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
Zhou et al. Comparative analysis of SLAM algorithms for mechanical LiDAR and solid-state LiDAR
CN111160280B (en) RGBD camera-based target object identification and positioning method and mobile robot
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN107093187A (en) The measuring method and device of a kind of unmanned plane during flying speed
CN111553342B (en) Visual positioning method, visual positioning device, computer equipment and storage medium
CN111402324A (en) Target measuring method, electronic equipment and computer storage medium
CN117330052A (en) Positioning and mapping method and system based on infrared vision, millimeter wave radar and IMU fusion
RU2583756C2 (en) Method of signature-based positioning of urban area images in visible and ir bands
CN116045965A (en) Multi-sensor-integrated environment map construction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200626