CN112629520A - Robot navigation and positioning method, system, equipment and storage medium - Google Patents

Robot navigation and positioning method, system, equipment and storage medium

Info

Publication number
CN112629520A
Authority
CN
China
Prior art keywords
robot
map
position information
navigation
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011344552.6A
Other languages
Chinese (zh)
Inventor
石碰 (Shi Peng)
杜佳佳 (Du Jiajia)
周成成 (Zhou Chengcheng)
蒋涛 (Jiang Tao)
左昉 (Zuo Fang)
林福宏 (Lin Fuhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiguang Tongda Technology Co ltd
Original Assignee
Beijing Jiguang Tongda Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiguang Tongda Technology Co ltd filed Critical Beijing Jiguang Tongda Technology Co ltd
Priority to CN202011344552.6A priority Critical patent/CN112629520A/en
Publication of CN112629520A publication Critical patent/CN112629520A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application relates to a robot navigation and positioning method, system, equipment and storage medium, in the technical field of robot navigation. The method uses RFID positioning technology to obtain the absolute position information of the robot and visual positioning to obtain the relative position information, then fuses these two different information sources so that their strengths complement each other, achieving high-precision positioning of the robot. Images of the environment around the robot are acquired and, combined with the robot's accurate position information, used to obtain a navigation map and an obstacle map; fusing the navigation map and the obstacle map yields a terrain map marked with obstacle height information. The obstacle height information in the terrain map is compared with a preset obstacle threshold to output a control instruction, so that the robot achieves accurate navigation and effective obstacle avoidance through the terrain map, with high accuracy, good robustness, and easy implementation.

Description

Robot navigation and positioning method, system, equipment and storage medium
Technical Field
The present application relates to the field of robot navigation technologies, and in particular, to a method, a system, a device, and a storage medium for robot navigation and positioning.
Background
With the rapid development of robotics, more and more intelligent robots appear in daily life. In recent years, intelligent robots have become a representative strategic target in high-technology fields. The emergence and development of robot technology has not only fundamentally changed traditional industrial production, but also profoundly influenced human social life.
Autonomous positioning and navigation is one of the prerequisites for robot intelligence and a key factor in giving robots the ability to perceive and act. When the working environment changes during task execution, the robot needs to determine its own position in the changed environment so that it can carry out its task accurately. However, because living environments are complex, change constantly, and are subject to outside influence and interference, while the demand for location services is urgent, accurate robot navigation and positioning remains a major difficulty.
Disclosure of Invention
In order to facilitate accurate navigation and positioning of a mobile robot, the application provides a robot navigation and positioning method, system, device and storage medium.
In a first aspect, the following technical solution is adopted in a robot navigation and positioning method provided by the present application.
A robot navigation and positioning method, comprising:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
By adopting this technical scheme, the absolute position information and the relative position information of the robot are acquired, and these two different information sources are fused so that their strengths complement each other, achieving high-precision positioning of the robot. A terrain map is obtained from the navigation map and the obstacle map, with the height of every obstacle marked in it; during navigation, the obstacle height information in the terrain map is compared with a preset obstacle threshold to decide whether the robot crosses over or avoids an obstacle, and a control instruction is output to achieve accurate navigation of the robot.
Optionally, the process of acquiring the absolute position information of the robot is as follows:
establishing an initial particle set, and setting an initial pose of each particle, wherein one particle pose corresponds to one robot pose;
predicting the pose of each particle at time t from its pose at time t-1;
constructing a relation between each particle's total weight at time t and its readability weight and phase difference weight, and obtaining the total weight of each particle at time t;
and calculating the pose of the robot at time t from the total weight and pose of each particle at time t, so as to obtain the absolute position information of the robot.
By adopting this technical scheme, the RFID technique combines the highly sensitive phase difference information with tag readability and, using a particle filter positioning algorithm with a phase-difference-based observation model, achieves absolute positioning of the robot, obtaining its absolute position information and solving the technical problem of high-precision robot positioning under sparse tag distribution.
Optionally, the process of acquiring the relative position information of the robot is as follows:
acquiring a target object image and an expected target object image, and extracting, through an active contour, the edge contour of the target object acquired in real time and the edge contour of the expected target object;
controlling the motion of the robot in real time according to the difference between the expected target object edge contour and the real-time target object edge contour until the difference is minimized;
and calculating the relation between the motion of the target object image edges and the motion of the image acquisition device from the movement speed of the robot, so as to obtain the relative position information of the robot.
By adopting this technical scheme, the robot is controlled, based on active contour analysis, to move to the correct position; the relative position information of the robot with respect to the target object is obtained from the robot's motion, which can then be conveniently combined with the robot's absolute position information to obtain its accurate position information.
Optionally, the acquiring an environmental image around the robot and obtaining a navigation map and an obstacle map by combining the accurate position information of the robot specifically include:
acquiring an environment image of the foot of the robot to obtain a narrow baseline image, acquiring an environment image except the foot of the robot to obtain a wide baseline image, matching the wide baseline image and the narrow baseline image, and acquiring a navigation map taking the current accurate position information of the robot as a navigation starting point according to a matching result and by combining the accurate position information of the robot;
acquiring a panoramic image, and scanning obstacles in the surrounding environment through a laser radar to obtain a scanning map containing a plurality of obstacles; and measuring the obstacles in the surrounding environment to obtain obstacle height information, and marking the obstacle height information into a scanning map to obtain an obstacle map.
By adopting this technical scheme, the image of the environment around the robot's feet and the image of the environment beyond its feet are obtained separately and then matched, producing a navigation map that takes the robot's current accurate position as the navigation starting point. The navigation map clearly shows the robot's overall working environment. Scanning and measuring the panorama and the obstacles yields an obstacle map marked with obstacle height information, which displays every obstacle in the robot's overall working environment together with its height.
Optionally, the navigation map and the obstacle map are fused to obtain a terrain map marked with obstacle height information, and the specific process is as follows:
extracting feature points of the navigation map and the obstacle map, and then matching the extracted feature points to obtain matching point pairs;
acquiring a rotation matrix, a scale variable and a translation vector between the navigation map and the obstacle map according to the matching point pairs by using an affine transformation method, and calculating a corresponding affine transformation matrix;
and fusing the navigation map and the obstacle map by using the rotation matrix, the scale variable, the translation vector and the affine transformation matrix so as to obtain the terrain map marked with the height information of the obstacle.
By adopting this technical scheme, the navigation map and the obstacle map are fused; the resulting terrain map displays the robot's entire working environment together with the height of every obstacle in it, making it convenient to subsequently compare the obstacle height information in the terrain map with the preset obstacle threshold and output a control command to control the movement of the robot.
In a second aspect, the following technical solution is adopted in the robot navigation and positioning system provided by the present application.
A robotic navigation and positioning system, comprising:
the positioning module is used for acquiring absolute position information and relative position information of the robot;
the data fusion module is used for fusing the absolute position information and the relative position information of the robot to obtain the accurate position information of the robot;
the environment perception module is used for acquiring a working environment image of the robot and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
the map fusion module is used for fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
the obstacle avoidance navigation module is used for determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it;
the controller is used for outputting a control instruction according to the judgment result of the obstacle avoidance navigation module; and
the driver is used for driving the robot to move according to the control instruction output by the controller.
By adopting this technical scheme, the absolute position information and the relative position information of the robot acquired by the positioning module are combined so that their strengths complement each other, achieving high-precision positioning. The environment perception module acquires the navigation map and the obstacle map, which are fused into a terrain map marked with obstacle height information. The obstacle height information in the terrain map is compared with a preset obstacle threshold to determine whether the robot crosses over or steers to avoid an obstacle, thereby controlling the movement of the robot.
Optionally, the positioning module includes:
the RFID submodule is used for acquiring absolute position information of the robot by reading the RFID tag;
and the visual positioning submodule is used for acquiring the relative position information of the robot.
By adopting the technical scheme, the RFID positioning technology is an absolute positioning mode, accurate coordinate information of a three-dimensional space can be obtained, and the visual positioning is a relative positioning mode, so that accurate relative position information can be obtained. Therefore, the data of two different information sources are fused together, and the complementary advantages can be realized.
Optionally, the environment sensing module includes:
the narrow baseline image acquisition submodule is used for acquiring, in real time, a narrow baseline image of the environment at the robot's feet;
the wide baseline image acquisition submodule is used for acquiring, in real time, a wide baseline image of the environment beyond the robot's feet;
the image fusion submodule is used for matching the wide baseline image with the narrow baseline image and, according to the matching result and in combination with the robot's accurate position information, obtaining a navigation map that takes the accurate position information as the navigation starting point;
the panoramic scanning sub-module is used for acquiring a panoramic image;
the obstacle scanning submodule is used for scanning the obstacles in the robot working environment and their height information; and
the obstacle marking submodule is used for marking the obstacles and their height information into the panoramic image to obtain an obstacle map.
By adopting this technical scheme, the narrow baseline image acquisition submodule and the wide baseline image acquisition submodule respectively acquire images of the environment at the robot's feet and of the environment beyond its feet, and the image fusion submodule produces the navigation map. The panoramic scanning submodule and the obstacle scanning submodule acquire the panoramic image and the height information of all obstacles, so that the robot's entire working environment and the heights of all obstacles within it can be displayed.
In a third aspect, the following technical solution is adopted in the apparatus for robot navigation and positioning provided by the present application.
A computer device comprising a memory and a processor, the memory having stored thereon a computer program that is loadable and executable by the processor, the processor when executing the computer program implementing:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
By adopting the technical scheme, the processor in the computer equipment can realize the robot navigation and positioning method according to the related computer program stored in the memory, thereby realizing the accurate navigation and positioning of the robot.
In a fourth aspect, the storage medium for robot navigation and positioning provided by the present application adopts the following technical solution.
A computer-readable storage medium having a computer program stored thereon, the computer program when executed by a processor performing:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
By adopting the technical scheme, the robot navigation and positioning method can be stored in a computer readable storage medium, so that a computer program of the robot navigation and positioning method stored in the computer readable storage medium can be executed by a processor, and accurate navigation and positioning of the robot can be realized.
To sum up, the application comprises the following beneficial technical effects:
1. The RFID positioning technology is an absolute positioning mode that can obtain coordinate information in three-dimensional space, while visual positioning is a relative positioning mode. By integrating RFID positioning with vision, the application fuses the data of two different information sources, absolute positioning information and relative positioning information, so that their strengths complement each other, achieving high-precision positioning of the robot.
2. A terrain map is built by processing the outputs of a wide baseline stereo camera, a narrow baseline stereo camera, and a panoramic radar; during navigation, the obstacle height information in the terrain map is compared with a preset obstacle threshold, and a control instruction is output.
Drawings
Fig. 1 is a flowchart of a robot navigation and positioning method according to an embodiment of the present disclosure.
FIG. 2 is a schematic diagram of the robot navigation and positioning logic according to an embodiment of the present application.
Fig. 3 is a flowchart of acquiring absolute position information of a robot according to an embodiment of the present disclosure.
FIG. 4 is a block diagram of a robot navigation and positioning system according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments.
This embodiment is only intended to explain the present application and does not limit it. After reading this specification, those skilled in the art may make modifications to this embodiment without inventive contribution as needed, and all such modifications are protected by patent law within the scope of the claims of the present application.
The embodiment of the application discloses a robot navigation and positioning method, as shown in fig. 1 and 2, the method comprises the following steps.
S100, acquiring the absolute position information and relative position information of the robot.
(1) Acquiring the absolute position information of the robot using RFID positioning technology.
A reader-writer and a reader-writer antenna are installed on the robot, and passive ultra-high-frequency RFID tags are installed on the ground as reference tags. The EPCs of all reference tags and their positions in the corresponding world coordinate system are stored in the robot as known information; as the robot moves continuously, it controls the RFID system to keep reading the reference tags and performs positioning using the readability and phase information of the reference tags extracted from the RFID system.
As shown in fig. 3, the method specifically includes the steps of:
s101, establishing an initial particle set, wherein particles correspond to the robot, and one particle pose corresponds to one robot pose.
Setting the initial pose of the particles as shown in formula (1):

$$X_0^i = \left( x_{T_j} + (2R_1 - 1)\,r,\;\; y_{T_j} + (2R_2 - 1)\,r,\;\; 2\pi R_3 \right) \qquad (1)$$

where $X_0^i$ is the initial pose of the $i$-th particle in the particle set at the initial time; $R_1$, $R_2$, $R_3$ are random numbers between 0 and 1; $T_j \in \{T_1, T_2, \ldots, T_m\}$ is a reference tag read at the initial time; $x_{T_j}$ and $y_{T_j}$ are the abscissa and ordinate of that reference tag; and $r$ is the maximum read range of the reference tag.
S102, for each particle, predicting its pose at time t from its pose at time t-1 using dead reckoning.
The key to dead reckoning (DR) positioning is to measure the distance traveled by the robot per unit time interval and the change of the robot's heading over that interval. The turn rate and the acceleration are measured with a gyroscope and an accelerometer respectively, the measurements are integrated to solve for the robot's traveled distance and heading change, and the pose of the robot is obtained by the dead reckoning algorithm.
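As an illustration of this prediction step, the following Python sketch propagates one particle by dead reckoning under a unicycle motion model; the function name and the assumption that the per-interval speed v and turn rate omega have already been integrated from the accelerometer and gyroscope are ours, not the patent's.

```python
import numpy as np

def dead_reckoning_step(pose, v, omega, dt):
    """Propagate one particle pose (x, y, theta) over a time step dt,
    given forward speed v and turn rate omega integrated from the
    accelerometer and gyroscope (illustrative motion model)."""
    x, y, theta = pose
    return np.array([x + v * dt * np.cos(theta),
                     y + v * dt * np.sin(theta),
                     theta + omega * dt])
```

In a particle filter, each particle would also receive a small random perturbation of v and omega so that the set covers the motion uncertainty.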
S103, constructing the relation between the total weight of each particle at time t and its readability weight and phase difference weight, and obtaining the total weight of each particle at time t.
First, the particle readability weight is evaluated: a particle pose plausibility condition is set and used to check the plausibility of the robot pose represented by each particle in the particle set. A particle that passes the plausibility check is assigned readability weight $\omega_{1,i,t} = 1$; a particle that fails is assigned $\omega_{1,i,t} = 0$.
The phase difference weight of each particle is calculated as:

$$\omega_{2,i,t} = \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left( -\frac{\left( \Delta\theta_t - \Delta\theta_{i,t} \right)^2}{2\sigma^2} \right) \qquad (2)$$

$$\Delta\theta_{i,t} = \left( \frac{4\pi}{\lambda}\,\Delta d_t \right) \bmod 2\pi \qquad (3)$$

$$\Delta d_t = d_2 - d_1 \qquad (4)$$

where $\omega_{2,i,t}$ is the phase difference weight of the $i$-th particle at time $t$; $\Delta\theta_t$ is the difference between the phases of the same RFID reference tag observed by the antenna at time $t-1$ and at time $t$; $\Delta\theta_{i,t}$ is the predicted phase difference of the $i$-th particle; $\sigma$ is the standard deviation of the phase measurement noise; $\Delta d_t$ is the distance difference at time $t$, with $d_1$ the distance from the antenna to the RFID reference tag at time $t-1$ and $d_2$ the distance at time $t$; and $\lambda$ is the wavelength of the radio signal.
The total weight of a particle, $\omega_{i,t}$, is expressed as:

$$\omega_{i,t} = \omega_{i,t-1}\,\omega_{1,i,t}\,\omega_{2,i,t} \qquad (5)$$

where $\omega_{i,t-1}$ is the total weight of the $i$-th particle at time $t-1$, $\omega_{1,i,t}$ is the readability weight of the $i$-th particle at time $t$, and $\omega_{2,i,t}$ is its phase difference weight at time $t$.
S104, calculating the pose of the robot at time t from the total weight and pose of each particle, so as to achieve absolute positioning of the robot.
The pose of the robot is estimated as the weighted geometric center of the particle set:

$$\left( x_j,\; y_j,\; th_j \right) = \frac{\sum_{i=1}^{N} \omega_{i,t}\left( x_{i,t},\; y_{i,t},\; th_{i,t} \right)}{\sum_{i=1}^{N} \omega_{i,t}} \qquad (6)$$

where $x_j$, $y_j$, and $th_j$ are the abscissa, ordinate, and attitude angle of the estimated pose; $x_{i,t}$, $y_{i,t}$, and $th_{i,t}$ are the abscissa, ordinate, and attitude angle of the $i$-th particle in the world coordinate system at time $t$; and $N$ is the number of particles in the particle set.
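The weight update of formulas (2) to (5) and the pose estimate of formula (6) can be sketched as follows; the function names and array shapes are illustrative assumptions, not from the patent.

```python
import numpy as np

def phase_difference_weight(dtheta_obs, d1, d2, sigma, wavelength):
    """Formulas (2)-(4): Gaussian weight comparing the observed phase
    difference with the one predicted from the particle's geometry."""
    dtheta_pred = (4.0 * np.pi / wavelength) * (d2 - d1) % (2.0 * np.pi)
    err = dtheta_obs - dtheta_pred
    return np.exp(-err ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def update_weights(w_prev, w_readability, w_phase):
    """Formula (5): multiplicative weight update, followed by normalisation."""
    w = w_prev * w_readability * w_phase
    return w / w.sum()

def estimate_pose(particles, weights):
    """Formula (6): weighted geometric centre of the particle set.
    particles is an (N, 3) array of (x, y, theta) rows; weights sum to 1."""
    return weights @ particles
```

A full localization step is then: predict each particle by dead reckoning, reweight it with the readability check and the phase observation, and read out the pose as the weighted centre, resampling when the weights degenerate, as in any particle filter.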
(2) Relative position information of the robot is acquired by the camera.
In this embodiment, the target object image acquired by the RGB camera directly reflects the relative position and attitude between the target object and the robot's end-effector. Before tracking the target object, offline learning is carried out first: the robot is controlled to the correct position to obtain the expected target object image, and the expected target object edge contour $v^*$ is extracted with an active contour:

$$v^* = \left[ (x^*_1, y^*_1),\, (x^*_2, y^*_2),\, \ldots,\, (x^*_n, y^*_n) \right] \qquad (7)$$

where $x^*_j$ and $y^*_j$ are the abscissa and ordinate of the $j$-th point on the expected target object edge contour, and $n$ is the number of points on the expected edge contour.
During online operation, the RGB camera acquires the target object image in real time, and the real-time target object edge contour $v$ is extracted through the active contour:

$$v = \left[ (x_1, y_1),\, (x_2, y_2),\, \ldots,\, (x_n, y_n) \right] \qquad (8)$$

where $x_j$ and $y_j$ are the abscissa and ordinate of the $j$-th point on the edge contour of the target object acquired in real time.
The difference between the expected edge contour and the real-time edge contour, $e = v - v^*$, is used to control the motion of the robot in real time until the difference is eliminated or minimized, at which point the robot has reached the correct position.
The relation between the motion of the image edges and the motion of the RGB camera is:

$$\dot{v} = L\,\tau_c \qquad (9)$$

where $\tau_c$ is the velocity of the RGB camera, $\tau_c = (T_c, W_c)$, with translational velocity $T_c = (T_{cx}, T_{cy}, T_{cz})$ and rotational velocity $W_c = (W_{cx}, W_{cy}, W_{cz})$; and $L$ is the matrix relating the three-dimensional coordinates of the target object, $(X_j, Y_j, Z_j)$, to the coordinates of the corresponding image points, $(x_j, y_j)$. Under perspective projection, $x_j = X_j / Z_j$ and $y_j = Y_j / Z_j$, and the block of $L$ for one point is determined by:

$$L_j = \begin{bmatrix} -\dfrac{1}{Z_j} & 0 & \dfrac{x_j}{Z_j} & x_j y_j & -(1 + x_j^2) & y_j \\[2mm] 0 & -\dfrac{1}{Z_j} & \dfrac{y_j}{Z_j} & 1 + y_j^2 & -x_j y_j & -x_j \end{bmatrix} \qquad (10)$$
for more convenient calculation, in actual workZ j Is convenient to useZ* j(i.e., the desired value).
By a relationship matrixLObtaining three-dimensional coordinates of the target object: (X j ,Y j ,Z j ) Thereby obtaining the relative position information of the robot.
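A compact sketch of this control law follows; the proportional gain and the pseudo-inverse controller are assumptions consistent with classical image-based visual servoing, and the desired depth Z* is substituted for Z_j as described above.

```python
import numpy as np

def interaction_matrix_row(x, y, Z):
    """Formula (10): image Jacobian of one normalised point (x, y) at depth Z."""
    return np.array([[-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
                     [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]])

def camera_velocity(v, v_star, Z_star, gain=0.5):
    """Drive the contour error e = v - v* toward zero with
    tau_c = -gain * pinv(L) @ e; v and v_star are (n, 2) contour arrays."""
    e = (v - v_star).reshape(-1)                          # stacked (x, y) errors
    L = np.vstack([interaction_matrix_row(x, y, Z_star) for x, y in v])
    return -gain * np.linalg.pinv(L) @ e                  # (Tx, Ty, Tz, Wx, Wy, Wz)
```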
S200, fusing the absolute position information and the relative position information of the robot to obtain the accurate position information of the robot.
In the embodiment, two positioning technologies are fused to perform accurate positioning, and visual relative position information is used for auxiliary positioning.
The specific fusion mode is as follows:
and taking a world coordinate system used in the RFID positioning process as a global coordinate system, and converting the relative position information obtained by visual solution into the world coordinate system through space transformation. And constructing a hierarchical neural network, and inputting the two position coordinates into the neural network for data fusion.
In this embodiment, a three-layer feedforward network model based on a BP algorithm is adopted, and the established three-layer feedforward network model is trained offline. The training algorithm takes the perception information represented by the binary vector as an input vector, the corresponding decision command binary vector as a target vector, and the sample data is the perception information category in the prior knowledge range. After model training is finished, absolute position information and relative position information after spatial transformation are used as input of the three-layer feedforward network model, and a fusion strategy is generated according to preset fusion parameters, so that fusion of the absolute position information and the relative position information of the robot is finished, and accurate position information of the robot is obtained.
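As a sketch of the fusion network (the layer sizes, tanh activation, and names below are our assumptions; the patent specifies only a three-layer BP feedforward model trained offline):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(6, 16))  # input: RFID (x, y, th) + vision (x, y, th), world frame
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3))  # output: fused pose (x, y, th)
b2 = np.zeros(3)

def fuse(p_rfid, p_vision):
    """Forward pass of the three-layer feedforward fusion network."""
    x = np.concatenate([p_rfid, p_vision])
    h = np.tanh(x @ W1 + b1)              # hidden layer
    return h @ W2 + b2                    # fused accurate pose estimate
```

Offline, W1, b1, W2, and b2 would be fitted by backpropagation on the (perception input, decision target) sample pairs described above.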
The neural network adopted by the embodiment has strong fault tolerance, self-organization, self-learning and self-adaption capabilities and can realize complex mapping; the neural network also has strong nonlinear processing capability and can well meet the requirements of the data fusion technology.
S300, collecting the working environment image of the robot, and obtaining a navigation map and an obstacle map by combining the accurate position information of the robot.
(1) The robot's current accurate position information is taken as the starting position for navigation. A narrow baseline stereo camera captures the environment near the robot's feet in real time to obtain a narrow baseline image, and a wide baseline stereo camera captures the more distant environment beyond the robot's feet in real time to obtain a wide baseline image.
Based on the optical characteristics (gray value, color value, etc.), geometric characteristics (shape, size, etc.), spatial position information (position and orientation in the image), and feature information (positions of edges and corners) represented in the wide and narrow baseline images, the structural features in this information are combined into stable feature vectors. Feature vectors that remain stable under geometric deformation, illumination change, and similar factors are used as invariants to match the wide baseline image data with the narrow baseline image data: the most stable extremal feature region pairs matched between the two images are extracted according to a nearest Euclidean distance ratio criterion, mismatched feature region pairs are eliminated with a sampling consensus algorithm, and the epipolar geometry of the two images is estimated to give the matching result. A navigation map taking the current accurate position information as the navigation starting point is then obtained from the matching result combined with the robot's current accurate position information.
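The matching pipeline can be illustrated as below. The patent does not name concrete algorithms, so SIFT features, the 0.75 distance-ratio threshold, and RANSAC stand in here for the stable extremal feature regions, the Euclidean distance ratio criterion, and the sampling consensus step it describes.

```python
import numpy as np
import cv2

def match_wide_narrow(wide_img, narrow_img):
    """Match the wide and narrow baseline images, reject mismatched pairs,
    and estimate the epipolar geometry of the two views."""
    sift = cv2.SIFT_create()
    kw, dw = sift.detectAndCompute(wide_img, None)
    kn, dn = sift.detectAndCompute(narrow_img, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dw, dn, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]  # ratio criterion
    pts_w = np.float32([kw[m.queryIdx].pt for m in good])
    pts_n = np.float32([kn[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts_w, pts_n, cv2.FM_RANSAC)  # consensus step
    keep = mask.ravel() == 1
    return F, pts_w[keep], pts_n[keep]
```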
(2) A panoramic camera photographs the surrounding environment to obtain a panoramic image, and several laser radars scan the obstacles in the surrounding environment to obtain a scan map containing the obstacles. The obstacles in the surrounding environment are then measured to obtain obstacle height information, which is marked into the scan map to produce the obstacle map.
The laser radar measures the obstacle height as follows: the ground is taken as a plane, the elevation angle $\alpha$ of the obstacle can be measured, and the slant distance $d$ between the radar and the apex of the obstacle can be calculated from the propagation speed of the radio wave and the time it takes to travel there and back. The obstacle height $h$ is then:

$$h = d \sin\alpha \qquad (11)$$
and marking the calculated height information of the obstacles to the panoramic scanning map to obtain an obstacle map.
S400, fusing the navigation map and the obstacle map to obtain the terrain map marked with obstacle height information.
Step 401, extracting feature points of the navigation map and the obstacle map, and then matching the extracted feature points to obtain matching point pairs.
Step 402, acquiring the rotation matrix, scale variable, and translation vector between the navigation map and the obstacle map from the matching point pairs using an affine transformation method, and calculating the corresponding affine transformation matrix T.
$$T = \begin{bmatrix} s\cos\Phi & -s\sin\Phi & \delta_x \\ s\sin\Phi & s\cos\Phi & \delta_y \\ 0 & 0 & 1 \end{bmatrix} \qquad (12)$$

where $\Phi$ is the angle by which the obstacle map is rotated relative to the navigation map, $s$ is the fusion scale, and $\delta_x$, $\delta_y$ are the offsets of the obstacle map relative to the navigation map in the $x$ and $y$ directions, respectively.
Step 403, fusing the navigation map and the obstacle map using the rotation matrix, scale variable, translation vector, and affine transformation matrix, so as to obtain the terrain map marked with obstacle height information.
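Steps 402 and 403 can be sketched with OpenCV: estimateAffinePartial2D recovers the rotation/scale/translation form of formula (12) from the matching point pairs, and the per-cell maximum used to overlay the maps is our assumption.

```python
import numpy as np
import cv2

def fuse_maps(nav_map, obs_map, pts_obs, pts_nav):
    """Estimate the similarity transform of formula (12) from matched
    point pairs, warp the obstacle map into the navigation map frame,
    and overlay the two grids (sketch)."""
    M, _ = cv2.estimateAffinePartial2D(pts_obs, pts_nav)  # 2x3 matrix [s*R | t]
    h, w = nav_map.shape[:2]
    warped = cv2.warpAffine(obs_map, M, (w, h))
    return np.maximum(nav_map, warped)  # keep the marked height per cell
```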
S500, determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
The obstacle height information in the terrain map is compared with a preset obstacle threshold so that the robot can decide through navigation whether to cross over or steer to avoid. When the obstacle height is less than the preset obstacle threshold, the obstacle is low enough for the robot to climb over, and the robot is made to cross the obstacle. When the obstacle height is not less than the preset obstacle threshold, the obstacle is too high for the robot to climb over, and the robot is made to steer away and avoid the obstacle.
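The decision rule of S500 reduces to a single comparison; the 0.10 m threshold below is an assumed example, since the patent leaves the value as a preset.

```python
def obstacle_action(height_m, threshold_m=0.10):
    """Cross low obstacles; steer around those at or above the threshold."""
    return "cross" if height_m < threshold_m else "steer_to_avoid"
```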
The embodiment of the present application further discloses a robot navigation and positioning system, as shown in fig. 2 and 4, the system includes:
(1) The positioning module, which acquires the absolute position information and the relative position information of the robot.
In this embodiment, the positioning module includes:
and the RFID sub-module is used for acquiring absolute position information of the robot by reading the RFID tag. The RFID sub-module includes: RFID label, read write line and read write antenna. A reader-writer and a reader-writer antenna are installed on the robot, and a passive ultrahigh frequency RFID tag is installed on the ground to serve as a reference tag. The reader/writer reads/writes information in the RFID tag. And the reader-writer antenna transmits radio-frequency signals between the RFID label and the reader-writer through the communication base station.
And the visual positioning submodule is used for acquiring the relative position information of the robot. The vision positioning sub-module in this embodiment uses an RGB camera, and the camera is mounted on the end-effector of the robot, so that the obtained object image directly reflects the relative position and posture relationship between the target object and the end-effector of the robot, thereby obtaining the relative position information of the robot.
(2) The data fusion module, which fuses the absolute and relative position information of the robot to obtain the robot's accurate position information.
(3) The environment perception module, which acquires images of the robot's working environment and, combined with the robot's accurate position information, obtains the navigation map and the obstacle map.
In this embodiment, the environment sensing module specifically includes:
and the narrow-baseline stereo camera is used for acquiring a narrow baseline image acquired by collecting the environment of the feet of the robot in real time.
The wide baseline stereo camera is used for acquiring wide baseline images acquired from a remote environment except the feet of the robot in real time.
And the image fusion sub-module is used for matching the wide baseline image and the narrow baseline image and obtaining a navigation map taking the accurate position information as a navigation starting point according to the matching result and by combining the accurate position information of the robot.
And the panoramic scanning submodule is used for acquiring a panoramic image. The panoramic scanning sub-module may be a panoramic camera.
And the obstacle scanning submodule is used for scanning obstacles in the working environment of the robot and obstacle height information. The obstacle scanning sub-module may be a laser scanning radar.
And the obstacle marking submodule is used for marking the height information of the measured obstacles into the panoramic scanning map to obtain an obstacle map.
(4) The map fusion module, which fuses the navigation map and the obstacle map to obtain the terrain map marked with obstacle height information.
(5) The obstacle avoidance navigation module, which determines, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
(6) The controller, which outputs a control instruction according to the judgment result of the obstacle avoidance navigation module.
(7) The driver, which drives the robot to move according to the control instruction output by the controller.
The embodiment of the application also discloses a storage medium for robot navigation and positioning, which adopts the following technical scheme:
a computer-readable storage medium having a computer program stored thereon, the computer program when executed by a processor performing:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
The embodiment of the application also discloses a device for robot navigation and positioning, which adopts the following technical scheme:
a computer device comprising a memory and a processor, the memory having stored thereon a computer program that is loadable and executable by the processor, the processor when executing the computer program implementing:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by them: all equivalent changes made according to the structure, shape, and principle of the present application shall be covered by its protection scope.

Claims (10)

1. A robot navigation and positioning method is characterized by comprising the following steps:
acquiring absolute position information and relative position information of the robot;
fusing the absolute position information and the relative position information of the robot to obtain accurate position information of the robot;
acquiring a working environment image of the robot, and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
and determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it.
2. The robot navigation and positioning method of claim 1, wherein the absolute position information of the robot is obtained by:
establishing an initial particle set, and setting an initial pose of each particle, wherein one particle pose corresponds to one robot pose;
predicting the pose of each particle at time t from its pose at time t-1;
constructing a relation between each particle's total weight at time t and its readability weight and phase difference weight, and obtaining the total weight of each particle at time t;
and calculating the pose of the robot at time t from the total weight and pose of each particle at time t, so as to obtain the absolute position information of the robot.
3. The robot navigation and positioning method of claim 1, wherein the relative position information of the robot is obtained by:
acquiring a target object image and an expected target object image, and extracting, through an active contour, the edge contour of the target object acquired in real time and the edge contour of the expected target object;
controlling the motion of the robot in real time according to the difference between the expected target object edge contour and the real-time target object edge contour until the difference is minimized;
and calculating the relation between the motion of the target object image edges and the motion of the image acquisition device from the movement speed of the robot, so as to obtain the relative position information of the robot.
4. The robot navigation and positioning method according to claim 1, wherein the acquiring of the working environment image of the robot and the combining of the precise position information of the robot to obtain the navigation map and the obstacle map specifically comprises:
acquiring an environment image of the foot of the robot to obtain a narrow baseline image, acquiring an environment image except the foot of the robot to obtain a wide baseline image, matching the wide baseline image and the narrow baseline image, and acquiring a navigation map taking the current accurate position information of the robot as a navigation starting point according to a matching result and by combining the accurate position information of the robot;
acquiring a panoramic image, and scanning obstacles in the surrounding environment through a laser radar to obtain a scanning map containing a plurality of obstacles; and measuring the obstacles in the surrounding environment to obtain obstacle height information, and marking the obstacle height information into a scanning map to obtain an obstacle map.
5. The robot navigation and positioning method according to claim 1, wherein the navigation map and the obstacle map are fused to obtain a terrain map labeled with obstacle height information, and the specific process is as follows:
extracting feature points of the navigation map and the obstacle map, and then matching the extracted feature points to obtain matching point pairs;
acquiring a rotation matrix, a scale variable and a translation vector between the navigation map and the obstacle map according to the matching point pairs by using an affine transformation method, and calculating a corresponding affine transformation matrix;
and fusing the navigation map and the obstacle map by using the rotation matrix, the scale variable, the translation vector and the affine transformation matrix so as to obtain the terrain map marked with the height information of the obstacle.
6. A robotic navigation and positioning system, comprising:
the positioning module is used for acquiring absolute position information and relative position information of the robot;
the data fusion module is used for fusing the absolute position information and the relative position information of the robot to obtain the accurate position information of the robot;
the environment perception module is used for acquiring a working environment image of the robot and obtaining a navigation map and an obstacle map by combining with accurate position information of the robot;
the map fusion module is used for fusing the navigation map and the obstacle map to obtain a terrain map marked with obstacle height information;
the obstacle avoidance navigation module is used for determining, according to the obstacle height information in the terrain map, whether the robot crosses over an obstacle or steers to avoid it;
the controller is used for outputting a control instruction according to the judgment result of the obstacle avoidance navigation module; and
the driver is used for driving the robot to move according to the control instruction output by the controller.
7. The robotic navigation and positioning system of claim 6, wherein the positioning module includes:
the RFID submodule is used for acquiring absolute position information of the robot by reading the RFID tag;
and the visual positioning submodule is used for acquiring the relative position information of the robot.
8. The robotic navigation and positioning system of claim 6, wherein the context awareness module includes:
the narrow baseline image acquisition submodule is used for acquiring, in real time, a narrow baseline image of the environment at the robot's feet;
the wide baseline image acquisition submodule is used for acquiring, in real time, a wide baseline image of the environment beyond the robot's feet;
the image fusion sub-module is used for matching the wide baseline image and the narrow baseline image and obtaining a navigation map taking the accurate position information as a navigation starting point according to a matching result and by combining the accurate position information of the robot;
the panoramic scanning sub-module is used for acquiring a panoramic image;
the obstacle scanning submodule is used for scanning the obstacles in the robot working environment and their height information; and
the obstacle marking submodule is used for marking the obstacles and their height information into the panoramic image to obtain an obstacle map.
9. Computer device comprising a memory and a processor, said memory having stored thereon a computer program which is loadable and executable by the processor, characterized in that the processor realizes the steps of the method according to any of the preceding claims 1-5 when executing said computer program.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any of the claims 1-5.
CN202011344552.6A 2020-11-25 2020-11-25 Robot navigation and positioning method, system, equipment and storage medium Pending CN112629520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011344552.6A CN112629520A (en) 2020-11-25 2020-11-25 Robot navigation and positioning method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011344552.6A CN112629520A (en) 2020-11-25 2020-11-25 Robot navigation and positioning method, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112629520A true CN112629520A (en) 2021-04-09

Family

ID=75303994

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011344552.6A Pending CN112629520A (en) 2020-11-25 2020-11-25 Robot navigation and positioning method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112629520A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105699985A (en) * 2016-03-23 2016-06-22 北京信息科技大学 Single-line laser radar device
CN106680832A (en) * 2016-12-30 2017-05-17 深圳优地科技有限公司 Obstacle detection method and device of mobile robot and mobile robot
CN107167141A (en) * 2017-06-15 2017-09-15 同济大学 Robot autonomous navigation system based on double line laser radars
CN109716160A (en) * 2017-08-25 2019-05-03 北京嘀嘀无限科技发展有限公司 For detecting the method and system of vehicle environmental information
CN107901041A (en) * 2017-12-15 2018-04-13 中南大学 A kind of robot vision servo control method based on image blend square
CN108621167A (en) * 2018-07-23 2018-10-09 中南大学 A kind of visual servo decoupling control method based on profile side and the interior feature that takes all of
CN110554353A (en) * 2019-08-29 2019-12-10 华中科技大学 mobile robot absolute positioning method based on RFID system
CN111413970A (en) * 2020-03-18 2020-07-14 天津大学 Ultra-wideband and vision integrated indoor robot positioning and autonomous navigation method
CN111427363A (en) * 2020-04-24 2020-07-17 深圳国信泰富科技有限公司 Robot navigation control method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xia Chunrui; Wang Rui; Li Xiaojuan; Guan Yong; Zhang Jie; Wei Hongxing: "Path planning method based on probabilistic model checking in dynamic environments", Computer Engineering and Applications, no. 12, pages 5-11 *
Chen Yuntao et al.: "Fundamentals of Radar Technology", vol. 2014, 31 July 2014, National Defense Industry Press, pages 16-17 *
Chen Bing: "A new wide baseline image matching method", Journal of Xidian University (Natural Science Edition), vol. 38, no. 2, 30 April 2011 (2011-04-30), pages 116-122 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113465605A (en) * 2021-06-09 2021-10-01 西安交通大学 Mobile robot positioning system and method based on photoelectric sensing measurement network
CN113465605B (en) * 2021-06-09 2022-10-25 西安交通大学 Mobile robot positioning system and method based on photoelectric sensing measurement network
CN113390433A (en) * 2021-07-20 2021-09-14 上海擎朗智能科技有限公司 Robot positioning method and device, robot and storage medium
CN113532441A (en) * 2021-08-20 2021-10-22 河南牧原智能科技有限公司 Method, device and storage medium for integrated navigation of carriers in pigsty
CN114001743A (en) * 2021-10-29 2022-02-01 京东方科技集团股份有限公司 Map drawing method, map drawing device, map drawing system, storage medium, and electronic apparatus
CN114415659A (en) * 2021-12-13 2022-04-29 烟台杰瑞石油服务集团股份有限公司 Robot safety obstacle avoidance method and device, robot and storage medium
CN114415659B (en) * 2021-12-13 2024-05-28 烟台杰瑞石油服务集团股份有限公司 Robot safety obstacle avoidance method and device, robot and storage medium
CN114236553A (en) * 2022-02-23 2022-03-25 杭州蓝芯科技有限公司 Autonomous mobile robot positioning method based on deep learning
CN114236553B (en) * 2022-02-23 2022-06-10 杭州蓝芯科技有限公司 Autonomous mobile robot positioning method based on deep learning

Similar Documents

Publication Publication Date Title
Alatise et al. A review on challenges of autonomous mobile robot and sensor fusion methods
CN112629520A (en) Robot navigation and positioning method, system, equipment and storage medium
Li et al. Openstreetmap-based autonomous navigation for the four wheel-legged robot via 3d-lidar and ccd camera
WO2017028653A1 (en) Method and system for automatically establishing map indoors by mobile robot
JP6855524B2 (en) Unsupervised learning of metric representations from slow features
CN112740274A (en) System and method for VSLAM scale estimation on robotic devices using optical flow sensors
Chen et al. Robot navigation with map-based deep reinforcement learning
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
CN111474932B (en) Mobile robot mapping and navigation method integrating scene experience
de Oliveira et al. A robot architecture for outdoor competitions
CN114067210A (en) Mobile robot intelligent grabbing method based on monocular vision guidance
Jensen et al. Laser range imaging using mobile robots: From pose estimation to 3d-models
Giordano et al. 3D structure identification from image moments
Lang et al. Mobile robot localization and object pose estimation using optical encoder, vision and laser sensors
Nandkumar et al. Simulation of Indoor Localization and Navigation of Turtlebot 3 using Real Time Object Detection
Bikmaev et al. Visual Localization of a Ground Vehicle Using a Monocamera and Geodesic-Bound Road Signs
CN115690343A (en) Robot laser radar scanning and mapping method based on visual following
Alkhawaja et al. Low-cost depth/IMU intelligent sensor fusion for indoor robot navigation
Muramatsu et al. Mobile robot navigation utilizing the web based aerial images without prior teaching run
Yildiz et al. CNN based sensor fusion method for real-time autonomous robotics systems
CN115032984A (en) Semi-autonomous navigation method and system for port logistics intelligent robot
Tanveer et al. An ipm approach to multi-robot cooperative localization: Pepper humanoid and wheeled robots in a shared space
Maleki et al. Visual Navigation for Autonomous Mobile Material Deposition Systems using Remote Sensing
Van Toan et al. A Single 2D LiDAR Extrinsic Calibration for Autonomous Mobile Robots
Liao et al. The development of an artificial neural networks aided image localization scheme for indoor navigation applications with floor plans built by multi-platform mobile mapping systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination