CN113985465A - Sensor fusion positioning method and system, readable storage medium and computer equipment

Sensor fusion positioning method and system, readable storage medium and computer equipment

Info

Publication number
CN113985465A
Authority
CN
China
Prior art keywords
map
sensor
positioning result
positioning
pose
Prior art date
Legal status
Pending
Application number
CN202111265056.6A
Other languages
Chinese (zh)
Inventor
聂志华
曹燕杰
Current Assignee
Jiangxi Intelligent Industry Technology Innovation Research Institute
Original Assignee
Jiangxi Intelligent Industry Technology Innovation Research Institute
Priority date
Filing date
Publication date
Application filed by Jiangxi Intelligent Industry Technology Innovation Research Institute filed Critical Jiangxi Intelligent Industry Technology Innovation Research Institute
Priority to CN202111265056.6A priority Critical patent/CN113985465A/en
Publication of CN113985465A publication Critical patent/CN113985465A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/48 Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a sensor fusion positioning method, a system, a readable storage medium and computer equipment. The method comprises the following steps: respectively acquiring map data of a visual sensor and map data of a laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map; matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map; acquiring an initial pose of the robot determined by the visual sensor and acquiring pose data of an odometer; calculating a positioning result with the initial pose as the initial value for the laser sensor, and judging whether the deviation between the positioning result and the odometer pose data, or the covariance matrix of the positioning result, is greater than a preset threshold; and if the deviation between the positioning result and the odometer pose data, or the covariance matrix of the positioning result, is less than or equal to the preset threshold, planning a path for the robot according to the positioning result so that the robot moves to a designated position along the planned path.

Description

Sensor fusion positioning method and system, readable storage medium and computer equipment
Technical Field
The invention relates to the technical field of robot control, in particular to a sensor fusion positioning method, a sensor fusion positioning system, a readable storage medium and computer equipment.
Background
With the rapid development of science and technology and the improvement of living standards, intelligent equipment has gradually become part of everyday life. As a widely used type of intelligent equipment, the mobile robot generally provides load carrying, environment recognition, autonomous navigation, obstacle avoidance and other functions, and offers flexible functionality, stable operation and a high degree of intelligence. Mobile robots are therefore widely applied in industries such as intelligent manufacturing, logistics sorting, warehousing and port transportation, bring great convenience to production and transportation processes, and have good application and development prospects.
In the navigation and positioning function of a mobile robot, usually only a laser sensor or only a visual sensor is used for positioning. In this navigation and positioning technology, however, the laser sensor acquires only distance information in a two-dimensional plane, so the information it provides about the environment is limited; a positioning algorithm based on the laser sensor is prone to mismatching or excessively long global search times during global positioning and is therefore suitable only for inter-frame matching. The visual sensor can acquire rich environmental information, and with a visual bag-of-words model it can detect a closed loop more quickly and accurately, making it suitable for global matching and positioning; however, the feature point map produced by a vision-based positioning algorithm is not suitable for path planning.
The existing visual sensor and laser sensor generate different maps: the laser sensor builds an occupancy grid map while the visual sensor builds a feature point map, and the two maps differ in form and in coordinate system, so the two kinds of data cannot be effectively fused to improve positioning accuracy. Visual positioning and laser positioning also lack an effective interface and are generally used in isolation, so effective positioning data cannot be formed.
Disclosure of Invention
In view of the foregoing, it is an object of the present invention to provide a sensor fusion positioning method, system, readable storage medium and computer device to solve at least the above-mentioned deficiencies in the related art.
The invention provides a sensor fusion positioning method, wherein a sensor is arranged on a robot, the sensor at least comprises a visual sensor and a laser sensor, and the sensor fusion positioning method comprises the following steps:
respectively acquiring map data of the visual sensor and the laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map;
matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of an odometer;
taking the initial pose as the initial value for the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is greater than a preset threshold;
and if the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is less than or equal to the preset threshold, planning a path for the robot according to the positioning result so that the robot moves to a designated position along the planned path.
In addition, according to the sensor fusion positioning method provided by the invention, the following additional technical features can be provided:
further, after the step of determining whether the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold, the method further includes:
if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than the preset threshold value, controlling the robot to move to a recovery point;
and returning to execute the steps of obtaining the initial pose of the robot determined by the vision sensor according to the composite map and obtaining the pose data of the odometer.
Further, the step of performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map includes:
extracting the mapping motion trajectories of the two sets of map data, and performing linear interpolation on the mapping motion trajectories;
and associating the poses of the mapping motion trajectories according to their timestamps, and constructing and solving a least-squares problem for the transformation matrix to obtain the sparse feature point map and the two-dimensional occupancy grid map.
Further, the step of controlling the robot to move to a recovery point includes:
generating a plurality of recovery points which are beneficial to positioning in the composite map by clustering the visual feature points;
and controlling the robot to move to the recovery point by taking the pose data closest to the current time in the saved pose data of the odometer as a starting point.
Further, the method also comprises the following steps:
and continuously supplementing map features in the existing composite map through a visual positioning algorithm of the visual sensor.
The invention also provides a sensor fusion positioning system, wherein the sensor is arranged on a robot, the sensor at least comprises a visual sensor and a laser sensor, and the sensor fusion positioning system comprises:
the first acquisition module is used for respectively acquiring map data of the visual sensor and the laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map;
the conversion module is used for matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
the second acquisition module is used for acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of the odometer;
the judging module is used for taking the initial pose as an initial value of the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation of the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value;
and the first processing module is used for planning a path for the robot according to the positioning result if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold value, so that the robot moves to a specified position according to the planned path.
In addition, the sensor fusion positioning system provided by the invention can also have the following additional technical characteristics:
further, the method also comprises the following steps:
the second processing module is used for controlling the robot to move to a recovery point if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than the preset threshold; and returning to execute the steps of obtaining the initial pose of the robot determined by the vision sensor according to the composite map and obtaining the pose data of the odometer.
Further, the first obtaining module includes:
the first processing unit is used for extracting the mapping motion trajectories of the two sets of map data and performing linear interpolation on the mapping motion trajectories;
and the second processing unit is used for associating the poses of the mapping motion trajectories according to their timestamps, constructing and solving a least-squares problem for the transformation matrix, and obtaining the sparse feature point map and the two-dimensional occupancy grid map.
Further, the second processing module comprises:
the generating unit is used for generating a plurality of recovery points which are beneficial to positioning in the composite map by clustering the visual feature points;
and the control unit is used for controlling the robot to move to the recovery point by taking the pose data which is closest to the current time in the saved pose data of the odometer as a starting point.
Further, the method also comprises the following steps:
and the supplement module is used for continuously supplementing map features in the existing composite map through a visual positioning algorithm of the visual sensor.
In another aspect, the present invention further provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the sensor fusion positioning method described above.
In another aspect, the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the sensor fusion positioning method when executing the computer program.
Compared with the prior art, the invention has the following beneficial effects: the map data of the visual sensor and the map data of the laser sensor are time-synchronized and their motion trajectories recorded, and the two maps are aligned into a composite map according to a transformation relation, so that the map data of the two sensors are effectively fused and the overall positioning accuracy is improved. Meanwhile, when the robot starts from an arbitrary position, the information-rich visual sensor quickly determines the robot's own position, after which positioning is optimized by the laser sensor, further improving positioning accuracy, forming effective positioning data and finding an accurate position.
Drawings
FIG. 1 is a flow chart of a sensor fusion positioning method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a sensor fusion positioning method according to a second embodiment of the present invention;
FIG. 3 is a block diagram of a sensor fusion positioning system in a third embodiment of the present invention;
FIG. 4 is a structural diagram of a vehicle in a fourth embodiment of the present invention.
Description of the main element symbols:
Memory 10; Processor 20; Computer program 30; First acquisition module 11; Conversion module 12; Second acquisition module 13; Judging module 14; First processing module 15; Second processing module 16; Supplement module 17.
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
To facilitate an understanding of the invention, the invention will now be described more fully with reference to the accompanying drawings. Several embodiments of the invention are presented in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Example one
Referring to fig. 1, a sensor fusion positioning method according to a first embodiment of the present invention is shown, in which the sensor is disposed on a robot, the sensor at least includes a visual sensor and a laser sensor, and the sensor fusion positioning method specifically includes steps S101 to S105:
S101, respectively obtaining map data of the visual sensor and the laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map;
It should be noted that the laser sensor builds an occupancy grid map and the visual sensor builds a feature point map, and the two maps differ in form and in coordinate system. In specific implementation, the mapping motion trajectories of the map data built by the two sensors are extracted, linear interpolation is performed on these trajectories, and the poses of the trajectories are aligned according to their timestamps, so that the two maps are synchronized in time.
S102, matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
In specific implementation, the sparse feature point map is matched onto the two-dimensional occupancy grid map according to the transformation relation so as to obtain the composite map.
During mapping, the two sensors run simultaneously and are time-synchronized, so the corresponding pose relationship is determined using the timestamps; because the two algorithms run at different computation frequencies, the trajectories must first be linearly interpolated before the matching relationship is determined from the timestamps, thereby obtaining the composite map.
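As a concrete illustration of this matching step, the sketch below assumes the transformation relation has already been solved as a planar rotation R and translation t (for example by the least-squares step described in the second embodiment), maps the sparse feature points into the occupancy-grid frame, and converts them to grid cell indices. The grid origin and resolution parameters, as well as all function names, are assumptions for illustration rather than the patent's prescribed implementation.

```python
import numpy as np

def project_features_to_grid(feature_xy, R, t, grid_origin, resolution):
    """Transform sparse visual feature points (an Nx2 array in the visual
    map frame) into the laser occupancy-grid frame and return both the
    transformed coordinates and their grid cell indices.

    `grid_origin` is the world coordinate of cell (0, 0) and `resolution`
    is the cell size in metres; both would come from the grid map metadata."""
    world_xy = feature_xy @ R.T + t                       # into the grid-map frame
    cells = np.floor((world_xy - grid_origin) / resolution).astype(int)
    return world_xy, cells

# Example: R, t from trajectory alignment; a 0.05 m grid whose origin is (-10, -10)
# world_xy, cells = project_features_to_grid(features, R, t,
#                                            grid_origin=np.array([-10.0, -10.0]),
#                                            resolution=0.05)
```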
It should be noted that, in some other optional embodiments, in order to store and load a map, after the map is created, the newly created map is stored as a file in a binary form, and the binary file can be restored to a composite map when the map is subsequently used, so that the map utilization rate is improved.
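As one possible realisation of this binary storage, the following minimal sketch serialises the composite map with Python's pickle module; the composite-map object, its contents and the file path are assumptions for illustration.

```python
import pickle

def save_composite_map(path, composite_map):
    """Persist the composite map (occupancy grid plus transformed feature
    points) as a binary file so it can be reloaded without re-mapping."""
    with open(path, "wb") as f:
        pickle.dump(composite_map, f, protocol=pickle.HIGHEST_PROTOCOL)

def load_composite_map(path):
    """Restore a previously saved composite map from its binary file."""
    with open(path, "rb") as f:
        return pickle.load(f)
```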
S103, acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of an odometer;
In specific implementation, because the starting position of the robot is uncertain, the initial pose is determined and the odometer pose data are acquired in the initialization stage using the positioning algorithm of the information-rich visual sensor, which makes positioning faster and more robust. When the robot is started, the visual features may be difficult to extract from the current starting viewing angle; the robot therefore moves randomly within a small range after start-up to find a texture-rich area, which improves the success rate of positioning.
S104, taking the initial pose as an initial value of the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value;
in specific implementation, the positioning accuracy of the laser positioning algorithm based on the laser sensor is higher than that of the visual positioning algorithm of the visual sensor, so that the result is used as an initial value to be sent to the laser sensor after the initial pose is determined by the positioning algorithm of the visual sensor, and the positioning result is calculated according to the laser positioning algorithm of the laser sensor.
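The patent does not name a specific laser positioning algorithm. As one hedged illustration of how the visual initial pose could seed the laser localizer, the sketch below assumes a particle-filter-style localizer and draws its initial particles around the visually determined pose, so that laser matching only has to refine locally instead of searching globally; the spread parameters and the function name are assumptions.

```python
import numpy as np

def seed_particles_from_visual_pose(init_pose, n_particles=500,
                                    xy_sigma=0.3, theta_sigma=0.1):
    """Draw an initial particle set (x, y, theta, weight) around the pose
    reported by the visual localizer, so the laser localizer starts from a
    good initial value rather than a global search. Spreads are illustrative."""
    x, y, theta = init_pose
    particles = np.empty((n_particles, 4))
    particles[:, 0] = np.random.normal(x, xy_sigma, n_particles)
    particles[:, 1] = np.random.normal(y, xy_sigma, n_particles)
    particles[:, 2] = np.random.normal(theta, theta_sigma, n_particles)
    particles[:, 3] = 1.0 / n_particles           # uniform initial weights
    return particles
```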
And after a positioning result is calculated, judging whether the deviation of the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value.
It can be understood that the pose estimated by the odometer is not accurate enough but does not drift far within a short time, whereas the laser positioning algorithm of the laser sensor is accurate but can drift seriously if its pose data do not match the global map; therefore, the comparison between the two, together with the covariance matrix output by the laser positioning algorithm, is used as the judgment condition.
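As a concrete illustration of this judgment condition, the sketch below combines the two checks: the drift of the laser result relative to the short-term odometer pose and the magnitude of the covariance reported by the laser localizer. The threshold values and the use of the covariance trace are illustrative assumptions, not values given in the patent.

```python
import numpy as np

def positioning_reliable(laser_pose, odom_pose, covariance,
                         max_deviation=0.5, max_cov_trace=0.2):
    """Return True when the laser positioning result is trustworthy: it must
    stay close to the short-term odometer pose AND the covariance reported
    by the laser localization algorithm must stay small."""
    deviation = np.linalg.norm(np.asarray(laser_pose)[:2] - np.asarray(odom_pose)[:2])
    return deviation <= max_deviation and float(np.trace(covariance)) <= max_cov_trace
```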
And S105, if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold, planning a path for the robot according to the positioning result, so that the robot moves to a specified position according to the planned path.
In specific implementation, if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold, the positioning result is used for planning a path for the robot, and the laser sensor is used for avoiding obstacles, so that the robot moves to an appointed position according to the planned path.
It should be noted that in some alternative embodiments, a GPS sensor or other hardware devices with the same function may be used instead of the odometer to acquire the pose data.
In summary, in the sensor fusion positioning method of the above embodiment of the present invention, the map data of the visual sensor and the map data of the laser sensor are time-synchronized and their motion trajectories recorded, and the two maps are aligned into a composite map according to the transformation relation, so that the map data of the two sensors are effectively fused and the overall positioning accuracy is improved. Meanwhile, when the robot starts from an arbitrary position, the information-rich visual sensor quickly determines the robot's own position, after which positioning is optimized by the laser sensor, further improving positioning accuracy, forming effective positioning data and finding an accurate position.
Example two
Referring to fig. 2, a sensor fusion positioning method in a second embodiment of the present invention is shown, in which the sensor is disposed on a robot, the sensor at least includes a visual sensor and a laser sensor, and the sensor fusion positioning method specifically includes steps S201 to S210:
s201, respectively acquiring map data of the vision sensor and the laser sensor;
It should be noted that the laser sensor builds an occupancy grid map and the visual sensor builds a feature point map, and the two maps differ in form and in coordinate system. Therefore, in the implementation, the map data built by the two sensors must first be acquired.
S202, extracting mapping motion tracks of the two map data, and performing linear interpolation processing on the mapping motion tracks;
in specific implementation, because the map forms established by the two sensors are different, and the calculation frequencies of the positioning algorithms of the two sensors are also different, the mapping motion tracks of the map data established by the two sensors are extracted, and linear interpolation processing is carried out on the mapping motion tracks.
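As an illustration of this interpolation step, the sketch below resamples one mapping trajectory at the timestamps of the other by linearly interpolating the pose, with the heading interpolated along the shortest arc. The trajectory layout (rows of timestamp, x, y, theta) and the function names are assumptions for illustration.

```python
import numpy as np

def interpolate_pose(trajectory, t):
    """Linearly interpolate a 2D pose (x, y, theta) at time t from a
    trajectory given as rows (timestamp, x, y, theta) sorted by timestamp."""
    times = trajectory[:, 0]
    i = int(np.clip(np.searchsorted(times, t), 1, len(times) - 1))
    t0, t1 = times[i - 1], times[i]
    alpha = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
    p0, p1 = trajectory[i - 1, 1:], trajectory[i, 1:]
    x, y = (1 - alpha) * p0[:2] + alpha * p1[:2]
    dtheta = np.arctan2(np.sin(p1[2] - p0[2]), np.cos(p1[2] - p0[2]))
    return np.array([x, y, p0[2] + alpha * dtheta])

def associate_trajectories(visual_traj, laser_traj):
    """Resample the visual mapping trajectory at the laser keyframe
    timestamps so the two trajectories are matched pose by pose."""
    return np.array([interpolate_pose(visual_traj, t) for t in laser_traj[:, 0]])
```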
S203, associating the poses of the mapping motion trajectories according to their timestamps, and constructing and solving a least-squares problem for the transformation matrix to obtain a sparse feature point map and a two-dimensional occupancy grid map;
In specific implementation, the poses of the mapping motion trajectories are aligned according to their timestamps so that the two maps are synchronized in time. Because the algorithms of the laser sensor and the visual sensor run at different frequencies, the correspondence is determined by linear interpolation of the position trajectories; a least-squares problem is then constructed and the transformation matrix between the two trajectories is solved, yielding the sparse feature point map and the two-dimensional occupancy grid map.
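A minimal sketch of solving this transformation: once the two trajectories are matched pose by pose, the rigid transform that minimizes the squared error between the matched 2D positions has a closed-form solution via an SVD of the cross-covariance matrix (the Kabsch/Umeyama approach). Treating the transformation matrix as a planar rigid motion is an assumption; the patent only specifies a least-squares transformation matrix.

```python
import numpy as np

def fit_rigid_transform_2d(src, dst):
    """Closed-form least-squares fit of a 2D rotation R and translation t
    such that dst ≈ src @ R.T + t, given matched Nx2 point sets (here the
    time-aligned mapping trajectories of the two sensors)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# visual_xy: Nx2 visual trajectory positions resampled at the laser timestamps
# laser_xy:  Nx2 laser mapping trajectory positions
# R, t = fit_rigid_transform_2d(visual_xy, laser_xy)
```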
S204, matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
In specific implementation, the sparse feature point map is matched onto the two-dimensional occupancy grid map according to the transformation relation so as to obtain the composite map.
During mapping, the two sensors run simultaneously and are time-synchronized, so the corresponding pose relationship is determined using the timestamps; because the two algorithms run at different computation frequencies, the trajectories must first be linearly interpolated before the matching relationship is determined from the timestamps, thereby obtaining the composite map.
It should be noted that, in some other optional embodiments, in order to store and load a map, after the map is created, the newly created map is stored as a file in a binary form, and the binary file can be restored to a composite map when the map is subsequently used, so that the map utilization rate is improved.
S205, acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of an odometer;
In specific implementation, because the starting position of the robot is uncertain, the initial pose is determined and the odometer pose data are acquired in the initialization stage using the positioning algorithm of the information-rich visual sensor, which makes positioning faster and more robust. When the robot is started, the visual features may be difficult to extract from the current starting viewing angle; the robot therefore moves randomly within a small range after start-up to find a texture-rich area, which improves the success rate of positioning.
S206, taking the initial pose as an initial value of the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value;
in specific implementation, the positioning accuracy of the laser positioning algorithm based on the laser sensor is higher than that of the visual positioning algorithm of the visual sensor, so that the result is used as an initial value to be sent to the laser sensor after the initial pose is determined by the positioning algorithm of the visual sensor, and the positioning result is calculated according to the laser positioning algorithm of the laser sensor.
And after a positioning result is calculated, judging whether the deviation of the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value.
It can be understood that the pose estimated by the odometer is not accurate enough but does not drift far within a short time, whereas the laser positioning algorithm of the laser sensor is accurate but can drift seriously if its pose data do not match the global map; therefore, the comparison between the two, together with the covariance matrix output by the laser positioning algorithm, is used as the judgment condition.
S207, if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than the preset threshold value, generating a plurality of recovery points which are beneficial to positioning from the visual feature points in the composite map in a clustering mode;
In specific implementation, if the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is greater than the preset threshold, the visual feature points are clustered in the composite map to generate a plurality of recovery points beneficial to positioning, at which the visual sensor can relocalize quickly.
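A minimal sketch of this recovery-point generation, clustering the 2D positions of the visual feature points with k-means and keeping only the centroids of sufficiently dense (texture-rich) clusters; the use of scikit-learn, the number of clusters and the density filter are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_recovery_points(feature_xy, n_points=5, min_features=50):
    """Cluster the 2D positions of the visual feature points in the composite
    map and keep the centroids of feature-rich clusters as recovery points,
    i.e. spots where visual relocalization is likely to succeed."""
    km = KMeans(n_clusters=n_points, n_init=10).fit(feature_xy)
    recovery_points = []
    for k in range(n_points):
        members = feature_xy[km.labels_ == k]
        if len(members) >= min_features:          # discard feature-poor clusters
            recovery_points.append(km.cluster_centers_[k])
    return np.array(recovery_points)
```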
S208, controlling the robot to move to the recovery point by taking the pose data closest to the current time in the saved pose data of the odometer as a starting point, and returning to execute the step S205;
In specific implementation, the robot is controlled to move to a recovery point, taking as the starting point the saved odometer pose closest to the current time. The initial pose of the robot is then determined again by the visual sensor and the odometer pose data are acquired; the initial pose is used as the initial value for the laser sensor, a positioning result is calculated according to the laser positioning algorithm of the laser sensor, and it is judged whether the deviation between the positioning result and the odometer pose data, or the covariance matrix of the positioning result, is greater than the preset threshold. This is repeated until the deviation or the covariance matrix is less than or equal to the preset threshold, at which point a closed loop is found.
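Putting steps S205 to S208 together, the following outline sketches the recovery loop. All robot-facing operations (moving, visual relocalization, laser localization, the reliability check) are injected as placeholder callables, since the patent does not define these interfaces; the structure, not the names, is the point.

```python
import numpy as np

def recover_localization(current_odom_pose, recovery_points, move_to,
                         visual_relocalize, laser_localize, is_reliable,
                         max_attempts=5):
    """Recovery loop sketch: drive toward the nearest recovery point,
    re-initialize with the visual localizer (S205), refine with the laser
    localizer (S206), and repeat until the reliability check passes."""
    pose = np.asarray(current_odom_pose, dtype=float)
    for _ in range(max_attempts):
        dists = np.linalg.norm(recovery_points - pose[:2], axis=1)
        move_to(recovery_points[int(np.argmin(dists))])   # S208: go to recovery point
        init_pose = visual_relocalize()                   # S205: visual initial pose
        result, covariance = laser_localize(init_pose)    # S206: laser refinement
        if is_reliable(result, covariance):               # closed loop found
            return result
        pose = np.asarray(result, dtype=float)
    return None
```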
S209, if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold, planning a path for the robot according to the positioning result, so that the robot moves to a specified position according to the planned path;
in specific implementation, if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold, the positioning result is used for planning a path for the robot, and the laser sensor is used for avoiding obstacles, so that the robot moves to an appointed position according to the planned path.
S210, continuously supplementing map features in the existing composite map through a visual positioning algorithm of the visual sensor.
It should be noted that, during the robot's navigation movement, the visual positioning algorithm of the visual sensor continues to supplement the existing map with features of the surroundings, making the map richer; at the same time, its rich information is used to detect closed loops, so the system is ready to switch between the visual sensor and the laser sensor at any time.
When the current positioning result is detected to be unreliable, that is, when the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is greater than the preset threshold, the positioning is considered inaccurate. The visual sensor is then used for relocalization, and its result is transmitted to the laser sensor as the initial pose so that the laser positioning algorithm of the laser sensor recalculates the positioning result from this initial pose, until the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is less than or equal to the preset threshold and a closed loop is formed. Once the closed loop is formed, a path is planned according to the positioning result and the robot is controlled to move to the target position along the planned path.
In the present application, it is further determined whether the target point has been reached; if not, step S205 is repeated until a closed loop is formed. When the current positioning is lost, the robot moves purposefully toward a recovery point, and step S205 is repeated until a closed loop is formed.
It should be noted that in some alternative embodiments, a GPS sensor or other hardware devices with the same function may be used instead of the odometer to acquire the pose data.
In summary, in the sensor fusion positioning method of the above embodiment of the present invention, the map data of the visual sensor and the map data of the laser sensor are time-synchronized and their motion trajectories recorded, and the two maps are aligned into a composite map according to the transformation relation, so that the map data of the two sensors are effectively fused and the overall positioning accuracy is improved. Meanwhile, when the robot starts from an arbitrary position, the information-rich visual sensor quickly determines the robot's own position, after which positioning is optimized by the laser sensor, further improving positioning accuracy, forming effective positioning data and finding an accurate position. When positioning becomes uncertain during movement, the system switches to the visual sensor for relocalization, and when positioning is lost the robot moves purposefully toward a recovery point that favors relocalization, thereby improving the ability to recover positioning after it is lost in a dynamic environment.
Example three
In another aspect, the present invention further provides a sensor fusion positioning system, please refer to fig. 3, which shows a sensor fusion positioning system according to a third embodiment of the present invention, where the sensor is disposed on a robot, the sensor at least includes a vision sensor and a laser sensor, and the sensor fusion positioning system includes:
the first acquisition module 11 is configured to acquire map data of the visual sensor and the map data of the laser sensor, and perform time synchronization processing and motion trajectory recording on the two map data to obtain a sparse feature point map and a two-dimensional code occupation grid map;
further, the first obtaining module 11 includes:
the first processing unit is used for extracting mapping motion tracks of the two map data and carrying out linear interpolation processing on the mapping motion tracks;
and the second processing unit is used for corresponding the pose of the mapping motion track according to the timestamp, constructing a least square problem solving transformation matrix and obtaining a sparse feature point map and a two-dimensional code occupying grid map.
The conversion module 12 is configured to match the sparse feature point map to the two-dimensional code occupation grid map according to a transformation relation, so as to obtain a composite map;
a second obtaining module 13, configured to obtain an initial pose of the robot determined by the vision sensor according to the composite map and obtain pose data of an odometer;
a judging module 14, configured to use the initial pose as an initial value of the laser sensor, calculate a positioning result according to a laser positioning algorithm of the laser sensor, and judge whether a deviation between the positioning result and pose data of the odometer or a covariance matrix of the positioning result is greater than a preset threshold;
and the first processing module 15 is configured to plan a path for the robot according to the positioning result if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold, so that the robot moves to an assigned position according to the planned path.
A second processing module 16, configured to control the robot to move to a recovery point if the positioning result deviates from the pose data of the odometer or a covariance matrix of the positioning result is greater than the preset threshold; returning to execute the steps of obtaining the initial pose of the robot determined by the vision sensor according to the composite map and obtaining pose data of the odometer;
further, the second processing module 16 includes:
the generating unit is used for generating a plurality of recovery points which are beneficial to positioning in the composite map by clustering the visual feature points;
and the control unit is used for controlling the robot to move to the recovery point by taking the pose data which is closest to the current time in the saved pose data of the odometer as a starting point.
And a supplement module 17, configured to continuously supplement map features in the existing composite map through a visual positioning algorithm of the visual sensor.
The functions or operation steps of the above modules when executed are substantially the same as those of the above method embodiments, and are not described herein again.
The sensor fusion positioning system provided by the third embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiments, and for brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the system embodiments that are not mentioned.
Example four
Referring to fig. 4, a vehicle according to a fourth embodiment of the present invention includes a memory 10, a processor 20, and a computer program 30 stored in the memory 10 and executable on the processor 20, wherein the processor 20 executes the computer program 30 to implement the sensor fusion positioning method.
The memory 10 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 10 may in some embodiments be an internal storage unit of the vehicle, such as a hard disk of the vehicle. The memory 10 may also, in other embodiments, be an external storage device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash memory card (Flash Card). Further, the memory 10 may include both an internal storage unit and an external storage device of the vehicle. The memory 10 may be used not only to store application software installed in the vehicle and various types of data, but also to temporarily store data that has been output or will be output.
In some embodiments, the processor 20 may be an Electronic Control Unit (ECU), a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor or another data processing chip, and is configured to run the program code stored in the memory 10 or to process data, for example to execute the computer program 30.
It should be noted that the configuration shown in FIG. 4 does not constitute a limitation of the vehicle, and in other embodiments the vehicle may include fewer or more components than those shown, combine some components, or arrange the components differently.
The embodiment of the present invention further provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the sensor fusion positioning method as described above.
Those of skill in the art will understand that the logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be viewed as implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description of the present specification, various technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the various technical features in the embodiments described above are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A sensor fusion positioning method, wherein a sensor is arranged on a robot and the sensor at least comprises a visual sensor and a laser sensor, characterized in that the sensor fusion positioning method comprises the following steps:
respectively acquiring map data of the visual sensor and the laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map;
matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of an odometer;
taking the initial pose as the initial value for the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is greater than a preset threshold;
and if the deviation between the positioning result and the pose data of the odometer, or the covariance matrix of the positioning result, is less than or equal to the preset threshold, planning a path for the robot according to the positioning result so that the robot moves to a designated position along the planned path.
2. The sensor fusion positioning method according to claim 1, wherein after the step of determining whether the deviation of the positioning result from the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold, the method further comprises:
if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than the preset threshold value, controlling the robot to move to a recovery point;
and returning to execute the steps of obtaining the initial pose of the robot determined by the vision sensor according to the composite map and obtaining the pose data of the odometer.
3. The sensor fusion positioning method according to claim 1, wherein the step of performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map comprises:
extracting the mapping motion trajectories of the two sets of map data, and performing linear interpolation on the mapping motion trajectories;
and associating the poses of the mapping motion trajectories according to their timestamps, and constructing and solving a least-squares problem for the transformation matrix to obtain the sparse feature point map and the two-dimensional occupancy grid map.
4. The sensor fusion localization method of claim 2, wherein the step of controlling the robot to move to a recovery point comprises:
generating a plurality of recovery points which are beneficial to positioning in the composite map by clustering the visual feature points;
and controlling the robot to move to the recovery point by taking the pose data closest to the current time in the saved pose data of the odometer as a starting point.
5. The sensor fusion localization method of claim 1, further comprising:
and continuously supplementing map features in the existing composite map through a visual positioning algorithm of the visual sensor.
6. A sensor fusion positioning system, wherein the sensor is arranged on a robot and the sensor at least comprises a visual sensor and a laser sensor, characterized in that the sensor fusion positioning system comprises:
the first acquisition module, used for respectively acquiring map data of the visual sensor and the laser sensor, and performing time synchronization and motion-trajectory recording on the two sets of map data to obtain a sparse feature point map and a two-dimensional occupancy grid map;
the conversion module, used for matching the sparse feature point map onto the two-dimensional occupancy grid map according to a transformation relation to obtain a composite map;
the second acquisition module is used for acquiring the initial pose of the robot determined by the vision sensor according to the composite map and acquiring pose data of the odometer;
the judging module is used for taking the initial pose as an initial value of the laser sensor, calculating a positioning result according to a laser positioning algorithm of the laser sensor, and judging whether the deviation of the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than a preset threshold value;
and the first processing module is used for planning a path for the robot according to the positioning result if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is less than or equal to the preset threshold value, so that the robot moves to a specified position according to the planned path.
7. The sensor fusion localization system of claim 6, further comprising:
the second processing module is used for controlling the robot to move to a recovery point if the deviation between the positioning result and the pose data of the odometer or the covariance matrix of the positioning result is greater than the preset threshold; and returning to execute the steps of obtaining the initial pose of the robot determined by the vision sensor according to the composite map and obtaining the pose data of the odometer.
8. The sensor fusion positioning system of claim 6, wherein the first acquisition module comprises:
the first processing unit, used for extracting the mapping motion trajectories of the two sets of map data and performing linear interpolation on the mapping motion trajectories;
and the second processing unit, used for associating the poses of the mapping motion trajectories according to their timestamps, constructing and solving a least-squares problem for the transformation matrix, and obtaining the sparse feature point map and the two-dimensional occupancy grid map.
9. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the sensor fusion localization method according to any one of claims 1 to 5.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the sensor fusion localization method according to any one of claims 1-5 when executing the computer program.
CN202111265056.6A 2021-10-28 2021-10-28 Sensor fusion positioning method and system, readable storage medium and computer equipment Pending CN113985465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111265056.6A CN113985465A (en) 2021-10-28 2021-10-28 Sensor fusion positioning method and system, readable storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111265056.6A CN113985465A (en) 2021-10-28 2021-10-28 Sensor fusion positioning method and system, readable storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN113985465A true CN113985465A (en) 2022-01-28

Family

ID=79743676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111265056.6A Pending CN113985465A (en) 2021-10-28 2021-10-28 Sensor fusion positioning method and system, readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN113985465A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116026335A (en) * 2022-12-26 2023-04-28 广东工业大学 Mobile robot positioning method and system suitable for unknown indoor environment
CN116026335B (en) * 2022-12-26 2023-10-03 广东工业大学 Mobile robot positioning method and system suitable for unknown indoor environment
CN117490705A (en) * 2023-12-27 2024-02-02 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium
CN117490705B (en) * 2023-12-27 2024-03-22 合众新能源汽车股份有限公司 Vehicle navigation positioning method, system, device and computer readable medium

Similar Documents

Publication Publication Date Title
CN103064416B (en) Crusing robot indoor and outdoor autonomous navigation system
Sack et al. A comparison of methods for line extraction from range data
CN104520732A (en) Method of locating sensor and related apparatus
CN113985465A (en) Sensor fusion positioning method and system, readable storage medium and computer equipment
CN108303096B (en) Vision-assisted laser positioning system and method
CN111133336A (en) Method and system for performing positioning
JP4985166B2 (en) Self-position estimation device
CN110333495A (en) The method, apparatus, system, storage medium of figure are built in long corridor using laser SLAM
Kim et al. SLAM in indoor environments using omni-directional vertical and horizontal line features
CN103781685A (en) Autonomous driving control system for vehicle
Ahn et al. A practical approach for EKF-SLAM in an indoor environment: fusing ultrasonic sensors and stereo camera
CN105324729A (en) Method for modelling the surroundings of a vehicle
Lee et al. Vision-based kidnap recovery with SLAM for home cleaning robots
CN110659548A (en) Vehicle and target detection method and device thereof
US20220187845A1 (en) Method for estimating positioning of moving object by using big cell grid map, recording medium in which program for implementing same is stored, and computer program stored in medium in order to implement same
Donoso-Aguirre et al. Mobile robot localization using the Hausdorff distance
KR102601141B1 (en) mobile robots and Localization method using fusion image sensor and multiple magnetic sensors
CN109635692B (en) Scene re-identification method based on ultrasonic sensor
Kume et al. Vehicle localization along a previously driven route using an image database
CN116563376A (en) LIDAR-IMU tight coupling semantic SLAM method based on deep learning and related device
CN113112478B (en) Pose recognition method and terminal equipment
Zhu Binocular vision-slam using improved sift algorithm
CN113611112B (en) Target association method, device, equipment and storage medium
Lu et al. Vision-based real-time road detection in urban traffic
Tas et al. High-definition map update framework for intelligent autonomous transfer vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination