CN110570465B - Real-time positioning and map construction method and device and computer readable storage medium - Google Patents


Info

Publication number
CN110570465B
Authority
CN
China
Prior art keywords
current
visual image
robot
data
laser
Prior art date
Legal status
Active
Application number
CN201810569341.9A
Other languages
Chinese (zh)
Other versions
CN110570465A (en
Inventor
魏青铜
杨少鹏
孙元栋
Current Assignee
Hangzhou Hikrobot Co Ltd
Original Assignee
Hangzhou Hikrobot Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikrobot Technology Co Ltd filed Critical Hangzhou Hikrobot Technology Co Ltd
Priority to CN201810569341.9A priority Critical patent/CN110570465B/en
Priority to PCT/CN2019/088413 priority patent/WO2019233299A1/en
Publication of CN110570465A publication Critical patent/CN110570465A/en
Application granted granted Critical
Publication of CN110570465B publication Critical patent/CN110570465B/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a real-time positioning and map construction method and device and a computer readable storage medium, and belongs to the technical field of robots. The method comprises the following steps: acquiring a current visual image and current laser data acquired at a current position; and, when it is detected that an object in the current visual image has moved relative to an object in a historical visual image, writing the current visual image and the current laser data into map data as optimization node information, the historical visual image being a visual image acquired earlier than the current visual image. Because the current visual image and the current laser data are written into the map data as optimization node information only when such motion is detected, the inaccurate positioning and map construction that result from an improper analysis of the robot's motion are avoided.

Description

Real-time positioning and map construction method and device and computer readable storage medium
Technical Field
The invention relates to the technical field of robots, in particular to a real-time positioning and map construction method, a real-time positioning and map construction device and a computer readable storage medium.
Background
An autonomous robot is a robot that carries the necessary sensors and controllers and can complete certain tasks on its own, without external human input or control during operation. A common example is the sweeping robot. The autonomy of a robot is mainly embodied in autonomous navigation, and in an unknown environment autonomous navigation is realized by drawing a map of the environment in which the robot is located.
Currently, maps can be drawn through SLAM (Simultaneous Localization and Mapping) algorithms. When a map is drawn through a SLAM algorithm, laser data and visual data are acquired by a laser sensor and a visual sensor mounted on the robot; the laser data describe the distance between the robot and obstacles, and the visual data describe the scene in which the robot is located. A wheel-type odometer installed on the wheels of the robot then determines whether the robot has moved, and when the wheel-type odometer determines that the robot has deviated from its current position by a specified threshold, closed-loop detection is performed on the robot using the visual data and the laser data, and the robot is positioned and the map is constructed according to the closed-loop detection result.
However, when the wheel-type odometer indicates that the robot has moved, the position of the robot may in fact be unchanged. For example, when the robot hits a wall, its wheels keep rotating and the wheel-type odometer continues to record data normally; if closed-loop detection is then performed from the visual data and the laser data, and the robot is positioned and the map constructed from the closed-loop detection result, the positioning of the robot is inaccurate, the constructed map does not match the actual scene, and the accuracy of the constructed map is reduced.
Disclosure of Invention
The embodiment of the invention provides a real-time positioning and map building method, a real-time positioning and map building device and a computer readable storage medium, which are used for solving the problems of inaccurate robot positioning and low map building accuracy in the related technology. The technical scheme is as follows:
in a first aspect, a real-time positioning and mapping method is provided, where the method includes:
acquiring a current visual image and current laser data acquired at a current position;
when detecting that an object in a current visual image moves relative to an object in a historical visual image, writing the current visual image and current laser data serving as optimization node information into map data, wherein the historical visual image is a visual image with the acquisition time earlier than that of the current visual image.
Optionally, before writing the current visual image and the current laser data as optimization node information into the map data, the method further includes:
respectively extracting the characteristics of the current visual image and the historical visual image to obtain a first visual characteristic point and a second visual characteristic point;
determining scene coincidence degree between the current visual image and the historical visual image according to the first visual feature point and the second visual feature point;
when the scene coincidence degree is smaller than a first preset threshold value, determining that the object in the current visual image moves relative to the object in the historical visual image.
Optionally, before writing the current visual image and the current laser data as optimization node information into the map data, the method further includes:
when a target object does not exist in the current visual image, determining that the object in the current visual image moves relative to the object in the historical visual image, wherein the target object is the object in the historical visual image;
determining a position of the target object in the historical visual image and determining a position of the target object in the current visual image when the target object is present in the current visual image;
determining that the object in the current visual image has moved relative to the object in the historical visual image when the position of the target object in the current visual image is not the same as the position in the historical visual image.
Optionally, before writing the current visual image and the current laser data as optimization node information into the map data, the method further includes:
acquiring odometer data of the robot;
determining the moving state of the robot according to the odometry data of the robot;
correspondingly, the writing the current visual image and the current laser data as optimization node information into the map data includes:
and when it is determined from the moving state that the robot has moved, writing the current visual image and the current laser data into the map data as optimization node information.
Optionally, the odometry data of the robot includes displacement change in the X direction, displacement change in the Y direction, and angle change of the robot;
the determining the movement state of the robot according to the odometry data of the robot comprises:
and when the displacement change of the robot in either the X direction or the Y direction exceeds a preset displacement threshold, or the angle change of the robot exceeds a preset angle threshold, determining that the robot has moved.
Optionally, the writing the current visual image and the current laser data as optimization node information into map data includes:
performing visual closed-loop detection based on the current visual image and a visual database to determine closed-loop key data, wherein the visual database is used for storing the visual data acquired by the robot, and the closed-loop key data is data describing that the current position of the robot is a position passed before the current time;
when the closed loop key data are determined, determining a laser data search range based on the laser data acquired when the closed loop key data are acquired and the current laser data;
performing laser closed loop detection according to the laser data search range based on the current laser data and a laser database to determine historical laser data matched with the current laser data, wherein the laser database is used for storing the laser data acquired by the robot;
and writing the current visual image and the current laser data into map data as optimization node information based on the historical laser data.
Optionally, the writing the current visual image and the current laser data as optimized node information into map data based on the historical laser data includes:
determining a position error between the position of the robot when the historical laser data was collected and the current position;
and when the position error is not 0, writing the current visual image and the current laser data into the map data as optimization node information.
Optionally, the determining a laser data search range based on the laser data acquired when the closed-loop key data is acquired and the current laser data includes:
acquiring pose data of the robot when the closed loop key data are acquired and pose data of the robot when the current laser data are acquired;
determining a pose distance between the position of the robot when the closed-loop key data was acquired and the current position of the robot when the current laser data was acquired, based on the pose data of the robot when the closed-loop key data was acquired and the pose data of the robot when the current laser data was acquired;
determining the laser data search range based on the pose distance.
Optionally, the determining the laser data search range based on the pose distance includes:
determining the maximum value of the laser data searching range based on the pose distance through the following formula;
dist_adaptive = log(1 + a * dist_diff) + b
wherein dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are both constants;
determining a range between 0 and the maximum value as the laser data search range.
In a second aspect, a real-time positioning and mapping apparatus is provided, the apparatus comprising:
the first acquisition module is used for acquiring a current visual image and current laser data acquired at a current position;
and the writing module is used for writing the current visual image and the current laser data into the map data as optimization node information when detecting that the object in the current visual image moves relative to the object in the historical visual image, wherein the historical visual image is a visual image with the acquisition time earlier than that of the current visual image.
Optionally, the apparatus further comprises:
the characteristic extraction module is used for respectively extracting the characteristics of the current visual image and the historical visual image to obtain a first visual characteristic point and a second visual characteristic point;
a first determining module, configured to determine, according to the first visual feature point and the second visual feature point, a scene overlap ratio between the current visual image and the historical visual image;
and the second determination module is used for determining that the object in the current visual image moves relative to the object in the historical visual image when the scene coincidence degree is smaller than a first preset threshold value.
The device further comprises:
a third determining module, configured to determine that an object in the current visual image moves relative to an object in the historical visual image when a target object does not exist in the current visual image, where the target object is an object in the historical visual image;
a fourth determining module, configured to determine a position of the target object in the historical visual image and determine a position of the target object in the current visual image when the target object exists in the current visual image;
a fifth determining module, configured to determine that the object in the current visual image has moved relative to the object in the historical visual image when the position of the target object in the current visual image is different from the position in the historical visual image.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring odometer data of the robot;
a sixth determining module, configured to determine a moving state of the robot according to odometry data of the robot;
correspondingly, the writing module is used for:
and when it is determined from the moving state that the robot has moved, writing the current visual image and the current laser data into the map data as optimization node information.
Optionally, the odometry data of the robot includes displacement change in the X direction, displacement change in the Y direction, and angle change of the robot;
the sixth determining module is configured to:
and when the displacement change of the robot in either the X direction or the Y direction exceeds a preset displacement threshold, or the angle change of the robot exceeds a preset angle threshold, determining that the robot has moved.
Optionally, the writing module includes:
the first detection submodule is used for carrying out visual closed-loop detection on the basis of the current visual image and a visual database so as to determine closed-loop key data, the visual database is used for storing the visual data acquired by the robot, and the closed-loop key data is data describing that the current position of the robot is a position which has passed before the current time;
the first determining submodule is used for determining a laser data searching range based on the laser data acquired during acquisition of the closed-loop key data and the current laser data when the closed-loop key data are determined;
the second detection submodule is used for carrying out laser closed-loop detection according to the laser data search range based on the current laser data and a laser database so as to determine historical laser data matched with the current laser data, and the laser database is used for storing the laser data acquired by the robot;
and the second determining submodule is used for writing the current visual image and the current laser data into map data as optimized node information based on the historical laser data.
Optionally, the second determining submodule is configured to:
determining a position error between the position of the robot when the historical laser data was collected and the current position;
and when the position error is not 0, writing the current visual image and the current laser data into the map data as optimization node information.
Optionally, the first determining sub-module is configured to:
acquiring pose data of the robot when the closed loop key data are acquired and pose data of the robot when the current laser data are acquired;
determining a pose distance between the position of the robot when the closed-loop key data was acquired and the current position of the robot when the current laser data was acquired, based on the pose data of the robot when the closed-loop key data was acquired and the pose data of the robot when the current laser data was acquired;
determining the laser data search range based on the pose distance.
Optionally, the first determining sub-module is further configured to:
determining the maximum value of the laser data searching range through the following formula based on the pose distance;
dist_adaptive = log(1 + a * dist_diff) + b
wherein dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are both constants;
determining a range between 0 and the maximum value as the laser data search range.
In a third aspect, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the method of any of the first aspect above.
In a fourth aspect, a terminal is provided, which includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any one of the methods provided by the first aspect above.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of any of the methods provided in the first aspect above.
The technical solutions provided by the embodiments of the present invention have at least the following beneficial effects:
in the embodiment of the invention, the current visual image and the historical visual image can be acquired, and when the object in the current visual image is detected to have moved relative to the object in the historical visual image, the current visual image and the current laser data can be written into the map data as optimization node information. Whether the robot has moved can be determined from the change between the object in the current visual image and the object in the historical visual image, without relying on the wheel-type odometer to determine whether the robot has moved; inaccurate positioning and map construction caused by an improper analysis of the robot's motion are thereby avoided, and the accuracy of robot positioning and map construction is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a real-time positioning and mapping method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another real-time positioning and mapping method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of determining whether an object in a current visual image moves relative to an object in a historical visual image according to an embodiment of the present invention;
FIG. 4 is a flow chart of determining mapping data provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of determining a search range of laser data according to an embodiment of the present invention;
fig. 6 is a schematic diagram for describing a relationship between a pose distance and a laser data search range according to an embodiment of the present invention;
fig. 7 is a schematic diagram of determining whether the robot has moved by means of the wheel-type odometer according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a first real-time positioning and mapping apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a second real-time positioning and mapping apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a third real-time positioning and mapping apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic structural diagram of a fourth real-time positioning and mapping apparatus according to an embodiment of the present invention;
FIG. 12 is a block diagram of a write module according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Before explaining the embodiments of the present invention in detail, an application scenario related to the embodiments of the present invention is explained.
With the development of autonomous robots, the technology by which robots position themselves and construct maps is receiving more and more attention. At present, when a robot autonomously positions itself and constructs a map, after a wheel-type odometer installed on the wheels of the robot determines that the robot has moved, closed-loop detection is performed on the robot based on laser data acquired by a laser sensor installed on the robot and visual data acquired by a visual sensor installed on the robot, and the robot is positioned and the map is constructed according to the closed-loop detection result. However, when the wheel-type odometer indicates that the robot has moved, the position of the robot may in fact be unchanged. For example, when the robot hits a wall, its wheels keep rotating and the wheel-type odometer continues to record data normally; if closed-loop detection is then performed from the visual data and the laser data, and the robot is positioned and the map constructed from the closed-loop detection result, the positioning of the robot is inaccurate, the constructed map does not match the actual scene, and the accuracy of the constructed map is reduced.
Based on the scene, the embodiment of the invention provides a real-time positioning and map building method capable of improving the accuracy of robot positioning and map building.
After the application scenarios of the embodiment of the present invention are introduced, a detailed description will be given to a real-time positioning and mapping method provided by the embodiment of the present invention with reference to the accompanying drawings.
Fig. 1 is a flowchart of a real-time positioning and mapping method according to an embodiment of the present invention, and referring to fig. 1, the method includes the following steps.
Step 101: and the terminal or the robot acquires the current visual image and the current laser data acquired at the current position.
The robot is usually provided with a vision sensor and a laser sensor, so that the robot can acquire a vision image of the current position through the installed vision sensor to obtain a current vision image, and acquire laser data of the current position through the laser sensor to obtain current laser data.
It should be noted that the terminal is a terminal capable of communicating with the robot, and when the robot acquires a current visual image through the installed visual sensor and acquires current laser data through the installed laser sensor, the robot may send the acquired current visual image and current laser data to the terminal, so that the terminal may acquire the current visual image and current laser data.
Step 102: when detecting that an object in a current visual image moves relative to an object in a historical visual image, the terminal or the robot writes the current visual image and current laser data into map data as optimization node information, wherein the historical visual image is a visual image with the acquisition time earlier than that of the current visual image.
Robot positioning and map construction are usually performed only when the robot moves, and when the robot moves, the visual images acquired by the visual sensor installed on the robot before and after the movement differ. Therefore, when the terminal or the robot detects that an object in the current visual image has moved relative to an object in the historical visual image, the robot has moved, and the current visual image and the current laser data can be written into the map data as optimization node information.
In the embodiment of the invention, the terminal or the robot can acquire the current visual image and the historical visual image, and when it detects that the object in the current visual image has moved relative to the object in the historical visual image, the current visual image and the current laser data can be written into the map data as optimization node information. Whether the robot has moved can be determined from the change between the object in the current visual image and the object in the historical visual image, without relying on the wheel-type odometer to determine whether the robot has moved; inaccurate positioning and map construction caused by an improper analysis of the robot's motion are thereby avoided, and the accuracy of robot positioning and map construction is improved.
Fig. 2 is a flowchart of a real-time positioning and mapping method according to an embodiment of the present invention, and the embodiment of the present invention will be explained in detail with reference to fig. 2 for the embodiment shown in fig. 1, and with reference to fig. 2, the method specifically includes the following steps.
Step 201: and the terminal or the robot acquires the current visual image and the current laser data acquired at the current position.
The operation of acquiring the current visual image and the current laser data acquired at the current position by the terminal or the robot may refer to the operation of step 101, which is not described in detail in this embodiment of the present invention.
Step 202: when detecting that an object in a current visual image moves relative to an object in a historical visual image, the terminal or the robot writes the current visual image and current laser data into map data as optimization node information, wherein the historical visual image is a visual image with the acquisition time earlier than that of the current visual image.
As can be seen from step 102 above, the terminal or the robot may determine whether to write the current visual image and the current laser data into the map data as optimization node information according to whether the object in the current visual image has moved relative to the object in the historical visual image. Therefore, in order to carry out positioning and map construction smoothly, the terminal or the robot needs to detect whether the object in the current visual image has moved relative to the object in the historical visual image before writing the current visual image and the current laser data into the map data as optimization node information.
The following describes an operation of a terminal or a robot detecting whether an object in a current visual image moves relative to an object in a historical visual image, and an operation of writing the current visual image and current laser data as optimized node information into map data when the object in the current visual image moves relative to the object in the historical visual image, respectively.
Detecting whether an object in a current visual image is moving relative to an object in a historical visual image
The operation of the terminal or the robot detecting whether the object in the current visual image moves relative to the object in the historical visual image may include the following two ways.
In the first mode, a terminal or a robot respectively extracts the characteristics of a current visual image and a historical visual image to obtain a first visual characteristic point and a second visual characteristic point; determining the scene coincidence degree between the current visual image and the historical visual image according to the first visual feature point and the second visual feature point; when the scene coincidence degree is smaller than a first preset threshold value, determining that the object in the current visual image moves relative to the object in the historical visual image; when the scene coincidence degree is larger than or equal to a first preset threshold value, determining that the object in the current visual image has no motion relative to the object in the historical visual image. The specific process can be seen in FIG. 3.
The feature extraction of the current visual image and the historical visual image may be extraction of feature points such as an outline, a color feature, a texture feature, and a spatial relationship of an object in the current visual image and the historical visual image.
The terminal or the robot can compare the first visual feature points with the second visual feature points one by one, count how many first visual feature points in the current visual image have the same features as second visual feature points, and divide that count by the total number of first visual feature points in the current visual image to obtain the scene coincidence degree between the current visual image and the historical visual image.
For example, the terminal or the robot obtains 10 first visual feature points from the current visual image and 10 second visual feature points from the historical visual image, and compares each of the 10 first visual feature points with the 10 second visual feature points. When 8 of the first visual feature points have the same features as second visual feature points, 8 is divided by 10, the total number of first visual feature points in the current visual image, giving a scene coincidence degree of 80% between the current visual image and the historical visual image.
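By way of illustration only, the following Python sketch shows one way such a coincidence degree could be computed from extracted feature descriptors. The descriptor representation, the nearest-neighbour matching rule and the matching threshold are assumptions of the sketch and are not prescribed by this embodiment.

```python
# Minimal sketch: count current-image feature points that also appear in the
# historical image and divide by their total (the matching rule is an assumption).
import numpy as np

def scene_coincidence(current_desc: np.ndarray, historical_desc: np.ndarray,
                      match_threshold: float = 0.25) -> float:
    """Fraction of current-image feature points matched in the historical image."""
    if len(current_desc) == 0 or len(historical_desc) == 0:
        return 0.0
    matched = 0
    for desc in current_desc:
        # distance from this descriptor to the closest historical descriptor
        if np.linalg.norm(historical_desc - desc, axis=1).min() < match_threshold:
            matched += 1
    return matched / len(current_desc)

# Mirroring the example above: 8 matched points out of 10 gives 80 %.
FIRST_PRESET_THRESHOLD = 0.90          # the text cites 90 % or 95 % as possible values
object_has_moved = 0.8 < FIRST_PRESET_THRESHOLD   # True for the 80 % example
```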
It should be noted that the terminal or the robot may perform feature extraction through a Scale-Invariant Feature Transform (SIFT) algorithm, an Oriented FAST and Rotated BRIEF (ORB) algorithm, a Features from Accelerated Segment Test (FAST) corner detection algorithm, or another feature point extraction and description algorithm.
In addition, the first preset threshold may be set in advance, for example, the first preset threshold may be 90%, 95%, and the like.
In addition, the terminal or the robot may further set a second preset threshold in advance, the second preset threshold being smaller than the first preset threshold. When the scene coincidence degree is smaller than the second preset threshold, it indicates that the movement of the robot is abnormal and that the vision sensor of the robot may have a problem. In this case, the terminal or the robot may give an alarm prompt to remind the user that the position of the visual sensor has changed and needs to be adjusted.
It should be noted that the second preset threshold may be set in advance, for example, the second preset threshold may be 10%, 5%, and so on.
Furthermore, in the embodiment of the present invention, the historical visual image is a visual image acquired earlier than the current visual image, so it may be any one of the visual images acquired before the current time. If the historical visual image were simply the image acquired most recently before the current acquisition time, then when the robot moves very slowly and visual images are acquired very quickly, the scene coincidence degree between the current visual image and the historical visual image would very likely be greater than the first preset threshold, and the subsequent operation of writing the current visual image and the current laser data into the map data as optimization node information could not be performed. Therefore, to allow that operation to proceed smoothly, the terminal or the robot may take as the historical visual image the visual image whose acquisition time precedes that of the current visual image by a preset time length, or by more than the preset time length. Alternatively, the visual image contained in the node information most recently written into the map data is determined as the historical visual image.
It should be noted that the preset time period may be set in advance, for example, the preset time period may be 5 seconds, 6 seconds, 7 seconds, and the like.
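A small sketch of this selection rule is given below; keeping the acquired images in a time-ordered list is an assumption of the sketch, not a requirement of the embodiment.

```python
# Select the historical visual image whose acquisition time precedes the current
# image by at least the preset time length.
PRESET_GAP_SECONDS = 5.0   # the text mentions 5, 6 or 7 seconds as examples

def pick_historical_image(images_by_time, current_time):
    """images_by_time: list of (timestamp, image) pairs, oldest first."""
    old_enough = [(t, img) for t, img in images_by_time
                  if current_time - t >= PRESET_GAP_SECONDS]
    # keep the most recent image that is still old enough; None if there is none
    return old_enough[-1][1] if old_enough else None
```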
A second mode, when there is no target object in the current visual image, determining that the object in the current visual image has moved relative to the object in the historical visual image, and the target object is the object in the historical visual image; when the target object exists in the current visual image, determining the position of the target object in the historical visual image and determining the position of the target object in the current visual image; when the position of the target object in the current visual image is not the same as the position in the historical visual image, it is determined that the object in the current visual image has moved relative to the object in the historical visual image.
When the robot is not moving, the visual images captured during that period should have the same content; that is, the position of the target object in the current visual image is the same as its position in the historical visual image. Thus, whether the object in the current visual image has moved relative to the object in the historical visual image can be determined by checking whether the target object exists in the current visual image and, when it does, whether its position in the current visual image is the same as its position in the historical visual image.
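The following sketch illustrates the second mode under stated assumptions: how the target object is detected and how its position is represented (here, as pixel coordinates with a small tolerance) are not fixed by this embodiment.

```python
# Hedged sketch of the second detection mode.
from typing import Optional, Tuple

Position = Tuple[float, float]   # (x, y) position of the target object in an image

def object_moved(pos_in_current: Optional[Position], pos_in_historical: Position,
                 tolerance_px: float = 2.0) -> bool:
    """True when the target object is absent from the current image or its
    position differs from its position in the historical image."""
    if pos_in_current is None:            # target object does not exist in the current image
        return True
    dx = pos_in_current[0] - pos_in_historical[0]
    dy = pos_in_current[1] - pos_in_historical[1]
    return (dx * dx + dy * dy) ** 0.5 > tolerance_px
```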
Writing the current visual image and the current laser data into the map data as the optimized node information
In general, when the robot moves, the scene it reaches may not yet have been positioned and mapped. To complete the map and allow the robot to carry out autonomous tasks later, positioning and mapping are therefore generally performed on the scene where the robot is located after it moves. In positioning and mapping, a visual image and laser data are usually written into the map data as node information, so the terminal or the robot needs to write the current visual image and the current laser data into the map data as optimization node information. The operation of writing the current visual image and the current laser data into the map data as optimization node information may include the following steps A to D; see the flowchart shown in fig. 4.
Step A: the terminal or the robot performs visual closed-loop detection based on the current visual image and a visual database to determine closed-loop key data, wherein the visual database is used for storing the visual data acquired by the robot, and the closed-loop key data is data describing that the current position of the robot is a position which has passed before the current time.
The terminal or the robot can perform feature matching on the current visual image and all visual data in the visual database, and when the visual data matched with the current visual image exists in the visual database, the visual data matched with the current visual image is determined as closed-loop key data.
It should be noted that, for the terminal or the robot to perform the feature matching operation on the current visual image and all the visual data in the visual database, reference may be made to related technologies, and this is not described in detail in the embodiment of the present invention.
In addition, because the current position of the robot may be a position the robot has never passed before, the terminal or the robot may be unable to determine the closed-loop key data after performing the visual closed-loop detection. In this case, the terminal or the robot may end the positioning and map construction operation without performing the subsequent laser closed-loop detection.
It should be noted that, when the terminal or the robot cannot determine the closed-loop key data, the current position of the robot is a position it has not passed before the current time; if the subsequent laser closed-loop detection were continued regardless, the positioning of the robot would be inaccurate, and the map constructed from it would also be very inaccurate. Therefore, when the terminal or the robot cannot determine the closed-loop key data, the positioning and map construction operation can be ended.
And B: when the terminal or the robot determines the closed-loop key data, the laser data search range is determined based on the laser data acquired when the closed-loop key data is acquired and the current laser data.
Under normal conditions the visual image and the laser data cannot be separated in closed-loop detection of the robot, so each time the robot acquires a visual image it also acquires laser data and determines its current pose data. Therefore, when the terminal or the robot has determined the closed-loop key data, it can determine the laser data search range based on the laser data acquired when the closed-loop key data was acquired and the current laser data.
It should be noted that positioning and map construction are related to the position of the robot, so the terminal or the robot determines the current pose data of the robot each time a visual image and laser data are acquired. The operation of determining the laser data search range based on the laser data acquired when the closed-loop key data was acquired and the current laser data may therefore be: acquiring the pose data of the robot when the closed-loop key data was acquired and the pose data of the robot when the current laser data was acquired; determining a pose distance between the position of the robot when the closed-loop key data was acquired and the current position of the robot when the current laser data was acquired, based on those two sets of pose data; and determining the laser data search range based on the pose distance. The specific process can be seen in FIG. 5.
The pose data may describe the position and pose of the robot, the pose may be the orientation of the front of the robot, and the like. Since the pose distance can be used to describe the spatial distance of the object, the pose distance can be a Euclidean distance or other type of distance.
In addition, the terminal or the robot may determine the maximum value of the laser data search range by a specified formula as follows based on the pose distance, and determine the range between 0 and the maximum value as the laser data search range.
dist_adaptive = log(1 + a * dist_diff) + b (1)
In the above specified formula (1), dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are both constants.
Further, when the constants a and b are unknown, the terminal or the robot needs to determine the constants a and b before determining the maximum value of the laser data search range by a specified formula based on the pose distance, thereby determining the specified formula.
The terminal or the robot can set in advance a logarithmic formula with unknown constants a and b, a maximum pose distance, a minimum pose distance, the maximum value of the laser data search range corresponding to the maximum pose distance, and the maximum value of the laser data search range corresponding to the minimum pose distance. Substituting the maximum pose distance and its corresponding search-range maximum into the logarithmic formula gives a first equation in the unknown constants a and b, and substituting the minimum pose distance and its corresponding search-range maximum gives a second equation. Solving the two equations yields the values of a and b, and substituting a and b back into the logarithmic formula gives the specified formula.
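The sketch below illustrates the specified formula and one way the constants a and b could be recovered from the two preset (pose distance, search-range maximum) pairs described above; the calibration numbers in it are illustrative assumptions, not values taken from the patent.

```python
# Specified formula and calibration of its constants from two known points.
import math

def max_search_range(dist_diff: float, a: float, b: float) -> float:
    """dist_adaptive = log(1 + a * dist_diff) + b."""
    return math.log(1.0 + a * dist_diff) + b

def solve_a_b(d_min, r_min, d_max, r_max, iterations: int = 200):
    """Solve log(1 + a*d) + b = r at the two calibration points.
    The difference of the two equations is monotone in a, so bisection works."""
    def gap(a):
        return (math.log(1 + a * d_max) - math.log(1 + a * d_min)) - (r_max - r_min)
    lo, hi = 1e-9, 1e9
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
    a = (lo + hi) / 2.0
    b = r_min - math.log(1 + a * d_min)
    return a, b

# Illustrative calibration: 0.1 m pose distance -> 1.0 m maximum, 5.0 m -> 3.0 m maximum.
a, b = solve_a_b(d_min=0.1, r_min=1.0, d_max=5.0, r_max=3.0)
maximum = max_search_range(2.0, a, b)
laser_data_search_range = (0.0, maximum)   # the range between 0 and the maximum value
```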
Step C: and the terminal or the robot performs laser closed-loop detection according to the laser data search range based on the current laser data and a laser database to determine historical laser data matched with the current laser data, wherein the laser database is used for storing the laser data acquired by the robot.
Because the closed-loop key data describe that the current position of the robot is a position it passed before the current time, laser data identical or similar to the current laser data may exist in the laser database. Therefore, when performing laser closed-loop detection, the terminal or the robot can determine, for each item of laser data within the laser data search range, the probability that it matches the current laser data, and determine the laser data with the maximum probability as the historical laser data matched with the current laser data.
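As a hedged sketch of this step, the following code keeps the best-scoring candidate inside the laser data search range. The similarity measure is a stand-in (the embodiment speaks of a probability without fixing how it is computed), and interpreting the search range as a limit on the distance between a stored pose and the current pose is an assumption of the sketch.

```python
# Laser closed-loop detection restricted to the laser data search range.
import math
import numpy as np

def match_probability(scan_a: np.ndarray, scan_b: np.ndarray) -> float:
    """Toy similarity between two equal-length range scans (higher is closer)."""
    return float(1.0 / (1.0 + np.mean(np.abs(scan_a - scan_b))))

def laser_closed_loop_detection(current_scan, current_xy, laser_database, search_max):
    """laser_database: iterable of (scan, (x, y)) entries collected by the robot."""
    best_scan, best_prob = None, 0.0
    cx, cy = current_xy
    for scan, (x, y) in laser_database:
        if math.hypot(x - cx, y - cy) > search_max:
            continue                       # outside the laser data search range
        prob = match_probability(current_scan, scan)
        if prob > best_prob:
            best_scan, best_prob = scan, prob
    return best_scan, best_prob            # best_scan is the matched historical laser data
```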
It should be noted that, in the embodiment of the present invention, the terminal or the robot may perform laser closed-loop detection according to the laser data search range based on the current laser data and the laser database, or may perform laser closed-loop detection directly based on the current laser data and the laser database after determining the closed-loop key data.
It should be noted that, because the laser data search range is dynamically determined according to the current visual image, and the laser data in the laser data search range is only partial data in the laser database, when the terminal or the robot performs the laser closed-loop detection according to the laser data search range, the calculation amount is small, the consumed operation resources are small, and thus the stability of the terminal or the robot system is improved.
In addition, since the specified formula is a logarithmic formula, the relationship between the pose distance and the maximum value of the laser data search range can be described in a coordinate system. For example, referring to fig. 6, the abscissa describes the pose distance and the ordinate describes the maximum value of the laser data search range. As can be clearly seen from fig. 6, when the pose distance is small, a relatively large laser data search range still needs to be selected in order to successfully determine the historical laser data; when the pose distance is large, only a relatively small laser data search range is needed. Performing laser closed-loop detection within this search range therefore avoids an overly long search time and improves the stability of the terminal or robot system.
Step D: and the terminal or the robot writes the current visual image and the current laser data into the map data as optimized node information based on the historical laser data.
The terminal or the robot determines a position error between the position of the robot when the historical laser data was collected and the current position; when the position error is not 0, the current visual image and the current laser data are written into the map data as optimization node information.
Therefore, after the terminal or the robot determines the historical laser data, the position of the robot when the robot collects the historical laser data can be determined from the historical pose data, the current position of the robot is determined from the current pose data, and the position error between the position of the robot when the robot collects the historical laser data and the current position is determined. The historical pose data is pose data of the robot determined when the historical laser data is collected, and the current pose data is pose data of the robot determined when the current laser data is collected.
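A minimal sketch of the position-error test follows; the small tolerance used for the "error is not 0" check is an assumption introduced only to absorb floating-point noise.

```python
# Decide whether the current visual image and laser data become an optimization node.
import math

def position_error(historical_xy, current_xy) -> float:
    """Distance between the position when the historical laser data was collected
    and the current position."""
    return math.hypot(current_xy[0] - historical_xy[0], current_xy[1] - historical_xy[1])

def should_write_optimization_node(historical_xy, current_xy, eps: float = 1e-6) -> bool:
    return position_error(historical_xy, current_xy) > eps
```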
The terminal or the robot can take the position of the robot before it moves as the initial position of the robot. Each subsequent time the robot collects a visual image and laser data, the offset of the robot's current position relative to the initial position can be determined through an odometer installed on the robot, and the current position of the robot is then determined from the initial position and that offset.
In addition, when the odometer is a wheel-type odometer, it records the number of revolutions of the left and right wheels of the robot and the rotation angle of the robot. The terminal or the robot can therefore obtain the displacement offset of the robot by multiplying the number of revolutions of the left and right wheels by the circumference of the wheels, and, taking the direction the front of the robot faced before it moved as the positive direction, determine the angle offset of the robot relative to the positive direction from the rotation angle of the robot.
In addition, the operation by which the terminal or the robot determines the current position of the robot from the initial position and the offset relative to the initial position may be: establishing a planar rectangular coordinate system, taking the initial position of the robot as the (0, 0) point and the direction the front of the robot faced before moving as the X axis, determining the coordinates of the robot from the angle offset and the displacement offset, and determining the position described by those coordinates as the current position of the robot.
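The following sketch illustrates this dead-reckoning step under the stated conventions (initial position at (0, 0), pre-movement facing direction as the X axis); treating the whole interval as a single straight segment is a simplification of the sketch, not a requirement of the embodiment.

```python
# Convert wheel revolutions and heading into a position in the planar coordinate system.
import math

def current_position(rev_left: float, rev_right: float,
                     wheel_circumference: float, heading_rad: float):
    """Position relative to the initial (0, 0) point; heading measured from +X."""
    distance = 0.5 * (rev_left + rev_right) * wheel_circumference
    return (distance * math.cos(heading_rad), distance * math.sin(heading_rad))

# Example: both wheels turn 10 times on a 0.3 m circumference while facing +Y.
x, y = current_position(10, 10, 0.3, math.radians(90))   # approximately (0.0, 3.0)
```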
Further, as can be seen from the above description, when the terminal or the robot determines that the object in the current visual image has moved relative to the object in the historical visual image, the change may be caused by autonomous movement of the robot or by movement of the visual sensor on the robot, and when the change is caused by movement of the visual sensor, subsequent positioning and map construction may be inaccurate. Therefore, in order to improve the accuracy of subsequent positioning and map construction, when the terminal or the robot determines that the object in the current visual image has moved relative to the object in the historical visual image, it can acquire the odometry data of the robot recorded by the wheel-type odometer installed on the wheels of the robot and determine the moving state of the robot from that odometry data. When it is determined from the moving state that the robot has moved, the operation of writing the current visual image and the current laser data into the map data as optimization node information in step 202 is executed; when it is determined from the moving state that the robot has not moved, it is determined that the position of the visual sensor of the robot has changed, and the subsequent positioning and map construction operations are not required. The specific process can be seen in FIG. 7.
When the odometry data of the robot include the displacement change in the X direction, the displacement change in the Y direction, and the angle change of the robot, then if the displacement change of the robot in either the X direction or the Y direction exceeds a preset displacement threshold, or the angle change of the robot exceeds a preset angle threshold, the terminal or the robot determines that the robot has moved; when neither the displacement change in the X direction nor the displacement change in the Y direction exceeds the preset displacement threshold, and the angle change of the robot does not exceed the preset angle threshold, the terminal or the robot determines that the robot has not moved.
It should be noted that the angle of the robot refers to an angle that the robot rotates during the moving process, for example, if the robot rotates 180 degrees on the spot, then 180 degrees is the angle of the robot.
The wheel type odometer can record the rotating number of the left wheel and the right wheel of the robot and the rotating angle of the robot, so that the terminal or the robot can determine the odometer data of the robot through the rotating number of the left wheel and the right wheel of the wheel type odometer and the rotating angle of the robot.
Taking the time of the previous map construction as the starting point, the terminal or the robot determines the number of revolutions of the left and right wheels recorded by the wheel-type odometer up to the moment the current visual image is acquired, together with the rotation angle of the robot, and multiplies the number of revolutions of the left and right wheels by the wheel circumference to obtain the displacement change of the robot. Taking the advancing direction of the robot as the Y direction: when the rotation angle of the robot has not changed, the displacement change of the robot is in the Y direction; when the rotation angle of the robot has changed by 90 degrees or -90 degrees, the displacement change of the robot is in the X direction and the angle of the robot is 90 degrees or -90 degrees; and when the robot has no displacement change in the X direction or the Y direction but the wheel-type odometer has recorded a rotation angle, that rotation angle is determined as the angle of the robot.
It should be noted that the preset displacement threshold may be set in advance, for example, the preset displacement threshold may be 15 centimeters, 30 centimeters, and the like. The preset angle threshold may also be set in advance, for example, the preset angle threshold may be 20 degrees, 30 degrees, and the like.
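A minimal sketch of the movement-state decision is shown below, using the example threshold values mentioned above; the exact values remain configurable presets.

```python
# The robot is considered to have moved when either displacement component or
# the rotation exceeds its preset threshold.
PRESET_DISPLACEMENT_THRESHOLD_M = 0.15   # e.g. 15 centimetres
PRESET_ANGLE_THRESHOLD_DEG = 20.0        # e.g. 20 degrees

def robot_has_moved(dx_m: float, dy_m: float, dtheta_deg: float) -> bool:
    return (abs(dx_m) > PRESET_DISPLACEMENT_THRESHOLD_M
            or abs(dy_m) > PRESET_DISPLACEMENT_THRESHOLD_M
            or abs(dtheta_deg) > PRESET_ANGLE_THRESHOLD_DEG)
```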
Moreover, when the position of the visual sensor of the robot changes, subsequent positioning and map construction operations may be affected, so the terminal or the robot can give an alarm prompt after determining that the position of the visual sensor of the robot has changed.
Further, in the embodiment of the present invention, when the terminal or the robot continues to determine the moving state of the robot through the wheel type odometer installed on the robot after determining that the object in the current visual image moves relative to the object in the historical visual image, the terminal or the robot may correspondingly store the current visual image and the current laser data after determining that the robot moves.
Since visual images are stored in the visual database and laser data are stored in the laser database, the terminal or the robot can store the current visual image in the visual database and the current laser data in the laser database. In general, since databases can be associated with one another, the visual database and the laser database can be associated with each other to realize the corresponding storage of the current visual image and the current laser data.
In addition, the terminal or the robot can also determine the pose data of the robot when the robot collects visual images and laser data each time, and the subsequent operation of writing the current visual images and the current laser data into map data as optimization node information is related to the position of the robot. Therefore, the terminal or the robot can also store the pose data of the robot determined each time into the pose database, and the pose database is respectively associated with the visual database and the laser database, so that the corresponding storage among the current visual image, the current laser data and the current pose data is realized.
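The sketch below illustrates the corresponding storage with one shared key across the three databases; the dictionary-backed structure is an illustration only and is not the storage scheme of the patent.

```python
# One shared key associates the visual, laser and pose database entries.
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class AssociatedDatabases:
    visual: Dict[int, Any] = field(default_factory=dict)                       # key -> visual image
    laser: Dict[int, Any] = field(default_factory=dict)                        # key -> laser data
    pose: Dict[int, Tuple[float, float, float]] = field(default_factory=dict)  # key -> (x, y, theta)
    _next_key: int = 0

    def store(self, image: Any, scan: Any, pose_xytheta: Tuple[float, float, float]) -> int:
        """Store one acquisition under a common key so the three items stay associated."""
        key, self._next_key = self._next_key, self._next_key + 1
        self.visual[key] = image
        self.laser[key] = scan
        self.pose[key] = pose_xytheta
        return key
```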
It should be noted that, in general, a terminal or a robot stores every visual image acquired by the visual sensor and every item of laser data acquired by the laser sensor, so after the robot has been moving for a long time, the memory and storage space occupied by the visual images and laser data grow larger and larger, making subsequent computation complicated and the terminal or the robot difficult to run. In the embodiment of the present invention, the terminal or the robot stores the current visual image and the current laser data correspondingly only after determining that the object in the current visual image has moved relative to the object in the historical visual image, or after additionally determining from the odometry data that the robot has moved, which greatly reduces the memory and storage space occupied on the terminal or the robot. At the same time, the less data stored in the visual database and the laser database, the smaller the computation required by algorithms such as closed-loop detection, positioning and map construction, so the resource consumption of the terminal or the robot is reduced.
Further, after the terminal or the robot writes the optimization node information into the map data, an optimization condition can be generated from the optimization node information and the other node information in the map data, and the pose data included in the other node information in the map data is optimized according to the optimization condition and the position error, so that the map is optimized.
In the embodiment of the invention, the terminal or the robot can detect whether the object in the current visual image has moved relative to the object in the historical visual image, and then further determine through the wheel odometer whether the robot has moved, which improves the accuracy of the robot motion analysis. After the robot is determined to have moved, closed-loop detection is performed according to the current visual image and the current laser data to determine the position error between the position of the robot when the historical laser data was collected and the current position, and the current visual image and the current laser data are written into the map data as optimization node information when the error is not 0. Because the wheel odometer alone is not used to decide whether the robot has moved, inaccurate positioning and map construction caused by improper motion analysis is avoided, and the accuracy of robot positioning and map construction is improved.
Having explained the real-time positioning and map construction method provided by the embodiment of the present invention, the following describes the real-time positioning and map construction apparatus provided by the embodiment of the present invention.
Fig. 8 is a block diagram of a real-time positioning and mapping apparatus provided by an embodiment of the present invention. Referring to fig. 8, the real-time positioning and mapping apparatus may be implemented by software, hardware, or a combination of the two. The apparatus includes a first acquisition module 801 and a writing module 802.
A first acquisition module 801, configured to acquire a current visual image and current laser data collected at a current position;
a writing module 802, configured to, when it is detected that an object in a current visual image moves relative to an object in a historical visual image, write the current visual image and current laser data as optimization node information into map data, where the historical visual image is a visual image whose acquisition time is earlier than that of the current visual image.
Optionally, referring to fig. 9, the apparatus further comprises:
a feature extraction module 803, configured to perform feature extraction on the current visual image and the historical visual image respectively to obtain a first visual feature point and a second visual feature point;
a first determining module 804, configured to determine, according to the first visual feature point and the second visual feature point, a scene overlap ratio between the current visual image and the historical visual image;
a second determining module 805, configured to determine that the scene where the robot is located has changed when the scene overlap ratio is smaller than a first preset threshold.
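To make the scene overlap ratio concrete, the sketch below matches ORB features between the current and historical images with OpenCV and treats the fraction of matched features as the ratio. The feature type, the ratio test, and the example threshold of 0.3 are assumptions; the patent only requires feature points and a first preset threshold.

```python
# Illustrative sketch (assumed feature type and threshold): compute a scene
# overlap ratio between the current and historical visual images from matched
# ORB feature points.
import cv2

def scene_overlap_ratio(current_img, history_img, ratio_test=0.75):
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(current_img, None)   # first visual feature points
    kp2, des2 = orb.detectAndCompute(history_img, None)   # second visual feature points
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio_test * p[1].distance]
    return len(good) / max(min(len(kp1), len(kp2)), 1)

# The scene is treated as changed when the ratio falls below a preset threshold,
# e.g. scene_overlap_ratio(current, history) < 0.3 (0.3 is an assumed value).
```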
Optionally, referring to fig. 10, the apparatus further comprises:
a third determining module 806, configured to determine that the object in the current visual image has moved relative to the object in the historical visual image when the target object is not present in the current visual image, where the target object is the object in the historical visual image;
a fourth determining module 807 for determining a position of the target object in the historical visual image and determining a position of the target object in the current visual image when the target object is present in the current visual image;
a fifth determining module 808, configured to determine that the object in the current visual image has moved relative to the object in the historical visual image when the position of the target object in the current visual image is different from the position in the historical visual image.
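A minimal sketch of this optional check follows. How the target object is detected in each image and what pixel tolerance counts as "a different position" are left open by the patent, so both are assumptions here.

```python
# Minimal sketch (assumed detection output and tolerance): decide whether the
# target object has moved between the historical and current visual images.
def object_has_moved(pos_in_history, pos_in_current, tol=5.0):
    """Positions are (x, y) pixel centers, or None when the object is not detected."""
    if pos_in_current is None:           # target object absent from the current image
        return True
    dx = pos_in_current[0] - pos_in_history[0]
    dy = pos_in_current[1] - pos_in_history[1]
    return (dx * dx + dy * dy) ** 0.5 > tol   # position differs beyond the tolerance
```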
Optionally, referring to fig. 11, the apparatus further comprises:
the second obtaining module 809 is used for obtaining odometer data of the robot;
a sixth determining module 810, configured to determine a moving state of the robot according to odometry data of the robot;
accordingly, the writing module 802 is configured to:
and when the moving state indicates that the robot has moved, write the current visual image and the current laser data into the map data as optimization node information.
Optionally, the odometry data of the robot includes displacement change in the X direction, displacement change in the Y direction, and angle change of the robot;
the sixth determining module 810 is configured to:
and determine that the robot has moved when the displacement change in either the X direction or the Y direction exceeds a preset displacement threshold, or when the angle change of the robot exceeds a preset angle threshold.
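This movement test reduces to a simple threshold check, sketched below; the patent only states that preset displacement and angle thresholds exist, so the numeric values are placeholders.

```python
# Sketch of the movement test above; threshold values are assumed placeholders.
def robot_has_moved(dx, dy, dtheta,
                    displacement_threshold=0.05,   # meters, assumed
                    angle_threshold=0.1):          # radians, assumed
    """dx, dy: displacement change in the X / Y direction; dtheta: angle change,
    all taken from the wheel odometer."""
    return (abs(dx) > displacement_threshold
            or abs(dy) > displacement_threshold
            or abs(dtheta) > angle_threshold)
```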
Optionally, referring to fig. 12, the writing module 802 includes:
a first detection submodule 8021, configured to perform visual closed-loop detection based on the current visual image and a visual database so as to determine closed-loop key data, where the visual database is used to store the visual data collected by the robot, and the closed-loop key data is data indicating that the current position of the robot is a position the robot has passed before the current time;
a first determining submodule 8022, configured to, when the closed-loop key data is determined, determine a laser data search range based on the laser data acquired when the closed-loop key data is acquired and the current laser data;
the second detection submodule 8023 is configured to perform laser closed-loop detection according to the laser data search range based on the current laser data and a laser database, so as to determine historical laser data matched with the current laser data, where the laser database is used to store laser data acquired by the robot;
a second determining sub-module 8024, configured to write, based on the historical laser data, the current visual image and the current laser data as optimization node information into map data.
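As a rough illustration of the laser closed-loop step constrained by such a search range, the sketch below scans a laser database for the historical scan whose pose lies within the range and best matches the current scan; the data layout and the mean-absolute-difference matching score are assumptions, not the patent's matcher.

```python
# Rough sketch (assumed data layout and matching score): laser closed-loop
# detection restricted to the laser data search range.
import numpy as np

def laser_closed_loop_detect(current_scan, current_pose, laser_db, search_range):
    """laser_db: list of (scan, pose) pairs; scans are equal-length range arrays.
    Returns the best-matching historical scan whose pose lies within search_range."""
    lo, hi = search_range
    best, best_score = None, float("inf")
    for scan, pose in laser_db:
        dist = np.linalg.norm(np.asarray(pose[:2]) - np.asarray(current_pose[:2]))
        if not (lo <= dist <= hi):
            continue                              # outside the laser data search range
        score = np.mean(np.abs(np.asarray(scan) - np.asarray(current_scan)))
        if score < best_score:
            best, best_score = scan, score
    return best
```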
Optionally, the second determining submodule 8024 is configured to:
determining a position error between the position of the robot when the historical laser data was collected and the current position;
and when the position error is not 0, writing the current visual image and the current laser data into the map data as optimization node information.
Optionally, the first determining submodule 8022 is configured to:
acquiring pose data of the robot when the closed loop key data are acquired and pose data of the robot when the current laser data are acquired;
determining a pose distance between the position of the robot when the closed-loop key data was acquired and the current position of the robot when the current laser data was acquired, based on the pose data of the robot when the closed-loop key data was acquired and the pose data of the robot when the current laser data was acquired;
determining the laser data search range based on the pose distance.
Optionally, the first determining submodule 8022 is further configured to:
determining the maximum value of the laser data search range based on the pose distance through the following formula:
dist_adaptive = log(1 + a * dist_diff) + b
wherein dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are constants;
determining a range between 0 and the maximum value as the laser data search range.
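A worked sketch of this adaptive search range follows; the constants a and b are unspecified in the patent, so the values used here are placeholders.

```python
# Worked sketch of the adaptive search range; a and b are assumed placeholder values.
import math

def laser_search_range(pose_distance, a=1.0, b=0.5):
    """Return (0, dist_adaptive) where dist_adaptive = log(1 + a * dist_diff) + b."""
    dist_adaptive = math.log(1.0 + a * pose_distance) + b
    return 0.0, dist_adaptive

print(laser_search_range(2.0))   # (0.0, about 1.60) with the assumed constants
```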
In summary, in the embodiment of the present invention, the terminal or the robot can detect whether the object in the current visual image has moved relative to the object in the historical visual image, and then further determine through the wheel odometer whether the robot has moved, which improves the accuracy of the robot motion analysis. After the robot is determined to have moved, closed-loop detection is performed according to the current visual image and the current laser data to determine the position error between the position of the robot when the historical laser data was collected and the current position, and the current visual image and the current laser data are written into the map data as optimization node information when the error is not 0. Because the wheel odometer alone is not used to decide whether the robot has moved, inaccurate positioning and map construction caused by improper motion analysis is avoided, and the accuracy of robot positioning and map construction is improved.
It should be noted that: in the real-time positioning and mapping apparatus provided in the foregoing embodiment, when performing real-time positioning and mapping, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the real-time positioning and map building device and the real-time positioning and map building method provided by the above embodiments belong to the same concept, and the specific implementation process thereof is described in detail in the method embodiments and is not described herein again.
Fig. 13 is a block diagram illustrating a terminal 1300 according to an exemplary embodiment of the present invention. The terminal 1300 may be: a robot, a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1300 includes: a processor 1301 and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 1301 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in a wake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content that the display screen needs to display. In some embodiments, processor 1301 may further include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1302 is used to store at least one instruction for execution by processor 1301 to implement the real-time localization and mapping methods provided by method embodiments herein.
In some embodiments, terminal 1300 may further optionally include: a peripheral interface 1303 and at least one peripheral. Processor 1301, memory 1302, and peripheral interface 1303 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1303 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1304, touch display 1305, camera 1306, audio circuitry 1307, positioning component 1308, and power supply 1309.
Peripheral interface 1303 may be used to connect at least one peripheral associated with I/O (Input/Output) to processor 1301 and memory 1302. In some embodiments, processor 1301, memory 1302, and peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral device interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1304 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1304 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, the display screen 1305 also has the ability to capture touch signals on or over the surface of the display screen 1305. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, display 1305 may be one, providing the front panel of terminal 1300; in other embodiments, the display 1305 can be at least two, respectively disposed on different surfaces of the terminal 1300 or in a folded design; in still other embodiments, display 1305 may be a flexible display disposed on a curved surface or on a folded surface of terminal 1300. Even further, the display 1305 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1306 is used to capture images or video. Optionally, camera assembly 1306 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for realizing voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided, each at a different location of terminal 1300. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1301 or the radio frequency circuitry 1304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1307 may also include a headphone jack.
The positioning component 1308 is used to locate the current geographic position of the terminal 1300 to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 1309 is used to supply power to the various components in terminal 1300. The power supply 1309 may be an alternating-current or direct-current supply, or a disposable or rechargeable battery. When the power supply 1309 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also support fast-charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyro sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1301 may control the touch display screen 1305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1311. The acceleration sensor 1311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1312 may detect the body direction and the rotation angle of the terminal 1300, and the gyro sensor 1312 may cooperate with the acceleration sensor 1311 to acquire a 3D motion of the user with respect to the terminal 1300. Processor 1301, based on the data collected by gyroscope sensor 1312, may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side bezel of terminal 1300 and/or underlying touch display 1305. When the pressure sensor 1313 is disposed on the side frame of the terminal 1300, a user's holding signal to the terminal 1300 may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the holding signal acquired by the pressure sensor 1313. When the pressure sensor 1313 is disposed at a lower layer of the touch display screen 1305, the processor 1301 controls an operability control on the UI interface according to a pressure operation of the user on the touch display screen 1305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1314 is used for collecting the fingerprint of the user, and the processor 1301 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user according to the collected fingerprint. When the identity of the user is identified as a trusted identity, the processor 1301 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical button or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical button or vendor Logo.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, the processor 1301 can control the display brightness of the touch display screen 1305 according to the intensity of the ambient light collected by the optical sensor 1315. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1305 is increased; when the ambient light intensity is low, the display brightness of the touch display 1305 is turned down. In another embodiment, the processor 1301 can also dynamically adjust the shooting parameters of the camera assembly 1306 according to the ambient light intensity collected by the optical sensor 1315.
Proximity sensor 1316, also known as a distance sensor, is typically disposed on the front panel of terminal 1300. Proximity sensor 1316 is used to measure the distance between the user and the front face of terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually decreases, the processor 1301 controls the touch display 1305 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 1316 detects that the distance between the user and the front face of the terminal 1300 gradually increases, the processor 1301 controls the touch display 1305 to switch from the dark-screen state to the bright-screen state.
That is, an embodiment of the present invention not only provides a terminal including a processor and a memory for storing processor-executable instructions, where the processor is configured to execute the method in the embodiment shown in fig. 1 or fig. 2, but also provides a computer-readable storage medium having a computer program stored therein, where the computer program, when executed by the processor, can implement the real-time positioning and map construction method in the embodiment shown in fig. 1 or fig. 2.
Those skilled in the art will appreciate that the structure shown in fig. 13 does not constitute a limitation on terminal 1300, which may include more or fewer components than those shown, combine some components, or adopt a different arrangement of components.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (16)

1. A real-time positioning and mapping method, the method comprising:
acquiring a current visual image and current laser data acquired at a current position;
when detecting that an object in a current visual image moves relative to an object in a historical visual image, performing visual closed-loop detection based on the current visual image and a visual database to determine closed-loop key data, wherein the visual database is used for storing visual data acquired by a robot, the closed-loop key data is data describing that the current position of the robot is a position that the robot has passed before the current time, and the historical visual image is a visual image with the acquisition time earlier than that of the current visual image;
when the closed loop key data are determined, determining a laser data search range based on the laser data acquired when the closed loop key data are acquired and the current laser data;
performing laser closed loop detection according to the laser data search range based on the current laser data and a laser database to determine historical laser data matched with the current laser data, wherein the laser database is used for storing the laser data acquired by the robot;
and writing the current visual image and the current laser data into map data as optimized node information based on the historical laser data.
2. The method of claim 1, wherein prior to writing the current visual image and current laser data as optimization node information to map data, further comprising:
respectively extracting the characteristics of the current visual image and the historical visual image to obtain a first visual characteristic point and a second visual characteristic point;
determining scene coincidence degree between the current visual image and the historical visual image according to the first visual feature point and the second visual feature point;
when the scene coincidence degree is smaller than a first preset threshold value, determining that the object in the current visual image moves relative to the object in the historical visual image.
3. The method of claim 1, wherein prior to writing the current visual image and current laser data as optimization node information to map data, further comprising:
when a target object does not exist in the current visual image, determining that the object in the current visual image moves relative to the object in the historical visual image, wherein the target object is the object in the historical visual image;
determining a position of the target object in the historical visual image and determining a position of the target object in the current visual image when the target object is present in the current visual image;
determining that the object in the current visual image has moved relative to the object in the historical visual image when the position of the target object in the current visual image is not the same as the position in the historical visual image.
4. The method of claim 1, wherein prior to writing the current visual image and current laser data as optimization node information to map data, further comprising:
acquiring odometer data of the robot;
determining the moving state of the robot according to the odometry data of the robot;
correspondingly, the writing the current visual image and the current laser data as optimization node information into the map data includes:
and when the robot is determined to move through the moving state, writing the current visual image and the current laser data into map data as optimization node information.
5. The method of claim 4, wherein the odometry data for the robot includes a change in displacement in an X direction, a change in displacement in a Y direction, and a change in angle of the robot;
the determining the moving state of the robot according to the odometry data of the robot comprises:
and when the displacement change of the robot in any one of the X-direction displacement change and the Y-direction displacement change exceeds a preset displacement threshold value, or the angle change of the robot exceeds a preset angle threshold value, determining that the robot has moved.
6. The method of claim 1, wherein said writing the current visual image and the current laser data as optimized node information to map data based on the historical laser data comprises:
determining a position error between the position of the robot and the current position when the historical laser data is collected;
and when the position error is not 0, writing the current visual image and the current laser data into the map data as optimization node information.
7. The method of claim 1, wherein determining a laser data search range based on the laser data collected while collecting the closed loop key data and the current laser data comprises:
acquiring pose data of the robot when the closed loop key data are acquired and pose data of the robot when the current laser data are acquired;
determining a pose distance between the position of the robot when the closed-loop key data is acquired and the current position of the robot when the current laser data is acquired, based on the pose data of the robot when the closed-loop key data is acquired and the pose data of the robot when the current laser data is acquired;
determining the laser data search range based on the pose distance.
8. The method of claim 7, wherein the determining the laser data search range based on the pose distance comprises:
determining the maximum value of the laser data searching range through the following formula based on the pose distance;
dist_adaptive = log(1 + a * dist_diff) + b
wherein dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are constants;
determining a range between 0 and the maximum value as the laser data search range.
9. A real-time positioning and mapping apparatus, the apparatus comprising:
the first acquisition module is used for acquiring a current visual image and current laser data acquired at a current position;
the write-in module comprises a first detection submodule, a first determination submodule, a second detection submodule and a second determination submodule;
the first detection submodule is used for performing visual closed-loop detection based on a current visual image and a visual database to determine closed-loop key data when detecting that an object in the current visual image moves relative to an object in a historical visual image, wherein the visual database is used for storing the visual data acquired by the robot, the closed-loop key data is data describing that the current position of the robot is a position that the robot has passed before the current time, and the historical visual image is a visual image whose acquisition time is earlier than that of the current visual image;
the first determining submodule is used for determining a laser data searching range based on the laser data acquired during the acquisition of the closed-loop key data and the current laser data when the closed-loop key data is determined;
the second detection submodule is used for carrying out laser closed-loop detection according to the laser data search range based on the current laser data and a laser database so as to determine historical laser data matched with the current laser data, and the laser database is used for storing the laser data acquired by the robot;
the second determining submodule is used for writing the current visual image and the current laser data into map data as optimization node information based on the historical laser data.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the characteristic extraction module is used for respectively extracting the characteristics of the current visual image and the historical visual image to obtain a first visual characteristic point and a second visual characteristic point;
a first determining module, configured to determine, according to the first visual feature point and the second visual feature point, a scene overlap ratio between the current visual image and the historical visual image;
and the second determining module is used for determining that the object in the current visual image moves relative to the object in the historical visual image when the scene coincidence degree is smaller than a first preset threshold value.
11. The apparatus of claim 9, wherein the apparatus further comprises:
a third determining module, configured to determine that an object in the current visual image moves relative to an object in the historical visual image when a target object does not exist in the current visual image, where the target object is an object in the historical visual image;
a fourth determining module, configured to determine a position of the target object in the historical visual image and determine a position of the target object in the current visual image when the target object exists in the current visual image;
a fifth determining module, configured to determine that the object in the current visual image has moved relative to the object in the historical visual image when the position of the target object in the current visual image is different from the position in the historical visual image.
12. The apparatus of claim 9, wherein the apparatus further comprises:
the second acquisition module is used for acquiring odometer data of the robot;
a sixth determining module, configured to determine a moving state of the robot according to odometry data of the robot;
correspondingly, the writing module is used for:
and when the robot is determined to move through the moving state, writing the current visual image and the current laser data into map data as optimization node information.
13. The apparatus of claim 12, wherein the odometry data for the robot includes a change in displacement in an X direction, a change in displacement in a Y direction, and a change in angle of the robot;
the sixth determining module is configured to:
and when the displacement change of the robot in any one of the X-direction displacement change and the Y-direction displacement change exceeds a preset displacement threshold value, or the angle change of the robot exceeds a preset angle threshold value, determining that the robot has moved.
14. The apparatus of claim 9, wherein the second determination submodule is to:
determining a position error between the position of the robot and the current position when the historical laser data is collected;
and when the position error is not 0, writing the current visual image and the current laser data into the map data as optimization node information.
15. The apparatus of claim 9, wherein the first determination submodule is to:
acquiring pose data of the robot when the closed-loop key data are acquired and pose data of the robot when the current laser data are acquired;
determining a pose distance between the position of the robot when the closed-loop key data is acquired and the current position of the robot when the current laser data is acquired, based on the pose data of the robot when the closed-loop key data is acquired and the pose data of the robot when the current laser data is acquired;
determining the laser data search range based on the pose distance.
16. The apparatus of claim 15, wherein the first determination submodule is further to:
determining the maximum value of the laser data searching range through the following formula based on the pose distance;
dist_adaptive = log(1 + a * dist_diff) + b
wherein dist_adaptive represents the maximum value of the laser data search range, dist_diff represents the pose distance, and a and b are constants;
determining a range between 0 and the maximum value as the laser data search range.
CN201810569341.9A 2018-06-05 2018-06-05 Real-time positioning and map construction method and device and computer readable storage medium Active CN110570465B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810569341.9A CN110570465B (en) 2018-06-05 2018-06-05 Real-time positioning and map construction method and device and computer readable storage medium
PCT/CN2019/088413 WO2019233299A1 (en) 2018-06-05 2019-05-24 Mapping method and apparatus, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810569341.9A CN110570465B (en) 2018-06-05 2018-06-05 Real-time positioning and map construction method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110570465A CN110570465A (en) 2019-12-13
CN110570465B true CN110570465B (en) 2022-05-20

Family

ID=68772307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810569341.9A Active CN110570465B (en) 2018-06-05 2018-06-05 Real-time positioning and map construction method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110570465B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111060118B (en) * 2019-12-27 2022-01-07 炬星科技(深圳)有限公司 Scene map establishing method, device and storage medium
CN111352911A (en) * 2020-02-24 2020-06-30 北京小马智行科技有限公司 Compression method and device of electronic map, storage medium and electronic equipment
WO2022016320A1 (en) * 2020-07-20 2022-01-27 深圳元戎启行科技有限公司 Map update method and apparatus, computer device, and storage medium
CN112833890A (en) * 2020-12-30 2021-05-25 深圳市海柔创新科技有限公司 Map construction method, map construction device, map construction equipment, robot and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831446A (en) * 2012-08-20 2012-12-19 南京邮电大学 Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping)
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection
CN103900583A (en) * 2012-12-25 2014-07-02 联想(北京)有限公司 Device and method used for real-time positioning and map building
CN106897666A (en) * 2017-01-17 2017-06-27 上海交通大学 A kind of closed loop detection method of indoor scene identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831446A (en) * 2012-08-20 2012-12-19 南京邮电大学 Image appearance based loop closure detecting method in monocular vision SLAM (simultaneous localization and mapping)
CN103900583A (en) * 2012-12-25 2014-07-02 联想(北京)有限公司 Device and method used for real-time positioning and map building
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection
CN106897666A (en) * 2017-01-17 2017-06-27 上海交通大学 A kind of closed loop detection method of indoor scene identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于视觉与惯性传感器的移动机器人环境地图创建方法研究";张磊;《中国优秀博硕士论文全文数据库 信息科技辑》;20131215;第2-4章 *

Also Published As

Publication number Publication date
CN110570465A (en) 2019-12-13

Similar Documents

Publication Publication Date Title
US11205282B2 (en) Relocalization method and apparatus in camera pose tracking process and storage medium
US11276183B2 (en) Relocalization method and apparatus in camera pose tracking process, device, and storage medium
CN110148178B (en) Camera positioning method, device, terminal and storage medium
CN110967024A (en) Method, device, equipment and storage medium for detecting travelable area
CN110570465B (en) Real-time positioning and map construction method and device and computer readable storage medium
CN110967011A (en) Positioning method, device, equipment and storage medium
CN110986930B (en) Equipment positioning method and device, electronic equipment and storage medium
CN109558837B (en) Face key point detection method, device and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN109166150B (en) Pose acquisition method and device storage medium
CN110134744B (en) Method, device and system for updating geomagnetic information
CN109886208B (en) Object detection method and device, computer equipment and storage medium
CN110633336B (en) Method and device for determining laser data search range and storage medium
CN110920631A (en) Method and device for controlling vehicle, electronic equipment and readable storage medium
CN112749590B (en) Object detection method, device, computer equipment and computer readable storage medium
CN111753606A (en) Intelligent model upgrading method and device
CN111928861B (en) Map construction method and device
CN111127541A (en) Vehicle size determination method and device and storage medium
CN111354378B (en) Voice endpoint detection method, device, equipment and computer storage medium
CN113432620B (en) Error estimation method and device, vehicle-mounted terminal and storage medium
WO2019233299A1 (en) Mapping method and apparatus, and computer readable storage medium
CN112990421B (en) Method, device and storage medium for optimizing operation process of deep learning network
CN111984755A (en) Method and device for determining target parking point, electronic equipment and storage medium
CN113592874A (en) Image display method and device and computer equipment
CN111402873A (en) Voice signal processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: 310051 room 304, B / F, building 2, 399 Danfeng Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou Hikvision Robot Co.,Ltd.

Address before: 310051 5th floor, building 1, building 2, no.700 Dongliu Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee before: HANGZHOU HIKROBOT TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address