CN110866496A - Robot positioning and mapping method and device based on depth image - Google Patents

Robot positioning and mapping method and device based on depth image

Info

Publication number
CN110866496A
CN110866496A (application number CN201911114259.8A)
Authority
CN
China
Prior art keywords
image
current frame
frame
pose
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911114259.8A
Other languages
Chinese (zh)
Other versions
CN110866496B (en)
Inventor
方宝富
王浩
杨静
韩健英
韩修萌
卢德玖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201911114259.8A priority Critical patent/CN110866496B/en
Publication of CN110866496A publication Critical patent/CN110866496A/en
Application granted granted Critical
Publication of CN110866496B publication Critical patent/CN110866496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a depth image-based robot positioning and mapping method and apparatus, a computer device, and a storage medium. The method comprises the following steps: detecting the surrounding environment with an RGB-D camera, acquiring an RGB image and a depth image, and determining continuous image frames based on the RGB image and the depth image; calculating the continuous image frames by a sparse direct method to obtain the initial pose of the current frame, so that the initial pose is determined with a small amount of calculation and the pose acquisition speed is increased; meanwhile, calculating and optimizing the initial pose of the current frame by a feature point method to obtain the accurate pose of the current frame, which ensures pose estimation accuracy under illumination changes or fast motion; then selecting key frames according to the accurate pose of the current frame to obtain a key frame sequence; and performing local mapping and optimization based on the key frame sequence to generate an environment map. The environment map is thus generated efficiently and accurately, and the efficiency and accuracy of robot positioning and mapping are improved.

Description

Robot positioning and mapping method and device based on depth image
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for positioning and mapping a robot, a computer device, and a storage medium.
Background
In recent years, technologies such as unmanned driving, robots, unmanned aerial vehicles, and AR/VR have developed rapidly, and positioning and mapping have become a research hotspot and are considered key fundamental technologies in these fields. This is because, in an unknown environment, accurate positioning of the robot requires an accurate environment map, and in order to construct an accurate environment map, the robot also needs to know its exact location in the environment. SLAM (Simultaneous Localization and Mapping) technology enables a robot or other carrier to start at an unknown place in an unknown environment, observe the environmental features around it with a series of onboard sensors (laser radar, GPS, IMU, camera, etc.), calculate its pose as it moves, and incrementally construct a map of the unknown environment according to its pose and position. Finally, a complete, globally consistent environment map can be constructed to provide necessary support for later applications such as navigation, obstacle avoidance, and path planning.
Among the many sensors applied in SLAM, visual sensors (monocular cameras, stereo cameras, and RGB-D cameras) are cheaper than the laser radar on which laser SLAM is built and can provide richer environment information. An RGB-D camera can provide RGB images and the corresponding depth maps at the same time, which saves a large amount of computing resources. Therefore, in indoor mapping, it is increasingly popular to implement visual SLAM using RGB-D cameras.
Among conventional visual SLAM systems implemented with an RGB-D camera, PTAM (Parallel Tracking and Mapping) is a representative key-frame-based visual SLAM system. It runs the tracking and mapping processes in parallel using multiple threads and performs back-end optimization with nonlinear optimization. PTAM satisfies the real-time requirement of visual SLAM, but in the process of implementing the present application, the inventors found that the prior art has at least the following problem: in a large scene, relocation in visual SLAM implemented with PTAM is prone to failure, so how to accurately position a robot and construct a map in a large scene is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application aims to provide a robot positioning and mapping method and device based on a depth image, computer equipment and a storage medium, so that the accuracy of robot positioning and mapping in a large scene is improved.
In order to solve the above technical problem, an embodiment of the present application provides a depth image-based robot positioning and mapping method, including:
detecting the surrounding environment by using an RGB-D camera, acquiring an RGB image and a depth image, and determining continuous image frames based on the RGB image and the depth image;
calculating the continuous image frames by adopting a sparse direct method to obtain the initial pose of the current frame;
calculating and optimizing the initial pose of the current frame by adopting a characteristic point method to obtain the accurate pose of the current frame;
selecting a key frame according to the accurate pose of the current frame to obtain a key frame sequence;
and carrying out local mapping and optimization based on the key frame sequence to generate an environment map.
Further, the determining successive image frames based on the RGB image and the depth image comprises:
extracting ORB characteristics of each RGB image;
calculating the space coordinates of the feature points according to the depth images corresponding to the RGB images;
and obtaining the image frame based on the ORB feature and the space coordinate.
Further, the calculating the continuous image frames by using a sparse direct method to obtain the initial pose of the current frame includes:
taking the image frame that shares the most common map points with the current frame as a reference key frame of the current frame;
and, for the reference key frame of the current frame, determining the initial pose corresponding to the current frame by calculating the minimized photometric error between the current frame image and the reference key frame.
Further, the calculating and optimizing the initial pose of the current frame by using a feature point method to obtain the accurate pose of the current frame includes:
for the current frame image, calculating the reprojection error between the image frame and the current frame image based on the initial pose, and re-optimizing the pose by minimizing the reprojection error to obtain the accurate pose of the current frame.
Further, the selecting a key frame according to the accurate pose of the current frame to obtain a key frame sequence comprises:
calculating the motion amplitude of the current frame relative to the last key frame based on the accurate pose for the current frame;
and if the motion amplitude exceeds a preset distance threshold, inserting the current frame into the key frame sequence.
Further, the calculating, for the current frame, the motion amplitude of the current frame relative to the previous key frame based on the precise pose comprises:
converting the accurate pose of the current frame into a corresponding rotation matrix and displacement offset, and converting the rotation matrix into a rotation Euler angle;
and calculating the two-norm of the rotational Euler angle and the displacement offset, and taking the obtained value as the measurement value corresponding to the motion amplitude of the current frame relative to the previous key frame image.
In order to solve the above technical problem, an embodiment of the present application further provides a depth image-based robot positioning and mapping apparatus, including:
the frame image acquisition module is used for detecting the surrounding environment by using an RGB-D camera, acquiring an RGB image and a depth image, and constructing continuous image frames based on the RGB image and the depth image;
the initial pose determining module is used for calculating the continuous image frames by adopting a sparse direct method to obtain the initial pose of the current frame;
the accurate pose determining module is used for calculating and optimizing the initial pose of the current frame by adopting a characteristic point method to obtain the accurate pose of the current frame;
the key frame selection module is used for selecting key frames according to the accurate pose of the current frame to obtain a key frame sequence;
and the local mapping module is used for carrying out local mapping based on the key frame sequence to generate a local map.
Further, the frame image acquisition module includes:
a feature extraction unit for extracting an ORB feature of each of the RGB images;
the coordinate calculation unit is used for calculating the space coordinates of the feature points according to the depth images corresponding to the RGB images;
and the image redrawing unit is used for obtaining the image frame based on the ORB characteristics and the space coordinates.
Further, the initial pose determination module includes:
the target frame image determining unit is used for taking the image frame that shares the most common map points with the current frame as the reference key frame of the current frame;
and the initial pose determining unit is used for determining, for the reference key frame of the current frame, the initial pose corresponding to the current frame by calculating the minimized photometric error between the current frame image and the reference key frame.
And the pose storage unit is used for saving the initial pose corresponding to the current frame image as the initial pose of the current frame.
Further, the accurate pose determination module includes:
and the accurate pose determining unit is used for calculating the reprojection error of the image frame and the current frame image based on the initial pose aiming at the current frame image, and re-optimizing the positioning pose by selecting the minimized reprojection error to obtain the accurate pose of the current frame.
Further, the key frame selecting module comprises:
the motion amplitude determining unit is used for calculating the motion amplitude of the current frame relative to the last key frame based on the accurate pose aiming at the current frame;
a key frame determining unit, configured to insert the current frame into the sequence of key frames if the motion amplitude exceeds a preset distance threshold;
Further, the motion amplitude determination unit includes:
the pose analyzing subunit is used for converting the accurate pose of the current frame into a corresponding rotation matrix and displacement offset and converting the rotation matrix into a rotation Euler angle;
and the measurement value determining subunit is used for calculating the two-norm of the rotational Euler angle and the displacement offset, and taking the obtained value as the measurement value corresponding to the motion amplitude of the current frame relative to the previous key frame image.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the robot positioning and mapping method based on depth images when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the depth image-based robot positioning and mapping method are implemented.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the surrounding environment is detected with an RGB-D camera, an RGB image and a depth image are acquired, and continuous image frames are determined based on the RGB image and the depth image. The continuous image frames are calculated by a sparse direct method to obtain the initial pose of the current frame, so that the initial pose is determined with a small amount of calculation, the pose acquisition speed is increased, and the efficiency of robot positioning is improved. Meanwhile, the initial pose of the current frame is calculated and optimized by a feature point method to obtain the accurate pose of the current frame, which ensures pose estimation accuracy under illumination changes or fast motion. Key frames are then selected according to the accurate pose of the current frame to obtain a key frame sequence, and local mapping and optimization are performed based on the key frames to generate an environment map. The map is thus generated efficiently and accurately, and the efficiency and accuracy of robot positioning and mapping are improved.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a depth image based robot positioning and mapping method of the present application;
FIG. 3 is a schematic diagram of an embodiment of a depth image based robot positioning and mapping apparatus according to the present application;
FIG. 4 is a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like.
The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background server providing support for pages displayed on the terminal devices 101, 102, 103.
The robot positioning and mapping method based on the depth image provided by the embodiment of the present application is executed by a server, and accordingly, the robot positioning and mapping device based on the depth image is disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs, and the terminal devices 101, 102 and 103 in this embodiment may specifically correspond to an application system in actual production.
Continuing to refer to FIG. 2, a flow diagram of one embodiment of a depth image-based robot positioning and mapping method according to the present application is shown. The robot positioning and mapping method based on the depth image comprises the following steps:
s201: and detecting the surrounding environment by using an RGB-D camera, acquiring an RGB image and a depth image, and constructing continuous image frames based on the RGB image and the depth image.
Specifically, an RGB-D camera is used for detecting the surrounding environment, a group of images including an RGB image and a depth image are acquired each time, each group of RGB image and depth image is converted and integrated to obtain an image frame of a unified space coordinate system, and continuous image frames are obtained according to the sequence of time points.
A depth image is an image with a depth map: an image or image channel containing information about the distance between the surfaces of scene objects and the viewpoint. It is similar to a grayscale image, except that each pixel value expresses the actual distance between the sensor and the object.
An RGB-D camera is a capture device that adds depth measurement to the functions of an ordinary RGB camera.
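To make this concrete, the sketch below back-projects a single pixel of a depth image into the camera coordinate system using the pinhole model. The intrinsic parameters (fx, fy, cx, cy) and the millimetre depth scale are illustrative assumptions and are not taken from this application.

```python
import numpy as np

def back_project(u, v, depth_raw, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project pixel (u, v) with raw depth value depth_raw into the camera frame.

    depth_raw is assumed to be stored in millimetres (depth_scale = 1000),
    which is common for RGB-D sensors but is an assumption here.
    """
    z = depth_raw / depth_scale          # metric depth at the pixel
    x = (u - cx) * z / fx                # inverse of the pinhole projection u = fx*x/z + cx
    y = (v - cy) * z / fy                # inverse of the pinhole projection v = fy*y/z + cy
    return np.array([x, y, z])

# Example with illustrative intrinsics for a 640x480 sensor
point = back_project(u=320, v=240, depth_raw=1500, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(point)   # -> approximately [0.0014, 0.0014, 1.5]
```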
S202: and calculating the continuous image frames by adopting a sparse direct method to obtain the initial pose of the current frame.
Specifically, when the camera pose is estimated by the sparse direct method, the calculation is constrained only by the differences in pixel gray values between two frames. Because the method operates on sparse feature points and does not need to compute descriptors, it runs very fast, so the initial pose of the current frame is obtained rapidly.
Among these, sparse direct methods include, but are not limited to: the Semi-Direct Visual Odometry (SVO) algorithm, Large-Scale Direct SLAM (LSD-SLAM), and photometric error minimization.
S203: and (4) calculating and optimizing the initial pose of the current frame again by adopting a characteristic point method to obtain the accurate pose of the current frame.
Specifically, the sparse direct method is highly sensitive to illumination changes and prone to tracking failure when the camera moves fast, and in practical application scenes illumination changes are difficult to regulate and control, so the initial pose of the current frame needs to be further optimized. For the specific implementation, reference may be made to the description of the subsequent embodiments; to avoid repetition, it is not repeated here.
S204: and selecting a key frame according to the accurate pose of the current frame to obtain a key frame sequence.
Specifically, when the scene is large or the capture device turns slowly, the number of image frames (each with its accurate pose) obtained by the camera while moving through the environment is often large. To avoid an excessive amount of calculation in subsequent mapping and optimization, in this embodiment key frames are selected from the image frame list, so that the number of image frames participating in mapping and optimization is reduced reasonably. Specifically, key frames are selected by judging the motion amplitude of the accurate pose of the current frame relative to the pose of the previous key frame and selecting key frames according to this motion amplitude. For the specific implementation, reference may be made to the description of the subsequent embodiments, which is not repeated here.
S205: and local mapping and optimization are carried out based on the key frame sequence to generate the environment map.
Specifically, after the key frames are obtained, the local map is constructed through splicing and optimization of the key frames.
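As a rough illustration of the splicing step, the sketch below accumulates the 3D feature points of each key frame into a single point-cloud map by transforming them with the key frame's pose. The dictionary keys and the camera-to-world pose convention are assumptions, and the optimization mentioned above (e.g., local bundle adjustment) is not shown.

```python
import numpy as np

def build_local_map(keyframes):
    """Illustrative splicing of key frames into a local point-cloud map:
    each key frame's 3D feature points are transformed by that frame's
    pose (R, t), taken here as the camera-to-world transform, and accumulated."""
    map_points = []
    for kf in keyframes:
        R, t, points = kf["R"], kf["t"], kf["points"]   # pose and camera-frame points
        for p in points:
            map_points.append(R @ p + t)                 # transform into the world frame
    return np.array(map_points)

# Toy example with two key frames (illustrative poses and points)
kf1 = {"R": np.eye(3), "t": np.zeros(3), "points": [np.array([0.0, 0.0, 2.0])]}
kf2 = {"R": np.eye(3), "t": np.array([0.5, 0.0, 0.0]), "points": [np.array([0.0, 0.0, 2.0])]}
print(build_local_map([kf1, kf2]))
```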
In this embodiment, the surrounding environment is detected with an RGB-D camera, an RGB image and a depth image are acquired, and continuous image frames are determined based on the RGB image and the depth image. The continuous image frames are calculated by a sparse direct method to obtain the initial pose of the current frame, so that the initial pose is determined with a small amount of calculation, the pose acquisition speed is increased, and the efficiency of robot positioning and mapping is improved. Meanwhile, the initial pose of the current frame is calculated and optimized by a feature point method to obtain the accurate pose of the current frame, which ensures pose estimation accuracy under illumination changes or fast motion. Key frames are then selected according to the accurate pose of the current frame to obtain a key frame sequence, local mapping is performed based on the key frame sequence to generate an environment map, and the environment map is generated efficiently and accurately, which improves the efficiency and accuracy of robot positioning and mapping.
In some optional implementations of the embodiment, in step S201, determining the consecutive image frames based on the RGB image and the depth image includes:
extracting ORB characteristics of each RGB image;
calculating the space coordinates of the feature points according to the depth images corresponding to the RGB images;
based on the ORB features and the spatial coordinates, an image frame is obtained.
Specifically, the camera device or sensor moves and rotates while acquiring images, so the acquired images have different angles and spatial positions. To facilitate subsequent accurate robot positioning and mapping, the spatial coordinates of each feature point need to be calculated from the depth image, so that the acquired frame images lie in the same world coordinate system, which helps improve the accuracy of subsequent robot positioning and mapping.
The ORB (Oriented FAST and Rotated BRIEF) feature refers to relatively prominent points in an image, such as contour points, bright points in darker areas, and dark points in lighter areas. These feature points can be detected by the FAST (Features from Accelerated Segment Test) algorithm, which mainly finds points where the local pixel gray level changes obviously: a point is compared with the points around it, and if its gray level differs greatly from that of most of the surrounding points (much brighter or much darker), it can be considered a feature point.
In this embodiment, the ORB features of each RGB image are extracted, the spatial coordinates are calculated according to the depth image corresponding to the RGB image, and the image frame is obtained based on the ORB features and the spatial coordinates. In this way, the images captured by the camera device or sensor are converted into image frames with a unified coordinate system and a temporal relationship, and robot positioning and mapping are subsequently performed on these image frames, which improves the accuracy of positioning and mapping.
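The following sketch shows one possible way to assemble such an image frame with OpenCV: ORB features are extracted from the RGB image, and each feature point is given a 3D coordinate computed from the corresponding depth image. The intrinsic parameters, the depth scale, and the synthetic RGB-D pair are assumptions used only to keep the example self-contained.

```python
import cv2
import numpy as np

def build_image_frame(rgb, depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Illustrative construction of an 'image frame': ORB features plus the
    3D coordinates of each feature point computed from the depth image."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)             # Oriented FAST + Rotated BRIEF
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    points_3d = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = depth[v, u] / depth_scale                 # metric depth at the feature pixel
        if z <= 0:                                    # invalid depth readings are skipped
            points_3d.append(None)
            continue
        x = (u - cx) * z / fx                         # back-projection with the pinhole model
        y = (v - cy) * z / fy
        points_3d.append(np.array([x, y, z]))
    return keypoints, descriptors, points_3d

# Synthetic RGB-D pair, only to make the sketch self-contained
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 1500, dtype=np.uint16)    # 1.5 m everywhere
frame = build_image_frame(rgb, depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```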
In some optional implementation manners of this embodiment, in step S202, calculating consecutive image frames by using a sparse direct method, and obtaining an initial pose of the current frame includes:
taking the image frame that shares the most common map points with the current frame as the reference key frame of the current frame;
and, for the reference key frame of the current frame, determining the initial pose corresponding to the current frame by calculating the minimized photometric error between the current frame image and the reference key frame.
Specifically, minimizing the photometric error is also called minimizing the grayscale error; in the direct method, the pose change between two frames is found by minimizing the photometric error. The direct method estimates the motion of the camera from the differences in pixel gray values and requires no feature matching at all, and it runs well as long as the illumination in the environment does not change obviously. Determining the initial pose by the direct method therefore avoids a large amount of calculation, increases the acquisition speed of the initial pose, and helps improve the efficiency of robot positioning and mapping.
In this embodiment, the image frame that shares the most common map points with the current frame is taken as the reference key frame of the current frame, and the initial pose corresponding to the current frame image is then determined by calculating the minimized photometric error between the current frame image and the reference key frame. This avoids the large amount of calculation needed in the prior art to obtain the pose by feature point matching, increases the pose acquisition speed, and improves the efficiency of robot positioning and mapping.
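For illustration, the sketch below writes out the photometric cost that such a direct method evaluates: the 3D points of the reference key frame are transformed by a candidate pose, projected into the current image, and the squared intensity differences are summed. The function and parameter names are assumptions; in a real system this cost would be minimized iteratively (for example with Gauss-Newton on an SE(3) parametrization), which is not shown here.

```python
import numpy as np

def photometric_error(pose_R, pose_t, ref_points_3d, ref_intensities,
                      cur_gray, fx, fy, cx, cy):
    """Sum of squared intensity differences between reference-frame points and
    their projections in the current image under a candidate pose (R, t).
    Only the cost itself is sketched, not its minimization."""
    error = 0.0
    for p, i_ref in zip(ref_points_3d, ref_intensities):
        q = pose_R @ p + pose_t                      # transform point into the current camera frame
        if q[2] <= 0:
            continue                                 # point is behind the camera
        u = fx * q[0] / q[2] + cx                    # pinhole projection
        v = fy * q[1] / q[2] + cy
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < cur_gray.shape[0] and 0 <= ui < cur_gray.shape[1]:
            residual = float(cur_gray[vi, ui]) - float(i_ref)
            error += residual ** 2
    return error

# Toy call with synthetic data and the identity pose
cur = np.full((480, 640), 120, dtype=np.uint8)
pts = [np.array([0.0, 0.0, 2.0])]
print(photometric_error(np.eye(3), np.zeros(3), pts, [100.0], cur, 525.0, 525.0, 319.5, 239.5))
```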
In some optional implementation manners of this embodiment, in step S203, recalculating and optimizing the initial pose of the current frame by using a feature point method, and obtaining the accurate pose of the current frame includes:
and calculating a reprojection error between the image frame and the current frame image based on the initial pose aiming at the current frame image, and re-optimizing the positioning pose by selecting the minimized reprojection error to obtain the accurate pose of the current frame.
Minimizing the reprojection error means that, when the initial pose of the current frame is optimized, a cost function is constructed from the reprojection errors of the matched feature points, and the pose of the current frame is then optimized by minimizing this cost function. The reprojection error is used because it does not only measure the error of a single matched feature point but takes into account the measurement errors of all matched feature points in the image as a whole, so its accuracy is higher.
Specifically, for the current frame image, the reprojection error between the reference key frame and the current frame image is calculated according to the initial pose determined for the current frame image, pose optimization is then performed by minimizing the reprojection error, a pose more accurate than the initial pose is obtained, and this accurate pose is stored in the image frame pose list.
In this embodiment, the pose is re-optimized based on the reprojection error between the reference key frame image and the current frame image to obtain the accurate pose of the current frame. Calculating and minimizing the reprojection error further improves the accuracy of pose estimation, avoids the low estimation accuracy of the initial pose caused by illumination changes or fast motion, and helps improve the accuracy of robot positioning and mapping.
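A hedged sketch of this refinement step is shown below: the pose is parametrized as a rotation vector plus translation, the residuals are the differences between observed pixel positions and the projections of the matched 3D points, and SciPy's least_squares minimizes them. The synthetic data, the parametrization, and the intrinsics are assumptions, not the exact method claimed in this application.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, points_3d, observations_2d, fx, fy, cx, cy):
    """Residuals between observed pixel positions and the projections of matched
    3D points under the pose encoded as (rotation vector, translation)."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam = (R @ points_3d.T).T + t                    # points in the current camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.concatenate([u - observations_2d[:, 0], v - observations_2d[:, 1]])

# Synthetic example: observations generated from a known pose, then recovered
rng = np.random.default_rng(0)
points_3d = rng.uniform([-1, -1, 2], [1, 1, 4], size=(30, 3))
true_pose = np.array([0.02, -0.01, 0.03, 0.10, -0.05, 0.02])   # rotation vector + translation
proj = reprojection_residuals(true_pose, points_3d, np.zeros((30, 2)), 525, 525, 319.5, 239.5)
observations_2d = proj.reshape(2, 30).T              # the projections serve as observations

initial_pose = np.zeros(6)                           # e.g. the result of the sparse direct step
result = least_squares(reprojection_residuals, initial_pose,
                       args=(points_3d, observations_2d, 525, 525, 319.5, 239.5))
print(result.x)                                      # should be close to true_pose
```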
In some optional implementation manners of this embodiment, in step S204, selecting a key frame from the accurate pose of the current frame to obtain a key frame sequence includes:
calculating the motion amplitude of the current frame relative to the previous key frame based on the accurate pose for the current frame;
and if the motion amplitude exceeds a preset distance threshold, inserting the current frame into the key frame sequence.
Specifically, after the accurate pose of the current frame is obtained, key frames need to be selected according to it. When the key frames are too dense, enormous calculation pressure is placed on the back-end optimization processing, and the visual SLAM system cannot process the continuously generated key frames in real time; when the number of key frames is too small, the accuracy of pose estimation decreases, and an accurate environment map cannot be established.
On the basis of the conventional SLAM key frame selection mechanism, the application provides a new key frame generation mechanism: when the RGB-D camera collects an image frame, the frame is compared with the previous key frame, its motion amplitude is calculated, and if the motion amplitude exceeds a threshold, the frame is stored as a key frame for use by the local mapping thread and the back-end optimization thread.
In this embodiment, the motion amplitude of the current frame image relative to the previous key frame is obtained, and whether it exceeds a preset distance threshold is judged. If it exceeds the threshold, the motion of the current frame image relative to the previous key frame is considered large, and the current frame image with its accurate pose is inserted into the key frame sequence; if not, the motion relative to the previous key frame is considered small, and the current frame image does not need to be selected as a key frame and counted into the key frame sequence, which ensures that the selected key frames are not excessively dense.
The preset distance threshold may be set according to actual requirements, and is not limited herein.
The motion amplitude refers to the magnitude of influence on the pose estimation caused by the change of the current frame image relative to the previous key frame, the larger the motion amplitude is, the larger the influence on the pose estimation is, and the smaller the motion amplitude is, the smaller the influence on the pose estimation is.
In this embodiment, for a current frame, a motion amplitude of the current frame relative to a previous key frame is calculated based on an accurate pose, and when the motion amplitude exceeds a preset distance threshold, the current frame is inserted into a key frame sequence as a key frame, so that it is ensured that the key frame data is reasonably selected, and a key frame sequence is obtained, so that when a subsequent image is built and optimized based on the key frame sequence, too large calculation pressure is not generated, and meanwhile, the accuracy of the image building is also improved.
In some optional implementations of this embodiment, for the current frame image, calculating the motion amplitude of the current frame relative to the previous key frame based on the precise pose includes:
converting the accurate pose of the current frame image into a corresponding rotation matrix and displacement offset, and converting the rotation matrix into a rotation Euler angle;
and calculating the two-norm of the rotational Euler angle and the displacement offset, and taking the obtained value as a measurement value corresponding to the motion amplitude of the current frame relative to the previous key frame.
Specifically, the accurate pose ξ is converted into the corresponding rotation matrix R and displacement offset t = (tx, ty, tz). However, when the rotation matrix is used as the evaluation parameter of rotation, the intensity of the rotation cannot be reflected intuitively. The present application therefore converts the rotation matrix into rotational Euler angles to express the degree of rotation, as shown in formula (1) and formula (2):
yaw = atan2(R21, R11), pitch = atan2(-R31, sqrt(R32^2 + R33^2))    (1)
roll = atan2(R32, R33)    (2)
where Rij denotes the element of the rotation matrix R in row i and column j, the atan2() function returns the azimuth angle, and (yaw, pitch, roll) are the Euler angles: yaw is the yaw angle, pitch the pitch angle, and roll the roll angle. After the rotation matrix is converted into Euler angles, the motion amplitude can be measured by the two-norms of the rotational Euler angles and of the displacement offset, as shown in formula (3):
distance=α||tx,ty,tz||+β||yaw,pitch,roll|| (3)
Here the weights α and β are preset weights that can be determined in advance according to actual needs, and distance, the weighted sum of the two-norms of the displacement offset and of the rotational Euler angles, is the motion amplitude.
It should be noted that when the camera moves, pure translational motion has only a small influence on the camera view, whereas rotation has a large influence on it. Therefore, when the values of α and β are set, β is made much larger than α, i.e., the Euler angles are weighted more heavily than the displacement offset.
In this embodiment, the accurate pose of the current frame image is converted into the corresponding rotation matrix and displacement offset, the rotation matrix is converted into rotational Euler angles, the two-norms of the rotational Euler angles and the displacement offset are calculated, and the obtained value is taken as the measurement of the motion amplitude of the current frame relative to the previous key frame. The motion amplitude is thus measured numerically, so that key frames can subsequently be selected according to this numerical value.
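The sketch below mirrors this computation: the relative rotation matrix is converted to Euler angles, the weighted two-norm of formula (3) is evaluated, and the result is compared against a distance threshold to decide whether the current frame becomes a key frame. The weights alpha and beta, the threshold value, and the 'ZYX' Euler convention from SciPy are illustrative assumptions; the text only requires that beta be much larger than alpha.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def motion_amplitude(R, t, alpha=1.0, beta=5.0):
    """Weighted measure of formula (3): alpha * ||t|| + beta * ||(yaw, pitch, roll)||.
    alpha and beta are illustrative values."""
    yaw, pitch, roll = Rotation.from_matrix(R).as_euler('ZYX')   # rotational Euler angles
    return alpha * np.linalg.norm(t) + beta * np.linalg.norm([yaw, pitch, roll])

def is_new_keyframe(R_rel, t_rel, distance_threshold=0.3):
    """Insert the current frame into the key frame sequence when its motion
    amplitude relative to the previous key frame exceeds a preset threshold."""
    return motion_amplitude(R_rel, t_rel) > distance_threshold

# Example: a 10-degree rotation about the vertical axis with a small translation
R_rel = Rotation.from_euler('z', np.deg2rad(10)).as_matrix()
print(is_new_keyframe(R_rel, t_rel=np.array([0.05, 0.0, 0.0])))   # -> True
```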
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowchart may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a depth image-based robot positioning and mapping apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 3, the depth image-based robot positioning and mapping apparatus according to this embodiment includes: a frame image acquisition module 31, an initial pose determination module 32, a precise pose determination module 33, a key frame extraction module 34, and a local mapping module 35. Wherein:
the frame image acquisition module 31 is configured to perform ambient environment detection using an RGB-D camera, acquire an RGB image and a depth image, and determine a continuous image frame based on the RGB image and the depth image;
the initial pose determining module 32 is configured to calculate a continuous image frame by using a sparse direct method to obtain an initial pose of the current frame;
the accurate pose determining module 33 is configured to calculate and optimize the initial pose of the current frame by using a feature point method to obtain the accurate pose of the current frame;
the key frame selecting module 34 is configured to select a key frame according to the accurate pose of the current frame to obtain a sequence of key frames;
and a local mapping module 35, configured to perform local mapping based on the sequence of key frames to generate a local map.
Further, the frame image acquiring module 31 includes:
a feature extraction unit 311 for extracting ORB features of each RGB image;
a coordinate calculation unit 312, configured to calculate spatial coordinates of the feature points according to the depth image corresponding to the RGB image;
and an image redrawing unit 313 for obtaining an image frame based on the ORB features and the spatial coordinates.
Further, the initial pose determination module 32 includes:
a target frame image determining unit 321, configured to take the image frame that shares the most common map points with the current frame as the reference key frame of the current frame;
the initial pose determining unit 322 is configured to determine, for the reference key frame of the current frame, the initial pose corresponding to the current frame by calculating the minimized photometric error between the current frame image and the reference key frame.
And the initial pose saving unit 323 is configured to save the initial pose corresponding to the current frame image as the initial pose of the current frame.
Further, the accurate pose determination module 33 includes:
and the accurate pose determining unit 331 is configured to calculate a reprojection error between the image frame and the current frame image based on the initial pose for the current frame image, and re-optimize the positioning pose by selecting the minimized reprojection error to obtain the accurate pose of the current frame.
Further, the key frame selecting module 34 includes:
a motion amplitude determining unit 341, configured to calculate, for a current frame, a motion amplitude of the current frame relative to a previous key frame based on the accurate pose;
a key frame determining unit 342, configured to insert the current frame into the sequence of key frames if the motion amplitude exceeds a preset distance threshold;
further, the motion amplitude determination unit 341 includes:
a pose analyzing subunit 3411, configured to convert the precise pose of the current frame into a corresponding rotation matrix and displacement offset, and convert the rotation matrix into a rotational euler angle;
the metric determination subunit 3412 is configured to calculate a two-norm of the rotational euler angle and the displacement offset, and use the obtained value as a metric corresponding to the motion amplitude of the current frame relative to the previous key frame image.
With respect to the depth image-based robot positioning and mapping apparatus in the above embodiment, the specific manner in which each module performs operations has been described in detail in the embodiment related to the method, and will not be described in detail here.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 that are communicatively connected to each other via a system bus. It is noted that only a computer device 4 having the components memory 41, processor 42, and network interface 43 is shown, but it should be understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The computer equipment can carry out man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
The memory 41 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or a memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory card (Flash Card) provided on the computer device 4. Of course, the memory 41 may also include both internal and external storage devices of the computer device 4. In this embodiment, the memory 41 is generally used for storing the operating system installed on the computer device 4 and various types of application software, such as the program code of the depth image-based robot positioning and mapping method. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the program code stored in the memory 41 or process data, for example, execute the program code of the depth image-based robot positioning and mapping method.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The present application further provides another embodiment, namely a computer-readable storage medium storing a computer program, the computer program being executable by at least one processor to cause the at least one processor to perform the steps of the depth image-based robot positioning and mapping method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, but not restrictive, of the broad invention, and that the appended drawings illustrate preferred embodiments of the invention and do not limit the scope of the invention. This application is capable of embodiments in many different forms and is provided for the purpose of enabling a thorough understanding of the disclosure of the application. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to one skilled in the art that the present application may be practiced without modification or with equivalents of some of the features described in the foregoing embodiments. All equivalent structures made by using the contents of the specification and the drawings of the present application are directly or indirectly applied to other related technical fields and are within the protection scope of the present application.

Claims (10)

1. A robot positioning and mapping method based on depth images is characterized by comprising the following steps:
detecting the surrounding environment by using an RGB-D camera, acquiring an RGB image and a depth image, and determining continuous image frames based on the RGB image and the depth image;
calculating the continuous image frames by adopting a sparse direct method to obtain the initial pose of the current frame;
calculating and optimizing the initial pose of the current frame by adopting a characteristic point method to obtain the accurate pose of the current frame;
selecting a key frame according to the accurate pose of the current frame to obtain a key frame sequence;
and carrying out local mapping and optimization based on the key frame sequence to generate an environment map.
2. The depth-image based robot positioning and mapping method of claim 1, wherein the determining successive image frames based on the RGB image and depth image comprises:
extracting ORB characteristics of each RGB image;
calculating the space coordinates of the feature points according to the depth images corresponding to the RGB images;
and constructing and obtaining the image frame based on the ORB characteristics and the space coordinates.
3. The depth image-based robot positioning and mapping method according to claim 1, wherein the calculating the continuous image frames by using a sparse direct method to obtain the initial pose of the current frame comprises:
taking the image frame that shares the most common map points with the current frame as a reference key frame of the current frame;
and, for the reference key frame of the current frame, determining the initial pose corresponding to the current frame by calculating the minimized photometric error between the current frame image and the reference key frame.
4. The depth image-based robot positioning and mapping method of claim 3, wherein the calculating and optimizing the initial pose of the current frame by using a feature point method to obtain the accurate pose of the current frame comprises:
for the current frame image, calculating the reprojection error between the image frame and the current frame image based on the initial pose, and re-optimizing the pose by minimizing the reprojection error to obtain the accurate pose of the current frame.
5. The depth image-based robot positioning and mapping method according to claim 3 or 4, wherein the obtaining of the sequence of key frames by key frame selection according to the accurate pose of the current frame comprises:
calculating the motion amplitude of the current frame relative to the last key frame based on the accurate pose for the current frame;
and if the motion amplitude exceeds a preset distance threshold, inserting the current frame into the key frame sequence.
6. The depth-image-based robot positioning and mapping method of claim 5, wherein the calculating, for the current frame, the motion amplitude of the current frame relative to the previous key frame based on the precise pose comprises:
converting the accurate pose of the current frame into a corresponding rotation matrix and displacement offset, and converting the rotation matrix into a rotation Euler angle;
and calculating the two-norm of the rotational Euler angle and the displacement offset, and taking the obtained value as a measurement value corresponding to the motion amplitude of the current frame relative to the previous key frame.
7. A robot positioning and mapping device based on depth images is characterized by comprising:
the frame image acquisition module is used for detecting the surrounding environment by using an RGB-D camera, acquiring an RGB image and a depth image, and determining continuous image frames based on the RGB image and the depth image;
the initial pose determining module is used for calculating the continuous image frames by adopting a sparse direct method to obtain the initial pose of the current frame;
the accurate pose determining module is used for calculating and optimizing the initial pose of the current frame by adopting a characteristic point method to obtain the accurate pose of the current frame;
the key frame selecting module is used for selecting key frames according to the accurate pose of the current frame to obtain a key frame sequence;
and the local mapping module is used for carrying out local mapping based on the key frame sequence to generate a local map.
8. The depth-image-based robot positioning and mapping apparatus according to claim 7, wherein the frame image obtaining module comprises:
a feature extraction unit for extracting an ORB feature of each of the RGB images;
the coordinate calculation unit is used for calculating the space coordinates of the feature points according to the depth images corresponding to the RGB images;
and the image redrawing unit is used for obtaining the image frame based on the ORB characteristics and the space coordinates.
9. A computer device comprising a memory in which a computer program is stored and a processor which, when executing the computer program, carries out the steps of the depth-image based robot positioning and mapping method according to any of claims 1 to 6.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the depth-image based robot positioning and mapping method according to any one of claims 1 to 6.
CN201911114259.8A 2019-11-14 2019-11-14 Robot positioning and mapping method and device based on depth image Active CN110866496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911114259.8A CN110866496B (en) 2019-11-14 2019-11-14 Robot positioning and mapping method and device based on depth image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911114259.8A CN110866496B (en) 2019-11-14 2019-11-14 Robot positioning and mapping method and device based on depth image

Publications (2)

Publication Number Publication Date
CN110866496A true CN110866496A (en) 2020-03-06
CN110866496B CN110866496B (en) 2023-04-07

Family

ID=69654227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911114259.8A Active CN110866496B (en) 2019-11-14 2019-11-14 Robot positioning and mapping method and device based on depth image

Country Status (1)

Country Link
CN (1) CN110866496B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109816769A (en) * 2017-11-21 2019-05-28 深圳市优必选科技有限公司 Scene based on depth camera ground drawing generating method, device and equipment
WO2019205853A1 (en) * 2018-04-27 2019-10-31 腾讯科技(深圳)有限公司 Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
CN109816696A (en) * 2019-02-01 2019-05-28 西安全志科技有限公司 A kind of robot localization and build drawing method, computer installation and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张国良 et al.: "Fast Binocular SLAM Algorithm Fusing the Direct Method and the Feature Method", 《机器人》 (Robot) *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445526A (en) * 2020-04-22 2020-07-24 清华大学 Estimation method and estimation device for pose between image frames and storage medium
CN111445526B (en) * 2020-04-22 2023-08-04 清华大学 Method, device and storage medium for estimating pose of image frame
CN111583331A (en) * 2020-05-12 2020-08-25 北京轩宇空间科技有限公司 Method and apparatus for simultaneous localization and mapping
CN111583331B (en) * 2020-05-12 2023-09-01 北京轩宇空间科技有限公司 Method and device for simultaneous localization and mapping
CN111882494B (en) * 2020-06-28 2024-05-14 广州文远知行科技有限公司 Pose graph processing method and device, computer equipment and storage medium
CN111882494A (en) * 2020-06-28 2020-11-03 广州文远知行科技有限公司 Pose graph processing method and device, computer equipment and storage medium
CN111829522A (en) * 2020-07-02 2020-10-27 浙江大华技术股份有限公司 Instant positioning and map construction method, computer equipment and device
CN114199243A (en) * 2020-09-18 2022-03-18 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN112288811A (en) * 2020-10-30 2021-01-29 珠海市一微半导体有限公司 Key frame fusion control method for multi-frame depth image positioning and visual robot
CN112630745A (en) * 2020-12-24 2021-04-09 深圳市大道智创科技有限公司 Environment mapping method and device based on laser radar
CN112631303A (en) * 2020-12-26 2021-04-09 北京云迹科技有限公司 Robot positioning method and device and electronic equipment
CN112631303B (en) * 2020-12-26 2022-12-20 北京云迹科技股份有限公司 Robot positioning method and device and electronic equipment
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN112767481B (en) * 2021-01-21 2022-08-16 山东大学 High-precision positioning and mapping method based on visual edge features
CN112967340A (en) * 2021-02-07 2021-06-15 咪咕文化科技有限公司 Simultaneous positioning and map construction method and device, electronic equipment and storage medium
CN113108771A (en) * 2021-03-05 2021-07-13 华南理工大学 Movement pose estimation method based on closed-loop direct sparse visual odometer
CN112991449A (en) * 2021-03-22 2021-06-18 华南理工大学 AGV positioning and mapping method, system, device and medium
CN113205560B (en) * 2021-05-06 2024-02-23 Oppo广东移动通信有限公司 Calibration method, device, equipment and storage medium of multi-depth camera
CN113205560A (en) * 2021-05-06 2021-08-03 Oppo广东移动通信有限公司 Calibration method, device and equipment of multi-depth camera and storage medium
CN113190120B (en) * 2021-05-11 2022-06-24 浙江商汤科技开发有限公司 Pose acquisition method and device, electronic equipment and storage medium
CN113190120A (en) * 2021-05-11 2021-07-30 浙江商汤科技开发有限公司 Pose acquisition method and device, electronic equipment and storage medium
CN113420590A (en) * 2021-05-13 2021-09-21 北京航空航天大学 Robot positioning method, device, equipment and medium in weak texture environment
CN113420590B (en) * 2021-05-13 2022-12-06 北京航空航天大学 Robot positioning method, device, equipment and medium in weak texture environment
CN113297952A (en) * 2021-05-21 2021-08-24 哈尔滨工业大学(深圳) Measuring method and system for rope-driven flexible robot in complex environment
WO2022247286A1 (en) * 2021-05-27 2022-12-01 浙江商汤科技开发有限公司 Positioning method, apparatus, device, and storage medium
CN113379911A (en) * 2021-06-30 2021-09-10 深圳市银星智能科技股份有限公司 SLAM method, SLAM system and intelligent robot
WO2023279868A1 (en) * 2021-07-07 2023-01-12 北京字跳网络技术有限公司 Simultaneous localization and mapping initialization method and apparatus and storage medium
CN113628335A (en) * 2021-07-28 2021-11-09 深圳优艾智合机器人科技有限公司 Point cloud map construction method and device and computer readable storage medium
CN113744308A (en) * 2021-08-06 2021-12-03 高德软件有限公司 Pose optimization method, pose optimization device, electronic device, pose optimization medium, and program product
CN113744308B (en) * 2021-08-06 2024-02-20 高德软件有限公司 Pose optimization method, pose optimization device, electronic equipment, medium and program product
CN113689485A (en) * 2021-08-25 2021-11-23 北京三快在线科技有限公司 Method and device for determining depth information of unmanned aerial vehicle, unmanned aerial vehicle and storage medium
CN113674424B (en) * 2021-08-31 2023-02-03 北京三快在线科技有限公司 Method and device for drawing electronic map
CN113674424A (en) * 2021-08-31 2021-11-19 北京三快在线科技有限公司 Method and device for drawing electronic map
CN114972514A (en) * 2022-05-30 2022-08-30 歌尔股份有限公司 SLAM positioning method, device, electronic equipment and readable storage medium
CN115830110A (en) * 2022-10-26 2023-03-21 北京城市网邻信息技术有限公司 Instant positioning and map construction method and device, terminal equipment and storage medium
CN115830110B (en) * 2022-10-26 2024-01-02 北京城市网邻信息技术有限公司 Instant positioning and map construction method and device, terminal equipment and storage medium
CN117419690A (en) * 2023-12-13 2024-01-19 陕西欧卡电子智能科技有限公司 Pose estimation method, device and medium of unmanned ship
CN117419690B (en) * 2023-12-13 2024-03-12 陕西欧卡电子智能科技有限公司 Pose estimation method, device and medium of unmanned ship

Also Published As

Publication number Publication date
CN110866496B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN110866496B (en) Robot positioning and mapping method and device based on depth image
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US11002840B2 (en) Multi-sensor calibration method, multi-sensor calibration device, computer device, medium and vehicle
CN110866497B (en) Robot positioning and mapping method and device based on dotted line feature fusion
CN109242913B (en) Method, device, equipment and medium for calibrating relative parameters of collector
KR101749017B1 (en) Speed-up template matching using peripheral information
CN107633526B (en) Image tracking point acquisition method and device and storage medium
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN113223064A (en) Method and device for estimating scale of visual inertial odometer
US20210304411A1 (en) Map construction method, apparatus, storage medium and electronic device
CN113610702B (en) Picture construction method and device, electronic equipment and storage medium
CN110956131B (en) Single-target tracking method, device and system
CN113763466A (en) Loop detection method and device, electronic equipment and storage medium
CN117132649A (en) Ship video positioning method and device for artificial intelligent Beidou satellite navigation fusion
CN113628284B (en) Pose calibration data set generation method, device and system, electronic equipment and medium
Nozawa et al. Indoor human navigation system on smartphones using view-based navigation
CN114972465A (en) Image target depth detection method and device, electronic equipment and storage medium
CN114187509A (en) Object positioning method and device, electronic equipment and storage medium
CN113763468A (en) Positioning method, device, system and storage medium
CN114387405B (en) Machine vision-based method and device for quickly positioning tiny features across orders of magnitude
CN115578432B (en) Image processing method, device, electronic equipment and storage medium
CN116576866B (en) Navigation method and device
CN115619958B (en) Target aerial view generation method and device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant