WO2021004416A1 - Method and apparatus for establishing a beacon map based on visual beacons (一种基于视觉信标建立信标地图的方法、装置) - Google Patents

Method and apparatus for establishing a beacon map based on visual beacons (一种基于视觉信标建立信标地图的方法、装置)

Info

Publication number
WO2021004416A1
Authority
WO
WIPO (PCT)
Prior art keywords: type, pose, beacon, image frame, visual beacon
Application number: PCT/CN2020/100306
Other languages: English (en), French (fr)
Inventor: 唐恒博
Original Assignee: 杭州海康机器人技术有限公司 (Hangzhou Hikrobot Technology Co., Ltd.)
Application filed by 杭州海康机器人技术有限公司
Priority to JP2022500643A (patent JP7300550B2)
Priority to KR1020227002958A (patent KR20220025028A)
Priority to EP20837645.9A (patent EP3995988A4)
Publication of WO2021004416A1

Classifications

    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C21/383 Creation or updating of map data characterised by the type of data: indoor data
    • G01C21/3837 Creation or updating of map data characterised by the source of data: data obtained from a single source
    • G06K7/10722 Sensing record carriers by optical radiation: photodetector array or CCD scanning
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; special marks for positioning
    • G06V20/10 Scenes; scene-specific elements: terrestrial scenes
    • G06T2207/30244 Indexing scheme for image analysis: camera pose

Definitions

  • This application relates to the field of computer vision, and in particular to a method and device for establishing a beacon map based on visual beacons.
  • Position and attitude are collectively called pose: position refers to the coordinate position of an object in a three-dimensional space coordinate system, and attitude refers to the rotation angles of the object relative to the three coordinate axes of this coordinate system, namely the pitch angle, yaw angle, and roll angle.
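  • in common notation (added here for reference, not part of the original text), a pose combining position and attitude can be written as the homogeneous transform $T = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}$, where $p = (x, y, z)^{T}$ is the position and $R = R_z(\mathrm{yaw})\,R_y(\mathrm{pitch})\,R_x(\mathrm{roll})$ encodes the attitude as rotations about the three coordinate axes.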
  • Visual beacon navigation technology is one of the common robot navigation solutions.
  • the robot realizes positioning based on a beacon map constructed using visual beacons.
  • Existing visual beacon mapping usually requires a physical measurement of the working environment in advance; based on the measurement results, the position and attitude at which each visual beacon is to be deployed are determined manually.
  • the relevant personnel then arrange the visual beacons in the environment according to these planned poses; further, when generating the beacon map, the map is formed from the planned positions and attitudes of the deployed visual beacons.
  • This application provides a method and device for establishing a beacon map based on visual beacons, so as to reduce the dependence on the placement accuracy of visual beacons.
  • the specific plan is as follows:
  • in a first aspect, an embodiment of the present application provides a method for establishing a beacon map based on a visual beacon, including: acquiring target detection data including at least an image frame, and target measurement data;
  • the image frame is an image collected by the robot during image collection of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body of the robot in the odometer coordinate system at the moment when the image frame is collected;
  • based on the target measurement data, obtaining the initialization result of the first type of pose corresponding to the image frame; the first type of pose is the pose of the robot body in the world coordinate system at the moment when the image frame is collected;
  • based on the target detection data, obtaining the initialization result of the second type of pose corresponding to the visual beacon; the second type of pose is the pose of the visual beacon in the world coordinate system;
  • performing, on the first type of residuals about the image frame acquired based on the target measurement data and the second type of residuals about the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first type of pose as a state variable and the second type of pose as a state variable, to obtain the optimization result of the second type of pose;
  • the first type of residual is the constrained residual of the image frame with respect to the odometer, and the second type of residual is the constrained residual of the reprojection of the feature points of the visual beacon on the image frame;
  • the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the second type of pose is the initialization result of the second type of pose;
  • building a beacon map based on the obtained optimization result of the second type of pose.
  • in a second aspect, an embodiment of the present application provides an apparatus for establishing a beacon map based on a visual beacon, including:
  • a data acquisition module, used to acquire target detection data including at least an image frame, and target measurement data; the image frame is an image collected by the robot during image collection of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body of the robot in the odometer coordinate system at the moment when the image frame is collected;
  • an initialization module, used to obtain, based on the target measurement data, the initialization result of the first type of pose corresponding to the image frame, where the first type of pose is the pose of the robot body in the world coordinate system at the moment when the image frame is collected; and to obtain, based on the target detection data, the initialization result of the second type of pose corresponding to the visual beacon, where the second type of pose is the pose of the visual beacon in the world coordinate system;
  • a first joint optimization module, configured to perform, on the first type of residuals about the image frame acquired based on the target measurement data and the second type of residuals about the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first type of pose as a state variable and the second type of pose as a state variable, to obtain the optimization result of the second type of pose; the first type of residual is the constrained residual of the image frame with respect to the odometer, and the second type of residual is the constrained residual of the reprojection of the feature points of the visual beacon on the image frame; the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the second type of pose is the initialization result of the second type of pose;
  • a mapping module, used to build a beacon map based on the obtained optimization result of the second type of pose.
  • an embodiment of the present application provides an electronic device for establishing a beacon map based on a visual beacon.
  • the electronic device includes a memory and a processor, wherein:
  • the memory is used to store computer programs
  • the processor is configured to execute a program stored in the memory to implement the method for establishing a beacon map based on a visual beacon provided in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, it implements the method for establishing a beacon map based on a visual beacon provided in the first aspect.
  • an embodiment of the present application provides a computer program that, when executed by a processor, implements the method for establishing a beacon map based on a visual beacon provided in this application.
  • in the above solutions, the target measurement data and target detection data are used to jointly construct an optimization problem through the odometer constraint and the beacon observation constraint (including at least the reprojection constraint of all feature points of the visual beacon on the key frame); solving it yields the global pose of each visual beacon, which is then used to construct the beacon map. Since the global pose of the visual beacon used for mapping is determined by this calculation rather than by how the beacon was physically placed, the solution reduces the requirements on the mapping data set: it relies neither on the global position or angle information used when the visual beacon was deployed, nor on the beacon type or any other prior information, while still ensuring the efficiency and accuracy of mapping.
  • this solution can reduce the dependence on the placement accuracy of the visual beacon.
  • in addition, because this solution does not rely on the pose information with which the visual beacons were deployed when constructing the map, it can reduce the requirements for environmental measurement or omit the measurement process entirely, lower the difficulty of deploying visual beacons, and reduce the consumption of manpower and time.
  • FIG. 1 is a flowchart of a method for establishing a beacon map based on a visual beacon according to an embodiment of the application.
  • FIG. 2 is a schematic diagram of data collection.
  • FIG. 3 is a schematic diagram of the association relationships among the data involved in steps 101-110.
  • FIG. 4 is a schematic diagram of an apparatus for establishing a beacon map based on a visual beacon according to an embodiment of the application.
  • Existing visual beacon mapping usually requires a physical measurement of the working environment in advance; based on the measurement results, the position and attitude at which each visual beacon is to be deployed are determined manually.
  • the relevant personnel then arrange the visual beacons in the environment according to these planned poses; further, when generating the beacon map, the map is formed from the planned positions and attitudes of the deployed visual beacons.
  • the main problem of the existing visual beacon mapping methods is a high dependence on the placement accuracy of the visual beacons during mapping: once the placement accuracy of the visual beacons is low, the accuracy of the resulting map is low as well.
  • this application provides a method and device for establishing a beacon map based on visual beacons.
  • the following first introduces a method for establishing a beacon map based on a visual beacon provided by this application.
  • in this application, five coordinate systems are involved: the world coordinate system, the camera coordinate system, the beacon coordinate system, the odometer coordinate system, and the robot body coordinate system.
  • the world coordinate system is the absolute coordinate system of the system: a reference coordinate system chosen in the environment to describe the position of the camera and of any object in the environment;
  • the camera coordinate system takes the camera's optical center as its origin; its x-axis and y-axis are parallel to the X and Y axes of the image, and its z-axis is the camera's optical axis, perpendicular to the imaging plane. The space rectangular coordinate system thus formed is also called camera coordinates, and it is a three-dimensional coordinate system.
  • the intersection of the optical axis and the image plane is the origin of the image coordinate system; the rectangular coordinate system formed with the X and Y axes of the image is the image coordinate system, which is a two-dimensional coordinate system.
  • the beacon coordinate system is used to describe the three-dimensional spatial position of the feature points in the visual beacon, for example, the positions of the positioning feature points on the four corners of the two-dimensional code.
  • Each visual beacon has its own beacon coordinate system.
  • the robot body coordinate system is: the coordinate system set for the robot body.
  • the origin of the robot body coordinate system can coincide with the center of the robot, but of course it is not limited to this.
  • the odometer coordinate system is a coordinate system used to describe the motion of the robot body in a two-dimensional plane; it is determined by the pose of the robot body at the initial moment at which pose calculation starts. It should be emphasized that the odometer coordinate system is the robot body coordinate system at the initial moment, and is used to express the measured relative motion of the robot: at any subsequent moment after the initial moment, the pose of the robot body relative to the odometer coordinate system is specifically the pose change of the robot body at that moment relative to the initial moment.
  • the initial moment may be: the starting moment of the robot, or any moment during the movement of the robot.
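  • for reference (a standard formulation assumed here, not quoted from the original): if $T_{ob}(t)$ denotes the pose of the robot body coordinate system b in the odometer coordinate system o at time t, then $T_{ob}(t_0) = I$ at the initial moment $t_0$, so $T_{ob}(t)$ is itself the pose change of the robot body at time t relative to the initial moment; the measured relative motion between two moments $t_1$ and $t_2$ is $T_{ob}(t_1)^{-1}\,T_{ob}(t_2)$, which is the quantity used by the odometer constraints introduced later.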
  • Odometer: an accumulative positioning algorithm for mobile robots, which can be implemented in a variety of ways depending on the sensors and mechanical structure used; its output is the robot's pose in the two-dimensional plane of the odometer coordinate system.
  • PnP (Perspective-n-Point): n-point perspective, which calculates the pose of the camera from the image coordinates of a set of n feature points and their three-dimensional positions; in this application, n is at least 4 distinct feature points.
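  • in the notation used later in this application (the projection function Proj(·) and camera intrinsic matrix A), PnP can be stated as the standard minimization (added for reference, not quoted from the original): $\min_{R,\,t} \sum_{k=1}^{n} \big\| \mathrm{Proj}\big(A\,(R\,P_k + t)\big) - u_k \big\|^2$, where $P_k$ is the known three-dimensional position of feature point k, $u_k$ its measured image coordinates, and $(R, t)$ the pose to be solved.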
  • this application provides a method for establishing a beacon map based on visual beacons.
  • a certain number of visual beacons are randomly arranged in the physical environment of the beacon map to be built in advance, and the visual beacons include feature points for positioning.
  • the method for establishing a beacon map based on visual beacons may include the following steps:
  • acquire target detection data including at least an image frame, and target measurement data; the image frame is an image collected by the robot during image collection of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body of the robot in the odometer coordinate system at the moment when the image frame is collected;
  • based on the target measurement data, obtain the initialization result of the first type of pose corresponding to the image frame; the first type of pose is the pose of the robot body in the world coordinate system at the moment when the image frame is collected;
  • based on the target detection data, obtain the initialization result of the second type of pose corresponding to the visual beacon; the second type of pose is the pose of the visual beacon in the world coordinate system;
  • perform, on the first type of residuals about the image frame acquired based on the target measurement data and the second type of residuals about the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first type of pose as a state variable and the second type of pose as a state variable, to obtain the optimization result of the second type of pose; the first type of residual is the constrained residual of the image frame with respect to the odometer, and the second type of residual is the constrained residual of the reprojection of the feature points of the visual beacon on the image frame; the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the second type of pose is the initialization result of the second type of pose;
  • build a beacon map based on the obtained optimization result of the second type of pose.
  • the number of image frames can be one or more.
  • in the above solution, the target measurement data and target detection data are used to jointly construct an optimization problem through the odometer constraint and the beacon observation constraint (including at least the reprojection constraint of all feature points of the visual beacon on the key frame); solving it yields the global pose of each visual beacon, which is then used to construct the beacon map. Since the global pose of the visual beacon used for mapping is determined by this calculation rather than by how the beacon was physically placed, the solution reduces the requirements on the mapping data set: it relies neither on the global position or angle information used when the visual beacon was deployed, nor on the beacon type or any other prior information, while still ensuring the efficiency and accuracy of mapping.
  • it can be seen that this solution can reduce the dependence on the placement accuracy of the visual beacon.
  • in addition, because this solution does not rely on the pose information with which the visual beacons were deployed when constructing the map, it can reduce the requirements for environmental measurement or omit the measurement process entirely, lower the difficulty of deploying visual beacons, and reduce the consumption of manpower and time.
  • in an implementation, the method further includes:
  • obtaining the third type of residuals about the visual beacon; the third type of residual is a constrained residual about the local position of the visual beacon in the image frame;
  • for the first type of residuals and the third type of residuals, performing a joint optimization with the first type of pose as a state variable and the global position in the second type of pose as a state variable, to obtain the preliminary optimization result of the first type of pose and the preliminary optimization result of the global position in the second type of pose; the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the global position in the second type of pose is the initialization result of the global position in the second type of pose;
  • correspondingly, obtaining the initialization result of the second type of pose corresponding to the visual beacon includes:
  • acquiring the initialization result of the global attitude in the second type of pose corresponding to the visual beacon.
  • optionally, the number of the image frames is multiple;
  • in this case, performing the joint optimization with the first type of pose as a state variable and the second type of pose as a state variable to obtain the optimization result of the second type of pose is carried out over all image frames, and may include the following.
  • the process of obtaining the first type of residuals about the image frames based on the target measurement data may include:
  • determining the first type of relative pose and the second type of relative pose between adjacent image frames; both are the change in the pose of the robot body at the moment an image frame is collected relative to the moment the adjacent image frame is collected. The first type of relative pose is determined from the state variables of the first type of pose, and the second type of relative pose is determined from the target measurement data;
  • the first type of residuals of the individual image frames are accumulated to obtain the first type of residuals for all image frames.
  • the process of obtaining the third type of residuals about the visual beacon may include:
  • obtaining the local position of the visual beacon in the camera coordinate system corresponding to the image frame from the camera extrinsic pose, the current state vector of the first type of pose corresponding to the image frame, and the current state vector of the global position in the second type of pose corresponding to the visual beacon;
  • at the first iteration, the current state vector of the first type of pose corresponding to the image frame is the initialization result of the first type of pose corresponding to the image frame, and the current state vector of the global position in the second type of pose corresponding to the visual beacon is the vector of the initialization result of the global position in the second type of pose corresponding to the visual beacon.
  • the process of determining the second type of residuals about the image frames and the visual beacons acquired based on the target detection data may include:
  • for any feature point of any visual beacon in each image frame, obtaining the reprojection error of that feature point on its image frame as the difference between (a) the projection function, determined by the camera intrinsic matrix, the current state variable of the first type of pose corresponding to the image frame, and the current state variable of the second type of pose corresponding to the visual beacon, applied to the feature point's local position in the beacon coordinate system, and (b) the measured image coordinates of the feature point;
  • the reprojection errors of the individual feature points of the individual visual beacons are accumulated to obtain the constrained residuals of the reprojection of all feature points of all visual beacons on the image frames.
  • obtaining the initialization result of the first type of pose corresponding to the image frame based on the target measurement data may include: taking the pose of the robot body in the odometer coordinate system at the moment the image frame is collected as the initialization result of the first type of pose corresponding to the image frame;
  • obtaining the initialization result of the second type of pose corresponding to the visual beacon may include:
  • using the camera pose corresponding to the image frame to which the visual beacon belongs when the visual beacon is first observed, together with the target detection data, to obtain the initialization result of the pose of the visual beacon in the world coordinate system;
  • the target detection data includes the observed measurement value of the visual beacon, which is obtained from the local pose of the visual beacon relative to the camera coordinate system;
  • the local pose of the visual beacon relative to the camera coordinate system is obtained from the image coordinates of the feature points of the visual beacon, the camera intrinsic parameters, and the beacon parameters;
  • the camera pose corresponding to the image frame to which the visual beacon belongs is: the camera pose when the camera captured that image frame.
  • obtaining, respectively, the initialization result of the position of the visual beacon in the world coordinate system and the initialization result of the attitude of the visual beacon in the world coordinate system from the camera pose corresponding to the image frame to which the visual beacon belongs when it is first observed and from the target detection data includes:
  • obtaining the initialization result of the position of the visual beacon in the world coordinate system from the vector of the first type of pose corresponding to the image frame, the vector of the camera extrinsic parameters, and the vector of the observed measurement value; the camera pose corresponding to the image frame to which the visual beacon belongs when it is first observed is obtained from the first type of pose vector corresponding to that image frame and the pose vector of the camera extrinsic parameters;
  • obtaining the initialization result of the attitude of the visual beacon in the world coordinate system from the global rotation vector of the robot body coordinate system, the rotation vector of the camera extrinsic parameters, and the local rotation vector of the visual beacon in the camera coordinate system;
  • the global rotation vector of the robot body coordinate system is obtained from the initialization result of the first type of pose;
  • the rotation vector of the camera extrinsic parameters is obtained from the pose of the camera extrinsic parameters;
  • the local rotation vector of the visual beacon in the camera coordinate system is obtained from the local pose of the visual beacon relative to the camera coordinate system.
  • acquiring target detection data including at least image frames, and target measurement data, may include: collecting images of each visual beacon arranged in the physical environment of the beacon map to be built at least once, to obtain target detection data including at least the image frames; the target measurement data is measured at the same moments at which the image frames are collected.
  • optionally, the number of the image frames is multiple;
  • after acquiring the target detection data including at least the image frames and the target measurement data, the method further includes: filtering key frames from the multiple image frames according to any one of the following conditions or a combination thereof:
  • when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is used as a key frame;
  • when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is used as a key frame;
  • the first frame is a key frame.
  • correspondingly, obtaining the initialization result of the first type of pose corresponding to the image frame includes: obtaining the initialization result of the first type of pose corresponding to each key frame;
  • the first type of residuals about the image frames obtained based on the target measurement data are: the first type of residuals about the key frames acquired based on the target measurement data;
  • the second type of residual is the constrained residual of the reprojection of the feature points of the visual beacon on the key frame.
  • the following uses key frames as the images used for mapping to introduce in detail each step of the method for building a beacon map based on a visual beacon provided in this application. It should be noted that when key frames are not used, the images used for mapping are the image frames in the raw data, and the steps of the mapping process are similar to the corresponding steps when key frames are used.
  • a certain number of visual beacons are randomly arranged in the physical environment of the beacon map to be built, and the visual beacons include feature points for positioning.
  • FIG. 1 is a flowchart of a method for establishing a beacon map based on a visual beacon provided by an embodiment of the application. The method includes:
  • Step 101: Obtain target detection data including at least image frames, and target measurement data.
  • the image frames are images collected by the robot during image collection of the pre-arranged visual beacons; the target measurement data includes: the pose of the robot body of the robot in the odometer coordinate system at the moment each image frame is collected. It is understandable that the number of image frames included in the target detection data can be one or more, and the image frames include at least an image of a visual beacon; correspondingly, the target measurement data includes the pose of the robot body in the odometer coordinate system at the moments when the one or more image frames are collected.
  • FIG. 2 is a schematic diagram of data collection. The robot is controlled to move so that it uses at least one camera to collect images of each visual beacon arranged in the physical environment of the beacon map to be built, with each visual beacon collected at least once; through this image collection, target detection data including at least the image frames is obtained. A wheel encoder is used at the same time to obtain the target measurement data, which includes at least the odometer measurement of the robot body at the moment each image frame is collected, namely the pose of the robot body in the odometer coordinate system. There is no restriction on the robot's moving trajectory; it is sufficient that every visual beacon is collected at least once. In FIG. 2, a total of 4 visual beacons are arranged, and the robot's moving trajectory is the odometer trajectory.
  • the raw data collected in the data collection process includes target measurement data and target detection data.
  • the target measurement data includes the pose of the robot body in the odometer coordinate system;
  • the target detection data includes the collected image frames and the visual beacon detection results: the beacon ID and the image coordinates of each feature point of the visual beacon.
  • system parameters can be obtained in advance, including: camera intrinsic parameters (optical center position, lens focal length, distortion coefficients, etc.), camera extrinsic parameters (the relative pose between the camera and the robot body), and beacon parameters (the local coordinate position of each feature point in the beacon coordinate system).
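  • as an illustration only, these pre-obtained system parameters might be organized as follows (the names and structure below are assumptions, not the patent's data layout):

```python
# Illustrative container for the pre-obtained system parameters described
# above: camera intrinsics, camera extrinsics, and beacon parameters.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SystemParameters:
    # camera intrinsics: optical center, focal length, distortion coefficients
    camera_matrix: np.ndarray          # 3x3 intrinsic matrix A
    dist_coeffs: np.ndarray            # e.g. (k1, k2, p1, p2, k3)
    # camera extrinsics: relative pose between camera and robot body
    T_body_camera: np.ndarray          # 4x4 homogeneous transform
    # beacon parameters: local coordinates of each feature point in the
    # beacon coordinate system, keyed by beacon ID
    beacon_points: dict = field(default_factory=dict)

params = SystemParameters(
    camera_matrix=np.array([[500.0, 0.0, 320.0],
                            [0.0, 500.0, 240.0],
                            [0.0, 0.0, 1.0]]),
    dist_coeffs=np.zeros(5),
    T_body_camera=np.eye(4),
    # a square beacon with 4 corner feature points (made-up coordinates)
    beacon_points={7: np.array([[-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                                [0.05, 0.05, 0.0], [-0.05, 0.05, 0.0]])},
)
```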
  • in the data collection process, image frames without visual beacons, together with their measurement data, are usually also collected; these likewise belong to the target detection data and the target measurement data respectively, and form part of the collected raw data.
  • Step 102: Based on the collected raw data, filter a certain number of key frames from the raw data according to the key frame filtering strategy.
  • in this step, the raw data can be filtered so as to select key frames from the image frames contained in the large amount of raw data, thereby improving the efficiency of beacon map construction.
  • that is, the current frame is taken as a key frame when the odometer translation deviation from the adjacent key frame is greater than the set translation threshold, or when the odometer rotation deviation from the adjacent key frame is greater than the set rotation threshold;
  • the first frame is a key frame.
  • specifically, the key frame filtering method may include: if there is a beacon image in the current frame, selecting this frame as a key frame; and filtering key frames according to any one of the following conditions or a combination thereof:
  • when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than the set translation threshold, the current frame is used as a key frame;
  • when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than the set rotation threshold, the current frame is used as a key frame;
  • the first frame is a key frame.
  • the filtered key frames include key frames with beacon images and key frames without beacon images.
  • the above-mentioned odometer measurement of the current frame refers to the pose of the robot body in the odometer coordinate system at the moment the current frame is collected;
  • the odometer measurement of a key frame refers to the pose of the robot body in the odometer coordinate system at the moment the key frame is collected.
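  • a minimal, self-contained sketch of this filtering strategy follows; the thresholds, data layout, and function names are illustrative assumptions rather than the patent's implementation:

```python
# Key frame filtering sketch: keep a frame if it contains a beacon image, or
# if its odometer pose deviates enough from the last key frame's odometer
# pose in translation or rotation; the first frame is always a key frame.
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def filter_key_frames(frames, trans_thresh=0.5, rot_thresh=np.deg2rad(10)):
    """frames: list of dicts with 'odom' = (x, y, yaw) and 'has_beacon'."""
    key_frames = []
    for f in frames:
        if not key_frames:                 # the first frame is a key frame
            key_frames.append(f)
            continue
        last = key_frames[-1]['odom']
        cur = f['odom']
        trans_dev = np.hypot(cur[0] - last[0], cur[1] - last[1])
        rot_dev = abs(wrap(cur[2] - last[2]))
        if f['has_beacon'] or trans_dev > trans_thresh or rot_dev > rot_thresh:
            key_frames.append(f)
    return key_frames

# synthetic trajectory: slow forward motion, a beacon seen every 7th frame
frames = [{'odom': (0.1 * i, 0.0, 0.02 * i), 'has_beacon': i % 7 == 0}
          for i in range(30)]
print(len(filter_key_frames(frames)), "key frames selected out of", len(frames))
```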
  • Step 103: Obtain the local pose of each visual beacon relative to the camera coordinate system from the key frames containing visual beacons and from the beacon parameters obtained in step 101, and add this local pose to the target detection data as the observed measurement value of the visual beacon.
  • the beacon parameters of the visual beacon include the local coordinate position of each feature point in the beacon coordinate system.
  • the local pose of the visual beacon relative to the camera coordinate system specifically refers to the pose of the visual beacon in the camera coordinate system.
  • solving the local pose of the visual beacon relative to the camera coordinate system is essentially solving a PnP (Perspective-n-Point) problem: obtaining the local pose of the visual beacon relative to the camera from the image coordinates of the beacon's feature points, the camera intrinsic parameters, and the beacon parameters. In this application, each visual beacon contains at least 4 feature points.
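  • step 103 can be prototyped with OpenCV's solvePnP; the following minimal sketch assumes a square beacon with 4 corner feature points, and the corner coordinates, intrinsics, and pixel measurements are made-up values:

```python
# PnP sketch for step 103: recover the local pose of a visual beacon relative
# to the camera from the image coordinates of its 4 corner feature points,
# the camera intrinsics, and the beacon parameters (corner local coordinates).
import cv2
import numpy as np

# beacon parameters: corner positions in the beacon coordinate system (meters)
object_points = np.array([[-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                          [0.05, 0.05, 0.0], [-0.05, 0.05, 0.0]])
# measured image coordinates of the same corners (pixels)
image_points = np.array([[295.0, 215.0], [345.0, 215.0],
                         [345.0, 265.0], [295.0, 265.0]])
camera_matrix = np.array([[500.0, 0.0, 320.0],
                          [0.0, 500.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# rvec/tvec express the beacon coordinate system m in the camera coordinate
# system c: the beacon's local pose relative to the camera (here z ~ 1 m).
print(ok, rvec.ravel(), tvec.ravel())
```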
  • Step 104: Initialize the first type of pose corresponding to each key frame and the global position in the second type of pose corresponding to each visual beacon, to obtain the initialization result of the first type of pose corresponding to the key frame and the initialization result of the global position in the second type of pose corresponding to the visual beacon; the first type of pose corresponding to a key frame is the pose of the robot body in the world coordinate system when the key frame is collected, and the second type of pose corresponding to a visual beacon is the pose of the visual beacon in the world coordinate system.
  • the initialization of the first type of pose corresponding to a key frame directly uses the measurement data, that is, the pose of the robot body in the odometer coordinate system at the moment the key frame is collected, as included in the target measurement data.
  • the initialization result of the global position in the second type of pose corresponding to a visual beacon is calculated from the camera pose corresponding to the key frame to which the visual beacon belongs when it is first observed and from the observed measurement value of the visual beacon included in the target detection data; here, the camera pose corresponding to the key frame of the visual beacon refers to the camera pose when the camera captured that key frame.
  • specifically, the first type of pose corresponding to a key frame is written as the vector $x_k = (\phi_{wb},\, p_{wb})$, where $\phi_{wb}$ is the global rotation vector of the robot body in the world coordinate system and $p_{wb}$ is the global position vector of the robot body in the world coordinate system; it is obtained by lifting the two-dimensional global pose vector of the robot body $(p_x,\, p_y,\, \theta)$, with $\theta$ the yaw angle of the robot body, from two-dimensional to three-dimensional space. The initial value of the first type of pose vector corresponding to a key frame is taken from the target measurement data.
  • $x_{bc} = (\phi_{bc},\, p_{bc})$ is the pose vector of the camera extrinsic parameters (the pose of the camera coordinate system c relative to the robot body coordinate system b), where $\phi_{bc}$ is the rotation vector of the camera extrinsics and $p_{bc}$ is the position vector of the camera extrinsics.
  • Step 105: Based on the initialization result of the first type of pose corresponding to the key frames and the initialization result of the global position in the second type of pose corresponding to the visual beacons, calculate the first type of residuals about the key frames and the third type of residuals about the visual beacons.
  • the third type of residuals are referred to as M-constraint residuals for short; the first type of residual is the constrained residual of a key frame with respect to the odometer, and the third type of residual is the constrained residual of the local position of a visual beacon in a key frame.
  • for the first type of residuals, the first type of relative pose and the second type of relative pose between adjacent key frames are determined; both are the change in the pose of the robot body at the moment a key frame is collected relative to the moment the adjacent key frame is collected. The first type of relative pose is determined from the state variables of the first type of pose, and the second type of relative pose is determined from the target measurement data. Exemplarily:
  • one implementation is to take, for adjacent key frames, the difference between the first type of relative pose and the second type of relative pose, which can be expressed as:

    $e_{o,\langle i\rangle} = \big(x_{k,i+1} \ominus x_{k,i}\big) - \big(\hat{x}_{o,i+1} \ominus \hat{x}_{o,i}\big)$

  • where $e_{o,\langle i\rangle}$ is the residual term of the first type of residual of key frame i; $x_{k,i}$ and $x_{k,i+1}$ are the first-type pose state variables of key frames i and i+1, from which the first type of relative pose between adjacent key frames is obtained; $\hat{x}_{o,i}$ and $\hat{x}_{o,i+1}$ are the poses of the robot body in the odometer coordinate system when key frames i and i+1 are collected, from which the second type of relative pose between adjacent key frames is obtained; $\ominus$ denotes the relative-pose operation between two poses.
  • the process of obtaining the third type of residual of a visual beacon in a key frame includes:
  • the constrained residual of the local position of visual beacon j is: the difference between the local position of the visual beacon in the camera coordinate system corresponding to the key frame in which it is located and the observed measurement value of the visual beacon.
  • the local position of the visual beacon in the camera coordinate system corresponding to the key frame in which it is located is obtained from the camera extrinsic pose, the current state vector of the first type of pose corresponding to that key frame, and the current state vector of the global position in the second type of pose corresponding to the visual beacon;
  • at the first iteration, the current state vector of the first type of pose corresponding to the key frame is the initialization result of the first type of pose corresponding to the key frame, and the current state vector of the global position in the second type of pose corresponding to the visual beacon is the initialization result of the global position in the second type of pose corresponding to the visual beacon.
  • one implementation expresses the difference between the local position of the visual beacon in the camera coordinate system corresponding to the key frame where it is located and the observed measurement value of the visual beacon as:

    $e_{m,\langle i,j\rangle} = \hat{p}_{cm}^{\langle i,j\rangle} - p_{cm}\big(x_{k,i},\, x_{bc},\, p_{wm,j}\big)$

  • where $e_{m,\langle i,j\rangle}$ is the constrained residual of the local position of visual beacon j in key frame i; $\hat{p}_{cm}^{\langle i,j\rangle}$ is the vector of the observed measurement value of visual beacon j in key frame i; and $p_{cm}(\cdot)$ is the local position of visual beacon j in the camera coordinate system corresponding to key frame i, obtained from the current state vector $x_{k,i}$ of the first type of pose, the camera extrinsic pose $x_{bc}$, and the current state vector $p_{wm,j}$ of the global position in the second type of pose of beacon j.
  • Step 106: Perform a joint optimization on the first type of residuals of all selected key frames and the third type of residuals of all visual beacons in all key frames, to generate the preliminary optimization results of the first type of poses corresponding to all key frames and the preliminary optimization results of the global positions in the second type of poses corresponding to all visual beacons in the key frames; these provide the initial values for the subsequent joint optimization of the reprojection constraints of the beacon feature points.
  • specifically, a first objective function combining the above constrained residuals is constructed, and an iterative optimization algorithm for nonlinear optimization problems, for example the LM (Levenberg-Marquardt) algorithm, is run to convergence over multiple iterations, with the initialization results of the first type of poses corresponding to the key frames and the initialization results of the global positions in the second type of poses corresponding to the visual beacons as initial values.
  • in this optimization, the first type of poses corresponding to the key frames and the global positions in the second type of poses are the state variables; that is, the optimized state variable consists of two parts, the first type of pose and the global position in the second type of pose. The first type of poses and the global positions in the second type of poses at which the first objective function attains its minimum are solved for.
  • the first objective function can be expressed as:

    $\min_{x_k,\,p_{wm}} \; \sum_{i \in N} e_{o,\langle i\rangle}^{T}\,\Omega_{o,\langle i\rangle}\,e_{o,\langle i\rangle} \;+\; \sum_{\langle i,j\rangle \in M} e_{m,\langle i,j\rangle}^{T}\,\Omega_{m,\langle i,j\rangle}\,e_{m,\langle i,j\rangle}$

  • where $x_k$ denotes the first type of poses corresponding to the key frames; $p_{wm}$ denotes the global positions in the second type of poses corresponding to the visual beacons; N is the set of all key frames; $\Omega_{o,\langle i\rangle}$ is the constraint information matrix corresponding to the first type of residual of key frame i; $\Omega_{m,\langle i,j\rangle}$ is the constraint information matrix corresponding to the third type of residual of visual beacon j in key frame i; the values of the information matrices are related to the system error model and are usually given manually.
  • the set M, over which the third type of residuals of all visual beacons are accumulated, consists of every pair ⟨i, j⟩ of a key frame i with any visual beacon j observed in it.
  • at the first iteration, the initial value of the state variable of the first type of pose corresponding to a key frame is the initialization result of the first type of pose corresponding to that key frame obtained in step 104, and the initial value of the state variable of the global position in the second type of pose corresponding to a visual beacon is the initialization result of the global position in the second type of pose corresponding to that visual beacon obtained in step 104; in the iterations after the first, the state variables obtained in the previous iteration are used as the current state variables, and the optimized state variables are solved in each current iteration.
  • through this preliminary optimization, the first type of poses corresponding to the key frames and the global positions in the second type of poses corresponding to the visual beacons become more robust initial values, making it less likely that the subsequent joint optimization of the reprojection constraints of the beacon feature points falls into a local minimum during iteration, and helping to improve its efficiency.
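  • the following is a minimal, self-contained sketch of this first-stage joint optimization (step 106), using scipy.optimize.least_squares as the Levenberg-Marquardt solver; the planar SE(2) states, identity camera extrinsics, synthetic measurements, and all names are illustrative assumptions, not the patent's implementation:

```python
# First-stage joint optimization (step 106) sketch: odometer (first-type)
# residuals plus beacon local-position (third-type, "M") residuals, solved
# jointly over key-frame poses and beacon global positions. Planar SE(2)
# states and identity camera extrinsics are simplifying assumptions; a full
# implementation would use the proper relative-pose operation for d_state.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def world_to_body(pose, p_world):
    """Express a world-frame 2D point in the robot body frame."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    dx, dy = p_world[0] - x, p_world[1] - y
    return np.array([c * dx + s * dy, -s * dx + c * dy])

def body_to_world(pose, p_local):
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * p_local[0] - s * p_local[1],
                     y + s * p_local[0] + c * p_local[1]])

def residuals(z, odo, obs, n_kf, n_bc):
    poses = z[:3 * n_kf].reshape(n_kf, 3)     # first-type pose state variables
    beacons = z[3 * n_kf:].reshape(n_bc, 2)   # beacon global position states
    r = list(poses[0] - odo[0])               # gauge: anchor the first key frame
    for i in range(n_kf - 1):                 # first-type (odometer) residuals
        d_state = poses[i + 1] - poses[i]
        d_meas = odo[i + 1] - odo[i]
        r += [d_state[0] - d_meas[0], d_state[1] - d_meas[1],
              wrap(d_state[2] - d_meas[2])]
    for i, j, p_obs in obs:                   # third-type (M) residuals
        r += list(p_obs - world_to_body(poses[i], beacons[j]))
    return np.array(r)

# synthetic data: 3 key frames moving along x, one beacon near (2, 1);
# the two observations disagree slightly, so the optimizer must compromise
odo = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
obs = [(0, 0, np.array([2.0, 1.0])), (1, 0, np.array([1.02, 0.98]))]
n_kf, n_bc = 3, 1

# initial values: poses from the odometer (step 104); the beacon position is
# initialized from its first observation transformed into the world frame
z0 = np.concatenate([odo.ravel(), body_to_world(odo[0], obs[0][2])])
sol = least_squares(residuals, z0, method="lm", args=(odo, obs, n_kf, n_bc))
print("preliminary beacon global position:", sol.x[3 * n_kf:])
```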
  • Step 107: Initialize the global attitude (angle) in the second type of pose corresponding to each visual beacon based on the key frames containing that visual beacon.
  • the global attitude in the second type of pose corresponding to the visual beacon is initialized from the camera pose corresponding to the key frame to which the visual beacon belongs when it is first observed and from the observed measurement value included in the target detection data; it can be calculated as:

    $R(\phi_{wm}) = R(\phi_{wb})\;R(\phi_{bc})\;R(\phi_{cm})$

  • where R(·) maps a rotation vector to its rotation matrix, and $\phi_{wm}$ is the global rotation vector corresponding to the visual beacon (the rotation of the beacon coordinate system m relative to the world coordinate system w), that is, the global attitude in the second type of pose corresponding to the visual beacon;
  • $\phi_{wb}$ is the global rotation vector of the robot body coordinate system (the rotation of the robot body coordinate system b relative to the world coordinate system w);
  • $\phi_{bc}$ is the rotation vector of the camera extrinsic parameters (the rotation of the camera coordinate system c relative to the robot body coordinate system b);
  • $\phi_{cm}$ is the local rotation vector of the visual beacon in the camera coordinate system (the rotation of the beacon coordinate system m relative to the camera coordinate system c), obtained from the local pose of the visual beacon relative to the camera coordinate system computed in step 103.
  • Step 108: Based on the initialization result of the second type of pose corresponding to the visual beacon, calculate the second type of residuals about the key frames and the visual beacons.
  • the second type of residual about a visual beacon is the reprojection constraint residual of the feature points of the visual beacon on the key frame, that is, the reprojection error of the visual beacon's feature points on the key frame's camera image; it is referred to as the V constraint in this application, and can be expressed as:

    $e_{v,\langle i,j,k\rangle} = \mathrm{Proj}\big(A,\; x_{k,i},\; x_{m,j},\; {}^{m}p_{f,k}\big) - \hat{p}_{uv}^{\langle i,j,k\rangle}$

  • where $e_{v,\langle i,j,k\rangle}$ is the reprojection constraint residual of feature point k of beacon j in key frame i on the key frame, that is, the second type of residual about key frame i and visual beacon j; Proj(·) is the image projection function; A is the camera intrinsic parameter matrix; ${}^{m}p_{f,k}$ is the local position vector of feature point k under visual beacon j (the coordinate position of feature point k in the beacon coordinate system m, that is, a beacon parameter); $\hat{p}_{uv}^{\langle i,j,k\rangle}$ is the measured image coordinate vector of feature point k in key frame i (the position of feature point k in the image coordinate system f, that is, the image coordinate measurement value of feature point k);
  • $x_{m,j}$ is the second type of pose of visual beacon j; its initial value is obtained from the preliminary optimization result of the beacon's global position and from the initialization result of its global attitude (angle).
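  • a minimal sketch of this V-constraint residual for a single feature point is shown below, with the projection function Proj(·) written out as a plain pinhole model; the rotation-matrix state representation and all numbers are illustrative assumptions:

```python
# V-constraint (second-type residual) sketch: reproject one beacon feature
# point into a key frame image and compare with its measured pixel position.
import numpy as np

def reprojection_residual(A, R_wb, p_wb, R_bc, p_bc, R_wm, p_wm, p_m, uv_meas):
    """e_v = Proj(A, pose states, beacon feature) - measured image coords."""
    p_w = R_wm @ p_m + p_wm            # feature point: beacon frame -> world
    p_b = R_wb.T @ (p_w - p_wb)        # world -> robot body frame
    p_c = R_bc.T @ (p_b - p_bc)        # body -> camera frame (extrinsics)
    uvw = A @ p_c                      # pinhole projection
    return uvw[:2] / uvw[2] - uv_meas  # pixel coordinates minus measurement

# toy numbers: identity attitudes, beacon 2 m in front of the camera
A = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
I = np.eye(3)
z = np.zeros(3)
e = reprojection_residual(A, I, z, I, z, I, np.array([0.0, 0.0, 2.0]),
                          np.array([0.05, 0.05, 0.0]), np.array([332.5, 252.5]))
print(e)  # ~[0, 0] when the states and the measurement agree
```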
  • Step 109: Perform a joint optimization on all the selected first type of residuals and all the second type of residuals: construct a second objective function combining these two constrained residuals, and solve for the first type of poses corresponding to the key frames and the second type of poses corresponding to the visual beacons at which the second objective function attains its minimum.
  • the preliminary optimization results of the first type of poses corresponding to the key frames obtained in step 106 are used as the initial values of the first type of poses, and the preliminary optimization results of the global positions in the second type of poses obtained in step 106, together with the global attitude results obtained in step 107, are used as the initial values of the second type of poses corresponding to the visual beacons; an iterative optimization algorithm for nonlinear optimization problems, for example the LM (Levenberg-Marquardt) algorithm, is then run to convergence over multiple iterations from these initial values.
  • in this optimization, the first type of poses corresponding to the key frames and the second type of poses corresponding to the visual beacons (global position and global attitude) are the state variables; that is, the optimized state variable consists of two parts: the first type of pose corresponding to the key frames and the second type of pose corresponding to the visual beacons. The second objective function can be expressed as:

    $\min_{x_k,\,x_m} \; \sum_{i \in N} e_{o,\langle i\rangle}^{T}\,\Omega_{o,\langle i\rangle}\,e_{o,\langle i\rangle} \;+\; \sum_{\langle i,j,k\rangle \in V} e_{v,\langle i,j,k\rangle}^{T}\,\Omega_{v,\langle i,j,k\rangle}\,e_{v,\langle i,j,k\rangle}$

  • where the residual term of the first type of residual is consistent with the first type of residual in step 106; $e_{v,\langle i,j,k\rangle}$ is the reprojection constraint residual of feature point k of visual beacon j in key frame i on the key frame, that is, the second type of residual about key frame i and visual beacon j; $\Omega_{v,\langle i,j,k\rangle}$ is the information matrix corresponding to the V-constraint residual, that is, the information matrix corresponding to the second type of residual of key frame i and visual beacon j; the set V consists of all triples ⟨i, j, k⟩, so that the summation accumulates the V-constraint residuals of all feature points k of all visual beacons j.
  • at the first iteration, the initial values of the state variables are the initial values described above, namely the preliminary optimization results from step 106 together with the global attitude initialization results from step 107; in the iterations after the first, the state variables obtained in the previous iteration are used as the current state variables, and the optimized state variables are solved in the current iteration.
  • Step 110: Construct a beacon map based on the obtained optimization results of the second type of poses corresponding to the visual beacons.
  • in the above process, a stepwise joint optimization strategy is adopted: the M-constraint joint optimization is first used to estimate the global positions of the visual beacons, and then the V-constraint joint optimization is performed to estimate the global poses of the visual beacons, which reduces the probability of the optimization algorithm falling into a local minimum.
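  • in compact form, the stepwise strategy reads (a restatement in the notation above, with $\|e\|^{2}_{\Omega} := e^{T}\Omega\,e$; not a formula quoted from the original):

    Stage 1 (M constraints): $(x_k^{(0)},\, p_{wm}^{(0)}) = \arg\min_{x_k,\, p_{wm}} \sum_{i \in N} \|e_{o,\langle i\rangle}\|^{2}_{\Omega_{o,\langle i\rangle}} + \sum_{\langle i,j\rangle \in M} \|e_{m,\langle i,j\rangle}\|^{2}_{\Omega_{m,\langle i,j\rangle}}$

    Stage 2 (V constraints): $(x_k^{*},\, x_m^{*}) = \arg\min_{x_k,\, x_m} \sum_{i \in N} \|e_{o,\langle i\rangle}\|^{2}_{\Omega_{o,\langle i\rangle}} + \sum_{\langle i,j,k\rangle \in V} \|e_{v,\langle i,j,k\rangle}\|^{2}_{\Omega_{v,\langle i,j,k\rangle}}$

    with stage 2 initialized at $x_k^{(0)}$ and at $x_m = (p_{wm}^{(0)},\, \phi_{wm})$, where $\phi_{wm}$ comes from step 107.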
  • alternatively, the joint optimization of the first type of residuals of the key frames and the M-constraint residuals of the visual beacons (step 106) may be skipped; in that case, the initial values for the iterative convergence of the second objective function are the initialization results of the first type of poses corresponding to the key frames obtained in step 104, the initialization results of the global positions in the second type of poses corresponding to the visual beacons obtained in step 104, and the initialization results of the global attitudes in the second type of poses corresponding to the visual beacons obtained in step 107.
  • FIG. 3 is a schematic diagram of the association relationships among the data involved in the above steps 101-110.
  • as shown in FIG. 3: the collected raw data (image frames) are filtered to obtain the key frames;
  • from the key frames containing visual beacons, the local pose of each visual beacon relative to the camera coordinate system is obtained, including the local position and the local attitude of the visual beacon relative to the camera coordinate system, that is, the observed measurement value of the visual beacon;
  • the target measurement data is used as the initialization result of the first type of pose corresponding to each key frame, including the initialization result of the global position in the first type of pose; the first type of residuals (odometer constraints) and the third type of residuals (M constraints) are then computed, and a joint optimization taking the initialization results of the first type of poses and of the global positions in the second type of poses as initial values is performed (not shown in the figure), yielding the preliminary optimization results of the first type of poses corresponding to all key frames and of the global positions in the second type of poses corresponding to all visual beacons in all key frames;
  • using the camera intrinsic matrix, the beacon parameters, and the measured image positions of the feature points, the V-constraint residual of each feature point of each visual beacon in all key frames is calculated; for all the first type of residuals and all the second type of residuals, a joint optimization is performed with the results obtained above as initial values, yielding the optimization results of the first type of poses corresponding to all key frames and the optimization results of the second type of poses corresponding to all visual beacons in all key frames.
  • the present invention uses the target detection data and target measurement data to construct a first objective function from the odometer constraints and the constraints on the local positions of the visual beacons, and a second objective function from the odometer constraints and the image reprojection constraints of the visual beacon feature points; the global poses of the visual beacons are obtained by solving the joint optimization problems of the first and second objective functions, and a beacon map is thereby constructed. It can be seen that this solution can reduce the dependence on the placement accuracy of the visual beacons. In addition, because this solution does not rely on the pose information with which the visual beacons were deployed when constructing the map, it can reduce the requirements for environmental measurement or omit the measurement process entirely, lower the difficulty of deploying visual beacons, and reduce the consumption of manpower and time. Furthermore, by filtering the raw data to obtain key frame data and performing the optimization calculations only on the key frames, the computational efficiency of the optimization algorithm is greatly improved.
  • FIG. 4 is a schematic diagram of an apparatus for establishing a beacon map based on a visual beacon in an embodiment of the present application.
  • the device includes:
  • the data acquisition module is used to acquire target detection data including at least an image frame, and target measurement data; the image frame is an image collected by the robot during image collection of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body of the robot in the odometer coordinate system at the moment when the image frame is collected;
  • the initialization module is used to obtain, based on the target measurement data, the initialization result of the first type of pose corresponding to the image frame, where the first type of pose is the pose of the robot body in the world coordinate system at the moment when the image frame is collected; and to obtain, based on the target detection data, the initialization result of the second type of pose corresponding to the visual beacon, where the second type of pose is the pose of the visual beacon in the world coordinate system;
  • the first joint optimization module is configured to perform, on the first type of residuals about the image frame acquired based on the target measurement data and the second type of residuals about the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first type of pose as a state variable and the second type of pose as a state variable, to obtain the optimization result of the second type of pose; the first type of residual is the constrained residual of the image frame with respect to the odometer, and the second type of residual is the constrained residual of the reprojection of the feature points of the visual beacon on the image frame; the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the second type of pose is the initialization result of the second type of pose;
  • the mapping module is used to build a beacon map based on the obtained optimization result of the second type of pose.
  • the data acquisition module is specifically used to collect images of each visual beacon arranged in the physical environment of the beacon map to be built at least once, to obtain target detection data including at least the image frame, and to measure the target measurement data at the same moment the image frame is collected.
  • the data acquisition module includes an odometer measurement module for obtaining the target measurement data and a visual beacon detection module for obtaining the target detection data.
  • the initialization module includes a first initialization sub-module for obtaining, based on the target measurement data, the initialization result of the first type of pose corresponding to the image frame, and a second initialization sub-module for obtaining, based on the target detection data, the initialization result of the second type of pose corresponding to the visual beacon; wherein,
  • the first initialization sub-module is specifically configured to take the pose of the robot body in the odometer coordinate system at the moment the image frame is collected as the initialization result of the first type of pose corresponding to the image frame;
  • the second initialization sub-module is specifically used to: for any visual beacon in the image frame, obtain the initialization result of the beacon's position in the world coordinate system and the initialization result of its posture in the world coordinate system, respectively, from the camera pose corresponding to the image frame in which the visual beacon is first observed and from the target detection data;
  • the visual beacon detection data includes the observed measurement value of the visual beacon, obtained from the local pose of the visual beacon relative to the camera coordinate system;
  • the local pose of the visual beacon relative to the camera coordinate system is obtained from the image coordinates of the feature points in the visual beacon, the camera internal parameters, and the beacon parameters;
  • the camera pose corresponding to the image frame to which the visual beacon belongs is: the camera pose when the camera captured that image frame.
  • the second initialization sub-module is further specifically used to: obtain the initialization result of the visual beacon's position in the world coordinate system from the vector of the first type of pose corresponding to the image frame, the pose vector of the camera external parameters, and the vector of the observed measurement value, where the camera pose corresponding to the image frame in which the beacon is first observed is derived from the first-type pose vector of that image frame and the pose vector of the camera external parameters; and to obtain the initialization result of the visual beacon's posture in the world coordinate system from the global rotation vector of the robot body coordinate system, the rotation vector of the camera external parameters, and the local rotation vector of the visual beacon in the camera coordinate system; the global rotation vector of the robot body coordinate system is obtained from the initialization result of the first type of pose, the rotation vector of the camera external parameters is obtained from the pose of the camera external parameters, and the local rotation vector of the visual beacon in the camera coordinate system is obtained from the local pose of the visual beacon relative to the camera coordinate system.
  • the device also includes a second joint optimization module, used to: acquire a third-type residual regarding the visual beacon, the third-type residual being a constraint residual on the local position of the visual beacon in the image frame;
  • perform, on the first-type residuals and the third-type residuals, a joint optimization with the first type of pose and the global position in the second type of pose as state variables, to obtain the preliminary optimization result of the first type of pose and the preliminary optimization result of the global position in the second type of pose; wherein the initial value of the first type of pose is the initialization result of the first type of pose, and the initial value of the global position in the second type of pose is the initialization result of that global position;
  • the preliminary optimization result of the first type of pose is then taken as the initialization result of the first type of pose, and the preliminary optimization result of the global position in the second type of pose is taken as the initialization result of that global position;
  • the manner of obtaining the global posture in the second type of pose corresponding to the visual beacon includes: acquiring the initialization result of the global posture in the second type of pose corresponding to the visual beacon, based on the camera pose corresponding to the image frame in which the visual beacon is first observed and on the target detection data.
  • the number of image frames is multiple; the second joint optimization module is specifically configured to: construct a first objective function fusing the first-type residuals and the third-type residuals, and, iterating from the initialization results of the first-type poses and of the global positions in the second-type poses as initial values, obtain the preliminary optimization results of the first-type poses and of the global positions in the second-type poses that minimize the first objective function.
  • the first joint optimization module is specifically configured to: construct a second objective function fusing the first-type residuals and the second-type residuals, and, iterating from the initialization results of the first-type poses and of the second-type poses as initial values, obtain the optimization results of the second-type poses that minimize the second objective function.
  • the process by which the first joint optimization module obtains the first-type residual of an image frame based on the target measurement data includes: for each image frame, acquiring the first type of relative pose and the second type of relative pose between that image frame and its adjacent image frame, where both relative poses are the change in the pose of the robot body from the moment the adjacent image frame is collected to the moment this image frame is collected; the first type of relative pose is determined from the state variables of the first type of pose, and the second type of relative pose is determined from the target measurement data; the first-type residual of the image frame is computed from the difference between the two relative poses, and the first-type residuals of the individual image frames are accumulated to obtain the first-type residuals of all image frames.
  • the process by which the second joint optimization module obtains the third-type residual of a visual beacon includes: for any visual beacon in each image frame, obtaining the difference between the beacon's local position in the camera coordinate system corresponding to that image frame and the beacon's observed measurement value, which yields the third-type residual of that beacon; the third-type residuals of the individual beacons are accumulated to obtain the third-type residuals of all beacons. The beacon's local position in the camera coordinate system corresponding to the image frame is obtained from the camera external parameter pose corresponding to the beacon, the current state vector of the first type of pose corresponding to the image frame, and the current state vector of the global position in the second type of pose corresponding to the beacon; at the first iteration, the current state vector of the first type of pose corresponding to the image frame is the initialization result of that pose, and the current state vector of the global position in the second type of pose corresponding to the beacon is the vector of the initialization result of that global position.
  • the process by which the first joint optimization module determines the second-type residual of an image frame and a visual beacon based on the target detection data includes: for each image frame, obtaining the constraint residuals of the reprojection on that image frame of all feature points of all visual beacons in the image frame, according to the current state variables of the first type of pose corresponding to the image frame, the current state variables of the second type of pose corresponding to each visual beacon in the image frame, and the target detection data. Specifically: for any feature point of any visual beacon in each image frame, the reprojection error of that feature point on its image frame is obtained from the camera internal parameter matrix, the current state variable of the first type of pose corresponding to the image frame, the current state variable of the second type of pose corresponding to the visual beacon, and the difference between the projection function of the feature point's local position in the beacon coordinate system and the measured value of the feature point's image coordinates; the reprojection errors of the individual feature points of the individual visual beacons are accumulated to obtain the reprojection constraint residuals of all feature points of all visual beacons on the image frame.
  • the number of image frames is multiple;
  • the device also includes a key frame filtering module, used to filter key frames out of the multiple image frames, after the target detection data including at least the image frames and the target measurement data are acquired, according to any one of the following conditions or a combination thereof:
  • when a beacon image is present in the current frame, the current frame is taken as a key frame;
  • when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is taken as a key frame;
  • when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is taken as a key frame;
  • the first frame is a key frame;
  • obtaining the initialization result of the first type of pose corresponding to the image frame based on the target measurement data then includes: obtaining the initialization result of the first type of pose corresponding to the key frame based on the target measurement data;
  • the first-type residual regarding the image frame acquired based on the target measurement data includes: the first-type residual regarding the key frame acquired based on the target measurement data;
  • the second-type residual is the constraint residual of the reprojection of the feature points of the visual beacon on the key frame.
  • This application also provides an electronic device for establishing a beacon map based on visual beacons.
  • the electronic device includes a memory and a processor, wherein the memory is used to store a computer program, and the processor is used to execute the program stored in the memory to implement the method of establishing a beacon map based on visual beacons.
  • the memory may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk storage.
  • the memory may also be at least one storage device located far away from the foregoing processor.
  • the above-mentioned processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of establishing a beacon map based on visual beacons provided in this application.
  • the embodiment of the present application provides a computer program which, when executed by a processor, implements the method of establishing a beacon map based on visual beacons provided in this application.
  • the description of the device embodiments is relatively simple; for related parts, refer to the description of the method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for building a beacon map based on visual beacons. The method includes: acquiring target detection data including at least an image frame, and target measurement data; obtaining, based on the target measurement data, an initialization result of a first-type pose corresponding to the image frame; obtaining, based on the target detection data, an initialization result of a second-type pose corresponding to the visual beacon; performing, on a first-type residual regarding the image frame acquired based on the target measurement data and a second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain an optimization result of the second-type pose; and building a beacon map based on the obtained optimization result of the second-type pose. This solution reduces the dependence on visual-beacon positioning information.

Description

A method and apparatus for building a beacon map based on visual beacons
This application claims priority to Chinese patent application No. 201910603494.5, entitled "Method and apparatus for building a beacon map based on visual beacons" and filed with the China National Intellectual Property Administration on July 5, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer vision, and in particular to a method and apparatus for building a beacon map based on visual beacons.
Background
In computer vision, an object (rigid body) located in space can be described by its position and attitude, which together are called its pose. The position is the object's coordinate in a three-dimensional coordinate system, and the attitude is the object's rotation relative to the three coordinate axes of that system, comprising the pitch, yaw, and roll angles.
Visual-beacon navigation is one of the common robot navigation schemes: the robot localizes itself against a beacon map built from visual beacons. Existing visual-beacon mapping usually requires a physical measurement of the working environment in advance; the positions and attitudes at which the beacons are to be placed, that is, the poses of the beacons to be deployed, are derived manually from the measurement results, and personnel then deploy the beacons in the environment according to those poses. When the beacon map is generated, it is formed from the intended positions and attitudes of the deployed beacons.
Summary of the Invention
This application provides a method and an apparatus for building a beacon map based on visual beacons, so as to reduce the dependence on the placement accuracy of the visual beacons. The specific solutions are as follows:
In a first aspect, an embodiment of this application provides a method for building a beacon map based on visual beacons, including: acquiring target detection data including at least an image frame, and target measurement data; the image frame is an image collected by a robot while capturing images of pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
obtaining, based on the target measurement data, an initialization result of a first-type pose corresponding to the image frame; the first-type pose is the pose of the robot body in the world coordinate system at the moment the image frame is sampled;
obtaining, based on the target detection data, an initialization result of a second-type pose corresponding to the visual beacon; the second-type pose is the pose of the visual beacon in the world coordinate system;
performing, on a first-type residual regarding the image frame acquired based on the target measurement data and a second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain an optimization result of the second-type pose; the first-type residual is the constraint residual of the image frame with respect to the odometer, and the second-type residual is the constraint residual of the reprojection of the feature points of the visual beacon on the image frame; the initial value of the first-type pose is its initialization result, and the initial value of the second-type pose is its initialization result;
building a beacon map based on the obtained optimization result of the second-type pose.
In a second aspect, an embodiment of this application provides an apparatus for building a beacon map based on visual beacons, including:
a data acquisition module for acquiring target detection data including at least an image frame, and target measurement data; the image frame is an image collected by a robot while capturing images of pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
an initialization module for obtaining, based on the target measurement data, the initialization result of the first-type pose corresponding to the image frame (the first-type pose being the pose of the robot body in the world coordinate system at the moment the image frame is sampled), and for obtaining, based on the target detection data, the initialization result of the second-type pose corresponding to the visual beacon, the second-type pose being the pose of the visual beacon in the world coordinate system;
a first joint optimization module for performing, on the first-type residual regarding the image frame acquired based on the target measurement data and the second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain the optimization result of the second-type pose; the first-type residual is the constraint residual of the image frame with respect to the odometer, and the second-type residual is the constraint residual of the reprojection of the beacon's feature points on the image frame; the initial value of the first-type pose is its initialization result, and the initial value of the second-type pose is its initialization result;
a mapping module for building a beacon map based on the obtained optimization result of the second-type pose.
In a third aspect, an embodiment of this application provides an electronic device for building a beacon map based on visual beacons, the electronic device including a memory and a processor, wherein
the memory is used to store a computer program;
the processor is used to execute the program stored in the memory to implement the method of building a beacon map based on visual beacons provided in the first aspect.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program, the computer program being executed by a processor to perform the method of building a beacon map based on visual beacons provided in the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program which, when executed by a processor, implements the method of building a beacon map based on visual beacons provided in this application.
In the solution provided by the embodiments of this application, the target measurement data and the target detection data are used to construct an optimization problem jointly from the odometer constraints and the beacon observation constraints (including at least the reprojection constraints of all feature points of the visual beacons on the key frames); solving this problem yields the global poses of the visual beacons, which are then used to build the beacon map. Because the global poses of the beacons used for mapping are determined by this specific computation, the requirements on the mapping data set are lowered: the solution depends neither on the global position or angle information with which the beacons were deployed, nor on the beacon type, nor on any prior information, while guaranteeing mapping efficiency and accuracy. It can be seen that this solution reduces the dependence on the placement accuracy of the visual beacons. In addition, because the solution does not rely on the pose information with which the beacons were deployed, it can relax the requirements on environmental measurement or omit that process, lower the difficulty of deploying the beacons, and reduce the consumption of manpower and time.
Brief Description of the Drawings
To explain the technical solutions of the embodiments of this application and of the prior art more clearly, the drawings needed for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application; a person of ordinary skill in the art can obtain other embodiments from these drawings without creative effort.
FIG. 1 is a flowchart of a visual-beacon mapping method according to an embodiment of this application.
FIG. 2 is a schematic diagram of data collection.
FIG. 3 is a schematic diagram of the relationships among the image data involved in steps 101-110 below.
FIG. 4 is a schematic diagram of an apparatus for building a beacon map based on visual beacons according to an embodiment of this application.
Detailed Description
To make the purpose, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
Existing visual-beacon mapping usually requires a physical measurement of the working environment in advance; the poses of the beacons to be deployed are derived manually from the measurement results, and personnel deploy the beacons in the environment accordingly; the beacon map is then formed from the intended positions and attitudes of the deployed beacons.
It can be seen that the main problem of existing visual-beacon mapping methods is their strong dependence on the placement accuracy of the visual beacons: once the beacons are placed inaccurately, the accuracy of the map inevitably suffers.
To solve this problem of the prior art, this application provides a method and an apparatus for building a beacon map based on visual beacons. The method provided by this application is introduced first.
To make the purpose, technical means, and advantages of this application clearer, this application is further described below with reference to the drawings. To facilitate understanding of the embodiments, the concepts involved are explained as follows.
The present embodiment involves the world coordinate system, the camera coordinate system, the beacon coordinate system, the odometer coordinate system, and the robot body coordinate system, where:
the world coordinate system is the absolute coordinate system of the system: a reference coordinate system chosen in the environment to describe the positions of the camera and of any object in the environment;
the camera coordinate system is a three-dimensional Cartesian coordinate system whose origin is the optical center of the camera, whose x and y axes are parallel to the X and Y axes of the image, and whose z axis is the optical axis of the camera, perpendicular to the imaging plane. The intersection of the optical axis with the image plane is the origin of the image coordinate system, which is the two-dimensional Cartesian coordinate system formed with the X and Y axes of the image.
The beacon coordinate system describes the three-dimensional spatial positions of the feature points in a visual beacon, for example the positions of the locating feature points at the four corners of a two-dimensional code; each visual beacon has its own beacon coordinate system.
The robot body coordinate system is the coordinate system attached to the robot body; its origin may coincide with the robot center, although it is not limited to this.
The odometer coordinate system describes the motion of the robot body in a two-dimensional plane and is determined by the pose of the robot body at the initial moment at which pose computation starts. It should be emphasized that the odometer coordinate system is the robot body coordinate system at the initial moment; it expresses the measured relative motion of the robot, i.e., at each moment after the initial one, the relative pose of the robot body coordinate system with respect to the odometer coordinate system, namely the change of the robot body's pose relative to the initial moment. The initial moment may be, for example, the moment the robot is powered on, or any moment during its motion.
The odometer is an accumulative localization algorithm for mobile robots. Depending on the sensors and mechanical structure employed, it can be implemented in many ways; its output is the pose of the robot in the two-dimensional plane of the odometer coordinate system.
In the following text, "local" is a concept opposed to "global": global refers to the world coordinate system, local to any non-world coordinate system.
PnP (Perspective-n-Point) computes the pose of the camera from the image coordinates of a set of feature points (n points) and the three-dimensional positions of those feature points, where n is at least 4 distinct feature points.
To solve the problem of the prior art, this application provides a method of building a beacon map based on visual beacons. To build the beacon map, a certain number of visual beacons are arranged at random in advance in the physical environment of the beacon map to be built; the visual beacons include feature points used for localization. The method may include the following steps:
acquiring target detection data including at least an image frame, and target measurement data; the image frame is an image collected by the robot while capturing images of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
obtaining, based on the target measurement data, an initialization result of the first-type pose corresponding to the image frame; the first-type pose is the pose of the robot body in the world coordinate system at the moment the image frame is sampled;
obtaining, based on the target detection data, an initialization result of the second-type pose corresponding to the visual beacon; the second-type pose is the pose of the visual beacon in the world coordinate system;
performing, on the first-type residual regarding the image frame acquired based on the target measurement data and the second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain an optimization result of the second-type pose; the first-type residual is the constraint residual of the image frame with respect to the odometer, and the second-type residual is the constraint residual of the reprojection of the visual beacon's feature points on the image frame; the initial value of the first-type pose is its initialization result, and the initial value of the second-type pose is its initialization result;
building a beacon map based on the obtained optimization result of the second-type pose.
The number of image frames may be one or more.
In the solution provided by the embodiments of this application, the target measurement data and the target detection data are used to construct an optimization problem jointly from the odometer constraints and the beacon observation constraints (including at least the reprojection constraints of all feature points of the visual beacons on the key frames); solving this problem yields the global poses of the visual beacons, which are then used to build the beacon map. Because the global poses of the beacons used for mapping are determined by this specific computation, the requirements on the mapping data set are lowered: the solution depends neither on the global position or angle information with which the beacons were deployed, nor on the beacon type, nor on any prior information, while guaranteeing mapping efficiency and accuracy. It can be seen that this solution reduces the dependence on the placement accuracy of the visual beacons. In addition, because the solution does not rely on the pose information with which the beacons were deployed, it can relax the requirements on environmental measurement or omit that process, lower the difficulty of deploying the beacons, and reduce the consumption of manpower and time.
Optionally, in one implementation, after obtaining the initialization result of the second-type pose corresponding to the visual beacon based on the target detection data, the method further includes:
acquiring a third-type residual regarding the visual beacon, the third-type residual being a constraint residual on the local position of the visual beacon in the image frame;
performing, on the first-type residuals and the third-type residuals, a joint optimization with the first-type pose and the global position in the second-type pose as state variables, to obtain a preliminary optimization result of the first-type pose and a preliminary optimization result of the global position in the second-type pose; the initial value of the first-type pose is its initialization result, and the initial value of the global position in the second-type pose is the initialization result of that global position;
taking the preliminary optimization result of the first-type pose as the initialization result of the first-type pose;
taking the preliminary optimization result of the global position in the second-type pose as the initialization result of that global position;
the manner of obtaining the global posture in the second-type pose corresponding to the visual beacon includes:
obtaining the initialization result of the global posture in the second-type pose corresponding to the visual beacon, based on the camera pose corresponding to the image frame in which the visual beacon is first observed and on the target detection data.
Optionally, in one implementation, the number of image frames is multiple;
performing the joint optimization with the first-type pose and the global position in the second-type pose as state variables, to obtain the preliminary optimization result of the first-type pose and the preliminary optimization result of the global position in the second-type pose, may include:
constructing a first objective function fusing the first-type residuals and the third-type residuals, and, iterating from the initialization results of the first-type poses and of the global positions in the second-type poses as initial values, obtaining the preliminary optimization results of the first-type poses and of the global positions in the second-type poses that minimize the first objective function.
Optionally, in one implementation, the number of image frames is multiple;
performing the joint optimization with the first-type pose and the second-type pose as state variables, to obtain the optimization result of the second-type pose, may include:
constructing a second objective function fusing the first-type residuals and the second-type residuals, and, iterating from the initialization results of the first-type poses and of the second-type poses as initial values, obtaining the optimization results of the second-type poses that minimize the second objective function.
Optionally, in one implementation, the process of acquiring the first-type residual regarding the image frame based on the target measurement data may include:
for each image frame, acquiring the first-type relative pose and the second-type relative pose between the image frame and its adjacent image frame; both relative poses are the change in the pose of the robot body from the moment the adjacent image frame is collected to the moment this image frame is collected; the first-type relative pose is determined from the state variables of the first-type poses, and the second-type relative pose is determined from the target measurement data;
computing the first-type residual of the image frame from the difference between the first-type relative pose and the second-type relative pose;
accumulating the first-type residuals of the individual image frames to obtain the first-type residuals of all image frames.
Optionally, in one implementation, the process of acquiring the third-type residual regarding the visual beacon may include:
for any visual beacon in each image frame, obtaining the difference between the beacon's local position in the camera coordinate system corresponding to the image frame and the beacon's observed measurement value, yielding the third-type residual of that beacon;
accumulating the third-type residuals of the individual beacons to obtain the third-type residuals of all beacons;
where the beacon's local position in the camera coordinate system corresponding to the image frame is obtained from the camera external parameter pose corresponding to the beacon, the current state vector of the first-type pose corresponding to the image frame, and the current state vector of the global position in the second-type pose corresponding to the beacon; at the first iteration, the current state vector of the first-type pose corresponding to the image frame is the initialization result of that pose, and the current state vector of the global position in the second-type pose corresponding to the beacon is the vector of the initialization result of that global position.
Optionally, in one implementation, the process of determining the second-type residual regarding the image frame and the visual beacon based on the target detection data may include:
for each image frame, obtaining the constraint residuals of the reprojection on the image frame of all feature points of all visual beacons in the image frame, according to the current state variables of the second-type poses corresponding to the beacons in the image frame and the target detection data.
Optionally, in one implementation, this may include:
for any feature point of any visual beacon in each image frame, obtaining the reprojection error of the feature point on its image frame from the camera internal parameter matrix, the current state variable of the first-type pose corresponding to the image frame, the current state variable of the second-type pose corresponding to the beacon, and the difference between the projection function of the feature point's local position in the beacon coordinate system and the measured value of the feature point's image coordinates;
accumulating the reprojection errors of the individual feature points of the individual beacons to obtain the reprojection constraint residuals of all feature points of all beacons on the image frame.
Optionally, in one implementation, obtaining the initialization result of the first-type pose corresponding to the image frame based on the target measurement data may include:
taking the pose of the robot body in the odometer coordinate system at the moment the image frame is collected as the initialization result of the first-type pose corresponding to the image frame;
and obtaining the initialization result of the second-type pose corresponding to the visual beacon based on the target detection data may include:
for any visual beacon in the image frame, obtaining the initialization result of the beacon's position in the world coordinate system and the initialization result of its attitude in the world coordinate system, respectively, from the camera pose corresponding to the image frame in which the beacon is first observed and from the target detection data;
where the visual beacon detection data includes the observed measurement value of the beacon, obtained from the beacon's local pose relative to the camera coordinate system;
the beacon's local pose relative to the camera coordinate system is obtained from the image coordinates of the feature points in the beacon, the camera internal parameters, and the beacon parameters;
the camera pose corresponding to the image frame to which the beacon belongs is: the camera pose when the camera captured that image frame.
Optionally, in one implementation, this may include:
for any visual beacon in the image frame, obtaining the initialization result of the beacon's position in the world coordinate system from the vector of the first-type pose corresponding to the image frame, the pose vector of the camera external parameters, and the vector of the observed measurement value; the camera pose corresponding to the image frame in which the beacon is first observed is derived from the first-type pose vector of that image frame and the pose vector of the camera external parameters;
obtaining the initialization result of the beacon's attitude in the world coordinate system from the global rotation vector of the robot body coordinate system, the rotation vector of the camera external parameters, and the beacon's local rotation vector in the camera coordinate system;
where the global rotation vector of the robot body coordinate system is obtained from the initialization result of the first-type pose, the rotation vector of the camera external parameters from the pose of the camera external parameters, and the beacon's local rotation vector in the camera coordinate system from the beacon's local pose relative to the camera coordinate system.
Optionally, in one implementation, acquiring the target detection data including at least the image frame, and the target measurement data, may include: collecting the image of each visual beacon arranged in the physical environment of the beacon map to be built at least once, obtaining target detection data including at least the image frame; measuring the target measurement data at the same moment the image frame is collected.
Optionally, in one implementation, the number of image frames is multiple;
after acquiring the target detection data including at least the image frames and the target measurement data, the method further includes: filtering key frames out of the multiple image frames according to any one of the following conditions or a combination thereof:
when a beacon image is present in the current frame, the frame is taken as a key frame;
when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is taken as a key frame;
when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is taken as a key frame;
the first frame is a key frame;
obtaining the initialization result of the first-type pose corresponding to the image frame based on the target measurement data includes: obtaining the initialization result of the first-type pose corresponding to the key frame based on the target measurement data;
the first-type residual regarding the image frame acquired based on the target measurement data includes: the first-type residual regarding the key frame acquired based on the target measurement data;
the second-type residual is the constraint residual of the reprojection of the visual beacon's feature points on the key frame.
For clarity, the steps of the method of building a beacon map based on visual beacons provided by this application are described in detail below, taking key frames as the images used for mapping. It should be noted that, when key frames are not used, the images used for mapping are the image frames of the raw data, and the steps of the mapping process are analogous to the corresponding steps when mapping with key frames.
To build the beacon map, a certain number of visual beacons are arranged at random in the physical environment of the beacon map to be built; the visual beacons include feature points used for localization.
Referring to FIG. 1, a flowchart of the method of building a beacon map based on visual beacons provided by an embodiment of this application, the method includes:
Step 101: acquiring target detection data including at least an image frame, and target measurement data.
The image frame is an image collected by the robot while capturing images of the pre-arranged visual beacons; the target measurement data includes: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected. It is understood that the number of image frames included in the target detection data may be one or more, and that among those image frames there is at least one image containing a visual beacon; correspondingly, the target measurement data includes the pose of the robot body in the odometer coordinate system at the moment each of the one or more image frames is collected.
Referring to FIG. 2, a schematic diagram of data collection: the robot is driven so that, through at least one camera, it captures images of each visual beacon arranged in the physical environment of the beacon map to be built, each beacon being captured at least once. The image capture yields target detection data including at least the image frames; the target measurement data is obtained by means such as wheel encoders and includes at least the odometer measurement of the robot body at the moment each image frame is collected, namely the pose of the robot body in the odometer coordinate system. The trajectory of the robot is unconstrained, as long as every beacon is captured at least once; in FIG. 2, four visual beacons are arranged, and the robot's trajectory is the odometer trajectory.
Thus, the raw data collected during data acquisition comprises the target measurement data and the target detection data: the target measurement data includes the poses of the robot body in the odometer coordinate system; the target detection data includes the collected image frames, the beacon IDs of the visual beacons, and the image coordinates of the feature points of each beacon.
In addition, the system parameters may be acquired in advance, including: the camera internal parameters (optical center position, lens focal length, distortion coefficients, etc.), the camera external parameters (the relative pose of the camera with respect to the robot body), and the beacon parameters (the local coordinate positions of the feature points in the beacon coordinate system).
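To make the data model concrete, the following is a minimal sketch, in Python, of how the system parameters and the raw data records could be organized; all type and field names (SystemParams, RawRecord, and so on) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SystemParams:
    A: np.ndarray            # 3x3 camera intrinsic matrix (optical center, focal length)
    dist: np.ndarray         # lens distortion coefficients
    T_b_c: np.ndarray        # 4x4 camera extrinsics: camera pose in the robot body frame
    beacon_points: dict      # beacon ID -> (n>=4, 3) feature-point positions in the beacon frame

@dataclass
class RawRecord:
    stamp: float             # collection moment of the image frame
    odom_pose: np.ndarray    # (x, y, yaw) of the robot body in the odometer frame
    detections: list         # per beacon: (beacon_id, (n, 2) feature-point image coordinates)
```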
During the collection of beacon image data, image frames containing no visual beacon, together with the measurement data of those frames, are usually collected as well; these data can also belong to the target detection data and the target measurement data respectively, forming part of the collected raw data.
Step 102: based on the collected raw data, filtering a certain number of key frames out of the raw data according to a key-frame filtering policy.
For computational efficiency, the raw data can be filtered so that key frames are selected from the image frames contained in the large volume of raw data, which improves the efficiency of beacon map construction.
There are many possible key-frame filtering policies; one policy in this embodiment is the following (a minimal sketch of this filter appears after this passage):
when a beacon image is present in the current frame, the frame is selected as a key frame; and/or
when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is taken as a key frame, i.e., the odometer translation deviation between adjacent key frames is greater than the set translation threshold; and/or
when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is taken as a key frame, i.e., the odometer rotation deviation between adjacent key frames is greater than the set rotation threshold; and/or
the first frame is a key frame.
It is understood that, in a specific application, the filtering may be: if a beacon image is present in the current frame, the frame is selected as a key frame; and, in addition, key frames are filtered according to any of the following conditions or their combination:
the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than the set translation threshold;
the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than the set rotation threshold;
the first frame is a key frame.
In this way, the filtered key frames include both key frames containing beacon images and key frames containing none.
The current frame's odometer measurement above is the pose of the robot body in the odometer coordinate system at the moment the current frame is collected, and a key frame's odometer measurement is the pose of the robot body in the odometer coordinate system at the moment the key frame is collected; that is, any frame's odometer measurement is the pose of the robot body in the odometer coordinate system at that frame's collection moment.
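The following is a minimal sketch of the above filtering policy, assuming planar odometer poses (x, y, yaw) and the hypothetical RawRecord type from the earlier sketch; the threshold values are illustrative only.

```python
import math

def select_keyframes(records, trans_thresh=0.3, rot_thresh=math.radians(15)):
    """Filter key frames from raw records according to the policy above."""
    keyframes = []
    for rec in records:
        if not keyframes:                      # the first frame is a key frame
            keyframes.append(rec)
            continue
        if rec.detections:                     # a beacon image is present in the frame
            keyframes.append(rec)
            continue
        last = keyframes[-1].odom_pose
        cur = rec.odom_pose
        trans = math.hypot(cur[0] - last[0], cur[1] - last[1])
        rot = abs((cur[2] - last[2] + math.pi) % (2 * math.pi) - math.pi)
        if trans > trans_thresh or rot > rot_thresh:
            keyframes.append(rec)              # odometer deviation exceeds a threshold
    return keyframes
```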
Step 103: from the key frames containing visual beacons and the beacon parameters obtained in step 101, obtaining the local pose of each visual beacon relative to the camera coordinate system, and adding this local pose to the target detection data as the observed measurement value of the beacon. The beacon parameters include the local coordinate positions of the feature points in the beacon coordinate system.
In this step, the local pose of a visual beacon relative to the camera coordinate system is specifically its pose in the camera coordinate system. Solving for this local pose is essentially solving a PnP (Perspective-n-Point) problem: the local pose of the beacon relative to the camera coordinate system is obtained from the image coordinates of the beacon's feature points, the camera internal parameters, and the beacon parameters. Given the requirements of the PnP problem on the number of feature points, at least 4 feature points are used in this application.
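As an illustration, the PnP solve of this step could be carried out with OpenCV's solvePnP, assuming each beacon provides at least four feature points; this is a sketch of one possible realization, not the patent's implementation.

```python
import cv2
import numpy as np

def beacon_local_pose(obj_pts_m, img_pts, A, dist):
    """Solve PnP: the pose of the beacon frame m in the camera frame c.

    obj_pts_m: (n>=4, 3) feature-point positions in the beacon frame (beacon parameters)
    img_pts:   (n, 2)    measured image coordinates of the same feature points
    A, dist:   camera intrinsic matrix and distortion coefficients
    """
    ok, rvec, tvec = cv2.solvePnP(
        obj_pts_m.astype(np.float64), img_pts.astype(np.float64), A, dist)
    if not ok:
        raise RuntimeError("PnP solve failed")
    # rvec/tvec map beacon-frame points into the camera frame:
    # p_c = R(rvec) @ p_m + tvec, i.e. the local pose T^c_m of the beacon.
    return rvec.ravel(), tvec.ravel()
```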
Step 104: initializing the first-type pose corresponding to each key frame and the global position in the second-type pose corresponding to each visual beacon, obtaining the initialization result of the first-type pose corresponding to the key frame and the initialization result of the global position in the second-type pose corresponding to the beacon. The first-type pose corresponding to a key frame is the pose of the robot body in the world coordinate system at the moment the key frame is collected, and the second-type pose corresponding to a beacon is the pose of the beacon in the world coordinate system.
The initialization of the first-type pose corresponding to a key frame directly uses the measurement data, namely the pose of the robot body in the odometer coordinate system at the moment the key frame was collected, as included in the target measurement data. The initialization result of the global position in the second-type pose corresponding to a visual beacon is computed from the camera pose corresponding to the key frame in which the beacon is first observed and from the beacon's observed measurement value included in the target detection data, where the camera pose corresponding to the key frame to which the beacon belongs is: the camera pose when the camera captured that key frame.
The specific computation can be written as:

t^w_m = (T^w_b ⊕ T^b_c) ⊙ t^c_m

where the sub/superscripts to the left of a symbol identify coordinate frames:
t^w_m is the vector of the beacon's global position in the world coordinate system (the position of beacon frame m relative to world frame w), i.e. the initialization result of the beacon's global position;
T^w_b is the vector of the first-type pose corresponding to the key frame to which the beacon belongs (the pose of robot body frame b relative to world frame w at the collection moment), which can be written as T^w_b = [θ^w_b, t^w_b], with θ^w_b the global rotation vector of the robot body in the world coordinate system and t^w_b its global position vector; T^w_b is obtained by lifting the planar global pose vector of the robot body, [θ_yaw, x, y] with θ_yaw the yaw angle of the robot body, from two-dimensional to three-dimensional space; the initial value of the first-type pose vector of the key frame is the target measurement data;
T^b_c is the pose vector of the camera external parameters (the pose of camera frame c relative to robot body frame b), which can be written as T^b_c = [θ^b_c, t^b_c], with θ^b_c the rotation vector and t^b_c the position vector of the camera external parameters;
t^c_m is the vector of the beacon's local position relative to the camera coordinate system (the position of beacon frame m relative to camera frame c, i.e. the observed measurement value), obtained in step 103 as part of the local pose T^c_m = [θ^c_m, t^c_m];
the symbol ⊕ denotes pose composition (right multiplication), for example T^w_c = T^w_b ⊕ T^b_c and T^w_m = T^w_c ⊕ T^c_m; the symbol ⊙ denotes the application of a pose to a point (projection), for example t^w_m = T^w_c ⊙ t^c_m.
The composition T^w_b ⊕ T^b_c yields the camera pose corresponding to the key frame in which the visual beacon is first observed.
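A numerical sketch of this initialization, under the assumption that poses are represented as 4x4 homogeneous matrices; the 2D-to-3D lift of the odometer pose and the helper names are illustrative, not the patent's code.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def planar_to_mat(p):
    """Lift a planar robot-body pose (x, y, yaw) to a 4x4 homogeneous transform T^w_b."""
    c, s = np.cos(p[2]), np.sin(p[2])
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = p[:2]
    return T

def pose_mat(rotvec, t):
    """Build a 4x4 homogeneous transform from a rotation vector and a translation."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(rotvec).as_matrix()
    T[:3, 3] = t
    return T

def beacon_position_init(odom_pose, T_b_c, t_c_m):
    """t^w_m = (T^w_b ⊕ T^b_c) ⊙ t^c_m for the key frame of first observation."""
    T_w_b = planar_to_mat(odom_pose)           # first-type pose, initialized from odometry
    T_w_c = T_w_b @ T_b_c                      # camera pose of the key frame
    return (T_w_c @ np.append(t_c_m, 1.0))[:3]
```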
Step 105: based on the initialization results of the first-type poses corresponding to the key frames and of the global positions in the second-type poses corresponding to the visual beacons, computing the first-type residual regarding each key frame and the third-type residual regarding each visual beacon; in this application the third-type residual is called the M-constraint residual for short. The first-type residual is the key frame's constraint residual with respect to the odometer; the third-type residual is the constraint residual on the local position of the visual beacon in the key frame.
The first-type residual of any key frame i is computed from the first-type relative pose between adjacent key frames and the second-type relative pose between adjacent key frames, where both relative poses are the change in the pose of the robot body from the moment the adjacent key frame is collected to the moment key frame i is collected; the first-type relative pose is determined from the state variables of the first-type poses, and the second-type relative pose from the target measurement data. For example,
one implementation is to compute the difference between the first-type relative pose and the second-type relative pose of adjacent key frames, e.g. as the difference between the two:

e_o,<i> = ((T^w_b<i>)^-1 ⊕ T^w_b<i+1>) - ((T^o_b<i>)^-1 ⊕ T^o_b<i+1>)

where e_o,<i> is the residual term of the first-type residual of key frame i; T^w_b<i> is the vector of the first-type pose corresponding to key frame i, and T^w_b<i+1> the vector of the first-type pose corresponding to the adjacent key frame i+1; T^o_b<i> is the pose of the robot body in the odometer coordinate system at the moment key frame i is collected, and T^o_b<i+1> the pose at the moment key frame i+1 is collected. The term (T^w_b<i>)^-1 ⊕ T^w_b<i+1> gives the first-type relative pose between the adjacent key frames, and (T^o_b<i>)^-1 ⊕ T^o_b<i+1> gives the second-type relative pose between them.
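A minimal numerical sketch of this odometry residual, using the 4x4 homogeneous-matrix representation of the earlier sketch; realizing the subtraction of relative poses as a rotation-vector part plus a translation part is one concrete choice among several.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def odom_residual(T_w_bi, T_w_bi1, T_o_bi, T_o_bi1):
    """e_o,<i>: difference between state-derived and odometer-derived relative poses."""
    rel_state = np.linalg.inv(T_w_bi) @ T_w_bi1   # first-type relative pose
    rel_odom = np.linalg.inv(T_o_bi) @ T_o_bi1    # second-type relative pose
    d = np.linalg.inv(rel_odom) @ rel_state       # discrepancy transform
    e_rot = R.from_matrix(d[:3, :3]).as_rotvec()  # rotation part of the residual
    e_trans = d[:3, 3]                            # translation part of the residual
    return np.concatenate([e_rot, e_trans])       # 6-vector residual
```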
The process of obtaining the third-type residual of a visual beacon in a key frame includes:
for any visual beacon j in any key frame i, the constraint residual on the local position of beacon j is: the difference between the beacon's local position in the camera coordinate system corresponding to the key frame it belongs to and the beacon's observed measurement value. Specifically, the beacon's local position in the camera coordinate system corresponding to its key frame is obtained from the camera external parameter pose corresponding to the beacon, the current state vector of the first-type pose corresponding to that key frame (the key frame the beacon belongs to), and the current state vector of the global position in the second-type pose corresponding to the beacon. At the first iteration, the current state vector of the first-type pose corresponding to the key frame is: the initialization result of that pose; the current state vector of the global position in the second-type pose corresponding to the beacon is: the vector of the initialization result of that global position.
For example, one implementation computes the difference between the beacon's local position in the camera coordinate system of its key frame and the beacon's observed measurement value as:

e_m,<i,j> = (T^w_b<i> ⊕ T^b_c)^-1 ⊙ t^w_m<j> - t^c_m<i,j>

where e_m,<i,j> is the constraint residual on the local position of beacon j in key frame i; t^w_m<j> is the vector of the global position in the second-type pose corresponding to beacon j, whose initial value is given by the initialization result of the second-type pose in step 104; t^c_m<i,j> is the vector of the observed measurement value of beacon j in key frame i; and (T^w_b<i> ⊕ T^b_c)^-1 ⊙ t^w_m<j> yields the local position of beacon j in the camera coordinate system corresponding to its key frame i.
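A matching sketch of the M-constraint residual, reusing the 4x4 homogeneous-matrix convention of the previous sketches; the helper and argument names are again hypothetical.

```python
import numpy as np

def m_residual(T_w_bi, T_b_c, t_w_mj, t_c_mij_meas):
    """e_m,<i,j>: predicted local beacon position minus the observed measurement."""
    T_w_c = T_w_bi @ T_b_c                        # camera pose of key frame i
    p = np.linalg.inv(T_w_c) @ np.append(t_w_mj, 1.0)
    return p[:3] - t_c_mij_meas                   # 3-vector residual
```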
Step 106: jointly optimizing the first-type residuals of all filtered key frames and the third-type residuals of all visual beacons in all key frames, to generate the preliminary optimization results of the first-type poses corresponding to all key frames and of the global positions in the second-type poses corresponding to all beacons in the key frames, providing initial values for the subsequent joint optimization under the beacon feature-point image reprojection constraints.
In this step, from the first-type residuals of all filtered key frames and the third-type residuals of all beacons, a first objective function fusing these constraint residuals is constructed, and an iterative optimization algorithm for nonlinear problems, e.g. the LM (Levenberg-Marquardt) algorithm, is applied: starting from the initialization results of the first-type poses corresponding to the key frames and of the global positions in the second-type poses corresponding to the beacons in the key frames as initial values, the iteration converges with the first-type poses of the key frames and the global positions in the second-type poses as state variables; that is, the optimized state consists of two parts, the first-type poses and the global positions in the second-type poses; the solve yields the first-type poses and the global positions in the second-type poses that minimize the first objective function. The first objective function can be written as:

{x_k, x_m}* = argmin_{x_k, x_m} F_1, with

F_1 = Σ_{i∈N} e_o,<i>^T Ω_o,<i> e_o,<i> + Σ_{<i,j>∈S} e_m,<i,j>^T Ω_m,<i,j> e_m,<i,j>

where x_k are the first-type poses corresponding to the key frames and x_m the second-type poses corresponding to the beacons; N is the set of all key frames and M the number of all visual beacons in all key frames; Ω_o,<i> is the constraint information matrix corresponding to the first-type residual of key frame i, and Ω_m,<i,j> the constraint information matrix corresponding to the third-type residual of beacon j in key frame i; the values of the information matrices are related to the system error model and are usually given manually. The first sum accumulates the first-type residuals over all filtered key frames N; the second sum accumulates the third-type residuals of all beacons j over the set S, which consists of the key frames i containing any beacon j. During optimization, at the first iteration the initial values of the state variables of the first-type poses of the key frames are the initialization results of those poses obtained in step 104, and the initial values of the state variables of the global positions in the second-type poses of the beacons are the initialization results of those global positions obtained in step 104; in iterations after the first, the state variables solved in the previous iteration serve as the current state variables for solving the state variables optimized in the current iteration.
Through the joint optimization of step 106, the first-type poses of the key frames and the global positions in the second-type poses of the beacons become more robust and less likely to fall into a local minimum during iteration, which benefits the efficiency of the subsequent joint optimization under the beacon feature-point image reprojection constraints.
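A compact sketch of this first-stage joint optimization using scipy.optimize.least_squares, which minimizes the squared norm of a stacked residual vector (Levenberg-Marquardt is available via method='lm'); the information matrices are folded in here as scalar weights, and the state layout and helper names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def planar_to_mat(p):
    """Lift a planar pose (x, y, yaw) to a 4x4 homogeneous transform."""
    c, s = np.cos(p[2]), np.sin(p[2])
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = p[:2]
    return T

def stage1_residuals(x, n_kf, odom_pairs, beacon_obs, w_o=1.0, w_m=1.0):
    """Stack weighted odometry residuals e_o,<i> and M-constraint residuals e_m,<i,j>.

    x: flat state, n_kf planar key-frame poses (x, y, yaw) followed by the
       beacons' global positions (3 values each)
    odom_pairs: list of (i, rel_meas), rel_meas being the measured planar
                relative pose between key frames i and i+1
    beacon_obs: list of (i, j, t_c_m_meas, T_b_c) beacon observations
    w_o, w_m:   scalar weights standing in for the information matrices
    """
    poses = x[:3 * n_kf].reshape(n_kf, 3)
    beacons = x[3 * n_kf:].reshape(-1, 3)
    res = []
    for i, rel_meas in odom_pairs:
        dx, dy = poses[i + 1, :2] - poses[i, :2]
        c, s = np.cos(poses[i, 2]), np.sin(poses[i, 2])
        rel = np.array([c * dx + s * dy, -s * dx + c * dy,
                        poses[i + 1, 2] - poses[i, 2]])   # yaw left unwrapped for brevity
        res.append(w_o * (rel - rel_meas))                # e_o,<i>
    for i, j, t_meas, T_b_c in beacon_obs:
        T_w_c = planar_to_mat(poses[i]) @ T_b_c           # camera pose of key frame i
        p = np.linalg.inv(T_w_c) @ np.append(beacons[j], 1.0)
        res.append(w_m * (p[:3] - t_meas))                # e_m,<i,j>
    return np.concatenate(res)

# usage sketch:
# sol = least_squares(stage1_residuals, x0, args=(n_kf, odom_pairs, beacon_obs))
```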
Step 107: based on the key frames containing visual beacons, initializing the global posture (angle) in the second-type pose corresponding to each visual beacon.
The global posture (angle) in the second-type pose of a visual beacon is initialized from the camera pose corresponding to the key frame in which the beacon is first observed and from the observed measurement value included in the target detection data, which can be written as:

θ^w_m = θ^w_b ⊕ θ^b_c ⊕ θ^c_m

where θ^w_m is the global rotation vector corresponding to the beacon (the rotation of beacon frame m relative to world frame w), i.e. the global posture in the second-type pose of the beacon; θ^w_b is the global rotation vector of the robot body coordinate system (the rotation of body frame b relative to world frame w); θ^b_c is the rotation vector of the camera external parameters (the rotation of camera frame c relative to body frame b); θ^c_m is the local rotation vector of the beacon in the camera coordinate system (the rotation of beacon frame m relative to camera frame c), obtained in step 103 from the beacon's local pose relative to the camera coordinate system.
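Composing the three rotations can be done directly with scipy's Rotation class; a sketch under the assumption that each θ is stored as a rotation vector.

```python
from scipy.spatial.transform import Rotation as R

def beacon_global_rotation(theta_w_b, theta_b_c, theta_c_m):
    """θ^w_m = θ^w_b ⊕ θ^b_c ⊕ θ^c_m, with all angles given as rotation vectors."""
    R_w_m = (R.from_rotvec(theta_w_b)    # body frame in the world frame
             * R.from_rotvec(theta_b_c)  # camera frame in the body frame
             * R.from_rotvec(theta_c_m)) # beacon frame in the camera frame
    return R_w_m.as_rotvec()
```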
Step 108: based on the initialization results of the second-type postures corresponding to the visual beacons, computing the second-type residual regarding each key frame and visual beacon. The second-type residual regarding a visual beacon is the reprojection constraint residual on the key frame of the feature points in the beacon, i.e., the reprojection error of the beacon feature points on the key-frame camera image, called the V constraint for short in this application. It can be written as:

e_v,<i,j,k> = proj(A, ((T^w_b<i> ⊕ T^b_c)^-1 ⊕ T^w_m<j>) ⊙ f^m_<j,k>) - f^c_<i,k>

where e_v,<i,j,k> is the reprojection constraint residual on key frame i of feature point k of beacon j, i.e. the second-type residual regarding the key frame and the beacon; proj(.) is the image projection function and A the camera internal parameter matrix; f^m_<j,k> is the local position vector of feature point k in beacon j (the position of feature point k relative to beacon frame m, i.e. a beacon parameter); f^c_<i,k> is the vector of the local position of feature point k in the camera image (the image coordinate position of feature point k in key frame i, i.e. the measured image coordinates of feature point k); T^w_m<j> is the initial value of the second-type pose of beacon j, obtained from the global posture (angle) initialization result of step 107 and the preliminary global-position optimization result of step 106.
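A sketch of proj(.) and of this V-constraint residual with an undistorted pinhole model (lens distortion is omitted for brevity, an assumption); poses are 4x4 homogeneous matrices as in the earlier sketches.

```python
import numpy as np

def proj(A, p_c):
    """Pinhole projection of a camera-frame point to pixel coordinates."""
    uvw = A @ p_c
    return uvw[:2] / uvw[2]

def v_residual(A, T_w_bi, T_b_c, T_w_mj, f_m_k, f_c_ik_meas):
    """e_v,<i,j,k>: reprojection error of beacon feature point k on key frame i."""
    T_c_m = np.linalg.inv(T_w_bi @ T_b_c) @ T_w_mj   # beacon pose in the camera frame
    p_c = (T_c_m @ np.append(f_m_k, 1.0))[:3]        # feature point in the camera frame
    return proj(A, p_c) - f_c_ik_meas                # 2-vector pixel residual
```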
Step 109: jointly optimizing all the first-type residuals and all the second-type residuals: a second objective function fusing these two constraint residuals is constructed, and the first-type poses of the key frames and the second-type poses of the beacons that minimize the second objective function are solved for.
The preliminary optimization results of the first-type poses of the key frames obtained in step 106 are taken as the initial values of the first-type poses; the preliminary optimization results of the global positions in the second-type poses obtained in step 106, together with the global posture results of step 107, are taken as the initial values of the second-type poses of the beacons. An iterative optimization algorithm for nonlinear problems, e.g. the LM (Levenberg-Marquardt) algorithm, iterates to convergence from the initial values of the first-type poses of the key frames and of the second-type poses of the beacons, with the first-type poses of the key frames and the second-type poses of the beacons (global position t^w_m and global posture θ^w_m) as state variables; that is, the optimized state consists of two parts, the first-type poses of the key frames and the second-type poses of the beacons; the solve yields the first-type poses of the key frames and the second-type poses of the beacons that minimize the second objective function. The second objective function can be written as:

{x_k, x_m}* = argmin_{x_k, x_m} F_2, with

F_2 = Σ_{i∈N} e_o,<i>^T Ω_o,<i> e_o,<i> + Σ_{<i,j,k>∈V} e_v,<i,j,k>^T Ω_v,<i,j,k> e_v,<i,j,k>

where the residual term of the first-type residual is the same as in step 106; e_v,<i,j,k> is the reprojection constraint residual on the key frame of feature point k of beacon j in key frame i, i.e. the second-type residual regarding key frame i and beacon j; Ω_v,<i,j,k> is the information matrix corresponding to the V-constraint residual, i.e. the information matrix of the second-type residual regarding key frame i and visual feature j; the second sum accumulates the V-constraint residuals of all feature points k of all beacons j.
During optimization, at the first iteration the initial values of the state variables are the initial values described above; in iterations after the first, the state variables solved in the previous iteration serve as the current state variables for solving the state variables optimized in the current iteration.
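The second stage can reuse the residual sketches above; the fragment below shows only the weighting and stacking, with each information matrix Ω folded in through its Cholesky factor L (Ω = L Lᵀ), so that the squared norm of the stacked vector equals F_2. The inputs are precomputed residuals here, an illustrative simplification.

```python
import numpy as np

def weight(e, omega):
    """Fold an information matrix into a residual: ||L^T e||^2 = e^T Ω e."""
    L = np.linalg.cholesky(omega)
    return L.T @ e

def stage2_stack(odom_residuals, v_residuals, omega_o, omega_v):
    """Stack weighted e_o and e_v terms; a least-squares solver then minimizes F_2."""
    res = [weight(e, omega_o) for e in odom_residuals]
    res += [weight(e, omega_v) for e in v_residuals]
    return np.concatenate(res)
```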
Step 110: building the beacon map based on the obtained second-type poses of the visual beacons.
The above steps adopt a staged joint-optimization strategy: the M-constraint joint optimization first estimates the global positions of the beacons, and the V-constraint joint optimization then estimates their global poses, which lowers the probability of the optimization algorithm falling into a local minimum. Of course, the joint optimization of the first-type residuals of the key frames and the M-constraint residuals of the beacons may be omitted; in that case, the initial values for the iterative convergence of the second objective function can be the initialization results of the first-type poses of the key frames obtained in step 104 and the initialization results of the second-type poses of the beacons, where the initialization result of the global position in the second-type pose is obtained in step 104 and the initialization result of the global posture in step 107.
Referring to FIG. 3, a schematic diagram of the relationships among the image data involved in steps 101-110 above:
the collected raw data (image frames) are filtered to obtain the key frames;
based on the key frames, the local pose of each visual beacon relative to the camera coordinate system is obtained from the camera internal parameter matrix, the beacon parameters, and the image coordinate positions of the feature points in the key frames; it comprises the beacon's local position t^c_m relative to the camera coordinate system and its local posture θ^c_m relative to the camera, i.e., the beacon's observed measurement value;
based on the target measurement data, the measurement results are taken as the initialization results of the first-type poses corresponding to the key frames, including the initialization result t^w_b of the global position and the initialization result θ^w_b of the global posture in the first-type pose of each key frame;
the camera extrinsic pose T^b_c and the camera extrinsic attitude θ^b_c (the rotation vector of the camera external parameters) are obtained from the camera external parameters;
from t^w_b, θ^w_b, T^b_c, and t^c_m (within the dashed ellipse in the figure), the initialization result t^w_m of the global position in the second-type pose of each beacon is obtained;
from these quantities, the local position of each beacon in the camera coordinate system of the key frame it belongs to is obtained, and the M-constraint residual of each beacon is computed from the difference between this local position and the observed local position;
for the first-type residuals of all key frames and the third-type residuals of the beacons in all key frames, a joint optimization is performed with the initialization results of the first-type poses of the key frames and of the global positions in the second-type poses of the beacons as initial values (not shown in the figure), yielding the preliminary optimization results of the first-type poses of all key frames and of the global positions in the second-type poses of all beacons in all key frames; the first-type residuals of all key frames are obtained from the target measurement data;
from the initialization results of the beacons' second-type poses (the posture initialization results and the position initialization results), the camera internal parameter matrix, the beacon parameters, and the local positions of the feature points in the camera image, the V-constraint residual of each feature point of each beacon in all key frames is computed; for all first-type residuals and all second-type residuals, a joint optimization is performed with the initialization results of the first-type poses of the key frames and of the second-type poses of the beacons as initial values, yielding the optimization results of the first-type poses of all key frames and of the second-type poses of all beacons in all key frames;
the beacon map is built based on the obtained optimization results of the second-type poses.
The present invention uses the target detection data and the target measurement data to construct a first objective function of the odometer constraints and the constraints on the local positions of the visual beacons, and a second objective function of the odometer constraints and the image reprojection constraints of the beacon feature points; the global poses of the beacons are obtained by solving the joint optimization problems of the first and second objective functions, and the beacon map is built from them. It can be seen that this solution reduces the dependence on the placement accuracy of the visual beacons. Moreover, because the solution does not rely on the pose information with which the beacons were deployed when constructing the map, it can relax the requirements on environmental measurement or omit that process, lower the difficulty of deploying the beacons, and reduce the consumption of manpower and time. Furthermore, by filtering the raw data to obtain the key-frame data and performing the optimization calculations on the key frames, the computational efficiency of the optimization algorithm is greatly improved.
Referring to FIG. 4, a schematic diagram of an apparatus for building a beacon map based on visual beacons according to an embodiment of this application, the apparatus includes:
a data acquisition module for acquiring target detection data including at least an image frame, and target measurement data; the image frame is an image collected by the robot while capturing images of the pre-arranged visual beacons, and the target measurement data includes: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
an initialization module for obtaining, based on the target measurement data, the initialization result of the first-type pose corresponding to the image frame (the first-type pose being the pose of the robot body in the world coordinate system at the moment the image frame is sampled), and for obtaining, based on the target detection data, the initialization result of the second-type pose corresponding to the visual beacon, the second-type pose being the pose of the visual beacon in the world coordinate system;
a first joint optimization module for performing, on the first-type residual regarding the image frame acquired based on the target measurement data and the second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain the optimization result of the second-type pose; the first-type residual is the constraint residual of the image frame with respect to the odometer, and the second-type residual is the constraint residual of the reprojection of the beacon's feature points on the image frame; the initial value of the first-type pose is its initialization result, and the initial value of the second-type pose is its initialization result;
a mapping module for building a beacon map based on the obtained optimization result of the second-type pose.
The data acquisition module is specifically used to collect the image of each visual beacon arranged in the physical environment of the beacon map to be built at least once, obtaining target detection data including at least the image frame, and to measure the target measurement data at the same moment the image frame is collected.
The data acquisition module includes an odometer measurement module for obtaining the target measurement data and a visual beacon detection module for obtaining the target detection data.
The initialization module includes a first initialization sub-module for obtaining, based on the target measurement data, the initialization result of the first-type pose corresponding to the image frame, and a second initialization sub-module for obtaining, based on the target detection data, the initialization result of the second-type pose corresponding to the visual beacon; wherein,
the first initialization sub-module is specifically used to take the pose of the robot body in the odometer coordinate system at the moment the image frame is collected as the initialization result of the first-type pose corresponding to the image frame;
the second initialization sub-module is specifically used to:
for any visual beacon in the image frame, obtain the initialization result of the beacon's position in the world coordinate system and the initialization result of its attitude in the world coordinate system, respectively, from the camera pose corresponding to the image frame in which the beacon is first observed and from the target detection data;
where the visual beacon detection data includes the observed measurement value of the beacon, obtained from the beacon's local pose relative to the camera coordinate system;
the beacon's local pose relative to the camera coordinate system is obtained from the image coordinates of the feature points in the beacon, the camera internal parameters, and the beacon parameters;
the camera pose corresponding to the image frame to which the beacon belongs is: the camera pose when the camera captured that image frame.
The second initialization sub-module is specifically used to:
for any visual beacon in the image frame, obtain the initialization result of the beacon's position in the world coordinate system from the vector of the first-type pose corresponding to the image frame, the pose vector of the camera external parameters, and the vector of the observed measurement value, where the camera pose corresponding to the image frame in which the beacon is first observed is derived from the first-type pose vector of that image frame and the pose vector of the camera external parameters;
obtain the initialization result of the beacon's attitude in the world coordinate system from the global rotation vector of the robot body coordinate system, the rotation vector of the camera external parameters, and the beacon's local rotation vector in the camera coordinate system,
where the global rotation vector of the robot body coordinate system is obtained from the initialization result of the first-type pose, the rotation vector of the camera external parameters from the pose of the camera external parameters, and the beacon's local rotation vector in the camera coordinate system from the beacon's local pose relative to the camera coordinate system.
The apparatus also includes a second joint optimization module used to:
acquire a third-type residual regarding the visual beacon, the third-type residual being a constraint residual on the local position of the visual beacon in the image frame;
perform, on the first-type residuals and the third-type residuals, a joint optimization with the first-type pose and the global position in the second-type pose as state variables, to obtain the preliminary optimization result of the first-type pose and the preliminary optimization result of the global position in the second-type pose; the initial value of the first-type pose is its initialization result, and the initial value of the global position in the second-type pose is the initialization result of that global position;
take the preliminary optimization result of the first-type pose as the initialization result of the first-type pose;
take the preliminary optimization result of the global position in the second-type pose as the initialization result of that global position;
the manner of obtaining the global posture in the second-type pose corresponding to the beacon includes:
obtaining the initialization result of the global posture in the second-type pose corresponding to the beacon, based on the camera pose corresponding to the image frame in which the beacon is first observed and on the target detection data.
Optionally, the number of image frames is multiple; the second joint optimization module is specifically configured to:
construct a first objective function fusing the first-type residuals and the third-type residuals, and, iterating from the initialization results of the first-type poses and of the global positions in the second-type poses as initial values, obtain the preliminary optimization results of the first-type poses and of the global positions in the second-type poses that minimize the first objective function. Optionally, the number of image frames is multiple, and the first joint optimization module is specifically configured to:
construct a second objective function fusing the first-type residuals and the second-type residuals, and, iterating from the initialization results of the first-type poses and of the second-type poses as initial values, obtain the optimization results of the second-type poses that minimize the second objective function.
Optionally, the process by which the first joint optimization module acquires the first-type residual regarding the image frame based on the target measurement data includes:
for each image frame, acquiring, based on the target measurement data, the first-type relative pose and the second-type relative pose between the image frame and its adjacent image frame; both relative poses are the change in the pose of the robot body from the moment the adjacent image frame is collected to the moment this image frame is collected; the first-type relative pose is determined from the state variables of the first-type poses, and the second-type relative pose from the target measurement data;
computing the first-type residual of the image frame from the difference between the first-type relative pose and the second-type relative pose;
accumulating the first-type residuals of the individual image frames to obtain the first-type residuals of all image frames.
Optionally, the process by which the second joint optimization module acquires the third-type residual regarding the visual beacon includes:
for any visual beacon in each image frame, obtaining the difference between the beacon's local position in the camera coordinate system corresponding to the image frame and the beacon's observed measurement value, yielding the third-type residual of that beacon;
accumulating the third-type residuals of the individual beacons to obtain the third-type residuals of all beacons;
where the beacon's local position in the camera coordinate system corresponding to the image frame is obtained from the camera external parameter pose corresponding to the beacon, the current state vector of the first-type pose corresponding to the image frame, and the current state vector of the global position in the second-type pose corresponding to the beacon; at the first iteration, the current state vector of the first-type pose corresponding to the image frame is: the initialization result of that pose; the current state vector of the global position in the second-type pose corresponding to the beacon is: the vector of the initialization result of that global position.
Optionally, the process by which the first joint optimization module determines the second-type residual regarding the image frame and the visual beacon based on the target detection data includes:
for each image frame, obtaining the constraint residuals of the reprojection on the image frame of all feature points of all beacons in the image frame, according to the current state variables of the second-type poses corresponding to the beacons in the image frame and the target detection data.
Optionally, for each image frame, this includes:
for any feature point of any visual beacon in each image frame, obtaining the reprojection error of the feature point on its image frame from the camera internal parameter matrix, the current state variable of the first-type pose corresponding to the image frame, the current state variable of the second-type pose corresponding to the beacon, and the difference between the projection function of the feature point's local position in the beacon coordinate system and the measured value of the feature point's image coordinates;
accumulating the reprojection errors of the individual feature points of the individual beacons to obtain the reprojection constraint residuals of all feature points of all beacons on the image frame.
Optionally, the number of image frames is multiple;
the apparatus also includes a key frame filtering module, used to filter key frames out of the multiple image frames, after the target detection data including at least the image frames and the target measurement data are acquired, according to any one of the following conditions or a combination thereof:
when a beacon image is present in the current frame, the frame is taken as a key frame;
when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is taken as a key frame;
when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is taken as a key frame;
the first frame is a key frame;
obtaining the initialization result of the first-type pose corresponding to the image frame based on the target measurement data includes: obtaining the initialization result of the first-type pose corresponding to the key frame based on the target measurement data;
the first-type residual regarding the image frame acquired based on the target measurement data includes: the first-type residual regarding the key frame acquired based on the target measurement data;
the second-type residual is the constraint residual of the reprojection of the beacon's feature points on the key frame.
This application also provides an electronic device for building a beacon map based on visual beacons, the electronic device including a memory and a processor, where the memory is used to store a computer program and the processor is used to execute the program stored in the memory to implement the above method of building a beacon map based on visual beacons.
The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), for example at least one disk storage. Optionally, the memory may also be at least one storage device located far away from the foregoing processor. The above processor may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of this application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of building a beacon map based on visual beacons provided in this application.
An embodiment of this application provides a computer program which, when executed by a processor, implements the method of building a beacon map based on visual beacons provided in this application.
As the apparatus/network-side device/storage medium embodiments are substantially similar to the method embodiments, their description is relatively brief; for the relevant parts, refer to the description of the method embodiments.
Herein, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any of their variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
The above are only preferred embodiments of this application and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this application shall be included in its scope of protection.

Claims (16)

  1. A method for building a beacon map based on visual beacons, comprising:
    acquiring target detection data including at least an image frame, and target measurement data; the image frame being an image collected by a robot while capturing images of pre-arranged visual beacons, and the target measurement data comprising: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
    obtaining, based on the target measurement data, an initialization result of a first-type pose corresponding to the image frame; the first-type pose being the pose of the robot body in the world coordinate system at the moment the image frame is sampled;
    obtaining, based on the target detection data, an initialization result of a second-type pose corresponding to the visual beacon; the second-type pose being the pose of the visual beacon in the world coordinate system;
    performing, on a first-type residual regarding the image frame acquired based on the target measurement data and a second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain an optimization result of the second-type pose; the first-type residual being the constraint residual of the image frame with respect to the odometer, and the second-type residual being the constraint residual of the reprojection of the feature points of the visual beacon on the image frame; the initial value of the first-type pose being its initialization result, and the initial value of the second-type pose being its initialization result;
    building a beacon map based on the obtained optimization result of the second-type pose.
  2. The method of claim 1, wherein, after obtaining the initialization result of the second-type pose corresponding to the visual beacon based on the target detection data, the method further comprises:
    acquiring a third-type residual regarding the visual beacon, the third-type residual being a constraint residual on the local position of the visual beacon in the image frame;
    performing, on the first-type residuals and the third-type residuals, a joint optimization with the first-type pose and the global position in the second-type pose as state variables, to obtain a preliminary optimization result of the first-type pose and a preliminary optimization result of the global position in the second-type pose; the initial value of the first-type pose being its initialization result, and the initial value of the global position in the second-type pose being the initialization result of that global position;
    taking the preliminary optimization result of the first-type pose as the initialization result of the first-type pose;
    taking the preliminary optimization result of the global position in the second-type pose as the initialization result of that global position;
    the manner of obtaining the global posture in the second-type pose corresponding to the visual beacon comprising:
    obtaining the initialization result of the global posture in the second-type pose corresponding to the visual beacon, based on the camera pose corresponding to the image frame in which the visual beacon is first observed and on the target detection data.
  3. The method of claim 2, wherein the number of image frames is multiple;
    performing the joint optimization with the first-type pose and the global position in the second-type pose as state variables, to obtain the preliminary optimization result of the first-type pose and the preliminary optimization result of the global position in the second-type pose, comprises:
    constructing a first objective function fusing the first-type residuals and the third-type residuals, and, iterating from the initialization results of the first-type poses and of the global positions in the second-type poses as initial values, obtaining the preliminary optimization results of the first-type poses and of the global positions in the second-type poses that minimize the first objective function.
  4. The method of any one of claims 1 to 3, wherein the number of image frames is multiple;
    performing the joint optimization with the first-type pose and the second-type pose as state variables, to obtain the optimization result of the second-type pose, comprises:
    constructing a second objective function fusing the first-type residuals and the second-type residuals, and, iterating from the initialization results of the first-type poses and of the second-type poses as initial values, obtaining the optimization results of the second-type poses that minimize the second objective function.
  5. The method of claim 4, wherein the process of acquiring the first-type residual regarding the image frame based on the target measurement data comprises:
    for each image frame, acquiring, based on the target measurement data, the first-type relative pose and the second-type relative pose between the image frame and its adjacent image frame; both relative poses being the change in the pose of the robot body from the moment the adjacent image frame is collected to the moment this image frame is collected; the first-type relative pose being determined from the state variables of the first-type poses, and the second-type relative pose from the target measurement data;
    computing the first-type residual of the image frame from the difference between the first-type relative pose and the second-type relative pose;
    accumulating the first-type residuals of the individual image frames to obtain the first-type residuals of all image frames.
  6. The method of claim 4, wherein the process of acquiring the third-type residual regarding the visual beacon comprises:
    for any visual beacon in each image frame, obtaining the difference between the beacon's local position in the camera coordinate system corresponding to the image frame and the beacon's observed measurement value, yielding the third-type residual of that beacon;
    accumulating the third-type residuals of the individual beacons to obtain the third-type residuals of all beacons;
    wherein the beacon's local position in the camera coordinate system corresponding to the image frame is obtained from the camera external parameter pose corresponding to the beacon, the current state vector of the first-type pose corresponding to the image frame, and the current state vector of the global position in the second-type pose corresponding to the beacon; at the first iteration, the current state vector of the first-type pose corresponding to the image frame is: the initialization result of that pose; the current state vector of the global position in the second-type pose corresponding to the beacon is: the vector of the initialization result of that global position.
  7. The method of claim 4, wherein the process of determining the second-type residual regarding the image frame and the visual beacon based on the target detection data comprises:
    for each image frame, obtaining the constraint residuals of the reprojection on the image frame of all feature points of all visual beacons in the image frame, according to the current state variables of the second-type poses corresponding to the beacons in the image frame and the target detection data.
  8. The method of claim 7, wherein, for each image frame, obtaining the constraint residuals of the reprojection on the image frame of all feature points of all visual beacons in the image frame, according to the current state variables of the second-type poses corresponding to the beacons in the image frame and the target detection data, comprises:
    for any feature point of any visual beacon in each image frame, obtaining the reprojection error of the feature point on its image frame from the camera internal parameter matrix, the current state variable of the first-type pose corresponding to the image frame, the current state variable of the second-type pose corresponding to the beacon, and the difference between the projection function of the feature point's local position in the beacon coordinate system and the measured value of the feature point's image coordinates;
    accumulating the reprojection errors of the individual feature points of the individual beacons to obtain the reprojection constraint residuals of all feature points of all beacons on the image frame.
  9. The method of claim 1, wherein obtaining the initialization result of the first-type pose corresponding to the image frame based on the target measurement data comprises:
    taking the pose of the robot body in the odometer coordinate system at the moment the image frame is collected as the initialization result of the first-type pose corresponding to the image frame;
    and obtaining the initialization result of the second-type pose corresponding to the visual beacon based on the target detection data comprises:
    for any visual beacon in the image frame, obtaining the initialization result of the beacon's position in the world coordinate system and the initialization result of its attitude in the world coordinate system, respectively, from the camera pose corresponding to the image frame in which the beacon is first observed and from the target detection data;
    wherein the visual beacon detection data includes the observed measurement value of the beacon, obtained from the beacon's local pose relative to the camera coordinate system;
    the beacon's local pose relative to the camera coordinate system is obtained from the image coordinates of the feature points in the beacon, the camera internal parameters, and the beacon parameters;
    the camera pose corresponding to the image frame to which the beacon belongs is: the camera pose when the camera captured that image frame.
  10. The method of claim 9, wherein, for any visual beacon in the image frame, obtaining the initialization result of the beacon's position in the world coordinate system and the initialization result of its attitude in the world coordinate system, respectively, from the camera pose corresponding to the image frame in which the beacon is first observed and from the target detection data, comprises:
    for any visual beacon in the image frame, obtaining the initialization result of the beacon's position in the world coordinate system from the vector of the first-type pose corresponding to the image frame, the pose vector of the camera external parameters, and the vector of the observed measurement value, wherein the camera pose corresponding to the image frame in which the beacon is first observed is derived from the first-type pose vector of that image frame and the pose vector of the camera external parameters; and obtaining the initialization result of the beacon's attitude in the world coordinate system from the global rotation vector of the robot body coordinate system, the rotation vector of the camera external parameters, and the beacon's local rotation vector in the camera coordinate system,
    wherein the global rotation vector of the robot body coordinate system is obtained from the initialization result of the first-type pose,
    the rotation vector of the camera external parameters is obtained from the pose of the camera external parameters,
    and the beacon's local rotation vector in the camera coordinate system is obtained from the beacon's local pose relative to the camera coordinate system.
  11. The method of claim 4, wherein acquiring the target detection data including at least the image frame, and the target measurement data, comprises:
    collecting the image of each visual beacon arranged in the physical environment of the beacon map to be built at least once, to obtain target detection data including at least the image frame;
    measuring the target measurement data at the same moment the image frame is collected.
  12. The method of claim 1, wherein the number of image frames is multiple;
    after acquiring the target detection data including at least the image frames and the target measurement data, the method further comprises: filtering key frames out of the multiple image frames according to any one of the following conditions or a combination thereof:
    when a beacon image is present in the current frame, the frame is taken as a key frame;
    when the position translation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set translation threshold, the current frame is taken as a key frame;
    when the rotation deviation between the current frame's odometer measurement and an existing key frame's odometer measurement is greater than a set rotation threshold, the current frame is taken as a key frame;
    the first frame is a key frame;
    obtaining the initialization result of the first-type pose corresponding to the image frame based on the target measurement data comprises: obtaining the initialization result of the first-type pose corresponding to the key frame based on the target measurement data;
    the first-type residual regarding the image frame acquired based on the target measurement data comprises: the first-type residual regarding the key frame acquired based on the target measurement data;
    the second-type residual is the constraint residual of the reprojection of the visual beacon's feature points on the key frame.
  13. An apparatus for building a beacon map based on visual beacons, comprising:
    a data acquisition module for acquiring target detection data including at least an image frame, and target measurement data; the image frame being an image collected by a robot while capturing images of pre-arranged visual beacons, and the target measurement data comprising: the pose of the robot body in the odometer coordinate system at the moment the image frame is collected;
    an initialization module for obtaining, based on the target measurement data, the initialization result of the first-type pose corresponding to the image frame (the first-type pose being the pose of the robot body in the world coordinate system at the moment the image frame is sampled), and for obtaining, based on the target detection data, the initialization result of the second-type pose corresponding to the visual beacon, the second-type pose being the pose of the visual beacon in the world coordinate system;
    a first joint optimization module for performing, on the first-type residual regarding the image frame acquired based on the target measurement data and the second-type residual regarding the image frame and the visual beacon acquired based on the target detection data, a joint optimization with the first-type pose and the second-type pose as state variables, to obtain the optimization result of the second-type pose; the first-type residual being the constraint residual of the image frame with respect to the odometer, and the second-type residual being the constraint residual of the reprojection of the beacon's feature points on the image frame; the initial value of the first-type pose being its initialization result, and the initial value of the second-type pose being its initialization result;
    a mapping module for building a beacon map based on the obtained optimization result of the second-type pose.
  14. An electronic device for building a beacon map based on visual beacons, the electronic device comprising a memory and a processor, wherein the memory is used to store a computer program, and the processor is used to execute the program stored in the memory to implement the method of building a beacon map based on visual beacons of any one of claims 1-12.
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, carries out the method of building a beacon map based on visual beacons of any one of claims 1-12.
  16. A computer program which, when executed by a processor, implements the method of building a beacon map based on visual beacons of any one of claims 1-12.
PCT/CN2020/100306 2019-07-05 2020-07-04 一种基于视觉信标建立信标地图的方法、装置 WO2021004416A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2022500643A JP7300550B2 (ja) 2019-07-05 2020-07-04 視覚標識に基づき標識マップを構築する方法、装置
KR1020227002958A KR20220025028A (ko) 2019-07-05 2020-07-04 시각적 비콘 기반의 비콘 맵 구축 방법, 장치
EP20837645.9A EP3995988A4 (en) 2019-07-05 2020-07-04 METHOD AND APPARATUS FOR ESTABLISHING A BEACON MAP BASED ON VISUAL BEACONS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910603494.5A CN112183171B (zh) 2019-07-05 2019-07-05 一种基于视觉信标建立信标地图方法、装置
CN201910603494.5 2019-07-05

Publications (1)

Publication Number Publication Date
WO2021004416A1 true WO2021004416A1 (zh) 2021-01-14

Family

ID=73915694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/100306 WO2021004416A1 (zh) 2019-07-05 2020-07-04 一种基于视觉信标建立信标地图的方法、装置

Country Status (5)

Country Link
EP (1) EP3995988A4 (zh)
JP (1) JP7300550B2 (zh)
KR (1) KR20220025028A (zh)
CN (1) CN112183171B (zh)
WO (1) WO2021004416A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112902953A (zh) * 2021-01-26 2021-06-04 中国科学院国家空间科学中心 一种基于slam技术的自主位姿测量方法
CN114111803A (zh) * 2022-01-26 2022-03-01 中国人民解放军战略支援部队航天工程大学 一种室内卫星平台的视觉导航方法
CN114463429A (zh) * 2022-04-12 2022-05-10 深圳市普渡科技有限公司 机器人、地图创建方法、定位方法及介质
WO2023280274A1 (en) * 2021-07-07 2023-01-12 The Hong Kong University Of Science And Technology Geometric structure aided visual localization method and system

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112880687B (zh) * 2021-01-21 2024-05-17 深圳市普渡科技有限公司 一种室内定位方法、装置、设备和计算机可读存储介质
CN112862894B (zh) * 2021-04-12 2022-09-06 中国科学技术大学 一种机器人三维点云地图构建与扩充方法
CN113204039B (zh) * 2021-05-07 2024-07-16 深圳亿嘉和科技研发有限公司 一种应用于机器人的rtk-gnss外参标定方法
CN114295138B (zh) * 2021-12-31 2024-01-12 深圳市普渡科技有限公司 机器人、地图扩建方法、装置和可读存储介质
CN114739382A (zh) * 2022-03-03 2022-07-12 深圳市优必选科技股份有限公司 机器人及其建图方法、装置及存储介质
CN114777757A (zh) * 2022-03-14 2022-07-22 深圳市优必选科技股份有限公司 信标地图构建方法、装置、设备及存储介质
CN114415698B (zh) * 2022-03-31 2022-11-29 深圳市普渡科技有限公司 机器人、机器人的定位方法、装置和计算机设备
CN117570969B (zh) * 2024-01-16 2024-09-10 锐驰激光(深圳)有限公司 割草机视觉定位方法、装置、设备及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120101661A1 (en) * 2006-07-14 2012-04-26 Irobot Corporation Autonomous behaviors for a remote vehicle
CN105487557A (zh) * 2015-12-07 2016-04-13 浙江大学 一种基于日盲区紫外成像的无人机自主着陆引导系统
CN107179080A (zh) * 2017-06-07 2017-09-19 纳恩博(北京)科技有限公司 电子设备的定位方法和装置、电子设备、电子定位系统
US20170316701A1 (en) * 2016-04-29 2017-11-02 United Parcel Service Of America, Inc. Methods for landing an unmanned aerial vehicle
CN107516326A (zh) * 2017-07-14 2017-12-26 中国科学院计算技术研究所 融合单目视觉和编码器信息的机器人定位方法和系统
CN109648558A (zh) * 2018-12-26 2019-04-19 清华大学 机器人曲面运动定位方法及其运动定位系统

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7689321B2 (en) 2004-02-13 2010-03-30 Evolution Robotics, Inc. Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system
JP5222971B2 (ja) 2011-03-31 2013-06-26 富士ソフト株式会社 歩行ロボット装置及びその制御プログラム
CN104374395A (zh) * 2014-03-31 2015-02-25 南京邮电大学 基于图的视觉slam方法
JP7262776B2 (ja) 2017-01-26 2023-04-24 ザ・リージェンツ・オブ・ザ・ユニバーシティ・オブ・ミシガン シーンの2次元表現を生成する方法、及び、移動可能な物体の位置を判定するための方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120101661A1 (en) * 2006-07-14 2012-04-26 Irobot Corporation Autonomous behaviors for a remote vehicle
CN105487557A (zh) * 2015-12-07 2016-04-13 浙江大学 一种基于日盲区紫外成像的无人机自主着陆引导系统
US20170316701A1 (en) * 2016-04-29 2017-11-02 United Parcel Service Of America, Inc. Methods for landing an unmanned aerial vehicle
CN107179080A (zh) * 2017-06-07 2017-09-19 纳恩博(北京)科技有限公司 电子设备的定位方法和装置、电子设备、电子定位系统
CN107516326A (zh) * 2017-07-14 2017-12-26 中国科学院计算技术研究所 融合单目视觉和编码器信息的机器人定位方法和系统
CN109648558A (zh) * 2018-12-26 2019-04-19 清华大学 机器人曲面运动定位方法及其运动定位系统

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, YUEHUA ET AL.: "Improved Visual SLAM Algorithm in Factory Environment", ROBOT, vol. 41, no. 1, 31 January 2019 (2019-01-31), pages 95 - 103, XP009533452, ISSN: 1002-0446, DOI: 10.13973/j.cnki.robot.180062 *
See also references of EP3995988A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112902953A (zh) * 2021-01-26 2021-06-04 中国科学院国家空间科学中心 一种基于slam技术的自主位姿测量方法
WO2023280274A1 (en) * 2021-07-07 2023-01-12 The Hong Kong University Of Science And Technology Geometric structure aided visual localization method and system
CN114111803A (zh) * 2022-01-26 2022-03-01 中国人民解放军战略支援部队航天工程大学 一种室内卫星平台的视觉导航方法
CN114463429A (zh) * 2022-04-12 2022-05-10 深圳市普渡科技有限公司 机器人、地图创建方法、定位方法及介质
CN114463429B (zh) * 2022-04-12 2022-08-16 深圳市普渡科技有限公司 机器人、地图创建方法、定位方法及介质

Also Published As

Publication number Publication date
EP3995988A4 (en) 2022-08-17
JP7300550B2 (ja) 2023-06-29
EP3995988A1 (en) 2022-05-11
JP2022539422A (ja) 2022-09-08
CN112183171B (zh) 2024-06-28
KR20220025028A (ko) 2022-03-03
CN112183171A (zh) 2021-01-05

Similar Documents

Publication Publication Date Title
WO2021004416A1 (zh) 一种基于视觉信标建立信标地图的方法、装置
CN112894832B (zh) 三维建模方法、装置、电子设备和存储介质
WO2019170164A1 (zh) 基于深度相机的三维重建方法、装置、设备及存储介质
WO2020206903A1 (zh) 影像匹配方法、装置及计算机可读存储介质
CN110176032B (zh) 一种三维重建方法及装置
CN112184824B (zh) 一种相机外参标定方法、装置
WO2020062434A1 (zh) 一种相机外部参数的静态标定方法
WO2021043213A1 (zh) 标定方法、装置、航拍设备和存储介质
CN111127524A (zh) 一种轨迹跟踪与三维重建方法、系统及装置
CN107358633A (zh) 一种基于三点标定物的多相机内外参标定方法
CN112444242A (zh) 一种位姿优化方法及装置
CN110874854B (zh) 一种基于小基线条件下的相机双目摄影测量方法
CN102509304A (zh) 基于智能优化的摄像机标定方法
CN109949232B (zh) 图像与rtk结合的测量方法、系统、电子设备及介质
CN112183506A (zh) 一种人体姿态生成方法及其系统
CN111383264B (zh) 一种定位方法、装置、终端及计算机存储介质
WO2023116430A1 (zh) 视频与城市信息模型三维场景融合方法、系统及存储介质
CN111998862A (zh) 一种基于bnn的稠密双目slam方法
WO2024093635A1 (zh) 相机位姿估计方法、装置及计算机可读存储介质
Wang et al. LF-VIO: A visual-inertial-odometry framework for large field-of-view cameras with negative plane
TW202238449A (zh) 室內定位系統及室內定位方法
CN113034347A (zh) 倾斜摄影图像处理方法、装置、处理设备及存储介质
CN114998447A (zh) 多目视觉标定方法及系统
JP2018173882A (ja) 情報処理装置、方法、及びプログラム
CN113628284B (zh) 位姿标定数据集生成方法、装置、系统、电子设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20837645

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022500643

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227002958

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020837645

Country of ref document: EP

Effective date: 20220207