CN117315015A - Robot pose determining method and device, medium and electronic equipment

Info

Publication number
CN117315015A
Authority
CN
China
Prior art keywords
pose
robot
initial
visual image
determining
Prior art date
Legal status
Pending
Application number
CN202311080110.9A
Other languages
Chinese (zh)
Inventor
侯杰虎
李昊鑫
刘宁
李谣顺
Current Assignee
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Shanghai Robotics Co Ltd
Priority date
Filing date
Publication date
Application filed by Cloudminds Shanghai Robotics Co Ltd filed Critical Cloudminds Shanghai Robotics Co Ltd
Priority claimed from CN202311080110.9A
Publication of CN117315015A
Legal status: Pending

Classifications

    • G06T7/73 Image analysis; Determining position or orientation of objects or cameras using feature-based methods
    • G06N3/04 Neural networks; Architecture, e.g. interconnection topology
    • G06N3/084 Neural networks; Learning methods; Backpropagation, e.g. using gradient descent
    • G06T7/77 Image analysis; Determining position or orientation of objects or cameras using statistical methods
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]


Abstract

The disclosure relates to a method, an apparatus, a medium and an electronic device for determining the pose of a robot, including: determining, according to initial information acquired by each sensor, an initial pose of the robot at the time of acquiring the visual image of the current frame; determining, from a visual point cloud map, a key frame image for the acquired visual image of the current frame, and constructing point cloud matching pairs from feature points in the acquired visual image of the current frame and the feature points matched from the key frame image; inputting the initial pose into a pre-trained neural radiance field model to obtain a predicted visual image of the robot under the initial pose as predicted by the neural radiance field model; determining pose change information of the robot according to the point cloud matching pairs, the predicted visual image, the acquired visual image, and the wheel speed information and positioning information in the initial information; and determining the pose of the robot at the time of acquiring the visual image of the current frame according to the corresponding initial pose, the pose change information and the pose corresponding to the moment when the robot acquired the key frame image.

Description

Robot pose determining method and device, medium and electronic equipment
Technical Field
The present disclosure relates to the technical field of visual positioning, and in particular to a method, an apparatus, a medium and an electronic device for determining the pose of a robot.
Background
Visual SLAM (Simultaneous Localization And Mapping) positioning is a technology widely applied to indoor and outdoor robot localization. In visual positioning, image data acquired by visual sensors are matched against a three-dimensional visual map known in advance, and the pose of the robot is calculated in real time in combination with an IMU or a visual odometer.
In visual positioning scenarios, positioning accuracy decreases in environments with weak texture, repeated texture or large viewing-angle changes; likewise, when the current heading of the robot deviates too much from the key frame directions in the pre-built map, positioning drift occurs and positioning accuracy decreases.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method, an apparatus, a medium, and an electronic device for determining a pose of a robot.
According to a first aspect of an embodiment of the present disclosure, there is provided a robot pose determining method, including:
according to initial information acquired by each sensor configured on the robot, determining an initial pose of the robot when acquiring the visual image of the current frame;
determining a key frame image for the acquired visual image of the current frame from a visual point cloud map, and constructing a point cloud matching pair according to feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image;
inputting the initial pose corresponding to the current frame into a pre-trained neural radiance field model to obtain a predicted visual image of the robot under the initial pose as predicted by the neural radiance field model;
determining pose change information of the robot according to the point cloud matching pair, the predicted visual image, the acquired visual image, the wheel speed information and the positioning information in the initial information;
and determining the pose of the robot when the robot acquires the visual image of the current frame according to the corresponding initial pose, the pose change information and the corresponding pose of the robot when the robot acquires the key frame image.
Optionally, the determining the pose of the robot when acquiring the visual image of the current frame according to the corresponding initial pose, the pose change information and the pose corresponding to the moment when the robot acquires the key frame image includes:
determining whether the pose change information meets a preset condition according to the corresponding initial pose and the pose corresponding to the moment when the robot acquires the key frame image;
and in a case where the pose change information does not meet the preset condition, returning to the step of inputting the initial pose corresponding to the current frame into the pre-trained neural radiance field model, and again determining whether the pose change information meets the preset condition according to the corresponding initial pose and the pose corresponding to the moment when the robot acquires the key frame image, until the pose change information meets the preset condition; and determining the pose of the robot when acquiring the visual image of the current frame according to the initial pose at the time of acquiring the visual image of the current frame and the pose change information.
Optionally, the determining pose change information of the robot according to the point cloud matching pair, the predicted visual image, the collected visual image, the wheel speed information in the initial information and the positioning information includes:
determining an image distance between the predicted visual image and the acquired visual image according to the channel color distance between pixels at the same position in the predicted visual image and the acquired visual image;
determining a Jacobian matrix of the initial pose of the current frame and a pose observation function value according to the image distance, the point cloud matching pair, and the wheel speed information and the positioning information in the initial information;
and determining pose change information of the robot according to the Jacobian matrix and the pose observation function values.
Optionally, the neural radiance field model is trained by:
inputting a sample pose of the robot and a sample visual image acquired by the robot in the sample pose into a neural radiance field model to be trained, and constructing a visual ray of the robot under the sample pose according to the orientation of the robot corresponding to the sample pose;
determining a plurality of sampling points on the visual ray, and determining the color and the volume density of each sampling point;
determining the channel color corresponding to the direction indicated by the visual ray according to the colors and the volume densities of the sampling points;
rendering a predicted sample visual image corresponding to the sample pose according to the channel color corresponding to the direction indicated by the visual ray;
and back-propagating the loss function of the neural radiance field model to be trained according to the predicted sample visual image and the corresponding acquired sample visual image, to obtain the pre-trained neural radiance field model.
Optionally, the determining, according to initial information collected by each sensor configured on the robot, an initial pose of the robot when collecting a collected visual image of a current frame includes:
according to the acquired visual image of the current frame in the initial information and the acquired visual image acquired before the current frame, extracting characteristic points and tracking the characteristic points to obtain optical flow information;
and determining the initial pose of the robot when the acquired visual image of the current frame is acquired according to the optical flow information and the inertial information corresponding to the acquired visual image of the current frame in the initial information.
Optionally, the constructing a point cloud matching pair according to the feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image includes:
matching the feature points in the acquired visual image of the current frame with the visual points of the key frame image to construct a plurality of initial matching pairs;
determining a transformation matrix according to matching pairs, among the initial matching pairs, that are not collinear with each other;
calculating a projection error between each initial matching pair and the transformation matrix respectively;
and determining point cloud matching pairs from the initial matching pairs according to the projection errors.
Optionally, the determining a point cloud matching pair from the initial matching pair according to the projection error includes:
if the projection error is larger than a preset error threshold, eliminating the initial matching pair corresponding to the projection error;
and if the projection error is smaller than or equal to the preset error threshold, determining the initial matching pair corresponding to the projection error as the point cloud matching pair.
According to a second aspect of the embodiments of the present disclosure, there is provided a robot pose determining apparatus, including:
a first determining module configured to determine an initial pose of the robot when acquiring the visual image of the current frame according to initial information acquired by each sensor configured on the robot;
a construction module configured to determine a key frame image for the acquired visual image of the current frame from the visual point cloud map, and construct a point cloud matching pair according to feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image;
an input module configured to input the initial pose corresponding to the current frame into a pre-trained neural radiance field model to obtain a predicted visual image of the robot under the initial pose as predicted by the neural radiance field model;
a second determining module configured to determine pose change information of the robot according to the point cloud matching pair, the predicted visual image, the acquired visual image, and the wheel speed information and the positioning information in the initial information;
and a third determining module configured to determine the pose of the robot when acquiring the visual image of the current frame according to the corresponding initial pose, the pose change information and the pose corresponding to the moment when the robot acquires the key frame image.
According to a third aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
Through the above technical solution, the positioning accuracy of the robot in degraded scenes is improved, for example in environments with weak texture, repeated texture or insufficient visual features; and when the current heading of the robot differs too much from the key frame directions in the visual point cloud map, positioning drift is avoided and the positioning stability of the robot is improved.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification. They illustrate the disclosure and, together with the description, serve to explain the disclosure, but do not limit it. In the drawings:
fig. 1 is a flowchart illustrating a robot pose determination method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a robot pose determination method according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a training step of a neural radiation field model, according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a robot pose determination apparatus according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a robot pose determination apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating a robot pose determination apparatus according to an exemplary embodiment.
Detailed Description
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the disclosure, are not intended to limit the disclosure.
It should be noted that all actions of acquiring signals, information or data in the present disclosure are performed in compliance with the corresponding data protection regulations of the country where the actions take place and with the authorization given by the owner of the corresponding device.
Before introducing the method, apparatus, medium and electronic device for determining the pose of a robot, an application scenario of the present disclosure is first introduced.
In visual positioning scenarios, for example in environments with many repeated textures, the visual odometer may drift during tracking and be erroneously matched against the pre-built point cloud map; when visual features are not abundant, tracking segments may be too short and matching pairs with the pre-built point cloud map too few. Both situations cause positioning drift or reduced positioning accuracy. In addition, when the current heading of the robot deviates too much from the key frame directions in the pre-built map, point cloud matching pairs may not be found, which likewise leads to positioning drift and reduced positioning accuracy.
In view of this, in order to improve the positioning accuracy of a robot, the present disclosure provides a robot pose determining method, and fig. 1 is a flowchart of a robot pose determining method according to an exemplary embodiment, as shown in fig. 1, including the following steps.
In step S1, according to initial information collected by each sensor configured on the robot, an initial pose of the robot when a collected visual image of a current frame is collected is determined.
In one embodiment, the sensors configured on the robot include at least one of a camera, an IMU (Inertial Measurement Unit), a wheel speed meter and a GNSS (Global Navigation Satellite System) receiver.
The camera may be a monocular camera, a multi-view camera or a depth camera, such as a TOF (Time of Flight) camera, and the visual image acquired by the camera configured on the robot may be an RGB (Red Green Blue, three color channels) image or an RGB-D (RGB plus depth map) image. The IMU acquires inertial information of the robot, including its angular velocity and acceleration; the wheel speed meter acquires wheel speed information of the robot; and the GNSS acquires position information of the robot.
In one embodiment, the robot collects initial information through the sensors and feeds it into a visual odometer (Visual Odometry), which predicts the initial pose of the current frame of the robot; it is worth noting that the visual odometer is used to estimate camera motion between adjacent images. In the visual odometer, feature points are extracted from the acquired visual images of different frames and tracked by optical flow to determine optical flow information of the acquired visual images, and the pose of the current frame of the robot is then predicted from the optical flow information and the inertial information of the robot, the pose of the robot including its current position coordinates and current orientation.
Optionally, after each sensor of the robot acquires initial information, time synchronization can be performed on the initial information, so that the synchronization and accuracy of data acquired by the sensors are improved, and further the accuracy of predicting the current frame pose of the robot is improved.
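As a rough illustration of this optional synchronization step, the sketch below aligns higher-rate sensor streams to the camera timestamps by linear interpolation; the data layout and the function and variable names are assumptions made for the example, not taken from the disclosure.

```python
import numpy as np

def synchronize_to_camera(cam_ts, sensor_ts, sensor_vals):
    """Linearly interpolate a sensor stream onto the camera timestamps.

    cam_ts:      (N,) camera frame timestamps in seconds
    sensor_ts:   (M,) sensor sample timestamps in seconds
    sensor_vals: (M, D) sensor measurements (e.g. gyro, accel, wheel speed)
    Returns an (N, D) array of measurements aligned to the camera frames.
    """
    aligned = np.empty((len(cam_ts), sensor_vals.shape[1]))
    for d in range(sensor_vals.shape[1]):
        aligned[:, d] = np.interp(cam_ts, sensor_ts, sensor_vals[:, d])
    return aligned

# Example: align 200 Hz IMU readings to 30 Hz image timestamps.
cam_ts = np.arange(0.0, 1.0, 1 / 30)
imu_ts = np.arange(0.0, 1.0, 1 / 200)
imu = np.random.randn(len(imu_ts), 6)          # [gyro_xyz, accel_xyz]
imu_at_frames = synchronize_to_camera(cam_ts, imu_ts, imu)
```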
In step S2, a key frame image of the acquired visual image of the current frame is determined from the visual point cloud map, and a point cloud matching pair is constructed according to the feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image.
Optionally, before confirming the pose of the robot, a visual point cloud map corresponding to the current scene needs to be loaded, where the visual point cloud map may be a sparse map, a semi-dense map, a grid map or an octree map, and the visual point cloud map includes point cloud information, feature point information and depth information of a three-dimensional space corresponding to the current scene at each position.
It should be noted that the visual points of the key frame image may be feature points of the key frame image.
In one embodiment, after the camera obtains the visual image of the current frame, a key frame visual image is determined in the visual point cloud map from among the visual images preceding the current frame. Feature points are extracted from the visual image of the current frame and from the key frame visual image respectively, and the feature descriptors corresponding to the feature points are calculated. The feature descriptors of the visual image of the current frame are compared with those of the key frame visual image, matching point pairs are preliminarily determined according to the similarity of features and positions of different feature points, erroneous matching pairs that would affect the point cloud matching are eliminated so as to denoise the initial corresponding point data, and a rigid transformation is applied to the remaining matching point pairs to determine the point cloud matching pairs.
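A minimal sketch of the descriptor-based matching described above, assuming OpenCV (ORB features with brute-force Hamming matching); the geometric outlier removal is covered separately further below, and the names and parameter values here are illustrative only.

```python
import cv2

def match_to_keyframe(cur_img, key_img, max_pairs=200):
    """Build candidate matching pairs between the current frame and a key frame image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(cur_img, None)   # feature points + descriptors
    kp2, des2 = orb.detectAndCompute(key_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Keep the most similar descriptor pairs as initial matching pairs;
    # outliers are removed later by a geometric (projection-error) check.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:max_pairs]]
```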
In step S3, the initial pose corresponding to the current frame is input into a pre-trained neural radiance field model to obtain a predicted visual image of the robot under the initial pose as predicted by the neural radiance field model.
In one embodiment, the neural radiance field model is an implicit neural rendering model based on NeRF (Neural Radiance Fields): the three-dimensional scene is represented as a multi-layer perceptron network, which renders a picture for an input pose at a given viewing angle. The visual image acquired at the current viewing angle is used as loss supervision to train the neural radiance field model, so that a network model suitable for rendering the current scene is finally obtained.
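The following is a minimal, generic NeRF-style multi-layer perceptron in PyTorch, shown only to illustrate the kind of implicit model meant here (a mapping from a 3D point and a viewing direction to a color and a volume density); it is not the specific network architecture of the disclosure.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Minimal NeRF-style MLP: (3D point, view direction) -> (RGB color, volume density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)                # volume density head
        self.rgb = nn.Sequential(nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
                                 nn.Linear(hidden // 2, 3), nn.Sigmoid())

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma(h))                # density is non-negative
        rgb = self.rgb(torch.cat([h, view_dir], dim=-1)) # color depends on view direction
        return rgb, sigma
```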
In step S4, pose change information of the robot is determined according to the point cloud matching pair, the predicted visual image, the collected visual image, the wheel speed information and the positioning information in the initial information.
It is worth noting that by comparing the predicted visual image with the acquired visual image, the relationship between the real image acquired by the camera and the image rendered by the neural radiance field model can be determined.
In one embodiment, the comparison between the real image acquired by the camera and the image rendered by the neural radiance field model is determined by calculating the L2 distance (Euclidean distance) between pixel values of the predicted visual image and the acquired visual image.
In one embodiment, the initial pose of the current frame and the robot poses corresponding to a plurality of key frames are taken as state quantities, the key frames being selected from a number of common frames preceding the current frame; the L2 distance between the predicted visual image and the acquired visual image, the wheel speed information, the positioning information and the point cloud matching pairs are taken as observations; and nonlinear joint optimization is performed on the state quantities and observations to determine the pose at the time of acquiring the visual image of the current frame.
The nonlinear joint optimization may use the Gauss-Newton method or the LM (Levenberg-Marquardt) method.
In one embodiment, in order to carry out the nonlinear joint optimization, a pose observation function needs to be constructed from all the observations, that is, a function of the five observation terms, namely the point cloud matching pairs, the predicted visual image, the acquired visual image, and the wheel speed information and the positioning information in the initial information, together with the Jacobian matrix corresponding to this function. An increment equation is then determined from the Jacobian matrix, its transpose and the pose observation function, the increment equation is solved, and the pose change information is determined from the solution.
In step S5, according to the corresponding initial pose, the pose change information, and the pose corresponding to the moment when the robot collects the key frame image, the pose when the robot collects the visual image of the current frame is determined.
According to this embodiment, the corresponding initial pose, the pose change information and the pose corresponding to the moment when the robot acquired the key frame image are taken as state quantities for nonlinear joint optimization. Specifically, under the constraint that the state quantity corresponding to the current frame equals the sum of the state quantity of the previous frame and the pose change information, the magnitude of the pose change information is computed iteratively: if the pose change indicated by the pose change information is greater than a preset change threshold, the iteration is repeated; if it is smaller than or equal to the preset change threshold, the iterative calculation is stopped and the pose of the robot at the time of acquiring the visual image of the current frame is output.
In one embodiment, referring to the multi-sensor-based robot pose determining method shown in fig. 2, the visual point cloud map corresponding to the current scene and the neural radiance field model pre-trained on the current scene are first loaded, the initial information acquired by the sensors is obtained, and the acquired data are pre-processed and time-synchronized. In the visual odometer, feature points are extracted from the acquired visual image in the synchronized initial information and tracked by optical flow to determine optical flow information representing the motion of the robot; the initial pose of the current frame of the robot is then predicted by combining the optical flow information with the inertial information in the initial information. On the one hand, point cloud matching is performed between the acquired visual image of the current frame and the acquired visual images preceding the current frame by brute-force matching or reprojection matching; on the other hand, the initial pose of the current frame is input into the neural radiance field model to obtain a rendered predicted visual image, and the L2 distance between the predicted visual image and the acquired visual image is calculated. The initial pose of the current frame and the robot poses corresponding to a plurality of key frames, selected from common frames preceding the current frame, are taken as state quantities; the L2 distance between the predicted visual image and the acquired visual image, the wheel speed information, the positioning information and the point cloud matching pairs are taken as observations; and nonlinear joint optimization is performed on the state quantities and observations to determine the pose at the time of acquiring the visual image of the current frame.
Through the above technical solution, the positioning accuracy of the robot in degraded scenes is improved, for example in environments with weak texture, repeated texture or insufficient visual features; and when the current heading of the robot differs too much from the key frame directions in the visual point cloud map, positioning drift is avoided and the positioning stability of the robot is improved.
Optionally, the determining the pose of the robot when acquiring the visual image of the current frame according to the corresponding initial pose, the pose change information and the corresponding pose when the robot acquires the key frame image includes:
and determining whether the pose change information meets a preset condition according to the corresponding initial pose and the corresponding pose when the robot acquires the key frame image.
In one embodiment, the preset condition is that the pose change information is less than or equal to a preset change threshold, and/or the iteration number reaches a preset iteration number threshold.
In a case where the pose change information does not meet the preset condition, the step of inputting the initial pose corresponding to the current frame into the pre-trained neural radiance field model is executed again, and whether the pose change information meets the preset condition is again determined according to the corresponding initial pose and the pose corresponding to the moment when the robot acquires the key frame image, until the pose change information meets the preset condition; the pose of the robot when acquiring the visual image of the current frame is then determined according to the initial pose at the time of acquiring the visual image of the current frame and the pose change information.
Optionally, the determining pose change information of the robot according to the point cloud matching pair, the predicted visual image, the collected visual image, the wheel speed information in the initial information and the positioning information includes:
and determining the image distance between the predicted visual image and the acquired visual image according to the channel color distance between pixels at the same position in the predicted visual image and the acquired visual image.
In one embodiment, the similarity between the two images is determined by calculating the L2 distance between the predicted visual image and the acquired visual image. Since each point on the predicted visual image corresponds one-to-one to a point on the acquired visual image, the L2 distance between the two images can be taken as the straight-line distance between a point on the predicted visual image and its corresponding point on the acquired visual image, or as the average of the straight-line distances over all such point pairs. Specifically, the image distance d between the predicted visual image and the acquired visual image is calculated by the following formula:

d = sqrt( (x_1 - y_1)^2 + (x_2 - y_2)^2 + ... + (x_n - y_n)^2 )

where x = (x_1, ..., x_n) is a point on the predicted visual image, y = (y_1, ..., y_n) is the corresponding point on the acquired visual image, and n denotes the dimension.
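A small numpy sketch of this image distance is given below; it computes the mean per-pixel Euclidean distance over the color channels and assumes the two images have identical size and channel layout.

```python
import numpy as np

def image_l2_distance(pred_rgb, obs_rgb):
    """Mean per-pixel Euclidean (L2) distance between the rendered predicted image
    and the acquired visual image; both are HxWx3 arrays with matching layout."""
    diff = pred_rgb.astype(np.float64) - obs_rgb.astype(np.float64)
    per_pixel = np.sqrt((diff ** 2).sum(axis=-1))   # distance over the color channels
    return per_pixel.mean()
```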
And determining a jacobian matrix of the initial pose of the current frame and a pose observation function value according to the image distance, the point cloud matching pair, the wheel speed information in the initial information and the positioning information.
In one embodiment, the image distance, the point cloud matching pairs, the wheel speed information and the positioning information are taken as the observations of the nonlinear joint optimization, a pose observation function with these four terms is constructed, and the Jacobian matrix of the initial pose of the current frame is determined from the pose observation function.
And determining pose change information of the robot according to the jacobian matrix and the pose observation function values.
In one embodiment, an increment equation is determined according to the Jacobian matrix, the transpose of the Jacobian matrix and the pose observation function, the increment equation is solved, and the pose change information is determined from the solution, where the increment equation is:

J^T(x_i) J(x_i) Δx = -J^T(x_i) f(x_i)

where J^T(x_i) is the transpose of the Jacobian matrix, J(x_i) is the Jacobian matrix, Δx is the pose change information, and f(x_i) is the observation function.
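For illustration, a generic Gauss-Newton update implementing this increment equation is sketched below; the residual and Jacobian callbacks stand in for the stacked observation terms and are assumptions of the example, not the exact formulation of the disclosure.

```python
import numpy as np

def gauss_newton_step(residual_fn, jacobian_fn, x):
    """One Gauss-Newton update: solve J^T J dx = -J^T f for the pose increment dx."""
    f = residual_fn(x)            # stacked residuals of all observation terms
    J = jacobian_fn(x)            # Jacobian of the residuals w.r.t. the pose parameters
    H = J.T @ J                   # approximate Hessian
    g = -J.T @ f
    dx = np.linalg.solve(H, g)
    return x + dx, dx

# Toy usage: fit pose parameters x so that the residuals A @ x - b vanish.
A = np.random.randn(12, 6)
b = np.random.randn(12)
x = np.zeros(6)
for _ in range(10):
    x, dx = gauss_newton_step(lambda p: A @ p - b, lambda p: A, x)
    if np.linalg.norm(dx) < 1e-8:     # analogous to the preset change threshold
        break
```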
Optionally, as shown in fig. 3, the neural radiance field model is trained by the following steps.
S301, inputting a sample pose of the robot and a sample visual image acquired by the robot in the sample pose into the neural radiance field model to be trained, and constructing a visual ray of the robot under the sample pose according to the orientation of the robot corresponding to the sample pose.
The visual ray of the robot is a ray whose starting point is the camera center and whose direction is the orientation corresponding to the sample pose of the robot; the visual ray is projected into the visual point cloud map corresponding to the current scene.
S302, determining a plurality of sampling points on the visual ray, and determining the color and the volume density of each sampling point.
It is worth noting that the neural radiance field model trained by deep learning can generate the color and the volume density of each point in the current scene from parameters such as the sampling point, the viewing direction and the distance of the robot.
In one embodiment, a plurality of sampling points are selected on the visual ray, and the color and volume density of each sampling point are determined by an MLP (Multilayer Perceptron).
S303, determining the channel color corresponding to the direction indicated by the visual ray according to the color of the sampling point and the volume density.
In one embodiment, the channel color of the pixel point corresponding to the visual ray is determined by radiation integration of a plurality of points on the visual ray according to the color and the volume density of each sampling point.
And S304, rendering to obtain a predicted sample visual image corresponding to the sample pose according to the channel color corresponding to the direction indicated by the visual ray.
In one embodiment, according to the channel color corresponding to the visual ray, a predicted sample visual image corresponding to the sample pose is obtained through a volume rendering equation.
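A minimal sketch of the standard volume-rendering quadrature commonly used for this kind of compositing is shown below; it assumes per-sample colors, densities and spacings along one visual ray and illustrates the idea rather than the exact rendering equation of the disclosure.

```python
import torch

def composite_along_ray(rgb, sigma, deltas):
    """Numerical volume rendering of one visual ray.

    rgb:    (S, 3) colors of the S sampling points on the ray
    sigma:  (S,)   volume densities of the sampling points
    deltas: (S,)   distances between consecutive sampling points
    Returns the channel color seen along the ray direction.
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)                              # opacity per segment
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans                                               # contribution of each sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=0)
```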
S305, back-propagating the loss function of the neural radiance field model to be trained according to the predicted sample visual image and the corresponding acquired sample visual image, to update the model and obtain the pre-trained neural radiance field model.
It is worth noting that the loss function can be calculated from the predicted sample visual image and the corresponding acquired sample visual image, and the neural radiance field model is updated according to the calculation result.
Optionally, the determining, according to initial information collected by each sensor configured on the robot, an initial pose of the robot when collecting a collected visual image of a current frame includes:
extracting feature points and tracking the feature points according to the acquired visual image of the current frame in the initial information and the acquired visual images acquired before the current frame, to obtain optical flow information.
It is worth noting that, through optical flow tracking, the correspondence between the visual images of adjacent frames can be determined, and the motion change of the robot between adjacent frames can then be calculated.
In one embodiment, feature points are extracted from the acquired visual image of the current frame and from the acquired visual images acquired before it, the current frame is selected as the first frame from which to track the feature points, and the feature points tracked from the first frame are iterated with the LK (Lucas-Kanade) optical flow algorithm. In each iteration, the visual image of the previous frame and the visual image of the current frame are input into the optical flow function, whose result contains the feature points of the current frame and their status flags: the status of a feature point is 1 if the corresponding feature point of the previous frame is found in the visual image of the current frame, and 0 otherwise. Feature points whose status is 1 are retained and those whose status is 0 are discarded, and the optical flow information is thereby determined.
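A minimal OpenCV sketch of pyramidal Lucas-Kanade tracking with the status-flag filtering described above; the window size, pyramid levels and the detector mentioned in the comment are illustrative choices only.

```python
import cv2
import numpy as np

def track_features(prev_gray, cur_gray, prev_pts):
    """Track feature points from the previous frame into the current frame with
    pyramidal Lucas-Kanade optical flow, keeping only points whose status flag is 1."""
    cur_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                     prev_pts, None,
                                                     winSize=(21, 21), maxLevel=3)
    good = status.reshape(-1) == 1
    return prev_pts[good], cur_pts[good]

# prev_pts would typically come from a detector such as cv2.goodFeaturesToTrack:
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=10)
```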
and determining the initial pose of the robot when acquiring the visual image of the current frame according to the optical flow information and the inertial information corresponding to the acquired visual image of the current frame in the initial information.
In one embodiment, with the robot pose of the frame preceding the current frame known, the initial pose of the current frame of the robot is predicted in the visual odometer from the acquired optical flow information and the inertial information acquired by the IMU. Specifically, the pose of the frame preceding the current frame is determined from the optical flow information and taken as a base pose; the base pose is updated according to the angular velocity information in the inertial information; the acceleration output by the IMU is then converted into acceleration in the robot coordinate system; and a double integration is performed on the updated pose, starting from the initial velocity of the robot, to determine the initial pose of the current frame of the robot.
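For illustration, a simple sketch of pose propagation by double integration of IMU measurements is given below; the gravity handling, frame conventions and names are assumptions of the example, not taken from the disclosure.

```python
import numpy as np

def propagate_pose(p_prev, v_prev, R_prev, gyro, accel, dt,
                   gravity=np.array([0.0, 0.0, -9.81])):
    """Propagate the robot pose over one IMU interval by double integration.

    p_prev, v_prev: previous position and velocity in the world frame
    R_prev:         previous orientation as a 3x3 rotation matrix (body to world)
    gyro, accel:    angular velocity (rad/s) and acceleration (m/s^2) in the body frame
    dt:             integration interval in seconds
    """
    # Update the orientation with a small rotation built from the angular velocity.
    theta = gyro * dt
    angle = np.linalg.norm(theta)
    if angle > 1e-12:
        axis = theta / angle
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    else:
        dR = np.eye(3)
    R = R_prev @ dR
    # Convert the body-frame acceleration to the world frame and integrate twice.
    a_world = R_prev @ accel + gravity
    v = v_prev + a_world * dt
    p = p_prev + v_prev * dt + 0.5 * a_world * dt * dt
    return p, v, R
```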
Optionally, the constructing a point cloud matching pair according to the feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image includes:
and matching the characteristic points in the acquired visual image of the current frame with the visual points of the key frame image to construct a plurality of initial matching pairs.
In one embodiment, similarity matching may be performed between the feature points in the acquired visual image of the current frame and the feature points in the key frame image, the similarity being determined from the distance between the feature descriptors of the two; the matching pairs are then ranked by similarity, and the matching pairs with the highest similarity are selected as the initial matching pairs.
In another embodiment, the initial matching pairs between the feature points in the acquired visual image of the current frame and the feature points in the key frame image may be determined by reprojection matching, which includes backward reprojection matching and forward reprojection matching. Backward reprojection matching traverses the key frame images with the visual image of the current frame, that is, the feature points in the visual image of the current frame are projected into different key frame images and the corresponding feature points of the key frames are searched; projection weights are determined according to the degree of matching between the feature points of the visual image of the current frame and the feature points in the different key frame images, and the initial matching pairs are determined according to the projection weights.
And determining a transformation matrix according to matching pairs, among the initial matching pairs, that are not collinear with each other.
In one embodiment, a RANSAC (Random Sample Consensus) based algorithm randomly selects four sample initial matching pairs from the initial matching pairs, ensuring that the four pairs are not collinear with each other, calculates a homography matrix from these four sample initial matching pairs, and records it as the M model.
And calculating the projection error between each initial matching pair and the transformation matrix respectively.
Following the above embodiment, all initial matching pairs are tested against the M model, and the projection error, used as a cost function, is calculated for the initial matching pairs satisfying the M model.
And determining a point cloud matching pair from the initial matching pair according to the projection error.
Continuing with the above embodiment, the initial matching pairs whose projection error is smaller than or equal to the preset error threshold are determined as point cloud matching pairs and added to the inlier set.
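A compact sketch of RANSAC-based outlier rejection with a reprojection-error threshold, assuming OpenCV's findHomography is available; the threshold value and point formats are illustrative only.

```python
import cv2
import numpy as np

def filter_matches_ransac(cur_pts, key_pts, err_thresh=3.0):
    """Estimate a homography with RANSAC and keep only matching pairs whose
    reprojection error is below the preset error threshold (in pixels)."""
    src = np.asarray(cur_pts, dtype=np.float32).reshape(-1, 1, 2)
    dst = np.asarray(key_pts, dtype=np.float32).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=err_thresh)
    # mask marks the inlier set; pairs flagged 0 are eliminated as erroneous matches.
    inliers = mask.reshape(-1).astype(bool)
    return H, [pair for pair, ok in zip(zip(cur_pts, key_pts), inliers) if ok]
```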
Optionally, the determining a point cloud matching pair from the initial matching pair according to the projection error includes:
and if the projection error is larger than a preset error threshold, eliminating the initial matching pair corresponding to the projection error.
And if the projection error is smaller than or equal to the preset error threshold, determining the initial matching pair corresponding to the projection error as the point cloud matching pair.
In one embodiment, after the initial matching pairs are determined, the above steps are repeated to iteratively update the point cloud matching pairs. If the number of feature points in the current inlier set is greater than that in the best inlier set, the best inlier set is updated and the preset iteration count is updated at the same time. It is worth noting that if the current iteration count is greater than or equal to the preset iteration count, the iteration process is exited; otherwise, the iteration count is increased by 1, initial matching pairs that are not collinear with each other are randomly selected again, a transformation matrix is constructed, and the above steps are performed again.
Referring to fig. 4, the present disclosure further provides a robot pose determining apparatus 200, including:
the first determining module 201 is configured to determine an initial pose of the robot when the vision image of the current frame is acquired according to initial information acquired by each sensor configured on the robot.
The construction module 202 is configured to determine a key frame image of the acquired visual image of the current frame from the visual point cloud map, and construct a point cloud matching pair according to the feature points in the acquired visual image of the current frame and the feature points matched from the visual points of the key frame image.
The input module 203 is configured to input the initial pose corresponding to the current frame into a pre-trained neural radiance field model, so as to obtain a predicted visual image of the robot under the initial pose as predicted by the neural radiance field model.
A second determining module 204 is configured to determine pose change information of the robot according to the point cloud matching pair, the predicted visual image, the collected visual image, the wheel speed information and the positioning information in the initial information.
And a third determining module 205 configured to determine a pose of the robot when the robot collects the visual image of the current frame according to the corresponding initial pose, the pose change information and the corresponding pose of the robot when the robot collects the key frame image.
Optionally, the third determination module 205 is configured to:
and determining whether the pose change information meets a preset condition according to the corresponding initial pose and the corresponding pose when the robot acquires the key frame image.
In a case where the pose change information does not meet the preset condition, the step of inputting the initial pose corresponding to the current frame into the pre-trained neural radiance field model is executed again, and whether the pose change information meets the preset condition is again determined according to the corresponding initial pose and the pose corresponding to the moment when the robot acquires the key frame image, until the pose change information meets the preset condition; the pose of the robot when acquiring the visual image of the current frame is then determined according to the initial pose at the time of acquiring the visual image of the current frame and the pose change information.
Optionally, the second determination module 204 is configured to:
and determining the image distance between the predicted visual image and the acquired visual image according to the channel color distance between pixels at the same position in the predicted visual image and the acquired visual image.
And determining a jacobian matrix of the initial pose of the current frame and a pose observation function value according to the image distance, the point cloud matching pair, the wheel speed information in the initial information and the positioning information.
And determining pose change information of the robot according to the jacobian matrix and the pose observation function values.
Optionally, the input module 203 is configured to:
inputting the sample pose of the robot and a sample visual image acquired by the robot in the sample pose into a neural radiance field model to be trained, and constructing a visual ray of the robot under the sample pose according to the orientation of the robot corresponding to the sample pose;
determining a plurality of sampling points on the visual ray, and determining the color and the volume density of each sampling point;
determining the channel color corresponding to the direction indicated by the visual ray according to the colors and the volume densities of the sampling points;
rendering a predicted sample visual image corresponding to the sample pose according to the channel color corresponding to the direction indicated by the visual ray;
and back-propagating the loss function of the neural radiance field model to be trained according to the predicted sample visual image and the corresponding acquired sample visual image, to obtain the pre-trained neural radiance field model.
Optionally, the first determining module 201 is configured to:
and extracting characteristic points and tracking the characteristic points according to the acquired visual image of the current frame and the acquired visual image acquired before the current frame in the initial information to obtain optical flow information.
And determining the initial pose of the robot when the acquired visual image of the current frame is acquired according to the optical flow information and the inertial information corresponding to the acquired visual image of the current frame in the initial information.
Optionally, the build module 202 is configured to:
and matching the characteristic points in the acquired visual image of the current frame with the visual points of the key frame image to construct a plurality of initial matching pairs.
And determining a transformation matrix according to the matched pairs which are not collinear with each other in the initial matched pair.
And respectively calculating the projection error of each initial matching pair and the transformation matrix.
And determining a point cloud matching pair from the initial matching pair according to the projection error.
Optionally, the build module 202 is configured to:
and if the projection error is larger than a preset error threshold, eliminating the initial matching pair corresponding to the projection error.
And if the projection error is smaller than or equal to the preset error threshold, determining the initial matching pair corresponding to the projection error as the point cloud matching pair.
The specific manner in which the respective modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be described again here.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the robot pose determination method of the present disclosure.
The present disclosure also provides an electronic device, including:
a memory having a computer program stored thereon.
And a processor for executing the computer program in the memory to implement the steps of the robot pose determination method of the present disclosure.
Fig. 5 is a block diagram of an electronic device 700, according to an example embodiment. As shown in fig. 5, the electronic device 700 may include: a processor 701, a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700, so as to complete all or part of the steps in the above robot pose determining method. The memory 702 is used to store various types of data to support the operation of the electronic device 700, and such data may include, for example, instructions for any application or method operating on the electronic device 700, as well as application-related data such as contacts, sent and received messages, pictures, audio and video. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The multimedia component 703 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used to output and/or input audio signals. For example, the audio component may include a microphone for receiving external audio signals; the received audio signals may be further stored in the memory 702 or transmitted through the communication component 705. The audio component further includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, a mouse or buttons, where the buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G or the like, or a combination of one or more of them, which is not limited here. Accordingly, the communication component 705 may include a Wi-Fi module, a Bluetooth module, an NFC module and so on.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), digital signal processors (Digital Signal Processor, abbreviated as DSP), digital signal processing devices (Digital Signal Processing Device, abbreviated as DSPD), programmable logic devices (Programmable Logic Device, abbreviated as PLD), field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described method of determining a pose of a robot.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the robot pose determination method described above is also provided. For example, the computer readable storage medium may be the memory 702 including program instructions described above, which are executable by the processor 701 of the electronic device 700 to perform the robot pose determination method described above.
Fig. 6 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, electronic device 1900 may be provided as a server. Referring to fig. 6, the electronic device 1900 includes a processor 1922, which may be one or more in number, and a memory 1932 for storing computer programs executable by the processor 1922. The computer program stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the robot pose determination method described above.
In addition, the electronic device 1900 may further include a power component 1926 and a communication component 1950, the power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable communication of the electronic device 1900, e.g., wired or wireless communication. In addition, the electronic device 1900 may also include an input/output (I/O) interface 1958. Electronic device 1900 may operate based on an operating system stored in memory 1932.
In another exemplary embodiment, a computer readable storage medium comprising program instructions which, when executed by a processor, implement the steps of the robot pose determination method described above is also provided. For example, the non-transitory computer readable storage medium may be the memory 1932 including program instructions described above that are executable by the processor 1922 of the electronic device 1900 to perform the robotic pose determination method described above.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described robot pose determination method when being executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, but the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solutions of the present disclosure within the scope of the technical concept of the present disclosure, and all the simple modifications belong to the protection scope of the present disclosure.
In addition, the specific features described in the foregoing embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, the present disclosure does not further describe various possible combinations.
Moreover, any combination between the various embodiments of the present disclosure is possible as long as it does not depart from the spirit of the present disclosure, which should also be construed as the disclosure of the present disclosure.

Claims (10)

1. A method for determining a pose of a robot, characterized by comprising the following steps:
according to initial information acquired by each sensor configured on the robot, determining an initial pose of the robot when acquiring the visual image of the current frame;
determining a key frame image of the acquired visual image of the current frame from a visual point cloud map, and constructing a point cloud matching pair according to feature points in the acquired visual image of the current frame and feature points matched from visual points of the key frame image;
inputting the initial pose corresponding to the current frame into a pre-trained neural radiance field model to obtain a predicted visual image of the robot predicted by the neural radiance field model under the initial pose;
determining pose change information of the robot according to the point cloud matching pair, the predicted visual image, the acquired visual image, the wheel speed information and the positioning information in the initial information;
and determining the pose of the robot when the robot acquires the visual image of the current frame according to the corresponding initial pose, the pose change information and the corresponding pose of the robot when the robot acquires the key frame image.
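For illustration only (this sketch is not part of the claims), the flow of claim 1 can be summarised roughly as follows in Python; every helper name and the final pose composition are hypothetical and are assumed to be supplied by the caller rather than taken from the disclosure.

```python
from typing import Callable, Dict, Tuple
import numpy as np

def determine_pose(
    initial_info: Dict,                                       # sensor data for the current frame
    estimate_initial_pose: Callable[[Dict], np.ndarray],      # step 1: initial pose from sensors
    find_key_frame: Callable[[np.ndarray], Tuple[np.ndarray, np.ndarray]],
    build_matches: Callable[[np.ndarray, np.ndarray], list],  # step 2: point cloud matching pairs
    nerf_render: Callable[[np.ndarray], np.ndarray],          # step 3: NeRF-predicted image
    estimate_pose_change: Callable[..., np.ndarray],          # step 4: pose change information
) -> np.ndarray:
    frame = initial_info["image"]                             # acquired visual image of the current frame
    init_pose = estimate_initial_pose(initial_info)
    key_frame, key_pose = find_key_frame(frame)               # key frame image and its pose
    matches = build_matches(frame, key_frame)
    predicted = nerf_render(init_pose)
    delta = estimate_pose_change(matches, predicted, frame,
                                 initial_info["wheel_speed"],
                                 initial_info["positioning"])
    # Step 5: compose the final pose from the initial pose, the pose change and the
    # key frame pose; the claim does not fix the formula, so one simple choice
    # (apply the pose change to the initial pose, 4x4 homogeneous matrices) is shown.
    _ = key_pose   # a full implementation would also use the key frame pose here
    return init_pose @ delta
```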
2. The pose determining method according to claim 1, wherein determining the pose of the robot when acquiring the visual image of the current frame according to the corresponding initial pose, the pose change information, and the pose corresponding to the moment when the robot acquires the key frame image comprises:
determining whether the pose change information meets a preset condition according to the corresponding initial pose and the corresponding pose when the robot acquires the key frame image;
and under the condition that the pose change information does not meet the preset condition, repeating the steps from inputting the initial pose corresponding to the current frame into the pre-trained neural radiance field model to determining whether the pose change information meets the preset condition according to the corresponding initial pose and the pose corresponding to the moment when the robot acquires the key frame image, until the pose change information meets the preset condition, and then determining the pose of the robot when acquiring the visual image of the current frame according to the initial pose when the robot acquires the visual image of the current frame and the pose change information.
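A minimal sketch of the iteration described in claim 2, under assumptions: the "preset condition" is read here as the translation and rotation increments falling below fixed tolerances, and the pose increment is applied with a first-order SE(3) update; neither choice is prescribed by the claim.

```python
import numpy as np

def apply_increment(pose, delta):
    """Compose a small 6-DoF increment (tx, ty, tz, wx, wy, wz) with a 4x4 pose (first order)."""
    T = np.eye(4)
    T[:3, 3] = delta[:3]
    wx, wy, wz = delta[3:]
    T[:3, :3] += np.array([[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]])  # I + [w]x
    return pose @ T

def refine_pose(init_pose, key_pose, render, estimate_delta,
                max_iters=10, trans_tol=1e-3, rot_tol=1e-3):
    """Re-render and re-estimate the pose change until the change is small enough."""
    pose = init_pose.copy()
    for _ in range(max_iters):
        predicted = render(pose)                             # NeRF prediction at the current pose
        delta = estimate_delta(predicted, pose, key_pose)    # 6-DoF pose change information
        pose = apply_increment(pose, delta)
        if np.linalg.norm(delta[:3]) < trans_tol and np.linalg.norm(delta[3:]) < rot_tol:
            break                                            # assumed reading of the preset condition
    return pose
```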
3. The pose determination method according to claim 1, wherein determining pose change information of the robot based on the point cloud matching pair, the predicted visual image, the collected visual image, wheel speed information and positioning information in the initial information comprises:
determining an image distance between the predicted visual image and the acquired visual image according to the channel color distance between pixels at the same position in the predicted visual image and the acquired visual image;
determining a Jacobian matrix and a pose observation function value of the initial pose of the current frame according to the image distance, the point cloud matching pair, and the wheel speed information and the positioning information in the initial information;
and determining the pose change information of the robot according to the Jacobian matrix and the pose observation function value.
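Read as a photometric alignment problem, claim 3 can be sketched as below; treating the image distance as a per-pixel channel colour residual and solving a damped Gauss-Newton system for the pose increment are assumptions, since the claim does not spell out the observation function or the solver.

```python
import numpy as np

def image_residual(predicted, acquired):
    """Per-pixel channel colour distance between the predicted and acquired images, flattened."""
    return (predicted.astype(np.float64) - acquired.astype(np.float64)).reshape(-1)

def gauss_newton_step(jacobian, residual, damping=1e-6):
    """Solve (J^T J + damping * I) dx = -J^T r for the 6-DoF pose change."""
    JtJ = jacobian.T @ jacobian + damping * np.eye(jacobian.shape[1])
    Jtr = jacobian.T @ residual
    return np.linalg.solve(JtJ, -Jtr)
```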
4. The pose determination method according to claim 1, wherein the neural radiance field model is trained by:
inputting a sample pose of the robot and a sample visual image acquired by the robot in the sample pose into a neural radiance field model to be trained, and constructing a visual ray of the robot under the sample pose according to the orientation of the robot corresponding to the sample pose;
determining a plurality of sampling points on the visual ray, and determining the color and the volume density of each sampling point;
determining a channel color corresponding to the direction indicated by the visual ray according to the colors and the volume densities of the sampling points;
rendering to obtain a predicted sample visual image corresponding to the sample pose according to the channel color corresponding to the direction indicated by the visual ray;
and performing a back-propagation update on the neural radiance field model to be trained according to a loss function between the predicted sample visual image and the corresponding acquired sample visual image, to obtain the pre-trained neural radiance field model.
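A compact PyTorch-flavoured sketch of the training loop in claim 4: sample points along each visual ray, query the model for colour and volume density, composite them into per-ray channel colours, and back-propagate a photometric loss. The model signature `model(points, dirs) -> (rgb, sigma)`, the uniform sampling between assumed near/far bounds and the MSE loss are illustrative assumptions.

```python
import torch

def volume_render(colors, sigmas, deltas):
    """Composite per-sample colours (R, S, 3) and densities (R, S) along each ray."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                        # opacity of each sample
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                               # accumulated transmittance
    weights = alphas * trans
    return (weights.unsqueeze(-1) * colors).sum(dim=1)                # (R, 3) channel colours

def training_step(model, optimizer, rays_o, rays_d, target_rgb,
                  near=0.1, far=6.0, n_samples=64):
    """One back-propagation update: render the rays of the sample pose and regress to the image."""
    t = torch.linspace(near, far, n_samples, device=rays_o.device)
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]  # sampling points on each ray
    dirs = rays_d[:, None, :].expand_as(pts)
    rgb, sigma = model(pts, dirs)                                     # colour and volume density
    deltas = torch.full_like(sigma, (far - near) / n_samples)
    pred_rgb = volume_render(rgb, sigma, deltas)                      # rendered predicted image rays
    loss = torch.nn.functional.mse_loss(pred_rgb, target_rgb)         # photometric loss
    optimizer.zero_grad()
    loss.backward()                                                   # back-propagation update
    optimizer.step()
    return loss.item()
```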
5. The pose determining method according to claim 1, wherein determining the initial pose of the robot when acquiring the acquired visual image of the current frame according to the initial information acquired by each sensor configured on the robot comprises:
according to the acquired visual image of the current frame in the initial information and the acquired visual image acquired before the current frame, extracting feature points and tracking the feature points to obtain optical flow information;
and determining the initial pose of the robot when the acquired visual image of the current frame is acquired according to the optical flow information and the inertial information corresponding to the acquired visual image of the current frame in the initial information.
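A sketch of the feature extraction and optical flow tracking in claim 5, using OpenCV's Shi-Tomasi corners and pyramidal Lucas-Kanade tracking; these particular algorithms are assumptions, as the claim only requires extracting and tracking feature points, and the subsequent fusion with the inertial information is not shown.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, max_corners=200):
    """Extract corners in the previous frame and track them into the current frame."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=10)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    ok = status.reshape(-1) == 1
    # The surviving (previous, current) point pairs constitute the optical flow information,
    # which would then be fused with the inertial measurements to obtain the initial pose.
    return prev_pts.reshape(-1, 2)[ok], curr_pts.reshape(-1, 2)[ok]
```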
6. The pose determination method according to any one of claims 1-5, wherein the constructing a point cloud matching pair from feature points in the acquired visual image of the current frame and feature points matched from visual points of the key frame image comprises:
matching the feature points in the acquired visual image of the current frame with the visual points of the key frame image to construct a plurality of initial matching pairs;
determining a transformation matrix according to non-collinear matching pairs among the initial matching pairs;
calculating a projection error of each initial matching pair with respect to the transformation matrix;
and determining a point cloud matching pair from the initial matching pair according to the projection error.
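An assumed illustration of claim 6: fit a 3x3 transformation matrix from the initial matching pairs (here via OpenCV's homography estimation over non-degenerate, i.e. non-collinear, correspondences) and score every pair by its projection error against that matrix. Using a homography is an assumption; the claim only requires a transformation matrix.

```python
import cv2
import numpy as np

def fit_transform_and_errors(src_pts, dst_pts):
    """Fit a 3x3 transform from the initial matching pairs and compute each pair's projection error."""
    H, _ = cv2.findHomography(src_pts.astype(np.float32), dst_pts.astype(np.float32), 0)
    ones = np.ones((len(src_pts), 1))
    proj = (H @ np.hstack([src_pts, ones]).T).T       # project the current-frame points
    proj = proj[:, :2] / proj[:, 2:3]                 # back to inhomogeneous coordinates
    errors = np.linalg.norm(proj - dst_pts, axis=1)   # per-pair projection error
    return H, errors
```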
7. The pose determination method according to claim 6, wherein determining the point cloud matching pair from the initial matching pairs according to the projection error comprises:
if the projection error is larger than a preset error threshold, eliminating the initial matching pair corresponding to the projection error;
and if the projection error is smaller than or equal to the preset error threshold, determining the initial matching pair corresponding to the projection error as the point cloud matching pair.
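The filtering rule of claim 7 is then a direct threshold test on those projection errors; the threshold value below is a hypothetical placeholder, not a value given in the disclosure.

```python
import numpy as np

def filter_matches(matches, errors, error_threshold=3.0):
    """Keep pairs whose projection error does not exceed the threshold; eliminate the rest."""
    keep = np.asarray(errors) <= error_threshold
    return [m for m, k in zip(matches, keep) if k]     # surviving point cloud matching pairs
```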
8. A robot pose determining device, characterized by comprising:
the first determining module is configured to determine the initial pose of the robot when the visual image of the current frame is acquired according to the initial information acquired by each sensor configured on the robot;
the construction module is configured to determine a key frame image of the acquired visual image of the current frame from the visual point cloud map, and construct a point cloud matching pair according to feature points in the acquired visual image of the current frame and feature points matched from the visual points of the key frame image;
the input module is configured to input the initial pose corresponding to the current frame into a pre-trained neural radiance field model to obtain a predicted visual image of the robot predicted by the neural radiance field model under the initial pose;
the second determining module is configured to determine pose change information of the robot according to the point cloud matching pair, the predicted visual image, the acquired visual image, and the wheel speed information and the positioning information in the initial information;
and the third determining module is configured to determine the pose of the robot when the robot acquires the visual image of the current frame according to the corresponding initial pose, the pose change information and the corresponding pose of the robot when the robot acquires the key frame image.
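For orientation only, the module decomposition of the device in claim 8 can be mirrored by a simple container of callables; the field names and signatures below are hypothetical and correspond one-to-one with the claimed modules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PoseDeterminationDevice:
    """Sketch mirroring the module layout of claim 8; each module is an assumed callable."""
    first_determining_module: Callable   # sensor initial information -> initial pose
    construction_module: Callable        # current frame, point cloud map -> key frame + matching pairs
    input_module: Callable               # initial pose -> NeRF-predicted visual image
    second_determining_module: Callable  # matches, images, wheel speed, positioning -> pose change
    third_determining_module: Callable   # initial pose, pose change, key frame pose -> final pose
```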
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the steps of the method according to any one of claims 1-7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any one of claims 1-7.
CN202311080110.9A 2023-08-24 2023-08-24 Robot pose determining method and device, medium and electronic equipment Pending CN117315015A (en)

Priority Applications (1)

Application Number: CN202311080110.9A (CN117315015A)
Priority Date: 2023-08-24
Filing Date: 2023-08-24
Title: Robot pose determining method and device, medium and electronic equipment

Applications Claiming Priority (1)

Application Number: CN202311080110.9A (CN117315015A)
Priority Date: 2023-08-24
Filing Date: 2023-08-24
Title: Robot pose determining method and device, medium and electronic equipment

Publications (1)

Publication Number: CN117315015A
Publication Date: 2023-12-29

Family

ID=89254385

Family Applications (1)

Application Number: CN202311080110.9A (CN117315015A, Pending)
Priority Date: 2023-08-24
Filing Date: 2023-08-24
Title: Robot pose determining method and device, medium and electronic equipment

Country Status (1)

Country: CN
Link: CN117315015A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination