CN114897931A - Tracking method and device of image feature points, electronic equipment and program product - Google Patents

Tracking method and device of image feature points, electronic equipment and program product

Info

Publication number
CN114897931A
CN114897931A
Authority
CN
China
Prior art keywords
tracked
current frame
feature point
tracking
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210249292.7A
Other languages
Chinese (zh)
Inventor
王圣懿
张涛
韩冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Autonavi Software Co Ltd
Original Assignee
Autonavi Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Autonavi Software Co Ltd filed Critical Autonavi Software Co Ltd
Priority to CN202210249292.7A
Publication of CN114897931A

Classifications

    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 17/05 Three dimensional [3D] modelling: Geographic models
    • G06T 19/003 Navigation within 3D models or images
    • G06T 19/006 Mixed reality
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T 2207/30256 Lane; Road marking (indexing scheme: vehicle exterior; vicinity of vehicle)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose a method and apparatus for tracking image feature points, an electronic device, and a program product, wherein the method comprises the following steps: acquiring a current frame; predicting a predicted position of a feature point to be tracked in the current frame by using an adjacent frame of the current frame; determining the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process; and performing optical flow tracking on the feature point to be tracked based on the initial position to obtain a tracking position of the feature point to be tracked in the current frame. This technical scheme can improve the accuracy of the optical flow tracking result and reduce the number of image pyramid layers processed in the optical flow tracking process, thereby saving the computing power of optical flow tracking.

Description

Tracking method and device of image feature points, electronic equipment and program product
Technical Field
The present disclosure relates to the field of image technologies, and in particular, to a method and an apparatus for tracking image feature points, an electronic device, and a program product.
Background
Because vision sensors offer high positioning precision at low cost, visual SLAM (Simultaneous Localization And Mapping) systems have received widespread attention. A visual SLAM system is largely divided into two parts: a front end and a back end. The front end, also called the visual odometer (VO), roughly estimates the motion information of the image acquisition device from the information of adjacent images and provides a good initial value for the back end. Extraction and matching of image feature points is a fundamental problem in visual odometry. For terminal devices with limited computing resources, such as mobile phones, reducing the computational load of the visual odometer during image feature point matching, and thus the consumption of computing resources, is very important for the stable operation of the terminal device, and it is one of the main technical problems to be solved by those skilled in the art at present.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for tracking image feature points, electronic equipment and a program product.
In a first aspect, an embodiment of the present disclosure provides a method for tracking image feature points, where the method includes:
acquiring a current frame;
predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frame of the current frame;
determining the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process;
and carrying out optical flow tracking on the feature point to be tracked based on the initial position to obtain the tracking position of the feature point to be tracked in the current frame.
Further, the adjacent frames are the two frames preceding the current frame; predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frames of the current frame includes the following steps:
predicting the pose of the current frame based on the poses of the previous two frames by utilizing a uniform motion model between adjacent frames;
and projecting the three-dimensional coordinates of the feature points to be tracked in the adjacent frames to the current frame under the pose to obtain the predicted position.
Further, the adjacent frame is the previous frame of the current frame; predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frame of the current frame includes the following steps:
determining a rotation relation between the previous frame and the current frame by using inertial sensing data on the premise of no displacement between the previous frame and the current frame;
and projecting the two-dimensional coordinates of the feature point to be tracked in the previous frame to the current frame based on the rotation relation, so as to obtain the predicted position.
Further, the method further comprises:
dividing the current frame into a plurality of grids by utilizing a characteristic point homogenization strategy;
extracting angular points in each grid;
determining, for the corner points extracted from the grid, whether a response value of the corner point with the highest response value is higher than or equal to a response threshold;
and when the response value of the corner with the highest response value is higher than or equal to the response threshold value, determining the corner with the highest response value as the image feature point extracted from the current frame.
Further, the method further comprises:
determining the distance between the tracking position and the prediction position of the feature point to be tracked;
determining median of the distances corresponding to the plurality of feature points to be tracked;
determining a distance threshold based on the median;
and eliminating the characteristic points to be tracked, of which the distance between the tracking position and the predicted position exceeds the distance threshold, from the characteristic points to be tracked, which are successfully tracked.
Further, the method further comprises:
determining a fundamental matrix representing the epipolar constraint between the current frame and the previous frame by using the successfully tracked feature points to be tracked;
determining outliers in the successfully tracked feature points to be tracked based on the fundamental matrix;
and removing the outliers from the feature points to be tracked which are successfully tracked.
In a second aspect, an embodiment of the present disclosure provides a method for providing a location-based service, where the method includes: determining a tracking position of a map point to be tracked in a current frame by using the method of the first aspect, and providing a location-based service for a navigated object based on the tracking position, wherein the location-based service comprises: one or more of AR navigation, map rendering, and route planning.
In a third aspect, an embodiment of the present disclosure provides an apparatus for tracking image feature points, where the apparatus includes:
a first obtaining module configured to obtain a current frame;
the prediction module is configured to predict the prediction position of the feature point to be tracked in the current frame by using the adjacent frames of the current frame;
a first determination module configured to determine the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process;
and the tracking module is configured to perform optical flow tracking on the feature point to be tracked based on the initial position to obtain a tracking position of the feature point to be tracked in the current frame.
In a fourth aspect, an embodiment of the present disclosure provides a location-based service providing apparatus, including: determining a tracking position of a map point to be tracked in a current frame by using the device of the third aspect, and providing a position-based service for the navigated object based on the tracking position, wherein the position-based service comprises: one or more of AR navigation, map rendering, route planning.
These functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described functions.
In one possible design, the apparatus includes a memory configured to store one or more computer instructions that enable the apparatus to perform the corresponding method, and a processor configured to execute the computer instructions stored in the memory. The apparatus may also include a communication interface for the apparatus to communicate with other devices or a communication network.
In a fifth aspect, the disclosed embodiments provide an electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any of the above aspects.
In a sixth aspect, the disclosed embodiments provide a computer-readable storage medium for storing computer instructions for use by any of the above apparatuses, the computer instructions, when executed by a processor, being configured to implement the method of any of the above aspects.
In a seventh aspect, the disclosed embodiments provide a computer program product comprising computer instructions, which when executed by a processor, are configured to implement the method of any one of the above aspects.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the embodiment of the disclosure, in the image tracking process of the visual SLAM system, the position of the feature point to be tracked in the current frame is predicted according to the motion relationship between the adjacent frame and the current frame and the position of the feature point to be tracked in the adjacent frame, so as to obtain the predicted position of the feature point to be tracked in the current frame. The predicted position can be used as an initial value for optical flow tracking, and the tracking position of the feature point to be tracked can then be tracked from the current frame through optical flow tracking. Because the predicted position of the feature point to be tracked in the current frame is used as the initial position, rather than directly using the position of the feature point in the previous frame or a random initialization value as in the prior art, the accuracy of the optical flow tracking result can be improved, and the number of image pyramid layers processed in the optical flow tracking process can be reduced, thereby saving the computing power of optical flow tracking.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
fig. 1 illustrates a flowchart of a tracking method of image feature points according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram illustrating a process of predicting the pose of a current frame by using a uniform motion model between adjacent frames according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram illustrating prediction of the position of a feature point to be tracked based on IMU integration according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram illustrating an application scenario in AR navigation according to an embodiment of the present disclosure;
fig. 5 is a block diagram showing a configuration of an image feature point tracking apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device suitable for implementing a tracking method of image feature points and/or a location-based service providing method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, actions, components, parts, or combinations thereof, and do not preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof are present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
The details of the embodiments of the present disclosure are described in detail below with reference to specific embodiments.
Fig. 1 illustrates a flowchart of a tracking method of image feature points according to an embodiment of the present disclosure. As shown in fig. 1, the method for tracking image feature points includes the following steps:
in step S101, a current frame is acquired;
in step S102, predicting a predicted position of a feature point to be tracked in a current frame by using adjacent frames of the current frame;
in step S103, determining the predicted position as an initial position of the feature point to be tracked in the optical flow tracking process;
in step S104, performing optical flow tracking on the feature point to be tracked based on the initial position, and obtaining a tracking position of the feature point to be tracked in the current frame.
In this embodiment, the tracking method for image feature points may be executed on a terminal device and used for front-end calculation in a visual SLAM system, that is, in the visual odometer of the terminal device. The current frame to be tracked may be an image currently captured by the image acquisition device of the visual odometer, for example, an image frame of the road ahead captured by an image acquisition device on a vehicle.
The visual odometer usually tracks, in the current frame, feature points that have been successfully tracked in the previous frame, determines the tracking position of each feature point to be tracked in the current frame, further determines the motion information of the image acquisition device from the previous frame to the current frame based on the tracking position, and provides an initial value for the back-end optimization of the visual SLAM system.
In some embodiments, the feature points to be tracked may be corner points or other feature points on the image. The feature points to be tracked may be feature points that are successfully tracked in each of the previous image frames that are continuously acquired, starting from the first frame image acquired by the image acquisition device. Based on the tracking of the feature points, a three-dimensional space model of the three-dimensional environment where the image acquisition equipment is located can be continuously established, and then the three-dimensional space model is applied to the AR technology, for example, the three-dimensional space model and a map can be fused, and AR navigation is realized after operations such as rendering.
In the embodiment of the present disclosure, the predicted position of the feature point to be tracked in the current frame is predicted by using adjacent frames of the current frame, where the prediction may be based on, but is not limited to, the pixel position of the feature point to be tracked in the adjacent frames, and the adjacent frames may be the previous frame, the previous two frames, …, up to the previous N frames (N is a positive integer greater than 2) of the current frame. The predicted position is merely a value predicted from information such as the pixel position of the feature point to be tracked in the adjacent frame, and is not necessarily the true value.
In the embodiment of the disclosure, in order to reduce the number of image pyramid layers tracked in the optical flow tracking process and save calculation power, the predicted position in the current frame is used as an initial value of the feature point to be tracked in the optical flow tracking process, and then the tracking position of the feature point to be tracked is tracked from the current frame based on the initial value, and the tracking position can be regarded as the real position of the feature point to be tracked in the current frame.
The visual odometer can also calculate the current pose of the image acquisition device based on the relationship between the tracking positions of the feature points to be tracked in the adjacent frame and the current frame. The current pose can be provided to the back end of the visual SLAM system for pose optimization; the three-dimensional coordinates of the feature points to be tracked in three-dimensional space are then determined based on the optimized pose of the image acquisition device. Using the pose of the image acquisition device and the three-dimensional coordinates of the feature points to be tracked, a three-dimensional space model of the environment where the image acquisition device is located can be constructed, and AR applications such as AR navigation can be realized based on this three-dimensional space model.
In the embodiment of the disclosure, in the image tracking process of the visual SLAM system, the position of the feature point to be tracked in the current frame is predicted according to the motion relationship between the adjacent frame and the current frame and the position of the feature point to be tracked in the adjacent frame, so as to obtain the predicted position of the feature point to be tracked in the current frame. The predicted position can be used as an initial value for optical flow tracking, and the tracking position of the feature point to be tracked can then be tracked from the current frame through optical flow tracking. Because the predicted position of the feature point to be tracked in the current frame is used as the initial position, rather than directly using the position of the feature point in the previous frame or a random initialization value as in the prior art, the accuracy of the optical flow tracking result can be improved, and the number of image pyramid layers processed in the optical flow tracking process can be reduced, thereby saving the computing power of optical flow tracking.
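To make the flow of steps S101 to S104 concrete, the sketch below shows how a predicted position can seed a pyramidal Lucas-Kanade tracker in OpenCV. It is an illustration only, not the patent's reference implementation: the function name, window size, and pyramid depth are assumptions, and `predicted_pts` stands in for either of the prediction strategies described in the following sections.

```python
import cv2
import numpy as np

def track_features(prev_gray, curr_gray, prev_pts, predicted_pts):
    """prev_pts, predicted_pts: float32 arrays of shape (N, 1, 2)."""
    # Seeding nextPts with the predicted positions and passing
    # OPTFLOW_USE_INITIAL_FLOW makes LK start its search near the true
    # location, so fewer pyramid levels are needed than with the default
    # initialization at the previous-frame positions.
    tracked_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, predicted_pts.copy(),
        winSize=(21, 21),
        maxLevel=1,  # fewer levels than the usual 3, saving computation
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    return tracked_pts, status.ravel().astype(bool)
```

With a good initial guess, the coarse pyramid levels mainly exist to absorb prediction error, which is why `maxLevel` can be lowered here.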
In an optional implementation manner of this embodiment, the adjacent frames are the two frames preceding the current frame; step S102, namely the step of predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frames of the current frame, further includes the following steps:
predicting the pose of the current frame based on the poses of the previous two frames by using a uniform motion model between adjacent frames;
and projecting the three-dimensional coordinates of the feature points to be tracked in the adjacent frames to the current frame under the pose to obtain the predicted position.
In this optional implementation manner, in the optical flow tracking process, the predicted position of the feature point to be tracked in the current frame is predicted based on the adjacent frame of the current frame, and the predicted position is used as the initial value of the feature point to be tracked in the optical flow tracking process.
One way to predict the position of the feature point to be tracked in the current frame is to extrapolate the pose of the current frame from the poses of the two frames preceding it (these poses were obtained in the previous tracking process). In this way, it can be assumed that the previous two frames and the current frame undergo uniform motion, so that a uniform motion model among the three frames can be obtained based on the pose relationship between the previous two frames, and the pose of the current frame can then be extrapolated from the poses of the previous two frames based on the uniform motion model.
Fig. 2 is a schematic diagram illustrating a process of predicting the pose of a current frame by using a uniform motion model between adjacent frames according to an embodiment of the present disclosure. As shown in fig. 2, the two frames preceding the current frame i are the previous frame i-1, which corresponds to pose i-1, and the frame i-2 before that, which corresponds to pose i-2. On the premise that a constant-velocity motion model (Constant V) is satisfied among the current frame i, the previous frame i-1, and the frame i-2, the pose of the current frame i can be estimated based on the motion relationship between frame i-1 and frame i-2, as shown by the predicted pose i in fig. 2.
Assuming the current frame i is at the predicted pose i, the three-dimensional coordinates of the feature point to be tracked are mapped into the current frame to obtain the predicted position of the feature point to be tracked in the current frame.
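A minimal sketch of this constant-velocity extrapolation and reprojection follows; the 4x4 world-to-camera pose convention and the intrinsic matrix K are assumptions made for illustration, not details fixed by the disclosure.

```python
import numpy as np

def predict_pose_constant_velocity(T_im2, T_im1):
    """Extrapolate the pose of frame i from the 4x4 poses of frames i-2 and i-1."""
    delta = T_im1 @ np.linalg.inv(T_im2)  # relative motion between the two previous frames
    return delta @ T_im1                  # assume the same motion continues for one more frame

def project_to_frame(T_i, K, pts_3d_world):
    """Project Nx3 world points into frame i to obtain predicted pixel positions."""
    ones = np.ones((pts_3d_world.shape[0], 1))
    pts_cam = (T_i @ np.hstack([pts_3d_world, ones]).T).T[:, :3]  # world -> camera
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]  # perspective division to pixel coordinates
```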
In an optional implementation manner of this embodiment, the adjacent frame is a previous frame of the current frame; step S102, namely, the step of predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frame of the current frame further includes the following steps:
determining a rotation relation between the previous frame and the current frame by using inertial sensing data on the premise of no displacement between the previous frame and the current frame;
and projecting the two-dimensional coordinates of the feature point to be tracked in the previous frame to the current frame based on the rotation relation, so as to obtain the predicted position.
In this optional implementation manner, as described above, in the optical flow tracking process, the predicted position of the feature point to be tracked in the current frame is predicted based on the adjacent frame of the current frame, and the predicted position is used as the initial value of the feature point to be tracked in the optical flow tracking process.
Another way to predict the position of the feature point to be tracked in the current frame is as follows: if the visual odometer can obtain inertial measurement unit (IMU) data, the IMU data can be used instead of the uniform motion model. On the premise that there is no displacement change between two adjacent frames, the rotation relationship between the previous frame and the current frame, that is, a rotation matrix, is calculated from the IMU data between the two frames; the two-dimensional coordinates of the feature point to be tracked in the previous frame are then projected into the current frame based on the rotation matrix, giving the predicted position of the feature point to be tracked in the current frame. The prediction obtained in this way is more accurate than that obtained with the uniform motion model described above. Therefore, in some embodiments, if the visual odometer is able to obtain IMU data, this manner may be preferred for predicting the predicted position of the feature point to be tracked in the current frame, and this predicted position may be used as the initial position of the feature point to be tracked in the optical flow tracking process.
Fig. 3 is a schematic diagram illustrating prediction of the position of a feature point to be tracked based on IMU integration according to an embodiment of the present disclosure. As shown in fig. 3, a rotation matrix R between the previous frame i-1 and the current frame i is calculated from the IMU data; rotating the previous frame i-1 (the solid-line box in fig. 3) by the rotation matrix R yields the current frame i (the dotted-line box in fig. 3). The predicted position in the current frame is obtained by rotating the pixel position of the feature point to be tracked in the previous frame i-1 by the rotation matrix R, and the predicted position of each feature point to be tracked in the current frame can be predicted based on this principle.
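This rotation-only projection can be expressed with the infinite homography H = K R K^-1 applied to pixel coordinates, which is exact when the translation between the two frames is zero. In the sketch below, R (obtained elsewhere by integrating gyroscope readings between the two frame timestamps) and the intrinsic matrix K are assumed inputs.

```python
import numpy as np

def predict_by_rotation(R, K, pts_2d):
    """Warp Nx2 pixel positions from frame i-1 into frame i under pure rotation."""
    H = K @ R @ np.linalg.inv(K)  # infinite homography: valid when translation is zero
    pts_h = np.hstack([pts_2d, np.ones((pts_2d.shape[0], 1))])  # to homogeneous coords
    warped = (H @ pts_h.T).T
    return warped[:, :2] / warped[:, 2:3]
```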
In an optional implementation manner of this embodiment, the method further includes the following steps:
dividing the current frame into a plurality of grids by using a characteristic point homogenization strategy;
extracting angular points in each grid;
determining, for the corner points extracted from the grid, whether a response value of the corner point with the highest response value is higher than or equal to a response threshold;
and when the response value of the corner with the highest response value is higher than or equal to the response threshold value, determining the corner with the highest response value as the image feature point extracted from the current frame.
In this optional implementation, an important function of the visual odometer is extracting image feature points. For the first few frames (for example, the first frame) acquired by the image acquisition device, the predicted position of the feature point to be tracked cannot be predicted using adjacent frames; alternatively, new image feature points may need to be re-extracted from the current frame to meet three-dimensional modeling requirements. In either case, the visual odometer may extract image feature points from the current frame by an image feature extraction method and then use the extracted image feature points as subsequent feature points to be tracked.
In the feature extraction process, a feature point homogenization strategy can be used to divide the current frame into a plurality of grids. The feature point homogenization strategy may adopt the existing bucket homogenization strategy, whose principle is to divide the image into grids such that the number of feature points in each grid is relatively uniform. The feature points in the grids of the divided current frame are therefore distributed relatively uniformly; for example, each grid may include one feature point.
After the current frame is divided into a plurality of grids, corners can be detected in each grid by a corner extraction method, for example the existing FAST method. For each grid, the corners are sorted by the magnitude of their response values; when the response value of the corner with the largest response value in a grid is greater than or equal to the response threshold, that corner is determined as the final corner extracted from the grid, and the final corners extracted from all grids are used as the image feature points of the current frame. If the response values of all corners in a grid are smaller than the response threshold, the grid is considered unable to yield a good image feature point, so the grid can be skipped and no image feature point is extracted from it.
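The grid-based extraction can be sketched as follows with OpenCV's FAST detector; the grid size, FAST threshold, and response threshold are assumed tuning values, not values specified by the disclosure.

```python
import cv2
import numpy as np

def extract_uniform_corners(gray, rows=8, cols=8, response_threshold=20.0):
    """Keep at most one FAST corner per grid cell: the one with the highest response."""
    detector = cv2.FastFeatureDetector_create(threshold=10)
    h, w = gray.shape
    cell_h, cell_w = h // rows, w // cols
    corners = []
    for r in range(rows):
        for c in range(cols):
            cell = np.ascontiguousarray(
                gray[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w])
            keypoints = detector.detect(cell)
            if not keypoints:
                continue  # no corners detected in this cell
            best = max(keypoints, key=lambda kp: kp.response)
            if best.response >= response_threshold:
                # shift cell-local coordinates back to full-image coordinates
                corners.append((best.pt[0] + c * cell_w, best.pt[1] + r * cell_h))
            # otherwise skip the cell: no good feature point can be extracted
    return np.array(corners, dtype=np.float32)
```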
For the first few frames acquired by the image acquisition device, the image feature points extracted from the current frame can be used as the subsequent feature points to be tracked and tracked in subsequent frames. In other cases, the image feature points extracted from the current frame can supplement the feature points to be tracked and be tracked in subsequent frames together with the successfully tracked feature points, so that the number of image pyramid layers processed in the optical flow tracking process can be reduced, thereby reducing the computing power of optical flow tracking.
In an optional implementation manner of this embodiment, the method further includes the following steps:
determining the distance between the tracking position and the prediction position of the feature point to be tracked;
determining median of the distances corresponding to the plurality of feature points to be tracked;
determining a distance threshold based on the median;
and eliminating the characteristic points to be tracked, of which the distance between the tracking position and the predicted position exceeds the distance threshold, from the characteristic points to be tracked, which are successfully tracked.
In this optional implementation manner, after the tracking position of the feature point to be tracked is determined through optical flow tracking, considering that optical flow tracking may err, the tracking positions obtained through optical flow tracking may be filtered, that is, mistracked feature points are removed.
In the embodiment of the disclosure, the distance between the tracking position and the predicted position of each feature point to be tracked is determined, the median of the distances corresponding to all or most of the feature points to be tracked is computed, and the distance threshold is determined from the median. In some embodiments, the distance may be the Hamming distance between the tracking position and the predicted position. When the distance exceeds the distance threshold, the feature point to be tracked is likely a mistracked point, so it can be marked as a tracking failure and then eliminated rather than used for subsequent tracking. Eliminating mistracked points in this way can improve the robustness of optical flow tracking.
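A sketch of this median-based rejection follows. The disclosure only states that the threshold is derived from the median; the scale factor and the use of the Euclidean pixel distance here are assumptions for illustration.

```python
import numpy as np

def filter_by_median_distance(tracked, predicted, status, scale=3.0):
    """tracked, predicted: (N, 2) arrays; status: boolean mask of successful tracks."""
    dist = np.linalg.norm(tracked - predicted, axis=1)   # per-point tracking distance
    threshold = scale * np.median(dist[status])          # distance threshold from the median
    return status & (dist <= threshold)                  # reject points that drifted too far
```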
In an optional implementation manner of this embodiment, the method further includes the following steps:
determining a fundamental matrix representing the epipolar constraint between the current frame and the previous frame by using the successfully tracked feature points to be tracked;
determining outliers in the successfully tracked feature points to be tracked based on the fundamental matrix;
and removing the outliers from the feature points to be tracked which are successfully tracked.
In this optional implementation manner, it is considered that outliers may still exist among the feature points to be tracked that were successfully tracked after optical flow tracking and filtering; possible causes include optical flow tracking errors and dynamic objects in the environment where the image is acquired. In view of this, the embodiment of the present disclosure exploits the epipolar constraint that the same point satisfies between adjacent frames, using it to find and remove outliers from all successfully tracked feature points to be tracked, thereby further enhancing the stability of the tracking result of the visual odometer.
Therefore, the embodiment of the present disclosure determines a fundamental matrix representing the epipolar constraint between the current frame and the previous frame based on the tracking positions of the successfully tracked feature points in the current frame and their pixel positions in the previous frame, finds the feature points that do not satisfy the epipolar constraint based on the fundamental matrix, and removes them as outliers from the successfully tracked feature points to be tracked.
In some embodiments, the RANSAC algorithm may be used to compute the fundamental matrix between the two frames, as sketched below.
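The sketch uses OpenCV's RANSAC-based fundamental matrix estimation; the reprojection tolerance and confidence values are assumptions, not parameters taken from the disclosure.

```python
import cv2
import numpy as np

def epipolar_inlier_mask(pts_prev, pts_curr):
    """pts_prev, pts_curr: (N, 2) float32 arrays of matched pixel positions."""
    F, mask = cv2.findFundamentalMat(
        pts_prev, pts_curr, cv2.FM_RANSAC,
        ransacReprojThreshold=1.0, confidence=0.99)
    if F is None:  # degenerate configuration: keep all points
        return np.ones(len(pts_prev), dtype=bool)
    # mask marks inliers; outliers violate the epipolar constraint x'^T F x = 0
    return mask.ravel().astype(bool)
```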
A location-based service providing method according to an embodiment of the present disclosure includes: determining the tracking position of the map point to be tracked in the current frame by using the tracking method of the image feature point, and providing a position-based service for the navigated object based on the tracking position, wherein the position-based service comprises the following steps: one or more of AR navigation, map rendering, route planning.
In this embodiment, the location-based service providing method may be executed on a terminal, such as a mobile phone, an iPad, a computer, a smart watch, a vehicle, or the like. In the embodiment of the present disclosure, an image acquisition device on the terminal continuously acquires images and provides them to a visual odometer deployed on the terminal. The visual odometer tracks the map point to be tracked in the received current frame; after tracking succeeds, the tracking result is fused with the previously established three-dimensional map model, the fused three-dimensional map model is rendered, and three-dimensional map navigation information is displayed on the terminal based on the path planning, thereby providing location-based services for the navigated object.
Fig. 4 is a schematic diagram illustrating an application scenario in AR navigation according to an embodiment of the present disclosure. As shown in fig. 4, the navigation device starts the visual odometry method and uses an image acquisition device to capture live-action images in real time. The live-action images are input into the visual odometer, which tracks the map points to be tracked in the live-action images and establishes a three-dimensional space model based on the successfully tracked image feature points. The three-dimensional space model is fused with the map data, path planning data, navigation data, and the like sent by the navigation server, and after image rendering, the result is output to the navigation interface of the navigation device to provide AR navigation service for the navigated object.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5 is a block diagram showing a configuration of an image feature point tracking device according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of both. As shown in fig. 5, the image feature point tracking device includes:
a first obtaining module 501 configured to obtain a current frame;
a prediction module 502 configured to predict a predicted position of a feature point to be tracked in a current frame by using adjacent frames of the current frame;
a first determining module 503, configured to determine the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process;
a tracking module 504 configured to perform optical flow tracking on the feature point to be tracked based on the initial position, and obtain a tracking position of the feature point to be tracked in the current frame.
In this embodiment, the tracking device for image feature points may be deployed on a terminal device and used for front-end calculation in a visual SLAM system, that is, in the visual odometer of the terminal device. The current frame to be tracked may be an image currently captured by the image acquisition device of the visual odometer, for example, an image frame of the road ahead captured by an image acquisition device on a vehicle.
The visual odometer usually tracks, in the current frame, feature points that have been successfully tracked in the previous frame, determines the tracking position of each feature point to be tracked in the current frame, further determines the motion information of the image acquisition device from the previous frame to the current frame based on the tracking position, and provides an initial value for the back-end optimization of the visual SLAM system.
In some embodiments, the feature points to be tracked may be corner points or other feature points on the image. The feature points to be tracked may be feature points that are successfully tracked in each of the previous image frames that are continuously acquired, starting from the first frame image acquired by the image acquisition device. Based on the tracking of the feature points, a three-dimensional space model of the three-dimensional environment where the image acquisition equipment is located can be continuously established, and then the three-dimensional space model is applied to the AR technology, for example, the three-dimensional space model and a map can be fused, and AR navigation is realized after operations such as rendering.
In the embodiment of the present disclosure, the predicted position of the feature point to be tracked in the current frame is predicted by using adjacent frames of the current frame, where the prediction may be based on, but is not limited to, the pixel position of the feature point to be tracked in the adjacent frames, and the adjacent frames may be the previous frame, the previous two frames, …, up to the previous N frames (N is a positive integer greater than 2) of the current frame. The predicted position is merely a value predicted from information such as the pixel position of the feature point to be tracked in the adjacent frame, and is not necessarily the true value.
In the embodiment of the disclosure, in order to reduce the number of image pyramid layers tracked in the optical flow tracking process and save calculation power, the predicted position in the current frame is used as an initial value of the feature point to be tracked in the optical flow tracking process, and then the tracking position of the feature point to be tracked is tracked from the current frame based on the initial value, and the tracking position can be regarded as the real position of the feature point to be tracked in the current frame.
The visual odometer can also calculate the current pose of the image acquisition device based on the relationship between the tracking positions of the feature points to be tracked in the adjacent frame and the current frame. The current pose can be provided to the back end of the visual SLAM system for pose optimization; the three-dimensional coordinates of the feature points to be tracked in three-dimensional space are then determined based on the optimized pose of the image acquisition device. Using the pose of the image acquisition device and the three-dimensional coordinates of the feature points to be tracked, a three-dimensional space model of the environment where the image acquisition device is located can be constructed, and AR applications such as AR navigation can be realized based on this three-dimensional space model.
In the embodiment of the disclosure, in the image tracking process of the visual SLAM system, the position of the feature point to be tracked in the current frame is predicted according to the motion relationship between the adjacent frame and the current frame and the position of the feature point to be tracked in the adjacent frame, so as to obtain the predicted position of the feature point to be tracked in the current frame. The predicted position can be used as an initial value for optical flow tracking, and the tracking position of the feature point to be tracked can then be tracked from the current frame through optical flow tracking. Because the predicted position of the feature point to be tracked in the current frame is used as the initial position, rather than directly using the position of the feature point in the previous frame or a random initialization value as in the prior art, the accuracy of the optical flow tracking result can be improved, and the number of image pyramid layers processed in the optical flow tracking process can be reduced, thereby saving the computing power of optical flow tracking.
In an optional implementation manner of this embodiment, the adjacent frames are the two frames preceding the current frame; the prediction module comprises:
the prediction sub-module is configured to predict the pose of the current frame based on the poses of the previous two frames by utilizing a uniform motion model between adjacent frames;
and the first acquisition submodule is configured to project the three-dimensional coordinates of the feature points to be tracked in the adjacent frames to the current frame under the pose to obtain the predicted position.
In this optional implementation manner, in the optical flow tracking process, the predicted position of the feature point to be tracked in the current frame is predicted based on the adjacent frame of the current frame, and the predicted position is used as the initial value of the feature point to be tracked in the optical flow tracking process.
One way to predict the position of the feature point to be tracked in the current frame is to extrapolate the pose of the current frame from the poses of the two frames preceding it (these poses were obtained in the previous tracking process). In this way, it can be assumed that the previous two frames and the current frame undergo uniform motion, so that a uniform motion model among the three frames can be obtained based on the pose relationship between the previous two frames, and the pose of the current frame can then be extrapolated from the poses of the previous two frames based on the uniform motion model.
In an optional implementation manner of this embodiment, the adjacent frame is a previous frame of the current frame; the prediction module comprises:
a first determining submodule configured to determine a rotational relationship between the previous frame and the current frame using inertial sensing data on the assumption that there is no displacement between the previous frame and the current frame;
and the second obtaining submodule is configured to project the two-dimensional coordinates of the feature point to be tracked in the previous frame to the current frame based on the rotation relation, so as to obtain the predicted position.
In this optional implementation manner, as described above, in the optical flow tracking process, the predicted position of the feature point to be tracked in the current frame is predicted based on the adjacent frame of the current frame, and the predicted position is used as the initial value of the feature point to be tracked in the optical flow tracking process.
Another way to predict the position of the feature point to be tracked in the current frame is as follows: if the visual odometer can obtain inertial measurement unit (IMU) data, the IMU data can be used instead of the uniform motion model. On the premise that there is no displacement change between two adjacent frames, the rotation relationship between the previous frame and the current frame, that is, a rotation matrix, is calculated from the IMU data between the two frames; the two-dimensional coordinates of the feature point to be tracked in the previous frame are then projected into the current frame based on the rotation matrix, giving the predicted position of the feature point to be tracked in the current frame. The prediction obtained in this way is more accurate than that obtained with the uniform motion model described above. Therefore, in some embodiments, if the visual odometer is able to obtain IMU data, this manner may be preferred for predicting the predicted position of the feature point to be tracked in the current frame, and this predicted position may be used as the initial position of the feature point to be tracked in the optical flow tracking process.
In an optional implementation manner of this embodiment, the apparatus further includes:
a partitioning module configured to partition the current frame into a plurality of meshes using a feature point uniformization strategy;
an extraction module configured to extract corner points in each of the meshes;
a second determination module configured to determine, for the corner extracted from the mesh, whether a response value of the corner having a highest response value is higher than or equal to a response threshold;
a third determining module configured to determine the corner point with the highest response value as the image feature point extracted from the current frame when the response value of the corner point with the highest response value is higher than or equal to the response threshold.
In this optional implementation, an important function of the visual odometer is extracting image feature points. For the first few frames (for example, the first frame) acquired by the image acquisition device, the predicted position of the feature point to be tracked cannot be predicted using adjacent frames; alternatively, new image feature points may need to be re-extracted from the current frame to meet three-dimensional modeling requirements. In either case, the visual odometer may extract image feature points from the current frame by an image feature extraction method and then use the extracted image feature points as subsequent feature points to be tracked.
In the feature extraction process, a feature point homogenization strategy can be used to divide the current frame into a plurality of grids. The feature point homogenization strategy may adopt the existing bucket homogenization strategy, whose principle is to divide the image into grids such that the number of feature points in each grid is relatively uniform. The feature points in the grids of the divided current frame are therefore distributed relatively uniformly; for example, each grid may include one feature point.
After dividing the current frame into a plurality of grids, corners can be detected in each grid by a corner extraction method, for example the existing FAST method. For each grid, the corners are sorted by the magnitude of their response values; when the response value of the corner with the largest response value in a grid is greater than or equal to the response threshold, that corner is determined as the final corner extracted from the grid, and the final corners extracted from all grids are used as the image feature points of the current frame. If the response values of all corners in a grid are smaller than the response threshold, the grid is considered unable to yield a good image feature point, so the grid can be skipped and no image feature point is extracted from it.
For the first few frames acquired by the image acquisition device, the image feature points extracted from the current frame can be used as the subsequent feature points to be tracked and tracked in subsequent frames. In other cases, the image feature points extracted from the current frame can supplement the feature points to be tracked and be tracked in subsequent frames together with the successfully tracked feature points, so that the number of image pyramid layers processed in the optical flow tracking process can be reduced, thereby reducing the computing power of optical flow tracking.
In an optional implementation manner of this embodiment, the apparatus further includes:
a fourth determination module configured to determine a distance between the tracking position and the predicted position of the feature point to be tracked;
a fifth determining module configured to determine a median of the distances corresponding to a plurality of the feature points to be tracked;
a sixth determination module configured to determine a distance threshold based on the median;
a first culling module configured to cull the feature points to be tracked, for which the distance between the tracking position and the predicted position exceeds the distance threshold, from the feature points to be tracked, which are successfully tracked.
In this optional implementation manner, after the tracking position of the feature point to be tracked is determined through optical flow tracking, considering that optical flow tracking may err, the tracking positions obtained through optical flow tracking may be filtered, that is, mistracked feature points are removed.
In the embodiment of the disclosure, the distance between the tracking position and the predicted position of each feature point to be tracked is determined, the median of the distances corresponding to all or most of the feature points to be tracked is computed, and the distance threshold is determined from the median. In some embodiments, the distance may be the Hamming distance between the tracking position and the predicted position. When the distance exceeds the distance threshold, the feature point to be tracked is likely a mistracked point, so it can be marked as a tracking failure and then eliminated rather than used for subsequent tracking. Eliminating mistracked points in this way can improve the robustness of optical flow tracking.
In an optional implementation manner of this embodiment, the apparatus further includes:
a seventh determining module configured to determine, by using the successfully tracked feature points to be tracked, a fundamental matrix representing the epipolar constraint between the current frame and the previous frame;
an eighth determining module configured to determine outliers in the successfully tracked feature points to be tracked based on the fundamental matrix;
and a second removing module configured to remove the outliers from the successfully tracked feature points to be tracked.
In this optional implementation manner, it is considered that outliers may still exist among the feature points to be tracked that were successfully tracked after optical flow tracking and filtering; possible causes include optical flow tracking errors and dynamic objects in the environment where the image is acquired. In view of this, the embodiment of the present disclosure exploits the epipolar constraint that the same point satisfies between adjacent frames, using it to find and remove outliers from all successfully tracked feature points to be tracked, thereby further enhancing the stability of the tracking result of the visual odometer.
Therefore, the embodiment of the present disclosure determines a fundamental matrix representing the epipolar constraint between the current frame and the previous frame based on the tracking positions of the successfully tracked feature points in the current frame and their pixel positions in the previous frame, finds the feature points that do not satisfy the epipolar constraint based on the fundamental matrix, and removes them as outliers from the successfully tracked feature points to be tracked.
In some embodiments, the RANSAC algorithm may be used to compute the fundamental matrix between the two frames.
A location-based service providing apparatus according to an embodiment of the present disclosure includes: determining the tracking position of the map point to be tracked in the current frame by utilizing the tracking device of the image feature point, and providing a position-based service for the navigated object based on the tracking position, wherein the position-based service comprises the following steps: one or more of AR navigation, map rendering, route planning.
In this embodiment, the location-based service providing apparatus may be implemented on a terminal, such as a mobile phone, an iPad, a computer, a smart watch, a vehicle, or the like. In the embodiment of the present disclosure, an image acquisition device on the terminal continuously acquires images and provides them to a visual odometer deployed on the terminal. The visual odometer tracks the map point to be tracked in the received current frame; after tracking succeeds, the tracking result is fused with the previously established three-dimensional map model, the fused three-dimensional map model is rendered, and three-dimensional map navigation information is displayed on the terminal based on the path planning, thereby providing location-based services for the navigated object.
Fig. 6 is a schematic structural diagram of an electronic device suitable for implementing a tracking method of image feature points and/or a location-based service providing method according to an embodiment of the present disclosure.
As shown in fig. 6, electronic device 600 includes a processing unit 601, which may be implemented as a CPU, GPU, FPGA, NPU, or similar processing unit. The processing unit 601 may perform various processes in the embodiments of any one of the above-described methods of the present disclosure according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the electronic device 600. The processing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to embodiments of the present disclosure, any of the methods described above with reference to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing any of the methods of the embodiments of the present disclosure. In such embodiments, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description covers only the preferred embodiments of the disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with features of similar function disclosed in (but not limited to) this disclosure.

Claims (10)

1. A tracking method of image feature points comprises the following steps:
acquiring a current frame;
predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frame of the current frame;
determining the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process;
and carrying out optical flow tracking on the feature point to be tracked based on the initial position to obtain the tracking position of the feature point to be tracked in the current frame.
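By way of illustration only (not part of the claims), these steps map naturally onto OpenCV's pyramidal Lucas-Kanade tracker, which accepts the predicted positions as initial guesses when the OPTFLOW_USE_INITIAL_FLOW flag is set; the window size and pyramid depth below are assumed values:

    import cv2

    def track_with_prediction(prev_img, curr_img, prev_pts, predicted_pts):
        # prev_pts: (N, 1, 2) float32 positions in the previous frame.
        # predicted_pts: (N, 1, 2) float32 predicted positions in the current
        # frame, used as the tracker's starting points instead of prev_pts.
        tracked_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_img, curr_img, prev_pts, predicted_pts.copy(),
            winSize=(21, 21), maxLevel=3,
            flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
        return tracked_pts, status.ravel().astype(bool)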
2. The method of claim 1, wherein the adjacent frames are the two frames preceding the current frame, and predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frames of the current frame comprises:
predicting the pose of the current frame based on the poses of the previous two frames by utilizing a uniform motion model between adjacent frames;
and projecting, under the predicted pose, the three-dimensional coordinates of the feature point to be tracked in the adjacent frames onto the current frame to obtain the predicted position.
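An illustrative sketch of this uniform-motion (constant velocity) prediction, assuming world-to-camera poses as 4x4 homogeneous matrices and a pinhole intrinsic matrix K (all names here are for the example only, not part of the claims):

    import numpy as np

    def predict_pose_constant_velocity(T_prev2, T_prev1):
        # Relative motion between the two preceding frames...
        delta = T_prev1 @ np.linalg.inv(T_prev2)
        # ...applied once more to extrapolate the current pose.
        return delta @ T_prev1

    def project_points(T_curr, K, points_3d):
        # Project world-frame 3D points (N, 3) into the current frame.
        R, t = T_curr[:3, :3], T_curr[:3, 3]
        cam = points_3d @ R.T + t          # world frame -> camera frame
        uv = cam @ K.T
        return uv[:, :2] / uv[:, 2:3]      # perspective division -> (N, 2)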
3. The method of claim 1 or 2, wherein the adjacent frame is the frame immediately preceding the current frame, and predicting the predicted position of the feature point to be tracked in the current frame by using the adjacent frame of the current frame comprises:
determining, under the assumption of no displacement between the previous frame and the current frame, a rotation relation between the previous frame and the current frame by using inertial sensing data;
and projecting the two-dimensional coordinates of the feature point to be tracked in the previous frame to the current frame based on the rotation relation, so as to obtain the predicted position.
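Under the no-displacement assumption, this warp from the previous frame to the current frame reduces to the homography K R K^-1 induced by pure rotation. A sketch (illustrative only, not part of the claims; R would be integrated from gyroscope readings between the two frame timestamps, which is omitted here):

    import numpy as np

    def predict_by_rotation(prev_pts, R_prev_to_curr, K):
        # prev_pts: (N, 2) pixel positions in the previous frame.
        H = K @ R_prev_to_curr @ np.linalg.inv(K)   # rotation-only homography
        pts_h = np.hstack([prev_pts, np.ones((len(prev_pts), 1))])
        warped = pts_h @ H.T
        return warped[:, :2] / warped[:, 2:3]       # back to pixel coordinates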
4. The method according to claim 1 or 2, wherein the method further comprises:
dividing the current frame into a plurality of grids by utilizing a feature point homogenization strategy;
extracting angular points in each grid;
determining, for the corner points extracted from each grid, whether the response value of the corner point with the highest response value is higher than or equal to a response threshold;
and when the response value of the corner with the highest response value is higher than or equal to the response threshold value, determining the corner with the highest response value as the image feature point extracted from the current frame.
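One possible realization of this homogenization strategy (illustrative only, not part of the claims), using the Shi-Tomasi corner response; the grid size and response threshold are assumed values:

    import cv2
    import numpy as np

    def extract_grid_corners(gray, grid=(8, 8), response_thresh=0.01):
        # Per-pixel Shi-Tomasi response (minimum eigenvalue of the
        # structure tensor) on a single-channel image.
        response = cv2.cornerMinEigenVal(gray, blockSize=3)
        h, w = gray.shape
        cell_h, cell_w = h // grid[0], w // grid[1]
        corners = []
        for gy in range(grid[0]):
            for gx in range(grid[1]):
                cell = response[gy * cell_h:(gy + 1) * cell_h,
                                gx * cell_w:(gx + 1) * cell_w]
                y, x = np.unravel_index(np.argmax(cell), cell.shape)
                # Keep the strongest corner in the cell only if it clears
                # the response threshold.
                if cell[y, x] >= response_thresh:
                    corners.append((gx * cell_w + x, gy * cell_h + y))
        return np.array(corners, dtype=np.float32)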
5. The method according to claim 1 or 2, wherein the method further comprises:
determining the distance between the tracking position and the prediction position of the feature point to be tracked;
determining the median of the distances corresponding to the plurality of feature points to be tracked;
determining a distance threshold based on the median;
and eliminating, from the successfully tracked feature points to be tracked, the feature points whose distance between the tracking position and the predicted position exceeds the distance threshold.
6. The method according to claim 1 or 2, wherein the method further comprises:
determining, by using the successfully tracked feature points to be tracked, a fundamental matrix representing the epipolar constraint between the current frame and the previous frame;
determining outliers among the successfully tracked feature points to be tracked based on the fundamental matrix;
and removing the outliers from the successfully tracked feature points to be tracked.
7. A location-based service providing method, comprising: determining a tracking position of a map point to be tracked in a current frame by using the method of any one of claims 1-6, and providing a location-based service for the navigated object based on the tracking position, the location-based service comprising: one or more of AR navigation, map rendering, route planning.
8. An apparatus for tracking image feature points, comprising:
a first obtaining module configured to obtain a current frame;
the prediction module is configured to predict the prediction position of the feature point to be tracked in the current frame by using the adjacent frames of the current frame;
a first determination module configured to determine the predicted position as an initial position of the feature point to be tracked in an optical flow tracking process;
and the tracking module is configured to perform optical flow tracking on the feature point to be tracked based on the initial position to obtain a tracking position of the feature point to be tracked in the current frame.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any of claims 1-7.
10. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the method of any one of claims 1-7.
CN202210249292.7A 2022-03-14 2022-03-14 Tracking method and device of image feature points, electronic equipment and program product Pending CN114897931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210249292.7A CN114897931A (en) 2022-03-14 2022-03-14 Tracking method and device of image feature points, electronic equipment and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210249292.7A CN114897931A (en) 2022-03-14 2022-03-14 Tracking method and device of image feature points, electronic equipment and program product

Publications (1)

Publication Number Publication Date
CN114897931A true CN114897931A (en) 2022-08-12

Family

ID=82716470

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210249292.7A Pending CN114897931A (en) 2022-03-14 2022-03-14 Tracking method and device of image feature points, electronic equipment and program product

Country Status (1)

Country Link
CN (1) CN114897931A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination