CN113674320B - Visual navigation feature point acquisition method and device and computer equipment

Info

Publication number
CN113674320B
CN113674320B
Authority
CN
China
Prior art keywords
feature point
current
frame image
grid
pose
Prior art date
Legal status
Active
Application number
CN202110973194.3A
Other languages
Chinese (zh)
Other versions
CN113674320A (en)
Inventor
徐朋飞
袁涛
Current Assignee
Hunan Goke Microelectronics Co Ltd
Original Assignee
Hunan Goke Microelectronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Goke Microelectronics Co Ltd
Priority to CN202110973194.3A
Publication of CN113674320A
Application granted
Publication of CN113674320B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments


Abstract

The invention provides a visual navigation feature point acquisition method comprising the following steps: performing gridding processing of a preset grid size on the current frame image, and obtaining current feature points through optical flow tracking according to the feature points of the previous frame image; when the number of current feature points is determined to be lower than a preset value, calculating the difference between the number of current feature points and the preset value as the supplement number; obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm; and extracting supplemental feature points from the feature point supplement grid according to the supplement number. When feature points must be extracted from the current frame image again during visual navigation, the invention locates the feature point supplement grid of the current frame image and extracts the required feature points from that grid alone. This improves the efficiency of feature point extraction, thereby reducing the latency of visual navigation, the computation required for feature point extraction, and the power consumption.

Description

Visual navigation feature point acquisition method and device and computer equipment
Technical Field
The present invention relates to the field of visual navigation, and in particular to a method, an apparatus, a computer device, and a readable storage medium for acquiring visual navigation feature points.
Background
In existing visual navigation systems, when the number of feature points extracted from the current frame image does not meet the requirement, feature points must be extracted again over the whole image before the next pose calculation can be performed. This makes feature point extraction inefficient, introduces latency into visual navigation, and, because the extraction is computationally heavy, increases the power consumption of the device.
Disclosure of Invention
In view of the above, the present invention provides a visual navigation feature point acquisition method, apparatus, computer device, and readable storage medium that improve the efficiency of feature point extraction, thereby reducing the latency of visual navigation, the computation required for feature point extraction, and the power consumption.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a visual navigation feature point acquisition method comprises the following steps:
carrying out gridding treatment of a preset grid size on the current frame image, and obtaining the current feature point through optical flow tracking according to the feature point of the previous frame image;
when the number of the current feature points is determined to be lower than a preset value, calculating the difference value between the number of the current feature points and the preset value to be the supplementing number;
acquiring a feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm;
and extracting supplemental feature points from the feature point supplemental grid according to the supplemental quantity.
Preferably, in the visual navigation feature point acquisition method, obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm includes:
obtaining the current predicted pose of the device by running a preset pose estimation algorithm on the pose of the previous frame image;
calculating the lens movement distance of the image acquisition module according to the predicted pose;
and calculating the position of the feature point supplement grid according to the lens movement distance.
Preferably, in the visual navigation feature point acquisition method, the preset pose estimation algorithm includes a PnP algorithm or an ICP algorithm.
Preferably, in the visual navigation feature point acquisition method, the lens movement distance includes at least one of an upward movement distance, a downward movement distance, a leftward movement distance, and a rightward movement distance.
Preferably, the visual navigation feature point acquisition method further comprises:
calculating the current pose of the device by using the current feature points and the supplemental feature points extracted from the current frame image.
Preferably, the visual navigation feature point acquisition method further comprises:
when the number of current feature points is determined to be greater than or equal to the preset value, calculating the current pose of the device by using the current feature points.
The invention also provides a visual navigation feature point acquisition device, comprising:
a gridding module for performing gridding processing of a preset grid size on the current frame image and obtaining current feature points through optical flow tracking according to the feature points of the previous frame image;
a supplement number calculation module for calculating the difference between the number of current feature points and the preset value as the supplement number when the number of current feature points is determined to be lower than the preset value;
a supplement grid acquisition module for obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm;
and a feature point supplement module for extracting supplemental feature points from the feature point supplement grid according to the supplement number.
Preferably, in the visual navigation feature point acquisition device, the supplement grid acquisition module includes:
a predicted pose calculation unit for obtaining the current predicted pose of the device by running a preset pose estimation algorithm on the pose of the previous frame image;
a movement distance calculation unit for calculating the lens movement distance of the image acquisition module according to the predicted pose;
and a grid position calculation unit for calculating the position of at least one feature point supplement grid according to the lens movement distance.
The invention also provides a computer device comprising a memory and a processor, the memory storing a computer program which, when run on the processor, performs the visual navigation feature point acquisition method.
The present invention also provides a readable storage medium storing a computer program which, when run on a processor, performs the visual navigation feature point acquisition method.
The invention provides a visual navigation feature point acquisition method comprising the following steps: performing gridding processing of a preset grid size on the current frame image, and obtaining current feature points through optical flow tracking according to the feature points of the previous frame image; when the number of current feature points is determined to be lower than a preset value, calculating the difference between the number of current feature points and the preset value as the supplement number; obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm; and extracting supplemental feature points from the feature point supplement grid. With this method, if optical flow tracking yields too few feature points during the extraction of feature points from the current frame image, so that feature points must be extracted from the current frame image again, the required feature points are extracted only from the located feature point supplement grid. Compared with extracting feature points again over the whole image, this improves the efficiency of feature point extraction, thereby reducing the latency of visual navigation, the computation required for feature point extraction, and the power consumption.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the present invention, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope. Like elements are numbered alike in the various figures.
Fig. 1 is a flowchart of a method for obtaining a feature point of visual navigation according to embodiment 1 of the present invention;
fig. 2 is a flowchart for obtaining the feature point supplement grid provided in embodiment 2 of the present invention;
fig. 3 is a flowchart of a method for obtaining a feature point of visual navigation according to embodiment 3 of the present invention;
FIG. 4 is a flowchart of another method for obtaining visual navigation feature points according to embodiment 3 of the present invention;
fig. 5 is a schematic structural diagram of a visual navigation feature point obtaining device according to embodiment 4 of the present invention;
fig. 6 is a schematic structural diagram of the supplement grid acquisition module according to embodiment 4 of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention.
The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
The terms "comprises," "comprising," "including," or any other variation thereof, are intended to cover a specific feature, number, step, operation, element, component, or combination of the foregoing, which may be used in various embodiments of the present invention, and are not intended to first exclude the presence of or increase the likelihood of one or more other features, numbers, steps, operations, elements, components, or combinations of the foregoing.
Furthermore, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which various embodiments of the invention belong. Terms such as those defined in commonly used dictionaries will be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein in connection with the various embodiments of the invention.
Example 1
Fig. 1 is a flowchart of a method for obtaining a feature point of visual navigation according to embodiment 1 of the present invention, where the method includes the following steps:
step S11: and carrying out gridding processing of a preset grid size on the current frame image, and obtaining the current feature point through optical flow tracking according to the feature point of the previous frame image.
In the embodiment of the invention, in the vsram field (vsram, visual simultaneous localization and mapping, visual synchronous positioning and mapping), after the current frame image is obtained, the characteristic points of the current frame image are extracted, and then the current frame image is matched with the global map, and the pose of the current device is calculated, so that the positioning and the next operation are performed according to the pose. When extracting the feature points of the current frame image, the feature points of the current frame image are generally extracted according to the feature points determined by the previous frame image by an optical flow tracking method. The optical flow tracking method may cause the loss of the feature points, and in the case that the number of the feature points cannot reach a preset value, the accuracy of the calculated pose is affected, so that the accuracy of visual positioning navigation is affected, and therefore, the current frame image is generally re-detected, the re-feature points are extracted, the number of the feature points reaches the preset value, and the accuracy of the pose is ensured.
In the embodiment of the invention, the real-time image can be acquired as the current frame image through the camera, the camera can be arranged at the preset position of the robot, the image of the robot in the advancing process is acquired, and the characteristic points in the image are extracted for visual positioning navigation. After the current frame image is acquired, an algorithm or an application program may be used to perform gridding processing of a preset grid size, for example, an application program for gridding processing may be set in the robot, and after the current frame image is acquired, the current frame image is input into the application program to obtain the current frame image after gridding processing. The grid size may also be adjusted by the application, which is not limited herein.
In the embodiment of the invention, after the current frame image is obtained, the corresponding characteristic points are obtained according to the historical characteristic points and optical flow tracking of the previous frame image, so that the current pose of the equipment is calculated, namely the equipment is used for visual positioning navigation.
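As a minimal sketch of step S11, the following Python/OpenCV fragment tracks the previous frame's feature points into the current frame with pyramidal Lucas-Kanade optical flow and maps each point to its grid cell. The 64-pixel grid size, the helper names, and the array layout are illustrative assumptions, not values fixed by this disclosure.
```python
# Minimal sketch of step S11; GRID_SIZE is an assumed preset grid size.
import cv2
import numpy as np

GRID_SIZE = 64  # hypothetical preset grid size, in pixels

def track_current_points(prev_gray, curr_gray, prev_pts):
    """Track the previous frame's feature points into the current frame
    with pyramidal Lucas-Kanade optical flow; keep only the points that
    were successfully tracked (optical flow may lose some, as noted above).
    prev_pts: float32 array of shape (N, 1, 2)."""
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts.astype(np.float32), None)
    return curr_pts[status.ravel() == 1]

def cell_of(pt, grid_size=GRID_SIZE):
    """Map an (x, y) feature point to the (row, col) grid cell it falls in."""
    x, y = pt
    return int(y) // grid_size, int(x) // grid_size
```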
Step S12: when the number of current feature points is determined to be lower than a preset value, calculate the difference between the number of current feature points and the preset value as the supplement number.
In the embodiment of the invention, after the feature points of the current frame image are obtained through optical flow tracking, whether their number is lower than a preset value is judged. When the number of feature points obtained in the current optical flow tracking process is lower than the preset value, that is, when too few feature points were obtained, feature points must be extracted from the current frame image again to bring the number of feature points up to the preset value and ensure the accuracy of pose calculation.
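As a hedged illustration of step S12, the supplement number is simply the shortfall between the preset value and the tracked count; the preset value of 150 below is an assumed target, not one fixed by this disclosure.
```python
PRESET_COUNT = 150  # assumed target number of feature points

def supplement_count(curr_pts, preset=PRESET_COUNT):
    # Difference between the preset value and the tracked count;
    # zero when tracking already meets the requirement.
    return max(preset - len(curr_pts), 0)
```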
Step S13: obtain the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm.
In the embodiment of the invention, after it is determined that the number of current feature points is insufficient and feature point supplementation is needed, the pose of the previous frame image is obtained, and the corresponding feature point supplement grid is located in the current frame image by using that pose and a preset algorithm. The corresponding supplemental feature points are then extracted from the feature point supplement grid, so there is no need to extract feature points from the whole current frame image again. An application program based on the preset algorithm may be pre-stored on the robot and used to locate the feature point supplement grid of the current frame image according to the pose of the previous frame image.
Step S14: extract supplemental feature points from the feature point supplement grid according to the supplement number.
In the embodiment of the invention, after the feature point supplement grid of the current frame image has been located, the supplement number of supplemental feature points is extracted from it, that is, from the partial image within the scope of the feature point supplement grid. For example, the corresponding grid image may be segmented from the current frame image according to the feature point supplement grid, and the supplement number of supplemental feature points extracted from that grid image. Because the grid image is far smaller than the current frame image, extracting the supplemental feature points is far faster than re-extracting feature points from the whole current frame image.
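A sketch of step S14 under the same assumptions as the earlier fragments (OpenCV, a 64-pixel grid); Shi-Tomasi corner detection stands in for the unspecified extractor, and the detector thresholds are illustrative.
```python
import cv2
import numpy as np

GRID_SIZE = 64  # hypothetical preset grid size, in pixels

def extract_in_cell(curr_gray, cell, n_needed, grid_size=GRID_SIZE):
    """Detect up to n_needed new corners inside one feature point
    supplement grid cell segmented from the current frame image."""
    row, col = cell
    y0, x0 = row * grid_size, col * grid_size
    roi = curr_gray[y0:y0 + grid_size, x0:x0 + grid_size]
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=n_needed,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), np.float32)
    # Shift ROI-local coordinates back into full-image coordinates.
    return pts.reshape(-1, 2) + np.float32([x0, y0])
```
Because the cell is far smaller than the full frame, the detector touches only a fraction of the pixels, which is where the efficiency gain described above comes from.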
In the embodiment of the invention, during the extraction of feature points from the current frame image for visual navigation, if optical flow tracking yields too few feature points and feature points must be extracted from the current frame image again, the required feature points are extracted from the located feature point supplement grid. Compared with extracting feature points again over the whole image, this improves the efficiency of feature point extraction, reduces the latency of visual navigation, reduces the computation required for feature point extraction, and reduces the power consumption.
Example 2
Fig. 2 is a flowchart for obtaining the feature point supplement grid according to embodiment 2 of the present invention, comprising the following steps:
step S21: and calculating by using the pose of the previous frame of image through a preset pose estimation algorithm to obtain the current predicted pose of the equipment.
In the embodiment of the present invention, the preset pose estimation algorithm includes PnP algorithm (PnP) or ICP algorithm (ICP, iterative Closest Point, iteration closest point). That is, an application program based on PnP algorithm or ICP algorithm may be preset in the robot, and when the current predicted pose of the device needs to be estimated, the pose of the previous frame of image is input to the application program, so as to obtain a corresponding predicted pose output.
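Under the PnP branch, a hedged sketch of step S21: 3D map points associated with the tracked features and their 2D image positions are fed to OpenCV's Perspective-n-Point solver. The intrinsic matrix K and the correspondences are assumptions for illustration, not inputs specified by this disclosure.
```python
import cv2
import numpy as np

def predict_pose(map_pts_3d, img_pts_2d, K, dist_coeffs=None):
    """Estimate the device's predicted pose from 3D-2D correspondences
    with the Perspective-n-Point solver (needs at least 4 points)."""
    ok, rvec, tvec = cv2.solvePnP(
        map_pts_3d.astype(np.float32), img_pts_2d.astype(np.float32),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed to converge")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```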
Step S22: calculate the lens movement distance of the image acquisition module according to the predicted pose.
In the embodiment of the invention, the lens movement distance includes at least one of an upward, downward, leftward and rightward movement distance. That is, by comparing the predicted pose with the pose of the previous frame image, the lens movement distance and the corresponding movement direction of the image acquisition module of the device can be calculated.
Step S23: calculate the position of the feature point supplement grid according to the lens movement distance.
According to the distribution of the current feature points on the current frame image and the lens movement distance, the position of the grid cells containing the feature points that were not acquired during optical flow tracking, that is, the position of the feature point supplement grid, can be determined.
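A sketch of one way step S23 could be realized, with several loudly flagged assumptions: the metric displacement is converted to a pixel shift through an illustrative focal length f and mean scene depth z, only the horizontal case is written out (the vertical case is symmetric), and `occupied` is a hypothetical set of cells that already hold tracked points.
```python
GRID_SIZE = 64  # hypothetical preset grid size, in pixels

def supplement_cells(move, occupied, img_shape,
                     f=500.0, z=2.0, grid_size=GRID_SIZE):
    """Pick the border grid cells newly exposed by the lens movement
    and not already covered by the current feature point distribution."""
    h, w = img_shape[:2]
    rows, cols = h // grid_size, w // grid_size
    shift_px = int(f * max(move.values()) / z)  # crude pixel estimate
    n_cols = max(1, min(cols, shift_px // grid_size + 1))
    if move["right"] >= move["left"]:   # new content enters on the right
        border = [(r, c) for r in range(rows)
                  for c in range(cols - n_cols, cols)]
    else:                               # new content enters on the left
        border = [(r, c) for r in range(rows) for c in range(n_cols)]
    # Keep only cells with no currently tracked feature point in them.
    return [cell for cell in border if cell not in occupied]
```
Candidate cells from this sketch could then be fed to the step S14 fragment above until the supplement number is met.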
Example 3
Fig. 3 is a flowchart of a method for obtaining a feature point of visual navigation according to embodiment 3 of the present invention, where the method includes the following steps:
step S31: and carrying out gridding processing of a preset grid size on the current frame image, and obtaining the current feature point through optical flow tracking according to the feature point of the previous frame image.
This step corresponds to the above step S11, and will not be described here again.
Step S32: and when the number of the current feature points is determined to be lower than a preset value, calculating the difference value between the number of the current feature points and the preset value to be the supplementing number.
This step corresponds to the above step S12, and will not be described here again.
Step S33: and acquiring the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm.
This step corresponds to the above step S13 and will not be described here again.
Step S34: and extracting supplemental feature points from the feature point supplemental grid according to the supplemental quantity.
This step corresponds to the above step S14 and will not be described here again.
Step S35: and calculating the current pose of the equipment by using the current feature points and the supplementary feature points extracted from the current frame image.
In the embodiment of the invention, after the supplemental feature points are obtained, the current pose of the device can be calculated by combining them with the earlier current feature points, ensuring the accuracy of the current pose and therefore of visual navigation.
Fig. 4 is a flowchart of another method for obtaining a feature point of visual navigation according to embodiment 3 of the present invention, where the method further includes the following steps:
step S36: and when the number of the current feature points is determined to be greater than or equal to the preset value, calculating the current pose of the equipment by using the current feature points.
Example 4
Fig. 5 is a schematic structural diagram of a visual navigation feature point obtaining device according to embodiment 4 of the present invention.
The visual navigation feature point acquisition apparatus 500 includes:
the gridding module 510, configured to perform gridding processing of a preset grid size on the current frame image, and obtain current feature points through optical flow tracking according to the feature points of the previous frame image;
the supplement number calculation module 520, configured to calculate the difference between the number of current feature points and the preset value as the supplement number when the number of current feature points is determined to be lower than the preset value;
the supplement grid acquisition module 530, configured to obtain the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm;
and the feature point supplement module 540, configured to extract supplemental feature points from the feature point supplement grid according to the supplement number.
As shown in fig. 6, the supplement grid acquisition module 530 includes:
the predicted pose calculation unit 531, configured to obtain the current predicted pose of the device by running a preset pose estimation algorithm on the pose of the previous frame image;
the movement distance calculation unit 532, configured to calculate the lens movement distance of the image acquisition module according to the predicted pose;
and the grid position calculation unit 533, configured to calculate the position of at least one feature point supplement grid according to the lens movement distance.
In the embodiment of the present invention, more detailed functional descriptions of each module may be found in the corresponding portions of the foregoing embodiments and are not repeated here.
The invention further provides a computer device comprising a memory and a processor. The memory may store a computer program, and by running this computer program the processor may cause the computer device to perform the above method or the functions of the respective modules in the above visual navigation feature point acquisition apparatus.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the computer device (such as audio data, phonebooks, etc.), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The invention also provides a computer storage medium for storing a computer program for use in the above computer device.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules or units in various embodiments of the invention may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto; any variation or substitution that would readily occur to a person skilled in the art within the technical scope disclosed herein shall fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A visual navigation feature point acquisition method, characterized by comprising the following steps:
performing gridding processing of a preset grid size on the current frame image, and obtaining current feature points through optical flow tracking according to the feature points of the previous frame image;
when the number of current feature points is determined to be lower than a preset value, calculating the difference between the number of current feature points and the preset value as the supplement number;
obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm;
extracting supplemental feature points from the feature point supplement grid according to the supplement number;
wherein obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm comprises:
obtaining the current predicted pose of the device by running a preset pose estimation algorithm on the pose of the previous frame image;
comparing the predicted pose with the pose of the previous frame image, and calculating the lens movement distance of the image acquisition module;
and calculating the position of the feature point supplement grid according to the lens movement distance and the distribution of the current feature points on the current frame image, wherein the position of the feature point supplement grid indicates the position of the grid cells containing the feature points that were not acquired during optical flow tracking.
2. The visual navigation feature point acquisition method according to claim 1, wherein the preset pose estimation algorithm comprises a PnP algorithm or an ICP algorithm.
3. The visual navigation feature point acquisition method according to claim 1, wherein the lens movement distance comprises at least one of an upward movement distance, a downward movement distance, a leftward movement distance, and a rightward movement distance.
4. The visual navigation feature point acquisition method according to claim 1, further comprising:
calculating the current pose of the device by using the current feature points and the supplemental feature points extracted from the current frame image.
5. The visual navigation feature point acquisition method according to claim 1, further comprising:
when the number of current feature points is determined to be greater than or equal to the preset value, calculating the current pose of the device by using the current feature points.
6. A visual navigation feature point acquisition device, characterized by comprising:
a gridding module for performing gridding processing of a preset grid size on the current frame image and obtaining current feature points through optical flow tracking according to the feature points of the previous frame image;
a supplement number calculation module for calculating the difference between the number of current feature points and the preset value as the supplement number when the number of current feature points is determined to be lower than the preset value;
a supplement grid acquisition module for obtaining the feature point supplement grid of the current frame image by using the pose of the previous frame image and a preset algorithm;
a feature point supplement module for extracting supplemental feature points from the feature point supplement grid according to the supplement number;
wherein the supplement grid acquisition module comprises:
a predicted pose calculation unit for obtaining the current predicted pose of the device by running a preset pose estimation algorithm on the pose of the previous frame image;
a movement distance calculation unit for comparing the predicted pose with the pose of the previous frame image and calculating the lens movement distance of the image acquisition module;
and a grid position calculation unit for calculating the position of at least one feature point supplement grid according to the lens movement distance and the distribution of the current feature points on the current frame image, wherein the position of the feature point supplement grid indicates the position of the grid cells containing the feature points that were not acquired during optical flow tracking.
7. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program which, when run on the processor, performs the visual navigation feature point acquisition method according to any one of claims 1 to 5.
8. A readable storage medium, characterized in that it stores a computer program which, when run on a processor, performs the visual navigation feature point acquisition method of any one of claims 1 to 5.
CN202110973194.3A 2021-08-24 2021-08-24 Visual navigation feature point acquisition method and device and computer equipment Active CN113674320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110973194.3A CN113674320B (en) 2021-08-24 2021-08-24 Visual navigation feature point acquisition method and device and computer equipment

Publications (2)

Publication Number Publication Date
CN113674320A CN113674320A (en) 2021-11-19
CN113674320B (en) 2024-03-22

Family

ID=78545468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110973194.3A Active CN113674320B (en) 2021-08-24 2021-08-24 Visual navigation feature point acquisition method and device and computer equipment

Country Status (1)

Country Link
CN (1) CN113674320B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10043076B1 (en) * 2016-08-29 2018-08-07 PerceptIn, Inc. Visual-inertial positional awareness for autonomous and non-autonomous tracking
CN108682036A (en) * 2018-04-27 2018-10-19 腾讯科技(深圳)有限公司 Pose determines method, apparatus and storage medium
CN112154479A (en) * 2019-09-29 2020-12-29 深圳市大疆创新科技有限公司 Method for extracting feature points, movable platform and storage medium
WO2021056501A1 (en) * 2019-09-29 2021-04-01 深圳市大疆创新科技有限公司 Feature point extraction method, movable platform and storage medium
CN111696142A (en) * 2020-06-12 2020-09-22 广东联通通信建设有限公司 Rapid face detection method and system
CN112880687A (en) * 2021-01-21 2021-06-01 深圳市普渡科技有限公司 Indoor positioning method, device, equipment and computer readable storage medium
CN113112542A (en) * 2021-03-25 2021-07-13 北京达佳互联信息技术有限公司 Visual positioning method and device, electronic equipment and storage medium
CN112884840A (en) * 2021-03-29 2021-06-01 湖南国科微电子股份有限公司 Visual positioning method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN113674320A (en) 2021-11-19


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant