CN115705658A - Camera pose estimation method and device, computer equipment and storage medium


Info

Publication number: CN115705658A
Application number: CN202110918447.7A
Authority: CN (China)
Prior art keywords: line, point, features, error function, feature
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 胡锦丽, 孟俊彪, 刘阳兴
Current assignee / original assignee: Wuhan TCL Group Industrial Research Institute Co Ltd
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd
Filing date / priority date: 2021-08-11
Publication date: 2023-02-17


Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a camera pose estimation method and device, a computer device, and a storage medium. The method comprises the following steps: performing feature extraction on a target image to obtain point features and line features of the target image; constructing a point feature pose solving error function according to the point features, and constructing a line feature pose solving error function according to the line features; and determining a camera pose estimation result according to a fusion error function obtained from the point feature pose solving error function and the line feature pose solving error function. By adding line features in weak texture regions, the method strengthens the robustness of feature matching, and by estimating the camera pose with the constructed error function that fuses point features and line features, it improves the accuracy of camera pose estimation.

Description

Camera pose estimation method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a camera pose estimation method, apparatus, computer device, and storage medium.
Background
Camera pose estimation is a classic problem in the field of computer vision: given the 3D coordinates of a number of points in three-dimensional space and their 2D projected coordinates on the camera image, the position and attitude of the camera in three-dimensional space are estimated.
The most commonly used method for camera pose estimation is PnP (Perspective-n-Point). In the course of implementation, however, the inventors found that because the conventional PnP algorithm relies on matching point features, the accuracy of camera pose estimation is low when point features are scarce. In some weak texture regions, such as regular patterns (e.g. rectangles), there are few point features but many line features, and relying solely on point features gives unsatisfactory camera pose estimation accuracy.
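By way of background illustration only, the following sketch shows the conventional point-only PnP baseline using OpenCV's solvePnP; it is not the method of the present application, and the 3D points, 2D projections, and intrinsic matrix are synthetic placeholders chosen for the example.

```python
import cv2
import numpy as np

# synthetic correspondences consistent with R = I, t = (-0.5, -0.5, 4)
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                          [0, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[220, 140], [420, 140], [420, 340],
                         [220, 340], [240, 160], [400, 160]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # camera intrinsic matrix (assumed)
dist = np.zeros(5)                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                # rotation matrix R and translation tvec give the pose
print(ok, R, tvec)
```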
Disclosure of Invention
Based on this, the present invention provides a camera pose estimation method, apparatus, computer device, and storage medium capable of improving camera pose estimation accuracy.
In a first aspect, the present invention provides a camera pose estimation method, including:
performing feature extraction on the target image to obtain point features and line features of the target image;
constructing a point feature pose solving error function according to the point features, and constructing a line feature pose solving error function according to the line features;
and determining a camera pose estimation result according to a fusion error function obtained from the point feature pose solving error function and the line feature pose solving error function.
In a second aspect, the present invention provides a camera pose estimation apparatus, including:
the feature extraction module is used for performing feature extraction on the target image to obtain the point features and the line features of the target image;
the construction module is used for constructing a point feature pose solving error function according to the point features and constructing a line feature pose solving error function according to the line features;
and the pose determining module is used for determining a camera pose estimation result according to a fusion error function obtained from the point feature pose solving error function and the line feature pose solving error function.
In a third aspect, the present invention provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to implement the steps in the camera pose estimation method described above.
In a fourth aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program, the computer program being executed by a processor to implement the steps in the camera pose estimation method described above.
The invention provides a camera pose estimation method, a camera pose estimation apparatus, a computer device, and a storage medium. Point features and line features are simultaneously extracted and matched in weak texture regions and in regions where point features are sparse, which enhances the robustness of feature matching, and camera pose estimation with point-line fusion is performed using a constructed error function that fuses point features and line features, which improves the accuracy of camera pose estimation.
Drawings
Fig. 1 is a schematic flow chart of a camera pose estimation method in an embodiment of the present application.
Fig. 2 is another schematic flow chart of the camera pose estimation method in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a camera pose estimation apparatus in an embodiment of the present application.
Fig. 4 is another schematic structural diagram of the camera pose estimation apparatus in the embodiment of the present application.
Fig. 5 is an internal structural diagram of a computer device in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The inventors found that camera pose estimation schemes in the prior art rely on matching point features. In some weak texture regions, such as regular patterns (e.g. rectangles), there are few point features but many line features, and relying solely on point features gives unsatisfactory camera pose estimation accuracy.
In order to overcome the above problems in the prior art, this embodiment discloses a camera pose estimation method, apparatus, computer device, and storage medium. The camera pose estimation method may run on a camera, specifically on a processor of the camera. Point features and line features are extracted and matched simultaneously in weak texture regions to enhance the robustness of feature matching, and camera pose estimation with point-line fusion is performed using a constructed error function that fuses point features and line features, thereby improving the accuracy of camera pose estimation.
The camera pose estimation method, apparatus, computer device, and storage medium provided by the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments.
In one example, as shown in fig. 1-2, there is provided a camera pose estimation method whose execution subject is a computer device, the method comprising:
s1, extracting the features of the target image to obtain point features and line features of the target image.
It should be noted that the camera pose estimation method is applied to a camera, and the processor of the camera executes the steps of the camera pose estimation method, so that the accuracy of camera pose estimation is improved.
The target image is an image obtained by a camera shooting a weak texture region, where a weak texture region is a region whose gradient statistical mean lies within a preset range. When the camera captures the target image, the processor of the camera performs feature extraction on it to obtain the point features and line features corresponding to the target image. The camera may be a monocular camera or a binocular camera.
In one example, after the target scene area is selected and a scene image is captured by the camera in the weak texture region, the processor of the camera pre-processes this scene image (the target image). The pre-processing of the target image by the processor of the camera includes performing distortion correction and mask processing on the target image.
In order to avoid features being mistakenly extracted from the image boundary, the frame, and interfering objects in the shot, which would affect subsequent feature matching, the image is mask-processed after distortion correction is performed with a corresponding tool, for example the PIE SDK (Pixel Information Expert SDK; its orthorectification corrects spatial and geometric distortion of the image to generate an orthoimage in a multi-center projection plane). It should be noted that, because the masks are made for different lenses, each lens has a unique fixed mask, and the mask of each lens is applicable to all images captured by the lens to which it belongs.
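A minimal sketch of this preprocessing, assuming OpenCV, is given below: the target image is undistorted with the lens calibration parameters and then combined with the fixed per-lens mask. The intrinsic matrix, distortion coefficients, and file names are illustrative assumptions, not values from the application.

```python
import cv2
import numpy as np

image = cv2.imread("target_image.png")
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                        # camera intrinsic matrix (assumed)
dist_coeffs = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # distortion coefficients (assumed)

undistorted = cv2.undistort(image, K, dist_coeffs)     # distortion correction

# Per-lens fixed mask: zero out image borders and known interfering regions so that
# no features will be extracted there in the following steps.
mask = cv2.imread("lens_mask.png", cv2.IMREAD_GRAYSCALE)
masked = cv2.bitwise_and(undistorted, undistorted, mask=mask)
```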
The feature extraction method may be selected according to actual requirements, and in one example, the step S1 includes:
and S11, extracting point Features in the target image by using a point feature extraction algorithm, and describing the point Features by using a first descriptor (the first descriptor can be rBRIEF: rotation-aware BRIEF, BRIEF: binary Robust Independent element Features, which is a Binary coded descriptor).
And S12, extracting Line features in the target image by using a Line feature extraction algorithm, and describing the Line features by using a second descriptor (the second descriptor can be LBD: line descriptor).
In one example, the point feature extraction and description methods include, but are not limited to, SIFT (Scale-Invariant Feature Transform, a classic descriptor in the image processing field), SURF (Speeded-Up Robust Features, an improvement on SIFT), and ORB (Oriented FAST and Rotated BRIEF).
For example, when the ORB feature extraction and description method is adopted, the running time of the ORB algorithm is far better than that of SIFT and SURF, so it can be used for real-time feature detection. ORB features are based on detecting FAST (Features from Accelerated Segment Test) corners and describing the detected point features; they are invariant to scale and rotation and robust to noise and to perspective and affine transformations.
The ORB feature detection is mainly divided into feature extraction and feature description, and comprises the following two steps:
first, directional FAST point feature detection.
FAST corner detection is a fast corner feature detection algorithm based on machine learning. Oriented FAST point feature detection examines the 16 pixels on a circle around a point of interest and decides whether that point is a corner according to whether those pixels are consistently darker or brighter than the current center pixel. FAST corner detection is accelerated in implementation, usually by first ordering the set of points on the circle, which greatly optimizes the calculation process.
Next, feature description with the first descriptor.
The key point information extracted from an image usually contains only the position (and possibly scale and direction) of each feature in the image. This information alone does not allow point features to be matched well, so more detailed information, namely a feature descriptor, is needed to distinguish the features. In addition, the descriptor absorbs changes in scale and direction caused by viewpoint changes, so that images can be matched better.
The first descriptor is formed by randomly selecting a number of points in the area around the point of interest to form small interest regions, binarizing the gray levels of these small regions, and parsing them into a binary code string that serves as the descriptor of the point feature. The first descriptor selects the area near the key point, compares the intensity of each pair of sampled points, and encodes the corresponding bit of the current key point as 0 or 1 according to the comparison of the two points in the image block. Since the entire encoding of the first descriptor is binary, computer memory space is saved.
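A minimal sketch of this point feature step, oriented FAST detection plus binary (rBRIEF-style) description, is given below using OpenCV's ORB implementation; the file name and parameter values are illustrative assumptions.

```python
import cv2

gray = cv2.imread("target_image.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=1000, scaleFactor=1.2, nlevels=8)
keypoints, descriptors = orb.detectAndCompute(gray, None)
# 'keypoints' carry position, scale and orientation, as described above;
# 'descriptors' is an N x 32 array of packed binary codes, one row per keypoint.
```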
In one example, line features are extracted using the Hough transform method.
It should be noted that the straight-line extraction method includes, but is not limited to, the Hough transform method. The Hough transform was proposed by Hough in 1962 and is used for detecting curves in images, such as straight lines, circles, parabolas, and ellipses, whose shapes can be described by some functional relation. The basic principle of the Hough transform is to map a curve (including a straight line) in the image space into a parameter space, and to determine the description parameters of the curve by detecting extreme points in the parameter space, thereby extracting regular curves from the image.
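A minimal sketch of line feature extraction with the probabilistic Hough transform, one possible realization of the Hough-based straight-line extraction described above, is given below; the Canny and Hough thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

gray = cv2.imread("target_image.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)
# Each detected segment is given by its two end points (x1, y1, x2, y2) on the image,
# which is the representation used for line matching later in this description.
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        pass  # e.g. compute a second (LBD-style) descriptor for the segment here
```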
In one embodiment, after step S1, the method further includes: respectively screening the point features and the line features to obtain screened point features and screened line features. In this embodiment, step S2 (constructing a point feature pose solving error function according to the point features, and constructing a line feature pose solving error function according to the line features) is replaced with: constructing a point feature pose solving error function according to the screened point features, and constructing a line feature pose solving error function according to the screened line features.
In one embodiment, the step of respectively screening the point features and the line features to obtain screened point features and screened line features includes:
and screening the point features based on the first descriptor to obtain the point features of which the dot product calculation result with the current point features is smaller than a preset threshold, and forming the screened point features by the point features of which the dot product calculation result with the current point features is smaller than the preset threshold. Prior to constructing a line feature pose from line features and solving an error function, match line features are required.
And screening the line features based on the second descriptor to obtain the line segments whose included angle, distance, and length with respect to the current line segment all meet the set conditions; these line segments form the screened line features.
In one example, after the point features in the image are extracted by the above ORB method and described, point feature matching is performed; the point feature matching methods include, but are not limited to, brute-force search matching, K nearest neighbor matching, and the like.
For example, with K nearest neighbor matching, the K points whose point features are most similar are selected during matching, and if the differences among the K points are large enough, the most similar point is selected as the matching point; usually K = 2 is chosen, that is, nearest neighbor matching is performed. Two nearest neighbor matches are returned for each query, and if the first match and the second match are sufficiently far apart in ratio (that is, their descriptor distances differ enough), the first match is considered a correct match; the threshold for the ratio is typically around 2.
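A minimal sketch of K nearest neighbor matching of the binary point descriptors with the ratio check described above (K = 2) is given below; the image file names and the ratio threshold of 2 are illustrative assumptions.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
kp1, desc1 = orb.detectAndCompute(cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE), None)
kp2, desc2 = orb.detectAndCompute(cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE), None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)      # Hamming distance for binary descriptors
pairs = matcher.knnMatch(desc1, desc2, k=2)    # two nearest neighbours per query descriptor

good_matches = []
for m in pairs:
    if len(m) < 2:
        continue
    best, second = m
    # keep the match only if the second nearest neighbour is far enough away (ratio ~ 2)
    if second.distance > 2.0 * best.distance:
        good_matches.append(best)
```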
In one example, after the line features in the image are extracted, feature matching is performed on them; this includes obtaining line matching pairs by straight-line neighbor matching, where each line is represented by its two end points on the image.
The straight-line neighbor matching method is as follows: for each straight line, calculate the included angles between this line and all the other lines, and eliminate the lines whose included angle is larger than a certain threshold (such as 5 degrees); for the remaining lines, calculate the distance from the center of each line to the current straight line, and if the smallest such distance is smaller than a certain threshold (such as 10 pixels), match the current line with that nearest line (see the sketch below).
The included angle, the distance, and the length all meeting the set conditions means that the included angle is smaller than a preset threshold, the distance is smaller than a preset threshold, and the length is within a set error range.
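A minimal numpy sketch of the straight-line neighbor matching rule described above is given below; the (x1, y1, x2, y2) segment representation, the helper names, and the default thresholds of 5 degrees and 10 pixels are assumptions made for the example.

```python
import numpy as np

def line_angle(seg):
    x1, y1, x2, y2 = seg
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

def point_to_line_distance(pt, seg):
    p1 = np.array(seg[:2], dtype=float)
    p2 = np.array(seg[2:], dtype=float)
    d = p2 - p1
    # perpendicular distance from pt to the infinite line through p1 and p2
    return abs(d[0] * (pt[1] - p1[1]) - d[1] * (pt[0] - p1[0])) / (np.linalg.norm(d) + 1e-12)

def match_line(current, candidates, angle_thresh=5.0, dist_thresh=10.0):
    best, best_dist = None, dist_thresh
    for seg in candidates:
        diff = abs(line_angle(seg) - line_angle(current)) % 180.0
        diff = min(diff, 180.0 - diff)          # direction-independent included angle
        if diff > angle_thresh:
            continue                            # angle screening (e.g. 5 degrees)
        center = np.array([(seg[0] + seg[2]) / 2.0, (seg[1] + seg[3]) / 2.0])
        d = point_to_line_distance(center, current)
        if d < best_dist:                       # keep the nearest line within e.g. 10 pixels
            best, best_dist = seg, d
    return best
```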
And S3, determining a camera pose estimation result according to a fusion error function obtained from the point feature pose solving error function and the line feature pose solving error function.
The camera pose estimation result may be used for instantaneous positioning and map reconstruction, which is not limited herein.
In one example, the fusion error function is:

E_point_line = E_line,d(p_d, q_d, I) + E_point.

The pose solving error function of the point features is E_point = (K^{-1} p_2)^T E K^{-1} p_1, wherein:

E is the essential matrix; p_1 and p_2 are two matched point features in space, given in pixel coordinates; the relative motion from p_2 to p_1 is determined by the extrinsic parameters {R, t} and the intrinsic parameter matrix K, where R is a rotation matrix and t is a translation vector. Since p_1 and p_2 are projections of the same space point P,

d_1 p_1 = K P,
d_2 p_2 = K (R P + t),

where d_1 and d_2 are the depths of the space point P in the first and second camera coordinate systems.

Let

x_1 = K^{-1} p_1,
x_2 = K^{-1} p_2.

Using the essential matrix relation of epipolar geometry,

E = [t]_× R,
x_2^T E x_1 = 0,

where [t]_× denotes the skew-symmetric (cross-product) matrix of t, so that E_point measures the deviation of a matched point pair from the epipolar constraint.

The pose solving error function of the line features is E_line,d(p_d, q_d, I), wherein:

P, Q ∈ R^3 are the two end points of a space straight line, and p_d, q_d ∈ R^2 are their projected coordinates in the image. Writing the homogeneous coordinates of the detected end points as (p_d^T, 1)^T and (q_d^T, 1)^T, the straight line parameter is obtained as

I = (p_d^T, 1)^T × (q_d^T, 1)^T,

the cross product of the homogeneous end point coordinates, and π(P, θ, K) below denotes the coordinate of the projection point of the space point P on the image plane.

Defining the projection error of a line feature as the point-to-line distance E_pl between an end point of the projected line segment and the line detected in the image, the pose solving error of line feature matching is

E_line,d(p_d, q_d, I) = E_pl(P)^2 + E_pl(Q)^2, with
E_pl = I^T π(P, θ, K);

I is the straight line parameter, π(P, θ, K) is the coordinate of the space point P projected onto the image plane, and θ and K are the extrinsic and intrinsic parameters of the camera, respectively. Ideally, the projection point of the space point P lies on the straight line, so that the point-to-line distance is

E_pl = 0.
Point features and line features are thus extracted and matched simultaneously in the weak texture region, which enhances the robustness of feature matching, and point-line fusion camera pose estimation is performed using the constructed error function that fuses the point features and the line features, so that the camera pose estimation precision is improved.
It should be understood that although the various steps in the flow charts of fig. 1-2 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least some of the steps in fig. 1-2 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 3 to 4, there is provided a camera pose estimation apparatus including a feature extraction module 10, a construction module 20, and a pose determination module 30, wherein:
the feature extraction module 10 is configured to perform feature extraction on the target image to obtain a point feature and a line feature of the target image.
And the constructing module 20 is configured to construct a point feature pose solution error function according to the point features and construct a line feature pose solution error function according to the line features.
And the pose determining module 30 is configured to determine a camera pose estimation result according to a fusion error function obtained from the point feature pose solving error function and the line feature pose solving error function.
In one example, feature extraction module 10 includes a point feature extraction unit 11 and a line feature extraction unit 12, where:
and the point feature extraction unit 11 is configured to extract a point feature in the target image by using a point feature extraction algorithm, and describe the point feature by using the first descriptor.
A line feature extracting unit 12, configured to extract a line feature in the target image by using a line feature extraction algorithm, and describe the line feature by using a second descriptor.
For specific definition of the camera pose estimation apparatus, reference may be made to the definition of the camera pose estimation method above, and details are not repeated here. The respective modules in the above camera pose estimation apparatus may be realized in whole or in part by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, as shown in fig. 5, there is provided a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to implement the steps in the camera pose estimation method in any of the above embodiments.
In one embodiment, a computer device is provided, an internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operating system and the computer program to run on the non-volatile storage medium. The database of the computer device is used to store camera pose estimation data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a camera pose estimation method.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, there is provided a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the steps in the camera pose estimation method in any one of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware, the computer program being stored in a non-volatile computer-readable storage medium; when executed, the computer program can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above examples only express several embodiments of the present application, and the description thereof is more specific and detailed, but not to be construed as limiting the claims. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, and these are all within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (10)

1. A camera pose estimation method, comprising:
performing feature extraction on a target image to obtain point features and line features of the target image;
constructing a point feature pose solving error function according to the point features, and constructing a line feature pose solving error function according to the line features;
and determining a camera pose estimation result according to a fusion error function obtained by the point feature pose solving error function and the line feature pose solving error function.
2. The camera pose estimation method according to claim 1, wherein after the feature extraction is performed on the target image to obtain the point feature and the line feature of the target image, the method further comprises:
respectively screening the point features and the line features to obtain screened point features and screened line features;
the method for constructing a point feature pose solution error function according to the point features and constructing a line feature pose solution error function according to the line features comprises the following steps:
and constructing a point characteristic pose solving error function according to the screened point characteristics, and constructing a line characteristic pose solving error function according to the screened line characteristics.
3. The camera pose estimation method according to claim 2, wherein the target image is an image obtained by shooting a weak texture region, and the weak texture region is a region in which a gradient statistical average value is within a preset range.
4. The camera pose estimation method according to claim 2 or 3, characterized in that the fusion error function is:
E_point_line = E_line,d(p_d, q_d, I) + E_point;
the point feature pose solving error function is E_point = (K^{-1} p_2)^T E K^{-1} p_1,
wherein: E is the essential matrix, p_1 and p_2 are two matched point features in space, and K is the intrinsic parameter matrix of the camera; the line feature pose solving error function is
E_line,d(p_d, q_d, I) = E_pl(P)^2 + E_pl(Q)^2, with E_pl = I^T π(P, θ, K),
wherein: P and Q are the two end points of a space straight line; p_d and q_d are the corresponding projected coordinates of P and Q in the image; and I is the straight line parameter.
5. The camera pose estimation method according to claim 4, wherein the performing feature extraction on the target image to obtain point features and line features of the target image includes:
extracting point features in the target image by using a point feature extraction algorithm, and describing the point features by using a first descriptor;
and extracting line features in the target image by using a line feature extraction algorithm, and describing the line features by using a second descriptor.
6. The camera pose estimation method according to claim 5, wherein the screening the point features and the line features to obtain screened point features and screened line features respectively comprises:
screening the point features based on the first descriptor to obtain point features of which the dot product calculation results with the current point features are smaller than a preset threshold, wherein the point features of which the dot product calculation results with the current point features are smaller than the preset threshold form the screened point features;
and screening the line features based on the second descriptor to obtain the line segments of which the included angles, the distances and the lengths with respect to the current line segment all meet the set conditions, wherein the line segments of which the included angles, the distances and the lengths with respect to the current line segment all meet the set conditions form the screened line features.
7. A camera pose estimation device, comprising:
the characteristic extraction module is used for extracting the characteristics of a target image to obtain the point characteristics and the line characteristics of the target image;
the construction module is used for constructing a point feature pose solving error function according to the point features and constructing a line feature pose solving error function according to the line features;
and the pose determining module is used for determining a camera pose estimation result according to a fusion error function obtained by the point feature pose solving error function and the line feature pose solving error function.
8. The apparatus of claim 7, wherein the feature extraction module comprises:
the point feature extraction unit is used for extracting point features in the target image by using a point feature extraction algorithm and describing the point features by using a first descriptor;
and the line feature extraction unit is used for extracting line features in the target image by using a line feature extraction algorithm and describing the line features by using a second descriptor.
9. A computer device characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to realize the steps in the camera pose estimation method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program, which is executed by a processor to implement the steps in the camera pose estimation method according to any one of claims 1 to 6.
CN202110918447.7A 2021-08-11 Camera pose estimation method and device, computer equipment and storage medium, CN115705658A (pending)

Priority Applications (1)

Application Number: CN202110918447.7A
Priority Date / Filing Date: 2021-08-11
Title: Camera pose estimation method and device, computer equipment and storage medium

Publications (1)

Publication Number: CN115705658A
Publication Date: 2023-02-17

Family ID: 85179743

Country Status (1)

Country: CN
Document: CN (1) CN115705658A (en)


Legal Events

Date Code Title Description
PB01 Publication