CN108447092B - Method and device for visually positioning marker - Google Patents

Method and device for visually positioning marker

Info

Publication number
CN108447092B
Authority
CN
China
Prior art keywords
camera
marker
current frame
frame image
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810118800.1A
Other languages
Chinese (zh)
Other versions
CN108447092A (en)
Inventor
吴毅红 (Wu Yihong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN201810118800.1A
Publication of CN108447092A
Application granted
Publication of CN108447092B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer vision, and particularly relates to a method and a device for visually positioning a marker. It solves the problem in the prior art that existing markers lead to inaccurate camera positioning. The invention provides a method for visually positioning a marker, which comprises: acquiring a current frame image of an input video and extracting edge image points of the current frame image; performing cluster analysis on the edge image points and performing quadratic curve fitting for the different classes; calculating an epipolar line of the current frame image based on the quadratic curve and obtaining the intersection points of the quadratic curve and the epipolar line; and calculating the camera pose parameters of the current frame image according to the pre-acquired camera intrinsic parameters of the current frame image and the intersection points, thereby realizing the positioning of the marker. The method is simple, fast, memory-efficient, accurate and robust.

Description

Method and device for visually positioning marker
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a method and a device for visually positioning a marker.
Background
With the rapid development of virtual reality, augmented reality and robotics, tracking and computing the camera position has, as an indispensable technology in these fields, received a great deal of attention and research in academia and industry. One of the most effective existing approaches to computing the camera position is based on planar markers: starting from the concentric-circle marker, color and scale information was added, and a variety of markers were subsequently proposed, such as annular markers, matrices with simple figures and textures, square markers, markers with four circles located at the four corners of a rectangle, and black rectangles with black-and-white blocks. However, if there are mismatches between space points and image points, the computed camera position and attitude are inaccurate. Even if the RANSAC (Random Sample Consensus) algorithm is used to eliminate mismatched points, the markers in the prior art contain so few points that it cannot be guaranteed that the eliminated points are the mismatched ones, while the algorithmic complexity and the waste of resources increase.
Therefore, how to solve the problem that existing markers lead to inaccurate camera positioning is a problem that needs to be addressed by those skilled in the art.
Disclosure of Invention
In order to solve the above problem in the prior art, namely that existing markers lead to inaccurate camera positioning, the present invention provides a method for visually positioning a marker, comprising:
acquiring a current frame image of an input video, and extracting edge image points of the current frame image;
performing cluster analysis on the edge image points, and performing quadratic curve fitting according to different classes;
calculating epipolar lines of the current frame image based on the quadratic curve, and obtaining intersection points of the quadratic curve and the epipolar lines;
and calculating the camera pose parameters of the current frame image according to the pre-acquired camera intrinsic parameters of the current frame image and the intersection points of the quadratic curve and the epipolar line, to realize the positioning of the marker.
In a preferred embodiment of the above method, the method further comprises:
constructing a circular marker comprising a circular contour and a marker point, the marker point being located inside or outside the circular contour, the circular contour being used for the quadratic curve fitting.
In a preferred technical solution of the above method, the camera pose parameters of the current frame image include a rotation matrix of the camera and a translation vector of the camera.
In a preferred technical solution of the above method, the method for calculating the camera pose parameter of the current frame image includes:
according to the imaging process of a camera, establishing a mathematical relation between the intersection point of the quadratic curve and the epipolar line and a camera rotation matrix of the camera pose parameter, wherein the specific mathematical relation is shown as the following formula:
[formula reproduced as an image in the original publication]
calculating the camera pose parameter of the current frame image according to the spatial position relation between the marker and the camera, wherein the specific calculation method is shown as the following formula:
[formula reproduced as an image in the original publication]
where t denotes the translation vector of the camera, s0 and s1 denote intermediate variables, r1 denotes the first column of the camera rotation matrix, m0 denotes the image of the origin of the world coordinate system, m1 denotes the image of the marker point of the marker, l∞ denotes the epipolar line, Lx denotes the coordinate of the marker point of the marker in the spatial coordinate system, r11, r21 and r31 denote the corresponding elements of the camera rotation matrix, (u0, v0) denotes the coordinates of m0, and (u1, v1) denotes the coordinates of m1;
the spatial position relationship between the marker and the camera is as follows: the marker is located in front of the camera and is directed towards the camera in a direction normal to the plane in which the marker is located.
In a preferred technical solution of the above method, "extracting an edge image point of the current frame image", the method includes:
and extracting edge image points of the current frame image by adopting an edge detection algorithm.
In a preferred embodiment of the above method, "calculating epipolar lines of the current frame image based on the quadratic curve" includes:
when the number of the quadratic curves is two, calculating the epipolar line according to quasi-affine invariance;
and when the number of the quadratic curves is one, extracting points inside the quadratic curves as an origin of a world coordinate system, calculating an epipolar line of the origin of the world coordinate system, and taking the epipolar line as the epipolar line of the quadratic curves.
The invention also provides a storage device in which a plurality of programs are stored, said programs being adapted to be loaded by a processor and to carry out the method of visually positioning a marker as described above.
The invention also provides a processing device, which comprises a processor and a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is adapted to be loaded by a processor and to carry out the method of visually positioning a marker as described above.
Compared with the closest prior art, the method for visually positioning a marker according to the present invention comprises: acquiring a current frame image of an input video and extracting edge image points of the current frame image; performing cluster analysis on the edge image points and performing quadratic curve fitting according to the different classes; calculating an epipolar line of the current frame image based on the quadratic curve and obtaining the intersection points of the quadratic curve and the epipolar line; and calculating the camera pose parameters of the current frame image according to the pre-acquired camera intrinsic parameters of the current frame image and the intersection points, thereby realizing the positioning of the marker.
The technical scheme at least has the following beneficial effects: the technical scheme of the invention can fully recover the position and attitude of the camera without performing multi-point selection and matching, and is simple, fast, memory-efficient, accurate and robust. In addition, the marker of the present scheme is simple and easy to manufacture, and such circular features frequently appear in everyday natural scenes; the marker-based positioning result can therefore serve as a test reference for positioning based on natural scenes, providing reference data where no ground-truth camera pose is available in the natural scene, and the camera can be positioned online in real time on a low-end CPU.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for visually locating a marker in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first type of marker in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a second type of marker in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings; it is obvious that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention fall within the scope of the present invention.
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are only for explaining the technical principle of the present invention, and are not intended to limit the scope of the present invention.
Referring to FIG. 1, FIG. 1 is a flow chart illustrating a method for visually locating a marker in accordance with an embodiment of the present invention. As shown in FIG. 1, the method for visually locating a marker in this embodiment includes the steps of:
step S1: acquiring a current frame image of an input video, and extracting edge image points of the current frame image;
the method for visually positioning the marker can realize the positioning of the marker on the current frame image of the input video, and the edge detection is taken as an important link of image analysis and identification, which can influence the identification accuracy to a great extent, so that the edge image point of the current frame image needs to be extracted for subsequent operation.
In a preferred embodiment of the present invention, an edge detection algorithm is used to extract edge image points of the current frame image.
Edge detection plays an important role in applications such as computer vision and image analysis: other image features are derived from basic features such as edges and regions, and the quality of the edge detection directly influences the segmentation and recognition performance on the image. Edge detection exploits the difference in image characteristics between the object and the background; specifically, it can be divided into four steps, namely filtering, enhancement, detection and localization, to obtain the extracted edge image points.
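As an illustration only, a minimal sketch of this edge-extraction step is given below using OpenCV's Canny detector; the patent does not prescribe a particular detector, so the choice of Canny, the Gaussian pre-filter and the threshold values are assumptions.

    import cv2
    import numpy as np

    def extract_edge_points(frame_bgr, low_thresh=50, high_thresh=150):
        """Return an (N, 2) array of (x, y) edge-pixel coordinates for one frame.

        Generic Canny-based stand-in for the edge-extraction step; thresholds
        and detector choice are illustrative assumptions, not the patent's.
        """
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Filtering step: suppress noise before gradient computation.
        blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)
        # Enhancement, detection and localization are handled inside Canny.
        edges = cv2.Canny(blurred, low_thresh, high_thresh)
        ys, xs = np.nonzero(edges)
        return np.stack([xs, ys], axis=1)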
Step S2: performing cluster analysis on the edge image points, and performing quadratic curve fitting according to different classes;
After the edge image points are obtained, cluster analysis is performed on them. Cluster analysis is a technique for discovering the internal structure of data: all data instances are organized into a number of similarity groups, called clusters, and the analysis is a form of unsupervised learning. The edge image points are therefore divided into different classes according to their attributes, and quadratic curve fitting is performed for each class.
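The patent does not name a particular clustering algorithm or fitting procedure; the sketch below, given purely as an assumption-laden illustration, groups the edge points with scikit-learn's DBSCAN and fits one quadratic curve (conic) per cluster by algebraic least squares.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def fit_conic(points):
        """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to (N, 2) points.

        Algebraic least squares: the coefficient vector is the right singular
        vector of the design matrix associated with the smallest singular value.
        """
        x = points[:, 0].astype(float)
        y = points[:, 1].astype(float)
        A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(A)
        a, b, c, d, e, f = vt[-1]
        # Symmetric 3x3 coefficient matrix C, so that m^T C m = 0 on the conic.
        return np.array([[a, b / 2, d / 2],
                         [b / 2, c, e / 2],
                         [d / 2, e / 2, f]])

    def cluster_and_fit(edge_points, eps=3.0, min_samples=20):
        """Cluster edge points and fit a conic to every sufficiently large cluster."""
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(edge_points)
        conics = []
        for lbl in set(labels) - {-1}:            # label -1 marks noise points
            cluster = edge_points[labels == lbl]
            if len(cluster) >= 6:                 # a conic has 5 degrees of freedom
                conics.append(fit_conic(cluster))
        return conics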
Step S3: calculating epipolar lines of the current frame image based on the quadratic curve, and obtaining intersection points of the quadratic curve and the epipolar lines;
In practical applications, after quadratic curves have been fitted for the different classes, different numbers of quadratic curves may be obtained, and the way the epipolar line of the current frame image is calculated differs according to that number.
Step S4: and calculating the camera pose parameters of the current frame image according to the pre-acquired camera intrinsic parameters of the current frame image and the intersection point to realize the positioning of the marker.
In the embodiment of the invention, the camera intrinsic parameters may be known in advance; when they are unknown, they are solved first, and the camera pose parameters of the current frame image are then calculated from the camera intrinsic parameters and the intersection points of the quadratic curve and the epipolar line. The camera pose parameters are calculated according to the imaging process of the camera and the spatial positions of the camera and the marker, in combination with the marker point of the marker, so that the marker is positioned.
The method is based on a circular planar marker and requires no multi-point matching; the position and attitude of the camera can be fully recovered, and the method is convenient, fast in processing, small in memory footprint, and high in precision and robustness.
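Tying steps S1 to S4 together, a hypothetical top-level loop might look like the sketch below; every helper name that has not been defined above (compute_epipolar_line, estimate_intrinsics, solve_pose) is an assumed placeholder for the operations described in this embodiment, not an API defined by the patent.

    def locate_marker_in_video(video_capture, K=None):
        """Per-frame marker localization loop (illustrative skeleton only)."""
        poses = []
        while True:
            ok, frame = video_capture.read()          # e.g. a cv2.VideoCapture
            if not ok:
                break
            edge_points = extract_edge_points(frame)              # step S1
            conics = cluster_and_fit(edge_points)                 # step S2
            if not conics:
                continue
            line_inf, intersections = compute_epipolar_line(conics)   # step S3 (assumed helper)
            if K is None:
                K = estimate_intrinsics(intersections)    # optional self-calibration (assumed helper)
            R, t = solve_pose(K, intersections, line_inf)             # step S4 (assumed helper)
            poses.append((R, t))
        return poses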
In a preferred embodiment of the present invention, the image of the current frame of the video may be acquired, the intrinsic parameters of the camera may be calculated according to the information of each image, the image may be transformed according to the intrinsic parameters of the camera to obtain a transformed image, and the pose parameters of the camera may be solved according to the transformed image.
Specifically, given a marker image, a circle in the marker is imaged as a quadratic curve by the perspective camera. Edge image points of a specific quadratic curve in the image are extracted; the specific quadratic curve is denoted M and the extracted points are denoted mi. The geometric distance d_fa(mi, C) from each extracted point mi to the quadratic curve M is calculated, where C is the coefficient matrix of the quadratic curve M. A linearly weighted iteration is performed on the geometric distances, and a matrix C1 related to C is obtained by singular value decomposition. An objective function is then constructed based on the shortest geometric distance d(mi, C); the small increments Δui, Δvi and the scale parameters λi that minimize the objective function when C = C1 are calculated, and the objective function is solved by non-linear optimization over Δui, Δvi and λi to obtain a coefficient matrix C2. From the coefficient matrix C2, the optimized quadratic curve M2 of the specific quadratic curve M in the image is generated.
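For reference, the algebraic residual and one standard first-order (gradient-weighted, Sampson-type) approximation of the geometric distance from an image point to a conic are written out below in LaTeX; the patent's distance d_fa(mi, C) and its weighting scheme may be defined differently, so this is only a sketch of the quantities usually involved in such a refinement.

    % Algebraic residual of a homogeneous image point m = (u, v, 1)^T on a conic
    % with symmetric coefficient matrix C, and a first-order approximation of the
    % geometric (orthogonal) distance obtained by dividing by the gradient norm:
    e_a(m, C) = m^{\top} C\, m, \qquad
    d(m, C) \approx \frac{\lvert m^{\top} C\, m \rvert}{2\sqrt{(C m)_1^{2} + (C m)_2^{2}}}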
Further, as shown in FIG. 2, FIG. 2 exemplarily shows a schematic diagram of the first type of marker. For image (a) in FIG. 2, the center of the circle containing a black dot is taken as the origin of the world coordinate system, and the line connecting the centers of the two circles is taken as the X axis of the world coordinate system. For image (b) in FIG. 2, the center of the concentric circles is taken as the origin of the world coordinate system, and the line connecting the origin and the black dot inside the circle is taken as the X axis of the world coordinate system. The image of the origin of the world coordinate system is denoted m0, the image of the second circle center or of the black dot is denoted m1, and the coordinates of this spatial point are written as (Lx, 0, 0, 1). FIG. 3 is a schematic diagram of the second type of marker, which consists of a circle, the circle center and a black dot outside the circle; the circle center is taken as the origin of the world coordinate system, and the line connecting the circle center and the black dot outside the circle forms the X axis of the world coordinate system, where (a) is the specific form and (b) shows two examples of (a).
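Purely as an illustration of how a marker of the second type (a circle with its center marked and a dot outside the contour) might be rendered for printing, the snippet below draws one with OpenCV; all sizes and positions are arbitrary assumptions.

    import cv2
    import numpy as np

    canvas = np.full((400, 400), 255, dtype=np.uint8)            # white canvas, 0 = black
    cv2.circle(canvas, (170, 200), 110, color=0, thickness=3)    # circular contour
    cv2.circle(canvas, (170, 200), 6, color=0, thickness=-1)     # dot at the circle center
    cv2.circle(canvas, (330, 200), 8, color=0, thickness=-1)     # marker point outside the contour
    cv2.imwrite("second_type_marker.png", canvas)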
Based on FIG. 2, quasi-affine invariance is used to calculate the images of the special points for images (a) and (b) of FIG. 2, respectively; the real part and the imaginary part of the image of the special points are denoted mr1 and mr2. The line connecting the special points, i.e. the image of the line at infinity of the plane in which the marker lies, is denoted l∞, and the pole (epipolar point) of this line-at-infinity image with respect to the quadratic curve is calculated; this pole is the image point of the center of the space circle. For image (a) of FIG. 2, the image points of the two circle centers are denoted m0 and m1; for image (b) of FIG. 2, the image of the center of the concentric circles is denoted m0 and the image of the black dot is denoted m1.
Based on FIG. 3, the image of the circle center is extracted directly as m0. The epipolar line (polar line) of m0 with respect to the quadratic curve is calculated and denoted l∞; m1 is the image of the black dot. The intersection of l∞ with the quadratic curve is calculated; it is a pair of complex-conjugate points whose real and imaginary parts, as before, are denoted mr1 and mr2.
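The relation used in both cases is the standard pole-polar duality of projective geometry: for a non-degenerate conic with coefficient matrix C, the image m0 of the circle center and the image l∞ of the line at infinity of the marker plane are pole and polar of each other. Stated in LaTeX (a well-known property, not a formula quoted from the patent):

    % Pole-polar relation between the image of the circle center and the image of
    % the line at infinity of the supporting plane (~ denotes equality up to scale):
    l_\infty \sim C\, m_0, \qquad  m_0 \sim C^{-1} l_\infty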
In the case of known camera intrinsic parameters, the transformation of a spatial point from the world coordinate system to the camera coordinate system is denoted by R = (r1, r2, r3) and t, where R is a 3 x 3 rotation matrix whose three columns are r1, r2, r3, and t is a vector of length 3 representing the translation of the camera. According to the imaging process of the camera, formula (1) can be obtained:
[formula (1), reproduced as an image in the original publication]
The camera pose parameters of the current frame image are calculated according to the spatial position relationship between the marker and the camera; the specific calculation is shown in formula (2):
[formula (2), reproduced as an image in the original publication]
where t denotes the translation vector of the camera, s0 and s1 denote intermediate variables, r1 denotes the first column of the camera rotation matrix, m0 denotes the image of the origin of the world coordinate system, m1 denotes the image of the marker point of the marker, l∞ denotes the epipolar line, Lx denotes the coordinate of the marker point of the marker in the spatial coordinate system, r11, r21 and r31 denote the corresponding elements of the camera rotation matrix, (u0, v0) denotes the coordinates of m0, and (u1, v1) denotes the coordinates of m1.
The spatial position relationship between the marker and the camera is as follows: the marker is located in front of the camera, and the normal of the plane in which the marker lies points toward the camera. According to formula (1), t can be uniquely determined; from the spatial position relationship between the camera and the marker together with r33 < 0, s3 can be uniquely determined, so r3 is also uniquely determined, and then r2 = r3 × r1 is uniquely determined as well. Therefore, the camera pose parameters R = (r1, r2, r3) and t are solved.
When the camera intrinsic parameters are unknown, the intrinsic parameters are solved first, the image is transformed according to the intrinsic parameters, and the camera pose parameters are then solved by the method described above.
Specifically, the method for solving the camera intrinsic parameters is as follows:
the intrinsic parameter matrix of the camera is set as shown in formula (3):
[formula (3), reproduced as an image in the original publication]
where f1 and f2 are variable parameters. According to the above formula and mr1, mr2, the intrinsic parameters K can be calculated, specifically as shown in formula (4) and formula (5):
[formula (4), reproduced as an image in the original publication]
[formula (5), reproduced as an image in the original publication]
wherein,
[formula reproduced as an image in the original publication]
Equations (4) and (5) are linear in g1 and g2. Writing out the coordinates of mr1 and mr2 and substituting their values into formula (4) and formula (5), the values of f1 and f2 are obtained:
[formula reproduced as an image in the original publication]
[formula reproduced as an image in the original publication]
The camera intrinsic parameters can be calculated from the information of each individual image, and they are allowed to change as the image frames change, so the method is also suitable for a zoom camera.
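The self-calibration step is consistent with a standard property of the circular points: their images lie on the image of the absolute conic ω = K^{-T} K^{-1}. Because formulas (3) to (7) are reproduced only as images in the original publication, the constraints below are a reconstruction under the additional assumption that K is parameterized by the two focal lengths only, for example K = diag(f1, f2, 1); the patent's exact expressions may differ.

    % Writing the image of a circular point as m_c = m_{r1} + i\, m_{r2},
    % the constraint m_c^{\top} \omega\, m_c = 0 with \omega = K^{-\top} K^{-1}
    % splits into its real and imaginary parts:
    m_{r1}^{\top} \omega\, m_{r1} - m_{r2}^{\top} \omega\, m_{r2} = 0, \qquad
    m_{r1}^{\top} \omega\, m_{r2} = 0,
    % which are linear in g_1 = 1/f_1^{2} and g_2 = 1/f_2^{2}
    % when K = \operatorname{diag}(f_1, f_2, 1) is assumed.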
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The invention also provides a storage device in which a plurality of programs are stored, said programs being adapted to be loaded by a processor and to carry out the method of visually positioning a marker as described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device according to the embodiment of the present invention may refer to the corresponding processes in the foregoing embodiment of the method for visually positioning a marker; the storage device has the same beneficial effects as the foregoing method, and details are not repeated here.
A processing apparatus comprising a processor, a storage device; a processor adapted to execute various programs; a storage device adapted to store a plurality of programs; the program is adapted to be loaded by a processor and to carry out the method of visually positioning a marker as described above.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the processing apparatus according to the embodiment of the present invention may refer to the corresponding processes in the foregoing embodiment of the method for visually positioning a marker; the processing apparatus has the same beneficial effects as the foregoing method, and details are not repeated here.
Those of skill in the art will appreciate that the method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both; to clearly illustrate the interchangeability of electronic hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Those skilled in the art can make equivalent changes or substitutions to the relevant technical features without departing from the principle of the invention, and the technical solutions after such changes or substitutions will fall within the protection scope of the invention.

Claims (8)

1. A method of visually locating a marker, comprising:
acquiring a current frame image of an input video, and extracting edge image points of the current frame image;
performing cluster analysis on the edge image points, and performing quadratic curve fitting according to different classes;
calculating epipolar lines of the current frame image based on the quadratic curve, and obtaining intersection points of the quadratic curve and the epipolar lines;
and calculating the camera pose parameters of the current frame image according to the pre-acquired camera intrinsic parameters of the current frame image and the intersection points of the quadratic curve and the epipolar line, to realize the positioning of the marker.
2. The method of claim 1, further comprising:
constructing a circular marker comprising a circular profile and a marker point, the marker point being located inside the circular profile or outside the circular profile, the circular marker being for quadratic curve fitting.
3. The method of claim 1, wherein the camera pose parameters of the current frame image comprise a rotation matrix of a camera and a translation vector of a camera.
4. The method according to claim 3, wherein the camera pose parameters of the current frame image are calculated by:
according to the imaging process of a camera, establishing a mathematical relation between the intersection point of the quadratic curve and the epipolar line and a camera rotation matrix of the camera pose parameter, wherein the specific mathematical relation is shown as the following formula:
[formula reproduced as an image in the original publication]
calculating the camera pose parameter of the current frame image according to the spatial position relation between the marker and the camera, wherein the specific calculation method is shown as the following formula:
[formula reproduced as an image in the original publication]
where t denotes the translation vector of the camera, s0 and s1 denote intermediate variables, r1 denotes the first column of the camera rotation matrix, m0 denotes the image of the origin of the world coordinate system, m1 denotes the image of the marker point of the marker, l∞ denotes the epipolar line, Lx denotes the coordinate of the marker point of the marker in the spatial coordinate system, r11, r21 and r31 denote the corresponding elements of the camera rotation matrix, (u0, v0) denotes the coordinates of m0, and (u1, v1) denotes the coordinates of m1;
the spatial position relationship between the marker and the camera is as follows: the marker is located in front of the camera and is directed towards the camera in a direction normal to the plane in which the marker is located.
5. The method of claim 1, wherein the method of extracting the edge image point of the current frame image comprises:
and extracting edge image points of the current frame image by adopting an edge detection algorithm.
6. The method according to claim 1, wherein calculating epipolar lines of the current frame image based on the quadratic curve comprises:
when the number of the quadratic curves is two, calculating the epipolar line according to quasi-affine invariance;
and when the number of the quadratic curves is one, extracting points inside the quadratic curves as an origin of a world coordinate system, calculating an epipolar line of the origin of the world coordinate system, and taking the epipolar line as the epipolar line of the quadratic curves.
7. A storage device in which a plurality of programs are stored, characterized in that the programs are adapted to be loaded by a processor and to carry out the method of visually positioning a marker according to any one of claims 1-6.
8. A processing apparatus comprising a processor and a storage device, the processor being adapted to execute various programs and the storage device being adapted to store a plurality of programs, characterized in that the programs are adapted to be loaded by the processor and to carry out the method of visually positioning a marker according to any one of claims 1-6.
CN201810118800.1A 2018-02-06 2018-02-06 Method and device for visually positioning marker Active CN108447092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810118800.1A CN108447092B (en) 2018-02-06 2018-02-06 Method and device for visually positioning marker

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810118800.1A CN108447092B (en) 2018-02-06 2018-02-06 Method and device for visually positioning marker

Publications (2)

Publication Number Publication Date
CN108447092A CN108447092A (en) 2018-08-24
CN108447092B 2020-07-28

Family

ID=63192012

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810118800.1A Active CN108447092B (en) 2018-02-06 2018-02-06 Method and device for visually positioning marker

Country Status (1)

Country Link
CN (1) CN108447092B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111062233A (en) * 2018-10-17 2020-04-24 北京地平线机器人技术研发有限公司 Marker representation acquisition method, marker representation acquisition device and electronic equipment
EP4148375A4 (en) * 2020-05-19 2023-07-05 Huawei Technologies Co., Ltd. Ranging method and apparatus
CN113936010A (en) * 2021-10-15 2022-01-14 北京极智嘉科技股份有限公司 Shelf positioning method and device, shelf carrying equipment and storage medium
CN116558504B (en) * 2023-07-11 2023-09-29 之江实验室 Monocular vision positioning method and device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007025863A (en) * 2005-07-13 2007-02-01 Advanced Telecommunication Research Institute International Photographing system, photographing method, and image processing program
JP2008224641A (en) * 2007-03-12 2008-09-25 Masahiro Tomono System for estimation of camera attitude
CN103247048A (en) * 2013-05-10 2013-08-14 东南大学 Camera mixing calibration method based on quadratic curve and straight lines
CN103258329B (en) * 2013-05-24 2016-04-06 西安电子科技大学 A kind of camera marking method based on ball one-dimensional
CN104517291B (en) * 2014-12-15 2017-08-01 大连理工大学 Pose measuring method based on target coaxial circles feature
CN105069809B (en) * 2015-08-31 2017-10-03 中国科学院自动化研究所 A kind of camera localization method and system based on planar hybrid marker
CN106558081B (en) * 2016-11-28 2019-07-09 云南大学 The method for demarcating the circular cone catadioptric video camera of optical resonator system

Also Published As

Publication number Publication date
CN108447092A (en) 2018-08-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant