CN115493568A - Monocular camera indoor coordinate positioning method based on machine vision


Info

Publication number
CN115493568A
Authority
CN
China
Prior art keywords
pose
monocular camera
point
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211038608.4A
Other languages
Chinese (zh)
Inventor
陈强 (Chen Qiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ruanjiang Turing Artificial Intelligence Technology Co., Ltd.
Original Assignee
Chongqing Ruanjiang Turing Artificial Intelligence Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ruanjiang Turing Artificial Intelligence Technology Co., Ltd.
Priority to CN202211038608.4A
Publication of CN115493568A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/04: Interpretation of pictures
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates


Abstract

The invention belongs to the technical field of indoor positioning and specifically relates to a machine-vision-based indoor coordinate positioning method for a monocular camera. The method corrects image distortion, which improves image quality and gives monocular-camera coordinate positioning high accuracy. It requires only a single position selection and parameter calibration and needs no dedicated mobile detection carrier, so the measurement system is low-cost and simple in structure. For scenes in which the monocular camera rotates, a mapping matrix from any pose to a reference pose is computed by feature-point matching, and the pixel coordinates of a target detected in any pose are transformed into the image space of the reference pose with this matrix. This solves the problems that the coordinate systems measured under different camera poses are not unified and that the applicability of existing measurement methods is limited.

Description

Monocular camera indoor coordinate positioning method based on machine vision
Technical Field
The invention belongs to the technical field of indoor positioning, and particularly relates to a monocular camera indoor coordinate positioning method based on machine vision.
Background
The paper "Study of a real-time distance measurement method based on monocular vision" (Chinese Journal of Image and Graphics, 2006(01): 74-81) proposes a geometric solution, suitable for automatic vehicle driving, for the distance between a vehicle and an obstacle ahead based on monocular vision. The method is, however, sensitive to changes in the pitch angle of the monocular camera: a pitch change of 0.4° produces a change of 5.4445 m in the measured distance, so the method cannot meet the accuracy requirement of indoor coordinate positioning.
Chinese patent application No. 201611243556.9, "A distance measuring method based on a monocular camera", proposes constructing a calculation formula from the distance to a ground boundary point in the monocular camera image and the pixel coordinates of the measurement point, so as to obtain the physical distance between an object point and the camera. In practice, however, the method performs poorly, cannot meet the accuracy requirement of coordinate positioning, and cannot cope with rotation of the monocular camera.
Chinese patent application No. 201811579928.4 provides a photogrammetry method and system based on a monocular camera: the image width and height, the vertical field angle, the horizontal field angle and the optical-axis pitch angle of the camera are calibrated; two images are collected together with the moving distance of a mobile detection carrier; and the spatial position of any target in the camera's field of view is obtained from a solution formula.
In indoor measurement scenes, the monocular camera in use often rotates automatically, or its pose is adjusted manually. Because the camera pose changes, the position coordinate systems measured under the different camera poses are not unified. Under the requirements of low cost and simple structure, the applicability of existing monocular-camera measurement methods is therefore limited.
Disclosure of Invention
To address these problems, the invention provides a machine-vision-based indoor coordinate positioning method for a monocular camera, aiming to solve the lack of unification among the coordinate systems measured under different camera poses and the resulting limits on existing measurement methods.
The technical scheme of the invention is as follows:
a monocular camera indoor coordinate positioning method based on machine vision comprises the following steps:
S1, a monocular camera is arranged at a position higher than the indoor floor; the projection point of the camera on the indoor floor is defined as point I, and the vertical central axis of the camera's indoor scene image is defined as EF; the installation pose of the camera is selected as the reference pose, based on the requirement that the line of EF extended onto the indoor floor pass through point I and on the field-of-view requirement of the camera's imaging;
s2, distortion correction of monocular camera imaging is carried out by adopting a Zhang calibration method, internal parameters and distortion parameters of the monocular camera are obtained, and an image obtained by the monocular camera at a reference pose is subjected to distortion correction and cutting to be used as a reference image;
s3, collecting parameter data: measuring the installation height of the monocular camera relative to the indoor ground, defining a coordinate point of the monocular camera as an O point, and establishing a coordinate system based on a reference image, wherein the intersection point of a horizontal central axis KJ and a vertical central axis EF of the image is used as an original point G, the vertical central axis is a Y axis, and the horizontal central axis is an X axis; acquiring a vertical field angle 2 α, a horizontal field angle 2 β, and an optical axis pitch angle θ, and a pixel width W and a pixel height H of the image, wherein:
θ = arctan(IG / IO)

2α = 2[arctan(IG / IO) - arctan(IF / IO)]

2β = 2·arctan(GJ / √(IO² + IG²))
wherein point F is located on line IG; IO is the installation height of the camera, IG is the floor distance from I to the optical-axis point G, IF is the floor distance from I to F, and GJ is the floor distance from G to the horizontal view boundary point J;
s4, acquiring a scene image by using a monocular camera, and acquiring pixel coordinates of a target in the image based on a target detection method of machine vision;
s5, calculating a mapping matrix from any pose of the monocular camera to a reference pose by adopting a feature point matching method, and transforming the target pixel coordinate obtained in the step S4 to an image space of the reference pose by using the mapping matrix, wherein the method specifically comprises the following steps:
respectively acquiring the ORB features of the scene image of the reference pose and of the scene image of any pose, wherein the ORB features comprise feature points and descriptors;
calculating the distance between two groups of feature points by using a feature matching method BFMatcher, and solving a plurality of pairs of optimal matching points through sorting and screening;
calculating an optimal homography (single-mapping) transformation matrix between the plurality of matched feature point pairs by using the homography solving method findHomography, wherein the mapping matrix represents the mapping relation from the scene image of any pose to the scene image of the reference pose;
transforming the target pixel coordinates of any pose to an image space of a reference pose by using a mapping matrix;
s6, calculating to obtain the position coordinates of the target on the indoor ground according to the pixel coordinates of the target in the reference pose image space and the vertical field angle, the horizontal field angle and the optical axis pitch angle obtained in the S3, and completing target positioning, wherein the method specifically comprises the following steps:
defining the coordinates of the target in the image space of the reference pose as (x′, y′) and the projection of the target P onto the Y axis in the reference pose as P_y, the position components of the target along the Y axis and X axis are GP_y and P_yP respectively:
GP_y = IO·tan(θ + arctan((2y′ / H)·tan α)) - IG

IP_y = IG + GP_y

P_yP = (2x′ / W)·tan β·√(IO² + IP_y²)
Thereby obtaining the location coordinates of the target.
In the above scheme, the monocular camera is a non-zoom camera, so the horizontal and vertical field angles obtained in S3 are constants and do not change with the pose of the camera.
The beneficial effects of the invention are as follows. Image distortion correction improves image quality, so coordinate positioning based on the monocular camera achieves higher accuracy. The method requires only a single position selection and parameter calibration and needs no dedicated mobile detection carrier, so the measurement system is low-cost and simple in structure. For scenes in which the monocular camera rotates, a mapping matrix from any pose to the reference pose is computed by feature-point matching, and the pixel coordinates of a target detected in any pose are transformed into the image space of the reference pose with this matrix; this solves the problems that the coordinate systems measured under different camera poses are not unified and that the applicability of existing measurement methods is limited.
Drawings
Fig. 1 is a flowchart of the indoor coordinate positioning method of the monocular camera based on machine vision according to the present invention.
FIG. 2 is a schematic diagram of the types of distortion present in monocular camera scene imaging; 2(a) is a normal image, 2(b) barrel distortion, and 2(c) pincushion distortion.
FIG. 3 is a schematic diagram of monocular camera scene imaging. Points a, b, c and d are the four corner points of the scene image; the x axis and y axis are the horizontal and vertical central axes of the image; points e and f are the intersections of the vertical central axis with the image boundary; points k and j are the intersections of the horizontal central axis with the image boundary; point g is the intersection of the two central axes; H and W are the pixel height and pixel width of the image. The x-g-y coordinate system is the pixel-position coordinate system of the scene image; point p is the pixel of the target to be positioned, and its projections onto the x and y axes are p_x and p_y respectively.
FIG. 4 is a schematic diagram of the monocular camera scene imaging model. Point O is the camera mounting position; A, B, C and D are the corner points on the indoor floor corresponding to the scene image, and plane ABCD represents the ground plane. Points A, B, C, D, E, F, G, K, J and P on the ground plane correspond to points a, b, c, d, e, f, g, k, j and p on the scene image; the X and Y axes on the ground plane correspond to the x and y axes of the image; point I is the projection of the camera mounting point onto the indoor floor. The X-G-Y coordinate system is the physical-position coordinate system of the detected target on the ground, and point P is the target to be positioned on the ground.
FIG. 5 is a schematic diagram of a model for solving the Y-axis coordinates of the target points.
FIG. 6 is a schematic diagram of a model for solving the X-axis coordinates of the target points.
FIG. 7 is a camera pose schematic diagram; 7(a) shows the camera in the reference pose and 7(b) the camera in an arbitrary pose.
FIG. 8 is a schematic view of the relationship between the scene image of the monocular camera in the reference pose and that in an arbitrary pose; 8(a) is the scene image of the camera in the reference pose, and 8(b) is the scene image of the camera in an arbitrary pose. The straight line segments are correspondence lines between matched key points of the two images, and the white broken lines in 8(b) mark the view boundary of the arbitrary-pose image after its transformation to the reference pose.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
The method is used for coordinate positioning of indoor targets when the camera rotates automatically or its pose is adjusted. The setup of the monocular camera must satisfy certain conditions. As shown in fig. 3, in the scene image of the monocular camera, points a, b, c and d are the four corner points of the image; the x axis and y axis are the horizontal and vertical central axes; points e and f are the intersections of the vertical central axis with the image edges ab and cd; points k and j are the intersections of the horizontal central axis with the image edges ad and bc; g, the intersection of the two central axes, is the origin; and H and W are the pixel height and pixel width of the image. The x-g-y coordinate system is the pixel-position coordinate system of the scene image; point p is the pixel of the target to be positioned, and its projections onto the x and y axes are p_x and p_y. Corresponding the scene image of the monocular camera to the indoor positioning scene, as shown in fig. 4, point O is the camera mounting position; A, B, C and D are the corner points on the indoor floor corresponding to the scene image, and plane ABCD represents the ground plane. Points A, B, C, D, E, F, G, K, J and P on the ground plane correspond to points a, b, c, d, e, f, g, k, j and p on the scene image; the X and Y axes on the ground plane correspond to the x and y axes of the image; point I is the projection of the camera mounting point onto the indoor floor. The X-G-Y coordinate system is the physical-position coordinate system of the detected target on the ground, and point P is the target to be positioned on the ground. The condition the monocular camera must satisfy in this scheme is that the floor line corresponding to the vertical central axis of the camera image pass through the projection of the camera mounting point on the indoor floor, i.e. the extension of EF must pass through point I; in addition, the camera is a non-zoom camera.
As shown in fig. 1, the method of the present invention comprises the steps of:
step S1: as shown in fig. 4, the requirement that the point I passes through the EF straight line is satisfied, and a camera reference pose with a good view field is selected.
To acquire more scene information, the pose of the indoor monocular camera is first adjusted so that it has a good field of view. For the position coordinate calculation model of figs. 5 and 6 to be valid, and for the position calculation to reach centimetre-level accuracy in indoor scenes, point I must lie on the straight line through EF; that is, the floor line corresponding to the vertical central axis of the camera image must pass through the projection of the camera mounting point on the indoor floor. If point I is far from line EF, the imaged scene depth is no longer symmetric about the y axis, and the position calculation method, which relies on the similarity principle and this symmetry, fails. In general the camera pose is as shown in fig. 7(a), which essentially guarantees that point I lies on line EF.
Step S2: correct image distortion, crop the edges, and acquire the scene image of the reference pose.
In general, owing to the structural design of the camera or assembly errors in the lens, camera imaging exhibits some distortion, such as the barrel distortion of fig. 2(b) and the pincushion distortion of fig. 2(c). Because the position coordinate calculation model relies on operations such as projecting points onto the coordinate axes, the image must be of high quality, so image correction is required.
Let the Cartesian coordinates of a point on the image plane be [u, v]^T and its polar coordinates be [r, φ]^T, where r denotes the distance of the point from the coordinate origin and φ the angle with the horizontal axis. Image distortion can be divided into radial distortion and tangential distortion, expressed mathematically as follows.
Radial distortion mathematical model:

u′ = u(1 + k₁r² + k₂r⁴ + k₃r⁶)
v′ = v(1 + k₁r² + k₂r⁴ + k₃r⁶)

Tangential distortion mathematical model:

u′ = u + 2p₁uv + p₂(r² + 2u²)
v′ = v + p₁(r² + 2v²) + 2p₂uv
Here [u′, v′]^T are the distorted point coordinates. In the distortion correction process, the five distortion terms k₁, k₂, k₃, p₁ and p₂ form the distortion vector, which is obtained with Zhang's calibration method; the image is then corrected with this vector. Because the correction operation loses pixel information at the edges of the corrected image, the image must also be cropped.
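As a concrete illustration of the two models, the following minimal Python sketch applies the radial and tangential terms to a normalized image-plane point, combining them the way OpenCV's distortion model does; the coefficient values are illustrative, not calibration results:

    def distort_point(u, v, k1, k2, k3, p1, p2):
        """Apply the radial and tangential distortion models above to a
        normalized image-plane point (u, v); returns the distorted point."""
        r2 = u * u + v * v  # r squared, r being the distance to the origin
        radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
        u_d = u * radial + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)
        v_d = v * radial + p1 * (r2 + 2 * v * v) + 2 * p2 * u * v
        return u_d, v_d

    # Illustrative coefficients only; real values come from Zhang calibration.
    print(distort_point(0.3, 0.2, k1=-0.28, k2=0.07, k3=0.0, p1=1e-3, p2=-5e-4))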
After distortion correction and cropping are finished, the scene image of the reference pose is acquired.
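A minimal sketch of this calibration-and-correction step with OpenCV's Python API is given below; the chessboard size, file paths and cropping strategy are illustrative assumptions, not part of the invention:

    import glob
    import cv2
    import numpy as np

    # Zhang's method: observe a planar chessboard (assumed 9x6 inner corners)
    # from several poses and solve for the intrinsics and the distortion vector.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for path in glob.glob("calib/*.jpg"):          # illustrative path
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

    # Correct the reference-pose image with the distortion vector
    # (k1, k2, p1, p2, k3) and crop away the invalid border.
    img = cv2.imread("reference_pose.jpg")         # illustrative path
    h, w = img.shape[:2]
    new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)
    undistorted = cv2.undistort(img, K, dist, None, new_K)
    x, y, rw, rh = roi
    reference = undistorted[y:y + rh, x:x + rw]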
Step S3: acquire the parameter data required by the positioning method; measure the installation height of the camera and the sizes associated with the vertical view boundary point, the horizontal view boundary point and the optical-axis position point.
Because a non-zoom monocular camera is used, the horizontal and vertical field angles are constants intrinsic to the camera and do not change with camera pose, so they can be calibrated at any camera pose after distortion correction. For convenience, all measurements are carried out in the camera reference pose.
Measure the camera installation height, i.e. the length IO; the optical-axis position, i.e. the length IG; the vertical field boundary, i.e. the length IF, used to compute the vertical field angle; the horizontal field boundary, i.e. the length GJ, used to compute the horizontal field angle; and read the pixel height H and pixel width W of the image. The three angles then follow as in the sketch below.
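A numerical sketch of these calibration computations, with all measured lengths illustrative (metres):

    import math

    IO = 2.50   # camera installation height
    IG = 3.20   # floor distance from I to the optical-axis point G
    IF = 1.10   # floor distance from I to the near view boundary F
    GJ = 2.40   # floor half-width of the view at G

    theta = math.atan(IG / IO)                  # optical-axis pitch angle IOG
    alpha = theta - math.atan(IF / IO)          # half the vertical field angle
    beta = math.atan(GJ / math.hypot(IO, IG))   # half the horizontal field angle
    print(math.degrees(theta), math.degrees(2 * alpha), math.degrees(2 * beta))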
Step S4: acquire the pixel coordinates of the detected target from the image of any camera pose with a machine-vision target detection method.
The monocular camera only acquires scene image information under a corresponding view field, and in order to complete indoor coordinate positioning, a target detection method of machine vision is used for carrying out target detection on the scene image so as to obtain pixel coordinates of a detection target.
First, a target detection model is constructed; a deep-learning-based detection model is generally adopted, and after training and testing on a large amount of data it detects with strong generalization ability. The corrected and cropped image is then input to the detection model, which outputs the position of the detected target after anchor-box extraction, convolution and pooling, and post-processing. Finally, the position information is transformed into the x-g-y coordinate system shown in fig. 3, as sketched below.
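The detector itself is interchangeable; the sketch below shows only the final coordinate transformation, with detect() a hypothetical stand-in for any deep-learning model returning boxes (x_min, y_min, x_max, y_max) in ordinary image coordinates (origin at the top-left corner, y pointing down):

    def detect(image):
        # Hypothetical detector stub; returns one hard-coded box purely so
        # the sketch runs end-to-end.
        return [(900, 400, 1020, 640)]

    def to_xgy(box, W, H):
        """Map a detection box to the x-g-y frame of fig. 3, whose origin g is
        the image centre; the target's ground contact point is taken as the
        bottom-centre of the box (an assumption, not stated in the patent)."""
        x_min, y_min, x_max, y_max = box
        u = (x_min + x_max) / 2.0    # bottom-centre pixel, ordinary coordinates
        v = y_max
        x_prime = u - W / 2.0        # x axis: rightwards from g
        y_prime = H / 2.0 - v        # y axis: upwards from g, towards e
        return x_prime, y_prime

    for box in detect(None):
        print(to_xgy(box, W=1920, H=1080))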
Step S5: solve a mapping matrix for any camera pose by feature-point matching, and transform the detected target coordinates of any pose into the image space of the reference pose.
After the camera rotates automatically or its pose is adjusted manually, the optical axis deviates from the reference position. Because the camera has rotated to an arbitrary pose for which no position calibration or size measurement has been done, the location of the coordinate system in the scene image under the new pose is unknown. As shown in fig. 7(b), the projection point I of the camera mounting point O on the indoor floor no longer lies on the line of the Y axis, so the target position calculation of step S6 cannot be applied directly. To give the measured position coordinate system unity, and to keep the calculation method usable when the camera is in such a pose, a mapping matrix from any pose to the reference pose is computed by feature-point matching, and the detected target coordinates of any pose are transformed into the image space of the reference position with this matrix.
The mapping matrix is solved from feature-point matching as follows. First, the ORB features of the reference-pose scene image and of the arbitrary-pose scene image are extracted; an ORB feature consists of feature points and descriptors, the feature points serving to screen and compare distinctive points and the descriptors to describe the neighbourhood of a point. Second, the distances between the two sets of feature points are computed with the feature matching method BFMatcher, and several pairs of best matches are obtained by sorting and screening. Then an optimal homography (single-mapping) transformation matrix between the matched feature point pairs is computed with the homography solving method findHomography; this mapping matrix represents the mapping from the scene image of any pose to the scene image of the reference pose. Finally, the detected target coordinates of any pose are transformed into the image space of the reference pose with the mapping matrix. The effect of this step is shown in fig. 8, and a compact sketch follows.
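A compact version of this step with OpenCV, whose BFMatcher and findHomography the description references, might look as follows; the number of retained matches and the RANSAC threshold are illustrative choices:

    import cv2
    import numpy as np

    def map_to_reference(ref_img, cur_img, target_px, n_best=50):
        """Map a target pixel from an arbitrary-pose image into the
        reference-pose image space; ref_img and cur_img are grayscale."""
        orb = cv2.ORB_create()
        kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
        kp_cur, des_cur = orb.detectAndCompute(cur_img, None)

        # Brute-force Hamming matching, then keep the best pairs by distance.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_cur, des_ref),
                         key=lambda m: m.distance)
        good = matches[:n_best]

        src = np.float32([kp_cur[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        M, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        pt = np.float32([[target_px]]).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pt, M).reshape(2)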
Step S6: using a position calculation model based on similar triangles and trigonometric relations, calculate the coordinates, in the physical position coordinate system X-G-Y, of the detected target whose pixel coordinates lie in the reference-pose image space.
Let the coordinates of the detected target in the reference-pose image space obtained in steps S4 and S5 be (x′, y′). With the data collected in step S3, the X-G-Y coordinates of the reference pose are obtained by the following steps.
Optical-axis pitch angle θ, i.e. the size of ∠IOG in fig. 5:

θ = arctan(IG / IO)

Vertical field angle 2α, i.e. the size of ∠EOF in fig. 5:

2α = 2[arctan(IG / IO) - arctan(IF / IO)]

Horizontal field angle 2β, i.e. the size of ∠KOJ in fig. 6:

2β = 2·arctan(GJ / √(IO² + IG²))
as shown in FIG. 5, there is a line segment ML parallel to fe passing through point G, where point L is on the extension of line segment fF and point M is on line segment OE. ML and OP y Intersect at point Z; p y Point is the projection of point P on the Y-axis; the meanings of the remaining characters are the same as those in fig. 3 and 4. The size information of the P point in the Y-axis direction in the X-G-Y coordinate system is GP y
From the trigonometric relations and the similar triangles:

GM = OG·tan α, with OG = IO / cos θ

GZ / GM = y′ / (H/2)

GZ = (2y′ / H)·OG·tan α

Therefore,

tan ∠GOZ = GZ / OG = (2y′ / H)·tan α

∠GOZ = arctan((2y′ / H)·tan α)

while

IP_y = IO·tan(θ + ∠GOZ)

Then

GP_y = IP_y - IG = IO·tan(θ + arctan((2y′ / H)·tan α)) - IG
As shown in fig. 6, the extension of PP_y intersects segment BC at point R. The position component of point P along the X axis in the X-G-Y coordinate system is P_yP.
In triangle ΔROP_y,

P_yR = OP_y·tan β = tan β·√(IO² + IP_y²)

From the similar triangles of the P-point imaging,

P_yP / P_yR = x′ / (W/2)

Then

P_yP = (2x′ / W)·tan β·√(IO² + IP_y²)

P_yP = (2x′ / W)·tan β·√(IO² + (IG + GP_y)²)
Step S6 thus yields the position components P_yP and GP_y of point P along the X and Y axes in the X-G-Y coordinate system; a sketch of the full calculation follows.
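Putting step S6 together, a sketch of the position calculation under the formulas above (all numeric inputs illustrative):

    import math

    def ground_coordinates(x_p, y_p, W, H, IO, IG, theta, alpha, beta):
        """Map reference-pose image coordinates (x', y') to the target's
        floor coordinates (P_yP, GP_y) in the X-G-Y frame."""
        # Y component: tilt the target ray away from the optical axis,
        # intersect it with the floor, and measure from G.
        ray = theta + math.atan((2.0 * y_p / H) * math.tan(alpha))
        GPy = IO * math.tan(ray) - IG
        # X component: scale the view half-width at the target's depth.
        IPy = IG + GPy
        PyP = (2.0 * x_p / W) * math.tan(beta) * math.hypot(IO, IPy)
        return PyP, GPy

    # Angles as computed in step S3 from the illustrative measurements.
    theta, alpha, beta = 0.908, 0.494, 0.534      # radians
    print(ground_coordinates(120.0, -85.0, W=1920, H=1080,
                             IO=2.50, IG=3.20,
                             theta=theta, alpha=alpha, beta=beta))

For a target detected in an arbitrary pose, (x′, y′) are first mapped into the reference image space with the homography of step S5 before this calculation is applied.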
In summary, the monocular camera indoor coordinate positioning method based on machine vision can obtain indoor ground position coordinates with a unified coordinate system for targets in camera images at any pose.

Claims (2)

1. A monocular camera indoor coordinate positioning method based on machine vision is characterized by comprising the following steps:
s1, arranging a monocular camera at a position higher than indoor ground, defining a projection point of the monocular camera on the indoor ground as a point I, defining a vertical central axis of an indoor scene imaging graph of the monocular camera as EF, and selecting an installation pose of the monocular camera as a reference pose based on a requirement that an extension line of the point I needs to be coincident with the EF and a visual field requirement of imaging of the monocular camera;
s2, distortion correction of monocular camera imaging is carried out by adopting a Zhang calibration method, internal parameters and distortion parameters of the monocular camera are obtained, and an image obtained by the monocular camera at a reference pose is subjected to distortion correction and cutting and then is used as a reference image;
s3, acquiring parameter data: measuring the installation height of the monocular camera relative to the indoor ground, defining a coordinate point of the monocular camera as an O point, then obtaining the height as IO, establishing a coordinate system based on the reference image, and taking the intersection point of a horizontal central axis KJ and a vertical central axis EF of the image as an origin G, the vertical central axis as a Y axis and the horizontal central axis as an X axis; acquiring a vertical field angle 2 α, a horizontal field angle 2 β, and an optical axis pitch angle θ, and a pixel width W and a pixel height H of the image, wherein:
θ = arctan(IG / IO)

2α = 2[arctan(IG / IO) - arctan(IF / IO)]

2β = 2·arctan(GJ / √(IO² + IG²))
wherein point F is located on line IG; IG is the floor distance from I to the optical-axis point G, IF is the floor distance from I to F, and GJ is the floor distance from G to the horizontal view boundary point J;
s4, acquiring a scene image by using a monocular camera, and acquiring pixel coordinates of a target in the image based on a target detection method of machine vision;
s5, calculating a mapping matrix from any pose of the monocular camera to a reference pose by adopting a feature point matching method, and transforming the target pixel coordinate obtained in the step S4 to an image space of the reference pose by using the mapping matrix, wherein the method specifically comprises the following steps:
respectively acquiring the ORB features of the scene image of the reference pose and of the scene image of any pose, wherein the ORB features comprise feature points and descriptors;
calculating the distance between the two groups of feature points by using the feature matching method BFMatcher, and obtaining a plurality of pairs of optimal matching points through sorting and screening;
calculating an optimal homography (single-mapping) transformation matrix between the plurality of matched feature point pairs by using the homography solving method findHomography, wherein the mapping matrix represents the mapping relation from the scene image of any pose to the scene image of the reference pose;
transforming the target pixel coordinates of any pose to an image space of a reference pose by using a mapping matrix;
s6, calculating to obtain the position coordinates of the target on the indoor ground according to the pixel coordinates of the target in the reference pose image space and the vertical field angle, the horizontal field angle and the optical axis pitch angle obtained in the S3, and completing target positioning, wherein the method specifically comprises the following steps:
defining the coordinates of the target in the image space of the reference pose as (x′, y′) and the projection of the target P onto the Y axis in the reference pose as P_y, the position components of the target along the Y axis and X axis then being GP_y and P_yP:
GP_y = IO·tan(θ + arctan((2y′ / H)·tan α)) - IG

IP_y = IG + GP_y

P_yP = (2x′ / W)·tan β·√(IO² + IP_y²)
Thereby obtaining the positioning coordinates of the target.
2. The machine-vision-based monocular camera indoor coordinate positioning method according to claim 1, wherein the monocular camera is a non-zoom monocular camera, so that the horizontal field angle and the vertical field angle obtained in S3 are constants and do not change with the pose of the camera.
CN202211038608.4A 2022-08-26 2022-08-26 Monocular camera indoor coordinate positioning method based on machine vision Pending CN115493568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211038608.4A CN115493568A (en) 2022-08-26 2022-08-26 Monocular camera indoor coordinate positioning method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211038608.4A CN115493568A (en) 2022-08-26 2022-08-26 Monocular camera indoor coordinate positioning method based on machine vision

Publications (1)

Publication Number Publication Date
CN115493568A 2022-12-20

Family

Family ID: 84466365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211038608.4A Pending CN115493568A (en) 2022-08-26 2022-08-26 Monocular camera indoor coordinate positioning method based on machine vision

Country Status (1)

Country Link
CN (1) CN115493568A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197170A (en) * 2023-11-02 2023-12-08 佛山科学技术学院 (Foshan University) Method and system for measuring angle of vision of monocular camera
CN117197170B (en) * 2023-11-02 2024-02-09 佛山科学技术学院 (Foshan University) Method and system for measuring angle of vision of monocular camera


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination