CN117635675A - Same name point pairing method for multi-camera

Publication number: CN117635675A
Authority: CN (China)
Prior art keywords: point, camera, points, mark, coordinates
Legal status: Pending (assumption; not a legal conclusion)
Application number: CN202311344516.3A
Other languages: Chinese (zh)
Inventors: 杨泺岱, 周朗明, 邓文平
Current Assignee: Hunan Shibite Robot Co Ltd
Original Assignee: Hunan Shibite Robot Co Ltd
Application filed by Hunan Shibite Robot Co Ltd
Priority to CN202311344516.3A
Publication of CN117635675A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a homonymous point pairing method for a multi-camera system, which comprises the following steps: identifying the pixel coordinates and spatial coordinates of the marker points in each marker image under each camera view angle, and obtaining the geometric parameters of the marker pattern in each marker image; taking the geometric parameters of the marker pattern as a preset condition, projecting the marker points under the different camera view angles onto the same plane according to their pixel coordinates and spatial coordinates; determining candidate homonymous points according to the distance differences between the projections of the marker points on that common plane under the various camera view angles; performing multi-plane epipolar projection of the marker points under the different camera view angles according to their pixel coordinates and spatial coordinates, and determining the epipolar line segment equation; and determining the final homonymous point among the candidate homonymous points according to the perpendicular distance between each candidate homonymous point and the epipolar line segment.

Description

Same name point pairing method for multi-camera
Technical Field
The invention relates to the field of image recognition, and in particular to a homonymous point pairing method for a multi-camera system.
Background
In the prior art, when the features of the object to be identified are not distinctive and the recognition task requires high identification and positioning accuracy, marker patterns have to be added manually, with the marker points of the patterns serving as spatial position references that supplement the features.
Meanwhile, multiple cameras are usually employed in this process to acquire the marker images so that each marker point can be identified. During identification, the marker points acquired by the individual cameras of the multi-camera system must be paired as homonymous points before the subsequent work can proceed; if this pairing is inaccurate, errors are introduced into the subsequent coordinate determination, positioning, recognition and the like.
Therefore, how to improve the pairing accuracy of homonymous points is a technical problem to be solved in this field.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a homonymous point pairing method for a multi-camera system, comprising the following steps:
identifying the pixel coordinates and spatial coordinates of the marker points in each marker image under each camera view angle, and obtaining the geometric parameters of the marker pattern in each marker image;
taking the geometric parameters of the marker pattern as a preset condition, projecting the marker points under the different camera view angles onto the same plane according to their pixel coordinates and spatial coordinates;
determining candidate homonymous points according to the distance differences between the projections of the marker points on the same plane under the various camera view angles;
performing multi-plane epipolar projection of the marker points under the different camera view angles according to their pixel coordinates and spatial coordinates, and determining the epipolar line segment equation;
and determining the final homonymous point among the candidate homonymous points according to the perpendicular distance between each candidate homonymous point and the epipolar line segment.
Further, identifying the pixel coordinates and spatial coordinates of the marker points in each marker image under each camera view angle includes:
acquiring marker image data;
extracting a first edge point set P_sub according to the gradient values and gradient directions of the marker image data;
extracting a second edge point set E_s according to the gray values of the marker image data;
determining the actual edge point set F_s according to the distances between corresponding points in the first edge point set and the second edge point set;
determining the pixel coordinates of the marker points in the marker image and the geometric parameters of the marker pattern according to the actual edge point set;
and determining the spatial coordinates of the marker points in the camera coordinate system according to the pixel coordinates of the marker points and the geometric parameters of the marker pattern.
Further, the step of extracting the first edge point set includes:
calculating the integrated gradient value and gradient direction of each pixel according to the marker image data;
judging whether the integrated gradient value of each pixel is greater than the high or low integrated-gradient thresholds;
if it is above the high threshold, the pixel is regarded as an edge point;
if it is below the low threshold, the pixel is regarded as a non-edge point;
if it lies between the high and low thresholds, the integrated gradient values of the linked points before and after the pixel are searched; if a linked point whose integrated gradient value exceeds the high threshold exists, the pixel is regarded as an edge point, otherwise as a non-edge point. The first edge point set is thus obtained.
Further, the step of extracting the second edge point set includes:
performing threshold-filtering binarization on the marker image data to obtain a binary image B_m of the marker image data;
scanning and traversing the binary image B_m; if the value at a scanning point is 1, an 8-direction surrounding connectivity search is performed, and if a surrounding pixel with value 0 exists, the scanning point is marked as a boundary point and its coordinates are recorded; at the same time, boundary point sets belonging to different contours are recorded according to the change of the inside/outside status of the boundary at the current scanning point and at the adjacent points, giving the second edge point set E_s of each marker point instance.
Further, the step of determining the actual edge point set specifically includes:
setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if such a point exists, the current traversal point is regarded as an actual edge point; if not, the current traversal point is regarded as a non-actual edge point; the actual edge point set F_s is thus obtained;
or: setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if so, that point of the second edge point set is replaced by the current traversal point; if not, no processing is performed; after the traversal is finished, the second edge point set finally obtained is taken as the actual edge point set F_s.
Further, the spatial coordinates of the marker points in the camera coordinate system are determined according to the pixel coordinates of the marker points and the geometric parameters of the marker pattern, specifically:
if a monocular camera is used to acquire the marker image data, the steps are:
determining a scaling factor according to the geometric parameters of the marker pattern;
converting the pixel coordinates of the marker points into spatial coordinates in the camera coordinate system according to the camera internal parameters and the scaling factor;
or: if a multi-view camera is used to acquire the marker image data, the steps are:
determining a scaling factor for each camera according to the geometric parameters of each marker pattern;
converting the pixel coordinates of the marker points into spatial coordinates in the respective camera coordinate systems according to the internal parameters and the scaling factor of each camera;
obtaining a corrected scaling factor z' according to the external parameters of each camera and the spatial coordinates in each camera coordinate system;
and converting the pixel coordinates of the marker points into corrected spatial coordinates in each camera coordinate system according to the corrected scaling factor z'.
Further, the pixel coordinates of the marker points are converted into spatial coordinates in the camera coordinate system according to the camera internal parameters and the scaling factor; or, according to the internal parameters and the scaling factor of each camera, the pixel coordinates of the marker points are converted into spatial coordinates in the respective camera coordinate systems; formula (11) is used:
Z·[u, v, 1]^T = K·[X_c, Y_c, Z_c]^T, with K = [f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1]   (11)
wherein (u, v) are the pixel coordinates of the marker point and Z is the scaling factor; f_x and f_y are the focal lengths of the camera in the x and y directions, c_x and c_y are the origin coordinates of the pixel plane coordinate system, and X_c, Y_c, Z_c are the spatial coordinates of the marker point in the camera coordinate system.
Further, the corrected scaling factor z' is obtained according to the external parameters of each camera and the spatial coordinates in each camera coordinate system; specifically:
calculating the parallax distance from the spatial coordinates of the marker points in each camera coordinate system using formula (13);
calculating the corrected scaling factor z' from the external parameters of each camera and the parallax distance using formula (14);
wherein (X_cij, Y_cij, Z_cij) and (x_cij, y_cij, z_cij) are the coordinates of the j-th marker point in the marker image data captured by different i-th cameras (uppercase and lowercase distinguishing the two cameras); α is the parallax distance; f and T are external parameters of the cameras, f being the camera focal length and T the baseline distance between the camera optical centers; and z' is the corrected scaling factor.
Further, determining candidate homonymous points according to the distance differences between the projections of the marker points on the same plane under the various camera view angles includes:
after the pixel coordinates of all marker points of all cameras on the same plane have been determined by projection, extracting the candidate homonymous points of the multi-view images based on the nearest-distance principle in the unified coordinate system.
Further, performing multi-plane epipolar projection of the marker points under the different camera view angles according to their pixel coordinates and spatial coordinates and determining the epipolar line segment equation includes:
satisfying formula (18):
[u_ij', v_ij', 1]·F·[u_ij, v_ij, 1]^T = 0   (18)
wherein (u_ij, v_ij, 1) and (u_ij', v_ij', 1) are the pixel coordinates of the j-th pixel point of the i-th camera in the two-dimensional planes of the different view angles; on this basis, with the fundamental matrix denoted F and (u_11, v_11, 1) and (u_11', v_11', 1) denoted x and x', the epipolar constraint line segment on the projection plane with respect to point x' can be expressed as l_e', as in formula (19):
l_e' = F·x   (19)
and one projection point corresponds to one epipolar line segment.
Drawings
FIG. 1 is a flow chart of one embodiment of the marker point identification method of the present invention;
FIG. 2 is a schematic structural diagram of one embodiment of marker image data acquisition according to the present invention;
FIG. 3 is a schematic diagram of another embodiment of marker image data acquisition according to the present invention;
FIG. 4 is a schematic diagram of an embodiment combining the marker point identification method and the homonymous point pairing method of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings; it is evident that the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
It should be noted that directional indications such as up, down, left, right, front and rear in the embodiments of the present invention are used only to explain the relative positional relationship and movement of components in a specific posture; if that posture changes, the directional indication changes accordingly. In addition, descriptions such as "first, second", "S1, S2" or "step one, step two" in the embodiments are for description only and are not to be construed as indicating or implying relative importance, the number of technical features, or the execution order of the method. Those skilled in the art will understand that all variations within the technical concept of the present invention, without departing from its gist, fall within its scope.
As shown in fig. 1, the present invention provides a method for identifying a marker point, including:
S1: acquiring marker image data. Specifically, as shown in fig. 2, an image capturing module 100, such as a monocular camera, binocular camera, multi-view camera or video camera, is connected, optionally but not exclusively, to a processing module 200 by wired or wireless means; it receives an image-capturing trigger signal sent by the processing module 200, such as a terminal device, and captures the marker image data containing the marker pattern M in its field of view.
More specifically, the number, shape and kind of marker patterns in the marker image can be set arbitrarily by those skilled in the art according to actual requirements. As illustrated in fig. 2, the marker pattern in the marker image is optionally but not exclusively a marker circle M, preferably a reflective marker circle. As illustrated in fig. 3, the marker patterns in the marker image are optionally but not exclusively four marker circles M_1, M_2, M_3, M_4 of different sizes. It should be noted that fig. 2 and fig. 3 are only examples of suitable marker patterns and are not limiting; their number, shape and kind can be set arbitrarily by those skilled in the art. By way of example, if there are multiple objects to be identified within the field of view, different objects may optionally but not exclusively be identified with marker patterns of different shapes or sizes.
The processing module 200 identifies the spatial positions of the marker points of the marker pattern in the marker image data by the marker point identification method of the present invention. Taking the circular marker pattern shown in fig. 2 as an example, the marker points of the circle are optionally but not exclusively the circle center or/and the top, bottom, left and right vertices, and identifying the spatial positions of these feature points means, optionally but not exclusively, identifying the spatial coordinates of the circle center or/and those vertices. More preferably, when the marker image data contains several marker patterns of several shapes, such as the four marker circles M_1, M_2, M_3, M_4 of two different sizes shown in fig. 3, the marker point identification method of the present invention also optionally but not exclusively includes identifying the category of each marker point in the marker image data, for example whether a given marker point belongs to a large circle or a small circle.
It should be noted that the marker point identification method and the drawings of the present invention take the marker circles shown in fig. 2 and 3 as example marker patterns, but the invention is not limited thereto. By way of example, the marker pattern in the marker image may alternatively but not exclusively be a triangle, a square, or a mixture of several graphics such as circles and triangles. Taking a triangle as an example, the marker points can be chosen, without limitation, as its three vertices or its center point; the marker points of other patterns can likewise be characteristic, symbolic points of those patterns.
S2: extracting a first edge point set P_sub according to the gradient values and gradient directions of the marker image data. Specifically, from the viewpoint of gradient value and gradient direction, pixels of the preset marker-pattern shape are detected and extracted in the captured single frame of marker image data, giving the first edge point set of the marker pattern on the two-dimensional image data. More specifically, taking the marker patterns of fig. 2 and 3 as marker circles, which generally appear as ellipses in the two-dimensional projection imaged by the camera, the first edge point set of the ellipse is extracted.
Specifically, step S2 optionally but not exclusively includes:
S21: calculating the integrated gradient value and gradient direction of each pixel according to the marker image data;
S22: judging whether the integrated gradient value of each pixel is greater than the high or low integrated-gradient thresholds;
S23: if it is above the high threshold, the pixel is regarded as an edge point;
S24: if it is below the low threshold, the pixel is regarded as a non-edge point;
S25: if it lies between the high and low thresholds, the integrated gradient values of the linked points before and after the pixel are searched; if a linked point whose integrated gradient value exceeds the high threshold exists, the pixel is regarded as an edge point, otherwise as a non-edge point; the first edge point set P_sub is thus obtained.
Specifically, in step S21, take a pixel A(m, n) with pixel coordinates (m, n) on the image data as an example; its gray value is optionally but not exclusively g(m, n). More specifically, this gray value g(m, n) may be the gray value of the original image data or the gray value after filtering, preferably the Gaussian-filtered gray value g_σ(m, n), where σ denotes the Gaussian kernel size. More specifically, the Gaussian filtering is optionally but not exclusively the two-dimensional Gaussian filtering shown in formula (1);
for the original image g(m, n) or the Gaussian-filtered image g_σ(m, n), the gradient values g_x(m, n) and g_y(m, n) in the x and y directions are obtained by convolving the pixel A(m, n) with the Sobel operator, and the integrated gradient value G(m, n) and the gradient direction θ(m, n) are calculated by formulas (2)–(3).
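As a concrete illustration of formulas (1)–(3), the integrated gradient value and gradient direction can be computed as in the minimal sketch below; the Gaussian σ and the Sobel kernel size are assumed values, not prescribed by the text.

```python
import cv2
import numpy as np

def integrated_gradient(gray, sigma=1.0):
    """Return G(m, n) and theta(m, n) for a grayscale marker image."""
    # Formula (1): two-dimensional Gaussian filtering, g_sigma(m, n).
    g_sigma = cv2.GaussianBlur(gray.astype(np.float64), (0, 0), sigma)
    # Sobel responses g_x(m, n) and g_y(m, n).
    gx = cv2.Sobel(g_sigma, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(g_sigma, cv2.CV_64F, 0, 1, ksize=3)
    # Formulas (2)-(3): integrated gradient value and gradient direction.
    G = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    return G, theta
```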
More specifically, the coordinates of the points in the first edge point set P_sub are optionally but not exclusively obtained either by calculating the coordinates of every pixel, such as pixel A(m, n), in step S21 and then, in step S25, picking out the coordinates of those pixels that belong to the first edge point set; or, to reduce the amount of computation, by calculating the coordinates of each pixel of the first edge point set only after that set has been determined in step S25. Taking pixel A(m, n) as an example, its coordinates may optionally but not exclusively be determined as follows:
S201: performing interpolation for each pixel; taking pixel A(m, n) as an example, the integrated gradient values G'(m, n) and G''(m, n) of the two adjacent points A'(m, n) and A''(m, n), whose distance from A(m, n) along the gradient direction θ(m, n) is a preset pixel size such as one pixel, are interpolated;
specifically, for each pixel A(m, n), the integrated gradient values G'(m, n) and G''(m, n) of the two adjacent points A'(m, n) and A''(m, n) located one preset pixel size away along the θ(m, n) direction are interpolated as shown in formulas (4)–(5).
S202: calculating the coordinates P(A_x, A_y) of each pixel A(m, n) according to its integrated gradient value G(m, n) and the integrated gradient values G'(m, n), G''(m, n) of the two adjacent points A'(m, n), A''(m, n), so as to determine the coordinates of each point in the first edge point set.
Specifically, formulas (6)–(8) are optionally but not exclusively:
A_x = m + λ·cos(θ(m, n))   (7)
A_y = n + λ·sin(θ(m, n))   (8)
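Formulas (7)–(8) shift each edge pixel by λ along its gradient direction. Formula (6), which defines λ, is not reproduced in the text; the sketch below therefore uses a standard parabolic three-point fit of G, G' and G'' as an assumed stand-in for it, and should not be read as the patent's own definition.

```python
import numpy as np

def subpixel_edge_point(m, n, G, G_fwd, G_bwd, theta):
    """Sub-pixel coordinates (A_x, A_y) of edge pixel A(m, n).

    G     : integrated gradient value at A(m, n)
    G_fwd : interpolated value G'(m, n) one pixel ahead along theta
    G_bwd : interpolated value G''(m, n) one pixel behind along theta
    theta : gradient direction theta(m, n)
    """
    # Assumed parabolic-peak offset; the patent's formula (6) may differ.
    denom = G_fwd + G_bwd - 2.0 * G
    lam = 0.0 if abs(denom) < 1e-12 else 0.5 * (G_bwd - G_fwd) / denom
    # Formulas (7)-(8): shift along the gradient direction.
    A_x = m + lam * np.cos(theta)
    A_y = n + lam * np.sin(theta)
    return A_x, A_y
```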
more specifically, in step S25, the step of determining the same link point before and after, optionally but not limited to, includes:
judging, according to formula (9), the order of a point B(m, n) immediately before or after the current traversal point, such as pixel A(m, n): if formula (9) is satisfied, B(m, n) is judged to be the next point of A(m, n); otherwise, B(m, n) is judged to be the previous point. Meanwhile, for a traversal point that already has a previous or next point, it is judged whether the Euclidean distance of the new linked point is smaller than that of the existing previous or next point; if not, B(m, n) is determined to be the linked point of A(m, n); if so, the existing linked point is replaced by the new linked point;
wherein AB denotes the vector between the two points A(m, n) and B(m, n), and g(A)^T denotes the gradient direction of the current traversal point A(m, n).
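A compact sketch of the double-threshold decision of steps S22–S25 is given below; it assumes the G map from step S21 and simplifies the search for linked points to an 8-neighbourhood hysteresis, which is an assumption made here for brevity since formula (9) is not reproduced in the text.

```python
import numpy as np

def first_edge_point_set(G, high, low):
    """Classify pixels into the first edge point set P_sub by hysteresis."""
    strong = G > high                      # S23: definite edge points
    weak = (G >= low) & ~strong            # S25: undecided pixels
    edge = strong.copy()
    changed = True
    while changed:
        # Promote weak pixels that touch (8-neighbourhood) an accepted edge pixel.
        grown = np.zeros_like(edge)
        grown[1:, :] |= edge[:-1, :];   grown[:-1, :] |= edge[1:, :]
        grown[:, 1:] |= edge[:, :-1];   grown[:, :-1] |= edge[:, 1:]
        grown[1:, 1:] |= edge[:-1, :-1]; grown[:-1, :-1] |= edge[1:, 1:]
        grown[1:, :-1] |= edge[:-1, 1:]; grown[:-1, 1:] |= edge[1:, :-1]
        new_edge = edge | (weak & grown)
        changed = bool((new_edge != edge).any())
        edge = new_edge
    return np.argwhere(edge)               # (row, col) pixels of P_sub
```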
S3: extracting a second edge point set E_s according to the gray values of the marker image data. Specifically, from the gray-value viewpoint, the set of possible contour pixels in the marker image data is optionally but not exclusively extracted and instance segmentation of the individual marker patterns is performed, giving the second edge point set E_s of the pattern data. Specifically:
S31: performing threshold-filtering binarization on the marker image data to obtain a binary image B_m of the marker image data. Specifically: the captured gray image is traversed and, for each pixel, it is judged whether its gray value is greater than a preset threshold; pixels whose gray value exceeds the threshold are set to 1 and the others to 0, giving a binary image whose values are either 0 or 1.
S32: scanning and traversing the binary image B_m; if the value at a scanning point is 1, an 8-direction surrounding connectivity search is performed; if a surrounding pixel with value 0 exists, the scanning point is marked as a boundary point and its coordinates are recorded; at the same time, boundary point sets belonging to different contours are recorded according to the change of the inside/outside status of the boundary at the current scanning point and at the adjacent points, giving the second edge point set E_s of each marker point instance.
Specifically, if the marker image data contains only one marker pattern, taking the single marker pattern shown in fig. 2 as an example, recording the boundary point sets of the different contours according to the change of the inside/outside status of the boundary at the current scanning point and at the adjacent points yields only one boundary point set, i.e. the edge point set of one marker pattern. If the marker image data contains several marker patterns, taking the four marker patterns shown in fig. 3 as an example, the four image borders can optionally but not exclusively be taken as the first contour in the ordering; the binary image B_m is then scanned and traversed, an 8-direction surrounding connectivity search is performed whenever the value of the scanning point is 1, the scanning point is marked as a boundary point if a surrounding pixel with value 0 exists, and the boundary point sets belonging to the different contours are recorded according to the change of the inside/outside status of the boundary at the current scanning point and at the adjacent points. Further, traversing B_m yields the boundary points and their contour-order set; contour instance segmentation is performed through the contour order to obtain the edge point set of a single ellipse instance, and hence the second edge point set E_s of all ellipses.
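An equivalent of steps S31–S32 can be obtained with standard thresholding and boundary following, as sketched below; cv2.findContours is used here merely as a stand-in for the boundary-scanning procedure described in the text, and the threshold value is an assumption.

```python
import cv2

def second_edge_point_set(gray, thresh=128):
    """Return E_s: one boundary point set (contour) per marker instance."""
    # S31: threshold-filter binarization -> binary image B_m.
    _, b_m = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    # S32: boundary following; each contour is the edge point set of one instance.
    contours, _ = cv2.findContours(b_m, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2) for c in contours]  # list of (u, v) point arrays
```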
S4: determining the actual edge point set F_s according to the distances between corresponding points of the first edge point set P_sub and the second edge point set E_s. Specifically, this is done, optionally but not exclusively, in one of the following ways:
The first way: setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if such a point exists, the current traversal point is regarded as an actual edge point; if not, it is regarded as a non-actual edge point; the actual edge point set F_s is thus obtained.
Or, the second way: setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if so, that point of the second edge point set is replaced by the current traversal point; if not, no processing is performed; after the traversal is finished, the second edge point set finally obtained is taken as the actual edge point set F_s.
In particular, the distance threshold can be set by those skilled in the art according to the actual situation. The first edge point set P_sub is traversed and, in the second edge point set E_s, it is checked whether a point exists whose distance from the current traversal point is less than the set threshold; if so, the more accurate first edge point is taken as an actual edge point; if not, the current traversal point is ignored. Finally, either all actual edge points found in the first edge point set form a new set that serves as the finally determined actual edge points, or the corresponding points of the second edge point set are replaced by these actual edge points and the second edge point set serves as the finally determined actual edge points. It should be noted that only two examples of determining the actual edge points are given above, without limitation. The point of the invention is that: 1. the first edge point set is extracted from the gradient value and gradient direction angle; 2. the second edge point set is extracted from the gray-value angle; that is, edge features are extracted from two angles, and the actual edge points are then determined from the distances between the edge feature points extracted from the two angles, so that the two are fused and mutually corrected. This further improves the accuracy of edge-feature extraction and hence the accuracy of the subsequent marker point segmentation and identification.
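The first fusion strategy (keeping a P_sub point only if some E_s point lies within the distance threshold) can be expressed as follows; the threshold value is an assumed parameter.

```python
import numpy as np

def actual_edge_point_set(p_sub, e_s, dist_threshold=2.0):
    """F_s: points of P_sub that have an E_s point closer than the threshold."""
    p_sub = np.asarray(p_sub, dtype=np.float64)     # (N, 2)
    e_s = np.asarray(e_s, dtype=np.float64)         # (M, 2)
    keep = []
    for p in p_sub:
        d = np.hypot(e_s[:, 0] - p[0], e_s[:, 1] - p[1])
        if d.min() < dist_threshold:                # a close instance-edge point exists
            keep.append(p)
    return np.array(keep)
```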
S5: determining the pixel coordinates (u, v) of the marker points in the marker image and the geometric parameters of the marker pattern according to the actual edge point set F_s. Specifically, on the basis of the obtained actual edge point set F_s, the outline of the marker pattern (the edge of the marker circle shown in fig. 2) can be fitted from the data; taking, for example, the upper-left corner of the image as the origin, a pixel coordinate system is established, and the pixel coordinates and geometric parameters of the marker points in the marker image are determined. Again taking the marker pattern of fig. 2 as a marker circle, a RANSAC shape fit of each ellipse instance is optionally but not exclusively performed on the actual edge point set F_s to obtain the marker circle, which generally appears as an ellipse in the pixel plane; a symbolic marker point, such as the circle center, is extracted to give the pixel coordinates of the marker point, and geometric parameters such as the diameter, major axis, minor axis and ellipse orientation angle are optionally but not exclusively selected as its characteristic parameters.
It should be noted that the marker circle above is only an example; the marker pattern may alternatively but not exclusively have other shapes such as a triangle, in which case a shape fit based on the actual edge point set F_s yields a marker triangle, and the pixel coordinates (u, v) of symbolic marker points, such as the triangle vertices, together with geometric parameters such as the side lengths, are extracted as characteristic parameters.
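For the marker-circle case, the fitting of step S5 can be sketched with OpenCV's least-squares ellipse fit; the text specifies a RANSAC fit of each ellipse instance, so cv2.fitEllipse is only a simplified stand-in here.

```python
import cv2
import numpy as np

def marker_from_edges(f_s):
    """Fit an ellipse to the actual edge point set F_s of one marker instance.

    Returns the marker-point pixel coordinates (u, v) (the ellipse centre) and
    geometric parameters (major axis, minor axis, orientation angle) in pixels.
    Requires at least 5 edge points.
    """
    pts = np.asarray(f_s, dtype=np.float32).reshape(-1, 1, 2)
    (u, v), (axis1, axis2), angle = cv2.fitEllipse(pts)
    major, minor = max(axis1, axis2), min(axis1, axis2)
    return (u, v), {"major_axis": major, "minor_axis": minor, "angle": angle}
```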
S6: determining the spatial coordinates (X_c, Y_c, Z_c) of the marker point in the camera coordinate system according to the pixel coordinates (u, v) of the marker point and the geometric parameters of the marker pattern. Specifically, under the action of the camera internal parameters, the geometric parameters and the like, the pixel coordinates of the marker point on the pixel plane are converted into its spatial coordinates in the camera coordinate system.
Specifically, in step S6: if in step S1 a monocular camera is used to acquire one frame of marker image data, and through steps S2–S5 the pixel coordinates (u, v) of the marker points in that marker image and the geometric parameters of the marker pattern are identified, the first mode is adopted:
S61: determining a scaling factor Z according to the geometric parameters of the marker pattern;
S62: converting the pixel coordinates (u, v) of the marker point into spatial coordinates (X_c, Y_c, Z_c) in the camera coordinate system according to the camera internal parameters and the scaling factor.
Specifically: taking the ring shown in fig. 2 as an example, according to the pinhole imaging model, a circular pattern in three-dimensional space projects onto the two-dimensional plane as an ellipse, and the diameter of the spatial ring that passes through the circle center parallel to the camera imaging plane projects onto the image as the major axis of that two-dimensional ellipse. Therefore the ratio of the diameter D of the spatial ring to the pixel length L of the ellipse major axis on the camera imaging plane is the same as the ratio of the distance Z from the circle center P to the camera optical center O to the focal length F from the optical center O to the imaging plane.
Therefore, taking the marker pattern in the marker image as a marker circle, the spatial coordinates (X_c, Y_c, Z_c) of the marker point in the camera coordinate system are determined in step S6 according to formulas (10)–(11):
Z = D·F / L   (10)
wherein D is the diameter of the marker circle; L is the pixel length of the major axis of the ellipse into which the marker circle projects on the camera imaging plane; Z is the distance from the center of the marker circle to the camera optical center, i.e. the scaling factor; and F is the focal length from the camera optical center to the imaging plane. According to formula (10), the diameter D of the marker circle is known in advance, and L can be determined from the marker circle fitted to the actual edge point set F_s, so that the distance from the center of the marker circle to the camera optical center, i.e. the scaling factor Z, can be calculated.
Furthermore, from the scaling factor Z, i.e. the distance of the marker circle center in the camera coordinate system, determined by the camera internal parameters and formula (10), the three-dimensional spatial coordinates [X_c, Y_c, Z_c] of the marker circle center in the camera coordinate system, i.e. the spatial coordinates of the marker point, can further be estimated according to formula (11).
wherein (u, v) are the pixel coordinates of the marker circle center (the marker point in this example); Z is the distance from the marker circle center to the camera optical center, i.e. the scaling factor, calculated from formula (10); f_x and f_y are the focal lengths of the camera in the x and y directions (known from the camera internal parameters); c_x and c_y are the origin coordinates of the pixel plane coordinate system (optionally but not exclusively taking the upper-left corner of the pixel plane as the origin); and X_c, Y_c, Z_c are the three-dimensional spatial coordinates of the marker circle center with respect to the camera coordinate system, i.e. the spatial coordinates of the marker point.
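Formulas (10)–(11) combine into the following monocular recovery of the marker-point spatial coordinates; this is a minimal sketch assuming F is expressed in pixel units so that Z comes out in the units of D, with D and the intrinsics known from the marker design and camera calibration.

```python
def marker_space_coords(u, v, L, D, F, fx, fy, cx, cy):
    """Spatial coordinates (X_c, Y_c, Z_c) of a marker-circle centre.

    u, v : pixel coordinates of the marker point (ellipse centre)
    L    : pixel length of the fitted ellipse's major axis
    D    : physical diameter of the marker circle
    F    : focal length used in formula (10) (assumed in pixel units)
    """
    Z = D * F / L                    # formula (10): scaling factor
    X_c = (u - cx) * Z / fx          # formula (11): pinhole back-projection
    Y_c = (v - cy) * Z / fy
    Z_c = Z
    return X_c, Y_c, Z_c
```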
Preferably, in some application scenarios with high accuracy requirements, step S1 preferably uses a multi-view camera, such as a binocular or trinocular camera, to photograph the same marker pattern and simultaneously capture several frames of marker image data of that pattern in the field of view; through steps S2–S5, the pixel coordinates of the marker points and the geometric parameters of the marker pattern in each frame of marker image data can then be identified, and the second mode is adopted:
S61': determining a scaling factor Z for each camera according to the geometric parameters of each marker pattern;
S62': converting the pixel coordinates (u_ij, v_ij) of the marker points into spatial coordinates (X_cij, Y_cij, Z_cij) in the respective camera coordinate systems according to the internal parameters and the scaling factor of each camera;
S63': obtaining a corrected scaling factor Z' according to the external parameters of each camera and the spatial coordinates in each camera coordinate system;
S64': converting the pixel coordinates (u_ij, v_ij) of the marker points into corrected spatial coordinates (X_cij, Y_cij, Z_cij)' in each camera coordinate system according to the corrected scaling factor Z'.
Specifically, taking a binocular camera photographing the marker circle of fig. 2 as an example, two frames of marker image data from different angles are obtained; through steps S2–S6, the coordinates (X_c11, Y_c11, Z_c11) of the first marker point in the marker image data captured by the first camera and the coordinates (X_c21, Y_c21, Z_c21) of the first marker point in the marker image data captured by the second camera are obtained. The parallax distance α of the marker point can be calculated with formula (12) (generalized as formula (13)), and the distance from the marker circle center to the camera optical center is then corrected with formula (14), giving the corrected scaling factor Z';
wherein (X_c11, Y_c11, Z_c11) and (X_c21, Y_c21, Z_c21) are the coordinates of the first marker point in the marker image data captured by the first and second cameras respectively; (X_cij, Y_cij, Z_cij) and (x_cij, y_cij, z_cij) are the coordinates of the j-th marker point in the marker image data captured by different i-th cameras — since the coordinates of the j-th marker point captured by the i-th camera can all be written as (X_cij, Y_cij, Z_cij), uppercase and lowercase are used in formula (13) merely to distinguish the two cameras; α is the parallax distance; f is the camera focal length and T is the baseline distance between the optical centers of the first and second cameras, both known in advance; and Z' is the corrected distance from the marker point to the camera optical center, i.e. the corrected scaling factor. The three-dimensional spatial coordinates of the marker circle center in the binocular camera coordinate system are then calculated according to formula (11), corresponding to the corrected three-dimensional spatial coordinates [X_cij, Y_cij, Z_cij]'.
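Formulas (12)–(14) are not reproduced in the text; the sketch below therefore assumes the standard stereo relation Z' = f·T/α with α taken as the disparity between the two views, and should be read only as an illustration of the correction step, not as the patent's exact formulas.

```python
def corrected_scaling_factor(x_left, x_right, f, T):
    """Corrected distance Z' of the marker point from the optical centre.

    x_left, x_right : horizontal coordinates of the same marker point in the
                      two (rectified) views, forming the disparity alpha
    f               : camera focal length
    T               : baseline distance between the two optical centres
    """
    alpha = abs(x_left - x_right)        # assumed definition of the parallax distance
    return f * T / alpha                 # assumed form of formula (14)
```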
This embodiment thus provides the marker point identification method of the present invention: on the basis of the marker image data, the first edge point set P_sub and the second edge point set E_s are extracted at the two levels of gradient magnitude/direction and gray value respectively, and a distance threshold is then set to obtain the actual edge point set F_s. That is, on the basis of the original second edge point set E_s, the more accurate first edge point set P_sub is used as correction points, and the two directions are fused into the final actual edge point set F_s; on this basis, the pixel coordinates (u, v) of the marker points in the image and the geometric parameters of the marker pattern are determined by fitting with improved accuracy, which in turn improves the accuracy of the finally determined spatial coordinates (X_c, Y_c, Z_c) of the marker points and hence of the marker point identification. More preferably, on this basis, a multi-view camera is also preferably used to capture the same marker image from different view angles, so that the scaling factor Z and the spatial coordinates are further corrected and optimized, further improving the accuracy of the finally determined spatial coordinates (X_c, Y_c, Z_c) of the marker points.
More preferably, as shown in fig. 3, a preferred embodiment of the marker image is provided in which there is not just one marker circle but four, including two large circles M_1, M_2 and two small circles M_3, M_4, in contrast to fig. 2. After the multi-view camera captures the marker images in the field of view, the four marker points can each be identified through steps S1–S5. The marker point identification method of the present invention may then also optionally but not exclusively include:
S7: if several marker points exist in the marker image, classifying the attributes of the marker points according to the geometric parameters of the marker patterns. Taking the four marker points of fig. 3 as an example, the method further includes, without limitation: determining whether a marker point belongs to a large circle or a small circle according to geometric parameters such as the diameter, the two-dimensional size of the marker point or the shape of the pattern; further examples are triangle markers and the like. Suitable ellipses are then filtered and extracted by setting major/minor-axis thresholds, giving the final marker pattern set K, which may optionally but not exclusively include the pixel coordinates, spatial coordinates and attribute classification of each marker point in the marker image.
More preferably, when the multi-view camera captures marker image data of the several marker patterns shown in fig. 3, the marker points acquired by each single camera need to be paired as homonymous points during the subsequent identification, i.e. the corresponding marker point must be found in the marker image data captured by each single camera, before the coordinate conversion of the multi-view case of step S6 is carried out.
Therefore, in another aspect, the invention also provides a homonymous point pairing method for a multi-camera system. Optionally but not exclusively, any prior-art method or the marker point identification method above is used to identify the pixel coordinates of each marker point under each monocular camera; the pixel coordinates of the marker points under one monocular camera are then taken as the basis, the pixel coordinates of the marker points under the other monocular cameras are corrected, and those other pixel coordinates are projected onto the camera pixel plane of the basic pixel coordinates to obtain the corrected spatial coordinates of the marker points; or the pixel coordinates of the marker points under every monocular camera are projected onto a unified pixel plane for correction, giving the corrected spatial coordinates of the marker points. To avoid failing the requirement of unity of invention, the marker point identification method and the homonymous point pairing method are protected separately.
Specifically, the method for pairing homonymous points of the multi-camera comprises the following steps:
p1: identifying pixel coordinates and space coordinates of mark points in each mark image under each camera view angle, and obtaining geometric parameters of mark images in each mark image; for example, the multi-camera may be, but not limited to, a binocular camera, and mixed marker points are two marker points with different sizes of marker circles as shown in fig. 3, and the method for pairing the same name points of the multi-camera is further explained, but the specific number of multi-camera, the specific shape, the category, the number and the like of the marker images are not limited thereto.
Specifically, any prior-art method or the marker point identification method above is first used, optionally but not exclusively, to obtain the pixel coordinates and spatial coordinates of the four marker points in the first camera and in the second camera respectively. Optionally but not exclusively: the pixel coordinates of the four marker points M_1–M_4 of the first camera are (u_11, v_11), (u_12, v_12), (u_13, v_13), (u_14, v_14); the pixel coordinates of the four marker points M_1–M_4 of the second camera are (u_21, v_21), (u_22, v_22), (u_23, v_23), (u_24, v_24); the spatial coordinates of the four marker points M_1–M_4 of the first camera are (X_c11, Y_c11, Z_c11), (X_c12, Y_c12, Z_c12), (X_c13, Y_c13, Z_c13), (X_c14, Y_c14, Z_c14); and the spatial coordinates of the four marker points M_1–M_4 of the second camera are (X_c21, Y_c21, Z_c21), (X_c22, Y_c22, Z_c22), (X_c23, Y_c23, Z_c23), (X_c24, Y_c24, Z_c24). The geometric parameters of each marker pattern are likewise known from their acquisition; taking the marker circle as an example, the diameter is optionally but not exclusively selected as the geometric parameter.
P2: taking the geometric parameters of the marker pattern as a preset condition, projecting the marker points under the different camera view angles onto the same plane according to their pixel coordinates and spatial coordinates. Specifically, taking the marker pattern as a marker circle, the diameter of the marker circle in space is optionally but not exclusively taken as the preset condition, and the edge point sets under the different camera view angles, exemplified here as sets of two-dimensional ellipses, are subjected to multi-plane projection so that the marker points under every camera view angle are projected onto the same plane. More specifically: the other cameras are optionally but not exclusively projected onto the pixel plane of one particular monocular camera, or a new plane is selected and all cameras are projected onto that new independent plane. Taking a binocular camera, referred to simply as the first camera and the second camera, as an example, the pixel plane of the first camera is optionally but not exclusively projected onto the pixel plane of the second camera, or the pixel plane of the second camera onto that of the first camera, or the pixel planes of both cameras onto a new third plane.
As a further example, consider the binocular case in which the first camera is projected onto the pixel plane of the second camera. The pixel coordinates of the four marker points M_1–M_4 in the first camera are (u_11, v_11), (u_12, v_12), (u_13, v_13), (u_14, v_14), and their spatial coordinates are (X_c11, Y_c11, Z_c11), (X_c12, Y_c12, Z_c12), (X_c13, Y_c13, Z_c13), (X_c14, Y_c14, Z_c14). Taking marker point M_1 as an example, optionally but not exclusively, formula (15) is used with the pre-calibrated external parameters [R|t] between the binocular cameras to project the spatial coordinates of the marker point M_1 captured by the first camera onto the second camera, giving the pixel coordinates (u_11', v_11') of the first camera's marker point M_1 under the second camera's view angle; likewise, the other marker points M_2–M_4 of the first camera can be calculated as in formula (16), where i denotes the camera number and j the marker point number;
wherein [R|t] denotes the rotation-translation transformation between the first camera and the second camera, M^-1 is the pre-calibrated inverse of the camera internal-parameter matrix, [X_c11, Y_c11, Z_c11] are the spatial coordinates of the first marker point in the first camera, and (u_11', v_11') are the pixel coordinates of the first marker point projected onto the two-dimensional imaging plane of the second camera; [X_cij, Y_cij, Z_cij] are the spatial coordinates of the j-th marker point in the i-th camera, and (u_ij', v_ij') are the pixel coordinates of the j-th marker point of the i-th camera projected onto the other imaging plane. It should be noted that the pixel coordinates on the other imaging plane may lie on any camera imaging plane other than the camera's own, or on another uniformly chosen projection plane, such as a third plane distinct from both binocular camera planes.
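Formulas (15)–(16) are described but not reproduced; the following sketch shows the usual way such a projection is carried out with the pre-calibrated [R|t] and the target camera's intrinsic matrix, and is an assumption only to that extent.

```python
import numpy as np

def project_to_other_view(X_c, R, t, K_other):
    """Project a spatial point X_c = (X_cij, Y_cij, Z_cij), expressed in the
    source camera's coordinate system, onto another camera's pixel plane."""
    X_c = np.asarray(X_c, dtype=np.float64).reshape(3)
    X_other = R @ X_c + np.asarray(t, dtype=np.float64).reshape(3)  # [R|t]
    uvw = K_other @ X_other                     # intrinsic matrix of the target camera
    return uvw[0] / uvw[2], uvw[1] / uvw[2]     # (u_ij', v_ij')
```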
P3: determining the candidate homonymous points according to the distance differences between the projections of the marker points on the same plane under the various camera view angles. Specifically, the distance difference is optionally but not exclusively calculated in pixel coordinates, or alternatively in spatial coordinates. Taking pixel coordinates as an example: after the projection has determined the pixel coordinates of every marker point of every camera on the same plane, the candidate homonymous points of the multi-view images are extracted based on the nearest-distance principle (optionally the Euclidean distance) in the unified coordinate system. That is to say: the set of projected points (u_ij, v_ij) is traversed, the distance between each projected point and the center points of the existing ellipses in the target plane is calculated, and those whose Euclidean distance is less than a preset threshold are regarded as possible nearest points, giving the candidate homonymous point set P_same.
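Step P3 then reduces to a nearest-neighbour search in the common plane; the sketch below keeps, for every projected point, the ellipse centres closer than a preset pixel threshold (an assumed parameter) as its candidate homonymous points.

```python
import numpy as np

def candidate_homonymous_points(projected_pts, target_centres, max_dist=10.0):
    """P_same: for each projected point, indices of nearby ellipse centres."""
    projected_pts = np.asarray(projected_pts, dtype=np.float64)   # (N, 2)
    target_centres = np.asarray(target_centres, dtype=np.float64) # (M, 2)
    candidates = {}
    for i, p in enumerate(projected_pts):
        d = np.hypot(target_centres[:, 0] - p[0], target_centres[:, 1] - p[1])
        candidates[i] = np.flatnonzero(d < max_dist).tolist()
    return candidates
```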
P4: performing multi-plane epipolar projection of the marker points under the different camera view angles according to their pixel coordinates and spatial coordinates, and determining the epipolar line segment equation. Specifically, taking the marker pattern as a marker circle, multi-plane epipolar projection of the ellipse center points is performed.
For example, again for a binocular camera with the first camera projected onto the pixel plane of the second camera, according to the epipolar constraint principle the pixel coordinates (u_11, v_11, 1) and (u_11', v_11', 1) in the two-dimensional planes of the different view angles satisfy formula (17), so that the equation of the corresponding line segment on the two-dimensional imaging plane, and hence its parameters, can be determined. Here t^∧R is obtained from the pre-calibration of the first and second cameras, K_2^(-T)·t^∧R·K_1^(-1) is the fundamental matrix, and (u_11, v_11, 1) and (u_11', v_11', 1) are the pixel coordinates of a single pixel point in the two-dimensional planes of the different view angles. Notably, (u_11, v_11, 1) and (u_11', v_11', 1) are only illustrative, not limiting, and can be generalized to formula (18), in which (u_ij, v_ij, 1) and (u_ij', v_ij', 1) are the pixel coordinates of the j-th pixel point of the i-th camera in the two-dimensional planes of the different view angles. As a further example, the determination may also be based on (u_21, v_21, 1) and (u_21', v_21', 1). It is worth noting that when the first camera is projected onto the second camera, the projected coordinates (u_21', v_21', 1) of the second camera coincide with its original pixel coordinates (u_21, v_21, 1), so no projection is needed; whereas if the first and second cameras are both projected onto a unified coordinate system, the projected coordinates (u_21', v_21', 1) differ from the original pixel coordinates (u_21, v_21, 1) and are the projection onto that unified, i.e. third, coordinate system.
On this basis, let the fundamental matrix K_2^(-T)·t^∧R·K_1^(-1) be denoted F, and let (u_11, v_11, 1) and (u_11', v_11', 1) be x and x'; then the epipolar constraint line segment on the projection plane with respect to point x' can be expressed as l_e', as in formula (19):
l_e' = F·x   (19)
and one projection point corresponds to one epipolar line segment.
P5: determining the final homonymous point among the candidate homonymous points according to the perpendicular distance between each candidate homonymous point and the epipolar line segment. Specifically, taking the projection of the first camera onto the second camera as an example, the candidate homonymous point set P_same is traversed, and the perpendicular distance between each ellipse center point (u_2j, v_2j, 1) in the projected plane, i.e. the second camera's imaging plane, and the epipolar line segment corresponding to the projected point (u_11, v_11, 1) is calculated; the point with the smallest distance is selected as the homonymous point paired with (u_11, v_11, 1), giving the homonymous point pairing set M_p. For the binocular camera this means that the four marker points of the first camera are paired with the four marker points of the second camera; the set M_p is then traversed and triangulation is performed on the homonymous points to obtain the spatial three-dimensional coordinates with respect to the binocular camera.
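Steps P4–P5 can be sketched as follows: the fundamental matrix is built from the pre-calibrated parameters as F = K_2^(-T)·[t]_×·R·K_1^(-1), the epipolar line l_e' = F·x is formed for each point of the first camera, and the candidate with the smallest perpendicular distance to that line is kept. The skew-symmetric form [t]_× is assumed here for the "t^∧R" term of the text.

```python
import numpy as np

def fundamental_matrix(K1, K2, R, t):
    """F = K2^{-T} [t]_x R K1^{-1}, assuming [t]_x is the skew matrix of t."""
    t = np.asarray(t, dtype=np.float64).reshape(3)
    t_skew = np.array([[0.0, -t[2], t[1]],
                       [t[2], 0.0, -t[0]],
                       [-t[1], t[0], 0.0]])
    return np.linalg.inv(K2).T @ t_skew @ R @ np.linalg.inv(K1)

def select_homonymous_point(x, candidates, F):
    """Pick the candidate (u, v) closest to the epipolar line l_e' = F x."""
    x_h = np.array([x[0], x[1], 1.0])
    a, b, c = F @ x_h                      # line: a*u + b*v + c = 0
    best, best_d = None, np.inf
    for (u, v) in candidates:
        d = abs(a * u + b * v + c) / max(np.hypot(a, b), 1e-12)  # perpendicular distance
        if d < best_d:
            best, best_d = (u, v), d
    return best, best_d
```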
On this basis, the homonymous point pairing method can be applied within the marker point identification method above: when the marker image data are acquired by a multi-view camera and several marker points exist in each frame of marker image data, the identified marker points are first paired as homonymous points using the homonymous point pairing method, and the spatial coordinates of each marker point are then determined according to step S6.
This embodiment thus provides the homonymous point pairing method for the multi-camera system. Taking a binocular camera as an example, the marker points are projected into a unified coordinate system by means of the parameters pre-calibrated between the binocular cameras, and homonymous point pairing is carried out jointly according to the two-dimensional distances between the projected marker points and the epipolar constraint. It should be noted that when, as shown in fig. 3, there are marker circles of two different sizes, the subsequent homonymous point pairing in step P2 is optionally but not exclusively carried out by extracting the different diameters as the preset conditions. That is, when several categories of marker patterns exist in the marker image data, the search and matching of homonymous point pairs may be carried out again with a changed preset condition to obtain the pairing set.
The invention has the beneficial effects that:
1. compared with the mode that only the edge extraction of the gray level image and the image fitting filtering of the non-ideal edge points are carried out in the image to obtain the ideal fitting pattern in the prior art, the identification method of the mark points effectively filters the edge points which do not form the preset pattern by adopting a method of combining sub-pixel edge detection and example edge detection, and the dimension of the preset two-dimensional mark pattern is used as the filtering condition for filtering the fitted pattern, so that errors and non-ideal pattern detection results generated in the two-dimensional image detection fitting process are further effectively filtered, and furthermore, the homonymous point matching flow of the mark points which do not meet the preset condition category is effectively filtered by adopting the method of projection calculation under the preset condition, and the error rate of mark point pattern detection extraction is also reduced.
2. Compared with the prior art that similar region searching is directly carried out in patterns with different view angles to extract homonymous points or spatial features extracted by fitting patterns, the homonymous point pairing method reduces the homonymous point mismatching rate under images with different view angles through monocular distance estimation and subsequent multi-view image projection under the spatial condition of preset mark points (for example, the diameter is taken as a preset condition), and combines polar constraint conditions of multi-view projection points. It should be noted that, as shown in fig. 4, a specific flow chart is provided when the identification method of the marker point and the homonym point pairing method based on the multi-camera are combined, if different types of marker patterns do not exist in the marker image, the homonym point pairing step is not needed, and if the same exists, the homonym point pairing step is needed.
3. Compared with the prior-art practice of applying special codes to the marker points and then identifying those codes in order to support marker point classification, the invention performs projection-based homonymous point matching and filtering directly through the preset spatial feature condition of the marker points, and thus supports mixed marker point detection without requiring costly printing of coded patterns or a complex identification flow.
In another aspect, the present invention also provides a computer storage medium storing executable program code; the executable program code is used to execute any of the above marker point identification methods or/and the multi-camera-based homonymous point pairing method.
In another aspect, the present invention further provides a terminal device comprising a memory and a processor; the memory stores program code executable by the processor; the program code is used to execute any of the above marker point identification methods or/and the multi-camera-based homonymous point pairing method.
For example, the program code may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specified functions, and these instruction segments describe the execution process of the program code in the terminal device.
The terminal device may be a computing device such as a desktop computer, a notebook computer, a palm computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the terminal device may also include input and output devices, network access devices, buses, and the like.
The processor may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be an internal storage unit of the terminal device, such as a hard disk or internal memory of the terminal device. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the program code and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The technical effects and advantages of the computer storage medium and the terminal device created on the basis of the marker point identification method or/and the multi-camera-based homonymous point pairing method are not repeated here. The technical features of the above embodiments may be combined arbitrarily; for brevity, not every possible combination of these technical features is described, but any combination of them that involves no contradiction should be considered as falling within the scope of this specification.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. A homonymous point pairing method for a multi-camera, comprising:
identifying pixel coordinates and spatial coordinates of marker points in each marker image under each camera view angle, and obtaining geometric parameters of the marker pattern in each marker image;
taking the geometric parameters of the marker pattern as a preset condition, and projecting the marker points under different camera view angles onto the same plane according to the pixel coordinates and spatial coordinates of the marker points under each camera view angle;
determining candidate homonymous points according to the distance differences between the projections of the marker points onto the same plane under each camera view angle;
performing multi-plane epipolar projection on the marker points under different camera view angles according to the pixel coordinates and spatial coordinates of the marker points under each camera view angle, and determining an epipolar line segment equation;
and determining the final homonymous point among the candidate homonymous points according to the vertical distance between the candidate homonymous points and the epipolar line segment.
2. The homonymous point pairing method for a multi-camera according to claim 1, wherein identifying the pixel coordinates and spatial coordinates of the marker points in each marker image under each camera view angle comprises:
acquiring marker image data;
extracting a first edge point set P_sub according to the gradient values and gradient directions of the marker image data;
extracting a second edge point set E_s according to the gray values of the marker image data;
determining an actual edge point set F_s according to the distances between points in the first edge point set and corresponding points in the second edge point set;
determining the pixel coordinates of the marker points in the marker image and the geometric parameters of the marker pattern according to the actual edge point set;
and determining the spatial coordinates of the marker points in the camera coordinate system according to the pixel coordinates of the marker points and the geometric parameters of the marker pattern.
3. The homonymous point pairing method for a multi-camera according to claim 2, wherein the step of extracting the first edge point set comprises:
calculating the integrated gradient value and gradient direction of each pixel point according to the marker image data;
judging the integrated gradient value of each pixel point against the high and low integrated gradient thresholds;
if it is above the high integrated gradient threshold, the pixel point is regarded as an edge point;
if it is below the low integrated gradient threshold, the pixel point is regarded as a non-edge point;
if the integrated gradient value lies between the high and low integrated gradient thresholds, the integrated gradient values of the points linked to the pixel point before and after it are searched; if an edge point whose integrated gradient value is above the high threshold exists among those linked points, the pixel point is regarded as an edge point, otherwise it is regarded as a non-edge point; the first edge point set is thereby obtained.
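For illustration only, the double-threshold decision of claim 3 behaves like the hysteresis step of a Canny-style detector. The sketch below (plain NumPy/SciPy, with hypothetical thresholds) classifies pixels in this way; it does not reproduce the claimed sub-pixel refinement:

```python
import numpy as np
from scipy import ndimage

def hysteresis_edges(grad_mag, t_low, t_high):
    """Pixels above t_high are edges; pixels below t_low are not;
    in-between pixels are kept only if their connected region contains
    a pixel above the high threshold."""
    strong = grad_mag > t_high
    weak = (grad_mag >= t_low) & (grad_mag <= t_high)
    # label 8-connected regions of candidate (strong or weak) pixels
    labels, n = ndimage.label(strong | weak, structure=np.ones((3, 3)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True   # regions containing a strong pixel
    keep[0] = False                          # background label stays off
    return keep[labels]                      # boolean edge map (first edge point set)
```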
4. The homonymous point pairing method for a multi-camera according to claim 2, wherein the step of extracting the second edge point set comprises:
performing threshold-filtering binarization on the marker image data to obtain a binary image B_m of the marker image data;
scanning through the binary image B_m; if the value at a scanned point is 1, an 8-connected neighbourhood search is performed around it; if a surrounding pixel point with value 0 exists, the scanned point is marked as a boundary point and the coordinates of each boundary point are determined; at the same time, the boundary point sets belonging to different contours are recorded according to the change of the inside/outside-boundary state of the current scanned point and that of the adjacent points, so as to obtain a second edge point set E_s for each marker point instance.
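The grouping of boundary points into separate contours (the per-instance sets E_s) is close in spirit to a border-following algorithm; the minimal sketch below, given only for orientation, marks boundary pixels of a binary image B_m and groups them by connected component (function names are placeholders):

```python
import numpy as np
from scipy import ndimage

def boundary_points(Bm):
    """Pixels with value 1 that have at least one 0 in their 8-neighbourhood."""
    Bm = np.asarray(Bm) > 0
    eroded = ndimage.binary_erosion(Bm, structure=np.ones((3, 3)), border_value=0)
    boundary = Bm & ~eroded                  # interior pixels removed
    return np.argwhere(boundary)             # (row, col) coordinates of boundary points

def boundary_points_per_instance(Bm):
    """One boundary point set per marker instance (8-connected component)."""
    Bm = np.asarray(Bm) > 0
    labels, n = ndimage.label(Bm, structure=np.ones((3, 3)))
    pts = boundary_points(Bm)
    return {k: pts[labels[pts[:, 0], pts[:, 1]] == k] for k in range(1, n + 1)}
```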
5. The homonymous point pairing method for a multi-camera according to claim 2, wherein the step of determining the actual edge point set is specifically:
setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if so, the current traversal point is regarded as an actual edge point; if not, the current traversal point is regarded as a non-real edge point; the actual edge point set F_s is thereby obtained;
or: setting a distance threshold; traversing the first edge point set P_sub and judging whether the second edge point set E_s contains a point whose distance from the current traversal point is less than the distance threshold; if so, the point in the second edge point set whose distance from the current traversal point is less than the distance threshold is replaced with the current traversal point; if not, no processing is performed; after the traversal is finished, the second edge point set finally obtained is determined to be the actual edge point set F_s.
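A minimal sketch of the first variant of claim 5, assuming both edge point sets are given as N x 2 arrays of (x, y) coordinates (the array names and threshold are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def actual_edge_points(P_sub, E_s, dist_threshold):
    """Keep a sub-pixel edge point of P_sub only if some instance edge point
    of E_s lies within dist_threshold of it (first variant of claim 5)."""
    tree = cKDTree(np.asarray(E_s, dtype=float))
    dists, _ = tree.query(np.asarray(P_sub, dtype=float), k=1)
    return np.asarray(P_sub)[dists < dist_threshold]   # actual edge point set F_s
```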
6. The homonymous point pairing method for a multi-camera according to claim 2, wherein determining the spatial coordinates of the marker points in the camera coordinate system according to the pixel coordinates of the marker points and the geometric parameters of the marker pattern is specifically:
if a monocular camera is used to acquire the marker image data, the method comprises the following steps:
determining a scaling factor according to the geometric parameters of the marker pattern;
converting the pixel coordinates of the marker points into spatial coordinates in the camera coordinate system according to the camera intrinsic parameters and the scaling factor;
or: if a multi-view camera is used to acquire the marker image data, the method comprises the following steps:
determining a scaling factor for each camera according to the geometric parameters of each marker pattern;
converting the pixel coordinates of the marker points into spatial coordinates in each camera coordinate system according to the intrinsic parameters and the scaling factor of each camera, respectively;
obtaining a corrected scaling factor z' according to the extrinsic parameters of each camera and the spatial coordinates in each camera coordinate system;
and converting the pixel coordinates of the marker points into corrected spatial coordinates in each camera coordinate system according to the corrected scaling factor z'.
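For illustration only, a minimal sketch of the monocular conversion is given below; the pinhole back-projection follows the relation of claim 7 (formula (11)), while the way the scaling factor is derived from the marker diameter is an assumption, not a statement of the claimed method:

```python
import numpy as np

def pixel_to_camera(u, v, Z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with scaling factor Z into camera coordinates
    (standard pinhole model, cf. formula (11))."""
    Xc = (u - cx) * Z / fx
    Yc = (v - cy) * Z / fy
    return np.array([Xc, Yc, Z])

def scale_from_marker(f, marker_diameter, fitted_pixel_diameter):
    """Hypothetical monocular scale estimate: if the marker circle has known
    physical diameter D and fitted pixel diameter d, then Z is roughly f * D / d."""
    return f * marker_diameter / fitted_pixel_diameter
```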
7. The homonymous point pairing method for a multi-camera according to claim 6, wherein the pixel coordinates of the marker points are converted into spatial coordinates in the camera coordinate system according to the camera intrinsic parameters and the scaling factor; or: the pixel coordinates of the marker points are converted into spatial coordinates in each camera coordinate system according to the intrinsic parameters and the scaling factor of each camera; using formula (11):
wherein (u, v) are the pixel coordinates of the marker point and Z is the scaling factor; f_x, f_y are the focal lengths of the camera in the x and y directions, c_x, c_y are the origin coordinates of the pixel plane coordinate system, and X_c, Y_c, Z_c denote the spatial coordinates of the marker point in the camera coordinate system.
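The body of formula (11) is not reproduced in this text; given the variables defined above, it is presumably the standard pinhole projection relation, a plausible reconstruction of which is:

$$
Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},
\qquad\text{equivalently}\quad
X_c = \frac{(u - c_x)\,Z}{f_x},\quad
Y_c = \frac{(v - c_y)\,Z}{f_y},\quad
Z_c = Z.
$$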
8. The homonymous point pairing method for a multi-camera according to claim 7, wherein the corrected scaling factor z' is obtained according to the extrinsic parameters of each camera and the spatial coordinates in each camera coordinate system; specifically:
calculating the parallax distance according to the spatial coordinates of the marker points in each camera coordinate system, using formula (13);
calculating the corrected scaling factor z' according to the extrinsic parameters of each camera and the parallax distance, using formula (14);
wherein (X_cij, Y_cij, Z_cij) and (x_cij, y_cij, z_cij) are respectively the coordinates of the j-th marker point in the marker image data captured by the i-th camera; α is the parallax distance; f and T are extrinsic parameters of the camera, where f is the focal length of the camera and T is the baseline distance between the optical centres of the i-th and j-th cameras; z' is the corrected scaling factor.
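Formulas (13) and (14) are likewise not reproduced in this text. Under the standard stereo-ranging model suggested by the quantities f (focal length), T (baseline) and α (parallax distance), a plausible but assumed reconstruction is:

$$
\alpha = \bigl\| (X_{cij}, Y_{cij}, Z_{cij}) - (x_{cij}, y_{cij}, z_{cij}) \bigr\| \quad (13,\ \text{assumed}),
\qquad
z' = \frac{f\,T}{\alpha} \quad (14,\ \text{assumed}).
$$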
9. The homonymous point pairing method for a multi-camera according to claim 1, wherein determining the candidate homonymous points according to the distance differences between the projections of the marker points onto the same plane under each camera view angle comprises:
after the pixel coordinates of all marker points of all cameras on the same plane have been determined by projection, extracting the candidate homonymous points of the multi-view imaging in the unified coordinate system based on the nearest-distance principle.
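A minimal sketch of this nearest-distance candidate extraction, assuming the projected points of two views are given as N x 2 and M x 2 arrays in the unified plane coordinate system (array names are placeholders):

```python
import numpy as np

def candidate_homonymous_points(proj_cam1, proj_cam2):
    """For each projected point of camera 1, take the nearest projected point
    of camera 2 in the common plane as its candidate homonymous point."""
    a = np.asarray(proj_cam1, dtype=float)[:, None, :]   # N x 1 x 2
    b = np.asarray(proj_cam2, dtype=float)[None, :, :]   # 1 x M x 2
    d = np.linalg.norm(a - b, axis=2)                    # N x M distance matrix
    nearest = d.argmin(axis=1)                           # index of nearest cam-2 point
    return list(enumerate(nearest.tolist()))             # (cam-1 index, cam-2 index) pairs
```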
10. The homonymous point pairing method for a multi-camera according to any one of claims 1 to 9, wherein the step of performing multi-plane epipolar projection on the marker points under different camera view angles according to the pixel coordinates and spatial coordinates of the marker points under each camera view angle, and determining an epipolar line segment equation, comprises:
satisfying formula (18):
wherein (u_ij, v_ij, 1) and (u_ij', v_ij', 1) are the pixel coordinates of the j-th pixel point of the i-th camera in the two-dimensional planes of different view angles; on this basis, the fundamental matrix is set as F, and (u_11, v_11, 1) and (u_11', v_11', 1) are taken as x and x'; the epipolar constraint line segment on the projection plane with respect to point x' can then be expressed as l_e', as in formula (19);
and one projected point corresponds to one epipolar line segment.
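Formulas (18) and (19) are not reproduced in this text. With x = (u_11, v_11, 1)^T, x' = (u_11', v_11', 1)^T and fundamental matrix F, the standard epipolar relations they presumably denote are:

$$
x'^{\top} F\, x = 0 \quad (18,\ \text{assumed}),
\qquad
l_e' = F\, x \quad (19,\ \text{assumed}),
$$

so that every projected point x determines one epipolar line l_e' in the other image plane, consistent with the last line of claim 10.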
CN202311344516.3A 2023-10-17 2023-10-17 Same name point pairing method for multi-camera Pending CN117635675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311344516.3A CN117635675A (en) 2023-10-17 2023-10-17 Same name point pairing method for multi-camera

Publications (1)

Publication Number Publication Date
CN117635675A true CN117635675A (en) 2024-03-01

Family

ID=90032803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311344516.3A Pending CN117635675A (en) 2023-10-17 2023-10-17 Same name point pairing method for multi-camera

Country Status (1)

Country Link
CN (1) CN117635675A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination