WO2023040404A1 - Line segment matching method and apparatus, computer device, and storage medium - Google Patents

Line segment matching method and apparatus, computer device, and storage medium

Info

Publication number
WO2023040404A1
Authority
WO
WIPO (PCT)
Prior art keywords
line segment
point
point feature
matching
feature
Prior art date
Application number
PCT/CN2022/101439
Other languages
English (en)
French (fr)
Inventor
吴伟
徐宽
Original Assignee
北京极智嘉科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京极智嘉科技股份有限公司
Publication of WO2023040404A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774: Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present disclosure relates to the field of vision and computer technology, and in particular to a line segment matching method and apparatus, a computer device, and a storage medium.
  • SLAM: Simultaneous Localization and Mapping.
  • Embodiments of the present disclosure at least provide a line segment matching method, device, computer equipment, storage medium, and computer program.
  • an embodiment of the present disclosure provides a line segment matching method, including:
  • acquiring a first image and a second image, wherein the first image and the second image are images of the same environment;
  • based on the point feature matching result, the first set, and the second set, determining the matching degree of any first line segment and any second line segment, and taking the first line segment and the second line segment whose matching degree meets a preset condition as line segments that match each other.
  • the determining the matching degree between any first line segment and any second line segment based on the point feature matching result, the first set, and the second set includes:
  • determining, based on the target point feature, the first set, and the second set, a matching degree between any first line segment in the first set and any second line segment in the second set.
  • the determining the target point feature in any first line segment in the first set based on the point feature matching result includes:
  • determining the matching degree between any first line segment in the first set and any second line segment in the second set, including:
  • determining, based on the first number of target point features, the second number of first point features included in the first line segment in the first set, and the third number of second point features included in the second line segment in the second set, the matching degree between any first line segment in the first set and any second line segment in the second set.
  • wherein determining the matching degree between any first line segment in the first set and any second line segment in the second set based on the first number of target point features, the second number of first point features included in the first line segment in the first set, and the third number of second point features included in the second line segment in the second set includes:
  • determining the matching degree between any first line segment in the first set and any second line segment in the second set based on the first ratio and the second ratio includes:
  • taking the first line segment and the second line segment whose matching degree meets the preset condition as mutually matching line segments includes:
  • when the target ratio is greater than or equal to a first preset threshold, determining that the first line segment and the second line segment are matching line segments.
  • before matching the first point feature in the first image with the second point feature in the second image to obtain a point feature matching result, the method further includes:
  • the method further includes:
  • the matching of the first point feature in the first image with the second point feature in the second image to obtain a point feature matching result includes:
  • matching the first point feature and the second point feature based on the first feature description information and the second feature description information to obtain a point feature matching result.
  • determining the first set of first line segments that include first point features in the first line segment set includes:
  • forming the first set from the first line segments corresponding to first target points whose distance is less than or equal to a fourth preset threshold, wherein the first point feature corresponding to such a first target point is regarded as a point feature belonging to the first line segment in the first set.
  • the determining, based on the second point feature set and the second line segment set, a second set of second line segments including second point features in the second line segment set includes:
  • forming the second set from the second line segments corresponding to second target points whose distance is less than or equal to a fifth preset threshold, wherein the second point feature corresponding to such a second target point is regarded as a point feature belonging to the second line segment in the second set.
  • the embodiment of the present disclosure also provides a line segment matching device, including:
  • An acquisition module configured to acquire a first image and a second image, wherein the first image and the second image are images in the same environment;
  • a determining module configured to acquire a first point feature set and a first line segment set in the first image and determine, based on the first point feature set and the first line segment set, a first set of first line segments including the first point features in the first line segment set; and to acquire a second point feature set and a second line segment set in the second image and determine, based on the second point feature set and the second line segment set, a second set of second line segments including the second point features in the second line segment set;
  • a first matching module configured to match the first point feature in the first image with the second point feature in the second image to obtain a point feature matching result
  • a second matching module configured to determine the matching degree of any first line segment and any second line segment based on the point feature matching result, the first set, and the second set, and to take the first line segment and the second line segment whose matching degree meets a preset condition as mutually matching line segments.
  • an embodiment of the present disclosure further provides a computer device, including: a processor, a memory, and a bus, the memory stores machine-readable instructions executable by the processor, and when the computer device is running, the processing The processor communicates with the memory through a bus, and when the machine-readable instructions are executed by the processor, the above-mentioned first aspect, or the steps of any possible line segment matching method in the first aspect are executed.
  • embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the line segment matching method in the above first aspect, or in any possible implementation of the first aspect, are performed.
  • an embodiment of the present disclosure further provides a computer program, wherein when the computer program is executed in a computer, the computer is instructed to execute the steps of the above line segment matching method.
  • the embodiments of the present disclosure provide a line segment matching method and apparatus, a computer device, a storage medium, and a computer program: a first image and a second image are acquired, wherein the first image and the second image are images of the same environment; a first point feature set and a first line segment set in the first image are acquired, and a first set of first line segments including first point features in the first line segment set is determined based on the first point feature set and the first line segment set; a second point feature set and a second line segment set in the second image are acquired, and a second set of second line segments including second point features in the second line segment set is determined based on the second point feature set and the second line segment set; the first point features in the first image are matched with the second point features in the second image to obtain a point feature matching result; and, based on the point feature matching result, the first set, and the second set, the matching degree of any first line segment and any second line segment is determined, and the first line segment and the second line segment whose matching degree meets a preset condition are taken as line segments that match each other.
  • since the above line segment matching is based on the matching result of the first point features and the second point features, there is no need to extract feature description information for the line segments themselves, which simplifies the extraction process of the feature description information of the line segments, reduces the amount of calculation and the calculation time, and further improves the efficiency of line segment matching.
  • FIG. 1 shows a flow chart of a line segment matching method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic flowchart of an algorithm provided by an embodiment of the present disclosure
  • Fig. 3 shows a schematic diagram of a line segment matching device provided by an embodiment of the present disclosure
  • Fig. 4 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
  • line segment features play an important role, and can provide accurate constraints when performing pose calculations.
  • most current line segment feature matching methods are based on feature description information extracted from image regions or line segment endpoints.
  • when the endpoints or lengths of detected line segments change, for example because a line segment's endpoints or middle are occluded, or because a line segment is too long to appear completely in both frames of images, the accuracy of the matching results drops significantly;
  • in addition, the matching of the feature description information extracted for line segments and the matching of the extracted point features are performed independently of each other, which increases the amount of calculation and the calculation time and reduces the efficiency of line segment matching.
  • to address this, the present disclosure provides a line segment matching method and apparatus, a computer device, a storage medium, and a computer program that use point features to represent line segments, that is, the first line segment to which each first point feature belongs and the second line segment to which each second point feature belongs. This improves the accuracy of the line segment matching results when the length or the endpoints of a line segment change; for example, part of the point features of a line segment can be used to match the entire line segment.
  • since the above line segment matching is based on the matching result of the first point features and the second point features, there is no need to extract feature description information for the line segments themselves, which simplifies the extraction process of the feature description information of the line segments, reduces the amount of calculation and the calculation time, and further improves the efficiency of line segment matching.
  • Superpoint is a point feature detection and descriptor (feature description information) extraction algorithm based on self-supervised training.
  • the Line Segment Detector (LSD) algorithm first calculates the gradient magnitude and direction of all points in the image, then groups adjacent points whose gradient directions change little into connected regions, then judges, according to rules based on the rectangularity of each region, whether a region needs to be split so as to form multiple regions with high rectangularity, and finally refines and screens all generated regions, keeping the regions that meet the conditions as the final line detection result.
  • OpenCV is a cross-platform computer vision and machine learning software library released under the BSD license (open source), which can run on Linux, Windows, Android, and Mac OS operating systems. A minimal usage sketch of its LSD interface is shown below.
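  • As an illustrative sketch (not part of the patent text): detecting line segments with OpenCV's LSD implementation can look roughly as follows. The image path is a placeholder, and createLineSegmentDetector is missing from some OpenCV 3.x/4.x builds for licensing reasons (it was restored in 4.5.1+).

```python
import cv2

# Load a grayscale frame; "frame.png" is a placeholder path.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# LSD detector: groups pixels with similar gradient direction into line-support
# regions and returns one segment per region.
lsd = cv2.createLineSegmentDetector()
lines, widths, precisions, nfas = lsd.detect(img)  # lines: (N, 1, 4) array of x1, y1, x2, y2

print("detected", 0 if lines is None else len(lines), "line segments")
```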
  • SuperGlue, a feature matching algorithm based on a graph neural network, is used for point feature matching in the embodiments of the present disclosure.
  • the bag-of-words model is a simplified representation model used in natural language processing and information retrieval (IR).
  • the line segment matching method provided by the embodiments of the present disclosure can be applied to visual SLAM.
  • in some weakly textured areas, line segments can be used as front-end features to compensate for an insufficient number of target points and to improve the robustness of visual SLAM front-end tracking.
  • in addition, the detection of line segments is more accurate than the detection of target points and can provide more accurate constraints for pose calculation.
  • the execution subject of the line segment matching method provided in the embodiments of the present disclosure is generally a computer device with certain computing capabilities.
  • the line segment matching method may be implemented by a processor invoking computer-readable instructions stored in a memory.
  • FIG. 1 is a flow chart of a line segment matching method provided by an embodiment of the present disclosure, and the method includes steps S101 to S104, wherein:
  • S101 Acquire a first image and a second image; the first image and the second image are images in the same environment.
  • the first image and the second image may be images in the same environment acquired by using a shooting device, such as a camera.
  • the environment may be the running environment of the robot, for example, the running environment when the robot creates a map.
  • the first image may be the current frame or the image of the current position acquired by the robot;
  • the second image may be the second frame (excluding the current frame) or other frames acquired by the robot (excluding the current frame and the second frame), or images of other locations (excluding the current location) in the robot operating environment acquired by the robot.
  • Target points with the same feature are included in two consecutive frames of images.
  • the first image may be the image (including the first image and the second image) acquired in the above embodiment during the map creation process.
  • the second image may be an image acquired by the robot in the same operating environment during the relocation process.
  • the first image of location A collected during the map creation process and the second image of location A collected during the relocation process include target points with the same point features.
  • S102 Extract multiple first point features in the first image and determine the first line segment to which each first point feature belongs, and extract multiple second point features in the second image and determine the second line segment to which each second point feature belongs.
  • the first point feature is a feature corresponding to the first target point in the first image.
  • the second point feature is a feature corresponding to the second target point in the second image.
  • the first target point is a target point with obvious features in the first image
  • the second target point is a target point with obvious features in the second image.
  • the first image includes multiple first target points
  • the second image includes multiple second target points. Therefore, the first image includes multiple first point features, and the second image includes multiple second point features.
  • Target points with obvious features are, for example, points in the first image captured by the robot in a warehouse containing shelves, wall corners, workstations, and the like.
  • the first target points can include points on the shelves, corners, or workstations, etc.
  • the first point feature may include a point feature on a shelf, a point feature on a wall corner, or a point feature on a workstation, and the like.
  • similarly, the second point features can also include point features of obvious warehouse features such as shelves, wall corners, or workstations.
  • the first feature description information corresponding to the multiple first point features may be further determined based on the extracted multiple first point features.
  • second feature description information corresponding to the multiple second point features may be further determined based on the extracted multiple second point features.
  • the first feature description information is the information describing the feature of the first point
  • the second feature description information is the information describing the feature of the second point.
  • the Superpoint network can be used to detect point features and extract feature description information corresponding to point features.
  • the Superpoint network can be utilized to extract a plurality of first point features in the first image and determine the position information of each first point feature in the first image, and to extract a plurality of second point features in the second image and determine the position information of each second point feature in the second image; it also outputs the first feature description information corresponding to each first point feature and the second feature description information corresponding to each second point feature.
  • first feature description information and the second feature description information may be used to detect two point features that match each other, for example, to detect whether the first point feature and the second point feature are the same point feature.
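  • As an illustrative sketch (not part of the patent text), the detection step can be mimicked with OpenCV: ORB keypoints and descriptors stand in here for the Superpoint point features and feature description information, since Superpoint itself requires a trained network; the image paths are placeholders.

```python
import cv2

# Placeholder paths for the first image and the second image of the same environment.
img1 = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("second_image.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kps1, des1 = orb.detectAndCompute(img1, None)  # first point features P_i and descriptors D_i
kps2, des2 = orb.detectAndCompute(img2, None)  # second point features P'_j and descriptors D'_j

positions1 = [kp.pt for kp in kps1]  # position information of each first point feature
positions2 = [kp.pt for kp in kps2]  # position information of each second point feature
```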
  • the LSD algorithm in OpenCV can also be used to detect line segments in the images, for example, to extract a plurality of first line segments in the first image and determine the position information of the first line segments in the first image, and to extract a plurality of second line segments in the second image and determine the position information of the second line segments in the second image.
  • in addition, the first set of first point features included in any first line segment may be determined, and the second set of second point features included in any second line segment may be determined.
  • a plurality of extracted first point features form a first point feature set, denoted as {P_1, P_2, ..., P_n}, where P_1, P_2, ..., P_n are the first point features; the first feature description information corresponding to each first point feature is determined, and the first feature description information corresponding to the multiple first point features then forms a first feature description information set, denoted as {D_1, D_2, ..., D_n}, where D_1, D_2, ..., D_n are the first feature description information corresponding to the respective first point features.
  • the extracted multiple second point features form a second point feature set, denoted as {P'_1, P'_2, ..., P'_(n')}, where P'_1, P'_2, ..., P'_(n') are the second point features; the second feature description information corresponding to each second point feature is determined, and the second feature description information corresponding to the multiple second point features then forms a second feature description information set, denoted as {D'_1, D'_2, ..., D'_(n')}, where D'_1, D'_2, ..., D'_(n') are the second feature description information corresponding to the respective second point features.
  • a plurality of first line segments extracted from the first image form a first line segment set, namely {L_1, L_2, ..., L_m}, where L_1, L_2, ..., L_m are the first line segments; and a plurality of second line segments extracted from the second image form a second line segment set, denoted as {L'_1, L'_2, ..., L'_(m')}, where L'_1, L'_2, ..., L'_(m') are the second line segments.
  • n represents the number of first point features in the first point feature set
  • n' represents the number of second point features in the second point feature set
  • m represents the number of first line segments in the first line segment set, and m' represents the number of second line segments in the second line segment set.
  • the distance from the first target point corresponding to each first point feature to each first line segment can be traversed to judge whether the first point feature belongs to that first line segment, and the first point features corresponding to the first target points belonging to a first line segment are used as the point feature set of that first line segment, that is, the first set, respectively recorded as L_1: {P_11, P_12, ..., P_1i}, L_2: {P_21, P_22, ..., P_2j}, ..., L_m: {P_m1, P_m2, ..., P_mk}, where i represents the number of first point features in L_1, j represents the number of first point features in L_2, and k represents the number of first point features in L_m; likewise, the distance from the second target point corresponding to each second point feature to each second line segment is traversed to judge whether the second point feature belongs to that second line segment, and the second point features corresponding to the second target points belonging to a second line segment are used as the point feature set of that second line segment, that is, the second set, respectively recorded as L'_1: {P'_11, P'_12, ..., P'_(1i')}, L'_2: {P'_21, P'_22, ..., P'_(2j')}, ..., L'_(m'): {P'_(m'1), P'_(m'2), ..., P'_(m'k')}, where i' represents the number of second point features in L'_1, j' represents the number of second point features in L'_2, and k' represents the number of second point features in L'_(m'). A sketch of this point-line association step is given below.
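  • The following is a minimal sketch of the point-line association step, assuming 2D point coordinates and segment endpoints; the helper names and the distance threshold are illustrative, not from the patent.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment with endpoints a and b (2D)."""
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    t = np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-12)  # projection parameter
    t = np.clip(t, 0.0, 1.0)                             # clamp to the segment
    return np.linalg.norm(p - (a + t * ab))

def associate_points_to_segments(points, segments, dist_threshold):
    """Group point features by the line segment they belong to.

    points:   list of (x, y) target point coordinates
    segments: list of ((x1, y1), (x2, y2)) segment endpoints
    Returns a dict: segment index -> list of point indices whose distance to
    that segment is <= dist_threshold (the fourth/fifth preset threshold).
    """
    groups = {}
    for pi, p in enumerate(points):
        for si, (a, b) in enumerate(segments):
            if point_segment_distance(p, a, b) <= dist_threshold:
                groups.setdefault(si, []).append(pi)
    return groups
```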
  • the point features in the first point feature set and the second point feature set determined in S102 may be matched to determine a matching result.
  • the matching methods of point features can be different, such as the following two methods:
  • SuperGlue can be used to match the first point feature and the second point feature. In specific implementation, it can be based on the position information of the first point feature, the first feature description information corresponding to the first point feature, The position information of the second point feature and the second feature description information corresponding to the second point feature match the first point feature and the second point feature, and determine the matching result of the first point feature and the second point feature.
  • the position information of the first point features and the position information of the second point features can be used to match first point features and second point features that fall within the same preset position range in the same environment, and then the first feature description information corresponding to the first point features within that position range is matched against the second feature description information corresponding to the second point features, so as to achieve feature matching; the matching results between each first point feature in the first point feature set and each second point feature in the second point feature set are respectively denoted as M_1, M_2, ..., M_f, where a matching result can be a matching pair in which a first point feature matches a second point feature;
  • for example, the matching results can be respectively recorded as M_1: {P_1, P'_10}, M_2: {P_2, P'_3}, ..., M_f: {P_n, P'_(n')}.
  • the bag-of-words model can be used to match the first point feature and the second point feature.
  • the first point features and the second point features can be matched based on the first feature description information corresponding to the first point features and the second feature description information corresponding to the second point features, so as to determine the matching result of the first point features and the second point features.
  • each second point feature is traversed, and the second feature description information corresponding to that second point feature is compared in turn with the first feature description information corresponding to the first point features that have not yet been matched, so as to determine the first point feature that matches it, as sketched below.
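  • A minimal sketch of this descriptor-comparison loop, assuming float descriptor vectors (as produced by Superpoint-style networks); the greedy nearest-descriptor strategy and the distance threshold are illustrative stand-ins for the bag-of-words or SuperGlue matching described above.

```python
import numpy as np

def match_descriptors(first_descs, second_descs, max_dist=0.7):
    """Greedy nearest-descriptor matching.

    first_descs:  (n, d) array of first feature description information D_1..D_n
    second_descs: (n', d) array of second feature description information D'_1..D'_n'
    Returns a list of (first_index, second_index) matching pairs M_1..M_f.
    """
    matches = []
    used_first = set()
    for j, d2 in enumerate(second_descs):            # traverse each second point feature
        dists = np.linalg.norm(first_descs - d2, axis=1)
        for i in np.argsort(dists):                  # closest unmatched first feature wins
            if i not in used_first and dists[i] <= max_dist:
                matches.append((int(i), j))
                used_first.add(int(i))
                break
    return matches
```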
  • S104 Based on the matching result, determine the matching degree of any first line segment and any second line segment, and use the first line segment and the second line segment whose matching degree meets a preset condition as mutually matching line segments.
  • any first line segment is any line segment extracted from the first image.
  • Any second line segment may be any line segment extracted from the second image. Since a line segment is composed of multiple target points, the matching results of the point features corresponding to those target points, that is, the matching results of the first point features and the second point features, recorded as {M_1, M_2, ..., M_f}, can be used to further determine the degree of matching between any first line segment and any second line segment.
  • the matching degree between any first line segment and any second line segment can be determined according to S1041-S1042:
  • S1041 Based on the matching result, screen out, from the first point features included in any first line segment, the target point features that match the second point features included in any second line segment; here the matching results include {M_1, M_2, ..., M_f}.
  • S1042 Based on the first number of target point features, the second number of first point features included in any first line segment, and the third number of second point features included in any second line segment, determine the degree of matching between any first line segment and any second line segment.
  • the first number of target point features is the number of target points corresponding to the target point feature.
  • the second number of first point features included in any first line segment is the number of first target points included in any first line segment.
  • the third number of second point features included in any second line segment is the number of second target points included in any second line segment.
  • a matrix may be constructed in real time to represent the first quantity of target point features. Specifically, taking the application scenario of relocation as an example, the second image is obtained, any second line segment in the second image is traversed, and each first line segment is respectively matched to determine the first number of target point features.
  • for example, the first number of target point features for which the second line segment L'_1 matches the first line segment L_1 is determined to be S_11, the first number for which L'_1 matches L_2 is S_12, ..., and the first number for which L'_1 matches L_m is S_1m; the first number of target point features for which the second line segment L'_2 matches L_1 is S_21, the first number for which L'_2 matches L_2 is S_22, ..., and the first number for which L'_2 matches L_m is S_2m; and so on, until the first number of target point features for which the second line segment L'_(m') matches L_1 is S_(m'1), the first number for which L'_(m') matches L_2 is S_(m'2), ..., and the first number for which L'_(m') matches L_m is S_(m'm). Specifically, refer to the matrix shown in Table 1.
  • the first number of target point features for a pair of a first line segment and a second line segment shown in Table 1 may be 0, that is, that first line segment does not match that second line segment.
  • the matrix in Table 1 is constructed in real time to represent the first number of target point features; entries need only be constructed for first line segments and second line segments that actually share matched target point features after matching, which saves computation and improves matching efficiency. A sketch of this counting step is given below.
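  • A minimal sketch of building the Table 1 matrix from the point feature matching result and the first/second sets; the function and argument names are illustrative.

```python
import numpy as np

def count_shared_matches(first_sets, second_sets, matches):
    """Build the Table 1 matrix S, where S[j, i] is the first number of target
    point features shared by second line segment L'_(j+1) and first line segment L_(i+1).

    first_sets:  list of sets, first_sets[i]  = indices of first point features on L_(i+1)
    second_sets: list of sets, second_sets[j] = indices of second point features on L'_(j+1)
    matches:     list of (first_index, second_index) pairs M_1..M_f
    """
    S = np.zeros((len(second_sets), len(first_sets)), dtype=int)
    for i, fset in enumerate(first_sets):
        for j, sset in enumerate(second_sets):
            S[j, i] = sum(1 for (pf, ps) in matches if pf in fset and ps in sset)
    return S
```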
  • after the first number is determined, a first ratio of the first number to the second number and a second ratio of the first number to the third number are determined, and the matching degree of any first line segment and any second line segment can then be determined based on the first ratio and the second ratio.
  • for example, for the first line segment L_m and the second line segment L'_(m'), the first ratio is S_(m'm)/N_m and the second ratio is S_(m'm)/N'_(m'), where N_m is the second number of first point features included in L_m and N'_(m') is the third number of second point features included in L'_(m').
  • the above includes, but is not limited to, taking the maximum of the first ratio and the second ratio as the matching degree; alternatively, the first ratio and/or the second ratio can be optimized, and weight values can be determined for the first ratio and the second ratio respectively, so that the matching degree is determined comprehensively based on the ratios and their weight values.
  • the embodiment of the present disclosure does not limit this; without departing from the scope of the present disclosure, those skilled in the art can make various substitutions and modifications to the process of determining the matching degree from the first ratio and the second ratio, and these substitutions and modifications should fall within the scope of the present disclosure.
  • based on the matching degree, it is judged whether the first line segment matches the second line segment.
  • in specific implementation, the target ratio can be determined based on the first ratio and the second ratio and used as the matching degree; when the target ratio is greater than or equal to the first preset threshold, it is determined that the first line segment and the second line segment are matching line segments, as sketched in the code after this discussion.
  • for example, when the target ratio v_(m'm) is greater than or equal to the first preset threshold, it may be determined that the first line segment L_m and the second line segment L'_(m') are matching line segments.
  • the target ratio may be the maximum value of the target ratios screened out above, or may also be an optimized ratio, which is not limited in this embodiment of the present disclosure.
  • the value range of the first preset threshold is between 0 and 1, which may be the result of debugging parameters by those skilled in the art, and the specific data are not limited in the embodiments of the present disclosure.
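  • A minimal sketch of the matching-degree decision for one line segment pair, taking the maximum of the two ratios as the target ratio (one of the options described above); the threshold value 0.6 is an assumed example within the (0, 1) range.

```python
def line_matching_degree(shared_count, n_first_points, n_second_points, threshold=0.6):
    """Decide whether a first line segment and a second line segment match.

    shared_count    : first number S of shared matched target point features
    n_first_points  : second number N of first point features on the first line segment
    n_second_points : third number N' of second point features on the second line segment
    threshold       : first preset threshold (assumed value)
    Returns (matching_degree, is_match).
    """
    if n_first_points == 0 or n_second_points == 0:
        return 0.0, False
    first_ratio = shared_count / n_first_points    # S / N
    second_ratio = shared_count / n_second_points  # S / N'
    degree = max(first_ratio, second_ratio)        # target ratio v
    return degree, degree >= threshold
```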
  • after the first line segment to which a first point feature belongs is determined, it may be further determined whether that first line segment is a first line segment to be matched.
  • the first line segment to be matched may be preset, including a line segment whose number of first point features is greater than a second preset threshold.
  • the number of first point features in the first line segment is determined, and when the number of first point features is less than or equal to the second preset threshold, the first line segment and the first point features in the first line segment are eliminated.
  • the second line segment to be matched may be preset, including a line segment whose number of second point features is greater than a third preset threshold.
  • similarly, the number of second point features in the second line segment is determined, and when the number of second point features is less than or equal to the third preset threshold, the second line segment and the second point features in the second line segment are eliminated, as sketched below.
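  • A minimal sketch of this filtering step on the point sets built during point-line association; the threshold of 3 point features is an assumed value for the second/third preset threshold.

```python
def filter_line_segments(point_sets, min_points=3):
    """Remove line segments that contain too few point features.

    point_sets : dict mapping line segment index -> list of point feature indices
    min_points : second/third preset threshold (assumed value)
    Returns a new dict keeping only segments whose point count exceeds the threshold.
    """
    return {seg: pts for seg, pts in point_sets.items() if len(pts) > min_points}
```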
  • S1021 For each first point feature among the plurality of first point features, determine a distance from a first target point corresponding to the first point feature to any first line segment.
  • S1022 Use the first point feature corresponding to the first target point whose distance is less than or equal to the fourth preset threshold as the point feature belonging to any first line segment.
  • the determined position information of each first target point can be used to determine the distance d_1 from the first target point to any first line segment; then, using the preset fourth preset threshold T_(d_1), the first point feature corresponding to a first target point with d_1 ≤ T_(d_1) is taken as a point feature belonging to that first line segment, that is, as a point feature in the first set corresponding to that first line segment.
  • S1023 For each second point feature among the plurality of second point features, determine a distance from the second target point corresponding to the second point feature to any second line segment.
  • S1024 Use the second point feature corresponding to the second target point whose distance is less than or equal to the fifth preset threshold as the point feature belonging to any second line segment.
  • the determined position information of each second target point can be used to determine the distance d_2 from the second target point to any second line segment; then, using the preset fifth preset threshold T_(d_2), the second point feature corresponding to a second target point with d_2 ≤ T_(d_2) is taken as a point feature belonging to that second line segment, that is, as a point feature in the second set corresponding to that second line segment.
  • second preset threshold, third preset threshold, fourth preset threshold and fifth preset threshold can be determined by those skilled in the art based on empirical values, and are not specifically limited in the embodiments of the present disclosure.
  • in this way, point features are used to represent line segments, that is, the first line segment to which each first point feature belongs and the second line segment to which each second point feature belongs, which improves the accuracy of the line segment matching results when the length of a line segment changes or its endpoints change; for example, some of the point features of a line segment can be used to match the entire line segment.
  • since the above line segment matching is based on the matching result of the first point features and the second point features, there is no need to extract feature description information for the line segments themselves, which simplifies the extraction process of the feature description information of the line segments, reduces the amount of calculation and the calculation time, and further improves the efficiency of line segment matching.
  • referring to FIG. 2, the embodiment of the present disclosure also provides a schematic diagram of an algorithm flow, wherein the algorithm involved in the embodiment of the present disclosure includes the following modules: 211 represents the Superpoint and LSD module corresponding to the first image, 212 represents the Superpoint and LSD module corresponding to the second image, 221 represents the point-line association module corresponding to the first point features and the first line segments, 222 represents the point-line association module corresponding to the second point features and the second line segments, 23 represents the point feature matching module, and 24 represents the line segment matching module.
  • the input of module 211 is the first image
  • the output of module 211 is the first point feature set, namely {P_1, P_2, ..., P_n}, the first feature description information set, namely {D_1, D_2, ..., D_n}, and the first line segment set, namely {L_1, L_2, ..., L_m}.
  • the input of module 212 is the second image
  • the output of module 212 is the second point feature set, namely {P'_1, P'_2, ..., P'_(n')}, the second feature description information set, namely {D'_1, D'_2, ..., D'_(n')}, and the second line segment set, namely {L'_1, L'_2, ..., L'_(m')}.
  • the input of module 221 can be {P_1, P_2, ..., P_n} and {L_1, L_2, ..., L_m}; the module associates the first point features with the first line segments, using the distance d_1 from the first target point corresponding to a first point feature to a first line segment to determine whether that first point feature belongs to the first line segment;
  • the output of module 221 can be L_1: {P_11, P_12, ..., P_1i}, L_2: {P_21, P_22, ..., P_2j}, ..., L_m: {P_m1, P_m2, ..., P_mk}.
  • the input of module 222 can be {P'_1, P'_2, ..., P'_(n')} and {L'_1, L'_2, ..., L'_(m')}; the module associates the second point features with the second line segments, using the distance d_2 from the second target point corresponding to a second point feature to a second line segment to determine whether that second point feature belongs to the second line segment;
  • the output of module 222 can be L'_1: {P'_11, P'_12, ..., P'_(1i')}, L'_2: {P'_21, P'_22, ..., P'_(2j')}, ..., L'_(m'): {P'_(m'1), P'_(m'2), ..., P'_(m'k')}.
  • the input of module 23 needs to be determined according to the actual scene.
  • the point feature matching module can be a SuperGlue algorithm model, and its input can be {P_1, P_2, ..., P_n}, {D_1, D_2, ..., D_n}, {P'_1, P'_2, ..., P'_(n')} and {D'_1, D'_2, ..., D'_(n')}; its output can be {M_1, M_2, ..., M_f}.
  • alternatively, the point feature matching module 23 can be a bag-of-words model, and its input can be {D_1, D_2, ..., D_n} and {D'_1, D'_2, ..., D'_(n')}; its output can be {M_1, M_2, ..., M_f}.
  • the input of module 24 can be the outputs of module 221, module 222, and module 23; the output of module 24 is the matching degree. A sketch tying these modules together is given below.
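  • The following end-to-end sketch mirrors the module layout of FIG. 2 under explicit substitutions: ORB keypoints/descriptors stand in for Superpoint, a brute-force Hamming matcher stands in for SuperGlue or the bag-of-words model, and all thresholds are assumed values. It reuses the associate_points_to_segments, filter_line_segments, and line_matching_degree helpers sketched earlier in this section.

```python
import cv2

def match_line_segments(img1, img2, dist_thr=3.0, min_points=3, degree_thr=0.6):
    """Sketch of modules 211/212 (features + lines), 221/222 (point-line
    association), 23 (point matching), and 24 (line segment matching).
    img1, img2: grayscale uint8 images of the same environment."""
    # Modules 211 / 212: point features, descriptors, and LSD line segments
    orb = cv2.ORB_create()
    kps1, des1 = orb.detectAndCompute(img1, None)
    kps2, des2 = orb.detectAndCompute(img2, None)
    lsd = cv2.createLineSegmentDetector()
    raw1, raw2 = lsd.detect(img1)[0], lsd.detect(img2)[0]
    segs1 = [] if raw1 is None else [((l[0], l[1]), (l[2], l[3])) for l in raw1.reshape(-1, 4)]
    segs2 = [] if raw2 is None else [((l[0], l[1]), (l[2], l[3])) for l in raw2.reshape(-1, 4)]

    # Modules 221 / 222: associate points to segments, then drop sparse segments
    set1 = filter_line_segments(
        associate_points_to_segments([kp.pt for kp in kps1], segs1, dist_thr), min_points)
    set2 = filter_line_segments(
        associate_points_to_segments([kp.pt for kp in kps2], segs2, dist_thr), min_points)

    # Module 23: point feature matching result {M_1, ..., M_f}
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [(m.queryIdx, m.trainIdx) for m in bf.match(des1, des2)]

    # Module 24: line segment matching via shared matched point features
    results = []
    for i, fpts in set1.items():
        for j, spts in set2.items():
            shared = sum(1 for (pf, ps) in matches if pf in set(fpts) and ps in set(spts))
            degree, ok = line_matching_degree(shared, len(fpts), len(spts), degree_thr)
            if ok:
                results.append((i, j, degree))
    return results
```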
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
  • the embodiment of the present disclosure also provides a line segment matching device corresponding to the line segment matching method; since the principle by which the device in the embodiment of the present disclosure solves the problem is similar to that of the above line segment matching method of the embodiment of the present disclosure, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
  • the device includes: an acquisition module 301, a determination module 302, a first matching module 303, and a second matching module 304; wherein,
  • An acquisition module 301 configured to acquire a first image and a second image; the first image and the second image are images in the same environment;
  • a determining module 302 configured to extract multiple first point features in the first image and determine the first line segment to which each first point feature belongs, and to extract multiple second point features in the second image and determine the second line segment to which each second point feature belongs;
  • the first matching module 303 is configured to match the first point feature in the first image with the second point feature in the second image to obtain a matching result
  • the second matching module 304 is configured to determine the matching degree of any first line segment and any second line segment based on the matching result, and match the first line segment and the second line segment whose matching degree meets a preset condition as line segments that match each other.
  • the second matching module 304 is configured to, based on the matching result, screen out, from the first point features included in any first line segment, the target point features that match the second point features included in any second line segment; and to determine the degree of matching between that first line segment and that second line segment based on the first number of target point features, the second number of first point features included in the first line segment, and the third number of second point features included in the second line segment.
  • the second matching module 304 is configured to determine a first ratio of the first quantity to the second quantity and a second ratio of the first quantity to the third quantity, and, based on the first ratio and the second ratio, determine the degree of matching between the arbitrary first line segment and the arbitrary second line segment.
  • the second matching module 304 is configured to determine a target ratio based on the first ratio and the second ratio, and use the target ratio as the matching degree; if the target ratio is greater than or equal to a first preset threshold, it is determined that the first line segment and the second line segment are matching line segments.
  • the determining module 302 is further configured to, after extracting the multiple first point features in the first image and determining the first line segment to which each first point feature belongs, and after extracting the plurality of second point features in the second image and determining the second line segment to which each second point feature belongs, determine a first set of first point features included in any first line segment, and determine a second set of second point features included in any second line segment;
  • the second matching module 304 is configured to, based on the matching result, the first set, and the second set, screen out, from the first point features included in any first line segment, the target point features that match the second point features included in any second line segment.
  • the line segment matching device further includes a filtering module 305;
  • the filtering module 305 is configured to, before the first point features in the first image are matched with the second point features in the second image to obtain a matching result, determine the number of first point features in the first line segment, and eliminate the first line segment and the first point features in the first line segment when the number of first point features is less than or equal to the second preset threshold; and to determine the number of second point features in the second line segment, and eliminate the second line segment and the second point features in the second line segment when the number of second point features is less than or equal to the third preset threshold.
  • the determining module 302 is further configured to, after extracting a plurality of first point features in the first image and a plurality of second point features in the second image, determine First feature description information corresponding to a plurality of first point features and second feature description information corresponding to the plurality of second point features;
  • the first matching module 303 is configured to match the first point feature and the second point feature based on the first feature description information and the second feature description information to obtain the matching result.
  • the determining module 302 is configured to, for each first point feature in the plurality of first point features, determine the distance from the first target point corresponding to the first point feature to any first line segment, and take the first point feature corresponding to the first target point whose distance is less than or equal to the fourth preset threshold as a point feature belonging to that first line segment; and, for each second point feature in the plurality of second point features, determine the distance from the second target point corresponding to the second point feature to any second line segment, and take the second point feature corresponding to the second target point whose distance is less than or equal to the fifth preset threshold as a point feature belonging to that second line segment.
  • a schematic diagram of another line segment matching device provided by an embodiment of the present disclosure is also given; the device includes: an acquisition module, a determination module, a first matching module, and a second matching module; wherein,
  • An acquisition module configured to acquire a first image and a second image, wherein the first image and the second image are images in the same environment;
  • a determining module configured to acquire a first point feature set and a first line segment set in the first image and determine, based on the first point feature set and the first line segment set, a first set of first line segments including the first point features in the first line segment set; and to acquire a second point feature set and a second line segment set in the second image and determine, based on the second point feature set and the second line segment set, a second set of second line segments including the second point features in the second line segment set;
  • a first matching module configured to match the first point feature in the first image with the second point feature in the second image to obtain a point feature matching result
  • a second matching module configured to determine the matching degree of any first line segment and any second line segment based on the point feature matching result, the first set, and the second set, and to take the first line segment and the second line segment whose matching degree meets a preset condition as mutually matching line segments.
  • the second matching module is configured to determine the target point features in any first line segment in the first set based on the point feature matching result, and to determine, based on the target point features, the first set, and the second set, a matching degree between any first line segment in the first set and any second line segment in the second set.
  • the second matching module is configured to, based on the point feature matching result, screen out, from the first point features included in any first line segment in the first set, the target point features that match the second point features included in any second line segment in the second set.
  • the second matching module is configured to determine, based on the first number of target point features, the second number of first point features included in the first line segment in the first set, and the third number of second point features included in the second line segment in the second set, the degree of matching between any first line segment in the first set and any second line segment in the second set.
  • the second matching module is configured to determine a first ratio of the first quantity to the second quantity, and to determine a second ratio of the first quantity to the third quantity;
  • the second matching module is configured to determine a target ratio based on the first ratio and the second ratio, and use the target ratio as the matching degree;
  • taking the first line segment and the second line segment whose matching degree meets the preset condition as mutually matching line segments includes:
  • when the target ratio is greater than or equal to a first preset threshold, determining that the first line segment and the second line segment are matching line segments.
  • the line segment matching device further includes:
  • a filtering module configured to determine the number of first point features included in the first line segment in the first set, and to remove the first line segment and the first point features included in the first line segment when the number of first point features is less than or equal to a second preset threshold;
  • the determination module is further configured to determine the first feature description information corresponding to the first point features in the first point feature set and the second feature description information corresponding to the second point features in the second point feature set;
  • the matching of the first point feature in the first image with the second point feature in the second image to obtain a point feature matching result includes:
  • matching the first point feature and the second point feature based on the first feature description information and the second feature description information to obtain a point feature matching result.
  • the determination module is configured to determine the distance from the first target point corresponding to the first point feature in the first point feature set to any first line segment in the first line segment set;
  • forming the first set from the first line segments corresponding to first target points whose distance is less than or equal to the fourth preset threshold, wherein the first point feature corresponding to such a first target point is regarded as a point feature belonging to the first line segment in the first set.
  • the determining module is configured to determine a distance from a second target point corresponding to a second point feature in the second point feature set to any second line segment in the second line segment set;
  • forming the second set from the second line segments corresponding to second target points whose distance is less than or equal to the fifth preset threshold, wherein the second point feature corresponding to such a second target point is regarded as a point feature belonging to the second line segment in the second set.
  • the embodiment of the present application also provides a computer device.
  • referring to FIG. 4, which is a schematic structural diagram of a computer device provided in an embodiment of the present application, the computer device includes:
  • the processor 41 executes the following steps: S101: Acquire the first image and the second image; the first image and the second image are images of the same environment; S102: Extract multiple first point features in the first image and determine the first line segment to which each first point feature belongs, and extract multiple second point features in the second image and determine the second line segment to which each second point feature belongs; S103: Match the first point features in the first image with the second point features in the second image to obtain a matching result; S104: Based on the matching result, determine the matching degree of any first line segment and any second line segment, and take the first line segment and the second line segment whose matching degree meets a preset condition as line segments that match each other.
  • the memory 42 includes an internal memory 421 and an external memory 422; the internal memory 421 is used for temporarily storing operation data of the processor 41 and data exchanged with the external memory 422, such as a hard disk; the processor 41 exchanges data with the external memory 422 through the internal memory 421.
  • the processor 41 communicates with the memory 42 through the bus 43, so that the processor 41 executes the execution instructions mentioned in the above method embodiments.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the line segment matching method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides a computer program product, including computer instructions, and when the computer instructions are executed by a processor, the above-mentioned steps of the line segment matching method are implemented.
  • the computer program product can be any product that can realize the above-mentioned line segment matching method, and some or all of the solutions in the computer program product that contribute to the prior art can be implemented as a software product (such as a software development kit (Software Development Kit, SDK)); the software product can be stored in a storage medium, and the computer instructions contained therein cause relevant devices or processors to execute some or all of the steps of the above-mentioned line segment matching method.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division, and there may be other division manners in actual implementation; for example, multiple modules or components may be combined, or some features may be ignored or not implemented.
  • the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices or modules, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present disclosure may be integrated into one processing module, each module may exist separately physically, or two or more modules may be integrated into one module.
  • if the functions are implemented in the form of software function modules and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
  • based on such understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

本公开提供了一种线段匹配方法、装置、计算机设备和存储介质,其中,该方法包括:获取第一图像和第二图像;第一图像和第二图像为同一环境中的图像;提取第一图像中的多个第一点特征,并确定每个第一点特征所属的第一线段,以及,提取第二图像中的多个第二点特征,并确定每个第二点特征所属的第二线段;将第一图像中的第一点特征与第二图像中的第二点特征进行匹配,得到匹配结果;基于匹配结果,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。本公开实施例利用点特征来表示线段,提升了线段的长度变化和端点变化时线段匹配结果的准确率。

Description

一种线段匹配方法、装置、计算机设备和存储介质
本申请要求于2021年09月17日提交中国专利局、申请号为202111092811.5、发明名称为“一种线段匹配方法、装置、计算机设备和存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本公开涉及视觉以及计算机技术领域,具体而言,涉及一种线段匹配方法、装置、计算机设备和存储介质。
背景技术
对于视觉同步定位与地图创建(Simultaneous Localization and Mapping,SLAM)而言,线段特征有着重要的作用,可以在进行位姿计算时,提供准确的约束。
当前线段特征的匹配方法大多基于提取到的图像区域中的或线段端点的特征描述信息进行匹配,但是,由于线段检测端点和长度的不确定性,比如线段端点或中间有遮挡、线段过长导致横跨两帧图像等,将导致匹配结果准确率大幅下降;另外,在将线段特征集成到视觉SLAM里时,提取的线段的特征描述信息的匹配,和,提取的点特征的匹配,是相互独立的,增大了运算量和运算时长,降低了线段匹配效率。
发明内容
本公开实施例至少提供一种线段匹配方法、装置、计算机设备、存储介质和计算机程序。
第一方面,本公开实施例提供了一种线段匹配方法,包括:
获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;
获取所述第一图像中的多个第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
获取所述第二图像中的多个第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
一种可选的实施方式中,所述基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,包括:
基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征;
基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征,包括:
基于所述点特征匹配结果,从所述第一集合中的任意第一线段所包括的第一点特征中,筛选与所述第二集合中的任意第二线段所包括的第二点特征相匹配的目标点特征。
一种可选的实施方式中，所述基于所述目标点特征、所述第一集合以及所述第二集合，确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度，包括：
基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
确定所述第一数量与所述第二数量的第一比值,以及,确定所述第一数量与所述第三数量的第二比值;
基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
基于所述第一比值和所述第二比值,确定目标比值,并将所述目标比值作为所述匹配程度;
所述将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段,包括:
在所述目标比值大于或等于第一预设阈值的情况下,确定第一线段和第二线段为相互匹配的线段。
一种可选的实施方式中,在所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果之前,还包括:
确定所述第一集合中第一线段所包括的第一点特征的数量,在所述第一点特征的数量小于或等于第二预设阈值的情况下,剔除所述第一线段以及所述第一线段所包括的第一点特征;以及,
确定所述第二集合中第二线段所包括的第二点特征的数量,在所述第二点特征的数量小于或等于第三预设阈值的情况下,剔除所述第二线段以及所述第二线段所包括的第二点特征。
一种可选的实施方式中,在获取所述第一图像中的第一点特征集合和所述第二图像中的第二点特征集合之后,还包括:
确定所述第一点特征集合中第一点特征对应的第一特征描述信息和所述第二点特征集合中第二点特征对应的第二特征描述信息;
所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果,包括:
基于所述第一特征描述信息和所述第二特征描述信息,将所述第一点特征和所述第二点特征进行匹配,得到点特征匹配结果。
一种可选的实施方式中,所述基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合,包括:
确定所述第一点特征集合中的第一点特征对应的第一目标点到所述第一线段集合中任意第一线段的距离;
所述距离小于或等于第四预设阈值的第一目标点所对应的第一线段构成所述第一集合,其中,所述第一目标点对应的第一点特征为属于所述第一集合中的第一线段的点特征。
一种可选的实施方式中,所述基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合,包括:
确定所述第二点特征集合中的第二点特征对应的第二目标点到所述第二线段集合中第二线段的距离;
所述距离小于或等于第五预设阈值的第二目标点所对应的第二线段构成所述第二集合,其中,所述第二目标点对应的第二点特征为属于第二集合中的任意第二线段的点特征。
第二方面,本公开实施例还提供一种线段匹配装置,包括:
获取模块,用于获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;
确定模块,用于获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
第一匹配模块,用于将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
第二匹配模块,用于基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
第三方面,本公开实施例还提供一种计算机设备,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行上述第一方面,或第一方面中任一种可能的线段匹配方法的步骤。
第四方面,本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述第一方面,或第一方面中任一种可能的线段匹配方法的步骤。
第五方面,本公开实施例还提供一种计算机程序,其中,当所述计算机程序在计算机中执行时,令计算机执行上述线段匹配方法的步骤。
关于上述线段匹配装置、计算机设备、存储介质和计算机程序的效果描述参见上述线段匹配方法的说明,这里不再赘述。
本公开实施例提供的一种线段匹配方法、装置、计算机设备、存储介质和计算机程序,通过获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段,其利用点特征来表示线段,即每个第一点特征所属的第一线段和每个第二点特征所属的第二线段,提升了线段的长度变化和端点变化时线段匹配结果的准确率,比如,能够利用线段的一部分特征匹配整条线段。另外,上述线段的匹配是基于第一点特征和第二点特征的匹配结果,因此,无需提取线段的特征描述信息,简化了线段的特征描述信息的提取过程,节省了运算量和运算时长,进一步提升了线段匹配的效率。
为使本公开的上述目的、特征和优点能更明显易懂,下文特举较佳实施例,并配合所附附图,作详细说明如下。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例中所需要使用的附图作 简单地介绍,此处的附图被并入说明书中并构成本说明书中的一部分,这些附图示出了符合本公开的实施例,并与说明书一起用于说明本公开的技术方案。应当理解,以下附图仅示出了本公开的某些实施例,因此不应被看作是对范围的限定,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他相关的附图。
图1示出了本公开实施例所提供的一种线段匹配方法的流程图;
图2示出了本公开实施例所提供的一种算法流程示意图;
图3示出了本公开实施例所提供的一种线段匹配装置的示意图;
图4示出了本公开实施例所提供的一种计算机设备的结构示意图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例中附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。通常在此处附图中描述和示出的本公开实施例的组件可以以各种不同的配置来布置和设计。因此,以下对在附图中提供的本公开的实施例的详细描述并非旨在限制要求保护的本公开的范围,而是仅仅表示本公开的选定实施例。基于本公开的实施例,本领域技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
另外,本公开实施例中的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。
在本文中提及的“多个或者若干个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
经研究发现,对于SLAM,线段特征有着重要的作用,可以在进行位姿计算时,提供准确的约束。当前线段特征的匹配方法大多基于提取到的图像区域中的或线段端点的特征描述信息进行匹配,但是,由于线段检测端点和长度的不确定性,比如线段端点或中间有遮挡、线段过长导致横跨两帧图像等,将导致匹配结果准确率大幅下降;另外,在将线段特征集成到视觉SLAM里时,提取的线段的特征描述信息的匹配,和,提取的点特征的匹配,是相互独立的,增大了运算量和运算时长,降低了线段匹配效率。
基于上述研究,本公开提供了一种线段匹配方法、装置、计算机设备、存储介质和计算机程序,利用点特征来表示线段,即每个第一点特征所属的第一线段和每个第二点特征所属的第二线段,提升了线段的长度变化和端点变化时线段匹配结果的准确率,比如,能够利用线段的一部分特征匹配整条线段。另外,上述线段的匹配是基于第一点特征和第二点特征的匹配结果,因此,无需提取线段的特征描述信息,简化了线段的特征描述信息的提取过程,节省了运算量和运算时长,进一步提升了线段匹配的效率。
针对以上方案所存在的缺陷,均是发明人在经过实践并仔细研究后得出的结果,因此,上述问题的发现过程以及下文中本公开针对上述问题所提出的解决方案,都应该是发明人在本公开过程中对本公开做出的贡献。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
为便于对本公开实施例进行理解,下面对本公开实施例中所涉及到的名词做详细介绍:
1、Superpoint,是基于自监督训练的点特征检测和描述符(特征描述信息)提取算法。
2、直线段检测算法,(Line Segment Detector,LSD),首先计算图像中所有点的梯度大小和方向,然后将梯度方向变化小且相邻的点作为一个连通域,接着根据每一个域的矩 形度判断是否需要按照规则将其断开以形成多个矩形度较大的域,最后对生成的所有的域做改善和筛选,保留其中满足条件的域,即为最后的直线检测结果。
3、OpenCV是一个基于BSD许可(开源)发行的跨平台计算机视觉和机器学习软件库,可以运行在Linux、Windows、Android和Mac OS操作系统上。
4、SuperGlue,一种基于图卷积神经网络的特征匹配算法,本公开实施例中用于点特征的匹配。
5、词袋模型(Bag-of-words model),是个在自然语言处理和信息检索(IR)下被简化的表达模型。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种线段匹配方法的应用场景进行介绍,本发明实施例提供的线段匹配方法可以应用于视觉SLAM,在一些弱纹理区域内,线段可以作为一种前端特征来补充目标点不足的缺点,提升视觉SLAM前端跟踪的鲁棒性,同时,线段的检测相比较目标点的检测更为准确,可以在进行位姿计算的时候提供更准确的约束。
为便于对本实施例进行理解,首先对本公开实施例所公开的一种线段匹配方法进行详细介绍,本公开实施例所提供的线段匹配方法的执行主体一般为具有一定计算能力的计算机设备。在一些可能的实现方式中,该线段匹配方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。
下面以执行主体为计算机设备为例对本公开实施例提供的线段匹配方法加以说明。
基于上述应用场景的介绍,本公开实施例提供了一种线段匹配方法,参见图1所示,其为本公开实施例提供的一种线段匹配方法的流程图,所述方法包括步骤S101~S104,其中:
S101:获取第一图像和第二图像;第一图像和第二图像为同一环境中的图像。
本步骤中,第一图像和第二图像可以是利用拍摄设备,比如相机,获取到的同一环境中的图像。这里,环境可以为机器人的运行环境,比如,在机器人创建地图时的运行环境等。
示例性的,在地图创建的应用场景中,第一图像可以为机器人获取到的当前帧或当前位置的图像;第二图像可以为机器人获取到的第二帧(不包括当前帧)或其他帧(不包括当前帧和第二帧)的图像,或者,为机器人获取到的机器人运行环境中的其他位置(不包括当前位置)的图像。连续两帧图像中包括具有相同点特征的目标点。
延续上例,在地图创建后的重定位的应用场景中,第一图像可以为在上述实施例中获取到的在地图创建过程中的图像(包括第一图像和第二图像)。第二图像可以为机器人在重定位过程中,在同一运行环境下所获取到的图像。在地图创建过程中所采集到的位置A的第一图像和重定位过程中采集到的位置A的第二图像包括具有相同点特征的目标点。
S102:提取第一图像中的多个第一点特征,并确定每个第一点特征所属的第一线段,以及,提取第二图像中的多个第二点特征,并确定每个第二点特征所属的第二线段。
本步骤中,第一点特征是第一图像中第一目标点对应的特征。第二点特征是第二图像中第二目标点对应的特征。第一目标点为第一图像中具有明显特征的目标点,第二目标点为第二图像中具有明显特征的目标点。第一图像中包括多个第一目标点,第二图像中包括多个第二目标点。因此,第一图像中包括有多个第一点特征,第二图像中包括有多个第二点特征。
具有明显特征的目标点,例如,拍摄到的机器人在仓库中的第一图像,包括货架、墙角、工作站等,则第一目标点可以包括货架上的点、墙角的点或工作站的点等,第一点特征可以包括货架上的点特征、墙角的点特征或工作站的点特征等。第二点特征也可以包括货架、墙角或工作站等仓库明显特征的点特征。
这里,在提取了第一图像中的多个第一点特征之后,可以基于提取到的多个第一点特征,进一步确定多个第一点特征对应的第一特征描述信息。在提取了第二图像中的多个第二点特征之后,可以基于提取到的多个第二点特征,进一步确定多个第二点特征对应的第二特征描述信息。其中,第一特征描述信息是描述第一点特征的信息;第二特征描述信息是描述第二点特征的信息。
这里,可以利用Superpoint网络检测点特征并提取点特征对应的特征描述信息。具体的,可以利用Superpoint网络提取第一图像中的多个第一点特征,并确定每个第一点特征在第一图像中的位置信息;提取第二图像中的多个第二点特征,并确定每个第二点特征在第二图像中的位置信息。以及,检测上述每个第一点特征对应的第一特征描述信息和第二点特征对应的第二特征描述信息。
这里,可以利用第一特征描述信息和第二特征描述信息检测相互匹配的两个点特征,例如,检测第一点特征和第二点特征是否为同一点特征。
另外,还可以利用OpenCV中的LSD算法检测图像中的线段,例如,提取第一图像中的多条第一线段,并确定第一图像中第一线段的位置信息;以及,提取第二图像中的多条第二线段,并确定第二图像中第二线段的位置信息。
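作为一个示意性的用法草图（仅供理解，并非对本申请检测方式的限定），下面给出利用 OpenCV 中的 LSD 检测器从灰度图像中提取线段集合的大致代码，其中图像路径为假设的示例；需要注意，部分 OpenCV 版本（约 3.4.6~4.5.0）出于许可原因移除了 LSD 的实现，实际使用时需根据版本调整。

```python
import cv2

# 读取灰度图像（路径仅为示例假设）
gray = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)

# 创建 LSD 直线段检测器，LSD_REFINE_STD 为标准细化模式
lsd = cv2.createLineSegmentDetector(cv2.LSD_REFINE_STD)

# detect 返回 (lines, widths, precisions, nfas)，其中 lines 的每个元素为线段端点 (x1, y1, x2, y2)
lines, widths, precisions, nfas = lsd.detect(gray)

# 整理为第一线段集合 {L_1, L_2, ……, L_m}
first_line_segments = [] if lines is None else [tuple(l[0]) for l in lines]
print("检测到的第一线段条数:", len(first_line_segments))
```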
在一些实施例中,在提取了多条第一线段和多条第二线段之后,还可以确定任意第一线段包括的第一点特征的第一集合,以及,确定任意第二线段包括的第二点特征的第二集合。
示例性的，首先，将提取出的多个第一点特征组成第一点特征集合，记为 {P_1, P_2, ……, P_n}，其中，P_1, P_2, ……, P_n 分别为第一点特征；确定每个第一点特征对应的第一特征描述信息，之后，将多个第一点特征对应的第一特征描述信息组成第一特征描述信息集合，记为 {D_1, D_2, ……, D_n}，其中，D_1, D_2, ……, D_n 分别为第一点特征对应的第一特征描述信息。以及，将提取出的多个第二点特征组成第二点特征集合，记为 {P'_1, P'_2, ……, P'_{n'}}，其中，P'_1, P'_2, ……, P'_{n'} 分别为第二点特征；确定每个第二点特征对应的第二特征描述信息，之后，将多个第二点特征对应的第二特征描述信息组成第二特征描述信息集合，记为 {D'_1, D'_2, ……, D'_{n'}}，其中，D'_1, D'_2, ……, D'_{n'} 分别为第二点特征对应的第二特征描述信息。之后，将提取出的第一图像中的多条第一线段组成第一线段集合，即 {L_1, L_2, ……, L_m}，其中，L_1, L_2, ……, L_m 分别为第一线段；以及，将提取出的第二图像中的多条第二线段组成第二线段集合，记为 {L'_1, L'_2, ……, L'_{m'}}，其中，L'_1, L'_2, ……, L'_{m'} 分别为第二线段。上述，n 表示第一点特征集合中的第一点特征的个数，n' 表示第二点特征集合中的第二点特征的个数，m 表示第一线段集合中的第一线段的条数，m' 表示第二线段集合中的第二线段的条数。之后，可以遍历每个第一点特征对应的第一目标点到每条第一线段的距离，判断第一点特征是否属于第一线段，并将属于第一线段的第一目标点对应的第一点特征作为该第一线段的点特征集合，即第一集合，分别记为 L_1: {P_11, P_12, ……, P_1i}，L_2: {P_21, P_22, ……, P_2j}，……，L_m: {P_m1, P_m2, ……, P_mk}，其中，i 表示 L_1 中第一点特征的个数，j 表示 L_2 中第一点特征的个数，k 表示 L_m 中第一点特征的个数；以及，遍历每个第二点特征对应的第二目标点到每条第二线段的距离，判断第二点特征是否属于第二线段，并将属于第二线段的第二目标点对应的第二点特征作为该第二线段的点特征集合，即第二集合，分别记为 L'_1: {P'_11, P'_12, ……, P'_{1i'}}，L'_2: {P'_21, P'_22, ……, P'_{2j'}}，……，L'_{m'}: {P'_{m'1}, P'_{m'2}, ……, P'_{m'k'}}，其中，i' 表示 L'_1 中第二点特征的个数，j' 表示 L'_2 中第二点特征的个数，k' 表示 L'_{m'} 中第二点特征的个数。
S103:将第一图像中的第一点特征与第二图像中的第二点特征进行匹配,得到匹配结果。
本步骤中,可以针对S102中确定出的第一点特征集合和第二点特征集合中的点特征进行匹配,确定匹配结果。
针对不同的应用场景,点特征的匹配方式可以不同,如下述两种方式:
方式一、针对地图创建的应用场景，可以利用SuperGlue匹配第一点特征和第二点特征，具体实施时，可以基于第一点特征的位置信息、第一点特征对应的第一特征描述信息、第二点特征的位置信息和第二点特征对应的第二特征描述信息，将第一点特征和第二点特征进行匹配，确定第一点特征和第二点特征的匹配结果。示例性的，首先可以利用第一点特征的位置信息和第二点特征的位置信息，匹配第一点特征和第二点特征在同一环境中的预设位置范围，之后，再匹配同一预设位置范围内的第一点特征对应的第一特征描述信息和第二点特征对应的第二特征描述信息，以实现特征匹配，可以确定出第一点特征集合中的每个第一点特征和第二点特征集合中的每个第二点特征的匹配结果，分别记为 M_1, M_2, ……, M_f，这里，匹配结果可以为第一点特征与第二点特征相互匹配的匹配对，例如，在 P_1 与 P'_10 匹配、P_2 与 P'_3 匹配、P_n 与 P'_{n'} 匹配的情况下，匹配结果可以分别记为 M_1: {P_1, P'_10}，M_2: {P_2, P'_3}，……，M_f: {P_n, P'_{n'}}。
方式二、针对重定位的应用场景，可以利用词袋模型，匹配第一点特征和第二点特征，具体实施时，可以基于第一点特征对应的第一特征描述信息和第二点特征对应的第二特征描述信息，将第一点特征和第二点特征进行匹配，确定第一点特征和第二点特征的匹配结果。示例性的，遍历每个第二点特征，将该第二点特征对应的第二特征描述信息依次与未被匹配上的第一点特征对应的第一特征描述信息进行对比，确定该第二点特征与相匹配的第一点特征的匹配结果。
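需要说明的是，SuperGlue 与词袋模型的完整实现较为复杂，下面仅给出一个基于描述子互为最近邻的简化匹配草图，用以说明点特征匹配结果 {M_1, M_2, ……, M_f} 这一中间结果的数据形态；其中描述子以 numpy 数组表示，距离阈值 max_dist 为示例假设，并不代表本申请实际采用的匹配算法。

```python
import numpy as np

def match_point_features(desc1, desc2, max_dist=0.7):
    """desc1: (n, d) 第一特征描述信息; desc2: (n', d) 第二特征描述信息。
    返回互为最近邻且距离小于阈值的匹配对 (i, j)，对应文中的 {M_1, ……, M_f}。"""
    # 两组描述子之间的欧氏距离矩阵，形状为 (n, n')
    dists = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn12 = dists.argmin(axis=1)  # 每个第一点特征在第二图像中的最近邻
    nn21 = dists.argmin(axis=0)  # 每个第二点特征在第一图像中的最近邻
    matches = []
    for i, j in enumerate(nn12):
        # 互为最近邻且距离满足阈值时记为一个匹配对
        if nn21[j] == i and dists[i, j] < max_dist:
            matches.append((i, j))
    return matches
```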
S104:基于匹配结果,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
本步骤中,任意第一线段为从第一图像中提取出的任意一条线段。任意第二线段可以为从第二图像中提取出的任意一条线段。由于线段是由多个目标点组成的,因此,可以通过目标点对应的点特征的匹配结果,即第一点特征和第二点特征的匹配结果,记为{M_1,M_2,……,M_f},进一步确定任意第一线段和任意第二线段的匹配程度。
可以按照S1041~S1042确定任意第一线段和任意第二线段的匹配程度:
S1041:基于匹配结果,从任意第一线段所包括的第一点特征中筛选与任意第二线段所包括的第二点特征相匹配的目标点特征。
本步骤中,匹配结果包括{M_1,M_2,……,M_f}。
示例性的，从第一线段 L_1 包括的第一点特征的第一集合 {P_11, P_12, ……, P_1i} 中筛选与第二线段 L'_{m'}: {P'_{m'1}, P'_{m'2}, ……, P'_{m'k'}} 中的第二点特征相匹配的目标点特征，如果第一点特征 P_11 与第二点特征 P'_{m'1} 相匹配，则目标点特征即为 P_11 或 P'_{m'1}。需要说明的是，确定相互匹配的第一线段和第二线段中包括多个目标点特征。
S1042:基于目标点特征的第一数量、任意第一线段包括的第一点特征的第二数量和任意第二线段包括的第二点特征的第三数量,确定任意第一线段和任意第二线段的匹配程度。
这里,目标点特征的第一数量,即为目标点特征对应的目标点的数量。任意第一线段包括的第一点特征的第二数量,即为任意第一线段包括的第一目标点的数量。任意第二线段包括的第二点特征的第三数量,即为任意第二线段包括的第二目标点的数量。
在一些实施例中，在确定任意第一线段和任意第二线段的匹配程度时，可以实时构建矩阵表示目标点特征的第一数量。具体的，以重定位的应用场景为例，获取第二图像，遍历第二图像中的任意第二线段，分别与每个第一线段进行匹配，确定目标点特征的第一数量。
示例性的，确定第二线段 L'_1 与第一线段 L_1 相匹配的目标点特征的第一数量为 S_11，确定第二线段 L'_1 与第一线段 L_2 相匹配的目标点特征的第一数量为 S_12，……，确定第二线段 L'_1 与第一线段 L_m 相匹配的目标点特征的第一数量为 S_1m。确定第二线段 L'_2 与第一线段 L_1 相匹配的目标点特征的第一数量为 S_21，确定第二线段 L'_2 与第一线段 L_2 相匹配的目标点特征的第一数量为 S_22，……，确定第二线段 L'_2 与第一线段 L_m 相匹配的目标点特征的第一数量为 S_2m。确定第二线段 L'_{m'} 与第一线段 L_1 相匹配的目标点特征的第一数量为 S_{m'1}，确定第二线段 L'_{m'} 与第一线段 L_2 相匹配的目标点特征的第一数量为 S_{m'2}，……，确定第二线段 L'_{m'} 与第一线段 L_m 相匹配的目标点特征的第一数量为 S_{m'm}。具体的，可以参见表1所示的矩阵。
表1
         L_1      L_2      ……     L_m
L'_1     S_11     S_12     ……     S_1m
L'_2     S_21     S_22     ……     S_2m
……       ……       ……       ……     ……
L'_{m'}  S_{m'1}  S_{m'2}  ……     S_{m'm}
需要说明的是,上述表1中展示的第一线段和第二线段相匹配的目标点特征的第一数量可以为0,即该第一线段与该第二线段不匹配。
在另一些实施例中，在实时构建表1所示的矩阵表示目标点特征的第一数量时，可以只针对匹配后存在目标点特征的第一线段和第二线段构建矩阵项，能够节省运算量，提高匹配效率。
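结合前文的记号，下面给出一个构建上述目标点特征数量（即表1中各 S 值）的示意性草图，其中第一集合、第二集合均以"线段索引 → 点特征索引集合"的字典组织，这只是为便于说明而假设的数据形式；与上一段所述一致，这里只对匹配后存在目标点特征的线段组合累加计数。

```python
from collections import defaultdict

def build_count_matrix(first_sets, second_sets, matches):
    """first_sets: {m: 属于 L_m 的第一点特征索引集合}
    second_sets: {m2: 属于 L'_{m'} 的第二点特征索引集合}
    matches: 点特征匹配结果 [(i, j), ...]。
    返回稀疏计数 S[(m2, m)] = 该第二线段与该第一线段相匹配的目标点特征的第一数量。"""
    # 反向索引：点特征索引 → 其所属线段索引列表（一个点可能同时靠近多条线段）
    point_to_first = defaultdict(list)
    for m, pts in first_sets.items():
        for i in pts:
            point_to_first[i].append(m)
    point_to_second = defaultdict(list)
    for m2, pts in second_sets.items():
        for j in pts:
            point_to_second[j].append(m2)

    S = defaultdict(int)
    # 只为存在匹配目标点特征的 (第二线段, 第一线段) 组合建立计数项
    for i, j in matches:
        for m in point_to_first.get(i, []):
            for m2 in point_to_second.get(j, []):
                S[(m2, m)] += 1
    return S
```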
基于上述计算出的目标点特征的第一数量、任意第一线段包括的第一点特征的第二数量和任意第二线段包括的第二点特征的第三数量，确定第一数量与第二数量的第一比值，以及第一数量与第三数量的第二比值，进而可以基于第一比值和第二比值，确定任意第一线段和任意第二线段的匹配程度。
延续上例，如果 L_m 包括的第一点特征的第二数量为 N_m，L'_{m'} 包括的第二点特征的第三数量为 N'_{m'}，则第一比值为 S_{m'm}/N_m，第二比值为 S_{m'm}/N'_{m'}。确定第一比值和第二比值中的最大值，将最大值作为第一线段 L_m 和第二线段 L'_{m'} 的匹配程度 v_{m'm}，参见公式一所示：
v_{m'm} = max(S_{m'm}/N_m, S_{m'm}/N'_{m'})          公式一
上述包括但不仅限于取第一比值和第二比值的最大值作为匹配程度,还可以优化第一比值和/或第二比值,分别确定第一比值和第二比值的权重值,基于第一比值的权重值和第二比值的权重值,综合确定匹配程度。本公开实施例对此不进行限定,在不脱离本公开的范围的情况下,本领域技术人员可以针对第一比值和第二比值确定匹配程度的过程做出多种代替和修改,这些代替和修改都应落在本公开的范围内。
根据匹配程度判断第一线段和第二线段之间是否匹配,具体的,可以基于第一比值和第二比值,确定目标比值,并将目标比值作为匹配程度;在目标比值大于或等于第一预设阈值的情况下,确定第一线段和第二线段为相互匹配的线段。
示例性的，在 v_{m'm} 大于或等于第一预设阈值的情况下，可以确定第一线段 L_m 与第二线段 L'_{m'} 为相互匹配的线段。
这里，目标比值可以为上述第一比值和第二比值中的最大值，或者还可以为优化后的比值等，本公开实施例在此不进行限定。第一预设阈值的取值范围为0~1之间，可以为本领域技术人员调试参数的结果，具体数据本公开实施例不进行限定。
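按照公式一，下面给出一个计算匹配程度并按第一预设阈值判定线段是否相互匹配的示意性草图；其中阈值 threshold=0.6 仅为示例取值，实际取值可由调试确定，计数结果 S 及集合的组织方式沿用上文草图中的假设。

```python
def line_match_degree(s, n_m, n_m2):
    """s: 目标点特征的第一数量 S_{m'm}; n_m: N_m; n_m2: N'_{m'}。
    按公式一取两个比值中的最大值作为匹配程度。"""
    return max(s / n_m, s / n_m2)

def match_lines(S, first_sets, second_sets, threshold=0.6):
    """遍历计数结果 S，匹配程度大于或等于第一预设阈值的线段对视为相互匹配。"""
    matched = []
    for (m2, m), s in S.items():
        v = line_match_degree(s, len(first_sets[m]), len(second_sets[m2]))
        if v >= threshold:
            matched.append((m, m2, v))
    return matched
```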
针对S102，在检测到线段中包括的点特征的数量较少的情况下，可以确定检测结果存在误差，或不存在该条线段，需要过滤上述这些线段，提高后续线段匹配精度。
在一些实施例中,在确定了第一点特征所属的第一线段之后,还可以进一步判断第一线段是否为待匹配的第一线段。这里,待匹配的第一线段可以为预先设置的,包括第一点特征的个数大于第二预设阈值的线段。具体实施时,确定第一线段中的第一点特征的数量,在第一点特征的数量小于或等于第二预设阈值的情况下,剔除第一线段以及第一线段中的第一点特征。
以及,在确定了第二点特征所属的第二线段之后,还可以进一步判断第二线段是否为待匹配的第二线段。这里,待匹配的第二线段可以为预先设置的,包括第二点特征的个数大于第三预设阈值的线段。具体实施时,确定第二线段中的第二点特征的数量,在第二点特征的数量小于或等于第三预设阈值的情况下,剔除第二线段以及第二线段中的第二点特征。
针对S102,确定每个第一点特征所属的第一线段,可以按照以下步骤S1021~S1022:
S1021:针对多个第一点特征中的每个第一点特征,确定第一点特征对应的第一目标点到任意第一线段的距离。
S1022:将距离小于或等于第四预设阈值的第一目标点对应的第一点特征作为属于任意第一线段的点特征。
示例性的，在已知每条第一线段的方程的情况下，可以利用确定出的每个第一目标点的位置信息，比如坐标，确定第一目标点到任意第一线段的距离 d_1，之后，利用预先设置的第四预设阈值 T_{d1}，将 d_1 ≤ T_{d1} 的第一目标点对应的第一点特征作为属于任意第一线段的点特征，即该第一点特征为该任意第一线段对应的第一集合中的点特征。
以及,确定每个第二点特征所属的第二线段,可以按照以下步骤S1023~S1024:
S1023:针对多个第二点特征中的每个第二点特征,确定第二点特征对应的第二目标点到任意第二线段的距离。
S1024:将距离小于或等于第五预设阈值的第二目标点对应的第二点特征作为属于任意第二线段的点特征。
示例性的，在已知每条第二线段的方程的情况下，可以利用确定出的每个第二目标点的位置信息，比如坐标，确定第二目标点到任意第二线段的距离 d_2，之后，利用预先设置的第五预设阈值 T_{d2}，将 d_2 ≤ T_{d2} 的第二目标点对应的第二点特征作为属于任意第二线段的点特征，即该第二点特征为该任意第二线段对应的第二集合中的点特征。
上述的第二预设阈值、第三预设阈值、第四预设阈值和第五预设阈值可以为本领域技术人员基于经验值确定的,本公开实施例不进行具体限定。
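结合步骤 S1021~S1024，下面给出一个按点到线段的距离把点特征关联到线段、并剔除点特征数量过少的线段的示意性草图；其中 dist_thresh 对应第四/第五预设阈值、min_points 对应第二/第三预设阈值，取值均为示例假设。

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    """点 p 到以 a、b 为端点的线段的最短距离，p、a、b 均为 (x, y) 坐标。"""
    p, a, b = np.asarray(p, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    # 将投影参数 t 截断到 [0, 1]，投影落在线段之外时取最近端点
    t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def associate_points_to_lines(points, segments, dist_thresh=3.0, min_points=4):
    """points: [(x, y), ...] 目标点坐标; segments: [(x1, y1, x2, y2), ...] 线段端点。
    距离小于或等于 dist_thresh 的点归入该线段的点特征集合（第一集合/第二集合），
    点特征数量小于或等于 min_points 的线段被剔除。"""
    sets = {}
    for m, (x1, y1, x2, y2) in enumerate(segments):
        idx = [i for i, p in enumerate(points)
               if point_to_segment_distance(p, (x1, y1), (x2, y2)) <= dist_thresh]
        if len(idx) > min_points:
            sets[m] = set(idx)
    return sets
```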
通过上述步骤S101~S104,利用点特征来表示线段,即每个第一特征点所属的第一线段和每个第二点特征所属的第二线段,提升了线段的长度变化和端点变化时线段匹配结果的准确率,比如,能够利用线段的一部分特征匹配整条线段。另外,上述线段的匹配是基于第一点特征和第二点特征的匹配结果,因此,无需提取线段的特征描述信息,简化了线段的特征描述信息的提取过程,节省了运算量和运算时长,进一步提升了线段匹配的效率。
针对上述S101~S104,可以参见图2所示,本公开实施例还提供了一种算法流程示意图,其中,本公开实施例所涉及到的算法包括以下几个模块,其中,211表示第一图像对应的Superpoint和LSD模块、212表示第二图像对应的Superpoint和LSD模块、221表示第一点特征和第一线段对应的点线关联模块、222表示第二点特征和第二线段对应的点线关联模块、23表示点特征匹配模块、24表示线段匹配模块。
通过上述图2中的算法流程图,以支持本说明书另一实施例提供的线段匹配方法,具体的:
获取第一图像和第二图像，其中，所述第一图像和所述第二图像为同一环境中的图像；
获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
具体的,所述基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合,包括:
确定所述第一点特征集合中的第一点特征对应的第一目标点到所述第一线段集合中任意第一线段的距离;
所述距离小于或等于第四预设阈值的第一目标点所对应的第一线段构成所述第一集合,其中,所述第一目标点对应的第一点特征为属于所述第一集合中的第一线段的点特征。
进一步地,所述基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合,包括:
确定所述第二点特征集合中的第二点特征对应的第二目标点到所述第二线段集合中任意第二线段的距离;
所述距离小于或等于第五预设阈值的第二目标点所对应的第二线段构成所述第二集合,其中,所述第二目标点对应的第二点特征为属于所述第二集合中的第二线段的点特征。
此外,在获取所述第一图像中的第一点特征集合和所述第二图像中的第二点特征集合之后,还包括:
确定所述第一点特征集合中第一点特征对应的第一特征描述信息和所述第二点特征集合中第二点特征对应的第二特征描述信息;
所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果,包括:
基于所述第一特征描述信息和所述第二特征描述信息,将所述第一点特征和所述第二点特征进行匹配,得到点特征匹配结果。
此外,在所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果之前,还包括:
确定所述第一集合中第一线段所包括的第一点特征的数量,在所述第一点特征的数量小于或等于第二预设阈值的情况下,剔除所述第一线段以及所述第一线段所包括的第一点特征;以及,
确定所述第二集合中第二线段所包括的第二点特征的数量,在所述第二点特征的数量小于或等于第三预设阈值的情况下,剔除所述第二线段以及所述第二线段所包括的第二点特征。
进一步地,所述基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,包括:
基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征;
基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
更进一步地,所述基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征,包括:
基于所述点特征匹配结果,从所述第一集合中的任意第一线段所包括的第一点特征中,筛选与所述第二集合中的任意第二线段所包括的第二点特征相匹配的目标点特征;
相应地,所述基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
此外,所述基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
确定所述第一数量与所述第二数量的第一比值,以及,确定所述第一数量与所述第三数量的第二比值;
基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
进一步地,所述基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
基于所述第一比值和所述第二比值,确定目标比值,并将所述目标比值作为所述匹配程度;
所述将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段,包括:
在所述目标比值大于或等于第一预设阈值的情况下,确定第一线段和第二线段为相互匹配的线段。
示例性的，211模块的输入为第一图像，211模块的输出为第一点特征集合，即 {P_1, P_2, ……, P_n}、第一特征描述信息集合，即 {D_1, D_2, ……, D_n}，和第一线段集合，即 {L_1, L_2, ……, L_m}。212模块的输入为第二图像，212模块的输出为第二点特征集合，即 {P'_1, P'_2, ……, P'_{n'}}、第二特征描述信息集合，即 {D'_1, D'_2, ……, D'_{n'}}，和第二线段集合，即 {L'_1, L'_2, ……, L'_{m'}}。
221模块的输入可以为 {P_1, P_2, ……, P_n} 和 {L_1, L_2, ……, L_m}，将第一点特征与第一线段关联起来，利用第一点特征对应的第一目标点到第一线段的距离 d_1，判断第一点特征是否属于第一线段；221模块的输出可以为 L_1: {P_11, P_12, ……, P_1i}，L_2: {P_21, P_22, ……, P_2j}，……，L_m: {P_m1, P_m2, ……, P_mk}。222模块的输入可以为 {P'_1, P'_2, ……, P'_{n'}} 和 {L'_1, L'_2, ……, L'_{m'}}，将第二点特征与第二线段关联起来，利用第二点特征对应的第二目标点到第二线段的距离 d_2，判断第二点特征是否属于第二线段；222模块的输出可以为 L'_1: {P'_11, P'_12, ……, P'_{1i'}}，L'_2: {P'_21, P'_22, ……, P'_{2j'}}，……，L'_{m'}: {P'_{m'1}, P'_{m'2}, ……, P'_{m'k'}}。
23模块的输入需要根据实际场景确定。示例性的，针对地图创建的应用场景，23所示的点特征匹配模块可以为SuperGlue算法模型，其输入可以为 {P_1, P_2, ……, P_n}、{D_1, D_2, ……, D_n}、{P'_1, P'_2, ……, P'_{n'}} 和 {D'_1, D'_2, ……, D'_{n'}}；其输出可以为 {M_1, M_2, ……, M_f}。示例性的，针对重定位的应用场景，23所示的点特征匹配模块例如可以为词袋模型，其输入可以为 {D_1, D_2, ……, D_n} 和 {D'_1, D'_2, ……, D'_{n'}}；其输出可以为 {M_1, M_2, ……, M_f}。
24模块的输入可以为221模块、222模块和23模块的输出;24模块的输出可以为匹配程度。
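为便于整体理解各模块之间的数据流，下面给出一个把上述 211/212、221/222、23、24 各模块串联起来的示意性流程草图；其中 detect_points、detect_lines 作为外部传入的函数（分别对应点特征与线段的提取，具体实现此处不展开），其余函数沿用前文各草图中的假设实现，并非对本申请方案的限定。

```python
def match_line_segments(img1_gray, img2_gray, detect_points, detect_lines):
    """detect_points(gray) -> (点坐标列表, 描述子 numpy 数组)，detect_lines(gray) -> 线段列表。
    返回相互匹配的 (第一线段索引, 第二线段索引, 匹配程度) 列表。"""
    pts1, desc1 = detect_points(img1_gray)                 # 对应 211 模块
    pts2, desc2 = detect_points(img2_gray)                 # 对应 212 模块
    segs1 = detect_lines(img1_gray)
    segs2 = detect_lines(img2_gray)
    first_sets = associate_points_to_lines(pts1, segs1)    # 对应 221 模块
    second_sets = associate_points_to_lines(pts2, segs2)   # 对应 222 模块
    matches = match_point_features(desc1, desc2)           # 对应 23 模块
    S = build_count_matrix(first_sets, second_sets, matches)
    return match_lines(S, first_sets, second_sets)         # 对应 24 模块
```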
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与线段匹配方法对应的线段匹配装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述线段匹配方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
参照图3所示,为本公开实施例提供的一种线段匹配装置的示意图,所述装置包括:获取模块301、确定模块302、第一匹配模块303和第二匹配模块304;其中,
获取模块301,用于获取第一图像和第二图像;所述第一图像和所述第二图像为同一环境中的图像;
确定模块302,用于提取所述第一图像中的多个第一点特征,并确定每个第一点特征所属的第一线段,以及,提取所述第二图像中的多个第二点特征,并确定每个第二点特征所属的第二线段;
第一匹配模块303,用于将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到匹配结果;
第二匹配模块304,用于基于所述匹配结果,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的所述第一线段和所述第二线段作为相互匹配的线段。
一种可选的实施方式中,所述第二匹配模块304,用于基于所述匹配结果,从所述任意第一线段所包括的第一点特征中筛选与所述任意第二线段所包括的第二点特征相匹配的目标点特征;基于所述目标点特征的第一数量、所述任意第一线段包括的第一点特征的第二数量和所述任意第二线段包括的第二点特征的第三数量,确定所述任意第一线段和所述任意第二线段的匹配程度。
一种可选的实施方式中,所述第二匹配模块304,用于确定所述第一数量与所述第二数量的第一比值,以及,确定所述第一数量与所述第三数量的第二比值;基于所述第一比值和所述第二比值,确定所述任意第一线段和所述任意第二线段的匹配程度。
一种可选的实施方式中,所述第二匹配模块304,用于基于所述第一比值和所述第二比值,确定目标比值,并将所述目标比值作为所述匹配程度;在所述目标比值大于或等于第一预设阈值的情况下,确定所述第一线段和所述第二线段为相互匹配的线段。
一种可选的实施方式中，所述确定模块302，还用于在所述提取所述第一图像中的多个第一点特征，并确定每个第一点特征所属的第一线段，以及，提取所述第二图像中的多个第二点特征，并确定每个第二点特征所属的第二线段之后，确定所述任意第一线段包括的第一点特征的第一集合，以及，确定所述任意第二线段包括的第二点特征的第二集合；
所述第二匹配模块304,用于基于所述匹配结果、所述第一集合和所述第二集合,从所述任意第一线段所包括的第一点特征中筛选与所述任意第二线段所包括的第二点特征相匹配的目标点特征。
一种可选的实施方式中,所述线段匹配装置还包括过滤模块305;
所述过滤模块305,用于在所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到匹配结果之前,确定所述第一线段中的第一点特征的数量,在所述第一点特征的数量小于或等于第二预设阈值的情况下,剔除所述第一线段以及所述第一线段中的第一点特征;以及,确定所述第二线段中的第二点特征的数量,在所述第二点 特征的数量小于或等于第三预设阈值的情况下,剔除所述第二线段以及所述第二线段中的第二点特征。
一种可选的实施方式中,所述确定模块302,还用于在提取所述第一图像中的多个第一点特征和所述第二图像中的多个第二点特征之后,确定多个第一点特征对应的第一特征描述信息和所述多个第二点特征对应的第二特征描述信息;
所述第一匹配模块303,用于基于所述第一特征描述信息和所述第二特征描述信息,将所述第一点特征和所述第二点特征进行匹配,得到所述匹配结果。
一种可选的实施方式中,所述确定模块302,用于针对所述多个第一点特征中的每个第一点特征,确定所述第一点特征对应的第一目标点到任意第一线段的距离;将所述距离小于或等于第四预设阈值的第一目标点对应的第一点特征作为属于所述任意第一线段的点特征;以及,针对所述多个第二点特征中的每个第二点特征,确定所述第二点特征对应的第二目标点到任意第二线段的距离;将所述距离小于或等于第五预设阈值的第二目标点对应的第二点特征作为属于所述任意第二线段的点特征。
关于线段匹配装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述线段匹配方法实施例中的相关说明,这里不再详述。
本公开实施例提供的另一种线段匹配装置的示意图,所述装置包括:获取模块、确定模块、第一匹配模块和第二匹配模块;其中,
获取模块,用于获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;
确定模块,用于获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
第一匹配模块,用于将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
第二匹配模块,用于基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
一种可选的实施方式中,所述第二匹配模块,用于基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征;
基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述第二匹配模块,用于基于所述点特征匹配结果,从所述第一集合中的任意第一线段所包括的第一点特征中,筛选与所述第二集合中的任意第二线段所包括的第二点特征相匹配的目标点特征。
一种可选的实施方式中,所述第二匹配模块,用于基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述第二匹配模块,用于确定所述第一数量与所述第二数量的第一比值,以及,确定所述第一数量与所述第三数量的第二比值;
基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
一种可选的实施方式中,所述第二匹配模块,用于基于所述第一比值和所述第二比值,确定目标比值,并将所述目标比值作为所述匹配程度;
所述将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段,包括:
在所述目标比值大于或等于第一预设阈值的情况下,确定第一线段和第二线段为相互匹配的线段。
一种可选的实施方式中,所述线段匹配装置,还包括:
过滤模块,用于确定所述第一集合中第一线段所包括的第一点特征的数量,在所述第一点特征的数量小于或等于第二预设阈值的情况下,剔除所述第一线段以及所述第一线段所包括的第一点特征;以及,
确定所述第二集合中第二线段所包括的第二点特征的数量,在所述第二点特征的数量小于或等于第三预设阈值的情况下,剔除所述第二线段以及所述第二线段所包括的第二点特征。
一种可选的实施方式中,所述确定模块,还用于确定所述第一点特征集合中第一点特征对应的第一特征描述信息和所述第二点特征集合中第二点特征对应的第二特征描述信息;
所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果,包括:
基于所述第一特征描述信息和所述第二特征描述信息,将所述第一点特征和所述第二点特征进行匹配,得到点特征匹配结果。
一种可选的实施方式中,所述确定模块,用于确定所述第一点特征集合中的第一点特征对应的第一目标点到所述第一线段集合中任意第一线段的距离;
所述距离小于或等于第四预设阈值的第一目标点所对应的第一线段构成所述第一集合,其中,所述第一目标点对应的第一点特征为属于所述第一集合中的第一线段的点特征。
一种可选的实施方式中,所述确定模块,用于确定所述第二点特征集合中的第二点特征对应的第二目标点到所述第二线段集合中任意第二线段的距离;
所述距离小于或等于第五预设阈值的第二目标点所对应的第二线段构成所述第二集合,其中,所述第二目标点对应的第二点特征为属于所述第二集合中的第二线段的点特征。
关于线段匹配装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述线段匹配方法实施例中的相关说明,这里不再详述。
基于同一技术构思,本申请实施例还提供了一种计算机设备。参照图5所示,为本申请实施例提供的计算机设备的结构示意图,包括:
处理器41、存储器42和总线43。其中,存储器42存储有处理器41可执行的机器可读指令,处理器41用于执行存储器42中存储的机器可读指令,所述机器可读指令被处理器41执行时,处理器41执行下述步骤:S101:获取第一图像和第二图像;第一图像和第二图像为同一环境中的图像;S102:提取第一图像中的多个第一点特征,并确定每个第一点特征所属的第一线段,以及,提取第二图像中的多个第二点特征,并确定每个第二点特征所属的第二线段;S103:将第一图像中的第一点特征与第二图像中的第二点特征进行匹配,得到匹配结果;S104:基于匹配结果,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
上述存储器42包括内存421和外部存储器422；这里的内存421也称内存储器，用于暂时存放处理器41中的运算数据，以及与硬盘等外部存储器422交换的数据，处理器41通过内存421与外部存储器422进行数据交换，当计算机设备运行时，处理器41与存储器42之间通过总线43通信，使得处理器41执行上述方法实施例中所提及的执行指令。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行上述方法实施例中所述的线段匹配方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例还提供一种计算机程序产品,包括计算机指令,所述计算机指令被处理器执行时实现上述的线段匹配方法的步骤。其中,计算机程序产品可以是任何能实现上述线段匹配方法的产品,该计算机程序产品中对现有技术做出贡献的部分或全部方案可以以软件产品(例如软件开发包(Software Development Kit,SDK))的形式体现,该软件产品可以被存储在一个存储介质中,通过包含的计算机指令使得相关设备或处理器执行上述线段匹配方法的部分或全部步骤。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个模块或组件可以结合,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或模块的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。
所述功能如果以软件功能模块的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是：以上所述实施例，仅为本公开的具体实施方式，用以说明本公开的技术方案，而非对其限制，本公开的保护范围并不局限于此，尽管参照前述实施例对本公开进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本公开揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围，都应涵盖在本公开的保护范围之内。因此，本公开的保护范围应以权利要求的保护范围为准。

Claims (12)

  1. 一种线段匹配方法,其特征在于,包括:
    获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;
    获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
    获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
    将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
    基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,包括:
    基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征;
    基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
  3. 根据权利要求2所述的方法,其特征在于,所述基于所述点特征匹配结果,确定所述第一集合中的任意第一线段中的目标点特征,包括:
    基于所述点特征匹配结果,从所述第一集合中的任意第一线段所包括的第一点特征中,筛选与所述第二集合中的任意第二线段所包括的第二点特征相匹配的目标点特征;
    相应地,所述基于所述目标点特征、所述第一集合以及所述第二集合,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
    基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
  4. 根据权利要求3所述的方法,其特征在于,所述基于所述目标点特征的第一数量、所述第一集合中第一线段包括的第一点特征的第二数量和所述第二集合中第二线段包括的第二点特征的第三数量,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
    确定所述第一数量与所述第二数量的第一比值,以及,确定所述第一数量与所述第三数量的第二比值;
    基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度。
  5. 根据权利要求4所述的方法,其特征在于,所述基于所述第一比值和所述第二比值,确定所述第一集合中任意第一线段和所述第二集合中任意第二线段的匹配程度,包括:
    基于所述第一比值和所述第二比值,确定目标比值,并将所述目标比值作为所述匹配程度;
    所述将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段,包括:
    在所述目标比值大于或等于第一预设阈值的情况下,确定第一线段和第二线段为相互匹配的线段。
  6. 根据权利要求1所述的方法,其特征在于,在所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果之前,还包括:
    确定所述第一集合中第一线段所包括的第一点特征的数量,在所述第一点特征的数量小于或等于第二预设阈值的情况下,剔除所述第一线段以及所述第一线段所包括的第一点特征;以及,
    确定所述第二集合中第二线段所包括的第二点特征的数量,在所述第二点特征的数量小于或等于第三预设阈值的情况下,剔除所述第二线段以及所述第二线段所包括的第二点特征。
  7. 根据权利要求1所述的方法,其特征在于,在获取所述第一图像中的第一点特征集合和所述第二图像中的第二点特征集合之后,还包括:
    确定所述第一点特征集合中第一点特征对应的第一特征描述信息和所述第二点特征集合中第二点特征对应的第二特征描述信息;
    所述将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果,包括:
    基于所述第一特征描述信息和所述第二特征描述信息,将所述第一点特征和所述第二点特征进行匹配,得到点特征匹配结果。
  8. 根据权利要求1所述的方法,其特征在于,所述基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合,包括:
    确定所述第一点特征集合中的第一点特征对应的第一目标点到所述第一线段集合中任意第一线段的距离;
    所述距离小于或等于第四预设阈值的第一目标点所对应的第一线段构成所述第一集合,其中,所述第一目标点对应的第一点特征为属于所述第一集合中的第一线段的点特征。
  9. 根据权利要求1所述的方法,其特征在于,所述基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合,包括:
    确定所述第二点特征集合中的第二点特征对应的第二目标点到所述第二线段集合中任意第二线段的距离;
    所述距离小于或等于第五预设阈值的第二目标点所对应的第二线段构成所述第二集合,其中,所述第二目标点对应的第二点特征为属于所述第二集合中的第二线段的点特征。
  10. 一种线段匹配装置,其特征在于,包括:
    获取模块,用于获取第一图像和第二图像,其中,所述第一图像和所述第二图像为同一环境中的图像;
    确定模块,用于获取所述第一图像中的第一点特征集合以及第一线段集合,并基于所述第一点特征集合以及所述第一线段集合,确定所述第一线段集合中包括第一点特征的第一线段的第一集合;以及
    获取所述第二图像中的第二点特征集合以及第二线段集合,并基于所述第二点特征集合以及第二线段集合,确定所述第二线段集合中包括第二点特征的第二线段的第二集合;
    第一匹配模块,用于将所述第一图像中的第一点特征与所述第二图像中的第二点特征进行匹配,得到点特征匹配结果;
    第二匹配模块,用于基于所述点特征匹配结果、所述第一集合以及所述第二集合,确定任意第一线段和任意第二线段的匹配程度,并将匹配程度符合预设条件的第一线段和第二线段作为相互匹配的线段。
  11. 一种计算机设备,其特征在于,包括:处理器、存储器和总线,所述存储器存储有所述处理器可执行的机器可读指令,当计算机设备运行时,所述处理器与所述存储器之间通过总线通信,所述机器可读指令被所述处理器执行时执行如权利要求1至9任一项所述的线段匹配方法的步骤。
  12. 一种计算机可读存储介质,其特征在于,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行时执行如权利要求1至9任一项所述的线段匹配方法的步骤。
PCT/CN2022/101439 2021-09-17 2022-06-27 一种线段匹配方法、装置、计算机设备和存储介质 WO2023040404A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111092811.5A CN115830353A (zh) 2021-09-17 2021-09-17 一种线段匹配方法、装置、计算机设备和存储介质
CN202111092811.5 2021-09-17

Publications (1)

Publication Number Publication Date
WO2023040404A1 true WO2023040404A1 (zh) 2023-03-23

Family

ID=85515262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/101439 WO2023040404A1 (zh) 2021-09-17 2022-06-27 一种线段匹配方法、装置、计算机设备和存储介质

Country Status (2)

Country Link
CN (1) CN115830353A (zh)
WO (1) WO2023040404A1 (zh)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680514A (zh) * 2013-11-29 2015-06-03 三星泰科威株式会社 使用特征点匹配的图像匹配方法
CN106023183A (zh) * 2016-05-16 2016-10-12 西北工业大学 一种实时的直线段匹配方法
CN109919190A (zh) * 2019-01-29 2019-06-21 广州视源电子科技股份有限公司 直线段匹配方法、装置、存储介质及终端
CN110956081A (zh) * 2019-10-14 2020-04-03 广东星舆科技有限公司 车辆与交通标线位置关系的识别方法、装置及存储介质

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117474906A (zh) * 2023-12-26 2024-01-30 合肥吉麦智能装备有限公司 脊柱x光图像匹配方法及术中x光机复位方法
CN117474906B (zh) * 2023-12-26 2024-03-26 合肥吉麦智能装备有限公司 基于脊柱x光图像匹配的术中x光机复位方法

Also Published As

Publication number Publication date
CN115830353A (zh) 2023-03-21


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE