CN109313809B - Image matching method, device and storage medium - Google Patents


Info

Publication number: CN109313809B
Application number: CN201780035664.3A
Authority: CN (China)
Prior art keywords: image, pixel, pixels, matched, sub
Legal status: Active (current)
Other languages: Chinese (zh)
Other versions: CN109313809A
Inventor: 阳光
Current Assignee: Shenzhen A&E Intelligent Technology Institute Co Ltd
Original Assignee: Shenzhen A&E Intelligent Technology Institute Co Ltd
Priority date / filing date: 2017-12-26
Application filed by: Shenzhen A&E Intelligent Technology Institute Co Ltd
Publication of CN109313809A: 2019-02-05
Application granted; publication of CN109313809B: 2022-05-31

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/593 - Depth or shape recovery from multiple images from stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images

Abstract

The invention discloses an image matching method, an image matching device and a storage medium. The image matching method comprises: acquiring feature information of pixels in a first image and a second image, wherein the feature information of a pixel is the relationship feature between the pixel and a plurality of special-point pixels in the image in which the pixel is located, the special points comprising at least one of corner points and edge points, and the first image and the second image being images obtained by photographing the same target from different angles; and matching the pixels in the first image and the second image by using the feature information of the pixels, so as to realize the matching of the first image and the second image. In this way, pixels can be matched by acquiring the special points around each pixel and the relationship features between the pixel and those special points, without acquiring the global constraint information of the image, so that high matching accuracy can be obtained while the amount of computation required for matching is reduced.

Description

Image matching method, device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image matching method, an image matching device, and a storage medium.
Background
In the field of image processing, stereo matching is a method commonly used to match two images taken of the same target, and stereo matching algorithms include local algorithms and global algorithms. Because the image matching accuracy of global algorithms is higher than that of local algorithms, global algorithms have received the most attention among stereo matching algorithms.
A global matching algorithm performs image matching by using the global constraint information of the images; it is insensitive to local ambiguity in the images, but its amount of computation is large. Global matching algorithms include dynamic programming, belief propagation, simulated annealing, graph cuts, genetic methods, and the like. Each of these global matching algorithms is usually implemented by constructing a global energy function and then minimizing it with an optimization method to obtain a dense disparity map; although high image matching accuracy can be obtained, the algorithm is relatively complex and the amount of computation is large.
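For orientation only (this formulation is standard in the stereo matching literature and is not quoted from the present disclosure), the global energy function constructed by such methods typically has the form

$$E(D) = \sum_{p} C(p, d_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V(d_p, d_q),$$

where D is the disparity map, C(p, d_p) is the matching cost of assigning disparity d_p to pixel p, V penalizes disparity differences between neighbouring pixel pairs (p, q) in the neighbourhood system N, and the factor lambda weights the smoothness term; minimizing E(D) over all pixels at once is what makes the computation heavy.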
Disclosure of Invention
The invention aims to provide an image matching method, an image matching device and a storage medium, which can reduce the calculation amount required in image matching.
To achieve the above object, the present invention provides an image matching method, including:
acquiring feature information of pixels in a first image and a second image, wherein the feature information of a pixel is the relationship feature between the pixel and a plurality of special-point pixels in the image in which the pixel is located, and the special points comprise at least one of the following: corner points and edge points; the first image and the second image are images obtained by photographing the same target from different angles;
and matching the pixels in the first image and the second image by using the characteristic information of the pixels so as to realize the matching of the first image and the second image.
In another aspect, the present invention provides an image matching apparatus, including:
a memory and a processor connected by a bus;
the memory is used for storing an operation instruction executed by the processor, a first image and a second image;
the processor is used for executing the operation instructions to realize the image matching method according to any one of claims 1 to 9.
In another aspect, the present invention proposes a storage medium storing program data that can be executed to implement the image matching method described above.
Beneficial effects: in contrast to the prior art, the image matching method of the invention obtains, for pixels in the first image and the second image, the relationship features between each pixel and a plurality of special-point pixels in the image in which the pixel is located, and uses these relationship features as the feature information representing the pixel; the feature information of each pixel in the first image is then compared with the feature information of the pixels in the second image, and the pixels of the first image are matched with the pixels of the second image according to the comparison results. By using the relationship features between pixel points as feature information, the invention simplifies the overall image information so as to reduce the amount of computation during image matching.
Drawings
FIG. 1 is a schematic flow chart diagram of a first embodiment of the image matching method of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating one embodiment of step S12 in FIG. 1;
FIG. 3 is a schematic flow chart diagram illustrating one embodiment of step S11 in FIG. 1;
FIG. 4 is a schematic illustration of a first image;
FIG. 5 is a schematic flow chart diagram illustrating another embodiment of step S11 in FIG. 1;
FIG. 6 is a schematic flowchart of an embodiment of step S123 in FIG. 2;
FIG. 7 is a flowchart illustrating a second embodiment of an image matching method according to the present invention;
FIG. 8 is a schematic flow chart of step S23 in FIG. 7;
FIG. 9 is a schematic flowchart of step S26 in FIG. 7;
FIG. 10 is a schematic structural diagram of an embodiment of an image matching apparatus according to the present invention;
FIG. 11 is a schematic structural diagram of another embodiment of the image matching apparatus of the present invention;
FIG. 12 is a schematic structural diagram of an embodiment of a storage medium of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood by those skilled in the art, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely some, and not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the image matching method of the present invention. As shown in fig. 1, the image matching method of the present embodiment may include the following steps:
In step S11, feature information of pixels in the first image and the second image is acquired.
In this embodiment, the first image and the second image are images obtained by photographing the same target from different angles, and may be understood as the two images obtained by photographing the target with the left camera and the right camera of a binocular vision system. Thus, pixels in the first image and the second image can be matched; in other words, for the same point on the target, the feature information of the corresponding pixels in the first image and the second image should correspond. Therefore, the matching of the first image and the second image can be completed by comparing the feature information of pixels of the first image with the feature information of pixels of the second image.
The feature information of a pixel in this embodiment is the relationship feature between the pixel and a plurality of special-point pixels in the image in which the pixel is located, where a special point may be a corner point and/or an edge point. For example, a given pixel in an image has specific corner points and/or edge points around it; the relationship between these corner points and/or edge points and the pixel constitutes the feature information of the pixel, and for a given pixel this feature information is determinate.
In step S12, the pixels in the first image and the second image are matched using the feature information of the pixels to realize matching of the first image and the second image.
In the present embodiment, the first image and the second image are matched using the feature information of the pixels of the first image and the second image obtained in step S11.
Further, referring to fig. 2, fig. 2 is a schematic flow chart of step S12 in the present embodiment, and as shown in fig. 2, step S12 may include the following steps:
In step S121, each pixel of the first image is extracted in turn as a pixel to be matched.
In this embodiment, each pixel is extracted from the first image as a pixel to be matched, and each pixel to be matched is matched one by one.
In step S122, the feature information of the pixel to be matched is compared with the feature information of the pixel in the second image.
Based on the pixel to be matched in the first image obtained in steps S11 and S121 and its feature information, a pixel matching the pixel to be matched is to be found in the second image. At this point it is not yet known which pixel in the second image matches the pixel to be matched, so this step extracts a plurality of pixels from the second image, obtains their feature information, compares the feature information of the pixel to be matched with the feature information of each extracted pixel one by one to obtain a comparison result for each of them, and then proceeds to the next step according to these comparison results.
In this embodiment, the comparison result between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels is a difference between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels.
In step S123, according to the comparison result, the pixel in the second image that matches the pixel to be matched is found out, so as to complete the matching of the pixel to be matched.
As can be seen from the above analysis, the feature information of the pixel to be matched extracted in step S121 can be determined, and the feature information of the pixel matched with the pixel to be matched in the second image should correspond to the feature information of the pixel to be matched, and in general, the feature information of the pixel matched with the pixel to be matched is consistent with the feature information of the pixel to be matched, but because the angles when the first image and the second image capture the target are not consistent, in practical cases, the difference of the feature information of the pixel matched with the pixel to be matched in the second image with respect to the feature information of the pixel to be matched should be smaller than the difference of the feature information of other pixels in the second image with respect to the feature information of the pixel to be matched.
According to the difference between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels obtained in step S122, further, the pixel with the minimum difference between the feature information of the pixel to be matched and the feature information of the pixel to be matched is found out from the plurality of pixels, and the found pixel is used as the pixel matched with the pixel to be matched, so that the matching of the pixel to be matched can be completed. And repeating the steps to complete the matching of the first image and the second image.
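By way of illustration only, a minimal sketch of this minimum-difference matching step is given below; the array-based feature representation and the L1 distance are assumptions made for the example and are not prescribed by the method.

```python
import numpy as np

def match_pixel(feat_to_match, candidate_feats):
    """Return the index of the candidate pixel whose feature information
    differs least from that of the pixel to be matched.

    feat_to_match   : 1-D array describing one pixel of the first image
    candidate_feats : 2-D array, one row per candidate pixel of the second image
    """
    # Difference of feature information, taken here as an L1 distance (assumption).
    diffs = np.abs(candidate_feats - feat_to_match).sum(axis=1)
    return int(np.argmin(diffs))  # candidate with the smallest difference

# Hypothetical usage: features could encode counts and relative positions of special points.
feat_a = np.array([3.0, 1.5, -0.5])
candidates = np.array([[3.0, 1.4, -0.4],
                       [5.0, 0.2,  2.0]])
print(match_pixel(feat_a, candidates))  # -> 0
```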
In this embodiment, the image matching method obtains, for pixels in the first image and the second image, the relationship features between each pixel and a plurality of special-point pixels in the image in which the pixel is located, and uses these relationship features as the feature information representing the pixel; the feature information of each pixel in the first image is compared with the feature information of the pixels in the second image, and the pixels of the two images are matched according to the comparison results. By using the relationship features between pixel points as feature information, the method simplifies the overall image information, reducing the amount of computation during image matching without reducing the matching accuracy.
Further, referring to fig. 3, as shown in fig. 3, the step S11 may include the following steps:
In step S111, edge extraction is performed in the image to obtain edge lines.
It is understood that the image here refers to each of the first image and the second image; that is, this step performs edge extraction on the first image and the second image respectively, obtaining the respective edge lines of each. Since the first image and the second image are taken of the same target, the outlines of the edge lines extracted in the first image and in the second image should also be similar.
In step S112, with each pixel of the image as a center, virtual rays are cast in a plurality of preset directions, the intersections of the virtual rays with the edge lines are taken as edge points, and the relationship features between these edge points and the pixel are taken as the feature information of the pixel.
It will be appreciated that this step is also performed in the first image and the second image respectively. As shown in fig. 4, taking the first image as an example, virtual rays D1, D2, D3 … D8 (the dashed lines in fig. 4) are cast from the pixel A as center in 8 preset directions. These 8 virtual rays intersect the edge lines around pixel A in the first image (the solid lines in fig. 4), and the intersection points of the 8 virtual rays with the edge lines are taken as edge points (the solid dots in fig. 4). These edge points have corresponding relationship features with pixel A, and those relationship features are taken as the feature information of pixel A. The edge points in the second image are extracted in the same manner as in the first image, so the second image is not illustrated separately; the method described with reference to fig. 4 may be followed.
Specifically, the number of these edge points is fixed, and the positional relationship between each edge point and pixel A is also fixed; that is, the relationship feature between the edge points and pixel A consists of the positional relationships between pixel A and the edge points obtained around it, together with the number of those edge points. Similarly, a pixel is extracted from the second image, its surrounding edge points are obtained in the same manner, and the positional relationships between that pixel and its surrounding edge points, together with the number of those edge points, are obtained. In this way, the feature information of pixels in the first image and the second image can be obtained.
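A hedged sketch of the ray-casting idea of fig. 3 and fig. 4 follows; the 8 step directions, the binary edge map, and the rule of taking the first edge pixel hit along each ray are assumptions made for illustration rather than details taken from the disclosure.

```python
import numpy as np

# 8 preset directions (dy, dx), one per virtual ray D1..D8 (an assumption matching fig. 4).
DIRECTIONS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def edge_points_around(edge_map, y, x):
    """Walk a virtual ray from pixel (y, x) in each preset direction and record
    the first intersection with an edge line (a nonzero cell of edge_map)."""
    h, w = edge_map.shape
    points = []
    for dy, dx in DIRECTIONS:
        cy, cx = y + dy, x + dx
        while 0 <= cy < h and 0 <= cx < w:
            if edge_map[cy, cx]:
                points.append((cy, cx))  # edge point found on this ray
                break
            cy, cx = cy + dy, cx + dx
    return points

def edge_feature(edge_map, y, x):
    """Feature information of the pixel: the number of edge points found and
    their positions relative to the pixel, as described in the text."""
    pts = edge_points_around(edge_map, y, x)
    offsets = [(py - y, px - x) for py, px in pts]
    return len(pts), offsets

# Hypothetical usage: a vertical edge line at column 4 of a 9 x 9 edge map.
edge_map = np.zeros((9, 9), dtype=np.uint8)
edge_map[:, 4] = 1
print(edge_feature(edge_map, 4, 2))  # -> (3, [(0, 2), (2, 2), (-2, 2)])
```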
Further, referring to fig. 5, as shown in fig. 5, step S11 may further include the following steps:
in step S113, corner points are extracted in the image.
The image here refers to each of the first image and the second image; in other words, this step extracts corner points from the first image and the second image respectively. Since the first image and the second image are taken of the same target, the corner points extracted from the first image and from the second image should correspond.
In step S114, with each pixel of the image as a center, virtual rays are cast in a plurality of preset directions, the corner points lying in a preset angle region of each virtual ray are found, and the relationship features between the found corner points and the pixel are taken as the feature information of the pixel.
It will be appreciated that this step is also performed in the first image and the second image respectively. Referring further to fig. 4, still taking the first image as an example, virtual rays D1, D2, D3 … D8 are cast from pixel A as center in 8 preset directions, and each virtual ray corresponds to a preset angle region; the angular width of this region may be set according to the actual situation, for example to 20°, 30° or 35°. The corner points lying in the preset angle region of each virtual ray (such as the dashed-circle dots in fig. 4) are found. These corner points have corresponding relationship features with pixel A, and those relationship features are taken as the feature information of pixel A. The corner points in the second image are extracted in the same manner as in the first image, so the second image is not illustrated separately; the method described with reference to fig. 4 may be followed.
Specifically, the number of these corner points is fixed, and the positional relationship between each corner point and pixel A is also fixed; that is, the relationship feature between the corner points and pixel A consists of the positional relationships between pixel A and the corner points obtained around it, together with the number of those corner points. Similarly, a pixel is extracted from the second image, its surrounding corner points are obtained in the same manner, and the positional relationships between that pixel and its surrounding corner points, together with the number of those corner points, are obtained. In this way, the feature information of pixels in the first image and the second image can be obtained.
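Similarly, a sketch of grouping corner points by the preset angle region of each virtual ray is given below; the representation of corners as coordinates, the 45 degree ray spacing and the 15 degree half-width of each region are assumptions chosen for the example (the text only states that the region width is configurable, e.g. 20°, 30° or 35°).

```python
import math

def corners_in_sectors(corners, y, x, ray_angles_deg, half_width_deg=15.0):
    """Group detected corner points by the preset angle region of each virtual
    ray centred on pixel (y, x).

    corners        : iterable of (cy, cx) corner coordinates
    ray_angles_deg : ray directions, e.g. [0, 45, 90, ..., 315] for 8 rays
    half_width_deg : half of the angle region, e.g. 15 deg for a 30 deg region
    """
    sectors = {a: [] for a in ray_angles_deg}
    for cy, cx in corners:
        ang = math.degrees(math.atan2(cy - y, cx - x)) % 360.0
        for a in ray_angles_deg:
            # angular distance between the corner direction and the ray direction
            d = abs((ang - a + 180.0) % 360.0 - 180.0)
            if d <= half_width_deg:
                sectors[a].append((cy - y, cx - x))  # store the relative position
    return sectors

def corner_feature(corners, y, x, ray_angles_deg):
    """Feature information of the pixel: the number of corner points found and
    their positions relative to the pixel, grouped by ray."""
    sectors = corners_in_sectors(corners, y, x, ray_angles_deg)
    count = sum(len(offs) for offs in sectors.values())
    return count, sectors

# Hypothetical usage: two corners around a pixel at (10, 10), 8 rays at 45 deg spacing.
print(corner_feature([(10, 20), (2, 2)], 10, 10, list(range(0, 360, 45))))
```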
Therefore, the characteristic information of the pixel in this embodiment is the position relationship between the pixel and the plurality of special point pixels in the image where the pixel is located and the number of the plurality of special point pixels.
It should be noted that the two embodiments of step S11 shown in fig. 3 and fig. 5 may be used independently of each other: either only the embodiment of fig. 3 is used, extracting the edge points around a pixel in the image and forming the feature information of the pixel from the number of edge points and the positional relationship between the edge points and the pixel; or only the embodiment of fig. 5 is used, extracting the corner points around a pixel in the image and forming the feature information of the pixel from the number of corner points and the positional relationship between the corner points and the pixel. Alternatively, the embodiments of fig. 3 and fig. 5 may be used together, that is, the feature information of a pixel is formed from the number of edge points and the positional relationship between the edge points and the pixel together with the number of corner points and the positional relationship between the corner points and the pixel.
As can be seen from the above explanation of the first image and the second image, among all pixels of the second image, the pixel that matches the pixel to be matched in the first image is the one whose feature information differs least from that of the pixel to be matched. In other words, the number of edge points and/or corner points around the matching pixel in the second image is the same as the number of edge points and/or corner points around the pixel to be matched, and the positional relationship between the matching pixel in the second image and its edge points and/or corner points corresponds to the positional relationship between the pixel to be matched and its edge points and/or corner points.
Thus, with further reference to fig. 6, as shown in fig. 6, step S123 may include the following steps:
In step S1231, according to the comparison results, a first search is performed to obtain at least one pixel in the second image whose number of special points is the same as that of the pixel to be matched.
The first search is performed according to the number of edge points and/or corner points around the pixel to be matched and the number of edge points and/or corner points around each of the plurality of pixels in the second image, finding, among the plurality of pixels of the second image, those pixels whose number of edge points and/or corner points is the same as that of the pixel to be matched.
If only one such pixel is found, it can be directly matched with the pixel to be matched without performing the subsequent steps. If more than one pixel is found, the subsequent steps are executed to further determine, among the pixels obtained by the first search, the pixel that matches the pixel to be matched.
In step S1232, a second search is performed to obtain, from the at least one pixel, the pixel whose positional relationship with its special points is consistent with that of the pixel to be matched, and the pixel obtained by the second search is taken as the pixel matching the pixel to be matched.
That is, the comparison between the positional relationships of the candidate pixels to their surrounding edge points and/or corner points and the positional relationship of the pixel to be matched to its surrounding edge points and/or corner points is further used to find, among the candidate pixels, the pixel whose positional relationship is closest to that of the pixel to be matched, and the found pixel is matched with the pixel to be matched.
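A sketch of the two-stage search of steps S1231 and S1232 is given below for illustration; the (count, offsets) feature representation and the use of a summed offset difference as the "closest positional relationship" criterion are assumptions, since the disclosure does not fix a particular distance measure.

```python
def two_stage_match(feat_to_match, candidates):
    """First search: keep candidates whose special-point count equals that of the
    pixel to be matched. Second search: among those, pick the candidate whose
    relative positions agree most closely with the pixel to be matched.

    feat_to_match : (count, offsets) of the pixel to be matched
    candidates    : list of (index, (count, offsets)) for pixels of the second image
    """
    count, offsets = feat_to_match

    # First search: same number of special points.
    same_count = [(i, f) for i, f in candidates if f[0] == count]
    if len(same_count) == 1:
        return same_count[0][0]  # a unique hit is matched directly
    if not same_count:
        return None              # no candidate has a matching count

    # Second search: closest positional relationship (summed offset difference, an assumption).
    def offset_distance(offs):
        return sum(abs(dy - ey) + abs(dx - ex)
                   for (dy, dx), (ey, ex) in zip(sorted(offs), sorted(offsets)))

    best = min(same_count, key=lambda item: offset_distance(item[1][1]))
    return best[0]

# Hypothetical usage with (count, relative offsets of special points) features:
cands = [(0, (2, [(0, 3), (2, 2)])),
         (1, (3, [(0, 3), (2, 2), (-1, 1)])),
         (2, (2, [(0, 4), (3, 3)]))]
print(two_stage_match((2, [(0, 3), (2, 1)]), cands))  # -> 0
```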
Further, from the properties of the first image and the second image obtained in a binocular vision system, a pixel lying on an epipolar line of the first image necessarily has its corresponding pixel on the corresponding epipolar line of the second image; that is, the pixels on the epipolar lines of the first image and the second image correspond. Therefore, when matching the first image and the second image, the pixels on the epipolar lines of the images may be matched first.
Accordingly, the extraction of each pixel of the first image as a pixel to be matched in step S121 may be carried out as follows: first, pixels on the epipolar lines in the first image are extracted as pixels to be matched; after the matching of the pixels to be matched on the epipolar lines is completed, the pixels in the remaining regions of the first image other than the epipolar lines are extracted as pixels to be matched.
Further, referring to fig. 7, fig. 7 is a flowchart illustrating an image matching method according to a second embodiment of the present invention. As shown in fig. 7, the image matching method of the present embodiment may include the following steps:
In step S21, feature information of pixels in the first image and the second image is acquired.
The feature information of a pixel in this embodiment is the relationship feature between the pixel and a plurality of special-point pixels in the image in which the pixel is located, where a special point may be a corner point and/or an edge point. For example, a given pixel in an image has specific corner points and/or edge points around it; the relationship between these corner points and/or edge points and the pixel constitutes the feature information of the pixel, and for a given pixel this feature information is determinate.
In this step, pixels are extracted from the first image and the second image respectively and their feature information is obtained, so that in the subsequent steps the pixels of the first image and the second image can be matched by using this feature information.
In this embodiment, reference may be made to the implementation of step S11 shown in fig. 3 and fig. 5 for an implementation of extracting feature information of a pixel, and details are not described here.
In step S22, pixels on the epipolar line in the first image are extracted as pixels to be matched.
In this embodiment, pixels on the epipolar lines of the first image and the second image are matched first, and therefore, pixels on the epipolar lines in the first image are extracted first as pixels to be matched.
In step S23, the feature information of the pixel to be matched is compared with the feature information of the pixels on the epipolar line in the second image.
According to the pixel to be matched on the epipolar line of the first image and its feature information, the pixel matching it is to be found on the epipolar line of the second image. Specifically, a plurality of pixels are extracted on the epipolar line of the second image and their feature information is obtained; the feature information of the pixel to be matched is compared one by one with the feature information of the pixels extracted on the epipolar line of the second image to obtain a comparison result for each of them, and the next step is then executed according to these comparison results.
In this embodiment, the comparison result between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels is a difference between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels.
Further, referring to fig. 8, as shown in fig. 8, the step S23 may include the steps of:
In step S231, the first sub-segment in which the pixel to be matched lies on the epipolar line is determined, and the second sub-segment corresponding to the first sub-segment in the second image is determined, where the first sub-segment and the second sub-segment are formed by dividing the epipolar line by the edge lines containing corner points in the image in which each lies.
According to the obtained corner points, the edge lines containing corner points around the epipolar line are determined; these edge lines divide the epipolar line into a plurality of segments, so that when the points on the epipolar lines of the first image and the second image are compared, the segments of the epipolar line can be compared separately, and the amount of data in each comparison is relatively small.
Referring further to fig. 4, suppose that the epipolar line in the first image coincides with the line formed by virtual rays D1 and D5, and that pixel A is extracted on this epipolar line. The edge lines containing corner points in the first image are L1, L2 and L3; L1, L2 and L3 divide the epipolar line into four sub-segments S1, S2, S3 and S4, and it can be determined that pixel A lies on the segment S3 between L2 and L3, so S3 is taken as the first sub-segment. Because the corner distributions of the first image and the second image correspond, the division of the epipolar lines is consistent between the two images, and the second image is therefore not drawn separately. It will be appreciated that four sub-segments corresponding to S1, S2, S3 and S4 respectively are also obtained in the second image; the sub-segment corresponding to S3, in which pixel A lies, can thus be found in the second image, and this sub-segment is the second sub-segment corresponding to the first sub-segment.
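A small sketch of the sub-segment lookup follows; it assumes a rectified setting in which the epipolar line is a pixel row and the columns at which the corner-bearing edge lines cross it are already known, which is a simplification of the situation in fig. 4.

```python
import bisect

def sub_segment_index(x, crossing_cols):
    """Return the index of the sub-segment of the epipolar line containing column x,
    where crossing_cols are the columns at which edge lines containing corner
    points cross the epipolar line; n crossings give n + 1 sub-segments."""
    return bisect.bisect_left(sorted(crossing_cols), x)

# Hypothetical usage mirroring fig. 4: edge lines L1, L2, L3 cross at columns 40, 90, 150,
# splitting the epipolar line into S1..S4; a pixel at column 120 lies in S3 (index 2).
print(sub_segment_index(120, [40, 90, 150]))  # -> 2
```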
In step S232, the feature information of the pixel to be matched is compared with the feature information of the pixel in the second sub-segment on the epipolar line in the second image.
In this step, only the pixels in the second sub-segment of the second image, and their feature information, need to be extracted. The feature information of the pixel to be matched is compared with the feature information of the pixels in the second sub-segment on the epipolar line of the second image to obtain the comparison results.
It should be noted that, after the comparison of the feature information of the pixels of the current first sub-segment and the current second sub-segment is completed, the pixels are continuously extracted from other segments on the epipolar line in the first image, the first sub-segment and the second sub-segment are re-determined, and then the feature information of the pixels in the re-determined first sub-segment and the re-determined second sub-segment are compared until the comparison of the feature information of all the pixels on the epipolar line is completed.
In this embodiment, dividing the epipolar line into a plurality of segments reduces the number of pixels, and the amount of feature information, that need to be acquired from the second image in each matching pass; that is, the matching range is narrowed, which can improve both the matching accuracy and the matching speed.
In step S24, according to the comparison results, the pixel on the epipolar line of the second image that matches the pixel to be matched is found, so as to complete the matching of the pixel to be matched.
From the comparison results of the pixel feature information obtained in step S23, the difference between the feature information of each pixel on the epipolar line of the second image and the feature information of the pixel to be matched can be obtained; the pixel whose feature information differs least from that of the pixel to be matched is the pixel matching it, which completes the matching of the current pixel to be matched. Repeating steps S23 and S24 completes the matching of all pixels on the epipolar lines. Further, step S24 may adopt the same implementation as step S123 in fig. 6, which is not repeated here.
In step S25, pixels in the remaining regions of the first image other than the epipolar lines are extracted as pixels to be matched.
After the pixels on the epipolar lines of the first image and the second image have been matched, matching of the pixels in the remaining regions of the first image other than the epipolar lines begins, and the pixels in those remaining regions of the first image are extracted as the current pixels to be matched.
In step S26, the feature information of the pixel to be matched is compared with the feature information of the pixels in the remaining regions of the second image other than the epipolar lines.
According to the pixel to be matched in the remaining regions of the first image other than the epipolar lines and its feature information, the pixel matching it is to be found in the remaining regions of the second image other than the epipolar lines. Specifically, a plurality of pixels are extracted from the remaining regions of the second image other than the epipolar lines and their feature information is obtained; the feature information of the pixel to be matched is compared one by one with the feature information of the pixels extracted from those remaining regions of the second image to obtain a comparison result for each of them, and the next step is then executed according to these comparison results.
In this embodiment, the comparison result between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels is a difference between the feature information of the pixel to be matched and the feature information of each of the plurality of pixels.
Further, referring to fig. 9, as shown in fig. 9, the step S26 may include the steps of:
In step S261, the first sub-region in which the pixel to be matched lies within the remaining regions is determined, and the second sub-region corresponding to the first sub-region in the second image is determined, where the first sub-region and the second sub-region are formed by dividing the remaining regions according to the first sub-segment, the second sub-segment and the edge lines containing corner points in the image in which each lies.
In step S23, the epipolar line was divided into a plurality of sub-segments by the edge lines containing corner points, and the edge lines on either side of each sub-segment were determined. Accordingly, the remaining regions of the image other than the epipolar lines can be divided into a plurality of sub-regions according to the sub-segments obtained in step S23 and the edge lines containing corner points; the sub-region in which the pixel extracted in step S25 from the remaining regions of the first image lies is the first sub-region. Since the sub-region division of the second image is the same as that of the first image, the second sub-region corresponding to the first sub-region can be found in the second image.
For example, referring further to fig. 4, based on the edge lines L1, L2 and L3 and the four sub-segments S1, S2, S3 and S4 on the epipolar line, the region to the left of edge line L1 in the first image forms one sub-region, the region between edge line L1 and edge line L2 forms another, the region between edge line L2 and edge line L3 forms another, and the region to the right of edge line L3 forms another; that is, the first image is divided into a plurality of sub-regions by the edge lines. It can be determined that the pixel B extracted in step S25 lies in the sub-region between edge line L2 and edge line L3, so that sub-region is the first sub-region. Because the corner and edge-line distributions of the first image and the second image correspond, the sub-region division of the second image is consistent with that of the first image, and the second image is therefore not drawn separately. It will be appreciated that sub-regions corresponding to the respective sub-regions of the first image are also obtained in the second image, and the second sub-region corresponding to the first sub-region in which pixel B lies can be determined in the second image.
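For the sub-region case, a sketch under a simplifying assumption is shown below: each corner-bearing edge line is treated as crossing a pixel's row at most once, so the sub-region of a pixel can be identified by counting the edge lines crossed when walking from the left image border to the pixel. This is an illustrative assumption, not the disclosed procedure.

```python
import numpy as np

def sub_region_of(pixel, edge_line_mask):
    """Assign a pixel to a sub-region by counting how many corner-bearing edge
    lines are crossed between the left image border and the pixel along its row."""
    y, x = pixel
    crossings = 0
    inside = False
    for cx in range(0, x + 1):
        on_edge = bool(edge_line_mask[y, cx])
        if on_edge and not inside:
            crossings += 1  # entered a new edge line
        inside = on_edge
    return crossings        # 0 left of L1, 1 between L1 and L2, 2 between L2 and L3, ...

# Hypothetical usage: three roughly vertical edge lines L1, L2, L3 at columns 3, 6, 9.
mask = np.zeros((9, 12), dtype=np.uint8)
mask[:, [3, 6, 9]] = 1
print(sub_region_of((4, 8), mask))  # pixel B between L2 and L3 -> sub-region 2
```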
In step S262, the feature information of the pixel to be matched is compared with the feature information of the pixel in the second sub-region in the second image.
In this step, only the pixels in the second sub-region of the second image, and their feature information, need to be extracted. The feature information of the pixel to be matched is compared with the feature information of the pixels in the second sub-region of the second image to obtain the comparison results.
It should be noted that, after the comparison of the feature information of the pixels in the current first sub-region and second sub-region is completed, pixels continue to be extracted from the remaining regions of the first image other than the epipolar lines, and the comparison of their feature information continues until the feature information of all pixels in the remaining regions other than the epipolar lines has been compared.
In step S27, according to the comparison results, the pixel in the remaining regions of the second image other than the epipolar lines that matches the pixel to be matched is found, so as to complete the matching of the pixel to be matched.
From the comparison results of the pixel feature information obtained in step S26, the difference between the feature information of each pixel in the remaining regions of the second image other than the epipolar lines and the feature information of the pixel to be matched can be obtained; the pixel whose feature information differs least from that of the pixel to be matched is the pixel matching it, which completes the matching of the current pixel to be matched. Repeating steps S26 and S27 completes the matching of the pixels in the remaining regions of the first image and the second image other than the epipolar lines.
Further, step S27 may adopt the same implementation as step S123 in fig. 6, and is not described herein again.
In this embodiment, when matching the pixels of the first image and the second image, the images are divided into a plurality of regions and the pixels are matched region by region. This reduces the number of pixels, and the amount of feature information, that need to be obtained from the second image in each matching pass; that is, the matching range is narrowed, and the matching speed can thus be improved.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of an image matching apparatus according to the present invention. As shown in fig. 10, the image matching apparatus 100 of the present embodiment may include a memory 12 and a processor 11, wherein the memory 12 and the processor 11 are connected by a bus. The memory 12 is used for storing the operation instructions executed by the processor 11 and the first image and the second image which need to be matched. The processor 11 is configured to execute the operation instructions stored in the memory 12 to implement the steps of the first embodiment of the image matching method and the second embodiment of the image matching method shown in fig. 1 to 9, so as to complete the matching of the first image and the second image. Please refer to the description of the first embodiment of the image matching method and the second embodiment of the image matching method shown in fig. 1 to 9 for detailed description of the steps, which is not repeated herein.
Further, referring to fig. 11, fig. 11 is a schematic structural diagram of another embodiment of the image matching device of the present invention, and the image matching device of the present embodiment is a binocular vision system 200. As shown in fig. 11, the binocular vision system 200 of the present embodiment includes a processor 21 and a memory 22 connected by a bus, and further, the processor 21 is connected with a first camera 23, a second camera 24, and a structured light source 25, respectively.
The memory 22 is used for storing operation instructions executed by the processor 21. The processor 21 is configured to execute the operating instructions stored in the memory 22 to control the structured light source 25 to emit structured light onto the target object 26, control the first camera 23 and the second camera 24 to capture a first image and a second image of the target object 26, respectively, and store the obtained first image and second image in the memory 22. In addition, the processor 21 is further configured to execute the operation instructions stored in the memory 22 to implement the first embodiment of the image matching method and the second embodiment of the image matching method shown in fig. 1 to 9, so as to match the first image and the second image.
Referring to fig. 12, fig. 12 is a schematic structural diagram of a storage medium according to an embodiment of the invention. As shown in fig. 12, the storage medium 300 in the present embodiment stores therein program data 31 that can be executed, the program data 31 being executed to realize the first embodiment of the image matching method and the second embodiment of the image matching method shown in fig. 1 to 9. In this embodiment, the storage medium may be a storage module of the intelligent terminal, a mobile storage device (such as a mobile hard disk, a usb disk, and the like), a network cloud disk, an application storage platform, or other media with a storage function.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. An image matching method, comprising:
acquiring feature information of pixels in a first image and a second image, wherein the feature information of the pixels is a relation feature between the pixels and a plurality of special point pixels in the image where the pixels are located, and the special points comprise at least one of the following: corner points and edge points; the first image and the second image are images obtained by shooting the same target at different angles;
matching pixels in the first image and the second image by using the characteristic information of the pixels so as to realize the matching of the first image and the second image;
the relation feature between the pixel and the plurality of special point pixels in the image where the pixel is located is the position relation between the pixel and the plurality of special point pixels in the image where the pixel is located and the number of the plurality of special point pixels; and/or,
the matching of the pixels in the first image and the second image by using the characteristic information of the pixels to realize the matching of the first image and the second image comprises the following steps:
respectively extracting each pixel of the first image as a pixel to be matched; in the step of respectively extracting each pixel of the first image as the pixel to be matched, firstly extracting a pixel on an epipolar line in the first image as the pixel to be matched, and after completing the matching of the pixel to be matched on the epipolar line, extracting pixels in the rest areas except the epipolar line in the first image as the pixel to be matched;
when the pixel to be matched is a pixel on an epipolar line, determining a first sub-segment of the pixel to be matched on the epipolar line, determining a second sub-segment corresponding to the first sub-segment in the second image, and comparing the characteristic information of the pixel to be matched with the characteristic information of the pixel in the second sub-segment on the epipolar line in the second image; and/or determining a first sub-region of the pixel to be matched in the rest regions, determining a second sub-region corresponding to the first sub-region in the second image, and comparing the characteristic information of the pixel to be matched with the characteristic information of the pixel in the second sub-region in the second image, wherein the first sub-region and the second sub-region are formed by dividing the rest regions according to the first sub-segment, the second sub-segment and the edge line containing the corner point of the epipolar line of the image, and the first sub-segment and the second sub-segment are formed by dividing the epipolar line by the edge line containing the corner point of the image;
and searching out the pixels matched with the pixels to be matched in the second image according to the comparison result so as to realize the matching of the pixels to be matched.
2. The method of claim 1, wherein,
the acquiring of the feature information of the pixels in the first image and the second image includes:
each of the first image and the second image is processed as follows:
performing edge extraction in the image to obtain an edge line;
and taking each pixel of the image as a center, taking virtual rays in a plurality of preset directions, taking an intersection point of the virtual rays and the edge line as an edge point, and taking the relation characteristic between the edge point and the pixel as the characteristic information of the pixel.
3. The method of claim 1, wherein,
the acquiring of the feature information of the pixels in the first image and the second image includes:
each of the first image and the second image is processed as follows:
extracting angular points from the image;
and taking each pixel of the image as a center, making virtual rays in a plurality of preset directions, finding out corner points in a preset angle region of each virtual ray, and taking the relation characteristics between the found corner points and the pixels as the characteristic information of the pixels.
4. The method of claim 1, wherein,
the finding out the pixel matched with the pixel to be matched in the second image according to the comparison result comprises the following steps:
and searching out the pixel with the minimum difference between the characteristic information in the second image and the characteristic information of the pixel to be matched according to the comparison result, and taking the searched out pixel as the pixel matched with the pixel to be matched.
5. The method of claim 4, wherein,
the finding out the pixel with the minimum difference between the characteristic information in the second image and the characteristic information of the pixel to be matched according to the comparison result, and taking the found out pixel as the pixel matched with the pixel to be matched comprises the following steps:
according to the comparison result, performing first search to obtain at least one pixel with the same number of special points in the second image as the pixel to be matched;
and executing second searching, obtaining a pixel which is consistent with the pixel to be matched in the position relation with the special point from the at least one pixel, and taking the pixel obtained by the second searching as the pixel matched with the pixel to be matched.
6. An image matching device, comprising a processor and a memory connected with each other;
the memory is used for storing an operation instruction executed by the processor, a first image and a second image;
the processor is used for executing the operation instructions to realize the image matching method according to any one of claims 1 to 5.
7. A storage medium in which program data is stored, the program data being executed to implement the image matching method of any one of claims 1 to 5.
CN201780035664.3A 2017-12-26 2017-12-26 Image matching method, device and storage medium Active CN109313809B (en)

Applications Claiming Priority (1)

Application Number: PCT/CN2017/118752 (WO2019127049A1); Priority Date: 2017-12-26; Filing Date: 2017-12-26; Title: Image matching method, device, and storage medium

Publications (2)

Publication Number / Publication Date
CN109313809A (en) 2019-02-05
CN109313809B (en) 2022-05-31

Family

ID=65225735

Family Applications (1)

Application Number: CN201780035664.3A (CN109313809B, Active); Priority Date: 2017-12-26; Filing Date: 2017-12-26; Title: Image matching method, device and storage medium

Country Status (2)

Country Link
CN (1) CN109313809B (en)
WO (1) WO2019127049A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754572B (en) * 2019-03-29 2024-04-05 浙江宇视科技有限公司 Image processing method and device
CN110070564B (en) * 2019-05-08 2021-05-11 广州市百果园信息技术有限公司 Feature point matching method, device, equipment and storage medium
CN110717935B (en) * 2019-08-26 2022-05-17 北京中科慧眼科技有限公司 Image matching method, device and system based on image characteristic information
CN114332349B (en) * 2021-11-17 2023-11-03 浙江视觉智能创新中心有限公司 Binocular structured light edge reconstruction method, system and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015087941A (en) * 2013-10-30 2015-05-07 オリンパス株式会社 Feature point matching processing device, feature point matching processing method and program
CN103679720A (en) * 2013-12-09 2014-03-26 北京理工大学 Fast image registration method based on wavelet decomposition and Harris corner detection
CN105701766B (en) * 2016-02-24 2019-03-19 网易(杭州)网络有限公司 Image matching method and device
CN106127755A (en) * 2016-06-21 2016-11-16 奇瑞汽车股份有限公司 The image matching method of feature based and device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702056A (en) * 2009-11-25 2010-05-05 安徽华东光电技术研究所 Stereo image displaying method based on stereo image pairs
CN102750537A (en) * 2012-05-08 2012-10-24 中国矿业大学 Automatic registering method of high accuracy images
CN104123715A (en) * 2013-04-27 2014-10-29 株式会社理光 Method and system for configuring parallax value
KR20160060358A (en) * 2014-11-20 2016-05-30 삼성전자주식회사 Method and apparatus for matching stereo images
CN104679831A (en) * 2015-02-04 2015-06-03 腾讯科技(深圳)有限公司 Method and device for matching human model
CN104966281A (en) * 2015-04-14 2015-10-07 中测新图(北京)遥感技术有限责任公司 IMU/GNSS guiding matching method of multi-view images
CN106887021A (en) * 2015-12-15 2017-06-23 株式会社理光 The solid matching method of three-dimensional video-frequency, controller and system
CN106067172A (en) * 2016-05-27 2016-11-02 哈尔滨工程大学 A kind of underwater topography image based on suitability analysis slightly mates and mates, with essence, the method combined

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking; YANG Ming-dong et al.; Proceedings of the SPIE; 2015-03-31; vol. 9521; pp. 1-8 *
Reconstructing the three-dimensional surface of an object from two pairs of images using the Helmholtz reciprocity principle; Sun Yankui et al.; Journal of Computer-Aided Design & Computer Graphics; 2009-10-15; vol. 21, no. 10; pp. 1433-1437 *
Variable-weight cost aggregation stereo matching algorithm based on a horizontal tree structure; Peng Jianjian et al.; Acta Optica Sinica; 2017-09-12; vol. 38, no. 1; pp. 214-221 *

Also Published As

Publication number Publication date
CN109313809A (en) 2019-02-05
WO2019127049A1 (en) 2019-07-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 518063, 23rd Floor (Rooms 2303-2306), Desai Science and Technology Building, Yuehai Street High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province
Applicant after: Shenzhen AANDE Intelligent Technology Research Institute Co., Ltd.
Address before: 518104, No. 3 Industrial Zone, Shajing Industrial Co., Ltd., Hexiang Road, Shajing Street, Baoan District, Shenzhen City, Guangdong Province
Applicant before: Shenzhen AANDE Intelligent Technology Research Institute Co., Ltd.
GR01 Patent grant