WO2020259365A1 - Image processing method and apparatus, and computer-readable storage medium - Google Patents

Image processing method and apparatus, and computer-readable storage medium

Info

Publication number
WO2020259365A1
Authority
WO
WIPO (PCT)
Prior art keywords
matching
sub
preset
image
feature point
Application number
PCT/CN2020/096549
Other languages
English (en)
French (fr)
Inventor
杨宇尘
陈岩
方攀
金珂
Original Assignee
Oppo广东移动通信有限公司
Application filed by Oppo广东移动通信有限公司
Publication of WO2020259365A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Definitions

  • This application relates to the field of software engineering, in particular to an image processing method and device, and a computer-readable storage medium.
  • In SLAM (Simultaneous Localization and Mapping), a method of restoring the relative pose, that is, the geometric relationship between the cameras of two images, is usually used for relocation.
  • In the related art, the common way to restore the relative pose is to brute-force match the image features of the current scene against the image features of the offline map, use the matching point pairs obtained by brute-force matching to compute a preset fundamental matrix or homography matrix, and then use the result of the matrix calculation to restore the relative pose between the two images.
  • However, the matching point pairs obtained by brute-force matching often contain incorrect matches or are too few in number. The fundamental matrix or homography matrix calculated from such matching point pairs is therefore not accurate enough, which leads to a large error in the restored relative pose and ultimately makes the relocation position recovered from that relative pose less accurate. Furthermore, the fundamental matrix is suitable for restoring three-dimensional scenes, while the homography matrix is suitable for restoring planar or near-planar scenes. If the preset fundamental matrix or homography matrix does not suit the actual scene type during relocation, the relative pose recovered from it will have a larger error, again making the relocation position recovered from the relative pose less accurate.
  • the embodiment of the present application provides an image processing method, which can improve the accuracy of image processing.
  • When the image processing function is a relocation function, the accuracy of relocation is ultimately improved.
  • The embodiment of the application discloses an image processing method, including: acquiring a current frame image to be processed; extracting current image feature points of the current frame image from the current frame image, and obtaining reference feature points corresponding to a reference frame based on the current image feature points; matching the current image feature points with the reference feature points based on a first preset matching threshold to obtain a first matching point pair; amplifying the first preset matching threshold to obtain a second preset matching threshold, and obtaining a constraint condition based on the first matching point pair; matching the current image feature points with the reference feature points based on the second preset matching threshold under the constraint condition to obtain a second matching point pair; and, based on the second matching point pair, an image processing function is performed.
  • An embodiment of the present application provides an image processing device, and the image processing device includes:
  • an acquiring unit, configured to acquire a current frame image to be processed;
  • an extraction unit, configured to extract current image feature points of the current frame image from the current frame image, and obtain reference feature points corresponding to a reference frame based on the current image feature points;
  • a matching unit, configured to match the current image feature points with the reference feature points based on a first preset matching threshold to obtain a first matching point pair;
  • a calculation unit, configured to amplify the first preset matching threshold to obtain a second preset matching threshold, and obtain a constraint condition based on the first matching point pair;
  • the matching unit being further configured to match the current image feature points with the reference feature points based on the second preset matching threshold under the constraint condition to obtain a second matching point pair;
  • a processing unit, configured to perform an image processing function based on the second matching point pair.
  • An embodiment of the present application provides an image processing device, and the image processing device includes:
  • a memory, used to store executable instructions;
  • a processor, configured to implement the image processing method provided in the embodiments of the present application when executing the executable instructions stored in the memory.
  • The embodiment of the present application provides a computer-readable storage medium with executable instructions stored thereon, applied to an image processing apparatus; when the executable instructions are executed by a processor, the image processing method provided in the embodiments of the present application is implemented.
  • the embodiment of the application discloses an image processing method and device, and a computer-readable storage medium.
  • The method includes: an image processing device acquires a current frame image to be processed; the image processing device extracts current image feature points of the current frame image from the current frame image, and obtains reference feature points corresponding to a reference frame based on the current image feature points; the image processing device matches the current image feature points with the reference feature points based on a first preset matching threshold to obtain a first matching point pair; the image processing device amplifies the first preset matching threshold to obtain a second preset matching threshold, and obtains a constraint condition based on the first matching point pair; under the constraint condition, the image processing device matches the current image feature points with the reference feature points based on the second preset matching threshold to obtain a second matching point pair; and, based on the second matching point pair, an image processing function is performed.
  • In this way, when performing image matching, the image processing device first calculates the constraint condition from the first matching point pairs obtained by an initial match, then enlarges the first preset matching threshold and matches again under the constraint condition, thereby expanding to more, and more accurate, second matching point pairs. Finally, the image processing device performs image processing according to the second matching point pairs, which increases the number of matching point pairs and the matching accuracy, and ultimately improves the accuracy of image processing.
  • When the image processing function is a relocation function, the accuracy of relocation can be improved.
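  • To make the scheme concrete, the following is a minimal sketch of this two-stage matching pipeline in Python with OpenCV. It assumes ORB/BRIEF-style binary descriptors; the threshold values (similarity 50 and saliency 0.9 for the first pass, 100 and 0.95 for the second) are the example values given later in this description, RANSAC stands in for the repeated-sampling model estimation described below, and all function and variable names are illustrative rather than taken from the patent.

```python
import cv2
import numpy as np

def two_stage_match(desc_cur, desc_ref, kp_cur, kp_ref):
    """First match strictly, fit a model (the constraint condition) to those
    pairs, then re-match with relaxed thresholds and keep only pairs that
    satisfy the fitted model."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)

    # Stage 1: strict thresholds (similarity <= 50, saliency ratio 0.9).
    first = [m for m, n in bf.knnMatch(desc_cur, desc_ref, k=2)
             if m.distance < 0.9 * n.distance and m.distance <= 50]

    # Constraint condition: fundamental matrix fitted to the stage-1 pairs
    # (assumes at least 8 stage-1 matches were found).
    pts_cur = np.float32([kp_cur[m.queryIdx].pt for m in first])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in first])
    F, _ = cv2.findFundamentalMat(pts_cur, pts_ref, cv2.FM_RANSAC, 3.0)

    # Stage 2: relaxed thresholds (similarity <= 100, saliency ratio 0.95),
    # each candidate verified against the epipolar constraint from stage 1.
    second = []
    for m, n in bf.knnMatch(desc_cur, desc_ref, k=2):
        if not (m.distance < 0.95 * n.distance and m.distance <= 100):
            continue
        p1 = np.array([*kp_cur[m.queryIdx].pt, 1.0])
        p2 = np.array([*kp_ref[m.trainIdx].pt, 1.0])
        a, b, c = F @ p1                          # epipolar line of p1
        err = (a * p2[0] + b * p2[1] + c) ** 2 / (a ** 2 + b ** 2)
        if err <= 9.0:                            # preset error threshold (px^2)
            second.append(m)
    return second
```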
  • FIG. 1 is a first flowchart of an image processing method provided by an embodiment of this application;
  • FIG. 2 is a second flowchart of an image processing method provided by an embodiment of this application;
  • FIG. 3 is a first structural diagram of an image processing device provided by an embodiment of this application;
  • FIG. 4 is a second structural diagram of an image processing device provided by an embodiment of this application.
  • An embodiment of the application provides an image processing method. As shown in FIG. 1, the method may include:
  • The image processing method provided by the embodiments of this application is suitable for initialization and relocation scenarios of a SLAM system, such as relocation in augmented reality (AR), relocation of autonomous mobile robots, relocation in unmanned driving, and other scenarios. It is also suitable for recognition and matching under non-rigid deformation of images, such as image matching between a non-rigidly deformed object and the object in its original state.
  • When the image processing device performs image processing, it first needs to obtain the current frame image to be processed.
  • the image processing device calls the shooting function to take a picture of the scene at the current location, and obtain the current frame image to be processed.
  • S102 Extract the current image feature point of the current frame image from the current frame image, and obtain the reference feature point corresponding to the reference frame based on the current image feature point.
  • In the embodiment of this application, after the image processing device obtains the current frame image to be processed, the image processing device first extracts feature points from the current frame image as the current image feature points, then, based on the current image feature points, finds the reference image frame that best matches the current image feature points as the reference frame, and finally extracts feature points from the reference frame as the reference feature points.
  • the image processing device extracts, from the acquired current frame image, points in the current frame image that can effectively reflect the essential features of the image and can identify objects in the image, as the current image feature points.
  • The image processing device can use the FAST (Features from Accelerated Segment Test) algorithm to extract the current image feature points from the current frame image, or other algorithms; the specific choice depends on the actual situation and is not limited by the embodiments of this application.
  • In the embodiment of this application, after the image processing device obtains the current image feature points, it matches against the reference image frames according to the current image feature points and finds the reference image frame that best matches the current image feature points as the reference frame; the image processing device then obtains feature points from the reference frame as the reference feature points.
  • In the embodiment of this application, when matching among the reference image frames based on the current image feature points, the image processing device can calculate the bag-of-words model of the current frame image from the image description feature points among the current image feature points and match using that bag-of-words model. The image processing device can also use other algorithms for matching, such as VLAD (vector of locally aggregated descriptors) or a traversal method; the specific choice depends on the actual situation and is not limited by the embodiments of this application.
  • In some embodiments, the image processing device calculates the bag-of-words model feature of the current frame image from the image description feature points among the current image feature points, finds the frame in the preset map feature library that best matches this bag-of-words model feature, and uses that preset scene map frame as the reference frame for relocation.
  • The preset map feature library is composed of the feature points extracted by the image processing device from at least one preset map image frame during map construction; each preset map image frame is stored in the preset map feature library in the form of preset map feature points.
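  • As a rough illustration of this retrieval step, the sketch below builds a small bag-of-words vocabulary over map descriptors with k-means and selects the map frame whose word histogram is most similar to that of the current frame. Production systems typically use a hierarchical vocabulary with TF-IDF weighting (e.g. DBoW-style libraries); descriptors are assumed to have been converted to float vectors, and all names here are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_map_descriptors, num_words=200):
    # all_map_descriptors: (N, D) float array pooled over all map frames.
    return KMeans(n_clusters=num_words, n_init=4).fit(all_map_descriptors)

def bow_histogram(vocab, descriptors):
    # Assign every descriptor to its nearest visual word, then L2-normalize
    # the word-count histogram so frames of different sizes are comparable.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def best_reference_frame(vocab, cur_descriptors, map_frame_descriptors):
    # map_frame_descriptors: one (M_i, D) descriptor array per map frame.
    cur_hist = bow_histogram(vocab, cur_descriptors)
    scores = [cur_hist @ bow_histogram(vocab, d) for d in map_frame_descriptors]
    return int(np.argmax(scores))  # index of the best-matching reference frame
```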
  • S103 Match the current image feature point with the reference feature point based on the first preset matching threshold to obtain a first matching point pair.
  • Each first matching point pair includes one current frame image feature point and one reference frame feature point, which respectively represent the position coordinates, in the current frame image coordinate system and in the reference frame image coordinate system, of pixel points on the same object.
  • The image processing device matches the current image feature points with the reference feature points and determines whether the similarity and saliency between a current image feature point and a reference feature point meet the first preset matching threshold. Meeting the first preset matching threshold indicates that the current image feature point and the reference feature point may be pixels of the same part of the same object, appearing at different positions in the current frame image and the reference frame respectively. The image processing device therefore determines the current image feature point and the reference feature point as a first matching point pair, where the first matching point pair includes at least one matched pair of a current image feature point and a reference feature point.
  • S104 Amplify the first preset matching threshold to obtain a second preset matching threshold, and obtain a constraint condition based on the first matching point pair.
  • In the embodiment of this application, the image processing device matches the current image feature points with the reference feature points based on the first preset matching threshold. After obtaining the first matching point pairs, the image processing device amplifies the first preset matching threshold to obtain the second preset matching threshold, and calculates, based on the first matching point pairs, the mathematical model that the first matching point pairs satisfy, as the constraint condition.
  • Since the number of first matching point pairs obtained under the first preset matching threshold is generally small, and false matches often remain among them, directly using the mathematical model of the first matching point pairs for image processing would greatly affect the accuracy of the calculation results and thus the accuracy of image processing. Therefore, the image processing device amplifies the first preset matching threshold to obtain the second preset matching threshold, using the relaxed threshold condition to obtain more matching point pairs from the current image feature points and the reference feature points. Further, to ensure the accuracy of the matching point pairs, the image processing device calculates the mathematical model that the first matching point pairs satisfy as a constraint condition, for further screening of the matching point pairs obtained after relaxing the threshold condition.
  • When the image processing function is a relocation function, the constraint condition may be an algebraic representation of the constraint that image points of the same three-dimensional point in a scene satisfy under different viewing angles; for example, the fundamental matrix formula corresponding to the first matching point pairs can be calculated from the first matching point pairs as the constraint condition. The constraint condition may also be the constraint between image points when a point on one projective plane is mapped to another projective plane; for example, the homography matrix formula corresponding to the first matching point pairs can be calculated from the first matching point pairs as the constraint condition.
  • The constraint condition may also be another form of geometric constraint or mathematical model, selected according to actual conditions, which is not limited in the embodiments of the present application.
  • After the image processing device amplifies the first preset matching threshold to obtain the second preset matching threshold and obtains the constraint condition based on the first matching point pairs, the image processing device matches the current image feature points with the reference feature points again under the second preset matching threshold. Since the second preset matching threshold has a wider screening range than the first preset matching threshold, the image processing device obtains more matching point pairs than in the first match. The image processing device then uses the constraint condition to verify all matching point pairs obtained in this second match, treats the matching point pairs that satisfy the constraint condition as correct matching point pairs and those that do not as incorrect matching point pairs, and finally takes all correct matching point pairs as the second matching point pairs.
  • In some embodiments, the image processing apparatus may also verify only whether the newly added matching point pairs in the second match meet the constraint condition, and then regard the newly added matching point pairs satisfying the constraint condition, together with the first matching point pairs, as the second matching point pairs.
  • It can be understood that the image processing apparatus can match more matching point pairs by using the second preset matching threshold. Because the image processing apparatus verifies the matching point pairs obtained under the second preset matching threshold against the constraint condition calculated from the first matching point pairs, the correct-match rate of the second matching point pairs that satisfy the constraint condition is no lower than that of the first matching point pairs; therefore, the image processing apparatus obtains more, and more accurate, second matching point pairs.
  • In some embodiments of the present application, the image processing device may also perform a brute-force match of the current image feature points and the reference feature points under the second preset matching threshold to obtain N+M pairs of matching points. The image processing device then only needs to check the M newly added pairs to determine the M1 pairs among them that satisfy the constraint condition, and finally uses the N+M1 pairs of matching points as the second matching point pairs.
  • In some embodiments, the image processing device may calculate the accuracy rate and the recall rate of the matching results of brute-force matching and of the embodiment of the present application, respectively.
  • The accuracy rate is the percentage of correct matching point pairs among all matched point pairs; the recall rate is the percentage of all true correct matches that are actually found. The higher the accuracy rate, the more accurate the result; the higher the recall rate, the more complete the set of correct matches found.
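  • In standard notation, with TP the number of correct matching point pairs returned, FP the number of incorrect pairs returned, and FN the number of true matches the matcher fails to return, these two rates are:

    \text{accuracy (precision)} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}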
  • The image processing device selects the same 10 sets of pictures and, under the same parameters, calculates the accuracy and recall rate of the matching results of brute-force matching and of the embodiment of the present application.
  • the results are shown in Table 1:
  • S106 Perform an image processing function based on the second matching point pair.
  • the image processing device may perform corresponding image processing functions based on the second matching point pair.
  • For example, the image processing device can use the second matching point pairs to restore the relative pose between the current frame image and the reference frame, and finally realizes the relocation function through the relative pose.
  • In some embodiments, the image processing device may use the second matching point pairs to calculate the degree of image matching between a non-rigidly deformed object and the object in its original state, so as to identify the correspondence between them.
  • It can be understood that the image processing device can first calculate a small number of reliable matches, estimate the mathematical model from these matches, and then expand to more matches while ensuring the accuracy of the matching point pairs. This solves the problems of too few matching point pairs and many wrong matching point pairs, increases the number and accuracy of matching point pairs, and ultimately improves the accuracy of image processing.
  • the embodiments of the present application also provide an image processing method. As shown in FIG. 2, the method may include:
  • the image processing apparatus acquires a current frame image to be processed.
  • the image processing device extracts a current image feature point of the current frame image from the current frame image, and obtains a reference feature point corresponding to the reference frame based on the current image feature point.
  • In the embodiment of this application, after the image processing device obtains the current frame image to be processed, the image processing device first extracts feature points from the current frame image as the current image feature points, then, based on the current image feature points, finds the frame that best matches the current image feature points from the reference image frames as the reference frame, and finally extracts feature points from the reference frame as the reference feature points.
  • the image processing apparatus extracting the current image feature points of the current frame image from the current frame image may specifically include: S2021 to S2024. as follows:
  • S2021: Extract, from the current frame image, a preset number of corner points as first pixel points representing corners and boundaries.
  • the image processing device extracts the corner point from the current frame image as the first pixel point.
  • That is, the image processing device extracts pixel points such as table corners and object corners in the current frame image, i.e., corner points, because pixels at corners and edges of an image are highly recognizable and more representative for image recognition. In some embodiments of the present application, the image processing device extracts 150 corner points.
  • S2022: The image processing device extracts features from second pixel points within a preset range of each first pixel point to obtain first image feature points.
  • After the image processing device extracts the preset number of first pixel points representing corners and boundaries from the current frame image, the image processing device extracts features from the second pixel points within the preset range of each first pixel point to obtain the first image feature points. That is, the image processing device extracts features from the pixel points within the preset range of a first pixel point as a first image feature point, and the first image feature point is used to describe the first pixel point.
  • The feature point may be a BRIEF (Binary Robust Independent Elementary Features) descriptor, or another data format, selected according to actual conditions; the embodiment of the present application does not impose specific limitations.
  • For example, the image processing device extracts the BRIEF binary robust feature from the pixel points within a 31×31 window around each corner point as the first image feature point.
  • S2023: The image processing device extracts features from third pixel points outside the preset range of the first pixel points to obtain second image feature points.
  • After the image processing device extracts the features of the second pixel points within the preset range of the first pixel points and obtains the first image feature points, the image processing device extracts features from the third pixel points outside the preset range of the first pixel points to obtain the second image feature points. That is, the image processing device additionally extracts more feature points from the current frame image to better characterize it.
  • The image processing device extracts more feature points for scenes with complex textures, and fewer feature points for generally weak-texture scenes, such as white walls, as the second image feature points.
  • the image processing device will extract 500 to 1500 second image feature points.
  • S2024: The image processing device uses the first image feature points and the second image feature points as the current image feature points.
  • After the image processing device extracts features from the third pixel points outside the preset range of the first pixel points and obtains the second image feature points, the image processing device takes the first image feature points and the second image feature points together as the current image feature points. In other words, the current image feature points include the first image feature points and the second image feature points.
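  • A minimal sketch of this extraction with OpenCV's FAST detector and BRIEF descriptor (the BRIEF extractor lives in opencv-contrib-python): the strongest 150 responses play the role of the corner points and the following responses play the role of the 500 to 1500 additional points. The split by response strength and all names are illustrative, and OpenCV's default BRIEF patch stands in for the 31×31 window mentioned above.

```python
import cv2

def extract_current_image_features(gray, num_corners=150, num_extra=1500):
    """FAST corners plus BRIEF binary descriptors around each keypoint."""
    fast = cv2.FastFeatureDetector_create(threshold=20)
    brief = cv2.xfeatures2d.BriefDescriptorExtractor_create(bytes=32)

    kps = sorted(fast.detect(gray, None), key=lambda k: k.response, reverse=True)
    first_kps = kps[:num_corners]                          # first image feature points
    extra_kps = kps[num_corners:num_corners + num_extra]   # second image feature points

    # Describe both sets; together they are the current image feature points.
    all_kps, descriptors = brief.compute(gray, first_kps + extra_kps)
    return all_kps, descriptors
```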
  • the image processing apparatus obtaining the reference feature point corresponding to the reference frame based on the current image feature point may specifically include: S2025 to S2026. as follows:
  • S2025: The image processing device matches the current image feature points with a preset map feature library, where the preset map feature library stores the preset map feature points of at least one preset map image frame.
  • In the embodiment of this application, the image processing device searches the preset map feature library for the best-matching preset map image frame according to the current image feature points, where the preset map feature library stores the preset map feature points of at least one preset map image frame.
  • When the image processing device matches against the preset map feature library according to the current image feature points, it can use the bag-of-words model feature of the current frame image for matching, or other algorithms such as VLAD (vector of locally aggregated descriptors) or a traversal method; the specific choice depends on actual conditions and is not limited in the embodiment of this application.
  • In some embodiments, when the image processing device uses the bag-of-words model of the current frame image to match against the preset map feature library, the image processing device first calculates the bag-of-words model of the current frame image from the first image feature points among the current image feature points. The bag-of-words model feature is an expression model in which the first image feature points of the current frame image are represented by a "bag" containing specific words; the image processing device then compares the bag-of-words model of the current frame image with the preset map feature points contained in each preset map image frame in the preset map feature library.
  • When the image processing device matches the current image feature points with the preset map feature library, it can match the current image feature points against the preset map image frames, or against key frames among the preset map image frames, selected according to actual conditions; the embodiment of the present application does not impose specific limitations.
  • S2026: The image processing device uses the preset map image frame in the preset map feature library that best matches the current image feature points as the reference frame, and uses the preset map feature points contained in the reference frame as the reference feature points.
  • In the embodiment of this application, after the image processing device matches the current image feature points with the preset map feature library, the image processing device finds the preset map image frame that best matches the current image feature points as the reference frame; after finding the reference frame, the image processing device obtains the preset map feature points contained in the reference frame as the reference feature points.
  • The preset map feature points are the feature points extracted by the image processing device from at least one preset map image frame during map construction, and the principle by which the image processing device extracts feature points from a preset map image frame is the same as in steps S2021 to S2024.
  • the image processing device matches the current image feature point with the reference feature point based on the first preset matching threshold to obtain a first matching point pair.
  • After the image processing device extracts the current image feature points of the current frame image from the current frame image and obtains the reference feature points corresponding to the reference frame based on the current image feature points, the image processing device matches the current image feature points with the reference feature points based on the first preset matching threshold to obtain the first matching point pairs.
  • the image processing device matches the current image feature point with the reference feature point based on the first preset matching threshold, and obtaining the first matching point pair may specifically include: S2031 to S2035. as follows:
  • S2031: The image processing device combines each current image sub-feature point among the current image feature points with the reference feature points one by one to obtain at least one sub-feature point pair.
  • After the image processing device takes the best-matching preset map image frame in the preset map feature library as the reference frame and the preset map feature points contained in the reference frame as the reference feature points, the image processing device pairs each current image sub-feature point with each reference sub-feature point one by one, combining them until all current image sub-feature points have been combined with all reference sub-feature points, thereby obtaining at least one sub-feature point pair. Each sub-feature point pair includes one current image sub-feature point and one reference sub-feature point.
  • For example, suppose the current image feature points include current image sub-feature points A1, A2 and A3, and the reference feature points are B, C and D. The image processing device combines the current image sub-feature point A1 with the three reference feature points B, C and D one by one to obtain three sub-feature point pairs (A1, B), (A1, C) and (A1, D). Combining all current image sub-feature points with all reference feature points likewise yields the sub-feature point pairs (A2, B), (A2, C), (A2, D), (A3, B), (A3, C) and (A3, D); the image processing device takes these 9 sub-feature point pairs as the at least one sub-feature point pair.
  • It should be noted that only the 150 corner points among the feature points of the current frame image are combined with the reference feature points and used for matching. If the 500 to 1500 additional feature points were also used for matching, it would cause extra computation without greatly improving the number of first matching point pairs or the matching accuracy.
  • S2032: For the at least one sub-feature point pair, the image processing device calculates the Hamming distance from each current image sub-feature point to the reference feature point, obtaining at least one Hamming distance, one for each sub-feature point pair.
  • After the image processing device combines each current image sub-feature point among the current image feature points with the reference feature points one by one to obtain the at least one sub-feature point pair, for each sub-feature point pair the image processing device calculates the Hamming distance between the current image sub-feature point and the reference sub-feature point it contains, obtaining one Hamming distance per sub-feature point pair.
  • The Hamming distance is the number of differing bits between two binary codes. The image processing device counts the differing bits of the binary codes between the feature of the current image sub-feature point and the corresponding feature of the reference feature point to obtain the Hamming distance of each sub-feature point pair, and performs the same calculation for every pair to obtain the at least one Hamming distance of the at least one sub-feature point pair.
  • S2033: The image processing device determines the minimum Hamming distance and the second-smallest Hamming distance from the at least one Hamming distance.
  • After the image processing device calculates the Hamming distance for each sub-feature point pair and obtains the at least one Hamming distance, the image processing device finds, among the at least one Hamming distance, the smallest and the second-smallest values as the minimum Hamming distance and the second-smallest Hamming distance.
  • S2034: The image processing device determines whether the minimum Hamming distance is less than the preset saliency threshold multiplied by the second-smallest Hamming distance; when it is, the image processing device compares the minimum Hamming distance with the preset similarity threshold. The first preset matching threshold includes the preset similarity threshold and the preset saliency threshold.
  • When a Hamming distance is less than the preset similarity threshold, the current image sub-feature point corresponding to that Hamming distance is sufficiently similar to the reference sub-feature point; when the minimum Hamming distance is less than the preset saliency threshold multiplied by the second-smallest Hamming distance, the current image sub-feature point corresponding to that Hamming distance matches the reference sub-feature point distinctly, with no other comparable match.
  • For example, the preset saliency threshold may take a value of 0.9 and the preset similarity threshold may take a value of 50; other values may be selected according to actual conditions, which are not limited in the embodiment of the present application.
  • The image processing device multiplies the second-smallest Hamming distance by the preset saliency threshold and compares the result with the minimum Hamming distance. If the minimum Hamming distance is smaller than this result, the match between the current image sub-feature point corresponding to the minimum Hamming distance and the reference sub-feature point is distinctly better than any other, with no other comparable match.
  • For example, when the minimum Hamming distance is 40 and the second-smallest Hamming distance is 50, the image processing device multiplies the second-smallest Hamming distance by the preset saliency threshold, that is, 50 × 0.9, to obtain the result 45; since 40 < 45, the saliency check passes.
  • After the image processing device verifies the saliency of the Hamming distance, it also needs to verify its similarity. Therefore, when the minimum Hamming distance is less than the preset saliency threshold multiplied by the second-smallest Hamming distance, the image processing device further checks whether the minimum Hamming distance is less than or equal to the preset similarity threshold.
  • S2035: When the minimum Hamming distance is less than or equal to the preset similarity threshold, the image processing device determines, from the at least one sub-feature point pair, the sub-feature point pair corresponding to the minimum Hamming distance for each current image sub-feature point, so that the image processing device obtains the first matching point pairs corresponding to the current image feature points.
  • That is, when the minimum Hamming distance is less than the preset saliency threshold multiplied by the second-smallest Hamming distance, and the comparison with the preset similarity threshold shows that the minimum Hamming distance is also less than or equal to the preset similarity threshold, the image processing device determines the sub-feature point pair corresponding to the minimum Hamming distance from the at least one sub-feature point pair, thereby obtaining a first matching point pair corresponding to the current image feature point. The image processing device judges each sub-feature point pair in turn, so that among the at least one sub-feature point pair it can determine all matched sub-feature point pairs as the first matching point pairs.
  • For example, suppose the preset similarity threshold is 50 and the preset saliency threshold is 0.9. The image processing device calculates the Hamming distances between the current image sub-feature point A and the three reference feature points B, C and D, obtaining a Hamming distance from A to B of 40, from A to C of 50, and from A to D of 60.
  • The image processing device determines the minimum Hamming distance 40 and the second-smallest Hamming distance 50. Multiplying 50 by the preset saliency threshold 0.9 gives 45, which is still greater than the minimum Hamming distance 40, indicating that the match from A to B is distinctly better than the match from A to C or from A to D. The image processing device then compares the minimum Hamming distance 40 with the preset similarity threshold 50 and finds that it is less than the threshold, so the image processing device can determine that (A, B) is a matched sub-feature point pair.
  • Following the same principle, the image processing device can continue to determine, from the at least one sub-feature point pair, all matched sub-feature point pairs as the first matching point pairs.
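  • The first matching stage just described (minimum versus second-smallest Hamming distance, saliency threshold 0.9 and similarity threshold 50) can be sketched directly over bit-packed binary descriptors; the function names are illustrative.

```python
import numpy as np

def hamming_distances(d, ref_desc):
    # d: (D,) uint8 descriptor; ref_desc: (M, D) uint8 descriptors.
    # Popcount of the XOR gives the number of differing bits.
    return np.unpackbits(d[None, :] ^ ref_desc, axis=1).sum(axis=1)

def first_match(cur_desc, ref_desc, similarity=50, saliency=0.9):
    """One pass of the first matching stage (S2031 to S2035)."""
    pairs = []
    for i, d in enumerate(cur_desc):
        dists = hamming_distances(d, ref_desc)
        order = np.argsort(dists)
        d_min, d_second = dists[order[0]], dists[order[1]]
        # Saliency: best match clearly better than the second best.
        # Similarity: best match close enough in absolute terms.
        if d_min < saliency * d_second and d_min <= similarity:
            pairs.append((i, int(order[0])))
    return pairs
```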
  • the image processing device amplifies the first preset matching threshold to obtain a second preset matching threshold.
  • In the embodiment of this application, after the image processing device matches the current image feature points with the reference feature points based on the first preset matching threshold and obtains the first matching point pairs, the image processing device amplifies the first preset matching threshold to obtain the second preset matching threshold.
  • That is, after the image processing apparatus determines, from the at least one sub-feature point pair, the sub-feature point pair corresponding to the minimum Hamming distance for each current image sub-feature point and obtains the first matching point pairs corresponding to the current image feature points, the image processing device amplifies the preset similarity threshold and the preset saliency threshold in the first preset matching threshold to obtain the second preset matching threshold.
  • For example, the image processing device amplifies the preset similarity threshold in the first preset matching threshold to 100 and the preset saliency threshold to 0.95, and uses the preset similarity threshold of 100 and the preset saliency threshold of 0.95 as the second preset matching threshold.
  • the image processing apparatus obtains at least one preset initial constraint condition.
  • In the embodiment of this application, after the image processing device amplifies the first preset matching threshold and obtains the second preset matching threshold, the image processing device calculates the constraint condition from at least one preset initial constraint condition, and the image processing device can acquire the at least one preset initial constraint condition according to the particular image processing function.
  • The image processing device amplifies the first preset matching threshold to obtain the second preset matching threshold in order to obtain more matching point pairs. The image processing device also needs to calculate the constraint condition that the first matching point pairs satisfy, so as to verify whether the matching point pairs obtained in the second match satisfy the same constraint; calculating the constraint condition first requires obtaining the preset initial constraint conditions.
  • the image processing apparatus needs to obtain at least one preset initial constraint condition according to different image processing functions.
  • For example, the image processing apparatus may use the fundamental matrix formula and the homography matrix formula as the preset initial constraint conditions.
  • The image processing apparatus uses the first matching point pairs to calculate the at least one preset initial constraint condition to obtain at least one sub-constraint condition.
  • the image processing apparatus uses the first matching point pair to calculate at least one preset initial constraint condition, and the step of obtaining at least one sub-constraint condition may specifically include: S2061 to S2063. as follows:
  • S2061: For each preset initial constraint condition, the image processing device selects a preset number of first matching point pairs from the first matching point pairs to form a combination, and repeats this a preset number of times to obtain a number of first matching point pair combinations equal to the preset number of times, where each combination contains the preset number of first matching point pairs.
  • That is, after the image processing apparatus obtains the at least one preset initial constraint condition, in order to calculate according to each preset initial constraint condition, the image processing apparatus selects a preset number of first matching point pairs from the first matching point pairs to form combinations, repeating the preset number of times to obtain as many first matching point pair combinations as the preset number of times.
  • For example, when the preset initial constraint condition is the fundamental matrix calculation formula, at least 8 pairs of matching points are required as calculation parameters, so the image processing device randomly selects 8 first matching point pairs from the first matching point pairs as one combination for calculating the fundamental matrix. To improve calculation accuracy and reduce error, the image processing device can repeat the random selection of 8 matching point pairs 100 times, obtaining 100 randomly selected first matching point pair combinations, each containing 8 matching point pairs.
  • Likewise, when the preset initial constraint condition is the homography matrix calculation formula, the image processing device randomly selects 4 first matching point pairs from the first matching point pairs as one combination for calculating the homography matrix. To improve calculation accuracy and reduce error, the image processing device can repeat the random selection of 4 matching point pairs 100 times, obtaining 100 randomly selected first matching point pair combinations, each containing 4 matching point pairs.
  • S2062: The image processing device uses each combination among the first matching point pair combinations to calculate each preset initial constraint condition, and obtains the preset number of sub-constraint conditions corresponding to each preset initial constraint condition as the final sub-constraint conditions of that preset initial constraint condition.
  • That is, after the image processing device selects a preset number of first matching point pairs per combination and repeats the preset number of times to obtain the first matching point pair combinations, the image processing device calculates each preset initial constraint condition using each of the first matching point pair combinations, where one first matching point pair combination is calculated once for each different preset initial constraint condition. The image processing device can therefore obtain, from the preset number of first matching point pair combinations, the preset number of sub-constraint conditions corresponding to each preset initial constraint condition, as the final sub-constraint conditions of that preset initial constraint condition.
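  • The repeated random selection just described (8 pairs per combination for the fundamental matrix, 4 for the homography, 100 combinations each) follows the classic RANSAC sampling pattern; a minimal sketch with illustrative names:

```python
import numpy as np

def sample_combinations(num_pairs, sample_size, num_trials=100, seed=0):
    """Draw `num_trials` random index combinations of `sample_size`
    first matching point pairs each (8 for F, 4 for H)."""
    rng = np.random.default_rng(seed)
    return [rng.choice(num_pairs, size=sample_size, replace=False)
            for _ in range(num_trials)]

# e.g. fundamental matrix: sample_combinations(len(first_pairs), 8)
#      homography:         sample_combinations(len(first_pairs), 4)
```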
  • In some embodiments, using a first matching point pair combination, the image processing device calculates the fundamental matrix as:

    F = K^{-T} E K^{-1}    (1)

  • where E is the essential matrix, which can be calculated from the first matching point pair combination; K is the camera intrinsic matrix; the superscript T denotes matrix transposition; K^{-T} denotes the transpose of the inverse of the intrinsic matrix; and K^{-1} denotes the inverse of the intrinsic matrix. K can be expressed as:

    K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}    (2)

  • where f_x and f_y are the focal lengths of the camera in the x and y directions, respectively, and c_x and c_y are the x and y coordinates of the image center relative to the origin of the image coordinates, in pixels.
  • The camera intrinsic matrix can be determined directly from the camera parameters, so in formula (1) K is a known parameter. The image processing device only needs to calculate the essential matrix E from the first matching point pair combination, substitute E and the known K into formula (1), and use formula (1) with E and K known as the sub-constraint condition.
  • In some embodiments, the image processing device calculates the essential matrix E from the coordinates of the points in the first matching point pair combination via the epipolar constraint:

    x_2^T E x_1 = 0    (4)

  • where x_1 = (u_1, v_1, 1)^T and x_2 = (u_2, v_2, 1)^T are the normalized coordinates of a matching point pair, and e_1 to e_9 represent the values of the matrix elements when the essential matrix E is written as a 3×3 matrix. The image processing device expands the matrix E into the vector e so as to solve the resulting equations:

    e = (e_1, e_2, e_3, e_4, e_5, e_6, e_7, e_8, e_9)^T    (5)

  • Substituting formula (5) into formula (4) and multiplying out the coordinate points in formula (4), the image processing device obtains one equation from each pair of matching points, as shown in (6):

    (u_2 u_1, \; u_2 v_1, \; u_2, \; v_2 u_1, \; v_2 v_1, \; v_2, \; u_1, \; v_1, \; 1) \cdot e = 0    (6)

  • The image processing device can thus solve for an essential matrix E from 8 pairs of first matching points, that is, from one combination of matching point pairs.
  • The image processing device uses the preset number of first matching point pair combinations to perform the fundamental matrix calculation the preset number of times, obtaining the preset number of essential matrices E, and uses the resulting preset number of fundamental matrix calculation formulas with different parameters E as the final sub-constraint conditions corresponding to the fundamental matrix formula.
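  • A sketch of the eight-point solve implied by formulas (4) to (6): stack one row of (6) per normalized matching point pair and take the null vector of the system by SVD. K and the pixel coordinates are assumed given, and no rank-2 cleanup of E is applied here.

```python
import numpy as np

def essential_from_eight(pts1, pts2, K):
    """pts1, pts2: (8, 2) pixel coordinates of one matching point pair combination."""
    Kinv = np.linalg.inv(K)
    # Normalized homogeneous coordinates x = K^-1 p.
    x1 = (Kinv @ np.column_stack([pts1, np.ones(8)]).T).T
    x2 = (Kinv @ np.column_stack([pts2, np.ones(8)]).T).T
    # One row of formula (6) per matching point pair.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(8),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)   # null vector of A, reshaped as in formula (5)
    F = Kinv.T @ E @ Kinv      # fundamental matrix, formula (1)
    return E, F
```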
  • In some embodiments, using a first matching point pair combination, the image processing device calculates the homography matrix as:

    s \, p_2 = H p_1    (8)

  • where H represents the homography matrix, p_1 = (u_1, v_1, 1)^T and p_2 = (u_2, v_2, 1)^T are a pair of matching points contained in the first matching point pair combination, and s is an unknown scale factor. The image processing device substitutes the pixel coordinates of p_1 and p_2 into the homography calculation formula (8) and expands, eliminating s, to obtain:

    u_2 (h_7 u_1 + h_8 v_1 + h_9) = h_1 u_1 + h_2 v_1 + h_3
    v_2 (h_7 u_1 + h_8 v_1 + h_9) = h_4 u_1 + h_5 v_1 + h_6    (9)

  • so the image processing device obtains two constraint equations for calculating the homography matrix H from each pair of matching points, where the homography matrix H is expanded as the vector h = (h_1, ..., h_9)^T. The image processing device can therefore obtain 8 equations in the homography matrix from the 4 pairs of first matching points, where the value of h_9 is known and equal to 1.
  • The resulting system of linear equations is usually solved with the direct linear transform (DLT) method; other methods may also be used according to actual conditions, and the embodiments of the present application do not impose specific limitations.
  • The image processing device uses the preset number of first matching point pair combinations to perform the homography matrix calculation the preset number of times, obtaining the preset number of homography matrices, and uses the resulting preset number of homography matrix calculation formulas with different parameters H as the final sub-constraint conditions corresponding to the homography matrix formula.
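  • A corresponding sketch of the four-point DLT for formulas (8) and (9): two equation rows per matching point pair, solved for h by SVD and rescaled so that h_9 = 1.

```python
import numpy as np

def homography_from_four(pts1, pts2):
    """pts1, pts2: (4, 2) pixel coordinates of one matching point pair combination."""
    rows = []
    for (u1, v1), (u2, v2) in zip(pts1, pts2):
        # Two rows of formula (9) per matching point pair.
        rows.append([u1, v1, 1, 0, 0, 0, -u2 * u1, -u2 * v1, -u2])
        rows.append([0, 0, 0, u1, v1, 1, -v2 * u1, -v2 * v1, -v2])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = Vt[-1]
    return (h / h[8]).reshape(3, 3)   # scale so that h9 = 1
```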
  • S2063: The image processing apparatus determines the final sub-constraint conditions of each preset initial constraint condition among the at least one preset initial constraint condition as the at least one sub-constraint condition.
  • That is, after the image processing device uses each combination to calculate each preset initial constraint condition and obtains the preset number of sub-constraint conditions corresponding to each preset initial constraint condition as its final sub-constraint conditions, the image processing apparatus collectively takes the final sub-constraint conditions corresponding to each of the at least one preset initial constraint condition as the at least one sub-constraint condition.
  • For example, the image processing device uses the fundamental matrix calculation formula as a preset initial constraint condition and calculates over the 100 first matching point pair combinations, obtaining 100 fundamental matrix formulas with different known parameters as the 100 sub-constraint conditions corresponding to the fundamental matrix; the image processing device likewise uses the homography matrix calculation formula as a preset initial constraint condition and calculates over the 100 first matching point pair combinations, obtaining 100 homography matrix formulas with different known parameters as the 100 sub-constraint conditions corresponding to the homography matrix. The image processing device uses these 200 sub-constraint conditions together as the at least one sub-constraint condition corresponding to the at least one preset initial constraint condition.
  • the image processing device determines at least one third mapping value based on the three-dimensional coordinates of each third feature point in the first matching point pair and the at least one sub-constraint condition.
  • That is, after the image processing device uses the first matching point pairs to calculate the at least one preset initial constraint condition and obtains the at least one sub-constraint condition, the image processing device determines at least one third mapping value based on the three-dimensional coordinates of the feature points in each third matching point pair among the first matching point pairs and on each sub-constraint condition with its different known parameters.
  • In some embodiments, according to a fundamental matrix formula with known parameters or a homography matrix formula with known parameters among the at least one sub-constraint condition, the image processing device uses the three-dimensional coordinates of each current image feature point in the first matching point pairs and of the corresponding reference feature point to obtain the value of the corresponding fundamental matrix or homography matrix as a third mapping value. The image processing device calculates all sub-constraint conditions among the at least one sub-constraint condition until processing is complete, and can thus determine the values of at least one fundamental matrix and homography matrix as the at least one third mapping value.
  • the image processing apparatus determines at least one third matching error based on the at least one third mapping value.
  • That is, after the image processing device determines the at least one third mapping value based on the three-dimensional coordinates of each third feature point in the first matching point pairs and the at least one sub-constraint condition, the image processing device calculates, according to an error calculation formula, the error of the first matching point pairs under the action of each third mapping value, as the at least one third matching error.
  • In some embodiments, the image processing device calculates the error of a first matching point pair combination under the action of the fundamental matrix as the distance from one point of the pair to the epipolar line determined by the other, based on formula (16):

    error = d(p_2, F p_1)^2    (16)

  • where a_1, b_1 and c_1 are intermediate parameters defining the epipolar line, which the image processing device calculates according to formula (17):

    (a_1, b_1, c_1)^T = F p_1    (17)

  • The image processing device then calculates the value of error according to formula (18):

    error = \frac{(a_1 u_2 + b_1 v_2 + c_1)^2}{a_1^2 + b_1^2}    (18)

  • The value of error calculated according to formula (18) represents the error of a matching point pair under the action of the fundamental matrix, and the image processing device takes the error of a set of matching point pairs under the action of the third mapping value corresponding to one fundamental matrix as one third matching error.
  • In some embodiments, the error of a first matching point pair combination under the action of the homography matrix may be defined as the reprojection error of formula (19):

    error = \| p_2 - \hat{p}_2 \|^2, \quad \hat{p}_2 = \frac{H p_1}{(H p_1)_3}    (19)

  • Expanding formula (19) with (x', y', z')^T = H p_1, the image processing device obtains formula (22):

    error = (u_2 - x'/z')^2 + (v_2 - y'/z')^2    (22)

  • The value of error calculated according to formula (22) represents the error of a matching point pair under the action of the homography matrix, and the image processing device likewise takes the error of a set of matching point pairs under the action of the third mapping value corresponding to one homography matrix as one third matching error.
  • In the same way, based on the at least one third mapping value, the image processing apparatus can determine the error of each first matching point pair combination among the at least one combination under the action of each of the at least one third mapping value, as the at least one third matching error.
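  • The two error measures reconstructed above, point-to-epipolar-line distance for the fundamental matrix (formulas (16) to (18)) and reprojection error for the homography (formulas (19) to (22)), translate directly to code:

```python
import numpy as np

def fundamental_error(F, p1, p2):
    """Squared distance from p2 to the epipolar line F p1 (formula (18))."""
    a1, b1, c1 = F @ np.array([p1[0], p1[1], 1.0])
    return (a1 * p2[0] + b1 * p2[1] + c1) ** 2 / (a1 ** 2 + b1 ** 2)

def homography_error(H, p1, p2):
    """Squared reprojection error of p1 mapped by H against p2 (formula (22))."""
    x, y, z = H @ np.array([p1[0], p1[1], 1.0])
    return (p2[0] - x / z) ** 2 + (p2[1] - y / z) ** 2
```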
  • Based on the at least one third matching error, the image processing apparatus determines the first matching point pairs whose error under a sub-constraint condition is less than or equal to a preset error threshold as correct matching point pairs.
  • That is, after the image processing device determines the at least one third matching error based on the at least one third mapping value, the image processing device screens according to the preset error threshold based on the at least one third matching error, and determines the first matching point pairs corresponding to at least one sub-constraint condition whose error is less than or equal to the preset error threshold as the correct matching point pairs.
  • the image processing apparatus when the error is less than or equal to the preset error threshold, the image processing apparatus confirms that the corresponding first matching point pair is a correct matching point pair, and when the error is greater than the preset error threshold, the image processing apparatus confirms the corresponding first matching point pair.
  • the matching point pair is an error matching point pair.
• the image processing device performs this third-matching-error verification for each of the at least one sub-constraint condition, obtaining the correct matching point pairs corresponding to each sub-constraint condition.
• the image processing device then selects, for each preset initial constraint condition among the at least one preset initial constraint condition, the one sub-constraint condition containing the most correct matching point pairs, as the intermediate sub-constraint condition corresponding to that preset initial constraint condition.
• that is, after obtaining the correct matching point pairs corresponding to each sub-constraint condition, the image processing device counts, among the sub-constraint conditions under each preset initial constraint condition, the sub-constraint condition containing the most correct matching point pairs as the intermediate sub-constraint condition; performing this statistic for the at least one preset initial constraint condition yields at least one intermediate sub-constraint condition.
• for example, the image processing device counts the 100 fundamental-matrix sub-constraints under the fundamental matrix formula and obtains the one fundamental-matrix sub-constraint condition corresponding to the most correct matching point pairs, as the intermediate sub-constraint condition corresponding to the fundamental matrix formula.
• the image processing device likewise counts the 100 homography-matrix sub-constraints under the homography matrix formula and obtains the one homography-matrix sub-constraint condition corresponding to the most correct matching point pairs, as the intermediate sub-constraint condition corresponding to the homography matrix formula.
• the image processing device uses the intermediate sub-constraint condition corresponding to the fundamental matrix formula and the intermediate sub-constraint condition corresponding to the homography matrix formula as the at least one intermediate sub-constraint condition.
• finally, the image processing device selects, from the at least one intermediate sub-constraint condition, the intermediate sub-constraint condition containing the most correct matching point pairs, as the constraint condition.
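• Putting the two error measures together, this model selection amounts to keeping the candidate with the most inliers; a sketch reusing the error helpers above (candidate fitting itself is omitted, and all names are illustrative):

```python
def count_inliers(kind, model, pairs, err_threshold):
    """Correct matching point pairs = pairs whose error stays within the threshold."""
    err = epipolar_error if kind == "F" else homography_error
    return sum(err(model, x1, x2) <= err_threshold for x1, x2 in pairs)

def select_constraint(candidates, pairs, err_threshold):
    """candidates: (kind, model) tuples, e.g. 100 'F' models plus 100 'H' models."""
    return max(candidates, key=lambda c: count_inliers(c[0], c[1], pairs, err_threshold))
```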
• the image processing device matches the current image feature points with the reference feature points based on the second preset matching threshold to obtain intermediate matching point pairs.
• that is, after the image processing device selects the constraint condition, it matches the current image feature points with the reference feature points again under the enlarged second preset matching threshold; because the threshold is enlarged, the image processing device obtains more matching point pairs, and takes the matching point pairs obtained by this secondary matching as the intermediate matching point pairs.
  • the image processing device selects a second matching point pair that meets the constraint condition from the intermediate matching point pairs.
• after matching the current image feature points with the reference feature points based on the second preset matching threshold and obtaining the intermediate matching point pairs, the image processing device selects from them, according to the constraint condition, the second matching point pairs that satisfy the constraint condition.
• the image processing device selecting the second matching point pairs that satisfy the constraint condition from the intermediate matching point pairs may specifically include:
• the image processing device determines a first mapping value based on the three-dimensional coordinates of each first feature point in the intermediate matching point pairs and the constraint condition; determines a first matching error based on the first mapping value; and determines, based on the first matching error, the second matching point pairs that are less than or equal to the preset error threshold.
• after matching the current image feature points with the reference feature points based on the second preset matching threshold and obtaining the intermediate matching point pairs, the image processing device determines the first mapping value based on the three-dimensional coordinates of each first feature point in the intermediate matching point pairs and the constraint condition.
• specifically, the image processing device takes the current image feature point and the reference feature point in an intermediate matching point pair as first feature points, and uses the three-dimensional coordinates of the first feature points and the constraint condition finally obtained in step S211 to calculate the calculation result of the constraint condition, as the first mapping value.
• for example, when the fundamental matrix calculation formula is obtained as the constraint condition, the image processing device uses the three-dimensional coordinates of each first feature point in the intermediate matching point pairs and calculates, according to the method in step S2062, the fundamental-matrix calculation result corresponding to the fundamental matrix calculation formula; the image processing device takes this fundamental-matrix calculation result as the first mapping value.
• after the image processing device determines the first mapping value based on the three-dimensional coordinates of each first feature point in the intermediate matching point pairs and the constraint condition, the image processing device determines the first matching error based on the first mapping value.
• the principle of determining the first matching error is the same as in step S208.
• the image processing device determines, based on the first matching error, the second matching point pairs that are less than or equal to the preset error threshold.
• the principle of determining the second matching point pairs that are less than or equal to the preset error threshold is the same as in step S209.
  • the image processing device selects the second matching point pair that meets the constraint conditions from the intermediate matching point pairs.
• the image processing device screens out, from the intermediate matching point pairs, the sub-matching point pairs other than the first matching point pairs; determines a second mapping value based on the three-dimensional coordinates of each second feature point in the sub-matching point pairs and the constraint condition; determines a second matching error based on the second mapping value; determines, based on the second matching error, the final sub-matching point pairs that are less than or equal to the preset error threshold; and takes the first matching point pairs and the final sub-matching point pairs as the second matching point pairs.
• after matching the current image feature points with the reference feature points based on the second preset matching threshold and obtaining the intermediate matching point pairs, the image processing device screens out, from the intermediate matching point pairs, the sub-matching point pairs other than the first matching point pairs, that is, the newly added matching point pairs produced by the secondary matching.
  • the image processing device only processes the sub-matching point pairs, which can reduce the amount of calculation.
• the principle by which the image processing apparatus determines the second mapping value based on the three-dimensional coordinates of each second feature point in the sub-matching point pairs and the constraint condition is the same as in step S2062.
• the principle by which the image processing apparatus determines the second matching error based on the second mapping value is the same as in step S208.
• the image processing device determines, based on the second matching error, the final sub-matching point pairs that are less than or equal to the preset error threshold.
• the principle by which the image processing apparatus determines the final sub-matching point pairs that are less than or equal to the preset error threshold based on the second matching error is the same as in step S209.
• after the image processing device determines the final sub-matching point pairs that are less than or equal to the preset error threshold based on the second matching error, it takes the first matching point pairs and the final sub-matching point pairs together as the second matching point pairs.
• in this way, the image processing device uses the constraint condition calculated from the first matching to verify the matching point pairs obtained by the second, enlarged-threshold matching, which ensures the accuracy of the matching point pairs, solves the problem of too few matching point pairs and too many wrong matching point pairs, and yields more, and more accurate, matching point pairs.
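• The overall match, fit, re-match and verify loop could be sketched as follows; match, fit_constraint and model_error stand in for the matching, model-fitting and error routines above and are hypothetical names:

```python
def two_stage_match(curr_feats, ref_feats, t1, t2, err_threshold):
    """First match strictly, fit a model, then re-match loosely and verify."""
    first_pairs = match(curr_feats, ref_feats, t1)             # strict threshold
    model = fit_constraint(first_pairs)                         # F or H, chosen by inlier count
    middle_pairs = match(curr_feats, ref_feats, t2)             # enlarged threshold
    new_pairs = [p for p in middle_pairs if p not in first_pairs]
    kept = [p for p in new_pairs if model_error(model, p) <= err_threshold]
    return first_pairs + kept                                   # second matching point pairs
```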
• after the image processing device filters out the second matching point pairs that meet the constraint condition from the intermediate matching point pairs, when the image processing function is the relocation function, the image processing device recovers at least one piece of relative pose information according to the second matching point pairs and the preset correspondence between the constraint condition and the camera pose.
• the image processing device recovering the at least one piece of relative pose information according to the second matching point pairs may specifically include steps S2141 to S2142, as follows:
• the image processing device calculates at least one translation amount and at least one rotation amount between the first shooting coordinate system and the second shooting coordinate system, according to the correspondence between the preset constraint condition and the camera pose and the second matching point pairs.
• when the constraint condition is the fundamental matrix, the correspondence between the preset constraint condition and the camera pose is the correspondence between the essential matrix E corresponding to the fundamental matrix and the relative pose of the camera, in the standard form E = t^R, where E is the essential matrix, t^ represents the antisymmetric (skew-symmetric) matrix of the translation t between the first shooting coordinate system and the second shooting coordinate system, and R represents the rotation between the first shooting coordinate system and the second shooting coordinate system.
• the image processing device can calculate the value of the essential matrix E through the second matching point pairs, and apply singular value decomposition (SVD) to the value of the essential matrix E to obtain the values of t and R, and thus the camera relative pose. The SVD is shown in formula (24): E = UΣVᵀ, where U is an orthogonal matrix, Vᵀ is the transpose of an orthogonal matrix V, and Σ is the singular value matrix; the decomposition yields two candidate translations t₁^ and t₂^ and two candidate rotations R₁ and R₂.
• the image processing device uses t₁^ and t₂^ as the at least one translation amount, and R₁ and R₂ as the at least one rotation amount.
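• A standard numpy sketch of this SVD-based decomposition; the W convention follows Hartley and Zisserman's textbook formulation rather than anything stated in the patent:

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into two rotations and a translation (up to sign)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:                # keep proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]                             # translation direction, scale unknown
    return R1, R2, t, -t
```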
• when the constraint condition is the homography matrix, the correspondence between the preset constraint condition and the camera pose is given by formula (25), where H is the homography matrix, whose value the image processing device can calculate through the second matching point pairs, and A is an intermediate matrix obtained according to the value of the homography matrix H.
• here the plane in space satisfies ax + by + cz = d, where a, b and c are the components of the known plane normal vector, x, y and z are the three-dimensional coordinates of a point, and d represents the up-and-down translation between the plane and the zero plane; a group of parallel planes can be determined according to a, b and c, and d locates the unique plane within that group.
• the image processing device obtains the unit normal vector of the plane, in the standard form n = (a, b, c)ᵀ/√(a² + b² + c²).
• a three-dimensional coordinate point X on the plane satisfies formula (28), in the standard form nᵀX = d; X can be the three-dimensional coordinate of a current image feature point or of a reference feature point.
• the image processing device multiplies formula (28) into the translation term t of formula (29), and can thereby obtain formula (30), in the standard form X₂ = (R + tnᵀ/d)X₁, where X₁ and X₂ are the three-dimensional coordinates of the same point in the first shooting coordinate system and the second shooting coordinate system, respectively.
• between the coordinates x₁ and x₂ of the same point in the current-frame image coordinate system and the reference-frame image coordinate system, the three-dimensional coordinates satisfy formula (31), which involves a scale factor (written λ below): λ is the ratio of the depths of the three-dimensional coordinate point X in the first shooting coordinate system and the second shooting coordinate system, and is equivalent to the non-zero factor s in step S2062.
• the coordinates of a three-dimensional coordinate point in the first shooting coordinate system may be expressed as formula (32), and the coordinates of the same three-dimensional coordinate point in the second shooting coordinate system may be expressed as formula (33); X₁ and X₂ are the coordinates of the same three-dimensional coordinate point in these two coordinate systems.
• from formulas (32) and (33), the image processing device can obtain the value of λ (formula (34)).
• the image processing device substitutes the value of λ from formula (34) into formula (31) and multiplies both sides of the equation by the camera intrinsic parameter matrix K; the equation still holds, and the image processing device obtains formula (35).
• the image processing device multiplies λ into formula (30) to obtain formula (36).
• the image processing device performs singular value decomposition of A according to formula (37).
• the image processing device can thereby obtain formula (38), where I is the identity matrix.
• the image processing device brings U and V into the singular value decomposition (38) and rearranges terms, obtaining the next relation, where det stands for the determinant; since U and V are unit orthogonal matrices, the determinant is 1 (here s is different from the non-zero factor s in step S2062).
• from these relations the image processing device can solve for the values of R′ and t′.
• since n is a unit normal vector and V is a unit orthogonal matrix (which can also be regarded as a rotation matrix), Vn can be regarded as transferring each component of n onto the three orthonormal basis vectors of V, so n′ is also a unit normal vector; the image processing device therefore obtains the following equations.
• the image processing device eliminates t′ from these equations: one elimination method is to multiply both sides of the first equation by x₂ and both sides of the second equation by x₁ at the same time, and then subtract the two equations side by side.
• the elimination yields e₂ = R′e₂, that is, R′ is a rotation matrix rotating around e₂.
• the image processing device can therefore write R′ as a rotation by an angle θ around e₂.
• based on d₁ ≠ d₃ and the relations above, the image processing device obtains cos θ and sin θ, and thus R′.
• R′ represents the rotation amount between the first shooting coordinate system and the second shooting coordinate system restored from the homography matrix, and t′ represents the corresponding translation amount restored from the homography matrix.
• the image processing device uses t′ as the at least one translation amount and R′ as the at least one rotation amount.
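• For reference, OpenCV ships an equivalent homography decomposition; a sketch, assuming an illustrative intrinsic matrix K and a homography H already estimated from the matched pairs:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],        # illustrative camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
H = np.eye(3)                              # homography estimated from matched pairs

# Returns up to four (R, t, n) candidate solutions for the relative pose
num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
```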
• the image processing device correspondingly combines the at least one translation amount and the at least one rotation amount to obtain the at least one piece of relative pose information.
• that is, after calculating the at least one translation amount and the at least one rotation amount between the first shooting coordinate system and the second shooting coordinate system according to the correspondence between the preset constraint condition and the camera pose and the second matching point pairs, the image processing device combines the translation amounts and rotation amounts in one-to-one correspondence, and takes each combination of a translation amount and a rotation amount as one piece of relative pose information.
• when the constraint condition is the fundamental matrix, according to the different values of t and R obtained in S2141, the image processing device combines the different t and R respectively and can obtain 4 different camera relative poses, each containing one value of t and one R (see the sketch below).
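• Continuing the decomposition sketch above, the four fundamental-matrix pose candidates can be enumerated as follows (illustrative code):

```python
# Four candidate relative poses: {R1, R2} x {t, -t}
R1, R2, t_pos, t_neg = decompose_essential(E)
pose_candidates = [(R, t) for R in (R1, R2) for t in (t_pos, t_neg)]
```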
• when the constraint condition is the homography matrix, according to the different values of t′ and R′ obtained in S2141 and the different singular values, the image processing device can obtain 8 possible situations, as shown in Table 2.
• for each piece of relative pose information, the image processing device calculates, from the second matching point pairs, the first image position coordinates of the current image feature points in the first shooting coordinate system and the second image position coordinates in the second shooting coordinate system; the first shooting coordinate system is the shooting coordinate system corresponding to the current image feature points, and the second shooting coordinate system is the shooting coordinate system corresponding to the reference feature points.
• that is, for each piece of relative pose information, the image processing device calculates, from the second matching point pairs, the first image position coordinates of the current image feature points in the shooting coordinate system corresponding to the current image feature points, and the second image position coordinates in the shooting coordinate system corresponding to the reference feature points.
• specifically, the image processing device calculates the position coordinates of each current image sub-feature point in the second matching point pairs in the current-image-frame shooting coordinate system, and then calculates the position coordinates of the same current image sub-feature point in the reference-frame shooting coordinate system, until all current image feature points in the second matching point pairs have been calculated; the image processing device takes the position coordinates of all current image feature points in the current-image-frame shooting coordinate system as the first image position coordinates, and the position coordinates of all current image feature points in the reference shooting coordinate system as the second image position coordinates.
• the method by which the image processing device calculates the position coordinates of each current image sub-feature point in the current-image-frame shooting coordinate system and in the reference-frame shooting coordinate system is as follows: according to each camera relative pose, the image processing device converts the coordinates of the current image sub-feature point in the image pixel coordinate system into the three-dimensional coordinates of the current image sub-feature point in space, in the coordinate systems whose origins are, respectively, the camera corresponding to the current frame image and the camera corresponding to the reference frame.
• for example, the first image position coordinates are (x, y, z), where x represents the coordinate of the current image sub-feature point in the horizontal direction of the imaging interface of the current-image-frame shooting camera, y represents its coordinate in the vertical direction of that imaging interface, and z represents the imaging focal length of the current-image-frame shooting camera, that is, the depth value.
• the image processing device calculates, from the second matching point pairs, the first reference position coordinates of the reference feature points in the first shooting coordinate system, and the second reference position coordinates in the second shooting coordinate system.
• that is, after calculating the first image position coordinates and the second image position coordinates for each piece of relative pose information, the image processing device calculates, from the second matching point pairs for each piece of relative pose information, the first reference position coordinates of the reference feature points in the shooting coordinate system corresponding to the current image feature points, and the second reference position coordinates in the shooting coordinate system corresponding to the reference feature points.
• specifically, the image processing device calculates the position coordinates of each reference sub-feature point in the second matching point pairs in the current-image-frame shooting coordinate system, and then the position coordinates of the same reference sub-feature point in the reference-frame shooting coordinate system, until all reference feature points in the second matching point pairs have been calculated; the image processing device takes the position coordinates of the reference feature points in the current-image-frame shooting coordinate system as the first reference position coordinates, and their position coordinates in the reference shooting coordinate system as the second reference position coordinates.
• the method by which the image processing device calculates the position coordinates of each reference sub-feature point in the current-image-frame shooting coordinate system and in the reference-frame shooting coordinate system is analogous: according to each camera relative pose, the image processing device converts the coordinates of the reference sub-feature point in the image pixel coordinate system into the three-dimensional coordinates of the reference sub-feature point in space, in the coordinate systems whose origins are, respectively, the camera corresponding to the current frame image and the camera corresponding to the reference frame. For example, the first reference position coordinates are (x, y, z), where x represents the coordinate of the reference sub-feature point in the horizontal direction of the imaging interface of the current-image-frame shooting camera, y represents its coordinate in the vertical direction of that imaging interface, and z represents the imaging focal length of the current-image-frame shooting camera, that is, the depth value.
• the image processing device separately checks each piece of relative pose information based on the first image position coordinates, the second image position coordinates, the first reference position coordinates and the second reference position coordinates, to obtain a check result corresponding to the at least one piece of relative pose information.
• that is, after calculating the first and second reference position coordinates for each piece of relative pose information, the image processing device checks, based on the four sets of position coordinates, whether the position coordinates recovered by each piece of relative pose information are correct, obtaining a check result corresponding to the at least one piece of relative pose information.
• the image processing device separately checking each piece of relative pose information based on the first image position coordinates, the second image position coordinates, the first reference position coordinates and the second reference position coordinates to obtain the check result corresponding to the at least one piece of relative pose information may specifically include steps S2171 to S2176, as follows:
• the image processing device obtains, according to the first image position coordinates and the second image position coordinates, the first image position sub-coordinates and the second image position sub-coordinates corresponding to each current image sub-feature point among the current image feature points, and the first reference position sub-coordinates and the second reference position sub-coordinates corresponding to each reference sub-feature point among the reference feature points.
• when the first image position sub-coordinates and the second image position sub-coordinates of a current image sub-feature point both meet a first preset condition, that current image sub-feature point is taken as a correct current image sub-feature point; the first preset condition is that the depth value of the three-dimensional coordinates is a preset value.
• that is, after the image processing device obtains the first and second image position sub-coordinates corresponding to each current image sub-feature point among the current image feature points, and the first and second reference position sub-coordinates corresponding to each reference sub-feature point among the reference feature points, then for each piece of relative pose information, when the first image position sub-coordinates of each current image sub-feature point meet the first preset condition and the second image position sub-coordinates of each current image sub-feature point also meet the first preset condition, the image processing device takes each such current image sub-feature point as a correct current image sub-feature point; the first preset condition is that the depth value of the three-dimensional coordinates is a preset value.
• for example, the first image position sub-coordinates are (x, y, z), where x represents the coordinate in the horizontal direction of the camera imaging interface, y the coordinate in the vertical direction, and z the depth value; the corresponding second image position sub-coordinates are (x₁, y₁, z₁). If both z and z₁ are positive, the camera relative pose is solved correctly, and the image processing device confirms that the current image sub-feature point is a correct current image sub-feature point. If either of the values z or z₁ is negative, the point lies behind the camera, that is, the corresponding camera relative pose is incorrect, and the image processing device confirms that the current image sub-feature point is a wrong current image sub-feature point.
• the image processing device determines, among the current image feature points, the correct current image sub-feature points, so as to obtain the correct current image feature points of the current image feature points.
• that is, for each piece of relative pose information, after taking each current image sub-feature point whose first and second image position sub-coordinates both meet the first preset condition as a correct current image sub-feature point, the image processing device determines, among the current image feature points, the correct current image sub-feature points, so as to obtain the correct current image feature points.
• analogously, for each piece of relative pose information, when the first reference position sub-coordinates of each reference sub-feature point meet the first preset condition and the second reference position sub-coordinates of each reference sub-feature point also meet the first preset condition, each such reference sub-feature point is taken as a correct reference sub-feature point.
• for example, the first reference position sub-coordinates are (x, y, z), where x represents the coordinate in the horizontal direction of the camera imaging interface, y the coordinate in the vertical direction, and z the depth value; the corresponding second reference position sub-coordinates are (x₁, y₁, z₁). If both z and z₁ are positive, the camera relative pose is solved correctly, and the image processing device confirms that the reference sub-feature point is a correct reference sub-feature point. If either of the values z or z₁ is negative, the point lies behind the camera, that is, the corresponding camera relative pose is incorrect, and the image processing device confirms that the reference sub-feature point is a wrong reference sub-feature point.
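• Both checks reduce to the same positive-depth (cheirality) test; a minimal sketch, with p_cam1 and p_cam2 as the recovered 3-D coordinates of one point in the two shooting coordinate systems (names illustrative):

```python
def cheirality_ok(p_cam1, p_cam2):
    """A point supports a pose candidate only if its depth (z component)
    is positive in both shooting coordinate systems."""
    return p_cam1[2] > 0 and p_cam2[2] > 0
```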
• the image processing device determines, among the reference feature points, the correct reference sub-feature points, so as to obtain the correct reference feature points of the reference feature points.
• that is, for each piece of relative pose information, after taking each reference sub-feature point whose first and second reference position sub-coordinates both meet the first preset condition as a correct reference sub-feature point, the image processing device determines, among the reference feature points, the correct reference sub-feature points, so as to obtain the correct reference feature points.
• the image processing device takes the correct current image feature points and the correct reference feature points as the check result corresponding to each piece of relative pose information, thereby obtaining the check result corresponding to the at least one piece of relative pose information.
• in other words, after calculating, for each piece of relative pose information, the first reference position coordinates in the first shooting coordinate system and the second reference position coordinates in the second shooting coordinate system from the second matching point pairs, the image processing device separately checks each piece of relative pose information based on the first image position coordinates, the second image position coordinates, the first reference position coordinates and the second reference position coordinates, to obtain the check result corresponding to the at least one piece of relative pose information.
• the image processing device determines the target pose information from the at least one piece of relative pose information based on the check results.
• that is, after obtaining the check result corresponding to the at least one piece of relative pose information, the image processing device finds, based on the check results, the piece of relative pose information that recovers the most correct position coordinates, as the target pose information.
• the image processing device determining the target pose information from the at least one piece of relative pose information based on the check results may specifically include steps S2181 to S2182, as follows:
• the image processing device counts, from the check results corresponding to the at least one piece of relative pose information, the total number of correct current image sub-feature points and correct reference sub-feature points in the check result corresponding to each piece of relative pose information, thereby obtaining the number of correct results corresponding to each piece of relative pose information.
• that is, after taking the correct current image feature points and the correct reference feature points as the check result corresponding to each piece of relative pose information, the image processing device counts, from the check results, the sum of the numbers of correct current image sub-feature points and correct reference sub-feature points for each piece of relative pose information, obtaining the number of correct results corresponding to each piece of relative pose information.
• the image processing device determines, from the numbers of correct results corresponding to each piece of relative pose information, the target pose information with the largest number of correct results.
• that is, after obtaining the number of correct results corresponding to each piece of relative pose information, the image processing device determines the piece of relative pose information with the largest number of correct results as the target pose information.
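• A sketch of this selection step, reusing cheirality_ok above; triangulate_all is a hypothetical helper that recovers the per-point 3-D coordinates under one pose candidate:

```python
def count_correct(pose, pairs):
    """Number of matched points passing the cheirality test under one candidate."""
    return sum(cheirality_ok(p1, p2) for p1, p2 in triangulate_all(pose, pairs))

# Target pose = the candidate whose recovered points are most often in front of both cameras
target_pose = max(pose_candidates, key=lambda pose: count_correct(pose, second_pairs))
```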
  • the image processing device implements a relocation function according to the target pose information.
• after the image processing device determines the target pose information with the largest number of correct results, the image processing device implements the relocation function according to the target pose information.
• specifically, the image processing device obtains the translation and rotation of the current frame image relative to the reference frame according to the finally output pose, thereby obtaining the position of the current frame image within the reference frame and realizing relocation.
• in summary, the image processing device obtains the current frame image to be processed; extracts the current image feature points of the current frame image from the current frame image; finds, based on the current image feature points, the reference frame whose feature points best match the current frame image, and obtains the reference feature points corresponding to the reference frame; matches the current image feature points with the reference feature points for the first time based on the first preset matching threshold to obtain the first matching point pairs; amplifies the first preset matching threshold to obtain the second preset matching threshold, and obtains the constraint condition based on the first matching point pairs; finally, under the constraint condition, the image processing device matches the current image feature points with the reference feature points based on the second preset matching threshold to obtain the second matching point pairs.
• in this way, image processing can first compute a small number of reliable matches, then estimate the mathematical model from these matches and use it to expand to more matches while ensuring the accuracy of the matching point pairs; this solves the problem of too few matching point pairs and too many wrong matching point pairs, increases the number and accuracy of matching point pairs, and ultimately improves the accuracy of image processing.
• further, the image processing apparatus may calculate at least one sub-constraint condition at the same time when calculating the constraint condition, and determine the constraint condition according to the number of correct matching point pairs under each sub-constraint condition.
• when the image processing function is the relocation function, the image processing device can thereby automatically select the fundamental matrix or homography matrix most suitable for the current scene to restore the relative pose, improving the accuracy of relocation.
• FIG. 3 is a first structural diagram of an image processing device provided by an embodiment of the application. As shown in FIG. 3, the image processing device 3 includes:
  • the acquiring unit 10 is configured to acquire the current frame image to be processed
  • the extracting unit 11 is configured to extract current image feature points of the current frame image from the current frame image, and obtain reference feature points corresponding to the reference frame based on the current image feature points;
  • the matching unit 12 is configured to match the current image feature point with the reference feature point based on a first preset matching threshold to obtain a first matching point pair;
  • the calculation unit 13 is configured to amplify the first preset matching threshold to obtain a second preset matching threshold, and to obtain a constraint condition based on the first matching point pair;
  • the matching unit 12 is further configured to match the current image feature point with the reference feature point based on the second preset matching threshold under the constraint condition to obtain a second matching point pair;
  • the processing unit 14 is configured to perform an image processing function based on the second matching point pair.
• the above-mentioned matching unit 12 is further configured to match the current image feature points with the reference feature points based on the second preset matching threshold to obtain intermediate matching point pairs, and to filter out, from the intermediate matching point pairs, the second matching point pairs that satisfy the constraint condition.
• the matching unit 12 is further configured to determine the first mapping value based on the three-dimensional coordinates of each first feature point in the intermediate matching point pairs and the constraint condition; to determine the first matching error based on the first mapping value; and to determine, based on the first matching error, the second matching point pairs that are less than or equal to the preset error threshold.
• the matching unit 12 is further configured to filter out, from the intermediate matching point pairs, the sub-matching point pairs other than the first matching point pairs; to determine the second mapping value based on the three-dimensional coordinates of each second feature point in the sub-matching point pairs and the constraint condition; to determine the second matching error based on the second mapping value; to determine, based on the second matching error, the final sub-matching point pairs that are less than or equal to the preset error threshold; and to take the first matching point pairs and the final sub-matching point pairs as the second matching point pairs.
• the calculation unit 13 is further configured to obtain at least one preset initial constraint condition; to calculate the at least one preset initial constraint condition using the first matching point pairs to obtain at least one sub-constraint condition; to determine at least one third mapping value based on the three-dimensional coordinates of each third feature point in the first matching point pairs and the at least one sub-constraint condition; to determine at least one third matching error based on the at least one third mapping value; to determine, based on the at least one third matching error, the first matching point pairs corresponding to the at least one sub-constraint condition that are less than or equal to the preset error threshold, as correct matching point pairs; to select, from the at least one preset initial constraint condition, the one sub-constraint condition containing the most correct matching point pairs as the at least one intermediate sub-constraint condition corresponding to each of the at least one preset initial constraint condition; and to select, from the at least one intermediate sub-constraint condition, the intermediate sub-constraint condition containing the most correct matching point pairs as the constraint condition.
• the calculation unit 13 is further configured to, for each preset initial constraint condition, select a preset number of the first matching point pairs from the first matching point pairs, repeating the selection a preset number of times to obtain first matching point pair combinations equal in number to the preset number of times, wherein each of the first matching point pair combinations includes the preset number of first matching point pairs; to calculate each preset initial constraint condition using each of the first matching point pair combinations, obtaining the preset number of sub-constraint conditions corresponding to each preset initial constraint condition as the final sub-constraint conditions of each preset initial constraint condition; and to determine, among the at least one preset initial constraint condition, the final sub-constraint conditions of each preset initial constraint condition as the at least one sub-constraint condition.
• the processing unit 14 is further configured to recover at least one piece of relative pose information according to the second matching point pairs; for each piece of relative pose information, to calculate, from the second matching point pairs, the first image position coordinates of the current image feature points in the first shooting coordinate system and the second image position coordinates in the second shooting coordinate system, the first shooting coordinate system being the shooting coordinate system corresponding to the current image feature points and the second shooting coordinate system being the shooting coordinate system corresponding to the reference feature points; for each piece of relative pose information, to calculate, from the second matching point pairs, the first reference position coordinates of the reference feature points in the first shooting coordinate system and the second reference position coordinates in the second shooting coordinate system; to separately check each piece of relative pose information based on the first image position coordinates, the second image position coordinates, the first reference position coordinates and the second reference position coordinates, obtaining the check result corresponding to the at least one piece of relative pose information; to determine, according to the check results, the target pose information from the at least one piece of relative pose information; and to implement the relocation function according to the target pose information.
• the processing unit 14 is further configured to calculate, according to the correspondence between the preset constraint condition and the camera pose and the second matching point pairs, at least one translation amount and at least one rotation amount between the first shooting coordinate system and the second shooting coordinate system, and to correspondingly combine the at least one translation amount and the at least one rotation amount to obtain the at least one piece of relative pose information.
• the processing unit 14 is further configured to obtain, according to the first image position coordinates and the second image position coordinates, the first image position sub-coordinates and the second image position sub-coordinates corresponding to each current image sub-feature point among the current image feature points, and the first reference position sub-coordinates and the second reference position sub-coordinates corresponding to each reference sub-feature point; for each piece of relative pose information, when the first image position sub-coordinates of each current image sub-feature point meet the first preset condition and the second image position sub-coordinates of each current image sub-feature point also meet the first preset condition, to take each such current image sub-feature point as a correct current image sub-feature point, the first preset condition being that the depth value of the three-dimensional coordinates is a preset value; to determine, among the current image feature points, the correct current image sub-feature points so as to obtain the correct current image feature points; for each piece of relative pose information, when the first reference position sub-coordinates and the second reference position sub-coordinates of each reference sub-feature point also meet the first preset condition, to take each such reference sub-feature point as a correct reference sub-feature point; to determine, among the reference feature points, the correct reference sub-feature points so as to obtain the correct reference feature points; and to take the correct current image feature points and the correct reference feature points as the check result corresponding to each piece of relative pose information.
• the processing unit 14 is further configured to count, from the check results corresponding to the at least one piece of relative pose information, the sum of the numbers of correct current image sub-feature points and correct reference sub-feature points in the check result corresponding to each piece of relative pose information, obtaining the number of correct results corresponding to each piece of relative pose information, and to determine, from the numbers of correct results corresponding to each piece of relative pose information, the target pose information with the largest number of correct results.
• the extracting unit 11 is further configured to extract, from the current frame image, a preset number of first pixel points representing corner boundaries; to extract features for the second pixel points within a preset range of the first pixel points to obtain first image feature points; to extract features for the third pixel points outside the preset range of the first pixel points to obtain second image feature points; and to take the first image feature points and the second image feature points as the current image feature points.
• the extracting unit 11 is further configured to match the current image feature points against a preset map feature library, wherein the preset map feature library stores at least one preset map image frame in the form of preset map feature points; to take the preset map image frame in the preset map feature library that best matches the current image feature points as the reference frame; and to take the preset map feature points included in the reference frame as the reference feature points.
• the matching unit 12 is further configured to combine each current image sub-feature point among the current image feature points with the reference feature points one by one to obtain at least one sub-feature point pair; for the at least one sub-feature point pair, to calculate the Hamming distance from each current image sub-feature point to the reference feature point, obtaining at least one Hamming distance in one-to-one correspondence with the at least one sub-feature point pair; to determine, from the at least one Hamming distance, the minimum Hamming distance and the second smallest Hamming distance; when the minimum Hamming distance is less than the preset significance threshold multiplied by the second smallest Hamming distance, to compare the minimum Hamming distance with the preset similarity threshold; and when the minimum Hamming distance is less than or equal to the preset similarity threshold, to determine, from the at least one sub-feature point pair, the sub-feature point pair corresponding to each current image sub-feature point with the minimum Hamming distance, so as to obtain the first matching point pairs.
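• A sketch of this significance-plus-similarity test on binary descriptors; the threshold values are illustrative, not taken from the patent:

```python
import numpy as np

def hamming(d1, d2):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

def first_match(curr_descs, ref_descs, significance=0.8, similarity=64):
    """Keep a pair only if the best distance is significantly smaller than the
    second best (ratio test) and also below the similarity threshold."""
    pairs = []
    for i, d in enumerate(curr_descs):
        dists = np.array([hamming(d, r) for r in ref_descs])
        best, second = np.argsort(dists)[:2]
        if dists[best] < significance * dists[second] and dists[best] <= similarity:
            pairs.append((i, int(best)))
    return pairs
```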
• FIG. 4 is a second structural diagram of an image processing device provided by an embodiment of the application. As shown in FIG. 4, the image processing device includes: a processor 20, a memory 21 and a communication bus 22.
• the above-mentioned processor 20 may be an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a CPU, a controller, a microcontroller or a microprocessor.
• the memory 21 is configured to store executable instructions; the processor 20 is configured to execute the executable instructions stored in the memory 21, so as to implement the operation steps of the acquiring unit 10, the extracting unit 11, the matching unit 12, the calculation unit 13 and the processing unit 14 in the foregoing embodiments.
  • the communication bus 22 is used to implement connection and communication between the processor 20 and the memory 21.
• the image processing device provided by the embodiments of the application obtains the current frame image to be processed; extracts the current image feature points of the current frame image from the current frame image, and uses the current image feature points to find the reference frame whose feature points best match the current frame image, obtaining the reference feature points corresponding to the reference frame; matches the current image feature points with the reference feature points for the first time based on the first preset matching threshold to obtain the first matching point pairs; amplifies the first preset matching threshold to obtain the second preset matching threshold, and obtains the constraint condition based on the first matching point pairs; finally, under the constraint condition, matches the current image feature points with the reference feature points based on the second preset matching threshold to obtain the second matching point pairs.
• in this way, image processing can first compute a small number of reliable matches, then estimate the mathematical model from these matches and use it to expand to more matches while ensuring the accuracy of the matching point pairs; this solves the problem of too few matching point pairs and too many wrong matching point pairs, increases the number and accuracy of matching point pairs, and ultimately improves the accuracy of image processing. Further, the image processing apparatus may calculate at least one sub-constraint condition at the same time as calculating the constraint condition, and determine the constraint condition according to the number of correct matching point pairs under each sub-constraint condition.
• when the image processing function is the relocation function, the image processing device can automatically select the fundamental matrix or homography matrix most suitable for the current scene to restore the relative pose, thereby improving the accuracy of relocation.
• an embodiment of the present application provides a computer-readable storage medium storing one or more programs; the one or more programs can be executed by one or more processors and applied to an image processing apparatus, and when the program is executed by the processor, the image processing method provided in the embodiments of the present application is implemented.
• the methods of the above embodiments can be implemented by means of software plus the necessary general hardware platform; of course, they can also be implemented by hardware, but in many cases the former is the better implementation.
• the technical solution of this application, in essence, or the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for making a terminal (which may be a mobile phone, a computer, a network function deployment system, an air conditioner, a network device, or the like) execute the methods described in the various embodiments of the present application.
• with the above scheme, the image processing device can first calculate a small number of reliable matches during the first matching and then estimate the mathematical model from them, so that in the second matching the enlarged Hamming distance threshold and saliency threshold can be used, together with the estimated mathematical model, to screen out more matching point pairs while ensuring their accuracy; this solves the problem of too few matching point pairs and too many incorrect matching point pairs, improves the number and accuracy of matching point pairs, and ultimately improves the accuracy of image processing.

Abstract

An image processing method and device, and a computer-readable storage medium, capable of improving the accuracy of image processing. The method may include: acquiring a current frame image to be processed (S101); extracting current image feature points of the current frame image from the current frame image, and acquiring reference feature points corresponding to a reference frame based on the current image feature points (S102); matching the current image feature points with the reference feature points based on a first preset matching threshold to obtain first matching point pairs (S103); amplifying the first preset matching threshold to obtain a second preset matching threshold, and obtaining a constraint condition based on the first matching point pairs (S104); under the constraint condition, matching the current image feature points with the reference feature points based on the second preset matching threshold to obtain second matching point pairs (S105); and performing an image processing function based on the second matching point pairs (S106).

Description

Image processing method and device, and computer-readable storage medium

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is filed on the basis of, and claims priority to, Chinese patent application No. 201910570613.1, filed on June 27, 2019, the entire contents of which are incorporated herein by reference.
SUMMARY

The embodiments of the application provide an image processing method capable of improving the accuracy of image processing and, when the image processing function is the relocation function, ultimately improving the accuracy of relocation.

The technical solutions of the application are implemented as follows:

An embodiment of the application discloses an image processing method, including:

acquiring a current frame image to be processed;

extracting current image feature points of the current frame image from the current frame image, and acquiring reference feature points corresponding to a reference frame based on the current image feature points;

matching the current image feature points with the reference feature points based on a first preset matching threshold to obtain first matching point pairs;

amplifying the first preset matching threshold to obtain a second preset matching threshold, and obtaining a constraint condition based on the first matching point pairs;

under the constraint condition, matching the current image feature points with the reference feature points based on the second preset matching threshold to obtain second matching point pairs;

performing an image processing function based on the second matching point pairs.

An embodiment of the application provides an image processing device, including:

an acquiring unit, configured to acquire a current frame image to be processed;

an extracting unit, configured to extract current image feature points of the current frame image from the current frame image, and to acquire reference feature points corresponding to a reference frame based on the current image feature points;

a matching unit, configured to match the current image feature points with the reference feature points based on a first preset matching threshold to obtain first matching point pairs;

a calculation unit, configured to amplify the first preset matching threshold to obtain a second preset matching threshold, and to obtain a constraint condition based on the first matching point pairs;

the matching unit being further configured to, under the constraint condition, match the current image feature points with the reference feature points based on the second preset matching threshold to obtain second matching point pairs;

a processing unit, configured to perform an image processing function based on the second matching point pairs.

An embodiment of the application provides an image processing device, including:

a memory, configured to store executable instructions;

a processor, configured to implement the image processing method provided by the embodiments of the application when executing the executable instructions stored in the memory.

An embodiment of the application provides a computer-readable storage medium having executable instructions stored thereon, applied to an image processing device; when the executable instructions are executed by a processor, the image processing method provided by the embodiments of the application is implemented.

The embodiments of the application disclose an image processing method and device, and a computer-readable storage medium. The method includes: the image processing device acquires a current frame image to be processed; extracts current image feature points of the current frame image from the current frame image, and acquires reference feature points corresponding to a reference frame based on the current image feature points; matches the current image feature points with the reference feature points based on a first preset matching threshold to obtain first matching point pairs; amplifies the first preset matching threshold to obtain a second preset matching threshold, and obtains a constraint condition based on the first matching point pairs; under the constraint condition, matches the current image feature points with the reference feature points based on the second preset matching threshold to obtain second matching point pairs; and performs an image processing function based on the second matching point pairs. With this scheme, when performing image matching, the image processing device first calculates the constraint condition from the first matching point pairs obtained in the first matching, then enlarges the first preset matching threshold and matches once more under the constraint condition, thereby expanding to more, and more accurate, second matching point pairs; finally, the image processing device performs image processing according to the second matching point pairs, which increases the number of matching point pairs and the matching accuracy and ultimately improves the accuracy of image processing. When the image processing function is the relocation function, the accuracy of relocation can be improved.
附图说明
图1为本申请实施例提供的一种图像处理方法流程图一;
图2为本申请实施例提供的一种图像处理方法流程图二;
图3为本申请实施例提供的一种图像处理装置的结构示意图一;
图4为本申请实施例提供的一种图像处理装置的结构示意图二。
具体实施方式
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动的前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种图像处理方法,如图1所示,该方法可以包括:
S101、获取待处理的当前帧图像。
本申请实施例提供的一种图像处理方法适用于SLAM系统的初始化和重定位场景,如增强现实技术(AR,Augmented Reality)重定位、自主移动机器人重定位、无人驾驶重定位等场景,也适用于图像的非刚体形变的识别和匹配,如非刚体形变后的物体与原始状态的物体的之间图像匹配等场景。
本申请实施例中,图像处理装置在进行图像处理时,首先要获取待处理的当前帧图像。
示例性的,当图像处理功能为重定位功能时,图像处理装置调用拍摄功能对当前所处位置的场景进行拍照,获得待处理的当前帧图像。
S102、从当前帧图像中提取当前帧图像的当前图像特征点,并基于当前图像特征点获取参考帧对应的参考特征点。
本申请实施例中,图像处理装置在获取到待处理的当前帧图像之后,图像处理装置先从当前帧图像中提取特征点,作为当前图像特征点,然后基于当前图像特征点,从参考图像帧中找到与当前图像特征点最匹配的参考帧,最后从参考帧中提取特征点作为参考特征点。
本申请实施例中,图像处理装置从获取的当前帧图像中,提取当前帧图像中能够有效反映图像本质特征及能够标识图像中物体的点,作为当前图像特征点。
本申请实施例中,图像处理装置从当前帧图像中提取当前图像特征点可使用FAST(Features from accelerated segment test)算法,也可以使用其他算法,具体的根据实际情况进行选择,本申请实施例不做具体的限定。
本申请实施例中,图像处理装置得到当前图像特征点之后,根据当前图像特征点,与参考图像帧进行匹配,找到与当前图像特征点最匹配的一帧参考图像帧,作为参考帧;图像处理装置从参考帧中获取特征点,作为参考特征点。
本申请实施例中,图像处理装置根据当前图像特征点,在参考图像帧中进行匹配时,可以根据当前图像特征点中的图像描述特征点计算出当前帧图像的词袋模型,再使用当前帧图像的词袋模型进行匹配,图像处理装置也可以采用其他的算法进行匹配,例如积聚算法(VLAD,vector of locally aggregated descriptors)、遍历法等,具体的根据实际情况进行选择,本申请实施例不做限定。
示例性的,当图像处理功能为重定位功能时,图像处理装置根据当前图像特征点中的图像描述特征点,计算出当前帧图像的词袋模型特征,根据当前帧图像的词袋模型特征,在预设地图特征库中找到和当前帧图像的词袋模型特征最匹配的一帧,将找到的这一帧预设场景地图作为重定位的参考帧。其中,预设地图特征库是在地图构建时,由图像处理装置从至少一个预设地图图像帧中提取出的特征点构成,每一个预设地图图像帧以预设地图特征点的形式存放在预设地图特征库中。
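As a rough illustration of this retrieval step, the sketch below compares bag-of-words histograms by cosine similarity. It is a minimal sketch, assuming the visual vocabulary and per-frame histograms already exist offline (e.g. from k-means over descriptors); `most_similar_frame`, `query_hist`, and `map_hists` are hypothetical names, not from the patent:

```python
import numpy as np

# Toy bag-of-words retrieval: return the index of the preset map frame whose
# visual-word histogram is most similar to the current frame's histogram.
def most_similar_frame(query_hist, map_hists):
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(range(len(map_hists)), key=lambda i: cosine(query_hist, map_hists[i]))
```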
S103、基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第一匹配点对。
本申请实施例中,当图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点,并基于当前图像特征点获取参考帧对应的参考特征点之后,图像处理装置将当前帧图像特征点和参考帧特征点进行匹配,将满足第一预设匹配阈值的当前帧图像特征点和参考帧特征点作为第一匹配点对。其中,每个第一匹配点对包含一个当前帧图像特征点和一个参考帧特征点,分别代表了同一个物体中的像素点在当前帧图像坐标系和参考帧图像坐标系中的位置坐标。
本申请实施例中,图像处理装置将当前图像特征点和参考特征点进行匹配时,可以使用暴力匹配法。也可以使用其他匹配算法,具体的根据实际情况进行选择,本申请实施例不做限定。
本申请实施例中,图像处理装置把当前图像特征点与参考特征点进行匹配,判断当前图像特征点与参考特征点的相似程度和显著程度是否满足第一预设匹配阈值,如果满足第一预设匹配阈值,说明当前图像特征点与参考特征点可能是同一个物体中同样部位的像素点,分别出现在当前帧图像和参考帧中的不同位置。因此图像处理装置将当前图像特征点与参考特征点确定为第一匹配点对,其中,第一匹配点对中包含了至少一对当前图像特征点与参考特征点的匹配点对。
S104、将第一预设匹配阈值放大,得到第二预设匹配阈值,以及基于第一匹配点对,得到约束条件。
本申请实施例中,图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,在得到第一匹配点对之后,图像处理装置会放大第一预设匹配阈值,得到第二预设匹配阈值,并且,图像处理装置会根据第一匹配点对,计算出第一匹配点对满足的数学模型,作为约束条件。
本申请实施例中,由于根据第一预设匹配阈值得到的第一匹配点对的数量一般较少,且经常存在错误匹配点对,如果直接根据第一匹配点对计算数学模型来进行图像处理,会极大影响计算结果的精确性,从而影响图像处理的准确度。因此,图像处理装置将第一预设匹配阈值放大,得到第二预设匹配阈值,这样使用放宽后的阈值条件,来从当前图像特征点和参考特征点中得到更多的匹配点对。进一步的,为了保证匹配点对的准确率,图像处理装置会根据第一匹配点对,计算出第一匹配点对满足的数学模型,作为约束条件,来对放宽阈值条件后匹配上的匹配点对进行进一步的筛选。
本申请实施例中,当图像处理功能为重定位功能时,约束关系可以是三维场景中的同一个三维点在不同视角下的像点存在着的约束关系的代数表示,比如通过第一匹配点对可以计算出第一匹配点对对应的基础矩阵公式,作为约束条件;约束关系还可以是一个射影平面上的点映射到另一个射影平面上时像点之间的约束关系,比如通过第一匹配点对可以计算出第一匹配点对对应的单应矩阵公式,作为约束条件。
本申请实施例中,当图像处理功能为图像匹配、图像识别等功能时,约束关系还可以是其他形式的几何约束或数学模型,具体的根据实际情况进行选择,本申请实施例不做限定。
S105、在约束条件下,基于第二预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第二匹配点对。
本申请实施例中，图像处理装置将第一预设匹配阈值放大，得到第二预设匹配阈值，以及基于第一匹配点对，得到约束条件之后，图像处理装置会在第二预设阈值下，对当前图像特征点和参考特征点再进行一次匹配，由于第二预设匹配阈值比第一预设匹配阈值的筛选范围更大，因此图像处理装置会得到比第一次匹配更多的匹配点对；然后，图像处理装置使用约束条件来验证二次匹配得到的全部匹配点对，将满足约束条件的匹配点对作为正确匹配点对，将不满足约束条件的匹配点对作为错误匹配点对，最后将全部正确匹配点对作为第二匹配点对。
本申请实施例中,由于二次匹配出的匹配点对中通常也包含第一匹配点对,因此图像处理装置也可以只验证二次匹配中的新增匹配点对是否满足约束条件,然后将满足约束条件的新增匹配点对和第一匹配点对作为第二匹配点对。
本申请实施例中,由于第二预设匹配阈值大于第一预设匹配阈值,因此图像处理装置使用第二预设匹配阈值可以匹配出更多的匹配点对,由于图像处理装置又使用了根据第一匹配点对计算出的约束条件,对根据第二预设匹配阈值得到的匹配点对进行了验证,从而保证了满足约束条件的二次匹配点对的准确匹配率不低于第一匹配点对,因此,图像处理装置可以得到更多更准确的第二匹配点对。
示例性的,图像处理装置在进行一次暴力匹配,得到了N对匹配点对的第一匹配点对之后,图像处理装置还可以基于第二预设匹配阈值对当前图像特征点和参考特征点再进行一次暴力匹配,得到N+M对匹配点对,然后图像处理装置可以只对M对匹配点对进行计算,确定出M对匹配点对中满足约束条件的M1对匹配点对,最后,图像处理装置将N+M1对匹配点对作为第二匹配点对。
本申请实施例中，为了验证本申请实施例的匹配结果的准确性，图像处理装置可以计算暴力匹配和本申请实施例分别对应的匹配结果的准确率和召回率。其中，准确率是所有匹配到的点对中，正确匹配点对所占的百分比；召回率是所有真实存在的对应点对中，被正确匹配出的点对所占的百分比。准确率越高，说明结果越准；召回率越高，说明找到的正确匹配越全。
示例性的,图像处理装置选取相同的10组图片,在相同参数下计算出暴力匹配的匹配结果和本申请实施例的匹配结果的准确率和召回率,结果如表1所示:
表1
表1原文为图像（Figure PCTCN2020096549-appb-000001），内容为10组图片在相同参数下，暴力匹配与本申请实施例两种方法的匹配结果的准确率与召回率对比，具体数值无法从文本中恢复。
由表1可知,本申请实施例的准确率和召回率都高于暴力匹配,因此,本申请实施例可以解决匹配点对过少,错误匹配点对较多的问题,从而提高匹配点对的数量和准确率,最终提高图像处理的准确度。
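A small sketch of how the two scores in Table 1 can be computed, assuming ground-truth correspondences are available for the test image pairs; all names here are illustrative:

```python
# matches and gt_matches are sets of (query_index, reference_index) pairs.
def precision_recall(matches, gt_matches):
    correct = matches & gt_matches
    precision = len(correct) / len(matches) if matches else 0.0      # correct / all returned pairs
    recall = len(correct) / len(gt_matches) if gt_matches else 0.0   # correct / all true pairs
    return precision, recall
```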
S106、基于第二匹配点对,进行图像处理功能。
本申请实施例中,图像处理装置得到第二匹配点对之后,图像处理装置可以基于第二匹配点对,进行相应的图像处理功能。当图像处理功能为重定位功能时,图像处理装置可以使用第二匹配点对,恢复出当前图像帧和参考帧之间的相对位姿,最后图像处理装置通过相对位姿实现重定位功能。
本申请实施例中,当图像处理功能为图像的非刚体形变的识别和匹配时,图像处理装置可以使用第二匹配点对,计算出非刚体形变物体与原始状态物体之间的图像匹配度,以识别出非刚体形变物体与原始状态物体之间的对应关系。
可以理解的是,图像处理装置可以通过先计算少量可靠匹配,再通过这些匹配估计数学模型,从而拓展出更多的匹配,同时保证了匹配点对的准确率,解决了匹配点对数量过少,存在较多错误匹配点对的问题,提高了匹配点对的数量和准确率,最终提高了图像处理的准确性。
在本申请的一些实施例中,本申请实施例还提供一种图像处理方法,如图2所示,该方法可以包括:
S201、图像处理装置获取待处理的当前帧图像。
S202、图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点,并基于当前图像特征点获取参考帧对应的参考特征点。
本申请实施例中,图像处理装置获取待处理的当前帧图像之后,图像处理装置先从当前帧图像中提取特征点,作为当前图像特征点,然后基于当前图像特征点,从参考图像帧中找到与当前图像特征点最匹配的一帧,作为参考帧,最后从参考帧中提取特征点作为参考特征点。
本申请实施例中,图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点具体可以包括:S2021至S2024。如下:
S2021、从当前帧图像中提取预设角点数量的表征角边界的第一像素点。
本申请实施例中,图像处理装置在获取待处理的当前帧图像之后,图像处理装置会从当前帧图像中提取角点作为第一像素点。
本申请实施例中,图像处理装置对当前帧图像中如桌角、物体的角等物体的像素点,即角点进行提取,因为图像中的角、边缘等处的像素点的辨识度高,在图像识别中更具有代表性。在本申请的一些实施例中,图像处理装置会提取150个角点。
S2022、图像处理装置对第一像素点的预设范围内的第二像素点提取特征,得到第一图像特征点。
本申请实施例中,图像处理装置从当前帧图像中提取预设角点数量的表征角边界的第一像素点之后,图像处理装置对第一像素点的预设范围内的第二像素点提取特征,得到第一图像特征点。
本申请实施例中，图像处理装置在第一像素点预设范围内的像素点中，提取特征，作为第一图像特征点，第一图像特征点用于对第一像素点进行描述。
本申请实施例中,特征点可以是二进制鲁棒特征(BRIEF,Binary robust independent elementary feature),也可以是其他数据形式,具体的根据实际情况进行选择,本申请实施例不做具体的限定。
示例性的,图像处理装置在角点周围31*31范围内的像素点中提取二进制鲁棒特征BRIEF,作为第一图像特征点。
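The corner-plus-descriptor step above corresponds closely to an ORB-style pipeline (FAST corners described by a 256-bit BRIEF descriptor over a 31×31 patch). A minimal sketch using OpenCV; the file name and feature count are illustrative assumptions:

```python
import cv2

orb = cv2.ORB_create(nfeatures=150, patchSize=31)  # ~150 corners, 31x31 BRIEF patch
img = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)
keypoints, descriptors = orb.detectAndCompute(img, None)
# descriptors: N x 32 uint8 array, i.e. one 256-bit binary descriptor per corner
```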
S2023、图像处理装置对第一像素点的预设范围外的第三像素点提取特征,得到第二图像特征点。
本申请实施例中,图像处理装置对第一像素点的预设范围内的第二像素点提取特征,得到第一图像特征点之后,图像处理装置对第一像素点的预设范围外的第三像素点提取特征,得到第二图像特征点。
本申请实施例中,图像处理装置还会在当前图像帧中额外提取更多的特征点,用以更好的表征当前图像帧。
示例性的,图像处理装置对纹理复杂的场景提取更多的特征点,而对一般的弱纹理场景,如白墙等场景提取较少的特征点,作为第二图像特征点。在本申请的一些实施例中,图像处理装置会提取500到1500个第二图像特征点。
S2024、图像处理装置将第一图像特征点和第二图像特征点作为当前图像特征点。
本申请实施例中,图像处理装置在对第一像素点的预设范围外的第三像素点提取特征,得到第二图像特征点之后,图像处理装置将第一图像特征点和第二图像特征点作为当前图像特征点。
本申请实施例中,当前图像特征点包含了第一特征点和第二特征点。
本申请实施例中,图像处理装置基于当前图像特征点获取参考帧对应的参考特征点具体可以包括:S2025至S2026。如下:
S2025、图像处理装置将当前图像特征点与预设地图特征库进行匹配,其中,预设地图特征库中存储有至少一帧预设地图图像帧的预设地图特征点。
本申请实施例中,图像处理装置在将第一图像特征点和第二图像特征点作为当前图像特征点之后,图像处理装置根据当前图像特征点在预设地图特征库寻找最匹配的一个预设地图图像帧,其中,预设地图特征库中存储有至少一帧预设地图图像帧中的预设地图特征点。
本申请实施例中,图像处理装置根据当前图像特征点,与预设地图特征库进行匹配时,可以采用当前帧图像的词袋模型特征匹配,也可以采用其他的算法,例如积聚算法VLAD(vector of locally aggregated descriptors)、遍历法等,具体的根据实际情况进行选择,本申请实施例不做限定。
本申请实施例中,当图像处理装置使用当前帧图像的词袋模型与预设地图特征库进行匹配时,首先,图像处理装置根据当前图像特征点中的第一图像特征点,计算出当前帧图像的词袋模型。其中,词袋模型特征是将当前帧图像的第一图像特征点用一个装有特定词的“袋子”表示的一种表达模型;之后,图像处理装置根据当前帧图像的词袋模型,与预设地图特征库中的每一帧预设地图图像帧包含的预设地图特征点进行对比。
本申请实施例中,图像处理装置在将当前图像特征点与预设地图特征库进行匹配时,可以将当前图像特征点与预设地图图像帧进行匹配,也可以将当前图像特征点与预设地图图像帧中预设的关键帧进行匹配,具体的根据实际情况进行选择,本申请实施例不做具体的限定。
S2026、图像处理装置将预设地图特征库中与当前图像特征点最匹配的预设地图图像帧作为参考帧,将参考帧中包含的预设地图特征点,作为参考特征点。
本申请实施例中,图像处理装置在图像处理装置将当前图像特征点与预设地图特征库进行匹配之后,图像处理装置找到与当前图像特征点最匹配的预设地图图像帧,作为参考帧;图像处理装置找到参考帧之后,图像处理装置获取参考帧中包含的预设地图特征点,作为参考特征点。
本申请实施例中,预设地图特征点是在地图构建时,由图像处理装置从至少一个预设地图图像帧中提取出的特征点构成,图像处理装置从预设地图图像帧中提取特征点的原理同步骤S2021至S2024。
S203、图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第一匹配点对。
本申请实施例中,图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点,并基于当前图像特征点获取参考帧对应的参考特征点之后,图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第一匹配点对。
本申请实施例中,图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第一匹配点对具体可以包括:S2031至S2035。如下:
S2031、图像处理装置将当前图像特征点中的每个当前图像子特征点与参考特征点逐一进行组合,得到至少一个子特征点对。
本申请实施例中，图像处理装置在将预设地图特征库中与当前图像特征点最匹配的预设地图图像帧作为参考帧，将参考帧中包含的预设地图特征点，作为参考特征点之后，图像处理装置会以当前图像特征点中的一个当前图像子特征点，逐一对应每一个参考子特征点，进行一一组合，直至全部当前图像子特征点与全部参考子特征点组合完毕，图像处理装置得到至少一个子特征点对。
本申请实施例中,每个子特征点对包含一个当前帧子图像特征点和一个参考子特征点。
示例性的,当前图像特征点包含当前图像子特征点A1,A2,A3。图像处理装置将当前图像子特征点A1逐一与全部3个参考特征点B、C、D组合,可以得到三个子特征点对(A1,B),(A1,C),与(A1,D)。图像处理装置将全部当前图像特征点与全部参考特征点组合,还可以得到子特征点对(A2,B),(A2,C)与(A2,D),(A3,B),(A3,C)与(A3,D),图像处理装置将这9个子特征点对作为至少一个子特征点对。
在本申请的一些实施例中,只将当前帧图像特征点中的150个角点用于与参考特征点进行组合并用于与参考特征点的匹配。如果把500到1500个额外特征点也用于匹配,会造成额外的计算量,并且不会对第一匹配点对的数量和匹配准确率有很大的提升。
S2032、图像处理装置针对至少一个子特征点对,计算每个当前图像子特征点到参考特征点的汉明距离,得到至少一个汉明距离,至少一个汉明距离与至少一个子特征点对一一对应。
本申请实施例中,图像处理装置将当前图像特征点中的每个当前图像子特征点与参考特征点逐一进行组合,得到至少一个子特征点对之后,图像处理装置在每一个子特征点对中,计算一个当前图像子特征点到一个参考子特征点的汉明距离,直至至少一个子特征点对的汉明距离都计算完毕,图像处理装置得到至少一个汉明距离,至少一个汉明距离与至少一个子特征点对一一对应。
本申请实施例中,针对每个子特征点对,图像处理装置计算其中包含的当前图像子特征点与子参考特征点之间的汉明距离,得到每个子特征点对对应的一个汉明距离。
本申请实施例中，具体的，汉明距离代表两个二进制码之间不同的位数。图像处理装置计算当前图像子特征点中的第二图像特征点与参考特征点对应的特征之间的二进制码不同的位数，得到每一个子特征点对的汉明距离。
本申请实施例中,针对至少一个子特征点对,图像处理装置进行同样的计算,可以得到至少一个子特征点对的至少一个汉明距离。
S2033、图像处理装置从至少一个汉明距离中,确定出最小汉明距离和次小汉明距离。
本申请实施例中,图像处理装置针对至少一个子特征点对,计算每个当前图像子特征点到参考特征点的汉明距离,得到至少一个汉明距离之后,图像处理装置从至少一个汉明距离中,找到最小的汉明距离计算值和第二小的汉明距离计算值,作为最小汉明距离和次小汉明距离。
S2034、当最小汉明距离小于预设显著性阈值乘以次小汉明距离时,图像处理装置对比最小汉明距离与预设相似性阈值。
本申请实施例中，图像处理装置从至少一个汉明距离中，确定出最小汉明距离和次小汉明距离之后，图像处理装置判断最小汉明距离是否小于预设显著性阈值乘以次小汉明距离，当最小汉明距离小于预设显著性阈值乘以次小汉明距离时，图像处理装置对比最小汉明距离与预设相似性阈值。
本申请实施例中,第一预设匹配阈值包含预设相似性阈值和预设显著性阈值。
本申请实施例中,第一预设匹配阈值包含预设相似性阈值和预设显著性阈值。当汉明距离小于预设相似性阈值时,说明这个汉明距离对应的当前图像子特征点与参考子特征点足够相似;当最小汉明距离小于预设显著性阈值乘以次小汉明距离时,说明这个汉明距离对应的当前图像子特征点与参考子特征点是很显著的匹配,没有其他类似的匹配。
示例性的,预设显著性阈值可以取值0.9,预设相似性阈值可以取值50,也可以根据实际情况选择其他阈值,本申请实施例不做限定。
本申请实施例中,图像处理装置计算预设显著性阈值乘以次小汉明距离,然后将计算结果与最小汉明距离对比,如果最小汉明距离仍然小于计算结果,说明最小汉明距离对应的当前图像子特征点和参考子特征点之间的匹配度比较显著,没有其他类似的匹配。
示例性的，最小汉明距离为40，次小汉明距离为50，图像处理装置计算预设显著性阈值乘以次小汉明距离，即50*0.9，得到计算结果45，图像处理装置将计算结果45与最小汉明距离40相比，最小汉明距离40仍然小于计算结果45，说明最小汉明距离40所对应的当前图像子特征点和参考子特征点之间匹配的显著性高。
本申请实施例中,图像处理装置在对汉明距离的显著性进行验证之后,还需要验证汉明距离的近似性。因此,当最小汉明距离小于预设显著性阈值乘以次小汉明距离时,图像处理装置需要进一步验证最小汉明距离是否小于等于预设相似性阈值。
S2035、当最小汉明距离小于等于预设相似性阈值时,图像处理装置从至少一个子特征点对中,确定出与最小汉明距离对应的每个当前图像子特征点对应的子特征点对,从而图像处理装置得到与当前图像特征点对应的第一匹配点对。
本申请实施例中,当最小汉明距离小于预设显著性阈值乘以次小汉明距离时,图像处理装置对比最小汉明距离与预设相似性阈值之后,图像处理装置还要判断最小汉明距离是否小于等于预设相似性阈值,当最小汉明距离小于等于预设相似性阈值时,图像处理装置从至少一个子特征点对中,确定出与最小汉明距离对应的子特征点对,从而图像处理装置得到与当前图像特征点对应的第一匹配点对。
本申请实施例中,如果最小汉明距离小于预设显著性阈值乘以次小汉明距离,并且最小汉明距离小于等于预设相似性阈值,说明最小汉明距离所对应的子特征点对同时满足预设相似性阈值和预设显著性阈值,因此最小汉明距离所对应的子特征点对是匹配的子特征点对。图像处理装置依次对每个子特征点对进行判断,从而在至少一个子特征点对中,图像处理装置可以确定出至少一个子特征点对对应的全部匹配的子特征点对,作为第一匹配点对。
示例性的，预设相似性阈值为50，预设显著性阈值为0.9。图像处理装置计算当前图像子特征点A到3个参考特征点B、C、D之间的汉明距离，分别得到A到B的汉明距离40、A到C的汉明距离50和A到D的汉明距离60。图像处理装置从中确定出最小汉明距离40和次小汉明距离50，由于50乘以预设显著性阈值0.9之后，仍然大于最小汉明距离40，因此说明A到B的汉明距离的显著性明显高于A到C或A到D的汉明距离的显著性。图像处理装置再将最小汉明距离40与预设相似性阈值50进行比较，得到最小汉明距离40仍然小于预设相似性阈值，图像处理装置可以确定(A,B)为匹配的子特征点对。图像处理装置可以继续在至少一个子特征点对中，确定出全部子特征点对中匹配的子特征点对，作为第一匹配点对。
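A minimal sketch of this first matching pass (S2031-S2035), assuming binary descriptors packed as uint8 arrays; the threshold values 50 and 0.9 mirror the example above, and the function names are illustrative:

```python
import numpy as np

def hamming(a, b):
    # Hamming distance = number of differing bits between two binary descriptors
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_one(query_desc, ref_descs, similarity_th=50, significance_th=0.9):
    dists = sorted((hamming(query_desc, d), i) for i, d in enumerate(ref_descs))
    (best, best_i), (second, _) = dists[0], dists[1]
    # Accept only if the best match is both significant (clearly better than
    # the runner-up) and similar enough in absolute terms.
    if best < significance_th * second and best <= similarity_th:
        return best_i
    return None
```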
S204、图像处理装置将第一预设匹配阈值放大,得到第二预设匹配阈值。
本申请实施例中,图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行匹配,得到第一匹配点对之后,图像处理装置将第一预设匹配阈值放大,得到第二预设匹配阈值。
本申请实施例中,当最小汉明距离小于等于预设相似性阈值时,图像处理装置从至少一个子特征点对中,确定出与最小汉明距离对应的每个当前图像子特征点对应的子特征点对,从而得到与当前图像特征点对应的第一匹配点对之后,图像处理装置将第一预设匹配阈值中的预设相似性阈值和预设显著性阈值放大,得到第二预设匹配阈值。
示例性的,图像处理装置将第一预设匹配阈值中的预设相似性阈值放大到100,将第一预设匹配阈值中的预设显著性阈值放大到0.95,图像处理装置将预设相似性阈值100和预设显著性阈值0.95作为第二预设匹配阈值。
S205、图像处理装置获取至少一个预设初始约束条件。
本申请实施例中,图像处理装置将第一预设匹配阈值放大,得到第二预设匹配阈值之后,图像处理装置就要根据至少一个预设初始约束条件来进行约束条件的计算,图像处理装置可以根据图像处理功能的不同获取至少一个预设初始条件。
本申请实施例中,图像处理装置将第一预设匹配阈值放大,得到第二预设匹配阈值是为了得到更多的匹配点对,为了进一步保证得到的匹配点对的正确性,图像处理装置还需要计算出第一匹配点对之间满足的约束条件,来对二次匹配出的匹配点对是否满足同样的约束条件进行验证,而计算约束条件首先需要获取预设初始约束条件。
本申请实施例中,根据图像处理功能的不同,图像处理装置需要获取至少一个预设初始约束条件。
示例性的,当图像处理功能为重定位时,图像处理装置可以以基础矩阵和单应矩阵的矩阵公式作为预设初始约束条件。
S206、图像处理装置使用第一匹配点对,对至少一个预设初始约束条件进行计算,得到至少一个子约束条件。
本申请实施例中,图像处理装置获取至少一个预设初始约束条件之后,图像处理装置使用第一匹配点对,对至少一个预设初始约束条件进行计算,得到至少一个子约束条件。
本申请实施例中,图像处理装置使用第一匹配点对,对至少一个预设初始约束条件进行计算,得到至少一个子约束条件的步骤具体可以包括:S2061至S2063。如下:
S2061、图像处理装置针对每个预设初始约束条件,从第一匹配点对中,选取预设个数的第一匹配点对进行组合,重复预设次数,得到与预设次数的数量相等的第一匹配点对组合,其中,第一匹配点对组合中的每个组合包含预设个数个第一匹配点对。
本申请实施例中,图像处理装置获取至少一个预设初始约束条件之后,图像处理装置为了根据预设初始约束条件进行计算,会从第一匹配点对中,选取预设个数个第一匹配点对进行组合,重复预设次数,得到与预设次数的数量相等的第一匹配点对组合。
示例性的，当预设初始约束条件为基础矩阵计算公式时，需要至少8对匹配点对作为计算参数，因此图像处理装置在第一匹配点对中，随机选取8对第一匹配点对进行组合，用于计算基础矩阵；为了提高计算精度，降低误差，图像处理装置还可以继续在第一匹配点对中随机选取8对匹配点对，重复100次，图像处理装置可得到100组随机选出的第一匹配点对组合，每个组合包含8对匹配点对。
示例性的，当预设初始约束条件为单应矩阵计算公式时，需要至少4对匹配点对作为计算参数，因此图像处理装置在第一匹配点对中，随机选取4对第一匹配点对进行组合，用于计算单应矩阵；为了提高计算精度，降低误差，图像处理装置还可以继续在第一匹配点对中随机选取4对匹配点对，重复100次，图像处理装置可得到100组随机选出的第一匹配点对组合，每个组合包含4对匹配点对。
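The repeated random sampling above is the hypothesis-generation half of a RANSAC-style loop. A sketch, assuming `fit_fn` maps k point pairs to model parameters (8 pairs for an essential/fundamental matrix, 4 for a homography); the names are illustrative:

```python
import random

def fit_models(first_matches, fit_fn, k, trials=100):
    # One candidate model per random sample of k first-pass matching point pairs.
    return [fit_fn(random.sample(first_matches, k)) for _ in range(trials)]
```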
S2062、图像处理装置使用第一匹配点对组合中的每个组合,对每个预设初始约束条件进行计算,得到每个预设初始约束条件对应的预设次数个子约束条件,作为每个预设初始约束条件的最终子约束条件。
本申请实施例中,图像处理装置针对每个预设初始约束条件,从第一匹配点对中,选取预设个数的第一匹配点对进行组合,重复预设次数,得到与预设次数的数量相等的第一匹配点对组合之后,图像处理装置使用第一匹配点对组合中的每个组合,对每个预设初始约束条件进行计算,其中,一个第一匹配点对组合要分别对每个不同的预设初始条件计算一次,因此图像处理装置根据预设次数个第一匹配点对组合,可以得到每个预设初始约束条件分别对应的预设次数个子约束条件,作为每个预设初始约束条件的最终子约束条件。
本申请实施例中,当预设初始条件为基础矩阵计算公式时,图像处理装置使用第一匹配点对组合,计算基础矩阵的方法为:
示例性的,基础矩阵计算公式为:
$$F = K^{-\mathrm{T}} E K^{-1} \tag{1}$$
在(1)中，E为本质矩阵，可以通过第一匹配点对组合计算得出，K为相机的内参矩阵，上标T代表对矩阵进行转置，$K^{-\mathrm{T}}$代表对内参矩阵转置并求逆，$K^{-1}$代表对内参矩阵求逆。其中，K可以表示为：
$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{2}$$
其中，$f_x$和$f_y$分别为相机在x和y方向上的焦距，$c_x$和$c_y$分别为图像中心到图像坐标原点的x,y坐标，单位为像素。
本申请实施例中，由(2)可知，相机内参矩阵可以根据相机参数直接确定，即在公式(1)中，K是已知的参数，图像处理装置需要通过第一匹配点对组合计算出本质矩阵E，然后将已知参数的E和K代入基础矩阵计算公式(1)中，将E和K已知的基础矩阵计算公式(1)作为子约束条件。
本申请实施例中,图像处理装置通过第一匹配点对组合中点的坐标计算本质矩阵E的方法为:
根据极线约束公式：
$$x_1^{\mathrm{T}} E\, x_2 = 0 \tag{3}$$
其中，$x_1$和$x_2$为在一个第一匹配点对组合中，一对匹配点对的归一化三维坐标，$x_1 = [u_1, v_1, 1]^{\mathrm{T}}$，$x_2 = [u_2, v_2, 1]^{\mathrm{T}}$，图像处理装置将$x_1$和$x_2$代入公式(3)，可以得到：
$$\begin{bmatrix} u_1 & v_1 & 1 \end{bmatrix} \begin{bmatrix} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \end{bmatrix} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = 0 \tag{4}$$
其中，$e_1$到$e_9$代表将本质矩阵E写为3*3矩阵时每个矩阵元素的值。图像处理装置为了求解(4)中$e_1$到$e_9$的值，可以将矩阵E展开变形为e，以便列方程求解，从而得到：
$$e = [e_1\ \ e_2\ \ e_3\ \ e_4\ \ e_5\ \ e_6\ \ e_7\ \ e_8\ \ e_9] \tag{5}$$
图像处理装置将公式(5)代入公式(4)，并将公式(4)中的坐标点相乘，图像处理装置可以得到根据一对匹配点对列出的一个方程，如(6)所示：
$$[u_1u_2\ \ u_1v_2\ \ u_1\ \ v_1u_2\ \ v_1v_2\ \ v_1\ \ u_2\ \ v_2\ \ 1] \cdot e = 0 \tag{6}$$
以此类推，一个第一匹配点对组合中有8对匹配点对，图像处理装置可以得到关于本质矩阵的8个方程，如(7)所示：
$$\begin{bmatrix} u_1^{(1)}u_2^{(1)} & u_1^{(1)}v_2^{(1)} & u_1^{(1)} & v_1^{(1)}u_2^{(1)} & v_1^{(1)}v_2^{(1)} & v_1^{(1)} & u_2^{(1)} & v_2^{(1)} & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ u_1^{(8)}u_2^{(8)} & u_1^{(8)}v_2^{(8)} & u_1^{(8)} & v_1^{(8)}u_2^{(8)} & v_1^{(8)}v_2^{(8)} & v_1^{(8)} & u_2^{(8)} & v_2^{(8)} & 1 \end{bmatrix} \cdot e = 0 \tag{7}$$
其中，上标(1)至(8)表示8对匹配点对的序号。由于极线约束公式(3)中，等式两边乘上任意实数，等式都成立，说明本质矩阵E具有尺度等价性，即E中缺少尺度信息，因此E中包含的9个未知数可以由8个方程确定。因此，图像处理装置可以通过8对第一匹配点对求解出一个本质矩阵E，即图像处理装置可以通过一组匹配点对求解出一个本质矩阵E。
本申请实施例中，当预设初始约束条件为基础矩阵公式时，图像处理装置使用预设次数个第一匹配点对组合，对基础矩阵计算预设次数次，得到预设次数个本质矩阵E，图像处理装置再将计算出的本质矩阵E代入预设初始条件即公式(1)，可以得到预设次数个不同参数E的基础矩阵计算公式 $F = K^{-\mathrm{T}} E K^{-1}$，图像处理装置将预设次数个不同参数E的基础矩阵计算公式作为预设初始条件为基础矩阵公式所对应的最终子约束条件。
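A minimal eight-point sketch of equations (5)-(7): stack one row per pair in the ordering of (6) and take the SVD null vector as E up to scale. The function name and array layout are assumptions:

```python
import numpy as np

def essential_from_pairs(x1, x2):
    # x1, x2: 8x3 arrays of normalized homogeneous coordinates [u, v, 1]
    rows = []
    for (u1, v1, _), (u2, v2, _) in zip(x1, x2):
        rows.append([u1*u2, u1*v2, u1, v1*u2, v1*v2, v1, u2, v2, 1.0])  # eq. (6)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 3)  # null vector e, reshaped to E (defined up to scale)
```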
本申请实施例中,当预设初始条件为单应矩阵计算公式时,图像处理装置使用第一匹配点对组合,计算单应矩阵的方法为:
单应矩阵计算公式为:
$$p_2 = H p_1 \tag{8}$$
其中，H代表单应矩阵，$p_1$和$p_2$为第一匹配点对组合中包含的一对匹配点对，$p_1$代表当前帧图像像素坐标系下的坐标 $p_1 = (u_1, v_1, 1)$，$p_2$代表参考帧的像素坐标系下的坐标 $p_2 = (u_2, v_2, 1)$。
图像处理装置将$p_1$和$p_2$的像素坐标代入单应矩阵的计算公式(8)，展开可得到：
$$s \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & h_9 \end{bmatrix} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} \tag{9}$$
在(9)中，$h_1$至$h_9$代表将单应矩阵写为3*3矩阵时每个矩阵元素的值，s代表非零因子。由于矩阵的第三行乘以$p_1$为1，即 $h_7u_1 + h_8v_1 + h_9 = 1$，因此需要通过非零因子s使等式成立。在本申请的一些实施例中，图像处理装置通过s使$h_9 = 1$，再通过(9)去除这个非零因子，可得：
$$u_2 = \frac{h_1u_1 + h_2v_1 + h_3}{h_7u_1 + h_8v_1 + h_9} \tag{10}$$
$$v_2 = \frac{h_4u_1 + h_5v_1 + h_6}{h_7u_1 + h_8v_1 + h_9} \tag{11}$$
整理可得:
$$h_1u_1 + h_2v_1 + h_3 - h_7u_1u_2 - h_8v_1u_2 = u_2 \tag{12}$$
$$h_4u_1 + h_5v_1 + h_6 - h_7u_1v_2 - h_8v_1v_2 = v_2 \tag{13}$$
由(12)和(13)可知，图像处理装置根据1对匹配点对，可以得到2个计算单应矩阵H的约束方程。图像处理装置将单应矩阵H作为向量展开：
$$h = [h_1\ \ h_2\ \ h_3\ \ h_4\ \ h_5\ \ h_6\ \ h_7\ \ h_8\ \ h_9] \tag{14}$$
$$h_9 = 1 \tag{15}$$
以此类推，一个第一匹配点对组合中有4对匹配点对，图像处理装置可以通过4对第一匹配点对得到关于单应矩阵的8个方程，其中$h_9$的值已知等于1。这样，图像处理装置可以通过求解线性方程组得到单应矩阵，图像处理装置再将已知参数H的 $p_2 = Hp_1$ 公式作为一个子约束条件，即图像处理装置可以通过一组第一匹配点对得到一个已知参数H的 $p_2 = Hp_1$ 公式作为一个子约束条件。
本申请实施例中,求解线性方程组的计算方式通常使用直接线性变换法(DLT,Direct Linear Transform),也可以根据实际情况使用其他方法,本申请实施例不做具体的限定。
本申请实施例中，当预设初始约束条件为单应矩阵公式时，图像处理装置使用预设次数个第一匹配点对组合，对单应矩阵计算预设次数次，得到预设次数个单应矩阵H，图像处理装置将计算出的预设次数个单应矩阵H代入单应矩阵计算公式 $p_2 = Hp_1$，可以得到预设次数个不同参数H的单应矩阵计算公式，图像处理装置将预设次数个不同参数H的单应矩阵计算公式作为预设初始条件为单应矩阵公式所对应的最终子约束条件。
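A DLT sketch of equations (12)-(15): with h9 fixed to 1, each pair contributes the two rows of (12)-(13), so four pairs determine the remaining eight unknowns (least squares is used so the sketch also tolerates more than four pairs). Names are illustrative:

```python
import numpy as np

def homography_from_pairs(p1, p2):
    A, b = [], []
    for (u1, v1, _), (u2, v2, _) in zip(p1, p2):
        A.append([u1, v1, 1, 0, 0, 0, -u1*u2, -v1*u2]); b.append(u2)  # eq. (12)
        A.append([0, 0, 0, u1, v1, 1, -u1*v2, -v1*v2]); b.append(v2)  # eq. (13)
    h = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)  # h9 = 1, per eq. (15)
```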
S2063、图像处理装置将至少一个预设初始约束条件中,每个预设初始约束条件的最终子约束条件确定为至少一个子约束条件。
本申请实施例中,图像处理装置使用第一匹配点对组合中的每个组合,对每个预设初始约束条件进行计算,得到每个预设初始约束条件对应的预设次数个子约束条件,作为每个预设初始约束条件的最终子约束条件之后,图像处理装置将至少一个预设初始约束条件中,每个预设初始约束条件对应的最终子约束条件共同作为至少一个子约束条件。
示例性的，图像处理装置以基础矩阵计算公式为预设初始条件，根据100组第一匹配点对组合进行计算，得到了100个不同已知参数的基础矩阵公式作为预设初始条件为基础矩阵对应的100个子约束条件；图像处理装置又以单应矩阵计算公式为预设初始条件，根据100组第一匹配点对组合进行计算，得到了100个不同已知参数的单应矩阵公式作为预设初始条件为单应矩阵对应的100个子约束条件；图像处理装置将这200个子约束条件共同作为至少一个预设初始条件对应的至少一个子约束条件。
S207、图像处理装置基于第一匹配点对中的每个第三特征点的三维坐标和至少一个子约束条件,确定出至少一个第三映射值。
本申请实施例中,图像处理装置使用第一匹配点对,对至少一个预设初始约束条件进行计算,得到至少一个子约束条件之后,图像处理装置根据第一匹配点对中的每个第三特征点的三维坐标和每个不同已知参数的子约束条件,确定出至少一个第三映射值。
本申请实施例中,图像处理装置会使用第一匹配点对中的每个当前图像特征点与对应的参考特征点的三维坐标,根据至少一个子约束条件中的一个已知参数的基础矩阵公式或已知参数的单应矩阵公式,得出对应的基础矩阵或单应矩阵的值,作为第三映射值。
本申请实施例中,图像处理装置对至少一个子约束条件中的全部子约束条件进行计算直至处理完成,图像处理装置可以确定出至少一个基础矩阵和单应矩阵的值,作为至少一个第三映射值。
S208、图像处理装置基于至少一个第三映射值,确定出至少一个第三匹配误差。
本申请实施例中,图像处理装置在基于第一匹配点对中的每个第三特征点的三维坐标和至少一个子约束条件,确定出至少一个第三映射值之后,图像处理装置基于至少一个第三映射值,根据误差计算公式,计算出第一匹配点对在第三映射值作用下的误差,作为至少一个第三匹配误差。
本申请实施例中，当第三映射值为基础矩阵对应的第三映射值时，图像处理装置计算一个第一匹配点对组合在基础矩阵作用下的误差，可以根据公式(16)：
$$(a_1\ \ b_1\ \ c_1) = p_1^{\mathrm{T}} F^{\mathrm{T}} \tag{16}$$
在(16)中，$p_1^{\mathrm{T}}$代表对坐标 $p_1 = (u_1, v_1, 1)$ 进行转置，F代表基础矩阵，F在这里为已知的第三映射值，图像处理装置根据公式(16)计算相对误差，可得：
$$a_1 = F_{11}u_1 + F_{12}v_1 + F_{13},\quad b_1 = F_{21}u_1 + F_{22}v_1 + F_{23},\quad c_1 = F_{31}u_1 + F_{32}v_1 + F_{33} \tag{17}$$
在(17)中，$a_1$,$b_1$和$c_1$为未知的中间参数，$F_{ij}$为基础矩阵F的各个元素。图像处理装置根据公式(17)求得$a_1$,$b_1$和$c_1$之后，图像处理装置根据误差计算公式(18)计算出error的值：
$$\mathrm{error} = \frac{(a_1u_2 + b_1v_2 + c_1)^2}{a_1^2 + b_1^2} \tag{18}$$
其中，根据公式(18)所计算出的error的值代表匹配点对在基础矩阵作用下的误差，图像处理装置将一组匹配点对在一个基础矩阵对应的第三映射值作用下的误差作为一个第三匹配误差。
本申请实施例中，当第三映射值为单应矩阵对应的第三映射值时，图像处理装置计算一个第一匹配点对组合在单应矩阵作用下的误差，可以根据定义：
$$p_2 = H p_1 \tag{19}$$
图像处理装置根据公式(19)计算相对误差，可得：
$$u_2' = \frac{h_1u_1 + h_2v_1 + h_3}{h_7u_1 + h_8v_1 + h_9} \tag{20}$$
$$v_2' = \frac{h_4u_1 + h_5v_1 + h_6}{h_7u_1 + h_8v_1 + h_9} \tag{21}$$
在(20)和(21)中，$u_2'$和$v_2'$为未知的中间参数，图像处理装置求得$u_2'$和$v_2'$之后，图像处理装置使用单应矩阵误差计算公式(22)，计算出error的值：
$$\mathrm{error} = (u_2 - u_2')^2 + (v_2 - v_2')^2 \tag{22}$$
其中，根据公式(22)所计算出的error的值代表匹配点对在单应矩阵作用下的误差，图像处理装置将一组匹配点对在一个单应矩阵对应的第三映射值作用下的误差也作为一个第三匹配误差。
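The two error checks of (16)-(18) and (19)-(22) in a compact sketch; `p1` and `p2` are homogeneous pixel coordinates, and the function names are illustrative:

```python
import numpy as np

def fundamental_error(F, p1, p2):
    a1, b1, c1 = F @ p1                                      # epipolar line, eqs. (16)-(17)
    return (a1*p2[0] + b1*p2[1] + c1)**2 / (a1**2 + b1**2)   # eq. (18)

def homography_error(H, p1, p2):
    q = H @ p1
    u2p, v2p = q[0] / q[2], q[1] / q[2]                      # eqs. (20)-(21)
    return (p2[0] - u2p)**2 + (p2[1] - v2p)**2               # eq. (22)
```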
本申请实施例中,图像处理装置以相同的方法,基于至少一个第三映射值,可以确定出至少一个第一匹配点对组合中的每个组合在至少一个第三映射值作用下的误差,作为至少一个第三匹配误差。
S209、图像处理装置基于至少一个第三匹配误差,确定出至少一个子约束条件对应的小于等于预设误差阈值的第一匹配点对,作为正确匹配点对。
本申请实施例中,图像处理装置在基于至少一个第三映射值,确定出至少一个第三匹配误差之后,图像处理装置基于至少一个第三匹配误差,根据预设误差阈值进行筛选,确定出至少一个子约束条件对应的小于等于预设误差阈值的第一匹配点对,作为正确匹配点对。
本申请实施例中,当error小于等于预设误差阈值时,图像处理装置确认对应的第一匹配点对为正确匹 配点对,当error大于预设误差阈值时,图像处理装置确认对应的第一匹配点对为错误匹配点对。
本申请实施例中,图像处理装置分别针对至少一个子约束条件中的每个子约束条件进行第三匹配误差的验证,得到每个子约束条件对应的正确匹配点对。
S210、图像处理装置从至少一个预设初始约束条件中,分别选取包含正确匹配点对最多的一个子约束条件,作为每个至少一个预设初始约束条件分别对应的至少一个中间子约束条件。
本申请实施例中,图像处理装置在基于至少一个第三匹配误差,确定出至少一个子约束条件对应的小于等于预设误差阈值的第一匹配点对,作为正确匹配点对之后,图像处理装置从至少一个预设初始约束条件中,分别选取每个预设初始约束条件包含正确匹配点对最多的一个子约束条件,作为每个至少一个预设初始约束条件分别对应的至少一个中间子约束条件。
本申请实施例中，图像处理装置得到每个子约束条件对应的正确匹配点对之后，图像处理装置在每个预设初始约束条件下，统计该预设初始约束条件对应的至少一个子约束条件中，包含正确匹配点对最多的一个子约束条件，作为中间子约束条件。图像处理装置对至少一个预设初始约束条件进行统计，得到至少一个中间子约束条件。
示例性的,图像处理装置对基础矩阵公式下的100个基础矩阵子约束条件进行统计,得到对应正确匹配点对最多的一个基础矩阵子约束条件,作为基础矩阵公式对应的中间子约束条件,图像处理装置对单应矩阵公式下的100个单应矩阵子约束条件进行统计,得到对应正确匹配点对最多的一个单应矩阵子约束条件,作为单应矩阵公式对应的中间子约束条件。图像处理装置将基础矩阵公式对应的中间子约束条件和单应矩阵公式对应的中间子约束条件作为至少一个中间子约束条件。
S211、图像处理装置从至少一个中间子约束条件中,选择包含正确匹配点对最多的中间子约束条件,作为约束条件。
本申请实施例中,图像处理装置从至少一个预设初始约束条件中,分别选取包含正确匹配点对最多的一个子约束条件,作为每个至少一个预设初始约束条件分别对应的至少一个中间子约束条件之后,图像处理装置从至少一个中间子约束条件中,选择包含正确匹配点对最多的中间子约束条件,作为约束条件。
示例性的，图像处理装置在100个基础矩阵子约束条件中，找到包含正确匹配点对最多的一个基础矩阵子约束条件 $F_1 = K^{-\mathrm{T}} E_1 K^{-1}$，作为基础矩阵公式对应的中间子约束条件；图像处理装置在100个单应矩阵子约束条件中，找到包含正确匹配点对最多的一个单应矩阵子约束条件 $p_2 = H_1 p_1$，作为单应矩阵公式对应的中间子约束条件；图像处理装置再将 $F_1 = K^{-\mathrm{T}} E_1 K^{-1}$ 和 $p_2 = H_1 p_1$ 进行比较，假如 $F_1 = K^{-\mathrm{T}} E_1 K^{-1}$ 中的正确匹配点对多于 $p_2 = H_1 p_1$ 中的正确匹配点对，图像处理装置将基础矩阵子约束条件 $F_1 = K^{-\mathrm{T}} E_1 K^{-1}$ 作为约束条件。
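The selection in S209-S211 in sketch form, assuming each candidate model is tagged with its constraint family ("F" or "H") and scored with the matching error function; all names are illustrative:

```python
def select_constraint(candidates, pairs, error_fns, err_th):
    # candidates: list of (kind, model), e.g. ("F", F_i) or ("H", H_j)
    # error_fns: {"F": fundamental_error, "H": homography_error}
    best = None
    for kind, model in candidates:
        inliers = sum(1 for p1, p2 in pairs
                      if error_fns[kind](model, p1, p2) <= err_th)
        if best is None or inliers > best[0]:
            best = (inliers, kind, model)
    return best  # (inlier_count, kind, model) with the most correct matching pairs
```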
S212、图像处理装置基于第二预设匹配阈值将图像特征点与参考特征点进行匹配,得到中间匹配点对。
本申请实施例中，图像处理装置从至少一个中间子约束条件中，选择包含正确匹配点对最多的中间子约束条件，作为约束条件之后，图像处理装置基于放大后的第二预设匹配阈值将图像特征点与参考特征点进行匹配，由于阈值被放大，因此图像处理装置会得到更多的匹配点对，图像处理装置将二次匹配得到的匹配点对作为中间匹配点对。
S213、图像处理装置从中间匹配点对中,筛选出满足约束条件的第二匹配点对。
本申请实施例中,图像处理装置基于第二预设匹配阈值将图像特征点与参考特征点进行匹配,得到中间匹配点对之后,图像处理装置从中间匹配点对中,根据约束条件,筛选出满足约束条件的第二匹配点对。
本申请实施例中,图像处理装置从中间匹配点对中,筛选出满足约束条件的第二匹配点对具体包括:
图像处理装置基于中间匹配点对中的每个第一特征点的三维坐标和约束条件,确定出第一映射值;图像处理装置基于第一映射值,确定出第一匹配误差;图像处理装置基于第一匹配误差,确定出小于等于预设误差阈值的第二匹配点对。
本申请实施例中,图像处理装置基于第二预设匹配阈值将图像特征点与参考特征点进行匹配,得到中间匹配点对之后,图像处理装置基于中间匹配点对中的每个第一特征点的三维坐标和约束条件,确定出第一映射值。
本申请实施例中,图像处理装置将中间匹配点对中的图像特征点与参考特征点作为第一特征点,使用第一特征点的三维坐标,和步骤S211中最终得到的约束条件,计算出约束条件的计算结果,作为第一映射值。
示例性的,图像处理装置得到了基础矩阵计算公式作为约束条件,当图像处理装置得到中间匹配点对时,图像处理装置使用中间匹配点对中的每个第一特征点的三维坐标,根据步骤S2062中的方法计算出基础矩阵计算公式对应的基础矩阵计算结果,图像处理装置将根据第二匹配点对计算出的基础矩阵计算结果作为第一映射值。
本申请实施例中,图像处理装置基于中间匹配点对中的每个第一特征点的三维坐标和约束条件,确定出第一映射值之后,图像处理装置基于第一映射值,确定出第一匹配误差。
本申请实施例中,基于第一映射值,确定出第一匹配误差的原理与S208相同。
本申请实施例中,图像处理装置基于第一映射值,确定出第一匹配误差之后,图像处理装置基于第一匹配误差,确定出小于等于预设误差阈值的第二匹配点对。
本申请实施例中基于第一匹配误差,确定出小于等于预设误差阈值的第二匹配点对的原理与S209相同。
在实际应用中,由于二次匹配出的中间匹配点对通常包含了第一匹配点对,因此,图像处理装置从中间匹配点对中,筛选出满足约束条件的第二匹配点对的过程还可以是:
图像处理装置从中间匹配点对中,筛选出除第一匹配点对之外的子匹配点对;图像处理装置基于子匹配点对中的每个第二特征点的三维坐标和约束条件,确定出第二映射值;图像处理装置基于第二映射值,确定出第二匹配误差;图像处理装置基于第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对;图像处理装置将第一匹配点对和最终子匹配点对作为第二匹配点对。
本申请实施例中,图像处理装置基于第二预设匹配阈值将图像特征点与参考特征点进行匹配,得到中间匹配点对之后,图像处理装置从中间匹配点对中,筛选出除第一匹配点对之外的子匹配点对。
本申请实施例中,图像处理装置筛选出除第一匹配点对之外的子匹配点对,得到二次匹配新增的匹配点对,作为子匹配点对。图像处理装置只针对子匹配点对进行处理,可以减少计算量。
本申请实施例中,图像处理装置基于子匹配点对中的每个第二特征点的三维坐标和约束条件,确定出第二映射值的原理与步骤S2062相同。
本申请实施例中,图像处理装置基于第二映射值,确定出第二匹配误差的原理与步骤S208相同。
本申请实施例中,图像处理装置基于第二映射值,确定出第二匹配误差之后,图像处理装置基于第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对。
本申请实施例中,图像处理装置基于第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对的原理与步骤S209相同。
本申请实施例中,图像处理装置基于第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对之后,图像处理装置将第一匹配点对和最终子匹配点对作为第二匹配点对。
可以理解的是,本申请实施例中,由于放大汉明距离阈值和显著性阈值后,又对当前帧图像的特征点和地图参考帧的特征点进行了一次匹配,因此图像处理装置会得到更多的匹配点对,图像处理装置使用第一次匹配计算出的约束条件,再对放大阈值二次匹配出的匹配点对进行验证,保证了匹配点对的准确性,从而解决了匹配点对数量过少和错误的匹配点对较多的问题,得到了更多更准确的匹配点对。
S214、当图像处理功能为重定位功能时,根据第二匹配点对,恢复出至少一个相对位姿信息。
本申请实施例中,图像处理装置从中间匹配点对中,筛选出满足约束条件的第二匹配点对之后,当图像处理功能为重定位功能时,图像处理装置根据第二匹配点对,以及约束条件与相机位姿之间预设的对应关系,恢复出至少一个相对位姿信息。
本申请实施例中,图像处理装置根据第二匹配点对,恢复出至少一个相对位姿信息具体包括:S2141至S2142。如下:
S2141、图像处理装置根据预设约束条件与相机位姿之间的对应关系,以及第二匹配点对,计算出第一拍摄坐标系与第二拍摄坐标系下的至少一个平移量和至少一个旋转量。
本申请实施例中,当约束条件是基础矩阵计算公式时,预设约束条件与相机位姿之间的对应关系为基础矩阵对应的本质矩阵E与相机相对位姿之间的对应关系:
$$E = t^{\wedge} R \tag{23}$$
在(23)中，E为本质矩阵，$t^{\wedge}$代表第一拍摄坐标系与第二拍摄坐标系之间的平移量的反对称矩阵，R代表第一拍摄坐标系与第二拍摄坐标系之间的旋转量。图像处理装置可以通过第二匹配点对计算出本质矩阵E的值，并对本质矩阵E的值使用奇异值分解（SVD，Singular Value Decomposition）的方法，得到t和R的值，进而得到相机相对位姿，SVD的方法如公式(24)所示：
$$E = U \Sigma V^{\mathrm{T}} \tag{24}$$
在(24)中，U为正交矩阵，$V^{\mathrm{T}}$为正交矩阵的转置，$\Sigma$为奇异值矩阵。
根据本质矩阵E的内在性质，E的奇异值形式为$[\sigma\ \ \sigma\ \ 0]^{\mathrm{T}}$，其中，$\sigma$代表任意数值。则在奇异值分解中，对于任意一个E，都有两个可能的t，R与之对应，如公式(24-1)至公式(24-4)：
$$t_1^{\wedge} = U R_Z\!\left(\tfrac{\pi}{2}\right) \Sigma U^{\mathrm{T}} \tag{24-1}$$
$$R_1 = U R_Z^{\mathrm{T}}\!\left(\tfrac{\pi}{2}\right) V^{\mathrm{T}} \tag{24-2}$$
$$t_2^{\wedge} = U R_Z\!\left(-\tfrac{\pi}{2}\right) \Sigma U^{\mathrm{T}} \tag{24-3}$$
$$R_2 = U R_Z^{\mathrm{T}}\!\left(-\tfrac{\pi}{2}\right) V^{\mathrm{T}} \tag{24-4}$$
在(24-1)至(24-4)中，
$$R_Z\!\left(\tfrac{\pi}{2}\right) = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
表示绕Z轴旋转90°的旋转矩阵。
本申请实施例中，当约束条件为基础矩阵时，图像处理装置将$t_1^{\wedge}$和$t_2^{\wedge}$作为至少一个平移量，将$R_1$和$R_2$作为至少一个旋转量。
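A sketch of the SVD decomposition (24)-(24-4): two rotations built from W = R_Z(π/2) and its transpose, and a translation direction up to sign, giving the four candidate poses that S215-S218 later test. The function name and return layout are assumptions:

```python
import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])  # rotation by 90 degrees about Z
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    R1 = R1 if np.linalg.det(R1) > 0 else -R1  # enforce proper rotations (det = +1)
    R2 = R2 if np.linalg.det(R2) > 0 else -R2
    t = U[:, 2]                                # translation direction, up to sign/scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```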
本申请实施例中,当约束条件为单应矩阵计算公式时,预设约束条件与相机位姿之间的对应关系为:
$$A = dR + t n^{\mathrm{T}} = d\,K^{-1} H K \tag{25}$$
其中，H为单应矩阵，图像处理装置可以通过第二匹配点对计算出单应矩阵H的值，d表示空间中一个平面与零点平面的上下平移值，A为根据单应矩阵的值H、以及相机内参矩阵K和d计算出的中间值矩阵。
在空间中,由于至少三个点才能构成一个平面,因此某一个平面上的点满足公式(26):
$$ax + by + cz = d \tag{26}$$
在(26)中，a,b,c为已知的平面法向量，根据a,b,c可以确定一组平行平面族，x,y,z为点的三维坐标，通过d可以定位到一组平行平面族中唯一的一个平面。
图像处理装置得到平面的单位法向量：
$$n = [a\ \ b\ \ c]^{\mathrm{T}} \tag{27}$$
那么该平面上的一个三维坐标点X满足公式(28)，X可以是当前图像特征点或参考特征点的三维坐标：
$$\frac{n^{\mathrm{T}} X}{d} = 1 \tag{28}$$
图像处理装置将公式(28)乘到公式(29)的t中:
$$X_2 = R X_1 + t \tag{29}$$
图像处理装置可以得到公式(30)：
$$X_2 = R X_1 + t\,\frac{n^{\mathrm{T}} X_1}{d} = \left(R + \frac{t n^{\mathrm{T}}}{d}\right) X_1 \tag{30}$$
其中，$X_1$和$X_2$为同一个点的三维坐标分别在第一拍摄坐标系与第二拍摄坐标系下的坐标。
其中，同一个点在当前帧图像坐标系和参考帧图像坐标系下的坐标$x_1$,$x_2$之间满足公式：
$$\alpha x_2 = H x_1 \tag{31}$$
在(31)中，$\alpha$为三维坐标点X在第二拍摄坐标系与第一拍摄坐标系下深度的比值，相当于步骤S2062中的非零因子s。
示例性的，一个三维坐标点在第一拍摄坐标系下的坐标可以表示为(32)，该三维坐标点在第二拍摄坐标系下的坐标可以表示为(33)：
$$X_1 = [x^{(1)}\ \ y^{(1)}\ \ z^{(1)}] \tag{32}$$
$$X_2 = [x^{(2)}\ \ y^{(2)}\ \ z^{(2)}] \tag{33}$$
其中，$X_1$和$X_2$为同一个三维坐标点在上述两个不同坐标系下的坐标。
则图像处理装置可以得到$\alpha$的值为：
$$\alpha = \frac{z^{(2)}}{z^{(1)}} \tag{34}$$
图像处理装置将(34)中$\alpha$的值代入公式(31)，并在等式两边同时乘以相机内参矩阵的逆$K^{-1}$，等式仍然成立，则图像处理装置可以得到：
$$\alpha K^{-1} x_2 = K^{-1} H K \cdot K^{-1} x_1 \tag{35}$$
整理可得：
$$X_2 = K^{-1} H K X_1 \tag{36}$$
图像处理装置将(36)与公式(30)联立可得：
$$K^{-1} H K = R + \frac{t n^{\mathrm{T}}}{d} \tag{37}$$
图像处理装置根据下方公式对A进行奇异值分解:
$$A = U \Lambda V^{\mathrm{T}} \tag{38}$$
$$\Lambda = \mathrm{diag}(d_1, d_2, d_3),\quad d_1 \ge d_2 \ge d_3 \tag{39}$$
其中，U为正交矩阵，$V^{\mathrm{T}}$为正交矩阵的转置，$\Lambda$为奇异值矩阵。根据正交矩阵的定义，图像处理装置可以得到：
$$U^{\mathrm{T}} U = V^{\mathrm{T}} V = I \tag{40}$$
其中，I为单位矩阵。
图像处理装置将U和V带入奇异值分解的(38)中进行移项，可以得到：
$$\Lambda = U^{\mathrm{T}} A V = d\,U^{\mathrm{T}} R V + (U^{\mathrm{T}} t)(V^{\mathrm{T}} n)^{\mathrm{T}} \tag{41}$$
图像处理装置令 $s = \det U \det V = \pm 1$。其中，det代表行列式。由于U,V为单位正交矩阵，其行列式为±1。这里的s和步骤S2062中的非零因子s不同。
令：
$$R' = s U^{\mathrm{T}} R V,\quad t' = U^{\mathrm{T}} t,\quad n' = V^{\mathrm{T}} n,\quad d' = s d \tag{42}$$
图像处理装置可以求解出$R'$和$t'$的值。
将$R'$和$t'$的值带入到$\Lambda$中，可以得到：
$$\Lambda = d' R' + t' n'^{\mathrm{T}} \tag{43}$$
图像处理装置取一组正交基底：$e_1 = (1\ 0\ 0)^{\mathrm{T}}$，$e_2 = (0\ 1\ 0)^{\mathrm{T}}$，$e_3 = (0\ 0\ 1)^{\mathrm{T}}$，
$$n' = (x_1\ \ x_2\ \ x_3)^{\mathrm{T}} = x_1 e_1 + x_2 e_2 + x_3 e_3 \tag{44}$$
图像处理装置将(44)代入$\Lambda$中，可得：
$$\Lambda = d' R' + t'(x_1 e_1 + x_2 e_2 + x_3 e_3)^{\mathrm{T}} \tag{45}$$
由于n是单位法向量，V是单位正交矩阵，也可以看作一个旋转矩阵，Vn可以看作将n中的各个分量转到V中的三个单位正交基上，所以$n'$也是单位法向量。因此图像处理装置可以得到以下等式：
$$x_1^2 + x_2^2 + x_3^2 = 1 \tag{46}$$
将(45)分别作用于基底向量$e_1$,$e_2$,$e_3$，图像处理装置可以得到：
$$d_i e_i = d' R' e_i + t' x_i,\quad i = 1, 2, 3 \tag{47}$$
图像处理装置对(47)进行消元，消去$t'$，可得：
$$\begin{cases} d' R'(x_2 e_1 - x_1 e_2) = d_1 x_2 e_1 - d_2 x_1 e_2 \\ d' R'(x_3 e_2 - x_2 e_3) = d_2 x_3 e_2 - d_3 x_2 e_3 \\ d' R'(x_1 e_3 - x_3 e_1) = d_3 x_1 e_3 - d_1 x_3 e_1 \end{cases} \tag{48}$$
本申请实施例中，消元方法可以将第一个等式两边乘以$x_2$，再将第二个等式两边同时乘以$x_1$，然后将两个等式两边同时相减。
由于$R'$也是一个旋转矩阵，满足$R' R'^{\mathrm{T}} = I$，所以有$\|R'X\| = \|X\|$，所以图像处理装置将(48)变形，再取二范数，可以得到：
$$\begin{cases} (d'^2 - d_2^2)x_1^2 + (d'^2 - d_1^2)x_2^2 = 0 \\ (d'^2 - d_3^2)x_2^2 + (d'^2 - d_2^2)x_3^2 = 0 \\ (d'^2 - d_1^2)x_3^2 + (d'^2 - d_3^2)x_1^2 = 0 \end{cases} \tag{49}$$
由此，图像处理装置得到了一个关于$(x_1^2,\ x_2^2,\ x_3^2)$的齐次方程组，必然有非零解，且系数矩阵的行列式必定为零，所以可以先求出$d'$，则有：
$$(d'^2 - d_1^2)(d'^2 - d_2^2)(d'^2 - d_3^2) = 0 \tag{50}$$
根据奇异值的大小，有三种情况：
$$d_1 > d_2 > d_3,\quad d_1 = d_2 > d_3 \text{ 或 } d_1 > d_2 = d_3,\quad d_1 = d_2 = d_3$$
无论如何，都有解$d' = \pm d_2$，可以使齐次方程组成立。而$d' = d_2$和$d' = -d_2$又分别对应上述三种情况，所以这里有6种可能性（实际上是8种，$d_1 = d_2 > d_3$或$d_1 > d_2 = d_3$的情况下又有两种）。实际的单应矩阵中只有一个解成立，图像处理装置需要通过以下步骤找到这个解：
在$d' = d_2$，$d_1 > d_2 > d_3$的情况中：
齐次方程组变成：
$$\begin{cases} (d_2^2 - d_1^2)x_2^2 = 0 \\ (d_2^2 - d_3^2)x_2^2 = 0 \\ (d_2^2 - d_1^2)x_3^2 + (d_2^2 - d_3^2)x_1^2 = 0 \end{cases} \tag{51}$$
由前两个式子可得：$x_2 = 0$，将$x_2 = 0$带入到(47)中：
$$\begin{cases} d_1 e_1 = d' R' e_1 + t' x_1 \\ d_2 e_2 = d' R' e_2 \\ d_3 e_3 = d' R' e_3 + t' x_3 \end{cases} \tag{52}$$
可以得到$e_2 = R' e_2$，即$R'$是一个围绕$e_2$旋转的旋转矩阵。图像处理装置可以将$R'$写成：
$$R' = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} \tag{53}$$
图像处理装置将$R'$代入到公式(48)的 $d' R'(x_1 e_3 - x_3 e_1) = d_3 x_1 e_3 - d_1 x_3 e_1$ 中，可以得到：
$$\begin{cases} d'(x_1\cos\theta - x_3\sin\theta) = d_3 x_1 \\ d'(x_1\sin\theta + x_3\cos\theta) = d_1 x_3 \end{cases}$$
图像处理装置根据$d_1 \ne d_3$和
$$x_1^2 + x_3^2 = 1$$
可以得到：
$$x_1 = \varepsilon_1 \sqrt{\frac{d_1^2 - d_2^2}{d_1^2 - d_3^2}},\quad x_3 = \varepsilon_3 \sqrt{\frac{d_2^2 - d_3^2}{d_1^2 - d_3^2}},\quad \varepsilon_1, \varepsilon_3 = \pm 1$$
图像处理装置从而得到$\cos\theta$和$\sin\theta$：
$$\cos\theta = \frac{d_1 x_3^2 + d_3 x_1^2}{d_2}$$
$$\sin\theta = \frac{(d_1 - d_3)\, x_1 x_3}{d_2}$$
将$\cos\theta$和$\sin\theta$代入到公式(53)的$R'$矩阵中，图像处理装置可以得到$R'$。
其中，$R'$代表单应矩阵恢复出的、第一拍摄坐标系与第二拍摄坐标系之间的旋转量。
图像处理装置再将$R'$带回到(52)的 $d_1 e_1 = d' R' e_1 + t' x_1$ 和 $d_3 e_3 = d' R' e_3 + t' x_3$ 中，可以得到：
$$t' = (d_1 - d_3) \begin{bmatrix} x_1 \\ 0 \\ -x_3 \end{bmatrix}$$
其中，$t'$代表单应矩阵恢复出的、第一拍摄坐标系与第二拍摄坐标系之间的平移量。
本申请实施例中，当约束条件为单应矩阵时，图像处理装置将$t'$作为至少一个平移量，将$R'$作为至少一个旋转量。
S2142、图像处理装置将至少一个平移量和至少一个旋转量进行对应组合,得到至少一个相对位姿信息。
本申请实施例中,图像处理装置根据预设约束条件与相机位姿之间的对应关系,以及第二匹配点对, 计算出第一拍摄坐标系与第二拍摄坐标系下的至少一个平移量和至少一个旋转量之后,图像处理装置将至少一个平移量和至少一个旋转量进行一一对应组合,将平移量和旋转量的每种组合方式作为至少一个相对位姿信息。
当约束条件为基础矩阵时,根据S2141中得到的不同的t和R的值,图像处理装置将分别将不同的t和R进行组合,图像处理装置可以得到4个不同的相机相对位姿,每个相机相对位姿包含一个t和一个R的值。当约束条件为单应矩阵时,根据S2141中得到的不同的t'和R'的值,基于不同的奇异值大小的情况,图像处理装置可以得到8种可能的情况,如表2所示:
表2
表2原文为图像（Figure PCTCN2020096549-appb-000042），内容为根据奇异值大小（$d_1 > d_2 > d_3$、$d_1 = d_2 > d_3$或$d_1 > d_2 = d_3$、$d_1 = d_2 = d_3$）与$d' = \pm d_2$的组合所得到的8种可能的$(R', t')$情形，具体表格内容无法从文本中恢复。
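OpenCV ships an implementation of this homography decomposition; a one-call sketch, assuming the pixel-domain H and the intrinsic matrix K of equation (2) have already been estimated. It returns up to four physically distinct (R, t, n) solutions, with the sign ambiguities of Table 2 folded in:

```python
import cv2

def decompose_homography(H, K):
    # H: 3x3 pixel-domain homography, K: 3x3 camera intrinsic matrix (assumed given)
    num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
    return list(zip(rotations, translations, normals))
```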
S215、图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出当前图像特征点在第一拍摄坐标系的第一图像位置坐标,以及在第二拍摄坐标系的第二图像位置坐标;第一拍摄坐标系为当前图像特征点对应的拍摄坐标系,第二拍摄坐标系为参考特征点对应的拍摄坐标系。
本申请实施例中,图像处理装置根据第二匹配点对,恢复出至少一个相对位姿信息之后,图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出当前图像特征点在当前图像特征点对应的拍摄坐标系的第一图像位置坐标,以及在参考特征点对应的拍摄坐标系的第二图像位置坐标。
本申请实施例中,针对每一个相对位姿信息,图像处理装置逐一计算出第二匹配点对中每个当前图像子特征点在当前图像帧拍摄坐标系下的位置坐标;图像处理装置再计算出同一个当前图像子特征点在参考帧拍摄坐标系下的位置坐标,直至第二匹配点对中当前图像特征点计算完毕,图像处理装置将全部当前图像特征点在当前图像帧拍摄坐标系下的位置坐标作为第一图像位置坐标,图像处理装置将全部当前图像特征点在参考拍摄坐标系下的位置坐标作为第二图像位置坐标。
本申请实施例中，图像处理装置计算每个当前图像特征点在当前图像帧拍摄坐标系下的位置坐标，以及同一个当前图像子特征点在参考帧拍摄坐标系下的位置坐标的方法为：图像处理装置根据每一种相机相对位姿，将当前子图像特征点在图像像素坐标系中的坐标，转化为当前子图像特征点在空间中，分别以当前帧图像对应的相机以及参考帧对应的相机为坐标系原点的三维坐标，示例性的，第一图像位置点坐标为(x,y,z)，其中，x代表当前图像子特征点在当前图像帧拍摄相机成像界面的水平方向的坐标，y代表当前图像子特征点在当前图像帧拍摄相机成像界面的竖直方向的坐标，z代表当前图像子特征点沿相机光轴方向的深度值。
S216、图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出参考特征点在第一拍摄坐标系的第一参考位置坐标;以及在第二拍摄坐标系的第二参考位置坐标。
本申请实施例中,图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出当前图像特征点在第一拍摄坐标系的第一图像位置坐标,以及在第二拍摄坐标系的第二图像位置坐标之后,图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出参考特征点在当前图像特征点对应的拍摄坐标系的第一参考位置坐标;以及在参考特征点对应的拍摄坐标系的第二参考位置坐标。
本申请实施例中，针对每一个相对位姿信息，图像处理装置逐一计算出第二匹配点对中每个参考子特征点在当前图像帧拍摄坐标系下的位置坐标；图像处理装置再计算出同一个参考子特征点在参考帧拍摄坐标系下的位置坐标，直至第二匹配点对中参考特征点计算完毕，图像处理装置将参考特征点在当前图像帧拍摄坐标系下的位置坐标作为第一参考位置坐标，图像处理装置将参考特征点在参考拍摄坐标系下的位置坐标作为第二参考位置坐标。
本申请实施例中，图像处理装置计算每个参考子特征点在当前图像帧拍摄坐标系下的位置坐标，以及同一个参考子特征点在参考帧拍摄坐标系下的位置坐标的方法为：图像处理装置根据每一种相机相对位姿，将参考子特征点在图像像素坐标系中的坐标，转化为参考子特征点在空间中，分别以当前帧图像对应的相机以及参考帧对应的相机为坐标系原点的三维坐标，示例性的，第一参考位置坐标为(x,y,z)，其中，x代表参考子特征点在当前图像帧拍摄相机成像界面的水平方向的坐标，y代表参考子特征点在当前图像帧拍摄相机成像界面的竖直方向的坐标，z代表参考子特征点沿相机光轴方向的深度值。
S217、图像处理装置基于第一图像位置坐标、第二图像位置坐标第一参考位置坐标和第二参考位置坐标,分别对每个相对位姿信息进行校验,得到至少一个相对位姿信息对应的校验结果。
本申请实施例中,图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出参考特征点在第一拍摄坐标系的第一参考位置坐标;以及在第二拍摄坐标系的第二参考位置坐标之后,图像处理装置基于第一图像位置坐标、第二图像位置坐标第一参考位置坐标和第二参考位置坐标,分别对每个相对位姿信息恢复出的位置坐标是否正确进行校验,得到至少一个相对位姿信息对应的校验结果。
本申请实施例中,图像处理装置基于第一图像位置坐标、第二图像位置坐标第一参考位置坐标和第二参考位置坐标,分别对每个相对位姿信息进行校验,得到至少一个相对位姿信息对应的校验结果具体可以包括:S2171至S2176。如下:
S2171、图像处理装置根据第一图像位置坐标和第二图像位置坐标,得到当前图像特征点中的每个当前图像子特征点对应的第一图像位置子坐标和第二图像位置子坐标,以及参考特征点中的每个参考子特征点对应的第一参考位置子坐标和第二参考位置子坐标。
S2172、图像处理装置针对每个相对位姿信息,当每个当前图像子特征点的第一图像位置子坐标满足第一预设条件,并且每个当前图像子特征点的第二图像位置子坐标也满足第一预设条件时,将每个当前图像子特征点作为正确当前图像子特征点,第一预设条件为三维坐标的深度值为预设值。
本申请实施例中,图像处理装置根据第一图像位置坐标和第二图像位置坐标,得到当前图像特征点中的每个当前图像子特征点对应的第一图像位置子坐标和第二图像位置子坐标,以及参考特征点中的每个参考子特征点对应的第一参考位置子坐标和第二参考位置子坐标之后,图像处理装置针对每个相对位姿信息,当每个当前图像子特征点的第一图像位置子坐标满足第一预设条件,并且每个当前图像子特征点的第二图像位置子坐标也满足第一预设条件时,将每个当前图像子特征点作为正确当前图像子特征点,第一预设条件为三维坐标的深度值为预设值。
本申请实施例中，示例性的，第一图像位置子坐标为(x,y,z)，其中，x代表第一图像位置子坐标在相机成像界面的水平方向的坐标，y代表第一图像位置子坐标在相机成像界面的竖直方向的坐标，z代表深度值；对应的第二图像位置子坐标为$(x_1, y_1, z_1)$。如果z和$z_1$都为正，说明相机相对位姿求解正确，则图像处理装置确认当前图像子特征点为正确当前图像子特征点。如果z或$z_1$值有一个为负值，说明这个点在相机后面，即对应的相机相对位姿是不正确的，则图像处理装置确认当前图像子特征点为错误当前图像子特征点。
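The acceptance test of S2172 reduces to a positive-depth (cheirality) check in both shooting coordinate systems; a sketch with illustrative names:

```python
def passes_depth_check(X_cur, X_ref):
    # X_cur, X_ref: the point's 3D coordinates in the current-frame and
    # reference-frame shooting coordinate systems; z must be positive in both.
    return X_cur[2] > 0 and X_ref[2] > 0
```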
S2173、图像处理装置从当前图像特征点中,确定出每个当前图像子特征点中的正确当前图像子特征点,从而得到当前图像特征点的正确当前图像特征点。
本申请实施例中,图像处理装置针对每个相对位姿信息,当每个当前图像子特征点的第一图像位置子坐标满足第一预设条件,并且每个当前图像子特征点的第二图像位置子坐标也满足第一预设条件时,将每个当前图像子特征点作为正确当前图像子特征点,第一预设条件为三维坐标的深度值为预设值之后,图像处理装置从当前图像特征点中,确定出每个当前图像子特征点中的正确当前图像子特征点,从而得到当前图像特征点的正确当前图像特征点。
S2174、图像处理装置针对每个相对位姿信息,当每个参考子特征点的第一参考位置子坐标满足第一预设条件,并且每个参考子特征点的第二参考位置子坐标也满足第一预设条件时,将每个参考子特征点作为正确参考子特征点。
本申请实施例中,图像处理装置从当前图像特征点中,确定出每个当前图像子特征点中的正确当前图像子特征点,从而得到当前图像特征点的正确当前图像特征点之后,图像处理装置针对每个相对位姿信息,当每个参考子特征点的第一参考位置子坐标满足第一预设条件,并且每个参考子特征点的第二参考位置子坐标也满足第一预设条件时,将每个参考子特征点作为正确参考子特征点。
本申请实施例中，示例性的，第一参考位置子坐标为(x,y,z)，其中，x代表第一参考位置子坐标在相机成像界面的水平方向的坐标，y代表第一参考位置子坐标在相机成像界面的竖直方向的坐标，z代表深度值；对应的第二参考位置子坐标为$(x_1, y_1, z_1)$。如果z和$z_1$都为正，说明相机相对位姿求解正确，则图像处理装置确认参考子特征点为正确参考子特征点。如果z或$z_1$值有一个为负值，说明这个点在相机后面，即对应的相机相对位姿是不正确的，则图像处理装置确认参考子特征点为错误参考子特征点。
S2175、图像处理装置从参考特征点中,确定出每个参考子特征点中的正确参考子特征点,从而得到参考特征点的正确参考特征点。
本申请实施例中,图像处理装置针对每个相对位姿信息,当每个参考子特征点的第一参考位置子坐标满足第一预设条件,并且每个参考子特征点的第二参考位置子坐标也满足第一预设条件时,将每个参考子特征点作为正确参考子特征点之后,图像处理装置从参考特征点中,确定出每个参考子特征点中的正确参考子特征点,从而得到参考特征点的正确参考特征点。
S2176、图像处理装置将正确当前图像特征点与正确参考特征点,作为每个相对位姿信息对应的校验结果,从而得到至少一个相对位姿信息对应的校验结果。
本申请实施例中,图像处理装置针对每个相对位姿信息,从第二匹配点对中,计算出参考特征点在第一拍摄坐标系的第一参考位置坐标;以及在第二拍摄坐标系的第二参考位置坐标之后,图像处理装置基于第一图像位置坐标、第二图像位置坐标、第一参考位置坐标和第二参考位置坐标,分别对每个相对位姿信息进行校验,得到至少一个相对位姿信息对应的校验结果。
S218、图像处理装置基于校验结果,从至少一个相对位姿信息中确定出目标位姿信息。
本申请实施例中,图像处理装置基于第一图像位置坐标、第二图像位置坐标第一参考位置坐标和第二参考位置坐标,分别对每个相对位姿信息进行校验,得到至少一个相对位姿信息对应的校验结果之后,图像处理装置基于校验结果,从至少一个相对位姿信息中找出恢复出正确位置坐标最多的相对位姿信息,作为目标位姿信息。
本申请实施例中,图像处理装置基于校验结果,从至少一个相对位姿信息中确定出目标位姿信息具体包括:S2181至S2182。如下:
S2181、图像处理装置从至少一个相对位姿信息对应的校验结果中,统计出每个相对位姿信息对应的校验结果中的正确子当前图像特征点与正确子参考特征点的数量总和,从而得到每个相对位姿信息对应的正确结果数量。
本申请实施例中,图像处理装置将正确当前图像特征点与正确参考特征点,作为每个相对位姿信息对应的校验结果,从而得到至少一个相对位姿信息对应的校验结果之后,图像处理装置从至少一个相对位姿信息对应的校验结果中,统计出每个相对位姿信息对应的校验结果中的正确子当前图像特征点与正确子参考特征点的数量总和,从而得到每个相对位姿信息对应的正确结果数量。
S2182、图像处理装置从每个相对位姿信息对应的正确结果数量中,确定出正确结果数量最多的目标位姿信息。
本申请实施例中,图像处理装置从至少一个相对位姿信息对应的校验结果中,统计出每个相对位姿信息对应的校验结果中的正确子当前图像特征点与正确子参考特征点的数量总和,从而得到每个相对位姿信息对应的正确结果数量之后,图像处理装置从每个相对位姿信息对应的正确结果数量中,确定出正确结果数量最多的目标位姿信息。
S219、图像处理装置根据目标位姿信息实现重定位功能。
本申请实施例中,图像处理装置从每个相对位姿信息对应的正确结果数量中,确定出正确结果数量最多的目标位姿信息之后,图像处理装置根据目标位姿信息实现重定位功能。
本申请实施例中，图像处理装置根据最终输出的位姿，得到当前帧图像相对于参考帧之间的平移和旋转，由此得到当前帧图像在参考帧中的位置，从而实现重定位。
可以理解的是，图像处理装置会获取待处理的当前帧图像；接下来图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点，用于根据当前图像特征点，找到与当前帧图像特征点最匹配的参考帧，并获取参考帧对应的参考特征点；然后图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行第一次匹配，得到第一匹配点对；图像处理装置将第一预设匹配阈值放大，得到第二预设匹配阈值，以及基于第一匹配点对，得到约束条件；最后，在约束条件下，图像处理装置基于第二预设匹配阈值将当前图像特征点与参考特征点进行匹配，得到第二匹配点对。通过本申请实施例提供的方法，图像处理装置可以通过先计算少量可靠匹配，再通过这些匹配估计数学模型，从而拓展出更多的匹配，同时保证了匹配点对的准确率，解决了匹配点对数量过少，存在较多错误匹配点对的问题，提高了匹配点对的数量和准确率，最终提高了图像处理的准确性。
进一步的,图像处理装置可以在计算约束条件时,同时计算出至少一个子约束条件,并在至少一个子约束条件中根据正确匹配点对的数量确定出约束条件。当图像处理功能为重定位功能时,图像处理装置可以自动选择当前场景最适合的基础矩阵或单应矩阵进行相对位姿的恢复,从而提高重定位的准确性。
本申请实施例提供一种图像处理装置1，对应于一种图像处理方法；图3为本申请实施例提供的一种图像处理装置的结构示意图一，如图3所示，该图像处理装置1包括：
获取单元10,配置为获取待处理的当前帧图像;
提取单元11,配置为从所述当前帧图像中提取所述当前帧图像的当前图像特征点,并基于所述当前图像特征点获取参考帧对应的参考特征点;
匹配单元12,配置为基于第一预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第一匹配点对;
计算单元13,配置为将所述第一预设匹配阈值放大,得到第二预设匹配阈值,以及基于所述第一匹配点对,得到约束条件;
所述匹配单元12,还配置为在所述约束条件下,基于所述第二预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第二匹配点对;
处理单元14,配置为基于所述第二匹配点对,进行图像处理功能。
在本申请的一些实施例中,上述匹配单元12,还配置为基于所述第二预设匹配阈值将所述图像特征点与所述参考特征点进行匹配,得到中间匹配点对;从所述中间匹配点对中,筛选出满足所述约束条件的所述第二匹配点对。
在本申请的一些实施例中,所述匹配单元12,还配置为基于所述中间匹配点对中的每个第一特征点的三维坐标和所述约束条件,确定出第一映射值;基于所述第一映射值,确定出第一匹配误差;基于所述第一匹配误差,确定出小于等于预设误差阈值的所述第二匹配点对。
在本申请的一些实施例中,所述匹配单元12,还配置为从所述中间匹配点对中,筛选出除所述第一匹配点对之外的子匹配点对;基于所述子匹配点对中的每个第二特征点的三维坐标和所述约束条件,确定出第二映射值;基于所述第二映射值,确定出第二匹配误差;基于所述第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对;将所述第一匹配点对和所述最终子匹配点对作为所述第二匹配点对。
在本申请的一些实施例中,所述计算单元13,还配置为获取至少一个预设初始约束条件;使用所述第一匹配点对,对所述至少一个预设初始约束条件进行计算,得到至少一个子约束条件;基于所述第一匹配点对中的每个第三特征点的三维坐标和所述至少一个子约束条件,确定出至少一个第三映射值;基于所述至少一个第三映射值,确定出至少一个第三匹配误差;基于所述至少一个第三匹配误差,确定出所述至少一个子约束条件对应的小于等于所述预设误差阈值的所述第一匹配点对,作为正确匹配点对;从所述至少一个预设初始约束条件中,分别选取包含所述正确匹配点对最多的一个子约束条件,作为每个所述至少一个预设初始约束条件分别对应的至少一个中间子约束条件;从所述至少一个中间子约束条件中,选择包含所述正确匹配点对最多的中间子约束条件,作为所述约束条件。
在本申请的一些实施例中,所述计算单元13,还配置为针对每个预设初始约束条件,从所述第一匹配点对中,选取预设个数的所述第一匹配点对进行组合,重复预设次数,得到与所述预设次数的数量相等的第一匹配点对组合,其中,所述第一匹配点对组合中的每个组合包含所述预设个数个所述第一匹配点对;使用所述第一匹配点对组合中的每个组合,对所述每个预设初始约束条件进行计算,得到所述每个预设初始约束条件对应的所述预设次数个所述子约束条件,作为所述每个预设初始约束条件的最终子约束条件;将所述至少一个预设初始约束条件中,所述每个预设初始约束条件的最终子约束条件确定为所述至少一个子约束条件。
在本申请的一些实施例中,所述处理单元14,还配置为根据所述第二匹配点对,恢复出至少一个相对位姿信息;针对每个相对位姿信息,从所述第二匹配点对中,计算出所述当前图像特征点在第一拍摄坐标系的第一图像位置坐标,以及在第二拍摄坐标系的第二图像位置坐标;所述第一拍摄坐标系为所述当前图像特征点对应的拍摄坐标系,所述第二拍摄坐标系为所述参考特征点对应的拍摄坐标系;针对所述每个相对位姿信息,从所述第二匹配点对中,计算出所述参考特征点在所述第一拍摄坐标系的第一参考位置坐标;以及在所述第二拍摄坐标系的第二参考位置坐标;基于所述第一图像位置坐标、所述第二图像位置坐标、所述第一参考位置坐标和所述第二参考位置坐标,分别对所述每个相对位姿信息进行校验,得到所述至少一个相对位姿信息对应的校验结果;基于所述校验结果,从所述至少一个相对位姿信息中确定出目标位姿信息;根据所述目标位姿信息实现所述重定位功能。
在本申请的一些实施例中,所述处理单元14,还配置为根据预设约束条件与相机位姿之间的对应关系,以及所述第二匹配点对,计算出所述第一拍摄坐标系与所述第二拍摄坐标系下的至少一个平移量和至少一个旋转量;将所述至少一个平移量和至少一个旋转量进行对应组合,得到所述至少一个相对位姿信息。
在本申请的一些实施例中,所述处理单元14,还配置为根据所述第一图像位置坐标和所述第二图像位置坐标,得到所述当前图像特征点中的每个当前图像子特征点对应的第一图像位置子坐标和第二图像位置子坐标,以及所述参考特征点中的每个参考子特征点对应的第一参考位置子坐标和第二参考位置子坐标;针对所述每个相对位姿信息,当所述每个当前图像子特征点的所述第一图像位置子坐标满足第一预设条件, 并且所述每个当前图像子特征点的所述第二图像位置子坐标也满足所述第一预设条件时,将所述每个当前图像子特征点作为正确当前图像子特征点,所述第一预设条件为三维坐标的深度值为预设值;从所述当前图像特征点中,确定出每个当前图像子特征点中的所述正确当前图像子特征点,从而得到所述当前图像特征点的正确当前图像特征点;针对所述每个相对位姿信息,当所述每个参考子特征点的所述第一参考位置子坐标满足所述第一预设条件,并且所述每个参考子特征点的所述第二参考位置子坐标也满足所述第一预设条件时,将所述每个参考子特征点作为正确参考子特征点;从所述参考特征点中,确定出每个参考子特征点中的所述正确参考子特征点,从而得到所述参考特征点的正确参考特征点;将所述正确当前图像特征点与所述正确参考特征点,作为所述每个相对位姿信息对应的校验结果,从而得到所述至少一个相对位姿信息对应的校验结果;基于所述校验结果,从所述至少一个相对位姿信息中确定出目标位姿信息。
在本申请的一些实施例中,所述处理单元14,还配置为从所述至少一个相对位姿信息对应的校验结果中,统计出所述每个相对位姿信息对应的校验结果中的所述正确子当前图像特征点与所述正确子参考特征点的数量总和,从而得到所述每个相对位姿信息对应的正确结果数量;从所述每个相对位姿信息对应的正确结果数量中,确定出所述正确结果数量最多的所述目标位姿信息。
在本申请的一些实施例中,所述提取单元11,还配置为从所述当前帧图像中提取预设角点数量的表征角边界的第一像素点;对所述第一像素点的预设范围内的第二像素点提取特征,得到第一图像特征点;对所述第一像素点的所述预设范围外的第三像素点提取特征,得到第二图像特征点;将所述第一图像特征点和所述第二图像特征点作为所述当前图像特征点;将所述当前图像特征点与预设地图特征库进行匹配,其中,所述预设地图特征库中存储有至少一帧预设地图图像帧的预设地图特征点;将所述预设地图特征库中与所述当前图像特征点最匹配的所述预设地图图像帧作为所述参考帧,将所述参考帧中包含的所述预设地图特征点,作为所述参考特征点。
在本申请的一些实施例中,所述匹配单元12,还配置为将所述当前图像特征点中的每个当前图像子特征点与所述参考特征点逐一进行组合,得到至少一个子特征点对;针对所述至少一个子特征点对,计算所述每个当前图像子特征点到所述参考特征点的汉明距离,得到至少一个汉明距离,所述至少一个汉明距离与所述至少一个子特征点对一一对应;从所述至少一个汉明距离中,确定出最小汉明距离和次小汉明距离;当所述最小汉明距离小于所述预设显著性阈值乘以所述次小汉明距离时,对比所述最小汉明距离与所述预设相似性阈值;当所述最小汉明距离小于等于所述预设相似性阈值时,从所述至少一个子特征点对中,确定出与所述最小汉明距离对应的所述每个当前图像子特征点对应的子特征点对,从而得到与所述当前图像特征点对应的所述第一匹配点对。
本申请实施例提供一种图像处理装置,对应于一种图像处理方法;图4为本申请实施例提供的一种图像处理装置的结构示意图二,如图4所示,该图像处理装置包括:处理器20、存储器21及通信总线22。在具体的实施例的过程中,上述处理器20可以为特定用途集成电路(ASIC,Application Specific Integrated Circuit)、数字信号处理器(DSP,Digital Signal Processor)、数字信号处理设备(DSPD,Digital Signal Processing Device)、可编程逻辑设备(PLD,Programmable Logic Device)、现场可编程门阵列(FPGA,Field Programmable Gate Array)、CPU、控制器、微控制器、微处理器中的至少一种。可以理解地,对于不同的设备,用于实现上述处理器功能的电子器件还可以为其它,本申请实施例不作具体限定。
在本申请的实施例中,所述存储器21,用于存储可执行指令;所述处理器20用于执行存储器21中存储的可执行指令,以实现上述实施例中获取单元10、提取单元11、匹配单元12、计算单元13和处理单元14的操作步骤。
在本申请的实施例中,所述通信总线22,用于实现处理器20和存储器21之间的连接通信。
本申请实施例提供的图像处理装置，会获取待处理的当前帧图像；接下来图像处理装置从当前帧图像中提取当前帧图像的当前图像特征点，用于根据当前图像特征点，找到与当前帧图像特征点最匹配的参考帧，并获取参考帧对应的参考特征点；然后图像处理装置基于第一预设匹配阈值将当前图像特征点与参考特征点进行第一次匹配，得到第一匹配点对；图像处理装置将第一预设匹配阈值放大，得到第二预设匹配阈值，以及基于第一匹配点对，得到约束条件；最后，在约束条件下，图像处理装置基于第二预设匹配阈值将当前图像特征点与参考特征点进行匹配，得到第二匹配点对。通过本申请实施例提供的方法，图像处理装置可以通过先计算少量可靠匹配，再通过这些匹配估计数学模型，从而拓展出更多的匹配，同时保证了匹配点对的准确率，解决了匹配点对数量过少，存在较多错误匹配点对的问题，提高了匹配点对的数量和准确率，最终提高了图像处理的准确性。进一步的，图像处理装置可以在计算约束条件时，同时计算出至少一个子约束条件，并在至少一个子约束条件中根据正确匹配点对的数量确定出约束条件。当图像处理功能为重定位功能时，图像处理装置可以自动选择当前场景最适合的基础矩阵或单应矩阵进行相对位姿的恢复，从而提高重定位的准确性。
本申请实施例提供一种计算机可读存储介质，上述计算机可读存储介质存储有一个或者多个程序，上述一个或者多个程序可被一个或者多个处理器执行，应用于图像处理装置中，该程序被处理器执行时实现如本申请实施例提供的图像处理方法。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,网络功能部署系统,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
本申请实施例中,图像处理装置可以在第一次匹配时先计算少量可靠匹配,再通过少量可靠匹配估计数学模型,从而可以在第二次匹配中使用放大的汉明距离阈值和显著性阈值,根据估计出的数学模型,筛选出更多的匹配点对,同时保证了匹配点对的准确率,从而解决了匹配点对数量过少,存在较多错误匹配点对的问题,提高了匹配点对的数量和准确率,最终提高了图像处理的准确性。

Claims (15)

  1. 一种图像处理方法,包括:
    获取待处理的当前帧图像;
    从所述当前帧图像中提取所述当前帧图像的当前图像特征点,并基于所述当前图像特征点获取参考帧对应的参考特征点;
    基于第一预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第一匹配点对;
    将所述第一预设匹配阈值放大,得到第二预设匹配阈值,以及基于所述第一匹配点对,得到约束条件;
    在所述约束条件下,基于所述第二预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第二匹配点对;
    基于所述第二匹配点对,进行图像处理功能。
  2. 根据权利要求1所述的方法,其中,所述在所述约束条件下,基于所述第二预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第二匹配点对,包括:
    基于所述第二预设匹配阈值将所述图像特征点与所述参考特征点进行匹配,得到中间匹配点对;
    从所述中间匹配点对中,筛选出满足所述约束条件的所述第二匹配点对。
  3. 根据权利要求2所述的方法,其中,所述从所述中间匹配点对中,筛选出满足所述约束条件的所述第二匹配点对,包括:
    基于所述中间匹配点对中的每个第一特征点的三维坐标和所述约束条件,确定出第一映射值;
    基于所述第一映射值,确定出第一匹配误差;
    基于所述第一匹配误差,确定出小于等于预设误差阈值的所述第二匹配点对。
  4. 根据权利要求2所述的方法,其中,所述从所述中间匹配点对中,筛选出满足所述约束条件的所述第二匹配点对,包括:
    从所述中间匹配点对中,筛选出除所述第一匹配点对之外的子匹配点对;
    基于所述子匹配点对中的每个第二特征点的三维坐标和所述约束条件,确定出第二映射值;
    基于所述第二映射值,确定出第二匹配误差;
    基于所述第二匹配误差,确定出小于等于预设误差阈值的最终子匹配点对;
    将所述第一匹配点对和所述最终子匹配点对作为所述第二匹配点对。
  5. 根据权利要求1所述的方法,其中,所述基于所述第一匹配点对,得到约束条件,包括:
    获取至少一个预设初始约束条件;
    使用所述第一匹配点对,对所述至少一个预设初始约束条件进行计算,得到至少一个子约束条件;
    基于所述第一匹配点对中的每个第三特征点的三维坐标和所述至少一个子约束条件,确定出至少一个第三映射值;
    基于所述至少一个第三映射值,确定出至少一个第三匹配误差;
    基于所述至少一个第三匹配误差,确定出所述至少一个子约束条件对应的小于等于所述预设误差阈值的所述第一匹配点对,作为正确匹配点对;
    从所述至少一个预设初始约束条件中,分别选取包含所述正确匹配点对最多的一个子约束条件,作为每个所述至少一个预设初始约束条件分别对应的至少一个中间子约束条件;
    从所述至少一个中间子约束条件中,选择包含所述正确匹配点对最多的中间子约束条件,作为所述约束条件。
  6. 根据权利要求5所述的方法,其中,所述使用所述第一匹配点对,对所述至少一个预设初始约束条件进行计算,得到至少一个子约束条件,包括:
    针对每个预设初始约束条件,从所述第一匹配点对中,选取预设个数的所述第一匹配点对进行组合,重复预设次数,得到与所述预设次数的数量相等的第一匹配点对组合,其中,所述第一匹配点对组合中的每个组合包含所述预设个数个所述第一匹配点对;
    使用所述第一匹配点对组合中的每个组合,对所述每个预设初始约束条件进行计算,得到所述每个预设初始约束条件对应的所述预设次数个所述子约束条件,作为所述每个预设初始约束条件的最终子约束条件;
    将所述至少一个预设初始约束条件中,所述每个预设初始约束条件的最终子约束条件确定为所述至少一个子约束条件。
  7. 根据权利要求1所述的方法,其中,当所述图像处理功能为重定位功能时,所述基于所述第二匹配点对,进行图像处理功能,包括:
    根据所述第二匹配点对,恢复出至少一个相对位姿信息;
    针对每个相对位姿信息,从所述第二匹配点对中,计算出所述当前图像特征点在第一拍摄坐标系的第一图像位置坐标,以及在第二拍摄坐标系的第二图像位置坐标;所述第一拍摄坐标系为所述当前图像特征点对应的拍摄坐标系,所述第二拍摄坐标系为所述参考特征点对应的拍摄坐标系;
    针对所述每个相对位姿信息,从所述第二匹配点对中,计算出所述参考特征点在所述第一拍摄坐标系的第一参考位置坐标;以及在所述第二拍摄坐标系的第二参考位置坐标;
    基于所述第一图像位置坐标、所述第二图像位置坐标、所述第一参考位置坐标和所述第二参考位置坐标,分别对所述每个相对位姿信息进行校验,得到所述至少一个相对位姿信息对应的校验结果;
    基于所述校验结果,从所述至少一个相对位姿信息中确定出目标位姿信息;
    根据所述目标位姿信息实现所述重定位功能。
  8. 根据权利要求7所述的方法,其中,所述根据所述第二匹配点对,恢复出至少一个相对位姿信息,包括:
    根据预设约束条件与相机位姿之间的对应关系,以及所述第二匹配点对,计算出所述第一拍摄坐标系与所述第二拍摄坐标系下的至少一个平移量和至少一个旋转量;
    将所述至少一个平移量和至少一个旋转量进行对应组合,得到所述至少一个相对位姿信息。
  9. 根据权利要求7所述的方法,其中,所述基于所述第一图像位置坐标、所述第二图像位置坐标、所述第一参考位置坐标和所述第二参考位置坐标,分别对所述每个相对位姿信息进行校验,得到至少一个相对位姿信息对应的校验结果,包括:
    根据所述第一图像位置坐标和所述第二图像位置坐标,得到所述当前图像特征点中的每个当前图像子特征点对应的第一图像位置子坐标和第二图像位置子坐标,以及所述参考特征点中的每个参考子特征点对应的第一参考位置子坐标和第二参考位置子坐标;
    针对所述每个相对位姿信息,当所述每个当前图像子特征点的所述第一图像位置子坐标满足第一预设条件,并且所述每个当前图像子特征点的所述第二图像位置子坐标也满足所述第一预设条件时,将所述每个当前图像子特征点作为正确当前图像子特征点,所述第一预设条件为三维坐标的深度值为预设值;
    从所述当前图像特征点中,确定出每个当前图像子特征点中的所述正确当前图像子特征点,从而得到所述当前图像特征点的正确当前图像特征点;
    针对所述每个相对位姿信息,当所述每个参考子特征点的所述第一参考位置子坐标满足所述第一预设条件,并且所述每个参考子特征点的所述第二参考位置子坐标也满足所述第一预设条件时,将所述每个参考子特征点作为正确参考子特征点;
    从所述参考特征点中,确定出每个参考子特征点中的所述正确参考子特征点,从而得到所述参考特征点的正确参考特征点;
    将所述正确当前图像特征点与所述正确参考特征点,作为所述每个相对位姿信息对应的校验结果,从而得到所述至少一个相对位姿信息对应的校验结果;
    基于所述校验结果,从所述至少一个相对位姿信息中确定出目标位姿信息。
  10. 根据权利要求9所述的方法,其中,所述基于所述校验结果,从所述至少一个相对位姿信息中确定出目标位姿信息,包括:
    从所述至少一个相对位姿信息对应的校验结果中,统计出所述每个相对位姿信息对应的校验结果中的所述正确子当前图像特征点与所述正确子参考特征点的数量总和,从而得到所述每个相对位姿信息对应的正确结果数量;
    从所述每个相对位姿信息对应的正确结果数量中,确定出所述正确结果数量最多的所述目标位姿信息。
  11. 根据权利要求1所述的方法,其中,所述从所述当前帧图像中提取所述当前帧图像的当前图像特征点,并基于所述当前图像特征点获取参考帧对应的参考特征点,包括:
    从所述当前帧图像中提取预设角点数量的表征角边界的第一像素点;
    对所述第一像素点的预设范围内的第二像素点提取特征,得到第一图像特征点;
    对所述第一像素点的所述预设范围外的第三像素点提取特征,得到第二图像特征点;
    将所述第一图像特征点和所述第二图像特征点作为所述当前图像特征点;
    将所述当前图像特征点与预设地图特征库进行匹配,其中,所述预设地图特征库中存储有至少一帧预设地图图像帧的预设地图特征点;
    将所述预设地图特征库中与所述当前图像特征点最匹配的所述预设地图图像帧作为所述参考帧,将所述参考帧中包含的所述预设地图特征点,作为所述参考特征点。
  12. 根据权利要求1所述的方法,其中,所述第一预设匹配阈值包括:预设相似性阈值和预设显著性阈值;所述基于第一预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第一匹配点对,包括:
    将所述当前图像特征点中的每个当前图像子特征点与所述参考特征点逐一进行组合,得到至少一个子 特征点对;
    针对所述至少一个子特征点对,计算所述每个当前图像子特征点到所述参考特征点的汉明距离,得到至少一个汉明距离,所述至少一个汉明距离与所述至少一个子特征点对一一对应;
    从所述至少一个汉明距离中,确定出最小汉明距离和次小汉明距离;
    当所述最小汉明距离小于所述预设显著性阈值乘以所述次小汉明距离时,对比所述最小汉明距离与所述预设相似性阈值;
    当所述最小汉明距离小于等于所述预设相似性阈值时,从所述至少一个子特征点对中,确定出与所述最小汉明距离对应的所述每个当前图像子特征点对应的子特征点对,从而得到与所述当前图像特征点对应的所述第一匹配点对。
  13. 一种图像处理装置,包括:
    获取单元,配置为获取待处理的当前帧图像;
    提取单元,配置为从所述当前帧图像中提取所述当前帧图像的当前图像特征点,并基于所述当前图像特征点获取参考帧对应的参考特征点;
    匹配单元,配置为基于第一预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第一匹配点对;
    计算单元,配置为将所述第一预设匹配阈值放大,得到第二预设匹配阈值,以及基于所述第一匹配点对,得到约束条件;
    所述匹配单元,还配置为在所述约束条件下,基于所述第二预设匹配阈值将所述当前图像特征点与所述参考特征点进行匹配,得到第二匹配点对;
    处理单元,配置为基于所述第二匹配点对,进行图像处理功能。
  14. 一种图像处理装置,包括:
    存储器,用于存储可执行指令;
    处理器,用于执行所述存储器中存储的可执行指令时,实现权利要求1至12任一项所述的方法。
  15. 一种计算机可读存储介质,其上存储有可执行指令,应用于图像处理装置,该可执行指令被处理器执行时实现权利要求1至12任一项所述的方法。
PCT/CN2020/096549 2019-06-27 2020-06-17 一种图像处理方法及装置、计算机可读存储介质 WO2020259365A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910570613.1A CN110335315B (zh) 2019-06-27 2019-06-27 一种图像处理方法及装置、计算机可读存储介质
CN201910570613.1 2019-06-27

Publications (1)

Publication Number Publication Date
WO2020259365A1 true WO2020259365A1 (zh) 2020-12-30

Also Published As

Publication number Publication date
CN110335315A (zh) 2019-10-15
CN110335315B (zh) 2021-11-02
