CN116152521A - Image processing method and device - Google Patents

Image processing method and device Download PDF

Info

Publication number
CN116152521A
CN116152521A (application CN202111362641.8A)
Authority
CN
China
Prior art keywords
image
matched
feature point
feature
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111362641.8A
Other languages
Chinese (zh)
Inventor
杨玉平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pateo Network Technology Service Co Ltd
Original Assignee
Shanghai Pateo Network Technology Service Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Network Technology Service Co Ltd filed Critical Shanghai Pateo Network Technology Service Co Ltd
Priority to CN202111362641.8A
Publication of CN116152521A
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides an image processing method and device and a computer-readable storage medium. The method comprises the following steps: performing wide feature point extraction on an image to be processed according to an initial threshold to obtain a plurality of first feature points; determining an optimized threshold according to the number of first feature points, the number of target feature points required for processing the image to be processed, and the initial threshold; performing feature point detection on each first feature point according to the optimized threshold to screen out a plurality of second feature points; and processing the image to be processed according to its second feature points. By executing these steps, the image processing method can optimize the feature extraction threshold and the number of actually extracted feature points according to the number of target feature points required by the image processing, thereby improving the accuracy and real-time performance of image processing.

Description

Image processing method and device
Technical Field
The present invention relates to an image processing technique, and more particularly, to an image processing method, an image processing apparatus, and a computer-readable storage medium.
Background
The ORB (Oriented FAST and Rotated BRIEF) feature extraction algorithm is a widely used image processing technique that can rapidly extract key feature points from an image and identify target objects in the image according to those key feature points, thereby serving various image processing needs such as image recognition, image matching, object positioning/tracking, and scene reconstruction.
Existing ORB feature extraction algorithms rely on a preset given threshold θ to extract a corresponding number N of key feature points. However, when performing ORB feature extraction on different images in practice, the relationship between the given threshold θ and the number of actually extracted feature points N often varies significantly with factors such as the sharpness, resolution, and content richness of each image, so that the number of actually extracted feature points N differs significantly from the number of target feature points N_0 to be extracted. On the one hand, an excessive number of feature points N increases the data processing load of the feature extraction algorithm and the subsequent image processing flow, increasing the time consumed by the image processing flow and degrading the real-time performance of the image processing result. On the other hand, too small a number of feature points N easily causes mismatching between feature points, which not only fails to meet the accuracy requirement of the subsequent image processing flow but also forces the key feature points to be re-extracted, again degrading the real-time performance of the image processing result.
In order to overcome the above drawbacks of the prior art, there is a need in the art for an image processing technique that improves the accuracy and real-time performance of image processing by optimizing the feature extraction threshold θ and the number of actually extracted feature points N.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In order to overcome the above-described drawbacks of the prior art, the present invention provides an image processing method, an image processing apparatus, and a computer-readable storage medium.
In particular, the first aspect of the invention provides the above image processing method, which includes the steps of: performing wide feature point extraction on the image to be processed according to an initial threshold to obtain a plurality of first feature points; determining an optimized threshold according to the number of first feature points, the number of target feature points required for processing the image to be processed, and the initial threshold; performing feature point detection on each first feature point according to the optimized threshold to screen out a plurality of second feature points; and processing the image to be processed according to its second feature points. By performing these steps, the image processing method can optimize the feature extraction threshold θ and the number of actually extracted feature points N according to the number of target feature points N_0 required for image processing, thereby improving the accuracy and real-time performance of image processing.
Further, the above image processing apparatus provided in the second aspect of the present invention includes a memory and a processor. The processor is connected to the memory and configured to implement the above image processing method provided by the first aspect of the present invention. By implementing the image processing method, the image processing apparatus can optimize the feature extraction threshold θ and the number of actually extracted feature points N according to the number of target feature points N_0 required for image processing, thereby improving the accuracy and real-time performance of image processing.
Further, the third aspect of the present invention provides the above computer-readable storage medium, on which computer instructions are stored. The computer instructions, when executed by a processor, implement the above image processing method provided in the first aspect of the present invention. By implementing the image processing method, the computer-readable storage medium enables the feature extraction threshold θ and the number of actually extracted feature points N to be optimized according to the number of target feature points N_0 required by the image processing, thereby improving the accuracy and real-time performance of image processing.
Drawings
The above features and advantages of the present invention will be better understood after reading the detailed description of embodiments of the present disclosure in conjunction with the following drawings. In the drawings, the components are not necessarily to scale and components having similar related features or characteristics may have the same or similar reference numerals.
Fig. 1 illustrates a schematic configuration of an image processing apparatus provided according to some embodiments of the present invention.
Fig. 2 illustrates a flow diagram of an image processing method provided in accordance with some embodiments of the present invention.
Fig. 3 illustrates a flow diagram of an image matching method provided in accordance with some embodiments of the invention.
Detailed Description
Further advantages and effects of the present invention will become apparent to those skilled in the art from the disclosure of the present specification, by describing the embodiments of the present invention with specific examples. While the description of the invention will be presented in connection with a preferred embodiment, it is not intended to limit the inventive features to that embodiment. Rather, the purpose of the invention described in connection with the embodiments is to cover other alternatives or modifications, which may be extended by the claims based on the invention. The following description contains many specific details for the purpose of providing a thorough understanding of the present invention. The invention may be practiced without these specific details. Furthermore, some specific details are omitted from the description in order to avoid obscuring the invention.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or a communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In addition, the terms "upper", "lower", "left", "right", "top", "bottom", "horizontal", "vertical" as used in the following description should be understood as referring to the orientation depicted in this paragraph and the associated drawings. This relative terminology is for convenience only and is not intended to be limiting of the invention as it is described in terms of the apparatus being manufactured or operated in a particular orientation.
It will be understood that, although the terms "first," "second," "third," etc. may be used herein to describe various elements, regions, layers and/or sections, these elements, regions, layers and/or sections should not be limited by these terms and these terms are merely used to distinguish between different elements, regions, layers and/or sections. Accordingly, a first component, region, layer, and/or section discussed below could be termed a second component, region, layer, and/or section without departing from some embodiments of the present invention.
As described above, the existing ORB feature extraction algorithm relies on a preset given threshold θ to extract a corresponding number N of key feature points. However, when performing ORB feature extraction on different images in practice, the relationship between the given threshold θ and the number of actually extracted feature points N often varies significantly with factors such as the sharpness, resolution, and content richness of each image, so that the number of actually extracted feature points N differs significantly from the number of target feature points N_0 to be extracted. On the one hand, an excessive number of feature points N increases the data processing load of the feature extraction algorithm and the subsequent image processing flow, increasing the time consumed by the image processing flow and degrading the real-time performance of the image processing result. On the other hand, too small a number of feature points N easily causes mismatching between feature points, which not only fails to meet the accuracy requirement of the subsequent image processing flow but also forces the key feature points to be re-extracted, again degrading the real-time performance of the image processing result.
In order to overcome the above drawbacks of the prior art, the present invention provides an image processing method, an image processing apparatus, and a computer-readable storage medium capable of optimizing the feature extraction threshold θ and the number of actually extracted feature points N according to the number of target feature points N_0 required by the image processing, thereby improving the accuracy and real-time performance of image processing.
In some non-limiting embodiments, the above-mentioned image processing method provided by the first aspect of the present invention may be implemented by the above-mentioned image processing apparatus provided by the second aspect of the present invention. The image processing device can be configured in various electronic devices such as vehicles, user terminals, cameras, radars, servers and the like, which are related to various image processing requirements such as image recognition, face recognition, object positioning, object tracking, 3D modeling and the like, in the form of software programs and/or hardware devices.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an image processing apparatus according to some embodiments of the present invention.
As shown in fig. 1, in some embodiments, an image processing device 10 is configured with a memory 11 and a processor 12. The memory 11 includes, but is not limited to, the above-described computer-readable storage medium provided by the third aspect of the present invention, on which computer instructions are stored. The processor 12 is connected to the memory 11 and is configured to execute computer instructions stored on the memory 11 to implement the above-described image processing method provided in the first aspect of the present invention.
The working principle of the image processing apparatus 10 will be described below in connection with some embodiments of the image processing method. It will be appreciated by those skilled in the art that these examples of image processing methods are merely some non-limiting embodiments provided by the present invention, and are intended to clearly illustrate the main concepts of the present invention and to provide some embodiments that are convenient for public implementation, and are not intended to limit the overall functionality or overall operation of the image processing apparatus 10. Similarly, the image processing apparatus 10 is also a non-limiting embodiment provided by the present invention, and does not limit the implementation subject of each step in these image processing methods.
Referring to fig. 2, fig. 2 is a flow chart illustrating an image processing method according to some embodiments of the invention.
As shown in fig. 2, in some embodiments of the invention, in the course of processing an image F_1 to be processed, the image processing apparatus 10 may first take the FAST (Features from Accelerated Segment Test) minimum threshold θ_1 of the ORB feature extraction algorithm as the initial threshold for feature extraction, and perform wide feature point extraction on the image F_1 to be processed to obtain N_1 first feature points.
Here, the FAST minimum threshold θ_1 is the minimum universal threshold provided by the ORB feature extraction algorithm. It is smaller than the ideal threshold θ_0 corresponding to the target number of feature points N_0, and can therefore extract more feature points than the target number (i.e., N_1 > N_0). Determining the FAST minimum threshold θ_1 by consulting reference data, and extracting the first feature points from the image F_1 to be processed according to the FAST minimum threshold θ_1, both belong to the prior art in the field and are not described herein.
After determining the FAST minimum threshold θ_1 and the number N_1 of first feature points extracted from the image F_1 to be processed, the image processing apparatus 10 may calculate an optimized threshold θ for feature extraction according to the extracted number of first feature points N_1, the number of target feature points N_0 required for processing the image F_1 to be processed, and the initial threshold (i.e., the FAST minimum threshold θ_1), namely:

θ = θ_1 · N_1 / N_0
By performing the above linear optimization on the feature extraction threshold θ, the optimized threshold θ can be made closer to the ideal threshold θ_0 corresponding to the target number of feature points N_0.
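The linear optimization described above can be sketched in a few lines of Python (a hypothetical illustration; the function name and the sample numbers below are not from the patent):

```python
def optimize_threshold(theta_1, n_1, n_0):
    """Linearly rescale the initial FAST threshold theta_1 by the ratio of
    the number of widely extracted first feature points n_1 to the target
    number of feature points n_0, i.e. theta = theta_1 * n_1 / n_0."""
    if n_0 <= 0:
        raise ValueError("target feature point count must be positive")
    return theta_1 * n_1 / n_0
```

Since the wide extraction uses the minimum threshold, N_1 > N_0 normally holds, so the optimized threshold comes out larger than θ_1 and closer to the ideal threshold θ_0.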
Further, in some embodiments, after performing the above linear optimization and acquiring the optimized threshold θ, the image processing apparatus 10 may preferably determine, according to the FAST default threshold θ_2 of the ORB feature extraction algorithm and a preset scale factor F, whether the optimized threshold θ meets a preset feature extraction criterion, namely:

θ > θ_2 · F

Here, the FAST default threshold θ_2 is the default threshold provided by the ORB feature extraction algorithm, and the scale factor F indicates how distinctive the feature points to be extracted need to be.
In some embodiments, a technician may freely set the scale factor F according to the application scenario, precision requirement, and accuracy requirement of the image processing, so as to preliminarily check and correct an unreasonable optimized threshold θ. Specifically, if the optimized threshold θ is less than or equal to the standard threshold indicated by the feature extraction criterion (i.e., θ_2 · F), the image processing apparatus 10 may determine that the optimized threshold θ cannot meet the accuracy requirement of subsequent image processing, and therefore perform the subsequent feature point detection flow with the FAST default threshold θ_2 instead of the optimized threshold θ. Conversely, if the optimized threshold θ is greater than the standard threshold indicated by the feature extraction criterion (i.e., θ_2 · F), the image processing apparatus 10 may determine that the optimized threshold θ, being greater than the FAST default threshold θ_2, can meet the accuracy requirement of subsequent image processing, and therefore continue the subsequent feature point detection flow with the optimized threshold θ.
By preliminarily checking and correcting the optimized threshold θ with the FAST default threshold θ_2 and the scale factor F, it can be ensured that the optimized threshold θ used in the feature point detection flow is sufficient to meet the precision and accuracy requirements of the image processing, thereby guaranteeing the accuracy of the image processing result and avoiding the need to re-detect feature points because of an erroneous processing result.
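The preliminary check against the feature extraction criterion can be sketched as follows (a hedged illustration; the names are invented for clarity):

```python
def select_detection_threshold(theta_opt, theta_default, scale_factor):
    """Return the threshold to use for feature point detection: keep the
    optimized threshold only when it exceeds the standard threshold
    theta_default * scale_factor; otherwise fall back to the FAST default."""
    if theta_opt > theta_default * scale_factor:
        return theta_opt
    return theta_default
```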
As shown in fig. 2, after acquiring an optimized threshold θ that satisfies the precision and accuracy requirements of the image processing, the image processing apparatus 10 may perform feature point detection on each first feature point obtained by the wide extraction based on the optimized threshold θ, so as to screen out N_2 second feature points.
Specifically, the image processing apparatus 10 may first perform unequal-area grid division on the image F_1 to be processed according to the positions of the first feature points in the image F_1 and a preset number range (for example, 5 to 20), so that the number n_1 of first feature points in each grid falls within the preset number range (i.e., 5 to 20). Thereafter, the image processing apparatus 10 may determine whether the number of divided grids is greater than a preset number threshold (e.g., 500). If the number of divided grids is less than or equal to the preset number threshold, the image processing apparatus 10 may sequentially perform serial feature point detection on the first feature points in each grid according to the optimized threshold θ, screening n_2i second feature points from each grid, so as to screen N_2 second feature points from the whole image F_1 to be processed. Conversely, if the number of divided grids is greater than the preset number threshold, the image processing apparatus 10 may perform parallel feature point detection on the first feature points in each grid according to the optimized threshold θ, screening n_2i second feature points from each grid, so as to screen N_2 second feature points from the whole image F_1 to be processed.
Compared with directly extracting features from the image to be processed with the optimized threshold θ, the scheme of performing feature point detection on each first feature point based on the optimized threshold θ involves a smaller amount of data processing, so it can improve the efficiency of the image processing method and the real-time performance of the image processing result. Further, by adopting the preferred scheme of dividing grids and detecting feature points in parallel, the invention can combine the advantage of the small data processing load of the feature point detection scheme with full use of the data processing capability of the image processing apparatus 10, improving the efficiency of the image processing method and the real-time performance of the image processing result.
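A rough sketch of the grid-based detection step follows (a simplified, hypothetical stand-in: a uniform grid sized from the point count replaces the unequal-area division, and the per-cell loop runs serially; with more cells than the preset threshold, the loop body could be dispatched to a worker pool):

```python
import math
from collections import defaultdict

def grid_filter(points, theta, target_per_cell=10):
    """points: iterable of (x, y, score) first feature points.
    Partition the image plane into a grid holding roughly target_per_cell
    points per cell, then keep only the points whose FAST score reaches
    the optimized threshold theta (the 'second feature points')."""
    pts = list(points)
    if not pts:
        return []
    side = max(1, math.isqrt(max(1, len(pts) // target_per_cell)))
    x0 = min(p[0] for p in pts); y0 = min(p[1] for p in pts)
    w = (max(p[0] for p in pts) - x0) or 1
    h = (max(p[1] for p in pts) - y0) or 1
    cells = defaultdict(list)
    for x, y, s in pts:
        cx = min(side - 1, int((x - x0) * side / w))
        cy = min(side - 1, int((y - y0) * side / h))
        cells[(cx, cy)].append((x, y, s))
    second = []
    for cell_pts in cells.values():  # serial per-cell detection
        second.extend(p for p in cell_pts if p[2] >= theta)
    return second
```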
Further, considering the nonlinear relationship between a given threshold and the number of actually extracted feature points, it can be expected that the optimized threshold θ obtained by linear optimization is still smaller than the ideal threshold θ_0 (i.e., θ < θ_0), and that the number N_2 of second feature points obtained via feature point detection with the optimized threshold θ is still greater than the target number of feature points N_0. Thus, in some preferred embodiments, the image processing apparatus 10 may further perform homogenizing screening on the N_2 second feature points according to their positions in the image F_1 to be processed and the target number of feature points N_0, so as to obtain third feature points whose number meets the target N_0.
For example, the image processing apparatus 10 may first create a quadtree node covering the image F_1 to be processed, and assign each second feature point in the image F_1 to its corresponding quadtree node. Thereafter, the image processing apparatus 10 may determine the number of feature points in each quadtree node one by one. If the number of feature points in a node equals 1, the image processing apparatus 10 retains this node and determines the feature point in it as a third feature point. Conversely, if the number of feature points in a node is greater than 1, the image processing apparatus 10 subdivides this node into four child nodes and further allocates the feature points of this node among these four child nodes. Proceeding in this way, the image processing apparatus 10 can allocate each second feature point, according to its position in the image F_1 to be processed, to the successive levels of child nodes of the multi-level quadtree until the number of nodes reaches the target number of feature points N_0. Then, for each child node whose number of feature points is still greater than 1, the image processing apparatus 10 may screen out the feature points with smaller response values and retain only the feature point with the largest response value in that node as a third feature point.
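The quadtree homogenization can be sketched as follows (a simplified, hypothetical variant of the scheme above: instead of splitting strictly level by level, it always splits the fullest node until enough nodes exist, then keeps the strongest response per node):

```python
from collections import defaultdict

def quadtree_select(points, n_target, bounds):
    """points: list of (x, y, response) second feature points.
    bounds: (xmin, ymin, xmax, ymax) of the image.
    Split the node holding the most points into four children until at
    least n_target non-empty nodes exist, then retain only the feature
    point with the largest response value in each node."""
    xmin, ymin, xmax, ymax = bounds
    nodes = [(xmin, ymin, xmax, ymax, list(points))]
    while len(nodes) < n_target:
        idx = max(range(len(nodes)), key=lambda i: len(nodes[i][4]))
        x0, y0, x1, y1, pts = nodes[idx]
        if len(pts) <= 1:
            break  # every node holds a single point; nothing left to split
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        children = defaultdict(list)
        for p in pts:
            children[(p[0] >= mx, p[1] >= my)].append(p)
        if len(children) == 1:
            break  # degenerate cluster; stop rather than loop forever
        new = []
        for (right, top), cpts in children.items():
            new.append((mx if right else x0, my if top else y0,
                        x1 if right else mx, y1 if top else my, cpts))
        nodes[idx:idx + 1] = new
    return [max(n[4], key=lambda p: p[2]) for n in nodes]
```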
The image processing apparatus 10 can then perform the various subsequent image processing flows, such as image matching and image recognition, on the image F_1 to be processed according to its third feature points.
By executing the homogenizing screening operation described above, the invention can further screen out redundant feature points and improve the real-time performance of the image processing result while still meeting the precision and accuracy requirements of the image processing. Further, with the number of feature points fixed at N_0, the homogenizing screening operation effectively screens out feature points clustered in position while retaining feature points dispersed in position, so that the image processing apparatus 10 can identify objects in the corresponding regions according to the third feature points at each position of the image F_1 to be processed, improving the accuracy and success rate of image processing results such as image recognition results and image matching results.
The image processing procedure of the present invention will be described below in connection with some embodiments of an image matching procedure. It will be appreciated by those skilled in the art that these examples of image matching procedures are merely some non-limiting embodiments provided by the present invention, intended to clearly illustrate the general concepts of the invention and to provide some embodiments that are convenient for public implementation, and are not intended to limit the scope of the present invention.
Referring further to fig. 3, fig. 3 is a flow chart illustrating an image matching method according to some embodiments of the invention.
In a specific application of the image matching flow shown in fig. 3, the image processing apparatus 10 may first layer the image F_1 to be processed by adjusting a filter function (e.g., a Gaussian filter function) layer by layer, obtaining a plurality of first image layers F_1i whose resolution decreases from bottom to top, thereby constructing an image pyramid of the image F_1 to be processed. Then, the image processing apparatus 10 may determine the number of target feature points N_0 to be extracted according to the specific matching precision and/or matching accuracy requirements of the image matching, and perform the steps of wide feature point extraction, threshold optimization, feature point detection, and homogenizing screening described above, thereby extracting N_0 third feature points from each first image layer F_1i of the image pyramid F_1.
In addition, the image processing apparatus 10 can layer the image F_2 to be matched in the same manner, obtaining a plurality of second image layers F_2i whose resolution decreases layer by layer from bottom to top by adjusting the filter function (e.g., a Gaussian filter function) layer by layer, thereby constructing an image pyramid of the image F_2 to be matched. Thereafter, given that two matching images generally have similar sharpness, resolution, and content richness, the image processing apparatus 10 may directly reuse the number of target feature points N_0 of the image F_1 to be processed and perform the steps of wide feature point extraction, threshold optimization, feature point detection, and homogenizing screening, thereby extracting N_0 third feature points from each second image layer F_2i of the image pyramid F_2. The image processing apparatus 10 may then take the N_0 third feature points extracted from each second image layer F_2i as the feature points to be matched of the corresponding second image layer F_2i.
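The bottom-up layering can be sketched with plain 2×2 averaging standing in for the Gaussian filtering (a hypothetical simplification; real implementations smooth before subsampling):

```python
def downsample(img):
    """img: 2-D list of gray values with even dimensions.
    Halve the resolution by averaging each 2x2 block."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
              img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

def build_pyramid(img, levels):
    """Level 0 is the full-resolution image; each higher level halves the
    resolution, mirroring the bottom-to-top layering described above."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyramid.append(img)
    return pyramid
```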
After determining the N_0 third feature points of each first image layer F_1i of the image pyramid F_1, and the at least one feature point FPa_ij to be matched of each second image layer F_2i of the image F_2 to be matched, the image processing apparatus 10 may calculate the Euclidean distance from each third feature point of each first image layer F_1i to each feature point FPa_ij to be matched of the corresponding second image layer F_2i, so as to determine the nearest neighbor feature point FPb_ij and the next nearest neighbor feature point FPc_ij of each feature point FPa_ij to be matched. The image processing apparatus 10 may then perform feature matching between the image F_1 to be processed and the image F_2 to be matched according to the Euclidean distances from each nearest neighbor feature point FPb_ij and each next nearest neighbor feature point FPc_ij to the corresponding feature point FPa_ij to be matched.
Specifically, in the course of judging whether the image F_1 to be processed and the image F_2 to be matched match, the image processing apparatus 10 may first calculate, from the Euclidean distance of the nearest neighbor feature point FPb_ij to the corresponding feature point FPa_ij to be matched and the Euclidean distance of the next nearest neighbor feature point FPc_ij to the corresponding feature point FPa_ij to be matched, the Euclidean distance ratio of the nearest neighbor feature point FPb_ij and the next nearest neighbor feature point FPc_ij to the corresponding feature point FPa_ij to be matched, namely:

R_ij = d(FPb_ij, FPa_ij) / d(FPc_ij, FPa_ij)

where d(·, ·) denotes the Euclidean distance between two feature points.
if the Euclidean distance ratio R is obtained by calculation ij Less than a preset ratio threshold R 0 The image processing apparatus 10 may determine that the image F to be processed is located 1 Is a first image layer F of 1i Is the nearest neighbor feature point FPb of (1) ij Is positioned on the image F to be matched 2 Is a second image layer F of (1) 2i Is matched with the feature point FPa to be matched ij Is a good match to the matching point of the matching point. On the contrary, if the Euclidean distance ratio R is obtained by calculation ij Greater than or equal to a preset ratio threshold R 0 The image processing apparatus 10 can determine the image F to be processed 1 Is a first image layer F of 1i There is no image F to be matched 2 Is a second image layer F of (1) 2i Is matched with the feature point FPa to be matched ij Is a good match to the matching point of the matching point.
In the same way, the image processing apparatus 10 judges, one by one as described above, whether the first image layer F_1i of the image F_1 to be processed contains a best matching point for each feature point FPa_ij to be matched of the second image layer F_2i of the image F_2 to be matched, so as to count the number of best matching points that the feature points FPa_ij to be matched of the second image layer F_2i have in the first image layer F_1i. Thereafter, the image processing apparatus 10 may preferably apply a random sample consensus (RANSAC) algorithm to the best matching points of the feature points FPa_ij to be matched, so as to screen out mismatched best matching points. Then, if the number of best matching points reaches a preset number threshold (e.g., 91.15% of the number of feature points to be matched), the image processing apparatus 10 may determine that the first image layer F_1i matches the second image layer F_2i. Conversely, if the number of best matching points does not reach the preset number threshold, the image processing apparatus 10 may determine that the first image layer F_1i does not match the second image layer F_2i.
Further, by analogy, the image processing apparatus 10 may judge one by one whether each first image layer F_1i of the image to be processed F_1 matches the corresponding second image layer F_2i of the image to be matched F_2, and count the number of matched image layers between the image to be processed F_1 and the image to be matched F_2. If the number of matched image layers reaches a preset number threshold, the image processing apparatus 10 may determine that the image to be processed F_1 matches the image to be matched F_2. Conversely, if the number of matched image layers does not reach the preset number threshold, the image processing apparatus 10 may determine that the image to be processed F_1 and the image to be matched F_2 do not match.
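The two-level decision above (per layer, then per image) can be sketched as follows. The 91.15% layer-level fraction comes from the example earlier in the description, while the 0.5 image-level fraction and all names are illustrative assumptions:

```python
def image_matches(best_counts, to_match_counts,
                  layer_frac=0.9115, image_frac=0.5):
    """Decide whether two images match. best_counts[i] is the number of
    (screened) best matching points found in first image layer F_1i;
    to_match_counts[i] is the number of feature points to be matched in
    the corresponding second image layer F_2i."""
    # A layer pair matches when its best-matching-point count reaches
    # the preset fraction of its to-be-matched feature point count.
    layer_ok = [b >= layer_frac * m
                for b, m in zip(best_counts, to_match_counts)]
    # The images match when enough layer pairs match.
    return sum(layer_ok) >= image_frac * len(layer_ok)
```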
As will be appreciated by those skilled in the art, the above-described scheme of extracting the third feature points from the image to be matched F_2 and taking the extracted third feature points as the feature points to be matched is only a preferred embodiment provided by the present invention, intended to clearly illustrate the main concept of the invention and provide an image matching scheme with high precision, high accuracy, and high success rate, not to limit the protection scope of the invention.
Alternatively, in other embodiments, the image processing apparatus 10 may also determine the feature points to be matched of the image to be matched F_2 by other means, and judge whether the two images match according to how these feature points to be matched match the third feature points of the image to be processed F_1.
It will be further appreciated by those skilled in the art that the above-mentioned scheme of determining whether images match based on the euclidean distance ratio is also only a preferred embodiment provided by the present invention, and is intended to clearly illustrate the main concept of the present invention, and to provide a specific scheme for public implementation, not to limit the scope of the present invention.
As shown in FIG. 3, in some embodiments of the present invention, after extracting the target number N_0 of third feature points, the image processing apparatus 10 may also preferably calculate descriptors of the third feature points of the image to be processed F_1 to determine a second environmental feature of each third feature point. The second environmental feature describes the other pixel points around each third feature point in the image to be processed F_1.
Furthermore, the image processing apparatus 10 may calculate descriptors of the at least one feature point to be matched of the image to be matched F_2 to determine a first environmental feature of each feature point to be matched. The first environmental feature describes the other pixel points around each feature point to be matched in the image to be matched F_2.
Thereafter, the image processing apparatus 10 may calculate the cosine values between the first environmental features of the feature points to be matched of the image to be matched F_2 and the second environmental features of the third feature points of the image to be processed F_1, so as to determine the similarity between the first environmental features and the second environmental features, and determine the best matching point of each feature point to be matched from among the third feature points according to the similarity. Still further, the image processing apparatus 10 may judge, as described above, whether each first image layer F_1i matches the corresponding second image layer F_2i according to the number of best matching points in that first image layer F_1i, and judge whether the image to be processed F_1 matches the image to be matched F_2 according to the number of matched image layers between the image to be matched F_2 and the image to be processed F_1. The specific process of judging whether each image layer and each image match is similar to the above embodiment and will not be repeated here.
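The cosine-similarity variant can be sketched as follows; descriptor vectors stand in for the environmental features, and the 0.9 acceptance threshold is an illustrative assumption (the patent does not give one):

```python
import numpy as np

def cosine_best_matches(first_feats, second_feats, sim_threshold=0.9):
    """For each 'first environmental feature' (to-be-matched descriptor),
    find the 'second environmental feature' (third-feature-point
    descriptor) with the highest cosine similarity. Returns one index
    (or None) per feature point to be matched."""
    a = first_feats / np.linalg.norm(first_feats, axis=1, keepdims=True)
    b = second_feats / np.linalg.norm(second_feats, axis=1, keepdims=True)
    sims = a @ b.T                      # cosine similarity matrix
    best = sims.argmax(axis=1)
    # Accept a best matching point only if the similarity is high enough.
    return [int(j) if sims[i, j] >= sim_threshold else None
            for i, j in enumerate(best)]
```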
Those skilled in the art will appreciate that the image matching flow illustrated in FIG. 3 is only a core step in various image processing applications and does not limit the specific image processing application. For example, a technician may take the image to be recognized as the image to be processed F_1 in the above flow, take a plurality of standard images in an image database as the images to be matched F_2 in the flow, and perform the above image matching process between the image to be recognized F_1 and each standard image F_2 to achieve image recognition.
For another example, a technician may take the face image to be recognized as the image to be processed F_1 in the above flow, take a plurality of standard face images in a face database as the images to be matched F_2 in the flow, and perform the above image matching process between the face image to be recognized F_1 and each standard face image F_2 to achieve face recognition.
For another example, a technician may take a subsequent frame image in a video as the image to be processed F_1 in the above flow and take the previous frame image as the image to be matched F_2 in the flow, and then perform the above image matching process between the subsequent frame image F_1 and the previous frame image F_2 to achieve object positioning/tracking and video tamper resistance.
Based on the above description, by adopting the image processing method provided by the present invention to optimize the feature extraction threshold θ according to the number N of feature points actually extracted, the invention can extract a number of feature points closer to, or even equal to, the target number N_0 according to the precision, accuracy, and/or success rate requirements of subsequent image processing, thereby improving the accuracy of feature matching and reducing the mismatching rate of feature points. Especially in image feature extraction over long durations and in large scenes, the invention can effectively improve the success rate of various image processing applications such as image recognition, face recognition, object positioning, object tracking, and 3D modeling, thereby reducing the number of repeated feature point detections and alleviating the long time consumption and poor real-time performance of conventional ORB feature extraction algorithms.
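The threshold-optimization idea summarized above can be sketched as follows. The proportional update rule, stopping tolerance, and all names are assumptions for illustration; the claims only require that the optimized threshold be determined from the initial threshold, the number of first feature points actually extracted, and the target number N_0:

```python
def optimize_threshold(extract, theta0, n_target, max_iters=10, tol=0.05):
    """Adjust the feature extraction threshold theta until the number of
    extracted feature points falls within `tol` of the target count N0.

    `extract(theta)` is a caller-supplied function returning the number
    of feature points detected at threshold `theta` (e.g. one FAST/ORB
    detection pass); a higher threshold yields fewer feature points."""
    theta = theta0
    for _ in range(max_iters):
        n = extract(theta)
        if abs(n - n_target) <= tol * n_target:
            break
        # Damped proportional update: raise theta when too many points
        # were extracted, lower it when too few.
        theta *= (n / n_target) ** 0.5
    return theta
```

Compared with repeatedly re-detecting at hand-tuned thresholds, driving θ directly toward the target count is what reduces the number of repeated feature point detections mentioned above.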
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts shown and described herein or not shown and described herein.
Those of skill in the art would understand that information, signals, and data may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The image processing apparatus 10 described in the above embodiments may be implemented by a combination of software and hardware. It will be appreciated that the image processing apparatus 10 may also be implemented solely in software or hardware. For a hardware implementation, the image processing apparatus 10 may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, other electronic devices for performing the functions described above, or a selected combination of the above. For a software implementation, the image processing apparatus 10 may be implemented with separate software modules, such as program modules (procedures) and function modules (functions), running on a common chip, each of which performs one or more of the functions and operations described herein.
The various illustrative logical modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. An image processing method, characterized by comprising the steps of:
performing broad feature point extraction on the image to be processed according to an initial threshold to obtain a plurality of first feature points;
determining an optimized threshold according to the number of the first feature points, the number of target feature points required for processing the image to be processed and the initial threshold;
performing feature point detection on each first feature point according to the optimization threshold value to screen a plurality of second feature points; and
and processing the image to be processed according to the second characteristic points of the image to be processed.
2. The image processing method according to claim 1, wherein after the optimization threshold is determined, the image processing method further comprises the steps of:
judging whether the optimized threshold meets a preset feature extraction standard or not; and
and replacing the optimized threshold value by the standard threshold value in response to a judgment result that the optimized threshold value is smaller than or equal to the standard threshold value indicated by the feature extraction standard.
3. The image processing method as claimed in claim 1, wherein the step of performing feature point detection on each of the first feature points according to the optimization threshold to screen a plurality of second feature points therefrom comprises:
grid division is carried out on the image according to the positions of the first characteristic points, so that the number of the first characteristic points in each grid is in a preset number range; and
and carrying out parallel feature point detection on the first feature points in each grid according to the optimization threshold value so as to screen a plurality of second feature points.
4. The image processing method according to claim 1, wherein the step of processing the image to be processed according to the second feature point of the image to be processed comprises:
homogenizing and screening the plurality of second characteristic points according to the positions of the second characteristic points of the image to be processed and the number of target characteristic points to obtain third characteristic points of the number of target characteristic points; and
and processing the image to be processed according to the third characteristic point of the image to be processed.
5. The image processing method as claimed in claim 4, wherein the step of processing the image to be processed according to the third feature point of the image to be processed comprises:
determining at least one feature point to be matched from the image to be matched;
respectively calculating Euclidean distances between each third characteristic point of the image to be processed and the characteristic points to be matched so as to determine nearest neighbor characteristic points and next nearest neighbor characteristic points; and
and carrying out feature matching between the image to be matched and the image to be processed according to the Euclidean distance from the nearest neighbor feature point to the feature point to be matched and the Euclidean distance from the next nearest neighbor feature point to the feature point to be matched.
6. The image processing method of claim 5, wherein the determining at least one feature point to be matched from the image to be matched comprises:
extracting third feature points of the target feature point number from the image to be matched; and
and determining at least one characteristic point to be matched from each third characteristic point of the image to be matched.
7. The image processing method as claimed in claim 6, wherein the step of performing feature matching between the image to be matched and the image to be processed according to the euclidean distance from the nearest neighbor feature point to the feature point to be matched and the euclidean distance from the next nearest neighbor feature point to the feature point to be matched comprises:
calculating the Euclidean distance ratio of the nearest neighbor feature point to the secondary neighbor feature point to the feature point to be matched according to the Euclidean distance between the nearest neighbor feature point and the feature point to be matched and the Euclidean distance between the secondary neighbor feature point and the feature point to be matched; and
and in response to the Euclidean distance ratio being smaller than a preset ratio threshold, judging that the nearest neighbor feature point of the image to be processed is the best matching point of the feature point to be matched of the image to be matched.
8. The image processing method as claimed in claim 7, wherein the step of performing feature matching between the image to be matched and the image to be processed according to the euclidean distance of the nearest neighbor feature point to the feature point to be matched and the euclidean distance of the next nearest neighbor feature point to the feature point to be matched further comprises:
and judging whether the image to be processed is matched with the image to be matched or not according to the number of the optimal matching points of each characteristic point to be matched of the image to be matched in the image to be processed.
9. The image processing method as claimed in claim 8, wherein before the step of judging whether the image to be processed and the image to be matched are matched according to the number of best matching points of each of the feature points to be matched in the image to be processed, the image processing method further comprises the steps of:
and adopting a random sampling consistency algorithm to carry out mismatching screening on the optimal matching points of the feature points to be matched so as to screen out the optimal matching points of mismatching.
10. The image processing method as claimed in claim 8, wherein the step of judging whether the image to be processed and the image to be matched are matched according to the number of the best matching points of each of the feature points to be matched of the image to be matched in the image to be processed comprises:
acquiring the number of feature points to be matched of a plurality of second image layers of the image to be matched, wherein the resolution of each second image layer is different;
determining the number of best matching points of the plurality of first image layers of the image to be processed, wherein the resolution of each first image layer is the same as the resolution of the corresponding second image layer;
judging whether each first image layer is matched with a corresponding second image layer or not according to the number of the best matching points in each first image layer; and
and judging whether the image to be processed matches the image to be matched according to the number of matched image layers between the image to be matched and the image to be processed.
11. The image processing method according to claim 4 or 7, wherein the step of processing the image to be processed according to the third feature point of the image to be processed includes:
determining at least one feature point to be matched from the image to be matched;
calculating descriptors of the feature points to be matched of the images to be matched to determine first environmental features of the feature points to be matched;
calculating descriptors of the third feature points of the image to be processed to determine second environment features of the third feature points;
determining the best matching point of each feature point to be matched from each third feature point according to the similarity between the first environmental feature of each feature point to be matched and the second environmental feature of each third feature point; and
and judging whether the image to be processed is matched with the image to be matched or not according to the number of the optimal matching points of each characteristic point to be matched of the image to be matched in the image to be processed.
12. An image processing apparatus, comprising:
a memory; and
a processor connected to the memory and configured to implement the image processing method according to any one of claims 1 to 11.
13. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, implement the image processing method according to any of claims 1 to 11.
CN202111362641.8A 2021-11-17 2021-11-17 Image processing method and device Pending CN116152521A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111362641.8A CN116152521A (en) 2021-11-17 2021-11-17 Image processing method and device


Publications (1)

Publication Number Publication Date
CN116152521A true CN116152521A (en) 2023-05-23

Family

ID=86349334


Country Status (1)

Country Link
CN (1) CN116152521A (en)

Similar Documents

Publication Publication Date Title
CN107358242B (en) Target area color identification method and device and monitoring terminal
JP6511149B2 (en) Method of calculating area of fingerprint overlap area, electronic device for performing the same, computer program, and recording medium
CN108765465B (en) Unsupervised SAR image change detection method
US20180032784A1 (en) Fingerprint identification method and apparatus
US20230086961A1 (en) Parallax image processing method, apparatus, computer device and storage medium
KR102578209B1 (en) Apparatus and method for image processing
WO2021174940A1 (en) Facial detection method and system
WO2017166597A1 (en) Cartoon video recognition method and apparatus, and electronic device
CN111476234A (en) Method and device for recognizing characters of shielded license plate, storage medium and intelligent equipment
CN116391204A (en) Line width measuring method, line width measuring device, calculating processing apparatus, computer program, and computer readable medium
CN111695373A (en) Zebra crossing positioning method, system, medium and device
Zhang et al. GeoMVSNet: Learning multi-view stereo with geometry perception
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
CN111161348B (en) Object pose estimation method, device and equipment based on monocular camera
CN107392948B (en) Image registration method of amplitude-division real-time polarization imaging system
CN110765993B (en) SEM graph measuring method based on AI algorithm
CN116152521A (en) Image processing method and device
JP2022519398A (en) Image processing methods, equipment and electronic devices
CN116977250A (en) Defect detection method and device for industrial parts and computer equipment
CN116993654A (en) Camera module defect detection method, device, equipment, storage medium and product
CN113378847B (en) Character segmentation method, system, computer device and storage medium
CN112541507B (en) Multi-scale convolutional neural network feature extraction method, system, medium and application
CN113793372A (en) Optimal registration method and system for different-source images
CN112967321A (en) Moving object detection method and device, terminal equipment and storage medium
CN109871867A (en) A kind of pattern fitting method of the data characterization based on preference statistics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination