CN116778141B - ORB algorithm-based method for rapidly identifying and positioning picture


Info

Publication number: CN116778141B (application publication: CN116778141A)
Application number: CN202311082677.XA
Authority: CN (China)
Original language: Chinese (zh)
Legal status: Active (granted)
Prior art keywords: matching, training, query, graph, matching result
Inventors: 胡晓球, 徐智
Assignee (current and original): Shenzhen Lan You Technology Co Ltd
Application filed by Shenzhen Lan You Technology Co Ltd; priority to CN202311082677.XA; published as CN116778141A, granted and published as CN116778141B

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides an ORB algorithm-based method for rapidly identifying and positioning a picture, which comprises the following steps: S1, presetting control parameters, matching parameters and strategy parameters for the current query graph and the original training graph, wherein the control parameters comprise a query graph matching minimum proportion A, a query graph matching maximum proportion B, a training graph cutting maximum layer number E and a feature point reference quantity Q; the matching parameters comprise a high-quality matching distance F, an image size level dividing size G, an image size level scaling proportion H and an image size level matching quality threshold L; and the strategy parameters comprise a training sub-graph transverse splitting quantity C and a training sub-graph longitudinal splitting quantity D. By using the ORB algorithm for feature and description calculation, the query graph can be rapidly identified and located in the training graph, which reduces the calculation amount of the picture positioning process and increases the calculation speed.

Description

ORB algorithm-based method for rapidly identifying and positioning picture
Technical Field
The invention relates to the technical field of image processing, in particular to a method for quickly identifying and positioning pictures based on an ORB algorithm.
Background
When a training graph with a large size and complex texture is used to locate a query graph with a small size, a conventional ORB algorithm needs to retain a large number of feature points to ensure that the retained feature points of the training graph are sufficient to match the query graph. When there is a large scale difference between the training graph and the query graph, the conventional ORB algorithm needs to keep enough image pyramid layers to cover the corresponding matching size, and when the picture size is small, the FAST algorithm needs a smaller scale factor and a larger number of layers to cover the feature differences caused by the size change. Both of these requirements greatly increase the calculation amount of the picture positioning process.
Disclosure of Invention
Aiming at the defects of the existing technical scheme, the invention provides a method for quickly identifying and positioning a picture based on the ORB algorithm, which can quickly identify and position the query graph in the training graph and reduce the calculation amount of the picture positioning process.
The invention provides a method for rapidly identifying and positioning pictures based on an ORB algorithm, which comprises the following steps:
s1, presetting control parameters, matching parameters and strategy parameters of a current query graph and an original training graph, wherein the control parameters comprise a query graph matching minimum proportion A, a query graph matching maximum proportion B, a training graph cutting maximum layer number E and a characteristic point reference quantity Q, the matching parameters comprise a high-quality matching distance F, an image size level dividing size G, an image size level scaling proportion H and an image size level matching quality threshold L, and the strategy parameters comprise a training sub graph transverse splitting quantity C and a training sub graph longitudinal splitting quantity D;
s2, preprocessing an original training diagram and a current query diagram;
s3, generating a query pyramid layer based on the preprocessed query graph, and calculating the characteristics and description of the query pyramid layer and the characteristics and description of the training graph by using an ORB algorithm;
s4, performing BF feature matching on the features and descriptions of the query pyramid layer and the training graph in sequence, and adding matching results into a result list, wherein the result list is ordered according to the matching degree, and the high matching degree is ranked in front;
s5, obtaining a matching result with highest matching degree in the result list, the training sub-graph matching result list being empty and the number of training graph cutting layers being smaller than or equal to the maximum cutting layer number, judging whether a matching result condition is met, if yes, judging the matching quality of the matching result, if the matching result is a successful positioning result, calculating a matching area of an original training graph, if the matching result is a low-quality matching result, carrying out average splitting of the training graph, and if the matching result is a general quality matching result, carrying out matching area interception of the training graph;
and S6, calculating the characteristics and description of the split and cut training subgraph by using an ORB algorithm, sequentially performing BF algorithm characteristic matching on the characteristic points and description of the query pyramid layer in the matching result and the split and cut training subgraph, and adding the matching result into a result list.
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, in the step S1, the query graph matching minimum proportion A is the minimum proportion value of the top layer of the query pyramid layers generated based on the query graph; the value type is floating point, and a proportion value smaller than 1 indicates reduction while a value larger than 1 indicates enlargement. The query graph matching maximum proportion B is the maximum proportion value of the bottom layer of the query pyramid layers generated based on the query graph; the value type is floating point, a proportion value smaller than 1 indicates reduction and a value larger than 1 indicates enlargement, and the query graph matching maximum proportion value is larger than or equal to the query graph matching minimum proportion value. The training sub-graph transverse splitting quantity C is the number of transverse average splits of the training graph when sub-graphs are split; the value type is integer and the value is larger than or equal to 1. The training sub-graph longitudinal splitting quantity D is the number of longitudinal average splits of the training graph when sub-graphs are split; the value type is integer and the value is larger than or equal to 1. The training graph cutting maximum layer number E is the number of nesting times when the training graph is cut; the value type is integer, the level of a cut sub-graph is equal to the level of the graph it is cut from plus 1, and the level of the original training graph is 0.
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, in the step S1, the high-quality matching distance F is a filtering threshold for the feature point matching result distance; the value type is floating point and the value is greater than or equal to 0. The image size level dividing size G is a list of statistical values ordered from small to large; the picture size level is determined as the largest N for which the image size is greater than or equal to G(N). The image size level scaling proportion H is a list of statistical values of the scaling proportion of the query pyramid layers for each size level. The image size level matching quality threshold L is a list of statistical values of the different quality thresholds of the image matching degree results for each size level; the quality thresholds of each size level are, in order, a low quality threshold I(N)=L(N)[0], a general quality threshold J(N)=L(N)[1] and a high quality threshold K(N)=L(N)[2]. The feature point reference quantity Q is the maximum number of feature points for an image of size 100, and is used for controlling the number of feature points retained during ORB algorithm calculation and reducing the calculation amount during subsequent feature point matching.
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, in the step S2, the preprocessing is to perform gray-scale conversion on the training graph and the query graph, perform equal-proportion reduction on the training graph to control the maximum size of the training graph and reduce the amount of computation in the image recognition process, and record the reduction ratio R of the training graph.
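As an illustration of this preprocessing, the sketch below uses OpenCV in Python; the maximum-side cap (max_side) and the choice of the longer side as the controlling dimension are assumptions added for the example, since the text only requires grayscale conversion, equal-proportion reduction and recording of the reduction ratio R.

```python
import cv2

def preprocess(train_img, query_img, max_side=1024):
    """S2 sketch: grayscale both graphs, shrink the training graph in equal
    proportion so its longer side stays within max_side, and return R."""
    train_gray = cv2.cvtColor(train_img, cv2.COLOR_BGR2GRAY)
    query_gray = cv2.cvtColor(query_img, cv2.COLOR_BGR2GRAY)
    longest = max(train_gray.shape[:2])
    R = min(1.0, max_side / longest)          # recorded reduction ratio R (R <= 1)
    if R < 1.0:
        train_gray = cv2.resize(train_gray, None, fx=R, fy=R,
                                interpolation=cv2.INTER_AREA)
    return train_gray, query_gray, R
```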
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, the step S3 includes the following steps:
S31, controlling the number of query pyramid layers through the query graph matching minimum proportion A and the query graph matching maximum proportion B, so that the pyramid layer size range can be controlled when the size range of the image to be located can be estimated, thereby reducing the calculation amount; the specific control logic includes calculating the pyramid starting layer image size and the pyramid ending layer image size: pyramid starting layer image size = preprocessed query graph size × query graph matching maximum proportion B; pyramid ending layer image size = preprocessed query graph size × query graph matching minimum proportion A; starting from the starting layer, the next layer image is generated in a loop until the layer size is smaller than or equal to the ending layer image size; the next layer image scaling proportion is P = image size level scaling proportion H(N), where N is the size level of the current pyramid layer, and next layer pyramid image size = current pyramid layer size / P;
and S32, respectively calculating the features and descriptions of the query pyramid layers and the training graph according to the ORB algorithm, wherein the number-of-pyramid-layers parameter of the ORB algorithm is set to 1, and the number of retained feature points = Q × image size / 100.
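The sketch below illustrates S31 and S32 with OpenCV's ORB; treating "image size" as the longer side, passing the size-level function and the H list in from outside, and the loop guard against non-increasing scaling values are all assumptions added for the example.

```python
import cv2

def build_query_pyramid(query_gray, A, B, H, size_level):
    """S31 sketch: layers run from proportion B (start) down to proportion A (end);
    each next layer is the current one divided by P = H(N) of its size level."""
    h, w = query_gray.shape[:2]
    layers, cur_w, cur_h = [], w * B, h * B
    end_size = max(w, h) * A                        # ending layer size ("size" = longer side)
    while max(cur_w, cur_h) >= end_size:
        layers.append(cv2.resize(query_gray, (int(cur_w), int(cur_h)),
                                 interpolation=cv2.INTER_AREA))
        P = H[size_level(max(cur_w, cur_h))]        # scaling proportion of the next layer
        if P <= 1.0:                                # guard: H values are expected to exceed 1
            break
        cur_w, cur_h = cur_w / P, cur_h / P
    return layers

def orb_features(img, Q):
    """S32 sketch: single internal pyramid level; retained feature count
    scales with the image size as Q * size / 100."""
    size = max(img.shape[:2])                       # assumed size measure
    orb = cv2.ORB_create(nfeatures=max(1, int(Q * size / 100)), nlevels=1)
    return orb.detectAndCompute(img, None)          # (keypoints, descriptors)
```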
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, the step S4 includes the following steps:
S41, finishing feature matching of the query pyramid layer and the training graph according to the BF matching algorithm, and adding the matching result to the result list;
S42, carrying out distance filtering on the matching results in the result list, and reserving matching feature points with a distance smaller than or equal to the high-quality matching distance F to obtain a high-quality feature matching result;
S43, calculating the matching degree of the high-quality feature matching result, wherein the matching degree calculation formula is matching degree = high-quality feature matching result feature point number / query pyramid layer picture feature point number;
S44, packaging the matching result and adding it to the result list, wherein the matching result content includes the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-graph matching result list, the high-quality feature matching result and the training graph cutting layer number, wherein the parent-level matching result and the training sub-graph matching result list of this matching result are both empty, and the training graph cutting layer number is 0.
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, the step S5 includes the following steps:
S51, acquiring the ordered result list and finding the first matching result whose training sub-graph matching result list is empty and whose number of training graph cutting layers is smaller than or equal to the training graph cutting maximum layer number E; if no result meets the condition, the original query graph cannot be identified in the original training graph and the identification ends; if a result meets the condition, the matching quality of the matching result is judged;
and S52, determining the matching quality according to the matching degree of the matching result, wherein the matching quality judgment logic is: let the size level of the query pyramid layer be N; if matching degree <= I(N), the matching result is a low-quality match; if I(N) < matching degree <= J(N), the matching result is a general-quality match; if J(N) < matching degree <= K(N), the matching result is a high-quality match; if matching degree > K(N), the matching result is an exact match; high-quality and exact matching results are taken as successful positioning results, and the position and size of the matching area in the original graph are calculated.
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, in the step S52, calculating the position and size of the matching area in the original graph includes: calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result; filtering the feature pairs whose coordinates lie within 2 standard deviations in the high-quality matching result; calculating the training graph feature point region of the filtered high-quality matching result; calculating the query pyramid layer feature point region of the filtered high-quality matching result; calculating the training graph matching area; and calculating the original training graph matching area. Calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result includes calculating the means avgX1 and avgY1 and the standard deviations devX1 and devY1 of the training graph feature point coordinates in the high-quality matching result, and the means avgX2 and avgY2 and the standard deviations devX2 and devY2 of the query pyramid layer feature point coordinates in the high-quality matching result. Filtering the feature pairs whose coordinates lie within 2 standard deviations means keeping pairs satisfying avgX1-2*devX1 <= training graph feature point abscissa <= avgX1+2*devX1, avgY1-2*devY1 <= training graph feature point ordinate <= avgY1+2*devY1, avgX2-2*devX2 <= query pyramid layer feature point abscissa <= avgX2+2*devX2 and avgY2-2*devY2 <= query pyramid layer feature point ordinate <= avgY2+2*devY2. Calculating the filtered high-quality matching result training graph feature point region means obtaining the minimum coordinate values minX1 and minY1 and the maximum coordinate values maxX1 and maxY1 of the training graph matching feature point coordinates in the filtered high-quality matching result, and calculating the training graph matching feature point coordinate rectangle (x1, y1, w1, h1), where x1=minX1, y1=minY1, w1=maxX1-minX1 and h1=maxY1-minY1. Calculating the filtered high-quality matching result query pyramid layer feature point region means obtaining the maximum coordinate values maxX2 and maxY2 and the minimum coordinate values minX2 and minY2 of the query pyramid layer matching feature point coordinates in the filtered high-quality matching result, and calculating the query pyramid layer feature point rectangle (x2, y2, w2, h2), where x2=minX2, y2=minY2, w2=maxX2-minX2 and h2=maxY2-minY2. Calculating the training graph matching area includes calculating the region rectangle (x3, y3, w3, h3), where x3=x1-x2*w1/w2, y3=y1-y2*h1/h2, w3=query pyramid layer width*w1/w2 and h3=query pyramid layer height*h1/h2. Calculating the original training graph matching area means obtaining from the matching result the coordinates (x, y) of the training graph's upper-left corner on the preprocessed training graph, and calculating the original training graph matching area (x4, y4, w4, h4), where x4=(x3+x)/R, y4=(y3+y)/R, w4=w3/R and h4=h3/R.
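A sketch of this region calculation, assuming NumPy and the result record produced by the matching sketch above; train_offset_xy is the (x, y) upper-left corner of the matched training (sub-)graph on the preprocessed training graph, and R is the preprocessing reduction ratio.

```python
import numpy as np

def locate_region(result, query_layer_shape, train_offset_xy, R):
    """2-sigma filtering of high-quality matches, bounding rectangles on both
    sides, training-graph match region, and mapping back to the original graph."""
    t = np.array([result["train_kp"][m.trainIdx].pt for m in result["good"]])
    q = np.array([result["layer_kp"][m.queryIdx].pt for m in result["good"]])
    keep = (np.all(np.abs(t - t.mean(0)) <= 2 * t.std(0), axis=1) &
            np.all(np.abs(q - q.mean(0)) <= 2 * q.std(0), axis=1))
    t, q = t[keep], q[keep]
    x1, y1 = t.min(0); w1, h1 = t.max(0) - t.min(0)     # training graph point region
    x2, y2 = q.min(0); w2, h2 = q.max(0) - q.min(0)     # query layer point region
    qh, qw = query_layer_shape[:2]
    x3 = x1 - x2 * w1 / w2                              # training graph match region
    y3 = y1 - y2 * h1 / h2
    w3, h3 = qw * w1 / w2, qh * h1 / h2
    ox, oy = train_offset_xy
    return ((x3 + ox) / R, (y3 + oy) / R, w3 / R, h3 / R)   # (x4, y4, w4, h4)
```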
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, in the step S5, the average splitting of the training graph includes splitting the training graph transversely into C equal parts and longitudinally into D equal parts, recording the cutting layer number of each split sub-graph as the cutting layer number of the split training graph plus 1, recording the coordinates (X, Y) of the upper-left corner of each split sub-graph region on the training graph being split, and calculating the coordinates (x, y) of the upper-left corner of each sub-graph on the preprocessed training graph, where sub-graph x = training graph x + X and sub-graph y = training graph y + Y. The matching area interception of the training graph takes the region (x3, y3, w3, h3) as the region to be cut; if x3<0 or y3<0 or x3+w3 > training graph width or y3+h3 > training graph height, the cutting region exceeds the training graph range and the cutting has to be performed on the training graph of the parent-level matching result, with the cutting region converted to (x5, y5, w5, h5), where x5 = x3 + training graph X, y5 = y3 + training graph Y, w5 = w3 and h5 = h3; the cutting layer number of the cut sub-graph is the cutting layer number of the training graph being cut plus 1, the coordinates (X, Y) of the upper-left corner of the cut sub-graph region on the training graph being cut are recorded, and the coordinates (x, y) of the upper-left corner of the sub-graph on the preprocessed training graph are calculated, where sub-graph x = training graph x + X and sub-graph y = training graph y + Y.
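The sketch below illustrates the average splitting and the matching-area interception; the convention that C controls columns and D controls rows, and the simple clamping of the crop region to the graph bounds, are assumptions, and the special case of cutting from the parent-level training graph when the region exceeds the bounds is omitted.

```python
def split_average(train_gray, C, D, offset_xy, crop_level):
    """Low-quality branch: split into C x D equal sub-graphs, each carrying its
    upper-left corner on the preprocessed training graph and crop level + 1."""
    h, w = train_gray.shape[:2]
    px, py = offset_xy
    subs = []
    for j in range(D):
        for i in range(C):
            x0, x1 = i * w // C, (i + 1) * w // C
            y0, y1 = j * h // D, (j + 1) * h // D
            subs.append({"img": train_gray[y0:y1, x0:x1],
                         "offset": (px + x0, py + y0),
                         "crop_level": crop_level + 1})
    return subs

def crop_match_region(train_gray, region, offset_xy, crop_level):
    """General-quality branch: intercept the region (x3, y3, w3, h3), clamped to
    the training graph bounds for this sketch."""
    h, w = train_gray.shape[:2]
    x3, y3, w3, h3 = region
    x0, y0 = max(0, int(x3)), max(0, int(y3))
    x1, y1 = min(w, int(x3 + w3)), min(h, int(y3 + h3))
    px, py = offset_xy
    return [{"img": train_gray[y0:y1, x0:x1],
             "offset": (px + x0, py + y0),
             "crop_level": crop_level + 1}]
```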
In the method for rapidly identifying and positioning the picture based on the ORB algorithm disclosed by the invention, the step S6 includes the following steps:
S61, according to the BF matching algorithm, sequentially matching the feature points and descriptions of the query pyramid layer with the features of the split or cut training sub-graphs, and adding the matching results to the result list;
S62, carrying out distance filtering on the matching results in the result list, and reserving matching feature points with a distance smaller than or equal to the high-quality matching distance F to obtain a high-quality feature matching result;
S63, calculating the matching degree of the high-quality feature matching result, wherein the matching degree calculation formula is matching degree = high-quality feature matching result feature point number / query pyramid layer picture feature point number;
S64, packaging the matching result and adding it to the result list, wherein the matching result content includes the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-graph matching result list, the high-quality feature matching result and the training graph nesting layer number, and the training sub-graph matching result list of this matching result is empty;
S65, updating the training sub-graph matching result list of the parent-level result by adding the sub-graph matching result generated this time.
According to the ORB algorithm-based method for rapidly identifying and positioning the picture provided by the invention, the ORB algorithm is used for feature and description calculation, so that identification and positioning of the query graph in the training graph can be completed quickly, the calculation amount of the picture positioning process is reduced, and the calculation speed is increased. Compared with the conventional ORB algorithm, the method only generates the query graph pyramid and does not generate the training graph pyramid, which involves less calculation; the training graph can also be shrunk during preprocessing before positioning, reducing the calculation amount of the subsequent matching process.
The number of calculated feature points changes dynamically with the image size, so the method is suitable for positioning images of different sizes and can quickly narrow the positioning range of the query graph in the training graph.
Drawings
Fig. 1 is a flow chart of an embodiment of a method for quickly identifying and positioning a picture based on an ORB algorithm according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a schematic flow chart of an embodiment of a method for quickly identifying and positioning a picture based on an ORB algorithm. The method for rapidly identifying and positioning the picture based on the ORB algorithm comprises the following steps:
in step S1, preset control parameters, matching parameters and strategy parameters of a current query graph and an original training graph, wherein the control parameters comprise a query graph matching minimum proportion A, a query graph matching maximum proportion B, a training graph clipping maximum layer number E and a characteristic point reference quantity Q, the matching parameters comprise a high-quality matching distance F, an image size level dividing size G, an image size level scaling proportion H and an image size level matching quality threshold L, and the strategy parameters comprise a training sub graph transverse splitting quantity C and a training sub graph longitudinal splitting quantity D;
in step S2, preprocessing an original training diagram and a current query diagram;
in step S3, a query pyramid layer is generated based on the preprocessed query graph, and the characteristics and descriptions of the query pyramid layer and the characteristics and descriptions of the training graph are calculated by using an ORB algorithm;
in step S4, performing BF feature matching on the features and descriptions of the query pyramid layer and the training diagram in sequence, and adding the matching result into a result list, wherein the result list is ordered according to the matching degree, and the high matching degree is ranked in front;
in step S5, obtaining a matching result with highest matching degree in the result list, empty training sub-graph matching result list and number of training graph cutting layers less than or equal to the maximum cutting layer number, judging whether the matching result condition is met, if yes, judging the matching quality of the matching result, if the matching result is a successful positioning result, calculating the matching area of the original training graph, if the matching result is a low quality matching result, carrying out average splitting of the training graph, and if the matching result is a general quality matching result, carrying out matching area interception of the training graph;
in step S6, the ORB algorithm is used for calculating the characteristics and description of the split and cut training subgraph, the characteristic points and description of the query pyramid layer in the matching result are sequentially subjected to BF algorithm characteristic matching with the split and cut training subgraph, and the matching result is added into the result list.
In an embodiment, in the step S1, the query graph matching minimum proportion A is the minimum proportion value of the top layer of the query pyramid layers generated based on the query graph; the value type is floating point, and a proportion value smaller than 1 indicates reduction while a value larger than 1 indicates enlargement. The query graph matching maximum proportion B is the maximum proportion value of the bottom layer of the query pyramid layers generated based on the query graph; the value type is floating point, a proportion value smaller than 1 indicates reduction and a value larger than 1 indicates enlargement, and the query graph matching maximum proportion value is larger than or equal to the query graph matching minimum proportion value. The training sub-graph transverse splitting quantity C is the number of transverse average splits of the training graph when sub-graphs are split; the value type is integer, the value is larger than or equal to 1, and the default value in this embodiment is 2. The training sub-graph longitudinal splitting quantity D is the number of longitudinal average splits of the training graph when sub-graphs are split; the value type is integer, the value is larger than or equal to 1, and the default value in this embodiment is 2. The training graph cutting maximum layer number E is the number of nesting times when the training graph is cut; the value type is integer, the level of a cut sub-graph is equal to the level of the graph it is cut from plus 1, and the level of the original training graph is 0.
In an embodiment, in the step S1, the high-quality matching distance F is a filtering threshold for the feature point matching result distance; the value type is floating point and the value is greater than or equal to 0. The image size level dividing size G is a list of statistical values ordered from small to large; the picture size level is determined as the largest N for which the image size is greater than or equal to G(N). The specific size level dividing sizes in this embodiment are, in order: 1, 32, 48, 64, 96, 224; if the image size is 300, the size level is therefore 5. The image size level scaling proportion H is a list of statistical values of the scaling proportion of the query pyramid layers for each size level; the specific scaling values are, in order: 1.0025, 1.0744, 1.1189, 1.1775, where H(5) = 1.1775. The image size level matching quality threshold L is a list of statistical values of the different quality thresholds of the image matching degree results for each size level; the quality thresholds of each size level are, in order, a low quality threshold I(N)=L(N)[0], a general quality threshold J(N)=L(N)[1] and a high quality threshold K(N)=L(N)[2]. If the size level quality threshold configuration is [[0.03, 0.05, 0.06], [0.03, 0.05, 0.06], [0.03, 0.06, 0.09], [0.1, 0.2, 0.25], [0.2, 0.3, 0.4], [0.3, 0.4, 0.5]], then I(5)=0.3, J(5)=0.4 and K(5)=0.5. The feature point reference quantity Q is the maximum number of feature points for an image of size 100, and is used for controlling the number of feature points retained during ORB algorithm calculation and reducing the calculation amount during subsequent feature point matching.
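Using the example configuration from this embodiment, a small lookup sketch; the boundary rule (largest N with G[N] <= image size) is an assumption, since the exact inequality was lost from the source text, but it reproduces the stated example (image size 300 gives level 5).

```python
G = [1, 32, 48, 64, 96, 224]                          # size level dividing sizes (example)
L = [[0.03, 0.05, 0.06], [0.03, 0.05, 0.06], [0.03, 0.06, 0.09],
     [0.1, 0.2, 0.25], [0.2, 0.3, 0.4], [0.3, 0.4, 0.5]]   # [I, J, K] per level (example)

def size_level(image_size):
    """Largest N with G[N] <= image_size (assumed rule); size 300 -> level 5."""
    level = 0
    for n, g in enumerate(G):
        if image_size >= g:
            level = n
    return level

def quality_thresholds(N):
    """Low / general / high quality thresholds I(N), J(N), K(N) for level N."""
    return tuple(L[N])
```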
In an embodiment, in the step S2, the preprocessing is to perform gray-scale conversion on the training graph and the query graph, perform equal-proportion reduction on the training graph to control the maximum size of the training graph and reduce the amount of computation in the image recognition process, and record the reduction ratio R of the training graph.
In one embodiment, the step S3 includes the steps of:
In step S31, the number of query pyramid layers is controlled through the query graph matching minimum proportion A and the query graph matching maximum proportion B, so that the pyramid layer size range can be controlled when the size range of the image to be located can be estimated, thereby reducing the calculation amount; the specific control logic includes calculating the pyramid starting layer image size and the pyramid ending layer image size: pyramid starting layer image size = preprocessed query graph size × query graph matching maximum proportion B; pyramid ending layer image size = preprocessed query graph size × query graph matching minimum proportion A; starting from the starting layer, the next layer image is generated in a loop until the layer size is smaller than or equal to the ending layer image size; the next layer image scaling proportion is P = image size level scaling proportion H(N), where N is the size level of the current pyramid layer, and next layer pyramid image size = current pyramid layer size / P;
In step S32, the features and descriptions of the query pyramid layers and the training graph are respectively calculated according to the ORB algorithm, where the number-of-pyramid-layers parameter of the ORB algorithm is set to 1, and the number of retained feature points = Q × image size / 100. It should be noted that all subsequent ORB feature and description calculations use the same parameters.
In one embodiment, the step S4 includes the steps of:
In step S41, feature matching of the query pyramid layer and the training graph is completed according to the BF matching algorithm, and the matching result is added to the result list;
In step S42, distance filtering is performed on the matching results in the result list, and matching feature points with a distance less than or equal to the high-quality matching distance F are reserved to obtain a high-quality feature matching result;
In step S43, the matching degree of the high-quality feature matching result is calculated, where the matching degree calculation formula is matching degree = high-quality feature matching result feature point number / query pyramid layer picture feature point number;
In step S44, the matching result is packaged and added to the result list, where the matching result content includes the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-graph matching result list, the high-quality feature matching result and the training graph cutting layer number; the parent-level matching result and the training sub-graph matching result list of this matching result are both empty, and the training graph cutting layer number is 0. The high-quality feature matching result comprises the matched feature point pairs of the training graph and the query pyramid layer.
In one embodiment, the step S5 includes the steps of:
In step S51, the ordered result list is obtained and the first matching result whose training sub-graph matching result list is empty and whose number of training graph cutting layers is less than or equal to the training graph cutting maximum layer number E is found; if no result meets the condition, the original query graph cannot be identified in the original training graph and the identification ends; if a result meets the condition, the matching quality of the matching result is judged;
In step S52, the matching quality is determined according to the matching degree of the matching result; the matching quality judgment logic is: let the size level of the query pyramid layer be N; if matching degree <= I(N), the matching result is a low-quality match; if I(N) < matching degree <= J(N), the matching result is a general-quality match; if J(N) < matching degree <= K(N), the matching result is a high-quality match; if matching degree > K(N), the matching result is an exact match; high-quality and exact matching results are taken as successful positioning results, and the position and size of the matching area in the original graph are calculated. The results in the matching result list are then re-sorted, with results of higher matching degree ranked first; among results with the same matching degree, the ordering follows the training graph sub-graph hierarchy.
In an embodiment, in the step S52, calculating the position and size of the matching area in the original graph includes: calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result; filtering the feature pairs whose coordinates lie within 2 standard deviations in the high-quality matching result; calculating the training graph feature point region of the filtered high-quality matching result; calculating the query pyramid layer feature point region of the filtered high-quality matching result; calculating the training graph matching area; and calculating the original training graph matching area. Calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result includes calculating the means avgX1 and avgY1 and the standard deviations devX1 and devY1 of the training graph feature point coordinates in the high-quality matching result, and the means avgX2 and avgY2 and the standard deviations devX2 and devY2 of the query pyramid layer feature point coordinates in the high-quality matching result. Filtering the feature pairs whose coordinates lie within 2 standard deviations means keeping pairs satisfying avgX1-2*devX1 <= training graph feature point abscissa <= avgX1+2*devX1, avgY1-2*devY1 <= training graph feature point ordinate <= avgY1+2*devY1, avgX2-2*devX2 <= query pyramid layer feature point abscissa <= avgX2+2*devX2 and avgY2-2*devY2 <= query pyramid layer feature point ordinate <= avgY2+2*devY2. Calculating the filtered high-quality matching result training graph feature point region means obtaining the minimum coordinate values minX1 and minY1 and the maximum coordinate values maxX1 and maxY1 of the training graph matching feature point coordinates in the filtered high-quality matching result, and calculating the training graph matching feature point coordinate rectangle (x1, y1, w1, h1), where x1=minX1, y1=minY1, w1=maxX1-minX1 and h1=maxY1-minY1. Calculating the filtered high-quality matching result query pyramid layer feature point region means obtaining the maximum coordinate values maxX2 and maxY2 and the minimum coordinate values minX2 and minY2 of the query pyramid layer matching feature point coordinates in the filtered high-quality matching result, and calculating the query pyramid layer feature point rectangle (x2, y2, w2, h2), where x2=minX2, y2=minY2, w2=maxX2-minX2 and h2=maxY2-minY2. Calculating the training graph matching area includes calculating the region rectangle (x3, y3, w3, h3), where x3=x1-x2*w1/w2, y3=y1-y2*h1/h2, w3=query pyramid layer width*w1/w2 and h3=query pyramid layer height*h1/h2. Calculating the original training graph matching area means obtaining from the matching result the coordinates (x, y) of the training graph's upper-left corner on the preprocessed training graph, and calculating the original training graph matching area (x4, y4, w4, h4), where x4=(x3+x)/R, y4=(y3+y)/R, w4=w3/R and h4=h3/R.
In an embodiment, in the step S5, the average splitting of the training graph includes splitting the training graph transversely into C equal parts and longitudinally into D equal parts, recording the cutting layer number of each split sub-graph as the cutting layer number of the split training graph plus 1, recording the coordinates (X, Y) of the upper-left corner of each split sub-graph region on the training graph being split, and calculating the coordinates (x, y) of the upper-left corner of each sub-graph on the preprocessed training graph, where sub-graph x = training graph x + X and sub-graph y = training graph y + Y. The matching area interception of the training graph takes the region (x3, y3, w3, h3) as the region to be cut; if x3<0 or y3<0 or x3+w3 > training graph width or y3+h3 > training graph height, the cutting region exceeds the training graph range and the cutting has to be performed on the training graph of the parent-level matching result, with the cutting region converted to (x5, y5, w5, h5), where x5 = x3 + training graph X, y5 = y3 + training graph Y, w5 = w3 and h5 = h3; the cutting layer number of the cut sub-graph is the cutting layer number of the training graph being cut plus 1, the coordinates (X, Y) of the upper-left corner of the cut sub-graph region on the training graph being cut are recorded, and the coordinates (x, y) of the upper-left corner of the sub-graph on the preprocessed training graph are calculated, where sub-graph x = training graph x + X and sub-graph y = training graph y + Y.
In one embodiment, the step S6 includes the steps of:
In step S61, the feature points and descriptions of the query pyramid layer are sequentially matched with the features of the split or cut training sub-graphs according to the BF matching algorithm, and the matching results are added to the result list;
In step S62, distance filtering is performed on the matching results in the result list, and matching feature points with a distance less than or equal to the high-quality matching distance F are reserved to obtain a high-quality feature matching result;
In step S63, the matching degree of the high-quality feature matching result is calculated, where the matching degree calculation formula is matching degree = high-quality feature matching result feature point number / query pyramid layer picture feature point number;
In step S64, the matching result is packaged and added to the result list, where the matching result content includes the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-graph matching result list, the high-quality feature matching result and the training graph nesting layer number, and the training sub-graph matching result list of this matching result is empty;
in step S65, the training child matching result list of the parent level result is updated, and the child matching result generated this time is added.
Wherein the high-quality feature matching result comprises the matched feature point pairs of the training graph and the query pyramid layer.
it should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Therefore, the above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention should be covered by the scope of the present invention, which is defined by the claims.

Claims (7)

1. The method for rapidly identifying and positioning the picture based on the ORB algorithm is characterized by comprising the following steps of:
s1, presetting control parameters, matching parameters and strategy parameters of a current query graph and an original training graph, wherein the control parameters comprise a query graph matching minimum proportion A, a query graph matching maximum proportion B, a training graph cutting maximum layer number E and a characteristic point reference quantity Q, the matching parameters comprise a high-quality matching distance F, an image size level dividing size G, an image size level scaling proportion H and an image size level matching quality threshold L, and the strategy parameters comprise a training sub graph transverse splitting quantity C and a training sub graph longitudinal splitting quantity D;
s2, preprocessing an original training diagram and a current query diagram;
s3, generating a query pyramid layer based on the preprocessed query graph, and calculating the characteristics and description of the query pyramid layer and the characteristics and description of the training graph by using an ORB algorithm;
s4, performing BF feature matching on the features and descriptions of the query pyramid layer and the training graph in sequence, and adding matching results into a result list, wherein the result list is ordered according to the matching degree, and the high matching degree is ranked in front;
s5, obtaining a matching result with highest matching degree in the result list, the training sub-graph matching result list being empty and the number of training graph cutting layers being smaller than or equal to the maximum cutting layer number, judging whether a matching result condition is met, if yes, judging the matching quality of the matching result, if the matching result is a successful positioning result, calculating a matching area of an original training graph, if the matching result is a low-quality matching result, carrying out average splitting of the training graph, and if the matching result is a general quality matching result, carrying out matching area interception of the training graph;
the step S5 includes the steps of:
s51, acquiring an ordered result list, finding out a matching result that the first training sub-graph matching result list is empty and the number of training graph cutting layers is smaller than or equal to the maximum number of training graph cutting layers E, if no condition result is met, failing to identify an original query graph in the original training graph and ending identification, and if the condition result is met, judging the matching quality of the matching result;
s52, calculating the matching quality according to the matching degree of the matching result, wherein the matching quality judgment logic is as follows: setting the layer size level of the query pyramid as N, if the matching degree is < =I (N), the matching result is low-quality matching, if I (N) < the matching degree is < =J (N), the matching result is general-quality matching, if J (N) < the matching degree is < =K (N), the matching result is high-quality matching, if the matching degree is > K (N), the matching result is accurate matching, the high-quality matching and the accurate matching result are used as successful positioning results, and the position and the size of a matching area in an original graph are calculated;
s6, calculating the characteristics and description of the split and cut training subgraph by using an ORB algorithm, sequentially performing BF matching algorithm characteristic matching on the characteristic points and description of the query pyramid layer in the matching result and the split and cut training subgraph, and adding the matching result into a result list;
in the step S1, the high-quality matching distance F is a filtering threshold for the feature point matching result distance, the value type is floating point and the value is greater than or equal to 0; the image size level dividing size G is a list of statistical values ordered from small to large, and the picture size level is determined as the largest N for which the image size is greater than or equal to G(N); the image size level scaling proportion H is a list of statistical values of the scaling proportion of the query pyramid layers for each size level; the image size level matching quality threshold L is a list of statistical values of the different quality thresholds of the image matching degree results for each size level, and the quality thresholds of each size level are, in order, a low quality threshold I(N)=L(N)[0], a general quality threshold J(N)=L(N)[1] and a high quality threshold K(N)=L(N)[2]; the feature point reference quantity Q is the maximum number of feature points for an image of size 100, and is used for controlling the number of feature points retained during ORB algorithm calculation and reducing the calculation amount during subsequent feature point matching.
2. The method for quickly identifying and positioning pictures based on the ORB algorithm according to claim 1, wherein in the step S1, the query graph matching minimum proportion A is the minimum proportion value of the top layer of the query pyramid layers generated based on the query graph, the value type is floating point, and a proportion value smaller than 1 indicates reduction while a value larger than 1 indicates enlargement; the query graph matching maximum proportion B is the maximum proportion value of the bottom layer of the query pyramid layers generated based on the query graph, the value type is floating point, a proportion value smaller than 1 indicates reduction and a value larger than 1 indicates enlargement, and the query graph matching maximum proportion value is larger than or equal to the query graph matching minimum proportion value; the training sub-graph transverse splitting quantity C is the number of transverse average splits of the training graph when sub-graphs are split, the value type is integer and the value is larger than or equal to 1; the training sub-graph longitudinal splitting quantity D is the number of longitudinal average splits of the training graph when sub-graphs are split, the value type is integer and the value is larger than or equal to 1; the training graph cutting maximum layer number E is the number of nesting times when the training graph is cut, the value type is integer, the level of a cut sub-graph is equal to the level of the graph it is cut from plus 1, and the level of the original training graph is 0.
3. The method for quickly identifying and positioning pictures based on the ORB algorithm according to claim 1, wherein in the step S2, the preprocessing is to perform gray-scale conversion on the training graph and the query graph, perform equal-proportion reduction on the training graph to control the maximum size of the training graph and reduce the amount of computation in the image recognition process, and record the reduction ratio R of the training graph.
4. The method for quick recognition and positioning of pictures based on ORB algorithm according to claim 3, wherein said step S3 comprises the steps of:
S31, controlling the number of query pyramid layers through the query graph matching minimum proportion A and the query graph matching maximum proportion B, so that the pyramid layer size range can be controlled when the size range of the image to be located can be estimated, thereby reducing the calculation amount, wherein the specific control logic includes calculating the pyramid starting layer image size and the pyramid ending layer image size: pyramid starting layer image size = preprocessed query graph size × query graph matching maximum proportion B; pyramid ending layer image size = preprocessed query graph size × query graph matching minimum proportion A; starting from the starting layer, the next layer image is generated in a loop until the layer size is smaller than or equal to the ending layer image size; the next layer image scaling proportion is P = image size level scaling proportion H(N), where N is the size level of the query pyramid layer, and next layer pyramid image size = current pyramid layer size / P;
and S32, respectively calculating the features and descriptions of the query pyramid layers and the training graph according to the ORB algorithm, wherein the number-of-pyramid-layers parameter of the ORB algorithm is set to 1, and the number of retained feature points = Q × image size / 100.
5. The method for quick recognition and positioning of pictures based on ORB algorithm according to claim 4, wherein said step S4 comprises the steps of:
S41, finishing feature matching of the query pyramid layer and the training graph according to the BF matching algorithm, and adding the matching result to the result list;
S42, carrying out distance filtering on the matching results in the result list, and reserving matching feature points with a distance smaller than or equal to the high-quality matching distance F to obtain a high-quality feature matching result;
S43, calculating the matching degree of the high-quality feature matching result, wherein the matching degree calculation formula is matching degree = high-quality feature matching result feature point number / query pyramid layer picture feature point number;
S44, packaging the matching result and adding it to the result list, wherein the matching result content includes the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-graph matching result list, the high-quality feature matching result and the training graph cutting layer number, wherein the parent-level matching result and the training sub-graph matching result list of this matching result are both empty, and the training graph cutting layer number is 0.
6. The method for quickly identifying and positioning a picture based on the ORB algorithm according to claim 5, wherein in the step S52, calculating the position and size of the matching area in the original graph includes: calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result; filtering the feature pairs whose coordinates lie within 2 standard deviations in the high-quality matching result; calculating the training graph feature point region of the filtered high-quality matching result; calculating the query pyramid layer feature point region of the filtered high-quality matching result; calculating the training graph matching area; and calculating the original training graph matching area; wherein calculating the mean and standard deviation of the feature point coordinates of the high-quality matching result includes calculating the means avgX1 and avgY1 and the standard deviations devX1 and devY1 of the training graph feature point coordinates in the high-quality matching result, and the means avgX2 and avgY2 and the standard deviations devX2 and devY2 of the query pyramid layer feature point coordinates in the high-quality matching result; filtering the feature pairs whose coordinates lie within 2 standard deviations means keeping pairs satisfying avgX1-2*devX1 <= training graph feature point abscissa <= avgX1+2*devX1, avgY1-2*devY1 <= training graph feature point ordinate <= avgY1+2*devY1, avgX2-2*devX2 <= query pyramid layer feature point abscissa <= avgX2+2*devX2 and avgY2-2*devY2 <= query pyramid layer feature point ordinate <= avgY2+2*devY2; calculating the filtered high-quality matching result training graph feature point region means obtaining the minimum coordinate values minX1 and minY1 and the maximum coordinate values maxX1 and maxY1 of the training graph matching feature point coordinates in the filtered high-quality matching result, and calculating the training graph matching feature point coordinate rectangle (x1, y1, w1, h1), where x1=minX1, y1=minY1, w1=maxX1-minX1 and h1=maxY1-minY1; calculating the filtered high-quality matching result query pyramid layer feature point region means obtaining the maximum coordinate values maxX2 and maxY2 and the minimum coordinate values minX2 and minY2 of the query pyramid layer matching feature point coordinates in the filtered high-quality matching result, and calculating the query pyramid layer feature point rectangle (x2, y2, w2, h2), where x2=minX2, y2=minY2, w2=maxX2-minX2 and h2=maxY2-minY2; calculating the training graph matching area includes calculating the region rectangle (x3, y3, w3, h3), where x3=x1-x2*w1/w2, y3=y1-y2*h1/h2, w3=query pyramid layer width*w1/w2 and h3=query pyramid layer height*h1/h2; calculating the original training graph matching area means obtaining from the matching result the coordinates (x, y) of the training graph's upper-left corner on the preprocessed training graph, and calculating the original training graph matching area (x4, y4, w4, h4), where x4=(x3+x)/R, y4=(y3+y)/R, w4=w3/R and h4=h3/R.
7. The method for quickly identifying and positioning a picture based on the ORB algorithm according to claim 1, wherein the step S6 comprises the following steps:
S61, according to the BF matching algorithm, sequentially matching the computed feature points and descriptions of the query pyramid layer image against the features of the split and cut training sub-images, and adding the matching results to a result list;
S62, performing distance filtering on the matching results in the result list, and retaining the matching feature points whose distance is smaller than or equal to the high-quality matching distance F to obtain a high-quality feature matching result;
S63, calculating the matching degree of the high-quality feature matching result, wherein the matching degree calculation formula is: matching degree = number of feature points in the high-quality feature matching result / number of feature points in the query pyramid layer image (a code sketch of steps S61 to S63 is given after step S65);
S64, packaging the matching result and adding the matching result to the result list, wherein the content of the matching result comprises the query pyramid layer of the current matching, the training graph of the current matching, the feature points and descriptions of the query pyramid layer image, the feature points and descriptions of the training graph, the matching degree, the parent-level matching result, the training sub-image matching result list, the high-quality feature matching result and the training graph cutting layer number, and the training sub-image matching result list of the matching result is empty;
S65, updating the training sub-image matching result list of the parent-level matching result by adding the sub-image matching result generated in this matching.
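As referenced in step S63, the sketch below illustrates steps S61 to S63 using OpenCV's ORB detector and brute-force matcher in Python; the function match_layer and the parameter F are placeholders standing in for the claim's query pyramid layer image, training sub-image and preset high-quality matching distance, and the claim itself does not prescribe this code.

import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # BF matching for binary ORB descriptors

def match_layer(query_layer_img, train_sub_img, F):
    # S61: compute ORB feature points and descriptions, then brute-force match them
    q_kp, q_des = orb.detectAndCompute(query_layer_img, None)
    t_kp, t_des = orb.detectAndCompute(train_sub_img, None)
    if q_des is None or t_des is None:
        return [], 0.0
    matches = bf.match(q_des, t_des)
    # S62: keep only matches whose distance is <= the high-quality matching distance F
    good = [m for m in matches if m.distance <= F]
    # S63: matching degree = high-quality match count / query pyramid layer feature point count
    degree = len(good) / len(q_kp) if q_kp else 0.0
    return good, degree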
CN202311082677.XA 2023-08-28 2023-08-28 ORB algorithm-based method for rapidly identifying and positioning picture Active CN116778141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311082677.XA CN116778141B (en) 2023-08-28 2023-08-28 ORB algorithm-based method for rapidly identifying and positioning picture

Publications (2)

Publication Number Publication Date
CN116778141A (en) 2023-09-19
CN116778141B (en) 2023-12-22

Family

ID=88013827

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311082677.XA Active CN116778141B (en) 2023-08-28 2023-08-28 ORB algorithm-based method for rapidly identifying and positioning picture

Country Status (1)

Country Link
CN (1) CN116778141B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461196A (en) * 2020-03-27 2020-07-28 上海大学 Method and device for identifying and tracking fast robust image based on structural features
CN113762280A (en) * 2021-04-23 2021-12-07 腾讯科技(深圳)有限公司 Image category identification method, device and medium
WO2022002039A1 (en) * 2020-06-30 2022-01-06 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map
CN114359591A (en) * 2021-12-13 2022-04-15 重庆邮电大学 Self-adaptive image matching algorithm with edge features fused

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109753940B (en) * 2019-01-11 2022-02-22 京东方科技集团股份有限公司 Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant