CN107220580A - Image recognition random sample consensus algorithm based on voting decision and least square method - Google Patents


Info

Publication number
CN107220580A
CN107220580A (application CN201610167808.8A)
Authority
CN
China
Prior art keywords
voting
points
point
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610167808.8A
Other languages
Chinese (zh)
Other versions
CN107220580B (en)
Inventor
王丰
张靖恺
龙文勇
吕虹晓
郑邦雄
曾梦旭
曾艳
Current Assignee
Dun Tai Electronics Co Ltd
Original Assignee
Dun Tai Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Dun Tai Electronics Co Ltd filed Critical Dun Tai Electronics Co Ltd
Priority to CN201610167808.8A priority Critical patent/CN107220580B/en
Publication of CN107220580A publication Critical patent/CN107220580A/en
Application granted granted Critical
Publication of CN107220580B publication Critical patent/CN107220580B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 — Fingerprints or palmprints
    • G06V 40/1365 — Matching; Classification
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 — Fingerprints or palmprints
    • G06V 40/1347 — Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

An image recognition random sample consensus algorithm based on voting decision and the least square method: a voting decision mechanism finds the point pairs most likely to be correct, and the rotation-translation transformation matrix is then calculated by the least square method. The rotation-translation transformation matrix calculated by the invention is accurate, fast, and stable, and the result does not fluctuate randomly between runs. The invention not only optimizes the image comparison method but also supports the subsequent learning function of the data processing chip, i.e. the feature information of the reference image is continuously enlarged, which effectively reduces the false rejection rate and makes the data processing chip more intelligent. The invention also realizes image splicing more efficiently and accurately through the continuously updated rotation-translation transformation matrix.

Description

Image recognition random sample consensus algorithm based on voting decision and least square method
Technical Field
The invention relates to a data processing method, in particular to a data processing method for image comparison and image splicing.
Background
In order to compare images and splice them according to the comparison result, the prior art takes the base image as a template image and the image to be compared as the image to be tested. Descriptors of the feature points of the two images are put through Hamming matching and spatial outlier removal to obtain a set of coarse matching point pairs, and a Random Sample Consensus (RANSAC) algorithm is then used to remove mismatches and calculate the rotation-translation transformation matrix. The conventional RANSAC approach estimates an H matrix from three point pairs selected at random from the coarse matching point pair set, then uses the estimated H matrix to determine a consensus set over the two point sets: if, after mapping by the H matrix, the Euclidean distance between a point in one set and its corresponding point in the other set is smaller than a threshold, the pair is considered to belong to the consensus set of that H matrix. Finally, the consensus set containing the most points is taken as the set of fine matching point pairs, and its H matrix is the final rotation-translation transformation matrix.
Because the conventional RANSAC algorithm estimates the H matrix from randomly selected point pairs each time, it is strongly random, computationally heavy, and inefficient: a suitable H matrix is found only after many repeated iterations. In particular, when the two matching point sets fed to RANSAC contain many erroneous point pairs, or a large proportion of them, the required number of iterations grows accordingly, and otherwise a valid H matrix may not be found at all; yet both running time and storage grow in proportion to the iteration count. Moreover, the H matrix and consensus set computed by RANSAC are not exactly the same from run to run.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an image recognition random sample consensus algorithm based on voting decision and the least square method that avoids the defects of the prior art: a voting decision mechanism finds the point pairs most likely to be correct, the rotation-translation transformation matrix is calculated on the least square principle, and a new RANSAC algorithm is designed around these two ideas. After Hamming matching and spatial outlier removal of the feature-point descriptors of the template image and the image to be tested, the method obtains two coarse matching point sets: the template image coarse matching point set (reference set) and the test image coarse matching point set (testset). Each point carries a sequence label (index) within its set, and the coarse matching point pairs of the two sets correspond one to one; however, the correctness of the coarse matches still needs further verification, i.e. outliers are removed by the RANSAC algorithm of this method.
The technical problem to be solved by the invention can be realized by adopting the following technical scheme:
an image recognition random sample consensus algorithm based on voting decision and the least square method is implemented as follows: one of the stored image templates serves as the template image, the input image serves as the test image, and whether the test image matches the template image is decided by comparing the feature point information of the test image against that of each template image. Specifically, the feature point information of the test image and of each template image is processed by the following steps:
A. selecting characteristic points in the test image to form a test image characteristic point set, and selecting characteristic points in the template image to form a template image characteristic point set;
for each feature point of the test image, find a coarse matching point in the template image feature point set, so that the two corresponding coarse matching points, one from the test image feature point set and one from the template image feature point set, form a coarse matching point pair;
if the number of coarse matching point pairs is larger than the set coarse matching threshold, perform step B; otherwise, perform step J;
B. the rough matching points belonging to the test image feature point set in the rough matching point pair form a test image rough matching point set, and the rough matching points belonging to the template image feature point set form a template image rough matching point set;
C. find, by a voting decision method, the two mutually corresponding point pairs with the highest votes in the test image coarse matching point set and the template image coarse matching point set, forming two optimal point pairs;
D. calculating a rotation translation transformation matrix by using two optimal point pairs consisting of the test image rough matching point set and the template image rough matching point set based on a least square method principle;
E. screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by means of a rotation translation transformation matrix to form a consistent set;
if the number of point pairs in the consensus set is larger than the set consensus pair number threshold, perform step F; otherwise, perform step J;
F. judging that the test image is matched with the template image;
J. the algorithm ends.
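The claimed steps A through J can be sketched as the following control-flow skeleton (a hypothetical Python outline; the function names, callable parameters, and threshold values are illustrative placeholders, not part of the patent):

```python
def ransac_match(coarse_pairs, coarse_threshold,
                 find_best_pairs, fit_transform, build_consensus,
                 consensus_threshold):
    """Skeleton of steps A-J; returns (matched, H, consensus_set).

    The callables stand in for the voting decision (step C), the least
    square fit (step D) and the consensus screening (step E).
    """
    if len(coarse_pairs) <= coarse_threshold:       # gate after step A
        return False, None, []                      # -> step J: end
    best_pairs = find_best_pairs(coarse_pairs)      # step C: voting decision
    h = fit_transform(best_pairs)                   # step D: least squares
    consensus = build_consensus(coarse_pairs, h)    # step E: consensus set
    matched = len(consensus) > consensus_threshold  # step F gate
    return matched, h, consensus
```

In the splicing variant, a caller would follow a True result with the step G refit and the step I feature-point merge before ending.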
To further improve the image comparison precision and expand the template image coverage, the invention also provides an image splicing method based on this RANSAC algorithm, in which the following steps are added between step F and step J:
G. calculating an updated rotation translation transformation matrix by using point pairs in a consistent set based on a least square method principle;
I. map the points of the test image feature point set into updated splicing feature points by means of the updated rotation-translation transformation matrix, and add these updated splicing feature points to the template image feature point set.
One possible solution adopted by the voting decision method is that the step C includes the following sub-steps:
C11. assign serial numbers to the M points of the template image coarse matching point set and set up an M-row, M-column point-pair distance voting matrix, whose off-diagonal elements are the voting elements;
C12. obtain the template coarse matching point distance between every point of the template image coarse matching point set and each other point, and the test coarse matching point distance between every point of the test image coarse matching point set and each other point, and put the template distances and test distances into one-to-one correspondence according to the serial-number correspondence of the coarse matching points of the two sets;
C13. judge, one by one, whether the difference between a template coarse matching point distance and its corresponding test coarse matching point distance is smaller than the set coarse matching distance difference threshold; if so, set the two elements of the point-pair distance voting matrix whose row and column numbers are the serial numbers of the two points concerned to the voting flag; if not, set those two elements to the non-voting flag;
C14. select the two template image points whose row serial numbers have the two highest voting-flag counts, and form two optimal point pairs with their corresponding test image points;
the voting flag is 1 and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
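The distance-voting sub-steps C11 to C14 can be sketched as the following minimal Python function (the function name and the list-of-lists representation of the voting matrix are illustrative, not from the patent; the threshold value is an assumption):

```python
import math
from itertools import combinations

def distance_vote(ref_pts, test_pts, dist_threshold):
    """Sketch of sub-steps C11-C14: point-pair distance voting.

    ref_pts / test_pts are lists of (x, y) coordinates in coarse-match
    order, so ref_pts[i] corresponds to test_pts[i].  Returns the indices
    of the two points whose rows collect the most voting flags.
    """
    m = len(ref_pts)
    votes = [[0] * m for _ in range(m)]            # M x M voting matrix (C11)
    for i, j in combinations(range(m), 2):         # off-diagonal elements only
        d_ref = math.dist(ref_pts[i], ref_pts[j])  # template point distance (C12)
        d_test = math.dist(test_pts[i], test_pts[j])
        if abs(d_ref - d_test) < dist_threshold:   # C13: distances agree -> flag 1
            votes[i][j] = votes[j][i] = 1
    row_counts = [sum(row) for row in votes]
    # C14: the two rows with the highest voting-flag counts.
    return sorted(range(m), key=lambda k: row_counts[k], reverse=True)[:2]
```

Because pairwise distances are invariant under rotation and translation, correct coarse matches vote for each other while a mismatched point collects almost no votes.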
A further derivation of the voting decision method builds on sub-steps C11 to C14; in this variant, step C comprises the following sub-steps:
C21. assign serial numbers to the M points of the template image coarse matching point set and set up an M-row, M-column point-pair distance voting matrix, whose off-diagonal elements are the voting elements;
C22. obtain the template coarse matching point distance between every point of the template image coarse matching point set and each other point, and the test coarse matching point distance between every point of the test image coarse matching point set and each other point, and put the template distances and test distances into one-to-one correspondence according to the serial-number correspondence of the coarse matching points of the two sets;
C23. judge, one by one, whether the difference between a template coarse matching point distance and its corresponding test coarse matching point distance is smaller than the set coarse matching distance difference threshold; if so, set the two elements of the point-pair distance voting matrix whose row and column numbers are the serial numbers of the two points concerned to the voting flag; if not, set those two elements to the non-voting flag;
C24. from the point-pair distance voting matrix, select the Z points represented by the row serial numbers of the Z rows with the top Z voting-flag counts to form a template image initial judgment point set; the Z corresponding points of the test image coarse matching point set form the test image initial judgment point set, Z being a natural number not less than 2;
C25. obtain the vector between every two points of the template image initial judgment point set to form a template vector set of Y vectors, where Y is the number of combinations of two points taken from Z points; correspondingly obtain the vector between every two points of the test image initial judgment point set to form a test vector set of Y vectors, and put the template vectors and test vectors into one-to-one correspondence according to the point correspondence between the two initial judgment point sets;
C26. assign serial numbers to the Y vectors of the template vector set and set up a Y-row, Y-column vector included-angle voting matrix, whose off-diagonal elements are the voting elements;
C27. calculate the included angle between every two vectors of the template vector set and, correspondingly, the included angle between every two vectors of the test vector set, and judge whether the difference between a template included angle and the test included angle of the same serial numbers is smaller than the set vector included-angle difference threshold; if so, set the two elements of the vector included-angle voting matrix whose row and column numbers are the serial numbers of the two vectors concerned to the voting flag; if not, set those two elements to the non-voting flag;
C28. select the template vector represented by the row serial number of the row with the most voting flags in the vector included-angle voting matrix, thereby finding the two points that form this template vector, and form two optimal point pairs from these two points and the corresponding two points of the test image;
the voting flag is 1 and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
Specifically, Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image coarse matching point set.
Another possible scheme for the voting decision method is that step C comprises the following sub-steps:
C31. obtain the vector between every two points of the template image coarse matching point set to form a template vector set of X vectors, where X is the number of combinations of two points taken from the points of the template image coarse matching point set; correspondingly obtain the vector between every two points of the test image coarse matching point set to form a test vector set of X vectors, and put the template vectors and test vectors into one-to-one correspondence according to the point correspondence between the two coarse matching point sets;
C32. assign serial numbers to the X vectors of the template vector set and set up an X-row, X-column vector included-angle voting matrix, whose off-diagonal elements are the voting elements;
C33. calculate the included angle between every two vectors of the template vector set and, correspondingly, the included angle between every two vectors of the test vector set, and judge whether the difference between a template included angle and the test included angle of the same serial numbers is smaller than the set vector included-angle difference threshold; if so, set the two elements of the vector included-angle voting matrix whose row and column numbers are the serial numbers of the two vectors concerned to the voting flag; if not, set those two elements to the non-voting flag;
C34. select the template vector represented by the row serial number of the row with the most voting flags in the vector included-angle voting matrix, thereby finding the two points that form this template vector, and form two optimal point pairs from these two points and the corresponding two points of the test image;
the voting flag is 1 and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
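The vector included-angle voting of sub-steps C31 to C34 can be sketched as follows (a minimal Python illustration; the function name and return convention are assumptions, and the angle threshold in radians is illustrative):

```python
import math
from itertools import combinations

def angle_vote(ref_pts, test_pts, angle_threshold):
    """Sketch of sub-steps C31-C34: vector included-angle voting.

    Returns the index pair (i, j) of the two template points that form
    the winning template vector.
    """
    point_pairs = list(combinations(range(len(ref_pts)), 2))

    def vec(pts, i, j):
        # Vectors point from the smaller serial number to the larger one.
        return (pts[j][0] - pts[i][0], pts[j][1] - pts[i][1])

    ref_vecs = [vec(ref_pts, i, j) for i, j in point_pairs]   # X template vectors (C31)
    test_vecs = [vec(test_pts, i, j) for i, j in point_pairs]

    def included_angle(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        norm = math.hypot(*u) * math.hypot(*v)
        return math.acos(max(-1.0, min(1.0, dot / norm)))

    x = len(point_pairs)
    votes = [0] * x                            # row totals of the X x X matrix (C32)
    for a, b in combinations(range(x), 2):     # C33: compare included angles
        diff = abs(included_angle(ref_vecs[a], ref_vecs[b])
                   - included_angle(test_vecs[a], test_vecs[b]))
        if diff < angle_threshold:
            votes[a] += 1
            votes[b] += 1
    # C34: the vector whose row collects the most voting flags.
    return point_pairs[max(range(x), key=lambda k: votes[k])]
```

Included angles between vectors are invariant under rotation and translation, so vectors built from correct matches vote for each other while vectors touching a mismatched point do not.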
A further derivation of the voting decision method builds on sub-steps C31 to C34; in this variant, step C comprises the following sub-steps:
C41. obtain the vector between every two points of the template image coarse matching point set to form a template vector set of X vectors, where X is the number of combinations of two points taken from the points of the template image coarse matching point set; correspondingly obtain the vector between every two points of the test image coarse matching point set to form a test vector set of X vectors, and put the template vectors and test vectors into one-to-one correspondence according to the point correspondence between the two coarse matching point sets;
C42. assign serial numbers to the X vectors of the template vector set and set up an X-row, X-column vector included-angle voting matrix, whose off-diagonal elements are the voting elements;
C43. calculate the included angle between every two vectors of the template vector set and, correspondingly, the included angle between every two vectors of the test vector set, and judge whether the difference between a template included angle and the test included angle of the same serial numbers is smaller than the set vector included-angle difference threshold; if so, set the two elements of the vector included-angle voting matrix whose row and column numbers are the serial numbers of the two vectors concerned to the voting flag; if not, set those two elements to the non-voting flag;
C44. select the end points of the W vectors represented by the row serial numbers of the W rows with the top W voting-flag counts in the vector included-angle voting matrix as the template image initial judgment point set; the end points of the W corresponding vectors in the test image coarse matching point set form the test image initial judgment point set, W being a natural number not less than 2;
C45. assign serial numbers to the N points of the template image initial judgment point set, where N ≤ 2 × W, and set up an N-row, N-column point-pair distance voting matrix, whose off-diagonal elements are the voting elements;
C46. obtain the template initial judgment point distance between every point of the N-point template image initial judgment point set and each other point, and the test initial judgment point distance between every point of the N-point test image initial judgment point set and each other point, and put the template distances and test distances into one-to-one correspondence according to the serial-number correspondence of the points of the two initial judgment point sets;
C47. judge, one by one, whether the difference between a template initial judgment point distance and its corresponding test initial judgment point distance is smaller than the set initial judgment distance difference threshold; if so, set the two elements of the point-pair distance voting matrix whose row and column numbers are the serial numbers of the two points concerned to the voting flag; if not, set those two elements to the non-voting flag;
C48. select the two template image points whose row serial numbers have the two highest voting-flag counts, and form two optimal point pairs with their corresponding test image points.
The voting flag is 1 and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
Specifically, the vectors of the template vector set and the test vector set in the above sub-steps point from the point with the smaller serial number to the point with the larger serial number.
Specifically, in step D, let the coordinates of the two optimal points of the template image point set found in step C be (x_r0, y_r0) and (x_r1, y_r1), and the coordinates of the two corresponding optimal points of the test image point set be (x_t0, y_t0) and (x_t1, y_t1).
[The rotation-translation transformation matrix and the case-split closed-form expressions for its entries appear as equation images in the original document and are not recoverable from the text.]
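Since the patent's closed-form entries for step D survive only as equation images, the following sketch uses the standard two-point rigid-fit derivation (the rotation angle is the angle between the two difference vectors, and the translation aligns the pair midpoints); it is an assumed reconstruction, not the patent's verbatim formula:

```python
import math

def transform_from_two_pairs(r0, r1, t0, t1):
    """Rigid rotation-translation H mapping test points onto template
    points, estimated from the two optimal point pairs of step C.
    Standard two-point closed form (assumed reconstruction)."""
    theta = (math.atan2(r1[1] - r0[1], r1[0] - r0[0])
             - math.atan2(t1[1] - t0[1], t1[0] - t0[0]))
    c, s = math.cos(theta), math.sin(theta)
    # Midpoints of the two pairs; the least-squares translation maps the
    # test midpoint exactly onto the template midpoint.
    mtx, mty = (t0[0] + t1[0]) / 2, (t0[1] + t1[1]) / 2
    mrx, mry = (r0[0] + r1[0]) / 2, (r0[1] + r1[1]) / 2
    tx = mrx - (c * mtx - s * mty)
    ty = mry - (s * mtx + c * mty)
    return [[c, -s, tx], [s, c, ty]]            # 2 x 3 matrix [R | t]

def map_point(h, p):
    """Apply the rotation-translation matrix to a point."""
    return (h[0][0] * p[0] + h[0][1] * p[1] + h[0][2],
            h[1][0] * p[0] + h[1][1] * p[1] + h[1][2])
```

With exact correspondences the recovered H maps both test points precisely onto their template points.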
the specific scheme of the step E for forming the consistent set and judging the image matching is that the step E comprises the following sub-steps,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, performing step I.
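The consensus screening of step E can be sketched as follows (the function name and the L1 deviation threshold value are illustrative assumptions):

```python
def build_consensus(test_pts, ref_pts, h, deviation_threshold):
    """Sketch of step E: screen coarse matching point pairs into a
    consensus set.  A pair is kept when the sum of absolute coordinate
    deviations between the H-mapped test point and its template point is
    below the threshold."""
    consensus = []
    for t, r in zip(test_pts, ref_pts):
        mx = h[0][0] * t[0] + h[0][1] * t[1] + h[0][2]   # E11: map by H
        my = h[1][0] * t[0] + h[1][1] * t[1] + h[1][2]
        deviation = abs(r[0] - mx) + abs(r[1] - my)      # point-pair deviation value
        if deviation < deviation_threshold:              # E12: keep or drop
            consensus.append((t, r))
    return consensus
```

The sum of absolute deviations is cheaper than the Euclidean distance used by conventional RANSAC while serving the same screening purpose.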
To obtain the updated rotation-translation transformation matrix that realizes image splicing, let the coordinates of the points of the consensus set that belong to the test image coarse matching point set be (x_t0, y_t0), (x_t1, y_t1), …, (x_tN, y_tN); the coordinates of their corresponding points of the template image coarse matching point set are then (x_r0, y_r0), (x_r1, y_r1), …, (x_rN, y_rN), i.e. the consensus set contains N + 1 matching point pairs.
[The closed-form expression for the updated rotation-translation transformation matrix of step G appears as an equation image in the original document; it is the least-squares rigid fit over all N + 1 point pairs.]
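Because the step G formula survives only as an equation image, the following sketch uses the standard closed-form 2-D rigid least-squares (Procrustes) solution over all consensus pairs, which is an assumed reconstruction: the rotation angle is obtained from the cross-covariance of the centered point sets, and the translation aligns the centroids.

```python
import math

def update_transform(pairs):
    """Sketch of step G: least-squares rigid fit over all consensus
    pairs, given as a list of ((x_t, y_t), (x_r, y_r)) tuples."""
    n = len(pairs)
    mtx = sum(t[0] for t, _ in pairs) / n          # test centroid
    mty = sum(t[1] for t, _ in pairs) / n
    mrx = sum(r[0] for _, r in pairs) / n          # template centroid
    mry = sum(r[1] for _, r in pairs) / n
    # Cross-covariance terms of the centered point sets.
    sxx = sum((t[0] - mtx) * (r[0] - mrx) + (t[1] - mty) * (r[1] - mry)
              for t, r in pairs)
    sxy = sum((t[0] - mtx) * (r[1] - mry) - (t[1] - mty) * (r[0] - mrx)
              for t, r in pairs)
    theta = math.atan2(sxy, sxx)                   # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = mrx - (c * mtx - s * mty)                 # translation aligns centroids
    ty = mry - (s * mtx + c * mty)
    return [[c, -s, tx], [s, c, ty]]
```

Using every consensus pair rather than only two points is what makes the updated matrix more stable from run to run.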
compared with the prior art, the technical effect of the image identification random sampling consistent algorithm based on voting decision and least square method of the invention is as follows:
the rotational translation transformation matrix calculated by the method is accurate, high in speed and stable, and the calculation result of each time cannot fluctuate randomly; the invention not only optimizes the image comparison method, but also is beneficial to the subsequent learning function of the data processing chip, namely, the characteristic information of the template image is continuously increased, thereby effectively improving the false rejection rate and enabling the data processing chip to be more intelligent; the invention also realizes image splicing more efficiently and accurately through the continuously updated rotation translation transformation matrix.
Drawings
FIG. 1 is a schematic flow chart of a preferred embodiment of the image recognition random sample consensus algorithm based on voting decision and the least square method.
Detailed Description
The preferred embodiments are described in further detail below with reference to the accompanying drawing.
The invention provides an image recognition random sample consensus algorithm based on voting decision and the least square method; the preferred embodiment explains the scheme using a fingerprint recognition process applied to device unlocking as an example. The image recognition RANSAC algorithm of the invention runs on hardware comprising a memory, a data processor, and an input device. Referring to step 101 of FIG. 1, fingerprint templates capable of unlocking the hardware device are pre-recorded in the memory; a fingerprint is input from the input device and compared against the fingerprint templates one by one using the image recognition RANSAC algorithm described in detail below. When the fingerprint is judged to match a fingerprint template, the hardware device is unlocked; otherwise, the fingerprint image under test cannot unlock it. The fingerprint template is the stored image template, and recording a fingerprint template is the process of storing the image feature points of the template image (Reference). As shown in step 102 of FIG. 1, the fingerprint input from the input device takes the form of an image, the Test image, which is represented as data by its feature points. Whether the Test image matches the template Reference image is decided by comparing the Test image data with the template Reference image data, i.e. by comparing the input fingerprint with the fingerprint image of the fingerprint template.
The Test image data and the template Reference image data are processed as follows:
A. as shown in steps 101 and 102 of fig. 1, feature points are selected from the fingerprint image as an input of the Test image to form a Test image feature point set, and feature points are selected from the template Reference image to form a template Reference image feature point set.
As shown in step 103 of fig. 1, finding out coarse matching points in the template Reference image feature point set for each feature point of the Test image, so that two corresponding coarse matching points belonging to the Test image feature point set and the template Reference image feature point set become coarse matching point pairs;
the preferred embodiment of the invention uses the feature from accessed Segment Test high-speed feature detection algorithm, called FAST algorithm for short, to extract feature points and find out the x and y coordinates and main direction of each feature point and the 128-bit and 16-byte ultrashort binary descriptor, called USB descriptor for short. The method for solving the characteristic points, the coordinate main direction of the characteristic points and the USB descriptor is well planned by a program. For each feature point in the Test image, a corresponding matching point is searched in the template Reference image feature point set, that is, each feature point in the Test image and all feature points in the template Reference image respectively carry out 128-bit USB descriptor discrimination Hamming distance search on the feature points and the matching points. And screening out the final matching point pairs according to a set program. However, the matching point pair obtained in the above process is a coarse matching point pair, some mismatching points exist, and the correctness of the matching point pair cannot be predicted, so it is necessary to perform fine matching and false and true removing by the random sample consensus RANSAC algorithm for image recognition provided by the present invention.
As shown in fig. 1, if the number of the rough matching point pairs is greater than the set rough matching threshold, performing step B; otherwise, performing step I.
In the preferred embodiment of the invention, the coarse matching threshold is set to 20 pairs: if the number of coarse matching point pairs is less than 20, the two images are judged not to match and fingerprint unlocking is refused; if it exceeds 20 pairs, the subsequent steps continue and the coarse matching point pairs are sifted to remove outliers. Typically, the number of coarse matching point pairs between two 88 × 88-pixel images taken from the same finger can reach 150 to 500 or more.
B. As shown in step 104 in fig. 1, the coarse matching points in the coarse matching point pair belonging to the Test image feature point set constitute a Test image coarse matching point set testset, and the coarse matching points belonging to the template Reference image feature point set constitute a template image coarse matching point set Reference.
For two 88 × 88 pixel images whose refset and testset come from the same finger, the number of coarse matching point pairs entering the subsequent steps can often reach 150 to 500 or even more. The coordinates (x, y) and main direction θ of each coarse matching point are already known from the extraction stage.
C. As shown in step 105 of fig. 1, the voting decision method is used to find the best points with the highest votes and corresponding to each other in the coarse matching point set of the test image and the coarse matching point set of the template image, and form the best point pairs. The point pairs are composed of points in the test image rough matching point set and points in the template image rough matching point set.
The voting decision method can adopt distance voting, vector angle voting, or a combination of the two. When the two are combined, the order of distance voting and vector angle voting is in principle unrestricted. The preferred embodiment of the present invention combines them by performing distance voting first and then vector angle voting, and both methods are described in detail below. Using either method alone, or the variant of performing vector angle voting first and then distance voting and linking the two, will be apparent to those skilled in the art from the detailed description of the preferred embodiment.
And (C) adopting a distance voting method, wherein the step C comprises the following sub-steps:
C11. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C12. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C13. judging one by one whether the difference between each corresponding template coarse matching point distance and test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting the two elements of the point pair distance voting matrix whose row and column numbers are the serial numbers of the two points defining the template coarse matching point distance as voting marks; if not, setting those two elements as non-voting marks;
C14. selecting the points in the template image coarse matching point set corresponding to the serial numbers of the two rows ranked first and second by number of voting marks, and forming two pairs of optimal points with the corresponding points in the test image coarse matching point set;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
In the preferred embodiment of the invention, the importance ptImportance of each point pair across the template image coarse matching point set (refset for short) and the test image coarse matching point set testset input to the image recognition random sample consensus RANSAC algorithm is found by the voting decision method. For each point pair, the Euclidean distances to all other point pairs are judged in turn, checking each time whether the spatial Euclidean distances of the four points involved are consistent: if the difference between the Euclidean distance of two points in refset and the Euclidean distance of the two corresponding points in testset is smaller than a set coarse matching distance difference threshold ransac_Dist (set to 2 in the preferred embodiment of the invention), a vote is cast. The "voting" is recorded in a constructed matrix that measures importance according to distance, hereinafter called the D matrix: the corresponding position is marked 1, the row number and column number of the matrix being the indices of the two point pairs respectively.
For example: the distance between the ith and jth points in refset is Distance1, and the distance between the ith and jth points in testset is Distance2; if the absolute difference between Distance1 and Distance2 is less than the threshold ransac_Dist = 2, the element in row i, column j of the D matrix is set to 1, otherwise to 0. After traversing every point pair, the D (distance) matrix is obtained; the number of 1s in row i of the D matrix, denoted ptImportance[i], represents the importance of the ith point pair.
A possible distribution of the D matrix obtained in the preferred embodiment of the present invention is as follows, where n is the number of matching point pairs between the point sets refset and testset input to the image recognition random sample consensus RANSAC algorithm; the matrix is symmetric about its main diagonal, whose elements take no value. In the preferred embodiment of the present invention, each matrix element is 0 or 1, the voting flag being 1 and the non-voting flag 0.
(Illustration of the n × n D matrix, indexed by point pair serial number, omitted.)
The more 1s in row i of the matrix, i.e. the larger the value of ptImportance[i], the more point pairs satisfy the basic Euclidean distance condition with the ith point pair; that is, the more votes the ith point pair receives, the more important it is considered and the more likely it is to be a correct point pair. This is the distance voting method, i.e. the scheme of sub-steps C11 to C14 above. The two points of the template image corresponding to the serial numbers of the two rows whose ptImportance[i] values rank first and second, together with the two corresponding points of the test image, form the two pairs of optimal points.
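The distance voting of sub-steps C11 to C14 can be sketched as follows. This is a minimal sketch assuming the point sets are given as aligned lists of (x, y) tuples; the threshold value 2 follows the preferred embodiment, while the function name `distance_vote` is illustrative.

```python
import math

def distance_vote(refset, testset, ransac_dist=2.0):
    """Build the D matrix of sub-steps C11-C14: vote 1 when the distance
    between points i and j in refset and the distance between the two
    corresponding points in testset differ by less than ransac_dist."""
    m = len(refset)
    D = [[0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            if i == j:
                continue  # main-diagonal elements take no value
            d_ref = math.dist(refset[i], refset[j])
            d_test = math.dist(testset[i], testset[j])
            if abs(d_ref - d_test) < ransac_dist:
                D[i][j] = 1
    importance = [sum(row) for row in D]  # ptImportance[i] per row
    return D, importance
```

The two rows with the largest `importance` values then select the two pairs of optimal points described in sub-step C14.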
However, in actual fingerprint identification the importance values ptImportance of the point pairs do not show a clear bipolar separation, and considering factors such as rotation of the fingerprint image feature points, judging by Euclidean distance alone — i.e. by the distance voting method alone — is not sufficient. An A (angle) matrix must be added on this basis: an importance matrix measured according to the included angle between two vectors, i.e. the vector included angle voting method.
And C, combining distance voting and angle voting, wherein the step C comprises the following sub-steps:
C21. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C22. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C23. judging one by one whether the difference between each corresponding template coarse matching point distance and test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting the two elements of the point pair distance voting matrix whose row and column numbers are the serial numbers of the two points defining the template coarse matching point distance as voting marks; if not, setting those two elements as non-voting marks;
C24. selecting, from the point pair distance voting matrix, the Z points of the template image coarse matching point set represented by the row serial numbers of the Z rows ranked in the top Z by number of voting marks, to form a template image initial judgment point set; the Z corresponding points of the test image coarse matching point set form a test image initial judgment point set, Z being a natural number not less than 2;
C25. acquiring vectors between any two points in a template image initial judgment point set to form a template vector set with Y vectors, wherein Y is the combination number of the two points taken from Z points, correspondingly acquiring the vectors between any two points in a test image initial judgment point set to form a test vector set with Y vectors, and enabling the template vectors to correspond to the test vectors one by one according to the corresponding relation of the points between the template image initial judgment point set and the test image initial judgment point set;
C26. setting serial numbers for Y vectors of a template vector set, and sequentially setting and constructing a vector included angle voting matrix of Y rows and Y columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C27. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C28. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The process of the substeps C21 to C24 is a distance voting method, and the process of the substeps C25 to C28 is a vector angle voting method.
The method of finding the point pairs with larger ptImportance[i] by means of the D matrix, i.e. the distance voting method, has been described in detail above. On that basis, the first 10 point pairs with the largest ptImportance[i] values (the per-row counts of voting flags 1 in the D matrix) are found, and their importance is further judged by means of the A matrix; that is, Z equals 10 in steps C21 to C28. Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image coarse matching point set. The 10 points with the largest ptImportance[i] values are taken out and rebuilt into two point sets: the template image initial judgment point set refset2 containing 10 points, and the test image initial judgment point set testset2 containing 10 points. Every two of the 10 points in refset2 form a vector, whose direction is defined as pointing from the point with the smaller index to the point with the larger index, giving 45 vectors — the number of combinations of 2 out of 10 — i.e. Y in steps C21 to C28 is 45. Likewise testset2 yields 45 corresponding vectors, one for each vector of refset2. For each pair of vectors, the included angle with every other pair of vectors is judged in turn, checking each time whether the angular relationships of the spatial positions of the four vectors match: if the difference between the angle of two vectors in refset2 and the angle of the two corresponding vectors in testset2 is less than a set vector included angle difference threshold ransac_angle, a "vote" is cast. In the preferred embodiment of the present invention, ransac_angle is set to 2 degrees.
A vector included angle voting matrix, the A matrix, is constructed to measure importance according to angle; the row number and column number of the A matrix are the sequence indices index_vector of the two vector pairs respectively. For example: the angle between the ith and jth vectors in refset2 is angle1, and the angle between the ith and jth vectors in testset2 is angle2; if the absolute difference between angle1 and angle2 is less than the vector included angle difference threshold of 2 degrees, the element in row i, column j of the A matrix is set to 1, otherwise to 0. Traversing every pair of vectors yields the A matrix; the number of 1s in row i of the A matrix, denoted vectorImportance[i], represents the importance of the ith vector pair.
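The construction of the A matrix can be sketched as follows. This is an illustrative sketch assuming refset2 and testset2 are lists of (x, y) tuples with no two selected points coincident (so no zero-length vectors); the 2-degree threshold follows the preferred embodiment, and the function names are assumptions.

```python
import itertools
import math

def pair_vectors(points):
    """All C(n, 2) vectors, directed from the smaller index to the larger."""
    return [(points[j][0] - points[i][0], points[j][1] - points[i][1])
            for i, j in itertools.combinations(range(len(points)), 2)]

def angle_between(u, v):
    """Unsigned included angle between two vectors, in degrees."""
    dot = u[0] * v[0] + u[1] * v[1]
    nu, nv = math.hypot(*u), math.hypot(*v)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def angle_vote(refset2, testset2, ransac_angle=2.0):
    """Build the A matrix: vote 1 when the included angle of vector pair
    (i, j) in refset2 matches that of the corresponding pair in testset2."""
    rv, tv = pair_vectors(refset2), pair_vectors(testset2)
    y = len(rv)                      # Y = C(Z, 2), e.g. 45 for Z = 10
    A = [[0] * y for _ in range(y)]
    for i in range(y):
        for j in range(y):
            if i != j and abs(angle_between(rv[i], rv[j])
                              - angle_between(tv[i], tv[j])) < ransac_angle:
                A[i][j] = 1
    return A, [sum(row) for row in A]  # vectorImportance[i] per row
```

Because included angles are preserved under rotation and translation, vectors built from correctly matched points vote for one another even when the test image is rotated relative to the template.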
A possible distribution of the A matrix so obtained is as follows, where n = 45 is the number of vector pairs constructed above; the matrix is symmetric about its main diagonal, whose elements take no value. In the preferred embodiment of the present invention, each matrix element is 0 or 1, the voting flag being 1 and the non-voting flag 0.
(Illustration of the 45 × 45 A matrix, indexed by vector pair serial number, omitted.)
The largest element vectorImportance[i] of the vectorImportance array is found; the H matrix calculated from the corresponding vector pair (i.e. from two points in refset2 and two points in testset2) is the theoretically optimal rotation-translation transformation matrix.
Obviously, the voting decision method can also adopt a vector angle voting method alone, and a method of voting by vector angle first and then distance.
The voting decision method independently adopts a vector angle voting method, and the step C comprises the following sub-steps:
C31. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C32. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C33. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C34. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The voting decision method adopts a method of voting at a vector included angle first and then voting at a distance, and the step C comprises the following sub-steps:
C41. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C42. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C43. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C44. selecting the end points of the W vectors represented by the row serial numbers of the W rows ranked in the top W by number of voting marks in the vector included angle voting matrix as a template image initial judgment point set; the end points of the W corresponding vectors in the test image coarse matching point set form a test image initial judgment point set, W being a natural number not less than 2;
C45. setting serial numbers for the N points of the template image initial judgment point set, where N ≤ 2 × W because end points of different vectors may coincide, and setting a point pair distance voting matrix of N rows and N columns, the non-main-diagonal elements of which are voting elements;
C46. acquiring template initial judgment point distances between any point in a template image initial judgment point set comprising N points and other points and between any point in a test image initial judgment point set comprising N points and other points, and enabling each template initial judgment point distance to correspond to the test initial judgment point distance one by one according to the corresponding relation of the sequence numbers of the template image initial judgment point set and each point of the test image initial judgment point set;
C47. judging whether the difference value between the corresponding template initial judgment point distance and the test initial judgment point distance is smaller than a set initial judgment distance difference threshold value one by one, if so, setting two elements of the serial numbers of two points corresponding to the template initial judgment point distance as a row number and a column number in the point pair distance voting matrix as voting marks; if not, setting two elements in the point pair distance voting matrix, which take the serial numbers of the two points as row numbers and column numbers, as non-voting marks;
C48. selecting the points of the template image corresponding to the serial numbers of the two rows ranked first and second by number of voting marks, and forming two pairs of optimal points with the corresponding points of the test image.
The voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
The directions of the vectors in the template vector set and the test vector set are from the point with the smaller sequence number to the point with the larger sequence number.
After finding the optimal point pair through voting decision, obtaining a rotation-translation transformation matrix H through the optimal point pair, namely step D:
D. and calculating a rotation translation transformation matrix by using the optimal point pairs corresponding to the test image point set and the template image point set based on a least square method principle.
In the preferred embodiment of the present invention, the H matrix is calculated using the two best point pairs found, i.e. the coordinates of four points: $(x_{r0},y_{r0})$, $(x_{r1},y_{r1})$, $(x_{t0},y_{t0})$ and $(x_{t1},y_{t1})$. The rotation-translation transformation matrix H is calculated using the least square method principle, where the 3 unknowns of the H matrix must be solved: the rotation angle $\theta$ and the translation amounts $dx$ and $dy$. Rotating and translating the two testset points $(x_{t0},y_{t0})$ and $(x_{t1},y_{t1})$ by the H matrix yields the corresponding mapped points $(x'_{t0},y'_{t0})$ and $(x'_{t1},y'_{t1})$:

$$x'_{ti} = x_{ti}\cos\theta - y_{ti}\sin\theta + dx, \qquad y'_{ti} = x_{ti}\sin\theta + y_{ti}\cos\theta + dy, \qquad i = 0, 1.$$

In order that the two mapped points after rotation and translation by the H matrix deviate as little as possible from the two refset points $(x_{r0},y_{r0})$, $(x_{r1},y_{r1})$, i.e. in order that

$$\mathrm{error} = (x_{t0}\cos\theta - y_{t0}\sin\theta + dx - x_{r0})^2 + (x_{t0}\sin\theta + y_{t0}\cos\theta + dy - y_{r0})^2 + (x_{t1}\cos\theta - y_{t1}\sin\theta + dx - x_{r1})^2 + (x_{t1}\sin\theta + y_{t1}\cos\theta + dy - y_{r1})^2$$

be minimal, the partial derivatives of error with respect to the rotation angle $\theta$ and the translation amounts $dx$, $dy$ are set to zero.
From $\partial\,\mathrm{error}/\partial\,dx = 0$ it is obtained that

$$2(x_{t0}\cos\theta - y_{t0}\sin\theta + dx - x_{r0}) + 2(x_{t1}\cos\theta - y_{t1}\sin\theta + dx - x_{r1}) = 0,$$

which is solved as

$$dx = \tfrac{1}{2}\left[(x_{r0}+x_{r1}) - (x_{t0}+x_{t1})\cos\theta + (y_{t0}+y_{t1})\sin\theta\right].$$

From $\partial\,\mathrm{error}/\partial\,dy = 0$ it is obtained, in the same way, that

$$dy = \tfrac{1}{2}\left[(y_{r0}+y_{r1}) - (x_{t0}+x_{t1})\sin\theta - (y_{t0}+y_{t1})\cos\theta\right].$$
From $\partial\,\mathrm{error}/\partial\theta = 0$, substituting the expressions for $dx$ and $dy$ gives

$$(2x_{t0}\cos\theta - 2y_{t0}\sin\theta - x_{t0}\cos\theta + y_{t0}\sin\theta + x_{r1} - x_{t1}\cos\theta + y_{t1}\sin\theta - x_{r0})\cdot(-x_{t0}\sin\theta - y_{t0}\cos\theta) + (2x_{t0}\sin\theta + 2y_{t0}\cos\theta - x_{t0}\sin\theta - y_{t0}\cos\theta + y_{r1} - x_{t1}\sin\theta - y_{t1}\cos\theta - y_{r0})\cdot(x_{t0}\cos\theta - y_{t0}\sin\theta) + (2x_{t1}\cos\theta - 2y_{t1}\sin\theta + x_{r0} - x_{t0}\cos\theta + y_{t0}\sin\theta - x_{t1}\cos\theta + y_{t1}\sin\theta - x_{r1})\cdot(-x_{t1}\sin\theta - y_{t1}\cos\theta) + (2x_{t1}\sin\theta + 2y_{t1}\cos\theta + y_{r0} - x_{t0}\sin\theta - y_{t0}\cos\theta - x_{t1}\sin\theta - y_{t1}\cos\theta - y_{r1})\cdot(x_{t1}\cos\theta - y_{t1}\sin\theta) = 0,$$

which simplifies to

$$[\sin\theta\,(y_{t1}-y_{t0}) + \cos\theta\,(x_{t0}-x_{t1}) + (x_{r1}-x_{r0})]\cdot(-x_{t0}\sin\theta - y_{t0}\cos\theta) + [\sin\theta\,(x_{t0}-x_{t1}) + \cos\theta\,(y_{t0}-y_{t1}) + (y_{r1}-y_{r0})]\cdot(x_{t0}\cos\theta - y_{t0}\sin\theta) + [\sin\theta\,(y_{t0}-y_{t1}) + \cos\theta\,(x_{t1}-x_{t0}) + (x_{r0}-x_{r1})]\cdot(-x_{t1}\sin\theta - y_{t1}\cos\theta) + [\sin\theta\,(x_{t1}-x_{t0}) + \cos\theta\,(y_{t1}-y_{t0}) + (y_{r0}-y_{r1})]\cdot(x_{t1}\cos\theta - y_{t1}\sin\theta) = 0.$$

Merging, simplifying, and using $\sin^2\theta + \cos^2\theta = 1$ yields

$$(x_{r1}-x_{r0})\cdot(-x_{t0}\sin\theta - y_{t0}\cos\theta) + (y_{r1}-y_{r0})\cdot(x_{t0}\cos\theta - y_{t0}\sin\theta) + (x_{r0}-x_{r1})\cdot(-x_{t1}\sin\theta - y_{t1}\cos\theta) + (y_{r0}-y_{r1})\cdot(x_{t1}\cos\theta - y_{t1}\sin\theta) = 0,$$

which is finally arranged as

$$\sin\theta\left[(x_{r1}-x_{r0})(x_{t1}-x_{t0}) + (y_{r1}-y_{r0})(y_{t1}-y_{t0})\right] = \cos\theta\left[(y_{r1}-y_{r0})(x_{t1}-x_{t0}) - (x_{r1}-x_{r0})(y_{t1}-y_{t0})\right],$$

namely

$$\tan\theta = \frac{(y_{r1}-y_{r0})(x_{t1}-x_{t0}) - (x_{r1}-x_{r0})(y_{t1}-y_{t0})}{(x_{r1}-x_{r0})(x_{t1}-x_{t0}) + (y_{r1}-y_{r0})(y_{t1}-y_{t0})}.$$

Writing the right-hand side as $N/D$, the specific cases are: ① when $D > 0$, $\theta = \arctan(N/D)$; ② when $D < 0$, $\theta = \arctan(N/D) + \pi$.
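The least-squares solution for θ, dx and dy from two point pairs can be sketched in code. This is an illustrative sketch, not the patented program; using `atan2` of the numerator and denominator handles both quadrant cases in a single call.

```python
import math

def solve_h_two_pairs(r0, r1, t0, t1):
    """Least-squares rotation angle and translation mapping test points
    t0, t1 onto reference points r0, r1 (the two best point pairs)."""
    dxr, dyr = r1[0] - r0[0], r1[1] - r0[1]
    dxt, dyt = t1[0] - t0[0], t1[1] - t0[1]
    # tan(theta) = (dyr*dxt - dxr*dyt) / (dxr*dxt + dyr*dyt);
    # atan2 picks the correct quadrant, covering both cases of the text.
    theta = math.atan2(dyr * dxt - dxr * dyt, dxr * dxt + dyr * dyt)
    c, s = math.cos(theta), math.sin(theta)
    dx = ((r0[0] + r1[0]) - (t0[0] + t1[0]) * c + (t0[1] + t1[1]) * s) / 2
    dy = ((r0[1] + r1[1]) - (t0[0] + t1[0]) * s - (t0[1] + t1[1]) * c) / 2
    return theta, dx, dy
```

With exact data the formulas recover the true transformation: a test segment rotated by 90° and shifted by (2, 3) yields exactly those parameters back.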
The acquisition of the roto-translational transformation matrix enables the formation of a consistent set by the following step E.
E. As shown in step 106 of fig. 1, screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by using a rotation-translation transformation matrix to form a consistent set;
if the number of the point pairs in the consistent set is not less than the set threshold value of the number of the consistent point pairs, performing the step F; otherwise, performing the step I;
in the preferred embodiment of the present invention, the specific scheme for constructing the consistent set and determining the image matching in step E is that step E comprises the following sub-steps,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
after all points in the set of coarse match points for the test image have passed through substeps E11 through E12, a consistent set is obtained for substep E2,
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, performing step I.
In the preferred embodiment of the present invention, a consistent set (consensus set) is found according to the calculated rotation-translation transformation H matrix. For the point pairs already matched between refset and testset, i.e. the coarse matching point pairs, the sum of the absolute values of the deviations between the H-mapped coordinates of the testset point of each pair and the coordinates of the corresponding refset point is computed as

$$\mathrm{err} = |x_{ri} - x_{ti}\cos\theta + y_{ti}\sin\theta - dx| + |y_{ri} - x_{ti}\sin\theta - y_{ti}\cos\theta - dy|.$$
The consistency set threshold is set to 4, i.e. the ith point pair is added to the consistency set when err of the ith point pair is < 4.
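Step E's construction of the consistent set can be sketched as follows. This is a minimal sketch assuming refset and testset are aligned lists of coarse-match coordinates; the threshold 4 and the sum-of-absolute-deviations err follow the preferred embodiment, while the function name is illustrative.

```python
import math

def build_consensus(refset, testset, theta, dx, dy, thresh=4.0):
    """Step E: keep the coarse pairs whose testset point, mapped by H,
    lands within `thresh` (sum of absolute deviations) of its refset point."""
    c, s = math.cos(theta), math.sin(theta)
    consensus = []
    for (xr, yr), (xt, yt) in zip(refset, testset):
        err = (abs(xr - (xt * c - yt * s + dx))
               + abs(yr - (xt * s + yt * c + dy)))
        if err < thresh:
            consensus.append(((xr, yr), (xt, yt)))
    return consensus
```

The length of the returned list is then compared against the consistent point pair number threshold to decide between step F (match) and step I (no match).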
Whether the images are matched can be judged through the acquired consistent set, and whether the fingerprints are matched can be judged for the preferred embodiment of the invention, namely the following steps F,
F. in step 107 shown in FIG. 1, it is determined that the test image matches the template image.
The end point of the image recognition random sample consensus RANSAC algorithm of the invention is the following step J,
J. the algorithm ends, as shown in step 110 of fig. 1.
The image recognition random sample consensus RANSAC algorithm of the invention as described above completes the removal of false coarse matching points and retention of true ones, i.e. the image comparison and discrimination process. As an extension, in order to implement image stitching, the accuracy of the rotation-translation transformation matrix is increased: an updated rotation-translation transformation matrix, the H' matrix, is recomputed from all the point pairs in the consistent set. With the updated H' matrix the image comparison precision is further improved, image stitching is implemented, and the template image range is expanded. Thus, as shown in fig. 1, the following steps may be added between step F and step J above,
G. the updated roto-translational transformation matrix is computed using the point pairs in the consistent set, step 108 shown in fig. 1.
The preferred embodiment of the present invention recalculates the updated rotation-translation transformation matrix, the H' matrix, using all point pairs in the consistent set. Let the number of matching point pairs in the consistent set be N + 1, i.e. the point pairs carry subscripts 0 to N.

In order that the N + 1 mapped points $(x'_{t0},y'_{t0}),\ldots,(x'_{tN},y'_{tN})$ obtained after rotation and translation by the H' matrix deviate as little as possible from the N + 1 matching points $(x_{r0},y_{r0}), (x_{r1},y_{r1}), \ldots, (x_{rN},y_{rN})$ in refset, i.e. in order that the error function — the sum of the squared Euclidean distances —

$$\mathrm{error} = \sum_{i=0}^{N}\left[(x_{ti}\cos\theta' - y_{ti}\sin\theta' + dx' - x_{ri})^2 + (x_{ti}\sin\theta' + y_{ti}\cos\theta' + dy' - y_{ri})^2\right]$$

be minimized, the partial derivatives of error with respect to the rotation angle $\theta'$ and the translation amounts $dx'$, $dy'$ are all set to zero.
From $\partial\,\mathrm{error}/\partial\,dx' = 0$ it is obtained that

$$\sum_{i=0}^{N}(x_{ti}\cos\theta' - y_{ti}\sin\theta' + dx' - x_{ri}) = 0,$$

which is solved as

$$dx' = \frac{1}{N+1}\left[\sum_{i=0}^{N} x_{ri} - \cos\theta'\sum_{i=0}^{N} x_{ti} + \sin\theta'\sum_{i=0}^{N} y_{ti}\right].$$

From $\partial\,\mathrm{error}/\partial\,dy' = 0$ it is obtained, in the same way, that

$$dy' = \frac{1}{N+1}\left[\sum_{i=0}^{N} y_{ri} - \sin\theta'\sum_{i=0}^{N} x_{ti} - \cos\theta'\sum_{i=0}^{N} y_{ti}\right].$$

From $\partial\,\mathrm{error}/\partial\theta' = 0$, substituting the expressions for $dx'$ and $dy'$, merging and simplifying with $\sin^2\theta' + \cos^2\theta' = 1$, collecting like terms, multiplying both sides of the equation by the number N + 1 of point pairs in the consistent set, and placing the polynomial containing the sin function to the left of the equal sign and the polynomial containing the cos function to the right, the coefficient of $\sin\theta'$ on the left is

$$(N+1)\sum_{i=0}^{N}(x_{ri}x_{ti} + y_{ri}y_{ti}) - \left(\sum_{i=0}^{N} x_{ri}\right)\left(\sum_{i=0}^{N} x_{ti}\right) - \left(\sum_{i=0}^{N} y_{ri}\right)\left(\sum_{i=0}^{N} y_{ti}\right)$$

and the coefficient of $\cos\theta'$ on the right is

$$(N+1)\sum_{i=0}^{N}(y_{ri}x_{ti} - x_{ri}y_{ti}) - \left(\sum_{i=0}^{N} y_{ri}\right)\left(\sum_{i=0}^{N} x_{ti}\right) + \left(\sum_{i=0}^{N} x_{ri}\right)\left(\sum_{i=0}^{N} y_{ti}\right).$$

Denoting the single-sum accumulations and the products of sums by A1, B1 and A, B respectively, the coefficients of the sin and cos functions may be implemented with two for loops: the single sums accumulate in one pass from i = 0 to N, while each product of sums expands into a double sum (an inner loop over k from 0 to N inside an outer loop over i from 0 to N).

Namely

$$\tan\theta' = \frac{(N+1)\sum_{i}(y_{ri}x_{ti} - x_{ri}y_{ti}) - \sum_{i} y_{ri}\sum_{i} x_{ti} + \sum_{i} x_{ri}\sum_{i} y_{ti}}{(N+1)\sum_{i}(x_{ri}x_{ti} + y_{ri}y_{ti}) - \sum_{i} x_{ri}\sum_{i} x_{ti} - \sum_{i} y_{ri}\sum_{i} y_{ti}}.$$
Combining the previously calculated translation amounts dx ' and dy ' to construct an updated rotational-translational transformation H ' matrix,
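The closed-form least-squares update of step G can be sketched as follows. This is an illustrative reconstruction: instead of explicit nested for loops, the products of sums are computed directly from the accumulated single sums, which is algebraically equivalent, and `atan2` resolves the quadrant of θ'.

```python
import math

def update_h(pairs):
    """Recompute theta', dx', dy' by least squares over all N + 1
    consistent pairs ((xr, yr), (xt, yt)); closed form, no iteration."""
    n = len(pairs)
    sxr = sum(p[0][0] for p in pairs); syr = sum(p[0][1] for p in pairs)
    sxt = sum(p[1][0] for p in pairs); syt = sum(p[1][1] for p in pairs)
    cross = sum(yr * xt - xr * yt for (xr, yr), (xt, yt) in pairs)
    dot = sum(xr * xt + yr * yt for (xr, yr), (xt, yt) in pairs)
    # sin coefficient: n*dot - (sxr*sxt + syr*syt);
    # cos coefficient: n*cross - (syr*sxt - sxr*syt).
    theta = math.atan2(n * cross - syr * sxt + sxr * syt,
                       n * dot - sxr * sxt - syr * syt)
    c, s = math.cos(theta), math.sin(theta)
    dx = (sxr - sxt * c + syt * s) / n
    dy = (syr - sxt * s - syt * c) / n
    return theta, dx, dy
```

With noise-free input generated by a known 30° rotation and a (1, 2) translation, the update recovers those parameters exactly (up to floating-point error), which is a useful sanity check for any implementation.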
the image stitching can be realized by using the updated rotation translation transformation H' matrix, namely the following steps I,
I. and mapping points in the test image feature point set into updated splicing feature points by means of the updated rotation translation transformation matrix, and adding the splicing feature points into the template image feature point set.
In the preferred embodiment of the present invention, as shown in step 109 in fig. 1, the feature points in the fingerprint image to be tested are spliced into the template fingerprint image through the following rotation-translation transformation.
Each feature point $(x_{t}, y_{t})$ of the fingerprint image to be tested is transformed by the updated rotation-translation transformation H', and the resulting point coordinates are added to the feature point coordinates of the template fingerprint image.

Feature points of the image to be tested whose coordinates, after mapping by the updated rotation-translation transformation H' matrix, deviate from their matching points by less than a certain threshold are not spliced.
Most fingerprint identification chips demanded by the current mobile phone market have a small sensing area, so the fingerprint information recorded in a template each time usually covers only a small part of the finger. After two images are matched, if the matching score is high, that is, the number N + 1 of matching point pairs in the consensus set of the algorithm is large, the two images can be considered to come from the same real finger, and the calculated updated rotation-translation transformation matrix H′ can be used to splice them into a larger fingerprint image, containing more feature points of the real finger, to serve as the template. The invention also supports the subsequent learning function of the fingerprint identification chip: fingerprint information of the real finger is continuously accumulated, and as the finger becomes subtly wetter or drier between winter and summer, the latest state information of the finger updates and replaces the old information, which effectively reduces the false rejection rate and makes the chip more intelligent.
Experiments on the Android system show that, for five hundred or fewer matching point pairs, discarding the false pairs, retaining the true ones and accurately solving the H matrix takes less than two milliseconds. Experiments also show that the rotation-translation transformation matrix H calculated by this random sample consensus (RANSAC) algorithm for image recognition is accurate, fast and stable, and the calculation result does not fluctuate randomly from run to run.

Claims (11)

1. An image recognition random sampling consensus algorithm based on voting decision and the least square method, in which one of the stored image templates serves as a template image, an input image serves as a test image, and whether the test image matches the template image is determined by comparing the feature point information of the test image with the feature point information of each template image; characterized in that the feature point information of the test image and the feature point information of each template image are processed by the following steps:
A. selecting characteristic points in the test image to form a test image characteristic point set, and selecting characteristic points in the template image to form a template image characteristic point set;
finding out coarse matching points for each feature point of the test image in the template image feature point set, so that two corresponding coarse matching points belonging to the test image feature point set and the template image feature point set become coarse matching point pairs;
if the number of the rough matching point pairs is larger than the set rough matching threshold, performing the step B; otherwise, performing the step I;
B. the rough matching points belonging to the test image feature point set in the rough matching point pair form a test image rough matching point set, and the rough matching points belonging to the template image feature point set form a template image rough matching point set;
C. finding, by a voting decision method, the mutually corresponding points with the highest number of votes in the test image coarse matching point set and the template image coarse matching point set, to form optimal point pairs;
D. calculating a rotation translation transformation matrix by using an optimal point pair consisting of the test image rough matching point set and the template image rough matching point set based on a least square method principle;
E. screening point pairs from the coarse matching point set of the test image and the coarse matching point set of the template image by means of a rotation translation transformation matrix to form a consistent set;
if the number of the consistent set inner point pairs is larger than the set consistent point pair number threshold, performing step F; otherwise, performing the step I;
F. judging that the test image is matched with the template image;
J. the algorithm ends.
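The control flow of steps A–F above can be sketched as a small driver. This is a hypothetical Python sketch, not the patented implementation: the voting decision, least-squares fit and residual computation are supplied as callables, and all names are illustrative assumptions:

```python
def ransac_match(coarse_pairs, coarse_thr, consensus_thr,
                 pick_best, fit, residual, eps):
    """Steps A-F of the matching algorithm: gate on the number of coarse
    matches, pick the best pairs by voting, fit a rotation-translation
    transform, then screen a consensus set against it."""
    if len(coarse_pairs) <= coarse_thr:       # step A gate: too few matches
        return False, []
    best_pairs = pick_best(coarse_pairs)      # step C: voting decision
    H = fit(best_pairs)                       # step D: least-squares transform
    inliers = [p for p in coarse_pairs
               if residual(H, p) < eps]       # step E: consensus screening
    return len(inliers) > consensus_thr, inliers   # step F verdict
```

With an identity transform and exact matches, every coarse pair survives the screening and the match succeeds.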
2. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the following steps are added between the step F and the step J,
G. calculating an updated rotation translation transformation matrix by using point pairs in a consistent set based on a least square method principle;
I. mapping the points in the test image feature point set into updated splicing feature points by means of the updated rotation-translation transformation matrix, and adding the updated splicing feature points to the template image feature point set.
3. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C11. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C12. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C13. judging one by one whether the difference between the corresponding template coarse matching point distance and test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to that template coarse matching point distance as voting marks; if not, setting those two elements as non-voting marks;
C14. selecting the points of the template image with the serial numbers of two rows with the number of the voting marks arranged in the first two bits, and forming two pairs of optimal points with the points corresponding to the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
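Substeps C11–C14 amount to voting on pairwise distances. The following is a minimal Python sketch under the assumption that the two point sets are already index-aligned by the coarse matching; the function and variable names are illustrative, not from the patent:

```python
import math

def distance_vote(template_pts, test_pts, dist_thr):
    """Build the M x M point-pair distance voting matrix (off-diagonal
    entries are the voting elements) and return the serial numbers of the
    two template points whose rows collect the most voting marks."""
    M = len(template_pts)
    votes = [[0] * M for _ in range(M)]
    for i in range(M):
        for j in range(i + 1, M):
            d_tpl = math.dist(template_pts[i], template_pts[j])
            d_tst = math.dist(test_pts[i], test_pts[j])
            if abs(d_tpl - d_tst) < dist_thr:
                votes[i][j] = votes[j][i] = 1    # voting mark
    by_votes = sorted(range(M), key=lambda r: sum(votes[r]), reverse=True)
    return by_votes[:2]      # points forming the two optimal point pairs
```

Since rigid motion preserves pairwise distances, rows belonging to genuinely matched points accumulate many voting marks, while a mismatched point votes with almost no one.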
4. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C21. setting serial numbers for M points of a template image coarse matching point set, sequentially setting a point pair distance voting matrix of M rows and M columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C22. acquiring the template rough matching point distance between any point in the template image rough matching point set and other points and the test rough matching point distance between any point in the test image rough matching point set and other points, and enabling the template rough matching point distances and the test rough matching point distances to be in one-to-one correspondence according to the serial number corresponding relation of the template image rough matching point set and each rough matching point of the test image rough matching point set;
C23. judging one by one whether the difference between the corresponding template coarse matching point distance and test coarse matching point distance is smaller than a set coarse matching distance difference threshold; if so, setting the two elements in the point pair distance voting matrix whose row number and column number are the serial numbers of the two points corresponding to that template coarse matching point distance as voting marks; if not, setting those two elements as non-voting marks;
C24. selecting Z points represented by the line sequence numbers of the Z rows arranged at the front Z position in the voting mark number from the point pair distance voting matrix to form a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the Z points corresponding to the test image rough matching point set, and Z is a natural number not less than 2;
C25. acquiring vectors between any two points in a template image initial judgment point set to form a template vector set with Y vectors, wherein Y is the combination number of the two points taken from Z points, correspondingly acquiring the vectors between any two points in a test image initial judgment point set to form a test vector set with Y vectors, and enabling the template vectors to correspond to the test vectors one by one according to the corresponding relation of the points between the template image initial judgment point set and the test image initial judgment point set;
C26. setting serial numbers for Y vectors of a template vector set, and sequentially setting and constructing a vector included angle voting matrix of Y rows and Y columns, wherein elements of non-main diagonal lines in the matrix are voting elements;
C27. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C28. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
5. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 4, wherein:
and Z is a natural number not less than 2 and not more than one tenth of the number of points in the template image rough matching point set.
6. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps,
C31. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C32. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C33. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector voting matrix, which use the two vector sequence numbers as a row number and a column number of each other, as a non-voting mark;
C34. selecting a template vector represented by the row serial number of the row with the most voting marks in the vector included angle voting matrix, thereby finding out two points forming the template vector, and forming two optimal point pairs by the two points and the corresponding two points of the test image;
the voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
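Substeps C31–C34 can be sketched in Python as follows. This is an illustrative sketch with hypothetical names: vectors run from the point with the smaller serial number to the larger (as in claim 8), and the included angle between two vectors is compared between the template and test sets:

```python
import math
from itertools import combinations

def angle_vote(template_pts, test_pts, ang_thr):
    """Vote on the included angles between every pair of the X = C(M, 2)
    vectors; return the serial numbers of the two template points that
    form the vector whose row collects the most voting marks."""
    idx_pairs = list(combinations(range(len(template_pts)), 2))

    def vec(pts, a, b):        # direction: smaller serial number -> larger
        return (pts[b][0] - pts[a][0], pts[b][1] - pts[a][1])

    def angle(u, v):           # included angle between two vectors, in [0, pi]
        return abs(math.atan2(u[0]*v[1] - u[1]*v[0], u[0]*v[0] + u[1]*v[1]))

    tpl_vecs = [vec(template_pts, a, b) for a, b in idx_pairs]
    tst_vecs = [vec(test_pts, a, b) for a, b in idx_pairs]
    X = len(idx_pairs)
    votes = [0] * X
    for i in range(X):
        for j in range(X):
            if i != j and abs(angle(tpl_vecs[i], tpl_vecs[j])
                              - angle(tst_vecs[i], tst_vecs[j])) < ang_thr:
                votes[i] += 1          # voting mark in row i
    return idx_pairs[max(range(X), key=lambda i: votes[i])]
```

Included angles are invariant under rotation and translation, so the vectors built from correctly matched points out-vote vectors that touch a mismatched point.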
7. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
the step C comprises the following sub-steps:
C41. acquiring vectors between any two points in the template image coarse matching point set to form a template vector set with X vectors, wherein X is the combination number of two points taken from the points of the template image coarse matching point set, correspondingly acquiring the vectors between any two points in the test image coarse matching point set to form a test vector set with X vectors, and enabling the template vectors to correspond to the test vectors according to the corresponding relation of the points between the template image coarse matching point set and the test image coarse matching point set;
C42. setting serial numbers for X vectors of a template vector set, and sequentially setting and constructing vector angle voting matrixes of X rows and X columns, wherein elements of non-main diagonal lines in the matrixes are voting elements;
C43. calculating an included angle between any two vectors in the template vector set, correspondingly, calculating an included angle between any two vectors in the test vector set, and judging whether a difference value between the included angle in the template vector set and the included angle in the test vector set with the same serial number is smaller than a set vector included angle difference threshold value or not, if so, setting two elements in the vector included angle voting matrix, which take the serial numbers of the two vectors as a row number and a column number of each other, as voting marks; if not, setting two elements in the vector included angle voting matrix, which use the serial numbers of the two vectors as row numbers and column numbers of each other, as non-voting marks;
C44. selecting end points of W vectors represented by the row sequence numbers of W rows arranged in the front W positions of the voting mark number in the vector included angle voting matrix as a template image initial judgment point set, wherein the template image initial judgment point set forms a test image initial judgment point set at the end points of the W vectors corresponding to the test image rough matching point set, and W is a natural number not less than 2;
C45. setting serial numbers for N points of a template image initial judgment point set, wherein N is less than or equal to 2 xW, sequentially setting a point pair distance voting matrix of N rows and N columns, and elements of non-main diagonal lines in the matrix are voting elements;
C46. acquiring template initial judgment point distances between any point in a template image initial judgment point set comprising N points and other points and between any point in a test image initial judgment point set comprising N points and other points, and enabling each template initial judgment point distance to correspond to the test initial judgment point distance one by one according to the corresponding relation of the sequence numbers of the template image initial judgment point set and each point of the test image initial judgment point set;
C47. judging whether the difference value between the corresponding template initial judgment point distance and the test initial judgment point distance is smaller than a set initial judgment distance difference threshold value one by one, if so, setting two elements of the serial numbers of two points corresponding to the template initial judgment point distance as a row number and a column number in the point pair distance voting matrix as voting marks; if not, setting two elements in the point pair distance voting matrix, which take the serial numbers of the two points as row numbers and column numbers, as non-voting marks;
C48. selecting the points of the template image with the serial numbers of two rows with the number of the voting marks arranged in the first two bits, and forming two pairs of optimal points with the points corresponding to the test image.
The voting flag is 1, and the non-voting flag is 0; alternatively, the voting flag is 0 and the non-voting flag is 1.
8. A voting decision and least squares based image recognition random sample consensus algorithm according to any one of claims 4, 6 or 7, wherein:
the direction of each vector in the template vector set and the test vector set is from a point with a smaller sequence number to a point with a larger sequence number.
9. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
in the step D, let the coordinates of the two optimal points found in step C in the template image point set be $(x_{r0}, y_{r0})$ and $(x_{r1}, y_{r1})$, and the coordinates of the corresponding two optimal points in the test image point set be $(x_{t0}, y_{t0})$ and $(x_{t1}, y_{t1})$;
then the rotation-translation transformation matrix is $H = \begin{pmatrix} \cos\theta & -\sin\theta & dx \\ \sin\theta & \cos\theta & dy \\ 0 & 0 & 1 \end{pmatrix}$, wherein,
$dx = \frac{1}{2}\left(x_{r0} - x_{t0}\cos\theta + y_{t0}\sin\theta + x_{r1} - x_{t1}\cos\theta + y_{t1}\sin\theta\right)$,
$dy = \frac{1}{2}\left(y_{r0} - x_{t0}\sin\theta - y_{t0}\cos\theta + y_{r1} - x_{t1}\sin\theta - y_{t1}\cos\theta\right)$,
and,
$\theta = \arctan\!\left\{\dfrac{(x_{t1}-x_{t0})(y_{r1}-y_{r0}) - (y_{t1}-y_{t0})(x_{r1}-x_{r0})}{(x_{t1}-x_{t0})(x_{r1}-x_{r0}) + (y_{t1}-y_{t0})(y_{r1}-y_{r0})}\right\}$ when the denominator of this fraction is positive, and the same arctangent plus $\pi$ when the denominator is negative.
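The computation in claim 9 can be sketched as follows. This is a hedged Python sketch (names are illustrative) in which `atan2` of the cross and dot products of the two difference vectors replaces the two-branch arctangent quadrant selection:

```python
import math

def two_point_transform(tpl0, tpl1, tst0, tst1):
    """Recover (theta, dx, dy) from the two optimal point pairs.
    atan2 of the cross and dot products of the difference vectors is
    equivalent to the arctangent with the quadrant-dependent branch."""
    ux, uy = tst1[0] - tst0[0], tst1[1] - tst0[1]    # test difference vector
    vx, vy = tpl1[0] - tpl0[0], tpl1[1] - tpl0[1]    # template difference vector
    theta = math.atan2(ux*vy - uy*vx, ux*vx + uy*vy)
    c, s = math.cos(theta), math.sin(theta)
    dx = 0.5 * (tpl0[0] - tst0[0]*c + tst0[1]*s
                + tpl1[0] - tst1[0]*c + tst1[1]*s)
    dy = 0.5 * (tpl0[1] - tst0[0]*s - tst0[1]*c
                + tpl1[1] - tst1[0]*s - tst1[1]*c)
    return theta, dx, dy
```

The angle comes from the difference vectors alone (translation cancels), and the averaged residuals of the two pairs then give dx and dy exactly as in the claim.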
10. the voting decision and least squares based image recognition random sampling consensus algorithm of claim 1, wherein:
said step E comprises the sub-steps of,
E1. the following steps are performed one by one for all points in the set of coarse matching points of the test image,
E11. mapping the coordinates of the current point to the coordinates of the mapping point through a rotation translation transformation matrix,
calculating the sum of the absolute values of the deviations of the coordinates of the corresponding points of the current point in the rough matching point set of the template image and the coordinates of the mapping points as a point pair deviation value,
E12. if the point pair deviation value is smaller than the set consistency set threshold value, adding the point and a point in the template image rough matching point set corresponding to the point into a consistency set as a pair of point pairs; otherwise, not adding the consistent set;
E2. for the consistent set obtained through the substep E1, if the number of the point pairs in the consistent set is greater than the set threshold value of the number of the point pairs, performing a step F; otherwise, performing step I.
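Substeps E1–E2 map each test point through the transform and threshold the sum of absolute coordinate deviations. A minimal Python sketch (names and data layout are assumptions, not from the patent):

```python
import math

def consensus_set(pairs, theta, dx, dy, eps):
    """pairs: list of ((x_t, y_t), (x_r, y_r)) coarse-match point pairs.
    Keep a pair when the sum of absolute coordinate deviations between
    the mapped test point and its template point is below eps."""
    c, s = math.cos(theta), math.sin(theta)
    inliers = []
    for (xt, yt), (xr, yr) in pairs:
        mx = xt*c - yt*s + dx        # mapped test point (substep E11)
        my = xt*s + yt*c + dy
        if abs(xr - mx) + abs(yr - my) < eps:    # substep E12
            inliers.append(((xt, yt), (xr, yr)))
    return inliers
```

The caller then compares `len(inliers)` against the point-pair number threshold to decide between steps F and I (substep E2).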
11. The voting decision and least squares based image recognition random sampling consensus algorithm of claim 2, wherein:
let the coordinates of the points in the consensus set that belong to the test image coarse matching point set be $(x_{t0}, y_{t0}), (x_{t1}, y_{t1}), \ldots, (x_{tN}, y_{tN})$; the coordinates of their corresponding points in the template image coarse matching point set are then $(x_{r0}, y_{r0}), (x_{r1}, y_{r1}), \ldots, (x_{rN}, y_{rN})$; that is, the number of matching point pairs in the consensus set is N + 1;
the updated rotation-translation transformation matrix in step G is then $H' = \begin{pmatrix} \cos\theta' & -\sin\theta' & dx' \\ \sin\theta' & \cos\theta' & dy' \\ 0 & 0 & 1 \end{pmatrix}$, wherein,
$dx' = \frac{1}{N+1}\sum_{k=0}^{N}\left(x_{rk} - x_{tk}\cdot\cos\theta + y_{tk}\cdot\sin\theta\right)$,
$dy' = \frac{1}{N+1}\sum_{k=0}^{N}\left(y_{rk} - x_{tk}\cdot\sin\theta - y_{tk}\cdot\cos\theta\right)$,
$\theta' = \arctan\left\{\dfrac{\sum_{i=0}^{N}\left(x_{ti}\cdot y_{ri} - y_{ti}\cdot x_{ri}\right)\cdot(N+1) + \left[\sum_{i=0}^{N}\sum_{k=0}^{N}\left(y_{ti}\cdot x_{rk} - x_{ti}\cdot y_{rk}\right)\right]}{\sum_{i=0}^{N}\left(x_{ti}\cdot x_{ri} + y_{ti}\cdot y_{ri}\right)\cdot(N+1) - \left[\sum_{i=0}^{N}\sum_{k=0}^{N}\left(x_{ti}\cdot x_{rk} + y_{ti}\cdot y_{rk}\right)\right]}\right\}$.
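The formulas of claim 11 can be sketched in Python as follows. This is an illustrative sketch (names assumed, not from the patent) that collapses the double sums into products of single sums and assembles the updated 3x3 matrix H′:

```python
import math

def updated_transform(pairs):
    """pairs: consensus set [((x_t, y_t), (x_r, y_r))] with N + 1 entries.
    Returns the updated 3x3 rotation-translation matrix H'."""
    n1 = len(pairs)                       # N + 1
    num = den = 0.0
    sxt = syt = sxr = syr = 0.0
    for (xt, yt), (xr, yr) in pairs:
        num += (xt*yr - yt*xr) * n1       # single-sum terms scaled by N + 1
        den += (xt*xr + yt*yr) * n1
        sxt += xt; syt += yt; sxr += xr; syr += yr
    num += syt*sxr - sxt*syr              # double sum = product of single sums
    den -= sxt*sxr + syt*syr
    theta = math.atan2(num, den)          # theta', quadrant handled by atan2
    c, s = math.cos(theta), math.sin(theta)
    dx = sum(xr - xt*c + yt*s for (xt, yt), (xr, yr) in pairs) / n1
    dy = sum(yr - xt*s - yt*c for (xt, yt), (xr, yr) in pairs) / n1
    return [[c, -s, dx], [s, c, dy], [0.0, 0.0, 1.0]]
```

Rewriting the double sums as products of single sums turns the nominal O(N²) accumulation into O(N), which is consistent with the sub-two-millisecond timing reported in the description.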
CN201610167808.8A 2016-03-22 2016-03-22 Image identification random sampling consistent algorithm based on voting decision and least square method Active CN107220580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610167808.8A CN107220580B (en) 2016-03-22 2016-03-22 Image identification random sampling consistent algorithm based on voting decision and least square method


Publications (2)

Publication Number Publication Date
CN107220580A true CN107220580A (en) 2017-09-29
CN107220580B CN107220580B (en) 2022-08-09

Family

ID=59928272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610167808.8A Active CN107220580B (en) 2016-03-22 2016-03-22 Image identification random sampling consistent algorithm based on voting decision and least square method

Country Status (1)

Country Link
CN (1) CN107220580B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288511A (en) * 2019-05-10 2019-09-27 台州宏达电力建设有限公司台州经济开发区运检分公司 Minimum error joining method, device, electronic equipment based on double camera image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100008643A1 (en) * 2008-04-15 2010-01-14 Novafora, Inc. Methods and Systems for Interacting with Viewers of Video Content
CN103310122A (en) * 2013-07-10 2013-09-18 北京航空航天大学 Parallel random sampling consensus method and device
CN104040590A (en) * 2011-12-19 2014-09-10 三菱电机株式会社 Method for estimating pose of object
CN104077769A (en) * 2014-06-06 2014-10-01 华南理工大学 Error matching point pair removing algorithm in image registration
CN104077596A (en) * 2014-06-18 2014-10-01 河海大学 Landmark-free tracking registering method
CN105046235A (en) * 2015-08-03 2015-11-11 百度在线网络技术(北京)有限公司 Lane line recognition modeling method and apparatus and recognition method and apparatus


Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
GUANGJUN SHI et al.: "SIFT Feature Point Matching Based on Improved RANSAC Algorithm", 2013 Fifth International Conference on Intelligent Human-Machine Systems and Cybernetics *
LIONEL MOISAN et al.: "A Probabilistic Criterion to Detect Rigid Point Matches Between Two Images and Estimate the Fundamental Matrix", International Journal of Computer Vision *
FU QIANWEN et al.: "A RANSAC image stitching algorithm with preprocessing", Electronic Design Engineering *
YAO JIABAO et al.: "A RANSAC image stitching algorithm based on the L-M algorithm", Journal of Zhejiang Sci-Tech University (Natural Sciences Edition) *
LI HAN et al.: "Fully automatic seamless image stitching based on feature points", Computer Engineering and Design *
DU JINGYI et al.: "Research and implementation of SIFT image matching based on region blocking", Opto-Electronic Engineering *
JIANG JIE et al.: "A voting-based parallel RANSAC algorithm and its FPGA implementation", Journal of Electronics & Information Technology *
WANG SHUANGLING: "X-ray medical image stitching based on point matching with a bidirectional asynchronous voting strategy", China Master's Theses Full-text Database, Information Science and Technology *
WANG CHENGDONG et al.: "Application of the SIFT algorithm in point cloud registration", Transducer and Microsystem Technologies *
QIAN WEI: "Research on image registration based on corner features", China Master's Theses Full-text Database *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288511A (en) * 2019-05-10 2019-09-27 台州宏达电力建设有限公司台州经济开发区运检分公司 Minimum error joining method, device, electronic equipment based on double camera image
CN110288511B (en) * 2019-05-10 2023-04-07 台州宏达电力建设有限公司台州经济开发区运检分公司 Minimum error splicing method and device based on double camera images and electronic equipment

Also Published As

Publication number Publication date
CN107220580B (en) 2022-08-09

Similar Documents

Publication Publication Date Title
CN101980250B (en) Method for identifying target based on dimension reduction local feature descriptor and hidden conditional random field
CN101567051B (en) Image matching method based on characteristic points
CN109697692B (en) Feature matching method based on local structure similarity
CN107958443B (en) Fingerprint image splicing method based on ridge line characteristics and TPS deformation model
CN108010045A (en) Visual pattern characteristic point error hiding method of purification based on ORB
KR20140102038A (en) Video matching device and video matching method
CN107862319B (en) Heterogeneous high-light optical image matching error eliminating method based on neighborhood voting
Liu et al. Rotation-invariant siamese network for low-altitude remote-sensing image registration
CN107330928A (en) Based on the Image Feature Matching method for improving Shape context
CN111199558A (en) Image matching method based on deep learning
CN108133218A (en) Infrared target detection method, equipment and medium
CN114511012A (en) SAR image and optical image matching method based on feature matching and position matching
CN111027404A (en) Fingerprint identification method based on fingerprint protection template
Sun et al. TriVoC: Efficient voting-based consensus maximization for robust point cloud registration with extreme outlier ratios
Liu et al. Progressive motion coherence for remote sensing image matching
CN113128518B (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN114782503A (en) Point cloud registration method and system based on multi-scale feature similarity constraint
Lee et al. Domain-agnostic meta-learning for cross-domain few-shot classification
CN107610120B (en) A kind of multiple dimensioned building surface Methodology for Entities Matching and system
CN107220580B (en) Image identification random sampling consistent algorithm based on voting decision and least square method
CN116627952A (en) Remote sensing image cataloging data similar repeated cleaning method
CN110956468A (en) Fingerprint payment system
CN111104922A (en) Feature matching algorithm based on ordered sampling
CN116310416A (en) Deformable object similarity detection method based on Radon transformation and electronic equipment
CN113221914B (en) Image feature point matching and mismatching elimination method based on Jacobsad distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant