CN107239792A - A kind of workpiece identification method and device based on binary descriptor - Google Patents

A kind of workpiece identification method and device based on binary descriptor Download PDF

Info

Publication number
CN107239792A
CN107239792A
Authority
CN
China
Prior art keywords
matching
point
workpiece
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710335746.1A
Other languages
Chinese (zh)
Inventor
陈喆
殷福亮
张青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201710335746.1A
Publication of CN107239792A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a workpiece identification method based on binary descriptors, whose steps include: 1) extracting feature points with the Fast Hessian feature detection operator; 2) mapping the feature point information extracted by the Fast Hessian feature detection operator and the pixel gray information of each sub-region onto a circular sampling pattern, to construct the workpiece feature descriptor; 3) performing Hamming distance matching between the obtained workpiece feature descriptor and the template feature descriptors in the template library, using a cascade matching algorithm with a nearest-neighbor-ratio criterion, to obtain the initial matching pairs and count their number; 4) rejecting the false matches among the initial matching pairs with the random sample consensus (RANSAC) algorithm, to obtain the correct matching pairs; 5) calculating a matching score from the number of matching pairs, to obtain the workpiece identification result. The invention applies a binary feature description algorithm combining the FREAK descriptor with the Fast Hessian feature detection algorithm, to achieve fast and accurate workpiece identification.

Description

Workpiece identification method and device based on binary descriptor
Technical Field
The invention relates to a workpiece identification method and device, in particular to a workpiece identification method and device based on binary descriptors.
Background
Workpiece recognition is a typical application of computer vision technology in the field of industrial production, and is an important component of production automation and intellectualization. Currently, in the field of workpiece recognition, there are a method based on image contour, a method based on image invariant moment, a method based on image local feature, and a workpiece recognition method based on binary feature.
Yuanfu et al. proposed a part identification algorithm based on SURF (Speeded-Up Robust Features). In this scheme, an industrial CCD (charge-coupled device) camera acquires a workpiece image, which is then preprocessed, mainly by image enhancement, median filtering, and isolated-point denoising. The workpiece image is then processed with SURF, which comprises three parts: feature point detection, main direction determination, and descriptor generation. Feature point detection is based on scale-space theory and locates feature points in the image by combining the Hessian matrix with an integral image. For main direction determination, the rotation invariance of the descriptor is achieved with a main direction technique: the sums of weighted Haar wavelet responses of the feature point in different directions within a circular area are calculated, and the direction with the largest modulus is taken as the main direction of the feature point. For descriptor generation, the sums of weighted Haar wavelet response vectors in 16 sub-regions centered on the feature point are computed along the feature point's main direction, yielding a 64-dimensional floating-point descriptor; similarity is then measured with the Euclidean distance, and the feature vectors are matched with an approximate nearest neighbor algorithm. The SURF descriptor used in this method is a 64-dimensional floating-point descriptor: the computation involved in generating it is time-consuming, and the descriptor occupies a large amount of memory.
In addition, the SURF descriptor achieves rotation invariance through a main direction technique; once the main direction is computed incorrectly, or with some deviation, the feature vector can change greatly and the mismatching rate rises markedly.
Ortiz et al. proposed the FREAK algorithm in the paper "FREAK: Fast Retina Keypoint". The scheme first extracts feature points with the AGAST (Adaptive and Generic Accelerated Segment Test) feature detector. FREAK establishes a sampling pattern resembling the human retina, dense in the middle and sparse at the periphery, with 43 sampling points in total. The selected sampling points form sampling point pairs; to reduce the amount of computation, the pairs are trained and learned, and the 512 pairs with large variance and low correlation are retained. To ensure rotation invariance of the descriptor, the gradient direction over 45 long-distance sampling point pairs is taken as the main direction of FREAK. FREAK selects a local neighborhood around each sampling point and applies Gaussian smoothing to reduce the influence of noise; the smoothed gray values of each sampling point pair are then compared to generate a binary string, forming a binary descriptor, and similarity is computed with the Hamming distance to match the feature vectors. In this method the smoothing ranges of the FREAK sampling points overlap heavily, so the descriptor contains excessive redundant information and its scale invariance is poor; the FREAK descriptor uses only single neighborhood-comparison results of the sampling points to form the binary descriptor, lacking hierarchical information; moreover, FREAK relies on the main direction technique for its rotation invariance, so its robustness is low.
Although the existing methods can accomplish basic workpiece identification, their rotation invariance depends on the main direction of the feature points, so their robustness to rotation is poor. The present method therefore applies an improved binary feature description algorithm and provides a workpiece identification method combining an improved FREAK descriptor with the Fast Hessian feature detection algorithm.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a method for realizing workpiece identification by applying a binary feature description algorithm combining the FREAK descriptor with the Fast Hessian feature detection algorithm.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a workpiece identification method based on binary descriptors is characterized by comprising the following steps:
step 1) smoothing and denoising the input gray workpiece image with a median filter, and extracting feature points with the Fast Hessian feature detection operator;
step 2) dividing the original image into M sub-regions according to the sorted gray values of the pixels, and mapping the feature point information extracted by the Fast Hessian feature detection operator and the pixel gray information of each sub-region onto a circular sampling pattern, to construct the workpiece feature descriptor;
step 3) carrying out Hamming distance matching on the obtained workpiece feature descriptors and template feature descriptors in the template library by adopting a cascade matching algorithm in a mode of nearest neighbor ratio to obtain initial matching pairs and counting the number of the initial matching pairs;
step 4) adopting a random sampling consistency algorithm to remove wrong matching pairs in the initial matching pairs to obtain correct matching pairs;
and 5) calculating a matching score according to the matching logarithm, thereby obtaining a workpiece identification result.
Another object of the present invention is to provide a workpiece recognition apparatus based on the above workpiece recognition method, comprising a feature point detection and descriptor construction unit, a template library, a feature matching unit, and a mismatch elimination unit. The feature point detection and descriptor construction unit divides the original image into M sub-regions according to the sorted gray values of the pixels, extracts feature points with the Fast Hessian feature detection operator, and maps the extracted feature point information and the pixel gray information of each sub-region onto a circular sampling pattern to construct the workpiece feature descriptor. The template library stores the template feature descriptors. The feature matching unit performs Hamming distance matching between the workpiece feature descriptors and the template feature descriptors in the template library, using a cascade matching algorithm with a nearest-neighbor-ratio criterion, and obtains the initial matching pairs and counts their number. The mismatch elimination unit rejects the false matches among the initial matching pairs with a random sample consensus algorithm to obtain the correct matching pairs, and calculates the matching score from the matching pairs to obtain the workpiece identification result.
Compared with the prior art, the invention has the beneficial effects that:
(1) the invention combines the improved FREAK descriptor with the Fast Hessian feature detector, thereby enhancing the scale invariance of the FREAK algorithm.
(2) The invention provides a novel circular sampling mode, which reduces the overlapping degree of the smooth range of the sampling points and improves the construction speed of the descriptor.
(3) The invention provides a method for sequencing similar gray values, which realizes the rotation invariance of descriptors and divides the image by utilizing gray information to increase the hierarchy information of the descriptors.
(4) The invention provides a method for screening sampling points in the sub-region by setting the gray threshold, which reduces the influence of background pixels on a descriptor and improves the distinguishing performance of the descriptor.
(5) The method adopts a cascade type matching algorithm with sub-regions judged in sequence, and eliminates the mismatching by using the RANSAC algorithm, thereby greatly improving the matching speed and improving the matching accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of a workpiece recognition method of the present invention;
FIG. 2 is a process for building a workpiece feature descriptor according to the present invention;
FIG. 3 is a schematic view of a circular sampling pattern of the present invention;
FIG. 4 is a histogram of the gray level statistics of the workpiece image according to the present invention;
FIG. 5-1 is a diagram of an example of extracting gray scale values of the original image in the range of 0-63 according to the present invention;
FIG. 5-2 is a diagram of an example of extracting the gray value range 64-127 of the original image according to the present invention;
FIG. 5-3 is a diagram of an example of extracting gray scale value range 128-191 of the original image according to the present invention;
FIGS. 5-4 are diagrams of examples of the gray level range 192-255 of the original image according to the present invention;
FIG. 6 is a graph comparing workpiece identification rates for the present invention with other methods;
FIG. 7 is a graph of recall versus error rate for the present invention and other methods;
FIG. 8 is a functional block diagram of a workpiece recognition apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Aiming at the problems that the descriptor is poor in rotation robustness and the descriptor is poor in distinguishing capability under the change of scale and angle in the prior art, the improved FREAK descriptor and a Fast Hessian feature detection operator are combined to carry out workpiece identification. The technical scheme of the invention is further explained by combining the drawings and the specific embodiments:
fig. 1 shows a workpiece recognition method based on binary descriptors, which includes the steps of:
step 1) smoothing and denoising is carried out on the input gray workpiece image with a median filter, and feature points are extracted with the Fast Hessian feature detection operator, specifically:
① Select a 5 × 5 filtering window in the neighborhood of each pixel of the gray workpiece image, and let the sequence of pixel gray values in the window be {a1, a2, …, an}. The median-filtered output f of the pixel is the median of this sequence:
f = Med{a1, a2, …, an}  (1)
② Define the image obtained after median filtering as C, and for a pixel point X(i, j) in image C calculate the candidate feature response value
det(Happrox) = DxxDyy − (kDxy)²  (2)
where Dxx, Dyy and Dxy are the box-filter approximations of the second-order Gaussian derivatives at X(i, j), and k is a weighting factor.
In a preferred embodiment of the present invention, the weight k is preferably 0.9. The candidate feature response value det(Happrox) obtained by equation (2) is compared with a preset threshold: if it is larger than the threshold, the point is judged to be a candidate feature point, and if the response value of a candidate feature point is larger than the response values of all its neighboring pixels, the candidate is judged to be a feature point.
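The two operations of step 1 (5 × 5 median filtering, then determinant-of-Hessian thresholding with non-maximum suppression) can be sketched as follows. This is a minimal sketch: simple finite differences stand in for the box-filter approximations of the actual Fast Hessian operator, and the threshold value is illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter, maximum_filter

def hessian_response(image, k=0.9):
    """Per-pixel candidate response det(H) = Dxx*Dyy - (k*Dxy)^2, formula (2).
    Finite differences approximate the second derivatives here."""
    c = median_filter(image.astype(np.float64), size=5)  # 5x5 median window
    dy, dx = np.gradient(c)        # first derivatives along rows / columns
    dyy, _ = np.gradient(dy)       # second derivative along rows
    dxy, dxx = np.gradient(dx)     # mixed and column second derivatives
    return dxx * dyy - (k * dxy) ** 2

def detect_features(image, threshold=10.0):
    """Keep pixels whose response exceeds the threshold and is a local maximum."""
    resp = hessian_response(image)
    local_max = resp == maximum_filter(resp, size=3)
    ys, xs = np.nonzero((resp > threshold) & local_max)
    return list(zip(ys, xs))
```

In the real operator the derivatives are computed with box filters over an integral image at several scales; the sketch keeps only the response formula and the thresholding logic.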
And 2) dividing the original image into M sub-regions according to the sorted gray values of the pixels, and mapping the feature point information extracted by the Fast Hessian feature detection operator and the pixel gray information of each sub-region onto a circular sampling pattern, to construct the workpiece feature descriptor. Fig. 2 shows the extraction process of the workpiece feature descriptor.
As a preferred embodiment of the present invention, M is preferably 4; that is, the pixel points are ordered by gray value, as in the histogram of workpiece image gray values shown in fig. 4, and the image is then divided into 4 sub-regions according to the ordering. The specific dividing steps are:
① sorting all pixel points by gray value and dividing the sorted gray range into 4 parts;
② classifying the pixel points according to the M gray value ranges: the pixel points belonging to a given gray value range are retained in the image and all pixel points outside that range are discarded, so that the original image is decomposed into M sub-regions, with the pixel points of each sub-region forming one class.
As shown in fig. 5-1 to 5-4, the gray-level sorting division result of the workpiece is obtained, wherein for the purpose of visual display, the gray-level value of the pixel belonging to the sub-region is set to be white 255, and the gray-level value of the pixel not belonging to the sub-region is set to be black 0.
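A minimal sketch of this division, assuming the fixed gray bands 0-63, 64-127, 128-191 and 192-255 shown in figs. 5-1 to 5-4:

```python
import numpy as np

def divide_subregions(gray, M=4):
    """Return M boolean masks; mask i marks the pixels whose gray value
    falls in the i-th of M equal bands of the range [0, 255]."""
    edges = np.linspace(0, 256, M + 1)  # e.g. [0, 64, 128, 192, 256]
    return [(gray >= edges[i]) & (gray < edges[i + 1]) for i in range(M)]
```

Each mask corresponds to one sub-region; for visual display, as in the figures, a mask can be rendered by setting its pixels to 255 and all others to 0.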
The invention provides a new sampling pattern, the circular sampling pattern shown in fig. 3, which satisfies the following conditions: the detected feature point is the center of the sampling pattern; the sub-circles on the same concentric layer have the same area, while sub-circles on different layers have different areas; each concentric layer carries 6 sampling points, 31 sampling points are selected in total, and the sub-circles of layers separated by one layer are tangent.
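The geometry can be sketched as below: the feature point is the centre and each of 5 concentric layers carries 6 sampling points, giving 31 points in total. The layer radii and the angular stagger are illustrative choices, not the patent's exact values, which are fixed by the tangency condition between alternate layers' sub-circles.

```python
import numpy as np

def sampling_pattern(cx, cy, radii=(2, 4, 7, 11, 16)):
    """Return the 31 sampling-point coordinates: the centre plus 6 points
    on each of len(radii) concentric layers (radii are illustrative)."""
    pts = [(cx, cy)]  # the feature point itself is the pattern centre
    for layer, r in enumerate(radii):
        offset = (np.pi / 6) * (layer % 2)   # stagger alternate layers
        for k in range(6):
            a = offset + 2 * np.pi * k / 6
            pts.append((cx + r * np.cos(a), cy + r * np.sin(a)))
    return pts
```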
The above sampling pattern is mapped into each of the 4 sub-regions, binary strings are generated by random tests, and the binary strings of the 4 sub-regions are concatenated in sequence, yielding a binary descriptor that is independent of the main direction, contains a certain amount of spatial information, and has stronger robustness. The specific steps are as follows:
mapping gray information of pixel points in 4 sub-regions to a circular sampling mode;
mapping the characteristic points detected by a Fast Hessian characteristic detection operator to a circular sampling mode;
mapping the gray information in the sub-region to the value of the sampling point according to the position of the sampling point in the circular sampling mode and the Gaussian smooth range;
④ randomly selecting the values of two sampling points in any sub-region mapped by the sampling pattern and comparing them with a set threshold: if the values equal the set threshold, the sampling points are judged to be background points and discarded; if the values of the sampling points do not equal the set threshold, their gray values are compared according to equation (6):
the comparison result is 1 if I(Pi, m) > I(Pi, n), and 0 otherwise  (6)
where Pi (0 ≤ i < 4) denotes the sampling pattern mapped into the i-th sub-region, m and n are two randomly selected sampling points within the sub-region mapped by Pi, I(Pi, m) is the gray value of sampling point m of pattern Pi in the i-th sub-region, and I(Pi, n) is the gray value of the corresponding sampling point n in the i-th sub-region;
⑤ selecting N sampling point pairs in each sub-region and obtaining N comparison results according to equation (6), which yields the N-dimensional sub-region descriptor Si of equation (7).
All sub-region descriptors Si are then concatenated in sequence to form an N × M-dimensional workpiece feature descriptor, as in equation (8). Since M is 4, the descriptor dimension depends on N; the invention takes N as 128, so the constructed descriptor is 512-dimensional.
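Under two simplifying assumptions, that sampling-point values are plain gray values (the real method Gaussian-smooths points of the circular pattern) and that a background point contributes a 0 bit rather than being discarded, steps ④ and ⑤ can be sketched as:

```python
import numpy as np

def subregion_bits(values, pairs, background=0):
    """One bit per sampling-point pair, as in formula (6): 1 if the first
    point is brighter. A pair touching a background-valued point yields 0
    here (the patent discards such points; this is a simplification)."""
    bits = []
    for m, n in pairs:
        if values[m] == background or values[n] == background:
            bits.append(0)
        else:
            bits.append(1 if values[m] > values[n] else 0)
    return bits

def build_descriptor(subregion_values, pairs):
    """Concatenate the M sub-region bit strings into one N*M-bit descriptor."""
    desc = []
    for values in subregion_values:
        desc.extend(subregion_bits(values, pairs))
    return np.array(desc, dtype=np.uint8)
```

With M = 4 sub-regions and N = 128 pairs per sub-region, `build_descriptor` would return the 512-bit descriptor described above.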
And 3) carrying out Hamming distance matching on the obtained workpiece feature descriptors and the template feature descriptors in the template library by adopting a cascade matching algorithm in a mode of nearest neighbor ratio to obtain initial matching pairs and counting the number of the initial matching pairs.
The invention adopts the Hamming distance as the similarity measure between two descriptors and uses a cascade matching algorithm over the sub-regions. The Hamming distance of each sub-region of the template feature descriptor in the template library and of the workpiece feature descriptor is computed by exclusive OR according to equation (9), which compares the Hamming distance of the descriptors within a sub-region against a sub-region Hamming distance threshold. The 128-bit Hamming distance of the descriptors in the first sub-region is calculated: if it is greater than the selected threshold, the matching point is discarded; if it is smaller than the selected threshold, the distance comparison moves to the second sub-region, and so on, comparing one sub-region after another. Matching is then judged by the nearest neighbor ratio: only when the ratio of the nearest Hamming distance to the second-nearest Hamming distance of a point to be matched is smaller than a set threshold is the point judged to be a correct matching pair, and the number Q of initial matching pairs is increased by 1; otherwise Q is unchanged. The specific steps are as follows:
calculating the Hamming distance between a workpiece feature descriptor and a template feature descriptor in a first sub-region;
comparing the Hamming distance with a preset threshold, if the distance is greater than the preset distance threshold, abandoning the matching point, otherwise, comparing the distance in a second subregion;
carrying out matching judgment by adopting a nearest neighbor ratio mode, namely judging the point to be matched as a correct matching pair only when the ratio of the nearest Hamming distance to the next nearest Hamming distance of the point to be matched is smaller than a set ratio threshold, increasing 1 to the logarithm Q of the initial matching pair, and otherwise, keeping the Q unchanged;
and fourthly, repeating the steps until all the sub-areas are traversed.
Step 4) adopts the random sample consensus (RANSAC) algorithm to reject the false matches among the initial matching pairs. The method selects a small part of the data as inliers to obtain an initial parameter model, divides the data into "outliers" and "inliers" with the initial parameters, and then recomputes the model parameters using all the inliers. Specifically:
① Let A be the initial matching point set of the template image and B the initial matching point set of the image to be matched. Randomly select 4 initial matching pairs from sets A and B and use these 4 point pairs to obtain the projective transformation matrix H, whose elements h11, h12, etc. represent the translation, rotation and similar motions of the object, and whose element h33 is typically normalized to 1. With h33 = 1, the remaining 8 unknowns of H are determined by the 8 coordinate data of the 4 initial matching pairs;
secondly, all the characteristic points in the set point A are transformed according to the projection transformation matrix H to obtain a set B'. The coordinates of all corresponding points within sets B and B' are compared, i.e.:
ei = ||Bi − B′i||. If ei < hT, where hT is a preset judgment threshold, the point pair is judged to be an inlier; otherwise it is judged to be an outlier. Count the number of inliers obtained under this transformation;
③ repeating steps ① and ②, selecting the transformation with the largest number of inliers, taking the sets obtained through that transformation as the new sets A and B, and continuing the iteration;
and fourthly, terminating the iteration until the number of the inner points of the current iteration is the same as that of the previous iteration, taking the sets A and B obtained by the last iteration as sets for eliminating the mismatching characteristic point pairs, and taking the corresponding projection transformation matrix as the final projection transformation matrix.
And 5) calculating a matching score according to the matching logarithm, thereby obtaining a workpiece identification result. In particular, the method comprises the following steps of,
Record the number of correct matching pairs Q1 remaining after the false matches are rejected, and calculate the matching score η,
where Q is the initial number of matching pairs obtained after feature matching. When the matching score η is greater than the set threshold Th, the workpiece to be identified is judged to belong to the same category as the template workpiece; otherwise they are judged to belong to different categories, giving the workpiece identification result.
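Step 5 can be sketched as below, under the assumption (the formula itself is not legible in this text) that the score η is the ratio of correct matches Q1 to initial matches Q; the threshold value is illustrative.

```python
def match_score(q1, q, t_h=0.5):
    """Matching score eta and the category decision eta > t_h.
    The ratio form eta = q1/q and the threshold value are assumptions,
    not the patent's exact formula (10)."""
    eta = q1 / q if q else 0.0
    return eta, eta > t_h
```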
Further, if the image obtained from the camera is a color image before workpiece identification, the color image is grayed by adopting a weighted average method, and the processing procedure is as follows
g(x,y)=w1R(x,y)+w2G(x,y)+w3B(x,y) (11)
where R(x, y), G(x, y) and B(x, y) are the component values of the original color image at image coordinate (x, y), g(x, y) is the gray value of the transformed image at coordinate (x, y), and wi (i = 1, 2, 3) are the weights of the RGB components.
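A sketch of the weighted-average graying of formula (11); the common luminance weights (0.299, 0.587, 0.114) are an illustrative choice of wi, not values stated in the text.

```python
import numpy as np

def to_gray(rgb, w=(0.299, 0.587, 0.114)):
    """g(x, y) = w1*R(x, y) + w2*G(x, y) + w3*B(x, y), as an 8-bit image."""
    g = rgb[..., 0] * w[0] + rgb[..., 1] * w[1] + rgb[..., 2] * w[2]
    return np.clip(np.round(g), 0, 255).astype(np.uint8)
```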
The invention also discloses a workpiece identification apparatus based on the above workpiece identification method, comprising a feature point detection and descriptor construction unit 20, a template library 30, a feature matching unit 40 and a mismatch elimination unit 50. The feature point detection and descriptor construction unit divides the original image into M sub-regions according to the sorted gray values of the pixels, extracts feature points with the Fast Hessian feature detection operator, and maps the extracted feature point information and the pixel gray information of each sub-region of the original image onto a circular sampling pattern to construct the workpiece feature descriptor. The template library stores the template feature descriptors. The feature matching unit performs Hamming distance matching between the workpiece feature descriptors and the template feature descriptors in the template library, using a cascade matching algorithm with a nearest-neighbor-ratio criterion, and obtains the initial matching pairs and counts their number. The mismatch elimination unit rejects the false matches among the initial matching pairs with the random sample consensus algorithm to obtain the correct matching pairs, and calculates the matching score from the matching pairs to obtain the workpiece identification result; fig. 8 shows the functional flow diagram of the apparatus. The input of the apparatus is the gray image of the workpiece to be matched, and the output is the workpiece identification result. If a workpiece not in the current template library is identified, a judgment result can be obtained from the mismatch elimination module and the current template library can be updated.
In order to verify the effectiveness of the invention, 5 workpieces are selected for testing: a nut, a double-lug washer, a screw, a slotted screw and a cross-head screw. For each workpiece, 120 workpiece images are selected as the test set, covering translation, rotation, scale transformation, illumination change, noise and other effects. The experimental environment is a PC running the Windows 7 Ultimate operating system, configured with an Intel(R) Core(TM) i3-370M CPU @ 2.40 GHz and 2 GB of memory; the compilation environment is VS2010 with OpenCV 2.4.9. The invention is compared with the unimproved FREAK and with BRISK; the workpiece recognition rate statistics of the three methods are shown in fig. 6. As can be seen from fig. 6, the recognition rate of the improved method provided by the invention on the 5 kinds of workpieces used in the experiment is higher than that of the traditional BRISK and FREAK algorithms. To address the poor scale performance of the FREAK algorithm, the Fast Hessian feature detection operator, which handles scale change better, is adopted in place of the AGAST feature detection algorithm. During sampling, a gray threshold reduces the interference of background pixels with the workpiece and improves the distinctiveness of the descriptors. Fig. 7 shows the recall versus error rate curves of the three descriptors, which demonstrate the effectiveness of the method of the invention.
Under rotation and scaling, the improved FREAK descriptor outperforms the original FREAK descriptor. This is mainly because the invention divides the sub-regions by gray-level sorting, which achieves rotation invariance without relying on a main direction, adds local gray-level and gray-contrast information to the descriptor, improves the sub-region division, and increases robustness to rotation.
In addition, Table 1 lists the descriptor construction times of BRISK, FREAK and the method of the invention. As Table 1 shows, the construction time of the invention's descriptor is lower than that of BRISK and FREAK, mainly because the invention uses a new sampling mode with less redundant information among the sampling points and does not require FREAK's screening of sampling-point pairs, which reduces the time complexity. The proposed scheme therefore has better real-time performance.
TABLE 1  Construction time of different descriptors

    Descriptor                  Construction time (ms)
    BRISK method                285.16
    FREAK method                204.23
    Method of the invention     180.54
The results show that workpiece identification with the proposed method achieves a higher recognition rate together with better real-time performance and robustness.
The invention replaces FREAK's original feature detection operator, AGAST, with the Fast Hessian feature detection operator, strengthening the FREAK descriptor's discriminative power under scale change; samples the feature points with the proposed circular sampling mode, reducing redundant information in the descriptor; and finally adopts a multi-region binary descriptor that does not depend on a main-direction technique, improving the robustness and distinctiveness of the binary descriptor and achieving effective identification of workpiece images under rotation and scaling.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A workpiece identification method based on binary descriptors is characterized by comprising the following steps:
step 1) carrying out smooth denoising on an input gray workpiece image by adopting a median filtering method, and extracting feature points by utilizing a Fast Hessian feature detection operator;
step 2) dividing the original image into M sub-regions according to the gray value sequence of pixel points, and respectively mapping the characteristic point information extracted by the Fast Hessian characteristic detection operator and the gray value information of the pixel points in each sub-region to a circular sampling mode to construct a workpiece characteristic descriptor;
step 3) carrying out Hamming distance matching on the obtained workpiece feature descriptors and template feature descriptors in the template library by adopting a cascade matching algorithm in a mode of nearest neighbor ratio to obtain initial matching pairs and counting the number of the initial matching pairs;
step 4) adopting a random sampling consistency algorithm to remove wrong matching pairs in the initial matching pairs to obtain correct matching pairs;
step 5) calculating a matching score according to the number of matching pairs, thereby obtaining the workpiece identification result.
2. The binary descriptor-based workpiece recognition method according to claim 1, wherein the step 1) comprises the following steps:
① A 5 × 5 observation window is selected in the neighborhood of each pixel of the gray workpiece image. Let the gray-value sequence of the pixels in the window be {a_1, a_2, ..., a_n}; then

b = Med{a_1, a_2, ..., a_n}   (1)

wherein b is the median-filtering output for the pixel point and Med{·} takes the median of the sequence.
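As a concrete illustration of this step, the 5 × 5 median filtering can be sketched in pure Python as follows. This is a minimal sketch; the square window shape and the choice to leave border pixels unchanged are assumptions, since the claim does not specify border handling.

```python
def median_filter_5x5(img):
    """Median filter with a 5x5 window: b = Med{a_1, ..., a_n}.
    Border pixels (closer than 2 to an edge) are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            window = sorted(img[i + di][j + dj]
                            for di in range(-2, 3)
                            for dj in range(-2, 3))
            out[i][j] = window[len(window) // 2]  # median value b
    return out
```

For example, a single impulse of 255 in an otherwise uniform region is replaced by the surrounding gray value, which is the smoothing effect the claim relies on.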
② Define the image obtained after median filtering as C. For a pixel point X(i, j) in image C, compute the candidate feature response value
det(H_approx) = D_xx D_yy - (k D_xy)^2   (2)
Wherein the weight k is 0.9, and
D_xx = Σ_{r=-2..2} Σ_{l=-1..1} [-2X(i+l, j+r) + X(i+l-3, j+r) + X(i+l+3, j+r)]   (3)
D_yy = Σ_{r=-1..1} Σ_{l=-2..2} [-2X(i+l, j+r) + X(i+l, j+r-3) + X(i+l, j+r+3)]   (4)
D_xy = Σ_{r=1..3} Σ_{l=1..3} X(i+l, j+r) + Σ_{r=-3..-1} Σ_{l=-3..-1} X(i+l, j+r)
     - Σ_{r=1..3} Σ_{l=-3..-1} X(i+l, j+r) - Σ_{r=-3..-1} Σ_{l=1..3} X(i+l, j+r)   (5)
The candidate feature response value det(H_approx) obtained from equation (2) is compared with a preset threshold; if it exceeds the threshold, the pixel is taken as a candidate feature point, and a candidate feature point whose response value exceeds the response values of all its neighboring pixels is judged to be a feature point.
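A direct transcription of equations (2) to (5) might look like the following sketch (pure Python for clarity). The sign pattern of D_xy, positive on the same-sign quadrants and negative on the opposite-sign quadrants, is an assumption based on the standard Fast Hessian box filters; the claim's extracted formula is internally garbled on this point.

```python
def dxx(X, i, j):
    # Equation (3): box-filter approximation of the second x-derivative
    return sum(-2 * X[i + l][j + r] + X[i + l - 3][j + r] + X[i + l + 3][j + r]
               for r in range(-2, 3) for l in range(-1, 2))

def dyy(X, i, j):
    # Equation (4): box-filter approximation of the second y-derivative
    return sum(-2 * X[i + l][j + r] + X[i + l][j + r - 3] + X[i + l][j + r + 3]
               for r in range(-1, 2) for l in range(-2, 3))

def dxy(X, i, j):
    # Equation (5): mixed derivative; same-sign quadrants positive,
    # opposite-sign quadrants negative (assumed standard sign pattern)
    s = 0
    for r in range(1, 4):
        for l in range(1, 4):
            s += X[i + l][j + r] + X[i - l][j - r]
            s -= X[i - l][j + r] + X[i + l][j - r]
    return s

def hessian_response(X, i, j, k=0.9):
    # Equation (2): det(H_approx) = Dxx*Dyy - (k*Dxy)^2
    return dxx(X, i, j) * dyy(X, i, j) - (k * dxy(X, i, j)) ** 2
```

The caller must keep (i, j) far enough from the image border for the ±4 pixel offsets used by the box filters. On a constant image all three responses vanish, and on a pure product surface X(i, j) = i·j only D_xy is non-zero, matching the behavior of a mixed second derivative.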
3. The binary descriptor-based workpiece recognition method according to claim 1, wherein: the step 2) comprises the following steps:
① mapping the gray information of the pixel points in the M sub-regions to a circular sampling mode;
② mapping the feature points detected by the Fast Hessian feature detection operator to the circular sampling mode;
③ mapping the gray information in each sub-region to the values of the sampling points according to the positions of the sampling points in the circular sampling mode and the Gaussian smoothing range;
④ randomly selecting the values of two sampling points in any sub-region mapped by the sampling mode and comparing them with a set threshold; if a value equals the set threshold, the sampling point is judged to be a background point and discarded; if neither value equals the set threshold, the gray values are compared according to formula (6):

T(P_i; m, n) = 1 if I(P_i, m) > I(P_i, n), and 0 otherwise   (6)

wherein P_i (0 ≤ i < 4) denotes the sampling mode mapped to the i-th sub-region, m and n are two randomly selected sampling points within the sub-region mapped by P_i, I(P_i, m) is the gray value of sampling point m of sampling mode P_i in the i-th sub-region, and I(P_i, n) is the gray value of the corresponding sampling point n in the i-th sub-region;
⑤ selecting N sampling-point pairs in each sub-region, obtaining N comparison results according to formula (6), and then obtaining the N-dimensional sub-region descriptor S_i, as shown in formula (7).
S_i = Σ_{1≤j≤N} 2^(j-1) T(P_i; m_j, n_j)   (7)
P_S = (S_1, S_2, ..., S_M)   (8)
The descriptors S_i of all the sub-regions are concatenated in order to form the N × M-dimensional workpiece feature descriptor.
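The binary tests of formulas (6) to (8) can be sketched as follows. The layout of the sampling-point values and the choice of point pairs are hypothetical; the claim does not fix how the N pairs are selected.

```python
def binary_test(v_m, v_n):
    # Formula (6): T(P_i; m, n) = 1 if I(P_i, m) > I(P_i, n) else 0
    return 1 if v_m > v_n else 0

def subregion_descriptor(values, pairs):
    # Formula (7): S_i = sum over j of 2^(j-1) * T(P_i; m_j, n_j),
    # i.e. the N comparison bits packed into one integer
    s = 0
    for j, (m, n) in enumerate(pairs, start=1):
        s += (1 << (j - 1)) * binary_test(values[m], values[n])
    return s

def workpiece_descriptor(subregion_values, pairs):
    # Formula (8): the M sub-region descriptors concatenated in order
    return [subregion_descriptor(v, pairs) for v in subregion_values]
```

Each sub-region thus contributes one N-bit integer, and the full descriptor is the ordered list of these integers, matching the N × M-dimensional concatenation described above.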
4. The binary descriptor-based workpiece recognition method according to claim 1, wherein: the step of sorting and dividing the original image according to the gray value of the pixel points in the step 2) comprises the following steps:
① sorting all pixel points according to their gray values, and dividing the sorted gray values into M parts;
② classifying the pixel points according to the M gray-value ranges: the pixel points belonging to a given gray-value range are retained in the image and all pixel points not belonging to that range are discarded, thereby decomposing the original image into M sub-regions, with the pixel points in each sub-region forming one class.
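A minimal sketch of this rank-based partition follows. The tie-breaking at range boundaries is an assumption, as the claim does not specify it.

```python
def partition_by_gray_rank(img, M):
    """Sort all pixel gray values, split the sorted sequence into M equal
    parts, and assign each pixel to the sub-region whose gray range it
    falls in.  Returns a label image with values 0..M-1."""
    h, w = len(img), len(img[0])
    values = sorted(img[i][j] for i in range(h) for j in range(w))
    n = len(values)
    # upper boundary of each of the M rank ranges
    bounds = [values[min(n - 1, (k + 1) * n // M - 1)] for k in range(M)]
    labels = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            v = img[i][j]
            for k, b in enumerate(bounds):
                if v <= b:
                    labels[i][j] = k
                    break
    return labels
```

Keeping only the pixels with one label and discarding the rest yields the M sub-region images described in the claim.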
5. The binary descriptor-based workpiece recognition method according to claim 3, wherein: in the circular sampling mode of step 2), the detected feature point is taken as the center of the sampling mode; the sub-circles on the same layer of concentric circles have the same area, and the sub-circles on different layers have different areas; each layer of concentric circles contains 6 sampling points, and every second sub-circle is tangent to its neighbor.
6. The binary descriptor-based workpiece recognition method according to claim 1, wherein: the step 3) comprises the following steps:
① calculating the Hamming distance between the workpiece feature descriptor and a template feature descriptor in the first sub-region;
② comparing the Hamming distance with a preset threshold: if the distance is greater than the preset distance threshold, the matching point is discarded; otherwise the distance comparison continues in the second sub-region;
③ performing matching judgment in nearest-neighbor-ratio mode: a point to be matched is judged a correct matching pair only when the ratio of its nearest Hamming distance to its next-nearest Hamming distance is smaller than a set ratio threshold, in which case the number Q of initial matching pairs is increased by 1; otherwise Q is unchanged;
④ repeating the above steps until all the sub-regions have been traversed.
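The cascade-plus-ratio test above can be sketched as follows (a simplified illustration; the single accumulated stage threshold and the ratio value are assumptions, since the claim only requires early rejection on the leading sub-regions and a nearest-neighbor-ratio decision).

```python
def hamming(a, b):
    # Hamming distance between two packed binary sub-descriptors
    return bin(a ^ b).count("1")

def cascade_distance(desc_a, desc_b, stage_threshold):
    """Accumulate Hamming distance sub-region by sub-region, bailing out
    early once the running distance exceeds the stage threshold."""
    d = 0
    for sa, sb in zip(desc_a, desc_b):
        d += hamming(sa, sb)
        if d > stage_threshold:
            return None  # candidate rejected early
    return d

def match_nn_ratio(query, templates, stage_threshold, ratio):
    """Nearest-neighbor-ratio matching: accept only if
    best_distance / second_best_distance < ratio."""
    dists = []
    for idx, t in enumerate(templates):
        d = cascade_distance(query, t, stage_threshold)
        if d is not None:
            dists.append((d, idx))
    dists.sort()
    if len(dists) >= 2 and dists[1][0] > 0 and dists[0][0] / dists[1][0] < ratio:
        return dists[0][1]
    return None
```

The early exit is what makes the matching "cascaded": most wrong candidates are discarded after comparing only the first sub-region descriptors.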
7. The binary descriptor-based workpiece recognition method according to claim 1, wherein: the step 4) comprises the following steps:
① Let the initial matching point set of the template image be A and the initial matching point set on the image to be matched be B. Randomly select 4 initial matching pairs from sets A and B and obtain the projective transformation matrix H from these 4 pairs; entries such as h_11 and h_12 represent motions of the object such as rotation and translation, and h_33 is typically normalized to 1.
H = | h_11  h_12  h_13 |
    | h_21  h_22  h_23 |
    | h_31  h_32  h_33 |   (10)
wherein h_33 = 1, and the remaining eight entries are determined from the 8 coordinate values of the 4 randomly selected initial matching pairs;
② transforming all the feature points in set A according to the projective transformation matrix H to obtain a set B′, and comparing the coordinates of all corresponding points in sets B and B′, i.e.:
e_i = ||B_i − B′_i||; if e_i < h_T, the point pair is judged to be an inlier, wherein h_T is a preset decision threshold, and otherwise it is judged to be an outlier; the number of inliers obtained under this transformation is counted;
③ repeating steps ① to ②, selecting the transformation with the largest number of inliers, taking the sets obtained by that transformation as the new sets A and B, and continuing the iteration;
④ terminating the iteration when the number of inliers of the current iteration equals that of the previous iteration; the sets A and B obtained in the last iteration are taken as the sets with mismatched feature point pairs removed, and the corresponding projective transformation matrix is taken as the final projective transformation matrix.
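The hypothesise-and-count loop of this claim can be sketched as follows. For brevity this sketch uses a simple translation model in place of the full 3 × 3 projective matrix H (estimating H requires solving an 8-unknown linear system); the iteration count, seed and threshold are illustrative assumptions.

```python
import random

def ransac_inliers(A, B, threshold, iters=200, seed=0):
    """RANSAC loop: hypothesise a transform from a random sample,
    count inliers with e_i = ||B_i - B'_i|| below the threshold,
    and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        k = rng.randrange(len(A))   # minimal sample: one pair suffices
        dx = B[k][0] - A[k][0]      # for a pure translation model
        dy = B[k][1] - A[k][1]
        inliers = []
        for (ax, ay), (bx, by) in zip(A, B):
            e = ((ax + dx - bx) ** 2 + (ay + dy - by) ** 2) ** 0.5
            if e < threshold:       # inlier test: e_i < h_T
                inliers.append(((ax, ay), (bx, by)))
        if len(inliers) > len(best):
            best = inliers
    return best
```

A sample drawn from a mismatched pair produces few inliers, so the consensus set of the best hypothesis is exactly the set of correctly matched pairs.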
8. The binary descriptor-based workpiece recognition method according to claim 1, wherein: before workpiece identification, the color image is converted to gray scale by the weighted-average method, as follows:
g(x, y) = w_1 R(x, y) + w_2 G(x, y) + w_3 B(x, y)   (11)
wherein R(x, y), G(x, y) and B(x, y) are the component values of the original color image at image coordinate (x, y), g(x, y) is the gray value of the transformed image at coordinate (x, y), and w_i (i = 1, 2, 3) are the weights of the RGB components.
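Equation (11) in code: the ITU-R BT.601 weights used below are an assumed example, since the claim leaves the w_i unspecified.

```python
def to_gray(R, G, B, w=(0.299, 0.587, 0.114)):
    """Weighted-average grayscale conversion of equation (11):
    g(x, y) = w1*R(x, y) + w2*G(x, y) + w3*B(x, y).
    R, G, B are 2-D lists of equal size; returns the gray image."""
    h, width = len(R), len(R[0])
    return [[w[0] * R[i][j] + w[1] * G[i][j] + w[2] * B[i][j]
             for j in range(width)] for i in range(h)]
```

Because the example weights sum to 1, a pixel with equal R, G and B keeps its value after conversion.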
9. A workpiece recognition apparatus based on the workpiece recognition method of claim 1, characterized by comprising: a feature point detection and descriptor construction unit, a template library, a feature matching unit and a mismatch rejection unit;
the feature point detection and descriptor construction unit divides the original image into M sub-regions according to the gray value sequence of pixel points, utilizes a Fast Hessian feature detection operator to extract feature points, and maps the feature point information extracted by the Fast Hessian feature detection operator and the gray value information of the pixel points in each sub-region of the original image to a circular sampling mode to construct a workpiece feature descriptor;
the template library is used for storing template feature descriptors;
the feature matching unit carries out Hamming distance matching on the workpiece feature descriptors and the template feature descriptors in the template library by adopting a cascade matching algorithm in a mode of nearest neighbor ratio to obtain initial matching pairs and counts the number of the initial matching pairs;
and the mismatching eliminating unit adopts a random sampling consistency algorithm to eliminate the mismatching pairs in the initial matching pairs so as to obtain correct matching pairs, and calculates matching scores according to the matching pairs so as to obtain a workpiece identification result.
CN201710335746.1A 2017-05-12 2017-05-12 A kind of workpiece identification method and device based on binary descriptor Pending CN107239792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710335746.1A CN107239792A (en) 2017-05-12 2017-05-12 A kind of workpiece identification method and device based on binary descriptor

Publications (1)

Publication Number Publication Date
CN107239792A true CN107239792A (en) 2017-10-10

Family

ID=59984391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710335746.1A Pending CN107239792A (en) 2017-05-12 2017-05-12 A kind of workpiece identification method and device based on binary descriptor

Country Status (1)

Country Link
CN (1) CN107239792A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102332092A (en) * 2011-09-14 2012-01-25 广州灵视信息科技有限公司 Flame detection method based on video analysis
CN104268602A (en) * 2014-10-14 2015-01-07 大连理工大学 Shielded workpiece identifying method and device based on binary system feature matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WU YANHAI et al.: "Image Registration Method Based on SURF and FREAK", 2015 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) *
WU PANCHAO: "Three-dimensional measurement of the edge contour of a target in blurred images", Wanfang Data Knowledge Service Platform *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109102013B (en) * 2018-08-01 2022-03-15 重庆大学 Improved FREAK characteristic point matching image stabilization method suitable for tunnel environment characteristics
CN109102013A (en) * 2018-08-01 2018-12-28 重庆大学 A kind of improvement FREAK Feature Points Matching digital image stabilization method suitable for tunnel environment characteristic
CN109521742A (en) * 2018-12-05 2019-03-26 西安交通大学 A kind of control system and control method for electric rotary body
CN109615645A (en) * 2018-12-07 2019-04-12 国网四川省电力公司电力科学研究院 The Feature Points Extraction of view-based access control model
CN109829853A (en) * 2019-01-18 2019-05-31 电子科技大学 A kind of unmanned plane image split-joint method
CN109829853B (en) * 2019-01-18 2022-12-23 电子科技大学 Unmanned aerial vehicle aerial image splicing method
CN110009549A (en) * 2019-03-14 2019-07-12 北京航空航天大学 A kind of calculation method and hardware accelerator of rotational symmetry description
CN110009549B (en) * 2019-03-14 2020-08-21 北京航空航天大学 Computing method of rotational symmetry descriptor and hardware accelerator
CN110084783A (en) * 2019-03-30 2019-08-02 天津大学 Local feature real-time detection and matching process on star
CN111160363A (en) * 2019-12-02 2020-05-15 深圳市优必选科技股份有限公司 Feature descriptor generation method and device, readable storage medium and terminal equipment
CN111160363B (en) * 2019-12-02 2024-04-02 深圳市优必选科技股份有限公司 Method and device for generating feature descriptors, readable storage medium and terminal equipment
CN111340109A (en) * 2020-02-25 2020-06-26 深圳市景阳科技股份有限公司 Image matching method, device, equipment and storage medium
CN111340109B (en) * 2020-02-25 2024-01-26 深圳市景阳科技股份有限公司 Image matching method, device, equipment and storage medium
CN111709434A (en) * 2020-06-28 2020-09-25 哈尔滨工业大学 Robust multi-scale template matching method based on nearest neighbor feature point matching
CN111709434B (en) * 2020-06-28 2022-10-04 哈尔滨工业大学 Robust multi-scale template matching method based on nearest neighbor feature point matching


Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
RJ01  Rejection of invention patent application after publication (application publication date: 20171010)