CN104268602A - Shielded workpiece identifying method and device based on binary system feature matching - Google Patents


Info

Publication number
CN104268602A
Authority
CN
China
Prior art keywords: workpiece, point, image, matching, identified
Legal status: Pending
Application number
CN201410543452.4A
Other languages
Chinese (zh)
Inventor
陈喆
殷福亮
李腾
Current Assignee
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date
Application CN201410543452.4A filed by Dalian University of Technology; published as CN104268602A. Legal status: Pending.

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and device for recognizing occluded workpieces based on binary feature matching. It solves the recognition problem for partially occluded workpieces while lowering computational complexity, reducing storage requirements, and improving matching precision. In addition, it is robust to ambient lighting, viewpoint changes, and partial occlusion, so occluded workpieces can be recognized under a variety of interference conditions with good results. The method and device can therefore be widely applied in the field of image recognition.

Description

Occluded workpiece recognition method and device based on binary feature matching
Technical field
The present invention relates to a workpiece recognition method and device, and in particular to an occluded workpiece recognition method and device based on binary feature matching.
Background technology
Robots are widely used in industrial equipment manufacturing and play an important role in industrial production. Robots can perform dull and repetitive operations such as machining, assembling, handling, and sorting, as well as hazardous operations such as welding, spraying, laser processing, and die casting. Using robots not only saves manpower, reduces labor intensity, improves productivity, and lowers production costs, but also yields more consistent product quality. Introducing machine vision into robot control enables a robot to simulate human visual perception and judgment: by capturing images of a scene and applying image processing and pattern recognition techniques, the robot can identify and sort targets, greatly improving its level of intelligence.
Workpiece recognition is one of the key machine vision technologies in industry; its purpose is to distinguish workpieces of one type from workpieces of other types. Automated production steps such as machining, assembling, and sorting all require the workpiece to be recognized first, and occlusion is a challenging problem in this process. On a production line or worktable, the poses of workpieces are not fixed, occlusion between workpieces is common, and the captured workpiece images are therefore incomplete. This significantly reduces recognition accuracy, or even makes recognition impossible, lowering production efficiency.
Problems with existing workpiece recognition methods are as follows:
1) Liu W, Wang P, Qiao H. Part-based adaptive detection of workpieces using differential evolution. Signal Processing, 2012, 92(2): 301-307, proposes a detection method for partially occluded workpieces. Its basic idea is to use shape-based partitioning to divide the complete contour of the template workpiece into multiple sub-contours, assign them different weights according to their discriminability, and then detect and locate the occluded workpiece by differential evolution. The accuracy of this method depends on how reasonably the contour is segmented: it suits workpieces with rich contour variation, but for workpieces with little contour variation its detection accuracy is low and its applicability is poor.
2) Liu M Y, Tuzel O, Veeraraghavan A. Fast object localization and pose estimation in heavy clutter for robotic bin picking. The International Journal of Robotics Research, 2012, 31(8): 951-973, proposes an apparatus and method suited to occluded workpiece recognition. Its basic idea is to capture images with a multi-flash camera, compute depth information at the workpiece edges, and then apply a fast template matching algorithm to recognize the workpiece and estimate its pose. Both of the above methods require complex, costly equipment, and their data acquisition and processing are complicated and computationally expensive.
3) Wang X H, Fu W P, Zhu D X. Research on recognition of work-piece based on improved SIFT algorithm. International Conference on Electronic Measurement & Instruments, Beijing, China, 2009, 1: 417-421, proposes a workpiece recognition method based on an improved SIFT (Scale Invariant Feature Transform) algorithm. Its basic idea is to use SIFT to obtain features that are invariant under translation, rotation, scaling, and occlusion; to replace the Euclidean distance with a linear combination of chessboard distance and city-block distance; and to dynamically reduce the number of features involved in the distance computation to improve efficiency, effectively addressing occluded workpiece recognition. However, SIFT keypoint extraction and descriptor generation are complicated and computationally expensive, and the descriptors are floating-point vectors, so memory consumption is high and matching is slow.
4) Gui Zhenwen, Liu Yue, Wang Yongtian. A visual search method suitable for mobile terminals. Chinese patent CN103530649A, 2014-01-22, discloses a visual search method for mobile terminals. Its basic idea is: use a mobile terminal to capture an image of the workpiece to be recognized in the current scene, recording the terminal's gravity direction and the scene's GPS information at capture time; detect keypoints in the image with the binary local feature detector BRISK (Binary Robust Invariant Scalable Keypoints); describe the keypoints with the FREAK (Fast Retina Keypoint) descriptor, oriented by the gravity direction, to obtain binary local feature vectors; then package the GPS information and the binary feature vectors into a descriptor file and search a sample library for the image most similar to the workpiece image, realizing visual search. This patent has the following problems: (1) FREAK builds descriptors from a fixed sampling pattern with a limited number of sampling points; although this reduces computation, the number and placement of the sampling points limit the discriminability of the descriptor, so matching precision is low. (2) When selecting sampling-point pairs, FREAK measures a pair's discriminability only by the distance of its mean response from 0.5, and then simply removes pairs whose correlation exceeds a threshold, ignoring the effect of correlation on discriminability, so satisfactory matching results are hard to achieve.
In summary, existing workpiece recognition methods have the following technical problems: (1) some methods are designed for specific workpieces, with poor generality and limited applicability; (2) methods relying on complex equipment such as multi-flash cameras are costly, and their data acquisition and processing are complicated and computationally expensive; (3) methods based on SIFT require large amounts of computation and storage; (4) the descriptors obtained by FREAK have low discriminability, and the matching results are unsatisfactory.
Summary of the invention
In view of the above problems, the object of the present invention is to provide an occluded workpiece recognition method and device based on binary feature matching.
To achieve this object, the present invention adopts the following technical scheme. An occluded workpiece recognition method based on binary feature matching comprises the following steps: 1) capture a workpiece template image and a workpiece image to be recognized with a camera; 2) pre-process the template image, extract its feature points with the BRISK algorithm, and convert them into binary feature descriptors; 3) extract feature points on the image to be recognized and convert them into binary feature descriptors using the same method as in step 2); 4) use an approximate nearest-neighbour method to find the feature points of the image to be recognized that match those of the template image, obtaining the set of all initial matching point pairs; 5) use the RANSAC algorithm to reject erroneous matches from the initial set, obtaining the correct matching point pairs and thereby completing workpiece recognition.
Step 2) comprises the following sub-steps. (1) Pre-process the template image to obtain a filtered, smoothed image. (2) Extract the feature points of the smoothed template image with the BRISK algorithm: I. build a scale-space pyramid; II. at every pyramid layer, use the FAST algorithm to obtain potential feature points of the smoothed image; III. apply non-maxima suppression to each potential feature point in scale space, rejecting non-maximal points to obtain preliminary feature points; IV. apply sub-pixel and scale refinement to each preliminary feature point to obtain accurate positions and scales. (3) Build descriptors by comparing the gray values of the sampling-point pairs selected by the FREAK method, converting the feature points of the template image into binary feature descriptors: I. obtain the selected sampling-point pair positions with the FREAK method; II. determine the principal orientation of each feature point with the intensity centroid method; III. build the binary feature descriptor from the selected pair positions and the principal orientations.
Step 4) comprises the following sub-steps. Let the feature point set of the template image be P = {p_1, p_2, ..., p_m} with descriptor set VP = {vp_1, vp_2, ..., vp_m}, and let the feature point set of the image to be recognized be Q = {q_1, q_2, ..., q_n} with descriptor set VQ = {vq_1, vq_2, ..., vq_n}. Each descriptor is a binary vector of 512 bits. The first N bits (N ≤ 128) carry the most discriminative information and are compared first; only if this first distance is below a set threshold are the remaining 512−N bits compared. The concrete steps are: (1) For descriptor vp_i of the template image, compute the Hamming distance between its first N bits and the first N bits of each descriptor vq_j of the image to be recognized: HD_j = Hamm(vp_i, vq_j) = Σ_{k=1}^{N} (vp_i^k ⊕ vq_j^k), j = 1, 2, ..., n, where ⊕ denotes the XOR operation. (2) If HD_j is greater than or equal to a set threshold T_1 (taken as 30 to 50), vp_i and vq_j are judged to be non-matching feature vectors. (3) If HD_j is less than T_1, compute the Hamming distance between the remaining 512−N bits of vp_i and vq_j and add it to the distance of the first N bits, obtaining the full 512-bit Hamming distance HD'_j. (4) Find the smallest distance HD'_min and the second smallest distance HD'_sec; when the ratio HD'_min / HD'_sec is below a set threshold, vp_i and vq_min are judged to be matching feature vectors, and the corresponding points p_i and q_min form a matching pair. (5) Repeat steps (1)–(4) for every element of the template descriptor set VP to obtain the set of all initial matching point pairs.
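The two-stage matching above can be sketched in pure Python, treating each 512-bit descriptor as an integer. The helper names and the concrete values N = 128, T1 = 40, and the 0.8 ratio are illustrative assumptions; the patent only bounds N ≤ 128, gives T1 in the range 30 to 50, and leaves the ratio threshold open.

```python
N = 128       # number of leading "coarse" bits compared first (assumed = max)
T1 = 40       # early-rejection threshold on the first N bits (patent: 30..50)
RATIO = 0.8   # nearest / second-nearest distance ratio (assumed value)

def hamming(a: int, b: int) -> int:
    """Hamming distance via XOR + popcount."""
    return bin(a ^ b).count("1")

def top_bits(d: int, n: int, total: int = 512) -> int:
    """The n most significant bits of a `total`-bit descriptor."""
    return d >> (total - n)

def match_descriptors(vp, vq):
    """Return (i, j) index pairs of matched descriptors per steps (1)-(5)."""
    low_mask = (1 << (512 - N)) - 1
    matches = []
    for i, p in enumerate(vp):
        dists = []
        for q in vq:
            hd = hamming(top_bits(p, N), top_bits(q, N))
            if hd >= T1:                     # step (2): early rejection
                dists.append(float("inf"))
            else:                            # step (3): full 512-bit distance
                dists.append(hd + hamming(p & low_mask, q & low_mask))
        order = sorted(range(len(dists)), key=dists.__getitem__)
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        if dists[best] < RATIO * dists[second]:  # step (4): ratio test
            matches.append((i, best))
    return matches
```

On descriptors that agree bit-for-bit the distance is zero, so any sufficiently different second candidate passes the ratio test.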
Step 5) comprises the following sub-steps. Let E be the initial matching point set obtained on the template image and F the initial matching point set obtained on the image to be recognized. The transformation between the two images is described by the projective transformation matrix
H = [h_00 h_01 h_02; h_10 h_11 h_12; h_20 h_21 h_22].
A point p(x, y) in one image is transformed by H into the point p'(x', y'):
[x', y', 1]^T = H [x, y, 1]^T,
and the projective model H can be computed from four pairs of matching points. The concrete RANSAC steps for rejecting erroneous matches from the initial set are: (1) Randomly select four matching pairs from E and F and compute the projective transformation matrix H they determine. (2) For each remaining feature point (x_2, y_2) of F, transform it with the H computed in step (1) to obtain (x'_2, y'_2); if the coordinates (x_1, y_1) of the corresponding point in E satisfy sqrt((x_1 − x'_2)² + (y_1 − y'_2)²) < ε, the pair (x_1, y_1), (x_2, y_2) is considered to fit model H and is called an inlier, where ε is the inlier/outlier distance threshold, taken as 3 or 4. (3) Repeat step (2) over all remaining feature points of F and count the number of matching pairs (the size of the inlier set) that fit the projective matrix. (4) Repeat steps (1)–(3), find the transformation with the largest number of inliers, take its inlier sets as the new point sets E and F, and start a new round of iteration. (5) When the number of pairs in E and F no longer changes between iterations, the iteration terminates; the final E and F are the match sets with mismatches removed, yielding the correct matching pairs and completing workpiece recognition.
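The RANSAC rejection loop above can be sketched as follows. To keep the example short it fits a one-pair translation model instead of the four-pair projective matrix H the patent uses; the sample/score/keep-best structure is the same, and all names and parameter values are illustrative.

```python
import random

def ransac_translation(E, F, eps=3.0, iters=100, seed=0):
    """RANSAC over matched point pairs (E[i] <-> F[i]).

    The patent estimates a projective matrix H from 4 random pairs; this
    sketch swaps in a 1-pair translation model so the code stays short,
    but the sample / count-inliers / keep-largest-consensus loop is the same.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        k = rng.randrange(len(E))                 # (1) random minimal sample
        dx = E[k][0] - F[k][0]
        dy = E[k][1] - F[k][1]
        inliers = []
        for (x1, y1), (x2, y2) in zip(E, F):      # (2)-(3) inlier test, eps = 3
            dist = ((x1 - (x2 + dx)) ** 2 + (y1 - (y2 + dy)) ** 2) ** 0.5
            if dist < eps:
                inliers.append(((x1, y1), (x2, y2)))
        if len(inliers) > len(best_inliers):      # (4) keep largest consensus
            best_inliers = inliers
    return best_inliers                           # (5) matches after rejection
```

With three matches shifted by a common (1, 1) offset plus one gross outlier, the loop recovers exactly the three consistent pairs.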
An occluded workpiece recognition device based on binary feature matching is characterized in that it comprises an image acquisition unit, a template image feature extraction unit, a feature extraction unit for the image to be recognized, a feature matching unit, and a mismatch rejection unit. The image acquisition unit captures the workpiece template image and the workpiece image to be recognized, and sends them to the corresponding feature extraction units. The template image feature extraction unit extracts the feature points of the template image, converts them into binary feature descriptors, and sends them to the feature matching unit. The feature extraction unit for the image to be recognized extracts the feature points of that image, converts them into binary feature descriptors, and likewise sends them to the feature matching unit. The feature matching unit uses the Hamming distance as the matching criterion to find the matching feature points of the two images, obtains all initial matches, and sends them to the mismatch rejection unit. The mismatch rejection unit rejects erroneous matching pairs from the initial set, obtains the correct matching pairs, and thereby completes workpiece recognition.
Both feature extraction units comprise an image pre-processing module, a feature point detection module, and a feature point description module. The pre-processing module applies grayscale conversion and median filtering to the input image and passes the result to the detection module; the detection module detects feature points with the BRISK algorithm and passes them to the description module; the description module converts the feature points into binary feature descriptors and sends them to the feature matching unit.
The image acquisition unit uses a monocular camera.
By adopting the above technical scheme, the present invention has the following advantages. 1. It matches feature points rather than relying on workpiece contours, so even workpieces with little contour variation can be correctly recognized; applicability is high. 2. It uses a monocular camera to acquire the template image and the image to be recognized; the equipment is inexpensive, image capture is flexible, data acquisition is simple, and the amount of computation is small. 3. It describes image features with binary descriptors, so a high-dimensional description vector takes only a few bytes, greatly saving storage space. 4. It matches with the Hamming distance, which requires only a simple XOR followed by counting the 1-bits in the result; matching is fast, and these operations map onto low-level machine instructions, making hardware implementation easy. 5. It detects feature points with the BRISK algorithm, which is fast and robust to illumination change, translation, rotation, and scale change, as well as to partial occlusion. 6. In the feature description stage it adopts a pair-discriminability model that accounts for both mean distance and correlation, sampling more finely in the neighbourhoods of the initial sampling points obtained from the FREAK fixed pattern; this improves descriptor discriminability, reduces mismatches, and thus improves matching accuracy. 7. It replaces time-consuming gradient computation with the simple and efficient intensity centroid method, reducing algorithmic complexity. 8. It rejects erroneous matching pairs from the initial match set, improving matching precision. The invention can therefore be widely applied in the field of image recognition.
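Advantage 4 rests on the fact that the Hamming distance between two binary descriptors is just an XOR followed by a population count. A toy demonstration on 8-bit values:

```python
def hamming(a: int, b: int) -> int:
    # XOR marks the differing bit positions; counting the 1s gives the distance.
    return bin(a ^ b).count("1")

# Toy 8-bit descriptors differing in exactly three bit positions.
d1 = 0b10110010
d2 = 0b10010111
print(hamming(d1, d2))  # → 3
```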
Brief description of the drawings
Fig. 1 is the functional block diagram of the method of the invention;
Fig. 2 is a schematic diagram of FAST feature point detection;
Fig. 3 shows the FAST score values;
Fig. 4 is a schematic diagram of BRISK feature point detection;
Fig. 5 shows the FREAK sampling pattern;
Fig. 6 shows the fine sampling pattern;
Fig. 7 shows the matching and recognition result for an occluded workpiece;
Fig. 8 shows the test images: (a) workpiece template image and (b) test image.
Embodiment
The present invention is described in detail below with reference to the drawings and embodiments.
As shown in Fig. 1, the occluded workpiece recognition device of the present invention based on binary feature matching comprises an image acquisition unit 1, a template image feature extraction unit 2, a feature extraction unit 3 for the image to be recognized, a feature matching unit 4, and a mismatch rejection unit 5.
The image acquisition unit 1 captures the workpiece template image and the workpiece image to be recognized and sends them to the template image feature extraction unit 2 and the feature extraction unit 3, respectively. The template image feature extraction unit 2 comprises an image pre-processing module 21, a feature point detection module 22, and a feature point description module 23: module 21 applies grayscale conversion and median filtering to the template image and passes it to module 22; module 22 detects the feature points of the template image with the BRISK algorithm and passes them to module 23; module 23 converts the feature points into binary feature descriptors and sends them to the feature matching unit 4. The feature extraction unit 3 extracts the feature points of the image to be recognized, converts them into binary feature descriptors, and sends them to the feature matching unit 4. The feature matching unit 4 uses the Hamming distance as the matching criterion to find the matching feature points of the two images, obtains all initial matches, and sends them to the mismatch rejection unit 5. The mismatch rejection unit 5 uses the RANSAC (Random Sample Consensus) algorithm to reject erroneous matching pairs from the initial set, obtaining the correct matching pairs, improving matching precision, and completing workpiece recognition. Since the modules of the feature extraction unit 3 are identical to those of unit 2, unit 3 is not described further.
The image acquisition unit 1, the feature extraction unit 3, the feature matching unit 4, and the mismatch rejection unit 5 operate online; the template image feature extraction unit 2 runs in an offline processing stage, which reduces the online computational load.
In the above embodiment, the image acquisition unit 1 preferably uses a monocular camera to capture the workpiece template image and the workpiece image to be recognized.
The occluded workpiece recognition method of the present invention based on binary feature matching comprises the following steps:
1) Capture the workpiece template image and the workpiece image to be recognized with a camera.
2) Extract the feature points of the template image and convert them into binary feature descriptors.
(1) Pre-process the template image to remove the noise introduced by illumination or imaging-system defects, obtaining an image unaffected by noise. This comprises the following steps: convert the colour template image to a grayscale image with a grayscale transformation, then denoise the grayscale image with a median filter to obtain a filtered, smoothed image.
I. Convert the colour template image to a grayscale image. The template image Im(x, y) captured by the camera is converted from the RGB colour space to gray space, giving the grayscale image Im_gray(x, y), with the conversion formula
g = 0.299R + 0.587G + 0.114B (1)
where R, G, and B are the red, green, and blue colour components of a pixel in the template image, and g is the gray value of that pixel after conversion. Applying formula (1) to every pixel of Im(x, y) in RGB space yields the grayscale image Im_gray(x, y) in gray space.
II. Denoise the grayscale image Im_gray(x, y) with a median filter to obtain the filtered, smoothed image, i.e. the template image with noise removed. A median filter with a 3 × 3 window is applied to Im_gray(x, y): for every pixel (x, y), a 3 × 3 observation window is placed on its neighbourhood, the pixel values inside the window are sorted, and the median is taken as the filter output at that pixel, giving the smoothed image Im_smooth.
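Steps I and II can be sketched in pure Python as follows. The copy-the-border handling of edge pixels is an assumption; the patent does not specify border behaviour.

```python
def rgb_to_gray(r, g, b):
    """Formula (1): luma-weighted grayscale conversion."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def median3x3(img):
    """3 x 3 median filter; border pixels are copied unchanged (assumed).

    img is a list of equal-length rows of gray values.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[yy][xx]
                            for yy in (y - 1, y, y + 1)
                            for xx in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 sorted window values
    return out
```

A single impulse-noise pixel surrounded by uniform values is replaced by the neighbourhood median, which is exactly the denoising behaviour the step relies on.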
(2) Extract the feature points of the filtered, smoothed template image with the BRISK algorithm, comprising the following steps.
I. Build the scale-space pyramid. The pyramid comprises n octave layers (octaves) and n intra-octave layers (intra-octaves); c_i denotes the i-th octave and d_i the i-th intra-octave, i = 0, 1, ..., n−1. Here c_0 is the filtered smoothed image Im_smooth, d_0 is obtained from c_0 by 1.5× down-sampling, c_{i+1} is obtained from the previous layer c_i by 0.5× down-sampling, d_{i+1} is obtained from the previous layer d_i by 0.5× down-sampling, and d_i lies between the adjacent layers c_i and c_{i+1}:
c_{i+1}(x, y) = c_i(2x, 2y) (2)
d_{i+1}(x, y) = d_i(2x, 2y) (3)
Letting t denote scale, t(c_i) = 2^i and t(d_i) = 2^i × 1.5.
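The pyramid construction can be sketched as below. The 0.5× down-sampling follows formulas (2) and (3) directly; the 1.5× down-sampling used for d_0 is approximated by dropping every third sample, which is only a stand-in since the patent does not give the resampling kernel.

```python
def downsample(img, keep):
    """Keep the rows and columns whose index satisfies keep(i)."""
    rows = [r for i, r in enumerate(img) if keep(i)]
    return [[v for j, v in enumerate(r) if keep(j)] for r in rows]

def build_pyramid(c0, n=3):
    """Octaves c_i at scale t = 2^i and intra-octaves d_i at t = 1.5 * 2^i."""
    half = lambda i: i % 2 == 0        # formulas (2)/(3): keep every 2nd sample
    two_thirds = lambda i: i % 3 != 2  # assumed stand-in for 1.5x down-sampling
    c = [c0]
    d = [downsample(c0, two_thirds)]
    for i in range(1, n):
        c.append(downsample(c[i - 1], half))
        d.append(downsample(d[i - 1], half))
    scales = {"c": [2 ** i for i in range(n)],
              "d": [1.5 * 2 ** i for i in range(n)]}
    return c, d, scales
```

Starting from a 12 × 12 image, the octaves shrink to 6 × 6 and 3 × 3, the intra-octaves to 8 × 8 and then 4 × 4, matching the scale sequence 1, 2, 4 and 1.5, 3, 6.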
II. At every pyramid layer, obtain the potential feature points of the smoothed image Im_smooth with the FAST algorithm (Features from Accelerated Segment Test). At each layer the FAST 9-16 variant is used: with the current pixel p as centre, a circle of radius 3 is constructed, as shown in Fig. 2, and for each of the 16 pixels x ∈ {1, ..., 16} on the circle the corner response function is defined as
CRF(x) = 1 if |I_{p→x} − I_p| > τ, and 0 otherwise, (4)
where I_{p→x} is the gray value of pixel x on the circle, I_p is the gray value of the centre pixel, and τ is the feature detection threshold, typically 10. If the corner response function CRF equals 1 for at least 9 pixels, p is considered a potential feature point.
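The per-pixel test can be sketched as follows, assuming the standard 16-offset FAST circle of radius 3. Note that the reference FAST 9-16 detector additionally requires the 9 responding pixels to be contiguous on the circle, while the description above only counts responses.

```python
# The 16 offsets of the radius-3 Bresenham circle used by FAST (standard layout).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, tau=10):
    """CRF(x) = 1 when a circle pixel differs from the centre by more than tau;
    p is a potential feature point when at least 9 responses are 1."""
    p = img[y][x]
    crf = [1 if abs(img[y + dy][x + dx] - p) > tau else 0 for dx, dy in CIRCLE]
    return sum(crf) >= 9
```

A bright dot on a dark background fires all 16 responses; a flat patch fires none.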
III. Apply non-maxima suppression to each potential feature point in scale space and reject non-maximal points to obtain preliminary feature points, as follows.
A. Take the 26 pixels comprising the 8-neighbourhood of the potential feature point in its own layer (the eight points other than the feature point) and the 3 × 3 neighbourhoods in the layers above and below, and compute the FAST score s of the potential feature point and of these 26 neighbours:
s = max( Σ_{x∈S_bright} |I_{p→x} − I_p| − τ, Σ_{x∈S_dark} |I_p − I_{p→x}| − τ ) (5)
where S_bright is the set of circle pixels satisfying I_{p→x} ≥ I_p + τ, i.e. S_bright = { x | I_{p→x} ≥ I_p + τ }, and S_dark is the set satisfying I_{p→x} ≤ I_p − τ, i.e. S_dark = { x | I_{p→x} ≤ I_p − τ }.
B. Compare the FAST score of the potential feature point with the scores of the 26 points in its three-dimensional neighbourhood; if it is larger than all 26 scores, retain it as a preliminary feature point.
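The 26-neighbour comparison of step III can be sketched as follows. For simplicity the three layers are assumed to share one resolution, whereas BRISK actually compares against interpolated scores on the neighbouring (coarser and finer) layers.

```python
def is_3d_maximum(score, layer, y, x):
    """True when score[layer][y][x] beats all 26 scale-space neighbours:
    the 8 in-layer neighbours plus the two 3 x 3 patches above and below."""
    s = score[layer][y][x]
    for dl in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dl == dy == dx == 0:
                    continue  # skip the candidate itself
                if score[layer + dl][y + dy][x + dx] >= s:
                    return False
    return True
```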
IV. Apply sub-pixel and scale refinement to each preliminary feature point to obtain its accurate position and scale, as follows.
A. Refine the FAST score on each of the three layers associated with the preliminary feature point (its own layer and the layers above and below); the procedure is identical on every layer. Compute the FAST scores of the 9 pixels in the 3 × 3 region centred on the preliminary feature point, denoted s_{0,0}, s_{1,0}, s_{2,0}, s_{0,1}, s_{1,1}, s_{2,1}, s_{0,2}, s_{1,2}, s_{2,2}, as shown in Fig. 3. Compute the coefficients k_1 to k_6 by formula (6):
k_1 = 3 × (s_{0,0} + s_{0,1} + s_{0,2} + s_{2,0} + s_{2,1} + s_{2,2} − 2s_{1,0} − 2s_{1,1} − 2s_{1,2})
k_2 = 3 × (s_{0,0} + s_{1,0} + s_{0,2} + s_{2,0} + s_{1,2} + s_{2,2} − 2s_{0,1} − 2s_{1,1} − 2s_{2,1})
k_3 = −3 × (s_{0,0} + s_{0,2} − s_{2,0} − s_{2,2} + s_{0,1} − s_{2,1})
k_4 = −3 × (s_{0,0} − s_{0,2} + s_{2,0} − s_{2,2} + s_{1,0} − s_{1,2})
k_5 = 4 × (s_{0,0} − s_{0,2} − s_{2,0} + s_{2,2})
k_6 = −2 × (s_{0,0} + s_{0,2} − 2 × (s_{1,0} + s_{0,1} + s_{1,2} + s_{2,1}) − 5s_{1,1} + s_{2,0} + s_{2,2}) (6)
Compute the discriminant ∇ by formula (7):
∇ = k_5² − 4 k_1 k_2 (7)
If ∇ = 0, then x = y = 0; otherwise
x = max(−1, min(1, (2 k_2 k_3 − k_4 k_5) / (k_5² − 4 k_1 k_2))),
y = max(−1, min(1, (2 k_1 k_4 − k_3 k_5) / (k_5² − 4 k_1 k_2))),
where x is the abscissa and y the ordinate of the sub-pixel offset. The corrected FAST score is then computed by formula (8):
ŝ = k_1 x² + k_2 y² + k_3 x + k_4 y + k_5 x y + k_6 (8)
B, the FAST fractional value of the Feature point correspondence three layers (unique point place layer, the upper and lower) obtained by steps A is established to be respectively the yardstick of its correspondence is respectively t 1, t 2, t 3, utilize formula (9) right carry out one dimension Parabolic Fit.
s ^ = a t 2 + bt + c - - - ( 9 )
Wherein, a is the coefficient of quadratic term, and b is once the coefficient of item, and c is constant, and t is the yardstick corresponding with FAST fractional value.
Will substitution formula obtains system of equations in (10)
s ^ 1 = a t 1 2 + b t 1 + c s ^ 2 = a t 2 2 + b t 2 + c s ^ 3 = a t 3 2 + b t 3 + c - - - ( 10 )
Solving system (10) gives

a = [(t_2 − t_3)(ŝ_1 − ŝ_2) − (t_1 − t_2)(ŝ_2 − ŝ_3)] / [(t_1 − t_2)(t_2 − t_3)(t_1 − t_3)]
b = [(t_2² − t_3²)(ŝ_1 − ŝ_2) − (t_1² − t_2²)(ŝ_2 − ŝ_3)] / [(t_1 − t_2)(t_2 − t_3)(t_3 − t_1)]
c = [t_2t_3(t_2 − t_3)ŝ_1 + t_1t_3(t_3 − t_1)ŝ_2 + t_1t_2(t_1 − t_2)ŝ_3] / [(t_1 − t_2)(t_2 − t_3)(t_1 − t_3)]    (11)
The extremum of the parabola is found by setting dŝ/dt = 0, which gives the optimal scale estimate

t = −b/(2a)    (12)
C. As shown in Figure 4, interpolation is then performed again on the extremum coordinates between the maxima adjacent to the optimal scale estimate, using conventional methods; this completes the sub-pixel localization and accurate scale estimation of the feature point.
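Steps A and B above can be sketched as follows; a minimal numpy sketch, assuming the nine FAST scores are supplied as a 3×3 array s with s[i][j] = s_{i,j} (the function names refine_subpixel and optimal_scale are illustrative, and np.polyfit is used in place of the closed-form solution (11), to which it is equivalent for three points):

```python
import numpy as np

def refine_subpixel(s):
    """2-D quadratic refinement of Eq. (6)-(8). s is the 3x3 array of FAST
    scores with s[i][j] = s_{i,j}; returns (x, y, refined score), where
    (x, y) is the sub-pixel offset clamped to [-1, 1]."""
    k1 = 3 * (s[0][0] + s[0][1] + s[0][2] + s[2][0] + s[2][1] + s[2][2]
              - 2 * s[1][0] - 2 * s[1][1] - 2 * s[1][2])
    k2 = 3 * (s[0][0] + s[1][0] + s[0][2] + s[2][0] + s[1][2] + s[2][2]
              - 2 * s[0][1] - 2 * s[1][1] - 2 * s[2][1])
    k3 = -3 * (s[0][0] + s[0][2] - s[2][0] - s[2][2] + s[0][1] - s[2][1])
    k4 = -3 * (s[0][0] - s[0][2] + s[2][0] - s[2][2] + s[1][0] - s[1][2])
    k5 = 4 * (s[0][0] - s[0][2] - s[2][0] + s[2][2])
    k6 = -2 * (s[0][0] + s[0][2] - 2 * (s[1][0] + s[0][1] + s[1][2] + s[2][1])
               - 5 * s[1][1] + s[2][0] + s[2][2])
    disc = k5 ** 2 - 4 * k1 * k2                    # discriminant, Eq. (7)
    if disc == 0:
        x = y = 0.0
    else:
        x = max(-1.0, min(1.0, (2 * k2 * k3 - k4 * k5) / disc))
        y = max(-1.0, min(1.0, (2 * k1 * k4 - k3 * k5) / disc))
    score = k1 * x**2 + k2 * y**2 + k3 * x + k4 * y + k5 * x * y + k6  # Eq. (8)
    return x, y, score

def optimal_scale(t, s_hat):
    """Parabolic fit over the three layer scales, Eq. (9)-(12): fit
    s_hat = a*t^2 + b*t + c and return the vertex t = -b / (2a)."""
    a, b, c = np.polyfit(t, s_hat, 2)
    return -b / (2 * a)
```

For a score patch that is symmetric about its center, the fitted offset is (0, 0), as expected.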
③ A descriptor is constructed by comparing the gray values of the sampled-point pairs obtained with the FREAK method, converting each feature point on the workpiece template image into a binary feature descriptor. The concrete steps are as follows:
I. The FREAK method is used to obtain the positions of the optimal sampled-point pairs.
First, coarse training is performed with FREAK's fixed "retinal" sampling pattern, which matches the physiology of human vision, to find an initial set of sampled points, as shown in Figure 5. The central point corresponds to the position of the feature point and the center of each circle to the position of a sampled point; the size of a circle is proportional to the distance from that point to the center, i.e. sampled points farther from the central point have circles of larger radius. The numbers of sampled points from the outside inward are 6, 6, 6, 6, 6, 6, 6 and 1, giving 43 sampled points in total and hence 43×42/2 = 903 possible sampled-point pairs.
The gray values of the two sampled points in each pair are compared to obtain a binary result T(P_a):

T(P_a) = 1 if I(P_a^{r1}) > I(P_a^{r2}), and 0 otherwise    (13)

where P_a denotes a sampled-point pair, P_a^{r1} and P_a^{r2} are the spatial coordinate positions of the two sampled points of pair P_a, and I(P_a^{r1}) and I(P_a^{r2}) denote their gray values.
All binary results are assembled into one binary vector V, which is the descriptor of the feature point:

V = Σ_{0≤a<903} 2^a T(P_a)    (14)

where a ranges from 0 to 902 and 903 is the number of sampled-point pairs.
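In code, Eqs. (13)-(14) amount to one intensity comparison per pair and a bit-packing step; a minimal sketch, assuming the gray values are already smoothed as FREAK prescribes (the names freak_tests and pack_bits are illustrative):

```python
import numpy as np

def freak_tests(intensity, pairs):
    """Eq. (13): one binary result T(P_a) per sampled-point pair.
    intensity[i] is the (smoothed) gray value of sampled point i;
    pairs is a list of (r1, r2) index pairs."""
    return np.array([intensity[r1] > intensity[r2] for r1, r2 in pairs])

def pack_bits(bits):
    """Eq. (14): V = sum over a of 2^a * T(P_a)."""
    return sum(int(b) << a for a, b in enumerate(bits))
```

For example, three sampled points with gray values 10, 5 and 7 and pairs (0,1), (1,2), (2,0) yield the bits 1, 0, 0 and V = 1.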
A two-dimensional matrix D_{M×903} is built, in which each row is the descriptor of one feature point; each row has length 903 and M denotes the number of feature points.
For the two-dimensional matrix D_{M×903}, the distance dist(a) between each column mean ā and 0.5, and the correlation Corr(a, b) between that column and every other column, are computed as

ā = (1/M) Σ_{0≤i<M} D(i, a)    (15)

dist(a) = ā − 0.5    (16)

Corr(a, b) = Σ_{0≤i<M} (D(i, a) − ā)(D(i, b) − b̄) / [ √(Σ_{0≤i<M} (D(i, a) − ā)²) · √(Σ_{0≤i<M} (D(i, b) − b̄)²) ]    (17)

where 0 ≤ a < 903, 0 ≤ b < 903 and a ≠ b.
For each column, the maximum correlation Max_Corr with any other column and the discriminability D_a of the column are computed as

Max_Corr = max_b(|Corr(a, b)|)    (18)

D_a = e^{−|dist(a)|} / Max_Corr    (19)
The columns are sorted by discriminability D_a from high to low, and preferably the first 512 columns are kept as ideal pairs; the positions of the sampled points are recorded, giving the initial sampled-point pair set A = {(p_i, p_j) | 0 ≤ j < i < 43}, where (p_i, p_j) is a sampled-point pair in the initial set A.
As shown in Figure 6, for each sampled-point pair (p_i, p_j) in the initial set A, two Gaussian-distributed points are drawn at random, one from each of the two 2r×2r squares centered on the two points (r being the radius of the circle on which each point lies), to form a new sampled-point pair (p'_i, p'_j).
The discriminability of the new pair is computed according to formula (19); if it is higher than that of the original pair, the new pair replaces the old one, otherwise the new pair is discarded. This process is iterated 200 times (a value chosen experimentally) until the optimal sampled-point pairs are found; the positions of the optimal sampled points are recorded, giving the final sampled-point pair set B.
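The column ranking of Eqs. (15)-(19) can be sketched as below; a numpy sketch, assuming the 0/1 test results are already collected in an M×P matrix D. Here np.corrcoef computes Eq. (17) for all column pairs at once, and a small constant guards against division by zero when a column is uncorrelated with every other column:

```python
import numpy as np

def rank_pairs(D, keep):
    """Return the indices of the `keep` most discriminative columns of the
    0/1 matrix D (one row per feature point, one column per pair)."""
    mean = D.mean(axis=0)                    # column means, Eq. (15)
    dist = np.abs(mean - 0.5)                # distance to 0.5, Eq. (16)
    C = np.corrcoef(D, rowvar=False)         # all pairwise correlations, Eq. (17)
    np.fill_diagonal(C, 0.0)                 # exclude the a == b case
    max_corr = np.abs(C).max(axis=1)         # Eq. (18)
    Da = np.exp(-dist) / (max_corr + 1e-12)  # discriminability, Eq. (19)
    return np.argsort(-Da)[:keep]            # highest D_a first
```

Columns whose mean is near 0.5 and whose correlation with every other column is low rank first, which is exactly the selection criterion of the text.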
II. The gray centroid method is used to determine the principal direction of each feature point.
A circular region S is chosen around the feature point; with f(x, y) denoting the gray value at a point of the region, the centroid (x̄, ȳ) is

(x̄, ȳ) = ( Σ_x Σ_y x f(x, y) / Σ_x Σ_y f(x, y), Σ_x Σ_y y f(x, y) / Σ_x Σ_y f(x, y) )    (20)
The angle from the feature point to the centroid is the principal direction θ of the feature point:

θ = arctan(ȳ/x̄) = arctan[ Σ_x Σ_y y f(x, y) / Σ_x Σ_y x f(x, y) ]    (21)
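A minimal sketch of Eqs. (20)-(21), assuming the circular region is approximated by a square patch centered on the feature point; np.arctan2 is used instead of a plain arctangent so the quadrant of θ is resolved, a common implementation choice:

```python
import numpy as np

def orientation(patch):
    """Intensity-centroid direction of Eq. (20)-(21): the angle from the
    patch centre to the gray-level centroid. patch is a 2-D array centred
    on the feature point; returns the angle in radians."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0       # feature point at the centre
    m00 = patch.sum()
    mx = (xs * patch).sum() / m00 - cx          # centroid offset in x, Eq. (20)
    my = (ys * patch).sum() / m00 - cy          # centroid offset in y
    return np.arctan2(my, mx)                   # principal direction, Eq. (21)
```

A bright spot to the right of the center gives θ ≈ 0; a bright spot below it gives θ ≈ π/2.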
III. The positions of the optimal sampled-point pairs and the principal direction of each feature point are combined to build the binary feature descriptor.
According to the principal direction of each feature point, the sampled-point pair position set B is rotated by θ around that feature point; the gray values of the rotated sampled-point pairs are compared to obtain a set C of binary-feature point-pair positions, and the binary feature descriptor is generated by formula (14).
3) The same method as in step 2) is used to extract the feature points on the workpiece image to be identified and convert them into binary feature descriptors.
Since the processing of step 3) is identical to that of step 2), it is not described again in detail.
4) The approximate nearest neighbor (Approximate Nearest Neighbors, ANN) method is used to find the feature points of the workpiece image to be identified that match those of the workpiece template image, yielding the set of all initial matching point pairs.
Let the feature point set of the workpiece template image be P = {p_1, p_2, …, p_m} with descriptor set VP = {vp_1, vp_2, …, vp_m}, and the feature point set of the workpiece image to be identified be Q = {q_1, q_2, …, q_n} with descriptor set VQ = {vq_1, vq_2, …, vq_n}; each descriptor is represented by a binary feature vector of length 512 bits.
The first N bits of the 512, which represent the coarse (blurred) information, are searched first, leaving 512−N bits, where N ≤ 128; only if the matching distance on these first N bits is below a set threshold is the remainder of the feature compared. This search strategy can quickly reject most of the irrelevant feature points. The concrete steps are as follows:
① For a feature descriptor vp_i of the workpiece template image, compute the Hamming distance HD_j between the first N bits of vp_i and the first N bits of each feature descriptor vq_j of the workpiece image to be identified:

HD_j = Hamm(vp_i, vq_j) = Σ_{k=1}^{N} (vp_i^k ⊕ vq_j^k),  j = 1, 2, …, n

where ⊕ denotes the XOR operation, defined such that the result is 1 if the two bits differ and 0 otherwise.
② If HD_j is greater than or equal to a set threshold T_1 (typically 30 ~ 50), vp_i and vq_j are judged to be non-matching feature vectors.
③ If HD_j is less than the threshold T_1, the Hamming distance between the remaining 512−N bits of vp_i and the remaining 512−N bits of vq_j is computed and added to the distance of the first N bits, giving the Hamming distance HD'_j over all 512 bits of vp_i and vq_j.
④ Find the smallest Hamming distance HD'_min and the second smallest Hamming distance HD'_sec; when the ratio HD'_min/HD'_sec is below a set threshold, vp_i and vq_min are judged to be matching feature vectors, and the corresponding feature points p_i and q_min form a matched pair.
⑤ Repeat steps ①~④ for every element of the descriptor set VP of the workpiece template image to obtain the set of all initial matching point pairs.
In the above embodiment, N is preferably 128.
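The two-stage search of steps ①~⑤ can be sketched as below; a minimal Python sketch over 0/1 numpy arrays, where N and T_1 follow the text, while the nearest/second-nearest ratio threshold (0.8 here) and the requirement of at least two surviving candidates are assumptions, since the patent leaves the ratio value open:

```python
import numpy as np

def match_descriptors(VP, VQ, N=128, T1=30, ratio=0.8):
    """Two-stage Hamming matching of 512-bit descriptors stored as 0/1
    numpy arrays. Returns (i, j) index pairs of accepted matches."""
    matches = []
    for i, vp in enumerate(VP):
        cands = []
        for j, vq in enumerate(VQ):
            hd = int(np.count_nonzero(vp[:N] != vq[:N]))   # step 1: front-N distance
            if hd >= T1:
                continue                                   # step 2: early rejection
            hd += int(np.count_nonzero(vp[N:] != vq[N:]))  # step 3: full 512-bit distance
            cands.append((hd, j))
        if len(cands) >= 2:                                # step 4: ratio test needs two
            cands.sort()
            (d1, j1), (d2, _) = cands[0], cands[1]
            if d1 < ratio * d2:                            # nearest vs second nearest
                matches.append((i, j1))
    return matches
```

A candidate whose first N bits already differ in T_1 or more positions never has its remaining bits compared, which is the speed-up the text describes.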
5) The RANSAC algorithm is used to reject the erroneous matching pairs contained in the set of all initial matching point pairs, leaving the correct matching pairs; this improves the matching precision and completes the workpiece identification.
Let E be the initial matching point set obtained on the workpiece template image and F the initial matching point set obtained on the workpiece image to be identified. The transformation between the workpiece template image and the workpiece image to be identified is described by the projective transformation matrix

H = [h_00 h_01 h_02; h_10 h_11 h_12; h_20 h_21 h_22]

A point p(x, y) in one image is transformed by H into the point p'(x', y'):

[x', y', 1]^T = H [x, y, 1]^T    (22)

The projective transformation model H can be obtained from four pairs of matched points.
The concrete steps with which the RANSAC algorithm rejects the erroneous matches in the initial matching point sets are as follows:
① From the sets E and F, randomly select 4 matching point pairs and compute the projective transformation matrix defined by these four pairs.
② From the remaining feature points of set F, select a feature point (x_2, y_2) and transform it with the projective transformation matrix H computed in step ①, obtaining the transformed coordinates (x'_2, y'_2). If the coordinates (x_1, y_1) of the corresponding feature point in E satisfy ‖(x_1, y_1) − (x'_2, y'_2)‖ < ε, the matching pair (x_1, y_1), (x_2, y_2) is considered to fit the model H and is called an inlier, where ε is the inlier/outlier distance threshold, generally set to 3 or 4.
③ Repeat step ② over all remaining feature points of set F and count the number of matching pairs that fit the projective transformation matrix, i.e. the size of the inlier set.
④ Repeat steps ①~③, find the transformation with the largest number of inlier pairs, take the inlier sets it yields as the new point sets E and F, and start a new round of iteration.
⑤ When the number of inlier pairs in the sets E and F produced by an iteration equals the number before that iteration, the iteration terminates; the final E and F are the match sets with the mismatches removed, giving the correct matching pairs and completing the workpiece identification.
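The sampling loop above can be sketched with a fixed iteration budget; a numpy sketch, assuming a plain DLT homography fit (fit_homography) and a fixed 200 iterations in place of the patent's convergence test, both of which are simplifications:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: projective H mapping src points to dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

def ransac_inliers(E, F, iters=200, eps=3.0, seed=0):
    """Steps 1-4 of the RANSAC loop: sample 4 pairs, fit H, count the
    consensus set within eps, and keep the largest set found."""
    rng = np.random.default_rng(seed)
    E, F = np.asarray(E, float), np.asarray(F, float)
    hom = np.hstack([E, np.ones((len(E), 1))])      # homogeneous coordinates
    best = np.zeros(len(E), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(E), 4, replace=False)  # step 1: random 4 pairs
        H = fit_homography(E[idx], F[idx])
        p = hom @ H.T
        with np.errstate(all="ignore"):             # degenerate samples give nan
            proj = p[:, :2] / p[:, 2:3]             # dehomogenise
            inliers = np.linalg.norm(proj - F, axis=1) < eps   # step 2
        if inliers.sum() > best.sum():              # steps 3-4: keep best consensus
            best = inliers
    return best
```

On exact correspondences plus a few displaced outliers, the largest consensus set recovers exactly the correct pairs.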
To verify the effectiveness of the proposed method, subjective and objective performance tests were carried out.
The experimental environment is a PC running Windows 7 Ultimate with an Intel(R) Core(TM) i3-3110M CPU at 2.40 GHz and 4 GB of RAM; the development platform is VS2010 with OpenCV 2.4.8.
The experimental parameters are: workpiece template image size 300×300 pixels; size of the workpiece image to be identified 600×600 pixels; scale-space pyramid layers n = 4; BRISK feature point detection threshold τ = 10; threshold T_1 = 30 in the feature matching unit 4; inlier/outlier distance threshold ε = 3 in the RANSAC mismatch rejection.
1) Subjective performance test (visual effect)
The method obtained by directly combining BRISK and FREAK (abbreviated BRISK-FREAK) and the proposed method were tested repeatedly on occluded images, rotated images, zoomed images, brightness-modified images, noisy images, and images subject to multiple simultaneous interferences (partial occlusion, translation, rotation, zoom, brightness change and noise); part of the experimental results are shown in Figure 7.
As Figure 7 shows, although the workpiece in the image to be identified may be translated, rotated, zoomed, brightness-changed, partially occluded or corrupted by noise relative to the template workpiece, the proposed method still matches the workpiece in the image accurately. When multiple interferences (partial occlusion, translation, rotation, zoom, brightness change and noise) occur simultaneously, the fixed sampling pattern of BRISK-FREAK means that the performance of its sampled-point pairs is not necessarily optimal, so the discriminability of its feature descriptors is lower and mismatches appear. The proposed method instead selects a more reasonable point-pair discrimination model and trains the point-pair positions from coarse to fine, so its descriptors are highly discriminative; this reduces the chance of mismatches, yielding higher matching precision and stronger robustness.
2) Objective performance test
The proposed method is evaluated objectively in terms of correct matching rate and matching time.
① Correct matching rate
The correct matching rate proposed in Mikolajczyk K, Schmid C, "A performance evaluation of local descriptors", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(10): 1615-1630, is chosen as the evaluation criterion.
With Figure 8(a) as the workpiece template image and Figure 8(b) as the workpiece image to be identified, correct matching rates were measured after rotating the image to be identified (90 degrees), zooming it (1.5×), and applying multiple interferences (90-degree rotation, 1.5× zoom, brightness increased by 30, and zero-mean Gaussian noise with variance 0.02); the results are shown in Table 1. As Table 1 shows, by improving the discriminability of the descriptors, the proposed method achieves a markedly higher correct matching rate than the BRISK-FREAK algorithm.
Table 1. Correct matching rate statistics

Method            No transform   90° rotation   1.5× zoom   Multiple interference
BRISK-FREAK       0.90           0.83           0.82        0.79
Proposed method   0.95           0.90           0.86        0.83
② Matching time
To demonstrate the superiority of the present method in computation speed, part of the experimental results were selected for a matching-time comparison; the results are shown in Table 2.
Table 2. Running times of different matching algorithms

Matching method   Running time (ms)
SIFT              1449.14
BRISK-FREAK       265.85
Proposed method   58.15
As Table 2 shows, the total matching time of the proposed method is markedly shorter than that of the SIFT method. The method uses the simple and efficient gray centroid method, instead of local gradients, to compute the principal directions of the feature points, and by improving the discriminability of the descriptors it reduces the number of mismatches among the initial matches, which in turn reduces the number of points fed into the RANSAC mismatch rejection unit 5. Its computation load is therefore markedly lower than that of the BRISK-FREAK algorithm, so real-time matching can be realized better.
The above embodiments are given only to illustrate the present invention; the structure, connection mode and manufacturing process of each component may be varied, and all equivalent substitutions and improvements made on the basis of the technical solution of the present invention shall not be excluded from the protection scope of the present invention.

Claims (8)

1. An occluded-workpiece identification method based on binary feature matching, comprising the following steps:
1) capturing a workpiece template image and a workpiece image to be identified with a camera;
2) preprocessing the workpiece template image, extracting the feature points on the workpiece template image with the BRISK algorithm, and converting them into binary feature descriptors;
3) extracting the feature points on the workpiece image to be identified by the same method as in step 2) and converting them into binary feature descriptors;
4) finding the feature points of the workpiece image to be identified that match those of the workpiece template image with the approximate nearest neighbor method, to obtain the set of all initial matching point pairs;
5) rejecting the erroneous matching pairs contained in the set of all initial matching point pairs with the RANSAC algorithm to obtain the correct matching pairs, thereby completing the workpiece identification.
2. The occluded-workpiece identification method based on binary feature matching according to claim 1, characterized in that said step 2) comprises the following steps:
① preprocessing the workpiece template image to obtain a filtered smooth image;
② extracting the feature points on the filtered smooth workpiece template image with the BRISK algorithm, comprising the steps of:
I. building a scale-space pyramid;
II. at every layer of the scale-space pyramid, obtaining the potential feature points of the filtered smooth image with the FAST algorithm;
III. performing non-maximum suppression on each potential feature point in scale space and rejecting the non-maximal feature points, to obtain preliminary feature points;
IV. performing sub-pixel and scale correction on each preliminary feature point, to obtain accurate feature point positions and scales;
③ constructing a descriptor by comparing the gray values of the sampled-point pairs obtained with the FREAK method, converting the feature points on the workpiece template image into binary feature descriptors, comprising the steps of:
I. obtaining the positions of the optimal sampled-point pairs with the FREAK method;
II. determining the principal direction of each feature point with the gray centroid method;
III. building the binary feature descriptor from the positions of the optimal sampled-point pairs and the principal directions of the feature points.
3. The occluded-workpiece identification method based on binary feature matching according to claim 1, characterized in that said step 4) comprises the following steps:
Let the feature point set of the workpiece template image be P = {p_1, p_2, …, p_m} with descriptor set VP = {vp_1, vp_2, …, vp_m}, and the feature point set of the workpiece image to be identified be Q = {q_1, q_2, …, q_n} with descriptor set VQ = {vq_1, vq_2, …, vq_n}; each descriptor is represented by a binary feature vector of length 512 bits;
the first N bits of the 512, which represent the coarse information, are searched first, leaving 512−N bits, where N ≤ 128; only if the matching distance is below a set threshold is the remainder of the feature compared; the concrete steps are as follows:
① for a feature descriptor vp_i of the workpiece template image, compute the Hamming distance HD_j between the first N bits of vp_i and the first N bits of each feature descriptor vq_j of the workpiece image to be identified:

HD_j = Hamm(vp_i, vq_j) = Σ_{k=1}^{N} (vp_i^k ⊕ vq_j^k),  j = 1, 2, …, n

where ⊕ denotes the XOR operation;
② if HD_j is greater than or equal to a set threshold T_1 (taken as 30 ~ 50), judge vp_i and vq_j to be non-matching feature vectors;
③ if HD_j is less than the threshold T_1, compute the Hamming distance between the remaining 512−N bits of vp_i and the remaining 512−N bits of vq_j and add it to the distance of the first N bits, obtaining the Hamming distance HD'_j over all 512 bits of vp_i and vq_j;
④ find the smallest Hamming distance HD'_min and the second smallest Hamming distance HD'_sec; when the ratio HD'_min/HD'_sec is below a set threshold, judge vp_i and vq_min to be matching feature vectors, the corresponding feature points p_i and q_min forming a matched pair;
⑤ repeat steps ①~④ for every element of the descriptor set VP of the workpiece template image, to obtain the set of all initial matching point pairs.
4. The occluded-workpiece identification method based on binary feature matching according to claim 2, characterized in that said step 4) comprises the following steps:
Let the feature point set of the workpiece template image be P = {p_1, p_2, …, p_m} with descriptor set VP = {vp_1, vp_2, …, vp_m}, and the feature point set of the workpiece image to be identified be Q = {q_1, q_2, …, q_n} with descriptor set VQ = {vq_1, vq_2, …, vq_n}; each descriptor is represented by a binary feature vector of length 512 bits;
the first N bits of the 512, which represent the coarse information, are searched first, leaving 512−N bits, where N ≤ 128; only if the matching distance is below a set threshold is the remainder of the feature compared; the concrete steps are as follows:
① for a feature descriptor vp_i of the workpiece template image, compute the Hamming distance HD_j between the first N bits of vp_i and the first N bits of each feature descriptor vq_j of the workpiece image to be identified:

HD_j = Hamm(vp_i, vq_j) = Σ_{k=1}^{N} (vp_i^k ⊕ vq_j^k),  j = 1, 2, …, n

where ⊕ denotes the XOR operation;
② if HD_j is greater than or equal to a set threshold T_1 (taken as 30 ~ 50), judge vp_i and vq_j to be non-matching feature vectors;
③ if HD_j is less than the threshold T_1, compute the Hamming distance between the remaining 512−N bits of vp_i and the remaining 512−N bits of vq_j and add it to the distance of the first N bits, obtaining the Hamming distance HD'_j over all 512 bits of vp_i and vq_j;
④ find the smallest Hamming distance HD'_min and the second smallest Hamming distance HD'_sec; when the ratio HD'_min/HD'_sec is below a set threshold, judge vp_i and vq_min to be matching feature vectors, the corresponding feature points p_i and q_min forming a matched pair;
⑤ repeat steps ①~④ for every element of the descriptor set VP of the workpiece template image, to obtain the set of all initial matching point pairs.
5. The occluded-workpiece identification method based on binary feature matching according to any one of claims 1 to 4, characterized in that said step 5) comprises the following steps:
Let E be the initial matching point set obtained on the workpiece template image and F the initial matching point set obtained on the workpiece image to be identified; the transformation between the workpiece template image and the workpiece image to be identified is described by the projective transformation matrix

H = [h_00 h_01 h_02; h_10 h_11 h_12; h_20 h_21 h_22]

a point p(x, y) in one image being transformed by H into the point p'(x', y'):

[x', y', 1]^T = H [x, y, 1]^T

where the projective transformation model H is obtained from four pairs of matched points;
the concrete steps with which the RANSAC algorithm rejects the erroneous matches in the initial matching point sets are as follows:
① from the sets E and F, randomly select 4 matching point pairs and compute the projective transformation matrix defined by these four pairs;
② from the remaining feature points of set F, select a feature point (x_2, y_2) and transform it with the projective transformation matrix H computed in step ①, obtaining the transformed coordinates (x'_2, y'_2); if the coordinates (x_1, y_1) of the corresponding feature point in E satisfy ‖(x_1, y_1) − (x'_2, y'_2)‖ < ε, the matching pair (x_1, y_1), (x_2, y_2) is considered to fit the model H and is called an inlier, ε being the inlier/outlier distance threshold, taken as 3 or 4;
③ repeat step ② over all remaining feature points of set F and count the number of matching pairs that fit the projective transformation matrix, i.e. the size of the inlier set;
④ repeat steps ①~③, find the transformation with the largest number of inlier pairs, and take the inlier sets it yields as the new point sets E and F for a new round of iteration;
⑤ when the number of inlier pairs in the sets E and F produced by an iteration equals the number before that iteration, terminate the iteration; the final E and F are the match sets with the mismatches removed, giving the correct matching pairs and completing the workpiece identification.
6. A device implementing the occluded-workpiece identification method based on binary feature matching according to any one of claims 1 to 5, characterized in that it comprises an image acquisition unit, a workpiece template image feature extraction unit, a feature extraction unit for the workpiece image to be identified, a feature matching unit and a mismatch rejection unit;
wherein the image acquisition unit captures the workpiece template image and the workpiece image to be identified and transmits them respectively to the workpiece template image feature extraction unit and the feature extraction unit for the workpiece image to be identified; the workpiece template image feature extraction unit extracts the feature points on the workpiece template image, converts them into binary feature descriptors and transmits them to the feature matching unit; the feature extraction unit for the workpiece image to be identified extracts the feature points on the workpiece image to be identified, converts them into binary feature descriptors and transmits them to the feature matching unit; the feature matching unit uses the Hamming distance as the matching criterion to find the feature points of the workpiece image to be identified that match those of the workpiece template image, obtains all initial matching points and transmits them to the mismatch rejection unit; the mismatch rejection unit rejects the erroneous matching pairs contained in the set of all initial matching points to obtain the correct matching pairs, thereby completing the workpiece identification.
7. The occluded-workpiece identification device based on binary feature matching according to claim 6, characterized in that the workpiece template image feature extraction unit and the feature extraction unit for the workpiece image to be identified each comprise an image preprocessing module, a feature point detection module and a feature point description module; the image preprocessing module applies gray-level transformation and median filtering to the captured image and transmits the result to the feature point detection module; the feature point detection module detects the feature points in the image with the BRISK algorithm and transmits them to the feature point description module; the feature point description module converts the feature points into binary feature descriptors and transmits them to the feature matching unit.
8. The occluded-workpiece identification device based on binary feature matching according to claim 6 or 7, characterized in that the image acquisition unit employs a monocular camera.
CN201410543452.4A 2014-10-14 2014-10-14 Shielded workpiece identifying method and device based on binary system feature matching Pending CN104268602A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410543452.4A CN104268602A (en) 2014-10-14 2014-10-14 Shielded workpiece identifying method and device based on binary system feature matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410543452.4A CN104268602A (en) 2014-10-14 2014-10-14 Shielded workpiece identifying method and device based on binary system feature matching

Publications (1)

Publication Number Publication Date
CN104268602A true CN104268602A (en) 2015-01-07

Family

ID=52160122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410543452.4A Pending CN104268602A (en) 2014-10-14 2014-10-14 Shielded workpiece identifying method and device based on binary system feature matching

Country Status (1)

Country Link
CN (1) CN104268602A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616302A (en) * 2015-02-04 2015-05-13 四川中科腾信科技有限公司 Real-time object identification method
CN104959705A (en) * 2015-06-10 2015-10-07 四川英杰电气股份有限公司 Welded and molten pipe fitting identification method
CN105500370A (en) * 2015-12-21 2016-04-20 华中科技大学 Robot offline teaching programming system and method based on somatosensory technology
CN105989128A (en) * 2015-02-13 2016-10-05 深圳先进技术研究院 Image retrieving method and device
CN106408597A (en) * 2016-09-08 2017-02-15 西安电子科技大学 Neighborhood entropy and consistency detection-based SAR (synthetic aperture radar) image registration method
CN106815589A (en) * 2015-12-01 2017-06-09 财团法人工业技术研究院 Feature description method and feature descriptor using same
CN107239792A (en) * 2017-05-12 2017-10-10 大连理工大学 A kind of workpiece identification method and device based on binary descriptor
CN107526772A (en) * 2017-07-12 2017-12-29 湖州师范学院 Image indexing system based on SURF BIT algorithms under Spark platforms
CN107657175A (en) * 2017-09-15 2018-02-02 北京理工大学 A kind of homologous detection method of malice sample based on image feature descriptor
CN107895179A (en) * 2017-11-29 2018-04-10 合肥赑歌数据科技有限公司 It is a kind of based on close on value analysis workpiece categorizing system and method
CN108495089A (en) * 2018-04-02 2018-09-04 北京京东尚科信息技术有限公司 vehicle monitoring method, device, system and computer readable storage medium
CN109448033A (en) * 2018-10-14 2019-03-08 哈尔滨理工大学 A kind of method for registering images based on BRISK algorithm
CN110880139A (en) * 2019-09-30 2020-03-13 珠海随变科技有限公司 Commodity display method, commodity display device, terminal, server and storage medium
CN111428064A (en) * 2020-06-11 2020-07-17 深圳市诺赛特系统有限公司 Small-area fingerprint image fast indexing method, device, equipment and storage medium
CN111666847A (en) * 2020-05-26 2020-09-15 张彦龙 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
CN112837265A (en) * 2021-01-04 2021-05-25 江苏新安电器股份有限公司 Detection algorithm for realizing no-stop board of assembly line
CN114111035A (en) * 2021-10-17 2022-03-01 深圳市铁腕创新科技有限公司 Hot air gun and hot air temperature adjusting method
CN114742789A (en) * 2022-04-01 2022-07-12 中国科学院国家空间科学中心 General part picking method and system based on surface structured light and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7899634B1 (en) * 2005-11-07 2011-03-01 Advanced Micro Devices, Inc. Method and apparatus for analysis of continuous data using binary parsing
CN103325106A (en) * 2013-04-15 2013-09-25 浙江工业大学 Moving workpiece sorting method based on LabVIEW
CN103530649A (en) * 2013-10-16 2014-01-22 北京理工大学 Visual searching method applicable mobile terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7899634B1 (en) * 2005-11-07 2011-03-01 Advanced Micro Devices, Inc. Method and apparatus for analysis of continuous data using binary parsing
CN103325106A (en) * 2013-04-15 2013-09-25 浙江工业大学 Moving workpiece sorting method based on LabVIEW
CN103530649A (en) * 2013-10-16 2014-01-22 北京理工大学 Visual searching method applicable mobile terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何林阳 等: "改进BRISK特征的快速图像配准算法", 《红外与激光工程》 *
许允喜 等: "惯性组合导航系统中基于BRISK的快速景象", 《光电子·激光》 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616302A (en) * 2015-02-04 2015-05-13 四川中科腾信科技有限公司 Real-time object identification method
CN105989128B (en) * 2015-02-13 2019-05-07 深圳先进技术研究院 A kind of method and device of image retrieval
CN105989128A (en) * 2015-02-13 2016-10-05 深圳先进技术研究院 Image retrieving method and device
CN104959705A (en) * 2015-06-10 2015-10-07 四川英杰电气股份有限公司 Welded and molten pipe fitting identification method
CN106815589A (en) * 2015-12-01 2017-06-09 财团法人工业技术研究院 Feature description method and feature descriptor using same
CN105500370A (en) * 2015-12-21 2016-04-20 华中科技大学 Robot offline teaching programming system and method based on somatosensory technology
CN106408597A (en) * 2016-09-08 2017-02-15 西安电子科技大学 Neighborhood entropy and consistency detection-based SAR (synthetic aperture radar) image registration method
CN107239792A (en) * 2017-05-12 2017-10-10 大连理工大学 Workpiece identification method and device based on binary descriptors
CN107526772A (en) * 2017-07-12 2017-12-29 湖州师范学院 Image retrieval system based on the SURF-BIT algorithm on the Spark platform
CN107657175A (en) * 2017-09-15 2018-02-02 北京理工大学 Homology detection method for malicious samples based on image feature descriptors
CN107895179A (en) * 2017-11-29 2018-04-10 合肥赑歌数据科技有限公司 Workpiece classification system and method based on nearest-neighbor analysis
CN108495089A (en) * 2018-04-02 2018-09-04 北京京东尚科信息技术有限公司 Vehicle monitoring method, device, system and computer-readable storage medium
CN109448033A (en) * 2018-10-14 2019-03-08 哈尔滨理工大学 Image registration method based on the BRISK algorithm
CN110880139A (en) * 2019-09-30 2020-03-13 珠海随变科技有限公司 Commodity display method, commodity display device, terminal, server and storage medium
CN111666847A (en) * 2020-05-26 2020-09-15 张彦龙 Iris segmentation, feature extraction and matching method based on local 0-1 quantization technology
CN111428064A (en) * 2020-06-11 2020-07-17 深圳市诺赛特系统有限公司 Small-area fingerprint image fast indexing method, device, equipment and storage medium
CN112837265A (en) * 2021-01-04 2021-05-25 江苏新安电器股份有限公司 Detection algorithm enabling non-stop board flow on an assembly line
CN114111035A (en) * 2021-10-17 2022-03-01 深圳市铁腕创新科技有限公司 Hot air gun and hot air temperature adjusting method
CN114742789A (en) * 2022-04-01 2022-07-12 中国科学院国家空间科学中心 General part picking method and system based on surface structured light and electronic equipment

Similar Documents

Publication Publication Date Title
CN104268602A (en) Shielded workpiece identifying method and device based on binary system feature matching
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN106355577B (en) Rapid image matching method and system based on significant condition and global coherency
CN110399884B (en) Feature fusion self-adaptive anchor frame model vehicle detection method
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN106251353A (en) Recognition and detection method and system for weak-texture workpieces and their three-dimensional poses
CN104680519A (en) Seven-piece puzzle identification method based on contours and colors
CN107464252A (en) Visible-light and infrared heterologous image recognition method based on composite features
CN107292869B (en) Image speckle detection method based on anisotropic Gaussian kernel and gradient search
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN107239792A (en) Workpiece identification method and device based on binary descriptors
CN104715481A (en) Multi-scale printed-product defect detection method based on random forest
CN104851095A (en) Workpiece image sparse stereo matching method based on improved shape context
CN114187267B (en) Stamping part defect detection method based on machine vision
CN109086350B (en) Mixed image retrieval method based on WiFi
Wang et al. Segmentation of corn leaf disease based on fully convolution neural network
CN108182705A (en) Three-dimensional coordinate localization method based on machine vision
CN111401449A (en) Image matching method based on machine vision
CN105975906B (en) PCA-based static gesture recognition method using area features
CN112364881B (en) Advanced sampling consistency image matching method
CN115731257A (en) Image-based leaf shape information extraction method
CN117576079A (en) Industrial product surface abnormality detection method, device and system
CN106897723B (en) Target real-time identification method based on characteristic matching
CN108388854A (en) Localization method based on an improved FAST-SURF algorithm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

RJ01 Rejection of invention patent application after publication
Application publication date: 20150107